| text (string, 12-1.05M chars) | repo_name (string, 5-86 chars) | path (string, 4-191 chars) | language (1 class) | license (15 classes) | size (int32, 12-1.05M) | keyword (list, 1-23 items) | text_hash (string, 64 chars) |
|---|---|---|---|---|---|---|---|
# generic docstring for whitelist/extract
GENERIC_DOCSTRING_WE = '''
Barcode extraction
------------------
""""""""""""""""
``--bc-pattern``
""""""""""""""""
Pattern for barcode(s) on read 1. See ``--extract-method``
"""""""""""""""""
``--bc-pattern2``
"""""""""""""""""
Pattern for barcode(s) on read 2. See ``--extract-method``
""""""""""""""""""""
``--extract-method``
""""""""""""""""""""
There are two methods available to extract the UMI barcode (+/-
cell barcode). For both methods, the patterns should be provided
using the ``--bc-pattern`` and ``--bc-pattern2`` options.
- ``string``
This should be used where the barcodes are always in the same
place in the read.
- N = UMI position (required)
- C = cell barcode position (optional)
- X = sample position (optional)
Bases with Ns and Cs will be extracted and added to the read
name. The corresponding sequence qualities will be removed from
the read. Bases with an X will be reattached to the read.
E.g. If the pattern is `NNNNCC`,
Then the read::
@HISEQ:87:00000000 read1
AAGGTTGCTGATTGGATGGGCTAG
+
DA1AEBFGGCG01DFH00B1FF0B
will become::
@HISEQ:87:00000000_TT_AAGG read1
GCTGATTGGATGGGCTAG
+
FGGCG01DFH00B1FF0B
where 'TT' is the cell barcode and 'AAGG' is the UMI.
- ``regex``
This method allows for more flexible barcode extraction and
should be used where the cell barcodes are variable in
length. Alternatively, the regex option can also be used to
filter out reads which do not contain an expected adapter
sequence. UMI-tools uses the regex module rather than the more
standard re module, since the former also enables fuzzy matching.
The regex must contain groups to define how the barcodes are
encoded in the read. The expected groups in the regex are:
umi_n = UMI positions, where n can be any value (required)
cell_n = cell barcode positions, where n can be any value (optional)
discard_n = positions to discard, where n can be any value (optional)
UMI positions and cell barcode positions will be extracted and
added to the read name. The corresponding sequence qualities
will be removed from the read.
Discard bases and the corresponding quality scores will be
removed from the read. All bases matched by other groups or
components of the regex will be reattached to the read sequence.
For example, the following regex can be used to extract reads
from the Klein et al inDrop data::
(?P<cell_1>.{8,12})(?P<discard_1>GAGTGATTGCTTGTGACGCCTT)(?P<cell_2>.{8})(?P<umi_1>.{6})T{3}.*
Only reads with a 3' T-tail and `GAGTGATTGCTTGTGACGCCTT` in
the correct position to yield two cell barcodes of 8-12bp and 8bp
respectively, plus a 6bp UMI, will be retained.
You can also specify fuzzy matching to allow errors. For example if
the discard group above was specified as below this would enable
matches with up to 2 errors in the discard_1 group.
::
(?P<discard_1>GAGTGATTGCTTGTGACGCCTT){s<=2}
Note that all UMIs must be the same length for downstream
processing with the dedup, group or count commands.
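To make the two extraction methods concrete, here is a minimal sketch of both (a simplified illustration, not the actual UMI-tools implementation; the fuzzy-matching ``{s<=2}`` syntax requires the third-party regex module, so the stdlib re module is used here without it):

```python
import re

# Sketch of --extract-method=string: N bases -> UMI, C bases -> cell
# barcode, X bases stay on the read (simplified; not the real implementation).
def extract_string(name, seq, qual, pattern="NNNNCC"):
    umi = "".join(b for b, p in zip(seq, pattern) if p == "N")
    cell = "".join(b for b, p in zip(seq, pattern) if p == "C")
    keep = [i for i, p in enumerate(pattern) if p == "X"]
    kept_seq = "".join(seq[i] for i in keep) + seq[len(pattern):]
    kept_qual = "".join(qual[i] for i in keep) + qual[len(pattern):]
    return "%s_%s_%s" % (name, cell, umi), kept_seq, kept_qual

name, seq, qual = extract_string(
    "@HISEQ:87:00000000", "AAGGTTGCTGATTGGATGGGCTAG", "DA1AEBFGGCG01DFH00B1FF0B")
# name == "@HISEQ:87:00000000_TT_AAGG", seq == "GCTGATTGGATGGGCTAG"

# Sketch of --extract-method=regex with the inDrop pattern from above.
INDROP = re.compile(
    r"(?P<cell_1>.{8,12})(?P<discard_1>GAGTGATTGCTTGTGACGCCTT)"
    r"(?P<cell_2>.{8})(?P<umi_1>.{6})T{3}.*")
m = INDROP.match("ACGTACGT" "GAGTGATTGCTTGTGACGCCTT" "CCCCGGGG" "AATTCC" "TTTAGC")
cell_barcode = m.group("cell_1") + m.group("cell_2")  # cell groups concatenated
umi = m.group("umi_1")
```

Reads where the regex does not match at all would be discarded (or written to ``--filtered-out``).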
""""""""""""
``--3prime``
""""""""""""
By default the barcode is assumed to be on the 5' end of the
read, but use this option to specify that it is on the 3' end
instead. This option only works with ``--extract-method=string``
since 3' encoding can be specified explicitly with a regex, e.g.
``.*(?P<umi_1>.{5})$``
""""""""""""""
``--read2-in``
""""""""""""""
Filename for read pairs
""""""""""""""""""
``--filtered-out``
""""""""""""""""""
Write out reads not matching regex pattern or cell barcode
whitelist to this file
"""""""""""""""""""
``--filtered-out2``
"""""""""""""""""""
Write out read pairs not matching regex pattern or cell barcode
whitelist to this file
""""""""""""""
``--ignore-read-pair-suffixes``
""""""""""""""
Ignore \1 and \2 read name suffixes. Note that this option is
required if the suffixes are not whitespace-separated from the
rest of the read name.
'''
# generic docstring for group/dedup/count/count_tab
GENERIC_DOCSTRING_GDC = '''
Extracting barcodes
-------------------
It is assumed that the FASTQ files were processed with `umi_tools
extract` before mapping and thus the UMI is the last word of the read
name. e.g::
@HISEQ:87:00000000_AATT
where `AATT` is the UMI sequence.
If you have used an alternative method which does not separate the
read id and UMI with a "_", such as bcl2fastq which uses ":", you can
specify the separator with the option ``--umi-separator=<sep>``,
replacing <sep> with e.g. ":".
Alternatively, if your UMIs are encoded in a tag, you can specify this
by setting the option ``--extract-umi-method=tag`` and setting the
tag name with the ``--umi-tag`` option. For example, if your UMIs
are encoded in the 'UM' tag, provide the following options:
``--extract-umi-method=tag`` ``--umi-tag=UM``
Finally, if you have used umis to extract the UMI +/- cell barcode,
you can specify ``--extract-umi-method=umis``
The start position of a read is considered to be the start of its alignment
minus any soft clipped bases. A read aligned at position 500 with
cigar 2S98M will be assumed to start at position 498.
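As a sketch of the rule just described (an illustration, not the UMI-tools code), the start of a forward-strand read can be derived from its alignment position and CIGAR string:

```python
import re

# Read start = alignment position minus any leading soft-clipped bases,
# so a read at position 500 with CIGAR 2S98M is taken to start at 498.
def read_start(pos, cigar):
    m = re.match(r"(\d+)S", cigar)
    return pos - int(m.group(1)) if m else pos

print(read_start(500, "2S98M"))  # 498
print(read_start(500, "100M"))   # 500
```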
""""""""""""""""""""""""
``--extract-umi-method``
""""""""""""""""""""""""
How are the barcodes encoded in the read?
Options are:
- read_id (default)
Barcodes are contained at the end of the read separated as
specified with ``--umi-separator`` option
- tag
Barcodes contained in a tag(s), see ``--umi-tag``/``--cell-tag``
options
- umis
Barcodes were extracted using umis (https://github.com/vals/umis)
"""""""""""""""""""""""""""""""
``--umi-separator=[SEPARATOR]``
"""""""""""""""""""""""""""""""
Separator between read id and UMI. See ``--extract-umi-method``
above. Default=``_``
"""""""""""""""""""
``--umi-tag=[TAG]``
"""""""""""""""""""
Tag which contains UMI. See ``--extract-umi-method`` above
"""""""""""""""""""""""""""
``--umi-tag-split=[SPLIT]``
"""""""""""""""""""""""""""
Separate the UMI in tag by SPLIT and take the first element
"""""""""""""""""""""""""""""""""""
``--umi-tag-delimiter=[DELIMITER]``
"""""""""""""""""""""""""""""""""""
Separate the UMI in tag by DELIMITER and concatenate the elements
""""""""""""""""""""
``--cell-tag=[TAG]``
""""""""""""""""""""
Tag which contains cell barcode. See ``--extract-umi-method`` above
""""""""""""""""""""""""""""
``--cell-tag-split=[SPLIT]``
""""""""""""""""""""""""""""
Separate the cell barcode in tag by SPLIT and take the first element
""""""""""""""""""""""""""""""""""""
``--cell-tag-delimiter=[DELIMITER]``
""""""""""""""""""""""""""""""""""""
Separate the cell barcode in tag by DELIMITER and concatenate the elements
UMI grouping options
--------------------
""""""""""""
``--method``
""""""""""""
What method should be used to identify groups of reads with the
same (or similar) UMI(s)?
All methods start by identifying the reads with the same mapping position.
The simplest methods, unique and percentile, group reads with
the exact same UMI. The network-based methods, cluster, adjacency and
directional, build networks where nodes are UMIs and edges connect UMIs
with an edit distance <= threshold (usually 1). The groups of reads
are then defined from the network in a method-specific manner. For all
the network-based methods, each read group is equivalent to one read
count for the gene.
- unique
Reads in a group share the exact same UMI
- percentile
Reads in a group share the exact same UMI. UMIs with counts < 1% of the
median counts for UMIs at the same position are ignored.
- cluster
Identify clusters of connected UMIs (based on hamming distance
threshold). Each network is a read group
- adjacency
Cluster UMIs as above. For each cluster, select the node (UMI)
with the highest counts. Visit all nodes one edge away. If all
nodes have been visited, stop. Otherwise, repeat with remaining
nodes until all nodes have been visited. Each step
defines a read group.
- directional (default)
Identify clusters of connected UMIs (based on hamming distance
threshold) where UMI A counts >= (2 * UMI B counts) - 1. Each
network is a read group.
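A minimal sketch of the directional edge rule described above (hypothetical helper names; the real implementation lives in UMI-tools' network-building code):

```python
# Connect UMI A -> B when they are within the edit-distance threshold and
# count(A) >= 2 * count(B) - 1; each connected network becomes one read group.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def directional_edges(counts, threshold=1):
    return [(a, b) for a in counts for b in counts
            if a != b
            and hamming(a, b) <= threshold
            and counts[a] >= 2 * counts[b] - 1]

# "ATAT" absorbs the likely PCR/sequencing error "ATAA"; "CCGG" stands alone.
print(directional_edges({"ATAT": 10, "ATAA": 2, "CCGG": 8}))
```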
"""""""""""""""""""""""""""""
``--edit-distance-threshold``
"""""""""""""""""""""""""""""
For the adjacency and cluster methods the threshold for the
edit distance to connect two UMIs in the network can be
increased. The default value of 1 works best unless the UMI is
very long (>14bp).
"""""""""""""""""""""""
``--spliced-is-unique``
"""""""""""""""""""""""
Causes two reads that start in the same position on the same
strand and have the same UMI to be considered unique if one is spliced
and the other is not. (Uses the 'N' cigar operation to test for
splicing).
"""""""""""""""""""""""""
``--soft-clip-threshold``
"""""""""""""""""""""""""
Mappers that soft clip will sometimes do so rather than mapping a
spliced read if there is only a small overhang over the exon
junction. By setting this option, you can treat reads with at least
this many bases soft-clipped at the 3' end as spliced. Default=4.
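The soft-clip test can be sketched as follows (an illustration with a hypothetical helper name, assuming the soft clip appears as a trailing S operation in the CIGAR string):

```python
import re

# Treat a read whose 3' end has at least `threshold` soft-clipped
# bases as if it were spliced (default threshold of 4, as above).
def is_softclip_spliced(cigar, threshold=4):
    m = re.search(r"(\d+)S$", cigar)
    return bool(m) and int(m.group(1)) >= threshold

print(is_softclip_spliced("96M4S"))  # True
print(is_softclip_spliced("98M2S"))  # False
```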
""""""""""""""""""""""""""""""""""""""""""""""
``--multimapping-detection-method=[NH/X0/XT]``
""""""""""""""""""""""""""""""""""""""""""""""
If the sam/bam contains tags to identify multimapping reads, you can
specify one for use when selecting the best read at a given locus.
Supported tags are "NH", "X0" and "XT". If not specified, the read
with the highest mapping quality will be selected.
"""""""""""""""""
``--read-length``
"""""""""""""""""
Use the read length as a criterion when deduplicating, e.g. for sRNA-Seq.
Single-cell RNA-Seq options
---------------------------
""""""""""""""
``--per-gene``
""""""""""""""
Reads will be grouped together if they have the same gene. This
is useful if your library prep generates PCR duplicates with non
identical alignment positions such as CEL-Seq. Note this option
is hardcoded to be on with the count command, i.e. counting is
always performed per-gene. Must be combined with either
``--gene-tag`` or ``--per-contig`` option.
""""""""""""""
``--gene-tag``
""""""""""""""
Deduplicate per gene. The gene information is encoded in the bam
read tag specified
"""""""""""""""""""""""""
``--assigned-status-tag``
"""""""""""""""""""""""""
BAM tag which describes whether a read is assigned to a
gene. Defaults to the same value as given for ``--gene-tag``
"""""""""""""""""""""
``--skip-tags-regex``
"""""""""""""""""""""
Use in conjunction with the ``--assigned-status-tag`` option to
skip any reads where the tag matches this regex. Default
(``"^[__|Unassigned]"``) matches anything which starts with "__"
or "Unassigned":
""""""""""""""""
``--per-contig``
""""""""""""""""
Deduplicate per contig (field 3 in BAM; RNAME).
All reads with the same contig will be considered to have the
same alignment position. This is useful if you have aligned to a
reference transcriptome with one transcript per gene. If you
have aligned to a transcriptome with more than one transcript
per gene, you can supply a map between transcripts and gene
using the ``--gene-transcript-map`` option
"""""""""""""""""""""""""
``--gene-transcript-map``
"""""""""""""""""""""""""
File mapping genes to transcripts (tab separated), e.g::
gene1 transcript1
gene1 transcript2
gene2 transcript3
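Parsing such a map into a transcript-to-gene lookup is straightforward; a sketch (hypothetical helper name, assuming whitespace-separated gene/transcript columns as in the example above):

```python
import io

# Build a transcript -> gene lookup from a --gene-transcript-map style file.
def parse_gene_transcript_map(handle):
    tx2gene = {}
    for line in handle:
        fields = line.split()
        if len(fields) >= 2:
            gene, transcript = fields[0], fields[1]
            tx2gene[transcript] = gene
    return tx2gene

mapping = parse_gene_transcript_map(io.StringIO(
    "gene1\ttranscript1\ngene1\ttranscript2\ngene2\ttranscript3\n"))
# mapping["transcript2"] == "gene1"
```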
""""""""""""""
``--per-cell``
""""""""""""""
Reads will only be grouped together if they have the same cell
barcode. Can be combined with ``--per-gene``.
SAM/BAM Options
---------------
"""""""""""""""""""""
``--mapping-quality``
"""""""""""""""""""""
Minimum mapping quality (MAPQ) for a read to be retained. Default is 0.
""""""""""""""""""""
``--unmapped-reads``
""""""""""""""""""""
How should unmapped reads be handled? Options are:
- discard (default)
Discard all unmapped reads
- use
If read2 is unmapped, deduplicate using read1 only. Requires
``--paired``
- output
Output unmapped reads/read pairs without UMI
grouping/deduplication. Only available in umi_tools group
""""""""""""""""""""
``--chimeric-pairs``
""""""""""""""""""""
How should chimeric read pairs be handled? Options are:
- discard
Discard all chimeric read pairs
- use (default)
Deduplicate using read1 only
- output
Output chimeric read pairs without UMI
grouping/deduplication. Only available in umi_tools group
""""""""""""""""""""
``--unpaired-reads``
""""""""""""""""""""
How should unpaired reads be handled? Options are:
- discard
Discard all unpaired reads
- use (default)
Deduplicate using read1 only
- output
Output unpaired reads without UMI
grouping/deduplication. Only available in umi_tools group
""""""""""""""""
``--ignore-umi``
""""""""""""""""
Ignore the UMI and group reads using mapping coordinates only
""""""""""""
``--subset``
""""""""""""
Only consider a fraction of the reads, chosen at random. This is useful
for doing saturation analyses.
"""""""""""
``--chrom``
"""""""""""
Only consider a single chromosome. This is useful for
debugging/testing purposes
Input/Output Options
---------------------
"""""""""""""""""""""""
``--in-sam, --out-sam``
"""""""""""""""""""""""
By default, inputs are assumed to be in BAM format and outputs are written
in BAM format. Use these options to specify the use of SAM format for
input or output.
""""""""""""
``--paired``
""""""""""""
BAM is paired end - output both read pairs. This will also
force the use of the template length to determine reads with
the same mapping coordinates.
'''
# generic docstring for dedup/group
GROUP_DEDUP_GENERIC_OPTIONS = '''
Group/Dedup options
-------------------
""""""""""""""""""""
``--no-sort-output``
""""""""""""""""""""
By default, output is sorted. This involves the
use of a temporary unsorted file since reads are considered in
the order of their start position which may not be the same
as their alignment coordinate due to soft-clipping and reverse
alignments. The temp file will be saved (in ``--temp-dir``) and deleted
when it has been sorted to the outfile. Use this option to turn
off sorting.
"""""""""""""""""""""""""
``--buffer-whole-contig``
"""""""""""""""""""""""""
Forces dedup to parse an entire contig before yielding any reads
for deduplication. This is the only way to absolutely guarantee
that all reads with the same start position are grouped together
for deduplication since dedup uses the start position of the
read, not the alignment coordinate on which the reads are
sorted. However, by default, dedup reads for another 1000bp
before outputting read groups which will avoid any reads being
missed with short read sequencing (<1000bp).
'''
| CGATOxford/UMI-tools | umi_tools/Documentation.py | Python | mit | 15,892 | ["VisIt"] | 1d831ba6655f913b231fe040ee551d968e5580ca8a2dcfc97136923745fed43f |
# -*- coding: utf-8 -*-
from math import sqrt
import numpy as np
from ase.units import kJ
class EquationOfState:
"""Fit equation of state for bulk systems.
The following equation is used::
2 3 -1/3
E(V) = c + c t + c t + c t , t = V
0 1 2 3
Use::
eos = EquationOfState(volumes, energies)
v0, e0, B = eos.fit()
eos.plot()
"""
def __init__(self, volumes, energies):
self.v = np.array(volumes)
self.e = np.array(energies)
self.v0 = None
def fit(self):
"""Calculate volume, energy, and bulk modulus.
Returns the optimal volume, the minimum energy, and the bulk
modulus. Note that the ASE unit for the bulk modulus is
eV/Angstrom^3 - to get the value in GPa, do this::
v0, e0, B = eos.fit()
print(B / kJ * 1.0e24, 'GPa')
"""
fit0 = np.poly1d(np.polyfit(self.v**-(1.0 / 3), self.e, 3))
fit1 = np.polyder(fit0, 1)
fit2 = np.polyder(fit1, 1)
self.v0 = None
for t in np.roots(fit1):
if t > 0 and fit2(t) > 0:
self.v0 = t**-3
break
if self.v0 is None:
raise ValueError('No minimum!')
self.e0 = fit0(t)
self.B = t**5 * fit2(t) / 9
self.fit0 = fit0
return self.v0, self.e0, self.B
def plot(self, filename=None, show=None):
"""Plot fitted energy curve.
Uses Matplotlib to plot the energy curve. Use *show=True* to
show the figure and *filename='abc.png'* or
*filename='abc.eps'* to save the figure to a file."""
import matplotlib.pyplot as plt
if self.v0 is None:
self.fit()
if filename is None and show is None:
show = True
x = 3.95
f = plt.figure(figsize=(x * 2.5**0.5, x))
f.subplots_adjust(left=0.12, right=0.9, top=0.9, bottom=0.15)
plt.plot(self.v, self.e, 'o')
x = np.linspace(min(self.v), max(self.v), 100)
plt.plot(x, self.fit0(x**-(1.0 / 3)), '-r')
plt.xlabel(u'volume [Å^3]')
plt.ylabel(u'energy [eV]')
plt.title(u'E: %.3f eV, V: %.3f Å^3, B: %.3f GPa' %
(self.e0, self.v0, self.B / kJ * 1.0e24))
if show:
plt.show()
if filename is not None:
f.savefig(filename)
return f
| JConwayAWT/PGSS14CC | lib/python/multimetallics/ase/utils/eos.py | Python | gpl-2.0 | 2,541 | ["ASE"] | 522ca6a4ed80e3d76df4e85e21ee4f7480d9e722824cb1e90d4dba5825ec9e9d |
import hashlib
import pickle
import sys
import warnings
import numpy as np
import pytest
from numpy.testing import (
assert_, assert_raises, assert_equal, assert_warns,
assert_no_warnings, assert_array_equal, assert_array_almost_equal,
suppress_warnings
)
from numpy.random import MT19937, PCG64
from numpy import random
INT_FUNCS = {'binomial': (100.0, 0.6),
'geometric': (.5,),
'hypergeometric': (20, 20, 10),
'logseries': (.5,),
'multinomial': (20, np.ones(6) / 6.0),
'negative_binomial': (100, .5),
'poisson': (10.0,),
'zipf': (2,),
}
if np.iinfo(int).max < 2**32:
# Windows and some 32-bit platforms, e.g., ARM
INT_FUNC_HASHES = {'binomial': '670e1c04223ffdbab27e08fbbad7bdba',
'logseries': '6bd0183d2f8030c61b0d6e11aaa60caf',
'geometric': '6e9df886f3e1e15a643168568d5280c0',
'hypergeometric': '7964aa611b046aecd33063b90f4dec06',
'multinomial': '68a0b049c16411ed0aa4aff3572431e4',
'negative_binomial': 'dc265219eec62b4338d39f849cd36d09',
'poisson': '7b4dce8e43552fc82701c2fa8e94dc6e',
'zipf': 'fcd2a2095f34578723ac45e43aca48c5',
}
else:
INT_FUNC_HASHES = {'binomial': 'b5f8dcd74f172836536deb3547257b14',
'geometric': '8814571f45c87c59699d62ccd3d6c350',
'hypergeometric': 'bc64ae5976eac452115a16dad2dcf642',
'logseries': '84be924b37485a27c4a98797bc88a7a4',
'multinomial': 'ec3c7f9cf9664044bb0c6fb106934200',
'negative_binomial': '210533b2234943591364d0117a552969',
'poisson': '0536a8850c79da0c78defd742dccc3e0',
'zipf': 'f2841f504dd2525cd67cdcad7561e532',
}
@pytest.fixture(scope='module', params=INT_FUNCS)
def int_func(request):
return (request.param, INT_FUNCS[request.param],
INT_FUNC_HASHES[request.param])
def assert_mt19937_state_equal(a, b):
assert_equal(a['bit_generator'], b['bit_generator'])
assert_array_equal(a['state']['key'], b['state']['key'])
assert_array_equal(a['state']['pos'], b['state']['pos'])
assert_equal(a['has_gauss'], b['has_gauss'])
assert_equal(a['gauss'], b['gauss'])
class TestSeed:
def test_scalar(self):
s = random.RandomState(0)
assert_equal(s.randint(1000), 684)
s = random.RandomState(4294967295)
assert_equal(s.randint(1000), 419)
def test_array(self):
s = random.RandomState(range(10))
assert_equal(s.randint(1000), 468)
s = random.RandomState(np.arange(10))
assert_equal(s.randint(1000), 468)
s = random.RandomState([0])
assert_equal(s.randint(1000), 973)
s = random.RandomState([4294967295])
assert_equal(s.randint(1000), 265)
def test_invalid_scalar(self):
# seed must be an unsigned 32 bit integer
assert_raises(TypeError, random.RandomState, -0.5)
assert_raises(ValueError, random.RandomState, -1)
def test_invalid_array(self):
# seed must be an unsigned 32 bit integer
assert_raises(TypeError, random.RandomState, [-0.5])
assert_raises(ValueError, random.RandomState, [-1])
assert_raises(ValueError, random.RandomState, [4294967296])
assert_raises(ValueError, random.RandomState, [1, 2, 4294967296])
assert_raises(ValueError, random.RandomState, [1, -2, 4294967296])
def test_invalid_array_shape(self):
# gh-9832
assert_raises(ValueError, random.RandomState, np.array([],
dtype=np.int64))
assert_raises(ValueError, random.RandomState, [[1, 2, 3]])
assert_raises(ValueError, random.RandomState, [[1, 2, 3],
[4, 5, 6]])
def test_cannot_seed(self):
rs = random.RandomState(PCG64(0))
with assert_raises(TypeError):
rs.seed(1234)
def test_invalid_initialization(self):
assert_raises(ValueError, random.RandomState, MT19937)
class TestBinomial:
def test_n_zero(self):
# Tests the corner case of n == 0 for the binomial distribution.
# binomial(0, p) should be zero for any p in [0, 1].
# This test addresses issue #3480.
zeros = np.zeros(2, dtype='int')
for p in [0, .5, 1]:
assert_(random.binomial(0, p) == 0)
assert_array_equal(random.binomial(zeros, p), zeros)
def test_p_is_nan(self):
# Issue #4571.
assert_raises(ValueError, random.binomial, 1, np.nan)
class TestMultinomial:
def test_basic(self):
random.multinomial(100, [0.2, 0.8])
def test_zero_probability(self):
random.multinomial(100, [0.2, 0.8, 0.0, 0.0, 0.0])
def test_int_negative_interval(self):
assert_(-5 <= random.randint(-5, -1) < -1)
x = random.randint(-5, -1, 5)
assert_(np.all(-5 <= x))
assert_(np.all(x < -1))
def test_size(self):
# gh-3173
p = [0.5, 0.5]
assert_equal(random.multinomial(1, p, np.uint32(1)).shape, (1, 2))
assert_equal(random.multinomial(1, p, np.uint32(1)).shape, (1, 2))
assert_equal(random.multinomial(1, p, np.uint32(1)).shape, (1, 2))
assert_equal(random.multinomial(1, p, [2, 2]).shape, (2, 2, 2))
assert_equal(random.multinomial(1, p, (2, 2)).shape, (2, 2, 2))
assert_equal(random.multinomial(1, p, np.array((2, 2))).shape,
(2, 2, 2))
assert_raises(TypeError, random.multinomial, 1, p,
float(1))
def test_invalid_prob(self):
assert_raises(ValueError, random.multinomial, 100, [1.1, 0.2])
assert_raises(ValueError, random.multinomial, 100, [-.1, 0.9])
def test_invalid_n(self):
assert_raises(ValueError, random.multinomial, -1, [0.8, 0.2])
def test_p_non_contiguous(self):
p = np.arange(15.)
p /= np.sum(p[1::3])
pvals = p[1::3]
random.seed(1432985819)
non_contig = random.multinomial(100, pvals=pvals)
random.seed(1432985819)
contig = random.multinomial(100, pvals=np.ascontiguousarray(pvals))
assert_array_equal(non_contig, contig)
class TestSetState:
def setup(self):
self.seed = 1234567890
self.random_state = random.RandomState(self.seed)
self.state = self.random_state.get_state()
def test_basic(self):
old = self.random_state.tomaxint(16)
self.random_state.set_state(self.state)
new = self.random_state.tomaxint(16)
assert_(np.all(old == new))
def test_gaussian_reset(self):
# Make sure the cached every-other-Gaussian is reset.
old = self.random_state.standard_normal(size=3)
self.random_state.set_state(self.state)
new = self.random_state.standard_normal(size=3)
assert_(np.all(old == new))
def test_gaussian_reset_in_media_res(self):
# When the state is saved with a cached Gaussian, make sure the
# cached Gaussian is restored.
self.random_state.standard_normal()
state = self.random_state.get_state()
old = self.random_state.standard_normal(size=3)
self.random_state.set_state(state)
new = self.random_state.standard_normal(size=3)
assert_(np.all(old == new))
def test_backwards_compatibility(self):
# Make sure we can accept old state tuples that do not have the
# cached Gaussian value.
old_state = self.state[:-2]
x1 = self.random_state.standard_normal(size=16)
self.random_state.set_state(old_state)
x2 = self.random_state.standard_normal(size=16)
self.random_state.set_state(self.state)
x3 = self.random_state.standard_normal(size=16)
assert_(np.all(x1 == x2))
assert_(np.all(x1 == x3))
def test_negative_binomial(self):
# Ensure that the negative binomial results take floating point
# arguments without truncation.
self.random_state.negative_binomial(0.5, 0.5)
def test_get_state_warning(self):
rs = random.RandomState(PCG64())
with suppress_warnings() as sup:
w = sup.record(RuntimeWarning)
state = rs.get_state()
assert_(len(w) == 1)
assert isinstance(state, dict)
assert state['bit_generator'] == 'PCG64'
def test_invalid_legacy_state_setting(self):
state = self.random_state.get_state()
new_state = ('Unknown', ) + state[1:]
assert_raises(ValueError, self.random_state.set_state, new_state)
assert_raises(TypeError, self.random_state.set_state,
np.array(new_state, dtype=object))
state = self.random_state.get_state(legacy=False)
del state['bit_generator']
assert_raises(ValueError, self.random_state.set_state, state)
def test_pickle(self):
self.random_state.seed(0)
self.random_state.random_sample(100)
self.random_state.standard_normal()
pickled = self.random_state.get_state(legacy=False)
assert_equal(pickled['has_gauss'], 1)
rs_unpick = pickle.loads(pickle.dumps(self.random_state))
unpickled = rs_unpick.get_state(legacy=False)
assert_mt19937_state_equal(pickled, unpickled)
def test_state_setting(self):
attr_state = self.random_state.__getstate__()
self.random_state.standard_normal()
self.random_state.__setstate__(attr_state)
state = self.random_state.get_state(legacy=False)
assert_mt19937_state_equal(attr_state, state)
def test_repr(self):
assert repr(self.random_state).startswith('RandomState(MT19937)')
class TestRandint:
rfunc = random.randint
# valid integer/boolean types
itype = [np.bool_, np.int8, np.uint8, np.int16, np.uint16,
np.int32, np.uint32, np.int64, np.uint64]
def test_unsupported_type(self):
assert_raises(TypeError, self.rfunc, 1, dtype=float)
def test_bounds_checking(self):
for dt in self.itype:
lbnd = 0 if dt is np.bool_ else np.iinfo(dt).min
ubnd = 2 if dt is np.bool_ else np.iinfo(dt).max + 1
assert_raises(ValueError, self.rfunc, lbnd - 1, ubnd, dtype=dt)
assert_raises(ValueError, self.rfunc, lbnd, ubnd + 1, dtype=dt)
assert_raises(ValueError, self.rfunc, ubnd, lbnd, dtype=dt)
assert_raises(ValueError, self.rfunc, 1, 0, dtype=dt)
def test_rng_zero_and_extremes(self):
for dt in self.itype:
lbnd = 0 if dt is np.bool_ else np.iinfo(dt).min
ubnd = 2 if dt is np.bool_ else np.iinfo(dt).max + 1
tgt = ubnd - 1
assert_equal(self.rfunc(tgt, tgt + 1, size=1000, dtype=dt), tgt)
tgt = lbnd
assert_equal(self.rfunc(tgt, tgt + 1, size=1000, dtype=dt), tgt)
tgt = (lbnd + ubnd)//2
assert_equal(self.rfunc(tgt, tgt + 1, size=1000, dtype=dt), tgt)
def test_full_range(self):
# Test for ticket #1690
for dt in self.itype:
lbnd = 0 if dt is np.bool_ else np.iinfo(dt).min
ubnd = 2 if dt is np.bool_ else np.iinfo(dt).max + 1
try:
self.rfunc(lbnd, ubnd, dtype=dt)
except Exception as e:
raise AssertionError("No error should have been raised, "
"but one was with the following "
"message:\n\n%s" % str(e))
def test_in_bounds_fuzz(self):
# Don't use fixed seed
random.seed()
for dt in self.itype[1:]:
for ubnd in [4, 8, 16]:
vals = self.rfunc(2, ubnd, size=2**16, dtype=dt)
assert_(vals.max() < ubnd)
assert_(vals.min() >= 2)
vals = self.rfunc(0, 2, size=2**16, dtype=np.bool_)
assert_(vals.max() < 2)
assert_(vals.min() >= 0)
def test_repeatability(self):
# We use an md5 hash of generated sequences of 1000 samples
# in the range [0, 6) for all but bool, where the range
# is [0, 2). Hashes are for little endian numbers.
tgt = {'bool': '7dd3170d7aa461d201a65f8bcf3944b0',
'int16': '1b7741b80964bb190c50d541dca1cac1',
'int32': '4dc9fcc2b395577ebb51793e58ed1a05',
'int64': '17db902806f448331b5a758d7d2ee672',
'int8': '27dd30c4e08a797063dffac2490b0be6',
'uint16': '1b7741b80964bb190c50d541dca1cac1',
'uint32': '4dc9fcc2b395577ebb51793e58ed1a05',
'uint64': '17db902806f448331b5a758d7d2ee672',
'uint8': '27dd30c4e08a797063dffac2490b0be6'}
for dt in self.itype[1:]:
random.seed(1234)
# view as little endian for hash
if sys.byteorder == 'little':
val = self.rfunc(0, 6, size=1000, dtype=dt)
else:
val = self.rfunc(0, 6, size=1000, dtype=dt).byteswap()
res = hashlib.md5(val.view(np.int8)).hexdigest()
assert_(tgt[np.dtype(dt).name] == res)
# bools do not depend on endianness
random.seed(1234)
val = self.rfunc(0, 2, size=1000, dtype=bool).view(np.int8)
res = hashlib.md5(val).hexdigest()
assert_(tgt[np.dtype(bool).name] == res)
def test_int64_uint64_corner_case(self):
# When stored in Numpy arrays, `lbnd` is cast
# as np.int64, and `ubnd` is cast as np.uint64.
# Checking whether `lbnd` >= `ubnd` used to be
# done solely via direct comparison, which is incorrect
# because when Numpy tries to compare both numbers,
# it casts both to np.float64 because there is
# no integer superset of np.int64 and np.uint64. However,
# `ubnd` is too large to be represented in np.float64,
# causing it to be rounded down to np.iinfo(np.int64).max,
# leading to a ValueError because `lbnd` now equals
# the new `ubnd`.
dt = np.int64
tgt = np.iinfo(np.int64).max
lbnd = np.int64(np.iinfo(np.int64).max)
ubnd = np.uint64(np.iinfo(np.int64).max + 1)
# None of these function calls should
# generate a ValueError now.
actual = random.randint(lbnd, ubnd, dtype=dt)
assert_equal(actual, tgt)
def test_respect_dtype_singleton(self):
# See gh-7203
for dt in self.itype:
lbnd = 0 if dt is np.bool_ else np.iinfo(dt).min
ubnd = 2 if dt is np.bool_ else np.iinfo(dt).max + 1
sample = self.rfunc(lbnd, ubnd, dtype=dt)
assert_equal(sample.dtype, np.dtype(dt))
for dt in (bool, int, np.compat.long):
lbnd = 0 if dt is bool else np.iinfo(dt).min
ubnd = 2 if dt is bool else np.iinfo(dt).max + 1
# gh-7284: Ensure that we get Python data types
sample = self.rfunc(lbnd, ubnd, dtype=dt)
assert_(not hasattr(sample, 'dtype'))
assert_equal(type(sample), dt)
class TestRandomDist:
# Make sure the random distribution returns the correct value for a
# given seed
def setup(self):
self.seed = 1234567890
def test_rand(self):
random.seed(self.seed)
actual = random.rand(3, 2)
desired = np.array([[0.61879477158567997, 0.59162362775974664],
[0.88868358904449662, 0.89165480011560816],
[0.4575674820298663, 0.7781880808593471]])
assert_array_almost_equal(actual, desired, decimal=15)
def test_rand_singleton(self):
random.seed(self.seed)
actual = random.rand()
desired = 0.61879477158567997
assert_array_almost_equal(actual, desired, decimal=15)
def test_randn(self):
random.seed(self.seed)
actual = random.randn(3, 2)
desired = np.array([[1.34016345771863121, 1.73759122771936081],
[1.498988344300628, -0.2286433324536169],
[2.031033998682787, 2.17032494605655257]])
assert_array_almost_equal(actual, desired, decimal=15)
random.seed(self.seed)
actual = random.randn()
assert_array_almost_equal(actual, desired[0, 0], decimal=15)
def test_randint(self):
random.seed(self.seed)
actual = random.randint(-99, 99, size=(3, 2))
desired = np.array([[31, 3],
[-52, 41],
[-48, -66]])
assert_array_equal(actual, desired)
def test_random_integers(self):
random.seed(self.seed)
with suppress_warnings() as sup:
w = sup.record(DeprecationWarning)
actual = random.random_integers(-99, 99, size=(3, 2))
assert_(len(w) == 1)
desired = np.array([[31, 3],
[-52, 41],
[-48, -66]])
assert_array_equal(actual, desired)
random.seed(self.seed)
with suppress_warnings() as sup:
w = sup.record(DeprecationWarning)
actual = random.random_integers(198, size=(3, 2))
assert_(len(w) == 1)
assert_array_equal(actual, desired + 100)
def test_tomaxint(self):
random.seed(self.seed)
rs = random.RandomState(self.seed)
actual = rs.tomaxint(size=(3, 2))
if np.iinfo(int).max == 2147483647:
desired = np.array([[1328851649, 731237375],
[1270502067, 320041495],
[1908433478, 499156889]], dtype=np.int64)
else:
desired = np.array([[5707374374421908479, 5456764827585442327],
[8196659375100692377, 8224063923314595285],
[4220315081820346526, 7177518203184491332]],
dtype=np.int64)
assert_equal(actual, desired)
rs.seed(self.seed)
actual = rs.tomaxint()
assert_equal(actual, desired[0, 0])
def test_random_integers_max_int(self):
# Tests whether random_integers can generate the
# maximum allowed Python int that can be converted
# into a C long. Previous implementations of this
# method have thrown an OverflowError when attempting
# to generate this integer.
with suppress_warnings() as sup:
w = sup.record(DeprecationWarning)
actual = random.random_integers(np.iinfo('l').max,
np.iinfo('l').max)
assert_(len(w) == 1)
desired = np.iinfo('l').max
assert_equal(actual, desired)
with suppress_warnings() as sup:
w = sup.record(DeprecationWarning)
typer = np.dtype('l').type
actual = random.random_integers(typer(np.iinfo('l').max),
typer(np.iinfo('l').max))
assert_(len(w) == 1)
assert_equal(actual, desired)
def test_random_integers_deprecated(self):
with warnings.catch_warnings():
warnings.simplefilter("error", DeprecationWarning)
# DeprecationWarning raised with high == None
assert_raises(DeprecationWarning,
random.random_integers,
np.iinfo('l').max)
# DeprecationWarning raised with high != None
assert_raises(DeprecationWarning,
random.random_integers,
np.iinfo('l').max, np.iinfo('l').max)
def test_random_sample(self):
random.seed(self.seed)
actual = random.random_sample((3, 2))
desired = np.array([[0.61879477158567997, 0.59162362775974664],
[0.88868358904449662, 0.89165480011560816],
[0.4575674820298663, 0.7781880808593471]])
assert_array_almost_equal(actual, desired, decimal=15)
random.seed(self.seed)
actual = random.random_sample()
assert_array_almost_equal(actual, desired[0, 0], decimal=15)
def test_choice_uniform_replace(self):
random.seed(self.seed)
actual = random.choice(4, 4)
desired = np.array([2, 3, 2, 3])
assert_array_equal(actual, desired)
def test_choice_nonuniform_replace(self):
random.seed(self.seed)
actual = random.choice(4, 4, p=[0.4, 0.4, 0.1, 0.1])
desired = np.array([1, 1, 2, 2])
assert_array_equal(actual, desired)
def test_choice_uniform_noreplace(self):
random.seed(self.seed)
actual = random.choice(4, 3, replace=False)
desired = np.array([0, 1, 3])
assert_array_equal(actual, desired)
def test_choice_nonuniform_noreplace(self):
random.seed(self.seed)
actual = random.choice(4, 3, replace=False, p=[0.1, 0.3, 0.5, 0.1])
desired = np.array([2, 3, 1])
assert_array_equal(actual, desired)
def test_choice_noninteger(self):
random.seed(self.seed)
actual = random.choice(['a', 'b', 'c', 'd'], 4)
desired = np.array(['c', 'd', 'c', 'd'])
assert_array_equal(actual, desired)
def test_choice_exceptions(self):
sample = random.choice
assert_raises(ValueError, sample, -1, 3)
assert_raises(ValueError, sample, 3., 3)
assert_raises(ValueError, sample, [[1, 2], [3, 4]], 3)
assert_raises(ValueError, sample, [], 3)
assert_raises(ValueError, sample, [1, 2, 3, 4], 3,
p=[[0.25, 0.25], [0.25, 0.25]])
assert_raises(ValueError, sample, [1, 2], 3, p=[0.4, 0.4, 0.2])
assert_raises(ValueError, sample, [1, 2], 3, p=[1.1, -0.1])
assert_raises(ValueError, sample, [1, 2], 3, p=[0.4, 0.4])
assert_raises(ValueError, sample, [1, 2, 3], 4, replace=False)
# gh-13087
assert_raises(ValueError, sample, [1, 2, 3], -2, replace=False)
assert_raises(ValueError, sample, [1, 2, 3], (-1,), replace=False)
assert_raises(ValueError, sample, [1, 2, 3], (-1, 1), replace=False)
assert_raises(ValueError, sample, [1, 2, 3], 2,
replace=False, p=[1, 0, 0])
def test_choice_return_shape(self):
p = [0.1, 0.9]
# Check scalar
assert_(np.isscalar(random.choice(2, replace=True)))
assert_(np.isscalar(random.choice(2, replace=False)))
assert_(np.isscalar(random.choice(2, replace=True, p=p)))
assert_(np.isscalar(random.choice(2, replace=False, p=p)))
assert_(np.isscalar(random.choice([1, 2], replace=True)))
assert_(random.choice([None], replace=True) is None)
a = np.array([1, 2])
arr = np.empty(1, dtype=object)
arr[0] = a
assert_(random.choice(arr, replace=True) is a)
# Check 0-d array
s = tuple()
assert_(not np.isscalar(random.choice(2, s, replace=True)))
assert_(not np.isscalar(random.choice(2, s, replace=False)))
assert_(not np.isscalar(random.choice(2, s, replace=True, p=p)))
assert_(not np.isscalar(random.choice(2, s, replace=False, p=p)))
assert_(not np.isscalar(random.choice([1, 2], s, replace=True)))
assert_(random.choice([None], s, replace=True).ndim == 0)
a = np.array([1, 2])
arr = np.empty(1, dtype=object)
arr[0] = a
assert_(random.choice(arr, s, replace=True).item() is a)
# Check multi dimensional array
s = (2, 3)
p = [0.1, 0.1, 0.1, 0.1, 0.4, 0.2]
assert_equal(random.choice(6, s, replace=True).shape, s)
assert_equal(random.choice(6, s, replace=False).shape, s)
assert_equal(random.choice(6, s, replace=True, p=p).shape, s)
assert_equal(random.choice(6, s, replace=False, p=p).shape, s)
assert_equal(random.choice(np.arange(6), s, replace=True).shape, s)
# Check zero-size
assert_equal(random.randint(0, 0, size=(3, 0, 4)).shape, (3, 0, 4))
assert_equal(random.randint(0, -10, size=0).shape, (0,))
assert_equal(random.randint(10, 10, size=0).shape, (0,))
assert_equal(random.choice(0, size=0).shape, (0,))
assert_equal(random.choice([], size=(0,)).shape, (0,))
assert_equal(random.choice(['a', 'b'], size=(3, 0, 4)).shape,
(3, 0, 4))
assert_raises(ValueError, random.choice, [], 10)
def test_choice_nan_probabilities(self):
a = np.array([42, 1, 2])
p = [None, None, None]
assert_raises(ValueError, random.choice, a, p=p)
def test_choice_p_non_contiguous(self):
p = np.ones(10) / 5
p[1::2] = 3.0
random.seed(self.seed)
non_contig = random.choice(5, 3, p=p[::2])
random.seed(self.seed)
contig = random.choice(5, 3, p=np.ascontiguousarray(p[::2]))
assert_array_equal(non_contig, contig)
def test_bytes(self):
random.seed(self.seed)
actual = random.bytes(10)
desired = b'\x82Ui\x9e\xff\x97+Wf\xa5'
assert_equal(actual, desired)
def test_shuffle(self):
# Test lists, arrays (of various dtypes), and multidimensional versions
# of both, c-contiguous or not:
for conv in [lambda x: np.array([]),
lambda x: x,
lambda x: np.asarray(x).astype(np.int8),
lambda x: np.asarray(x).astype(np.float32),
lambda x: np.asarray(x).astype(np.complex64),
lambda x: np.asarray(x).astype(object),
lambda x: [(i, i) for i in x],
lambda x: np.asarray([[i, i] for i in x]),
lambda x: np.vstack([x, x]).T,
# gh-11442
lambda x: (np.asarray([(i, i) for i in x],
[("a", int), ("b", int)])
.view(np.recarray)),
# gh-4270
lambda x: np.asarray([(i, i) for i in x],
[("a", object, (1,)),
("b", np.int32, (1,))])]:
random.seed(self.seed)
alist = conv([1, 2, 3, 4, 5, 6, 7, 8, 9, 0])
random.shuffle(alist)
actual = alist
desired = conv([0, 1, 9, 6, 2, 4, 5, 8, 7, 3])
assert_array_equal(actual, desired)
def test_shuffle_masked(self):
# gh-3263
a = np.ma.masked_values(np.reshape(range(20), (5, 4)) % 3 - 1, -1)
b = np.ma.masked_values(np.arange(20) % 3 - 1, -1)
a_orig = a.copy()
b_orig = b.copy()
for i in range(50):
random.shuffle(a)
assert_equal(
sorted(a.data[~a.mask]), sorted(a_orig.data[~a_orig.mask]))
random.shuffle(b)
assert_equal(
sorted(b.data[~b.mask]), sorted(b_orig.data[~b_orig.mask]))
def test_permutation(self):
random.seed(self.seed)
alist = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
actual = random.permutation(alist)
desired = [0, 1, 9, 6, 2, 4, 5, 8, 7, 3]
assert_array_equal(actual, desired)
random.seed(self.seed)
arr_2d = np.atleast_2d([1, 2, 3, 4, 5, 6, 7, 8, 9, 0]).T
actual = random.permutation(arr_2d)
assert_array_equal(actual, np.atleast_2d(desired).T)
random.seed(self.seed)
bad_x_str = "abcd"
assert_raises(IndexError, random.permutation, bad_x_str)
random.seed(self.seed)
bad_x_float = 1.2
assert_raises(IndexError, random.permutation, bad_x_float)
integer_val = 10
desired = [9, 0, 8, 5, 1, 3, 4, 7, 6, 2]
random.seed(self.seed)
actual = random.permutation(integer_val)
assert_array_equal(actual, desired)
def test_beta(self):
random.seed(self.seed)
actual = random.beta(.1, .9, size=(3, 2))
desired = np.array(
[[1.45341850513746058e-02, 5.31297615662868145e-04],
[1.85366619058432324e-06, 4.19214516800110563e-03],
[1.58405155108498093e-04, 1.26252891949397652e-04]])
assert_array_almost_equal(actual, desired, decimal=15)
def test_binomial(self):
random.seed(self.seed)
actual = random.binomial(100.123, .456, size=(3, 2))
desired = np.array([[37, 43],
[42, 48],
[46, 45]])
assert_array_equal(actual, desired)
random.seed(self.seed)
actual = random.binomial(100.123, .456)
desired = 37
assert_array_equal(actual, desired)
def test_chisquare(self):
random.seed(self.seed)
actual = random.chisquare(50, size=(3, 2))
desired = np.array([[63.87858175501090585, 68.68407748911370447],
[65.77116116901505904, 47.09686762438974483],
[72.3828403199695174, 74.18408615260374006]])
assert_array_almost_equal(actual, desired, decimal=13)
def test_dirichlet(self):
random.seed(self.seed)
alpha = np.array([51.72840233779265162, 39.74494232180943953])
actual = random.dirichlet(alpha, size=(3, 2))
desired = np.array([[[0.54539444573611562, 0.45460555426388438],
[0.62345816822039413, 0.37654183177960598]],
[[0.55206000085785778, 0.44793999914214233],
[0.58964023305154301, 0.41035976694845688]],
[[0.59266909280647828, 0.40733090719352177],
[0.56974431743975207, 0.43025568256024799]]])
assert_array_almost_equal(actual, desired, decimal=15)
bad_alpha = np.array([5.4e-01, -1.0e-16])
assert_raises(ValueError, random.dirichlet, bad_alpha)
random.seed(self.seed)
alpha = np.array([51.72840233779265162, 39.74494232180943953])
actual = random.dirichlet(alpha)
assert_array_almost_equal(actual, desired[0, 0], decimal=15)
def test_dirichlet_size(self):
# gh-3173
p = np.array([51.72840233779265162, 39.74494232180943953])
assert_equal(random.dirichlet(p, np.uint32(1)).shape, (1, 2))
assert_equal(random.dirichlet(p, [2, 2]).shape, (2, 2, 2))
assert_equal(random.dirichlet(p, (2, 2)).shape, (2, 2, 2))
assert_equal(random.dirichlet(p, np.array((2, 2))).shape, (2, 2, 2))
assert_raises(TypeError, random.dirichlet, p, float(1))
def test_dirichlet_bad_alpha(self):
# gh-2089
alpha = np.array([5.4e-01, -1.0e-16])
assert_raises(ValueError, random.dirichlet, alpha)
def test_dirichlet_alpha_non_contiguous(self):
a = np.array([51.72840233779265162, -1.0, 39.74494232180943953])
alpha = a[::2]
random.seed(self.seed)
non_contig = random.dirichlet(alpha, size=(3, 2))
random.seed(self.seed)
contig = random.dirichlet(np.ascontiguousarray(alpha),
size=(3, 2))
assert_array_almost_equal(non_contig, contig)
def test_exponential(self):
random.seed(self.seed)
actual = random.exponential(1.1234, size=(3, 2))
desired = np.array([[1.08342649775011624, 1.00607889924557314],
[2.46628830085216721, 2.49668106809923884],
[0.68717433461363442, 1.69175666993575979]])
assert_array_almost_equal(actual, desired, decimal=15)
def test_exponential_0(self):
assert_equal(random.exponential(scale=0), 0)
assert_raises(ValueError, random.exponential, scale=-0.)
def test_f(self):
random.seed(self.seed)
actual = random.f(12, 77, size=(3, 2))
desired = np.array([[1.21975394418575878, 1.75135759791559775],
[1.44803115017146489, 1.22108959480396262],
[1.02176975757740629, 1.34431827623300415]])
assert_array_almost_equal(actual, desired, decimal=15)
def test_gamma(self):
random.seed(self.seed)
actual = random.gamma(5, 3, size=(3, 2))
desired = np.array([[24.60509188649287182, 28.54993563207210627],
[26.13476110204064184, 12.56988482927716078],
[31.71863275789960568, 33.30143302795922011]])
assert_array_almost_equal(actual, desired, decimal=14)
def test_gamma_0(self):
assert_equal(random.gamma(shape=0, scale=0), 0)
assert_raises(ValueError, random.gamma, shape=-0., scale=-0.)
def test_geometric(self):
random.seed(self.seed)
actual = random.geometric(.123456789, size=(3, 2))
desired = np.array([[8, 7],
[17, 17],
[5, 12]])
assert_array_equal(actual, desired)
def test_geometric_exceptions(self):
assert_raises(ValueError, random.geometric, 1.1)
assert_raises(ValueError, random.geometric, [1.1] * 10)
assert_raises(ValueError, random.geometric, -0.1)
assert_raises(ValueError, random.geometric, [-0.1] * 10)
with suppress_warnings() as sup:
sup.record(RuntimeWarning)
assert_raises(ValueError, random.geometric, np.nan)
assert_raises(ValueError, random.geometric, [np.nan] * 10)
def test_gumbel(self):
random.seed(self.seed)
actual = random.gumbel(loc=.123456789, scale=2.0, size=(3, 2))
desired = np.array([[0.19591898743416816, 0.34405539668096674],
[-1.4492522252274278, -1.47374816298446865],
[1.10651090478803416, -0.69535848626236174]])
assert_array_almost_equal(actual, desired, decimal=15)
def test_gumbel_0(self):
assert_equal(random.gumbel(scale=0), 0)
assert_raises(ValueError, random.gumbel, scale=-0.)
def test_hypergeometric(self):
random.seed(self.seed)
actual = random.hypergeometric(10.1, 5.5, 14, size=(3, 2))
desired = np.array([[10, 10],
[10, 10],
[9, 9]])
assert_array_equal(actual, desired)
# Test nbad = 0
actual = random.hypergeometric(5, 0, 3, size=4)
desired = np.array([3, 3, 3, 3])
assert_array_equal(actual, desired)
actual = random.hypergeometric(15, 0, 12, size=4)
desired = np.array([12, 12, 12, 12])
assert_array_equal(actual, desired)
# Test ngood = 0
actual = random.hypergeometric(0, 5, 3, size=4)
desired = np.array([0, 0, 0, 0])
assert_array_equal(actual, desired)
actual = random.hypergeometric(0, 15, 12, size=4)
desired = np.array([0, 0, 0, 0])
assert_array_equal(actual, desired)
def test_laplace(self):
random.seed(self.seed)
actual = random.laplace(loc=.123456789, scale=2.0, size=(3, 2))
desired = np.array([[0.66599721112760157, 0.52829452552221945],
[3.12791959514407125, 3.18202813572992005],
[-0.05391065675859356, 1.74901336242837324]])
assert_array_almost_equal(actual, desired, decimal=15)
def test_laplace_0(self):
assert_equal(random.laplace(scale=0), 0)
assert_raises(ValueError, random.laplace, scale=-0.)
def test_logistic(self):
random.seed(self.seed)
actual = random.logistic(loc=.123456789, scale=2.0, size=(3, 2))
desired = np.array([[1.09232835305011444, 0.8648196662399954],
[4.27818590694950185, 4.33897006346929714],
[-0.21682183359214885, 2.63373365386060332]])
assert_array_almost_equal(actual, desired, decimal=15)
def test_lognormal(self):
random.seed(self.seed)
actual = random.lognormal(mean=.123456789, sigma=2.0, size=(3, 2))
desired = np.array([[16.50698631688883822, 36.54846706092654784],
[22.67886599981281748, 0.71617561058995771],
[65.72798501792723869, 86.84341601437161273]])
assert_array_almost_equal(actual, desired, decimal=13)
def test_lognormal_0(self):
assert_equal(random.lognormal(sigma=0), 1)
assert_raises(ValueError, random.lognormal, sigma=-0.)
def test_logseries(self):
random.seed(self.seed)
actual = random.logseries(p=.923456789, size=(3, 2))
desired = np.array([[2, 2],
[6, 17],
[3, 6]])
assert_array_equal(actual, desired)
def test_logseries_exceptions(self):
with suppress_warnings() as sup:
sup.record(RuntimeWarning)
assert_raises(ValueError, random.logseries, np.nan)
assert_raises(ValueError, random.logseries, [np.nan] * 10)
def test_multinomial(self):
random.seed(self.seed)
actual = random.multinomial(20, [1 / 6.] * 6, size=(3, 2))
desired = np.array([[[4, 3, 5, 4, 2, 2],
[5, 2, 8, 2, 2, 1]],
[[3, 4, 3, 6, 0, 4],
[2, 1, 4, 3, 6, 4]],
[[4, 4, 2, 5, 2, 3],
[4, 3, 4, 2, 3, 4]]])
assert_array_equal(actual, desired)
def test_multivariate_normal(self):
random.seed(self.seed)
mean = (.123456789, 10)
cov = [[1, 0], [0, 1]]
size = (3, 2)
actual = random.multivariate_normal(mean, cov, size)
desired = np.array([[[1.463620246718631, 11.73759122771936],
[1.622445133300628, 9.771356667546383]],
[[2.154490787682787, 12.170324946056553],
[1.719909438201865, 9.230548443648306]],
[[0.689515026297799, 9.880729819607714],
[-0.023054015651998, 9.201096623542879]]])
assert_array_almost_equal(actual, desired, decimal=15)
# Check the default size; this used to raise a deprecation warning
actual = random.multivariate_normal(mean, cov)
desired = np.array([0.895289569463708, 9.17180864067987])
assert_array_almost_equal(actual, desired, decimal=15)
# Check that a non-positive-semidefinite covariance matrix warns
# with RuntimeWarning
mean = [0, 0]
cov = [[1, 2], [2, 1]]
assert_warns(RuntimeWarning, random.multivariate_normal, mean, cov)
# and that it doesn't warn when check_valid='ignore'
assert_no_warnings(random.multivariate_normal, mean, cov,
check_valid='ignore')
# and that it raises a ValueError when check_valid='raise'
assert_raises(ValueError, random.multivariate_normal, mean, cov,
check_valid='raise')
cov = np.array([[1, 0.1], [0.1, 1]], dtype=np.float32)
with suppress_warnings() as sup:
# record before the call so any RuntimeWarning it emits is captured
w = sup.record(RuntimeWarning)
random.multivariate_normal(mean, cov)
assert len(w) == 0
mu = np.zeros(2)
cov = np.eye(2)
assert_raises(ValueError, random.multivariate_normal, mean, cov,
check_valid='other')
assert_raises(ValueError, random.multivariate_normal,
np.zeros((2, 1, 1)), cov)
assert_raises(ValueError, random.multivariate_normal,
mu, np.empty((3, 2)))
assert_raises(ValueError, random.multivariate_normal,
mu, np.eye(3))
def test_negative_binomial(self):
random.seed(self.seed)
actual = random.negative_binomial(n=100, p=.12345, size=(3, 2))
desired = np.array([[848, 841],
[892, 611],
[779, 647]])
assert_array_equal(actual, desired)
def test_negative_binomial_exceptions(self):
with suppress_warnings() as sup:
sup.record(RuntimeWarning)
assert_raises(ValueError, random.negative_binomial, 100, np.nan)
assert_raises(ValueError, random.negative_binomial, 100,
[np.nan] * 10)
def test_noncentral_chisquare(self):
random.seed(self.seed)
actual = random.noncentral_chisquare(df=5, nonc=5, size=(3, 2))
desired = np.array([[23.91905354498517511, 13.35324692733826346],
[31.22452661329736401, 16.60047399466177254],
[5.03461598262724586, 17.94973089023519464]])
assert_array_almost_equal(actual, desired, decimal=14)
actual = random.noncentral_chisquare(df=.5, nonc=.2, size=(3, 2))
desired = np.array([[1.47145377828516666, 0.15052899268012659],
[0.00943803056963588, 1.02647251615666169],
[0.332334982684171, 0.15451287602753125]])
assert_array_almost_equal(actual, desired, decimal=14)
random.seed(self.seed)
actual = random.noncentral_chisquare(df=5, nonc=0, size=(3, 2))
desired = np.array([[9.597154162763948, 11.725484450296079],
[10.413711048138335, 3.694475922923986],
[13.484222138963087, 14.377255424602957]])
assert_array_almost_equal(actual, desired, decimal=14)
def test_noncentral_f(self):
random.seed(self.seed)
actual = random.noncentral_f(dfnum=5, dfden=2, nonc=1,
size=(3, 2))
desired = np.array([[1.40598099674926669, 0.34207973179285761],
[3.57715069265772545, 7.92632662577829805],
[0.43741599463544162, 1.1774208752428319]])
assert_array_almost_equal(actual, desired, decimal=14)
def test_noncentral_f_nan(self):
random.seed(self.seed)
actual = random.noncentral_f(dfnum=5, dfden=2, nonc=np.nan)
assert np.isnan(actual)
def test_normal(self):
random.seed(self.seed)
actual = random.normal(loc=.123456789, scale=2.0, size=(3, 2))
desired = np.array([[2.80378370443726244, 3.59863924443872163],
[3.121433477601256, -0.33382987590723379],
[4.18552478636557357, 4.46410668111310471]])
assert_array_almost_equal(actual, desired, decimal=15)
def test_normal_0(self):
assert_equal(random.normal(scale=0), 0)
assert_raises(ValueError, random.normal, scale=-0.)
def test_pareto(self):
random.seed(self.seed)
actual = random.pareto(a=.123456789, size=(3, 2))
desired = np.array(
[[2.46852460439034849e+03, 1.41286880810518346e+03],
[5.28287797029485181e+07, 6.57720981047328785e+07],
[1.40840323350391515e+02, 1.98390255135251704e+05]])
# For some reason on 32-bit x86 Ubuntu 12.10 the [1, 0] entry in this
# matrix differs by 24 nulps. Discussion:
# https://mail.python.org/pipermail/numpy-discussion/2012-September/063801.html
# Consensus is that this is probably some gcc quirk that affects
# rounding but not in any important way, so we just use a looser
# tolerance on this test:
np.testing.assert_array_almost_equal_nulp(actual, desired, nulp=30)
def test_poisson(self):
random.seed(self.seed)
actual = random.poisson(lam=.123456789, size=(3, 2))
desired = np.array([[0, 0],
[1, 0],
[0, 0]])
assert_array_equal(actual, desired)
def test_poisson_exceptions(self):
lambig = np.iinfo('l').max
lamneg = -1
assert_raises(ValueError, random.poisson, lamneg)
assert_raises(ValueError, random.poisson, [lamneg] * 10)
assert_raises(ValueError, random.poisson, lambig)
assert_raises(ValueError, random.poisson, [lambig] * 10)
with suppress_warnings() as sup:
sup.record(RuntimeWarning)
assert_raises(ValueError, random.poisson, np.nan)
assert_raises(ValueError, random.poisson, [np.nan] * 10)
def test_power(self):
random.seed(self.seed)
actual = random.power(a=.123456789, size=(3, 2))
desired = np.array([[0.02048932883240791, 0.01424192241128213],
[0.38446073748535298, 0.39499689943484395],
[0.00177699707563439, 0.13115505880863756]])
assert_array_almost_equal(actual, desired, decimal=15)
def test_rayleigh(self):
random.seed(self.seed)
actual = random.rayleigh(scale=10, size=(3, 2))
desired = np.array([[13.8882496494248393, 13.383318339044731],
[20.95413364294492098, 21.08285015800712614],
[11.06066537006854311, 17.35468505778271009]])
assert_array_almost_equal(actual, desired, decimal=14)
def test_rayleigh_0(self):
assert_equal(random.rayleigh(scale=0), 0)
assert_raises(ValueError, random.rayleigh, scale=-0.)
def test_standard_cauchy(self):
random.seed(self.seed)
actual = random.standard_cauchy(size=(3, 2))
desired = np.array([[0.77127660196445336, -6.55601161955910605],
[0.93582023391158309, -2.07479293013759447],
[-4.74601644297011926, 0.18338989290760804]])
assert_array_almost_equal(actual, desired, decimal=15)
def test_standard_exponential(self):
random.seed(self.seed)
actual = random.standard_exponential(size=(3, 2))
desired = np.array([[0.96441739162374596, 0.89556604882105506],
[2.1953785836319808, 2.22243285392490542],
[0.6116915921431676, 1.50592546727413201]])
assert_array_almost_equal(actual, desired, decimal=15)
def test_standard_gamma(self):
random.seed(self.seed)
actual = random.standard_gamma(shape=3, size=(3, 2))
desired = np.array([[5.50841531318455058, 6.62953470301903103],
[5.93988484943779227, 2.31044849402133989],
[7.54838614231317084, 8.012756093271868]])
assert_array_almost_equal(actual, desired, decimal=14)
def test_standard_gamma_0(self):
assert_equal(random.standard_gamma(shape=0), 0)
assert_raises(ValueError, random.standard_gamma, shape=-0.)
def test_standard_normal(self):
random.seed(self.seed)
actual = random.standard_normal(size=(3, 2))
desired = np.array([[1.34016345771863121, 1.73759122771936081],
[1.498988344300628, -0.2286433324536169],
[2.031033998682787, 2.17032494605655257]])
assert_array_almost_equal(actual, desired, decimal=15)
def test_randn_singleton(self):
random.seed(self.seed)
actual = random.randn()
desired = np.array(1.34016345771863121)
assert_array_almost_equal(actual, desired, decimal=15)
def test_standard_t(self):
random.seed(self.seed)
actual = random.standard_t(df=10, size=(3, 2))
desired = np.array([[0.97140611862659965, -0.08830486548450577],
[1.36311143689505321, -0.55317463909867071],
[-0.18473749069684214, 0.61181537341755321]])
assert_array_almost_equal(actual, desired, decimal=15)
def test_triangular(self):
random.seed(self.seed)
actual = random.triangular(left=5.12, mode=10.23, right=20.34,
size=(3, 2))
desired = np.array([[12.68117178949215784, 12.4129206149193152],
[16.20131377335158263, 16.25692138747600524],
[11.20400690911820263, 14.4978144835829923]])
assert_array_almost_equal(actual, desired, decimal=14)
def test_uniform(self):
random.seed(self.seed)
actual = random.uniform(low=1.23, high=10.54, size=(3, 2))
desired = np.array([[6.99097932346268003, 6.73801597444323974],
[9.50364421400426274, 9.53130618907631089],
[5.48995325769805476, 8.47493103280052118]])
assert_array_almost_equal(actual, desired, decimal=15)
def test_uniform_range_bounds(self):
fmin = np.finfo('float').min
fmax = np.finfo('float').max
func = random.uniform
assert_raises(OverflowError, func, -np.inf, 0)
assert_raises(OverflowError, func, 0, np.inf)
assert_raises(OverflowError, func, fmin, fmax)
assert_raises(OverflowError, func, [-np.inf], [0])
assert_raises(OverflowError, func, [0], [np.inf])
# (fmax / 1e17) - fmin is within range, so this should not throw.
# On i386, extended precision can make DBL_MAX / 1e17 + DBL_MAX
# compare greater than DBL_MAX, so increase fmin a bit to compensate.
random.uniform(low=np.nextafter(fmin, 1), high=fmax / 1e17)
def test_scalar_exception_propagation(self):
# Tests that exceptions are correctly propagated in distributions
# when called with objects that throw exceptions when converted to
# scalars.
#
# Regression test for gh: 8865
class ThrowingFloat(np.ndarray):
def __float__(self):
raise TypeError
throwing_float = np.array(1.0).view(ThrowingFloat)
assert_raises(TypeError, random.uniform, throwing_float,
throwing_float)
class ThrowingInteger(np.ndarray):
def __int__(self):
raise TypeError
throwing_int = np.array(1).view(ThrowingInteger)
assert_raises(TypeError, random.hypergeometric, throwing_int, 1, 1)
def test_vonmises(self):
random.seed(self.seed)
actual = random.vonmises(mu=1.23, kappa=1.54, size=(3, 2))
desired = np.array([[2.28567572673902042, 2.89163838442285037],
[0.38198375564286025, 2.57638023113890746],
[1.19153771588353052, 1.83509849681825354]])
assert_array_almost_equal(actual, desired, decimal=15)
def test_vonmises_small(self):
# regression test for an infinite loop with tiny kappa, gh-4720
random.seed(self.seed)
r = random.vonmises(mu=0., kappa=1.1e-8, size=10**6)
assert_(np.isfinite(r).all())
def test_vonmises_nan(self):
random.seed(self.seed)
r = random.vonmises(mu=0., kappa=np.nan)
assert_(np.isnan(r))
def test_wald(self):
random.seed(self.seed)
actual = random.wald(mean=1.23, scale=1.54, size=(3, 2))
desired = np.array([[3.82935265715889983, 5.13125249184285526],
[0.35045403618358717, 1.50832396872003538],
[0.24124319895843183, 0.22031101461955038]])
assert_array_almost_equal(actual, desired, decimal=14)
def test_weibull(self):
random.seed(self.seed)
actual = random.weibull(a=1.23, size=(3, 2))
desired = np.array([[0.97097342648766727, 0.91422896443565516],
[1.89517770034962929, 1.91414357960479564],
[0.67057783752390987, 1.39494046635066793]])
assert_array_almost_equal(actual, desired, decimal=15)
def test_weibull_0(self):
random.seed(self.seed)
assert_equal(random.weibull(a=0, size=12), np.zeros(12))
assert_raises(ValueError, random.weibull, a=-0.)
def test_zipf(self):
random.seed(self.seed)
actual = random.zipf(a=1.23, size=(3, 2))
desired = np.array([[66, 29],
[1, 1],
[3, 13]])
assert_array_equal(actual, desired)
class TestBroadcast:
# tests that functions that broadcast behave
# correctly when presented with non-scalar arguments
def setup(self):
self.seed = 123456789
def set_seed(self):
random.seed(self.seed)
def test_uniform(self):
low = [0]
high = [1]
uniform = random.uniform
desired = np.array([0.53283302478975902,
0.53413660089041659,
0.50955303552646702])
self.set_seed()
actual = uniform(low * 3, high)
assert_array_almost_equal(actual, desired, decimal=14)
self.set_seed()
actual = uniform(low, high * 3)
assert_array_almost_equal(actual, desired, decimal=14)
def test_normal(self):
loc = [0]
scale = [1]
bad_scale = [-1]
normal = random.normal
desired = np.array([2.2129019979039612,
2.1283977976520019,
1.8417114045748335])
self.set_seed()
actual = normal(loc * 3, scale)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, normal, loc * 3, bad_scale)
self.set_seed()
actual = normal(loc, scale * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, normal, loc, bad_scale * 3)
def test_beta(self):
a = [1]
b = [2]
bad_a = [-1]
bad_b = [-2]
beta = random.beta
desired = np.array([0.19843558305989056,
0.075230336409423643,
0.24976865978980844])
self.set_seed()
actual = beta(a * 3, b)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, beta, bad_a * 3, b)
assert_raises(ValueError, beta, a * 3, bad_b)
self.set_seed()
actual = beta(a, b * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, beta, bad_a, b * 3)
assert_raises(ValueError, beta, a, bad_b * 3)
def test_exponential(self):
scale = [1]
bad_scale = [-1]
exponential = random.exponential
desired = np.array([0.76106853658845242,
0.76386282278691653,
0.71243813125891797])
self.set_seed()
actual = exponential(scale * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, exponential, bad_scale * 3)
def test_standard_gamma(self):
shape = [1]
bad_shape = [-1]
std_gamma = random.standard_gamma
desired = np.array([0.76106853658845242,
0.76386282278691653,
0.71243813125891797])
self.set_seed()
actual = std_gamma(shape * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, std_gamma, bad_shape * 3)
def test_gamma(self):
shape = [1]
scale = [2]
bad_shape = [-1]
bad_scale = [-2]
gamma = random.gamma
desired = np.array([1.5221370731769048,
1.5277256455738331,
1.4248762625178359])
self.set_seed()
actual = gamma(shape * 3, scale)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, gamma, bad_shape * 3, scale)
assert_raises(ValueError, gamma, shape * 3, bad_scale)
self.set_seed()
actual = gamma(shape, scale * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, gamma, bad_shape, scale * 3)
assert_raises(ValueError, gamma, shape, bad_scale * 3)
def test_f(self):
dfnum = [1]
dfden = [2]
bad_dfnum = [-1]
bad_dfden = [-2]
f = random.f
desired = np.array([0.80038951638264799,
0.86768719635363512,
2.7251095168386801])
self.set_seed()
actual = f(dfnum * 3, dfden)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, f, bad_dfnum * 3, dfden)
assert_raises(ValueError, f, dfnum * 3, bad_dfden)
self.set_seed()
actual = f(dfnum, dfden * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, f, bad_dfnum, dfden * 3)
assert_raises(ValueError, f, dfnum, bad_dfden * 3)
def test_noncentral_f(self):
dfnum = [2]
dfden = [3]
nonc = [4]
bad_dfnum = [0]
bad_dfden = [-1]
bad_nonc = [-2]
nonc_f = random.noncentral_f
desired = np.array([9.1393943263705211,
13.025456344595602,
8.8018098359100545])
self.set_seed()
actual = nonc_f(dfnum * 3, dfden, nonc)
assert_array_almost_equal(actual, desired, decimal=14)
assert np.all(np.isnan(nonc_f(dfnum, dfden, [np.nan] * 3)))
assert_raises(ValueError, nonc_f, bad_dfnum * 3, dfden, nonc)
assert_raises(ValueError, nonc_f, dfnum * 3, bad_dfden, nonc)
assert_raises(ValueError, nonc_f, dfnum * 3, dfden, bad_nonc)
self.set_seed()
actual = nonc_f(dfnum, dfden * 3, nonc)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, nonc_f, bad_dfnum, dfden * 3, nonc)
assert_raises(ValueError, nonc_f, dfnum, bad_dfden * 3, nonc)
assert_raises(ValueError, nonc_f, dfnum, dfden * 3, bad_nonc)
self.set_seed()
actual = nonc_f(dfnum, dfden, nonc * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, nonc_f, bad_dfnum, dfden, nonc * 3)
assert_raises(ValueError, nonc_f, dfnum, bad_dfden, nonc * 3)
assert_raises(ValueError, nonc_f, dfnum, dfden, bad_nonc * 3)
def test_noncentral_f_small_df(self):
self.set_seed()
desired = np.array([6.869638627492048, 0.785880199263955])
actual = random.noncentral_f(0.9, 0.9, 2, size=2)
assert_array_almost_equal(actual, desired, decimal=14)
def test_chisquare(self):
df = [1]
bad_df = [-1]
chisquare = random.chisquare
desired = np.array([0.57022801133088286,
0.51947702108840776,
0.1320969254923558])
self.set_seed()
actual = chisquare(df * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, chisquare, bad_df * 3)
def test_noncentral_chisquare(self):
df = [1]
nonc = [2]
bad_df = [-1]
bad_nonc = [-2]
nonc_chi = random.noncentral_chisquare
desired = np.array([9.0015599467913763,
4.5804135049718742,
6.0872302432834564])
self.set_seed()
actual = nonc_chi(df * 3, nonc)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, nonc_chi, bad_df * 3, nonc)
assert_raises(ValueError, nonc_chi, df * 3, bad_nonc)
self.set_seed()
actual = nonc_chi(df, nonc * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, nonc_chi, bad_df, nonc * 3)
assert_raises(ValueError, nonc_chi, df, bad_nonc * 3)
def test_standard_t(self):
df = [1]
bad_df = [-1]
t = random.standard_t
desired = np.array([3.0702872575217643,
5.8560725167361607,
1.0274791436474273])
self.set_seed()
actual = t(df * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, t, bad_df * 3)
assert_raises(ValueError, random.standard_t, bad_df * 3)
def test_vonmises(self):
mu = [2]
kappa = [1]
bad_kappa = [-1]
vonmises = random.vonmises
desired = np.array([2.9883443664201312,
-2.7064099483995943,
-1.8672476700665914])
self.set_seed()
actual = vonmises(mu * 3, kappa)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, vonmises, mu * 3, bad_kappa)
self.set_seed()
actual = vonmises(mu, kappa * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, vonmises, mu, bad_kappa * 3)
def test_pareto(self):
a = [1]
bad_a = [-1]
pareto = random.pareto
desired = np.array([1.1405622680198362,
1.1465519762044529,
1.0389564467453547])
self.set_seed()
actual = pareto(a * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, pareto, bad_a * 3)
assert_raises(ValueError, random.pareto, bad_a * 3)
def test_weibull(self):
a = [1]
bad_a = [-1]
weibull = random.weibull
desired = np.array([0.76106853658845242,
0.76386282278691653,
0.71243813125891797])
self.set_seed()
actual = weibull(a * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, weibull, bad_a * 3)
assert_raises(ValueError, random.weibull, bad_a * 3)
def test_power(self):
a = [1]
bad_a = [-1]
power = random.power
desired = np.array([0.53283302478975902,
0.53413660089041659,
0.50955303552646702])
self.set_seed()
actual = power(a * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, power, bad_a * 3)
assert_raises(ValueError, random.power, bad_a * 3)
def test_laplace(self):
loc = [0]
scale = [1]
bad_scale = [-1]
laplace = random.laplace
desired = np.array([0.067921356028507157,
0.070715642226971326,
0.019290950698972624])
self.set_seed()
actual = laplace(loc * 3, scale)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, laplace, loc * 3, bad_scale)
self.set_seed()
actual = laplace(loc, scale * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, laplace, loc, bad_scale * 3)
def test_gumbel(self):
loc = [0]
scale = [1]
bad_scale = [-1]
gumbel = random.gumbel
desired = np.array([0.2730318639556768,
0.26936705726291116,
0.33906220393037939])
self.set_seed()
actual = gumbel(loc * 3, scale)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, gumbel, loc * 3, bad_scale)
self.set_seed()
actual = gumbel(loc, scale * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, gumbel, loc, bad_scale * 3)
def test_logistic(self):
loc = [0]
scale = [1]
bad_scale = [-1]
logistic = random.logistic
desired = np.array([0.13152135837586171,
0.13675915696285773,
0.038216792802833396])
self.set_seed()
actual = logistic(loc * 3, scale)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, logistic, loc * 3, bad_scale)
self.set_seed()
actual = logistic(loc, scale * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, logistic, loc, bad_scale * 3)
assert_equal(random.logistic(1.0, 0.0), 1.0)
def test_lognormal(self):
mean = [0]
sigma = [1]
bad_sigma = [-1]
lognormal = random.lognormal
desired = np.array([9.1422086044848427,
8.4013952870126261,
6.3073234116578671])
self.set_seed()
actual = lognormal(mean * 3, sigma)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, lognormal, mean * 3, bad_sigma)
assert_raises(ValueError, random.lognormal, mean * 3, bad_sigma)
self.set_seed()
actual = lognormal(mean, sigma * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, lognormal, mean, bad_sigma * 3)
assert_raises(ValueError, random.lognormal, mean, bad_sigma * 3)
def test_rayleigh(self):
scale = [1]
bad_scale = [-1]
rayleigh = random.rayleigh
desired = np.array([1.2337491937897689,
1.2360119924878694,
1.1936818095781789])
self.set_seed()
actual = rayleigh(scale * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, rayleigh, bad_scale * 3)
def test_wald(self):
mean = [0.5]
scale = [1]
bad_mean = [0]
bad_scale = [-2]
wald = random.wald
desired = np.array([0.11873681120271318,
0.12450084820795027,
0.9096122728408238])
self.set_seed()
actual = wald(mean * 3, scale)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, wald, bad_mean * 3, scale)
assert_raises(ValueError, wald, mean * 3, bad_scale)
assert_raises(ValueError, random.wald, bad_mean * 3, scale)
assert_raises(ValueError, random.wald, mean * 3, bad_scale)
self.set_seed()
actual = wald(mean, scale * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, wald, bad_mean, scale * 3)
assert_raises(ValueError, wald, mean, bad_scale * 3)
assert_raises(ValueError, wald, 0.0, 1)
assert_raises(ValueError, wald, 0.5, 0.0)
def test_triangular(self):
left = [1]
right = [3]
mode = [2]
bad_left_one = [3]
bad_mode_one = [4]
bad_left_two, bad_mode_two = right * 2
triangular = random.triangular
desired = np.array([2.03339048710429,
2.0347400359389356,
2.0095991069536208])
self.set_seed()
actual = triangular(left * 3, mode, right)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, triangular, bad_left_one * 3, mode, right)
assert_raises(ValueError, triangular, left * 3, bad_mode_one, right)
assert_raises(ValueError, triangular, bad_left_two * 3, bad_mode_two,
right)
self.set_seed()
actual = triangular(left, mode * 3, right)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, triangular, bad_left_one, mode * 3, right)
assert_raises(ValueError, triangular, left, bad_mode_one * 3, right)
assert_raises(ValueError, triangular, bad_left_two, bad_mode_two * 3,
right)
self.set_seed()
actual = triangular(left, mode, right * 3)
assert_array_almost_equal(actual, desired, decimal=14)
assert_raises(ValueError, triangular, bad_left_one, mode, right * 3)
assert_raises(ValueError, triangular, left, bad_mode_one, right * 3)
assert_raises(ValueError, triangular, bad_left_two, bad_mode_two,
right * 3)
assert_raises(ValueError, triangular, 10., 0., 20.)
assert_raises(ValueError, triangular, 10., 25., 20.)
assert_raises(ValueError, triangular, 10., 10., 10.)
def test_binomial(self):
n = [1]
p = [0.5]
bad_n = [-1]
bad_p_one = [-1]
bad_p_two = [1.5]
binom = random.binomial
desired = np.array([1, 1, 1])
self.set_seed()
actual = binom(n * 3, p)
assert_array_equal(actual, desired)
assert_raises(ValueError, binom, bad_n * 3, p)
assert_raises(ValueError, binom, n * 3, bad_p_one)
assert_raises(ValueError, binom, n * 3, bad_p_two)
self.set_seed()
actual = binom(n, p * 3)
assert_array_equal(actual, desired)
assert_raises(ValueError, binom, bad_n, p * 3)
assert_raises(ValueError, binom, n, bad_p_one * 3)
assert_raises(ValueError, binom, n, bad_p_two * 3)
def test_negative_binomial(self):
n = [1]
p = [0.5]
bad_n = [-1]
bad_p_one = [-1]
bad_p_two = [1.5]
neg_binom = random.negative_binomial
desired = np.array([1, 0, 1])
self.set_seed()
actual = neg_binom(n * 3, p)
assert_array_equal(actual, desired)
assert_raises(ValueError, neg_binom, bad_n * 3, p)
assert_raises(ValueError, neg_binom, n * 3, bad_p_one)
assert_raises(ValueError, neg_binom, n * 3, bad_p_two)
self.set_seed()
actual = neg_binom(n, p * 3)
assert_array_equal(actual, desired)
assert_raises(ValueError, neg_binom, bad_n, p * 3)
assert_raises(ValueError, neg_binom, n, bad_p_one * 3)
assert_raises(ValueError, neg_binom, n, bad_p_two * 3)
def test_poisson(self):
max_lam = random.RandomState()._poisson_lam_max
lam = [1]
bad_lam_one = [-1]
bad_lam_two = [max_lam * 2]
poisson = random.poisson
desired = np.array([1, 1, 0])
self.set_seed()
actual = poisson(lam * 3)
assert_array_equal(actual, desired)
assert_raises(ValueError, poisson, bad_lam_one * 3)
assert_raises(ValueError, poisson, bad_lam_two * 3)
def test_zipf(self):
a = [2]
bad_a = [0]
zipf = random.zipf
desired = np.array([2, 2, 1])
self.set_seed()
actual = zipf(a * 3)
assert_array_equal(actual, desired)
assert_raises(ValueError, zipf, bad_a * 3)
with np.errstate(invalid='ignore'):
assert_raises(ValueError, zipf, np.nan)
assert_raises(ValueError, zipf, [0, 0, np.nan])
def test_geometric(self):
p = [0.5]
bad_p_one = [-1]
bad_p_two = [1.5]
geom = random.geometric
desired = np.array([2, 2, 2])
self.set_seed()
actual = geom(p * 3)
assert_array_equal(actual, desired)
assert_raises(ValueError, geom, bad_p_one * 3)
assert_raises(ValueError, geom, bad_p_two * 3)
def test_hypergeometric(self):
ngood = [1]
nbad = [2]
nsample = [2]
bad_ngood = [-1]
bad_nbad = [-2]
bad_nsample_one = [0]
bad_nsample_two = [4]
hypergeom = random.hypergeometric
desired = np.array([1, 1, 1])
self.set_seed()
actual = hypergeom(ngood * 3, nbad, nsample)
assert_array_equal(actual, desired)
assert_raises(ValueError, hypergeom, bad_ngood * 3, nbad, nsample)
assert_raises(ValueError, hypergeom, ngood * 3, bad_nbad, nsample)
assert_raises(ValueError, hypergeom, ngood * 3, nbad, bad_nsample_one)
assert_raises(ValueError, hypergeom, ngood * 3, nbad, bad_nsample_two)
self.set_seed()
actual = hypergeom(ngood, nbad * 3, nsample)
assert_array_equal(actual, desired)
assert_raises(ValueError, hypergeom, bad_ngood, nbad * 3, nsample)
assert_raises(ValueError, hypergeom, ngood, bad_nbad * 3, nsample)
assert_raises(ValueError, hypergeom, ngood, nbad * 3, bad_nsample_one)
assert_raises(ValueError, hypergeom, ngood, nbad * 3, bad_nsample_two)
self.set_seed()
actual = hypergeom(ngood, nbad, nsample * 3)
assert_array_equal(actual, desired)
assert_raises(ValueError, hypergeom, bad_ngood, nbad, nsample * 3)
assert_raises(ValueError, hypergeom, ngood, bad_nbad, nsample * 3)
assert_raises(ValueError, hypergeom, ngood, nbad, bad_nsample_one * 3)
assert_raises(ValueError, hypergeom, ngood, nbad, bad_nsample_two * 3)
assert_raises(ValueError, hypergeom, -1, 10, 20)
assert_raises(ValueError, hypergeom, 10, -1, 20)
assert_raises(ValueError, hypergeom, 10, 10, 0)
assert_raises(ValueError, hypergeom, 10, 10, 25)
def test_logseries(self):
p = [0.5]
bad_p_one = [2]
bad_p_two = [-1]
logseries = random.logseries
desired = np.array([1, 1, 1])
self.set_seed()
actual = logseries(p * 3)
assert_array_equal(actual, desired)
assert_raises(ValueError, logseries, bad_p_one * 3)
assert_raises(ValueError, logseries, bad_p_two * 3)
class TestThread:
# make sure each state produces the same sequence even in threads
def setup(self):
self.seeds = range(4)
def check_function(self, function, sz):
from threading import Thread
out1 = np.empty((len(self.seeds),) + sz)
out2 = np.empty((len(self.seeds),) + sz)
# threaded generation
t = [Thread(target=function, args=(random.RandomState(s), o))
for s, o in zip(self.seeds, out1)]
[x.start() for x in t]
[x.join() for x in t]
# the same serial
for s, o in zip(self.seeds, out2):
function(random.RandomState(s), o)
# these platforms change x87 fpu precision mode in threads
if np.intp().dtype.itemsize == 4 and sys.platform == "win32":
assert_array_almost_equal(out1, out2)
else:
assert_array_equal(out1, out2)
def test_normal(self):
def gen_random(state, out):
out[...] = state.normal(size=10000)
self.check_function(gen_random, sz=(10000,))
def test_exp(self):
def gen_random(state, out):
out[...] = state.exponential(scale=np.ones((100, 1000)))
self.check_function(gen_random, sz=(100, 1000))
def test_multinomial(self):
def gen_random(state, out):
out[...] = state.multinomial(10, [1 / 6.] * 6, size=10000)
self.check_function(gen_random, sz=(10000, 6))
# See Issue #4263
class TestSingleEltArrayInput:
def setup(self):
self.argOne = np.array([2])
self.argTwo = np.array([3])
self.argThree = np.array([4])
self.tgtShape = (1,)
def test_one_arg_funcs(self):
funcs = (random.exponential, random.standard_gamma,
random.chisquare, random.standard_t,
random.pareto, random.weibull,
random.power, random.rayleigh,
random.poisson, random.zipf,
random.geometric, random.logseries)
probfuncs = (random.geometric, random.logseries)
for func in funcs:
if func in probfuncs: # p < 1.0
out = func(np.array([0.5]))
else:
out = func(self.argOne)
assert_equal(out.shape, self.tgtShape)
def test_two_arg_funcs(self):
funcs = (random.uniform, random.normal,
random.beta, random.gamma,
random.f, random.noncentral_chisquare,
random.vonmises, random.laplace,
random.gumbel, random.logistic,
random.lognormal, random.wald,
random.binomial, random.negative_binomial)
probfuncs = (random.binomial, random.negative_binomial)
for func in funcs:
if func in probfuncs: # p <= 1
argTwo = np.array([0.5])
else:
argTwo = self.argTwo
out = func(self.argOne, argTwo)
assert_equal(out.shape, self.tgtShape)
out = func(self.argOne[0], argTwo)
assert_equal(out.shape, self.tgtShape)
out = func(self.argOne, argTwo[0])
assert_equal(out.shape, self.tgtShape)
def test_three_arg_funcs(self):
funcs = [random.noncentral_f, random.triangular,
random.hypergeometric]
for func in funcs:
out = func(self.argOne, self.argTwo, self.argThree)
assert_equal(out.shape, self.tgtShape)
out = func(self.argOne[0], self.argTwo, self.argThree)
assert_equal(out.shape, self.tgtShape)
out = func(self.argOne, self.argTwo[0], self.argThree)
assert_equal(out.shape, self.tgtShape)
# Ensure returned array dtype is correct for platform
def test_integer_dtype(int_func):
random.seed(123456789)
fname, args, md5 = int_func
f = getattr(random, fname)
actual = f(*args, size=2)
assert_(actual.dtype == np.dtype('l'))
def test_integer_repeat(int_func):
random.seed(123456789)
fname, args, md5 = int_func
f = getattr(random, fname)
val = f(*args, size=1000000)
if sys.byteorder != 'little':
val = val.byteswap()
res = hashlib.md5(val.view(np.int8)).hexdigest()
assert_(res == md5)
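The pattern these tests exercise throughout — a scalar parameter broadcasting against a list argument, with any invalid value in the broadcast input raising `ValueError` — can be seen in isolation. A minimal sketch (not part of the test suite), using only the public `numpy.random.RandomState` API:

```python
import numpy as np

# A length-1 df list broadcast against three draws; shape follows the input.
rs = np.random.RandomState(12345)
sample = rs.chisquare([1] * 3)
assert sample.shape == (3,)

# A negative df anywhere in the broadcast input is rejected up front.
try:
    rs.chisquare([-1] * 3)
    raised = False
except ValueError:
    raised = True
assert raised
```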
| WarrenWeckesser/numpy | numpy/random/tests/test_randomstate.py | Python | bsd-3-clause | 77,995 | ["Gaussian"] | c0a2e343564f1d60b958a6b5bd5c75145d0ac35dfd463fb5d5590472f166080a |
# Copyright (C) 2010-2018 The ESPResSo project
#
# This file is part of ESPResSo.
#
# ESPResSo is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ESPResSo is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
This module exposes ESPResSo's coordinates and particle attributes
to MDAnalysis without the need to save information to files.
The main class is :class:`Stream`, which is used to initialize the stream of
data to MDAnalysis' readers. These are the topology reader :class:`ESPParser`
and the coordinates reader :class:`ESPReader`.
A minimal working example is the following:
>>> # imports
>>> import espressomd
>>> from espressomd import MDA_ESP
>>> import MDAnalysis as mda
>>> # system setup
>>> system = espressomd.System()
>>> system.time_step = 1.
>>> system.cell_system.skin = 1.
>>> system.box_l = [10.,10.,10.]
>>> system.part.add(id=0,pos=[1.,2.,3.])
>>> # set up the stream
>>> eos = MDA_ESP.Stream(system)
>>> # feed Universe with a topology and with coordinates
>>> u = mda.Universe(eos.topology,eos.trajectory)
>>> print(u)
<Universe with 1 atoms>
"""
try:
import cStringIO as StringIO
StringIO = StringIO.StringIO
except ImportError:
from io import StringIO
import numpy as np
import MDAnalysis
from distutils.version import LooseVersion
from MDAnalysis.lib import util
from MDAnalysis.coordinates.core import triclinic_box, triclinic_vectors
from MDAnalysis.lib.util import NamedStream
from MDAnalysis.topology.base import TopologyReaderBase
from MDAnalysis.coordinates import base
from MDAnalysis.coordinates.base import SingleFrameReaderBase
from MDAnalysis.core.topology import Topology
from MDAnalysis.core.topologyattrs import (
Atomnames, Atomids, Atomtypes, Masses,
Resids, Resnums, Segids, Resnames, AltLocs,
ICodes, Occupancies, Tempfactors, Charges
)
class Stream:
"""
    Create an object that provides an MDAnalysis topology and a coordinate reader
>>> eos = MDA_ESP.Stream(system)
>>> u = mda.Universe(eos.topology,eos.trajectory)
Parameters
----------
system : :obj:`espressomd.system.System`
"""
def __init__(self, system):
self.topology = ESPParser(None, espresso=system).parse()
self.system = system
@property
def trajectory(self):
"""
Particles' coordinates at the current time
Returns
-------
stream : :class:`MDAnalysis.lib.util.NamedStream`
A stream in the format that can be parsed by :class:`ESPReader`
"""
# time
_xyz = str(self.system.time) + '\n'
# number of particles
_xyz += str(len(self.system.part)) + '\n'
# box edges
_xyz += str(self.system.box_l) + '\n'
# configuration
for _p in self.system.part:
_xyz += str(_p.pos) + '\n'
for _p in self.system.part:
_xyz += str(_p.v) + '\n'
for _p in self.system.part:
_xyz += str(_p.f) + '\n'
return NamedStream(StringIO(_xyz), "__.ESP")
class ESPParser(TopologyReaderBase):
"""
    An MDAnalysis reader for ESPResSo's topology
"""
format = 'ESP'
def __init__(self, filename, **kwargs):
self.kwargs = kwargs
def parse(self):
"""
Access ESPResSo data and return the topology object
Returns
-------
top : :class:`MDAnalysis.core.topology.Topology`
a topology object
"""
espresso = self.kwargs['espresso']
names = []
atomtypes = []
masses = []
charges = []
for p in espresso.part:
names.append("A" + repr(p.type))
atomtypes.append("T" + repr(p.type))
masses.append(p.mass)
charges.append(p.q)
natoms = len(espresso.part)
attrs = [Atomnames(np.array(names, dtype=object)),
Atomids(np.arange(natoms) + 1),
Atomtypes(np.array(atomtypes, dtype=object)),
Masses(masses),
Resids(np.array([1])),
Resnums(np.array([1])),
Segids(np.array(['System'], dtype=object)),
AltLocs(np.array([' '] * natoms, dtype=object)),
Resnames(np.array(['R'], dtype=object)),
Occupancies(np.zeros(natoms)),
Tempfactors(np.zeros(natoms)),
ICodes(np.array([' '], dtype=object)),
Charges(np.array(charges)),
]
top = Topology(natoms, 1, 1, attrs=attrs)
return top
class Timestep(base.Timestep):
_ts_order_x = [0, 3, 4]
_ts_order_y = [5, 1, 6]
_ts_order_z = [7, 8, 2]
def _init_unitcell(self):
return np.zeros(9, dtype=np.float32)
@property
def dimensions(self):
# This information now stored as _ts_order_x/y/z to keep DRY
x = self._unitcell[self._ts_order_x]
y = self._unitcell[self._ts_order_y]
z = self._unitcell[self._ts_order_z]
# this ordering is correct! (checked it, OB)
return triclinic_box(x, y, z)
    @dimensions.setter
    def dimensions(self, box):
        x, y, z = triclinic_vectors(box)
        np.put(self._unitcell, self._ts_order_x, x)
        np.put(self._unitcell, self._ts_order_y, y)
        np.put(self._unitcell, self._ts_order_z, z)
class ESPReader(SingleFrameReaderBase):
"""
An MDAnalysis single frame reader for the stream provided by Stream()
"""
format = 'ESP'
units = {'time': None, 'length': 'nm', 'velocity': 'nm/ps'}
_Timestep = Timestep
def _read_first_frame(self):
with util.openany(self.filename, 'rt') as espfile:
n_atoms = 1
for pos, line in enumerate(espfile, start=-3):
if (pos == -3):
time = float(line[1:-1])
elif(pos == -2):
n_atoms = int(line)
self.n_atoms = n_atoms
positions = np.zeros(
self.n_atoms * 3, dtype=np.float32).reshape(self.n_atoms, 3)
velocities = np.zeros(
self.n_atoms * 3, dtype=np.float32).reshape(self.n_atoms, 3)
forces = np.zeros(
self.n_atoms * 3, dtype=np.float32).reshape(self.n_atoms, 3)
self.ts = ts = self._Timestep(
self.n_atoms, **self._ts_kwargs)
self.ts.time = time
elif(pos == -1):
self.ts._unitcell[:3] = np.array(
list(map(float, line[1:-2].split())))
elif(pos < n_atoms):
positions[pos] = np.array(
list(map(float, line[1:-2].split())))
elif(pos < 2 * n_atoms):
velocities[pos - n_atoms] = np.array(
list(map(float, line[1:-2].split())))
else:
forces[pos - 2 * n_atoms] = np.array(
list(map(float, line[1:-2].split())))
ts.positions = np.copy(positions)
ts.velocities = np.copy(velocities)
ts.forces = np.copy(forces)
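The plain-text frame layout that `Stream.trajectory` emits — time, particle count, box edges, then one bracketed line per particle for positions, velocities, and forces — can be sketched without ESPResSo or MDAnalysis installed. The helper below is hypothetical and stdlib-only; the numeric values are made up:

```python
from io import StringIO

# Hedged sketch of the frame layout Stream.trajectory writes: a time
# line, a particle-count line, a box line, then per-particle vectors.
def make_frame(time, box, positions, velocities, forces):
    lines = [str(time), str(len(positions)), str(box)]
    lines += [str(p) for p in positions]
    lines += [str(v) for v in velocities]
    lines += [str(f) for f in forces]
    return StringIO("\n".join(lines) + "\n")

frame = make_frame(0.0, [10.0, 10.0, 10.0],
                   [[1.0, 2.0, 3.0]], [[0.0, 0.0, 0.0]], [[0.0, 0.0, 0.0]])
header = frame.getvalue().splitlines()[:2]
assert header == ["0.0", "1"]   # time line, then particle count
```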
| mkuron/espresso | src/python/espressomd/MDA_ESP/__init__.py | Python | gpl-3.0 | 7,651 | ["ESPResSo", "MDAnalysis"] | dd3cadf68e7eb00daf14fd6989ed7f47f5a91f324b14773532d4ffa8542d1423 |
"""
adaptive basin-hopping Markov-chain Monte Carlo for Bayesian optimisation
This is the python (v2.7) implementation of the hoppMCMC algorithm aiming to identify and sample from the high-probability regions of a posterior distribution. The algorithm combines three strategies: (i) parallel MCMC, (ii) adaptive Gibbs sampling and (iii) simulated annealing. Overall, hoppMCMC resembles the basin-hopping algorithm implemented in the optimize module of scipy, but it is developed for a wide range of modelling approaches including stochastic models with or without time-delay.
"""
__version__ = '1.2.1'
import os
import sys
import numpy
from struct import *
from scipy.stats import ttest_1samp as ttest
MPI_MASTER = 0
try:
from mpi4py import MPI
MPI_SIZE = MPI.COMM_WORLD.Get_size()
MPI_RANK = MPI.COMM_WORLD.Get_rank()
def Abort(str):
print("ERROR: "+str)
MPI.COMM_WORLD.Abort(1)
except ImportError:
MPI_SIZE = 1
MPI_RANK = 0
def Abort(str):
raise errorMCMC(str)
EPS_PULSE_VAR_MIN = 1e-12
EPS_VARMAT_MIN = 1e-7
EPS_VARMAT_MAX = 1e7
class errorMCMC(Exception):
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
class binfile():
def __init__(self,fname,mode,rowsize=1):
self.headsize = 4
self.bitsize = 8
self.fname = fname
self.mode = mode
self.rowsize = rowsize
if self.mode=='r':
try:
self.f = open(self.fname,"rb")
except IOError:
Abort("File not found: "+self.fname)
self.rowsize = unpack('<i',self.f.read(self.headsize))[0]
elif self.mode=='w':
try:
self.f = open(self.fname,"w+b")
except IOError:
Abort("File not found: "+self.fname)
tmp = self.f.read(self.headsize)
if not tmp:
self.f.write(pack('<i',self.rowsize))
else:
self.rowsize = unpack('<i',tmp)[0]
else:
Abort("Wrong i/o mode: "+self.mode)
self.fmt = '<'+'d'*self.rowsize
self.size = self.bitsize*self.rowsize
print("Success: %s opened for %s size %d double rows" %(self.fname,"reading" if self.mode=='r' else "writing",self.rowsize))
# ---
def writeRow(self,row):
self.f.write(pack(self.fmt,*row))
self.f.flush()
# ---
    def readRows(self):
        self.f.seek(0, os.SEEK_END)
        filesize = self.f.tell()
        self.f.seek(self.headsize)
        tmp = self.f.read(filesize - self.headsize)
        ndoubles = (filesize - self.headsize) // self.bitsize
        ret = numpy.array(unpack('<' + 'd' * ndoubles, tmp),
                          dtype=numpy.float64)
        return ret.reshape((ndoubles // self.rowsize, self.rowsize))
# ---
def close(self):
self.f.close()
print("Success: %s closed" %(self.fname if self.fname else "file"))
def readFile(filename):
"""
Reads a binary output file and returns all rows/columns
Parameters
----------
filename:
name of the output file
Returns
-------
a numpy array with all the rows/columns
"""
a=binfile(filename,"r")
b=a.readRows()
a.close()
return b
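The little-endian layout that `binfile` reads and writes — a 4-byte integer row-size header followed by rows of 8-byte doubles — can be illustrated with a stdlib-only round trip (a sketch, not part of the module):

```python
from struct import pack, unpack

# Hedged sketch of the binfile layout: '<i' row-size header, then
# packed rows of '<d' doubles (8 bytes each, little-endian).
rowsize = 3
blob = pack('<i', rowsize) + pack('<' + 'd' * rowsize, 1.0, 2.0, 3.0)

# The header and the row unpack back to the original values.
assert unpack('<i', blob[:4])[0] == 3
assert unpack('<ddd', blob[4:]) == (1.0, 2.0, 3.0)
```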
def parMin(parmat):
prm = min(parmat[:,0])
prmi = numpy.where(parmat[:,0]==prm)[0][0]
return {'i':prmi,'f':prm}
def diagdot(mat,vec):
for n in numpy.arange(mat.shape[0]):
mat[n,n] *= vec[n]
def covariance(mat):
return numpy.cov(mat,rowvar=False)
def coefVar(mat):
mean0 = 1.0/numpy.mean(mat,0)
return mean0*(numpy.cov(mat,rowvar=False).T*mean0).T
def sensVar(mat):
mean0 = numpy.mean(mat,0)
return mean0*(numpy.linalg.inv(numpy.cov(mat,rowvar=False)).T*mean0).T
cov = covariance
rnorm = numpy.random.multivariate_normal
determinant = numpy.linalg.det
def join(a,s):
return s.join(["%.16g" %(x) for x in a])
def logsumexp(x):
a = numpy.max(x)
return a+numpy.log(numpy.sum(numpy.exp(x-a)))
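The `logsumexp` helper above relies on the max-shift trick: subtracting the maximum before exponentiating keeps `exp()` from overflowing while leaving the result unchanged. A stdlib-only illustration (the function name here is hypothetical):

```python
import math

# Max-shift trick: log(sum(exp(x))) = a + log(sum(exp(x - a))), a = max(x).
def logsumexp_stable(xs):
    a = max(xs)
    return a + math.log(sum(math.exp(x - a) for x in xs))

# exp(1000.0) overflows a double, but the shifted form evaluates fine:
# logsumexp([1000, 1000]) = 1000 + log(2).
big = [1000.0, 1000.0]
assert abs(logsumexp_stable(big) - (1000.0 + math.log(2.0))) < 1e-9
```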
def compareAUC(parmat0,parmat1,T):
from scipy.stats import gaussian_kde
parmats = numpy.vstack((parmat0,parmat1))
parmat0cp = parmat0[:,numpy.var(parmats,axis=0)!=0].copy()
parmat1cp = parmat1[:,numpy.var(parmats,axis=0)!=0].copy()
parmats = parmats[:,numpy.var(parmats,axis=0)!=0]
try:
kde = gaussian_kde(parmats[:,1:].T)
wt1 = numpy.array([kde.evaluate(pr) for pr in parmat1cp[:, 1:]])
    except Exception:
print("Warning: Problem encountered in compareAUC!")
return {'acc':0, 'favg0':0, 'favg1':0}
try:
wt0 = numpy.array([kde.evaluate(pr) for pr in parmat0cp[:, 1:]])
    except Exception:
print("Warning: Problem encountered in compareAUC!")
return {'acc':1, 'favg0':0, 'favg1':0}
mn = numpy.min([wt0,wt1])
wt0 /= mn
wt1 /= mn
# Importance sampling for Monte Carlo integration:
favg0 = numpy.mean([numpy.exp(-parmat0cp[m,0]/T) / wt0[m] for m in range(parmat0cp.shape[0])])
favg1 = numpy.mean([numpy.exp(-parmat1cp[m,0]/T) / wt1[m] for m in range(parmat1cp.shape[0])])
acc = (not numpy.isnan(favg0) and not numpy.isnan(favg1) and
favg0 > 0 and favg1 >= 0 and
(favg1 >= favg0 or
(numpy.random.uniform() < (favg1/favg0))))
return {'acc':acc, 'favg0':favg0, 'favg1':favg1}
def anneal_exp(y0,y1,steps):
return y0*numpy.exp(-(numpy.arange(steps,dtype=numpy.float64)/(steps-1.0))*numpy.log(numpy.float64(y0)/y1))
def anneal_linear(y0,y1,steps):
return numpy.append(numpy.arange(y0,y1,(y1-y0)/(steps-1),dtype=numpy.float64),y1)
def anneal_sigma(y0,y1,steps):
return y0+(y1-y0)*(1.0-1.0/(1.0+numpy.exp(-(numpy.arange(steps)-(0.5*steps)))))
def anneal_sigmasoft(y0,y1,steps):
return y0+(y1-y0)*(1.0-1.0/(1.0+numpy.exp(-12.5*(numpy.arange(steps)-(0.5*steps))/steps)))
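The schedules above all interpolate from `y0` to `y1` over `steps` values. A stdlib-only sketch of the linear case (hypothetical helper; endpoints included, as in `anneal_linear`):

```python
# Linear annealing schedule: `steps` evenly spaced values from y0 to y1,
# with both endpoints included.
def linear_schedule(y0, y1, steps):
    step = (y1 - y0) / (steps - 1)
    return [y0 + i * step for i in range(steps)]

sched = linear_schedule(1.0, 1000.0, 5)
assert len(sched) == 5
assert sched[0] == 1.0 and sched[-1] == 1000.0
```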
def ldet(mat):
    try:
        det = determinant(mat)
    except numpy.linalg.LinAlgError:
        return -numpy.Inf
    if det == 0:
        return -numpy.Inf
    return numpy.log(det)
def finalTest(fitFun,param,testnum=10):
for n in range(testnum):
f = fitFun(param)
if not numpy.isnan(f) and not numpy.isinf(f):
return f
Abort("Incompatible parameter set (%g): %s" %(f,join(param,",")))
# --- Class: hoppMCMC
class hoppMCMC:
def __init__(self,
fitFun,
param,
varmat,
inferpar=None,
gibbs=True,
num_hopp=3,
num_adapt=25,
num_chain=12,
chain_length=50,
rangeT=None,
model_comp=1000.0,
outfilename=''):
"""
Adaptive Basin-Hopping MCMC Algorithm
Parameters
----------
fitFun:
fitFun(x) - objective function which takes a numpy array as the only argument
param:
initial parameter vector
varmat:
2-dimensional array of initial covariance matrix
inferpar:
an array of indexes of parameter dimensions to be inferred
(all parameters are inferred by default)
gibbs:
indicates the type of chain iteration
True - Gibbs iteration where each parameter dimension has its own univariate
Gaussian proposal distribution (default)
False - Metropolis-Hastings iteration where there is a single multivariate
Gaussian proposal distribution
num_hopp:
number of hopp-steps (default=3)
num_adapt:
number of adaptation steps (default=25)
num_chain:
number of MCMC chains (default=12)
chain_length:
size of each chain (default=50)
rangeT:
[min,max] - range of annealing temperatures for each hopp-step (default=[1,1000])
min should be as low as possible but not lower
max should be sufficiently permissive to be able to jump between posterior modes
model_comp:
tolerance for accepting subsequent hopp-steps (default=1000)
this should ideally be equal to or higher than max(rangeT)
outfilename:
name of the output file (default='')
use this option for a detailed account of the results
outfilename.final lists information on hopp-steps:
hopp-step
acceptance (0/1)
weighted average of exp{-f0/model_comp}
weighted average of exp{-f1/model_comp}
outfilename.parmat lists chain status at the end of each adaptation step:
adaptation step
chain id
annealing temperature
score (f)
parameter values
both files can be read using the readFile function
Returns
-------
A hoppMCMC object with
parmat: an array of num_chain x (1+len(param))
current score (f) and parameter values for each chain
varmat: an array of len(inferpar) x len(inferpar)
latest proposal distribution
parmats: a list of parameter values (i.e. parmat) accepted at the end of hopp-steps
See also
--------
the documentation (doc/hoppMCMC_manual.pdf) for more information and examples
"""
self.multi = {'cov': covariance,
'rnorm': numpy.random.multivariate_normal,
'det': numpy.linalg.det}
self.single = {'cov': covariance,
'rnorm': numpy.random.normal,
'det': abs}
self.stat = self.multi
self.gibbs = gibbs
self.num_hopp = num_hopp
self.num_adapt = num_adapt
self.num_chain = num_chain
self.chain_length = chain_length
self.rangeT = numpy.sort([1.0,1000.0] if rangeT is None else rangeT)
self.model_comp = model_comp
# ---
self.fitFun = fitFun
self.param = numpy.array(param,dtype=numpy.float64,ndmin=1)
f0 = finalTest(self.fitFun,self.param)
self.parmat = numpy.array([[f0]+self.param.tolist() for n in range(self.num_chain)],dtype=numpy.float64)
self.varmat = numpy.array(varmat,dtype=numpy.float64,ndmin=2)
self.parmats = []
# ---
if inferpar is None:
self.inferpar = numpy.arange(len(self.param),dtype=numpy.int32)
else:
self.inferpar = numpy.array(inferpar,dtype=numpy.int32)
print("Parameters to infer: %s" %(join(self.inferpar,",")))
# ---
self.rank_indices = [numpy.arange(i,self.num_chain,MPI_SIZE) for i in range(MPI_SIZE)]
self.worker_indices = numpy.delete(range(MPI_SIZE),MPI_MASTER)
# ---
self.outfilename = outfilename
self.outparmat = None
self.outfinal = None
if MPI_RANK==MPI_MASTER:
if self.outfilename:
self.outparmat = binfile(self.outfilename+'.parmat','w',self.parmat.shape[1]+3)
self.outfinal = binfile(self.outfilename+'.final','w',4)
# ---
for hopp_step in range(self.num_hopp):
self.anneal = anneal_sigmasoft(self.rangeT[0],self.rangeT[1],self.num_adapt)
for adapt_step in range(self.num_adapt):
self.runAdaptStep(hopp_step*self.num_adapt+adapt_step)
if MPI_RANK == MPI_MASTER:
test = {'acc':True, 'favg0':numpy.nan, 'favg1':numpy.nan} if len(self.parmats)==0 else compareAUC(self.parmats[-1][:,[0]+(1+self.inferpar).tolist()],self.parmat[:,[0]+(1+self.inferpar).tolist()],self.model_comp)
if test['acc']:
self.parmats.append(self.parmat)
else:
self.parmat = self.parmats[-1].copy()
self.param = self.parmat[parMin(self.parmat)['i'],1:].copy()
# ---
if self.outfinal:
self.outfinal.writeRow([hopp_step,test['acc'],test['favg0'],test['favg1']])
else:
print("parMatAcc.final: %d,%s" %(hopp_step,join([test['acc'],test['favg0'],test['favg1']],",")))
# ---
if MPI_SIZE>1:
self.parmat = MPI.COMM_WORLD.bcast(self.parmat, root=MPI_MASTER)
self.param = MPI.COMM_WORLD.bcast(self.param, root=MPI_MASTER)
# ---
if MPI_SIZE>1:
self.parmats = MPI.COMM_WORLD.bcast(self.parmats, root=MPI_MASTER)
if self.outparmat:
self.outparmat.close()
if self.outfinal:
self.outfinal.close()
def runAdaptStep(self,adapt_step):
if MPI_RANK == MPI_MASTER:
pm = parMin(self.parmat)
self.param = self.parmat[pm['i'],1:].copy()
self.parmat = numpy.array([self.parmat[pm['i'],:].tolist() for n in range(self.num_chain)],dtype=numpy.float64)
if MPI_SIZE>1:
self.param = MPI.COMM_WORLD.bcast(self.param, root=MPI_MASTER)
self.parmat = MPI.COMM_WORLD.bcast(self.parmat, root=MPI_MASTER)
# ---
for chain_id in self.rank_indices[MPI_RANK]:
# ---
mcmc = chainMCMC(self.fitFun,
self.param,
self.varmat,
gibbs=self.gibbs,
chain_id=chain_id,
pulsevar=1.0,
anneal=self.anneal[0],
accthr=0.5,
inferpar=self.inferpar,
varmat_change=0,
pulse_change=10,
pulse_change_ratio=2,
print_iter=0)
for m in range(self.chain_length):
mcmc.iterate()
self.parmat[chain_id,:] = mcmc.getParam()
# ---
if MPI_RANK == MPI_MASTER:
for worker in self.worker_indices:
parmat = MPI.COMM_WORLD.recv(source=worker, tag=1)
for chain_id in self.rank_indices[worker]:
self.parmat[chain_id,:] = parmat[chain_id,:]
# ---
self.varmat = numpy.array(self.stat['cov'](self.parmat[:,1+self.inferpar]),ndmin=2)
self.varmat[numpy.abs(self.varmat)<EPS_VARMAT_MIN] = 1.0
# ---
for chain_id in range(self.num_chain):
if self.outparmat:
tmp = [adapt_step,chain_id,self.anneal[0]]+self.parmat[chain_id,:].tolist()
self.outparmat.writeRow(tmp)
else:
print("param.mat.step: %d,%d,%g,%s" %(adapt_step,chain_id,self.anneal[0],join(self.parmat[chain_id,:],",")))
# ---
if len(self.anneal)>1: self.anneal = self.anneal[1:]
else:
MPI.COMM_WORLD.send(self.parmat, dest=MPI_MASTER, tag=1)
# ---
if MPI_SIZE>1:
self.parmat = MPI.COMM_WORLD.bcast(self.parmat, root=MPI_MASTER)
self.varmat = MPI.COMM_WORLD.bcast(self.varmat, root=MPI_MASTER)
self.anneal = MPI.COMM_WORLD.bcast(self.anneal, root=MPI_MASTER)
# ---
# --- Class: chainMCMC
class chainMCMC:
def __init__(self,
fitFun,
param,
varmat,
inferpar=None,
gibbs=True,
chain_id=0,
pulsevar=1.0,
anneal=1,
accthr=0.5,
varmat_change=0,
pulse_change=10,
pulse_change_ratio=2,
pulse_allow_decrease=True,
pulse_allow_increase=True,
pulse_min=1e-7,
pulse_max=1e7,
print_iter=0):
"""
MCMC Chain with Adaptive Proposal Distribution
Usage
-----
Once created, a chainMCMC is iterated using the iterate method.
Depending on the value of gibbs, this method calls either iterateMulti or iterateSingle.
Parameters
----------
fitFun:
fitFun(x) - objective function which takes a numpy array as the only argument
param:
initial parameter vector
varmat:
2-dimensional array of initial covariance matrix
inferpar:
an array of indexes of parameter dimensions to be inferred
(all parameters are inferred by default)
gibbs:
indicates the type of chain iteration
True - Gibbs iteration where each parameter dimension has its own univariate
Gaussian proposal distribution (default)
False - Metropolis-Hastings iteration where there is a single multivariate
Gaussian proposal distribution
chain_id:
a chain identifier (default=0)
pulsevar:
scaling factor for the variance of the proposal distribution (default=1)
anneal:
annealing temperature (default=1)
accthr:
desired acceptance rate (default=0.5)
varmat_change:
how often the variance should be updated (default=0)
varmat_change=0 - fixed variance
varmat_change=n - variance is updated at every nth step
pulse_change:
how often pulsevar should be updated (default=10)
pulse_change=0 - fixed pulsevar
pulse_change=n - pulsevar is updated at every nth step
pulse_change_ratio:
how should pulsevar be updated? (default=2)
(pulsevar *= pulse_change_ratio)
pulse_allow_increase:
allow pulse to increase (default=True)
pulse_allow_decrease:
allow pulse to decrease (default=True)
pulse_min:
minimum value of pulse (default=1e-7)
pulse_max:
maximum value of pulse (default=1e7)
print_iter:
how often the chain status should be printed (default=0)
print_iter=0 - do not print status
print_iter=n - print status at every nth step
Returns
-------
A chainMCMC object with
getParam: a method for obtaining the latest iteration (f + parameter values)
getVarmat: a method for obtaining the latest proposal distribution (varmat * pulsevar)
See also
--------
the documentation (doc/hoppMCMC_manual.pdf) for more information and examples
"""
self.multi = {'cov': covariance,
'rnorm': numpy.random.multivariate_normal,
'det': numpy.linalg.det}
self.single = {'cov': covariance,
'rnorm': numpy.random.normal,
'det': abs}
# ---
self.chain_id = chain_id
self.fitFun = fitFun
self.parmat = numpy.array(param,dtype=numpy.float64,ndmin=1)
self.varmat = numpy.array(varmat,dtype=numpy.float64,ndmin=2)
if inferpar is None:
self.inferpar = numpy.arange(len(param),dtype=numpy.int32)
else:
self.inferpar = numpy.array(inferpar,dtype=numpy.int32)
self.anneal = numpy.array(anneal,dtype=numpy.float64,ndmin=1)
self.accthr = numpy.array(accthr,dtype=numpy.float64,ndmin=1)
# ---
self.pulse_change_ratio = pulse_change_ratio
self.pulse_nochange = numpy.float64(1)
self.pulse_increase = numpy.float64(self.pulse_change_ratio)
self.pulse_decrease = numpy.float64(1.0/self.pulse_change_ratio)
self.pulse_change = pulse_change
self.pulse_collect = max(1,self.pulse_change)
self.allow_pincr = pulse_allow_increase
self.allow_pdecr = pulse_allow_decrease
self.pulse_min = pulse_min
self.pulse_max = pulse_max
self.varmat_change = varmat_change
self.varmat_collect = max(1,self.varmat_change)
# ---
if self.parmat.ndim==1:
f0 = finalTest(self.fitFun,self.parmat)
self.parmat = numpy.array([[f0]+self.parmat.tolist() for i in range(self.varmat_collect)])
elif self.parmat.shape[0]!=self.varmat_collect:
Abort("Dimension mismatch in chainMCMC! parmat.shape[0]=%d collect=%d" %(self.parmat.shape[0],self.varmat_collect))
if self.varmat.shape and self.inferpar.shape[0] != self.varmat.shape[0]:
Abort("Dimension mismatch in chainMCMC! inferpar.shape[0]=%d varmat.shape[0]=%d" %(self.inferpar.shape[0],self.varmat.shape[0]))
# ---
self.gibbs = gibbs
if self.gibbs:
self.pulsevar = numpy.array(numpy.repeat(pulsevar,len(self.inferpar)),dtype=numpy.float64)
self.acc_vecs = [numpy.repeat(False,self.pulse_collect) for n in range(len(self.inferpar))]
self.iterate = self.iterateSingle
else:
self.pulsevar = pulsevar
self.acc_vec = numpy.repeat(False,self.pulse_collect)
self.iterate = self.iterateMulti
self.pulsevar0 = self.pulsevar
if not self.gibbs and (self.parmat.shape[1]==2 or self.inferpar.shape[0]==1):
Abort("Please set gibbs=True!")
self.varmat = self.varmat*self.pulsevar
# ---
self.halfa = 0.025
if self.pulse_change<25:
self.halfa = 0.05
self.print_iter = print_iter
self.step = 0
self.index = 0
self.index_acc = 0
def getParam(self):
return self.parmat[self.index,:].copy()
def getVarmat(self):
return (self.varmat*self.pulsevar).copy()
def getVarPar(self):
return ldet(self.multi['cov'](self.parmat[:,1+self.inferpar]))
def getVarVar(self):
return ldet(self.varmat)
def getAcc(self):
if self.gibbs:
return numpy.array([numpy.mean(acc_vec) for acc_vec in self.acc_vecs])
else:
return numpy.mean(self.acc_vec)
def setParam(self,parmat):
self.parmat[self.index,:] = numpy.array(parmat,dtype=numpy.float64,ndmin=1).copy()
def newParamSingle(self,param,param_id):
try:
param1 = self.single['rnorm'](param,
self.varmat[param_id,param_id]*self.pulsevar[param_id])
except Exception:
print("Warning: Failed to generate a new parameter set")
param1 = numpy.copy(param)
return param1
def newParamMulti(self):
try:
param1 = self.multi['rnorm'](self.parmat[self.index,1:][self.inferpar],self.varmat*self.pulsevar)
except numpy.linalg.LinAlgError:
print("Warning: Failed to generate a new parameter set")
param1 = numpy.copy(self.parmat[self.index,1:][self.inferpar])
return param1
def checkMove(self,f0,f1):
acc = (not numpy.isnan(f1) and not numpy.isinf(f1) and
f1 >= 0 and
(f1 <= f0 or
(numpy.log(numpy.random.uniform()) < (f0-f1)/self.anneal[0])))
# --- f0 = 0.5*SS_0
# --- f = 0.5*SS
# --- sqrt(anneal) == st.dev.
# --- 0.5*x^2/(T*s^2)
# --- return exp(-0.5*SS/anneal)/exp(-0.5*SS_0/anneal)
return acc
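The acceptance test above is an annealed Metropolis criterion on a non-negative objective (f = 0.5*SS): downhill moves are always taken, uphill moves with probability exp((f0-f1)/anneal). A minimal standalone sketch of the same rule (function name and signature are illustrative, not part of hoppMCMC):

```python
import numpy as np

def metropolis_accept(f0, f1, anneal=1.0, rng=np.random.default_rng(0)):
    """Decide whether to move from objective value f0 to f1 at
    temperature `anneal`. Invalid or negative objectives are rejected,
    downhill moves (f1 <= f0) always accepted, uphill moves accepted
    with probability exp((f0 - f1) / anneal)."""
    if np.isnan(f1) or np.isinf(f1) or f1 < 0:
        return False
    if f1 <= f0:
        return True
    return bool(np.log(rng.uniform()) < (f0 - f1) / anneal)
```

At high temperature (large anneal) the uphill term (f0-f1)/anneal approaches zero, so almost any move is accepted; as anneal shrinks the chain becomes greedy.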
def pulsevarUpdate(self,acc_vec):
# --- Test if mean(acc_vec) is equal to accthr
try:
r = ttest(acc_vec,self.accthr)
except ZeroDivisionError:
if not any(acc_vec) and self.allow_pdecr:
return self.pulse_decrease
elif all(acc_vec) and self.allow_pincr:
return self.pulse_increase
else:
return self.pulse_nochange
# ---
if r[1]>=self.halfa: return self.pulse_nochange
if r[0]>0 and self.allow_pincr: return self.pulse_increase
if r[0]<0 and self.allow_pdecr: return self.pulse_decrease
# --- Return default
return self.pulse_nochange
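pulsevarUpdate compares the empirical acceptance rate to the target accthr via a one-sample t-test and returns a multiplicative factor. A simplified sketch of the same idea, using a fixed tolerance band in place of the t-test (names and the tolerance rule are illustrative):

```python
import numpy as np

def scale_from_acceptance(acc_vec, target=0.5, ratio=2.0, tol=0.1):
    """Illustrative pulse update: compare the empirical acceptance rate
    with the target and grow or shrink the proposal scale accordingly.
    hoppMCMC uses a t-test instead of a fixed tolerance band."""
    rate = float(np.mean(acc_vec))
    if rate > target + tol:   # accepting too often -> take bigger steps
        return ratio
    if rate < target - tol:   # accepting too rarely -> take smaller steps
        return 1.0 / ratio
    return 1.0                # close enough to the target -> no change
```

The multiplier is then applied to pulsevar, clamped to [pulse_min, pulse_max], exactly as in the iterate methods below.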
def iterateMulti(self):
self.step += 1
# ---
acc = False
f0 = self.parmat[self.index,0]
param1 = numpy.copy(self.parmat[self.index,1:])
param1[self.inferpar] = self.newParamMulti()
f1 = self.fitFun(param1)
acc = self.checkMove(f0,f1)
# ---
self.index_acc = (self.index_acc+1)%self.pulse_collect
if acc:
self.index = (self.index+1)%self.varmat_collect
# ---
self.acc_vec[self.index_acc] = acc
if acc:
self.parmat[self.index,0] = f1
self.parmat[self.index,1:] = param1
# ---
if self.print_iter and (self.step%self.print_iter)==0:
print("param.mat.chain: %d,%d,%s" %(self.step,self.chain_id,join(self.parmat[self.index,:],",")))
# ---
if self.step>1:
# ---
if self.pulse_change and (self.step%self.pulse_change)==0:
self.pulsevar = min(self.pulse_max,max(self.pulse_min,self.pulsevar*self.pulsevarUpdate(self.acc_vec)))
# ---
if self.varmat_change and (self.step%self.varmat_change)==0:
self.varmat = numpy.array(self.multi['cov'](self.parmat[:,1+self.inferpar]),ndmin=2)
a = numpy.diag(self.varmat)<EPS_VARMAT_MIN
self.varmat[a,a] = EPS_VARMAT_MIN
# ---
if self.print_iter and (self.step%self.print_iter)==0:
print("parMatAcc.chain: %s" %(join([self.step,self.chain_id,ldet(self.multi['cov'](self.parmat[:,1+self.inferpar])),ldet(self.varmat),numpy.mean(self.acc_vec),self.pulsevar],",")))
def iterateSingle(self):
self.step += 1
self.index_acc = (self.index_acc+1)%self.pulse_collect
# ---
acc_steps = False
f0 = self.parmat[self.index,0]
param0 = self.parmat[self.index,1:].copy()
for param_id in numpy.arange(len(self.inferpar)):
param1 = param0.copy()
param1[self.inferpar[param_id]] = self.newParamSingle(param1[self.inferpar[param_id]],param_id)
f1 = self.fitFun(param1)
acc = self.checkMove(f0,f1)
if acc:
acc_steps = True
f0 = f1
param0[self.inferpar[param_id]] = numpy.copy(param1[self.inferpar[param_id]])
self.acc_vecs[param_id][self.index_acc] = acc
# ---
if acc_steps:
self.index = (self.index+1)%self.varmat_collect
self.parmat[self.index,0] = f0
self.parmat[self.index,1:] = param0
if numpy.isnan(f0) or numpy.isinf(f0):
Abort("Iterate single failed with %g: %s" %(f0,join(param0,",")))
# ---
if self.print_iter and (self.step%self.print_iter)==0:
print("param.mat.chain: %d,%d,%s" %(self.step,self.chain_id,join(self.parmat[self.index,:],",")))
# ---
if self.step>1:
# ---
if self.pulse_change and (self.step%self.pulse_change)==0:
for param_id in numpy.arange(len(self.inferpar)):
tmp = min(self.pulse_max,max(self.pulse_min,self.pulsevar[param_id]*self.pulsevarUpdate(self.acc_vecs[param_id])))
if numpy.abs(self.varmat[param_id,param_id]*tmp) >= EPS_PULSE_VAR_MIN:
self.pulsevar[param_id] = tmp
# ---
if self.varmat_change and (self.step%self.varmat_change)==0:
for param_id in numpy.arange(len(self.inferpar)):
tmp = max(EPS_VARMAT_MIN,self.single['cov'](self.parmat[:,1+self.inferpar[param_id]]))
if numpy.abs(tmp*self.pulsevar[param_id]) >= EPS_PULSE_VAR_MIN:
self.varmat[param_id,param_id] = tmp
# ---
if self.print_iter and (self.step%self.print_iter)==0:
print("parMatAcc.chain: %s" %(join([self.step,self.chain_id,ldet(self.multi['cov'](self.parmat[:,1+self.inferpar])),ldet(self.varmat)],",")))
print("parMatAcc.chain.accs: %d,%d,%s" %(self.step,self.chain_id,join([numpy.mean(acc_vec) for acc_vec in self.acc_vecs],",")))
print("parMatAcc.chain.pulses: %d,%d,%s" %(self.step,self.chain_id,join(self.pulsevar,",")))
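The adaptive random-walk mechanism that chainMCMC implements (propose from a Gaussian, Metropolis-accept, periodically rescale the proposal from the acceptance rate) can be shown end-to-end in a much-simplified 1-D sketch. Everything here is illustrative: one dimension, fixed anneal=1, and a threshold rule instead of the t-test:

```python
import numpy as np

def adaptive_walk(f, x0, steps=2000, target=0.5, seed=0):
    """Random-walk minimizer with acceptance-driven step adaptation,
    in the spirit of chainMCMC (greatly simplified)."""
    rng = np.random.default_rng(seed)
    x, fx, scale = x0, f(x0), 1.0
    acc = []
    for _ in range(steps):
        x1 = x + rng.normal(0.0, scale)       # Gaussian proposal
        f1 = f(x1)
        ok = bool(f1 <= fx or np.log(rng.uniform()) < (fx - f1))
        if ok:
            x, fx = x1, f1
        acc.append(ok)
        if len(acc) == 50:                    # adapt every 50 iterations
            scale *= 2.0 if np.mean(acc) > target else 0.5
            acc = []
    return x, fx, scale

x, fx, scale = adaptive_walk(lambda v: 0.5 * v * v, 10.0)
```

Starting from x=10 (objective 50), the chain drifts toward the minimum at 0 while the step size settles near whatever keeps acceptance around the target.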
| kerguler/hoppMCMC | src/hoppMCMC/__init__.py | Python | gpl-3.0 | 28781 | ["Gaussian"] | a78dc11e45fa4462e8ce363a97b8ed0fdd7a45124563efce88d242a4963daa39 |
from collections import namedtuple
import themis.channel
# all in milliseconds
SpeedControlSpecs = namedtuple("SpeedControlSpecs", "rev_max rev_min rest fwd_min fwd_max frequency_hz")
# from WPILib HAL
TALON_SR = SpeedControlSpecs(0.989, 1.487, 1.513, 1.539, 2.037, 200.0)
JAGUAR = SpeedControlSpecs(0.697, 1.454, 1.507, 1.55, 2.31, 198.0)
VICTOR_OLD = SpeedControlSpecs(1.026, 1.49, 1.507, 1.525, 2.027, 100.0)
SERVO = SpeedControlSpecs(0.6, 1.6, 1.6, 1.6, 2.6, 50.0) # essentially just linear from 0.6 to 2.6
VICTOR_SP = SpeedControlSpecs(0.997, 1.48, 1.50, 1.52, 2.004, 200.0)
SPARK = SpeedControlSpecs(0.999, 1.46, 1.50, 1.55, 2.003, 200.0)
SD540 = SpeedControlSpecs(0.94, 1.44, 1.50, 1.55, 2.05, 200.0)
TALON_SRX = SpeedControlSpecs(0.997, 1.48, 1.50, 1.52, 2.004, 200.0)
def filter_to(spec: SpeedControlSpecs, out: themis.channel.FloatOutput) -> themis.channel.FloatOutput:
return out.filter("pwm_map", (), (spec.rev_max, spec.rev_min, spec.rest, spec.fwd_min, spec.fwd_max))
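The five calibration points of a SpeedControlSpecs define a piecewise-linear map from a speed in [-1, 1] to a pulse width in milliseconds, with a dead band collapsed to the rest value at zero. The actual `pwm_map` filter is implemented in the Themis runtime, so the following is a plausible sketch of its semantics, not the real implementation:

```python
def pwm_map(speed, rev_max, rev_min, rest, fwd_min, fwd_max):
    """Map speed in [-1, 1] to a pulse width in milliseconds: negative
    speeds interpolate between rev_min and rev_max (full reverse), zero
    maps to rest, positive speeds between fwd_min and fwd_max."""
    speed = max(-1.0, min(1.0, speed))   # clamp out-of-range commands
    if speed < 0:
        return rev_min + (-speed) * (rev_max - rev_min)
    if speed > 0:
        return fwd_min + speed * (fwd_max - fwd_min)
    return rest

# calibration values reproduced from TALON_SR above (frequency omitted)
TALON_SR_MS = (0.989, 1.487, 1.513, 1.539, 2.037)
```

Note that for these controllers rev_max < rev_min, so full reverse corresponds to the shortest pulse and full forward to the longest.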
| celskeggs/themis | themis/pwm.py | Python | mit | 992 | ["Jaguar"] | 273329880341d6c2a03c4aa77a62865e0ceecc6361b06d0aa9bdd6eec919e019 |
# -*- coding: utf-8 -*-
"""
The :mod:`sklearn.naive_bayes` module implements Naive Bayes algorithms. These
are supervised learning methods based on applying Bayes' theorem with strong
(naive) feature independence assumptions.
"""
# Author: Vincent Michel <vincent.michel@inria.fr>
# Minor fixes by Fabian Pedregosa
# Amit Aides <amitibo@tx.technion.ac.il>
# Yehuda Finkelstein <yehudaf@tx.technion.ac.il>
# Lars Buitinck
# Jan Hendrik Metzen <jhm@informatik.uni-bremen.de>
# (parts based on earlier work by Mathieu Blondel)
#
# License: BSD 3 clause
from abc import ABCMeta, abstractmethod
import numpy as np
from scipy.sparse import issparse
from .base import BaseEstimator, ClassifierMixin
from .preprocessing import binarize
from .preprocessing import LabelBinarizer
from .preprocessing import label_binarize
from .utils import check_X_y, check_array
from .utils.extmath import safe_sparse_dot, logsumexp
from .utils.multiclass import _check_partial_fit_first_call
from .utils.fixes import in1d
from .utils.validation import check_is_fitted
from .externals import six
__all__ = ['BernoulliNB', 'GaussianNB', 'MultinomialNB']
class BaseNB(six.with_metaclass(ABCMeta, BaseEstimator, ClassifierMixin)):
"""Abstract base class for naive Bayes estimators"""
@abstractmethod
def _joint_log_likelihood(self, X):
"""Compute the unnormalized posterior log probability of X
I.e. ``log P(c) + log P(x|c)`` for all rows x of X, as an array-like of
shape [n_samples, n_classes].
Input is passed to _joint_log_likelihood as-is by predict,
predict_proba and predict_log_proba.
"""
def predict(self, X):
"""
Perform classification on an array of test vectors X.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
C : array, shape = [n_samples]
Predicted target values for X
"""
jll = self._joint_log_likelihood(X)
return self.classes_[np.argmax(jll, axis=1)]
def predict_log_proba(self, X):
"""
Return log-probability estimates for the test vector X.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
C : array-like, shape = [n_samples, n_classes]
Returns the log-probability of the samples for each class in
the model. The columns correspond to the classes in sorted
order, as they appear in the attribute `classes_`.
"""
jll = self._joint_log_likelihood(X)
# normalize by P(x) = P(f_1, ..., f_n)
log_prob_x = logsumexp(jll, axis=1)
return jll - np.atleast_2d(log_prob_x).T
def predict_proba(self, X):
"""
Return probability estimates for the test vector X.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
C : array-like, shape = [n_samples, n_classes]
Returns the probability of the samples for each class in
the model. The columns correspond to the classes in sorted
order, as they appear in the attribute `classes_`.
"""
return np.exp(self.predict_log_proba(X))
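predict_log_proba above normalizes the joint log likelihoods by log P(x) using log-sum-exp, which avoids underflow when the per-class log likelihoods are very negative. The step can be reproduced directly on toy numbers (a small numeric sketch, independent of sklearn):

```python
import numpy as np
from scipy.special import logsumexp  # modern home of the old sklearn helper

# unnormalized log P(c) + log P(x|c), shape (n_samples, n_classes)
jll = np.array([[-1.0, -2.0],
                [-3.0, -0.5]])
log_prob_x = logsumexp(jll, axis=1)      # log P(x) per sample
log_proba = jll - log_prob_x[:, None]    # normalized log posteriors
proba = np.exp(log_proba)                # each row now sums to 1
```

Working in log space until the final exponentiation is what makes predict_proba numerically safe for high-dimensional inputs.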
class GaussianNB(BaseNB):
"""
Gaussian Naive Bayes (GaussianNB)
Can perform online updates to model parameters via `partial_fit` method.
For details on algorithm used to update feature means and variance online,
see Stanford CS tech report STAN-CS-79-773 by Chan, Golub, and LeVeque:
http://i.stanford.edu/pub/cstr/reports/cs/tr/79/773/CS-TR-79-773.pdf
Read more in the :ref:`User Guide <gaussian_naive_bayes>`.
Parameters
----------
priors : array-like, shape (n_classes,)
Prior probabilities of the classes. If specified the priors are not
adjusted according to the data.
Attributes
----------
class_prior_ : array, shape (n_classes,)
probability of each class.
class_count_ : array, shape (n_classes,)
number of training samples observed in each class.
theta_ : array, shape (n_classes, n_features)
mean of each feature per class
sigma_ : array, shape (n_classes, n_features)
variance of each feature per class
Examples
--------
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> Y = np.array([1, 1, 1, 2, 2, 2])
>>> from sklearn.naive_bayes import GaussianNB
>>> clf = GaussianNB()
>>> clf.fit(X, Y)
GaussianNB(priors=None)
>>> print(clf.predict([[-0.8, -1]]))
[1]
>>> clf_pf = GaussianNB()
>>> clf_pf.partial_fit(X, Y, np.unique(Y))
GaussianNB(priors=None)
>>> print(clf_pf.predict([[-0.8, -1]]))
[1]
"""
def __init__(self, priors=None):
self.priors = priors
def fit(self, X, y, sample_weight=None):
"""Fit Gaussian Naive Bayes according to X, y
Parameters
----------
X : array-like, shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape (n_samples,)
Target values.
sample_weight : array-like, shape (n_samples,), optional (default=None)
Weights applied to individual samples (1. for unweighted).
.. versionadded:: 0.17
Gaussian Naive Bayes supports fitting with *sample_weight*.
Returns
-------
self : object
Returns self.
"""
X, y = check_X_y(X, y)
return self._partial_fit(X, y, np.unique(y), _refit=True,
sample_weight=sample_weight)
@staticmethod
def _update_mean_variance(n_past, mu, var, X, sample_weight=None):
"""Compute online update of Gaussian mean and variance.
Given starting sample count, mean, and variance, a new set of
points X, and optionally sample weights, return the updated mean and
variance. (NB - each dimension (column) in X is treated as independent
-- you get variance, not covariance).
Can take scalar mean and variance, or vector mean and variance to
simultaneously update a number of independent Gaussians.
See Stanford CS tech report STAN-CS-79-773 by Chan, Golub, and LeVeque:
http://i.stanford.edu/pub/cstr/reports/cs/tr/79/773/CS-TR-79-773.pdf
Parameters
----------
n_past : int
Number of samples represented in old mean and variance. If sample
weights were given, this should contain the sum of sample
weights represented in old mean and variance.
mu : array-like, shape (number of Gaussians,)
Means for Gaussians in original set.
var : array-like, shape (number of Gaussians,)
Variances for Gaussians in original set.
X : array-like, shape (n_samples, n_features)
New data points to incorporate into the mean and variance estimates.
sample_weight : array-like, shape (n_samples,), optional (default=None)
Weights applied to individual samples (1. for unweighted).
Returns
-------
total_mu : array-like, shape (number of Gaussians,)
Updated mean for each Gaussian over the combined set.
total_var : array-like, shape (number of Gaussians,)
Updated variance for each Gaussian over the combined set.
"""
if X.shape[0] == 0:
return mu, var
# Compute (potentially weighted) mean and variance of new datapoints
if sample_weight is not None:
n_new = float(sample_weight.sum())
new_mu = np.average(X, axis=0, weights=sample_weight / n_new)
new_var = np.average((X - new_mu) ** 2, axis=0,
weights=sample_weight / n_new)
else:
n_new = X.shape[0]
new_var = np.var(X, axis=0)
new_mu = np.mean(X, axis=0)
if n_past == 0:
return new_mu, new_var
n_total = float(n_past + n_new)
# Combine mean of old and new data, taking into consideration
# (weighted) number of observations
total_mu = (n_new * new_mu + n_past * mu) / n_total
# Combine variance of old and new data, taking into consideration
# (weighted) number of observations. This is achieved by combining
# the sum-of-squared-differences (ssd)
old_ssd = n_past * var
new_ssd = n_new * new_var
total_ssd = (old_ssd + new_ssd +
(n_past / float(n_new * n_total)) *
(n_new * mu - n_new * new_mu) ** 2)
total_var = total_ssd / n_total
return total_mu, total_var
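The Chan–Golub–LeVeque combination used above is exact: merging two (count, mean, variance) summaries reproduces the batch statistics of the pooled data. A minimal unweighted sketch, checked against a direct batch computation:

```python
import numpy as np

def combine(n_a, mu_a, var_a, n_b, mu_b, var_b):
    """Merge two (count, mean, variance) summaries exactly by
    combining their sums of squared differences (ssd)."""
    n = n_a + n_b
    mu = (n_a * mu_a + n_b * mu_b) / n
    ssd = n_a * var_a + n_b * var_b + (n_a * n_b / n) * (mu_a - mu_b) ** 2
    return n, mu, ssd / n

a = np.array([1.0, 2.0, 3.0])
b = np.array([10.0, 12.0])
n, mu, var = combine(len(a), a.mean(), a.var(), len(b), b.mean(), b.var())
```

This is why partial_fit can stream chunks of any size and still end up with the same theta_ and sigma_ as a single full fit (up to the epsilon regularization).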
def partial_fit(self, X, y, classes=None, sample_weight=None):
"""Incremental fit on a batch of samples.
This method is expected to be called several times consecutively
on different chunks of a dataset so as to implement out-of-core
or online learning.
This is especially useful when the whole dataset is too big to fit in
memory at once.
This method has some performance and numerical stability overhead,
hence it is better to call partial_fit on chunks of data that are
as large as possible (as long as fitting in the memory budget) to
hide the overhead.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples,)
Target values.
classes : array-like, shape (n_classes,), optional (default=None)
List of all the classes that can possibly appear in the y vector.
Must be provided at the first call to partial_fit, can be omitted
in subsequent calls.
sample_weight : array-like, shape (n_samples,), optional (default=None)
Weights applied to individual samples (1. for unweighted).
.. versionadded:: 0.17
Returns
-------
self : object
Returns self.
"""
return self._partial_fit(X, y, classes, _refit=False,
sample_weight=sample_weight)
def _partial_fit(self, X, y, classes=None, _refit=False,
sample_weight=None):
"""Actual implementation of Gaussian NB fitting.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples,)
Target values.
classes : array-like, shape (n_classes,), optional (default=None)
List of all the classes that can possibly appear in the y vector.
Must be provided at the first call to partial_fit, can be omitted
in subsequent calls.
_refit: bool, optional (default=False)
If true, act as though this were the first time we called
_partial_fit (ie, throw away any past fitting and start over).
sample_weight : array-like, shape (n_samples,), optional (default=None)
Weights applied to individual samples (1. for unweighted).
Returns
-------
self : object
Returns self.
"""
X, y = check_X_y(X, y)
# If the ratio of data variance between dimensions is too small, it
# will cause numerical errors. To address this, we artificially
# boost the variance by epsilon, a small fraction of the standard
# deviation of the largest dimension.
epsilon = 1e-9 * np.var(X, axis=0).max()
if _refit:
self.classes_ = None
if _check_partial_fit_first_call(self, classes):
# This is the first call to partial_fit:
# initialize various cumulative counters
n_features = X.shape[1]
n_classes = len(self.classes_)
self.theta_ = np.zeros((n_classes, n_features))
self.sigma_ = np.zeros((n_classes, n_features))
self.class_count_ = np.zeros(n_classes, dtype=np.float64)
# Initialise the class prior
n_classes = len(self.classes_)
# Take into account the priors
if self.priors is not None:
priors = np.asarray(self.priors)
# Check that the provided priors match the number of classes
if len(priors) != n_classes:
raise ValueError('Number of priors must match number of'
' classes.')
# Check that the sum is 1
if not np.isclose(priors.sum(), 1.0):
raise ValueError('The sum of the priors should be 1.')
# Check that the prior are non-negative
if (priors < 0).any():
raise ValueError('Priors must be non-negative.')
self.class_prior_ = priors
else:
# Initialize the priors to zeros for each class
self.class_prior_ = np.zeros(len(self.classes_),
dtype=np.float64)
else:
if X.shape[1] != self.theta_.shape[1]:
msg = "Number of features %d does not match previous data %d."
raise ValueError(msg % (X.shape[1], self.theta_.shape[1]))
# Put epsilon back in each time
self.sigma_[:, :] -= epsilon
classes = self.classes_
unique_y = np.unique(y)
unique_y_in_classes = in1d(unique_y, classes)
if not np.all(unique_y_in_classes):
raise ValueError("The target label(s) %s in y do not exist in the "
"initial classes %s" %
(unique_y[~unique_y_in_classes], classes))
for y_i in unique_y:
i = classes.searchsorted(y_i)
X_i = X[y == y_i, :]
if sample_weight is not None:
sw_i = sample_weight[y == y_i]
N_i = sw_i.sum()
else:
sw_i = None
N_i = X_i.shape[0]
new_theta, new_sigma = self._update_mean_variance(
self.class_count_[i], self.theta_[i, :], self.sigma_[i, :],
X_i, sw_i)
self.theta_[i, :] = new_theta
self.sigma_[i, :] = new_sigma
self.class_count_[i] += N_i
self.sigma_[:, :] += epsilon
# Update only if no priors are provided
if self.priors is None:
# Empirical prior, with sample_weight taken into account
self.class_prior_ = self.class_count_ / self.class_count_.sum()
return self
def _joint_log_likelihood(self, X):
check_is_fitted(self, "classes_")
X = check_array(X)
joint_log_likelihood = []
for i in range(np.size(self.classes_)):
jointi = np.log(self.class_prior_[i])
n_ij = - 0.5 * np.sum(np.log(2. * np.pi * self.sigma_[i, :]))
n_ij -= 0.5 * np.sum(((X - self.theta_[i, :]) ** 2) /
(self.sigma_[i, :]), 1)
joint_log_likelihood.append(jointi + n_ij)
joint_log_likelihood = np.array(joint_log_likelihood).T
return joint_log_likelihood
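Each term n_ij in the method above is simply the sum of independent univariate Gaussian log densities over the features. This can be verified against scipy's reference implementation on illustrative values (note scipy's `scale` is a standard deviation, while sigma_ holds variances):

```python
import numpy as np
from scipy.stats import norm

theta = np.array([0.0, 1.0])   # per-feature means for one class
sigma = np.array([1.0, 4.0])   # per-feature variances
x = np.array([0.5, -1.0])      # one sample

# sklearn-style: -0.5*sum(log(2*pi*var)) - 0.5*sum((x-mu)^2/var)
n_ij = (-0.5 * np.sum(np.log(2.0 * np.pi * sigma))
        - 0.5 * np.sum((x - theta) ** 2 / sigma))

# reference: sum of univariate normal log-pdfs
ref = norm.logpdf(x, loc=theta, scale=np.sqrt(sigma)).sum()
```

Adding log(class_prior_) to this density term gives one column of the joint log likelihood matrix.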
class BaseDiscreteNB(BaseNB):
"""Abstract base class for naive Bayes on discrete/categorical data
Any estimator based on this class should provide:
__init__
_joint_log_likelihood(X) as per BaseNB
"""
def _update_class_log_prior(self, class_prior=None):
n_classes = len(self.classes_)
if class_prior is not None:
if len(class_prior) != n_classes:
raise ValueError("Number of priors must match number of"
" classes.")
self.class_log_prior_ = np.log(class_prior)
elif self.fit_prior:
# empirical prior, with sample_weight taken into account
self.class_log_prior_ = (np.log(self.class_count_) -
np.log(self.class_count_.sum()))
else:
self.class_log_prior_ = np.zeros(n_classes) - np.log(n_classes)
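The branches of `_update_class_log_prior` can be exercised with toy counts: with fit_prior=True the log prior comes from class frequencies, with fit_prior=False it is uniform over the classes. A small numeric sketch:

```python
import numpy as np

class_count = np.array([30.0, 70.0])   # hypothetical per-class counts

# fit_prior=True: empirical log prior from class frequencies
empirical = np.log(class_count) - np.log(class_count.sum())

# fit_prior=False: uniform log prior over n_classes
uniform = np.zeros(2) - np.log(2)
```

Computing log(count) - log(total) rather than log(count/total) keeps the arithmetic in log space, which matters for very imbalanced counts.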
def partial_fit(self, X, y, classes=None, sample_weight=None):
"""Incremental fit on a batch of samples.
This method is expected to be called several times consecutively
on different chunks of a dataset so as to implement out-of-core
or online learning.
This is especially useful when the whole dataset is too big to fit in
memory at once.
This method has some performance overhead hence it is better to call
partial_fit on chunks of data that are as large as possible
(as long as fitting in the memory budget) to hide the overhead.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
classes : array-like, shape = [n_classes], optional (default=None)
List of all the classes that can possibly appear in the y vector.
Must be provided at the first call to partial_fit, can be omitted
in subsequent calls.
sample_weight : array-like, shape = [n_samples], optional (default=None)
Weights applied to individual samples (1. for unweighted).
Returns
-------
self : object
Returns self.
"""
X = check_array(X, accept_sparse='csr', dtype=np.float64)
_, n_features = X.shape
if _check_partial_fit_first_call(self, classes):
# This is the first call to partial_fit:
# initialize various cumulative counters
n_effective_classes = len(classes) if len(classes) > 1 else 2
self.class_count_ = np.zeros(n_effective_classes, dtype=np.float64)
self.feature_count_ = np.zeros((n_effective_classes, n_features),
dtype=np.float64)
elif n_features != self.coef_.shape[1]:
msg = "Number of features %d does not match previous data %d."
raise ValueError(msg % (n_features, self.coef_.shape[-1]))
Y = label_binarize(y, classes=self.classes_)
if Y.shape[1] == 1:
Y = np.concatenate((1 - Y, Y), axis=1)
n_samples, n_classes = Y.shape
if X.shape[0] != Y.shape[0]:
msg = "X.shape[0]=%d and y.shape[0]=%d are incompatible."
raise ValueError(msg % (X.shape[0], y.shape[0]))
# label_binarize() returns arrays with dtype=np.int64.
# We convert it to np.float64 to support sample_weight consistently
Y = Y.astype(np.float64)
if sample_weight is not None:
sample_weight = np.atleast_2d(sample_weight)
Y *= check_array(sample_weight).T
class_prior = self.class_prior
# Count raw events from data before updating the class log prior
# and feature log probas
self._count(X, Y)
# XXX: OPTIM: we could introduce a public finalization method to
# be called by the user explicitly just once after several consecutive
# calls to partial_fit and prior any call to predict[_[log_]proba]
# to avoid computing the smooth log probas at each call to partial fit
self._update_feature_log_prob()
self._update_class_log_prior(class_prior=class_prior)
return self
def fit(self, X, y, sample_weight=None):
"""Fit Naive Bayes classifier according to X, y
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
sample_weight : array-like, shape = [n_samples], optional (default=None)
Weights applied to individual samples (1. for unweighted).
Returns
-------
self : object
Returns self.
"""
X, y = check_X_y(X, y, 'csr')
_, n_features = X.shape
labelbin = LabelBinarizer()
Y = labelbin.fit_transform(y)
self.classes_ = labelbin.classes_
if Y.shape[1] == 1:
Y = np.concatenate((1 - Y, Y), axis=1)
# LabelBinarizer().fit_transform() returns arrays with dtype=np.int64.
# We convert it to np.float64 to support sample_weight consistently;
# this means we also don't have to cast X to floating point
Y = Y.astype(np.float64)
if sample_weight is not None:
sample_weight = np.atleast_2d(sample_weight)
Y *= check_array(sample_weight).T
class_prior = self.class_prior
# Count raw events from data before updating the class log prior
# and feature log probas
n_effective_classes = Y.shape[1]
self.class_count_ = np.zeros(n_effective_classes, dtype=np.float64)
self.feature_count_ = np.zeros((n_effective_classes, n_features),
dtype=np.float64)
self._count(X, Y)
self._update_feature_log_prob()
self._update_class_log_prior(class_prior=class_prior)
return self
# XXX The following is a stopgap measure; we need to set the dimensions
# of class_log_prior_ and feature_log_prob_ correctly.
def _get_coef(self):
return (self.feature_log_prob_[1:]
if len(self.classes_) == 2 else self.feature_log_prob_)
def _get_intercept(self):
return (self.class_log_prior_[1:]
if len(self.classes_) == 2 else self.class_log_prior_)
coef_ = property(_get_coef)
intercept_ = property(_get_intercept)
class MultinomialNB(BaseDiscreteNB):
"""
Naive Bayes classifier for multinomial models
The multinomial Naive Bayes classifier is suitable for classification with
discrete features (e.g., word counts for text classification). The
multinomial distribution normally requires integer feature counts. However,
in practice, fractional counts such as tf-idf may also work.
Read more in the :ref:`User Guide <multinomial_naive_bayes>`.
Parameters
----------
alpha : float, optional (default=1.0)
Additive (Laplace/Lidstone) smoothing parameter
(0 for no smoothing).
fit_prior : boolean, optional (default=True)
Whether to learn class prior probabilities or not.
If false, a uniform prior will be used.
class_prior : array-like, size (n_classes,), optional (default=None)
Prior probabilities of the classes. If specified the priors are not
adjusted according to the data.
Attributes
----------
class_log_prior_ : array, shape (n_classes, )
Smoothed empirical log probability for each class.
intercept_ : property
Mirrors ``class_log_prior_`` for interpreting MultinomialNB
as a linear model.
feature_log_prob_ : array, shape (n_classes, n_features)
Empirical log probability of features
given a class, ``P(x_i|y)``.
coef_ : property
Mirrors ``feature_log_prob_`` for interpreting MultinomialNB
as a linear model.
class_count_ : array, shape (n_classes,)
Number of samples encountered for each class during fitting. This
value is weighted by the sample weight when provided.
feature_count_ : array, shape (n_classes, n_features)
Number of samples encountered for each (class, feature)
during fitting. This value is weighted by the sample weight when
provided.
Examples
--------
>>> import numpy as np
>>> X = np.random.randint(5, size=(6, 100))
>>> y = np.array([1, 2, 3, 4, 5, 6])
>>> from sklearn.naive_bayes import MultinomialNB
>>> clf = MultinomialNB()
>>> clf.fit(X, y)
MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)
>>> print(clf.predict(X[2:3]))
[3]
Notes
-----
For the rationale behind the names `coef_` and `intercept_`, i.e.
naive Bayes as a linear classifier, see J. Rennie et al. (2003),
Tackling the poor assumptions of naive Bayes text classifiers, ICML.
References
----------
C.D. Manning, P. Raghavan and H. Schuetze (2008). Introduction to
Information Retrieval. Cambridge University Press, pp. 234-265.
http://nlp.stanford.edu/IR-book/html/htmledition/naive-bayes-text-classification-1.html
"""
def __init__(self, alpha=1.0, fit_prior=True, class_prior=None):
self.alpha = alpha
self.fit_prior = fit_prior
self.class_prior = class_prior
def _count(self, X, Y):
"""Count and smooth feature occurrences."""
if np.any((X.data if issparse(X) else X) < 0):
raise ValueError("Input X must be non-negative")
self.feature_count_ += safe_sparse_dot(Y.T, X)
self.class_count_ += Y.sum(axis=0)
def _update_feature_log_prob(self):
"""Apply smoothing to raw counts and recompute log probabilities"""
smoothed_fc = self.feature_count_ + self.alpha
smoothed_cc = smoothed_fc.sum(axis=1)
self.feature_log_prob_ = (np.log(smoothed_fc) -
np.log(smoothed_cc.reshape(-1, 1)))
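The smoothing step above turns raw per-class feature counts into proper conditional log probabilities: alpha is added to every cell, each row is renormalized by its smoothed total, so no feature ever has probability zero. A numeric sketch with alpha=1 (Laplace smoothing) on made-up counts:

```python
import numpy as np

alpha = 1.0
feature_count = np.array([[2.0, 0.0, 1.0],    # counts for class 0
                          [0.0, 3.0, 0.0]])   # counts for class 1
smoothed_fc = feature_count + alpha
smoothed_cc = smoothed_fc.sum(axis=1)         # smoothed per-class totals
feature_log_prob = np.log(smoothed_fc) - np.log(smoothed_cc.reshape(-1, 1))
```

For class 0 the counts (2, 0, 1) become (3, 1, 2) over a total of 6, i.e. probabilities (1/2, 1/6, 1/3); the zero-count feature survives with small but nonzero mass.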
def _joint_log_likelihood(self, X):
"""Calculate the posterior log probability of the samples X"""
check_is_fitted(self, "classes_")
X = check_array(X, accept_sparse='csr')
return (safe_sparse_dot(X, self.feature_log_prob_.T) +
self.class_log_prior_)
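The smoothed feature log-probabilities computed by `_update_feature_log_prob` above can be reproduced by hand. A minimal standalone sketch (made-up counts, pure stdlib; not sklearn's actual arrays):

```python
import math

# Hypothetical raw counts for 2 classes x 3 features.
feature_count = [[3.0, 0.0, 1.0],
                 [1.0, 2.0, 2.0]]
alpha = 1.0  # Laplace smoothing, as in MultinomialNB(alpha=1.0)

feature_log_prob = []
for row in feature_count:
    smoothed = [c + alpha for c in row]      # smoothed_fc
    total = sum(smoothed)                    # smoothed_cc for this class
    feature_log_prob.append([math.log(c) - math.log(total) for c in smoothed])

# Each row exponentiates back to a probability distribution summing to 1.
for row in feature_log_prob:
    assert abs(sum(math.exp(v) for v in row) - 1.0) < 1e-12
```

Smoothing with a nonzero `alpha` is what keeps `math.log` finite for features never seen in a class (the zero count above).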
class BernoulliNB(BaseDiscreteNB):
"""Naive Bayes classifier for multivariate Bernoulli models.
Like MultinomialNB, this classifier is suitable for discrete data. The
difference is that while MultinomialNB works with occurrence counts,
BernoulliNB is designed for binary/boolean features.
Read more in the :ref:`User Guide <bernoulli_naive_bayes>`.
Parameters
----------
alpha : float, optional (default=1.0)
Additive (Laplace/Lidstone) smoothing parameter
(0 for no smoothing).
binarize : float or None, optional (default=0.0)
Threshold for binarizing (mapping to booleans) of sample features.
If None, input is presumed to already consist of binary vectors.
fit_prior : boolean, optional (default=True)
Whether to learn class prior probabilities or not.
If false, a uniform prior will be used.
class_prior : array-like, size=[n_classes,], optional (default=None)
Prior probabilities of the classes. If specified the priors are not
adjusted according to the data.
Attributes
----------
class_log_prior_ : array, shape = [n_classes]
Log probability of each class (smoothed).
feature_log_prob_ : array, shape = [n_classes, n_features]
Empirical log probability of features given a class, P(x_i|y).
class_count_ : array, shape = [n_classes]
Number of samples encountered for each class during fitting. This
value is weighted by the sample weight when provided.
feature_count_ : array, shape = [n_classes, n_features]
Number of samples encountered for each (class, feature)
during fitting. This value is weighted by the sample weight when
provided.
Examples
--------
>>> import numpy as np
>>> X = np.random.randint(2, size=(6, 100))
>>> Y = np.array([1, 2, 3, 4, 4, 5])
>>> from sklearn.naive_bayes import BernoulliNB
>>> clf = BernoulliNB()
>>> clf.fit(X, Y)
BernoulliNB(alpha=1.0, binarize=0.0, class_prior=None, fit_prior=True)
>>> print(clf.predict(X[2:3]))
[3]
References
----------
C.D. Manning, P. Raghavan and H. Schuetze (2008). Introduction to
Information Retrieval. Cambridge University Press, pp. 234-265.
http://nlp.stanford.edu/IR-book/html/htmledition/the-bernoulli-model-1.html
A. McCallum and K. Nigam (1998). A comparison of event models for naive
Bayes text classification. Proc. AAAI/ICML-98 Workshop on Learning for
Text Categorization, pp. 41-48.
V. Metsis, I. Androutsopoulos and G. Paliouras (2006). Spam filtering with
naive Bayes -- Which naive Bayes? 3rd Conf. on Email and Anti-Spam (CEAS).
"""
def __init__(self, alpha=1.0, binarize=.0, fit_prior=True,
class_prior=None):
self.alpha = alpha
self.binarize = binarize
self.fit_prior = fit_prior
self.class_prior = class_prior
def _count(self, X, Y):
"""Count and smooth feature occurrences."""
if self.binarize is not None:
X = binarize(X, threshold=self.binarize)
self.feature_count_ += safe_sparse_dot(Y.T, X)
self.class_count_ += Y.sum(axis=0)
def _update_feature_log_prob(self):
"""Apply smoothing to raw counts and recompute log probabilities"""
smoothed_fc = self.feature_count_ + self.alpha
smoothed_cc = self.class_count_ + self.alpha * 2
self.feature_log_prob_ = (np.log(smoothed_fc) -
np.log(smoothed_cc.reshape(-1, 1)))
def _joint_log_likelihood(self, X):
"""Calculate the posterior log probability of the samples X"""
check_is_fitted(self, "classes_")
X = check_array(X, accept_sparse='csr')
if self.binarize is not None:
X = binarize(X, threshold=self.binarize)
n_classes, n_features = self.feature_log_prob_.shape
n_samples, n_features_X = X.shape
if n_features_X != n_features:
raise ValueError("Expected input with %d features, got %d instead"
% (n_features, n_features_X))
neg_prob = np.log(1 - np.exp(self.feature_log_prob_))
# Compute neg_prob · (1 - X).T as ∑neg_prob - X · neg_prob
jll = safe_sparse_dot(X, (self.feature_log_prob_ - neg_prob).T)
jll += self.class_log_prior_ + neg_prob.sum(axis=1)
return jll
| toastedcornflakes/scikit-learn | sklearn/naive_bayes.py | Python | bsd-3-clause | 30,634 | ["Gaussian"] | 7a8d7b7b4ee32ec38f712c55e73d63b3cd3e7e7e32f040873f6194aa91223884 |
# unboxer.py
from . import drg
import sys
def generate_from_referent(drg, ref, surface, complete=False, generic=False):
#sys.stderr.write("generate from %s\n" % (ref))
in_edges_dict = dict()
for edge in drg.in_edges(ref):
in_edges_dict[edge.token_index] = edge
in_edges = []
    for key in sorted(in_edges_dict):  # dict.iterkeys() is Python 2 only
in_edges.append(in_edges_dict[key])
for edge in in_edges:
for t in edge.tokens:
            if t not in surface:
surface.append(t)
if edge.edge_type == "int":
if complete:
# recursively generate complete surface forms
generate_from_referent(drg, drg.out_edges(edge.from_node, edge_type="ext")[0].to_node, surface)
else:
# generate incomplete surface forms
if generic:
surface.append("*")
else:
surface.append(drg.out_edges(edge.from_node, edge_type="ext")[0].to_node)
# search for embedded DRSs
for edge in in_edges:
if edge.edge_type == "int":
potential_embed = drg.out_edges(edge.from_node, edge_type="ext")[0].to_node
# if it has an event
#if len(drg.out_edges(potential_embed, edge_type="event")) > 0:
# generate_from_referent(drg, drg.out_edges(drg.out_edges(potential_embed, edge_type="event")[0].to_node, edge_type="instance")[0].to_node, surface)
def generate_from_relation(drg, int_ref, ext_ref, generic=False):
#sys.stderr.write("generate from %s\n" % (ref))
    in_edges_dict = dict()
    for edge in drg.in_edges(int_ref):
        in_edges_dict[edge.token_index] = edge
    # both lists are built from the internal referent's edges; in_edges_ext
    # is never read below
    in_edges_int = []
    for key in sorted(in_edges_dict):
        in_edges_int.append(in_edges_dict[key])
    in_edges_ext = []
    for key in sorted(in_edges_dict):
        in_edges_ext.append(in_edges_dict[key])
surface_ext = []
generate_from_referent(drg, ext_ref, surface_ext)
surface_int = []
generate_from_referent(drg, int_ref, surface_int)
# crude composition
surface = []
composition = False
for token in surface_int:
if token == ext_ref:
surface.extend(surface_ext)
composition = True
else:
surface.append(token)
if composition:
surface_generic = []
for token in surface:
if token in drg.nodes:
if generic:
surface_generic.append("*")
else:
surface_generic.append(drg.dereificated[token])
else:
surface_generic.append(token)
return ' '.join(surface_generic)
def unbox(tuples):
surface = []
parser = drg.DRGParser()
drg = parser.parse_tup_lines(tuples)
du_list = drg.discourse_units()
sys.stderr.write("discourse unit to visit: %s\n" % ", ".join(du_list))
for du in du_list:
sys.stderr.write("discourse unit: %s\n" % du)
# generate tokens from discourse structure
for edge in drg.in_edges(du, structure="discourse"):
for t in edge.tokens:
                if t not in surface:
surface.append(t)
# look for events (assume at most one event per discourse unit)
for e1 in drg.out_edges(du, edge_type="event"):
for e2 in drg.out_edges(e1.to_node, edge_type="arg"):
sys.stderr.write("found event: %s\n" % e2.to_node)
generate_from_referent(drg, e2.to_node, surface)
return surface
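The "crude composition" step in `generate_from_relation` splices one surface list into another wherever the external referent's placeholder token occurs. Isolated from the DRG machinery, the operation looks like this (referent name `x2` is hypothetical):

```python
def compose(surface_int, surface_ext, ext_ref):
    """Replace every occurrence of ext_ref in surface_int with surface_ext."""
    surface = []
    for token in surface_int:
        if token == ext_ref:
            surface.extend(surface_ext)
        else:
            surface.append(token)
    return surface

# "x2" stands in for the external argument's referent node.
assert compose(["the", "x2", "barks"], ["big", "dog"], "x2") == \
    ["the", "big", "dog", "barks"]
```

This is list splicing rather than string substitution, so multi-token surface forms expand in place and token order is preserved.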
| valeriobasile/learningbyreading | src/unboxer/unboxer.py | Python | gpl-2.0 | 3,582 | ["VisIt"] | ff286df92c8f6a1c0cbe166acc035375963cd0d597b81be340f9e56953843f39 |
# -*- coding: utf-8 -*-
"""
Local settings
- Run in Debug mode
- Use console backend for emails
- Add Django Debug Toolbar
- Add django-extensions as app
"""
import socket
import os
from .common import * # noqa
# DEBUG
# ------------------------------------------------------------------------------
DEBUG = env.bool('DJANGO_DEBUG', default=True)
TEMPLATES[0]['OPTIONS']['debug'] = DEBUG
# SECRET CONFIGURATION
# ------------------------------------------------------------------------------
# See: https://docs.djangoproject.com/en/dev/ref/settings/#secret-key
# Note: This key is only used for development and testing.
SECRET_KEY = env('DJANGO_SECRET_KEY', default='3h5bns&s^1qgbep*^o@rw3-yxpt-1ib((t#ax_61+l0ktf##nl')
# Mail settings
# ------------------------------------------------------------------------------
EMAIL_PORT = 1025
EMAIL_HOST = 'localhost'
EMAIL_BACKEND = env('DJANGO_EMAIL_BACKEND',
default='django.core.mail.backends.console.EmailBackend')
# CACHING
# ------------------------------------------------------------------------------
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'LOCATION': ''
}
}
# django-debug-toolbar
# ------------------------------------------------------------------------------
# MIDDLEWARE += ('debug_toolbar.middleware.DebugToolbarMiddleware',)
# INSTALLED_APPS += ('debug_toolbar', )
INTERNAL_IPS = ['127.0.0.1', '10.0.2.2', '0']
# tricks to have debug toolbar when developing with docker
if os.environ.get('USE_DOCKER') == 'yes':
ip = socket.gethostbyname(socket.gethostname())
INTERNAL_IPS += [ip[:-1] + "1"]
DEBUG_TOOLBAR_CONFIG = {
'DISABLE_PANELS': [
'debug_toolbar.panels.redirects.RedirectsPanel',
],
'SHOW_TEMPLATE_CONTEXT': True,
}
# django-extensions
# ------------------------------------------------------------------------------
INSTALLED_APPS += ('django_extensions', )
# django-gulp
# ------------------------------------------------------------------------------
# Enable django_gulp before django.contrib.staticfiles so we can override
# manage.py commands
INSTALLED_APPS = ('django_gulp', ) + INSTALLED_APPS
# TESTING
# ------------------------------------------------------------------------------
TEST_RUNNER = 'django.test.runner.DiscoverRunner'
# Your local stuff: Below this line define 3rd party library settings
# ------------------------------------------------------------------------------
# COMPRESS_ENABLED = True
# COMPRESS_ROOT = str(APPS_DIR.path('static/public'))
X_FRAME_OPTIONS = 'ALLOW-FROM https://pstest2.irondistrict.org'
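The docker block above derives the host-side address for the debug toolbar by replacing the last character of the container's own IP with "1", which only works when the final octet is a single digit. A more robust variant of the same idea, isolated (function name is illustrative, not part of these settings):

```python
def docker_gateway(container_ip):
    """Swap the last octet of a dotted-quad IP for '1'.

    E.g. '172.17.0.2' -> '172.17.0.1'. Assumes the gateway sits at .1,
    which holds for Docker's default bridge network.
    """
    return container_ip.rsplit('.', 1)[0] + '.1'

assert docker_gateway('172.17.0.2') == '172.17.0.1'
# The original ip[:-1] + "1" trick breaks on multi-digit octets:
assert docker_gateway('172.17.0.12') == '172.17.0.1'
```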
| IronCountySchoolDistrict/powerschool_apps | config/settings/local.py | Python | mit | 2,640 | ["GULP"] | 41328a6a5bbd41a2c18e992b74342716d9f70715b1a28f1557b03e4e19204d61 |
# coding: utf-8
# Copyright (c) Pymatgen Development Team.
# Distributed under the terms of the MIT License.
import os
from unittest import TestCase
import unittest
from pymatgen.analysis.molecule_structure_comparator import \
MoleculeStructureComparator
from pymatgen.core.structure import Molecule
__author__ = 'xiaohuiqu'
test_dir = os.path.join(os.path.dirname(__file__), "..", "..", "..",
"test_files", "molecules", "structural_change")
class TestMoleculeStructureComparator(TestCase):
def test_are_equal(self):
msc1 = MoleculeStructureComparator()
mol1 = Molecule.from_file(os.path.join(test_dir, "t1.xyz"))
mol2 = Molecule.from_file(os.path.join(test_dir, "t2.xyz"))
mol3 = Molecule.from_file(os.path.join(test_dir, "t3.xyz"))
self.assertFalse(msc1.are_equal(mol1, mol2))
self.assertTrue(msc1.are_equal(mol2, mol3))
thio1 = Molecule.from_file(os.path.join(test_dir, "thiophene1.xyz"))
thio2 = Molecule.from_file(os.path.join(test_dir, "thiophene2.xyz"))
# noinspection PyProtectedMember
msc2 = MoleculeStructureComparator(
priority_bonds=msc1._get_bonds(thio1))
self.assertTrue(msc2.are_equal(thio1, thio2))
hal1 = Molecule.from_file(os.path.join(test_dir, "molecule_with_halogen_bonds_1.xyz"))
hal2 = Molecule.from_file(os.path.join(test_dir, "molecule_with_halogen_bonds_2.xyz"))
msc3 = MoleculeStructureComparator(priority_bonds=msc1._get_bonds(hal1))
self.assertTrue(msc3.are_equal(hal1, hal2))
def test_get_bonds(self):
mol1 = Molecule.from_file(os.path.join(test_dir, "t1.xyz"))
msc = MoleculeStructureComparator()
# noinspection PyProtectedMember
bonds = msc._get_bonds(mol1)
bonds_ref = [(0, 1), (0, 2), (0, 3), (0, 23), (3, 4), (3, 5), (5, 6),
(5, 7), (7, 8), (7, 9), (7, 21), (9, 10), (9, 11),
(9, 12), (12, 13), (12, 14), (12, 15), (15, 16), (15, 17),
(15, 18), (18, 19), (18, 20), (18, 21), (21, 22),
(21, 23), (23, 24), (23, 25)]
self.assertEqual(bonds, bonds_ref)
mol2 = Molecule.from_file(os.path.join(test_dir, "MgBH42.xyz"))
bonds = msc._get_bonds(mol2)
self.assertEqual(bonds, [(1, 3), (2, 3), (3, 4), (3, 5), (6, 8), (7, 8),
(8, 9), (8, 10)])
msc = MoleculeStructureComparator(ignore_ionic_bond=False)
bonds = msc._get_bonds(mol2)
self.assertEqual(bonds, [(0, 1), (0, 2), (0, 3), (0, 5), (0, 6), (0, 7),
(0, 8), (0, 9), (1, 3), (2, 3), (3, 4), (3, 5),
(6, 8), (7, 8), (8, 9), (8, 10)])
mol1 = Molecule.from_file(os.path.join(test_dir, "molecule_with_halogen_bonds_1.xyz"))
msc = MoleculeStructureComparator()
# noinspection PyProtectedMember
bonds = msc._get_bonds(mol1)
self.assertEqual(bonds, [(0, 12), (0, 13), (0, 14), (0, 15), (1, 12), (1, 16),
(1, 17), (1, 18), (2, 4), (2, 11), (2, 19), (3, 5),
(3, 10), (3, 20), (4, 6), (4, 10), (5, 11), (5, 12),
(6, 7), (6, 8), (6, 9)])
def test_to_and_from_dict(self):
msc1 = MoleculeStructureComparator()
d1 = msc1.as_dict()
d2 = MoleculeStructureComparator.from_dict(d1).as_dict()
self.assertEqual(d1, d2)
thio1 = Molecule.from_file(os.path.join(test_dir, "thiophene1.xyz"))
# noinspection PyProtectedMember
msc2 = MoleculeStructureComparator(
bond_length_cap=0.2,
priority_bonds=msc1._get_bonds(thio1),
priority_cap=0.5)
d1 = msc2.as_dict()
d2 = MoleculeStructureComparator.from_dict(d1).as_dict()
self.assertEqual(d1, d2)
# def test_structural_change_in_geom_opt(self):
# qcout_path = os.path.join(test_dir, "mol_1_3_bond.qcout")
# qcout = QcOutput(qcout_path)
# mol1 = qcout.data[0]["molecules"][0]
# mol2 = qcout.data[0]["molecules"][-1]
# priority_bonds = [[0, 1], [0, 2], [1, 3], [1, 4], [1, 7], [2, 5], [2, 6], [2, 8], [4, 6], [4, 10], [6, 9]]
# msc = MoleculeStructureComparator(priority_bonds=priority_bonds)
# self.assertTrue(msc.are_equal(mol1, mol2))
def test_get_13_bonds(self):
priority_bonds = [[0, 1], [0, 2], [1, 3], [1, 4], [1, 7], [2, 5], [2, 6], [2, 8], [4, 6], [4, 10], [6, 9]]
bonds_13 = MoleculeStructureComparator.get_13_bonds(priority_bonds)
ans = ((0, 3), (0, 4), (0, 5), (0, 6), (0, 7), (0, 8), (1, 2), (1, 6), (1, 10), (2, 4), (2, 9), (3, 4),
(3, 7), (4, 7), (4, 9), (5, 6), (5, 8), (6, 8), (6, 10))
self.assertEqual(bonds_13, tuple(ans))
if __name__ == '__main__':
unittest.main()
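The `test_get_13_bonds` case above pins down what `get_13_bonds` computes: 1-3 pairs, i.e. atoms joined through exactly one shared bonded neighbor. A standalone sketch of one plausible implementation (not pymatgen's actual code) that reproduces the expected tuple from the test:

```python
from itertools import combinations

def get_13_bonds(priority_bonds):
    """Pairs of atoms sharing a common bonded neighbor, excluding pairs
    that are already directly bonded."""
    bonded = {tuple(sorted(b)) for b in priority_bonds}
    neighbors = {}
    for a, b in priority_bonds:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    pairs = set()
    for nbrs in neighbors.values():
        for pair in combinations(sorted(nbrs), 2):
            if pair not in bonded:
                pairs.add(pair)
    return tuple(sorted(pairs))

bonds = [[0, 1], [0, 2], [1, 3], [1, 4], [1, 7], [2, 5], [2, 6],
         [2, 8], [4, 6], [4, 10], [6, 9]]
expected = ((0, 3), (0, 4), (0, 5), (0, 6), (0, 7), (0, 8), (1, 2), (1, 6),
            (1, 10), (2, 4), (2, 9), (3, 4), (3, 7), (4, 7), (4, 9), (5, 6),
            (5, 8), (6, 8), (6, 10))
assert get_13_bonds(bonds) == expected
```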
| tschaume/pymatgen | pymatgen/analysis/tests/test_molecule_structure_comparator.py | Python | mit | 4,916 | ["pymatgen"] | 73ca9335a71e206e7d26be9c6577e86cbdc9c8c8cb9a94999a69048e9fddf781 |
"""
CBMPy: CBConfig module
======================
PySCeS Constraint Based Modelling (http://cbmpy.sourceforge.net)
Copyright (C) 2009-2022 Brett G. Olivier, VU University Amsterdam, Amsterdam, The Netherlands
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>
Author: Brett G. Olivier
Contact email: bgoli@users.sourceforge.net
Last edit: $Author: bgoli $ ($Id: CBConfig.py 711 2020-04-27 14:22:34Z bgoli $)
"""
# gets rid of "invalid variable name" info
# pylint: disable=C0103
# gets rid of "line to long" info
# pylint: disable=C0301
# use with caution: gets rid of module xxx has no member errors (run once enabled)
# pylint: disable=E1101
# preparing for Python 3 port
from __future__ import division, print_function
from __future__ import absolute_import
# from __future__ import unicode_literals
import platform
__VERSION_MAJOR__ = 0
__VERSION_MINOR__ = 8
__VERSION_MICRO__ = 2
__CBCONFIG__ = {
'VERSION_MAJOR': __VERSION_MAJOR__,
'VERSION_MINOR': __VERSION_MINOR__,
'VERSION_MICRO': __VERSION_MICRO__,
'VERSION_STATUS': '',
'VERSION': '{}.{}.{}'.format(
__VERSION_MAJOR__, __VERSION_MINOR__, __VERSION_MICRO__
),
'DEBUG': False,
'SOLVER_PREF': 'CPLEX',
#'SOLVER_PREF': 'GLPK',
'SOLVER_ACTIVE': None,
'REVERSIBLE_SYMBOL': '<==>',
'IRREVERSIBLE_SYMBOL': '-->',
'HAVE_SBML2': False,
'HAVE_SBML3': False,
'CBMPY_DIR': None,
'SYMPY_DENOM_LIMIT': 10 ** 32,
'ENVIRONMENT': '{} {} ({})'.format(
platform.system(), platform.release(), platform.architecture()[0]
),
}
def current_version():
"""
Return the current CBMPy version as a string
"""
return '{}.{}.{}'.format(__VERSION_MAJOR__, __VERSION_MINOR__, __VERSION_MICRO__)
def current_version_tuple():
"""
Return the current CBMPy version as a tuple (x, y, z)
"""
return (__VERSION_MAJOR__, __VERSION_MINOR__, __VERSION_MICRO__)
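Exposing the version both as a string and as a tuple, as the two helpers above do, is useful because Python compares tuples element-wise, so a tuple comparison gives correct version ordering where string comparison would not ("0.10.0" < "0.8.0" lexicographically). A sketch (function name is illustrative, not part of CBConfig):

```python
def meets_minimum(current, minimum):
    """Version check via lexicographic tuple comparison."""
    return current >= minimum

assert meets_minimum((0, 8, 2), (0, 8, 0))
assert not meets_minimum((0, 8, 2), (0, 9, 0))
# Tuples order correctly where strings would not:
assert (0, 10, 0) > (0, 8, 0)
assert not ("0.10.0" > "0.8.0")
```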
| SystemsBioinformatics/cbmpy | cbmpy/CBConfig.py | Python | gpl-3.0 | 2,474 | ["PySCeS"] | 0a706a420168d647ce0772002dbc63e8e06580113c44204f7cb6294ecdb274b8 |
# (c) 2017, Brian Coca <bcoca@ansible.com>
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import optparse
from operator import attrgetter
from ansible import constants as C
from ansible.cli import CLI
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.inventory.host import Host
from ansible.plugins.loader import vars_loader
from ansible.parsing.dataloader import DataLoader
from ansible.utils.vars import combine_vars
try:
from __main__ import display
except ImportError:
from ansible.utils.display import Display
display = Display()
INTERNAL_VARS = frozenset(['ansible_diff_mode',
'ansible_facts',
'ansible_forks',
'ansible_inventory_sources',
'ansible_limit',
'ansible_playbook_python',
'ansible_run_tags',
'ansible_skip_tags',
'ansible_verbosity',
'ansible_version',
'inventory_dir',
'inventory_file',
'inventory_hostname',
'inventory_hostname_short',
'groups',
'group_names',
'omit',
'playbook_dir', ])
class InventoryCLI(CLI):
''' used to display or dump the configured inventory as Ansible sees it '''
ARGUMENTS = {'host': 'The name of a host to match in the inventory, relevant when using --list',
'group': 'The name of a group in the inventory, relevant when using --graph', }
def __init__(self, args):
super(InventoryCLI, self).__init__(args)
self.vm = None
self.loader = None
self.inventory = None
self._new_api = True
def parse(self):
self.parser = CLI.base_parser(
usage='usage: %prog [options] [host|group]',
epilog='Show Ansible inventory information, by default it uses the inventory script JSON format',
inventory_opts=True,
vault_opts=True,
basedir_opts=True,
)
# remove unused default options
self.parser.remove_option('--limit')
self.parser.remove_option('--list-hosts')
# Actions
action_group = optparse.OptionGroup(self.parser, "Actions", "One of following must be used on invocation, ONLY ONE!")
action_group.add_option("--list", action="store_true", default=False, dest='list', help='Output all hosts info, works as inventory script')
action_group.add_option("--host", action="store", default=None, dest='host', help='Output specific host info, works as inventory script')
action_group.add_option("--graph", action="store_true", default=False, dest='graph',
help='create inventory graph, if supplying pattern it must be a valid group name')
self.parser.add_option_group(action_group)
# graph
self.parser.add_option("-y", "--yaml", action="store_true", default=False, dest='yaml',
help='Use YAML format instead of default JSON, ignored for --graph')
self.parser.add_option('--toml', action='store_true', default=False, dest='toml',
help='Use TOML format instead of default JSON, ignored for --graph')
self.parser.add_option("--vars", action="store_true", default=False, dest='show_vars',
help='Add vars to graph display, ignored unless used with --graph')
# list
        self.parser.add_option("--export", action="store_true", default=C.INVENTORY_EXPORT, dest='export',
                               help="When doing --list, represent in a way that is optimized for export, "
                                    "not as an accurate representation of how Ansible has processed it")
# self.parser.add_option("--ignore-vars-plugins", action="store_true", default=False, dest='ignore_vars_plugins',
# help="When doing an --list, skip vars data from vars plugins, by default, this would include group_vars/ and host_vars/")
super(InventoryCLI, self).parse()
display.verbosity = self.options.verbosity
self.validate_conflicts(vault_opts=True)
# there can be only one! and, at least, one!
used = 0
for opt in (self.options.list, self.options.host, self.options.graph):
if opt:
used += 1
if used == 0:
raise AnsibleOptionsError("No action selected, at least one of --host, --graph or --list needs to be specified.")
elif used > 1:
raise AnsibleOptionsError("Conflicting options used, only one of --host, --graph or --list can be used at the same time.")
# set host pattern to default if not supplied
if len(self.args) > 0:
self.options.pattern = self.args[0]
else:
self.options.pattern = 'all'
def run(self):
results = None
super(InventoryCLI, self).run()
# Initialize needed objects
if getattr(self, '_play_prereqs', False):
self.loader, self.inventory, self.vm = self._play_prereqs(self.options)
else:
            # fallback to pre 2.4 way of initializing
from ansible.vars import VariableManager
from ansible.inventory import Inventory
self._new_api = False
self.loader = DataLoader()
self.vm = VariableManager()
# use vault if needed
if self.options.vault_password_file:
vault_pass = CLI.read_vault_password_file(self.options.vault_password_file, loader=self.loader)
elif self.options.ask_vault_pass:
vault_pass = self.ask_vault_passwords()
else:
vault_pass = None
if vault_pass:
self.loader.set_vault_password(vault_pass)
# actually get inventory and vars
self.inventory = Inventory(loader=self.loader, variable_manager=self.vm, host_list=self.options.inventory)
self.vm.set_inventory(self.inventory)
if self.options.host:
hosts = self.inventory.get_hosts(self.options.host)
if len(hosts) != 1:
raise AnsibleOptionsError("You must pass a single valid host to --host parameter")
myvars = self._get_host_variables(host=hosts[0])
self._remove_internal(myvars)
# FIXME: should we template first?
results = self.dump(myvars)
elif self.options.graph:
results = self.inventory_graph()
elif self.options.list:
top = self._get_group('all')
if self.options.yaml:
results = self.yaml_inventory(top)
elif self.options.toml:
results = self.toml_inventory(top)
else:
results = self.json_inventory(top)
results = self.dump(results)
if results:
# FIXME: pager?
display.display(results)
exit(0)
exit(1)
def dump(self, stuff):
if self.options.yaml:
import yaml
from ansible.parsing.yaml.dumper import AnsibleDumper
results = yaml.dump(stuff, Dumper=AnsibleDumper, default_flow_style=False)
elif self.options.toml:
from ansible.plugins.inventory.toml import toml_dumps, HAS_TOML
if not HAS_TOML:
raise AnsibleError(
'The python "toml" library is required when using the TOML output format'
)
results = toml_dumps(stuff)
else:
import json
from ansible.parsing.ajson import AnsibleJSONEncoder
results = json.dumps(stuff, cls=AnsibleJSONEncoder, sort_keys=True, indent=4)
return results
# FIXME: refactor to use same for VM
def get_plugin_vars(self, path, entity):
data = {}
def _get_plugin_vars(plugin, path, entities):
data = {}
try:
data = plugin.get_vars(self.loader, path, entity)
except AttributeError:
try:
if isinstance(entity, Host):
data = combine_vars(data, plugin.get_host_vars(entity.name))
else:
data = combine_vars(data, plugin.get_group_vars(entity.name))
except AttributeError:
if hasattr(plugin, 'run'):
raise AnsibleError("Cannot use v1 type vars plugin %s from %s" % (plugin._load_name, plugin._original_path))
else:
raise AnsibleError("Invalid vars plugin %s from %s" % (plugin._load_name, plugin._original_path))
return data
for plugin in vars_loader.all():
data = combine_vars(data, _get_plugin_vars(plugin, path, entity))
return data
def _get_group_variables(self, group):
# get info from inventory source
res = group.get_vars()
# FIXME: add switch to skip vars plugins, add vars plugin info
for inventory_dir in self.inventory._sources:
res = combine_vars(res, self.get_plugin_vars(inventory_dir, group))
if group.priority != 1:
res['ansible_group_priority'] = group.priority
return res
def _get_host_variables(self, host):
if self.options.export:
hostvars = host.get_vars()
# FIXME: add switch to skip vars plugins
# add vars plugin info
for inventory_dir in self.inventory._sources:
hostvars = combine_vars(hostvars, self.get_plugin_vars(inventory_dir, host))
else:
if self._new_api:
hostvars = self.vm.get_vars(host=host, include_hostvars=False)
else:
hostvars = self.vm.get_vars(self.loader, host=host, include_hostvars=False)
return hostvars
def _get_group(self, gname):
if self._new_api:
group = self.inventory.groups.get(gname)
else:
group = self.inventory.get_group(gname)
return group
def _remove_internal(self, dump):
for internal in INTERNAL_VARS:
if internal in dump:
del dump[internal]
def _remove_empty(self, dump):
# remove empty keys
for x in ('hosts', 'vars', 'children'):
if x in dump and not dump[x]:
del dump[x]
def _show_vars(self, dump, depth):
result = []
self._remove_internal(dump)
if self.options.show_vars:
for (name, val) in sorted(dump.items()):
result.append(self._graph_name('{%s = %s}' % (name, val), depth))
return result
def _graph_name(self, name, depth=0):
if depth:
name = " |" * (depth) + "--%s" % name
return name
def _graph_group(self, group, depth=0):
result = [self._graph_name('@%s:' % group.name, depth)]
depth = depth + 1
for kid in sorted(group.child_groups, key=attrgetter('name')):
result.extend(self._graph_group(kid, depth))
if group.name != 'all':
for host in sorted(group.hosts, key=attrgetter('name')):
result.append(self._graph_name(host.name, depth))
result.extend(self._show_vars(host.get_vars(), depth + 1))
result.extend(self._show_vars(self._get_group_variables(group), depth))
return result
def inventory_graph(self):
start_at = self._get_group(self.options.pattern)
if start_at:
return '\n'.join(self._graph_group(start_at))
else:
raise AnsibleOptionsError("Pattern must be valid group name when using --graph")
def json_inventory(self, top):
def format_group(group):
results = {}
results[group.name] = {}
if group.name != 'all':
results[group.name]['hosts'] = [h.name for h in sorted(group.hosts, key=attrgetter('name'))]
results[group.name]['children'] = []
for subgroup in sorted(group.child_groups, key=attrgetter('name')):
results[group.name]['children'].append(subgroup.name)
results.update(format_group(subgroup))
if self.options.export:
results[group.name]['vars'] = self._get_group_variables(group)
self._remove_empty(results[group.name])
if not results[group.name]:
del results[group.name]
return results
results = format_group(top)
# populate meta
results['_meta'] = {'hostvars': {}}
hosts = self.inventory.get_hosts()
for host in hosts:
hvars = self._get_host_variables(host)
if hvars:
self._remove_internal(hvars)
results['_meta']['hostvars'][host.name] = hvars
return results
def yaml_inventory(self, top):
seen = []
def format_group(group):
results = {}
# initialize group + vars
results[group.name] = {}
# subgroups
results[group.name]['children'] = {}
for subgroup in sorted(group.child_groups, key=attrgetter('name')):
if subgroup.name != 'all':
results[group.name]['children'].update(format_group(subgroup))
# hosts for group
results[group.name]['hosts'] = {}
if group.name != 'all':
for h in sorted(group.hosts, key=attrgetter('name')):
myvars = {}
if h.name not in seen: # avoid defining host vars more than once
seen.append(h.name)
myvars = self._get_host_variables(host=h)
self._remove_internal(myvars)
results[group.name]['hosts'][h.name] = myvars
if self.options.export:
gvars = self._get_group_variables(group)
if gvars:
results[group.name]['vars'] = gvars
self._remove_empty(results[group.name])
return results
return format_group(top)
def toml_inventory(self, top):
seen = set()
        has_ungrouped = bool(next((g.hosts for g in top.child_groups if g.name == 'ungrouped'), False))
def format_group(group):
results = {}
results[group.name] = {}
results[group.name]['children'] = []
for subgroup in sorted(group.child_groups, key=attrgetter('name')):
if subgroup.name == 'ungrouped' and not has_ungrouped:
continue
if group.name != 'all':
results[group.name]['children'].append(subgroup.name)
results.update(format_group(subgroup))
if group.name != 'all':
for host in sorted(group.hosts, key=attrgetter('name')):
if host.name not in seen:
seen.add(host.name)
host_vars = self._get_host_variables(host=host)
self._remove_internal(host_vars)
else:
host_vars = {}
try:
results[group.name]['hosts'][host.name] = host_vars
except KeyError:
results[group.name]['hosts'] = {host.name: host_vars}
if self.options.export:
results[group.name]['vars'] = self._get_group_variables(group)
self._remove_empty(results[group.name])
if not results[group.name]:
del results[group.name]
return results
results = format_group(top)
return results
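The three `format_group` variants above (JSON, YAML, TOML) share one shape: recurse over child groups sorted by name, attach hosts, then prune empty keys as `_remove_empty` does. A stripped-down sketch of that recursion over a toy group tree (the `Group` class here is a stand-in, not Ansible's inventory object):

```python
class Group:
    def __init__(self, name, hosts=(), children=()):
        self.name = name
        self.hosts = list(hosts)
        self.child_groups = list(children)

def format_group(group):
    results = {group.name: {'children': [], 'hosts': list(group.hosts)}}
    for sub in sorted(group.child_groups, key=lambda g: g.name):
        results[group.name]['children'].append(sub.name)
        results.update(format_group(sub))  # flatten: one top-level key per group
    # prune empty keys, mirroring _remove_empty
    for key in ('children', 'hosts'):
        if not results[group.name][key]:
            del results[group.name][key]
    return results

web = Group('web', hosts=['web1', 'web2'])
db = Group('db', hosts=['db1'])
top = Group('all', children=[web, db])
out = format_group(top)
assert out['all']['children'] == ['db', 'web']
assert out['web']['hosts'] == ['web1', 'web2']
assert 'hosts' not in out['all']  # empty key pruned
```

Note the output is flat (one dict entry per group, children referenced by name) rather than nested, matching the inventory-script JSON convention.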
| shepdelacreme/ansible | lib/ansible/cli/inventory.py | Python | gpl-3.0 | 16,793 | ["Brian"] | 7def2b30f793edd7272f05f2bb689e15eb6e988bed488410f117aa380d7f4049 |
import csv
import random
class NeuronParameters:
def __init__(self, name, brain_region_name, brain_side, threshold):
self.name = name
self.brain_region_name = brain_region_name
self.brain_side = brain_side
self.threshold = threshold
class NeuralConnectionParameters:
def __init__(self, src_neuron_name, tar_neuron_name, weight):
self.src_neuron_name = src_neuron_name
self.tar_neuron_name = tar_neuron_name
self.weight = weight
class ConnectivityMatrixIO:
def load_matrix(self, connectivity_matrix_file_name, brain, progress_bar):
# Load the neuron and neural connection parameters from file
neuron_params, conn_params, load_errors = self.__load_csv_file(connectivity_matrix_file_name)
# Add them to the brain and the data container
neuron_errors = brain.create_neurons(neuron_params, progress_bar)
nc_errors = brain.create_neural_connections(conn_params)
# Return all the error messages
return load_errors + neuron_errors + nc_errors
def __load_csv_file(self, file_name):
# Open the file and read in the lines
        try:
            f = open(file_name, "r")
        except IOError:
            return (None, None, ["Could not open '" + file_name + "'"])
csv_reader = csv.reader(f)
names_set = set()
neuron_names = list()
# Get the neuron names
for row in csv_reader:
for cell in row:
cell = cell.strip()
if cell:
names_set.add(cell)
neuron_names.append(cell)
break
        if len(neuron_names) == len(names_set):
            return self.__load_general_matrix(file_name)
        elif len(neuron_names) == 2*len(names_set):
            # the reader was exhausted while collecting names; rewind before parsing
            f.seek(0)
            csv_reader = csv.reader(f)
            return self.__load_symmetric_matrix(neuron_names, csv_reader)
        else:
            return (None, None, ["invalid matrix format"])
def __load_general_matrix(self, file_name):
file_lines = list()
# Open the file and read in the lines
        try:
            f = open(file_name, "r")
        except IOError:
            return (None, None, ["Could not open '" + file_name + "'"])
file_lines = f.readlines()
# Make sure we got enough lines
if len(file_lines) <= 1:
return (None, None, ["Too few lines in the matrix file"])
# These two lists will be filled below and returned
neurons = list()
neural_connections = list()
# Process the first line (it contains the neuron names). First, get rid of all white spaces.
neuron_names = "".join(file_lines[0].split())
# This is a (column id, neuron name) dictionary
col_id_to_neuron_name = dict()
# Populate the (column id, neuron name) dictionary
col_id = 0
for neuron_name in neuron_names.split(","):
if not neuron_name:
continue
col_id_to_neuron_name[col_id] = neuron_name
col_id += 1
# How many neurons are there
num_neurons = len(col_id_to_neuron_name)
# Now, loop over the other table lines and create the neurons and neural connections
for file_line in file_lines[1:]:
# The current line contains the cells separated by a ","
cells = file_line.split(",")
if len(cells) < 1:
continue
# Get the name of the current neuron
neuron_name = cells[0]
# Create a new neuron using the cells which contain the neuron parameters
neuron = self.__create_neuron(neuron_name, cells[num_neurons+1:])
if neuron: # save the neuron
neurons.append(neuron)
else:
continue # we couldn't create the neuron => we cannot create neural connections from/to it
# Create the connections using the cells which contain the connection weights
connections = self.__create_neural_connections(neuron_name, cells[1:num_neurons+1], col_id_to_neuron_name)
# Save the connections
if connections:
neural_connections.extend(connections)
return (neurons, neural_connections, [])
def __create_neural_connections(self, src_neuron_name, cells, col_id_to_neuron_name):
neural_connections = list()
col_id = -1
# Loop over the cells. Each one contains the weight of the neural connection
for cell in cells:
col_id += 1
try:
tar_neuron_name = col_id_to_neuron_name[col_id]
except KeyError:
continue
if cell == "":
continue
try:
weight = float(cell)
except ValueError:
continue
if weight == 0:
continue
neural_connections.append(NeuralConnectionParameters(src_neuron_name, tar_neuron_name, weight))
return neural_connections
def __create_neuron(self, neuron_name, cells):
# Get the brain region name
try:
brain_region_name = cells[0]
except IndexError:
return None
else:
brain_region_name = brain_region_name.strip()
if not brain_region_name:
return None
# Get the threshold for this neuron
try:
threshold = float(cells[1])
except (ValueError, IndexError):
threshold = random.uniform(-1, 1)
# Get the side of the brain (left or right) which contains the neuron
try:
brain_side = cells[2].strip()
except IndexError:
brain_side = ""
return NeuronParameters(neuron_name, brain_region_name, brain_side, threshold)
def __load_symmetric_matrix(self, neuron_names, csv_reader):
neurons = list()
neural_connections = list()
index = 0
for row in csv_reader:
try:
neuron_name = row[0].strip()
except IndexError:
continue
else:
if not neuron_name:
continue
neuron = self.__create_neuron(neuron_name, row[len(neuron_names)+1:])
if neuron:
neuron.brain_side = "mirror"
neurons.append(neuron)
neural_connections.extend(self.__create_symmetric_neural_connections(neuron_names, row))
index += 1
if index >= len(neuron_names) // 2:
break
print(neuron_names)
for n in neurons:
print(n.name + " in " + n.brain_region_name + " @ " + str(n.threshold))
for c in neural_connections:
print(c.src_neuron_name + " -> " + c.tar_neuron_name + " @ " + str(c.weight))
return (neurons, neural_connections, [])
def __extract_neuron_names(self, row):
neuron_names = list()
for cell in row:
if cell.strip():
neuron_names.append(cell)
return neuron_names
def __create_symmetric_neural_connections(self, target_neuron_names, row):
try:
src_neuron_name = row[0]
except IndexError:
return []
end = len(target_neuron_names) // 2
neural_connections = list()
# The first half of the table (the ipsilateral connections)
for cell, tar_neuron_name in zip(row[1:], target_neuron_names[0:end]):
try:
weight = float(cell)
except ValueError:
continue
neural_connections.append(NeuralConnectionParameters(src_neuron_name + "_L", tar_neuron_name + "_L", weight))
neural_connections.append(NeuralConnectionParameters(src_neuron_name + "_R", tar_neuron_name + "_R", weight))
# The second half of the table (the contralateral connections)
for cell, tar_neuron_name in zip(row[end+1:], target_neuron_names[end:2*end]):
try:
weight = float(cell)
except ValueError:
continue
neural_connections.append(NeuralConnectionParameters(src_neuron_name + "_L", tar_neuron_name + "_R", weight))
neural_connections.append(NeuralConnectionParameters(src_neuron_name + "_R", tar_neuron_name + "_L", weight))
return neural_connections
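The first-column scan that chooses between the general and the symmetric loader can be sketched stand-alone (hypothetical helper, stdlib only): a matrix is general when every row name is unique, and symmetric when each name occurs exactly twice.

```python
# Hypothetical stand-alone sketch of the first-column scan used to choose
# between __load_general_matrix and __load_symmetric_matrix: every name
# unique -> general; every name appearing exactly twice -> symmetric.
import csv
import io

def classify_matrix(csv_text):
    names = []
    for row in csv.reader(io.StringIO(csv_text)):
        for cell in row:
            cell = cell.strip()
            if cell:
                names.append(cell)
            break  # only the first cell of each row matters
    unique = set(names)
    if len(names) == len(unique):
        return "general"
    if len(names) == 2 * len(unique):
        return "symmetric"
    return "invalid"
```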
|
zibneuro/brainvispy
|
IO/conmat.py
|
Python
|
bsd-3-clause
| 7,528
|
[
"NEURON"
] |
7dda5acab416956bf29ab73e794869c933cc56598d2a1b4b5dc0758a0a546dea
|
########################################################################
#
# (C) 2013, James Cammarata <jcammarata@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
########################################################################
import os
import os.path
import sys
import yaml
from collections import defaultdict
from distutils.version import LooseVersion
from jinja2 import Environment
import ansible.constants as C
import ansible.utils
import ansible.galaxy
from ansible.cli import CLI
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.role import GalaxyRole
from ansible.playbook.role.requirement import RoleRequirement
class GalaxyCLI(CLI):
VALID_ACTIONS = ("init", "info", "install", "list", "remove", "search")
SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url" )
def __init__(self, args, display=None):
self.api = None
self.galaxy = None
super(GalaxyCLI, self).__init__(args, display)
def parse(self):
''' create an options parser for bin/ansible '''
self.parser = CLI.base_parser(
usage = "usage: %%prog [%s] [--help] [options] ..." % "|".join(self.VALID_ACTIONS),
epilog = "\nSee '%s <command> --help' for more information on a specific command.\n\n" % os.path.basename(sys.argv[0])
)
self.set_action()
# options specific to actions
if self.action == "info":
self.parser.set_usage("usage: %prog info [options] role_name[,version]")
elif self.action == "init":
self.parser.set_usage("usage: %prog init [options] role_name")
self.parser.add_option('-p', '--init-path', dest='init_path', default="./",
help='The path in which the skeleton role will be created. The default is the current working directory.')
self.parser.add_option(
'--offline', dest='offline', default=False, action='store_true',
help="Don't query the galaxy API when creating roles")
elif self.action == "install":
self.parser.set_usage("usage: %prog install [options] [-r FILE | role_name(s)[,version] | scm+role_repo_url[,version] | tar_file(s)]")
self.parser.add_option('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help='Ignore errors and continue with the next specified role.')
self.parser.add_option('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help='Don\'t download roles listed as dependencies')
self.parser.add_option('-r', '--role-file', dest='role_file',
help='A file containing a list of roles to be imported')
elif self.action == "remove":
self.parser.set_usage("usage: %prog remove role1 role2 ...")
elif self.action == "list":
self.parser.set_usage("usage: %prog list [role_name]")
elif self.action == "search":
self.parser.add_option('-P', '--platforms', dest='platforms',
help='list of OS platforms to filter by')
self.parser.add_option('-C', '--categories', dest='categories',
help='list of categories to filter by')
self.parser.set_usage("usage: %prog search [<search_term>] [-C <category1,category2>] [-P platform]")
# options that apply to more than one action
if self.action != "init":
self.parser.add_option('-p', '--roles-path', dest='roles_path', default=C.DEFAULT_ROLES_PATH,
help='The path to the directory containing your roles. '
'The default is the roles_path configured in your '
'ansible.cfg file (/etc/ansible/roles if not configured)')
if self.action in ("info","init","install","search"):
self.parser.add_option('-s', '--server', dest='api_server', default="https://galaxy.ansible.com",
help='The API server destination')
if self.action in ("init","install"):
self.parser.add_option('-f', '--force', dest='force', action='store_true', default=False,
help='Force overwriting an existing role')
# get options, args and galaxy object
self.options, self.args = self.parser.parse_args()
self.display.verbosity = self.options.verbosity
self.galaxy = Galaxy(self.options, self.display)
return True
def run(self):
super(GalaxyCLI, self).run()
# if not offline, connect to the galaxy api
if self.action in ("info","install", "search") or (self.action == 'init' and not self.options.offline):
api_server = self.options.api_server
self.api = GalaxyAPI(self.galaxy, api_server)
if not self.api:
raise AnsibleError("The API server (%s) is not responding, please try again later." % api_server)
self.execute()
def get_opt(self, k, defval=""):
"""
Returns an option from an Optparse values instance.
"""
try:
data = getattr(self.options, k)
except AttributeError:
return defval
if k == "roles_path":
if os.pathsep in data:
data = data.split(os.pathsep)[0]
return data
def exit_without_ignore(self, rc=1):
"""
Exits with the specified return code unless the
option --ignore-errors was specified
"""
if not self.get_opt("ignore_errors", False):
raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.')
def parse_requirements_files(self, role):
if 'role' in role:
# Old style: {role: "galaxy.role,version,name", other_vars: "here" }
role_info = role_spec_parse(role['role'])
if isinstance(role_info, dict):
# Warning: Slight change in behaviour here. name may be being
# overloaded. Previously, name was only a parameter to the role.
# Now it is both a parameter to the role and the name that
# ansible-galaxy will install under on the local system.
if 'name' in role and 'name' in role_info:
del role_info['name']
role.update(role_info)
else:
# New style: { src: 'galaxy.role,version,name', other_vars: "here" }
if 'github.com' in role["src"] and 'http' in role["src"] and '+' not in role["src"] and not role["src"].endswith('.tar.gz'):
role["src"] = "git+" + role["src"]
if '+' in role["src"]:
(scm, src) = role["src"].split('+')
role["scm"] = scm
role["src"] = src
if 'name' not in role:
role["name"] = GalaxyRole.url_to_spec(role["src"])
if 'version' not in role:
role['version'] = ''
if 'scm' not in role:
role['scm'] = None
return role
def _display_role_info(self, role_info):
text = "\nRole: %s \n" % role_info['name']
text += "\tdescription: %s \n" % role_info['description']
for k in sorted(role_info.keys()):
if k in self.SKIP_INFO_KEYS:
continue
if isinstance(role_info[k], dict):
text += "\t%s: \n" % (k)
for key in sorted(role_info[k].keys()):
if key in self.SKIP_INFO_KEYS:
continue
text += "\t\t%s: %s\n" % (key, role_info[k][key])
else:
text += "\t%s: %s\n" % (k, role_info[k])
return text
############################
# execute actions
############################
def execute_init(self):
"""
Executes the init action, which creates the skeleton framework
of a role that complies with the galaxy metadata format.
"""
init_path = self.get_opt('init_path', './')
force = self.get_opt('force', False)
offline = self.get_opt('offline', False)
role_name = self.args.pop(0).strip()
if role_name == "":
raise AnsibleOptionsError("- no role name specified for init")
role_path = os.path.join(init_path, role_name)
if os.path.exists(role_path):
if os.path.isfile(role_path):
raise AnsibleError("- the path %s already exists, but is a file - aborting" % role_path)
elif not force:
raise AnsibleError("- the directory %s already exists.\n" % role_path + \
"you can use --force to re-initialize this directory,\n" + \
"however it will reset any main.yml files that may have\n" + \
"been modified there already.")
# create the default README.md
if not os.path.exists(role_path):
os.makedirs(role_path)
readme_path = os.path.join(role_path, "README.md")
f = open(readme_path, "wb")
f.write(self.galaxy.default_readme)
f.close()
for dir in GalaxyRole.ROLE_DIRS:
dir_path = os.path.join(init_path, role_name, dir)
main_yml_path = os.path.join(dir_path, 'main.yml')
# create the directory if it doesn't exist already
if not os.path.exists(dir_path):
os.makedirs(dir_path)
# now create the main.yml file for that directory
if dir == "meta":
# create a skeleton meta/main.yml with a valid galaxy_info
# datastructure in place, plus with all of the available
# tags/platforms included (but commented out) and the
# dependencies section
platforms = []
if not offline and self.api:
platforms = self.api.get_list("platforms") or []
categories = []
if not offline and self.api:
categories = self.api.get_list("categories") or []
# group the list of platforms from the api based
# on their names, with the release field being
# appended to a list of versions
platform_groups = defaultdict(list)
for platform in platforms:
platform_groups[platform['name']].append(platform['release'])
platform_groups[platform['name']].sort()
inject = dict(
author = 'your name',
company = 'your company (optional)',
license = 'license (GPLv2, CC-BY, etc)',
issue_tracker_url = 'http://example.com/issue/tracker',
min_ansible_version = '1.2',
platforms = platform_groups,
categories = categories,
)
rendered_meta = Environment().from_string(self.galaxy.default_meta).render(inject)
f = open(main_yml_path, 'w')
f.write(rendered_meta)
f.close()
pass
elif dir not in ('files','templates'):
# just write a (mostly) empty YAML file for main.yml
f = open(main_yml_path, 'w')
f.write('---\n# %s file for %s\n' % (dir,role_name))
f.close()
self.display.display("- %s was created successfully" % role_name)
def execute_info(self):
"""
Executes the info action. This action prints out detailed
information about an installed role as well as info available
from the galaxy API.
"""
if len(self.args) == 0:
# the user needs to specify a role
raise AnsibleOptionsError("- you must specify a user/role name")
roles_path = self.get_opt("roles_path")
data = ''
for role in self.args:
role_info = {}
gr = GalaxyRole(self.galaxy, role)
#self.galaxy.add_role(gr)
install_info = gr.install_info
if install_info:
if 'version' in install_info:
install_info['installed_version'] = install_info['version']
del install_info['version']
role_info.update(install_info)
remote_data = False
if self.api:
remote_data = self.api.lookup_role_by_name(role, False)
if remote_data:
role_info.update(remote_data)
if gr.metadata:
role_info.update(gr.metadata)
req = RoleRequirement()
__, __, role_spec = req.parse({'role': role})
if role_spec:
role_info.update(role_spec)
data += self._display_role_info(role_info)
if not data:
data += "\n- the role %s was not found" % role
self.pager(data)
def execute_install(self):
"""
Executes the installation action. The args list contains the
roles to be installed, unless -r was specified. The list of roles
can be a name (which will be downloaded via the galaxy API and github),
or it can be a local .tar.gz file.
"""
role_file = self.get_opt("role_file", None)
if len(self.args) == 0 and role_file is None:
# the user needs to specify one of either --role-file
# or specify a single user/role name
raise AnsibleOptionsError("- you must specify a user/role name or a roles file")
elif len(self.args) == 1 and not role_file is None:
# using a role file is mutually exclusive of specifying
# the role name on the command line
raise AnsibleOptionsError("- please specify a user/role name, or a roles file, but not both")
no_deps = self.get_opt("no_deps", False)
roles_path = self.get_opt("roles_path")
roles_done = []
roles_left = []
if role_file:
self.display.debug('Getting roles from %s' % role_file)
try:
self.display.debug('Processing role file: %s' % role_file)
f = open(role_file, 'r')
if role_file.endswith('.yaml') or role_file.endswith('.yml'):
try:
rolesparsed = map(self.parse_requirements_files, yaml.safe_load(f))
except Exception as e:
raise AnsibleError("%s does not seem like a valid yaml file: %s" % (role_file, str(e)))
roles_left = [GalaxyRole(self.galaxy, **r) for r in rolesparsed]
else:
# roles listed in a file, one per line
self.display.deprecated("Non yaml files for role requirements")
for rname in f.readlines():
if rname.startswith("#") or rname.strip() == '':
continue
roles_left.append(GalaxyRole(self.galaxy, rname.strip()))
f.close()
except (IOError,OSError) as e:
raise AnsibleError("Unable to read requirements file (%s): %s" % (role_file, str(e)))
else:
# roles were specified directly, so we'll just go out grab them
# (and their dependencies, unless the user doesn't want us to).
for rname in self.args:
roles_left.append(GalaxyRole(self.galaxy, rname.strip()))
while len(roles_left) > 0:
# query the galaxy API for the role data
role_data = None
role = roles_left.pop(0)
role_path = role.path
if role_path:
self.options.roles_path = role_path
else:
self.options.roles_path = roles_path
self.display.debug('Installing role %s from %s' % (role.name, self.options.roles_path))
tmp_file = None
installed = False
if role.src and os.path.isfile(role.src):
# installing a local tar.gz
tmp_file = role.src
else:
if role.scm:
# create tar file from scm url
tmp_file = GalaxyRole.scm_archive_role(role.scm, role.src, role.version, role.name)
if role.src:
if '://' not in role.src:
role_data = self.api.lookup_role_by_name(role.src)
if not role_data:
self.display.warning("- sorry, %s was not found on %s." % (role.src, self.options.api_server))
self.exit_without_ignore()
continue
role_versions = self.api.fetch_role_related('versions', role_data['id'])
if not role.version:
# convert the version names to LooseVersion objects
# and sort them to get the latest version. If there
# are no versions in the list, we'll grab the head
# of the master branch
if len(role_versions) > 0:
loose_versions = [LooseVersion(a.get('name',None)) for a in role_versions]
loose_versions.sort()
role.version = str(loose_versions[-1])
else:
role.version = 'master'
elif role.version != 'master':
if role_versions and role.version not in [a.get('name', None) for a in role_versions]:
self.display.warning('role is %s' % role)
self.display.warning("- the specified version (%s) was not found in the list of available versions (%s)." % (role.version, role_versions))
self.exit_without_ignore()
continue
# download the role. if --no-deps was specified, we stop here,
# otherwise we recursively grab roles and all of their deps.
tmp_file = role.fetch(role_data)
if tmp_file:
installed = role.install(tmp_file)
# we're done with the temp file, clean it up
if tmp_file != role.src:
os.unlink(tmp_file)
# install dependencies, if we want them
if not no_deps and installed:
if not role_data:
role_data = gr.get_metadata(role.get("name"), options)
role_dependencies = role_data['dependencies']
else:
role_dependencies = role_data['summary_fields']['dependencies'] # api_fetch_role_related(api_server, 'dependencies', role_data['id'])
for dep in role_dependencies:
self.display.debug('Installing dep %s' % dep)
if isinstance(dep, basestring):
dep = ansible.utils.role_spec_parse(dep)
else:
dep = ansible.utils.role_yaml_parse(dep)
if not get_role_metadata(dep["name"], options):
if dep not in roles_left:
self.display.display('- adding dependency: %s' % dep["name"])
roles_left.append(dep)
else:
self.display.display('- dependency %s already pending installation.' % dep["name"])
else:
self.display.display('- dependency %s is already installed, skipping.' % dep["name"])
if not tmp_file or not installed:
self.display.warning("- %s was NOT installed successfully." % role.name)
self.exit_without_ignore()
return 0
def execute_remove(self):
"""
Executes the remove action. The args list contains the list
of roles to be removed. This list can contain more than one role.
"""
if len(self.args) == 0:
raise AnsibleOptionsError('- you must specify at least one role to remove.')
for role_name in self.args:
role = GalaxyRole(self.galaxy, role_name)
try:
if role.remove():
self.display.display('- successfully removed %s' % role_name)
else:
self.display.display('- %s is not installed, skipping.' % role_name)
except Exception as e:
raise AnsibleError("Failed to remove role %s: %s" % (role_name, str(e)))
return 0
def execute_list(self):
"""
Executes the list action. The args list can contain zero
or one role. If one is specified, only that role will be
shown, otherwise all roles in the specified directory will
be shown.
"""
if len(self.args) > 1:
raise AnsibleOptionsError("- please specify only one role to list, or specify no roles to see a full list")
if len(self.args) == 1:
# show only the requested role, if it exists
name = self.args.pop()
gr = GalaxyRole(self.galaxy, name)
if gr.metadata:
install_info = gr.install_info
version = None
if install_info:
version = install_info.get("version", None)
if not version:
version = "(unknown version)"
# show some more info about single roles here
self.display.display("- %s, %s" % (name, version))
else:
self.display.display("- the role %s was not found" % name)
else:
# show all valid roles in the roles_path directory
roles_path = self.get_opt('roles_path')
roles_path = os.path.expanduser(roles_path)
if not os.path.exists(roles_path):
raise AnsibleOptionsError("- the path %s does not exist. Please specify a valid path with --roles-path" % roles_path)
elif not os.path.isdir(roles_path):
raise AnsibleOptionsError("- %s exists, but it is not a directory. Please specify a valid path with --roles-path" % roles_path)
path_files = os.listdir(roles_path)
for path_file in path_files:
gr = GalaxyRole(self.galaxy, path_file)
if gr.metadata:
install_info = gr.install_info
version = None
if install_info:
version = install_info.get("version", None)
if not version:
version = "(unknown version)"
self.display.display("- %s, %s" % (path_file, version))
return 0
def execute_search(self):
search = None
if len(self.args) > 1:
raise AnsibleOptionsError("At most a single search term is allowed.")
elif len(self.args) == 1:
search = self.args.pop()
response = self.api.search_roles(search, self.options.platforms, self.options.categories)
if 'count' in response:
self.galaxy.display.display("Found %d roles matching your search:\n" % response['count'])
data = ''
if 'results' in response:
for role in response['results']:
data += self._display_role_info(role)
self.pager(data)
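The install path above sorts LooseVersion objects to pick the newest release and falls back to the head of master when no versions exist. A rough stand-alone equivalent (hypothetical helper; a hand-rolled numeric-tuple key is used instead of distutils' LooseVersion, which is deprecated) looks like:

```python
# Rough stand-alone sketch of the "pick latest version" step from
# execute_install: sort version names numerically, fall back to 'master'.
# Uses a hand-rolled numeric key rather than distutils' LooseVersion.
import re

def latest_version(version_names):
    if not version_names:
        return "master"
    def key(name):
        # "v1.10.2" -> (1, 10, 2); non-numeric parts sort below numbers
        parts = re.split(r"[._-]", name.lstrip("v"))
        return tuple(int(p) if p.isdigit() else -1 for p in parts)
    return max(version_names, key=key)
```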
|
jaddison/ansible
|
lib/ansible/cli/galaxy.py
|
Python
|
gpl-3.0
| 24,621
|
[
"Galaxy"
] |
bba0bfcc603fea8cf65c305091693d6413dac14039be4ed00f7a771605f75619
|
from yade import pack, export
pred = pack.inAlignedBox((0,0,0),(10,10,10))
O.bodies.append(pack.randomDensePack(pred,radius=1.,rRelFuzz=.5,spheresInCell=500,memoizeDb='/tmp/pack.db'))
export.textExt('/tmp/test.txt',format='x_y_z_r_attrs',attrs=('b.state.pos.norm()','b.state.pos'),comment='dstN dstV_x dstV_y dstV_z')
# text2vtk and text2vtkSection function can be copy-pasted from yade/py/export.py into separate py file to avoid executing yade or to use pure python
export.text2vtk('/tmp/test.txt','/tmp/test.vtk')
export.text2vtkSection('/tmp/test.txt','/tmp/testSection.vtk',point=(5,5,5),normal=(1,1,1))
# now open paraview, click "Tools" menu -> "Python Shell", click "Run Script", choose "pv_section.py" from this directory
# or just run python pv_section.py
# and enjoy :-)
|
bcharlas/mytrunk
|
examples/test/paraview-spheres-solid-section/export_text.py
|
Python
|
gpl-2.0
| 785
|
[
"ParaView",
"VTK"
] |
bf8365e6a0b3daceef357bc9ffe16345bcd6b1c74fc1bda163b9f8bd9dc3583b
|
"""
An implementation of a random forest. Uses the provided DecisionTree class.
The RF is parameterized by the following values:
1. n_trees: The number of trees in the forest
2. boot_percent: The bootstrap percent. For each tree, we select a
bootstrap sample of the dataset (with replacement). This is the
percentage of the dataset to use for each tree.
3. feat_percent: The percentage of features to consider at each split.
Instead of examining all features at each split, we select feat_percent
of them to consider.
In addition, you need to specify the typical decision tree parameters,
namely max depth and the number of linear split points.
==============
Copyright Info
==============
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Copyright Brian Dolhansky 2014
bdolmail@gmail.com
"""
from interface_utils import prog_bar
from fast_decision_tree import FastDecisionTree
from scipy import stats
import numpy as np
class RandomForest:
def __init__(self, n_trees, max_depth, num_splits,
boot_percent=0.3, feat_percent=0.3, threaded=False, debug=False):
self.n_trees = n_trees
self.max_depth = max_depth
self.num_splits = num_splits
self.boot_percent = boot_percent
self.feat_percent = feat_percent
self.threaded = threaded
self.debug = debug
self.roots = []
def train(self, train_data, train_target):
t = 0
for i in range(self.n_trees):
prog_bar(t, self.n_trees)
t += 1
keep_idx = np.random.rand(train_data.shape[0]) <= \
self.boot_percent
boot_train_data = train_data[keep_idx, :]
boot_train_target = train_target[keep_idx]
dt = FastDecisionTree(self.max_depth, self.num_splits,
feat_subset=self.feat_percent,
debug=self.debug)
r = dt.train(boot_train_data, boot_train_target)
self.roots.append(r)
prog_bar(self.n_trees, self.n_trees)
def test(self, test_data, test_target):
t = 0
# TODO: refactor the RF test function to depend not on an external
# root but on itself
dt = FastDecisionTree(1, 1)
yhat_forest = np.zeros((test_data.shape[0], self.n_trees))
for i in range(len(self.roots)):
r = self.roots[i]
prog_bar(t, self.n_trees)
t += 1
yhat_forest[:, i:i+1] = dt.test_preds(r, test_data)  # fill column i only
prog_bar(self.n_trees, self.n_trees)
yhat = stats.mode(yhat_forest, axis=1)[0]
return yhat
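The two ingredients the class combines, bootstrap row sampling in train() and the per-sample mode in test(), can be illustrated with a small stdlib-only sketch (hypothetical helpers; no FastDecisionTree or scipy required):

```python
# Stdlib-only illustration of the two ingredients RandomForest combines:
# per-tree bootstrap sampling of rows and a per-sample majority vote.
import random
from collections import Counter

def bootstrap_indices(n_rows, boot_percent, rng=random):
    # keep each row index with probability boot_percent, as train() does
    # with np.random.rand(n) <= boot_percent
    return [i for i in range(n_rows) if rng.random() <= boot_percent]

def majority_vote(per_tree_preds):
    # per_tree_preds: one row per sample, one column per tree;
    # return the modal label for each sample (stats.mode equivalent)
    return [Counter(row).most_common(1)[0][0] for row in per_tree_preds]
```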
|
bdol/bdol-ml
|
random_forests/random_forest.py
|
Python
|
lgpl-3.0
| 3,210
|
[
"Brian"
] |
68dbd82afa1b7c52b96ec162c79369508a5143b8ef890d927f89fad380575955
|
#!/usr/bin/python
#
# #################################################################
# ## WARNING : this is a generated file, don't change it !
# #################################################################
#
# \file 1_export.py
# \brief Export clodbank
# \date 2011-09-21-20-51-GMT
# \author Jan Boon (Kaetemi)
# Python port of game data build pipeline.
# Export clodbank
#
# NeL - MMORPG Framework <http://dev.ryzom.com/projects/nel/>
# Copyright (C) 2010 Winch Gate Property Limited
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import time, sys, os, shutil, subprocess, distutils.dir_util
sys.path.append("../../configuration")
if os.path.isfile("log.log"):
os.remove("log.log")
if os.path.isfile("temp_log.log"):
os.remove("temp_log.log")
log = open("temp_log.log", "w")
from scripts import *
from buildsite import *
from process import *
from tools import *
from directories import *
printLog(log, "")
printLog(log, "-------")
printLog(log, "--- Export clodbank")
printLog(log, "-------")
printLog(log, time.strftime("%Y-%m-%d %H:%MGMT", time.gmtime(time.time())))
printLog(log, "")
# Find tools
# ...
# Export clodbank 3dsmax
if MaxAvailable:
# Find tools
Max = findMax(log, MaxDirectory, MaxExecutable)
printLog(log, "")
printLog(log, ">>> Export clodbank 3dsmax <<<")
mkPath(log, ExportBuildDirectory + "/" + ClodExportDirectory)
mkPath(log, ExportBuildDirectory + "/" + ClodTagExportDirectory)
for dir in ClodSourceDirectories:
mkPath(log, DatabaseDirectory + "/" + dir)
if (needUpdateDirByTagLog(log, DatabaseDirectory + "/" + dir, ".max", ExportBuildDirectory + "/" + ClodTagExportDirectory, ".max.tag")):
scriptSrc = "maxscript/clod_export.ms"
scriptDst = MaxUserDirectory + "/scripts/clod_export.ms"
outputLogfile = ScriptDirectory + "/processes/clodbank/log.log"
outputDirectory = ExportBuildDirectory + "/" + ClodExportDirectory
tagDirectory = ExportBuildDirectory + "/" + ClodTagExportDirectory
maxSourceDir = DatabaseDirectory + "/" + dir
maxRunningTagFile = tagDirectory + "/max_running.tag"
tagList = findFiles(log, tagDirectory, "", ".max.tag")
tagLen = len(tagList)
if os.path.isfile(scriptDst):
os.remove(scriptDst)
tagDiff = 1
sSrc = open(scriptSrc, "r")
sDst = open(scriptDst, "w")
for line in sSrc:
newline = line.replace("%OutputLogfile%", outputLogfile)
newline = newline.replace("%MaxSourceDirectory%", maxSourceDir)
newline = newline.replace("%OutputDirectory%", outputDirectory)
newline = newline.replace("%TagDirectory%", tagDirectory)
sDst.write(newline)
sSrc.close()
sDst.close()
zeroRetryLimit = 3
while tagDiff > 0:
mrt = open(maxRunningTagFile, "w")
mrt.write("moe-moe-kyun")
mrt.close()
printLog(log, "MAXSCRIPT " + scriptDst)
subprocess.call([ Max, "-U", "MAXScript", "clod_export.ms", "-q", "-mi", "-vn" ])
if os.path.exists(outputLogfile):
try:
lSrc = open(outputLogfile, "r")
for line in lSrc:
lineStrip = line.strip()
if (len(lineStrip) > 0):
printLog(log, lineStrip)
lSrc.close()
os.remove(outputLogfile)
except Exception:
printLog(log, "ERROR Failed to read 3dsmax log")
else:
printLog(log, "WARNING No 3dsmax log")
tagList = findFiles(log, tagDirectory, "", ".max.tag")
newTagLen = len(tagList)
tagDiff = newTagLen - tagLen
tagLen = newTagLen
addTagDiff = 0
if os.path.exists(maxRunningTagFile):
printLog(log, "FAIL 3ds Max crashed and/or file export failed!")
if tagDiff == 0:
if zeroRetryLimit > 0:
zeroRetryLimit = zeroRetryLimit - 1
addTagDiff = 1
else:
printLog(log, "FAIL Retry limit reached!")
else:
addTagDiff = 1
os.remove(maxRunningTagFile)
printLog(log, "Exported " + str(tagDiff) + " .max files!")
tagDiff += addTagDiff
os.remove(scriptDst)
printLog(log, "")
log.close()
if os.path.isfile("log.log"):
os.remove("log.log")
shutil.move("temp_log.log", "log.log")
# end of file
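The loop above that copies clod_export.ms while replacing %OutputLogfile%, %MaxSourceDirectory%, and the other tokens amounts to simple placeholder substitution; a minimal sketch (hypothetical helper):

```python
# Minimal sketch of the %Placeholder% substitution performed when copying
# the clod_export.ms maxscript: swap each %Key% token for its value.
def substitute_placeholders(text, mapping):
    for key, value in mapping.items():
        text = text.replace("%" + key + "%", value)
    return text
```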
|
osgcc/ryzom
|
nel/tools/build_gamedata/processes/clodbank/1_export.py
|
Python
|
agpl-3.0
| 4,656
|
[
"MOE"
] |
20da85838b33cd59c868be2a8ee7ec0ec15a3cf156102f1cfd4a916f6a80c18c
|
"""
NetcdfReader class file
There are several different Netcdf libraries out there. ArcGIS has some
netcdf reading ability,and there is Scientific Python among others. We use
this class as a wrapper to plug in different libraries. I've notice the way
ArcGIS reads data is kinda slow for large files..maybe I have
more to learn about how to use it properly.
@author: Robert Toomey (retoomey)
"""
import os, gzip
import w2py.log as log
import w2py.resource as w2res
class netcdfReader(object):
def __init__(self):
self.haveTemp = False
self.fileName = None
def getFileLocation(self):
""" Get the actual file location. Can be the original file or a
temporary uncompressed file """
return self.fileName
def setFileLocation(self, f):
""" Set the file location used by this reader """
self.fileName = f
def uncompressTempFile(self, datafile):
""" Uncompress a gzip file to a temp file if needed """
if datafile.endswith(".gz"):
uncompressed = w2res.getAbsBaseMulti(datafile)
uncompressed += ".netcdf"
log.info("Uncompressing file to "+uncompressed)
i = gzip.GzipFile(datafile, 'rb')
fileData = i.read()
i.close()
            o = open(uncompressed, 'wb')  # file() builtin is Python 2 only
            o.write(fileData)
            o.close()
            self.setFileLocation(uncompressed)
            self.haveTemp = True
            datafile = uncompressed
else:
self.setFileLocation(datafile)
return datafile
def __del__(self):
if self.haveTemp:
log.info("Attempting to remove temp file"+self.fileName)
os.remove(self.fileName)
def haveDimension(self, dim):
return False
def haveAttribute(self, param1, param2):
return False
#def getAttributeNames(self, params):
# return None
def getAttributeValue(self, param1, param2):
return None
def getDimensionSize(self, param1):
return None
def getDimensionSizeByVariable(self, param1):
return None
def getValueLookup(self, param1):
return None
def getValue(self, vlookup, index):
return None
def getValue2D(self, vlookup, index, x, y):
return None
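The `uncompressTempFile` method above reads the whole archive into memory before writing it back out. A modern, stdlib-only sketch of the same idea using a streaming copy (hypothetical names, not part of the reader):

```python
import gzip
import shutil
import tempfile

def uncompress_to_temp(datafile):
    """Uncompress a .gz file to a named temp file and return its path.

    A non-gzip path is returned unchanged, mirroring the reader's
    behaviour above. The caller is responsible for removing the
    temp file, like the reader's __del__ does.
    """
    if not datafile.endswith(".gz"):
        return datafile
    with gzip.open(datafile, "rb") as src, \
            tempfile.NamedTemporaryFile(suffix=".netcdf",
                                        delete=False) as dst:
        shutil.copyfileobj(src, dst)  # stream; no full read into memory
        return dst.name
```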
| retoomey/WDSS2Python | w2py/netcdf/netcdf_reader.py | Python | apache-2.0 | 2,495 | ["NetCDF"] | adf0c74228d7ccdd1a62019e4cca99889890e4757230ed258b5d201356edfe4f |
#
# Copyright (c) 2015 nexB Inc. and others. All rights reserved.
# http://nexb.com and https://github.com/nexB/scancode-toolkit/
# The ScanCode software is licensed under the Apache License version 2.0.
# Data generated with ScanCode require an acknowledgment.
# ScanCode is a trademark of nexB Inc.
#
# You may not use this software except in compliance with the License.
# You may obtain a copy of the License at: http://apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software distributed
# under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
# When you publish or redistribute any data created with ScanCode or any ScanCode
# derivative work, you must accompany this data with the following acknowledgment:
#
# Generated with ScanCode and provided on an "AS IS" BASIS, WITHOUT WARRANTIES
# OR CONDITIONS OF ANY KIND, either express or implied. No content created from
# ScanCode should be considered or used as legal advice. Consult an Attorney
# for any legal advice.
# ScanCode is a free software code scanning tool from nexB Inc. and others.
# Visit https://github.com/nexB/scancode-toolkit/ for support and download.
| yasharmaster/scancode-toolkit | src/textcode/__init__.py | Python | apache-2.0 | 1,358 | ["VisIt"] | 47d54fb3e40f3eba76beddfc1c75cc3225873137394015463fec99420ad4e588 |
# Copyright 2008-2015 Nokia Solutions and Networks
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from robotide.lib.robot.utils import html_escape, setter
from .itemlist import ItemList
from .modelobject import ModelObject
class Message(ModelObject):
"""A message outputted during the test execution.
The message can be a log message triggered by a keyword, or a warning
or an error occurred during the test execution.
"""
__slots__ = ['message', 'level', 'html', 'timestamp', 'parent', '_sort_key']
def __init__(self, message='', level='INFO', html=False, timestamp=None,
parent=None):
#: The message content as a string.
self.message = message
#: Severity of the message. Either ``TRACE``, ``INFO``,
#: ``WARN``, ``DEBUG`` or ``FAIL``/``ERROR``.
self.level = level
#: ``True`` if the content is in HTML, ``False`` otherwise.
self.html = html
#: Timestamp in format ``%Y%m%d %H:%M:%S.%f``.
self.timestamp = timestamp
self._sort_key = -1
#: The object this message was triggered by.
self.parent = parent
@setter
def parent(self, parent):
if parent and parent is not getattr(self, 'parent', None):
self._sort_key = getattr(parent, '_child_sort_key', -1)
return parent
@property
def html_message(self):
"""Returns the message content as HTML."""
return self.message if self.html else html_escape(self.message)
def visit(self, visitor):
visitor.visit_message(self)
def __unicode__(self):
return self.message
class Messages(ItemList):
__slots__ = []
def __init__(self, message_class=Message, parent=None, messages=None):
ItemList.__init__(self, message_class, {'parent': parent}, messages)
def __setitem__(self, index, item):
old = self[index]
ItemList.__setitem__(self, index, item)
self[index]._sort_key = old._sort_key
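The `html_message` property above escapes plain-text content only when `html` is False. A standalone sketch of that behaviour, using the stdlib escaper as a stand-in for robotide's `html_escape` (an assumption; the real helper also handles Robot's own formatting):

```python
from html import escape

class MessageSketch:
    """Minimal stand-in for the html_message behaviour above."""

    def __init__(self, message='', html=False):
        self.message = message
        self.html = html

    @property
    def html_message(self):
        # Pre-formatted HTML passes through; plain text is escaped.
        return self.message if self.html else escape(self.message)
```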
| fingeronthebutton/RIDE | src/robotide/lib/robot/model/message.py | Python | apache-2.0 | 2,506 | ["VisIt"] | da5e758fb7e39173a4fe8d5fd26809f3f9961ac9f7787b42568897d1d16a5bb9 |
# Copyright (C) 2009, Thomas Leonard
# See the README file for details, or visit http://0install.net.
import gtk, os, pango, sys
from zeroinstall import _, logger
from zeroinstall.cmd import slave
from zeroinstall.support import tasks
from zeroinstall.injector import model
from zeroinstall.gtkui import gtkutils
from zeroinstall.gui.gui import gobject
def _build_stability_menu(config, impl_details):
menu = gtk.Menu()
choices = list(model.stability_levels.values())
choices.sort()
choices.reverse()
@tasks.async
def set(new):
try:
blocker = slave.invoke_master(["set-impl-stability", impl_details['from-feed'], impl_details['id'], new])
yield blocker
tasks.check(blocker)
from zeroinstall.gui import main
main.recalculate()
except Exception:
logger.warning("set", exc_info = True)
raise
item = gtk.MenuItem()
item.set_label(_('Unset'))
item.connect('activate', lambda item: set(None))
item.show()
menu.append(item)
item = gtk.SeparatorMenuItem()
item.show()
menu.append(item)
for value in choices:
item = gtk.MenuItem()
item.set_label(_(str(value)).capitalize())
item.connect('activate', lambda item, v = value: set(str(v)))
item.show()
menu.append(item)
return menu
rox_filer = 'http://rox.sourceforge.net/2005/interfaces/ROX-Filer'
# Columns
ITEM = 0
ARCH = 1
STABILITY = 2
VERSION = 3
FETCH = 4
UNUSABLE = 5
RELEASED = 6
NOTES = 7
WEIGHT = 8 # Selected item is bold
LANGS = 9
class ImplementationList(object):
tree_view = None
model = None
interface = None
driver = None
def __init__(self, driver, interface, widgets):
self.interface = interface
self.driver = driver
self.model = gtk.ListStore(object, str, str, str, # Item, arch, stability, version,
str, gobject.TYPE_BOOLEAN, str, str, # fetch, unusable, released, notes,
int, str) # weight, langs
self.tree_view = widgets.get_widget('versions_list')
self.tree_view.set_model(self.model)
text = gtk.CellRendererText()
text_strike = gtk.CellRendererText()
stability = gtk.TreeViewColumn(_('Stability'), text, text = STABILITY)
for column in (gtk.TreeViewColumn(_('Version'), text_strike, text = VERSION, strikethrough = UNUSABLE, weight = WEIGHT),
gtk.TreeViewColumn(_('Released'), text, text = RELEASED, weight = WEIGHT),
stability,
gtk.TreeViewColumn(_('Fetch'), text, text = FETCH, weight = WEIGHT),
gtk.TreeViewColumn(_('Arch'), text_strike, text = ARCH, strikethrough = UNUSABLE, weight = WEIGHT),
gtk.TreeViewColumn(_('Lang'), text_strike, text = LANGS, strikethrough = UNUSABLE, weight = WEIGHT),
gtk.TreeViewColumn(_('Notes'), text, text = NOTES, weight = WEIGHT)):
self.tree_view.append_column(column)
self.tree_view.set_property('has-tooltip', True)
def tooltip_callback(widget, x, y, keyboard_mode, tooltip):
x, y = self.tree_view.convert_widget_to_bin_window_coords(x, y)
pos = self.tree_view.get_path_at_pos(x, y)
if pos:
self.tree_view.set_tooltip_cell(tooltip, pos[0], None, None)
path = pos[0]
row = self.model[path]
if row[ITEM]:
tooltip.set_text(row[ITEM]['tooltip'])
return True
return False
self.tree_view.connect('query-tooltip', tooltip_callback)
def button_press(tree_view, bev):
if bev.button not in (1, 3):
return False
pos = tree_view.get_path_at_pos(int(bev.x), int(bev.y))
if not pos:
return False
path, col, x, y = pos
impl = self.model[path][ITEM]
global menu # Fix GC problem with PyGObject
menu = gtk.Menu()
stability_menu = gtk.MenuItem()
stability_menu.set_label(_('Rating'))
stability_menu.set_submenu(_build_stability_menu(self.driver.config, impl))
stability_menu.show()
menu.append(stability_menu)
impl_dir = impl['impl-dir']
if impl_dir:
def open():
os.spawnlp(os.P_WAIT, '0launch',
'0launch', rox_filer, '-d',
impl_dir)
item = gtk.MenuItem()
item.set_label(_('Open cached copy'))
item.connect('activate', lambda item: open())
item.show()
menu.append(item)
item = gtk.MenuItem()
item.set_label(_('Explain this decision'))
item.connect('activate', lambda item: self.show_explaination(impl))
item.show()
menu.append(item)
if sys.version_info[0] < 3:
menu.popup(None, None, None, bev.button, bev.time)
else:
menu.popup(None, None, None, None, bev.button, bev.time)
self.tree_view.connect('button-press-event', button_press)
@tasks.async
def show_explaination(self, impl):
try:
blocker = slave.justify_decision(self.interface.uri, impl['from-feed'], impl['id'])
yield blocker
tasks.check(blocker)
reason = blocker.result
parent = self.tree_view.get_toplevel()
if '\n' not in reason:
gtkutils.show_message_box(parent, reason, gtk.MESSAGE_INFO)
return
box = gtk.Dialog(_("{prog} version {version}").format(
prog = self.interface.uri,
version = impl['version']),
parent,
gtk.DIALOG_DESTROY_WITH_PARENT,
(gtk.STOCK_OK, gtk.RESPONSE_OK))
swin = gtk.ScrolledWindow()
swin.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
text = gtk.Label(reason)
swin.add_with_viewport(text)
swin.show_all()
box.vbox.pack_start(swin)
box.set_position(gtk.WIN_POS_CENTER)
def resp(b, r):
b.destroy()
box.connect('response', resp)
box.set_default_size(gtk.gdk.screen_width() * 3 / 4, gtk.gdk.screen_height() / 3)
box.show()
except Exception:
logger.warning("show_explaination", exc_info = True)
raise
def get_selection(self):
return self.tree_view.get_selection()
def update(self, details):
self.model.clear()
selected = details.get('selected-feed', None), details.get('selected-id', None)
impls = details.get('implementations', None)
self.tree_view.set_sensitive(impls is not None)
for item in impls or []:
new = self.model.append()
self.model[new][ITEM] = item
self.model[new][VERSION] = item['version']
self.model[new][RELEASED] = item['released']
self.model[new][FETCH] = item['fetch']
user_stability = item['user-stability']
self.model[new][STABILITY] = user_stability.upper() if user_stability else item['stability']
self.model[new][ARCH] = item['arch']
if (item['from-feed'], item['id']) == selected:
self.model[new][WEIGHT] = pango.WEIGHT_BOLD
else:
self.model[new][WEIGHT] = pango.WEIGHT_NORMAL
self.model[new][UNUSABLE] = not bool(item['usable'])
self.model[new][LANGS] = item['langs']
self.model[new][NOTES] = item['notes']
def clear(self):
self.model.clear()
| afb/0install | zeroinstall/gui/impl_list.py | Python | lgpl-2.1 | 6,575 | ["VisIt"] | 5ad454fb28cd195b6207f02dc7bf1cc6d21e553aee7bbf3239f65be8190147d9 |
"""
Example psfMC model file. The ModelComponents and distributions import
statements are optional (they are injected automatically when the model file is
processed), but their explicit inclusion is recommended.
Model components have several parameters that can generally be supplied as
either a fixed value or as a prior distribution. For instance:
Sersic(..., index=1.0, ...)
will create a Sersic profile with a fixed index of 1 (exponential profile), but:
Sersic(..., index=Uniform(lower=0.5, upper=10.0), ...)
will leave the index as a simulated free parameter with a Uniform prior.
See the docstrings for individual components and distributions for the available
parameters.
"""
from numpy import array
from psfMC.ModelComponents import Configuration, Sky, PointSource, Sersic
from psfMC.distributions import Normal, Uniform, WeibullMinimum
total_mag = 20.66
center = array((64.5, 64.5))
max_shift = array((8, 8))
# The Configuration component is mandatory, and specifies the observation files
Configuration(obs_file='sci_J0005-0006.fits',
obsivm_file='ivm_J0005-0006.fits',
psf_files='sci_psf.fits',
psfivm_files='ivm_psf.fits',
mask_file='mask_J0005-0006.reg',
mag_zeropoint=25.9463)
# We can treat the sky as an unknown component if the subtraction is uncertain
Sky(adu=Normal(loc=0, scale=0.01))
# Point source component
PointSource(xy=Uniform(loc=center - max_shift, scale=2 * max_shift),
mag=Uniform(loc=total_mag-0.2, scale=0.2+1.5))
# Sersic profile, modeling a galaxy under the point source
Sersic(xy=Uniform(loc=center-max_shift, scale=2*max_shift),
mag=Uniform(loc=total_mag, scale=27.5-total_mag),
reff=Uniform(loc=2.0, scale=12.0-2.0),
reff_b=Uniform(loc=2.0, scale=12.0-2.0),
index=WeibullMinimum(c=1.5, scale=4),
angle=Uniform(loc=0, scale=180), angle_degrees=True)
# Second sersic profile, modeling the faint blob to the upper left of the quasar
center = array((46, 85.6))
max_shift = array((5, 5))
Sersic(xy=Uniform(loc=center-max_shift, scale=2*max_shift),
mag=Uniform(loc=23.5, scale=25.5-23.5),
reff=Uniform(loc=2.0, scale=8.0-2.0),
reff_b=Uniform(loc=2.0, scale=8.0-2.0),
index=WeibullMinimum(c=1.5, scale=4),
angle=Uniform(loc=0, scale=180), angle_degrees=True)
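The priors above appear to follow the scipy.stats loc/scale convention, where `Uniform(loc=a, scale=s)` covers the interval `[a, a+s]` — an assumption inferred from how the bounds are written, e.g. `mag=Uniform(loc=total_mag, scale=27.5-total_mag)` spanning total_mag to 27.5. A tiny sketch of that bound arithmetic (hypothetical helper, not the psfMC API):

```python
def uniform_bounds(loc, scale):
    """Return (lower, upper) for a scipy-style Uniform(loc, scale) prior."""
    return loc, loc + scale

total_mag = 20.66
# From the model above: mag=Uniform(loc=total_mag, scale=27.5 - total_mag)
lower, upper = uniform_bounds(total_mag, 27.5 - total_mag)
```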
| mmechtley/psfMC | examples/model_J0005-0006.py | Python | mit | 2,347 | ["Galaxy"] | 50b3fe90790825570b199a4cbb0a0f21b85b39e705579187c804f74e909439a1 |
# -*- coding: utf-8 -*-
"""An implementation of the Zephyr Abstract Syntax Definition Language.
See http://asdl.sourceforge.net/ and
http://www.cs.princeton.edu/research/techreps/TR-554-97
Only supports top level module decl, not view. I'm guessing that view
is intended to support the browser and I'm not interested in the
browser.
Changes for Python: Add support for module versions
"""
from __future__ import print_function, division, absolute_import
import os
import traceback
from . import spark
class Token(object):
# spark seems to dispatch in the parser based on a token's
# type attribute
def __init__(self, type, lineno):
self.type = type
self.lineno = lineno
def __str__(self):
return self.type
def __repr__(self):
return str(self)
class Id(Token):
def __init__(self, value, lineno):
self.type = 'Id'
self.value = value
self.lineno = lineno
def __str__(self):
return self.value
class String(Token):
def __init__(self, value, lineno):
self.type = 'String'
self.value = value
self.lineno = lineno
class ASDLSyntaxError(Exception):
def __init__(self, lineno, token=None, msg=None):
self.lineno = lineno
self.token = token
self.msg = msg
def __str__(self):
if self.msg is None:
return "Error at '%s', line %d" % (self.token, self.lineno)
else:
return "%s, line %d" % (self.msg, self.lineno)
class ASDLScanner(spark.GenericScanner, object):
def tokenize(self, input):
self.rv = []
self.lineno = 1
super(ASDLScanner, self).tokenize(input)
return self.rv
def t_id(self, s):
r"[\w\.]+"
# XXX doesn't distinguish upper vs. lower, which is
# significant for ASDL.
self.rv.append(Id(s, self.lineno))
def t_string(self, s):
r'"[^"]*"'
self.rv.append(String(s, self.lineno))
def t_xxx(self, s): # not sure what this production means
r"<="
self.rv.append(Token(s, self.lineno))
def t_punctuation(self, s):
r"[\{\}\*\=\|\(\)\,\?\:]"
self.rv.append(Token(s, self.lineno))
def t_comment(self, s):
r"\-\-[^\n]*"
pass
def t_newline(self, s):
r"\n"
self.lineno += 1
def t_whitespace(self, s):
r"[ \t]+"
pass
def t_default(self, s):
r" . +"
raise ValueError("unmatched input: %r" % s)
class ASDLParser(spark.GenericParser, object):
def __init__(self):
super(ASDLParser, self).__init__("module")
def typestring(self, tok):
return tok.type
def error(self, tok):
raise ASDLSyntaxError(tok.lineno, tok)
def p_module_0(self, xxx_todo_changeme):
" module ::= Id Id version { } "
(module, name, version, _0, _1) = xxx_todo_changeme
if module.value != "module":
raise ASDLSyntaxError(module.lineno,
msg="expected 'module', found %s" % module)
return Module(name, None, version)
def p_module(self, xxx_todo_changeme1):
" module ::= Id Id version { definitions } "
(module, name, version, _0, definitions, _1) = xxx_todo_changeme1
if module.value != "module":
raise ASDLSyntaxError(module.lineno,
msg="expected 'module', found %s" % module)
return Module(name, definitions, version)
def p_version(self, xxx_todo_changeme2):
"version ::= Id String"
(version, V) = xxx_todo_changeme2
if version.value != "version":
raise ASDLSyntaxError(version.lineno,
msg="expected 'version', found %" % version)
return V
def p_definition_0(self, xxx_todo_changeme3):
" definitions ::= definition "
(definition,) = xxx_todo_changeme3
return definition
def p_definition_1(self, xxx_todo_changeme4):
" definitions ::= definition definitions "
(definitions, definition) = xxx_todo_changeme4
return definitions + definition
def p_definition(self, xxx_todo_changeme5):
" definition ::= Id = type "
(id, _, type) = xxx_todo_changeme5
return [Type(id, type)]
def p_type_0(self, xxx_todo_changeme6):
" type ::= product "
(product,) = xxx_todo_changeme6
return product
def p_type_1(self, xxx_todo_changeme7):
" type ::= sum "
(sum,) = xxx_todo_changeme7
return Sum(sum)
def p_type_2(self, xxx_todo_changeme8):
" type ::= sum Id ( fields ) "
(sum, id, _0, attributes, _1) = xxx_todo_changeme8
if id.value != "attributes":
raise ASDLSyntaxError(id.lineno,
msg="expected attributes, found %s" % id)
if attributes:
attributes.reverse()
return Sum(sum, attributes)
def p_product(self, xxx_todo_changeme9):
" product ::= ( fields ) "
(_0, fields, _1) = xxx_todo_changeme9
fields.reverse()
return Product(fields)
def p_sum_0(self, xxx_todo_changeme10):
" sum ::= constructor "
(constructor,) = xxx_todo_changeme10
return [constructor]
def p_sum_1(self, xxx_todo_changeme11):
" sum ::= constructor | sum "
(constructor, _, sum) = xxx_todo_changeme11
return [constructor] + sum
def p_sum_2(self, xxx_todo_changeme12):
" sum ::= constructor | sum "
(constructor, _, sum) = xxx_todo_changeme12
return [constructor] + sum
def p_constructor_0(self, xxx_todo_changeme13):
" constructor ::= Id "
(id,) = xxx_todo_changeme13
return Constructor(id)
def p_constructor_1(self, xxx_todo_changeme14):
" constructor ::= Id ( fields ) "
(id, _0, fields, _1) = xxx_todo_changeme14
fields.reverse()
return Constructor(id, fields)
def p_fields_0(self, xxx_todo_changeme15):
" fields ::= field "
(field,) = xxx_todo_changeme15
return [field]
def p_fields_1(self, xxx_todo_changeme16):
" fields ::= field , fields "
(field, _, fields) = xxx_todo_changeme16
return fields + [field]
def p_field_0(self, xxx_todo_changeme17):
" field ::= Id "
(type,) = xxx_todo_changeme17
return Field(type)
def p_field_1(self, xxx_todo_changeme18):
" field ::= Id Id "
(type, name) = xxx_todo_changeme18
return Field(type, name)
def p_field_2(self, xxx_todo_changeme19):
" field ::= Id * Id "
(type, _, name) = xxx_todo_changeme19
return Field(type, name, seq=True)
def p_field_3(self, xxx_todo_changeme20):
" field ::= Id ? Id "
(type, _, name) = xxx_todo_changeme20
return Field(type, name, opt=True)
def p_field_4(self, xxx_todo_changeme21):
" field ::= Id * "
(type, _) = xxx_todo_changeme21
return Field(type, seq=True)
def p_field_5(self, xxx_todo_changeme22):
" field ::= Id ? "
(type, _) = xxx_todo_changeme22
return Field(type, opt=True)
builtin_types = ("identifier", "string", "int", "bool", "object")
# below is a collection of classes to capture the AST of an AST :-)
# not sure if any of the methods are useful yet, but I'm adding them
# piecemeal as they seem helpful
class AST(object):
pass # a marker class
class Module(AST):
def __init__(self, name, dfns, version):
self.name = name
self.dfns = dfns
self.version = version
self.types = {} # maps type name to value (from dfns)
for type in dfns:
self.types[type.name.value] = type.value
def __repr__(self):
return "Module(%s, %s)" % (self.name, self.dfns)
class Type(AST):
def __init__(self, name, value):
self.name = name
self.value = value
def __repr__(self):
return "Type(%s, %s)" % (self.name, self.value)
class Constructor(AST):
def __init__(self, name, fields=None):
self.name = name
self.fields = fields or []
def __repr__(self):
return "Constructor(%s, %s)" % (self.name, self.fields)
class Field(AST):
def __init__(self, type, name=None, seq=False, opt=False):
self.type = type
self.name = name
self.seq = seq
self.opt = opt
def __repr__(self):
if self.seq:
extra = ", seq=True"
elif self.opt:
extra = ", opt=True"
else:
extra = ""
if self.name is None:
return "Field(%s%s)" % (self.type, extra)
else:
return "Field(%s, %s%s)" % (self.type, self.name, extra)
class Sum(AST):
def __init__(self, types, attributes=None):
self.types = types
self.attributes = attributes or []
def __repr__(self):
if self.attributes is None:
return "Sum(%s)" % self.types
else:
return "Sum(%s, %s)" % (self.types, self.attributes)
class Product(AST):
def __init__(self, fields):
self.fields = fields
def __repr__(self):
return "Product(%s)" % self.fields
class VisitorBase(object):
def __init__(self, skip=False):
self.cache = {}
self.skip = skip
def visit(self, object, *args):
meth = self._dispatch(object)
if meth is None:
return
try:
meth(object, *args)
except Exception as err:
print(("Error visiting", repr(object)))
print(err)
traceback.print_exc()
# XXX hack
if hasattr(self, 'file'):
self.file.flush()
os._exit(1)
def _dispatch(self, object):
assert isinstance(object, AST), repr(object)
klass = object.__class__
meth = self.cache.get(klass)
if meth is None:
methname = "visit" + klass.__name__
if self.skip:
meth = getattr(self, methname, None)
else:
meth = getattr(self, methname)
self.cache[klass] = meth
return meth
class Check(VisitorBase):
def __init__(self):
super(Check, self).__init__(skip=True)
self.cons = {}
self.errors = 0
self.types = {}
def visitModule(self, mod):
for dfn in mod.dfns:
self.visit(dfn)
def visitType(self, type):
self.visit(type.value, str(type.name))
def visitSum(self, sum, name):
for t in sum.types:
self.visit(t, name)
def visitConstructor(self, cons, name):
key = str(cons.name)
conflict = self.cons.get(key)
if conflict is None:
self.cons[key] = name
else:
print(("Redefinition of constructor %s" % key))
print(("Defined in %s and %s" % (conflict, name)))
self.errors += 1
for f in cons.fields:
self.visit(f, key)
def visitField(self, field, name):
key = str(field.type)
l = self.types.setdefault(key, [])
l.append(name)
def visitProduct(self, prod, name):
for f in prod.fields:
self.visit(f, name)
def check(mod):
v = Check()
v.visit(mod)
for t in v.types:
        if t not in mod.types and t not in builtin_types:
v.errors += 1
uses = ", ".join(v.types[t])
print(("Undefined type %s, used in %s" % (t, uses)))
return not v.errors
def parse(file):
scanner = ASDLScanner()
parser = ASDLParser()
buf = open(file).read()
tokens = scanner.tokenize(buf)
try:
return parser.parse(tokens)
except ASDLSyntaxError as err:
print(err)
lines = buf.split("\n")
print((lines[err.lineno - 1])) # lines starts at 0, files at 1
if __name__ == "__main__":
import glob
import sys
if len(sys.argv) > 1:
files = sys.argv[1:]
else:
testdir = "tests"
files = glob.glob(testdir + "/*.asdl")
for file in files:
print(file)
mod = parse(file)
print(("module", mod.name))
print((len(mod.dfns), "definitions"))
if not check(mod):
print("Check failed")
else:
for dfn in mod.dfns:
print((dfn.type))
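The scanner's token patterns above can be exercised without spark by combining them into a single stdlib `re` master pattern. A sketch, not the actual ASDLScanner (group names are invented):

```python
import re

# The same token patterns ASDLScanner declares above, tried in order.
TOKEN_RES = [
    ("Comment", r"--[^\n]*"),
    ("String", r'"[^"]*"'),
    ("Id", r"[\w\.]+"),
    ("Punct", r"[\{\}\*\=\|\(\)\,\?\:]"),
    ("WS", r"[ \t\n]+"),
]
MASTER = re.compile("|".join("(?P<%s>%s)" % (n, p) for n, p in TOKEN_RES))

def scan(text):
    """Yield (kind, lexeme) pairs, skipping comments and whitespace."""
    for m in MASTER.finditer(text):
        if m.lastgroup not in ("Comment", "WS"):
            yield m.lastgroup, m.group()
```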
| shiquanwang/numba | numba/asdl/py2_7/asdl.py | Python | bsd-2-clause | 12,446 | ["VisIt"] | 7c8afa72702aef01df607225a1f29de4b454c9eea07846278ac0a9272b44ee4e |
#
# Copyright (C) 2006-2007 Cooper Street Innovations Inc.
# Charles Eidsness <charles@cooper-street.com>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
# 02110-1301, USA.
#
"""
This module provides the basic eispice devices.
Basic Devices:
L -- Inductor
C -- Capacitor
R -- Resistor
B -- Behaivioral
I -- Current Source
V -- Voltage Source
VI -- Voltage/Current Curve
G -- Voltage-Controlled Current Source
E -- Voltage-Controlled Voltage Source
F -- Current-Controlled Current Source
H -- Current-Controlled Voltage Source
T -- Simple Transmission Line
W -- Multi-Conductor Frequency Dependent T-Line
D -- Diode Model
PyB -- A Python Base Behaivorial Model
D -- Diode Model
Q -- BJT Model
Composite Models:
RealC -- Capacitor Model with ESR and ESL
"""
import warnings
from numpy import array
import units
import simulator_
import calc
import subckt
Current = 'i'
Voltage = 'v'
Capacitor = 'c'
Time = 'time'
GND = '0'
#-----------------------------------------------------------------------------#
# Passives #
#-----------------------------------------------------------------------------#
class L(simulator_.Inductor_):
"""Inductor Model
Example:
>>> import eispice
>>> cct = eispice.Circuit("Inductor Test")
>>> wave = eispice.Pulse(4, 8, '10n', '2n', '3n', '5n', '20n')
>>> cct.Ix = eispice.I(1, eispice.GND, 4, wave)
>>> cct.Lx = eispice.L(1, eispice.GND, '10n')
>>> cct.tran('0.5n', '100n')
>>> cct.check_v(1, 1.333333e+01, '1.85e-8')
True
>>> cct.check_v(1, -2.000000e+01, '1.05e-8')
True
"""
def __init__(self, pNode, nNode, L):
"""
Arguments:
pNode -- positive node name
nNode -- negative node name
L -- inductance in Henrys
"""
simulator_.Inductor_.__init__(self, str(pNode), str(nNode),
units.float(L))
class C(simulator_.Capacitor_):
"""Capacitor Model
Example:
>>> import eispice
>>> cct = eispice.Circuit("Capacitor Test")
>>> wave = eispice.Pulse(4, 8, '10n', '2n', '3n', '5n', '20n')
>>> cct.Vx = eispice.V(1, eispice.GND, 4, wave)
>>> cct.Cx = eispice.C(1, eispice.GND, '10n')
>>> cct.tran('0.5n', '100n')
>>> cct.check_i('Vx', 1.333333e+01, '1.85e-8')
True
>>> cct.check_i('Vx', -2.000000e+01, '1.05e-8')
True
"""
def __init__(self, pNode, nNode, C):
"""
Arguments:
pNode -- positive node name
nNode -- negative node name
C -- capacitance in Farads
"""
simulator_.Capacitor_.__init__(self, str(pNode), str(nNode),
units.float(C))
class R(simulator_.Resistor_):
"""Resistor Model
Example:
>>> import eispice
>>> cct = eispice.Circuit("Resistor Test")
>>> cct.Rv = eispice.R(1, eispice.GND, 324)
>>> cct.Vx = eispice.V(1, eispice.GND, 12.6)
>>> cct.op()
>>> cct.check_v(1, 1.26e+01)
True
>>> cct.check_i('Vx', -3.88889e-02)
True
"""
def __init__(self, pNode, nNode, R):
"""
Arguments:
pNode -- positive node name
nNode -- negative node name
R -- resistance in Ohms
"""
simulator_.Resistor_.__init__(self, str(pNode), str(nNode),
units.float(R))
class RealC(subckt.Subckt):
"""Capacitor Model that includes ESL and ESR.
Example:
>>> import eispice
>>> cct = eispice.Circuit("Real Capacitor Test")
>>> wave = eispice.Pulse(4, 8, '10n', '2n', '3n', '5n', '20n')
>>> cct.Vx = eispice.V(1, eispice.GND, 4, wave)
>>> cct.Cx = eispice.RealC(1, eispice.GND, '10n', '1n', '0.1')
>>> cct.tran('0.5n', '100n')
>>> cct.check_i('Vx', -4.832985, '1.85e-8')
True
>>> cct.check_i('Vx', -0.2574886, '1.05e-8')
True
"""
def __init__(self, pNode, nNode, cap, ESL, ESR):
"""
Arguments:
pNode -- positive node name
nNode -- negative node name
        cap -- capacitance in Farads
ESL -- Effective Series Inductance in Henrys
ESR -- Effective Series Resistance in Ohms
"""
self.C = C(pNode, self.node('n0'), cap)
self.R = R(self.node('n0'), self.node('n1'), ESR)
self.L = L(self.node('n1'), nNode, ESL)
#-----------------------------------------------------------------------------#
# Behavioral                                                                  #
#-----------------------------------------------------------------------------#
class B(simulator_.Behavioral_):
"""
    Behavioral Model
This is the most versatile device (and the easiest to abuse). This
device is similar to the B element in Spice3, i.e. it should be
possible to cut and paste B element equations in from spice3.
The equation string can consist of any of the following functions and
operators: abs(), acosh(), acos(), asinh(), asin(), atanh(), atan(),
cosh(), cos(), exp(), ln(), log(), sinh(), sin(), sqrt(), tan(), uramp(),
u(), +, -, *, /, ^.
The function "u" is the unit step function, with a value of one for
    arguments greater than zero and a value of zero for arguments less than
    zero. The function "uramp" is the integral of the unit step: for an input
    x, the value is zero if x is less than zero, or if x is greater than zero
    the value is x.
It is also possible to add a time variable to the B element equations using
the keyword time, i.e.
device.B(2, 0, device.Voltage, 'sin(2*3.14159*100e6*time)')
Will result in a sinewave input (though it will run slower than the V
element with a sin waveform input).
Examples:
1. Nonlinear Current Source
>>> import eispice
>>> cct = eispice.Circuit("Nonlinear Current Test")
>>> cct.Rv = eispice.R(1, eispice.GND, 10)
>>> cct.Vx = eispice.V(1, eispice.GND, 7)
>>> cct.Rb = eispice.R(2, eispice.GND, 3)
>>> cct.Bx = eispice.B(2, eispice.GND, eispice.Current, \
"4.739057e-04 * (uramp( v(2,0) --5.060000e+00)) " \
"+ sqrt(v(1)) / sinh(i(Vx))")
>>> cct.op()
>>> cct.check_v(1, 7)
True
>>> cct.check_v(2, 10.44121563)
True
>>> cct.check_i('Vx', -0.7)
True
2. Nonlinear Voltage Source
>>> import eispice
>>> cct = eispice.Circuit("Nonlinear Time Test")
>>> cct.Rb = eispice.R(2, eispice.GND, 3)
>>> cct.Bx = eispice.B(2, eispice.GND, eispice.Voltage, \
'sin(2*3.14159*100e6*time)')
>>> cct.tran('0.5n', '15n')
>>> cct.check_v(2, 9.486671194e-01, '3n')
True
>>> cct.check_v(2, 3.088685702e-01, '10.5n')
True
3. Nonlinear Capacitor
>>> import eispice
>>> cct = eispice.Circuit("Non-Linear Capacitor Test")
>>> wave = eispice.Pulse('1u', '10u', '10n', '5n', '3n', '5n', '50n')
>>> cct.Vc = eispice.V(3, 0, '1n', wave)
>>> cct.Vx = eispice.V(1, 0, 1, \
eispice.Pulse('1m', '10m', '10n', '2n', '3n', '5n', '20n'))
>>> cct.Cx = eispice.B(1, 0, eispice.Capacitor, 'v(3)*10')
>>> cct.tran('0.5n', '100n')
>>> cct.check_i('Vx', -1.810333469e+02, '12n')
True
>>> cct.check_i('Vx', -3.770187057e2, '71n')
True
"""
def __init__(self, pNode, nNode, type, equation):
"""
Arguments:
pNode -- positive node name
nNode -- negative node name
type -- either Voltage to create a voltage source, Current to create
a current source or Capacitor to create a non-linear Capacitor
equation -- string containing B Element equation
"""
simulator_.Behavioral_.__init__(self, str(pNode), str(nNode),
type, str(equation))
class PyB(simulator_.CallBack_):
"""
    Python-Based Behavioural Model (Call-Back Model)
    This device is intended as an alternative to the spice3-like B-Element.
    It provides an inheritable class that can be used to define Python-based
    behavioural models. The constructor takes as arguments the positive node,
    the negative node, the type (Voltage or Current), and a list of node
    voltages, source currents, or 'Time' to be passed to the callback method,
    model, which should be redefined as part of the new device class.
Examples:
1. Nonlinear Current Source
>>> import eispice
>>> class MyDevice(eispice.PyB):
... def __init__(self, pNode, nNode):
... eispice.PyB.__init__(self, pNode, nNode, eispice.Current, \
self.v(pNode))
... def model(self, vP):
... return 2*vP
>>> cct = eispice.Circuit("Call-Back Current Test")
>>> cct.Vx = eispice.V(1, 0, 4)
>>> cct.PyBx = MyDevice(1, 0)
>>> cct.op()
>>> cct.check_i('Vx', -8.0)
True
2. Nonlinear Voltage Source
>>> import eispice
>>> class MyDevice(eispice.PyB):
... def __init__(self, pNode, nNode):
... eispice.PyB.__init__(self, pNode, nNode, eispice.Current,
... self.v(pNode), self.v(nNode), eispice.Time)
... def model(self, vP, vN, time):
... if time > 10e-9:
... return 2 * (vP - vN)
... else:
... return 0.0
>>> cct = eispice.Circuit("Call-Back Time Test")
>>> cct.Vx = eispice.V(1, 0, 4)
>>> cct.PyBx = MyDevice(1, 0)
>>> cct.tran('1n', '25n')
>>> cct.check_i('Vx', 0, '2n')
True
>>> cct.check_i('Vx', -8, '20n')
True
3. Simple CMOS Model
>>> import eispice
>>> cct = eispice.Circuit('PyB Defined Behavioral MOS Divider Test')
>>> class PMOS(eispice.PyB):
... def __init__(self, d, g, s, k=2.0e-6, w=2, l=1, power=2.0):
... eispice.PyB.__init__(self, d, s, eispice.Current, self.v(g), \
self.v(s))
... self.Vt = 0.7
... self.beta = k * (w/l)
... self.power = power
... def model(self, Vg, Vs):
... if ((Vs - Vg) > self.Vt):
... return -0.5* self.beta * (Vs - Vg - self.Vt)**self.power
... else :
... return 0.0
>>> class NMOS(eispice.PyB):
... def __init__(self, d, g, s, k=5.0e-6, w=2, l=1, power=2.0):
... eispice.PyB.__init__(self, d, s, eispice.Current, self.v(g), \
self.v(s))
... self.Vt = 0.7
... self.beta = k * (w/l)
... self.power = power
... def model(self, Vg, Vs):
... if ((Vg - Vs) > self.Vt):
... return 0.5* self.beta * (Vg - Vs - self.Vt)**self.power
... else :
... return 0.0
>>> cct.Vcc = eispice.V('vcc', eispice.GND, 3.3)
>>> cct.Mx = PMOS('vg', 'vg', 'vcc', k=2.0e-6)
>>> cct.My = NMOS('vg', 'vg', eispice.GND, k=5.0e-6)
>>> cct.Rl = eispice.R('vg', eispice.GND, '10G')
>>> cct.op()
>>> cct.check_v('vg', 1.436097)
True
"""
def __init__(self, pNode, nNode, type, *variables):
"""
Arguments:
pNode -- positive node name
nNode -- negative node name
type -- either Voltage to create a voltage source or Current
to create a current source
*variables -- remaining arguments are a list of all of the
voltage nodes and current probes that will be passed back
to the model method (in the same order)
"""
simulator_.CallBack_.__init__(self, str(pNode), str(nNode),
type, variables, self.callBack)
self.args = []
for variable in variables:
self.args.append(calc.Variable())
self.range = list(range(len(variables)))
def callBack(self, data, derivs):
"""Wrapper around the low-level PyB call-back."""
for i in self.range:
self.args[i](data[i])
result = self.model(*self.args)
if not isinstance(result, calc.Variable):
for i in self.range:
derivs[i] = 0.0
else:
for i in self.range:
derivs[i] = result[self.args[i]]
return(float(result))
def model(self, *args):
"""This should be redefined when this class is inherited."""
warnings.warn('PyB model has not been defined.')
return 0.0
def v(self, node):
"""Returns the name of a node used within the simulator."""
return('v(%s)' % str(node))
def i(self, source):
"""Returns the name of a current probe within the simulator."""
return('i(%s)' % str(source))
#-----------------------------------------------------------------------------#
# Sources #
#-----------------------------------------------------------------------------#
class I(simulator_.CurrentSource_):
"""Current Source Model
Example (refer to the waveforms module for more examples):
>>> import eispice
>>> cct = eispice.Circuit("Current Pulse Test")
>>> cct.Ix = eispice.I(1, eispice.GND, 4, \
eispice.Pulse(4, 8, '10n', '2n', '3n', '5n', '20n'))
>>> cct.Vx = eispice.V(2, 1, 0)
>>> cct.Rx = eispice.R(2, eispice.GND, 10)
>>> cct.tran('1n', '20n')
>>> cct.check_i('Vx', 4, '5n')
True
>>> cct.check_i('Vx', 8, '15n')
True
"""
def __init__(self, pNode, nNode, dcValue=0.0, wave=None):
"""
Arguments:
pNode -- positive node name
nNode -- negative node name
dcValue -- DC Value in Amps
wave -- (optional) waveform
"""
if wave is None:
simulator_.CurrentSource_.__init__(self, str(pNode), str(nNode),
units.float(dcValue))
else:
simulator_.CurrentSource_.__init__(self, str(pNode), str(nNode),
units.float(dcValue), wave)
class V(simulator_.VoltageSource_):
"""Voltage Source Model
Example (refer to the waveforms module for more examples):
>>> import eispice
>>> cct = eispice.Circuit("Voltage Pulse Test")
>>> cct.Ix = eispice.V(1, eispice.GND, 4, \
eispice.Pulse(4, 8, '10n', '2n', '3n', '5n', '20n'))
>>> cct.Rx = eispice.R(1, eispice.GND, 10)
>>> cct.tran('1n', '20n')
>>> cct.check_v(1, 4, '5n')
True
>>> cct.check_v(1, 8, '15n')
True
"""
def __init__(self, pNode, nNode, dcValue=0.0, wave=None):
"""
Arguments:
pNode -- positive node name
nNode -- negative node name
dcValue -- DC Value in Volts
wave -- (optional) waveform
"""
if wave is None:
simulator_.VoltageSource_.__init__(self, str(pNode), str(nNode),
units.float(dcValue))
else:
simulator_.VoltageSource_.__init__(self, str(pNode), str(nNode),
units.float(dcValue), wave)
class VI(simulator_.VICurve_):
"""Voltage/Current (VI) Curve Model
Example:
>>> import eispice
>>> cct = eispice.Circuit("VI Curve Test")
>>> data = array([[-10, -10],[-5, -2],[0, 1],[1, 3],[5, 8], \
[10, 10],[12, 8]])
>>> cct.Vx = eispice.V(1, 0, 10)
>>> cct.VIx = eispice.VI(1, 0, eispice.PWL(data))
>>> cct.op()
>>> cct.check_i('VIx', 10)
True
>>> cct.Vx.DC = -5
>>> cct.op()
>>> cct.check_i('VIx', -2)
True
"""
def __init__(self, pNode, nNode, vi, ta=None):
"""
Arguments:
pNode -- positive node name
nNode -- negative node name
vi -- PWL or PWC waveform defining the VI curve
ta -- (optional) PWL or PWC waveform defining a time/multiplier
curve; at each time point the VI curve is scaled by the respective A
"""
if ta is None:
simulator_.VICurve_.__init__(self, str(pNode), str(nNode), vi)
else:
simulator_.VICurve_.__init__(self, str(pNode), str(nNode), vi, ta)
class G(simulator_.Behavioral_):
"""Voltage-Controlled Current Source
Example:
>>> import eispice
>>> cct = eispice.Circuit("VCCS Test")
>>> cct.Vx = eispice.V(1, 0, 3.2)
>>> cct.Iy = eispice.G(2, 0, 1, 0, 2)
>>> cct.Ry = eispice.R(2, 3, 4.1)
>>> cct.Vy = eispice.V(3, 0, 0)
>>> cct.op()
>>> cct.check_i('Vy', -6.4)
True
"""
def __init__(self, pNode, nNode, pControlNode, nControlNode, value):
"""
Arguments:
pNode -- positive node name
nNode -- negative node name
pControlNode -- positive control node name
nControlNode -- negative control node name
value -- gain (Siemens)
"""
equation = ('v(%s,%s)*%e' % (str(pControlNode), str(nControlNode),
units.float(value)))
simulator_.Behavioral_.__init__(self, str(pNode), str(nNode),
Current, equation)
class E(simulator_.Behavioral_):
"""Voltage-Controlled Voltage Source
Example:
>>> import eispice
>>> cct = eispice.Circuit("VCVS Test")
>>> cct.Vx = eispice.V(1, 0, 3.2)
>>> cct.Vy = eispice.E(2, 0, 1, 0, 2.7)
>>> cct.Ry = eispice.R(2, 0, 4.1)
>>> cct.op()
>>> cct.check_v(2, 8.64)
True
"""
def __init__(self, pNode, nNode, pControlNode, nControlNode, value):
"""
Arguments:
pNode -- positive node name
nNode -- negative node name
pControlNode -- positive control node name
nControlNode -- negative control node name
value -- gain
"""
equation = ('v(%s,%s)*%e' % (str(pControlNode), str(nControlNode),
units.float(value)))
simulator_.Behavioral_.__init__(self, str(pNode), str(nNode),
Voltage, equation)
class F(simulator_.Behavioral_):
"""Current-Controlled Current Source
Example:
>>> import eispice
>>> cct = eispice.Circuit("CCCS Test")
>>> cct.Ix = eispice.I(1, 0, 3.2)
>>> cct.Rx = eispice.R(1, 4, 7.3)
>>> cct.Vx = eispice.V(4, 0, 0)
>>> cct.Iy = eispice.F(2, 0, 'Vx', 1.75)
>>> cct.Ry = eispice.R(2, 3, 6.2)
>>> cct.Vy = eispice.V(3, 0, 0)
>>> cct.op()
>>> cct.check_i('Vy', 5.6)
True
"""
def __init__(self, pNode, nNode, controlDevice, value):
"""
Arguments:
pNode -- positive node name
nNode -- negative node name
controlDevice -- name of control voltage source (string)
value -- gain
"""
equation = ('i(%s)*%e' % (str(controlDevice),
units.float(value)))
simulator_.Behavioral_.__init__(self, str(pNode), str(nNode),
Current, equation)
class H(simulator_.Behavioral_):
"""Current-Controlled Voltage Source
Example:
>>> import eispice
>>> cct = eispice.Circuit("CCVS Test")
>>> cct.Ix = eispice.I(1, 0, 2.7)
>>> cct.Rx = eispice.R(1, 4, 10.4)
>>> cct.Vx = eispice.V(4, 0, 0)
>>> cct.Vy = eispice.H(2, 0, 'Vx', 3.2)
>>> cct.Ry = eispice.R(2, 0, 3.7)
>>> cct.op()
>>> cct.check_v(2, -8.64)
True
"""
def __init__(self, pNode, nNode, controlDevice, value):
"""
Arguments:
pNode -- positive node name
nNode -- negative node name
controlDevice -- name of control voltage source (string)
value -- gain
"""
equation = ('i(%s)*%e' % (str(controlDevice),
units.float(value)))
simulator_.Behavioral_.__init__(self, str(pNode), str(nNode),
Voltage, equation)
#-----------------------------------------------------------------------------#
# Transmission Lines #
#-----------------------------------------------------------------------------#
class T(simulator_.TLine_):
"""Basic Transmission Line Model
Example:
>>> import eispice
>>> cct = eispice.Circuit("Simple Transmission Line Model Test")
>>> cct.Vx = eispice.V('vs',0, 0, eispice.Pulse(0, 1, '0n','1n','1n',\
'4n','8n'))
>>> cct.Rt = eispice.R('vs', 'vi', 50)
>>> cct.Tg = eispice.T('vi', 0, 'vo', 0, 50, '2n')
>>> cct.Cx = eispice.C('vo',0,'5p')
>>> cct.tran('0.01n', '10n')
>>> cct.check_v('vo', 0.3726765, '2.6n')
True
>>> cct.check_i('Vx', -0.008381298823, '4.62n')
True
"""
def __init__(self, pNodeLeft, nNodeLeft, pNodeRight, nNodeRight, Z0, Td,
loss=None):
"""
Arguments:
pNodeLeft -- positive node on the left side to tline
nNodeLeft -- negative node on the left side to tline
pNodeRight -- positive node on the right side to tline
nNodeRight -- negative node on the right side to tline
Z0 -- characteristic impedance in Ohms
Td -- time delay in seconds
loss -- (optional) loss factor times length (unitless)
"""
if loss is None:
simulator_.TLine_.__init__(self, str(pNodeLeft),
str(nNodeLeft), str(pNodeRight), str(nNodeRight),
units.float(Z0), units.float(Td))
else:
simulator_.TLine_.__init__(self, str(pNodeLeft),
str(nNodeLeft), str(pNodeRight), str(nNodeRight),
units.float(Z0), units.float(Td), units.float(loss))
class W(simulator_.TLineW_):
"""
W-Element Transmission Line Model
An RLGC-matrix-defined, coupled, frequency-dependent transmission-line
model; it should be roughly equivalent to the W-Element in HSPICE.
"""
def __init__(self, iNodes, iRef, oNodes, oRef, length, R0, L0, C0,
G0=None, Rs=None, Gd=None, fgd=1e100, fK=1e9, M=6):
"""
Arguments:
iNodes -- list/tuple or single string of input nodes
iRef -- name of the input reference node
oNodes -- list/tuple or single string of output nodes
oRef -- name of the output reference node
length -- length of the t-line in meters
R0 -- DC resistance matrix or float from single T-Line (ohm/m)
L0 -- DC inductance matrix or float from single T-Line (H/m)
C0 -- DC capacitance matrix or float from single T-Line (F/m)
G0 -- (optional) DC shunt conductance matrix (S/m)
Rs -- (optional) Skin-effect resistance matrix (Ohm/m*sqrt(Hz))
Gd -- (optional) Dielectric-loss conductance matrix (S/m*Hz)
fgd -- (optional) Cut-Off for Dielectric loss (Hz)
fK -- (optional) Cut-Off for T-Line Model (Hz)
M -- (optional) Order of Approximation of Curve Fit (unitless)
"""
warnings.warn("W-Element Model not complete.")
# Makes it possible to send a single string for a non-coupled T-Line
if isinstance(iNodes, str):
iNodes = (iNodes, )
if isinstance(oNodes, str):
oNodes = (oNodes, )
if len(iNodes) != len(oNodes):
raise RuntimeError("Must have the same number of input and output nodes.")
nodes = len(iNodes)
# Do some parameter checking here; it's easier than doing it
# in C. Also create zeroed matrices if G0, Rs and/or Gd aren't defined.
if nodes > 1:
if G0 is None:
G0 = array(zeros((nodes, nodes)))
if Rs is None:
Rs = array(zeros((nodes, nodes)))
if Gd is None:
Gd = array(zeros((nodes, nodes)))
R0 = array(units.floatList2D(R0))
L0 = array(units.floatList2D(L0))
C0 = array(units.floatList2D(C0))
G0 = array(units.floatList2D(G0))
Rs = array(units.floatList2D(Rs))
Gd = array(units.floatList2D(Gd))
if (R0.shape[0] != nodes) or (R0.shape[1] != nodes):
raise RuntimeError("R0 must be a square matrix with as many rows as nodes.")
if (L0.shape[0] != nodes) or (L0.shape[1] != nodes):
raise RuntimeError("L0 must be a square matrix with as many rows as nodes.")
if (C0.shape[0] != nodes) or (C0.shape[1] != nodes):
raise RuntimeError("C0 must be a square matrix with as many rows as nodes.")
if (G0.shape[0] != nodes) or (G0.shape[1] != nodes):
raise RuntimeError("G0 must be a square matrix with as many rows as nodes.")
if (Rs.shape[0] != nodes) or (Rs.shape[1] != nodes):
raise RuntimeError("Rs must be a square matrix with as many rows as nodes.")
if (Gd.shape[0] != nodes) or (Gd.shape[1] != nodes):
raise RuntimeError("Gd must be a square matrix with as many rows as nodes.")
else:
if G0 is None:
G0 = 0.0
if Rs is None:
Rs = 0.0
if Gd is None:
Gd = 0.0
R0 = array((units.float(R0),))
L0 = array((units.float(L0),))
C0 = array((units.float(C0),))
G0 = array((units.float(G0),))
Rs = array((units.float(Rs),))
Gd = array((units.float(Gd),))
nodeNames = tuple([str(i) for i in iNodes] + [str(iRef)]
+ [str(i) for i in oNodes] + [str(oRef)])
simulator_.TLineW_.__init__(self, nodeNames, int(M),
units.float(length), L0, C0, R0, G0, Rs, Gd, units.float(fgd),
units.float(fK))
#-----------------------------------------------------------------------------#
# Semiconductors #
#-----------------------------------------------------------------------------#
class D(subckt.Subckt):
"""
Diode Model
A Berkeley spice3f5 compatible Junction Diode Model.
Example:
>>> import eispice
>>> cct = eispice.Circuit("Diode Test")
>>> wave = eispice.Pulse(0, 1, '.5u', '5u', '5u', '.5u')
>>> cct.Vx = eispice.V(1, eispice.GND, 0, wave)
>>> cct.Dx = eispice.D(1, eispice.GND, IS='2.52n', RS=0.568, N=1.752, \
CJO='4p', M=0.4, TT='20n')
>>> cct.tran('0.5u', '10u')
>>> cct.check_i('Vx', -7.326775239e-02, '4.6u')
True
>>> cct.check_i('Vx', -1.781444540e-01, '6.4u')
True
"""
def __init__(self, pNode, nNode, area=1.0,
IS=1.0e-14, RS=0, N=1, TT=0, CJO=0, VJ=1, M=0.5, EG=1.11, XTI=3.0,
KF=0, AF=1, FC=0.5, BV=1e100, IBV=1e-3, TNOM=27):
"""
Arguments:
pNode -- positive node name
nNode -- negative node name
area -- area factor (for spice3f5 compatibility) -- default = 1.0
IS -- saturation current (A) -- default = 1.0e-14
RS -- ohmic resistance (Ohms) -- default = 0
N -- emission coefficient -- default = 1
TT -- transit-time (sec) -- default = 0
CJO -- zero-bias junction capacitance (F) -- default = 0
VJ -- junction potential (V) -- default = 1
M -- grading coefficient -- default = 0.5
EG -- reserved for possible future use
XTI -- reserved for possible future use
KF -- reserved for possible future use
AF -- reserved for possible future use
FC -- Coefficient for Cd formula -- default = 0.5
BV -- reverse breakdown voltage (V) -- default = 1e100
IBV -- current at breakdown voltage (A) -- default = 1e-3
TNOM -- parameter measurement temperature (degC) -- default = 27
"""
pNode = str(pNode)
nNode = str(nNode)
area = units.float(area)
IS = units.float(IS)
RS = units.float(RS)
N = units.float(N)
TT = units.float(TT)
CJO = units.float(CJO)
VJ = units.float(VJ)
M = units.float(M)
BV = units.float(BV)
IBV = units.float(IBV)
TNOM = units.float(TNOM)
# Parasitic Resistance
if RS != 0:
self.Rs = R(pNode, self.node('r'), area*RS)
pNode = self.node('r')
# Local Variables
k = 1.3806503e-23 # Boltzmann's Constant (1/JK)
q = 1.60217646e-19 # Electron Charge (C)
Vt = ((k*(TNOM+273.15))/q) # Thermal Voltage
# Saturation and Breakdown Current
Is = ('(if(v(%s,%s)>=%e)*(%e*(exp(v(%s,%s)/%e)-1)))' %
(pNode, nNode, -BV, area*IS, pNode, nNode, N*Vt))
Ib = ('(if(v(%s,%s)<%e)*(%e*(exp((-%e-v(%s,%s))/%e)-1)))' %
(pNode, nNode, -BV, area*IS, BV, pNode, nNode,Vt))
self.Isb = B(pNode, nNode, Current, Is + '+' + Ib)
# Junction (Depletion) and Diffusion Capacitance
Cd = '(%e*(exp(v(%s,%s)/%e)))' % (area*TT*IS/(N*Vt), pNode, nNode, N*Vt)
Cj1 = ('(if(v(%s,%s)<%e)*(%e/((1-v(%s,%s)/%e)^%e)))' %
(pNode, nNode, FC*VJ, area*CJO, pNode, nNode, VJ, M))
Cj2 = ('(if(v(%s,%s)>=%e)*%e*(1-%e+%e*v(%s,%s)))' %
(pNode, nNode, FC*VJ, (area*CJO/((1-FC)**(M+1))),
FC*(1+M), (M/VJ), pNode, nNode))
self.Cjd = B(pNode, nNode, Capacitor, Cd + '+' + Cj1 + '+' + Cj2)
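# For reference, the thermal voltage Vt = k*T/q and the ideal (Shockley)
# diode law that the Is expression above encodes can be sketched standalone.
# This is an illustrative helper, not part of the eispice API:

```python
import math

# Constants matching the diode model above.
k = 1.3806503e-23   # Boltzmann's constant (J/K)
q = 1.60217646e-19  # electron charge (C)

def thermal_voltage(tnom_c=27.0):
    """Thermal voltage in volts at temperature tnom_c (degrees Celsius)."""
    return k * (tnom_c + 273.15) / q

def diode_current(v, IS=1.0e-14, N=1.0, tnom_c=27.0):
    """Ideal (Shockley) diode current in amps for a junction voltage v."""
    return IS * (math.exp(v / (N * thermal_voltage(tnom_c))) - 1.0)
```

At the default TNOM of 27 degC this gives Vt of roughly 25.9 mV, the value
the saturation-current expressions above scale against.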
class Q(subckt.Subckt):
"""BJT Model
A Berkeley spice3f5 compatible BJT Model.
"""
def __init__(self, cNode, bNode, eNode, sNode=GND, area=1.0,
IS=1.0e-16, BF=100, NF=1.0, VAF=1e100, IKF=1e100, ISE=0, NE=1.5,
BR=1, NR=1, VAR=1e100, IKR=1e100, ISC=0, NC=2, RB=0, IRB=1e100,
RBM=None, RE=0, RC=0, CJE=0, VJE=0.75, MJE=0.33, TF=0, XTF=0,
VTF=1e100, ITF=0, PTF=0, CJC=0, VJC=0.75, MJC=0.33, XCJC=1,
TR=0, CJS=0, VJS=0.75, MJS=0, XTB=0, EG=1.11, XTI=3, KF=0, AF=1,
FC=0.5, TNOM=27):
"""
Arguments:
cNode -- collector node name
bNode -- base node name
eNode -- emitter node name
sNode -- substrate node name -- default = GND
area -- area factor (for spice3f5 compatibility) -- default = 1.0
"""
cNode = str(cNode)
bNode = str(bNode)
eNode = str(eNode)
sNode = str(sNode)
area = units.float(area)
IS = units.float(IS)
BF = units.float(BF)
NF = units.float(NF)
VAF = units.float(VAF)
IKF = units.float(IKF)
ISE = units.float(ISE)
NE = units.float(NE)
BR = units.float(BR)
NR = units.float(NR)
VAR = units.float(VAR)
IKR = units.float(IKR)
ISC = units.float(ISC)
NC = units.float(NC)
RB = units.float(RB)
IRB = units.float(IRB)
if RBM is None:
RBM = RB
RBM = units.float(RBM)
RE = units.float(RE)
RC = units.float(RC)
CJE = units.float(CJE)
VJE = units.float(VJE)
MJE = units.float(MJE)
TF = units.float(TF)
XTF = units.float(XTF)
TR = units.float(TR)
CJS = units.float(CJS)
VJS = units.float(VJS)
MJS = units.float(MJS)
XTB = units.float(XTB)
EG = units.float(EG)
XTI = units.float(XTI)
KF = units.float(KF)
AF = units.float(AF)
FC = units.float(FC)
TNOM = units.float(TNOM)
# Local Variables
k = 1.3806503e-23 # Boltzmann's Constant (1/JK)
q = 1.60217646e-19 # Electron Charge (C)
Vt = ((k*(TNOM+273.15))/q) # Thermal Voltage
# Parasitic Resistance
if RE != 0:
self.Re = R(eNode, self.node('re'), area*RE)
eNode = self.node('re')
if RC != 0:
self.Rc = R(cNode, self.node('rc'), area*RC)
cNode = self.node('rc')
if RB != 0:
self.Rb = R(bNode, self.node('rb'), area*RB)
bNode = self.node('rb')
# Saturation and Breakdown Current
#~ Ibe = '(%e*(exp(v(%s,%s)/%e)-1))' % (IS/BF, rNode, nNode, NF*Vt)
#~ self.Isb = B(rNode, nNode, Current, Is + '+' + Ib)
warnings.warn("BJT Model not complete.")
if __name__ == '__main__':
import doctest
doctest.testmod(verbose=False)
print('Testing Complete')
|
Narrat/python3-eispice
|
module/device.py
|
Python
|
gpl-2.0
| 32,675
|
[
"xTB"
] |
55dbb245ff11f81d270af10765ae8dcfd86d52c53b06edf3badd97208997e77e
|
'''
Created on Jun 2, 2011
@author: mkiyer
chimerascan: chimeric transcript discovery using RNA-seq
Copyright (C) 2011 Matthew Iyer
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
'''
import operator
import collections
import pysam
from seq import DNA_reverse_complement
#
# constants used for CIGAR alignments
#
CIGAR_M = 0 #match Alignment match (can be a sequence match or mismatch)
CIGAR_I = 1 #insertion Insertion to the reference
CIGAR_D = 2 #deletion Deletion from the reference
CIGAR_N = 3 #skip Skipped region from the reference
CIGAR_S = 4 #softclip Soft clip on the read (clipped sequence present in <seq>)
CIGAR_H = 5 #hardclip Hard clip on the read (clipped sequence NOT present in <seq>)
CIGAR_P = 6 #padding Padding (silent deletion from the padded reference sequence)
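# As an illustration of how these codes appear in practice, pysam represents
# a CIGAR string such as "5S90M2D3M" as a list of (operation, length) tuples.
# The small standalone helper below (not part of this module) sums the
# reference-consuming operations:

```python
# Reference-consuming CIGAR operations: M (match), D (deletion), N (skip).
# S, H, I and P consume no reference bases.
CIGAR_M, CIGAR_D, CIGAR_N = 0, 2, 3

def reference_span(cigar):
    """Number of reference bases consumed by a CIGAR tuple list."""
    return sum(length for op, length in cigar
               if op in (CIGAR_M, CIGAR_D, CIGAR_N))
```

For "5S90M2D3M", i.e. [(4, 5), (0, 90), (2, 2), (0, 3)], this yields 95.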
def parse_reads_by_qname(samfh):
"""
generator function to parse and return lists of
reads that share the same qname
"""
reads = []
for read in samfh:
if len(reads) > 0 and read.qname != reads[-1].qname:
yield reads
reads = []
reads.append(read)
if len(reads) > 0:
yield reads
def parse_pe_reads(bamfh):
"""
generator function to parse and return a tuple of
lists of reads
"""
pe_reads = ([], [])
# reads must be sorted by qname
num_reads = 0
prev_qname = None
for read in bamfh:
# get read attributes
qname = read.qname
readnum = 1 if read.is_read2 else 0
# if query name changes we have completely finished
# the fragment and can reset the read data
if num_reads > 0 and qname != prev_qname:
yield pe_reads
# reset state variables
pe_reads = ([], [])
num_reads = 0
pe_reads[readnum].append(read)
prev_qname = qname
num_reads += 1
if num_reads > 0:
yield pe_reads
def group_read_pairs(pe_reads):
"""
Given tuple of ([read1 reads],[read2 reads]) paired-end read alignments
return mate-pairs and unpaired reads
"""
# group paired reads
paired_reads = ([],[])
unpaired_reads = ([],[])
for rnum,reads in enumerate(pe_reads):
for r in reads:
if r.is_proper_pair:
paired_reads[rnum].append(r)
else:
unpaired_reads[rnum].append(r)
# check if we have at least one pair
pairs = []
if all((len(reads) > 0) for reads in paired_reads):
# index read1 by mate reference name and position
rdict = collections.defaultdict(lambda: collections.deque())
for r in paired_reads[0]:
rdict[(r.rnext,r.pnext)].append(r)
# iterate through read2 and get mate pairs
for r2 in paired_reads[1]:
r1 = rdict[(r2.tid,r2.pos)].popleft()
pairs.append((r1,r2))
return pairs, unpaired_reads
def select_best_scoring_pairs(pairs):
"""
return the set of read pairs (provided as a list of tuples) with
the highest summed alignment score
"""
if len(pairs) == 0:
return []
# gather alignment scores for each pair
pair_scores = [(pair[0].opt('AS') + pair[1].opt('AS'), pair) for pair in pairs]
pair_scores.sort(key=operator.itemgetter(0), reverse=True)
best_score = pair_scores[0][0]
best_pairs = [pair_scores[0][1]]
for score,pair in pair_scores[1:]:
if score < best_score:
break
best_pairs.append(pair)
return best_pairs
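# The intent of select_best_scoring_pairs -- keep every pair tied at the
# maximum summed alignment score -- can be sketched with plain numbers
# instead of pysam reads. This is a hypothetical standalone helper:

```python
def best_scoring(pair_scores):
    """pair_scores: list of (score1, score2) tuples; return all max-sum ties."""
    if not pair_scores:
        return []
    best = max(s1 + s2 for s1, s2 in pair_scores)
    # Keep every pair whose summed score equals the best.
    return [(s1, s2) for s1, s2 in pair_scores if s1 + s2 == best]
```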
def select_primary_alignments(reads):
"""
return only reads that lack the secondary alignment bit
"""
if len(reads) == 0:
return []
# separate unmapped reads from primary alignments
unmapped_reads = []
primary_reads = []
for r in reads:
if r.is_unmapped:
unmapped_reads.append(r)
elif not r.is_secondary:
primary_reads.append(r)
if len(primary_reads) == 0:
assert len(unmapped_reads) > 0
return unmapped_reads
return primary_reads
def select_best_mismatch_strata(reads, mismatch_tolerance=0):
if len(reads) == 0:
return []
# sort reads by number of mismatches
mapped_reads = []
unmapped_reads = []
for r in reads:
if r.is_unmapped:
unmapped_reads.append(r)
else:
mapped_reads.append((r.opt('NM'), r))
if len(mapped_reads) == 0:
return unmapped_reads
sorted_reads = sorted(mapped_reads, key=operator.itemgetter(0))
best_nm = sorted_reads[0][0]
worst_nm = sorted_reads[-1][0]
sorted_reads.extend((worst_nm+1, r) for r in unmapped_reads)
# choose reads within a certain mismatch tolerance
best_reads = []
for mismatches, r in sorted_reads:
if mismatches > (best_nm + mismatch_tolerance):
break
best_reads.append(r)
return best_reads
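# The mismatch-stratum selection above can be exercised without BAM data
# using minimal stand-ins for pysam.AlignedRead. StubRead and best_strata
# below are hypothetical illustrations, not part of chimerascan:

```python
class StubRead:
    """Minimal stand-in exposing the attributes the selector needs."""
    def __init__(self, nm, unmapped=False):
        self.nm = nm
        self.is_unmapped = unmapped
    def opt(self, tag):
        assert tag == 'NM'
        return self.nm

def best_strata(reads, tolerance=0):
    """Keep mapped reads within `tolerance` mismatches of the best NM."""
    mapped = sorted(((r.opt('NM'), r) for r in reads if not r.is_unmapped),
                    key=lambda t: t[0])
    if not mapped:
        # all reads unmapped: return them unchanged
        return [r for r in reads if r.is_unmapped]
    best_nm = mapped[0][0]
    return [r for nm, r in mapped if nm <= best_nm + tolerance]
```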
def copy_read(r):
a = pysam.AlignedRead()
a.qname = r.qname
a.seq = r.seq
a.flag = r.flag
a.tid = r.tid
a.pos = r.pos
a.mapq = r.mapq
a.cigar = r.cigar
a.rnext = r.rnext
a.pnext = r.pnext
a.isize = r.isize
a.qual = r.qual
a.tags = list(r.tags)
return a
def soft_pad_read(fq, r):
"""
'fq' is the fastq record
'r' is the AlignedRead SAM record
"""
# make sequence soft clipped
ext_length = len(fq.seq) - len(r.seq)
cigar_softclip = [(CIGAR_S, ext_length)]
cigar = r.cigar
# reconstitute full length sequence in read
if r.is_reverse:
seq = DNA_reverse_complement(fq.seq)
qual = fq.qual[::-1]
if (cigar is not None) and (ext_length > 0):
cigar = cigar_softclip + cigar
else:
seq = fq.seq
qual = fq.qual
if (cigar is not None) and (ext_length > 0):
cigar = cigar + cigar_softclip
# replace read field
r.seq = seq
r.qual = qual
r.cigar = cigar
def pair_reads(r1, r2, tags=None):
'''
fill in paired-end fields in SAM record
'''
if tags is None:
tags = []
# convert read1 to paired-end
r1.is_paired = True
r1.is_proper_pair = True
r1.is_read1 = True
r1.mate_is_reverse = r2.is_reverse
r1.mate_is_unmapped = r2.is_unmapped
r1.rnext = r2.tid
r1.pnext = r2.pos
tags1 = collections.OrderedDict(r1.tags)
tags1.update(tags)
r1.tags = list(tags1.items())
# convert read2 to paired-end
r2.is_paired = True
r2.is_proper_pair = True
r2.is_read2 = True
r2.mate_is_reverse = r1.is_reverse
r2.mate_is_unmapped = r1.is_unmapped
r2.rnext = r1.tid
r2.pnext = r1.pos
tags2 = collections.OrderedDict(r2.tags)
tags2.update(tags)
r2.tags = list(tags2.items())
# compute insert size
if r1.tid != r2.tid:
r1.isize = 0
r2.isize = 0
elif r1.pos > r2.pos:
isize = r1.aend - r2.pos
r1.isize = -isize
r2.isize = isize
else:
isize = r2.aend - r1.pos
r1.isize = isize
r2.isize = -isize
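# The insert-size (TLEN) sign convention applied above can be checked in
# isolation: the leftmost mate gets a positive template length, the
# rightmost a negative one, and mates on different references get zero.
# This standalone sketch mirrors the arithmetic with plain integers:

```python
def template_lengths(tid1, pos1, aend1, tid2, pos2, aend2):
    """Return (isize1, isize2) for two mates given tid/pos/aend each."""
    if tid1 != tid2:
        # mates on different references: no defined template length
        return 0, 0
    if pos1 > pos2:
        isize = aend1 - pos2
        return -isize, isize
    isize = aend2 - pos1
    return isize, -isize
```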
def get_clipped_interval(r):
cigar = r.cigar
padstart, padend = r.pos, r.aend
if len(cigar) > 1:
if (cigar[0][0] == CIGAR_S or
cigar[0][0] == CIGAR_H):
padstart -= cigar[0][1]
if (cigar[-1][0] == CIGAR_S or
cigar[-1][0] == CIGAR_H):
padend += cigar[-1][1]
return padstart, padend
|
madhavsuresh/chimerascan
|
chimerascan/deprecated/sam_v2.py
|
Python
|
gpl-3.0
| 7,926
|
[
"pysam"
] |
c611214d84bae92fcc214a260404c1d8a087d24b1543f572def95239eb302695
|
# -*- coding: utf-8 -*-
# Generated by Django 1.11.13 on 2018-06-28 09:26
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('foirequest', '0020_auto_20180605_1053'),
]
operations = [
migrations.AddField(
model_name='foimessage',
name='kind',
field=models.CharField(choices=[('email', 'Email'), ('post', 'Postal mail'), ('fax', 'Fax'), ('phone', 'Phone call'), ('visit', 'Personal visit')], default='email', max_length=10),
),
]
|
stefanw/froide
|
froide/foirequest/migrations/0021_foimessage_kind.py
|
Python
|
mit
| 597
|
[
"VisIt"
] |
152d8fd175948b112aa477120d6237ce0b1afb295292a653656d4ad911763f42
|
# Copyright (C) 2012,2013
# Max Planck Institute for Polymer Research
# Copyright (C) 2008,2009,2010,2011
# Max-Planck-Institute for Polymer Research & Fraunhofer SCAI
#
# This file is part of ESPResSo++.
#
# ESPResSo++ is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ESPResSo++ is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
***********************************
**espresso.interaction.FENECapped**
***********************************
"""
from espresso import pmi, infinity
from espresso.esutil import *
from espresso.interaction.Potential import *
from espresso.interaction.Interaction import *
from _espresso import interaction_FENECapped, interaction_FixedPairListFENECapped
class FENECappedLocal(PotentialLocal, interaction_FENECapped):
'The (local) FENECapped potential.'
def __init__(self, K=1.0, r0=0.0, rMax=1.0,
cutoff=infinity, caprad=1.0, shift=0.0):
"""Initialize the local FENE object."""
if not (pmi._PMIComm and pmi._PMIComm.isActive()) or pmi._MPIcomm.rank in pmi._PMIComm.getMPIcpugroup():
if shift == "auto":
cxxinit(self, interaction_FENECapped, K, r0, rMax, cutoff, caprad)
else:
cxxinit(self, interaction_FENECapped, K, r0, rMax, cutoff, caprad, shift)
class FixedPairListFENECappedLocal(InteractionLocal, interaction_FixedPairListFENECapped):
'The (local) FENECapped interaction using FixedPair lists.'
def __init__(self, system, vl, potential):
if not (pmi._PMIComm and pmi._PMIComm.isActive()) or pmi._MPIcomm.rank in pmi._PMIComm.getMPIcpugroup():
cxxinit(self, interaction_FixedPairListFENECapped, system, vl, potential)
def setPotential(self, potential):
if not (pmi._PMIComm and pmi._PMIComm.isActive()) or pmi._MPIcomm.rank in pmi._PMIComm.getMPIcpugroup():
self.cxxclass.setPotential(self, potential)
def getPotential(self):
if not (pmi._PMIComm and pmi._PMIComm.isActive()) or pmi._MPIcomm.rank in pmi._PMIComm.getMPIcpugroup():
return self.cxxclass.getPotential(self)
def setFixedPairList(self, fixedpairlist):
if not (pmi._PMIComm and pmi._PMIComm.isActive()) or pmi._MPIcomm.rank in pmi._PMIComm.getMPIcpugroup():
self.cxxclass.setFixedPairList(self, fixedpairlist)
def getFixedPairList(self):
if not (pmi._PMIComm and pmi._PMIComm.isActive()) or pmi._MPIcomm.rank in pmi._PMIComm.getMPIcpugroup():
return self.cxxclass.getFixedPairList(self)
if pmi.isController:
class FENECapped(Potential):
'The FENECapped potential.'
pmiproxydefs = dict(
cls = 'espresso.interaction.FENECappedLocal',
pmiproperty = ['K', 'r0', 'rMax', 'caprad']
)
class FixedPairListFENECapped(Interaction):
__metaclass__ = pmi.Proxy
pmiproxydefs = dict(
cls = 'espresso.interaction.FixedPairListFENECappedLocal',
pmicall = ['setPotential','getPotential','setFixedPairList', 'getFixedPairList']
)
|
BackupTheBerlios/espressopp
|
src/interaction/FENECapped.py
|
Python
|
gpl-3.0
| 3,595
|
[
"ESPResSo"
] |
676e9d96a7384b500955b6d3a6d058e2e61bb91c72af82345e11322f2a23a3f2
|
#!/usr/bin/env python
"""
Artificial Intelligence for Humans
Volume 2: Nature-Inspired Algorithms
Python Version
http://www.aifh.org
http://www.jeffheaton.com
Code repository:
https://github.com/jeffheaton/aifh
Copyright 2014 by Jeff Heaton
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
For more information on Heaton Research copyrights, licenses
and trademarks visit:
http://www.heatonresearch.com/copyright
"""
__author__ = 'jheaton'
try:
# for Python2
from Tkinter import *
from Tkinter import _flatten
except ImportError:
# for Python3
from tkinter import *
from tkinter import _flatten
import random
import math
import sys
# The number of particles.
PARTICLE_COUNT = 25
# The size of each particle.
PARTICLE_SIZE = 10
# The constant for cohesion.
COHESION = 0.01
# The constant for alignment.
ALIGNMENT = 0.5
# The constant for separation.
SEPARATION = 0.25
CANVAS_HEIGHT = 400
CANVAS_WIDTH = 400
class Particle:
def __init__(self):
self.location = [0] * 2
self.velocity = [0] * 2
self.poly = None
class App():
"""
Flocking.
"""
def __init__(self):
self.root = Tk()
self.c = Canvas(self.root,width=CANVAS_WIDTH, height=CANVAS_HEIGHT)
self.c.pack()
self.particles = []
self.c.create_rectangle(0, 0, CANVAS_WIDTH, CANVAS_HEIGHT, outline="black", fill="black")
for i in range(0, PARTICLE_COUNT) :
p = Particle()
p.location = [0] * 2
p.velocity = [0] * 2
p.location[0] = random.randint(0,CANVAS_WIDTH)
p.location[1] = random.randint(0,CANVAS_HEIGHT)
p.velocity[0] = 3
p.velocity[1] = random.uniform(0,2.0*math.pi)
p.poly = self.c.create_polygon([0,0,0,0,0,0],fill='white')
self.particles.append(p)
self.update_clock()
self.root.mainloop()
def max_index(self,data):
result = -1
for i in range(0,len(data)):
if result==-1 or data[i] > data[result]:
result = i
return result
def particle_location_mean(self,particles,dimension):
sum = 0
count = 0
for p in particles:
sum = sum + p.location[dimension]
count = count + 1
return sum / count
def particle_velocity_mean(self,particles,dimension):
sum = 0
count = 0
for p in particles:
sum = sum + p.velocity[dimension]
count = count + 1
return sum / count
def find_nearest(self,target,particles,k,max_dist):
result = []
temp_dist = [0] * k
worst_index = -1
for particle in particles:
if particle!=target:
# Euclidean distance
d = math.sqrt(
math.pow(particle.location[0] - target.location[0],2) +
math.pow(particle.location[1] - target.location[1],2) )
if d<=max_dist:
if len(result) < k:
temp_dist[len(result)] = d
result.append(particle)
worst_index = self.max_index(temp_dist)
elif d<temp_dist[worst_index]:
# replace the current worst neighbour with this closer particle
result[worst_index] = particle
temp_dist[worst_index] = d
worst_index = self.max_index(temp_dist)
return result
def flock(self):
for particle in self.particles:
###############################################################
## Begin implementation of three very basic laws of flocking.
###############################################################
neighbors = self.find_nearest(particle, self.particles, 5, sys.float_info.max)
nearest = self.find_nearest(particle, self.particles, 5, 10)
# 1. Separation - avoid crowding neighbors (short range repulsion)
separation = 0
if len(nearest) > 0:
meanX = self.particle_location_mean(nearest, 0)
meanY = self.particle_location_mean(nearest, 1)
dx = meanX - particle.location[0]
dy = meanY - particle.location[1]
separation = math.atan2(dx, dy) - particle.velocity[1]
separation += math.pi
# 2. Alignment - steer towards average heading of neighbors
alignment = 0
if len(neighbors) > 0:
alignment = self.particle_velocity_mean(neighbors, 1) - particle.velocity[1]
# 3. Cohesion - steer towards average position of neighbors (long range attraction)
cohesion = 0
if len(neighbors):
meanX = self.particle_location_mean(self.particles, 0)
meanY = self.particle_location_mean(self.particles, 1)
dx = meanX - particle.location[0]
dy = meanY - particle.location[1]
cohesion = math.atan2(dx, dy) - particle.velocity[1]
# perform the turn
# The degree to which each of the three laws is applied is configurable.
# The three default ratios that I provide work well.
turnAmount = (cohesion * COHESION) + (alignment * ALIGNMENT) + (separation * SEPARATION)
particle.velocity[1] += turnAmount
###############################################################
## End implementation of three very basic laws of flocking.
###############################################################
def update_clock(self):
# render the particles
points = [0] * 6
for p in self.particles:
points[0] = p.location[0]
points[1] = p.location[1]
r = p.velocity[1] + (math.pi * 5.0) / 12.0
points[2] = points[0] - (int) (math.cos(r) * PARTICLE_SIZE)
points[3] = points[1] - (int) (math.sin(r) * PARTICLE_SIZE)
r2 = p.velocity[1] + (math.pi * 7.0) / 12.0
points[4] = points[0] - (int) (math.cos(r2) * PARTICLE_SIZE)
points[5] = points[1] - (int) (math.sin(r2) * PARTICLE_SIZE)
self.c.coords(p.poly,_flatten(points))
# move the particle
dx = math.cos(r)
dy = math.sin(r)
p.location[0] = p.location[0] + (dx * p.velocity[0])
p.location[1] = p.location[1] + (dy * p.velocity[0])
# handle wraps
if p.location[0] < 0:
p.location[0] = CANVAS_WIDTH
if p.location[1] < 0:
p.location[1] = CANVAS_HEIGHT
if p.location[0] > CANVAS_WIDTH:
p.location[0] = 0
if p.location[1] > CANVAS_HEIGHT:
p.location[1] = 0
self.flock()
# Next frame.
self.root.after(100, self.update_clock)
app = App()
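The nearest-neighbour search in `App.find_nearest` above can be sketched standalone without Tkinter. This is a minimal illustration (hypothetical helper, plain `(x, y)` tuples instead of `Particle` objects): keep up to `k` candidates, and once the buffer is full, replace the current worst entry whenever a closer point appears.

```python
import math

def find_nearest(target, points, k, max_dist):
    # Keep the k nearest points to target (list of (x, y) tuples)
    # within max_dist, mirroring the bounded-buffer logic of
    # App.find_nearest above.
    result, dists = [], []
    for p in points:
        if p is target:
            continue
        d = math.hypot(p[0] - target[0], p[1] - target[1])
        if d > max_dist:
            continue
        if len(result) < k:
            result.append(p)
            dists.append(d)
        else:
            worst = dists.index(max(dists))
            if d < dists[worst]:
                # evict the current worst neighbour
                result[worst] = p
                dists[worst] = d
    return result

pts = [(0, 0), (1, 0), (5, 0), (2, 0)]
near = find_nearest(pts[0], pts, 2, 10)
```

With the eviction step in place, `(5, 0)` is dropped once the closer `(2, 0)` arrives, so the two nearest neighbours of the origin are returned.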
| trenton3983/Artificial_Intelligence_for_Humans | vol2/vol2-python-examples/examples/example_flock.py | Python | apache-2.0 | 7,457 | ["VisIt"] | 80637656455e4e645b978021506dfabfbb970081fa9f5afcbd35515188413321 |
from note.module.element import AllNoteHandleResults, OneNoteHandleResult
from note.utils import Base
class Visitor:
def visit(self, results: AllNoteHandleResults):
raise NotImplementedError
class StatusResultVisitor(Visitor):
def visit(self, results: AllNoteHandleResults):
"""返回HandleResults的汇总信息,该信息将用于视图输出"""
report = ReportAfterStatus()
for result in results.results:
assert isinstance(result, OneNoteHandleResult)
if len(result.new_qs):
lq = LocatedQuestions(result.location, result.new_qs)
report.new_qs_report.append(lq)
if len(result.reviewed_qs):
lq = LocatedQuestions(result.location, result.reviewed_qs)
report.reviewed_qs_report.append(lq)
if len(result.paused_qs):
lq = LocatedQuestions(result.location, result.paused_qs)
report.paused_qs_report.append(lq)
report.need_reviewed_num += len(result.need_reviewed_qs)
return report
class ReportAfterStatus(Base):
"""表示Visitor访问AllNoteHandleResults之后生成的报告,此信息将由视图展示"""
def __init__(self):
# 新添加的笔记
self.new_qs_report = []
# 复习了的笔记
self.reviewed_qs_report = []
# 暂停了的笔记
self.paused_qs_report = []
# 需要复习的笔记数量
self.need_reviewed_num = 0
class LocatedQuestions(Base):
"""表示一组处于相同位置的问题"""
def __init__(self, location, qs):
self.location = location
self.qs = qs
class CommitResultVisitor(Visitor):
def visit(self, results: AllNoteHandleResults):
"""返回HandleResults的汇总信息,该信息将用于视图输出"""
report = ReportAfterCommit()
for result in results.results:
assert isinstance(result, OneNoteHandleResult)
report.new_num += len(result.new_qs)
report.reviewed_num += len(result.reviewed_qs)
report.paused_num += len(result.paused_qs)
report.need_reviewed_num += len(result.need_reviewed_qs)
return report
class ReportAfterCommit(Base):
def __init__(self):
# number of newly added notes
self.new_num = 0
# number of reviewed notes
self.reviewed_num = 0
# number of paused notes
self.paused_num = 0
# number of notes that need to be reviewed
self.need_reviewed_num = 0
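The classes above follow a simple visitor pattern: each visitor walks the same collection of per-note results and aggregates them differently. A minimal standalone sketch (stub classes stand in for `note.module.element`; all names here are illustrative, not the module's real API):

```python
# Stub result containers standing in for OneNoteHandleResult /
# AllNoteHandleResults.
class Result:
    def __init__(self, new_qs, reviewed_qs):
        self.new_qs = new_qs
        self.reviewed_qs = reviewed_qs

class Results:
    def __init__(self, results):
        self.results = results

class CountVisitor:
    # Aggregate counts across all results, like CommitResultVisitor above.
    def visit(self, results):
        new_num = sum(len(r.new_qs) for r in results.results)
        reviewed_num = sum(len(r.reviewed_qs) for r in results.results)
        return new_num, reviewed_num

report = CountVisitor().visit(Results([Result(['q1', 'q2'], []),
                                       Result(['q3'], ['q4'])]))
```

Swapping in a different visitor changes how the same result collection is summarized, without touching the result classes themselves.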
| urnote/urnote | note/module/visitor.py | Python | gpl-3.0 | 2,557 | ["VisIt"] | a15a3c18bb5b08c9da628ee4071c30c9a78ee44a19d6db75c920ad910d60b63e |
import sys
import os
from io import StringIO
import inspect
import numpy as np
import matplotlib.gridspec as gridspec
from matplotlib import pylab
import matplotlib.pyplot as plt
import scipy as sp
from scipy.interpolate import interp1d
from scipy.interpolate import Rbf, InterpolatedUnivariateSpline, splrep, splev, splprep
import re
from shutil import copyfile
from libs.dir_and_file_operations import listOfFilesFN, listOfFiles, listOfFilesFN_with_selected_ext
from feff.libs.numpy_group_by_ep_second_draft import group_by
from scipy.signal import savgol_filter
from feff.libs.fit_current_curve import return_fit_param, func, f_PM, f_diff_PM_for_2_T, \
linearFunc, f_PM_with_T, f_SPM_with_T
from scipy.optimize import curve_fit, leastsq
g_J_Mn2_plus = 5.82
g_J_Mn3_plus = 4.82
g_e = 2.0023 # G-factor Lande
mu_Bohr = 927.4e-26 # J/T
Navagadro = 6.02214e23 #1/mol
k_b = 1.38065e-23 #J/K
rho_GaAs = 5.3176e3 #kg/m3
mass_Molar_kg_GaAs = 144.645e-3 #kg/mol
mass_Molar_kg_Diamond = 12.011e-3 # diamond
rho_Diamond = 3.515e3
testX1 = [0.024, 0.026, 0.028, 0.03, 0.032, 0.034, 0.036, 0.038, 0.03, 0.0325]
testY1 = [0.6, 0.527361, 0.564139, 0.602, 0.640714, 0.676684, 0.713159, 0.7505, 0.9, 0.662469]
testArray = np.array([testX1, testY1])
def fromRowToColumn(Array=testArray):
# if the number of columns is bigger than the number of rows, transpose the matrix
n,m = Array.shape
if n < m:
return Array.T
else:
return Array
def sortMatrixByFirstColumn(Array=fromRowToColumn(testArray), colnum=0):
# return the matrix sorted by the selected column number
return Array[Array[:, colnum].argsort()]
out = sortMatrixByFirstColumn()
# print('sorted out:')
# print(out)
# print('--')
def deleteNonUniqueElements(key = out[:, 0], val = out[:, 1]):
# average the val entries that share the same (non-unique) key value
# u, idx = np.unique(Array[:, key_colnum], return_index=True)
return fromRowToColumn(np.array(group_by(key).mean(val)))
# print('mean :')
# print(deleteNonUniqueElements())
# print('--')
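`deleteNonUniqueElements` above relies on a `group_by` helper from `feff.libs`; the same mean-over-duplicate-keys operation can be sketched with NumPy alone (a hypothetical equivalent, not the library's implementation), using `np.unique` with `return_inverse` plus weighted `np.bincount`:

```python
import numpy as np

def mean_by_key(key, val):
    # Average the val entries that share the same key value.
    uniq, inv = np.unique(key, return_inverse=True)
    sums = np.bincount(inv, weights=val)   # sum of val per unique key
    counts = np.bincount(inv)              # occurrences per unique key
    return uniq, sums / counts

k = np.array([1.0, 2.0, 2.0, 3.0])
v = np.array([10.0, 4.0, 6.0, 7.0])
uk, mv = mean_by_key(k, v)
```

Here the duplicated key `2.0` collapses to a single entry whose value is the mean of `4.0` and `6.0`.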
def from_EMU_cm3_to_A_by_m(moment_emu = 2300e-8, V_cm3 = 3e-6):
# return value of Magnetization in SI (A/m)
return (moment_emu / V_cm3)*1000
def concentration_from_Ms(Ms = 7667, J=2.5):
# return concentration from Ms = n*gj*mu_Bohr*J = n*p_exp*mu_Bohr
return Ms/mu_Bohr/J/g_e
def number_density(rho = rho_GaAs, M = mass_Molar_kg_GaAs):
# return concentration from Molar mass
return Navagadro*rho/M
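A quick units check of the two helpers above (constants redefined locally so the snippet is self-contained) also clarifies where the literal `22.139136` used in the log messages throughout this script comes from: it is the GaAs number density in units of 1e27 m^-3.

```python
mu_Bohr = 927.4e-26              # J/T
Navogadro = 6.02214e23           # 1/mol
rho_GaAs = 5.3176e3              # kg/m^3
mass_Molar_kg_GaAs = 144.645e-3  # kg/mol

# 1 emu/cm^3 = 1e3 A/m, as in from_EMU_cm3_to_A_by_m:
# a moment of 2300e-8 emu in a 3e-6 cm^3 film gives ~7667 A/m,
# matching the default Ms in concentration_from_Ms.
M_SI = (2300e-8 / 3e-6) * 1000   # A/m

# Number density of GaAs formula units, n = N_A * rho / M:
n_GaAs = Navogadro * rho_GaAs / mass_Molar_kg_GaAs   # ~2.2139e28 1/m^3
```

Dividing a fitted concentration by `22.139136` (in 1e27 m^-3) therefore expresses it as a fraction of GaAs lattice sites, which is how the percentages in the print statements are computed.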
class Struct:
'''
Container describing one set of M(H) measurement data
'''
def __init__(self):
self.H = []
self.M = []
self.Mraw = [] # raw data
self.T = 2
self.H_inflection = 30
self.Hstep = 0.1 #[T]
self.Hmin = 0
self.Hmax = 0
self.J_total_momentum = 2.5
self.Mn_type = 'Mn2+' # Mn2+ or Mn3+
self.volumeOfTheFilm_GaMnAs = 0 #[m^3]
self.yy_fit = []
self.forFit_y = []
self.forFit_x = []
self.zeroIndex = []
# number of density for PM fit only for the current temperature:
self.n_PM = 0
# corrections for curve:
self.y_shift = 0
self.x_shift = 0
# the fit has not been calculated yet:
self.fitWasDone = False
# 'IUS' - Interpolation using univariate spline
# 'RBF' - Interpolation using Radial basis functions
# Interpolation using RBF - multiquadrics
# 'Spline'
# 'Cubic'
# 'Linear'
self.typeOfFiltering = 'IUS'
def define_Mn_type_variables(self):
'''
# unpaired electrons examples
d-count high spin low spin
d 4 4 2 Cr 2+ , Mn 3+
d 5 5 1 Fe 3+ , Mn 2+
d 6 4 0 Fe 2+ , Co 3+
d 7 3 1 Co 2+
Table: High and low spin octahedral transition metal complexes.
'''
# ===================================================
# Mn2 +
# 5.916, 3d5 4s0, 5 unpaired e-, observed: 5.7 - 6.0 in [muB]
# self.mu_spin_only = np.sqrt(5*(5+2))
# Mn3 +
# 5.916, 3d4 4s0, 4 unpaired e-, observed: 4.8 - 4.9 in [muB]
# self.mu_spin_only = np.sqrt(4*(4+2))
if self.Mn_type == 'Mn2+':
self.J_total_momentum = 2.5 # high spin
self.mu_eff = g_J_Mn2_plus
elif self.Mn_type == 'Mn3+':
self.J_total_momentum = 2 # low-spin
self.mu_eff = g_J_Mn3_plus
def interpolate(self):
if self.Hmin == self.Hmax:
self.Hmin = np.fix(10*self.H.min())/10
self.Hmax = np.fix(10*self.H.max())/10
if self.Hmin < self.H.min():
self.Hmin = np.fix(10*self.H.min())/10
if self.Hmax > self.H.max():
self.Hmax = np.fix(10*self.H.max())/10
self.xx = np.r_[self.Hmin: self.Hmax: self.Hstep]
x = np.array(self.H)
y = np.array(self.M)
if self.typeOfFiltering == 'Linear':
f = interp1d(self.H, self.M)
self.yy = f(self.xx)
if self.typeOfFiltering == 'Cubic':
f = interp1d(self.H, self.M, kind='cubic')
self.yy = f(self.xx)
if self.typeOfFiltering == 'Spline':
tck = splrep(x, y, s=0)
self.yy = splev(self.xx, tck, der=0)
if self.typeOfFiltering == 'IUS':
f = InterpolatedUnivariateSpline(self.H, self.M)
self.yy = f(self.xx)
if self.typeOfFiltering == 'RBF':
f = Rbf(self.H, self.M, function = 'linear')
self.yy = f(self.xx)
if abs(self.Hmin) == abs(self.Hmax):
self.y_shift = self.yy[-1] - abs(self.yy[0])
self.yy = self.yy - self.y_shift
yy_0 = np.r_[0:self.yy[-1]:self.yy[-1]/100]
f_0 = interp1d(self.yy, self.xx)
xx_0 = f_0(yy_0)
self.x_shift = xx_0[0]
self.xx = self.xx - self.x_shift
# we need to adjust new self.xx values to a good precision:
if self.Hmin < self.xx.min():
self.Hmin = np.fix(10*self.xx.min())/10
if self.Hmax > self.xx.max():
self.Hmax = np.fix(10*self.xx.max())/10
xx = np.r_[self.Hmin: self.Hmax: self.Hstep]
self.zeroIndex = np.nonzero((np.abs(xx) < self.Hstep*1e-2))
xx[self.zeroIndex] = 0
f = interp1d(self.xx, self.yy)
self.yy = f(xx)
self.xx = xx
def filtering(self):
# do some filtering operations under data:
if self.Hmin == self.Hmax:
self.Hmin = self.H.min()
self.Hmax = self.H.max()
self.xx = np.r_[self.Hmin: self.Hmax: self.Hstep]
window_size, poly_order = 101, 3
self.yy = savgol_filter(self.yy, window_size, poly_order)
def fit_PM(self):
# do a fit procedure:
# indx = np.argwhere(self.xx >= 3)
indx = (self.xx >= 3)
B = self.xx[indx]
M = self.yy[indx]
self.forFit_x = (g_e * self.J_total_momentum * mu_Bohr * B) / k_b / self.T
self.forFit_y = M
n = return_fit_param(self.forFit_x, self.forFit_y) # [1/m^3*1e27]
# try to fit by using 2 params: n and J
# initial_guess = [0.01, 2.5]
# popt, pcov = curve_fit(f_PM, (g_e*self.J*mu_Bohr*self.xx[(self.xx >= 0)])/k_b/self.T, self.yy[(self.xx >= 0)],
# p0= initial_guess, bounds=([0, 0], [np.inf, np.inf]))
# yy = f_PM((g_e*self.J*mu_Bohr*self.xx)/k_b/self.T, popt[0], popt[1])
# plt.plot(B, func(self.forFit_x, n), 'r-', B, M, '.-')
self.yy_fit = func((g_e * self.J_total_momentum * mu_Bohr * self.xx) / k_b / self.T, n)
self.yy_fit[self.zeroIndex] = 0
# plt.plot(self.xx, self.yy_fit, 'r-', self.xx, self.yy, '.-', self.xx, yy, 'x')
self.n_PM = n[0]
print('->> PM fit has been done. For T = {0} K obtained n = {1:1.3g} *1e27 [1/m^3] or {2:1.3g} % of the n(GaAs)'\
.format(self.T, self.n_PM, self.n_PM/22.139136*100))
self.fitWasDone = True
def plot(self, ax):
ax.plot(self.xx, self.yy, 'k-', label='T={0}K {1}'.format(self.T, self.typeOfFiltering))
ax.plot(self.H, self.M, 'x', label='T={0}K raw'.format(self.T))
if self.fitWasDone:
ax.plot(self.xx, self.yy_fit, 'r-', label='T={0}K fit_PM_single_phase'.format(self.T))
ax.set_ylabel('$Moment (A/m)$', fontsize=20, fontweight='bold')
ax.set_xlabel('$B (T)$', fontsize=20, fontweight='bold')
ax.grid(True)
# ax.fill_between(x, y - error, y + error,
# alpha=0.2, edgecolor='#1B2ACC', facecolor='#089FFF',
# linewidth=4, linestyle='dashdot', antialiased=True, label='$\chi(k)$')
def plotLogT(self, ax, H1 = 10, H2 = 300):
# Plot graph log(1/rho) vs 1/T to define the range with a different behavior of charge currents
a = self.xx
# find indices of elements inside the region [T1, T2]:
ind = np.where(np.logical_and(a >= H1, a <= H2))
x = 1/(self.xx[ind]**1)
# y = np.log(self.yy[ind])
y = self.yy[ind]**(-1)
ax.plot(x, y, 'o-', label='T={0}K'.format(self.T))
# ax.set_ylabel('$ln(1/\\rho)$', fontsize=20, fontweight='bold')
ax.set_ylabel('$(\sigma)$', fontsize=20, fontweight='bold')
ax.set_xlabel('$1/T^{1}$', fontsize=20, fontweight='bold')
ax.grid(True)
m_B180v = 14.3 #mg
m_B180c = 12.2 #mg
m_B180b = 13.5 #mg
m_B180a = 17.2 #mg
t = np.array([14.3, 12.2, 13.5, 17.2])*1e-6/(1.5e-3*5e-3*rho_GaAs)
# to = t.mean()*1000 #mm
class ConcentrationOfMagneticIons:
def __init__(self):
self.selectCase = 'B180v'
self.mass_kg = 0.0
self.magnetic_moment_saturation_experiment = 0.0
self.how_many_Mn_in_percent = 2.3 #[%]
self.dataFolderSorceBase = '/home/yugin/VirtualboxShare/FEFF/Origin_Sawicki_measur (B180)'
self.dataFolderSorce = 'B180v[s]m(H)-dia'
# We suppose that the weight measurements have more accuracy than spatial measurements in typical lab conditions.
# Therefore we calculate the area size of each sample from the density of pure GaAs and from information about
# the typical substrate thickness (Y.Syryanyy measured some pieces of the old samples and obtained t0=0.6mm)
self.t0 = 0.6e-3 # thickness of the GaAs substrate 0.6mm [m]
# thickness of the investigated film on the sample:
self.h_film = 400e-9 #[m] 400nm
# magnetic field region for calculation:
self.Hmin = 0
self.Hmax = 0
self.struct_of_data = {}
# for calculating diff_PM we need 2 different Temperature data for ex: m(T=2K) - m(T=5K)
self.diff_PM_keyName1 = '2'
self.diff_PM_keyName2 = '5'
# Select temperature for Langevin fit procedure (SuperParaMagnetic state):
self.SPM_keyName = '50'
self.SPM_field_region_for_fit = np.array([[0, 6]]) # [T]
self.diff_PM_field_region_for_fit = np.array([[-6, -3], [3, 6]]) # [T]
self.FM_field_region_for_fit = np.array([[-6, -2], [2, 6]]) # [T]
def prepareData(self):
# load and prepare data to the next calculation
self.mass_Molar_kg = ( (69.723 + 74.9216)*(100-self.how_many_Mn_in_percent)/100
+ 54.938*self.how_many_Mn_in_percent/100 )/1000
if self.selectCase == 'B180v':
self.mass_kg = m_B180v/1e6 #[kg]
# when para and dia was subtracted:
self.magnetic_moment_saturation_experiment = 5150e-8 # emu
# when only dia was subtracted:
self.magnetic_moment_saturation_experiment = 6950e-8 # emu
self.filmArea = 8.72925e-6 #[m^2]
self.Hmin = -6#[T]
self.Hmax = 8 #[T]
self.dataFolderSorce = os.path.join(self.dataFolderSorceBase, 'B180v[s]m(H)-dia')
if self.selectCase == 'B180c':
self.mass_kg = m_B180c/1e6 #[kg]
self.magnetic_moment_saturation_experiment = 2300e-8 # emu
self.filmArea = 7.2024361e-6 #[m^2]
self.Hmin = -6#[T]
self.Hmax = 6 #[T]
self.dataFolderSorce = os.path.join(self.dataFolderSorceBase, 'B180c[s]m(H)-dia')
if self.selectCase == 'B180b':
self.mass_kg = m_B180b/1e6 #[kg]
self.magnetic_moment_saturation_experiment = 5150e-8 # emu
self.filmArea = 8.3036995e-6 #[m^2]
self.dataFolderSorce = os.path.join(self.dataFolderSorceBase, 'B180b[s]m(H)-dia')
self.Hmin = -6#[T]
self.Hmax = 6 #[T]
self.diff_PM_field_region_for_fit = np.array([[-6, -0.1], [0.1, 6]]) # [T]
if self.selectCase == 'B180a':
self.mass_kg = m_B180a/1e6 #[kg]
self.magnetic_moment_saturation_experiment = 2300e-8 # emu
self.filmArea = 10.111943e-6 #[m^2]
self.dataFolderSorce = os.path.join(self.dataFolderSorceBase, 'B180a[s]m(H)-dia')
self.Hmin = -6#[T]
self.Hmax = 6 #[T]
self.area = self.mass_kg/rho_GaAs/self.t0 #[m^2]
self.area = self.filmArea #[m^2]
self.volumeOfTheFilm_GaMnAs = self.area * self.h_film #[m^3]
files_lsFN = listOfFilesFN_with_selected_ext(self.dataFolderSorce, ext = 'dat')
files_ls = list((os.path.basename(i) for i in files_lsFN))
# setup variables for plot:
self.suptitle_txt = 'case: {}, '.format(self.selectCase) + '$Ga_{1-x}Mn_xAs$,' \
+ ' where $x = {0:1.2g}\%$ '.format(self.how_many_Mn_in_percent)
self.ylabel_txt = 'M (A/m)'
self.xlabel_txt = '$B (T)$'
self.setupAxes()
indx = 0
parameter = list(int(float(re.findall(r"\d+", k)[0]) * 1) for k in files_ls)
sortedlist = [i[0] for i in sorted(zip(files_lsFN, parameter), key=lambda l: l[1], reverse=False)]
for i in sortedlist:
tmp_data = np.loadtxt(i, float, skiprows=1)
tmp_data = sortMatrixByFirstColumn(Array = fromRowToColumn(tmp_data), colnum = 0)
case_name = re.findall(r'[\d.]+', os.path.basename(i).split('.')[0])
# create a new instance for each iteration step otherwise struct_of_data has one object for all keys
data = Struct()
data.T = float(case_name[0])
data.H = tmp_data[:, 0]/10000 #[T]
data.Mraw = tmp_data[:, 1]
data.M = from_EMU_cm3_to_A_by_m(moment_emu = data.Mraw, V_cm3 = self.volumeOfTheFilm_GaMnAs*1e6)
data.Hmin = self.Hmin
data.Hmax = self.Hmax
data.volumeOfTheFilm_GaMnAs = self.volumeOfTheFilm_GaMnAs
# 'IUS' - Interpolation using univariate spline
# 'RBF' - Interpolation using Radial basis functions
# Interpolation using RBF - multiquadrics
# 'Spline'
# 'Cubic'
# 'Linear'
data.typeOfFiltering = 'Linear'
data.interpolate()
data.fit_PM()
data.plot(self.ax)
plt.draw()
self.ax.legend(shadow=True, fancybox=True, loc='best')
self.struct_of_data[case_name[0]] = data
print('T = ', self.struct_of_data[case_name[0]].T, ' K')
indx = indx+1
sortKeys = sorted(self.struct_of_data, key=lambda key: self.struct_of_data[key].T)
for i in sortKeys:
self.ax.cla()
# struct_of_data[i].plotLogT(self.ax)
self.struct_of_data[i].plot(self.ax)
self.ax.legend(shadow=True, fancybox=True, loc='upper left')
plt.draw()
# print(list(struct_of_data.items())[i])
def calc_diff_PM(self):
if len(self.struct_of_data) > 1:
T1 = self.struct_of_data[self.diff_PM_keyName1].T
T2 = self.struct_of_data[self.diff_PM_keyName2].T
def fun_diff(B, n):
return f_diff_PM_for_2_T (B, n, J=2.5, T1=T1, T2=T2)
initial_guess = [0.01]
if len(self.struct_of_data[self.diff_PM_keyName1].xx) != len(self.struct_of_data[self.diff_PM_keyName2].xx):
print('len(T={0}K)={1} but len(T={2}K)={3}'.format(T1, len(self.struct_of_data[self.diff_PM_keyName1].xx),
T2, len(self.struct_of_data[self.diff_PM_keyName2].xx)) )
# Find the intersection of two arrays to avoid conflict with numbers of elements.
indices1 = np.nonzero(np.in1d(self.struct_of_data[self.diff_PM_keyName1].xx,
self.struct_of_data[self.diff_PM_keyName2].xx))
indices2 = np.nonzero(np.in1d(self.struct_of_data[self.diff_PM_keyName2].xx,
self.struct_of_data[self.diff_PM_keyName1].xx))
B = self.struct_of_data[self.diff_PM_keyName1].xx[indices1]
# for calculating diff_PM we need 2 different Temperature data for ex: m(T=2K) - m(T=5K)
M = self.struct_of_data[self.diff_PM_keyName1].yy[indices1] - \
self.struct_of_data[self.diff_PM_keyName2].yy[indices2]
M_for_fit = np.copy(M)
if abs(self.struct_of_data[self.diff_PM_keyName1].Hmin) == self.struct_of_data[self.diff_PM_keyName1].Hmax:
if len(M[np.where(B > 0)]) != len(M[np.where(B < 0)]):
# reduce a noise:
negVal = abs(np.min(B))
pozVal = np.max(B)
if pozVal >= negVal:
limitVal = negVal
else:
limitVal = pozVal
eps = 0.001*abs(abs(B[0])-abs(B[1]))
B = B[np.logical_or( (np.abs(B) <= limitVal + eps), (np.abs(B) <= eps) )]
Mp = M[np.where(B > 0)]
Mn = M[np.where(B < 0)]
if len(M[np.where(B > 0)]) == len(M[np.where(B < 0)]):
# reduce a noise:
Mp = M[np.where(B > 0)]
Mn = M[np.where(B < 0)]
M_for_fit[np.where(B > 0)] = 0.5*(Mp + np.abs(Mn[::-1]))
M_for_fit[np.where(B < 0)] = 0.5*(Mn - np.abs(Mp[::-1]))
# M_for_fit[(B > 0)] = 0.5*(Mp + np.abs(Mn))
# M_for_fit[(B < 0)] = 0.5*(Mn - np.abs(Mp))
if self.diff_PM_field_region_for_fit[0, 1] < B.min():
self.diff_PM_field_region_for_fit[0, 1] = np.fix(10*B.min())/10 + \
5*self.struct_of_data[self.diff_PM_keyName1].Hstep
condlist1 = (self.diff_PM_field_region_for_fit[0, 0] <= B) & (self.diff_PM_field_region_for_fit[0, 1] >= B)
condlist2 = (self.diff_PM_field_region_for_fit[1, 0] <= B) & (self.diff_PM_field_region_for_fit[1, 1] >= B)
# fit the data:
n, pcov = curve_fit(fun_diff, B[np.logical_or(condlist1, condlist2)],
M_for_fit[np.where(np.logical_or(condlist1, condlist2))],
p0=initial_guess, bounds=([0, ], [np.inf, ]))
self.n_diffPM = n[0]
print('->> diff-PM fit has been done. For m(T={0}K) - m(T={1}K) obtained n = {2:1.3g} *1e27 [1/m^3] or {3:1.3g} % of the n(GaAs)'\
.format(self.diff_PM_keyName1, self.diff_PM_keyName2, self.n_diffPM, self.n_diffPM/22.139136*100))
# setup variables for plot:
self.ax.cla()
self.suptitle_txt = 'case: {}, '.format(self.selectCase) + '$Ga_{1-x}Mn_xAs$,' \
+ ' where $x = {0:1.2g}\%$ '.format(self.how_many_Mn_in_percent)
self.ylabel_txt = '$M_{{T={0}K}}\,-\,M_{{T={1}K}}$ $(A/m)$'.format(T1, T2)
self.xlabel_txt = '$B$ $(T)$'
self.setupAxes()
inx_condlist = np.logical_or(condlist1, condlist2)
inx = B.nonzero()
self.ax.scatter(B[inx_condlist],
M_for_fit[np.where(inx_condlist)],
label='region for $fit$',
facecolor='none',
alpha=.25,
s=200, marker='s', edgecolors='b',
)
self.ax.plot(B, fun_diff(B, n), 'r-',
label='$fit\,M_S*\left[B_J(T={0}K)\,-\,B_J(T={1}K)\\right]$,\n $n_{{PM}}={2:1.3g}\%,\; n_{{[PM,\;x={3:1.3g}\%]}}={4:1.3g}\%$'.
format(T1, T2, self.n_diffPM/22.139136*100, self.how_many_Mn_in_percent,
self.n_diffPM/22.139136*100/self.how_many_Mn_in_percent*100))
self.ax.scatter(B[inx], M[inx], label='raw $M(T={0}K)\,-\,M(T={1}K)$'.format(T1, T2), alpha=.4, color='g', marker='x', s=100)
self.ax.scatter(B[inx], M_for_fit[inx], label='for $fit$ $\\frac{M_++M_-}{2}$', color='k', alpha=.2, s=70)
self.ax.legend(shadow=True, fancybox=True, loc='upper left')
self.ax.grid(True)
plt.draw()
plt.draw()
def subtract_PM_after_calc_diff_PM(self):
if not(self.n_diffPM == 0):
B = self.struct_of_data[self.diff_PM_keyName1].xx
M = self.struct_of_data[self.diff_PM_keyName1].yy
T = self.struct_of_data[self.diff_PM_keyName1].T
J = 2.5
M_sub_PM = M - f_PM_with_T(B=B, n=self.n_diffPM, J=J, T=T)
self.ax.cla()
print('->> sub PM after diff PM have been done. For m(T={0}K) - m(T={1}K) obtained n = {2:1.3g} *1e27 [1/m^3] or {3:1.3g} % of the n(GaAs)'\
.format(self.diff_PM_keyName1, self.diff_PM_keyName2, self.n_diffPM, self.n_diffPM/22.139136*100))
# setup variables for plot:
self.suptitle_txt = 'case: {}, '.format(self.selectCase) + '$Ga_{1-x}Mn_xAs$,' \
+ ' where $x = {0:1.2g}\%$ '.format(self.how_many_Mn_in_percent)
self.ylabel_txt = '$M\,(A/m)$'
self.xlabel_txt = '$B\,(T)$'
self.setupAxes()
self.ax.plot(B, f_PM_with_T(B=B, n=self.n_diffPM, J=J, T=T), 'r-',
label='PM component $\left(T={0:1.2g}K \\right)$, '.format(T) +\
'$n_{{PM}}={:1.3g}\%$,'.format(self.n_diffPM / 22.139136 * 100) + \
'\n$n_{{[PM,\;x={0:1.3g}\%]}}={1:1.3g}\%$'.format(self.how_many_Mn_in_percent,
self.n_diffPM / 22.139136 * 100/self.how_many_Mn_in_percent*100) + \
' $J={:1.2g}(Mn^{{2+}})$'.format(J/2.5)
)
self.ax.scatter(B, M, label='raw $\left(T={0:1.2g}K \\right)$'.format(T), alpha=.2, color='g')
self.ax.scatter(B, M_sub_PM, label='$M_{raw}-M_{PM}$'+' $\left(T={0:1.2g}K \\right)$'.format(T), color='k', alpha=.2)
self.ax.legend(shadow=True, fancybox=True, loc='upper left')
self.ax.grid(True)
plt.draw()
plt.draw()
def subtract_Line(self):
# try to subtract a background line y=kx+b, which may correspond to an anisotropy effect,
# and then calculate the FM phase concentration
if not(self.n_diffPM == 0):
B = self.struct_of_data[self.diff_PM_keyName1].xx
M = self.struct_of_data[self.diff_PM_keyName1].yy
T = self.struct_of_data[self.diff_PM_keyName1].T
J = 2.5
M_sub_PM = M - f_PM_with_T(B=B, n=self.n_diffPM, J=J, T=T)
condlist1 = (self.FM_field_region_for_fit[0, 0] < B) & (self.FM_field_region_for_fit[0, 1] > B)
condlist2 = (self.FM_field_region_for_fit[1, 0] < B) & (self.FM_field_region_for_fit[1, 1] > B)
inx_condlist = np.logical_or(condlist1, condlist2)
# fit the data:
par, pcov = curve_fit(linearFunc, B[inx_condlist],
M_sub_PM[inx_condlist],)
k = par[0]
b = par[1]
M_sub_linear = M_sub_PM - (k*B)
# calc M-saturation and FM phase concentration:
M_FM_saturation = 0.5 * (abs(np.mean(M_sub_linear[condlist2])) + abs(np.mean(M_sub_linear[condlist1])))
self.n_FM_phase = concentration_from_Ms(Ms=M_FM_saturation, J=2.5) * (1e-27)
print('->> subtract_Line has been done.')
print(
'For Ms, which we take from m(T={0}K)-PM-k*B we obtained n = {1:1.3g} *1e27 [1/m^3] or {2:1.3g} % of the n(GaAs)' \
.format(self.diff_PM_keyName1, self.n_FM_phase, self.n_FM_phase / 22.139136 * 100))
self.ax.cla()
# setup variables for plot:
self.suptitle_txt = 'case: {}, '.format(self.selectCase) + '$Ga_{1-x}Mn_xAs$,' \
+ ' where $x = {0:1.2g}\%$ '.format(self.how_many_Mn_in_percent)
self.ylabel_txt = '$M\, (A/m)$'
self.xlabel_txt = '$B\, (T)$'
self.setupAxes()
self.ax.plot(B, f_PM_with_T(B=B, n=self.n_diffPM, J=J, T=T), 'r-',
label='PM component $\left(T={0:1.2g}K \\right)$, '.format(T) + \
'$n_{{PM}}={:1.3g}\%$,'.format(self.n_diffPM / 22.139136 * 100) + \
'\n$n_{{[PM,\;x={0:1.3g}\%]}}={1:1.3g}\%$'.format(self.how_many_Mn_in_percent,
self.n_diffPM / 22.139136 * 100/self.how_many_Mn_in_percent*100) + \
' $J={:1.2g}(Mn^{{2+}})$'.format(J / 2.5)
)
self.ax.plot(B, (k*B), 'b-', label='line')
self.ax.scatter(B, M, label='raw', alpha=.2, color='g')
self.ax.scatter(B, M_sub_PM,
label='$M_{raw}-M_{PM}$'+' $\left(T={0:1.2g}K \\right)$'.format(T),
color='k', alpha=.2)
self.ax.scatter(B, M_sub_linear, label='$M_{PM}-(kx+b)$, '+\
'$n_{{FM}}={:1.3g}\%$,\n'.format(self.n_FM_phase / 22.139136 * 100) + \
'$n_{{[FM,\;x={0:1.3g}\%]}}={1:1.3g}\%$'.format(self.how_many_Mn_in_percent,
self.n_FM_phase / 22.139136 * 100/self.how_many_Mn_in_percent*100) + \
' $J={:1.2g}(Mn^{{2+}})$'.format(J / 2.5),
color='b', s=100,
alpha=.2)
self.ax.scatter(B[inx_condlist], M_sub_PM[inx_condlist],
label='for $linear\,fit$ $M_{raw}-M_{PM}$'+' $\left(T={0:1.2g}K \\right)$'.format(T),
color='k', alpha=.2, s=70)
self.ax.legend(shadow=True, fancybox=True, loc='upper left')
self.ax.grid(True)
plt.draw()
def calc_SPM_Langevin(self):
# fit data by using a Langevin function:
if self.n_diffPM <= 0:
self.calc_diff_PM()
T = self.struct_of_data[self.SPM_keyName].T
def fun_fit(B, n, j):
# return f_SPM_with_T(B, n, J=J, T=T)
return f_PM_with_T(B, n, J=j, T=T)
B = self.struct_of_data[self.SPM_keyName].xx
M = self.struct_of_data[self.SPM_keyName].yy
T = self.struct_of_data[self.SPM_keyName].T
J = 2.5
eps = 0.001 * abs(abs(B[0]) - abs(B[1]))
# reduce a noise:
Mp = M[(B > eps)]
Mn = M[(B < -eps)]
M_for_fit = np.copy(M)
# Find the intersection of two arrays to avoid conflict with numbers of elements.
# if abs(self.struct_of_data[self.SPM_keyName].Hmin) == self.struct_of_data[self.SPM_keyName].Hmax:
if len(M[(B > 0)]) != len(M[(B < 0)]):
# reduce a noise:
negVal = abs(np.min(B))
pozVal = np.max(B)
if pozVal >= negVal:
limitVal = negVal
else:
limitVal = pozVal
inx_B = np.logical_or((np.abs(B) <= limitVal + eps), (np.abs(B) <= eps))
B = np.copy(B[inx_B])
M_for_fit = np.copy(M[inx_B])
Mp = M[np.where(B > eps)]
Mn = M[np.where(B < -eps)]
if len(M[np.where(B > 0)]) == len(M[np.where(B < 0)]):
inx_B = np.logical_or((np.abs(B) <= eps), (np.abs(B) >= eps))
B = np.copy(B[inx_B])
M_for_fit = np.copy(M[np.where(inx_B)])
# reduce a noise:
Mp = M[np.where(B > eps)]
Mn = M[np.where(B < -eps)]
M_for_fit[np.where(B > eps)] = 0.5 * (Mp + np.abs(Mn[::-1]))
M_for_fit[np.where(B < -eps)] = 0.5 * (Mn - np.abs(Mp[::-1]))
# M_for_fit[(B > 0)] = 0.5*(Mp + np.abs(Mn))
# M_for_fit[(B < 0)] = 0.5*(Mn - np.abs(Mp))
if self.SPM_field_region_for_fit[0, 0] >= B.max():
self.SPM_field_region_for_fit[0, 0] = np.fix(10 * B.max()) / 10 - \
5 * self.struct_of_data[self.SPM_keyName].Hstep
# subtract the PM magnetic phase, which is nearly temperature independent at low temperatures:
M_for_fit = M_for_fit - f_PM_with_T(B, n=self.n_diffPM, J=J, T=T)
condlist1 = (self.SPM_field_region_for_fit[0, 0] < B) & (self.SPM_field_region_for_fit[0, 1] > B)
initial_guess = [0.01, 2, ]
# fit the data:
pres, pcov = curve_fit(fun_fit, B[np.logical_or(condlist1, condlist1)],
M_for_fit[np.where(np.logical_or(condlist1, condlist1))],
p0=initial_guess,
bounds=([0, 0, ], [np.inf, np.inf, ]))
self.n_SPM = pres[0]
self.J_SPM = pres[1]
print('->> SPM fit has been done. For m(T={0}K) obtained n = {1:1.3g} *1e27 [1/m^3] or {2:1.3g} % of the n(GaAs)'\
.format(self.SPM_keyName, self.n_SPM, self.n_SPM/22.139136*100))
print('->> the SPM magnetic phase has total momentum J = {0:1.3g}, i.e. {1:1.3g} times J(Mn2+)'\
.format(self.J_SPM, self.J_SPM/2.5))
# setup variables for plot:
self.ax.cla()
self.suptitle_txt = 'case: {}, '.format(self.selectCase) + '$Ga_{1-x}Mn_xAs$,' \
+ ' where $x = {0:1.2g}\%$ '.format(self.how_many_Mn_in_percent)
self.ylabel_txt = '$M(T={0}K) (A/m)$'.format(T)
self.xlabel_txt = '$B (T)$'
self.setupAxes()
self.ax.scatter(B[np.logical_or(condlist1, condlist1)],
M_for_fit[np.where(np.logical_or(condlist1, condlist1))],
label='region for $fit$',
facecolor='none',
alpha=.1,
s=200, marker='s', edgecolors='k',
)
self.ax.plot(B, fun_fit(B, self.n_SPM, self.J_SPM), 'r-',
label='$fit$, $n_{{SPM}}={0:1.3g}\%$, \n$J={1:1.3g}(Mn^{{2+}})$'.format(self.n_SPM/22.139136*100,
self.J_SPM/2.5)+\
', $n_{{[SPM,\;x={0:1.3g}\%]}}={1:1.3g}\%$'.format(self.how_many_Mn_in_percent,
self.n_SPM / 22.139136 * 100/self.how_many_Mn_in_percent*100)
)
self.ax.plot(B, f_PM_with_T(B, n=self.n_diffPM, J=J, T=T), 'b-',
label='$PM(T={2:1.2g}K)$, $n_{{PM}}={0:1.3g}\%$, \n$J={1:1.3g}(Mn^{{2+}})$'\
.format(self.n_diffPM/22.139136*100, 1, T, )+\
', $n_{{[PM,\;x={0:1.3g}\%]}}={1:1.3g}\%$'.format(self.how_many_Mn_in_percent,
self.n_diffPM / 22.139136 * 100/self.how_many_Mn_in_percent*100))
self.ax.scatter(B, M[np.where(inx_B)], label='$raw$', alpha=.2, color='g')
self.ax.scatter(B, M_for_fit, label='set for $fit$, $\\frac{{\left(M_++M_-\\right)}}{{2}} - PM(T={0:1.2g}K)$'.format(T), color='k', alpha=.2)
self.ax.plot(B, 10*(M_for_fit - fun_fit(B, self.n_SPM, self.J_SPM)), 'k.', label='residuals $10*(raw - fit)$')
self.ax.legend(shadow=True, fancybox=True, loc='upper left')
self.ax.grid(True)
plt.show()
plt.draw()
def calculate(self):
n_m3 = number_density(rho=rho_GaAs, M=self.mass_Molar_kg)
n_m3_GaAs = number_density(rho=rho_GaAs, M=mass_Molar_kg_GaAs)
print('Ga(1-x)Mn(x)As molar mass for {0:1.2g}% Mn is: {1:1.6g} [kg/mol]'.format(self.how_many_Mn_in_percent, self.mass_Molar_kg))
print('GaAs molar mass for {0:1.2g}% Mn is: {1:1.6g} [kg/mol]'.format(0, mass_Molar_kg_GaAs))
print( 'table concentration (Number density) is: {:1.6g} *10^27 [1/m3] or 10^21 [1/cm3]'.format( (n_m3/1e27) ) )
print( 'pure GaAs table concentration (Number density) is: {:1.9g} *10^27 [1/m3] or 10^21 [1/cm3]'.format( (n_m3_GaAs/1e27) ) )
n_m3 = n_m3_GaAs
print( '-> we use this table concentration (Number density) is: {:1.6g} *10^27 [1/m3] or 10^21 [1/cm3]'.format( (n_m3/1e27) ) )
print('The area size of the sample surface is: {:1.5g} [mm^2]'.format(self.area*1e6))
print('if we suppose that one side is 5 mm, the other side will be: {:1.5g} [mm]'.format(self.area/0.005*1e3))
# old uncorrected calc from M saturation values:
# magnetization_SI = from_EMU_cm3_to_A_by_m(moment_emu = self.magnetic_moment_saturation_experiment, V_cm3 = self.volumeOfTheFilm_GaMnAs*1e6)
# n_PM_phase = concentration_PM(M = magnetization_SI, p_exp = g_J_Mn2_plus)
print('===>>')
print('For case {0} :'.format(self.selectCase))
print('Concentration of FM ions Mn is: {:1.6}%'.format(self.n_FM_phase/n_m3_GaAs/1e27*100))
print('<<===')
def setupAxes(self):
# create figure with axes:
pylab.ion() # Force interactive
plt.close('all')
### for 'Qt4Agg' backend maximize figure
plt.switch_backend('QT5Agg')
plt.rc('font', family='serif')
self.fig = plt.figure()
# gs1 = gridspec.GridSpec(1, 2)
# fig.show()
# fig.set_tight_layout(True)
self.figManager = plt.get_current_fig_manager()
DPI = self.fig.get_dpi()
self.fig.set_size_inches(800.0 / DPI, 600.0 / DPI)
gs = gridspec.GridSpec(1, 1)
self.ax = self.fig.add_subplot(gs[0, 0])
self.ax.grid(True)
self.ax.set_ylabel(self.ylabel_txt, fontsize=20, fontweight='bold')
self.ax.set_xlabel(self.xlabel_txt, fontsize=20, fontweight='bold')
self.fig.suptitle(self.suptitle_txt, fontsize=22, fontweight='normal')
# Change the axes border width
for axis in ['top', 'bottom', 'left', 'right']:
self.ax.spines[axis].set_linewidth(2)
# plt.subplots_adjust(top=0.85)
# gs1.tight_layout(fig, rect=[0, 0.03, 1, 0.95])
self.fig.tight_layout(rect=[0.03, 0.03, 1, 0.95], w_pad=1.1)
# put window to the second monitor
# figManager.window.setGeometry(1923, 23, 640, 529)
# self.figManager.window.setGeometry(780, 20, 800, 600)
self.figManager.window.setGeometry(780, 20, 1024, 768)
self.figManager.window.setWindowTitle('Magnetic fitting')
self.figManager.window.showMinimized()
self.fig.tight_layout(rect=[0.03, 0.03, 1, 0.95], w_pad=1.1)
# save to the PNG file:
# out_file_name = '%s_' % (case) + "%05d.png" % (numOfIter)
# fig.savefig(os.path.join(out_dir, out_file_name))
if __name__ =='__main__':
print('-> running', __file__, 'as the main module')
from feff.libs.class_StoreAndLoadVars import StoreAndLoadVars
import tkinter as tk
from tkinter import filedialog, messagebox
from feff.libs.dir_and_file_operations import create_out_data_folder
# open GUI filedialog to select feff_0001 working directory:
A = StoreAndLoadVars()
A.fileNameOfStoredVars = 'GaMnAs_SQUID.pckl'
print('last used: {}'.format(A.getLastUsedDirPath()))
# open file dialog
root = tk.Tk()
root.withdraw()
# load A-model data:
txt_info = "select the SQUID measurements\noutput data files for B180 sample\ndefault: Origin_Sawicki_measur_(B180)"
messagebox.showinfo("info", txt_info)
dir_path = filedialog.askdirectory(initialdir=A.getLastUsedDirPath())
if os.path.isdir(dir_path):
A.lastUsedDirPath = dir_path
A.saveLastUsedDirPath()
# # check is the 'calc' folder exist:
# out_dir = os.path.join(dir_path, 'calc')
# print('out dir path is: {}'.format(out_dir))
# if not os.path.isdir(out_dir):
# os.mkdir(out_dir)
out_dir = create_out_data_folder(dir_path, 'calc')
# Change the Case Name for calculation: [B180a, B180b, B180c, B180v]
a = ConcentrationOfMagneticIons()
a.dataFolderSorceBase = dir_path
a.selectCase = 'B180c'
# check whether the 'a.selectCase' sub-folder exists:
out_dir = os.path.join(out_dir, a.selectCase)
if not os.path.isdir(out_dir):
os.mkdir(out_dir)
a.prepareData()
a.diff_PM_keyName1 = '2'
a.diff_PM_keyName2 = '5'
a.diff_PM_field_region_for_fit = np.array([[-6, -0.1], [0.1, 6]]) # [T]
a.calc_diff_PM()
# save to the PNG file:
out_file_name = '%s' % (a.selectCase) + "_diff.png"
a.fig.savefig(os.path.join(out_dir, out_file_name))
a.subtract_PM_after_calc_diff_PM()
# save to the PNG file:
out_file_name = '%s' % (a.selectCase) + "_RAW-PM.png"
a.fig.savefig(os.path.join(out_dir, out_file_name))
a.SPM_keyName = '50'
a.SPM_field_region_for_fit = np.array([[4, 6]]) # [T]
a.calc_SPM_Langevin()
# save to the PNG file:
out_file_name = '%s' % (a.selectCase) + "_SPM.png"
a.fig.savefig(os.path.join(out_dir, out_file_name))
a.FM_field_region_for_fit = np.array([[-6, -2], [2, 6]]) # [T]
a.subtract_Line()
# save to the PNG file:
out_file_name = '%s' % (a.selectCase) + "_FM.png"
a.fig.savefig(os.path.join(out_dir, out_file_name))
# sortKeys = sorted(a.struct_of_data, key=lambda key: a.struct_of_data[key].T)
# for i in sortKeys:
# a.ax.cla()
# # struct_of_data[i].plotLogT(self.ax)
# a.struct_of_data[i].plot(a.ax)
#
# a.ax.legend(shadow=True, fancybox=True, loc='upper left')
# plt.draw()
# # print([a.struct_of_data[i].forFit_x, a.struct_of_data[i].forFit_y])
# x = np.array(a.struct_of_data[i].forFit_x)
# y = np.array(a.struct_of_data[i].forFit_y)
#
#
# x = np.array(a.struct_of_data[i].forFit_x)[:,0]
# y = np.array(a.struct_of_data[i].forFit_y)[:,0]
#
# x1 = np.array(( 5.12259078, 5.29071205, 5.45883331, 5.62695458, 5.79507585, 5.96319712, 6.13131838, 6.29943965, 6.46756092, 6.63568218 , 6.80380345, 6.97192472, 7.14004599, 7.30816725, 7.47628852, 7.64440979, 7.81253105, 7.98065232, 8.14877359, 8.31689486, 8.48501612, 8.65313739, 8.82125866, 8.98937992, 9.15750119 , 9.32562246, 9.49374373 , 9.66186499, 9.82998626 , 9.99810753))
# y1 = np.array((36562.255, 36704.364, 36846.472, 36959.829, 37071.074, 37182.319, 37288.874, 37394.099, 37499.323, 37575.449, 37634.214, 37692.98 , 37769.696, 37866.358, 37963.021, 38026.567, 38020.08 , 38013.593, 38025.271, 38126.641, 38228.011, 38328.095, 38347.998, 38367.901, 38387.804, 38459.712, 38539.864, 38620.017, 38658.663, 38680.4 ))
#
# # np.set_printoptions(precision=3, suppress=True, linewidth=750,)
# # print(repr(y))
# n = return_fit_param(x, y)
# popt, pcov = curve_fit(func, x, y)
# a.calculate()
print('------')
print('-> file ', __file__, ' was finished')
|
yuginboy/from_GULP_to_FEFF
|
feff/libs/GaMnAs_concentration.py
|
Python
|
gpl-3.0
| 39,959
|
[
"FEFF"
] |
ec7aa3d3153369625aadd374b6bb6e3ff99332c185162695f4719f20c9a89857
|
#
# This source file is part of appleseed.
# Visit http://appleseedhq.net/ for additional information and resources.
#
# This software is released under the MIT license.
#
# Copyright (c) 2012-2013 Esteban Tovagliari, Jupiter Jazz Limited
# Copyright (c) 2014-2017 Esteban Tovagliari, The appleseedhq Organization
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# The appleseed.python module built into appleseed.studio
# is called _appleseedpythonbuiltin. Try to load it first.
# If that fails it means that we are not in appleseed.studio;
# in that case just load the normal appleseed.python module.
try:
from _appleseedpythonbuiltin import *
except ImportError:
from _appleseedpython import *
from logtarget import *
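The builtin-first import fallback used above can be generalized into a small helper; this is only an illustrative sketch (the `json` entry is appended solely so it runs outside appleseed.studio):

```python
import importlib

def import_first(*names):
    """Return the first module in `names` that can be imported (fallback chain)."""
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError('none of {!r} could be imported'.format(names))

# Prefer the module built into the host application, then the standalone one.
mod = import_first('_appleseedpythonbuiltin', '_appleseedpython', 'json')
```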
|
aytekaman/appleseed
|
src/appleseed.python/__init__.py
|
Python
|
mit
| 1,730
|
[
"VisIt"
] |
9b2c7e5edeb2d81e1c7fbee2ef81fa1acd0f066895e9f56279023168f4da7055
|
"""Django forms for hs_core module."""
import copy
from django.forms import ModelForm, BaseFormSet
from django.contrib.admin.widgets import forms
from crispy_forms.helper import FormHelper
from crispy_forms.layout import Layout, Fieldset, HTML
from crispy_forms.bootstrap import Field
from .hydroshare import utils
from .models import Party, Creator, Contributor, validate_user_url, Relation, Identifier, \
FundingAgency, Description
class Helper(object):
"""Render resusable elements to use in Django forms."""
@classmethod
def get_element_add_modal_form(cls, element_name, modal_form_context_name):
"""Apply a modal UI element to a given form.
Used in netCDF and modflow_modelinstance apps
"""
modal_title = "Add %s" % element_name.title()
layout = Layout(
HTML('<div class="modal fade" id="add-element-dialog" tabindex="-1" '
'role="dialog" aria-labelledby="myModalLabel" aria-hidden="true">'
'<div class="modal-dialog">'
'<div class="modal-content">'),
HTML('<form action="{{ form.action }}" '
'method="POST" enctype="multipart/form-data"> '),
HTML('{% csrf_token %} '
'<input name="resource-mode" type="hidden" value="edit"/>'
'<div class="modal-header">'
'<button type="button" class="close" '
'data-dismiss="modal" aria-hidden="true">&times;'
'</button>'),
HTML('<h4 class="modal-title" id="myModalLabel"> Add Element </h4>'),
HTML('</div>'
'<div class="modal-body">'
'{% csrf_token %}'
'<div class="form-group">'),
HTML('{% load crispy_forms_tags %} {% crispy add_creator_modal_form %} '),
HTML('</div>'
'</div>'
'<div class="modal-footer">'
'<button type="button" class="btn btn-default" '
'data-dismiss="modal">Close</button>'
'<button type="submit" class="btn btn-primary">'
'Save changes</button>'
'</div>'
'</form>'
'</div>'
'</div>'
'</div>')
)
layout[0] = HTML('<div class="modal fade" id="add-%s-dialog" tabindex="-1" role="dialog" '
'aria-labelledby="myModalLabel" aria-hidden="true">'
'<div class="modal-dialog">'
'<div class="modal-content">' % element_name.lower())
layout[1] = HTML('<form action="{{ %s.action }}" method="POST" '
'enctype="multipart/form-data"> ' % modal_form_context_name)
layout[3] = HTML('<h4 class="modal-title" id="myModalLabel"> {title} '
'</h4>'.format(title=modal_title),)
html_str = '{% load crispy_forms_tags %} {% crispy' + ' add_{element}_modal_form'.format(
element=element_name.lower()) + ' %}'
layout[5] = HTML(html_str)
return layout
# the 1st and the 3rd HTML layout objects get replaced in MetaDataElementDeleteForm class
def _get_modal_confirm_delete_matadata_element():
layout = Layout(
HTML('<div class="modal fade" id="delete-metadata-element-dialog" '
'tabindex="-1" role="dialog" aria-labelledby="myModalLabel" '
'aria-hidden="true">'),
HTML('<div class="modal-dialog">'
'<div class="modal-content">'
'<div class="modal-header">'
'<button type="button" class="close" data-dismiss="modal" '
'aria-hidden="true">&times;</button>'
'<h4 class="modal-title" id="myModalLabel">'
'Delete metadata element</h4>'
'</div>'
'<div class="modal-body">'
'<strong>Are you sure you want to delete this metadata '
'element?</strong>'
'</div>'
'<div class="modal-footer">'
'<button type="button" class="btn btn-default" '
'data-dismiss="modal">Cancel</button>'),
HTML('<a type="button" class="btn btn-danger" href="">Delete</a>'),
HTML('</div>'
'</div>'
'</div>'
'</div>'),
)
return layout
class MetaDataElementDeleteForm(forms.Form):
"""Render a modal that confirms element deletion."""
def __init__(self, res_short_id, element_name, element_id, *args, **kwargs):
"""Render a modal that confirms element deletion.
uses _get_modal_confirm_delete_matadata_element
"""
super(MetaDataElementDeleteForm, self).__init__(*args, **kwargs)
self.helper = FormHelper()
self.delete_element_action = '"/hsapi/_internal/%s/%s/%s/delete-metadata/"' % \
(res_short_id, element_name, element_id)
self.helper.layout = _get_modal_confirm_delete_matadata_element()
self.helper.layout[0] = HTML('<div class="modal fade" id="delete-%s-element-dialog_%s" '
'tabindex="-1" role="dialog" aria-labelledby="myModalLabel" '
'aria-hidden="true">' % (element_name, element_id))
self.helper.layout[2] = HTML('<a type="button" class="btn btn-danger" '
'href=%s>Delete</a>' % self.delete_element_action)
self.helper.form_tag = False
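The delete-action URL assembled in `__init__` above follows a fixed `/hsapi/_internal/...` pattern; as a standalone illustration (the helper name is hypothetical, not part of hs_core):

```python
# Illustrative helper mirroring the delete-metadata action URL built above.
def delete_metadata_url(res_short_id, element_name, element_id):
    return "/hsapi/_internal/%s/%s/%s/delete-metadata/" % (
        res_short_id, element_name, element_id)
```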
class ExtendedMetadataForm(forms.Form):
"""Render an extensible metadata form via the extended_metadata_layout kwarg."""
def __init__(self, resource_mode='edit', extended_metadata_layout=None, *args, **kwargs):
"""Render an extensible metadata form via the extended_metadata_layout kwarg."""
super(ExtendedMetadataForm, self).__init__(*args, **kwargs)
self.helper = FormHelper()
self.helper.form_tag = False
self.helper.layout = extended_metadata_layout
class CreatorFormSetHelper(FormHelper):
"""Render a creator form with custom HTML5 validation and error display."""
def __init__(self, *args, **kwargs):
"""Render a creator form with custom HTML5 validation and error display."""
super(CreatorFormSetHelper, self).__init__(*args, **kwargs)
# the order in which the model fields are listed for the FieldSet is the order
# these fields will be displayed
field_width = 'form-control input-sm'
self.form_tag = False
self.form_show_errors = True
self.error_text_inline = True
self.html5_required = True
self.layout = Layout(
Fieldset('Creator',
Field('name', css_class=field_width),
Field('description', css_class=field_width),
Field('organization', css_class=field_width),
Field('email', css_class=field_width),
Field('address', css_class=field_width),
Field('phone', css_class=field_width),
Field('homepage', css_class=field_width),
Field('order', css_class=field_width),
),
)
class PartyForm(ModelForm):
"""Render form for creating and editing Party models, aka people."""
def __init__(self, *args, **kwargs):
"""Render form for creating and editing Party models, aka people.
Removes profile link formset and renders proper description URL
"""
if 'initial' in kwargs:
if 'description' in kwargs['initial']:
if kwargs['initial']['description']:
kwargs['initial']['description'] = utils.current_site_url() + \
kwargs['initial']['description']
super(PartyForm, self).__init__(*args, **kwargs)
self.profile_link_formset = None
self.number = 0
class Meta:
"""Describe meta properties of PartyForm.
Fields that will be displayed are specified here - but not necessarily in the same order
"""
model = Party
fields = ['name', 'description', 'organization', 'email', 'address', 'phone', 'homepage']
# TODO: field labels and widgets types to be specified
labels = {'description': 'HydroShare User Identifier (URL)'}
class CreatorForm(PartyForm):
"""Render form for creating and editing Creator models, as in creators of resources."""
def __init__(self, allow_edit=True, res_short_id=None, element_id=None, *args, **kwargs):
"""Render form for creating and editing Creator models, as in creators of resources."""
super(CreatorForm, self).__init__(*args, **kwargs)
self.helper = CreatorFormSetHelper()
self.delete_modal_form = None
if res_short_id:
self.action = "/hsapi/_internal/%s/creator/add-metadata/" % res_short_id
else:
self.action = ""
if not allow_edit:
for fld_name in self.Meta.fields:
self.fields[fld_name].widget.attrs['readonly'] = True
self.fields[fld_name].widget.attrs['style'] = "background-color:white;"
self.fields['order'].widget.attrs['readonly'] = True
self.fields['order'].widget.attrs['style'] = "background-color:white;"
else:
if 'add-metadata' in self.action:
del self.fields['order']
@property
def form_id(self):
"""Render proper form id by prepending 'id_creator_'."""
form_id = 'id_creator_%s' % self.number
return form_id
@property
def form_id_button(self):
"""Render proper form id with quotes around it."""
form_id = 'id_creator_%s' % self.number
return "'" + form_id + "'"
class Meta:
"""Describe meta properties of PartyForm."""
model = Creator
# copy before extending so PartyForm.Meta.fields is not mutated in place
fields = PartyForm.Meta.fields + ["order"]
labels = PartyForm.Meta.labels
class PartyValidationForm(forms.Form):
"""Validate form for Party models."""
description = forms.CharField(required=False, validators=[validate_user_url])
name = forms.CharField(required=False, max_length=100)
organization = forms.CharField(max_length=200, required=False)
email = forms.EmailField(required=False)
address = forms.CharField(max_length=250, required=False)
phone = forms.CharField(max_length=25, required=False)
homepage = forms.URLField(required=False)
identifiers = forms.CharField(required=False)
def clean_description(self):
"""Create absolute URL for Party.description field."""
user_absolute_url = self.cleaned_data['description']
if user_absolute_url:
url_parts = user_absolute_url.split('/')
if len(url_parts) > 4:
return '/user/{user_id}/'.format(user_id=url_parts[4])
return user_absolute_url
def clean_identifiers(self):
data = self.cleaned_data['identifiers']
return Party.validate_identifiers(data)
def clean(self):
"""Validate that name and/or organization are present in form data."""
cleaned_data = super(PartyValidationForm, self).clean()
name = cleaned_data.get('name', None)
org = cleaned_data.get('organization', None)
if not org:
if not name or len(name.strip()) == 0:
self._errors['name'] = ["A value for name or organization is required but both "
"are missing"]
return self.cleaned_data
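The `clean_description()` normalization above (rewriting an absolute profile URL into a relative `/user/<id>/` path via the fifth `'/'`-split part) can be sketched in isolation:

```python
# Standalone sketch of the user-URL normalization in clean_description():
# 'https://host/user/42/...' -> '/user/42/' (index 4 of the '/'-split parts).
def normalize_user_url(url):
    parts = url.split('/')
    if len(parts) > 4:
        return '/user/{user_id}/'.format(user_id=parts[4])
    return url
```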
class CreatorValidationForm(PartyValidationForm):
"""Validate form for Creator models. Extends PartyValidationForm."""
order = forms.IntegerField(required=False)
class ContributorValidationForm(PartyValidationForm):
"""Validate form for Contributor models. Extends PartyValidationForm."""
pass
class BaseCreatorFormSet(BaseFormSet):
"""Render BaseFormSet for working with Creator models."""
def add_fields(self, form, index):
"""Pass through add_fields function to super."""
super(BaseCreatorFormSet, self).add_fields(form, index)
def get_metadata(self):
"""Collect and append creator data to form fields."""
creators_data = []
for form in self.forms:
creator_data = {k: v for k, v in list(form.cleaned_data.items())}
if creator_data:
creators_data.append({'creator': creator_data})
return creators_data
class ContributorFormSetHelper(FormHelper):
"""Render layout for Contributor model form and activate required fields."""
def __init__(self, *args, **kwargs):
"""Render layout for Contributor model form and activate required fields."""
super(ContributorFormSetHelper, self).__init__(*args, **kwargs)
# the order in which the model fields are listed for the FieldSet is the order
# these fields will be displayed
field_width = 'form-control input-sm'
self.form_tag = False
self.layout = Layout(
Fieldset('Contributor',
Field('name', css_class=field_width),
Field('description', css_class=field_width),
Field('organization', css_class=field_width),
Field('email', css_class=field_width),
Field('address', css_class=field_width),
Field('phone', css_class=field_width),
Field('homepage', css_class=field_width),
),
)
self.render_required_fields = True
class ContributorForm(PartyForm):
"""Render Contributor model form with appropriate attributes."""
def __init__(self, allow_edit=True, res_short_id=None, element_id=None, *args, **kwargs):
"""Render Contributor model form with appropriate attributes."""
super(ContributorForm, self).__init__(*args, **kwargs)
self.helper = ContributorFormSetHelper()
self.delete_modal_form = None
if res_short_id:
self.action = "/hsapi/_internal/%s/contributor/add-metadata/" % res_short_id
else:
self.action = ""
if not allow_edit:
for fld_name in self.Meta.fields:
self.fields[fld_name].widget.attrs['readonly'] = True
self.fields[fld_name].widget.attrs['style'] = "background-color:white;"
@property
def form_id(self):
"""Render proper form id by prepending 'id_contributor_'."""
form_id = 'id_contributor_%s' % self.number
return form_id
@property
def form_id_button(self):
"""Render proper form id with quotes around it."""
form_id = 'id_contributor_%s' % self.number
return "'" + form_id + "'"
class Meta:
"""Describe meta properties of ContributorForm, removing 'order' field."""
model = Contributor
fields = list(PartyForm.Meta.fields)  # copy so the removal below does not mutate the shared list
labels = PartyForm.Meta.labels
if 'order' in fields:
fields.remove('order')
class BaseContributorFormSet(BaseFormSet):
"""Render BaseFormSet for working with Contributor models."""
def add_fields(self, form, index):
"""Pass through add_fields function to super."""
super(BaseContributorFormSet, self).add_fields(form, index)
def get_metadata(self):
"""Collect and append contributor data to form fields."""
contributors_data = []
for form in self.forms:
contributor_data = {k: v for k, v in list(form.cleaned_data.items())}
if contributor_data:
contributors_data.append({'contributor': contributor_data})
return contributors_data
class RelationFormSetHelper(FormHelper):
"""Render layout for Relation form including HTML5 valdiation and errors."""
def __init__(self, *args, **kwargs):
"""Render layout for Relation form including HTML5 valdiation and errors."""
super(RelationFormSetHelper, self).__init__(*args, **kwargs)
# the order in which the model fields are listed for the FieldSet is the order
# these fields will be displayed
field_width = 'form-control input-sm'
self.form_tag = False
self.form_show_errors = True
self.error_text_inline = True
self.html5_required = False
self.layout = Layout(
Fieldset('Relation',
Field('type', css_class=field_width),
Field('value', css_class=field_width),
),
)
class RelationForm(ModelForm):
"""Render Relation model form with appropriate attributes."""
def __init__(self, allow_edit=True, res_short_id=None, *args, **kwargs):
"""Render Relation model form with appropriate attributes."""
super(RelationForm, self).__init__(*args, **kwargs)
self.helper = RelationFormSetHelper()
self.number = 0
self.delete_modal_form = None
if res_short_id:
self.action = "/hsapi/_internal/%s/relation/add-metadata/" % res_short_id
else:
self.action = ""
if not allow_edit:
for fld_name in self.Meta.fields:
self.fields[fld_name].widget.attrs['readonly'] = True
self.fields[fld_name].widget.attrs['style'] = "background-color:white;"
@property
def form_id(self):
"""Render proper form id by prepending 'id_relation_'."""
form_id = 'id_relation_%s' % self.number
return form_id
@property
def form_id_button(self):
"""Render form_id with quotes around it."""
form_id = 'id_relation_%s' % self.number
return "'" + form_id + "'"
class Meta:
"""Describe meta properties of RelationForm."""
model = Relation
# fields that will be displayed are specified here - but not necessarily in the same order
fields = ['type', 'value']
labels = {'type': 'Relation type', 'value': 'Related to'}
class RelationValidationForm(forms.Form):
"""Validate RelationForm 'type' and 'value' CharFields."""
type = forms.CharField(max_length=100)
value = forms.CharField()
class IdentifierFormSetHelper(FormHelper):
"""Render layout for Identifier form including HTML5 valdiation and errors."""
def __init__(self, *args, **kwargs):
"""Render layout for Identifier form including HTML5 valdiation and errors."""
super(IdentifierFormSetHelper, self).__init__(*args, **kwargs)
# the order in which the model fields are listed for the FieldSet is the order these
# fields will be displayed
field_width = 'form-control input-sm'
self.form_tag = False
self.form_show_errors = True
self.error_text_inline = True
self.html5_required = True
self.layout = Layout(
Fieldset('Identifier',
Field('name', css_class=field_width),
Field('url', css_class=field_width),
),
)
class IdentifierForm(ModelForm):
"""Render Identifier model form with appropriate attributes."""
def __init__(self, res_short_id=None, *args, **kwargs):
"""Render Identifier model form with appropriate attributes."""
super(IdentifierForm, self).__init__(*args, **kwargs)
self.fields['name'].widget.attrs['readonly'] = True
self.fields['name'].widget.attrs['style'] = "background-color:white;"
self.fields['url'].widget.attrs['readonly'] = True
self.fields['url'].widget.attrs['style'] = "background-color:white;"
self.helper = IdentifierFormSetHelper()
self.number = 0
self.delete_modal_form = None
if res_short_id:
self.action = "/hsapi/_internal/%s/identifier/add-metadata/" % res_short_id
else:
self.action = ""
class Meta:
"""Define meta properties for IdentifierForm class."""
model = Identifier
# fields that will be displayed are specified here - but not necessarily in the same order
fields = ['name', 'url']
def clean(self):
"""Ensure that identifier name attribute is not blank."""
data = self.cleaned_data
if data['name'].lower() == 'hydroshareidentifier':
raise forms.ValidationError("Identifier name attribute can't have a value "
"of '{}'.".format(data['name']))
return data
class FundingAgencyFormSetHelper(FormHelper):
"""Render layout for FundingAgency form."""
def __init__(self, *args, **kwargs):
"""Render layout for FundingAgency form."""
super(FundingAgencyFormSetHelper, self).__init__(*args, **kwargs)
# the order in which the model fields are listed for the FieldSet is the order these
# fields will be displayed
field_width = 'form-control input-sm'
self.form_tag = False
self.form_show_errors = True
self.error_text_inline = True
self.html5_required = False
self.layout = Layout(
Fieldset('Funding Agency',
Field('agency_name', css_class=field_width),
Field('award_title', css_class=field_width),
Field('award_number', css_class=field_width),
Field('agency_url', css_class=field_width),
),
)
class FundingAgencyForm(ModelForm):
"""Render FundingAgency model form with appropriate attributes."""
def __init__(self, allow_edit=True, res_short_id=None, *args, **kwargs):
"""Render FundingAgency model form with appropriate attributes."""
super(FundingAgencyForm, self).__init__(*args, **kwargs)
self.helper = FundingAgencyFormSetHelper()
self.number = 0
self.delete_modal_form = None
if res_short_id:
self.action = "/hsapi/_internal/%s/fundingagency/add-metadata/" % res_short_id
else:
self.action = ""
if not allow_edit:
for fld_name in self.Meta.fields:
self.fields[fld_name].widget.attrs['readonly'] = True
self.fields[fld_name].widget.attrs['style'] = "background-color:white;"
@property
def form_id(self):
"""Render proper form id by prepending 'id_fundingagency_'."""
form_id = 'id_fundingagency_%s' % self.number
return form_id
@property
def form_id_button(self):
"""Render proper form id with quotes."""
form_id = 'id_fundingagency_%s' % self.number
return "'" + form_id + "'"
class Meta:
"""Define meta properties of FundingAgencyForm class."""
model = FundingAgency
# fields that will be displayed are specified here - but not necessarily in the same order
fields = ['agency_name', 'award_title', 'award_number', 'agency_url']
labels = {'agency_name': 'Funding agency name', 'award_title': 'Title of the award',
'award_number': 'Award number', 'agency_url': 'Agency website'}
class FundingAgencyValidationForm(forms.Form):
"""Validate FundingAgencyForm with agency_name, award_title, award_number and agency_url."""
agency_name = forms.CharField(required=True)
award_title = forms.CharField(required=False)
award_number = forms.CharField(required=False)
agency_url = forms.URLField(required=False)
class BaseFormHelper(FormHelper):
"""Render non-repeatable element related forms."""
def __init__(self, allow_edit=True, res_short_id=None, element_id=None, element_name=None,
element_layout=None, *args, **kwargs):
"""Render non-repeatable element related forms."""
coverage_type = kwargs.pop('coverage', None)
element_name_label = kwargs.pop('element_name_label', None)
super(BaseFormHelper, self).__init__(*args, **kwargs)
if res_short_id:
self.form_method = 'post'
self.form_tag = True
if element_name.lower() == 'coverage':
if coverage_type:
self.form_id = 'id-%s-%s' % (element_name.lower(), coverage_type)
else:
self.form_id = 'id-%s' % element_name.lower()
else:
self.form_id = 'id-%s' % element_name.lower()
if element_id:
self.form_action = "/hsapi/_internal/%s/%s/%s/update-metadata/" % \
(res_short_id, element_name.lower(), element_id)
else:
self.form_action = "/hsapi/_internal/%s/%s/add-metadata/" % (res_short_id,
element_name)
else:
self.form_tag = False
# capitalize the first character of the element name
element_name = element_name.title()
if element_name_label:
element_name = element_name_label
if element_name == "Subject":
element_name = "Keywords"
elif element_name == "Description":
element_name = "Abstract"
if res_short_id and allow_edit:
self.layout = Layout(
Fieldset(element_name,
element_layout,
HTML('<div style="margin-top:10px">'),
HTML('<button type="button" '
'class="btn btn-primary pull-right btn-form-submit">'
'Save changes</button>'),
HTML('</div>')
),
) # TODO: TESTING
else:
self.form_tag = False
self.layout = Layout(
Fieldset(element_name,
element_layout,
),
)
class TitleValidationForm(forms.Form):
"""Validate Title form with value."""
value = forms.CharField(max_length=300)
class SubjectsFormHelper(BaseFormHelper):
"""Render Subject form.
This form handles multiple subject elements - this was not implemented as formset
since we are providing one input field to enter multiple keywords (subjects) as comma
separated values
"""
def __init__(self, allow_edit=True, res_short_id=None, element_id=None, element_name=None,
*args, **kwargs):
"""Render subject form.
The order in which the model fields are listed for the FieldSet is the order these
fields will be displayed
"""
field_width = 'form-control input-sm'
layout = Layout(
Field('value', css_class=field_width),
)
super(SubjectsFormHelper, self).__init__(allow_edit, res_short_id, element_id,
element_name, layout, *args, **kwargs)
class SubjectsForm(forms.Form):
"""Render Subjects model form with appropriate attributes."""
value = forms.CharField(max_length=500,
label='',
widget=forms.TextInput(attrs={'placeholder': 'Keywords'}),
help_text='Enter each keyword separated by a comma.')
def __init__(self, allow_edit=True, res_short_id=None, element_id=None, *args, **kwargs):
"""Render Subjects model form with appropriate attributes."""
super(SubjectsForm, self).__init__(*args, **kwargs)
self.helper = SubjectsFormHelper(allow_edit, res_short_id, element_id,
element_name='subject')
self.number = 0
self.delete_modal_form = None
if res_short_id:
self.action = "/hsapi/_internal/%s/subject/add-metadata/" % res_short_id
else:
self.action = ""
if not allow_edit:
for field in list(self.fields.values()):
field.widget.attrs['readonly'] = True
field.widget.attrs['style'] = "background-color:white;"
class AbstractFormHelper(BaseFormHelper):
"""Render Abstract form."""
def __init__(self, allow_edit=True, res_short_id=None, element_id=None, element_name=None,
*args, **kwargs):
"""Render Abstract form.
The order in which the model fields are listed for the FieldSet is the order these
fields will be displayed
"""
field_width = 'form-control input-sm'
layout = Layout(
Field('abstract', css_class=field_width),
)
super(AbstractFormHelper, self).__init__(allow_edit, res_short_id, element_id,
element_name, layout, *args, **kwargs)
class AbstractForm(ModelForm):
"""Render Abstract model form with appropriate attributes."""
def __init__(self, allow_edit=True, res_short_id=None, element_id=None, *args, **kwargs):
"""Render Abstract model form with appropriate attributes."""
super(AbstractForm, self).__init__(*args, **kwargs)
self.helper = AbstractFormHelper(allow_edit, res_short_id, element_id,
element_name='description')
if not allow_edit:
self.fields['abstract'].widget.attrs['disabled'] = True
self.fields['abstract'].widget.attrs['style'] = "background-color:white;"
class Meta:
"""Describe meta properties of AbstractForm."""
model = Description
fields = ['abstract']
exclude = ['content_object']
labels = {'abstract': ''}
class AbstractValidationForm(forms.Form):
"""Validate Abstract form with abstract field."""
abstract = forms.CharField(max_length=5000)
class RightsValidationForm(forms.Form):
"""Validate Rights form with statement and URL field."""
statement = forms.CharField(required=False)
url = forms.URLField(required=False, max_length=500)
def clean(self):
"""Clean data and render proper error messages."""
cleaned_data = super(RightsValidationForm, self).clean()
statement = cleaned_data.get('statement', None)
url = cleaned_data.get('url', None)
if not statement and not url:
self._errors['statement'] = ["A value for statement is missing"]
self._errors['url'] = ["A value for URL is missing"]
return self.cleaned_data
class CoverageTemporalFormHelper(BaseFormHelper):
"""Render Temporal Coverage form."""
def __init__(self, allow_edit=True, res_short_id=None, element_id=None, element_name=None,
*args, **kwargs):
"""Render Temporal Coverage form.
The order in which the model fields are listed for the FieldSet is the order these
fields will be displayed
"""
file_type = kwargs.pop('file_type', False)
form_field_names = ['start', 'end']
crispy_form_fields = get_crispy_form_fields(form_field_names, file_type=file_type)
layout = Layout(*crispy_form_fields)
kwargs['coverage'] = 'temporal'
super(CoverageTemporalFormHelper, self).__init__(allow_edit, res_short_id, element_id,
element_name, layout, *args, **kwargs)
class CoverageTemporalForm(forms.Form):
"""Render Coverage Temporal Form."""
start = forms.DateField(label='Start Date')
end = forms.DateField(label='End Date')
def __init__(self, allow_edit=True, res_short_id=None, element_id=None, *args, **kwargs):
"""Render Coverage Temporal Form."""
file_type = kwargs.pop('file_type', False)
super(CoverageTemporalForm, self).__init__(*args, **kwargs)
self.helper = CoverageTemporalFormHelper(allow_edit, res_short_id, element_id,
element_name='Temporal Coverage',
file_type=file_type)
self.number = 0
self.delete_modal_form = None
if res_short_id:
self.action = "/hsapi/_internal/%s/coverage/add-metadata/" % res_short_id
else:
self.action = ""
if not allow_edit:
for field in list(self.fields.values()):
field.widget.attrs['readonly'] = True
def clean(self):
"""Modify the form's cleaned_data dictionary."""
is_form_errors = False
super(CoverageTemporalForm, self).clean()
start_date = self.cleaned_data.get('start', None)
end_date = self.cleaned_data.get('end', None)
if self.errors:
self.errors.clear()
if start_date is None:
self.add_error('start', "Data for start date is missing")
is_form_errors = True
if end_date is None:
self.add_error('end', "Data for end date is missing")
is_form_errors = True
if not is_form_errors:
if start_date > end_date:
self.add_error('end', "End date must not be earlier than the start date")
is_form_errors = True
if is_form_errors:
return self.cleaned_data
if 'name' in self.cleaned_data:
if len(self.cleaned_data['name']) == 0:
del self.cleaned_data['name']
self.cleaned_data['start'] = self.cleaned_data['start'].isoformat()
self.cleaned_data['end'] = self.cleaned_data['end'].isoformat()
self.cleaned_data['value'] = copy.deepcopy(self.cleaned_data)
self.cleaned_data['type'] = 'period'
if 'name' in self.cleaned_data:
del self.cleaned_data['name']
del self.cleaned_data['start']
del self.cleaned_data['end']
return self.cleaned_data
class CoverageSpatialFormHelper(BaseFormHelper):
"""Render layout for CoverageSpatial form."""
def __init__(self, allow_edit=True, res_short_id=None, element_id=None, element_name=None,
*args, **kwargs):
"""Render layout for CoverageSpatial form."""
file_type = kwargs.pop('file_type', False)
layout = Layout()
# the order in which the model fields are listed for the FieldSet is the order these
# fields will be displayed
layout.append(Field('type', id="id_{}_filetype".format('type') if file_type else
"id_{}".format('type')))
form_field_names = ['name', 'projection', 'east', 'north', 'northlimit', 'eastlimit',
'southlimit', 'westlimit', 'units']
crispy_form_fields = get_crispy_form_fields(form_field_names, file_type=file_type)
for field in crispy_form_fields:
layout.append(field)
kwargs['coverage'] = 'spatial'
super(CoverageSpatialFormHelper, self).__init__(allow_edit, res_short_id, element_id,
element_name, layout, *args, **kwargs)
class CoverageSpatialForm(forms.Form):
"""Render CoverateSpatial form."""
TYPE_CHOICES = (
('box', 'Box'),
('point', 'Point')
)
type = forms.ChoiceField(choices=TYPE_CHOICES,
widget=forms.RadioSelect(attrs={'class': 'inline'}), label='')
name = forms.CharField(max_length=200, required=False, label='Place/Area Name')
projection = forms.CharField(max_length=100, required=False,
label='Coordinate System/Geographic Projection')
east = forms.DecimalField(label='Longitude', widget=forms.TextInput())
north = forms.DecimalField(label='Latitude', widget=forms.TextInput())
units = forms.CharField(max_length=50, label='Coordinate Units')
northlimit = forms.DecimalField(label='North Latitude', widget=forms.TextInput())
eastlimit = forms.DecimalField(label='East Longitude', widget=forms.TextInput())
southlimit = forms.DecimalField(label='South Latitude', widget=forms.TextInput())
westlimit = forms.DecimalField(label='West Longitude', widget=forms.TextInput())
def __init__(self, allow_edit=True, res_short_id=None, element_id=None, *args, **kwargs):
"""Render CoverateSpatial form."""
file_type = kwargs.pop('file_type', False)
super(CoverageSpatialForm, self).__init__(*args, **kwargs)
self.helper = CoverageSpatialFormHelper(allow_edit, res_short_id, element_id,
element_name='Spatial Coverage',
file_type=file_type)
self.number = 0
self.delete_modal_form = None
if res_short_id:
self.action = "/hsapi/_internal/%s/coverage/add-metadata/" % res_short_id
else:
self.action = ""
if len(self.initial) > 0:
self.initial['projection'] = 'WGS 84 EPSG:4326'
self.initial['units'] = 'Decimal degrees'
else:
self.fields['type'].widget.attrs['checked'] = 'checked'
self.fields['projection'].widget.attrs['value'] = 'WGS 84 EPSG:4326'
self.fields['units'].widget.attrs['value'] = 'Decimal degrees'
if not allow_edit:
for field in list(self.fields.values()):
field.widget.attrs['readonly'] = True
else:
self.fields['projection'].widget.attrs['readonly'] = True
self.fields['units'].widget.attrs['readonly'] = True
if file_type:
# add the 'data-map-item' attribute so that map interface can be used for editing
# these fields
self.fields['north'].widget.attrs['data-map-item'] = 'latitude'
self.fields['east'].widget.attrs['data-map-item'] = 'longitude'
self.fields['northlimit'].widget.attrs['data-map-item'] = 'northlimit'
self.fields['eastlimit'].widget.attrs['data-map-item'] = 'eastlimit'
self.fields['southlimit'].widget.attrs['data-map-item'] = 'southlimit'
self.fields['westlimit'].widget.attrs['data-map-item'] = 'westlimit'
def clean(self):
"""Modify the form's cleaned_data dictionary."""
super(CoverageSpatialForm, self).clean()
temp_cleaned_data = copy.deepcopy(self.cleaned_data)
spatial_coverage_type = temp_cleaned_data['type']
is_form_errors = False
if self.errors:
self.errors.clear()
if spatial_coverage_type == 'point':
north = temp_cleaned_data.get('north', None)
east = temp_cleaned_data.get('east', None)
if not north and north != 0:
self.add_error('north', "Data for latitude is missing")
is_form_errors = True
if not east and east != 0:
self.add_error('east', "Data for longitude is missing")
is_form_errors = True
if is_form_errors:
return self.cleaned_data
if 'northlimit' in temp_cleaned_data:
del temp_cleaned_data['northlimit']
if 'eastlimit' in temp_cleaned_data:
del temp_cleaned_data['eastlimit']
if 'southlimit' in temp_cleaned_data:
del temp_cleaned_data['southlimit']
if 'westlimit' in temp_cleaned_data:
del temp_cleaned_data['westlimit']
if 'uplimit' in temp_cleaned_data:
del temp_cleaned_data['uplimit']
if 'downlimit' in temp_cleaned_data:
del temp_cleaned_data['downlimit']
temp_cleaned_data['north'] = str(temp_cleaned_data['north'])
temp_cleaned_data['east'] = str(temp_cleaned_data['east'])
else: # box type coverage
if 'north' in temp_cleaned_data:
del temp_cleaned_data['north']
if 'east' in temp_cleaned_data:
del temp_cleaned_data['east']
if 'elevation' in temp_cleaned_data:
del temp_cleaned_data['elevation']
box_fields_map = {"northlimit": "north latitude", "southlimit": "south latitude",
"eastlimit": "east longitude", "westlimit": "west longitude"}
for limit in box_fields_map.keys():
limit_data = temp_cleaned_data.get(limit, None)
# allow value of 0 to go through
if not limit_data and limit_data != 0:
self.add_error(limit, "Data for %s is missing" % box_fields_map[limit])
is_form_errors = True
if is_form_errors:
return self.cleaned_data
temp_cleaned_data['northlimit'] = str(temp_cleaned_data['northlimit'])
temp_cleaned_data['eastlimit'] = str(temp_cleaned_data['eastlimit'])
temp_cleaned_data['southlimit'] = str(temp_cleaned_data['southlimit'])
temp_cleaned_data['westlimit'] = str(temp_cleaned_data['westlimit'])
del temp_cleaned_data['type']
if 'projection' in temp_cleaned_data:
if len(temp_cleaned_data['projection']) == 0:
del temp_cleaned_data['projection']
if 'name' in temp_cleaned_data:
if len(temp_cleaned_data['name']) == 0:
del temp_cleaned_data['name']
self.cleaned_data['value'] = copy.deepcopy(temp_cleaned_data)
if 'northlimit' in self.cleaned_data:
del self.cleaned_data['northlimit']
if 'eastlimit' in self.cleaned_data:
del self.cleaned_data['eastlimit']
if 'southlimit' in self.cleaned_data:
del self.cleaned_data['southlimit']
if 'westlimit' in self.cleaned_data:
del self.cleaned_data['westlimit']
if 'uplimit' in self.cleaned_data:
del self.cleaned_data['uplimit']
if 'downlimit' in self.cleaned_data:
del self.cleaned_data['downlimit']
if 'north' in self.cleaned_data:
del self.cleaned_data['north']
if 'east' in self.cleaned_data:
del self.cleaned_data['east']
if 'elevation' in self.cleaned_data:
del self.cleaned_data['elevation']
if 'name' in self.cleaned_data:
del self.cleaned_data['name']
if 'units' in self.cleaned_data:
del self.cleaned_data['units']
if 'zunits' in self.cleaned_data:
del self.cleaned_data['zunits']
if 'projection' in self.cleaned_data:
del self.cleaned_data['projection']
return self.cleaned_data
class LanguageValidationForm(forms.Form):
"""Validate LanguageValidation form with code attribute."""
code = forms.CharField(max_length=3)
class ValidDateValidationForm(forms.Form):
"""Validate DateValidationForm with start_date and end_date attribute."""
start_date = forms.DateField()
end_date = forms.DateField()
def clean(self):
"""Modify the form's cleaned data dictionary."""
cleaned_data = super(ValidDateValidationForm, self).clean()
start_date = cleaned_data.get('start_date', None)
end_date = cleaned_data.get('end_date', None)
if start_date and not end_date:
self._errors['end_date'] = ["End date is missing"]
if end_date and not start_date:
self._errors['start_date'] = ["Start date is missing"]
if not start_date and not end_date:
del self._errors['start_date']
del self._errors['end_date']
if start_date and end_date:
self.cleaned_data['type'] = 'valid'
return self.cleaned_data
def get_crispy_form_fields(field_names, file_type=False):
"""Return a list of objects of type Field.
:param field_names: list of form field names
:param file_type: if true, then this is a metadata form for file type, otherwise, a form
for resource
:return: a list of Field objects
"""
crispy_fields = []
def get_field_id(field_name):
if file_type:
return "id_{}_filetype".format(field_name)
return "id_{}".format(field_name)
for field_name in field_names:
crispy_fields.append(Field(field_name, css_class='form-control input-sm',
id=get_field_id(field_name)))
return crispy_fields
|
hydroshare/hydroshare
|
hs_core/forms.py
|
Python
|
bsd-3-clause
| 45,279
|
[
"NetCDF"
] |
0bcae37f1e0dbaa9463ca37afa049b34ba520158cd2862cb3aeab70d62c3c408
|
## INFO ########################################################################
## ##
## plastey ##
## ======= ##
## ##
## Oculus Rift + Leap Motion + Python 3 + C + Blender + Arch Linux ##
## Version: 0.2.2.048 (20150513) ##
## File: plane.py ##
## ##
## For more information about the project, visit ##
## <http://plastey.kibu.hu>. ##
## Copyright (C) 2015 Peter Varo, Kitchen Budapest ##
## ##
## This program is free software: you can redistribute it and/or modify it ##
## under the terms of the GNU General Public License as published by the ##
## Free Software Foundation, either version 3 of the License, or ##
## (at your option) any later version. ##
## ##
## This program is distributed in the hope that it will be useful, but ##
## WITHOUT ANY WARRANTY; without even the implied warranty of ##
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. ##
## See the GNU General Public License for more details. ##
## ##
## You should have received a copy of the GNU General Public License ##
## along with this program, most likely a file in the root directory, ##
## called 'LICENSE'. If not, see <http://www.gnu.org/licenses>. ##
## ##
######################################################################## INFO ##
# Import python modules
from itertools import count
# Import blender modules
import bpy
import bmesh
#------------------------------------------------------------------------------#
def float_range(*args):
try:
start, stop, step, *rest = args
except ValueError:
step = 1
try:
start, stop = args
except ValueError:
start = 0
stop = args[0]
while start < stop:
yield start
start += step
#------------------------------------------------------------------------------#
def generate(scene,
geo_data,
geo_object,
arm_data,
arm_object,
ctl_object,
dot_material,
bone_name,
dot_data_name,
dot_object_name,
region_count_x = 2,
region_count_y = 2,
subdivision_level_x = 2,
subdivision_level_y = 2,
mesh_face_width = 1,
mesh_face_height = 1,
edge_width = 0.1,
dot_radius = 0.1):
# Make armature active
bpy.context.scene.objects.active = arm_object
bpy.ops.object.mode_set(mode='EDIT')
# Create mesh
geometry = bmesh.new()
# Calculate values for the loops
step_x = mesh_face_width/subdivision_level_x
step_y = mesh_face_height/subdivision_level_y
range_x = region_count_x*mesh_face_width + step_x
range_y = region_count_y*mesh_face_height + step_y
start_x = -region_count_x/2*mesh_face_width
start_y = -region_count_y/2*mesh_face_height
# Set vertex coordinates and create faces
# TODO: it is very likely that the vertices list is not needed!
vertices = []
bevel_vertices = []
counter = count()
for row_i, y in enumerate(float_range(start_y, start_y + range_y, step_y)):
curr_row = []
for col_i, x in enumerate(float_range(start_x, start_x + range_x, step_x)):
curr_vert = geometry.verts.new((x, y, 0))
# Select dot vertices
if (not col_i%subdivision_level_x and
not row_i%subdivision_level_y):
bone = arm_data.edit_bones.new(bone_name.format(next(counter)))
bone.head = x, y, 0
bone.tail = x, y, 1
curr_vert.select = True
# Select 'edge' vertices between two dots
elif (not col_i%subdivision_level_x or
not row_i%subdivision_level_y):
curr_vert.select = True
# Create faces
if col_i and row_i:
geometry.faces.new((prev_row[col_i - 1],
prev_row[col_i],
curr_vert,
prev_vert))
prev_vert = curr_vert
curr_row.append(curr_vert)
prev_row = curr_row
vertices.append(curr_row)
# Write bmesh to mesh-object and prevent further access
geometry.to_mesh(geo_data)
geometry.free()
# Close editing bones
bpy.ops.object.mode_set(mode='OBJECT')
# Open editing geometry
bpy.context.scene.objects.active = geo_object
bpy.ops.object.mode_set(mode='EDIT')
# 'Draw' wires as bevels
bpy.ops.mesh.bevel(offset=edge_width, vertex_only=False)
# Assign materials to selection
bpy.context.object.active_material_index = 1
bpy.ops.object.material_slot_assign()
# Close editing geometry
bpy.ops.object.mode_set(mode='OBJECT')
# Switch back to armature
bpy.context.scene.objects.active = arm_object
# Add constraints to bones
for i, bone in enumerate(arm_object.pose.bones):
# Edit rotation mode of bone
bone.rotation_mode = 'XYZ'
# Create dot-objects
dot_data = bpy.data.meshes.new(dot_data_name.format(i))
dot_object = bpy.data.objects.new(dot_object_name.format(i), dot_data)
scene.objects.link(dot_object)
# Assign material
dot_data.materials.append(dot_material)
# Create mesh-geometry
geometry = bmesh.new()
bmesh.ops.create_cube(geometry, size=dot_radius)
geometry.to_mesh(dot_data)
geometry.free()
# Set object's parent and move it to place
dot_object.parent = ctl_object
dot_object.location = bone.head
# Add modifiers
modifier = dot_object.modifiers.new('Subsurf', 'SUBSURF')
modifier.levels = 3
# Add constraints
constraint = bone.constraints.new('COPY_LOCATION')
constraint.target = dot_object
#constraint = bone.constraints.new('COPY_ROTATION')
#constraint.target = ctl_object
#constraint.use_offset = True
#constraint.owner_space = 'LOCAL_WITH_PARENT'
constraint = bone.constraints.new('COPY_SCALE')
constraint.target = ctl_object
# Deselect all objects
bpy.ops.object.select_all(action='DESELECT')
# Set auto-weighted armature-parent to the geometry
geo_object.select = True
arm_object.select = True
bpy.ops.object.parent_set(type='ARMATURE_AUTO')
|
kitchenbudapest/vr
|
plane.py
|
Python
|
gpl-3.0
| 7,471
|
[
"VisIt"
] |
6adc06afa91d699982a12b17ea849601b3616454d141ce613d7d824a30645463
|
# Expose the different parameter classes (they are really just structures)
class MCMCParamsBPHMM():
''' Creates a struct encoding the default settings for MCMC inference '''
def __init__(self):
self.Niter = 50
self.doSampleFShared = 1
self.doSampleFUnique = 1
self.doSampleUniqueZ = 0
self.doSplitMerge = 0
self.doSplitMergeRGS = 0
self.doSMNoQRev = 0 # ignore proposal prob in accept ratio... not valid!
self.SM = ()
#self.SM.featSelectDistr = 'splitBias+margLik'
#self.SM.doSeqUpdateThetaHatOnMerge = 0
self.nSMTrials = 5
self.nSweepsRGS = 5
self.doSampleZ = 1
self.doSampleTheta = 1
self.doSampleEta = 1
self.BP = ()
#self.BP.doSampleMass = 1
#self.BP.doSampleConc = 1
#self.BP.Niter = 10
#self.BP.var_c = 2
self.HMM = ()
#self.HMM.doSampleHypers = 1
#self.HMM.Niter = 20
#self.HMM.var_alpha = 2
#self.HMM.var_kappa = 10
self.RJ = ()
# Reversible Jump Proposal settings
#self.RJ.doHastingsFactor = 1
#self.RJ.birthPropDistr = 'DataDriven'
#self.RJ.minW = 15
#self.RJ.maxW = 100
self.doAnneal = 0
self.Anneal = ()
#self.Anneal.T0 = 100
#self.Anneal.Tf = 10000
class HMMPrior():
def __init__(self,a_alpha,b_alpha,a_kappa,b_kappa):
self.a_alpha = a_alpha
self.b_alpha = b_alpha
self.a_kappa = a_kappa
self.b_kappa = b_kappa
class HMMModel():
def __init__(self,alpha,kappa,prior):
self.alpha = alpha
self.kappa = kappa
self.prior = prior
class BPPrior():
def __init__(self,a_mass,b_mass,a_conc,b_conc):
self.a_mass = a_mass
self.b_mass = b_mass
self.a_conc = a_conc
self.b_conc = b_conc
class BPModel():
def __init__(self,gamma,c,prior):
self.gamma = gamma
self.c = c
self.prior = prior
class ModelParams_BPHMM():
def __init__(self,data):
if data.obsType == 'Gaussian':
self.obsM = obsModel(precMu=1,degFree=3,doEmpCovScalePrior=0,Scoef=1)
elif data.obsType == 'AR':
self.obsM = obsModel(doEmpCov=0,doEmpCovFirstDiff=1,degFree=1,Scoef=0.5)
else:
raise ValueError('BPARHMM only supports Gaussian and AutoRegressive Model Observation Types')
# ------------------------------- HMM params
prior = HMMPrior(a_alpha=0.01,b_alpha=0.01,a_kappa=0.01,b_kappa=0.01)
self.hmmM = HMMModel(alpha=1,kappa=25,prior=prior)
# ================================================== BETA PROCESS MODEL
# GAMMA: Mass param for IBP, c0 : Concentration param for IBP
prior = BPPrior(a_mass=0.01,b_mass=0.01,a_conc=0.01,b_conc=0.01)
self.bpM = BPModel(gamma=5,c=1,prior=prior)
'''class OutputParams_BPHMM( outParams, algP ):
def __init__(self,outParams,algP):
saveDir = getUserSpecifiedPath( 'SimulationResults' )
for aa in range(len(outParams)):
if aa == 1:
jobID = force2double( outParams{aa} )
elif aa == 2:
taskID = force2double( outParams{aa} )
self.jobID = jobID
self.taskID = taskID
self.saveDir = fullfile( saveDir, num2str(jobID), num2str(taskID) )
#if ~exist( self.saveDir, 'dir' )
# [~,~] = mkdir( self.saveDir )
#end
if isfield( algP, 'TimeLimit' ) && ~isempty( algP.TimeLimit ):
TL = algP.TimeLimit
Niter = Inf
else:
TL = Inf
Niter = algP.Niter
if TL <= 5*60 or Niter <= 200:
self.saveEvery = 5
self.printEvery = 5
self.logPrEvery = 1
self.statsEvery = 1
elif TL <= 2*3600 or Niter <= 5000
self.saveEvery = 25
self.printEvery = 25
self.logPrEvery = 5
self.statsEvery = 5
else:
self.saveEvery = 50
self.printEvery = 50
self.logPrEvery = 10
self.statsEvery = 10
self.doPrintHeaderInfo = 1
'''
|
mathDR/BP-AR-HMM
|
parameters.py
|
Python
|
mit
| 3,694
|
[
"Gaussian"
] |
4002d5e3fec9ca5d30eae2bb613566afcfa00f91dcd6440f1dbf26c3d6d32e34
|
# coding: utf-8
"""
Vericred API
Vericred's API allows you to search for Health Plans that a specific doctor
accepts.
## Getting Started
Visit our [Developer Portal](https://developers.vericred.com) to
create an account.
Once you have created an account, you can create one Application for
Production and another for our Sandbox (select the appropriate Plan when
you create the Application).
## SDKs
Our API follows standard REST conventions, so you can use any HTTP client
to integrate with us. You will likely find it easier to use one of our
[autogenerated SDKs](https://github.com/vericred/?query=vericred-),
which we make available for several common programming languages.
## Authentication
To authenticate, pass the API Key you created in the Developer Portal as
a `Vericred-Api-Key` header.
`curl -H 'Vericred-Api-Key: YOUR_KEY' "https://api.vericred.com/providers?search_term=Foo&zip_code=11215"`
## Versioning
Vericred's API defaults to the latest version. However, if you need a specific
version, you can request it with an `Accept-Version` header.
The current version is `v3`. Previous versions are `v1` and `v2`.
`curl -H 'Vericred-Api-Key: YOUR_KEY' -H 'Accept-Version: v2' "https://api.vericred.com/providers?search_term=Foo&zip_code=11215"`
## Pagination
Endpoints that accept `page` and `per_page` parameters are paginated. They expose
four additional fields that contain data about your position in the response,
namely `Total`, `Per-Page`, `Link`, and `Page` as described in [RFC-5988](https://tools.ietf.org/html/rfc5988).
For example, to display 5 results per page and view the second page of a
`GET` to `/networks`, your final request would be `GET /networks?....page=2&per_page=5`.
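As an illustrative sketch (a hypothetical helper, not part of the Vericred SDKs), the RFC-5988 `Link` header can be parsed into a `rel`-to-URL map so a client can keep following `next` until it disappears:
```python
def parse_link_header(link_header):
    """Parse an RFC-5988 Link header, e.g.
    '<https://api.vericred.com/networks?page=2&per_page=5>; rel="next"',
    into a {rel: url} dict."""
    links = {}
    for part in link_header.split(","):
        url_part, _, rel_part = part.partition(";")
        url = url_part.strip().lstrip("<").rstrip(">")
        rel = rel_part.strip()[len('rel="'):].rstrip('"')
        links[rel] = url
    return links
```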
## Sideloading
When we return multiple levels of an object graph (e.g. `Provider`s and their `State`s),
we sideload the associated data. In this example, we would provide an Array of
`State`s and a `state_id` for each provider. This is done primarily to reduce the
payload size, since many of the `Provider`s will share a `State`.
```
{
providers: [{ id: 1, state_id: 1}, { id: 2, state_id: 1 }],
states: [{ id: 1, code: 'NY' }]
}
```
If you need the second level of the object graph, you can just match the
corresponding id.
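A minimal client-side sketch of that matching step, using the field names from the example payload above:
```python
# Resolve sideloaded records client-side: index the states by id,
# then attach the full state to each provider via its state_id.
response = {
    "providers": [{"id": 1, "state_id": 1}, {"id": 2, "state_id": 1}],
    "states": [{"id": 1, "code": "NY"}],
}
states_by_id = {state["id"]: state for state in response["states"]}
for provider in response["providers"]:
    provider["state"] = states_by_id[provider["state_id"]]
```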
## Selecting specific data
All endpoints allow you to specify which fields you would like to return.
This allows you to limit the response to contain only the data you need.
For example, let's take a request that returns the following JSON by default
```
{
provider: {
id: 1,
name: 'John',
phone: '1234567890',
field_we_dont_care_about: 'value_we_dont_care_about'
},
states: [{
id: 1,
name: 'New York',
code: 'NY',
field_we_dont_care_about: 'value_we_dont_care_about'
}]
}
```
To limit our results to only return the fields we care about, we specify the
`select` query string parameter for the corresponding fields in the JSON
document.
In this case, we want to select `name` and `phone` from the `provider` key,
so we would add the parameters `select=provider.name,provider.phone`.
We also want the `name` and `code` from the `states` key, so we would
add the parameters `select=states.name,states.code`. The id field of
each document is always returned whether or not it is requested.
Our final request would be `GET /providers/12345?select=provider.name,provider.phone,states.name,states.code`
The response would be
```
{
provider: {
id: 1,
name: 'John',
phone: '1234567890'
},
states: [{
id: 1,
name: 'New York',
code: 'NY'
}]
}
```
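One way to assemble that `select` value programmatically (a hypothetical helper, not part of the SDKs):
```python
# Build the `select` query string value from a mapping of top-level
# response keys to the fields wanted under each key.
def build_select_param(selections):
    return ",".join(
        "{0}.{1}".format(key, field)
        for key, fields in sorted(selections.items())
        for field in fields
    )
```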
## Benefits summary format
Benefit cost-share strings are formatted to capture:
* Network tiers
* Compound or conditional cost-share
* Limits on the cost-share
* Benefit-specific maximum out-of-pocket costs
**Example #1**
As an example, we would represent [this Summary of Benefits & Coverage](https://s3.amazonaws.com/vericred-data/SBC/2017/33602TX0780032.pdf) as:
* **Hospital stay facility fees**:
- Network Provider: `$400 copay/admit plus 20% coinsurance`
- Out-of-Network Provider: `$1,500 copay/admit plus 50% coinsurance`
- Vericred's format for this benefit: `In-Network: $400 before deductible then 20% after deductible / Out-of-Network: $1,500 before deductible then 50% after deductible`
* **Rehabilitation services:**
- Network Provider: `20% coinsurance`
- Out-of-Network Provider: `50% coinsurance`
- Limitations & Exceptions: `35 visit maximum per benefit period combined with Chiropractic care.`
- Vericred's format for this benefit: `In-Network: 20% after deductible / Out-of-Network: 50% after deductible | limit: 35 visit(s) per Benefit Period`
**Example #2**
In [this other Summary of Benefits & Coverage](https://s3.amazonaws.com/vericred-data/SBC/2017/40733CA0110568.pdf), the **specialty_drugs** cost-share has a maximum out-of-pocket for in-network pharmacies.
* **Specialty drugs:**
- Network Provider: `40% coinsurance up to a $500 maximum for up to a 30 day supply`
- Out-of-Network Provider `Not covered`
- Vericred's format for this benefit: `In-Network: 40% after deductible, up to $500 per script / Out-of-Network: 100%`
**BNF**
Here's a description of the benefits summary string, represented as a context-free grammar:
```
root ::= coverage
coverage ::= (simple_coverage | tiered_coverage) (space pipe space coverage_modifier)?
tiered_coverage ::= tier (space slash space tier)*
tier ::= tier_name colon space (tier_coverage | not_applicable)
tier_coverage ::= simple_coverage (space (then | or | and) space simple_coverage)* tier_limitation?
simple_coverage ::= (pre_coverage_limitation space)? coverage_amount (space post_coverage_limitation)? (comma? space coverage_condition)?
coverage_modifier ::= limit_condition colon space (((simple_coverage | simple_limitation) (semicolon space see_carrier_documentation)?) | see_carrier_documentation | waived_if_admitted | shared_across_tiers)
waived_if_admitted ::= ("copay" space)? "waived if admitted"
simple_limitation ::= pre_coverage_limitation space "copay applies"
tier_name ::= "In-Network-Tier-2" | "Out-of-Network" | "In-Network"
limit_condition ::= "limit" | "condition"
tier_limitation ::= comma space "up to" space (currency | (integer space time_unit plural?)) (space post_coverage_limitation)?
coverage_amount ::= currency | unlimited | included | unknown | percentage | (digits space (treatment_unit | time_unit) plural?)
pre_coverage_limitation ::= first space digits space time_unit plural?
post_coverage_limitation ::= (((then space currency) | "per condition") space)? "per" space (treatment_unit | (integer space time_unit) | time_unit) plural?
coverage_condition ::= ("before deductible" | "after deductible" | "penalty" | allowance | "in-state" | "out-of-state") (space allowance)?
allowance ::= upto_allowance | after_allowance
upto_allowance ::= "up to" space (currency space)? "allowance"
after_allowance ::= "after" space (currency space)? "allowance"
see_carrier_documentation ::= "see carrier documentation for more information"
shared_across_tiers ::= "shared across all tiers"
unknown ::= "unknown"
unlimited ::= /[uU]nlimited/
included ::= /[iI]ncluded in [mM]edical/
time_unit ::= /[hH]our/ | (((/[cC]alendar/ | /[cC]ontract/) space)? /[yY]ear/) | /[mM]onth/ | /[dD]ay/ | /[wW]eek/ | /[vV]isit/ | /[lL]ifetime/ | ((((/[bB]enefit/ plural?) | /[eE]ligibility/) space)? /[pP]eriod/)
treatment_unit ::= /[pP]erson/ | /[gG]roup/ | /[cC]ondition/ | /[sS]cript/ | /[vV]isit/ | /[eE]xam/ | /[iI]tem/ | /[sS]tay/ | /[tT]reatment/ | /[aA]dmission/ | /[eE]pisode/
comma ::= ","
colon ::= ":"
semicolon ::= ";"
pipe ::= "|"
slash ::= "/"
plural ::= "(s)" | "s"
then ::= "then" | ("," space) | space
or ::= "or"
and ::= "and"
not_applicable ::= "Not Applicable" | "N/A" | "NA"
first ::= "first"
currency ::= "$" number
percentage ::= number "%"
number ::= float | integer
float ::= digits "." digits
integer ::= /[0-9]/+ (comma_int | under_int)*
comma_int ::= ("," /[0-9]/*3) !"_"
under_int ::= ("_" /[0-9]/*3) !","
digits ::= /[0-9]/+ ("_" /[0-9]/+)*
space ::= /[ \t]/+
```
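As a very rough illustration only (an assumption, not the real grammar), the common `tiered_coverage` shape `In-Network: X / Out-of-Network: Y` can be approximated with a simplified regex that ignores most of the rules above:
```python
import re

# Simplified sketch of the tiered_coverage production: a tier name,
# a colon, then any text up to the next "/" or "|" separator.
tier = r"(In-Network-Tier-2|Out-of-Network|In-Network): [^/|]+"
tiered = re.compile(r"^{0}( / {0})*$".format(tier))

sample = ("In-Network: $400 before deductible then 20% after deductible"
          " / Out-of-Network: $1,500 before deductible then 50% after deductible")
```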
OpenAPI spec version: 1.0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from __future__ import absolute_import
import os
import sys
import unittest
import vericred_client
from vericred_client.rest import ApiException
from vericred_client.models.plan_medicare_bulk import PlanMedicareBulk
class TestPlanMedicareBulk(unittest.TestCase):
""" PlanMedicareBulk unit test stubs """
def setUp(self):
pass
def tearDown(self):
pass
def testPlanMedicareBulk(self):
"""
Test PlanMedicareBulk
"""
model = vericred_client.models.plan_medicare_bulk.PlanMedicareBulk()
if __name__ == '__main__':
unittest.main()
|
vericred/vericred-python
|
test/test_plan_medicare_bulk.py
|
Python
|
apache-2.0
| 10,039
|
[
"VisIt"
] |
0527709931ee708e60bb41e29cb2228935a574fbdd86dbf276efb482f066ce5d
|
#coding=utf8
import sys
sys.path.insert(0, '..')
from redbreast.core.spec.parser import WorkflowGrammar, WorkflowVisitor
if __name__ == '__main__':
text = """
task task1:
'''
desc abc def;
'''
class : a.b.c
end
task task2(task1):
end
task task3(task1):
end
process workflow:
'''
desc text abc def
'''
key : value
tasks:
task1 as A, B
end
flows:
A->B->task3
A->task1->task2->task3
end
code A.ready:
if data.get("is_superuser") == True:
return Task('B')
else:
return Task('task1')
end
code B.ready:
return Task('task1')
end
end
process workflow1:
'''
desc text abc def
'''
key : value
end
"""
def parse(text):
from pprint import pprint
g = WorkflowGrammar()
resultSoFar = []
result, rest = g.parse(text, resultSoFar=resultSoFar, skipWS=False
# ,root=g['process_code']
)
v = WorkflowVisitor(g)
print(result[0].render())
#print(rest)
v.visit(result, True)
pprint(v.tasks)
pprint(v.processes)
for pk, pv in v.processes.items():
for fa, fb in pv['flows']:
print "%s-->%s" % (fa, fb)
for k, code in pv['codes'].items():
print('--KEY: %s---------' % k)
print(code)
print('---------------')
x = parse(text)
|
uliwebext/uliweb-redbreast
|
test/sandbox/other_parser.py
|
Python
|
bsd-2-clause
| 1,490
|
[
"VisIt"
] |
078febc7a4e0953c17e2275c96fdb21794af2172d7dae9cc46c7c8c107b25b3c
|
# Copyright 2020 Verily Life Sciences LLC
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
"""Functions to read and write data from different file systems."""
import tensorflow as tf
import xarray as xr
open_fn = tf.io.gfile.GFile
def save_data(file_path, file_write_fn, file_open_fn, mode):
"""Write data to a file.
Args:
file_path: A string representing the path to the file to open.
file_write_fn: A function to write the data with.
file_open_fn: A function to open the file with.
mode: A string representing the mode to use to open the file.
"""
with file_open_fn(file_path, mode) as f:
file_write_fn(f)
def load_data(file_path, file_read_fn, file_open_fn, mode):
"""Read a file into memory.
Args:
file_path: A string representing the path to the file to open.
file_read_fn: A function to read the data with.
file_open_fn: A function to open the file with.
mode: A string representing the mode to use to open the file.
Returns:
The return value of file_read_fn.
"""
with file_open_fn(file_path, mode) as f:
data = file_read_fn(f.read())
return data
def write_ville_to_netcdf(c, file_path, file_open_fn=None):
"""Writes a ville to disk.
Args:
c: xr.Dataset specifying the trial, suitable for running recruitment and
event simulation (see sim.py for details about what data vars are
expected).
file_path: The path to write to.
file_open_fn: A function to open the file with.
"""
if file_open_fn is None:
file_open_fn = open_fn
# Have to make historical_time of length at least 1 because 0 dimension size
# is used as a sentinel value in netcdf:
# https://github.com/scipy/scipy/blob/1eae2ea615d9298e938a335ff2bc86ce345cd247/scipy/io/netcdf.py#L434
def extend(historical, future):
return xr.concat(
[historical,
future.isel(time=0).rename(time='historical_time')], 'historical_time')
historical_participants = extend(c.historical_participants, c.participants)
historical_control_arm_events = extend(
c.historical_control_arm_events,
c.control_arm_events.isel(scenario=0, drop=True))
historical_site_activation = extend(c.historical_site_activation,
c.site_activation)
historical_incidence = extend(
c.historical_incidence,
c.incidence_model.isel(model=0, sample=0, drop=True))
c = c.drop_vars([
'historical_participants', 'historical_control_arm_events',
'historical_site_activation', 'historical_incidence', 'historical_time'
])
c['historical_participants'] = historical_participants
c['historical_control_arm_events'] = historical_control_arm_events
c['historical_site_activation'] = historical_site_activation
c['historical_incidence'] = historical_incidence
file_write_fn = lambda f: f.write(c.to_netcdf())
save_data(file_path, file_write_fn, file_open_fn, 'wb')
def load_ville_from_netcdf(file_path, file_open_fn=None):
"""Read a ville into memory.
Args:
file_path: A string representing the path to the ville to open. We assume
the ville is stored as a netCDF file.
file_open_fn: A function to open the file with.
Returns:
An xr.Dataset representing the trial.
"""
if file_open_fn is None:
file_open_fn = open_fn
c = load_data(file_path, xr.load_dataset, file_open_fn, 'rb')
# Undo the historical_time hack used in write_ville_to_netcdf.
return c.isel(historical_time=slice(None, -1))
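The `historical_time` workaround pads history with one future step on save and slices it back off on load. The round-trip invariant can be sketched with plain lists (no xarray needed; the function names are illustrative):

```python
def extend(historical, future):
    # On save: pad history with the first future value so the
    # historical_time dimension is never zero-length (a 0-length
    # dimension is a sentinel value in scipy's netCDF writer).
    return historical + [future[0]]

def trim(padded):
    # On load: drop the sentinel element again, mirroring
    # isel(historical_time=slice(None, -1)).
    return padded[:-1]

history, future = [], [3.0, 4.0]
padded = extend(history, future)
assert len(padded) >= 1          # now safe to write as a fixed dimension
assert trim(padded) == history   # the round trip restores the original
```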
# Source: verilylifesciences/site-selection-tool, bsst/io.py (Python, bsd-3-clause)
# Copyright 2003-2008 by Leighton Pritchard. All rights reserved.
# Revisions copyright 2008-2010 by Peter Cock.
# This code is part of the Biopython distribution and governed by its
# license. Please see the LICENSE file that should have been included
# as part of this package.
#
# Contact: Leighton Pritchard, Scottish Crop Research Institute,
# Invergowrie, Dundee, Scotland, DD2 5DA, UK
# L.Pritchard@scri.ac.uk
################################################################################
#
# TODO: Make representation of Ymax and Ymin values at this level, so that
# calculation of graph/axis drawing is simplified
""" GraphSet module
Provides:
o GraphSet - container for GraphData objects
For drawing capabilities, this module uses reportlab to draw and write
the diagram:
http://www.reportlab.com
For dealing with biological information, the package expects BioPython
objects:
http://www.biopython.org
"""
# ReportLab imports
from __future__ import print_function
from reportlab.lib import colors
from ._Graph import GraphData
class GraphSet(object):
""" GraphSet
Provides:
Methods:
o __init__(self, name=None) Called on instantiation
o new_graph(self, data, name, style='bar', color=colors.lightgreen,
altcolor=colors.darkseagreen) Create new graph in the set
from the passed data, with the passed parameters
o del_graph(self, graph_id) Delete graph with the passed id
o get_graphs(self) Returns a list of all graphs
o get_ids(self) Returns a list of graph ids
o range(self) Returns the range covered by the graphs in the set
o to_string(self, verbose=0) Returns a string describing the set
o __len__(self) Returns the length of sequence covered by the set
o __getitem__(self, key) Returns the graph with the id of the passed key
o __str__(self) Returns a string describing the set
Attributes:
o id Unique identifier for the set
o name String describing the set
"""
def __init__(self, name=None):
""" __init__(self, name=None)
o name String identifying the graph set sensibly
"""
self.id = None # Unique identifier for the set (assigned externally)
self._next_id = 0 # Holds unique ids for graphs
self._graphs = {} # Holds graphs, keyed by unique id
self.name = name # Holds description of graph
def new_graph(self, data, name=None, style='bar', color=colors.lightgreen,
altcolor=colors.darkseagreen, linewidth=1, center=None,
colour=None, altcolour=None, centre=None):
""" new_graph(self, data, name=None, style='bar', color=colors.lightgreen,
altcolor=colors.darkseagreen)
o data List of (position, value) int tuples
o name String, description of the graph
o style String ('bar', 'heat', 'line') describing how the graph
will be drawn
o color colors.Color describing the color to draw all or 'high'
(some styles) data (overridden by backwards compatible
argument with UK spelling, colour).
o altcolor colors.Color describing the color to draw 'low' (some
styles) data (overridden by backwards compatible argument
with UK spelling, colour).
o linewidth Float describing linewidth for graph
o center Float setting the value at which the x-axis
crosses the y-axis (overridden by backwards
compatible argument with UK spelling, centre)
Add a GraphData object to the diagram (will be stored
internally
"""
# Let the UK spelling (colour) override the USA spelling (color)
if colour is not None:
color = colour
if altcolour is not None:
altcolor = altcolour
if centre is not None:
center = centre
id = self._next_id # get id number
graph = GraphData(id, data, name, style, color, altcolor, center)
graph.linewidth = linewidth
self._graphs[id] = graph # add graph data
self._next_id += 1 # increment next id
return graph
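The id bookkeeping in `new_graph` (hand out `_next_id`, store the object in a dict keyed by that id, then bump the counter) can be sketched independently of reportlab. `Registry` below is a hypothetical illustration, not part of GenomeDiagram:

```python
class Registry:
    """Minimal sketch of GraphSet's id allocation pattern."""
    def __init__(self):
        self._next_id = 0  # next unique id to hand out
        self._items = {}   # items keyed by unique id

    def new(self, item):
        item_id = self._next_id
        self._items[item_id] = item
        self._next_id += 1
        return item_id

    def ids(self):
        # sorted ids give a stable ordering, as in get_graphs()
        return sorted(self._items)

r = Registry()
a = r.new("graph A")
b = r.new("graph B")
print(a, b, r.ids())  # 0 1 [0, 1]
```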
def del_graph(self, graph_id):
""" del_graph(self, graph_id)
o graph_id Identifying value of the graph
Remove a graph from the set, indicated by its id
"""
del self._graphs[graph_id]
def get_graphs(self):
""" get_graphs(self) -> [Graph, Graph, ...]
Return a list of all graphs in the graph set, sorted by id (for
reliable stacking...)
"""
return [self._graphs[id] for id in sorted(self._graphs)]
def get_ids(self):
""" get_ids(self) -> [int, int, ...]
Return a list of all ids for the graph set
"""
return list(self._graphs.keys())
def range(self):
""" range(self) -> (int, int)
Returns the lowest and highest base (or mark) numbers as a tuple
"""
lows, highs = [], []
for graph in self._graphs.values():
low, high = graph.range()
lows.append(low)
highs.append(high)
return (min(lows), max(highs))
def data_quartiles(self):
""" data_quartiles(self) -> (float, float, float, float, float)
Returns the (minimum, lowerQ, medianQ, upperQ, maximum) values as
a tuple
"""
data = []
for graph in self._graphs.values():
data += list(graph.data.values())
data.sort()
datalen = len(data)
return (data[0], data[datalen // 4], data[datalen // 2],
data[3 * datalen // 4], data[-1])
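`data_quartiles` approximates the quartiles by indexing the sorted data at the quarter points; under Python 3 those indices must be integers (`//`, not `/`). A standalone sketch of the same computation:

```python
def quartiles(values):
    # Index the sorted data at the quarter points, as data_quartiles
    # does; integer division keeps the indices valid in Python 3.
    data = sorted(values)
    n = len(data)
    return (data[0], data[n // 4], data[n // 2], data[3 * n // 4], data[-1])

print(quartiles(range(1, 9)))  # (1, 3, 5, 7, 8)
```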
def to_string(self, verbose=0):
""" to_string(self, verbose=0) -> ""
o verbose Flag indicating whether a short or complete account
of the set is required
Returns a formatted string with information about the set
"""
if not verbose:
return "%s" % self
else:
outstr = ["\n<%s: %s>" % (self.__class__, self.name)]
outstr.append("%d graphs" % len(self._graphs))
for key in self._graphs:
outstr.append("%s" % self._graphs[key])
return "\n".join(outstr)
def __len__(self):
""" __len__(self) -> int
Return the number of graphs in the set
"""
return len(self._graphs)
def __getitem__(self, key):
""" __getitem__(self, key) -> Graph
Return a graph, keyed by id
"""
return self._graphs[key]
def __str__(self):
""" __str__(self) -> ""
Returns a formatted string with information about the feature set
"""
outstr = ["\n<%s: %s>" % (self.__class__, self.name)]
outstr.append("%d graphs" % len(self._graphs))
outstr = "\n".join(outstr)
return outstr
################################################################################
# RUN AS SCRIPT
################################################################################
if __name__ == '__main__':
# Test code
gdgs = GraphSet('test data')
testdata1 = [(1, 10), (5, 15), (10, 20), (20, 40)]
testdata2 = [(250, .34), (251, .7), (252, .7), (253, .54), (254, .65)]
gdgs.new_graph(testdata1, 'TestData 1')
gdgs.new_graph(testdata2, 'TestData 2')
print(gdgs)
# Source: Ambuj-UF/ConCat-1.0, src/Utils/Bio/Graphics/GenomeDiagram/_GraphSet.py (Python, gpl-2.0)
#!/usr/bin/env python
# Author: Derrick H. Karimi
# Copyright 2014
# Unclassified
import re
import botlog
import pprint
from bbot.Utils import *
from bbot.BaseStates import *
from bbot.Data import *
from bbot.RegionBuy import RegionBuy
from bbot.Strategy import Strategy
# from BaseStates.State import State
# from BaseStates.StatsState import StatsState
# from BaseStates.BailOut import BailOut
S = SPACE_REGEX
N = NUM_REGEX
TNSOA_MAIN_REGEX = re.compile(
r'. Main . [0-9:]+ \[[0-9]+\] Main \[[0-9]+\] Notices:')
SHENKS_MAIN_REGEX = re.compile(r'.+fidonet.+AFTERSHOCK:')
XBIT_MAIN_REGEX = re.compile(
r'. Main .* Xbit Local Echo \[[0-9]+\] InterBBS FE:')
NER_MAIN_REGEX = re.compile(
r'. Main . [0-9:]+ \[[0-9]+\] Local \[[0-9]+\] Notices:')
MAIN_MENUS = [TNSOA_MAIN_REGEX, SHENKS_MAIN_REGEX, XBIT_MAIN_REGEX]
class LogOff(State):
def transition(self, app, buf):
# for the looney bin
if 'PLEASE MAKE A SELECTION OR "Q" = EXIT' in buf:
app.send_seq(['q','o','y'],comment="Logoff Sequence")
app.read()
return BailOut()
# logoff sequence for battlestar bbs
elif 'Battlenet :' in buf and 'Which door number or (Q)uit:' in buf:
app.send_seq(['q','q','o','y'],comment="Logoff sequence")
app.read()
return BailOut()
elif ('Which, (Q)uit or [1]:' in buf or
'Trans-Canada Doors Menu' in buf or
'Q) Quit back to Main Menu' in buf or
'Which door number or (Q)uit:' in buf):
app.send_seq(['q', 'o', 'y'],comment="Logoff sequence")
app.read()
return BailOut()
elif 'Which or (Q)uit:' in buf:
app.send('q')
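Each state's `transition` inspects the current buffer, sends any needed keystrokes, and either returns the next state or returns nothing to stay put. The driver loop this implies can be sketched as follows (`run`, `Greet`, and `Done` are hypothetical illustrations, not bbot classes):

```python
class State:
    def transition(self, app, buf):
        raise NotImplementedError

class Done(State):
    def transition(self, app, buf):
        return None  # terminal state: nothing more to do

class Greet(State):
    def transition(self, app, buf):
        if "hello" in buf:
            app.append("hi")  # act on the recognized prompt
            return Done()     # and hand off to the next state
        return None           # unrecognized buffer: stay in this state

def run(state, app, bufs):
    # Drive the machine: feed each buffer to the current state and
    # follow whatever state it returns.
    for buf in bufs:
        nxt = state.transition(app, buf)
        if nxt is not None:
            state = nxt
    return state

log = []
final = run(Greet(), log, ["noise", "hello world"])
print(type(final).__name__, log)  # Done ['hi']
```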
class ExitGame(State):
def transition(self, app, buf):
if '(1) Play Game' in buf:
app.send('0')
return LogOff()
elif '-=<Paused>=-' in buf:
app.sendl(comment='continuing from paused prompt before exiting')
from bbot.EndTurnParser import EndTurnParser
class EndTurn(StatsState):
def __init__(self):
StatsState.__init__(self, statsParser=EndTurnParser())
def transition(self, app, buf):
# i guess this is a good place to reset the emergency vars
app.metadata.low_cash = False
app.metadata.low_food = False
self.parse(app, buf)
if '[Attack Menu]' in buf:
ret = app.on_attack_menu()
# I haven't tested every attack, but in a pirate attack, win or lose,
# the attack menu is automatically exited afterwards, and
# pirate attack is the only one implemented right now
if ret == Strategy.UNHANDLED:
app.send('0', comment='Exiting attack menu')
else:
app.skip_next_read = True
if '[Trading]' in buf:
app.on_trading_menu()
app.send('0', comment='Exiting Trading menu')
elif 'Do you wish to send a message? (y/N)' in buf:
app.send('n', comment='Not sending a message')
elif 'Do you wish to attack a Gooie Kablooie? (y/N)' in buf:
app.send('n', comment='Not attacking gooie')
elif '[InterPlanetary Operations]' in buf:
# in case diplomacy changed we will parse any planet realms that
# we now have a new relationship with that we previously didn't
_parse_other_realms(app)
app.on_interplanetary_menu()
app.send('0', comment='Exiting Inter Planetary menu')
elif 'Do you wish to continue? (Y/n)' in buf:
# print out current parsed data state
botlog.info(str(app.data))
if (app.data.realm.turns is not None and
app.data.realm.turns.current is not None and
app.data.realm.turns.remaining is not None and
app.metadata.first_played_turn is not None):
if app.has_app_value("turnsplit"):
# TODO, think about randomizing turnsplit, +/- 1
turnsplit = ToNum(app.get_app_value("turnsplit"))
remaining = app.data.realm.turns.remaining
turns_remaining_for_exit = (
app.metadata.first_played_turn - turnsplit)
if remaining <= turns_remaining_for_exit:
botlog.note("We are turnsplitting every " +
str(turnsplit) + " turns, exiting with " +
str(app.data.realm.turns.remaining) +
" turns remaining")
app.send('n',
comment='No, not continuing, we are '
'turnsplitting')
return MainMenu(should_exit=True)
else:
botlog.warn("Game was restarted in middle of a turn")
app.send('y', comment='Yes I wish to continue')
return TurnStats()
from bbot.SpendingParser import SpendingParser
class Spending(StatsState):
def __init__(self):
StatsState.__init__(self, statsParser=SpendingParser())
def transition(self, app, buf):
# store the spending menu as we can include it in a status email
app.data.spendtext = app.buf
# parse the buy menu
self.parse(app, buf)
# if game setup has not been read, # parse it
if app.data.setup is None:
app.data.setup = Setup()
app.send_seq(['*', 'g'])
buf = app.read()
self.parse(app, buf)
if '-=<Paused>=-' in buf:
app.sendl()
buf = app.read()
if ('Coordinator Vote' in buf and
app.has_app_value('coordinator_vote')):
app.send(5,comment='Voting for coordinator')
buf = app.read()
coord_name = app.get_app_value('coordinator_vote')
coord_realm = app.data.get_realm_by_name(
coord_name)
coord_menu_option = coord_realm.menu_option
app.send(coord_menu_option,
comment='Placing vote for ' + coord_name)
buf = app.read()
if ('Coordinator Menu' in buf and
( app.has_app_value('enemy_planets') or
app.has_app_value('peace_planets') or
app.has_app_value('allied_planets') or
app.has_app_value('no_relation_planets')
)
):
relation_dict = {
'enemy_planets': 'w',
'peace_planets': 'p',
'allied_planets': 'a',
'no_relation_planets': 'n'}
relations = {}
relation_name_option_dict = {
'Enemy': 'w',
'Peace': 'p',
'Allied': 'a',
'None': 'n'
}
botlog.debug("iterating relation types")
for relation in relation_dict.keys():
if not app.has_app_value(relation):
botlog.debug('No ' + str(relation) +
" planets set in options")
continue
botlog.debug('processing ' + relation + ' relations')
planets = make_string_list(app.get_app_value(relation))
botlog.debug(str(planets) + 'being set as ' + relation)
for planet_name in planets:
relations[planet_name] = relation_dict[relation]
botlog.debug('Intermediate relation dict is:\n' +
str(relations))
changed_relations = {}
botlog.debug('Iterating all planets to compare old and new '
'relations')
for planet in app.data.league.planets:
planet_name = planet.name
botlog.debug('Processing planet ' +
str(planet_name) +
', looking for a similar name')
matched_name = None
for new_planet_name in relations.keys():
if new_planet_name.lower() in planet_name.lower():
matched_name = new_planet_name
break
if matched_name is None:
botlog.debug('Could not match planet ' +
str(planet_name))
continue
relation_changed = (
relation_name_option_dict[planet.relation] !=
relations[matched_name])
botlog.debug('planet ' + str(planet_name) + ', matches ' +
str(matched_name) + ', did relation change?' +
str(relation_changed))
if not relation_changed:
continue
changed_relations[planet_name] = relations[matched_name]
botlog.debug("The following relations have changed:\n" +
str(changed_relations))
if len(changed_relations) > 0:
app.send('*', comment="Entering coordinator menu")
app.read()
app.send(2, comment='Modifying diplomacy')
app.read()
for planet_name in changed_relations.keys():
app.sendl(planet_name, comment="Entering planet name")
app.read()
app.send(changed_relations[planet_name],
comment='Changing disposition to ' +
planet_name)
app.read()
app.sendl(comment="Leaving modify diplomacy menu")
app.read()
_parse_diplomacy_list(app, '4', '[Coordinator Ops]')
app.sendl(comment="exiting coordinator menu")
app.read()
# parse the information from the advisors, we are only doing this
# on the first turn, even though this could change every turn, that
# would be TMI
app.send('a')
for advisor in range(1, 5):
app.data.realm.advisors.reset_advisor(advisor)
app.send(advisor)
buf = app.read()
self.parse(app, buf)
if '-=<Paused>=-' in buf:
app.sendl()
buf = app.read()
# return to the buy menu and parse it again for good measure
app.send_seq(['0', '0'])
buf = app.read()
self.parse(app, buf)
# TODO write a function to determine when we are almost out of
# protection, then only start headquarters at that time
if (app.data.is_oop() and
app.data.realm.army.headquarters.number == 0):
app.send(5, comment="Starting construction on headquarters")
app.buf = app.read()
self.parse(app, buf)
# based on the strategies registered with the app we do different
# things, tell the app we are ready for the strategies to act
app.on_spending_menu()
# any strategy is required to leave the state back in the buy menu
# so the current app's buf should be the most recent buy data
app.data.spendtext = app.buf
self.parse(app, app.buf)
buf = app.read()
if len(buf) > 0:
app.data.spendtext = app.buf
# parse the buy menu
self.parse(app, buf)
# exit buy menu
app.sendl()
return EndTurn()
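The coordinator-diplomacy block above boils down to a dictionary diff: map each configured planet name to a desired relation code, then keep only the planets whose current code differs. A standalone sketch (the function and data names are illustrative, not bbot APIs):

```python
def changed_relations(current, desired):
    # current: {planet_name: relation_code}, e.g. {'Vega Prime': 'p'}
    # desired: {name_substring: relation_code} from the options
    changes = {}
    for name, code in current.items():
        for pattern, want in desired.items():
            if pattern.lower() in name.lower():
                if want != code:
                    changes[name] = want
                break  # first matching name wins, as in the loop above
    return changes

current = {"Vega Prime": "p", "Altair": "a", "Deneb": "n"}
desired = {"vega": "w", "altair": "a"}
print(changed_relations(current, desired))  # {'Vega Prime': 'w'}
```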
def list_investments(app, maintParser):
app.send("l", comment="Checking investments")
buf = app.read()
app.data.investmentstext = buf
app.data.realm.bank.investments = []
maintParser.parse(app, buf)
botlog.info("Current Investments : $" +
readable_num(sum(app.data.realm.bank.investments)) +
"\n" +
pprint.pformat(app.data.realm.bank.investments))
from bbot.MaintParser import MaintParser
class Maint(StatsState):
def __init__(self):
StatsState.__init__(self, statsParser=MaintParser())
self.money_reconsider_turn = None
self.food_reconsider_turn = None
self.which_maint = None
def transition(self, app, buf):
self.parse(app, buf)
if 'Do you wish to visit the Bank? (y/N)' in buf:
self.which_maint = "Money"
app.send('n')
elif 'How much will you give?' in buf:
app.sendl()
elif '[Food Unlimited]' in buf:
self.which_maint = "Food"
app.send('0')
elif '[Covert Operations]' in buf:
app.send('0')
elif '[Crazy Gold Bank]' in buf:
# list the investments, and parse them
if not app.data.has_full_investments():
list_investments(app, self.statsParser)
else:
botlog.info("Not listing investments, we already know they "
"are full")
if app.on_bank_menu() != Strategy.UNHANDLED:
list_investments(app, self.statsParser)
app.send('0')
return Spending()
elif ' Regions left] Your choice?' in buf:
RegionBuy(app, app.data.realm.regions.waste.number)
app.skip_next_read = True
elif self.which_maint == 'Money' and 'Would you like to reconsider? (Y/n)' in buf:
if self.money_reconsider_turn == app.data.realm.turns.current:
botlog.warn("Unable to prevent not paying bills")
app.send('n')
app.metadata.low_cash = True
else:
app.send('y')
botlog.warn("Turn income was not enough to pay bills")
buf = app.read_until('Do you wish to visit the Bank? (y/N)')
# maint cost
maintcost = app.data.get_maint_cost()
# maint cost minus current gold is the amount to withdraw
withdraw = maintcost - app.data.realm.gold
# don't try to withdraw more than we have or it will take
# two enter's to get through the prompt
if withdraw > app.data.realm.bank.gold or withdraw < 0:
withdraw = app.data.realm.bank.gold
# withdraw the money and get back to the maintenance sequence
app.send_seq(['y', 'w', str(withdraw), '\r', '0'])
self.money_reconsider_turn = app.data.realm.turns.current
elif self.which_maint == 'Food' and 'Would you like to reconsider? (Y/n)' in buf:
if self.food_reconsider_turn == app.data.realm.turns.current:
botlog.warn("Unable to prevent not feeding realm")
app.send('n')
app.metadata.low_food = True
else:
botlog.info(
"Turn food production was not enough to feed empire")
app.send_seq(['y', 'b', '\r', '0'])
self.food_reconsider_turn = app.data.realm.turns.current
from bbot.TurnStatsParser import TurnStatsParser
class TurnStats(StatsState):
def __init__(self):
StatsState.__init__(self, statsParser=TurnStatsParser())
self.reset_earntext = True
def append_stats_text(self, app, buf):
buf = buf.replace("Yes\n",'')
buf = buf.replace("-==-\n", '')
if self.reset_earntext:
self.reset_earntext = False
app.data.earntext = ''
app.data.earntext += buf + "\n"
def transition(self, app, buf):
self.parse(app, buf)
# if we just started playing, record what turn we started at
if app.metadata.waiting_to_record_first_turn_number is None:
raise Exception("We should know at this point that we are " +
"waiting to record which turn we are playing")
elif app.metadata.waiting_to_record_first_turn_number:
app.metadata.first_played_turn = app.data.realm.turns.current
app.metadata.waiting_to_record_first_turn_number = False
if 'Sorry, you have used all of your turns today.' in buf:
app.sendl(comment="Can not start next turn, we used them all, "
"going to main menu")
return MainMenu(should_exit=True)
elif '-=<Paused>=-' in buf:
self.append_stats_text(app, buf)
app.sendl(comment="Continuing from pause after stats display")
elif 'Do you wish to accept? [Yes, No, or Ignore for now]' in buf:
r = app.data.realm.turns.remaining
if r is None:
r = "an unknown number of"
botlog.note("Accepting trade deal with " + str(r) +
"turns remaining:\n\n" + buf)
app.send('y', comment="accepting mid day trade deal")
elif 'Do you wish to accept?' in buf:
r = app.data.realm.turns.remaining
if r is None:
r = "an unknown number of"
botlog.note("Ignoring trade deal with " + str(r) +
"turns remaining:\n\n" + buf)
deny = app.try_get_app_value(
"deny_ignorable_trade_deals", False)
if deny:
app.send('n',
comment="Denying mid-day trade deal that could have "
"been ignored")
else:
app.send('i', comment="ignoring mid-day tradedeal")
elif 'of your freedom.' in buf or 'Years of Protection Left.' in buf:
# do not append earn text here, this is a status page, we get this
# from the main menu for the email
# this buffer also contains the do you want to visit the bank
# question which is handled by the Maint state. we must skip
# the next read, as the line would be eaten with no one to handle
# it
app.skip_next_read = True
return Maint()
else:
self.append_stats_text(app, buf)
#TODO river producing food
from bbot.PreTurnsParser import PreTurnsParser
class PreTurns(StatsState):
def __init__(self):
StatsState.__init__(self, statsParser=PreTurnsParser())
def transition(self, app, buf):
self.parse(app, buf)
if 'Would you like to buy a lottery ticket?' in buf:
# play the lottery
for _ in range(7):
app.sendl()
elif '-=<Paused>=-' in buf:
app.sendl()
if 'Sorry, you have used all of your turns today.' in buf:
return MainMenu(should_exit=True)
elif '[Diplomacy Menu]' in buf:
app.on_diplomacy_menu()
# exit the diplomacy menu
app.send('0')
elif 'Do you wish to accept? [Yes, No, or Ignore for now]' in buf:
app.send('y', comment="accepting trade deal")
elif 'Do you wish to accept?' in buf:
deny = app.try_get_app_value(
"deny_ignorable_trade_deals", False)
if deny:
app.send('n', comment="Denying trade deal that could have "
"been ignored")
else:
app.send('i', comment="ignoring tradedeal")
# not sure why, but in an IP game, there were sm
elif '[R] Reply, [D] Delete, [I] Ignore, or [Q] Quit>' in buf:
app.data.msgtext += buf + "\n"
app.send('d', comment="Deleting received message")
elif 'Accept? (Y/n)' in buf:
app.send('y', comment="Accepting offer for a treaty")
elif '[Industrial Production]' in buf:
app.on_industry_menu()
app.send('n')
return TurnStats()
# if a game gets hung up on, you can come out into almost any arbitrary
# state of the game, we will have to add them here one by one as we
# discover them
elif '[Attack Menu]' in buf:
app.skip_next_read = True
return EndTurn()
# Hung up game can emerge in bank
elif 'Do you wish to visit the Bank? (y/N)' in buf:
app.skip_next_read = True
return Maint()
# another hangup case
elif '[Covert Operations]' in buf:
app.skip_next_read = True
return Maint()
# another hanger
elif '[Crazy Gold Bank]' in buf:
app.skip_next_read = True
return Maint()
elif '[Food Unlimited]' in buf:
app.skip_next_read = True
return Maint()
elif '[Spending Menu]' in buf:
app.skip_next_read = True
return Spending()
elif 'Do you wish to send a message?' in buf:
app.skip_next_read = True
return EndTurn()
elif '[Trading]' in buf:
app.skip_next_read = True
return EndTurn()
elif ' Regions left] Your choice?' in buf:
SpendingParser().parse(app,buf)
botlog.info("Restarted turn on region allocate with " +
str(app.metadata.regions_left) + " regions")
RegionBuy(app,num_regions=app.metadata.regions_left)
botlog.debug("Allocated victory regions")
app.skip_next_read = True
elif '[InterPlanetary Operations]' in buf:
app.skip_next_read = True
return EndTurn()
from bbot.PlanetParser import PlanetParser
from bbot.WarParser import WarParser
from bbot.InterplanetaryParser import InterplanetaryParser
from bbot.OtherPlanetParser import OtherPlanetParser
def _parse_other_realm(app, cur_planet):
app.send_seq([7, 1], comment="Fake sending a message to read "
"realm stats")
buf = app.read()
app.sendl(cur_planet.name, comment="Fake send message to this "
"enemy planet")
buf = app.read()
app.send('?', comment="List enemy planet realms")
buf = app.read()
app.data.otherrealmscores += buf.replace('(A-Y,Z=All,?=List) Send to: ','') + "\n"
opp = OtherPlanetParser(
cur_planet.realms, planet_name=cur_planet.name)
opp.parse(app, buf)
if '-=<Paused>=-' in buf:
app.sendl(comment="Continuing after displaying other "
"realms")
buf = app.read()
app.sendl(comment="Not sending message, returning to ip menu ")
buf = app.read()
return buf
def _parse_other_realms(app):
league = app.data.league
planets = league.planets
buf = app.buf
for cur_planet in planets:
# our own relation to our own planet is None
# our relationship to another planet will be 'None'
# sorry it is confusing, but that's how it works out
if cur_planet.relation is None or cur_planet.relation == "None":
continue
if len(cur_planet.realms) > 0:
botlog.debug("Not sending fake message to " +
cur_planet.name + ", because there are already" +
" realms loaded")
continue
buf = _parse_other_realm(app, cur_planet)
return buf
def _parse_diplomacy_list(app, menu_option, calling_menu):
wp = WarParser()
app.send(menu_option, comment="Listing diplomacy")
max_iterations = 10
while max_iterations > 0:
buf = app.read()
wp.parse(app, buf)
if '-=<Paused>=-' in buf:
app.sendl(comment="Continuing to list diplomacy")
elif calling_menu in buf:
botlog.info("returned to interplanetary menu")
break
max_iterations -= 1
botlog.debug("After parsing diplomacy, league is:\n" +
str(app.data.league))
if max_iterations <= 0:
raise Exception("Too many iterations when listing diplomacy")
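The `max_iterations` guard used here (and again in `parse_interplanetary_data`) is a simple defense against a prompt that never stops paging. The pattern in isolation (`read_until` below is a hypothetical helper, not the bbot API):

```python
def read_until(pages, stop, max_iterations=10):
    # Consume pages until the stop marker appears, but never loop
    # forever if the remote side misbehaves.
    seen = []
    while max_iterations > 0:
        page = next(pages)
        seen.append(page)
        if stop in page:
            return seen
        max_iterations -= 1
    raise RuntimeError("Too many iterations while paging")

pages = iter(["page 1", "page 2", "[InterPlanetary Operations]"])
print(read_until(pages, "[InterPlanetary Operations]"))
```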
class MainMenu(StatsState):
def __init__(self, should_exit=False):
StatsState.__init__(self, statsParser=PlanetParser())
self.should_exit = should_exit
self.cur_score_list = None
self.app = None
def set_interplanetary_score(self, context, planet, num):
# botlog.debug("Setting " + self.cur_score_list +
# " for: " + str(planet.name) + " to " + str(num))
if self.cur_score_list == "score":
planet.score = num
elif self.cur_score_list == "networth":
planet.networth = num
elif self.cur_score_list == "regions":
planet.regions = num
elif self.cur_score_list == "nwdensity":
planet.nwdensity = num
else:
raise Exception("Unexpected score list: " +
str(self.cur_score_list))
def parse_interplanetary_data(self, app):
app.data.league = League()
app.send(9, comment="Entering IP menu to parse scores and dip list")
app.read()
app.send(1, comment="Entering IP scores menu")
app.read()
# construct parser for interplanetary stats
ipp = InterplanetaryParser(
context=None,
score_callback_func=self.set_interplanetary_score)
# the score options to parse
scoredict = {1: 'score', 2: 'networth', 3: 'regions',
4: 'nwdensity'}
app.data.ipscorestext = ''
# parse each of the score lists
for menu_option, score_list in scoredict.items():
app.send(menu_option, comment="Reading stats by planet")
self.cur_score_list = score_list
max_iteration = 10
# keep reading while there are more scores
while max_iteration > 0:
app.buf = app.read()
ipp.parse(app, app.buf)
lines = app.buf.splitlines()
for line in lines:
# remove all zero-score planets from the message text
if (" 0" in line or
# remove this filler line
"Planetary Post" in line or
# remove all empty lines
"" == line.strip() or
# remove filler lines
'Continue? (Y/n)' in line or
'Barren Realms Elite: ' in line):
continue
app.data.ipscorestext += line + "\n"
max_iteration -= 1
if 'Continue? (Y/n)' in app.buf:
app.send('y', comment="Continuing to display "
"scores")
elif '-=<Paused>=-' in app.buf:
app.sendl(
comment="Leaving paused prompt after displaying scores")
app.buf = app.read()
break
if max_iteration <= 0:
raise Exception(
"Too many iterations when displaying scores")
app.send(0, comment='Leaving scores menu')
app.read()
self.cur_score_list = None
_parse_diplomacy_list(app, 'd', '[InterPlanetary Operations]')
_parse_other_realms(app)
app.send(0, comment="going back to main game menu")
app.read()
def parse_info(self, app):
self.app = app
# in the main menu, check the scores
app.send(3, comment="Checking scores")
buf = app.read()
app.data.planet = Planet()
# read in the scores and stats of all the realms
self.parse(app, buf)
rtext = buf.replace('See Scores','')
rtext = rtext.replace('-*Barren Realms Elite*-','')
rtext = rtext.replace('-==-', '')
rtext = rtext.strip()
app.data.planettext = rtext
if '-=<Paused>=-' in buf:
app.sendl()
buf = app.read()
# in the main menu, check the empire status
app.send(2, comment="Checking status")
buf = app.read()
stext = buf
stext = stext.replace('See Status','')
stext = stext.replace('-==-','')
stext = stext.strip()
app.data.statstext = stext
TurnStatsParser().parse(app, buf)
if '-=<Paused>=-' in buf:
app.sendl()
buf = app.read()
# in the main menu, check the history
app.send(5, comment="Checking yesterday's news")
buf = app.read()
app.data.yesterdaynewstext = buf
while 'Continue? (Y/n)' in buf:
app.send('y', comment="Continue reading yesterday's news")
buf = app.read()
app.data.yesterdaynewstext += "\n" + buf
if '-=<Paused>=-' in buf:
app.sendl()
buf = app.read()
# in the main menu, check the history
app.send(4, comment="Checking today's news")
buf = app.read()
app.data.todaynewstext = buf
while 'Continue? (Y/n)' in buf:
app.send('y', comment="Continue reading today's news")
buf = app.read()
app.data.todaynewstext += "\n" + buf
if '-=<Paused>=-' in buf:
app.sendl()
buf = app.read()
# in the main menu, check the history
app.send(6, comment="Reading Messages")
buf = app.read()
if '(1) Play Game' not in buf:
app.data.msgtext = buf
while (
'[R] Reply, [D] Delete, [I] Ignore, or [Q] Quit > ' in buf or
'[R] Reply, [D] Delete, [I] Ignore, or [Q] Quit>' in buf):
app.send('d', comment="Deleting received message")
buf = app.read()
app.data.msgtext += "\n" + buf
# if we have not read in league scores, and if
# this is an IP game
if app.data.league is None and '(9)' in buf:
self.parse_interplanetary_data(app)
self.app.on_main_menu()
return buf
def transition(self, app, buf):
if '-=<Paused>=-' in buf:
app.sendl(comment="Leaving paused prompt in main menu")
elif self.should_exit or "Choice> Quit" in buf:
if "Choice> Quit" in buf:
app.metadata.used_all_turns = True
buf = self.parse_info(app)
# The server should not be playing multiple times, so we need to know if it is
app.skip_next_read = True
return ExitGame()
else:
self.parse_info(app)
app.send(1, comment="Commencing to play the game")
app.metadata.waiting_to_record_first_turn_number = True
return PreTurns()
class NewRealm(State):
def transition(self, app, buf):
app.send_seq([app.get_app_value('realm') + "\r", 'y', 'n'],
comment="Creating a new realm")
return MainMenu()
class StartGame(State):
def transition(self, app, buf):
if '<Paused>' in buf or '>Paused<' in buf or '<MORE>' in buf:
app.sendl()
elif 'Do you want ANSI Graphics? (Y/n)' in buf:
app.send('n')
elif 'Continue? (Y/n)' in buf:
app.send('y')
elif 'Name your Realm' in buf:
return NewRealm()
elif '(1) Play Game' in buf:
app.skip_next_read = True
return MainMenu()
class BBSMenus(State):
def transition(self, app, buf):
if "Enter number of bulletin to view or press (ENTER) to continue:" in buf:
app.sendl()
# there is a decent pause here sometimes, so just use the read until
# function
elif "[Hit a key]" in buf:
app.sendl(comment="Yet another any key")
elif ' Read your mail now?' in buf:
app.send('n',
comment="No I will not read mail, I am a fucking robot")
elif 'Search all groups for new messages' in buf:
app.send('n',
comment="No I will not search for new messages, who uses a bbs for messages?")
elif 'Search all groups for un-read messages to you' in buf:
app.send('n',
comment="No messsages, how many fucking times do I ahve to tell you")
# sequence for TNSOA
elif 'TNSOA' in buf and " Main " in buf and " Notices:" in buf:
app.send_seq(['x', '2', app.get_app_value('game')])
return StartGame()
# sequence for shenks
elif 'Main' in buf and 'fidonet' in buf and 'AFTERSHOCK' in buf:
app.send_seq(['x', '2', app.get_app_value('game')])
return StartGame()
# sequence for xbit
elif 'Main' in buf and ' Xbit Local Echo ' in buf:
app.send_seq(['x', '4', app.get_app_value('game')])
return StartGame()
# sequence for ner
elif 'Main' in buf and ' NER BBS ' in buf:
app.send_seq(['x', '3', app.get_app_value('game')])
return StartGame()
# sequence for trans canada
elif 'Main' in buf and ' Trans-Canada ' in buf:
app.send_seq(['x', '7', app.get_app_value('game')])
return StartGame()
# sequence for Battlestar
elif ('Main' in buf and
('Battlestar BBS' in buf or
app.get_app_value("address") == 'battlestarbbs.dyndns.org')):
app.send_seq(['x', '33', app.get_app_value('game')])
return StartGame()
# sequence for The Loonie Bin
elif ('Main' in buf and ('The Loonie Bin' in buf or
app.get_app_value("address") == 'thelooniebin.net')):
# there is a "Hit a key" in there because this menu is so big
app.send_seq(['x', '\r', app.get_app_value('game')])
return StartGame()
# sequence for The Section One
elif ('Main' in buf and ('Section One BBS' in buf or
app.get_app_value("address") == 'sectiononebbs.com')):
# there is a "Hit a key" in there because this menu is so big
app.send_seq(['x', '5', app.get_app_value('game')])
return StartGame()
class Password(State):
def transition(self, app, buf):
if "Password:" in buf or "PW:" in buf:
app.sendl(app.get_app_value('password'))
buf = app.read_until("[Hit a key]")
app.sendl(comment="Is this the any key?")
return BBSMenus()
class Login(State):
def transition(self, app, buf):
if "Login:" in buf or "Enter Name, Number, 'New', or 'Guest'" in buf or 'NN:' in buf:
app.sendl(app.get_app_value('username'))
return Password()
elif 'Hit a key' in buf:
app.sendl(comment="Where is the any key?")
elif 'Matrix Login' in buf:
app.sendl(comment="login to login screen")
app.read()
app.sendl(app.get_app_value('username'))
app.read()
app.sendl(app.get_app_value('password'))
return BBSMenus()
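The classes above implement a screen-scraping state machine: each `State.transition` inspects the latest terminal buffer and either acts on `app` or returns the next state. A minimal, self-contained sketch of the same pattern (`FakeApp`, the prompt strings, and the two states here are illustrative stand-ins, not part of bbot):

```python
class State:
    """Base state: transition() may return the next state, or None to stay put."""
    def transition(self, app, buf):
        raise NotImplementedError

class Login(State):
    def transition(self, app, buf):
        if "Login:" in buf:
            app.send(app.username)
            return Password()

class Password(State):
    def transition(self, app, buf):
        if "Password:" in buf:
            app.send(app.password)
            return None  # stay here; a real bot would move on to the menus

class FakeApp:
    """Records what the bot sends, standing in for a real terminal session."""
    def __init__(self):
        self.username, self.password, self.sent = "bot", "hunter2", []
    def send(self, text):
        self.sent.append(text)

app = FakeApp()
state = Login()
for screen in ["Login:", "Password:"]:
    nxt = state.transition(app, screen)
    if nxt is not None:
        state = nxt
```

Keeping each screen's logic in its own class is what lets the bot add per-BBS menu sequences without touching the driver loop.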
AlwaysTraining/bbot | bbot/State.py | Python | mit | 35,873 | ["VisIt"] | bfa0178ea8d9502434d37092bb8b41bd16156bb8c643e57fb3164f23be5af9ec
#==========================================================================
#
# Copyright NumFOCUS
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0.txt
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#==========================================================================
# also test the import callback feature
def custom_callback(name, progress):
if progress == 0:
print("Loading %s..." % name, file=sys.stderr)
if progress == 1:
print("done", file=sys.stderr)
import sys
import os
import itkConfig
itkConfig.ImportCallback = custom_callback
import itk
# test setting the number of threads
itk.set_nthreads(4)
assert itk.get_nthreads() == 4
# test the force load function
itk.force_load()
filename = sys.argv[1]
mesh_filename = sys.argv[2]
PixelType = itk.UC
dim = 2
ImageType = itk.Image[PixelType, dim]
ReaderType = itk.ImageFileReader[ImageType]
reader = ReaderType.New(FileName=filename)
# test snake_case keyword arguments
reader = ReaderType.New(file_name=filename)
# test echo
itk.echo(reader)
itk.echo(reader, sys.stdout)
# test class_
assert itk.class_(reader) == ReaderType
assert itk.class_("dummy") == str
# test template
assert itk.template(ReaderType) == (itk.ImageFileReader, (ImageType,))
assert itk.template(reader) == (itk.ImageFileReader, (ImageType,))
try:
itk.template(str)
    raise Exception("unknown class should raise an exception")
except KeyError:
pass
# test ctype
assert itk.ctype("unsigned short") == itk.US
assert itk.ctype(" unsigned \n short \t ") == itk.US
assert itk.ctype("signed short") == itk.SS
assert itk.ctype("short") == itk.SS
try:
itk.ctype("dummy")
    raise Exception("unknown C type should raise an exception")
except KeyError:
pass
# test output
assert itk.output(reader) == reader.GetOutput()
assert itk.output(1) == 1
# test the deprecated image
assert itk.image(reader) == reader.GetOutput()
assert itk.image(1) == 1
# test size
s = itk.size(reader)
assert s[0] == s[1] == 256
s = itk.size(reader.GetOutput())
assert s[0] == s[1] == 256
# test physical size
s = itk.physical_size(reader)
assert s[0] == s[1] == 256.0
s = itk.physical_size(reader.GetOutput())
assert s[0] == s[1] == 256.0
# test spacing
s = itk.spacing(reader)
assert s[0] == s[1] == 1.0
s = itk.spacing(reader.GetOutput())
assert s[0] == s[1] == 1.0
# test origin
s = itk.origin(reader)
assert s[0] == s[1] == 0.0
s = itk.origin(reader.GetOutput())
assert s[0] == s[1] == 0.0
# test index
s = itk.index(reader)
assert s[0] == s[1] == 0
s = itk.index(reader.GetOutput())
assert s[0] == s[1] == 0
# test region
s = itk.region(reader)
assert s.GetIndex()[0] == s.GetIndex()[1] == 0
assert s.GetSize()[0] == s.GetSize()[1] == 256
s = itk.region(reader.GetOutput())
assert s.GetIndex()[0] == s.GetIndex()[1] == 0
assert s.GetSize()[0] == s.GetSize()[1] == 256
# test range
assert itk.range(reader) == (0, 255)
assert itk.range(reader.GetOutput()) == (0, 255)
# test write
itk.imwrite(reader, sys.argv[3])
itk.imwrite(reader, sys.argv[3], True)
# test read
image = itk.imread(filename)
assert type(image) == itk.Image[itk.RGBPixel[itk.UC],2]
image = itk.imread(filename, itk.F)
assert type(image) == itk.Image[itk.F,2]
image = itk.imread(filename, itk.F, fallback_only=True)
assert type(image) == itk.Image[itk.RGBPixel[itk.UC],2]
try:
image = itk.imread(filename, fallback_only=True)
# Should never reach this point if test passes since an exception
# is expected.
raise Exception('`itk.imread()` fallback_only should have failed')
except Exception as e:
if str(e) == "pixel_type must be set when using the fallback_only option":
pass
else:
raise e
# test mesh read / write
mesh = itk.meshread(mesh_filename)
assert type(mesh) == itk.Mesh[itk.F, 3]
mesh = itk.meshread(mesh_filename, itk.UC)
assert type(mesh) == itk.Mesh[itk.UC, 3]
mesh = itk.meshread(mesh_filename, itk.UC, fallback_only=True)
assert type(mesh) == itk.Mesh[itk.F, 3]
itk.meshwrite(mesh, sys.argv[4])
itk.meshwrite(mesh, sys.argv[4], compression=True)
# test search
res = itk.search("Index")
assert res[0] == "Index"
assert res[1] == "index"
assert "ContinuousIndex" in res
res = itk.search("index", True)
assert "Index" not in res
# test down_cast
obj = itk.Object.cast(reader)
# be sure that the reader is casted to itk::Object
assert obj.__class__ == itk.Object
down_casted = itk.down_cast(obj)
assert down_casted == reader
assert down_casted.__class__ == ReaderType
# test setting the IO manually
png_io = itk.PNGImageIO.New()
assert png_io.GetFileName() == ''
reader=itk.ImageFileReader.New(FileName=filename, ImageIO=png_io)
reader.Update()
assert png_io.GetFileName() == filename
# test reading image series
series_reader = itk.ImageSeriesReader.New(FileNames=[filename,filename])
series_reader.Update()
assert series_reader.GetOutput().GetImageDimension() == 3
assert series_reader.GetOutput().GetLargestPossibleRegion().GetSize()[2] == 2
# test reading image series and check that dimension is not increased if
# last dimension is 1.
image_series = itk.Image[itk.UC, 3].New()
image_series.SetRegions([10, 7, 1])
image_series.Allocate()
image_series.FillBuffer(0)
image_series3d_filename = os.path.join(
sys.argv[5], "image_series_extras_py.mha")
itk.imwrite(image_series, image_series3d_filename)
series_reader = itk.ImageSeriesReader.New(
FileNames=[image_series3d_filename, image_series3d_filename])
series_reader.Update()
assert series_reader.GetOutput().GetImageDimension() == 3
# test reading image series with itk.imread()
image_series = itk.imread([filename, filename])
assert image_series.GetImageDimension() == 3
# Numeric series filename generation without any integer index. It is
# only to produce an ITK object that users could set as an input to
# `itk.ImageSeriesReader.New()` or `itk.imread()` and test that it works.
numeric_series_filename = itk.NumericSeriesFileNames.New()
numeric_series_filename.SetStartIndex(0)
numeric_series_filename.SetEndIndex(3)
numeric_series_filename.SetIncrementIndex(1)
numeric_series_filename.SetSeriesFormat(filename)
image_series = itk.imread(numeric_series_filename.GetFileNames())
number_of_files = len(numeric_series_filename.GetFileNames())
assert image_series.GetImageDimension() == 3
assert image_series.GetLargestPossibleRegion().GetSize()[2] == number_of_files
# test reading image series with `itk.imread()` and check that dimension is
# not increased if last dimension is 1.
image_series = itk.imread([image_series3d_filename, image_series3d_filename])
assert image_series.GetImageDimension() == 3
# pipeline, auto_pipeline and templated class are tested in other files
# BridgeNumPy
try:
# Images
import numpy as np
image = itk.imread(filename)
arr = itk.GetArrayFromImage(image)
arr.fill(1)
assert np.any(arr != itk.GetArrayFromImage(image))
arr = itk.array_from_image(image)
arr.fill(1)
assert np.any(arr != itk.GetArrayFromImage(image))
view = itk.GetArrayViewFromImage(image)
view.fill(1)
assert np.all(view == itk.GetArrayFromImage(image))
image = itk.GetImageFromArray(arr)
image.FillBuffer(2)
assert np.any(arr != itk.GetArrayFromImage(image))
image = itk.GetImageViewFromArray(arr)
image.FillBuffer(2)
assert np.all(arr == itk.GetArrayFromImage(image))
image = itk.GetImageFromArray(arr, is_vector=True)
assert image.GetImageDimension() == 2
image = itk.GetImageViewFromArray(arr, is_vector=True)
assert image.GetImageDimension() == 2
arr = np.array([[1,2,3],[4,5,6]]).astype(np.uint8)
assert arr.shape[0] == 2
assert arr.shape[1] == 3
assert arr[1,1] == 5
image = itk.GetImageFromArray(arr)
arrKeepAxes = itk.GetArrayFromImage(image, keep_axes=True)
assert arrKeepAxes.shape[0] == 3
assert arrKeepAxes.shape[1] == 2
assert arrKeepAxes[1,1] == 4
arr = itk.GetArrayFromImage(image, keep_axes=False)
assert arr.shape[0] == 2
assert arr.shape[1] == 3
assert arr[1,1] == 5
arrKeepAxes = itk.GetArrayViewFromImage(image, keep_axes=True)
assert arrKeepAxes.shape[0] == 3
assert arrKeepAxes.shape[1] == 2
assert arrKeepAxes[1,1] == 4
arr = itk.GetArrayViewFromImage(image, keep_axes=False)
assert arr.shape[0] == 2
assert arr.shape[1] == 3
assert arr[1,1] == 5
arr = arr.copy()
image = itk.GetImageFromArray(arr)
image2 = type(image).New()
image2.Graft(image)
del image # Delete image but pixel data should be kept in img2
image = itk.GetImageFromArray(arr+1) # Fill former memory if wrongly released
assert np.array_equal(arr, itk.GetArrayViewFromImage(image2))
image2.SetPixel([0]*image2.GetImageDimension(), 3) # For mem check in dynamic analysis
# VNL Vectors
v1 = itk.vnl_vector.D(2)
v1.fill(1)
v_np = itk.GetArrayFromVnlVector(v1)
assert v1.get(0) == v_np[0]
v_np[0] = 0
assert v1.get(0) != v_np[0]
view = itk.GetArrayViewFromVnlVector(v1)
assert v1.get(0) == view[0]
view[0] = 0
assert v1.get(0) == view[0]
# VNL Matrices
m1 = itk.vnl_matrix.D(2,2)
m1.fill(1)
m_np = itk.GetArrayFromVnlMatrix(m1)
assert m1.get(0,0) == m_np[0,0]
m_np[0,0] = 0
assert m1.get(0,0) != m_np[0,0]
view = itk.GetArrayViewFromVnlMatrix(m1)
assert m1.get(0,0) == view[0,0]
view[0,0] = 0
assert m1.get(0,0) == view[0,0]
arr = np.zeros([3,3])
m_vnl = itk.GetVnlMatrixFromArray(arr)
assert m_vnl(0,0) == 0
m_vnl.put(0,0,3)
assert m_vnl(0,0) == 3
assert arr[0,0] == 0
# ITK Matrix
arr = np.zeros([3,3],float)
m_itk = itk.GetMatrixFromArray(arr)
# Test snake case function
m_itk = itk.matrix_from_array(arr)
m_itk.SetIdentity()
# Test that the numpy array has not changed,...
assert arr[0,0] == 0
# but that the ITK matrix has the correct value.
assert m_itk(0,0) == 1
arr2 = itk.GetArrayFromMatrix(m_itk)
# Check that snake case function also works
arr2 = itk.array_from_matrix(m_itk)
# Check that the new array has the new value.
assert arr2[0,0] == 1
arr2[0,0]=2
# Change the array value,...
assert arr2[0,0] == 2
# and make sure that the matrix hasn't changed.
assert m_itk(0,0) == 1
# test .astype
image = itk.imread(filename, itk.UC)
cast = image.astype(PixelType)
assert cast == image
cast = image.astype(itk.F)
assert cast.dtype == np.float32
cast = image.astype(itk.SS)
assert cast.dtype == np.int16
cast = image.astype(np.float32)
assert cast.dtype == np.float32
except ImportError:
print("NumPy not imported. Skipping BridgeNumPy tests")
# Numpy is not available, do not run the Bridge NumPy tests
pass
# xarray conversion
try:
import xarray as xr
import numpy as np
print('Testing xarray conversion')
image = itk.imread(filename)
image.SetSpacing((0.1, 0.2))
image.SetOrigin((30., 44.))
theta = np.radians(30)
cosine = np.cos(theta)
sine = np.sin(theta)
rotation = np.array(((cosine, -sine), (sine, cosine)))
image.SetDirection(rotation)
data_array = itk.xarray_from_image(image)
assert data_array.dims[0] == 'y'
assert data_array.dims[1] == 'x'
assert data_array.dims[2] == 'c'
assert np.array_equal(data_array.values, itk.array_from_image(image))
assert len(data_array.coords['x']) == 256
assert len(data_array.coords['y']) == 256
assert len(data_array.coords['c']) == 3
assert data_array.coords['x'][0] == 30.0
assert data_array.coords['x'][1] == 30.1
assert data_array.coords['y'][0] == 44.0
assert data_array.coords['y'][1] == 44.2
assert data_array.coords['c'][0] == 0
assert data_array.coords['c'][1] == 1
assert data_array.attrs['direction'][0,0] == cosine
assert data_array.attrs['direction'][0,1] == sine
assert data_array.attrs['direction'][1,0] == -sine
assert data_array.attrs['direction'][1,1] == cosine
round_trip = itk.image_from_xarray(data_array)
assert np.array_equal(itk.array_from_image(round_trip), itk.array_from_image(image))
spacing = round_trip.GetSpacing()
assert np.isclose(spacing[0], 0.1)
assert np.isclose(spacing[1], 0.2)
origin = round_trip.GetOrigin()
assert np.isclose(origin[0], 30.0)
assert np.isclose(origin[1], 44.0)
direction = round_trip.GetDirection()
assert np.isclose(direction(0,0), cosine)
assert np.isclose(direction(0,1), -sine)
assert np.isclose(direction(1,0), sine)
assert np.isclose(direction(1,1), cosine)
wrong_order = data_array.swap_dims({'y':'z'})
try:
round_trip = itk.image_from_xarray(wrong_order)
assert False
except ValueError:
pass
empty_array = np.array([], dtype=np.uint8)
empty_array.shape = (0,0,0)
empty_image = itk.image_from_array(empty_array)
empty_da = itk.xarray_from_image(empty_image)
empty_image_round = itk.image_from_xarray(empty_da)
except ImportError:
print('xarray not imported. Skipping xarray conversion tests')
pass
# vtk conversion
try:
import vtk
import numpy as np
print('Testing vtk conversion')
image = itk.image_from_array(np.random.rand(2,3,4))
vtk_image = itk.vtk_image_from_image(image)
image_round = itk.image_from_vtk_image(vtk_image)
assert(np.array_equal(itk.origin(image), itk.origin(image_round)))
assert(np.array_equal(itk.spacing(image), itk.spacing(image_round)))
assert(np.array_equal(itk.size(image), itk.size(image_round)))
assert(np.array_equal(itk.array_view_from_image(image), itk.array_view_from_image(image_round)))
image = itk.image_from_array(np.random.rand(5,4,2).astype(np.float32), is_vector=True)
vtk_image = itk.vtk_image_from_image(image)
image_round = itk.image_from_vtk_image(vtk_image)
assert(np.array_equal(itk.origin(image), itk.origin(image_round)))
assert(np.array_equal(itk.spacing(image), itk.spacing(image_round)))
assert(np.array_equal(itk.size(image), itk.size(image_round)))
assert(np.array_equal(itk.array_view_from_image(image), itk.array_view_from_image(image_round)))
except ImportError:
print('vtk not imported. Skipping vtk conversion tests')
pass
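A recurring theme in the tests above is the difference between `GetArrayFromImage` (an independent copy) and `GetArrayViewFromImage` (a view over the same pixel buffer). The copy-vs-view distinction can be demonstrated with stdlib types alone, no ITK required (the bytearray here is a stand-in for an image's pixel buffer):

```python
buf = bytearray(b"\x00\x00\x00\x00")  # stand-in for the image's pixel buffer
view = memoryview(buf)                # shares memory, like GetArrayViewFromImage
copy = bytes(buf)                     # independent snapshot, like GetArrayFromImage

buf[0] = 7                            # mutate the underlying buffer
print(view[0], copy[0])               # the view reflects the change; the copy does not
```

This is exactly why the tests assert `np.any(arr != ...)` after filling a copy but `np.all(view == ...)` after filling a view.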
malaterre/ITK | Wrapping/Generators/Python/Tests/extras.py | Python | apache-2.0 | 14,695 | ["VTK"] | 39758aa5b756346bc6f1dde3c4f5e4bcf7b65c6beebdda3d1efebb20d9762afa
# coding: utf-8
# Copyright (c) Pymatgen Development Team.
# Distributed under the terms of the MIT License.
from __future__ import division, unicode_literals
import math
import numpy as np
from monty.dev import deprecated
"""
Utilities for generating nicer plots.
"""
__author__ = "Shyue Ping Ong"
__copyright__ = "Copyright 2012, The Materials Project"
__version__ = "0.1"
__maintainer__ = "Shyue Ping Ong"
__email__ = "shyuep@gmail.com"
__date__ = "Mar 13, 2012"
def pretty_plot(width=8, height=None, plt=None, dpi=None,
color_cycle=("qualitative", "Set1_9")):
"""
Provides a publication quality plot, with nice defaults for font sizes etc.
Args:
width (float): Width of plot in inches. Defaults to 8in.
height (float): Height of plot in inches. Defaults to width * golden
ratio.
plt (matplotlib.pyplot): If plt is supplied, changes will be made to an
existing plot. Otherwise, a new plot will be created.
dpi (int): Sets dot per inch for figure. Defaults to 300.
color_cycle (tuple): Set the color cycle for new plots to one of the
color sets in palettable. Defaults to a qualitative Set1_9.
Returns:
Matplotlib plot object with properly sized fonts.
"""
ticksize = int(width * 2.5)
golden_ratio = (math.sqrt(5) - 1) / 2
if not height:
height = int(width * golden_ratio)
if plt is None:
import matplotlib.pyplot as plt
import importlib
mod = importlib.import_module("palettable.colorbrewer.%s" %
color_cycle[0])
colors = getattr(mod, color_cycle[1]).mpl_colors
from cycler import cycler
plt.figure(figsize=(width, height), facecolor="w", dpi=dpi)
ax = plt.gca()
ax.set_prop_cycle(cycler('color', colors))
else:
fig = plt.gcf()
fig.set_size_inches(width, height)
plt.xticks(fontsize=ticksize)
plt.yticks(fontsize=ticksize)
ax = plt.gca()
ax.set_title(ax.get_title(), size=width * 4)
labelsize = int(width * 3)
ax.set_xlabel(ax.get_xlabel(), size=labelsize)
ax.set_ylabel(ax.get_ylabel(), size=labelsize)
return plt
@deprecated(pretty_plot, "get_publication_quality_plot has been renamed "
"pretty_plot. This stub will be removed in pmg 5.0.")
def get_publication_quality_plot(*args, **kwargs):
return pretty_plot(*args, **kwargs)
def pretty_plot_two_axis(x, y1, y2, xlabel=None, y1label=None, y2label=None,
width=8, height=None, dpi=300):
"""
Variant of pretty_plot that does a dual axis plot. Adapted from matplotlib
examples. Makes it easier to create plots with different axes.
Args:
x (np.ndarray/list): Data for x-axis.
y1 (dict/np.ndarray/list): Data for y1 axis (left). If a dict, it will
be interpreted as a {label: sequence}.
y2 (dict/np.ndarray/list): Data for y2 axis (right). If a dict, it will
be interpreted as a {label: sequence}.
xlabel (str): If not None, this will be the label for the x-axis.
y1label (str): If not None, this will be the label for the y1-axis.
y2label (str): If not None, this will be the label for the y2-axis.
width (float): Width of plot in inches. Defaults to 8in.
height (float): Height of plot in inches. Defaults to width * golden
ratio.
dpi (int): Sets dot per inch for figure. Defaults to 300.
Returns:
matplotlib.pyplot
"""
import palettable.colorbrewer.diverging
colors = palettable.colorbrewer.diverging.RdYlBu_4.mpl_colors
c1 = colors[0]
c2 = colors[-1]
golden_ratio = (math.sqrt(5) - 1) / 2
if not height:
height = int(width * golden_ratio)
import matplotlib.pyplot as plt
labelsize = int(width * 3)
ticksize = int(width * 2.5)
    styles = ["-", "--", "-.", ":"]
fig, ax1 = plt.subplots()
fig.set_size_inches((width, height))
if dpi:
fig.set_dpi(dpi)
if isinstance(y1, dict):
for i, (k, v) in enumerate(y1.items()):
ax1.plot(x, v, c=c1, marker='s', ls=styles[i % len(styles)],
label=k)
ax1.legend(fontsize=labelsize)
else:
ax1.plot(x, y1, c=c1, marker='s', ls='-')
if xlabel:
ax1.set_xlabel(xlabel, fontsize=labelsize)
if y1label:
# Make the y-axis label, ticks and tick labels match the line color.
ax1.set_ylabel(y1label, color=c1, fontsize=labelsize)
ax1.tick_params('x', labelsize=ticksize)
ax1.tick_params('y', colors=c1, labelsize=ticksize)
ax2 = ax1.twinx()
if isinstance(y2, dict):
for i, (k, v) in enumerate(y2.items()):
ax2.plot(x, v, c=c2, marker='o', ls=styles[i % len(styles)],
label=k)
ax2.legend(fontsize=labelsize)
else:
ax2.plot(x, y2, c=c2, marker='o', ls='-')
if y2label:
# Make the y-axis label, ticks and tick labels match the line color.
ax2.set_ylabel(y2label, color=c2, fontsize=labelsize)
ax2.tick_params('y', colors=c2, labelsize=ticksize)
return plt
def pretty_polyfit_plot(x, y, deg=1, xlabel=None, ylabel=None,
**kwargs):
"""
Convenience method to plot data with trend lines based on polynomial fit.
Args:
x: Sequence of x data.
y: Sequence of y data.
deg (int): Degree of polynomial. Defaults to 1.
xlabel (str): Label for x-axis.
ylabel (str): Label for y-axis.
\\*\\*kwargs: Keyword args passed to pretty_plot.
Returns:
matplotlib.pyplot object.
"""
plt = pretty_plot(**kwargs)
pp = np.polyfit(x, y, deg)
xp = np.linspace(min(x), max(x), 200)
plt.plot(xp, np.polyval(pp, xp), 'k--', x, y, 'o')
if xlabel:
plt.xlabel(xlabel)
if ylabel:
plt.ylabel(ylabel)
return plt
def get_ax_fig_plt(ax=None):
"""
Helper function used in plot functions supporting an optional Axes argument.
If ax is None, we build the `matplotlib` figure and create the Axes else
we return the current active figure.
Returns:
ax: :class:`Axes` object
figure: matplotlib figure
plt: matplotlib pyplot module.
"""
import matplotlib.pyplot as plt
if ax is None:
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
else:
fig = plt.gcf()
return ax, fig, plt
def get_ax3d_fig_plt(ax=None):
"""
Helper function used in plot functions supporting an optional Axes3D
argument. If ax is None, we build the `matplotlib` figure and create the
Axes3D else we return the current active figure.
Returns:
ax: :class:`Axes` object
figure: matplotlib figure
plt: matplotlib pyplot module.
"""
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
if ax is None:
fig = plt.figure()
ax = axes3d.Axes3D(fig)
else:
fig = plt.gcf()
return ax, fig, plt
def get_axarray_fig_plt(ax_array, nrows=1, ncols=1, sharex=False, sharey=False,
squeeze=True, subplot_kw=None, gridspec_kw=None,
**fig_kw):
"""
Helper function used in plot functions that accept an optional array of Axes
as argument. If ax_array is None, we build the `matplotlib` figure and
create the array of Axes by calling plt.subplots else we return the
current active figure.
Returns:
ax: Array of :class:`Axes` objects
figure: matplotlib figure
plt: matplotlib pyplot module.
"""
import matplotlib.pyplot as plt
if ax_array is None:
fig, ax_array = plt.subplots(nrows=nrows, ncols=ncols, sharex=sharex,
sharey=sharey, squeeze=squeeze,
subplot_kw=subplot_kw,
gridspec_kw=gridspec_kw, **fig_kw)
else:
fig = plt.gcf()
if squeeze:
ax_array = np.array(ax_array).ravel()
if len(ax_array) == 1:
            ax_array = ax_array[0]
return ax_array, fig, plt
def add_fig_kwargs(func):
"""
Decorator that adds keyword arguments for functions returning matplotlib
figures.
The function should return either a matplotlib figure or None to signal
some sort of error/unexpected event.
See doc string below for the list of supported options.
"""
from functools import wraps
@wraps(func)
def wrapper(*args, **kwargs):
# pop the kwds used by the decorator.
title = kwargs.pop("title", None)
size_kwargs = kwargs.pop("size_kwargs", None)
show = kwargs.pop("show", True)
savefig = kwargs.pop("savefig", None)
tight_layout = kwargs.pop("tight_layout", False)
# Call func and return immediately if None is returned.
fig = func(*args, **kwargs)
if fig is None:
return fig
# Operate on matplotlib figure.
if title is not None:
fig.suptitle(title)
if size_kwargs is not None:
fig.set_size_inches(size_kwargs.pop("w"), size_kwargs.pop("h"),
**size_kwargs)
if savefig:
fig.savefig(savefig)
if tight_layout:
fig.tight_layout()
if show:
import matplotlib.pyplot as plt
plt.show()
return fig
# Add docstring to the decorated method.
s = "\n" + """\
keyword arguments controlling the display of the figure:
================ ====================================================
kwargs Meaning
================ ====================================================
title Title of the plot (Default: None).
show True to show the figure (default: True).
savefig 'abc.png' or 'abc.eps' to save the figure to a file.
size_kwargs Dictionary with options passed to fig.set_size_inches
example: size_kwargs=dict(w=3, h=4)
tight_layout True if to call fig.tight_layout (default: False)
================ ===================================================="""
if wrapper.__doc__ is not None:
# Add s at the end of the docstring.
wrapper.__doc__ += "\n" + s
else:
# Use s
wrapper.__doc__ = s
return wrapper
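The `add_fig_kwargs` decorator above illustrates a reusable pattern: pop presentation-only keyword arguments before delegating to the wrapped function, then apply them to the result. A stripped-down, matplotlib-free sketch of the same pattern (the dict "figure" is a hypothetical stand-in for a real Figure object):

```python
from functools import wraps

def add_options(func):
    """Sketch of the add_fig_kwargs pattern: strip decorator-owned kwargs
    before calling the wrapped function, then post-process its result."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        title = kwargs.pop("title", None)   # decorator-owned option
        result = func(*args, **kwargs)      # wrapped function never sees it
        if result is None:                  # propagate error/None results
            return None
        if title is not None:
            result["title"] = title
        return result
    return wrapper

@add_options
def make_figure(data):
    return {"data": data}

fig = make_figure([1, 2], title="demo")
```

Using `functools.wraps` preserves the wrapped function's name and docstring, which is what lets `add_fig_kwargs` append its option table to the original docstring.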
tallakahath/pymatgen | pymatgen/util/plotting.py | Python | mit | 10,567 | ["pymatgen"] | 822ebbbacdc7a4c56f14392049740c7f6ee94b32a7efa887e7949d1939c42248
|
#From : https://github.com/AnalogArsonist/qpsk_ofdm_system
import numpy
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
from scipy.special import erfc
import sys
import logging
logging.basicConfig(stream=sys.stderr, level=logging.DEBUG)
#size of constellation (N symbols per frame; N frames per constellation)
N=64
x=2*numpy.random.randint(0,2,(N,N))-1 #real part of symbol matrix 's'
y=2*numpy.random.randint(0,2,(N,N))-1 #imaginary part of symbol matrix 's'
s=x+1j*y #complex matrix (x+y) of QPSK symbols
t,w=(numpy.empty((N,N), dtype=complex) for i in range(2)) #generate two empty, NxN arrays for use later
logging.debug('s=%s',s)
# definitions for the plot axes
left, width=0.1,0.65
bottom, height=0.1,0.65
bottom_h=left_h=left+width+0.02
rect_scatter=[left,bottom,width,height]
rect_histx=[left,bottom_h,width,0.2]
rect_histy=[left_h,bottom,0.2,height]
# start with a rectangular figure
plt.figure(1, figsize=(8,8))
# set up plots
axScatter=plt.axes(rect_scatter)
axHistx=plt.axes(rect_histx)
axHisty=plt.axes(rect_histy)
# no axis labels for box plots
axHistx.xaxis.set_major_formatter(NullFormatter())
axHisty.yaxis.set_major_formatter(NullFormatter())
# Generate SNR scale factor for AWGN generation:
error_sum=0.0 # initialize counter to zero to be used in BER calculation
SNR_MIN=-10
SNR_MAX=10
SNR=SNR_MIN # desired SNR used to determine noise power
Eb_No_lin=10**(SNR/10.0) # convert SNR from dB to linear scale
logging.debug('Eb_No_lin=%s',Eb_No_lin)
No=1.0/Eb_No_lin # Linear power of the noise; average signal power = 1 (0dB)
scale=numpy.sqrt(No/2) # variable to scale random noise values in AWGN loop
logging.debug('No=%s',No)
logging.debug('scale=%s',scale)
# loop through each frame, modulate, add gaussian noise (AWGN) then decode back in symbols
for i in range(N):
n=numpy.fft.ifftn(numpy.random.normal(scale=scale, size=N)+1j*numpy.random.normal(scale=scale, size=N)) # array of noise
#n=0 # uncomment here and comment above if you want to remove all noise
logging.debug('n[%s]=\n%s',i,n)
t[i]=numpy.fft.ifftn(s[i])
w[i]=numpy.fft.fftn(t[i]+n) # add noise here
# decode received signal + noise back into bins/symbols
z=numpy.sign(numpy.real(w[i]))+1j*numpy.sign(numpy.imag(w[i]))
logging.debug('z of loop %s=\n%s',i,z)
logging.debug('z!=s[%s]=\n%s',i,z!=s[i])
# find errors
err=numpy.where(z != s[i])
logging.debug('err[%s]=\n%s',i,err)
# add up errors per frame
error_sum+=float(len(err[0]))
logging.debug('error_sum[%s]=\n%s',i,error_sum)
# show total error for entire NxN message
BER=error_sum/N**2
logging.debug('Final error_sum = %s out of a total possible %s symbols',error_sum,N**2)
logging.debug('Total BER=%s',BER)
# scatter plot:
axScatter.scatter(numpy.real(w),numpy.imag(w))
# draw axes at origin
axScatter.axhline(0, color='black')
axScatter.axvline(0, color='black')
# add title (at x-axis) to scatter plot
#title = 'Zero noise'
title = 'SNR = %sdB with a BER of %s' % (SNR,BER)
axScatter.xaxis.set_label_text(title)
# now determine nice limits by hand:
binwidth = 0.25 # width of histogram 'bins'
xymax = numpy.max( [numpy.max(numpy.fabs(numpy.real(w))), numpy.max(numpy.fabs(numpy.imag(w)))] ) # find abs max symbol value; nominally 1
lim = ( int(xymax/binwidth) + 1) * binwidth # create limit that is one 'binwidth' greater than 'xymax'
axScatter.set_xlim( (-lim, lim) ) # set the data limits for the xaxis -- autoscale
axScatter.set_ylim( (-lim, lim) ) # set the data limits for the yaxis --
bins = numpy.arange(-lim, lim + binwidth, binwidth) # create bins 'binwidth' apart between -lim and +lim -- autoscale
axHistx.hist(numpy.real(w), bins=bins) # plot a histogram - xaxis are real values
axHisty.hist(numpy.imag(w), bins=bins, orientation='horizontal') # plot a histogram - yaxis are imaginary values
axHistx.set_xlim( axScatter.get_xlim() ) # set histogram axes to match scatter plot axes limits
axHisty.set_ylim( axScatter.get_ylim() ) # set histogram axes to match scatter plot axes limits
plt.show()
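The script imports `erfc` but never uses it, presumably intended for the theoretical curve. For Gray-coded QPSK over AWGN the theoretical bit error rate is Pb = 0.5 * erfc(sqrt(Eb/N0)), which can be computed with the stdlib alone as a reference against the simulated BER above (a sketch, not part of the original script):

```python
import math

def qpsk_theoretical_ber(snr_db):
    """Theoretical BER of Gray-coded QPSK over AWGN: Pb = 0.5*erfc(sqrt(Eb/No))."""
    eb_no = 10 ** (snr_db / 10.0)   # dB to linear, matching Eb_No_lin above
    return 0.5 * math.erfc(math.sqrt(eb_no))

for snr_db in (-10, -5, 0, 5, 10):
    print("SNR %+3d dB -> theoretical BER %.4g" % (snr_db, qpsk_theoretical_ber(snr_db)))
```

At low SNR the simulated BER should hover near this curve; large gaps usually point at a normalization error in the noise scaling.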
TomKealy/pyDemod | QPSKSig.py | Python | gpl-2.0 | 4,030 | ["Gaussian"] | 8d609ff53c6f1c72a089b0041b5202d614602b6603aa7614e352e7809ee66a88
|
__author__ = 'brian'
"""
Output in test run:
Looking up 10 keys from Google Finance
Time to download 10 stocks from Google with Multi-Threading : 6.987292528152466 seconds.
Looking up 10 keys from Google Finance
Time to download 10 stocks from Google with Multi Processing : 6.1684489250183105 seconds.
Looking up 10 keys from Google Finance
Time to download 10 stocks from Google with Single Threading : 7.67667818069458 seconds.
Process finished with exit code 0
"""
import concurrentpandas
import time
# Define your keys
finance_keys = ["aapl", "xom", "msft", "goog", "brk-b", "TSLA", "IRBT", "VTI", "VT", "VNQ"]
# Instantiate Concurrent Pandas
fast_panda = concurrentpandas.ConcurrentPandas()
# Set your data source
fast_panda.set_source_google_finance()
# Insert your keys
fast_panda.insert_keys(finance_keys)
# Choose either asynchronous threads, processes, or a single sequential download
pre = time.time()
fast_panda.consume_keys_asynchronous_threads()
post = time.time()
print("Time to download 10 stocks from Google with Multi-Threading : " + str(post - pre) + " seconds.")
# Insert your keys
fast_panda.insert_keys(finance_keys)
# Choose either asynchronous threads, processes, or a single sequential download
pre = time.time()
fast_panda.consume_keys_asynchronous_processes()
post = time.time()
print("Time to download 10 stocks from Google with Multi Processing : " + str(post - pre) + " seconds.")
# Insert your keys
fast_panda.insert_keys(finance_keys)
# Choose either asynchronous threads, processes, or a single sequential download
pre = time.time()
fast_panda.consume_keys()
post = time.time()
print("Time to download 10 stocks from Google with Single Threading : " + str(post - pre) + " seconds.")
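The benchmark compares threaded, multiprocess, and sequential downloads of the same key list. The threaded case can be sketched with the stdlib `concurrent.futures` alone (the `fetch` function is a sleep-based stand-in for a network download, not part of concurrentpandas):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(key):
    """Stand-in for an I/O-bound download: blocks briefly, returns a result."""
    time.sleep(0.05)
    return key.upper()

def run_threaded(keys, workers=4):
    # Threads overlap the blocking I/O waits, so total wall time is roughly
    # the longest single fetch rather than the sum of all fetches.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, keys))

keys = ["aapl", "xom", "msft", "goog"]
start = time.time()
results = run_threaded(keys)
elapsed = time.time() - start
```

For I/O-bound work like this, threads and processes perform similarly (as the timings in the docstring show); processes only pull ahead when the per-key work is CPU-bound.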
briwilcox/Concurrent-Pandas | benchmark.py | Python | apache-2.0 | 1,750 | ["Brian"] | 5df91a844e05b9f974bfe2095acb6c91efc5bb93820fa5dd90d62c15d1b88961
|
#!/usr/bin/env python
"""
Michael Hirsch
Original fortran code by A. E. Hedin
from ftp://hanna.ccmc.gsfc.nasa.gov/pub/modelweb/atmospheric/hwm93/
"""
from pathlib import Path
from numpy import arange
from dateutil.parser import parse
from argparse import ArgumentParser
import hwm93
from matplotlib.pyplot import show
from hwm93.plots import plothwm
def main():
p = ArgumentParser(description="calls HWM93 from Python, a basic demo")
p.add_argument("simtime", help="yyyy-mm-ddTHH:MM:SS time of sim", nargs="?", default="2016-01-01T12")
p.add_argument("-a", "--altkm", help="altitude (km) (start,stop,step)", type=float, nargs="+", default=(60, 1000, 5))
    p.add_argument("-c", "--latlon", help="geodetic latitude, longitude (deg)", type=float, nargs=2, default=(65, -148))
p.add_argument("f107a", help=" 81 day AVERAGE OF F10.7 FLUX (centered on day DDD)", type=float, nargs="?", default=150)
p.add_argument("f107", help="DAILY F10.7 FLUX FOR PREVIOUS DAY", type=float, nargs="?", default=150)
p.add_argument("ap", help="daily ap", type=int, nargs="?", default=4)
p.add_argument("-o", "--outfn", help="write NetCDF (HDF5) of data")
p = p.parse_args()
altkm = arange(p.altkm[0], p.altkm[1], p.altkm[2])
glat, glon = p.latlon
T = parse(p.simtime)
winds = hwm93.run(T, altkm, glat, glon, p.f107a, p.f107, p.ap)
if p.outfn:
outfn = Path(p.outfn).expanduser()
print("writing", outfn)
winds.to_netcdf(outfn)
plothwm(winds)
show()
if __name__ == "__main__":
main()
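One subtlety in the altitude grid built above: `numpy.arange` is half-open, so the default `(60, 1000, 5)` specification stops at 995 km, not 1000 km. A minimal check:

```python
from numpy import arange

# arange excludes the stop value, so the grid ends one step below 1000.
altkm = arange(60, 1000, 5)
print(altkm[0], altkm[-1], len(altkm))
```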
|
scienceopen/pyhwm93
|
RunHWM93.py
|
Python
|
mit
| 1,540
|
[
"NetCDF"
] |
b687a0cfe518ef39269fd6a52f08bbd2c8bbf0bdd120c67c48eeec9ba4e75cba
|
#! /usr/bin/python
# -*- coding: utf-8 -*-
#
# Vladimír Slávik 2012
# Python 3.2
#
# for Simutrans
# http://www.simutrans.com
#
# code is public domain
#
# read all dat files in all subfolders
# reformat pictures and save aside
import os, math, sys
import simutools
#-----
Data = []
paksize = 128
outdir = "dump"
font = None
fntsize = 24
minwidth = 3 # tiles
loadedimages = {}
#-----
def procObj(obj) :
objname = obj.ask("name")
objkind = obj.ask("obj")
cname = simutools.canonicalObjName(objname)
if use_obj_kind_prefix :
cname = "%s,%s" % (cname, objkind)
print("processing", objname)
images = []
for param in simutools.ImageParameterNames :
images += map(lambda i: (param, i[0], i[1]), obj.ask_indexed(param))
for param in simutools.SingleImageParameterNames :
value = obj.ask(param)
if value :
images.append((param, "", value))
total_length_px = len(images) * paksize
max_width_px = 1024.0
min_width_px = 384.0 # both fixed, based on common screen and text
new_width_px = max(min(total_length_px, max_width_px), min_width_px) # restrict to range
new_width = int(float(new_width_px) / paksize)
new_height = math.ceil(len(images) / float(new_width)) + 1
newimage = pygame.Surface((paksize * new_width, paksize * new_height))
newimage.fill((231,255,255)) # background
newimage.fill((255,255,255), (0, 0, new_width * paksize, paksize)) # description stripe
text = "name: " + objname
surf = font.render(text, False, (0,0,0), (255,255,255)) # black text on white
newimage.blit(surf, (10, 10))
text = "copyright: " + obj.ask("copyright", "?", False)
surf = font.render(text, False, (0,0,0), (255,255,255))
newimage.blit(surf, (10, 10 + fntsize + 4))
type = obj.ask("type", "?")
if type != "?" :
text = "obj: %s ; type: %s" % (objkind, type)
else :
text = "obj: %s" % objkind
surf = font.render(text, False, (0,0,0), (255,255,255))
newimage.blit(surf, (10, 10 + 2 * (fntsize + 4)))
text = "timeline: %s - %s" % (obj.ask("intro_year", "*"), obj.ask("retire_year", "*"))
surf = font.render(text, False, (0,0,0), (255,255,255))
newimage.blit(surf, (10, 10 + 3 * (fntsize + 4)))
targetcoords = [0, 1]
for i in range(len(images)) :
kind, indices, imgref = images[i]
imgref = simutools.SimutransImgParam(imgref)
if not imgref.isEmpty() :
imgname = os.path.normpath(os.path.join(os.path.dirname(obj.srcfile), imgref.file + ".png"))
if not imgname in loadedimages :
loadedimages[imgname] = pygame.image.load(imgname)
srccoords_px = pygame.Rect(imgref.coords[1] * paksize, \
imgref.coords[0] * paksize, paksize, paksize) # calculate position on old image
imgref.coords[0] = targetcoords[1]; imgref.coords[1] = targetcoords[0]
targetcoords_px = [x * paksize for x in targetcoords]
newimage.blit(loadedimages[imgname], targetcoords_px, srccoords_px) # copy image tile
imgref.file = cname
obj.put(kind + indices, str(imgref))
targetcoords[0] += 1
if targetcoords[0] >= new_width :
targetcoords[0] = 0
targetcoords[1] += 1
pygame.image.save(newimage, os.path.join(outdir, cname + ".png"))
f = open(os.path.join(outdir, cname + ".dat"), 'w')
for l in obj.lines :
f.write(l)
f.close()
#-----
# main() is this piece of code
try :
import pygame
except ImportError :
print("This script needs PyGame to work!")
print("Visit http://www.pygame.org to get it.")
else :
pygame.font.init()
font = pygame.font.Font(None, fntsize)
use_obj_kind_prefix = "-prefix" in sys.argv
simutools.walkFiles(os.getcwd(), simutools.loadFile, cbparam=Data)
simutools.pruneList(Data)
if not os.path.exists(outdir) :
os.mkdir(outdir)
for item in Data :
procObj(item)
#-----
# EOF
|
simutrans/pak128
|
tools/split-items-multiimage.py
|
Python
|
artistic-2.0
| 3,752
|
[
"VisIt"
] |
4dbdb3fc0bfe5cbea0724462c2b9bdf7678ad3d0fe8302e7de57376becc7c713
|
import ocl
import camvtk
import time
import vtk
import datetime
import math
# 2018.08: Weave not wrapped..
def generateRange(zmin,zmax,zNmax):
if zNmax>1:
dz = (float(zmax)-float(zmin))/(zNmax-1)
else:
dz = 0
zvals=[]
for n in range(0,zNmax):
zvals.append(zmin+n*dz)
return zvals
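`generateRange` divides [zmin, zmax] into zNmax evenly spaced values including both endpoints (like `numpy.linspace`); a self-contained check of the same arithmetic (the copy below is for illustration only):

```python
def generate_range(zmin, zmax, n_max):
    # Same arithmetic as generateRange above: n_max evenly spaced
    # values, with both endpoints included when n_max > 1.
    dz = (float(zmax) - float(zmin)) / (n_max - 1) if n_max > 1 else 0
    return [zmin + n * dz for n in range(n_max)]

print(generate_range(0.0, 10.0, 5))  # [0.0, 2.5, 5.0, 7.5, 10.0]
```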
def waterline(cutter, s, zh, tol = 0.1 ):
bpc = ocl.BatchPushCutter()
bpc.setSTL(s)
bpc.setCutter(cutter)
bounds = s.getBounds()
xmin= bounds[0] - 2*cutter.getRadius()
xmax= bounds[1] + 2*cutter.getRadius()
ymin= bounds[2] - 2*cutter.getRadius()
ymax= bounds[3] + 2*cutter.getRadius()
Nx= int( (xmax-xmin)/tol )
Ny= int( (ymax-ymin)/tol )
xvals = generateRange(xmin,xmax,Nx)
yvals = generateRange(ymin,ymax,Ny)
for y in yvals:
f1 = ocl.Point(xmin,y,zh) # start point of fiber
f2 = ocl.Point(xmax,y,zh) # end point of fiber
f = ocl.Fiber( f1, f2)
bpc.appendFiber(f)
for x in xvals:
f1 = ocl.Point(x,ymin,zh) # start point of fiber
f2 = ocl.Point(x,ymax,zh) # end point of fiber
f = ocl.Fiber( f1, f2)
bpc.appendFiber(f)
bpc.run()
fibers = bpc.getFibers() # get fibers
w = ocl.Weave()
for f in fibers:
w.addFiber(f)
    print(" build()...", end="")
w.build()
print("done")
    print(" split()...", end="")
subw = w.get_components()
print("done. graph has", len(subw),"sub-weaves")
m=0
loops = []
for sw in subw:
        print(m, " face_traverse...", end="")
t_before = time.time()
sw.face_traverse()
t_after = time.time()
calctime = t_after-t_before
print("done in ", calctime," s.")
sw_loops = sw.getLoops()
for lop in sw_loops:
loops.append(lop)
m=m+1
return loops
if __name__ == "__main__":
print(ocl.version())
myscreen = camvtk.VTKScreen()
#stl = camvtk.STLSurf("../stl/demo.stl")
stl = camvtk.STLSurf("../../stl/gnu_tux_mod.stl")
myscreen.addActor(stl)
stl.SetWireframe()
#stl.SetSurface()
stl.SetColor(camvtk.cyan)
polydata = stl.src.GetOutput()
s = ocl.STLSurf()
camvtk.vtkPolyData2OCLSTL(polydata, s)
print("STL surface read,", s.size(), "triangles")
zh=1.9
cutter_diams = generateRange(0.1, 6, 5)
loops = []
length = 20 # cutter length
for diam in cutter_diams:
cutter = ocl.CylCutter( diam, length )
cutter_loops = waterline(cutter, s, zh, 0.05 )
for l in cutter_loops:
loops.append(l)
print("All waterlines done. Got", len(loops)," loops in total.")
# draw the loops
for lop in loops:
n = 0
N = len(lop)
first_point=ocl.Point(-1,-1,5)
previous=ocl.Point(-1,-1,5)
for p in lop:
if n==0: # don't draw anything on the first iteration
previous=p
first_point = p
elif n== (N-1): # the last point
myscreen.addActor( camvtk.Line(p1=(previous.x,previous.y,previous.z),p2=(p.x,p.y,p.z),color=camvtk.yellow) ) # the normal line
# and a line from p to the first point
myscreen.addActor( camvtk.Line(p1=(p.x,p.y,p.z),p2=(first_point.x,first_point.y,first_point.z),color=camvtk.yellow) )
else:
myscreen.addActor( camvtk.Line(p1=(previous.x,previous.y,previous.z),p2=(p.x,p.y,p.z),color=camvtk.yellow) )
previous=p
n=n+1
print("done.")
myscreen.camera.SetPosition(0.5, 3, 2)
myscreen.camera.SetFocalPoint(0.5, 0.5, 0)
camvtk.drawArrows(myscreen,center=(-0.5,-0.5,-0.5))
camvtk.drawOCLtext(myscreen)
myscreen.render()
myscreen.iren.Start()
#raw_input("Press Enter to terminate")
|
aewallin/opencamlib
|
examples/python/fiber/fiber_10_offsets.py
|
Python
|
lgpl-2.1
| 3,803
|
[
"VTK"
] |
39caeb8232845e3ab93eba86382470963f2a89337893c550450191fe9f109b21
|
#!/usr/bin/env python
"""
This is a script for quick VTK-based visualizations of finite element
computations results.
Examples
--------
The examples assume that run_tests.py has been run successfully and the
resulting data files are present.
- View data in output-tests/test_navier_stokes.vtk.
$ python resview.py output-tests/test_navier_stokes.vtk
- Customize the above output:
plot0: field "p", switch on edges,
plot1: field "u", surface with opacity 0.4, glyphs scaled by factor 2e-2.
$ python resview.py output-tests/test_navier_stokes.vtk -f p:e:p0 u:o.4:p1 u:g:f2e-2:p1
- As above, but glyphs are scaled by the factor determined automatically as
20% of the minimum bounding box size.
$ python resview.py output-tests/test_navier_stokes.vtk -f p:e:p0 u:o.4:p1 u:g:f10%:p1
- View data and take a screenshot.
$ python resview.py output-tests/test_poisson.vtk -o image.png
- Take a screenshot without a window popping up.
$ python resview.py output-tests/test_poisson.vtk -o image.png --off-screen
- Create animation from output-tests/test_time_poisson.*.vtk.
$ python resview.py output-tests/test_time_poisson.*.vtk -a mov.mp4
- Create animation from output-tests/test_hyperelastic.*.vtk,
set frame rate to 3, plot displacements and mooney_rivlin_stress.
$ python resview.py output-tests/test_hyperelastic_TL.*.vtk -f u:wu:e:p0 mooney_rivlin_stress:p1 -a mov.mp4 -r 3
"""
from argparse import ArgumentParser, Action, RawDescriptionHelpFormatter
from ast import literal_eval
import numpy as nm
import os.path as osp
import pyvista as pv
from vtk.util.numpy_support import numpy_to_vtk
cache = {}
def get_camera_position(bounds, azimuth, elevation, distance=None, zoom=1.):
phi, psi = nm.deg2rad(azimuth), nm.deg2rad(elevation)
bounds = nm.asarray(bounds)
if distance is not None:
r = distance / zoom
else:
r = max(bounds[1::2] - bounds[::2]) * 2.0 / zoom
center = (bounds[1::2] + bounds[::2]) * 0.5
# camera position
position = (r * nm.cos(phi) * nm.sin(psi),
r * nm.sin(phi) * nm.sin(psi),
r * nm.cos(psi))
# view up
view_up = (0, 0, 1)
if abs(elevation) < 5. or abs(elevation) > 175.:
view_up = (nm.sin(phi), nm.cos(phi), 0)
return [position, tuple(center), view_up]
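The spherical-to-Cartesian conversion above places the camera at distance r, with azimuth phi in the xy-plane and elevation psi measured from the +z axis; a small numeric check of that convention (values chosen purely for illustration):

```python
import numpy as nm

# Azimuth 0, elevation 90 deg: the camera lands on the +x axis.
phi, psi = nm.deg2rad(0.0), nm.deg2rad(90.0)
r = 2.0
position = (r * nm.cos(phi) * nm.sin(psi),
            r * nm.sin(phi) * nm.sin(psi),
            r * nm.cos(psi))
```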
def parse_options(opts, separator=':'):
out = {}
if opts is None:
return out
for v in opts.split(separator):
if len(v) < 2:
val = True
elif v[1:].isalpha():
val = v[1:]
elif v[-1] == '%':
val = ('%', float(v[1:-1]))
else:
try:
val = literal_eval(v[1:])
except ValueError:
val = v[1:]
out[v[0]] = val
return out
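The option grammar parsed above maps single-letter keys to typed values (flags, names, percentages, literals); a self-contained copy of the same logic, shown only for illustration, applied to one of the docstring examples:

```python
from ast import literal_eval

def demo_parse(opts, separator=':'):
    # Mirror of parse_options: bare letter -> flag, alpha tail -> string,
    # trailing '%' -> relative factor, otherwise a Python literal.
    out = {}
    for v in opts.split(separator):
        if len(v) < 2:
            out[v[0]] = True
        elif v[1:].isalpha():
            out[v[0]] = v[1:]
        elif v[-1] == '%':
            out[v[0]] = ('%', float(v[1:-1]))
        else:
            try:
                out[v[0]] = literal_eval(v[1:])
            except ValueError:
                out[v[0]] = v[1:]
    return out

print(demo_parse("e:o.4:p1"))  # {'e': True, 'o': 0.4, 'p': 1}
```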
def make_cells_from_conn(conns, convert_to_vtk_type):
cells, cell_type, offset = [], [], []
_offset = 0
for ctype, conn in conns.items():
nc, np = conn.shape
aux = nm.empty((nc, np + 1), dtype=int)
aux[:, 0] = np
aux[:, 1:] = conn
cells.append(aux.ravel())
cell_type.append(nm.full(nc, convert_to_vtk_type[ctype]))
offset.append(nm.arange(nc) * (np + 1) + _offset)
_offset += nc
cells = nm.concatenate(cells)
cell_type = nm.concatenate(cell_type)
offset = nm.concatenate(offset)
return cells, cell_type, offset
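The arrays built above follow the legacy VTK flat cell encoding, where each cell is stored as `[n_points, id_0, ..., id_{n-1}]` and all cells are concatenated; a minimal sketch of that layout for a single cell type:

```python
import numpy as nm

def flat_cells(conn):
    # conn: (n_cells, n_points_per_cell) connectivity -> flat VTK cell array.
    nc, npts = conn.shape
    aux = nm.empty((nc, npts + 1), dtype=int)
    aux[:, 0] = npts     # each cell record starts with its point count
    aux[:, 1:] = conn
    return aux.ravel()

print(flat_cells(nm.array([[0, 1, 2], [2, 3, 0]])))  # [3 0 1 2 3 2 3 0]
```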
def add_mat_id_to_grid(grid, cell_groups):
val = numpy_to_vtk(cell_groups)
val.SetName('mat_id')
grid.GetCellData().AddArray(val)
return grid
def read_mesh(filenames, step=None, print_info=True, ret_n_steps=False,
use_cache=True):
_, ext = osp.splitext(filenames[0])
if ext in ['.vtk', '.vtu']:
fstep = 0 if step is None else step
fname = filenames[fstep]
key = (fname, fstep)
if key not in cache or not use_cache:
cache[key] = pv.UnstructuredGrid(fname)
mesh = cache[key]
cache['n_steps'] = len(filenames)
elif ext in ['.xdmf', '.xdmf3']:
import meshio
try:
from meshio._common import meshio_to_vtk_type
except ImportError:
from meshio._vtk_common import meshio_to_vtk_type
fname = filenames[0]
key = (fname, step)
if key not in cache:
reader = meshio.xdmf.TimeSeriesReader(fname)
points, _cells = reader.read_points_cells()
points = nm.asarray(points)
if points.shape[1] < 3:
points = nm.pad(points, [(0, 0), (0, 3 - points.shape[1])])
_dcells = {ct.type: ct.data for ct in _cells}
cells, cell_type, offset = make_cells_from_conn(
_dcells, meshio_to_vtk_type,
)
if not reader.num_steps:
grid = pv.UnstructuredGrid(offset, cells, cell_type, points)
add_mat_id_to_grid(grid, mesh.cmesh.cell_groups)
cache[(fname, 0)] = grid
grids = {}
time = []
for _step in range(reader.num_steps):
grid = pv.UnstructuredGrid(offset, cells, cell_type, points)
t, pd, cd = reader.read_data(_step)
for dk, dv in pd.items():
val = numpy_to_vtk(dv)
val.SetName(dk)
grid.GetPointData().AddArray(val)
for dk, dv in cd.items():
val = numpy_to_vtk(nm.vstack(dv).squeeze())
val.SetName(dk)
grid.GetCellData().AddArray(val)
grids[t] = grid
time.append(t)
time.sort()
for _step, t in enumerate(time):
cache[(fname, _step)] = grids[t]
cache[(fname, None)] = cache[(fname, 0)]
cache['n_steps'] = reader.num_steps
mesh = cache[key]
elif ext in ['.h5', '.h5x']:
vtk_cell_types = {'1_1': 1, '1_2': 3, '2_2': 3, '3_2': 3,
'2_3': 5, '2_4': 9, '3_4': 10, '3_8': 12}
# Custom sfepy format.
fname = filenames[0]
key = (fname, step)
if key not in cache:
from sfepy.discrete.fem.meshio import MeshIO
io = MeshIO.any_from_filename(fname)
mesh = io.read()
desc = mesh.descs[0]
nv, dim = mesh.coors.shape
points = nm.c_[mesh.coors, nm.zeros((nv, 3 - dim))]
cells, cell_type, offset = make_cells_from_conn(
{desc: mesh.get_conn(desc)}, vtk_cell_types,
)
steps, times, nts = io.read_times()
if not len(steps):
grid = pv.UnstructuredGrid(offset, cells, cell_type, points)
add_mat_id_to_grid(grid, mesh.cmesh.cell_groups)
cache[(fname, 0)] = grid
for ii, _step in enumerate(steps):
grid = pv.UnstructuredGrid(offset, cells, cell_type, points)
datas = io.read_data(_step)
for dk, data in datas.items():
vval = data.data
if 1 < len(data.dofs) < 3:
vval = nm.c_[vval,
nm.zeros((len(vval), 3 - len(data.dofs)))]
if data.mode == 'vertex':
val = numpy_to_vtk(vval)
val.SetName(dk)
grid.GetPointData().AddArray(val)
else:
val = numpy_to_vtk(vval[:, 0, :, 0])
val.SetName(dk)
grid.GetCellData().AddArray(val)
cache[(fname, ii)] = grid
cache[(fname, None)] = cache[(fname, 0)]
cache['n_steps'] = len(steps)
mesh = cache[key]
else:
raise ValueError('unknown file format! (%s)' % ext)
if print_info:
arrs = {'s': [], 'v': [], 'o': []}
for aname in mesh.array_names:
if len(mesh[aname].shape) == 1 or mesh[aname].shape[1] == 1:
arrs['s'].append(aname)
elif mesh[aname].shape[1] == 3:
arrs['v'].append(aname)
else:
arrs['o'].append(aname + '(%d)' % mesh[aname].shape[1])
step_info = ' (step %d)' % step if step else ''
print('mesh from %s%s:' % (fname, step_info))
print(' points: %d' % mesh.n_points)
print(' cells: %d' % mesh.n_cells)
print(' bounds: %s' % list(zip(nm.min(mesh.points, axis=0),
nm.max(mesh.points, axis=0))))
if len(arrs['s']) > 0:
print(' scalars: %s' % ', '.join(arrs['s']))
if len(arrs['v']) > 0:
print(' vectors: %s' % ', '.join(arrs['v']))
if len(arrs['o']) > 0:
print(' others: %s' % ', '.join(arrs['o']))
print(' steps: %d' % cache['n_steps'])
if ret_n_steps:
return mesh, cache['n_steps']
else:
return mesh
def pv_plot(filenames, options, plotter=None, step=None,
scalar_bar_limits=None, ret_scalar_bar_limits=False,
step_inc=None, use_cache=True):
plots = {}
color = None
if plotter is None:
plotter = pv.Plotter()
fstep = (step if step is not None else options.step)
if step_inc is not None:
plotter.clear()
fstep += step_inc
if fstep < 0:
fstep = 0
if hasattr(plotter, 'resview_n_steps'):
if fstep >= plotter.resview_n_steps:
fstep = plotter.resview_n_steps - 1
mesh, n_steps = read_mesh(filenames, fstep, ret_n_steps=True,
use_cache=use_cache)
steps = {fstep: mesh}
bbox_sizes = nm.diff(nm.reshape(mesh.bounds, (-1, 2)), axis=1)
ii = nm.where(bbox_sizes > 0)[0]
tdim = len(ii)
if options.position_vector is None:
if len(ii):
ipv = ii[-1]
else:
ipv = 2
print('WARNING: zero size mesh!')
options.position_vector = [0, 0, 0]
options.position_vector[ipv] = 1.6
plotter.resview_step, plotter.resview_n_steps = fstep, n_steps
fields_map = {}
if len(options.fields_map) > 0:
for cg, fields in options.fields_map:
for field in fields.split(','):
fields_map[field.strip()] = int(cg)
if len(options.fields) == 0:
fields = []
position = 0
for field in steps[fstep].array_names:
if field in ['node_groups', 'mat_id']:
continue
fval = steps[fstep][field]
bnds = steps[fstep].bounds
mesh_size = (nm.array(bnds[1::2]) - nm.array(bnds[::2])).max()
is_vector_field = len(fval.shape) > 1
is_point_field = fval.shape[0] == steps[fstep].n_points
if is_vector_field and is_point_field:
scale = mesh_size * 0.15 / nm.linalg.norm(fval, axis=1).max()
if not nm.isfinite(scale):
scale = 1.0
fields.append((field, 'vw:p%d' % position))
fields.append((field, 'vs:o.4:p%d' % position))
fields.append((field, 'g:f%e:p%d' % (scale, position)))
else:
fields.append((field, 'p%d' % position))
position += 1
if len(fields) == 0:
fields.append(('mat_id', 'p0'))
else:
fields = options.fields
plot_id = 0
scalar_bars = {}
for field, fopts in fields:
opts = parse_options(fopts)
plot_info = []
if field == '0':
field = None
color = 'white'
if field == '1':
field = None
color = 'black'
if 's' in opts and step is None: # plot data from a given step
fstep = opts['s']
if fstep not in steps:
steps[fstep] = read_mesh(filenames, step=fstep,
use_cache=use_cache)
pipe = [steps[fstep].copy()]
if field in fields_map: # subregion
mat_val = fields_map[field]
elif 'm' in opts:
mat_val = opts['m']
else:
mat_val = None
if mat_val:
if isinstance(mat_val, int):
mat_val = [mat_val, mat_val]
pipe.append(pipe[-1].threshold(value=mat_val,
scalars='mat_id', preference='cell'))
if 'r' in opts: # recalculate cell data to point data
pipe.append(pipe[-1].cell_data_to_point_data())
opacity = opts.get('o', options.opacity) # mesh opacity
show_edges = opts.get('e', options.show_edges) # edge visibility
style = {'s': 'surface',
'w': 'wireframe',
'p': 'points'}[opts.get('v', 's')] # set style
warp = opts.get('w', options.warp) # warp mesh
factor = opts.get('f', options.factor)
if isinstance(factor, tuple):
ws = nm.diff(nm.reshape(pipe[-1].bounds, (-1, 2)), axis=1)
size = ws[ws > 0.0].min()
fmax = nm.abs(pipe[-1][field]).max()
factor = 0.01 * float(factor[1]) * size / fmax
if warp:
field_data = pipe[-1][warp]
if field_data.ndim == 1:
field_data.shape = (-1, 1)
nc = field_data.shape[1]
if nc == 1: # Warp by scalar.
pipe.append(pipe[-1].copy())
pipe[-1].points[:, 2] += field_data[:, 0] * factor
elif nc == 3:
pipe.append(pipe[-1].copy())
pipe[-1].points += field_data * factor
else:
raise ValueError('warp mesh: scalar or vector field required!')
plot_info.append('warp=%s, factor=%.2e' % (warp, factor))
position = opts.get('p', 0) # determine plotting slot
bnds = pipe[-1].bounds
if 'p' in opts:
size = nm.array(bnds[1::2]) - nm.array(bnds[::2])
pipe.append(pipe[-1].copy())
shift = position * size * nm.array(options.position_vector)
pipe[-1].translate(shift)
if opts.get('l', options.outline): # outline
plotter.add_mesh(pipe[-1].outline(), color='k')
scalar = field
scalar_label = scalar
is_vector_field = field is not None and len(pipe[-1][field].shape) > 1
is_point_field = (field is not None and
pipe[-1][field].shape[0] == pipe[-1].n_points)
if is_vector_field:
field_data = pipe[-1][field]
scalar = field + '_magnitude'
scalar_label = f'|{field}|'
pipe[-1][scalar] = nm.linalg.norm(field_data, axis=1)
if 'g' in opts and is_vector_field and is_point_field: # glyphs
pipe[-1][field] *= factor
pipe[-1].set_active_vectors(field)
pipe.append(pipe[-1].arrows)
style = ''
plot_info.append('glyphs=%s, factor=%.2e' % (field, factor))
elif 'c' in opts and is_vector_field: # select field component
comp = opts['c']
scalar = field + '_%d' % comp
pipe[-1][scalar] = field_data[:, comp]
elif 't' in opts: # streamlines
npts = opts.get('t')
if npts is True:
npts = 20
if is_vector_field:
sl_vector = field
sl_pipe = pipe[-1]
else:
sl_vector = 'gradient'
sl_pipe = pipe[-1].compute_derivative(scalars=field)
cmin, cmax = sl_pipe.bounds[::2], sl_pipe.bounds[1::2]
if tdim == 2:
streamlines = sl_pipe.streamlines(vectors=sl_vector,
pointa=cmin, pointb=cmax,
n_points=npts,
max_time=1e12)
else:
radius = 0.5 * nm.linalg.norm(nm.array(cmax) - nm.array(cmin))
streamlines = sl_pipe.streamlines(vectors=sl_vector,
source_radius=radius,
n_points=npts,
max_time=1e12)
pipe.append(streamlines)
plotter.add_mesh(pipe[-1], scalars=scalar, color=color,
style=style, show_edges=show_edges,
opacity=opacity,
cmap=options.color_map,
show_scalar_bar=False, label=scalar_label)
bnds = pipe[-1].bounds
if position not in plots:
plots[position] = []
plot_info = ':' + ','.join(plot_info) if len(plot_info) > 0 else ''
plot_info = '%s(step %d)%s' % (scalar, fstep, plot_info)
plots[position].append(((bnds[::2], bnds[1::2]), plot_info))
if options.show_scalar_bars and scalar:
if scalar not in scalar_bars:
scalar_bars[scalar_label] = []
field_data = pipe[-1][scalar]
limits = (nm.min(field_data), nm.max(field_data))
scalar_bars[scalar_label].append((limits, plotter.mapper,
position))
plot_id += 1
if options.show_scalar_bars:
if scalar_bar_limits is None:
scalar_bar_limits = {}
for k, vs in scalar_bars.items():
limits = (nm.min([v[0] for v, _, _ in vs]),
nm.max([v[1] for v, _, _ in vs]))
scalar_bar_limits[k] = limits
width, height = options.scalar_bar_size
position_x, position_y, shift_x, shift_y = options.scalar_bar_position
nslots = len(scalar_bars)
for k, vs in scalar_bars.items():
clim = scalar_bar_limits[k]
for _, mapper, _ in vs:
mapper.scalar_range = clim
_, mapper, slot = vs[0]
slot_x = (nslots - slot - 1) if shift_x < 0 else slot
x_pos = position_x + slot_x * width * shift_x
slot_y = (nslots - slot - 1) if shift_y < 0 else slot
y_pos = position_y + slot_y * height * shift_y
plotter.add_scalar_bar(title=k,
position_x=x_pos, position_y=y_pos,
width=width, height=height,
n_labels=2, mapper=mapper)
if options.show_labels and len(plots) > 1:
labels, points = [], []
for k, v in plots.items():
bnds = (nm.min(nm.array([iv[0][0] for iv in v]), axis=0),
nm.max(nm.array([iv[0][1] for iv in v]), axis=0))
labels.append('plot:%d' % k)
size = bnds[1] - bnds[0]
olpos = options.label_position
points.append(bnds[0] + nm.array(olpos[:3]) * size * olpos[3])
plotter.add_point_labels(nm.array(points), labels)
for k, v in plots.items():
print('plot %d: %s' % (k, '; '.join(iv[1] for iv in v)))
if ret_scalar_bar_limits:
return plotter, scalar_bar_limits
else:
return plotter
class OptsToListAction(Action):
separator = '='
def __call__(self, parser, namespace, values, option_string=None):
out = []
for item in values:
s = item.split(self.separator, 1)
out.append((s[0].strip(), s[1].strip() if len(s) > 1 else None))
setattr(namespace, self.dest, out)
class FieldOptsToListAction(OptsToListAction):
separator = ':'
class StoreNumberAction(Action):
def __call__(self, parser, namespace, values, option_string=None):
setattr(namespace, self.dest, literal_eval(values))
helps = {
'fields':
'fields to plot, options separated by ":" are possible:\n'
'"cX" - plot only Xth field component; '
'"e" - print edges; '
'"fX" - scale factor for warp/glyphs, see --factor option; '
'"g - glyphs (for vector fields only), scale by factor; '
'"tX" - plot X streamlines, gradient employed for scalar fields; '
'"mX" - plot cells with mat_id=X; '
'"oX" - set opacity to X; '
'"pX" - plot in slot X; '
'"r" - recalculate cell data to point data; '
'"sX" - plot data in step X; '
'"vX" - plotting style: s=surface, w=wireframe, p=points; '
'"wX" - warp mesh by vector field X, scale by factor',
'fields_map':
'map fields and cell groups, e.g. 1:u1,p1 2:u2,p2',
'outline':
'plot mesh outline',
'warp':
'warp mesh by vector field',
'factor':
'scaling factor for mesh warp and glyphs.'
' Append "%%" to scale relatively to the minimum bounding box size.',
'edges':
'plot cell edges',
'opacity':
'set opacity [default: %(default)s]',
'color_map':
'set color_map, e.g. hot, cool, bone, etc. [default: %(default)s]',
'axes_options':
'options for directional axes, e.g. xlabel="z1" ylabel="z2",'
' zlabel="z3"',
'no_axes':
'hide orientation axes',
'no_scalar_bars':
'hide scalar bars',
'position_vector':
'define positions of plots [default: "0, 0, 1.6"]',
'view':
'camera azimuth, elevation angles, and optionally zoom factor'
' [default: "225,75,0.9"]',
'animation':
'create animation, mp4 file type supported',
'framerate':
'set framerate for animation',
'screenshot':
'save screenshot to file',
'off_screen':
'off screen plots, e.g. when screenshotting',
'no_labels':
'hide plot labels',
'label_position':
'define position of plot labels [default: "-1, -1, 0, 0.2"]',
'scalar_bar_size':
'define size of scalar bars [default: "0.15, 0.05"]',
'scalar_bar_position':
'define position of scalar bars [default: "0.8, 0.02, 0, 1.5"]',
'step':
'select data in a given time step',
'2d_view':
'2d view of XY plane',
}
def main():
parser = ArgumentParser(description=__doc__,
formatter_class=RawDescriptionHelpFormatter)
parser.add_argument('-f', '--fields', metavar='field_spec',
action=FieldOptsToListAction, nargs="+", dest='fields',
default=[], help=helps['fields'])
parser.add_argument('--fields-map', metavar='map',
action=FieldOptsToListAction, nargs="+",
dest='fields_map',
default=[], help=helps['fields_map'])
parser.add_argument('-s', '--step', metavar='step',
action=StoreNumberAction, dest='step',
default=0, help=helps['step'])
parser.add_argument('-l', '--outline',
action='store_true', dest='outline',
default=False, help=helps['outline'])
parser.add_argument('-e', '--edges',
action='store_true', dest='show_edges',
default=False, help=helps['edges'])
parser.add_argument('-w', '--warp', metavar='field',
action='store', dest='warp',
default=None, help=helps['warp'])
parser.add_argument('--factor', metavar='factor',
action=StoreNumberAction, dest='factor',
default=1., help=helps['factor'])
parser.add_argument('--opacity', metavar='opacity',
action=StoreNumberAction, dest='opacity',
default=1., help=helps['opacity'])
parser.add_argument('--color-map', metavar='cmap',
action='store', dest='color_map',
default='viridis', help=helps['color_map'])
parser.add_argument('--axes-options', metavar='options',
action=OptsToListAction, nargs="+",
dest='axes_options',
default=[], help=helps['axes_options'])
parser.add_argument('--no-axes',
action='store_false', dest='axes_visibility',
default=True, help=helps['no_axes'])
parser.add_argument('--position-vector', metavar='position_vector',
action=StoreNumberAction, dest='position_vector',
default=None, help=helps['position_vector'])
parser.add_argument('--no-labels',
action='store_false', dest='show_labels',
default=True, help=helps['no_labels'])
parser.add_argument('--label-position', metavar='position',
action=StoreNumberAction, dest='label_position',
default=[-1, -1, 0, 0.2], help=helps['label_position'])
parser.add_argument('--no-scalar-bars',
action='store_false', dest='show_scalar_bars',
default=True, help=helps['no_scalar_bars'])
parser.add_argument('--scalar-bar-size', metavar='size',
action=StoreNumberAction, dest='scalar_bar_size',
default=[0.15, 0.05],
help=helps['scalar_bar_size'])
parser.add_argument('--scalar-bar-position', metavar='position',
action=StoreNumberAction, dest='scalar_bar_position',
default=[0.8, 0.02, 0, 1.5],
help=helps['scalar_bar_position'])
parser.add_argument('-v', '--view', metavar='position',
action=StoreNumberAction, dest='camera',
default=[225, 75, 0.9], help=helps['view'])
parser.add_argument('-a', '--animation', metavar='output_file',
action='store', dest='anim_output_file',
default=None, help=helps['animation'])
parser.add_argument('-r', '--frame-rate', metavar='rate',
action=StoreNumberAction, dest='framerate',
default=2.5, help=helps['framerate'])
parser.add_argument('-o', '--screenshot', metavar='output_file',
action='store', dest='screenshot',
default=None, help=helps['screenshot'])
parser.add_argument('--off-screen',
action='store_true', dest='off_screen',
default=False, help=helps['off_screen'])
parser.add_argument('-2', '--2d-view',
action='store_true', dest='view_2d',
default=False, help=helps['2d_view'])
parser.add_argument('filenames', nargs='+')
options = parser.parse_args()
pv.set_plot_theme("document")
plotter = pv.Plotter(off_screen=options.off_screen)
if options.anim_output_file:
_, n_steps = read_mesh(options.filenames, ret_n_steps=True)
# dry run
scalar_bar_limits = None
if options.axes_visibility:
plotter.add_axes(**dict(options.axes_options))
for step in range(n_steps):
plotter, sb_limits = pv_plot(options.filenames, options,
plotter=plotter, step=step,
ret_scalar_bar_limits=True)
if scalar_bar_limits is None:
scalar_bar_limits = {k: [] for k in sb_limits.keys()}
for k, v in sb_limits.items():
scalar_bar_limits[k].append(v)
if options.camera:
zoom = options.camera[2] if len(options.camera) > 2 else 1.
cpos = get_camera_position(plotter.bounds,
options.camera[0], options.camera[1],
zoom=zoom)
plotter.set_position(cpos[0])
plotter.set_focus(cpos[1])
plotter.set_viewup(cpos[2])
anim_filename = options.anim_output_file
plotter.open_movie(anim_filename, options.framerate)
plotter.show(auto_close=False)
for k in scalar_bar_limits.keys():
lims = scalar_bar_limits[k]
clim = (nm.min([v[0] for v in lims]),
nm.max([v[1] for v in lims]))
scalar_bar_limits[k] = clim
# plot frames
for step in range(n_steps):
plotter.clear()
plotter = pv_plot(options.filenames, options, plotter=plotter,
step=step, scalar_bar_limits=scalar_bar_limits)
if options.axes_visibility:
plotter.add_axes(**dict(options.axes_options))
plotter.write_frame()
plotter.close()
else:
plotter = pv_plot(options.filenames, options, plotter=plotter)
if options.axes_visibility:
plotter.add_axes(**dict(options.axes_options))
plotter.add_key_event(
'Prior', lambda: pv_plot(options.filenames,
options,
step=plotter.resview_step,
step_inc=-1,
plotter=plotter)
)
plotter.add_key_event(
'Next', lambda: pv_plot(options.filenames,
options,
step=plotter.resview_step,
step_inc=1,
plotter=plotter)
)
if options.view_2d:
plotter.view_xy()
plotter.show(screenshot=options.screenshot)
else:
if options.camera:
zoom = options.camera[2] if len(options.camera) > 2 else 1.
cpos = get_camera_position(plotter.bounds,
options.camera[0],
options.camera[1],
zoom=zoom)
else:
cpos = None
plotter.show(cpos=cpos, screenshot=options.screenshot)
if __name__ == '__main__':
main()
|
sfepy/sfepy
|
resview.py
|
Python
|
bsd-3-clause
| 30,216
|
[
"VTK"
] |
adcfff5e8d24ce05d06aa39c34b6250e7d52dab811b2ff0a48e868f3ee634aff
|
import sys
sys.path.append('/Users/phanquochuy/Projects/minimind/prototypes/george/build/lib.macosx-10.10-x86_64-2.7')
import numpy as np
import george
from george.kernels import ExpSquaredKernel
# Generate some fake noisy data.
x = 10 * np.sort(np.random.rand(10))
yerr = 0.2 * np.ones_like(x)
y = np.sin(x) + yerr * np.random.randn(len(x))
# Set up the Gaussian process.
kernel = ExpSquaredKernel(1.0)
gp = george.GP(kernel)
# Pre-compute the factorization of the matrix.
gp.compute(x, yerr)
gp.optimize(x, y, verbose=True)
# Compute the log likelihood.
print(gp.lnlikelihood(y))
t = np.linspace(0, 10, 500)
mu, cov = gp.predict(y, t)
std = np.sqrt(np.diag(cov))
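The `ExpSquaredKernel` used above corresponds to the common form k(x, x') = exp(-0.5 (x - x')^2 / l^2) (george's exact parameterization may differ); a numpy-only sketch of the resulting kernel matrix:

```python
import numpy as np

def exp_squared_kernel(x1, x2, ell=1.0):
    # Pairwise squared-exponential kernel between two 1-D input arrays.
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * d**2 / ell**2)

# Diagonal is 1; off-diagonal entries decay with squared distance.
K = exp_squared_kernel(np.array([0.0, 1.0]), np.array([0.0, 1.0]))
```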
|
fqhuy/minimind
|
prototypes/test_george.py
|
Python
|
mit
| 669
|
[
"Gaussian"
] |
e25b9a8e6083830572cf65c2c94ff1d49d61590f2dc8840b8b19ddd89d9f3172
|
from django.conf import settings
from django.contrib.auth.models import User, Group
from django.contrib.gis.db.models import *
from django.utils.translation import ugettext_lazy as _ # internationalization translate call
from django.shortcuts import get_object_or_404
import simplejson
from psycopg2 import ProgrammingError
import subprocess
from user_groups.models import UserGroup
from settings import IMAGE_DIR
# Added for referral info entry form:
from django.db import models
from django.forms import ModelForm, ChoiceField
import logging
logging.basicConfig(
level = logging.DEBUG,
format = '%(asctime)s %(levelname)s %(message)s',
)
class ExecuteException(Exception):
pass
def execute(cmd):
try:
subprocess.Popen(cmd, shell=True).communicate()
    except OSError as e:
raise ExecuteException('''
COMMAND: %s
OSError: %s
''' % (cmd, e))
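The `shell=True` call above is exposed to shell quoting pitfalls; a hedged alternative sketch (the `execute_args` helper is hypothetical) that passes an argument list directly, so no shell is involved:

```python
import subprocess
import sys

def execute_args(args):
    # Run a command without invoking a shell; returns captured stdout bytes.
    proc = subprocess.Popen(args, stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    return out

out = execute_args([sys.executable, "-c", "print('ok')"])
```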
class ReferralShapeManager(GeoManager):
def gen_referral_info_attributes(self, where_id_str, shape_ids):
from django.db import connection
cursor = connection.cursor()
results = []
sql= 'SELECT * FROM referrals_referralinfo as info, referrals_referralstatus as status, referrals_referralshape as us \
WHERE info.referral_shape_id = us.id AND info.status_id = status.id AND (' + where_id_str +')'
try:
cursor.execute(sql)
        except ProgrammingError as e:
            from referrals.exceptions import ReportError
            raise ReportError(e)
for shape_id in shape_ids:
results.append({'record_values':cursor.fetchone()})
return results
def gen_intersections(self, where_id_str, int_layers, where_clause = ''):
from django.db import connection
cursor = connection.cursor()
result = {}
for layer in int_layers:
# Calculate intersection of shapes with various layers
sql= 'SELECT count(*) from ' + layer['table'] + """ as cr, referrals_referralshape as us where \
"""+where_id_str+" and intersects(us.poly,cr.the_geom) "+where_clause
logging.debug(sql)
try:
cursor.execute(sql)
except ProgrammingError, e:
from referrals.exceptions import ReportError
raise ReportError, e
result[layer['title']] = cursor.fetchone()[0]
return result
def gen_answers(self, where_id_str, qst_fields):
from django.db import connection
cursor = connection.cursor()
answers = {}
for i in range(len(qst_fields)):
answer = {}
qst = qst_fields[i]
if qst['sql'] != '':
try:
# Insert ids into query string
sql = qst['sql'].replace('where_id_str',where_id_str)
cursor.execute(sql)
rows = cursor.fetchall()
if len(rows) == 0:
answer['None Found'] = 0
for row in rows:
zero = 0
one = 0
if row[0] is not None:
zero = row[0]
if row[1] is not None:
one = row[1]
answer[zero] = one
except ProgrammingError, e:
from referrals.exceptions import ReportError
raise ReportError, e
else:
answer = None
answers[i] = {
'title':qst['title'],
'header1':qst['header1'],
'header2':qst['header2'],
'q':answer
}
return answers
def build_where_id_str(self, shape_ids):
where_ids = ""
for i in range(len(shape_ids)):
id = shape_ids[i]
if i == 0:
where_ids += "us.id = "+id
else:
where_ids += " OR us.id = "+id
return where_ids
def generate(self, shape_ids, report_obj):
report = {}
qst = {}
int_sums = {}
gen = {}
where_id_str = self.build_where_id_str(shape_ids)
referral_info_attributes = {}
referral_info_attributes = self.gen_referral_info_attributes(where_id_str, shape_ids)
public_int_layers = report_obj.get_layers()
int_sums = self.gen_intersections(where_id_str, public_int_layers)
qst_fields = report_obj.get_queries()
qstns = self.gen_answers(where_id_str, qst_fields)
report['questions'] = qstns
report['general'] = {'reportIntersectionSums':int_sums}
report['info_attributes'] = referral_info_attributes
return report
class ReferralShape(Model):
owner = ForeignKey(User)
user_group = ForeignKey(UserGroup)
description = CharField(_('Description'), max_length=200, blank=False)
edit_notes = CharField(_('Edit Notes'),max_length=200)
poly = PolygonField(_('Polygon'),srid=settings.SPATIAL_REFERENCE_ID)
objects = ReferralShapeManager()
rts_id = CharField(_('Referral Tracking System ID'), max_length=30, blank=True)
def info(self):
try:
return ReferralInfo.objects.get(referral_shape = self)
except ReferralInfo.DoesNotExist:
return None
def __unicode__(self):
return unicode('%s' % self.description)
class Meta:
verbose_name = _('Referral Shape')
verbose_name_plural = _('Referral Shapes')
class ReferralShapeMetaData(Model):
referral_shape = OneToOneField(ReferralShape)
contact_organization = CharField(_('Organization'), max_length=50, blank=True)
contact_person = CharField(_('Contact Person'), max_length=50, blank=True)
contact_person_title = CharField(_('Title'), max_length=30, blank=True)
contact_phone_voice = CharField(_('Phone'), max_length=20, blank=True)
contact_phone_fax = CharField(_('Fax'), max_length=20, blank=True)
contact_email = CharField(_('Email Address'), max_length=200, blank=True)
address_street = CharField(_('Address'), max_length=50, blank=True)
address_city = CharField(_('City'), max_length=30, blank=True)
address_province_or_state = CharField(_('Province/State'), max_length=30, blank=True)
address_postal_code = CharField(_('Postal Code/Zip'), max_length=15, blank=True)
address_country = CharField(_('Country'), max_length=20, blank=True)
dataset_creator = CharField(_('Creator'), max_length=50, blank=True)
dataset_content_publisher = CharField(_('Content Publisher'), max_length=50, blank=True)
dataset_publication_place = CharField(_('Publication Place'), max_length=200, blank=True)
dataset_publication_date = DateField('Publication Date', blank=True, default=datetime.datetime.now().strftime("%Y-%m-%d"), null=True)
dataset_abstract_description = CharField(_('Abstract/Description'), max_length=200, blank=True)
dataset_scale = CharField(_('Scale'), max_length=20, blank=True)
dataset_date_depicted = DateField('Date Depicted', blank=True, default=datetime.datetime.now().strftime("%Y-%m-%d"), null=True)
dataset_time_period = CharField(_('Time Period'), max_length=50, blank=True)
dataset_geographic_key_words = CharField(_('Geographic Key Words'), max_length=100, blank=True)
dataset_theme_key_words = CharField(_('Theme Key Words'), max_length=100, blank=True)
dataset_security_classification = CharField(_('Security Classification'), max_length=50, blank=True)
dataset_who_can_access_the_data = CharField(_('Who Can Access The Data?'), max_length=50, blank=True)
dataset_use_constraints = CharField(_('Use Constraints/Distribution Liability'), max_length=200, blank=True)
def __unicode__(self):
return unicode('Meta Data for %s' % self.referral_shape)
class Meta:
verbose_name = _('Referral Shape Meta Data')
verbose_name_plural = _('Referral Shape Meta Data')
class ReferralImage(Model):
RECTIFIED_FORMAT = 'tif'
RECTIFIED_CONTENT_TYPE = 'image/tif'
SUPPORTED_SRIDS = [4326, settings.SPATIAL_REFERENCE_ID]
# Can build filenames from the stored pieces
# Supports rectification to multiple EPSG codes without
# having to alter the database. Thus it's scalable.
owner = ForeignKey(User)
title = CharField(_('Name'), max_length=200)
image_id = CharField('Unique ID', max_length=20)
orig_extension = CharField(_('File extension of original file'), max_length=6)
content_type = CharField(_('Content type of original file'), max_length=20)
def __unicode__(self):
return unicode('%s' % self.title)
class Meta:
verbose_name = _('Rectified Image')
verbose_name_plural = _('Rectified Images')
def get_orig_filename(self):
return str(self.image_id)+'_orig'+self.orig_extension
def get_interm_filename(self):
return str(self.image_id)+'_interm.'+self.RECTIFIED_FORMAT
def get_rect_filename(self, srid):
return str(self.image_id)+'_rect_'+str(srid)+'.'+self.RECTIFIED_FORMAT
# Returns the width and height of the given image
def get_image_dim(self):
filename = self.get_orig_filename()
from PIL import Image
im = Image.open('%s%s' % (IMAGE_DIR, filename))
return im.size
def get_orig_file_obj(self):
filename = self.get_orig_filename()
return open('%s%s' % (IMAGE_DIR, filename), "rb")
def get_rect_file_obj(self, srid):
filename = self.get_rect_filename(srid)
return open('%s%s' % (IMAGE_DIR, filename), "rb")
def delete_files(self):
orig_path = settings.IMAGE_DIR + self.get_orig_filename()
interm_path = settings.IMAGE_DIR + self.get_interm_filename()
rect_path_4326 = settings.IMAGE_DIR + self.get_rect_filename(4326)
rect_path_3005 = settings.IMAGE_DIR + self.get_rect_filename(settings.SPATIAL_REFERENCE_ID)
import os
if os.path.exists(orig_path):
os.remove(orig_path)
if os.path.exists(interm_path):
os.remove(interm_path)
if os.path.exists(rect_path_4326):
os.remove(rect_path_4326)
if os.path.exists(rect_path_3005):
os.remove(rect_path_3005)
def rectify(self, image_data, gcp_data):
orig_path = settings.IMAGE_DIR + self.get_orig_filename()
interm_path = settings.IMAGE_DIR + self.get_interm_filename()
rect_path_4326 = settings.IMAGE_DIR + self.get_rect_filename(4326)
rect_path_3005 = settings.IMAGE_DIR + self.get_rect_filename(settings.SPATIAL_REFERENCE_ID)
import os
command = [settings.GDAL_PATH+"/gdal_translate",orig_path,interm_path]
if subprocess.call(command):
raise Exception("gdal_translate failed")
#Add control points to tif
command = settings.GDAL_PATH+'/gdal_translate -a_srs "EPSG:4326"'
for i in range(len(gcp_data)):
cur = gcp_data[i]
# Switch from bottom y origin to top y origin
cur['yp'] = image_data['height'] - cur['yp']
gcp_flag = ' -gcp '
gcp_vals = str(cur['xp']) + " " + str(cur['yp']) + " " + str(cur['xr']) + " " + str(cur['yr'])
gcp_param = gcp_flag + gcp_vals
command += gcp_param
command += " " + orig_path + " " + interm_path
execute(command)
#Warp the tif to 4326
command = settings.GDAL_PATH+"/gdalwarp -t_srs 'EPSG:4326' -dstnodata 255 " + interm_path + " " + rect_path_4326
execute(command)
#Warp the tif to settings.SPATIAL_REFERENCE_ID
command = settings.GDAL_PATH+"/gdalwarp -t_srs 'EPSG:" + str(settings.SPATIAL_REFERENCE_ID) +"' -dstnodata 255 " + interm_path + " " + rect_path_3005
execute(command)
return True
def has_been_rectified(self):
rect_path_4326 = settings.IMAGE_DIR + self.get_rect_filename(4326)
rect_path_3005 = settings.IMAGE_DIR + self.get_rect_filename(settings.SPATIAL_REFERENCE_ID)
import os
return os.path.exists(rect_path_4326) and os.path.exists(rect_path_3005)
def srid_supported(self, srid):
return srid in self.SUPPORTED_SRIDS
def get_supported_srid_string(self):
return ','.join([str(x) for x in self.SUPPORTED_SRIDS])
# The following data lists are for use in the ReferralInfo table for the bcgs_mapsheet_* fields
# (GNG Added 15May09)
BCGS_MAPSHEET_QUAD_CHOICES = (
('114', '114'),
('104', '104'),
('103', '103'),
('102', '102'),
('94', '94'),
('93', '93'),
('92', '92'),
('83', '83'),
('82', '82'),
)
BCGS_MAPSHEET_MAP_BLOCK_CHOICES = (
('A', 'A'),
('B', 'B'),
('C', 'C'),
('D', 'D'),
('E', 'E'),
('F', 'F'),
('G', 'G'),
('H', 'H'),
('I', 'I'),
('J', 'J'),
('K', 'K'),
('L', 'L'),
('M', 'M'),
('N', 'N'),
('O', 'O'),
('P', 'P'),
)
BCGS_MAPSHEET_SHEET_CHOICES = [(str(a), str(a)) for a in range(1, 101)]  # CharField choices must be strings
class ReferralStatus(Model):
r'''
Tracks the progress of referrals through a number of states, configurable by client.
Model defines an enum of possible referral states.
'''
title = CharField(max_length=20)
def __unicode__(self):
return unicode('%s' % self.title)
class Meta:
verbose_name = _('Referral Status')
verbose_name_plural = _('Referral Status')
# The following class is a table to store associated information
# for a referral, and is populated via a proponent web entry form
# (GNG Added 15May09)
class ReferralInfo(Model):
referral_shape = OneToOneField(ReferralShape)
date_of_submission = DateField('Date of Submission', default=datetime.datetime.now().strftime("%Y-%m-%d"))
first_nations_contact = CharField('Name of First Nations Contact Person(s)', max_length=50)
proposal_desc = TextField(('Proposal'), max_length=200)
location_desc = TextField(('Location'), max_length=200)
date_requested_by = DateField('Date Requested By')
date_requested_by_desc = TextField(('Response is Requested By'), max_length=200, blank=True)
authorization_desc = TextField(('Authorization or Decision to be made'), max_length=200, blank=True)
proponent_name = CharField(('Proponent'), max_length=50)
background_context = TextField(('Background/Context'), max_length=200, blank=True)
bcgs_mapsheet_quad = CharField(('Quadrangle'), max_length=3, choices=BCGS_MAPSHEET_QUAD_CHOICES, blank=True)
bcgs_mapsheet_map_block = CharField(('Map Block'), max_length=1, choices=BCGS_MAPSHEET_MAP_BLOCK_CHOICES, blank=True)
bcgs_mapsheet_sheet = CharField(('Sheet'), max_length=3, choices=BCGS_MAPSHEET_SHEET_CHOICES, blank=True)
legal_desc = TextField(('Legal Description'), max_length=200)
term_of_proposal = CharField(('Schedule/Term of Proposal'), max_length=50)
other_info = TextField(('Other Information'), max_length=200, blank=True)
related_decisions = TextField(('Related Decisions to be made'), max_length=200, blank=True)
agency_moe = CharField(('Ministry of Environment (MOE)'), max_length=50, blank=True)
agency_mempr = CharField(('Ministry of Energy, Mines and Petroleum Resources (MEMPR)'), max_length=50, blank=True)
agency_ilmb = CharField(('Integrated Land Mgmt Bureau (ILMB)'), max_length=50, blank=True)
agency_mal = CharField(('Ministry of Agriculture & Lands (MAL)'), max_length=50, blank=True)
agency_mot = CharField(('Ministry of Transportation & Infrastructure (MOT)'), max_length=50, blank=True)
agency_mtca = CharField(('Ministry of Tourism, Culture, and the Arts (MTCA)'), max_length=50, blank=True)
agency_mofr = CharField(('Ministry of Forest and Range (MOFR)'), max_length=50, blank=True)
agency_mcd = CharField(('Ministry of Community Development (MCD)'), max_length=50, blank=True)
agency_other = CharField(('Other'), max_length=50, blank=True)
contact_person = CharField(('Proponent Contact Person(s)'), max_length=50)
status = ForeignKey(ReferralStatus)
def __unicode__(self):
return unicode('%s' % self.proposal_desc)
class Meta:
verbose_name = _('Referral Information')
verbose_name_plural = _('Referral Information')
class ReferralReport(Model):
name = CharField('Internal Report Name', max_length=50)
title = CharField('Report Title', max_length=50)
user_group = ForeignKey(UserGroup)
template = CharField('Report Template', max_length=50, blank=True, default='referral_report.html')
def get_layers(self):
return ReferralReportLayer.objects.filter(report=self).values()
def get_queries(self):
return ReferralReportQuery.objects.filter(report=self).values()
def __unicode__(self):
return unicode('%s' % self.name)
class Meta:
verbose_name = _('Referral Report')
verbose_name_plural = _('Referral Reports')
class ReferralReportQuery(Model):
report = ForeignKey(ReferralReport)
title = CharField('Question', max_length=255)
header1 = CharField('Header 1', max_length=50, blank=True)
header2 = CharField('Header 2', max_length=50, blank=True)
sql = TextField('SQL Statement')
def __unicode__(self):
return unicode('%s' % self.title)
class Meta:
verbose_name = _('Referral Report Query')
verbose_name_plural = _('Referral Report Queries')
class ReferralReportLayer(Model):
report = ForeignKey(ReferralReport)
table = CharField('Table Name', max_length=50)
title = CharField('Layer Title', max_length=50)
area_units = CharField('Area Units', max_length=10, blank=True)
where_clause = CharField('Where Clause', max_length=255, blank=True)
def __unicode__(self):
return unicode('%s' % self.title)
class Meta:
verbose_name = _('Referral Report Layer')
verbose_name_plural = _('Referral Report Layers')
class referral_report_layer_form(ModelForm):
table = ChoiceField(required=False)
def __init__(self, *args, **kwargs):
super(referral_report_layer_form, self).__init__(*args, **kwargs)
self.fields['table'].choices = [('','')]
cursor = connection.cursor()
sqlstr = "select table_name from information_schema.tables where table_name in (select f_table_name from geometry_columns)"
cursor.execute(sqlstr)
if cursor.rowcount:
tables = cursor.fetchall()
for table in tables:
table = str(table[0])
self.fields['table'].choices.append((table, table))
class Meta:
model = ReferralReportLayer
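`ReferralShapeManager.build_where_id_str` splices raw ids directly into the SQL text; if those ids ever come from user input that is a SQL-injection risk. A sketch of the parameterized form that psycopg2 supports (the helper name is hypothetical, not part of the original module):

```python
def build_where_id_clause(shape_ids):
    # Parameterized counterpart of ReferralShapeManager.build_where_id_str:
    # instead of concatenating ids into the SQL string, emit one "%s"
    # placeholder per id and pass the values separately to cursor.execute,
    # letting the database driver quote them safely.
    fragment = " OR ".join(["us.id = %s"] * len(shape_ids))
    return fragment, list(shape_ids)

print(build_where_id_clause(["3", "7"]))
# -> ('us.id = %s OR us.id = %s', ['3', '7'])
```

Usage would then be `cursor.execute("SELECT ... AND (" + fragment + ")", params)`, keeping the id values out of the SQL text entirely.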
# --- source: Ecotrust-Canada/terratruth | django_app/referrals/models.py | Python | GPL-3.0 | 19,141 bytes ---
# -*- coding: iso-8859-1 -*-
"""
MoinMoin - Multiple configuration handler and Configuration defaults class
@copyright: 2000-2004 Juergen Hermann <jh@web.de>,
2005-2008 MoinMoin:ThomasWaldmann.
2008 MoinMoin:JohannesBerg
@license: GNU GPL, see COPYING for details.
"""
import re
import os
import sys
import time
from MoinMoin import log
logging = log.getLogger(__name__)
from MoinMoin import config, error, util, wikiutil, web
from MoinMoin import datastruct
from MoinMoin.auth import MoinAuth
import MoinMoin.auth as authmodule
import MoinMoin.events as events
from MoinMoin.events import PageChangedEvent, PageRenamedEvent
from MoinMoin.events import PageDeletedEvent, PageCopiedEvent
from MoinMoin.events import PageRevertedEvent, FileAttachedEvent
import MoinMoin.web.session
from MoinMoin.packages import packLine
from MoinMoin.security import AccessControlList
from MoinMoin.support.python_compatibility import set
_url_re_cache = None
_farmconfig_mtime = None
_config_cache = {}
def _importConfigModule(name):
""" Import and return configuration module and its modification time
Handle all errors except ImportError, because missing file is not
always an error.
@param name: module name
@rtype: tuple
@return: module, modification time
"""
try:
module = __import__(name, globals(), {})
mtime = os.path.getmtime(module.__file__)
except ImportError:
raise
except IndentationError, err:
logging.exception('Your source code / config file is not correctly indented!')
msg = """IndentationError: %(err)s
The configuration files are Python modules. Therefore, whitespace is
important. Make sure that you use only spaces; no tabs are allowed here!
Most lines have to be indented by four spaces.
""" % {
'err': err,
}
raise error.ConfigurationError(msg)
except Exception, err:
logging.exception('An exception happened.')
msg = '%s: %s' % (err.__class__.__name__, str(err))
raise error.ConfigurationError(msg)
return module, mtime
def _url_re_list():
""" Return url matching regular expression
Import wikis list from farmconfig on the first call and compile the
regexes. Later just return the cached regex list.
@rtype: list of tuples of (name, compiled re object)
@return: url to wiki config name matching list
"""
global _url_re_cache, _farmconfig_mtime
if _url_re_cache is None:
try:
farmconfig, _farmconfig_mtime = _importConfigModule('farmconfig')
except ImportError, err:
if 'farmconfig' in str(err):
# we failed importing farmconfig
logging.debug("could not import farmconfig, mapping all URLs to wikiconfig")
_farmconfig_mtime = 0
_url_re_cache = [('wikiconfig', re.compile(r'.')), ] # matches everything
else:
# maybe there was a failing import statement inside farmconfig
raise
else:
logging.info("using farm config: %s" % os.path.abspath(farmconfig.__file__))
try:
cache = []
for name, regex in farmconfig.wikis:
cache.append((name, re.compile(regex)))
_url_re_cache = cache
except AttributeError:
logging.error("required 'wikis' list missing in farmconfig")
msg = """
Missing required 'wikis' list in 'farmconfig.py'.
If you run a single wiki you do not need farmconfig.py. Delete it and
use wikiconfig.py.
"""
raise error.ConfigurationError(msg)
return _url_re_cache
def _makeConfig(name):
""" Create and return a config instance
Timestamp config with either module mtime or farmconfig mtime. This
mtime can be used later to invalidate older caches.
@param name: module name
@rtype: DefaultConfig sub class instance
@return: new configuration instance
"""
global _farmconfig_mtime
try:
module, mtime = _importConfigModule(name)
configClass = getattr(module, 'Config')
cfg = configClass(name)
cfg.cfg_mtime = max(mtime, _farmconfig_mtime)
logging.info("using wiki config: %s" % os.path.abspath(module.__file__))
except ImportError, err:
logging.exception('Could not import.')
msg = """ImportError: %(err)s
Check that the file is in the same directory as the server script. If
it is not, you must add the path of the directory where the file is
located to the python path in the server script. See the comments at
the top of the server script.
Check that the configuration file name is either "wikiconfig.py" or the
module name specified in the wikis list in farmconfig.py. Note that the
module name does not include the ".py" suffix.
""" % {
'err': err,
}
raise error.ConfigurationError(msg)
except AttributeError, err:
logging.exception('An exception occurred.')
msg = """AttributeError: %(err)s
Could not find required "Config" class in "%(name)s.py".
This might happen if you are trying to use a pre 1.3 configuration file, or
made a syntax or spelling error.
Another reason for this could be a name clash. It is not possible to have
config names like e.g. stats.py - because that collides with MoinMoin/stats/ -
have a look into your MoinMoin code directory what other names are NOT
possible.
Please check your configuration file. As an example for correct syntax,
use the wikiconfig.py file from the distribution.
""" % {
'name': name,
'err': err,
}
raise error.ConfigurationError(msg)
return cfg
def _getConfigName(url):
""" Return config name for url or raise """
for name, regex in _url_re_list():
match = regex.match(url)
if match:
return name
raise error.NoConfigMatchedError
def getConfig(url):
""" Return cached config instance for url or create new one
If called by many threads at the same time, multiple config
instances might be created. The first created instance will be
returned to all callers, using dict.setdefault.
@param url: the url from request, possibly matching specific wiki
@rtype: DefaultConfig subclass instance
@return: config object for specific wiki
"""
cfgName = _getConfigName(url)
try:
cfg = _config_cache[cfgName]
except KeyError:
cfg = _makeConfig(cfgName)
cfg = _config_cache.setdefault(cfgName, cfg)
return cfg
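The cache in `getConfig` tolerates a benign race: two threads may both build a config for the same name, but `dict.setdefault` guarantees every caller ends up holding whichever instance was stored first. The pattern in isolation (names here are mine for illustration, not MoinMoin's):

```python
_cache = {}

def get_cached(key, factory):
    # Same shape as getConfig/_config_cache: a racing thread may construct
    # a duplicate object for the same key, but dict.setdefault stores only
    # the first one and returns it to everyone, so the duplicate is simply
    # discarded instead of corrupting the cache.
    try:
        return _cache[key]
    except KeyError:
        obj = factory(key)
        return _cache.setdefault(key, obj)
```

This avoids a lock for the common read path while still giving all callers a single shared instance per key.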
# This is a way to mark some text for the gettext tools so that they don't
# get orphaned. See http://www.python.org/doc/current/lib/node278.html.
def _(text):
return text
class CacheClass:
""" just a container for stuff we cache """
pass
class ConfigFunctionality(object):
""" Configuration base class with config class behaviour.
This class contains the functionality for the DefaultConfig
class for the benefit of the WikiConfig macro.
"""
# attributes of this class that should not be shown
# in the WikiConfig() macro.
cfg_mtime = None
siteid = None
cache = None
mail_enabled = None
jabber_enabled = None
auth_can_logout = None
auth_have_login = None
auth_login_inputs = None
_site_plugin_lists = None
_iwid = None
_iwid_full = None
xapian_searchers = None
moinmoin_dir = None
# will be lazily loaded by interwiki code when needed (?)
shared_intermap_files = None
def __init__(self, siteid):
""" Init Config instance """
self.siteid = siteid
self.cache = CacheClass()
from MoinMoin.Page import ItemCache
self.cache.meta = ItemCache('meta')
self.cache.pagelists = ItemCache('pagelists')
if self.config_check_enabled:
self._config_check()
# define directories
self.moinmoin_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir))
data_dir = os.path.normpath(self.data_dir)
self.data_dir = data_dir
for dirname in ('user', 'cache', 'plugin'):
name = dirname + '_dir'
if not getattr(self, name, None):
setattr(self, name, os.path.abspath(os.path.join(data_dir, dirname)))
# directories below cache_dir (using __dirname__ to avoid conflicts)
for dirname in ('session', ):
name = dirname + '_dir'
if not getattr(self, name, None):
setattr(self, name, os.path.abspath(os.path.join(self.cache_dir, '__%s__' % dirname)))
# Try to decode certain names which allow unicode
self._decode()
# After that, pre-compile some regexes
self.cache.page_category_regex = re.compile(self.page_category_regex, re.UNICODE)
self.cache.page_dict_regex = re.compile(self.page_dict_regex, re.UNICODE)
self.cache.page_group_regex = re.compile(self.page_group_regex, re.UNICODE)
self.cache.page_template_regex = re.compile(self.page_template_regex, re.UNICODE)
# the ..._regexact versions only match if nothing is left (exact match)
self.cache.page_category_regexact = re.compile(u'^%s$' % self.page_category_regex, re.UNICODE)
self.cache.page_dict_regexact = re.compile(u'^%s$' % self.page_dict_regex, re.UNICODE)
self.cache.page_group_regexact = re.compile(u'^%s$' % self.page_group_regex, re.UNICODE)
self.cache.page_template_regexact = re.compile(u'^%s$' % self.page_template_regex, re.UNICODE)
self.cache.ua_spiders = self.ua_spiders and re.compile(self.ua_spiders, re.IGNORECASE)
self._check_directories()
if not isinstance(self.superuser, list):
msg = """The superuser setting in your wiki configuration is not a list
(e.g. ['Sample User', 'AnotherUser']).
Please change it in your wiki configuration and try again."""
raise error.ConfigurationError(msg)
# moin < 1.9 used cookie_lifetime = <float> (but converted it to int) for logged-in users and
# anonymous_session_lifetime = <float> or None for anon users
# moin >= 1.9 uses cookie_lifetime = (<float>, <float>) - first is anon, second is logged-in
if not (isinstance(self.cookie_lifetime, tuple) and len(self.cookie_lifetime) == 2):
logging.error("wiki configuration has an invalid setting: " +
"cookie_lifetime = %r" % (self.cookie_lifetime, ))
try:
anon_lifetime = self.anonymous_session_lifetime
logging.warning("wiki configuration has an unsupported setting: " +
"anonymous_session_lifetime = %r - " % anon_lifetime +
"please remove it.")
if anon_lifetime is None:
anon_lifetime = 0
anon_lifetime = float(anon_lifetime)
except:
# if anything goes wrong, use default value
anon_lifetime = 0
try:
logged_in_lifetime = int(self.cookie_lifetime)
except:
# if anything goes wrong, use default value
logged_in_lifetime = 12
self.cookie_lifetime = (anon_lifetime, logged_in_lifetime)
logging.warning("using cookie_lifetime = %r - " % (self.cookie_lifetime, ) +
"please fix your wiki configuration.")
self._loadPluginModule()
# Preparse user dicts
self._fillDicts()
# Normalize values
self.language_default = self.language_default.lower()
# Use site name as default name-logo
if self.logo_string is None:
self.logo_string = self.sitename
# Check for needed modules
# FIXME: maybe we should do this check later, just before a
# chart is needed, maybe in the chart module, instead of doing it
# for each request. But this requires a large refactoring of the
# current code.
if self.chart_options:
try:
import gdchart
except ImportError:
self.chart_options = None
# the 'setuid' special auth method can log out
self.auth_can_logout = ['setuid']
self.auth_login_inputs = []
found_names = []
for auth in self.auth:
if not auth.name:
raise error.ConfigurationError("Auth methods must have a name.")
if auth.name in found_names:
raise error.ConfigurationError("Auth method names must be unique.")
found_names.append(auth.name)
if auth.logout_possible and auth.name:
self.auth_can_logout.append(auth.name)
for input in auth.login_inputs:
if input not in self.auth_login_inputs:
self.auth_login_inputs.append(input)
self.auth_have_login = len(self.auth_login_inputs) > 0
self.auth_methods = found_names
# internal dict for plugin `modules' lists
self._site_plugin_lists = {}
# we replace any string placeholders with config values,
# e.g. u'%(page_front_page)s' % self
self.navi_bar = [elem % self for elem in self.navi_bar]
# check if python-xapian is installed
if self.xapian_search:
try:
import xapian
except ImportError, err:
self.xapian_search = False
logging.error("xapian_search was auto-disabled because python-xapian is not installed [%s]." % str(err))
# list to cache xapian searcher objects
self.xapian_searchers = []
# check if mail is possible and set flag:
self.mail_enabled = (self.mail_smarthost is not None or self.mail_sendmail is not None) and self.mail_from
self.mail_enabled = bool(self.mail_enabled)
# check if jabber bot is available and set flag:
self.jabber_enabled = self.notification_bot_uri is not None
# if we are to use the jabber bot, instantiate a server object for future use
if self.jabber_enabled:
from xmlrpclib import Server
self.notification_server = Server(self.notification_bot_uri, )
# Cache variables for the properties below
self._iwid = self._iwid_full = self._meta_dict = None
self.cache.acl_rights_before = AccessControlList(self, [self.acl_rights_before])
self.cache.acl_rights_default = AccessControlList(self, [self.acl_rights_default])
self.cache.acl_rights_after = AccessControlList(self, [self.acl_rights_after])
action_prefix = self.url_prefix_action
if action_prefix is not None and action_prefix.endswith('/'): # make sure there is no trailing '/'
self.url_prefix_action = action_prefix[:-1]
if self.url_prefix_local is None:
self.url_prefix_local = self.url_prefix_static
if self.url_prefix_fckeditor is None:
self.url_prefix_fckeditor = self.url_prefix_local + '/applets/FCKeditor'
if self.secrets is None: # admin did not setup a real secret, so make up something
self.secrets = self.calc_secrets()
secret_key_names = ['action/cache', 'wikiutil/tickets', 'xmlrpc/ProcessMail', 'xmlrpc/RemoteScript', ]
if self.jabber_enabled:
secret_key_names.append('jabberbot')
if self.textchas:
secret_key_names.append('security/textcha')
secret_min_length = 10
if isinstance(self.secrets, str):
if len(self.secrets) < secret_min_length:
raise error.ConfigurationError("The secrets = '...' wiki config setting is too short a string (minimum length is %d chars)!" % (
secret_min_length))
# for lazy people: set all required secrets to same value
secrets = {}
for key in secret_key_names:
secrets[key] = self.secrets
self.secrets = secrets
# we check if we have all secrets we need and that they have minimum length
for secret_key_name in secret_key_names:
try:
secret = self.secrets[secret_key_name]
if len(secret) < secret_min_length:
raise ValueError
except (KeyError, ValueError):
raise error.ConfigurationError("You must set a secret string (at least %d chars long) for secrets['%s']!" % (
secret_min_length, secret_key_name))
if self.password_scheme not in config.password_schemes_configurable:
raise error.ConfigurationError("not supported: password_scheme = %r" % self.password_scheme)
if self.passlib_support:
try:
from passlib.context import CryptContext
except ImportError, err:
raise error.ConfigurationError("Wiki is configured to use passlib, but importing passlib failed [%s]!" % str(err))
try:
self.cache.pwd_context = CryptContext(**self.passlib_crypt_context)
except (ValueError, KeyError, TypeError, UserWarning), err:
# ValueError: wrong configuration values
# KeyError: unsupported hash (seen with passlib 1.3)
# TypeError: configuration value has wrong type
raise error.ConfigurationError("passlib_crypt_context configuration is invalid [%s]." % str(err))
elif self.password_scheme == '{PASSLIB}':
raise error.ConfigurationError("passlib_support is switched off, thus you can't use password_scheme = '{PASSLIB}'.")
def calc_secrets(self):
""" make up some 'secret' using some config values """
varnames = ['data_dir', 'data_underlay_dir', 'language_default',
'mail_smarthost', 'mail_from', 'page_front_page',
'theme_default', 'sitename', 'logo_string',
'interwikiname', 'user_homewiki', 'acl_rights_before', ]
secret = ''
for varname in varnames:
var = getattr(self, varname, None)
if isinstance(var, (str, unicode)):
secret += repr(var)
return secret
_meta_dict = None
def load_meta_dict(self):
""" The meta_dict contains meta data about the wiki instance. """
if self._meta_dict is None:
self._meta_dict = wikiutil.MetaDict(os.path.join(self.data_dir, 'meta'), self.cache_dir)
return self._meta_dict
meta_dict = property(load_meta_dict)
# lazily load iwid(_full)
def make_iwid_property(attr):
def getter(self):
if getattr(self, attr, None) is None:
self.load_IWID()
return getattr(self, attr)
return property(getter)
iwid = make_iwid_property("_iwid")
iwid_full = make_iwid_property("_iwid_full")
# lazily create a list of event handlers
_event_handlers = None
def make_event_handlers_prop():
def getter(self):
if self._event_handlers is None:
self._event_handlers = events.get_handlers(self)
return self._event_handlers
def setter(self, new_handlers):
self._event_handlers = new_handlers
return property(getter, setter)
event_handlers = make_event_handlers_prop()
def load_IWID(self):
""" Loads the InterWikiID of this instance. It is used to identify the instance
globally.
The IWID is available as cfg.iwid
The full IWID containing the interwiki name is available as cfg.iwid_full
This method is called by the property.
"""
try:
iwid = self.meta_dict['IWID']
except KeyError:
iwid = util.random_string(16).encode("hex") + "-" + str(int(time.time()))
self.meta_dict['IWID'] = iwid
self.meta_dict.sync()
self._iwid = iwid
if self.interwikiname is not None:
self._iwid_full = packLine([iwid, self.interwikiname])
else:
self._iwid_full = packLine([iwid])
def _config_check(self):
""" Check namespace and warn about unknown names
Warn about names which are not used by DefaultConfig, except
modules, classes, _private or __magic__ names.
This check is disabled by default; when enabled, it shows an
error message listing the unknown names.
"""
unknown = ['"%s"' % name for name in dir(self)
if not name.startswith('_') and
name not in DefaultConfig.__dict__ and
not isinstance(getattr(self, name), (type(sys), type(DefaultConfig)))]
if unknown:
msg = """
Unknown configuration options: %s.
For more information, visit HelpOnConfiguration. Please check your
configuration for typos before requesting support or reporting a bug.
""" % ', '.join(unknown)
raise error.ConfigurationError(msg)
def _decode(self):
""" Try to decode certain names, ignore unicode values
Try to decode str values using utf-8. If decoding fails, raise a
ConfigurationError.
Certain config variables should contain unicode values, and
should be defined with u'text' syntax. Python decodes these if
the file has a 'coding' line.
This allows utf-8 users to use plain strings without the
u'string' syntax. Other users will have to use u'string' for
these names, because we don't know the charset of their
config files.
"""
charset = 'utf-8'
message = u"""
"%(name)s" configuration variable is a string, but should be
unicode. Use %(name)s = u"value" syntax for unicode variables.
Also check your "-*- coding -*-" line at the top of your configuration
file. It should match the actual charset of the configuration file.
"""
decode_names = (
'sitename', 'interwikiname', 'user_homewiki', 'logo_string', 'navi_bar',
'page_front_page', 'page_category_regex', 'page_dict_regex',
'page_group_regex', 'page_template_regex', 'page_license_page',
'page_local_spelling_words', 'acl_rights_default',
'acl_rights_before', 'acl_rights_after', 'mail_from',
'quicklinks_default', 'subscribed_pages_default',
)
for name in decode_names:
attr = getattr(self, name, None)
if attr:
# Try to decode strings
if isinstance(attr, str):
try:
setattr(self, name, unicode(attr, charset))
except UnicodeError:
raise error.ConfigurationError(message %
{'name': name})
# Look into lists and try to decode strings inside them
elif isinstance(attr, list):
for i in xrange(len(attr)):
item = attr[i]
if isinstance(item, str):
try:
attr[i] = unicode(item, charset)
except UnicodeError:
raise error.ConfigurationError(message %
{'name': name})
def _check_directories(self):
""" Make sure directories are accessible
Both the data and underlay directories should exist and allow read,
write and execute access.
"""
mode = os.F_OK | os.R_OK | os.W_OK | os.X_OK
for attr in ('data_dir', 'data_underlay_dir'):
path = getattr(self, attr)
# allow an empty underlay path or None
if attr == 'data_underlay_dir' and not path:
continue
path_pages = os.path.join(path, "pages")
if not (os.path.isdir(path_pages) and os.access(path_pages, mode)):
msg = """
%(attr)s "%(path)s" does not exist, or has incorrect ownership or
permissions.
Make sure the directory and the subdirectory "pages" are owned by the web
server and are readable, writable and executable by the web server user
and group.
It is recommended to use absolute paths and not relative paths. Check
also the spelling of the directory name.
""" % {'attr': attr, 'path': path, }
raise error.ConfigurationError(msg)
def _loadPluginModule(self):
"""
Import all plugin modules.
To be able to import plugins from arbitrary paths, we have to load
the base package once using imp.load_module. Later, we can use the
standard __import__ call to load plugins in this package.
Since each configured plugin path has unique plugins, we load the
plugin packages as "<siteid>.p_<sha1(path)>".
"""
import imp
from MoinMoin.support.python_compatibility import hash_new
plugin_dirs = [self.plugin_dir] + self.plugin_dirs
self._plugin_modules = []
try:
# Lock other threads while we check and import
imp.acquire_lock()
try:
for pdir in plugin_dirs:
csum = 'p_%s' % hash_new('sha1', pdir).hexdigest()
modname = '%s.%s' % (self.siteid, csum)
# If the module is not loaded, try to load it
if not modname in sys.modules:
# Find module on disk and try to load - slow!
abspath = os.path.abspath(pdir)
parent_dir, pname = os.path.split(abspath)
fp, path, info = imp.find_module(pname, [parent_dir])
try:
# Load the module and set in sys.modules
module = imp.load_module(modname, fp, path, info)
setattr(sys.modules[self.siteid], csum, module)
finally:
# Make sure fp is closed properly
if fp:
fp.close()
if modname not in self._plugin_modules:
self._plugin_modules.append(modname)
finally:
imp.release_lock()
except ImportError, err:
msg = """
Could not import plugin package "%(path)s" because of ImportError:
%(err)s.
Make sure your data directory path is correct, check permissions, and
that the data/plugin directory has an __init__.py file.
""" % {
'path': pdir,
'err': str(err),
}
raise error.ConfigurationError(msg)
def _fillDicts(self):
""" fill config dicts
Fills in missing dict keys of derived user config by copying
them from this base class.
"""
# user checkbox defaults
for key, value in DefaultConfig.user_checkbox_defaults.items():
if key not in self.user_checkbox_defaults:
self.user_checkbox_defaults[key] = value
def __getitem__(self, item):
""" Make it possible to access a config object like a dict """
return getattr(self, item)
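# The lazy attributes above (iwid, iwid_full, event_handlers) are built with
# small property-factory functions. The generic sketch below shows the same
# compute-once-and-cache pattern in isolation; _make_lazy_property is a
# hypothetical helper for illustration only, not part of MoinMoin's API.
def _make_lazy_property(attr, compute):
    """ Return a property that computes its value on first access and then
        caches it in the instance attribute named by 'attr'.
    """
    def getter(self):
        if getattr(self, attr, None) is None:
            setattr(self, attr, compute(self))
        return getattr(self, attr)
    return property(getter)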
class DefaultConfig(ConfigFunctionality):
""" Configuration base class with default config values
(added below)
"""
# Do not add anything into this class. Functionality must
# be added above to avoid having the methods show up in
# the WikiConfig macro. Settings must be added below to
# the options dictionary.
_default_backlink_method = lambda cfg, req: 'backlink' if req.user.valid else 'pagelink'
def _default_password_checker(cfg, request, username, password,
min_length=6, min_different=4):
""" Check if a password is secure enough.
We use a built-in check to get rid of the worst passwords.
We do NOT use cracklib / python-crack here any more because it is
not thread-safe (we experienced segmentation faults when using it).
If you don't want to check passwords, use password_checker = None.
@return: None if there is no problem with the password,
some unicode object with an error msg, if the password is problematic.
"""
_ = request.getText
# in any case, do a very simple built-in check to avoid the worst passwords
if len(password) < min_length:
return _("Password is too short.")
if len(set(password)) < min_different:
return _("Password has not enough different characters.")
username_lower = username.lower()
password_lower = password.lower()
if username in password or password in username or \
username_lower in password_lower or password_lower in username_lower:
return _("Password is too easy (password contains name or name contains password).")
keyboards = (ur"`1234567890-=qwertyuiop[]\asdfghjkl;'zxcvbnm,./", # US kbd
ur"^1234567890ß´qwertzuiopü+asdfghjklöä#yxcvbnm,.-", # german kbd
) # add more keyboards!
for kbd in keyboards:
rev_kbd = kbd[::-1]
if password in kbd or password in rev_kbd or \
password_lower in kbd or password_lower in rev_kbd:
return _("Password is too easy (keyboard sequence).")
return None
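# A standalone sketch of the built-in checks above, for trying the policy
# outside a wiki request context. The function name and the plain-string
# return values are illustrative; the real checker also rejects keyboard
# sequences and returns translated messages via request.getText.
def _check_password_strength(username, password, min_length=6, min_different=4):
    """ Return None if the password passes the basic checks, else a reason. """
    if len(password) < min_length:
        return "too short"
    if len(set(password)) < min_different:
        return "not enough different characters"
    if username.lower() in password.lower() or password.lower() in username.lower():
        return "contains the username (or vice versa)"
    return None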
class DefaultExpression(object):
def __init__(self, exprstr):
self.text = exprstr
self.value = eval(exprstr)
#
# Options that are not prefixed automatically with their
# group name, see below (at the options dict) for more
# information on the layout of this structure.
#
options_no_group_name = {
# =========================================================================
'attachment_extension': ("Mapping of attachment extensions to actions", None,
(
('extensions_mapping',
{'.tdraw': {'modify': 'twikidraw'},
'.adraw': {'modify': 'anywikidraw'},
}, "file extension -> do -> action"),
)),
# ==========================================================================
'datastruct': ('Datastruct settings', None, (
('dicts', lambda cfg, request: datastruct.WikiDicts(request),
"function f(cfg, request) that returns a backend which is used to access dicts definitions."),
('groups', lambda cfg, request: datastruct.WikiGroups(request),
"function f(cfg, request) that returns a backend which is used to access groups definitions."),
)),
# ==========================================================================
'session': ('Session settings', "Session-related settings, see HelpOnSessions.", (
('session_service', DefaultExpression('web.session.FileSessionService()'),
"The session service."),
('cookie_name', None,
'The variable part of the session cookie name. (None = determine from URL, siteidmagic = use siteid, any other string = use that)'),
('cookie_secure', None,
'Use secure cookie. (None = auto-enable secure cookie for https, True = always use secure cookie, False = never use secure cookie).'),
('cookie_httponly', False,
'Use a httponly cookie that can only be used by the server, not by clientside scripts.'),
('cookie_domain', None,
'Domain used in the session cookie. (None = do not specify domain).'),
('cookie_path', None,
'Path used in the session cookie (None = auto-detect). Please only set if you know exactly what you are doing.'),
('cookie_lifetime', (0, 12),
'Session lifetime [h] of (anonymous, logged-in) users (see HelpOnSessions for details).'),
)),
# ==========================================================================
'auth': ('Authentication / Authorization / Security settings', None, (
('superuser', [],
"List of trusted user names with wiki system administration super powers (not to be confused with ACL admin rights!). Used for e.g. software installation, language installation via SystemPagesSetup and more. See also HelpOnSuperUser."),
('auth', DefaultExpression('[MoinAuth()]'),
"list of auth objects, to be called in this order (see HelpOnAuthentication)"),
('auth_methods_trusted', ['http', 'given', 'xmlrpc_applytoken'], # Note: 'http' auth method is currently just a redirect to 'given'
'authentication methods for which users should be included in the special "Trusted" ACL group.'),
('secrets', None, """Either a long shared secret string used for multiple purposes or a dict {"purpose": "longsecretstring", ...} for setting up different shared secrets for different purposes. If you don't setup own secret(s), a secret string will be auto-generated from other config settings."""),
('DesktopEdition',
False,
"if True, give all local users special powers - ''only use this for a local desktop wiki!''"),
('SecurityPolicy',
None,
"Class object hook for implementing security restrictions or relaxations"),
('actions_excluded',
['xmlrpc', # we do not want wiki admins unknowingly offering xmlrpc service
'MyPages', # only works when used with a non-default SecurityPolicy (e.g. autoadmin)
'CopyPage', # has questionable behaviour regarding subpages a user can't read, but can copy
],
"Exclude unwanted actions (list of strings)"),
('allow_xslt', False,
"if True, enables XSLT processing via 4Suite (Note that this is DANGEROUS. It enables anyone who can edit the wiki to get '''read/write access to your filesystem as the moin process uid/gid''' and to insert '''arbitrary HTML''' into your wiki pages, which is why this setting defaults to `False` (XSLT disabled). Do not set it to other values, except if you know what you do and if you have very trusted editors only)."),
('password_checker', DefaultExpression('_default_password_checker'),
'checks whether a password is acceptable (default check: length >= 6, at least 4 different characters, no keyboard sequence, password must not contain the username or vice versa; switch the check off by using `None`)'),
('password_scheme', '{PASSLIB}',
'Either "{PASSLIB}" (default) to use passlib for creating and upgrading password hashes (see also passlib_crypt_context for passlib configuration), '
'or "{SSHA}" (or any other of the builtin password schemes) to not use passlib (not recommended).'),
('passlib_support', True,
'If True (default), import passlib and support password hashes offered by it.'),
('passlib_crypt_context', dict(
# schemes we want to support (or deprecated schemes for which we still have
# hashes in our storage).
# note: bcrypt: we did not include it as it needs additional code (that is not pure python
# and thus either needs compiling or installing platform-specific binaries) and
# also there was some bcrypt issue in passlib < 1.5.3.
# pbkdf2_sha512: not included as it needs at least passlib 1.4.0
# sha512_crypt: supported since passlib 1.3.0 (first public release)
schemes=["sha512_crypt", ],
# default scheme for creating new pw hashes (if not given, passlib uses first from schemes)
#default="sha512_crypt",
# deprecated schemes get auto-upgraded to the default scheme at login
# time or when setting a password (including doing a moin account pwreset).
# for passlib >= 1.6, giving ["auto"] means that all schemes except the default are deprecated:
#deprecated=["auto"],
# to also support older passlib versions, rather give an explicit list:
#deprecated=[],
# vary rounds parameter randomly when creating new hashes...
#all__vary_rounds=0.1,
),
"passlib CryptContext arguments, see passlib docs"),
('recovery_token_lifetime', 12,
'how long the password recovery token is valid [h]'),
)),
# ==========================================================================
'spam_leech_dos': ('Anti-Spam/Leech/DOS',
'These settings help limiting resource usage and avoiding abuse.',
(
('hosts_deny', [], "List of denied IPs; if an IP ends with a dot, it denies a whole subnet (class A, B or C)"),
('surge_action_limits',
{# allow max. <count> <action> requests per <dt> secs
# action: (count, dt)
'all': (30, 30), # all requests (except cache/AttachFile action) count for this limit
'default': (30, 60), # default limit for actions without a specific limit
'show': (30, 60),
'recall': (10, 120),
'raw': (20, 40), # some people use this for css
'diff': (30, 60),
'fullsearch': (10, 120),
'edit': (30, 300), # can be lowered after making preview different from edit
'rss_rc': (1, 60),
# The following actions are often used for images - to avoid pages with lots of images
# (like photo galleries) triggering surge protection, we assign rather high limits:
'AttachFile': (300, 30),
'cache': (600, 30), # cache action is very cheap/efficient
# special stuff to prevent someone trying lots of usernames / passwords to log in.
# we keep this commented / disabled so that this feature does not get activated by default
# (if somebody does not override surge_action_limits with own values):
#'auth-ip': (10, 3600), # same remote ip (any name)
#'auth-name': (10, 3600), # same name (any remote ip)
},
"Surge protection tries to deny clients causing too much load/traffic, see HelpOnConfiguration/SurgeProtection."),
('surge_lockout_time', 3600, "time [s] someone gets locked out when ignoring the warnings"),
('textchas', None,
"Spam protection setup using site-specific questions/answers, see HelpOnSpam."),
('textchas_disabled_group', None,
"Name of a group of trusted users who do not get asked !TextCha questions."),
('textchas_expiry_time', 600,
"Time [s] for a !TextCha to expire."),
('antispam_master_url', "http://master.moinmo.in/?action=xmlrpc2",
"where antispam security policy fetches spam pattern updates (if it is enabled)"),
# a regex of HTTP_USER_AGENTS that should be excluded from logging
# and receive a FORBIDDEN for anything except viewing a page
# the list must not contain 'java' because twikidraw uses that useragent when saving drawings
('ua_spiders',
('archiver|bingbot|cfetch|charlotte|crawler|gigabot|googlebot|heritrix|holmes|htdig|httrack|httpunit|'
'intelix|jeeves|larbin|leech|libwww-perl|linkbot|linkmap|linkwalk|litefinder|mercator|'
'microsoft.url.control|mirror| mj12bot|msnbot|msrbot|neomo|nutbot|omniexplorer|puf|robot|scooter|seekbot|'
'sherlock|slurp|sitecheck|snoopy|spider|teleport|twiceler|voilabot|voyager|webreaper|wget|yeti'),
"A regex of HTTP_USER_AGENTs that should be excluded from logging and are not allowed to use actions."),
('unzip_single_file_size', 2.0 * 1000 ** 2,
"max. size of a single file in the archive which will be extracted [bytes]"),
('unzip_attachments_space', 200.0 * 1000 ** 2,
"max. total amount of bytes that can be used to unzip files [bytes]"),
('unzip_attachments_count', 101,
"max. number of files which are extracted from the zip file"),
)),
# ==========================================================================
'style': ('Style / Theme / UI related',
'These settings control what the wiki user interface will look like.',
(
('sitename', u'Untitled Wiki',
"Short description of your wiki site, displayed below the logo on each page, and used in RSS documents as the channel title [Unicode]"),
('interwikiname', None, "unique and stable InterWiki name (prefix, moniker) of the site [Unicode], or None"),
('logo_string', None, "The wiki logo top of page, HTML is allowed (`<img>` is possible as well) [Unicode]"),
('html_pagetitle', None, "Allows you to set a specific HTML page title (if None, it defaults to the value of `sitename`)"),
('navi_bar', [u'RecentChanges', u'FindPage', u'HelpContents', ],
'Most important page names. Users can add more names in their quick links in user preferences. To link to URL, use `u"[[url|link title]]"`, to use a shortened name for long page name, use `u"[[LongLongPageName|title]]"`. [list of Unicode strings]'),
('theme_default', 'modernized',
"the name of the theme that is used by default (see HelpOnThemes)"),
('theme_force', False,
"if True, do not allow to change the theme"),
('stylesheets', [],
"List of tuples (media, csshref) to insert after theme css, before user css, see HelpOnThemes."),
('supplementation_page', False,
"if True, show a link to the supplementation page in the theme"),
('supplementation_page_name', u'Discussion',
"default name of the supplementation (sub)page [unicode]"),
('supplementation_page_template', u'DiscussionTemplate',
"default template used for creation of the supplementation page [unicode]"),
('interwiki_preferred', [], "In dialogues, show those wikis at the top of the list."),
('sistersites', [], "list of tuples `('WikiName', 'sisterpagelist_fetch_url')`"),
('trail_size', 5,
"Number of pages in the trail of visited pages"),
('page_footer1', '', "Custom HTML markup sent ''before'' the system footer."),
('page_footer2', '', "Custom HTML markup sent ''after'' the system footer."),
('page_header1', '', "Custom HTML markup sent ''before'' the system header / title area but after the body tag."),
('page_header2', '', "Custom HTML markup sent ''after'' the system header / title area (and body tag)."),
('changed_time_fmt', '%H:%M', "Time format used on Recent``Changes for page edits within the last 24 hours"),
('date_fmt', '%Y-%m-%d', "System date format, used mostly in Recent``Changes"),
('datetime_fmt', '%Y-%m-%d %H:%M:%S', 'Default format for dates and times (when the user has no preferences or chose the "default" date format)'),
('chart_options', None, "If you have gdchart, use something like chart_options = {'width': 720, 'height': 540}"),
('edit_bar', ['Edit', 'Comments', 'Discussion', 'Info', 'Subscribe', 'Quicklink', 'Attachments', 'ActionsMenu'],
'list of edit bar entries'),
('history_count', (100, 200, 5, 10, 25, 50), "Number of revisions shown for the info/history action (default_count_shown, max_count_shown, [other values shown as page size choices]). At least the first two values (default and maximum) should be provided. If additional values are provided, the user will be able to change the number of items per page in the UI."),
('history_paging', True, "Enable paging functionality for info action's history display."),
('show_hosts', True,
"if True, show host names and IPs. Set to False to hide them."),
('show_interwiki', False,
"if True, let the theme display your interwiki name"),
('show_names', True,
"if True, show user names in the revision history and on Recent``Changes. Set to False to hide them."),
('show_section_numbers', False,
'show section numbers in headings by default'),
('show_timings', False, "show some timing values at bottom of a page"),
('show_version', False, "show moin's version at the bottom of a page"),
('show_rename_redirect', False, "if True, offer creation of redirect pages when renaming wiki pages"),
('backlink_method', DefaultExpression('_default_backlink_method'),
"function determining how the (last part of the) pagename should be rendered in the title area"),
('packagepages_actions_excluded',
['setthemename', # related to questionable theme stuff, see below
'copythemefile', # maybe does not work, e.g. if no fs write permissions or real theme file path is unknown to moin
'installplugin', # code installation, potentially dangerous
'renamepage', # dangerous with hierarchical acls
'deletepage', # dangerous with hierarchical acls
'delattachment', # dangerous, no revisioning
],
'list with excluded package actions (e.g. because they are dangerous / questionable)'),
('page_credits',
[
'<a href="http://moinmo.in/" title="This site uses the MoinMoin Wiki software.">MoinMoin Powered</a>',
'<a href="http://moinmo.in/Python" title="MoinMoin is written in Python.">Python Powered</a>',
'<a href="http://moinmo.in/GPL" title="MoinMoin is GPL licensed.">GPL licensed</a>',
'<a href="http://validator.w3.org/check?uri=referer" title="Click here to validate this page.">Valid HTML 4.01</a>',
],
'list with html fragments with logos or strings for crediting.'),
# These icons will show in this order in the iconbar, unless they
# are not relevant, e.g email icon when the wiki is not configured
# for email.
('page_iconbar', ["up", "edit", "view", "diff", "info", "subscribe", "raw", "print", ],
'list of icons to show in iconbar, valid values are only those in page_icons_table. Available only in classic theme.'),
# Standard buttons in the iconbar
('page_icons_table',
{
# key pagekey, querystr dict, title, icon-key
'diff': ('page', {'action': 'diff'}, _("Diffs"), "diff"),
'info': ('page', {'action': 'info'}, _("Info"), "info"),
'edit': ('page', {'action': 'edit'}, _("Edit"), "edit"),
'unsubscribe': ('page', {'action': 'unsubscribe'}, _("UnSubscribe"), "unsubscribe"),
'subscribe': ('page', {'action': 'subscribe'}, _("Subscribe"), "subscribe"),
'raw': ('page', {'action': 'raw'}, _("Raw"), "raw"),
'xml': ('page', {'action': 'show', 'mimetype': 'text/xml'}, _("XML"), "xml"),
'print': ('page', {'action': 'print'}, _("Print"), "print"),
'view': ('page', {}, _("View"), "view"),
'up': ('page_parent_page', {}, _("Up"), "up"),
},
"dict of {'iconname': (url, title, icon-img-key), ...}. Available only in classic theme."),
('show_highlight_msg', False, "Show message that page has highlighted text "
"and provide link to non-highlighted "
"version."),
)),
# ==========================================================================
'editor': ('Editor related', None, (
('editor_default', 'text', "Editor to use by default, 'text' or 'gui'"),
('editor_force', False, "if True, force using the default editor"),
('editor_ui', 'freechoice', "Editor choice shown on the user interface, 'freechoice' or 'theonepreferred'"),
('page_license_enabled', False, 'if True, show a license hint in page editor.'),
('page_license_page', u'WikiLicense', 'Page linked from the license hint. [Unicode]'),
('edit_locking', 'warn 10', "Editor locking policy: `None`, `'warn <timeout in minutes>'`, or `'lock <timeout in minutes>'`"),
('edit_ticketing', True, None),
('edit_rows', 20, "Default height of the edit box"),
('comment_required', False, "if True, only allow saving if a comment is filled in"),
)),
# ==========================================================================
'paths': ('Paths', None, (
('data_dir', './data/', "Path to the data directory containing your (locally made) wiki pages."),
('data_underlay_dir', './underlay/', "Path to the underlay directory containing distribution system and help pages."),
('cache_dir', None, "Directory for caching, by default computed from `data_dir`/cache."),
('session_dir', None, "Directory for session storage, by default computed to be `cache_dir`/__session__."),
('user_dir', None, "Directory for user storage, by default computed to be `data_dir`/user."),
('plugin_dir', None, "Plugin directory, by default computed to be `data_dir`/plugin."),
('plugin_dirs', [], "Additional plugin directories."),
('docbook_html_dir', r"/usr/share/xml/docbook/stylesheet/nwalsh/html/",
'Path to the directory with the Docbook to HTML XSLT files (optional, used by the docbook parser). The default value is correct for Debian Etch.'),
('shared_intermap', None,
"Path to a file containing global InterWiki definitions (or a list of such filenames)"),
)),
# ==========================================================================
'urls': ('URLs', None, (
# includes the moin version number, so we can have an unlimited cache lifetime
# for the static stuff. if stuff changes on version upgrade, url will change
# immediately and we have no problem with stale caches.
('url_prefix_static', config.url_prefix_static,
"used as the base URL for icons, css, etc. - includes the moin version number and changes on every release. This replaces the deprecated and sometimes confusing `url_prefix = '/wiki'` setting."),
('url_prefix_local', None,
"used as the base URL for some Javascript - set this to a URL on same server as the wiki if your url_prefix_static points to a different server."),
('url_prefix_fckeditor', None,
"used as the base URL for FCKeditor - similar to url_prefix_local, but just for FCKeditor."),
('url_prefix_action', None,
"Use 'action' to enable action URL generation to be compatible with robots.txt. It will then generate URLs like .../action/info/PageName?action=info. Recommended for internet wikis."),
('notification_bot_uri', None, "URI of the Jabber notification bot."),
('url_mappings', {},
"lookup table to remap URL prefixes (dict of {{{'prefix': 'replacement'}}}); especially useful in intranets, when whole trees of externally hosted documents move around"),
)),
# ==========================================================================
'pages': ('Special page names', None, (
('page_front_page', u'LanguageSetup',
"Name of the front page. We don't expect you to keep the default. Just read LanguageSetup in case you're wondering... [Unicode]"),
# the following regexes should match the complete name when used in free text
# the group 'all' shall match all, while the group 'key' shall match the key only
# e.g. CategoryFoo -> group 'all' == CategoryFoo, group 'key' == Foo
# moin's code will add ^ / $ at beginning / end when needed
('page_category_regex', ur'(?P<all>Category(?P<key>(?!Template)\S+))',
'Pagenames exactly matching this regex are regarded as Wiki categories [Unicode]'),
('page_dict_regex', ur'(?P<all>(?P<key>\S+)Dict)',
'Pagenames exactly matching this regex are regarded as pages containing variable dictionary definitions [Unicode]'),
('page_group_regex', ur'(?P<all>(?P<key>\S+)Group)',
'Pagenames exactly matching this regex are regarded as pages containing group definitions [Unicode]'),
('page_template_regex', ur'(?P<all>(?P<key>\S+)Template)',
'Pagenames exactly matching this regex are regarded as pages containing templates for new pages [Unicode]'),
('page_local_spelling_words', u'LocalSpellingWords',
'Name of the page containing user-provided spellchecker words [Unicode]'),
)),
# ==========================================================================
'user': ('User Preferences related', None, (
('quicklinks_default', [],
'List of preset quicklinks for newly created user accounts. Existing accounts are not affected by this option, whereas changes in navi_bar always affect existing accounts. Preset quicklinks can be removed by the user in the user preferences menu; navi_bar entries cannot.'),
('subscribed_pages_default', [],
"List of pagenames used for presetting page subscriptions for newly created user accounts."),
('email_subscribed_events_default',
[
PageChangedEvent.__name__,
PageRenamedEvent.__name__,
PageDeletedEvent.__name__,
PageCopiedEvent.__name__,
PageRevertedEvent.__name__,
FileAttachedEvent.__name__,
], None),
('jabber_subscribed_events_default', [], None),
('tz_offset', 0.0,
"default time zone offset in hours from UTC"),
('userprefs_disabled', [],
"Disable the listed user preferences plugins."),
)),
# ==========================================================================
'various': ('Various', None, (
('bang_meta', True, 'if True, enable {{{!NoWikiName}}} markup'),
('caching_formats', ['text_html'], "output formats that are cached; set to [] to turn off caching (useful for development)"),
('config_check_enabled', False, "if True, check configuration for unknown settings."),
('default_markup', 'wiki', 'Default page parser / format (name of module in `MoinMoin.parser`)'),
('html_head', '', "Additional <HEAD> tags, see HelpOnThemes."),
('html_head_queries', '<meta name="robots" content="noindex,nofollow">\n',
"Additional <HEAD> tags for requests with query strings, like actions."),
('html_head_posts', '<meta name="robots" content="noindex,nofollow">\n',
"Additional <HEAD> tags for POST requests."),
('html_head_index', '<meta name="robots" content="index,follow">\n',
"Additional <HEAD> tags for some few index pages."),
('html_head_normal', '<meta name="robots" content="index,nofollow">\n',
"Additional <HEAD> tags for most normal pages."),
('language_default', 'en', "Default language for user interface and page content, see HelpOnLanguages."),
('language_ignore_browser', False, "if True, ignore user's browser language settings, see HelpOnLanguages."),
('log_remote_addr', True,
"if True, log the remote IP address (and maybe hostname)."),
('log_reverse_dns_lookups', True,
"if True, do a reverse DNS lookup on page SAVE. If your DNS is broken, set this to False to speed up SAVE."),
('log_timing', False,
"if True, add timing infos to the log output to analyse load conditions"),
('log_events_format', 1,
"0 = no events logging, 1 = standard format (like <= 1.9.7) [default], 2 = extended format"),
# some dangerous mimetypes (we don't use "content-disposition: inline" for them when a user
# downloads such attachments, because the browser might execute e.g. Javascript contained
# in the HTML and steal your moin session cookie or do other nasty stuff)
('mimetypes_xss_protect',
[
'text/html',
'application/x-shockwave-flash',
'application/xhtml+xml',
],
'"content-disposition: inline" isn\'t used for them when a user downloads such attachments'),
('mimetypes_embed',
[
'application/x-dvi',
'application/postscript',
'application/pdf',
'application/ogg',
'application/vnd.visio',
'image/x-ms-bmp',
'image/svg+xml',
'image/tiff',
'image/x-photoshop',
'audio/mpeg',
'audio/midi',
'audio/x-wav',
'video/fli',
'video/mpeg',
'video/quicktime',
'video/x-msvideo',
'chemical/x-pdb',
'x-world/x-vrml',
],
'mimetypes that can be embedded by the [[HelpOnMacros/EmbedObject|EmbedObject macro]]'),
('refresh', None,
"refresh = (minimum_delay_s, targets_allowed) enables use of `#refresh 5 PageName` processing instruction, targets_allowed must be either `'internal'` or `'external'`"),
('rss_cache', 60, "suggested caching time for Recent''''''Changes RSS, in seconds"),
('search_results_per_page', 25, "Number of hits shown per page in the search results"),
('siteid', 'default', None),
)),
}
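# The surge_action_limits values above are (count, dt) pairs: at most <count>
# requests per <dt> seconds for one action. Below is a minimal, self-contained
# sketch of such a sliding-window check; the real surge protection code in
# MoinMoin additionally handles lockout and per-client bookkeeping, and
# _SurgeWindow is a hypothetical name used only for illustration.
class _SurgeWindow(object):
    """ Track request timestamps for one action and report when the
        (count, dt) limit is exceeded.
    """
    def __init__(self, count, dt):
        self.count = count
        self.dt = dt
        self.events = []

    def hit(self, now):
        """ Record one request at time 'now' [s]; return True if there have
            now been more than count requests within the last dt seconds.
        """
        # forget requests that fell out of the dt-seconds window
        self.events = [t for t in self.events if now - t < self.dt]
        self.events.append(now)
        return len(self.events) > self.count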
#
# The 'options' dict carries default MoinMoin options. The dict is a
# group name to tuple mapping.
# Each group tuple consists of the following items:
# group section heading, group help text, option list
#
# where each 'option list' is a tuple or list of option tuples
#
# each option tuple consists of
# option name, default value, help text
#
# All the help texts will be displayed by the WikiConfigHelp() macro.
#
# Unlike the options_no_group_name dict, option names in this dict
# are automatically prefixed with "group name '_'" (i.e. the name of
# the group they are in and an underscore), e.g. the 'hierarchic'
# below creates an option called "acl_hierarchic".
#
# If you need to add a complex default expression that results in an
# object and should not be shown in the __repr__ form in WikiConfigHelp(),
# you can use the DefaultExpression class, see 'auth' above for example.
#
#
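# As described above, option names from the 'options' dict get their group
# name prefixed, so ('acl', ..., (('hierarchic', False, ...), ...)) becomes
# the class attribute acl_hierarchic. The sketch below shows that flattening
# step in isolation; _flatten_options is a hypothetical helper (the real loop
# at the end of this module also unwraps DefaultExpression values and collects
# the help texts for WikiConfigHelp).
def _flatten_options(option_groups, prefix_group_name=True):
    """ Return a dict {attribute_name: default_value} from an options dict
        of {groupname: (heading, helptext, option_tuples)}.
    """
    flat = {}
    for groupname, (heading, helptext, opts) in option_groups.items():
        for name, default, description in opts:
            if prefix_group_name:
                name = '%s_%s' % (groupname, name)
            flat[name] = default
    return flat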
options = {
'acl': ('Access control lists',
'ACLs control who may do what, see HelpOnAccessControlLists.',
(
('hierarchic', False, 'True to use hierarchical ACLs'),
('rights_default', u"Trusted:read,write,delete,revert Known:read,write,delete,revert All:read,write",
"ACL used if no ACL is specified on the page"),
('rights_before', u"",
"ACL that is processed before the on-page/default ACL"),
('rights_after', u"",
"ACL that is processed after the on-page/default ACL"),
('rights_valid', ['read', 'write', 'delete', 'revert', 'admin'],
"Valid tokens for right sides of ACL entries."),
)),
'xapian': ('Xapian search', "Configuration of the Xapian based indexed search, see HelpOnXapian.", (
('search', False,
"True to enable the fast, indexed search (based on the Xapian search library)"),
('index_dir', None,
"Directory where the Xapian search index is stored (None = auto-configure wiki local storage)"),
('stemming', False,
"True to enable Xapian word stemmer usage for indexing / searching."),
('index_history', False,
"True to enable indexing of non-current page revisions."),
)),
'user': ('Users / User settings', None, (
('email_unique', True,
"if True, check email addresses for uniqueness and don't accept duplicates."),
('jid_unique', True,
"if True, check Jabber IDs for uniqueness and don't accept duplicates."),
('homewiki', u'Self',
"interwiki name of the wiki where the user home pages are located [Unicode] - useful if you have ''many'' users. You could even link to nonwiki \"user pages\" if the wiki username is in the target URL."),
('checkbox_fields',
[
('mailto_author', lambda _: _('Publish my email (not my wiki homepage) in author info')),
('edit_on_doubleclick', lambda _: _('Open editor on double click')),
('remember_last_visit', lambda _: _('After login, jump to last visited page')),
('show_comments', lambda _: _('Show comment sections')),
('show_nonexist_qm', lambda _: _('Show question mark for non-existing pagelinks')),
('show_page_trail', lambda _: _('Show page trail')),
('show_toolbar', lambda _: _('Show icon toolbar')),
('show_topbottom', lambda _: _('Show top/bottom links in headings')),
('show_fancy_diff', lambda _: _('Show fancy diffs')),
('wikiname_add_spaces', lambda _: _('Add spaces to displayed wiki names')),
('remember_me', lambda _: _('Remember login information')),
('disabled', lambda _: _('Disable this account forever')),
# if an account is disabled, it may be used for looking up
# id -> username for page info and recent changes, but it
# is not usable for the user any more:
],
"Describes user preferences, see HelpOnConfiguration/UserPreferences."),
('checkbox_defaults',
{
'mailto_author': 0,
'edit_on_doubleclick': 1,
'remember_last_visit': 0,
'show_comments': 0,
'show_nonexist_qm': False,
'show_page_trail': 1,
'show_toolbar': 1,
'show_topbottom': 0,
'show_fancy_diff': 1,
'wikiname_add_spaces': 0,
'remember_me': 1,
},
"Defaults for user preferences, see HelpOnConfiguration/UserPreferences."),
('checkbox_disable', [],
"Disable user preferences, see HelpOnConfiguration/UserPreferences."),
('checkbox_remove', [],
"Remove user preferences, see HelpOnConfiguration/UserPreferences."),
('form_fields',
[
('name', _('Name'), "text", "36", _("(Use FirstnameLastname)")),
('aliasname', _('Alias-Name'), "text", "36", ''),
('email', _('Email'), "text", "36", ''),
('jid', _('Jabber ID'), "text", "36", ''),
('css_url', _('User CSS URL'), "text", "40", _('(Leave it empty for disabling user CSS)')),
('edit_rows', _('Editor size'), "text", "3", ''),
],
None),
('form_defaults',
{# key: default - do NOT remove keys from here!
'name': '',
'aliasname': '',
'password': '',
'password2': '',
'email': '',
'jid': '',
'css_url': '',
'edit_rows': "20",
},
None),
('form_disable', [], "list of field names used to disable user preferences form fields"),
('form_remove', [], "list of field names used to remove user preferences form fields"),
('transient_fields',
['id', 'valid', 'may', 'auth_username', 'password', 'password2', 'auth_method', 'auth_attribs', ],
"User object attributes that are not persisted to permanent storage (internal use)."),
)),
'openidrp': ('OpenID Relying Party',
'These settings control the built-in OpenID Relying Party (client).',
(
('allowed_op', [], "List of forced providers"),
)),
'openid_server': ('OpenID Server',
'These settings control the built-in OpenID Identity Provider (server).',
(
('enabled', False, "True to enable the built-in OpenID server."),
('restricted_users_group', None, "If set to a group name, the group members are allowed to use the wiki as an OpenID provider. (None = allow for all users)"),
('enable_user', False, "If True, the OpenIDUser processing instruction is allowed."),
)),
'mail': ('Mail settings',
'These settings control outgoing and incoming email from and to the wiki.',
(
('from', None, "Used as From: address for generated mail."),
('login', None, "'username userpass' for SMTP server authentication (None = don't use auth)."),
('smarthost', None, "Address of SMTP server to use for sending mail (None = don't use SMTP server)."),
('sendmail', None, "sendmail command to use for sending mail (None = don't use sendmail)"),
('import_subpage_template', u"$from-$date-$subject", "Create subpages using this template when importing mail."),
('import_pagename_search', ['subject', 'to', ], "Where to look for target pagename specification."),
('import_pagename_envelope', u"%s", "Use this to add some fixed prefix/postfix to the generated target pagename."),
('import_pagename_regex', r'\[\[([^\]]*)\]\]', "Regular expression used to search for target pagename specification."),
('import_wiki_addrs', [], "Target mail addresses to consider when importing mail"),
('notify_page_text', '%(intro)s%(difflink)s\n\n%(comment)s%(diff)s',
"Template for putting together the pieces for the page changed/deleted/renamed notification mail text body"),
('notify_page_changed_subject', _('[%(sitename)s] %(trivial)sUpdate of "%(pagename)s" by %(username)s'),
"Template for the page changed notification mail subject header"),
('notify_page_changed_intro',
_("Dear Wiki user,\n\n"
'You have subscribed to a wiki page or wiki category on "%(sitename)s" for change notification.\n\n'
'The "%(pagename)s" page has been changed by %(editor)s:\n'),
"Template for the page changed notification mail intro text"),
('notify_page_deleted_subject', _('[%(sitename)s] %(trivial)sUpdate of "%(pagename)s" by %(username)s'),
"Template for the page deleted notification mail subject header"),
('notify_page_deleted_intro',
_("Dear wiki user,\n\n"
'You have subscribed to a wiki page "%(sitename)s" for change notification.\n\n'
'The page "%(pagename)s" has been deleted by %(editor)s:\n\n'),
"Template for the page deleted notification mail intro text"),
('notify_page_renamed_subject', _('[%(sitename)s] %(trivial)sUpdate of "%(pagename)s" by %(username)s'),
"Template for the page renamed notification mail subject header"),
('notify_page_renamed_intro',
_("Dear wiki user,\n\n"
'You have subscribed to a wiki page "%(sitename)s" for change notification.\n\n'
'The page "%(pagename)s" has been renamed from "%(oldname)s" by %(editor)s:\n'),
"Template for the page renamed notification mail intro text"),
('notify_att_added_subject', _('[%(sitename)s] New attachment added to page %(pagename)s'),
"Template for the attachment added notification mail subject header"),
('notify_att_added_intro',
_("Dear Wiki user,\n\n"
'You have subscribed to a wiki page "%(page_name)s" for change notification. '
"An attachment has been added to that page by %(editor)s. "
"Following detailed information is available:\n\n"
"Attachment name: %(attach_name)s\n"
"Attachment size: %(attach_size)s\n"),
"Template for the attachment added notification mail intro text"),
('notify_att_removed_subject', _('[%(sitename)s] Removed attachment from page %(pagename)s'),
"Template for the attachment removed notification mail subject header"),
('notify_att_removed_intro',
_("Dear Wiki user,\n\n"
'You have subscribed to a wiki page "%(page_name)s" for change notification. '
"An attachment has been removed from that page by %(editor)s. "
"Following detailed information is available:\n\n"
"Attachment name: %(attach_name)s\n"
"Attachment size: %(attach_size)s\n"),
"Template for the attachment removed notification mail intro text"),
('notify_user_created_subject',
_("[%(sitename)s] New user account created"),
"Template for the user created notification mail subject header"),
('notify_user_created_intro',
_('Dear Superuser, a new user has just been created on "%(sitename)s". Details follow:\n\n'
' User name: %(username)s\n'
' Email address: %(useremail)s'),
"Template for the user created notification mail intro text"),
)),
'backup': ('Backup settings',
'These settings control how the backup action works and who is allowed to use it.',
(
('compression', 'gz', 'What compression to use for the backup ("gz" or "bz2").'),
('users', [], 'List of trusted user names who are allowed to get a backup.'),
('include', [], 'List of paths to back up.'),
('exclude', lambda self, filename: False, 'Function f(self, filename) that tells whether a file should be excluded from backup. By default, nothing is excluded.'),
)),
'rss': ('RSS settings',
'These settings control RSS behaviour.',
(
('items_default', 15, "Default maximum items value for RSS feed. Can be "
"changed via items URL query parameter of rss_rc "
"action."),
('items_limit', 100, "Limit for the item count retrieved via RSS (i.e. the "
                     "user can't get more than items_limit items, even by "
                     "changing the items URL query parameter)."),
('unique', 0, "If set to 1, for each page name only one RSS item would "
"be shown. Can be changed via unique rss_rc action URL "
"query parameter."),
('diffs', 0, "Add diffs in RSS item descriptions by default. Can be "
"changed via diffs URL query parameter of rss_rc action."),
('ddiffs', 0, "If set to 1, links to diff view instead of page itself "
"would be generated by default. Can be changed via ddiffs "
"URL query parameter of rss_rc action."),
('lines_default', 20, "Default line count limit for diffs added as item "
"descriptions for RSS items. Can be changed via "
"lines URL query parameter of rss_rc action."),
('lines_limit', 100, "Limit for possible line count for diffs added as "
"item descriptions in RSS."),
('show_attachment_entries', 0, "If set to 1, items related to attachment "
                               "management are added to the RSS feed. Can "
                               "be changed via the show_att URL query "
                               "parameter of the rss_rc action."),
('page_filter_pattern', "", "Default page filter pattern for the RSS feed. "
                            "An empty pattern matches any page. A pattern "
                            "beginning with a circumflex is interpreted as "
                            "a regular expression. A pattern ending with a "
                            "slash matches a page and all its subpages. "
                            "Otherwise the pattern names a specific page. "
                            "Can be changed via the page URL query "
                            "parameter of the rss_rc action."),
('show_page_history_link', True, "Add link to page change history "
"RSS feed in theme."),
)),
'search_macro': ('Search macro settings',
'Settings related to behaviour of search macros (such as FullSearch, '
'FullSearchCached, PageList)',
(
('parse_args', False, "Parse search macro parameters. In previous "
                      "versions of MoinMoin, the whole search macro "
                      "parameter string was interpreted as the needle. "
                      "Enable this to allow additional parameters to be "
                      "passed to search macros."),
('highlight_titles', 1, "Highlight title matches by default in search "
                        "results generated by search macros."),
('highlight_pages', 1, "Add the highlight parameter to links in search "
                       "results generated by search macros by default."),
)),
}
def _add_options_to_defconfig(opts, addgroup=True):
for groupname in opts:
group_short, group_doc, group_opts = opts[groupname]
for name, default, doc in group_opts:
if addgroup:
name = groupname + '_' + name
if isinstance(default, DefaultExpression):
default = default.value
setattr(DefaultConfig, name, default)
_add_options_to_defconfig(options)
_add_options_to_defconfig(options_no_group_name, False)
# remove the gettext pseudo function
del _
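The grouped options above are flattened into prefixed class attributes by `_add_options_to_defconfig`; a minimal, self-contained sketch of that expansion (`DemoConfig`, `demo_options` and `add_options` are illustrative stand-ins, not MoinMoin names):

```python
# Sketch of the group-name prefixing done by _add_options_to_defconfig:
# option 'hierarchic' in group 'acl' becomes attribute 'acl_hierarchic'.
class DemoConfig(object):
    pass

demo_options = {
    'acl': ('Access control lists', 'ACL help text', (
        ('hierarchic', False, 'True to use hierarchical ACLs'),
        ('rights_valid', ['read', 'write'], 'Valid ACL tokens'),
    )),
}

def add_options(opts, target, addgroup=True):
    for groupname, (heading, helptext, group_opts) in opts.items():
        for name, default, doc in group_opts:
            if addgroup:
                # prefix the option name with the group name and an underscore
                name = groupname + '_' + name
            setattr(target, name, default)

add_options(demo_options, DemoConfig)
```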
|
RealTimeWeb/wikisite
|
MoinMoin/config/multiconfig.py
|
Python
|
apache-2.0
| 70,847
|
[
"VisIt"
] |
860291faebc42d7a0f019c59f2b3a9a2d25801d199dbcc756cb7ee0a444e6ed9
|
"""
DIRAC.WorkloadManagementSystem.DB package
"""
__RCSID__ = "$Id$"
|
fstagni/DIRAC
|
WorkloadManagementSystem/DB/__init__.py
|
Python
|
gpl-3.0
| 73
|
[
"DIRAC"
] |
9f407bf9e31c0b07096b860ef7627d01071c17fa00f7180b589567f5be886f89
|
"""
Support for creating videos.
"""
# Copyright (C) 2009-2011 University of Edinburgh
#
# This file is part of IMUSim.
#
# IMUSim is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# IMUSim is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with IMUSim. If not, see <http://www.gnu.org/licenses/>.
import numpy as np
from imusim.visualisation.rendering import AnimatedRenderer
try:
from mayavi import mlab
except ImportError:
from enthought.mayavi import mlab
import tempfile
import os
import subprocess
import shutil
import collections
def createVideo(filename, renderers, tMin, tMax, framerate=25,
resolution=None):
"""
Create a video of a 3D visualisation.
The video is created using the current MayaVi figure, which should be setup
beforehand to correctly position the camera and any other elements that
should be rendered.
@param filename: Name of the video file to create.
@param renderers: List of L{AnimatedRenderer} objects to use.
@param tMin: Time from which to begin rendering.
@param tMax: Time at which to stop rendering.
@param framerate: Framerate of the video in frames per second.
@param resolution: Resolution of the video (width, height). If not
specified, the resolution of the current figure is used.
"""
if not isinstance(renderers, collections.Iterable):
renderers = [renderers]
figure = mlab.gcf()
if resolution is not None:
originalResolution = figure.scene.get_size()
figure.scene.set_size(resolution)
figure.scene.off_screen_rendering = True
figure.scene.disable_render = True
frameTimes = np.arange(tMin, tMax, 1.0/framerate)
    # Create the temporary directory before entering the try block, so the
    # finally clause never references an unassigned name.
    imageDir = tempfile.mkdtemp()
    try:
for f,t in enumerate(frameTimes):
for r in renderers:
r.renderUpdate(t)
framefile = os.path.join(imageDir, '%06d.png'%(f))
mlab.savefig(framefile, size=resolution)
assert os.path.exists(framefile)
        # ffmpeg output options must come before the output filename.
        retval = subprocess.call(['ffmpeg', '-f', 'image2', '-r', str(framerate),
                                  '-i', os.path.join(imageDir, '%06d.png'),
                                  '-sameq', filename])
finally:
shutil.rmtree(imageDir)
figure.scene.disable_render = False
figure.scene.off_screen_rendering = False
if resolution is not None:
figure.scene.set_size(originalResolution)
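createVideo samples frame times with `np.arange(tMin, tMax, 1.0/framerate)` and names each frame `%06d.png`; a pure-Python sketch of that schedule (a stand-in for the NumPy call, no MayaVi required):

```python
# Stand-in for the frame schedule used in createVideo: times follow
# np.arange's half-open [t_min, t_max) interval, and each frame index f
# maps to the file name '%06d.png' % f.
def frame_schedule(t_min, t_max, framerate=25):
    dt = 1.0 / framerate
    frames = []
    f = 0
    while t_min + f * dt < t_max:
        frames.append((f, t_min + f * dt, '%06d.png' % f))
        f += 1
    return frames
```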
|
martinling/imusim
|
imusim/visualisation/video.py
|
Python
|
gpl-3.0
| 2,860
|
[
"Mayavi"
] |
59c5a595f9012fe06ef12cbde18c982a77e95c6a766ebd0d091112092a81f721
|
# This code is part of the Biopython distribution and governed by its
# license. Please see the LICENSE file that should have been included
# as part of this package.
#
"""Code to interact with the primersearch program from EMBOSS.
"""
__docformat__ = "restructuredtext en"
class InputRecord(object):
"""Represent the input file into the primersearch program.
This makes it easy to add primer information and write it out to the
simple primer file format.
"""
def __init__(self):
self.primer_info = []
def __str__(self):
output = ""
for name, primer1, primer2 in self.primer_info:
output += "%s %s %s\n" % (name, primer1, primer2)
return output
def add_primer_set(self, primer_name, first_primer_seq,
second_primer_seq):
"""Add primer information to the record.
"""
self.primer_info.append((primer_name, first_primer_seq,
second_primer_seq))
class OutputRecord(object):
"""Represent the information from a primersearch job.
    amplifiers is a dictionary where the keys are the primer names and
    the values are lists of Amplifier objects.
"""
def __init__(self):
self.amplifiers = {}
class Amplifier(object):
"""Represent a single amplification from a primer.
"""
def __init__(self):
self.hit_info = ""
self.length = 0
def read(handle):
    """Parse primersearch output from the given handle into an OutputRecord.
    """
record = OutputRecord()
for line in handle:
if not line.strip():
continue
elif line.startswith("Primer name"):
name = line.split()[-1]
record.amplifiers[name] = []
elif line.startswith("Amplimer"):
amplifier = Amplifier()
record.amplifiers[name].append(amplifier)
elif line.startswith("\tSequence: "):
amplifier.hit_info = line.replace("\tSequence: ", "")
elif line.startswith("\tAmplimer length: "):
length = line.split()[-2]
amplifier.length = int(length)
else:
amplifier.hit_info += line
for name in record.amplifiers:
for amplifier in record.amplifiers[name]:
amplifier.hit_info = amplifier.hit_info.rstrip()
return record
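A self-contained usage sketch of the input side, mirroring the `InputRecord` class above to show the simple primer file format it writes (the class is re-declared here only for illustration):

```python
# Mirrors Bio.Emboss.PrimerSearch.InputRecord: each primer set becomes one
# whitespace-separated line "name primer1 primer2".
class InputRecord(object):
    def __init__(self):
        self.primer_info = []

    def add_primer_set(self, primer_name, first_primer_seq, second_primer_seq):
        self.primer_info.append((primer_name, first_primer_seq,
                                 second_primer_seq))

    def __str__(self):
        return "".join("%s %s %s\n" % info for info in self.primer_info)

record = InputRecord()
record.add_primer_set("pair1", "ACGTACGT", "TTGCAAGG")
```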
|
updownlife/multipleK
|
dependencies/biopython-1.65/build/lib.linux-x86_64-2.7/Bio/Emboss/PrimerSearch.py
|
Python
|
gpl-2.0
| 2,366
|
[
"Biopython"
] |
2d71d20307539efa5e1a8d8f3c06dd20459b5b7e1dd50b0c9bf797b8c891c0b5
|
# coding: utf-8
"""
Vericred API
Vericred's API allows you to search for Health Plans that a specific doctor
accepts.
## Getting Started
Visit our [Developer Portal](https://developers.vericred.com) to
create an account.
Once you have created an account, you can create one Application for
Production and another for our Sandbox (select the appropriate Plan when
you create the Application).
## SDKs
Our API follows standard REST conventions, so you can use any HTTP client
to integrate with us. You will likely find it easier to use one of our
[autogenerated SDKs](https://github.com/vericred/?query=vericred-),
which we make available for several common programming languages.
## Authentication
To authenticate, pass the API Key you created in the Developer Portal as
a `Vericred-Api-Key` header.
`curl -H 'Vericred-Api-Key: YOUR_KEY' "https://api.vericred.com/providers?search_term=Foo&zip_code=11215"`
## Versioning
Vericred's API defaults to the latest version. However, if you need a specific
version, you can request it with an `Accept-Version` header.
The current version is `v3`. Previous versions are `v1` and `v2`.
`curl -H 'Vericred-Api-Key: YOUR_KEY' -H 'Accept-Version: v2' "https://api.vericred.com/providers?search_term=Foo&zip_code=11215"`
## Pagination
Endpoints that accept `page` and `per_page` parameters are paginated. They expose
four additional response headers that describe your position in the result set,
namely `Total`, `Per-Page`, `Link`, and `Page`, as described in [RFC-5988](https://tools.ietf.org/html/rfc5988).
For example, to display 5 results per page and view the second page of a
`GET` to `/networks`, your final request would be `GET /networks?....page=2&per_page=5`.
## Sideloading
When we return multiple levels of an object graph (e.g. `Provider`s and their `State`s),
we sideload the associated data. In this example, we would provide an Array of
`State`s and a `state_id` for each provider. This is done primarily to reduce the
payload size, since many of the `Provider`s will share a `State`.
```
{
providers: [{ id: 1, state_id: 1}, { id: 2, state_id: 1 }],
states: [{ id: 1, code: 'NY' }]
}
```
If you need the second level of the object graph, you can just match the
corresponding id.
## Selecting specific data
All endpoints allow you to specify which fields you would like to return.
This allows you to limit the response to contain only the data you need.
For example, let's take a request that returns the following JSON by default
```
{
provider: {
id: 1,
name: 'John',
phone: '1234567890',
field_we_dont_care_about: 'value_we_dont_care_about'
},
states: [{
id: 1,
name: 'New York',
code: 'NY',
field_we_dont_care_about: 'value_we_dont_care_about'
}]
}
```
To limit our results to only return the fields we care about, we specify the
`select` query string parameter for the corresponding fields in the JSON
document.
In this case, we want to select `name` and `phone` from the `provider` key,
so we would add the parameters `select=provider.name,provider.phone`.
We also want the `name` and `code` from the `states` key, so we would
add the parameters `select=states.name,states.code`. The id field of
each document is always returned whether or not it is requested.
Our final request would be `GET /providers/12345?select=provider.name,provider.phone,states.name,states.code`
The response would be
```
{
provider: {
id: 1,
name: 'John',
phone: '1234567890'
},
states: [{
id: 1,
name: 'New York',
code: 'NY'
}]
}
```
## Benefits summary format
Benefit cost-share strings are formatted to capture:
* Network tiers
* Compound or conditional cost-share
* Limits on the cost-share
* Benefit-specific maximum out-of-pocket costs
**Example #1**
As an example, we would represent [this Summary of Benefits & Coverage](https://s3.amazonaws.com/vericred-data/SBC/2017/33602TX0780032.pdf) as:
* **Hospital stay facility fees**:
- Network Provider: `$400 copay/admit plus 20% coinsurance`
- Out-of-Network Provider: `$1,500 copay/admit plus 50% coinsurance`
- Vericred's format for this benefit: `In-Network: $400 before deductible then 20% after deductible / Out-of-Network: $1,500 before deductible then 50% after deductible`
* **Rehabilitation services:**
- Network Provider: `20% coinsurance`
- Out-of-Network Provider: `50% coinsurance`
- Limitations & Exceptions: `35 visit maximum per benefit period combined with Chiropractic care.`
- Vericred's format for this benefit: `In-Network: 20% after deductible / Out-of-Network: 50% after deductible | limit: 35 visit(s) per Benefit Period`
**Example #2**
In [this other Summary of Benefits & Coverage](https://s3.amazonaws.com/vericred-data/SBC/2017/40733CA0110568.pdf), the **specialty_drugs** cost-share has a maximum out-of-pocket for in-network pharmacies.
* **Specialty drugs:**
- Network Provider: `40% coinsurance up to a $500 maximum for up to a 30 day supply`
- Out-of-Network Provider `Not covered`
- Vericred's format for this benefit: `In-Network: 40% after deductible, up to $500 per script / Out-of-Network: 100%`
**BNF**
Here's a description of the benefits summary string, represented as a context-free grammar:
```
root ::= coverage
coverage ::= (simple_coverage | tiered_coverage) (space pipe space coverage_modifier)?
tiered_coverage ::= tier (space slash space tier)*
tier ::= tier_name colon space (tier_coverage | not_applicable)
tier_coverage ::= simple_coverage (space (then | or | and) space simple_coverage)* tier_limitation?
simple_coverage ::= (pre_coverage_limitation space)? coverage_amount (space post_coverage_limitation)? (comma? space coverage_condition)?
coverage_modifier ::= limit_condition colon space (((simple_coverage | simple_limitation) (semicolon space see_carrier_documentation)?) | see_carrier_documentation | waived_if_admitted | shared_across_tiers)
waived_if_admitted ::= ("copay" space)? "waived if admitted"
simple_limitation ::= pre_coverage_limitation space "copay applies"
tier_name ::= "In-Network-Tier-2" | "Out-of-Network" | "In-Network"
limit_condition ::= "limit" | "condition"
tier_limitation ::= comma space "up to" space (currency | (integer space time_unit plural?)) (space post_coverage_limitation)?
coverage_amount ::= currency | unlimited | included | unknown | percentage | (digits space (treatment_unit | time_unit) plural?)
pre_coverage_limitation ::= first space digits space time_unit plural?
post_coverage_limitation ::= (((then space currency) | "per condition") space)? "per" space (treatment_unit | (integer space time_unit) | time_unit) plural?
coverage_condition ::= ("before deductible" | "after deductible" | "penalty" | allowance | "in-state" | "out-of-state") (space allowance)?
allowance ::= upto_allowance | after_allowance
upto_allowance ::= "up to" space (currency space)? "allowance"
after_allowance ::= "after" space (currency space)? "allowance"
see_carrier_documentation ::= "see carrier documentation for more information"
shared_across_tiers ::= "shared across all tiers"
unknown ::= "unknown"
unlimited ::= /[uU]nlimited/
included ::= /[iI]ncluded in [mM]edical/
time_unit ::= /[hH]our/ | (((/[cC]alendar/ | /[cC]ontract/) space)? /[yY]ear/) | /[mM]onth/ | /[dD]ay/ | /[wW]eek/ | /[vV]isit/ | /[lL]ifetime/ | ((((/[bB]enefit/ plural?) | /[eE]ligibility/) space)? /[pP]eriod/)
treatment_unit ::= /[pP]erson/ | /[gG]roup/ | /[cC]ondition/ | /[sS]cript/ | /[vV]isit/ | /[eE]xam/ | /[iI]tem/ | /[sS]tay/ | /[tT]reatment/ | /[aA]dmission/ | /[eE]pisode/
comma ::= ","
colon ::= ":"
semicolon ::= ";"
pipe ::= "|"
slash ::= "/"
plural ::= "(s)" | "s"
then ::= "then" | ("," space) | space
or ::= "or"
and ::= "and"
not_applicable ::= "Not Applicable" | "N/A" | "NA"
first ::= "first"
currency ::= "$" number
percentage ::= number "%"
number ::= float | integer
float ::= digits "." digits
integer ::= /[0-9]/+ (comma_int | under_int)*
comma_int ::= ("," /[0-9]/*3) !"_"
under_int ::= ("_" /[0-9]/*3) !","
digits ::= /[0-9]/+ ("_" /[0-9]/+)*
space ::= /[ \t]/+
```
OpenAPI spec version: 1.0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from __future__ import absolute_import
import os
import sys
import unittest
import vericred_client
from vericred_client.rest import ApiException
from vericred_client.models.provider_details import ProviderDetails
class TestProviderDetails(unittest.TestCase):
""" ProviderDetails unit test stubs """
def setUp(self):
pass
def tearDown(self):
pass
def testProviderDetails(self):
"""
Test ProviderDetails
"""
model = vericred_client.models.provider_details.ProviderDetails()
if __name__ == '__main__':
unittest.main()
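The `select` parameter described in the docstring is just a comma-joined list of `key.field` pairs; a hypothetical helper (not part of the vericred_client SDK) to build it:

```python
# Hypothetical helper: build the `select` query-string value from a mapping
# of top-level response keys to the fields wanted from each.
def build_select(fields_by_key):
    parts = []
    for key, fields in fields_by_key.items():
        for field in fields:
            parts.append("%s.%s" % (key, field))
    return ",".join(parts)
```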
|
vericred/vericred-python
|
test/test_provider_details.py
|
Python
|
apache-2.0
| 10,029
|
[
"VisIt"
] |
c2c5949b7b8c32d32f3b25a99b3751871a9b5ef89ffff57896fe2e15aad6c2cf
|
# -*- coding: utf-8 -*-
import logging
import feedparser
from kalliope.core.NeuronModule import NeuronModule, MissingParameterException
logging.basicConfig()
logger = logging.getLogger("kalliope")
class Rss_reader(NeuronModule):
def __init__(self, **kwargs):
super(Rss_reader, self).__init__(**kwargs)
self.feedUrl = kwargs.get('feed_url', None)
self.limit = kwargs.get('max_items', 30)
# check if parameters have been provided
if self._is_parameters_ok():
# prepare a returned dict
returned_dict = dict()
            logger.debug("Reading feed from: %s" % self.feedUrl)
            feed = feedparser.parse(self.feedUrl)
            logger.debug("Read title from feed: %s" % feed["channel"]["title"])
returned_dict["feed"] = feed["channel"]["title"]
returned_dict["items"] = feed["items"][:self.limit]
self.say(returned_dict)
def _is_parameters_ok(self):
"""
Check if received parameters are ok to perform operations in the neuron
:return: true if parameters are ok, raise an exception otherwise
.. raises:: MissingParameterException
"""
if self.feedUrl is None:
raise MissingParameterException("feed url parameter required")
return True
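The neuron's parameter check boils down to a small validation step; a decoupled sketch (the exception class here is a local stand-in for Kalliope's MissingParameterException):

```python
# Stand-alone sketch of the Rss_reader parameter validation above.
class MissingParameter(Exception):
    pass

def validate_params(feed_url, max_items=30):
    # mirror _is_parameters_ok: the feed URL is required, max_items defaults to 30
    if feed_url is None:
        raise MissingParameter("feed url parameter required")
    return {"feed_url": feed_url, "max_items": max_items}
```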
|
kalliope-project/kalliope_neuron_rss_reader
|
rss_reader.py
|
Python
|
mit
| 1,345
|
[
"NEURON"
] |
bd7db2acacb53af5bd3b5c8da7c6a6a95c117b0c4d374292e4635249eeaccf6b
|
# Copyright (c) 2012 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
from __future__ import with_statement
import collections
import errno
import filecmp
import os.path
import re
import tempfile
import sys
# A minimal memoizing decorator. It'll blow up if the args aren't immutable,
# among other "problems".
class memoize(object):
def __init__(self, func):
self.func = func
self.cache = {}
def __call__(self, *args):
try:
return self.cache[args]
except KeyError:
result = self.func(*args)
self.cache[args] = result
return result
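# A self-contained usage sketch of the memoize decorator above (fib is a toy
# example, not gyp code):

```python
# Re-declares the memoize decorator above and applies it to a toy Fibonacci,
# illustrating the cache keyed on the (immutable) argument tuple.
class memoize(object):
    def __init__(self, func):
        self.func = func
        self.cache = {}

    def __call__(self, *args):
        try:
            return self.cache[args]
        except KeyError:
            result = self.func(*args)
            self.cache[args] = result
            return result

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```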
class GypError(Exception):
  """An error that is to be presented to the user.
  The main entry point will catch and display this.
"""
pass
def ExceptionAppend(e, msg):
"""Append a message to the given exception's message."""
if not e.args:
e.args = (msg,)
elif len(e.args) == 1:
e.args = (str(e.args[0]) + ' ' + msg,)
else:
e.args = (str(e.args[0]) + ' ' + msg,) + e.args[1:]
def FindQualifiedTargets(target, qualified_list):
"""
Given a list of qualified targets, return the qualified targets for the
specified |target|.
"""
return [t for t in qualified_list if ParseQualifiedTarget(t)[1] == target]
def ParseQualifiedTarget(target):
# Splits a qualified target into a build file, target name and toolset.
# NOTE: rsplit is used to disambiguate the Windows drive letter separator.
target_split = target.rsplit(':', 1)
if len(target_split) == 2:
[build_file, target] = target_split
else:
build_file = None
target_split = target.rsplit('#', 1)
if len(target_split) == 2:
[target, toolset] = target_split
else:
toolset = None
return [build_file, target, toolset]
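# The splitting rules above can be sketched stand-alone (mirrors
# ParseQualifiedTarget for illustration):

```python
# Mirrors ParseQualifiedTarget: rsplit on ':' for the build file (keeping
# Windows drive letters intact), then rsplit on '#' for the toolset.
def parse_qualified_target(target):
    build_file = None
    target_split = target.rsplit(':', 1)
    if len(target_split) == 2:
        build_file, target = target_split
    toolset = None
    target_split = target.rsplit('#', 1)
    if len(target_split) == 2:
        target, toolset = target_split
    return [build_file, target, toolset]
```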
def ResolveTarget(build_file, target, toolset):
# This function resolves a target into a canonical form:
# - a fully defined build file, either absolute or relative to the current
# directory
# - a target name
# - a toolset
#
# build_file is the file relative to which 'target' is defined.
# target is the qualified target.
# toolset is the default toolset for that target.
[parsed_build_file, target, parsed_toolset] = ParseQualifiedTarget(target)
if parsed_build_file:
if build_file:
# If a relative path, parsed_build_file is relative to the directory
# containing build_file. If build_file is not in the current directory,
# parsed_build_file is not a usable path as-is. Resolve it by
# interpreting it as relative to build_file. If parsed_build_file is
# absolute, it is usable as a path regardless of the current directory,
# and os.path.join will return it as-is.
build_file = os.path.normpath(os.path.join(os.path.dirname(build_file),
parsed_build_file))
# Further (to handle cases like ../cwd), make it relative to cwd)
if not os.path.isabs(build_file):
build_file = RelativePath(build_file, '.')
else:
build_file = parsed_build_file
if parsed_toolset:
toolset = parsed_toolset
return [build_file, target, toolset]
def BuildFile(fully_qualified_target):
# Extracts the build file from the fully qualified target.
return ParseQualifiedTarget(fully_qualified_target)[0]
def GetEnvironFallback(var_list, default):
"""Look up a key in the environment, with fallback to secondary keys
and finally falling back to a default value."""
for var in var_list:
if var in os.environ:
return os.environ[var]
return default
def QualifiedTarget(build_file, target, toolset):
# "Qualified" means the file that a target was defined in and the target
# name, separated by a colon, suffixed by a # and the toolset name:
# /path/to/file.gyp:target_name#toolset
fully_qualified = build_file + ':' + target
if toolset:
fully_qualified = fully_qualified + '#' + toolset
return fully_qualified
@memoize
def RelativePath(path, relative_to, follow_path_symlink=True):
# Assuming both |path| and |relative_to| are relative to the current
# directory, returns a relative path that identifies path relative to
# relative_to.
  # If |follow_path_symlink| is true (default) and |path| is a symlink, then
# this method returns a path to the real file represented by |path|. If it is
# false, this method returns a path to the symlink. If |path| is not a
# symlink, this option has no effect.
# Convert to normalized (and therefore absolute paths).
if follow_path_symlink:
path = os.path.realpath(path)
else:
path = os.path.abspath(path)
relative_to = os.path.realpath(relative_to)
# On Windows, we can't create a relative path to a different drive, so just
# use the absolute path.
if sys.platform == 'win32':
if (os.path.splitdrive(path)[0].lower() !=
os.path.splitdrive(relative_to)[0].lower()):
return path
# Split the paths into components.
path_split = path.split(os.path.sep)
relative_to_split = relative_to.split(os.path.sep)
# Determine how much of the prefix the two paths share.
prefix_len = len(os.path.commonprefix([path_split, relative_to_split]))
# Put enough ".." components to back up out of relative_to to the common
# prefix, and then append the part of path_split after the common prefix.
relative_split = [os.path.pardir] * (len(relative_to_split) - prefix_len) + \
path_split[prefix_len:]
if len(relative_split) == 0:
# The paths were the same.
return ''
# Turn it back into a string and we're done.
return os.path.join(*relative_split)
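The common-prefix/".." logic above can be exercised with a simplified standalone copy (no memoization, no symlink resolution, no Windows drive check):

```python
import os

# Simplified sketch of RelativePath's core: back out of |relative_to| with
# ".." components up to the shared prefix, then descend into |path|.
def relative_path(path, relative_to):
    path_split = path.split(os.sep)
    relative_to_split = relative_to.split(os.sep)
    # os.path.commonprefix works element-wise on lists of components.
    prefix_len = len(os.path.commonprefix([path_split, relative_to_split]))
    relative_split = ([os.pardir] * (len(relative_to_split) - prefix_len)
                      + path_split[prefix_len:])
    if not relative_split:
        return ''  # the paths were identical
    return os.path.join(*relative_split)

print(relative_path('a/b/c', 'a/d'))  # ../b/c on POSIX
```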
@memoize
def InvertRelativePath(path, toplevel_dir=None):
"""Given a path like foo/bar that is relative to toplevel_dir, return
the inverse relative path back to the toplevel_dir.
E.g. os.path.normpath(os.path.join(path, InvertRelativePath(path)))
should always produce the empty string, unless the path contains symlinks.
"""
if not path:
return path
toplevel_dir = '.' if toplevel_dir is None else toplevel_dir
return RelativePath(toplevel_dir, os.path.join(toplevel_dir, path))
def FixIfRelativePath(path, relative_to):
# Like RelativePath but returns |path| unchanged if it is absolute.
if os.path.isabs(path):
return path
return RelativePath(path, relative_to)
def UnrelativePath(path, relative_to):
# Assuming that |relative_to| is relative to the current directory, and |path|
# is a path relative to the dirname of |relative_to|, returns a path that
# identifies |path| relative to the current directory.
rel_dir = os.path.dirname(relative_to)
return os.path.normpath(os.path.join(rel_dir, path))
# re objects used by EncodePOSIXShellArgument. See IEEE 1003.1 XCU.2.2 at
# http://www.opengroup.org/onlinepubs/009695399/utilities/xcu_chap02.html#tag_02_02
# and the documentation for various shells.
# _quote is a pattern that should match any argument that needs to be quoted
# with double-quotes by EncodePOSIXShellArgument. It matches the following
# characters appearing anywhere in an argument:
# \t, \n, space parameter separators
# # comments
# $ expansions (quoted to always expand within one argument)
# % called out by IEEE 1003.1 XCU.2.2
# & job control
# ' quoting
# (, ) subshell execution
# *, ?, [ pathname expansion
# ; command delimiter
# <, >, | redirection
# = assignment
# {, } brace expansion (bash)
# ~ tilde expansion
# It also matches the empty string, because "" (or '') is the only way to
# represent an empty string literal argument to a POSIX shell.
#
# This does not match the characters in _escape, because those need to be
# backslash-escaped regardless of whether they appear in a double-quoted
# string.
_quote = re.compile('[\t\n #$%&\'()*;<=>?[{|}~]|^$')
# _escape is a pattern that should match any character that needs to be
# escaped with a backslash, whether or not the argument matched the _quote
# pattern. _escape is used with re.sub to backslash anything in _escape's
# first match group, hence the (parentheses) in the regular expression.
#
# _escape matches the following characters appearing anywhere in an argument:
# " to prevent POSIX shells from interpreting this character for quoting
# \ to prevent POSIX shells from interpreting this character for escaping
# ` to prevent POSIX shells from interpreting this character for command
# substitution
# Missing from this list is $, because the desired behavior of
# EncodePOSIXShellArgument is to permit parameter (variable) expansion.
#
# Also missing from this list is !, which bash will interpret as the history
# expansion character when history is enabled. bash does not enable history
# by default in non-interactive shells, so this is not thought to be a problem.
# ! was omitted from this list because bash interprets "\!" as a literal string
# including the backslash character (avoiding history expansion but retaining
# the backslash), which would not be correct for argument encoding. Handling
# this case properly would also be problematic because bash allows the history
# character to be changed with the histchars shell variable. Fortunately,
# as history is not enabled in non-interactive shells and
# EncodePOSIXShellArgument is only expected to encode for non-interactive
# shells, there is no room for error here by ignoring !.
_escape = re.compile(r'(["\\`])')
def EncodePOSIXShellArgument(argument):
"""Encodes |argument| suitably for consumption by POSIX shells.
argument may be quoted and escaped as necessary to ensure that POSIX shells
treat the returned value as a literal representing the argument passed to
this function. Parameter (variable) expansions beginning with $ are allowed
to remain intact without escaping the $, to allow the argument to contain
references to variables to be expanded by the shell.
"""
if not isinstance(argument, str):
argument = str(argument)
if _quote.search(argument):
quote = '"'
else:
quote = ''
encoded = quote + re.sub(_escape, r'\\\1', argument) + quote
return encoded
def EncodePOSIXShellList(list):
"""Encodes |list| suitably for consumption by POSIX shells.
Returns EncodePOSIXShellArgument for each item in list, and joins them
together using the space character as an argument separator.
"""
encoded_arguments = []
for argument in list:
encoded_arguments.append(EncodePOSIXShellArgument(argument))
return ' '.join(encoded_arguments)
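A self-contained sketch of the quoting rules above (same `_quote`/`_escape` patterns, minus the list wrapper):

```python
import re

# Standalone restatement of the quoting rules above: arguments containing
# shell-special characters (or the empty string) get double quotes, and
# '"', '\' and '`' are always backslash-escaped.
_quote = re.compile('[\t\n #$%&\'()*;<=>?[{|}~]|^$')
_escape = re.compile(r'(["\\`])')

def encode_posix_shell_argument(argument):
    argument = str(argument)
    quote = '"' if _quote.search(argument) else ''
    return quote + _escape.sub(r'\\\1', argument) + quote

print(encode_posix_shell_argument('hello world'))  # "hello world"
print(encode_posix_shell_argument('plain'))        # plain
```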
def DeepDependencyTargets(target_dicts, roots):
"""Returns the recursive list of target dependencies."""
dependencies = set()
pending = set(roots)
while pending:
# Pluck out one.
r = pending.pop()
# Skip if visited already.
if r in dependencies:
continue
# Add it.
dependencies.add(r)
# Add its children.
spec = target_dicts[r]
pending.update(set(spec.get('dependencies', [])))
pending.update(set(spec.get('dependencies_original', [])))
return list(dependencies - set(roots))
def BuildFileTargets(target_list, build_file):
"""From a target_list, returns the subset from the specified build_file.
"""
return [p for p in target_list if BuildFile(p) == build_file]
def AllTargets(target_list, target_dicts, build_file):
"""Returns all targets (direct and dependencies) for the specified build_file.
"""
bftargets = BuildFileTargets(target_list, build_file)
deptargets = DeepDependencyTargets(target_dicts, bftargets)
return bftargets + deptargets
def WriteOnDiff(filename):
"""Write to a file only if the new contents differ.
Arguments:
filename: name of the file to potentially write to.
Returns:
A file like object which will write to temporary file and only overwrite
the target if it differs (on close).
"""
class Writer(object):
"""Wrapper around file which only covers the target if it differs."""
def __init__(self):
# Pick temporary file.
tmp_fd, self.tmp_path = tempfile.mkstemp(
suffix='.tmp',
prefix=os.path.split(filename)[1] + '.gyp.',
dir=os.path.split(filename)[0])
try:
self.tmp_file = os.fdopen(tmp_fd, 'wb')
except Exception:
# Don't leave turds behind.
os.unlink(self.tmp_path)
raise
def __getattr__(self, attrname):
# Delegate everything else to self.tmp_file
return getattr(self.tmp_file, attrname)
def close(self):
try:
# Close tmp file.
self.tmp_file.close()
# Determine if different.
same = False
try:
same = filecmp.cmp(self.tmp_path, filename, False)
        except OSError as e:
if e.errno != errno.ENOENT:
raise
if same:
# The new file is identical to the old one, just get rid of the new
# one.
os.unlink(self.tmp_path)
else:
# The new file is different from the old one, or there is no old one.
# Rename the new file to the permanent name.
#
# tempfile.mkstemp uses an overly restrictive mode, resulting in a
# file that can only be read by the owner, regardless of the umask.
# There's no reason to not respect the umask here, which means that
# an extra hoop is required to fetch it and reset the new file's mode.
#
# No way to get the umask without setting a new one? Set a safe one
# and then set it back to the old value.
          umask = os.umask(0o077)
          os.umask(umask)
          os.chmod(self.tmp_path, 0o666 & ~umask)
if sys.platform == 'win32' and os.path.exists(filename):
# NOTE: on windows (but not cygwin) rename will not replace an
# existing file, so it must be preceded with a remove. Sadly there
# is no way to make the switch atomic.
os.remove(filename)
os.rename(self.tmp_path, filename)
except Exception:
# Don't leave turds behind.
os.unlink(self.tmp_path)
raise
return Writer()
def EnsureDirExists(path):
"""Make sure the directory for |path| exists."""
try:
os.makedirs(os.path.dirname(path))
except OSError:
pass
def GetFlavor(params):
"""Returns |params.flavor| if it's set, the system's default flavor else."""
flavors = {
'cygwin': 'win',
'win32': 'win',
'darwin': 'mac',
}
if 'flavor' in params:
return params['flavor']
if sys.platform in flavors:
return flavors[sys.platform]
if sys.platform.startswith('sunos'):
return 'solaris'
if sys.platform.startswith('freebsd'):
return 'freebsd'
if sys.platform.startswith('openbsd'):
return 'openbsd'
if sys.platform.startswith('netbsd'):
return 'netbsd'
if sys.platform.startswith('aix'):
return 'aix'
return 'linux'
def CopyTool(flavor, out_path, generator_flags={}):
"""Finds (flock|mac|win)_tool.gyp in the gyp directory and copies it
to |out_path|."""
# aix and solaris just need flock emulation. mac and win use more complicated
# support scripts.
prefix = {
'aix': 'flock',
'solaris': 'flock',
'mac': 'mac',
'win': 'win'
}.get(flavor, None)
if not prefix:
return
# Slurp input file.
source_path = os.path.join(
os.path.dirname(os.path.abspath(__file__)), '%s_tool.py' % prefix)
with open(source_path) as source_file:
source = source_file.readlines()
# Set custom header flags.
header = '# Generated by gyp. Do not edit.\n'
mac_toolchain_dir = generator_flags.get('mac_toolchain_dir', None)
if flavor == 'mac' and mac_toolchain_dir:
header += "import os;\nos.environ['DEVELOPER_DIR']='%s'\n" \
% mac_toolchain_dir
# Add header and write it out.
tool_path = os.path.join(out_path, 'gyp-%s-tool' % prefix)
with open(tool_path, 'w') as tool_file:
tool_file.write(
''.join([source[0], header] + source[1:]))
# Make file executable.
  os.chmod(tool_path, 0o755)
# From Alex Martelli,
# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52560
# ASPN: Python Cookbook: Remove duplicates from a sequence
# First comment, dated 2001/10/13.
# (Also in the printed Python Cookbook.)
def uniquer(seq, idfun=None):
if idfun is None:
idfun = lambda x: x
seen = {}
result = []
for item in seq:
marker = idfun(item)
if marker in seen: continue
seen[marker] = 1
result.append(item)
return result
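A quick usage sketch of the recipe above (duplicated here so the snippet runs on its own): the first occurrence of each item wins, optionally keyed through `idfun`:

```python
# Self-contained copy of the order-preserving dedup recipe above.
def uniquer(seq, idfun=None):
    if idfun is None:
        idfun = lambda x: x
    seen = {}
    result = []
    for item in seq:
        marker = idfun(item)
        if marker in seen:
            continue
        seen[marker] = 1
        result.append(item)
    return result

print(uniquer([3, 1, 3, 2, 1]))             # [3, 1, 2]
print(uniquer(['a', 'A', 'b'], str.lower))  # ['a', 'b']
```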
# Based on http://code.activestate.com/recipes/576694/.
class OrderedSet(collections.MutableSet):
def __init__(self, iterable=None):
self.end = end = []
end += [None, end, end] # sentinel node for doubly linked list
self.map = {} # key --> [key, prev, next]
if iterable is not None:
self |= iterable
def __len__(self):
return len(self.map)
def __contains__(self, key):
return key in self.map
def add(self, key):
if key not in self.map:
end = self.end
curr = end[1]
curr[2] = end[1] = self.map[key] = [key, curr, end]
def discard(self, key):
if key in self.map:
key, prev_item, next_item = self.map.pop(key)
prev_item[2] = next_item
next_item[1] = prev_item
def __iter__(self):
end = self.end
curr = end[2]
while curr is not end:
yield curr[0]
curr = curr[2]
def __reversed__(self):
end = self.end
curr = end[1]
while curr is not end:
yield curr[0]
curr = curr[1]
# The second argument is an addition that causes a pylint warning.
def pop(self, last=True): # pylint: disable=W0221
if not self:
raise KeyError('set is empty')
key = self.end[1][0] if last else self.end[2][0]
self.discard(key)
return key
def __repr__(self):
if not self:
return '%s()' % (self.__class__.__name__,)
return '%s(%r)' % (self.__class__.__name__, list(self))
def __eq__(self, other):
if isinstance(other, OrderedSet):
return len(self) == len(other) and list(self) == list(other)
return set(self) == set(other)
# Extensions to the recipe.
def update(self, iterable):
for i in iterable:
if i not in self:
self.add(i)
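The insertion-order semantics of OrderedSet can be sketched more simply in modern Python, where dicts preserve insertion order (3.7+); this MiniOrderedSet is an illustration, not a drop-in replacement (no linked list, no pop/reversed):

```python
# Illustration of the insertion-order semantics provided by OrderedSet above,
# leaning on dict ordering instead of a doubly linked list.
class MiniOrderedSet:
    def __init__(self, iterable=()):
        self._items = dict.fromkeys(iterable)

    def add(self, key):
        self._items[key] = None

    def discard(self, key):
        self._items.pop(key, None)

    def __contains__(self, key):
        return key in self._items

    def __iter__(self):
        return iter(self._items)

    def __len__(self):
        return len(self._items)

s = MiniOrderedSet('abracadabra')
print(list(s))  # ['a', 'b', 'r', 'c', 'd']
```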
class CycleError(Exception):
"""An exception raised when an unexpected cycle is detected."""
def __init__(self, nodes):
self.nodes = nodes
def __str__(self):
return 'CycleError: cycle involving: ' + str(self.nodes)
def TopologicallySorted(graph, get_edges):
r"""Topologically sort based on a user provided edge definition.
Args:
graph: A list of node names.
get_edges: A function mapping from node name to a hashable collection
of node names which this node has outgoing edges to.
Returns:
    A list containing all of the nodes in graph in topological order.
It is assumed that calling get_edges once for each node and caching is
cheaper than repeatedly calling get_edges.
Raises:
CycleError in the event of a cycle.
Example:
graph = {'a': '$(b) $(c)', 'b': 'hi', 'c': '$(b)'}
def GetEdges(node):
      return re.findall(r'\$\(([^)]*)\)', graph[node])
print TopologicallySorted(graph.keys(), GetEdges)
==>
    ['a', 'c', 'b']
"""
get_edges = memoize(get_edges)
visited = set()
visiting = set()
ordered_nodes = []
def Visit(node):
if node in visiting:
raise CycleError(visiting)
if node in visited:
return
visited.add(node)
visiting.add(node)
for neighbor in get_edges(node):
Visit(neighbor)
visiting.remove(node)
ordered_nodes.insert(0, node)
for node in sorted(graph):
Visit(node)
return ordered_nodes
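Running the docstring example with a self-contained copy of the depth-first sort (the group regex is written as `r'\$\(([^)]*)\)'` and CycleError is replaced by a plain ValueError so the snippet stands alone):

```python
import re

# Self-contained run of the docstring example above.
def topologically_sorted(nodes, get_edges):
    visited, visiting, ordered = set(), set(), []

    def visit(node):
        if node in visiting:
            raise ValueError('cycle involving: %r' % sorted(visiting))
        if node in visited:
            return
        visited.add(node)
        visiting.add(node)
        for neighbor in get_edges(node):
            visit(neighbor)
        visiting.remove(node)
        ordered.insert(0, node)

    for node in sorted(nodes):
        visit(node)
    return ordered

graph = {'a': '$(b) $(c)', 'b': 'hi', 'c': '$(b)'}

def get_edges(node):
    return re.findall(r'\$\(([^)]*)\)', graph[node])

print(topologically_sorted(list(graph), get_edges))  # ['a', 'c', 'b']
```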
def CrossCompileRequested():
# TODO: figure out how to not build extra host objects in the
# non-cross-compile case when this is enabled, and enable unconditionally.
return (os.environ.get('GYP_CROSSCOMPILE') or
os.environ.get('AR_host') or
os.environ.get('CC_host') or
os.environ.get('CXX_host') or
os.environ.get('AR_target') or
os.environ.get('CC_target') or
os.environ.get('CXX_target'))
# ---- end of file: tkelman/utf8rewind :: tools/gyp/pylib/gyp/common.py (Python, MIT license) ----
from __future__ import absolute_import
from ..engine import Layer
from .. import backend as K
import numpy as np
class GaussianNoise(Layer):
"""Apply additive zero-centered Gaussian noise.
This is useful to mitigate overfitting
(you could see it as a form of random data augmentation).
    Gaussian Noise (GN) is a natural choice as corruption process
for real valued inputs.
As it is a regularization layer, it is only active at training time.
# Arguments
sigma: float, standard deviation of the noise distribution.
# Input shape
Arbitrary. Use the keyword argument `input_shape`
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.
# Output shape
Same shape as input.
"""
def __init__(self, sigma, **kwargs):
self.supports_masking = True
self.sigma = sigma
self.uses_learning_phase = True
super(GaussianNoise, self).__init__(**kwargs)
def call(self, x, mask=None):
noise_x = x + K.random_normal(shape=K.shape(x),
mean=0.,
std=self.sigma)
return K.in_train_phase(noise_x, x)
def get_config(self):
config = {'sigma': self.sigma}
base_config = super(GaussianNoise, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
class GaussianDropout(Layer):
"""Apply multiplicative 1-centered Gaussian noise.
As it is a regularization layer, it is only active at training time.
# Arguments
p: float, drop probability (as with `Dropout`).
The multiplicative noise will have
standard deviation `sqrt(p / (1 - p))`.
# Input shape
Arbitrary. Use the keyword argument `input_shape`
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.
# Output shape
Same shape as input.
# References
- [Dropout: A Simple Way to Prevent Neural Networks from Overfitting Srivastava, Hinton, et al. 2014](http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf)
"""
def __init__(self, p, **kwargs):
self.supports_masking = True
self.p = p
if 0 < p < 1:
self.uses_learning_phase = True
super(GaussianDropout, self).__init__(**kwargs)
def call(self, x, mask=None):
if 0 < self.p < 1:
noise_x = x * K.random_normal(shape=K.shape(x), mean=1.0,
std=np.sqrt(self.p / (1.0 - self.p)))
return K.in_train_phase(noise_x, x)
return x
def get_config(self):
config = {'p': self.p}
base_config = super(GaussianDropout, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
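The multiplicative-noise standard deviation `sqrt(p / (1 - p))` quoted in the GaussianDropout docstring can be checked numerically:

```python
import math

# Numeric check of the noise standard deviation sqrt(p / (1 - p)) used by
# GaussianDropout above.
def gaussian_dropout_std(p):
    return math.sqrt(p / (1.0 - p))

print(gaussian_dropout_std(0.5))  # 1.0
print(gaussian_dropout_std(0.2))  # 0.5
```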
# ---- end of file: surgebiswas/poker :: PokerBots_2017/Johnny/keras/layers/noise.py (Python, MIT license) ----
#!/usr/bin/env python
# encoding: utf-8
"""
The ipcluster application.
Authors:
* Brian Granger
* MinRK
"""
#-----------------------------------------------------------------------------
# Copyright (C) 2008-2011 The IPython Development Team
#
# Distributed under the terms of the BSD License. The full license is in
# the file COPYING, distributed as part of this software.
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
import errno
import logging
import os
import re
import signal
from subprocess import check_call, CalledProcessError, PIPE
import zmq
from zmq.eventloop import ioloop
from IPython.config.application import Application, boolean_flag, catch_config_error
from IPython.config.loader import Config
from IPython.core.application import BaseIPythonApplication
from IPython.core.profiledir import ProfileDir
from IPython.utils.daemonize import daemonize
from IPython.utils.importstring import import_item
from IPython.utils.sysinfo import num_cpus
from IPython.utils.traitlets import (Integer, Unicode, Bool, CFloat, Dict, List, Any,
DottedObjectName)
from IPython.parallel.apps.baseapp import (
BaseParallelApplication,
PIDFileError,
base_flags, base_aliases
)
#-----------------------------------------------------------------------------
# Module level variables
#-----------------------------------------------------------------------------
default_config_file_name = u'ipcluster_config.py'
_description = """Start an IPython cluster for parallel computing.
An IPython cluster consists of 1 controller and 1 or more engines.
This command automates the startup of these processes using a wide
range of startup methods (SSH, local processes, PBS, mpiexec,
Windows HPC Server 2008). To start a cluster with 4 engines on your
local host simply do 'ipcluster start --n=4'. For more complex usage
you will typically do 'ipython profile create mycluster --parallel', then edit
configuration files, followed by 'ipcluster start --profile=mycluster --n=4'.
"""
_main_examples = """
ipcluster start --n=4 # start a 4 node cluster on localhost
ipcluster start -h # show the help string for the start subcmd
ipcluster stop -h # show the help string for the stop subcmd
ipcluster engines -h # show the help string for the engines subcmd
"""
_start_examples = """
ipython profile create mycluster --parallel # create mycluster profile
ipcluster start --profile=mycluster --n=4 # start mycluster with 4 nodes
"""
_stop_examples = """
ipcluster stop --profile=mycluster # stop a running cluster by profile name
"""
_engines_examples = """
ipcluster engines --profile=mycluster --n=4 # start 4 engines only
"""
# Exit codes for ipcluster
# This will be the exit code if the ipcluster appears to be running because
# a .pid file exists
ALREADY_STARTED = 10
# This will be the exit code if ipcluster stop is run, but there is not .pid
# file to be found.
ALREADY_STOPPED = 11
# This will be the exit code if ipcluster engines is run, but there is not .pid
# file to be found.
NO_CLUSTER = 12
#-----------------------------------------------------------------------------
# Utilities
#-----------------------------------------------------------------------------
def find_launcher_class(clsname, kind):
"""Return a launcher for a given clsname and kind.
Parameters
==========
clsname : str
The full name of the launcher class, either with or without the
module path, or an abbreviation (MPI, SSH, SGE, PBS, LSF,
WindowsHPC).
kind : str
Either 'EngineSet' or 'Controller'.
"""
if '.' not in clsname:
# not a module, presume it's the raw name in apps.launcher
if kind and kind not in clsname:
# doesn't match necessary full class name, assume it's
# just 'PBS' or 'MPI' prefix:
clsname = clsname + kind + 'Launcher'
clsname = 'IPython.parallel.apps.launcher.'+clsname
klass = import_item(clsname)
return klass
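The abbreviation expansion performed by find_launcher_class can be sketched without importing IPython:

```python
# Standalone sketch of the name expansion done in find_launcher_class above:
# bare abbreviations like 'MPI' are expanded to the full launcher class path,
# while dotted paths are passed through unchanged.
def expand_launcher_name(clsname, kind):
    if '.' not in clsname:
        if kind and kind not in clsname:
            clsname = clsname + kind + 'Launcher'
        clsname = 'IPython.parallel.apps.launcher.' + clsname
    return clsname

print(expand_launcher_name('MPI', 'EngineSet'))
# IPython.parallel.apps.launcher.MPIEngineSetLauncher
```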
#-----------------------------------------------------------------------------
# Main application
#-----------------------------------------------------------------------------
start_help = """Start an IPython cluster for parallel computing
Start an ipython cluster by its profile name or cluster
directory. Cluster directories contain configuration, log and
security related files and are named using the convention
'profile_<name>' and should be created using the 'start'
subcommand of 'ipcluster'. If your cluster directory is in
the cwd or the ipython directory, you can simply refer to it
using its profile name, 'ipcluster start --n=4 --profile=<profile>',
otherwise use the '--profile-dir' option.
"""
stop_help = """Stop a running IPython cluster
Stop a running ipython cluster by its profile name or cluster
directory. Cluster directories are named using the convention
'profile_<name>'. If your cluster directory is in
the cwd or the ipython directory, you can simply refer to it
using its profile name, 'ipcluster stop --profile=<profile>', otherwise
use the '--profile-dir' option.
"""
engines_help = """Start engines connected to an existing IPython cluster
Start one or more engines to connect to an existing Cluster
by profile name or cluster directory.
Cluster directories contain configuration, log and
security related files and are named using the convention
'profile_<name>' and should be created using the 'start'
subcommand of 'ipcluster'. If your cluster directory is in
the cwd or the ipython directory, you can simply refer to it
using its profile name, 'ipcluster engines --n=4 --profile=<profile>',
otherwise use the '--profile-dir' option.
"""
stop_aliases = dict(
signal='IPClusterStop.signal',
)
stop_aliases.update(base_aliases)
class IPClusterStop(BaseParallelApplication):
name = u'ipcluster'
description = stop_help
examples = _stop_examples
config_file_name = Unicode(default_config_file_name)
signal = Integer(signal.SIGINT, config=True,
help="signal to use for stopping processes.")
aliases = Dict(stop_aliases)
def start(self):
"""Start the app for the stop subcommand."""
try:
pid = self.get_pid_from_file()
except PIDFileError:
self.log.critical(
'Could not read pid file, cluster is probably not running.'
)
            # Here I exit with an unusual exit status that other processes
            # can watch for to learn how I exited.
self.remove_pid_file()
self.exit(ALREADY_STOPPED)
if not self.check_pid(pid):
self.log.critical(
'Cluster [pid=%r] is not running.' % pid
)
self.remove_pid_file()
            # Here I exit with an unusual exit status that other processes
            # can watch for to learn how I exited.
self.exit(ALREADY_STOPPED)
elif os.name=='posix':
sig = self.signal
self.log.info(
"Stopping cluster [pid=%r] with [signal=%r]" % (pid, sig)
)
try:
os.kill(pid, sig)
except OSError:
self.log.error("Stopping cluster failed, assuming already dead.",
exc_info=True)
self.remove_pid_file()
elif os.name=='nt':
try:
# kill the whole tree
p = check_call(['taskkill', '-pid', str(pid), '-t', '-f'], stdout=PIPE,stderr=PIPE)
except (CalledProcessError, OSError):
self.log.error("Stopping cluster failed, assuming already dead.",
exc_info=True)
self.remove_pid_file()
engine_aliases = {}
engine_aliases.update(base_aliases)
engine_aliases.update(dict(
n='IPClusterEngines.n',
engines = 'IPClusterEngines.engine_launcher_class',
daemonize = 'IPClusterEngines.daemonize',
))
engine_flags = {}
engine_flags.update(base_flags)
engine_flags.update(dict(
daemonize=(
{'IPClusterEngines' : {'daemonize' : True}},
"""run the cluster into the background (not available on Windows)""",
)
))
class IPClusterEngines(BaseParallelApplication):
name = u'ipcluster'
description = engines_help
examples = _engines_examples
usage = None
config_file_name = Unicode(default_config_file_name)
default_log_level = logging.INFO
classes = List()
def _classes_default(self):
from IPython.parallel.apps import launcher
launchers = launcher.all_launchers
eslaunchers = [ l for l in launchers if 'EngineSet' in l.__name__]
return [ProfileDir]+eslaunchers
n = Integer(num_cpus(), config=True,
help="""The number of engines to start. The default is to use one for each
CPU on your machine""")
engine_launcher = Any(config=True, help="Deprecated, use engine_launcher_class")
def _engine_launcher_changed(self, name, old, new):
if isinstance(new, basestring):
self.log.warn("WARNING: %s.engine_launcher is deprecated as of 0.12,"
" use engine_launcher_class" % self.__class__.__name__)
self.engine_launcher_class = new
engine_launcher_class = DottedObjectName('LocalEngineSetLauncher',
config=True,
help="""The class for launching a set of Engines. Change this value
to use various batch systems to launch your engines, such as PBS,SGE,MPI,etc.
Each launcher class has its own set of configuration options, for making sure
it will work in your environment.
        You can also write your own launcher, and specify its absolute import path,
        as in 'mymodule.launcher.FTLEnginesLauncher'.
IPython's bundled examples include:
Local : start engines locally as subprocesses [default]
MPI : use mpiexec to launch engines in an MPI environment
PBS : use PBS (qsub) to submit engines to a batch queue
SGE : use SGE (qsub) to submit engines to a batch queue
LSF : use LSF (bsub) to submit engines to a batch queue
            SSH : use SSH to launch engines on remote hosts
Note that SSH does *not* move the connection files
around, so you will likely have to do this manually
unless the machines are on a shared file system.
WindowsHPC : use Windows HPC
If you are using one of IPython's builtin launchers, you can specify just the
prefix, e.g:
c.IPClusterEngines.engine_launcher_class = 'SSH'
or:
ipcluster start --engines=MPI
"""
)
daemonize = Bool(False, config=True,
help="""Daemonize the ipcluster program. This implies --log-to-file.
Not available on Windows.
""")
def _daemonize_changed(self, name, old, new):
if new:
self.log_to_file = True
    early_shutdown = Integer(30, config=True,
        help="The timeout (in seconds) after which engines are assumed to have started successfully.")
_stopping = False
aliases = Dict(engine_aliases)
flags = Dict(engine_flags)
@catch_config_error
def initialize(self, argv=None):
super(IPClusterEngines, self).initialize(argv)
self.init_signal()
self.init_launchers()
def init_launchers(self):
self.engine_launcher = self.build_launcher(self.engine_launcher_class, 'EngineSet')
def init_signal(self):
# Setup signals
signal.signal(signal.SIGINT, self.sigint_handler)
def build_launcher(self, clsname, kind=None):
"""import and instantiate a Launcher based on importstring"""
try:
klass = find_launcher_class(clsname, kind)
except (ImportError, KeyError):
self.log.fatal("Could not import launcher class: %r"%clsname)
self.exit(1)
launcher = klass(
work_dir=u'.', config=self.config, log=self.log,
profile_dir=self.profile_dir.location, cluster_id=self.cluster_id,
)
return launcher
def engines_started_ok(self):
self.log.info("Engines appear to have started successfully")
self.early_shutdown = 0
def start_engines(self):
# Some EngineSetLaunchers ignore `n` and use their own engine count, such as SSH:
n = getattr(self.engine_launcher, 'engine_count', self.n)
self.log.info("Starting %s Engines with %s", n, self.engine_launcher_class)
self.engine_launcher.start(self.n)
self.engine_launcher.on_stop(self.engines_stopped_early)
if self.early_shutdown:
ioloop.DelayedCallback(self.engines_started_ok, self.early_shutdown*1000, self.loop).start()
def engines_stopped_early(self, r):
if self.early_shutdown and not self._stopping:
self.log.error("""
Engines shutdown early, they probably failed to connect.
Check the engine log files for output.
If your controller and engines are not on the same machine, you probably
have to instruct the controller to listen on an interface other than localhost.
You can set this by adding "--ip='*'" to your ControllerLauncher.controller_args.
Be sure to read our security docs before instructing your controller to listen on
a public interface.
""")
self.stop_launchers()
return self.engines_stopped(r)
def engines_stopped(self, r):
return self.loop.stop()
def stop_engines(self):
if self.engine_launcher.running:
self.log.info("Stopping Engines...")
d = self.engine_launcher.stop()
return d
else:
return None
def stop_launchers(self, r=None):
if not self._stopping:
self._stopping = True
self.log.error("IPython cluster: stopping")
self.stop_engines()
# Wait a few seconds to let things shut down.
dc = ioloop.DelayedCallback(self.loop.stop, 3000, self.loop)
dc.start()
def sigint_handler(self, signum, frame):
self.log.debug("SIGINT received, stopping launchers...")
self.stop_launchers()
def start_logging(self):
# Remove old log files of the controller and engine
if self.clean_logs:
log_dir = self.profile_dir.log_dir
for f in os.listdir(log_dir):
if re.match(r'ip(engine|controller)z-\d+\.(log|err|out)',f):
os.remove(os.path.join(log_dir, f))
# This will remove old log files for ipcluster itself
# super(IPBaseParallelApplication, self).start_logging()
def start(self):
"""Start the app for the engines subcommand."""
self.log.info("IPython cluster: started")
# First see if the cluster is already running
# Now log and daemonize
self.log.info(
'Starting engines with [daemon=%r]' % self.daemonize
)
# TODO: Get daemonize working on Windows or as a Windows Server.
if self.daemonize:
if os.name=='posix':
daemonize()
dc = ioloop.DelayedCallback(self.start_engines, 0, self.loop)
dc.start()
# Now write the new pid file AFTER our new forked pid is active.
# self.write_pid_file()
try:
self.loop.start()
except KeyboardInterrupt:
pass
except zmq.ZMQError as e:
if e.errno == errno.EINTR:
pass
else:
raise
start_aliases = {}
start_aliases.update(engine_aliases)
start_aliases.update(dict(
delay='IPClusterStart.delay',
controller = 'IPClusterStart.controller_launcher_class',
))
start_aliases['clean-logs'] = 'IPClusterStart.clean_logs'
class IPClusterStart(IPClusterEngines):
name = u'ipcluster'
description = start_help
examples = _start_examples
default_log_level = logging.INFO
auto_create = Bool(True, config=True,
help="whether to create the profile_dir if it doesn't exist")
classes = List()
def _classes_default(self,):
from IPython.parallel.apps import launcher
return [ProfileDir] + [IPClusterEngines] + launcher.all_launchers
clean_logs = Bool(True, config=True,
help="whether to cleanup old logs before starting")
delay = CFloat(1., config=True,
help="delay (in s) between starting the controller and the engines")
controller_launcher = Any(config=True, help="Deprecated, use controller_launcher_class")
def _controller_launcher_changed(self, name, old, new):
if isinstance(new, basestring):
# old 0.11-style config
self.log.warn("WARNING: %s.controller_launcher is deprecated as of 0.12,"
" use controller_launcher_class" % self.__class__.__name__)
self.controller_launcher_class = new
controller_launcher_class = DottedObjectName('LocalControllerLauncher',
config=True,
help="""The class for launching a Controller. Change this value if you want
your controller to also be launched by a batch system, such as PBS,SGE,MPI,etc.
Each launcher class has its own set of configuration options, for making sure
it will work in your environment.
Note that using a batch launcher for the controller *does not* put it
in the same batch job as the engines, so they will still start separately.
IPython's bundled examples include:
            Local : run the controller locally as a subprocess [default]
MPI : use mpiexec to launch the controller in an MPI universe
PBS : use PBS (qsub) to submit the controller to a batch queue
SGE : use SGE (qsub) to submit the controller to a batch queue
LSF : use LSF (bsub) to submit the controller to a batch queue
SSH : use SSH to start the controller
WindowsHPC : use Windows HPC
If you are using one of IPython's builtin launchers, you can specify just the
prefix, e.g:
c.IPClusterStart.controller_launcher_class = 'SSH'
or:
ipcluster start --controller=MPI
"""
)
reset = Bool(False, config=True,
help="Whether to reset config files as part of '--create'."
)
# flags = Dict(flags)
aliases = Dict(start_aliases)
def init_launchers(self):
self.controller_launcher = self.build_launcher(self.controller_launcher_class, 'Controller')
self.engine_launcher = self.build_launcher(self.engine_launcher_class, 'EngineSet')
def engines_stopped(self, r):
"""prevent parent.engines_stopped from stopping everything on engine shutdown"""
pass
def start_controller(self):
self.log.info("Starting Controller with %s", self.controller_launcher_class)
self.controller_launcher.on_stop(self.stop_launchers)
self.controller_launcher.start()
def stop_controller(self):
# self.log.info("In stop_controller")
if self.controller_launcher and self.controller_launcher.running:
return self.controller_launcher.stop()
def stop_launchers(self, r=None):
if not self._stopping:
self.stop_controller()
super(IPClusterStart, self).stop_launchers()
def start(self):
"""Start the app for the start subcommand."""
# First see if the cluster is already running
try:
pid = self.get_pid_from_file()
except PIDFileError:
pass
else:
if self.check_pid(pid):
self.log.critical(
'Cluster is already running with [pid=%s]. '
'use "ipcluster stop" to stop the cluster.' % pid
)
                # Here I exit with an unusual exit status that other processes
                # can watch for to learn how I exited.
self.exit(ALREADY_STARTED)
else:
self.remove_pid_file()
# Now log and daemonize
self.log.info(
'Starting ipcluster with [daemon=%r]' % self.daemonize
)
# TODO: Get daemonize working on Windows or as a Windows Server.
if self.daemonize:
if os.name=='posix':
daemonize()
dc = ioloop.DelayedCallback(self.start_controller, 0, self.loop)
dc.start()
dc = ioloop.DelayedCallback(self.start_engines, 1000*self.delay, self.loop)
dc.start()
# Now write the new pid file AFTER our new forked pid is active.
self.write_pid_file()
try:
self.loop.start()
except KeyboardInterrupt:
pass
except zmq.ZMQError as e:
if e.errno == errno.EINTR:
pass
else:
raise
finally:
self.remove_pid_file()
base='IPython.parallel.apps.ipclusterapp.IPCluster'
class IPClusterApp(Application):
name = u'ipcluster'
description = _description
examples = _main_examples
subcommands = {
'start' : (base+'Start', start_help),
'stop' : (base+'Stop', stop_help),
'engines' : (base+'Engines', engines_help),
}
# no aliases or flags for parent App
aliases = Dict()
flags = Dict()
def start(self):
if self.subapp is None:
print "No subcommand specified. Must specify one of: %s"%(self.subcommands.keys())
print
self.print_description()
self.print_subcommands()
self.exit(1)
else:
return self.subapp.start()
def launch_new_instance():
"""Create and run the IPython cluster."""
app = IPClusterApp.instance()
app.initialize()
app.start()
if __name__ == '__main__':
launch_new_instance()
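The "already running?" guard in `IPClusterStart.start` can be sketched in isolation. The helper names below mirror the app's methods but are standalone assumptions for illustration, not the real IPython API, and the sentinel exit code is made up:

```python
# Minimal sketch of the PID-file guard used before starting a cluster.
import os

ALREADY_STARTED = 10  # assumed sentinel exit status (hypothetical value)

def check_pid(pid):
    """Return True if a process with this PID appears to be alive (POSIX)."""
    try:
        os.kill(pid, 0)  # signal 0 probes existence without sending a signal
        return True
    except OSError:
        return False

def should_start(pid_from_file):
    """Return the sentinel when a live cluster owns the PID file, else None."""
    if pid_from_file is not None and check_pid(pid_from_file):
        return ALREADY_STARTED  # caller exits with this status
    return None  # missing or stale PID file: safe to start
```

The real app additionally removes a stale PID file before daemonizing and rewriting it.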
| cloud9ers/gurumate | environment/lib/python2.7/site-packages/IPython/parallel/apps/ipclusterapp.py | Python | lgpl-3.0 | 22,236 | ["Brian"] | 0ce9d543ff09e8114b3c25367fee033c1591bb4a57750e36ef3524d493c0f5cb |
from __future__ import division
import logging
import numpy as np
from nengo import AdaptiveLIFRate, LIF
from nengo.params import Parameter, NumberParam
from nengo.utils.compat import range
class AdaptiveLIF(AdaptiveLIFRate, LIF):
"""Adaptive spiking version of the LIF neuron model."""
probeable = ['spikes', 'adaptation', 'voltage', 'refractory_time']
def __init__(self, **kwargs):
super(AdaptiveLIF, self).__init__(**kwargs)
self.clip = -np.inf
def _J_tspk(self, tspk):
"""Computes the input J that produces a steady state spike interval"""
t0 = 1. / (1. - np.exp(-tspk / self.tau_rc))
if self.tau_rc != self.tau_n:
t1 = (self.inc_n * np.exp(-self.tau_ref / self.tau_n) *
(np.exp(-tspk / self.tau_n) - np.exp(-tspk / self.tau_rc)))
t2 = ((1. - np.exp(-(self.tau_ref + tspk) / self.tau_n)) *
(self.tau_rc-self.tau_n))
elif self.tau_rc == self.tau_n: # Python doesn't know LHopital's Rule
t1 = (-self.inc_n * tspk *
np.exp(-(self.tau_ref + tspk) / self.tau_rc))
t2 = (self.tau_rc**2 *
(1 - np.exp(-(self.tau_ref + tspk) / self.tau_rc)))
J = t0 * (1. - t1 / t2)
return J
def _num_rates(self, J, min_f=.001, max_f=None, max_iter=100, tol=1e-3):
"""Numerically determine the spiking adaptive LIF neuron tuning curve
Uses the bisection method (binary search in CS parlance) to find the
steady state firing rate for a given input
Parameters
----------
J : array-like of floats
input
min_f : float (optional)
minimum firing rate to consider nonzero
max_f : float (optional)
            maximum firing rate used to seed the bisection method. Be sure
            that the true maximum firing rate lies within this bound,
            otherwise the binary search will fail
        max_iter : int (optional)
            maximum number of iterations of the binary search
tol : float (optional)
tolerance of binary search algorithm in J. The algorithm terminates
when maximum difference between estimated J and input J is within
tol
"""
f_ret = np.zeros_like(J)
f_high = max_f
if max_f is None:
f_high = 1. / self.tau_ref
f_low = min_f
tspk_high = 1. / f_low - self.tau_ref
tspk_low = 1. / f_high - self.tau_ref
# check for J that produces firing rates below the minimum firing rate
J_min = self._J_tspk(tspk_high)
idx = J > J_min # selects the range of J that produces spikes
if not idx.any():
return f_ret
tspk_high = np.zeros(J[idx].shape) + tspk_high
tspk_low = np.zeros(J[idx].shape) + tspk_low
        for i in range(max_iter):
            assert (tspk_low <= tspk_high).all(), 'binary search failed'
tspk = (tspk_high + tspk_low) / 2.
Jhat = self._J_tspk(tspk)
max_diff = np.max(np.abs(J[idx]-Jhat))
if max_diff < tol:
break
high_idx = Jhat > J[idx] # where our estimate of J is too high
low_idx = Jhat <= J[idx] # where our estimate of J is too low
tspk_high[low_idx] = tspk[low_idx]
tspk_low[high_idx] = tspk[high_idx]
f_ret[idx] = 1. / (self.tau_ref + tspk)
return f_ret
def rates(self, x, gain, bias):
J = gain * x + bias
out = self._num_rates(J)
return out
def gain_bias(self, max_rates, intercepts):
"""Compute the alpha and bias needed to satisfy max_rates, intercepts.
Returns gain (alpha) and offset (j_bias) values of neurons.
Parameters
----------
max_rates : list of floats
Maximum firing rates of neurons.
intercepts : list of floats
X-intercepts of neurons.
"""
inv_tau_ref = 1. / self.tau_ref if self.tau_ref > 0 else np.inf
if (max_rates > inv_tau_ref).any():
raise ValueError(
"Max rates must be below the inverse refractory period (%0.3f)"
% (inv_tau_ref))
tspk = 1. / max_rates - self.tau_ref
x = self._J_tspk(tspk)
gain = (1 - x) / (intercepts - 1.0)
bias = 1 - gain * intercepts
return gain, bias
def step_math(self, dt, J, spiked, voltage, ref, adaptation):
"""Compute rates for input current (incl. bias)"""
n = adaptation
k = np.expm1(-dt / self.tau_n)
inc = -k
dec = k+1
n *= dec # decay adaptation
LIF.step_math(self, dt, J - n, spiked, voltage, ref)
n += inc * (self.inc_n * spiked) # increment adaptation
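The bisection scheme that `_num_rates` vectorizes can be shown on a scalar toy problem. `bisect_rate` below is a hypothetical standalone helper, assuming only that the input current decreases monotonically with the inter-spike interval, as it does for `_J_tspk`:

```python
def bisect_rate(J_of_tspk, J_target, tspk_low, tspk_high, tol=1e-8, max_iter=200):
    """Find the inter-spike interval t_spk at which J_of_tspk(t_spk) == J_target.

    Assumes J_of_tspk is monotonically decreasing in t_spk (longer intervals
    correspond to weaker input currents).
    """
    for _ in range(max_iter):
        tspk = 0.5 * (tspk_low + tspk_high)
        Jhat = J_of_tspk(tspk)
        if abs(Jhat - J_target) < tol:
            break
        if Jhat > J_target:
            tspk_low = tspk   # estimate too high -> interval guessed too short
        else:
            tspk_high = tspk  # estimate too low -> interval guessed too long
    return tspk
```

For example, with J(t) = 1/t and a target of 2, the search converges to t_spk = 0.5; the steady-state rate then follows as 1 / (tau_ref + t_spk), exactly as in `_num_rates`.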
| fragapanagos/adaptive_lif | neurons.py | Python | mit | 4,823 | ["NEURON"] | 6c2151815904ea6a6847c4ecb93151df591aea2db0d74a975da367a2f3bd06f6 |
#!/usr/bin/env python
# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import argparse
import glob
import json
import mmap
import os
import re
import sys
parser = argparse.ArgumentParser()
parser.add_argument("filenames", help="list of files to check, all files if unspecified", nargs='*')
args = parser.parse_args()
rootdir = os.path.dirname(__file__) + "/../../"
rootdir = os.path.abspath(rootdir)
def get_refs():
refs = {}
for path in glob.glob(os.path.join(rootdir, "hack/boilerplate/boilerplate.*.txt")):
extension = os.path.basename(path).split(".")[1]
ref_file = open(path, 'r')
ref = ref_file.read().splitlines()
ref_file.close()
refs[extension] = ref
return refs
def file_passes(filename, refs, regexs):
try:
f = open(filename, 'r')
    except IOError:
return False
data = f.read()
f.close()
extension = file_extension(filename)
ref = refs[extension]
# remove build tags from the top of Go files
if extension == "go":
p = regexs["go_build_constraints"]
(data, found) = p.subn("", data, 1)
# remove shebang from the top of shell files
if extension == "sh":
p = regexs["shebang"]
(data, found) = p.subn("", data, 1)
data = data.splitlines()
# if our test file is smaller than the reference it surely fails!
if len(ref) > len(data):
return False
# trim our file to the same number of lines as the reference file
data = data[:len(ref)]
p = regexs["year"]
for d in data:
if p.search(d):
return False
# Replace all occurrences of the regex "2016|2015|2014" with "YEAR"
p = regexs["date"]
for i, d in enumerate(data):
(data[i], found) = p.subn('YEAR', d)
if found != 0:
break
# if we don't match the reference at this point, fail
if ref != data:
return False
return True
def file_extension(filename):
return os.path.splitext(filename)[1].split(".")[-1].lower()
skipped_dirs = ['Godeps', 'third_party', '_output', '.git', 'vendor']
def normalize_files(files):
newfiles = []
for pathname in files:
if any(x in pathname for x in skipped_dirs):
continue
newfiles.append(pathname)
for i, pathname in enumerate(newfiles):
if not os.path.isabs(pathname):
newfiles[i] = os.path.join(rootdir, pathname)
return newfiles
def get_files(extensions):
files = []
if len(args.filenames) > 0:
files = args.filenames
else:
for root, dirs, walkfiles in os.walk(rootdir):
# don't visit certain dirs. This is just a performance improvement
# as we would prune these later in normalize_files(). But doing it
# cuts down the amount of filesystem walking we do and cuts down
# the size of the file list
for d in skipped_dirs:
if d in dirs:
dirs.remove(d)
for name in walkfiles:
pathname = os.path.join(root, name)
files.append(pathname)
files = normalize_files(files)
outfiles = []
for pathname in files:
extension = file_extension(pathname)
if extension in extensions:
outfiles.append(pathname)
return outfiles
def get_regexs():
regexs = {}
# Search for "YEAR" which exists in the boilerplate, but shouldn't in the real thing
regexs["year"] = re.compile( 'YEAR' )
# dates can be 2014 or 2015, company holder names can be anything
regexs["date"] = re.compile( '(2014|2015|2016)' )
# strip // +build \n\n build constraints
regexs["go_build_constraints"] = re.compile(r"^(// \+build.*\n)+\n", re.MULTILINE)
# strip #!.* from shell scripts
regexs["shebang"] = re.compile(r"^(#!.*\n)\n*", re.MULTILINE)
return regexs
def main():
regexs = get_regexs()
refs = get_refs()
filenames = get_files(refs.keys())
for filename in filenames:
if not file_passes(filename, refs, regexs):
print(filename, file=sys.stdout)
if __name__ == "__main__":
sys.exit(main())
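The heart of `file_passes` is: normalize a concrete year to the literal `YEAR`, then require the file to begin with the reference header line-for-line. A minimal sketch (it collapses the per-line `subn` loop and the separate stray-`YEAR` check into a single pass, so it approximates rather than reproduces the real logic):

```python
import re

# Years that may appear in a real copyright line, normalized to "YEAR".
date_re = re.compile(r'(2014|2015|2016)')

def header_matches(file_lines, ref_lines):
    """Return True if file_lines starts with ref_lines after year normalization."""
    if len(file_lines) < len(ref_lines):
        return False  # shorter than the reference: cannot contain the header
    head = [date_re.sub('YEAR', line, count=1)
            for line in file_lines[:len(ref_lines)]]
    return head == ref_lines
```

A file opening with `# Copyright 2015 The Kubernetes Authors.` then matches a reference line of `# Copyright YEAR The Kubernetes Authors.`.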
| mikedanese/contrib | hack/boilerplate/boilerplate.py | Python | apache-2.0 | 4,733 | ["VisIt"] | 66d9ffc40ba5410baf3bd3f046fbf1624c0d038f8e736cd9ffb13f9c1ee8cfa6 |
"""Plot various quantities related to electronic stucture integration.
"""
import numpy as np
from numpy.linalg import norm, inv, det
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.patches import FancyArrowPatch
from mpl_toolkits.mplot3d import proj3d
import matplotlib.pyplot as plt
from itertools import product, chain
from scipy.spatial import ConvexHull
from copy import deepcopy
import time, pickle, os
# import plotly.plotly as py
# import plotly.graph_objs as go
# from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
# init_notebook_mode(connected=True)
from BZI.symmetry import (bcc_sympts, fcc_sympts, sc_sympts, make_ptvecs,
make_rptvecs, sym_path, number_of_point_operators,
find_orbits)
from BZI.sampling import make_cell_points
from BZI.integration import rectangular_method, rec_dos_nos
from BZI.tetrahedron import (grid_and_tetrahedra, calc_fermi_level,
calc_total_energy, get_extended_tetrahedra,
get_corrected_total_energy, density_of_states,
number_of_states, tet_dos_nos, find_irreducible_tetrahedra)
from BZI.make_IBZ import find_bz, orderAngle, planar3dTo2d
from BZI.utilities import remove_points, find_point_indices, check_contained
def ScatterPlotMultiple(func, states, ndivisions, cutoff=None):
"""Plot the energy states of a multivalued toy pseudo-potential using
matplotlib's function scatter.
Args:
        func (function): a multivalued pseudo-potential function of a k-point.
        states (list): the indices of the states to plot.
ndivisions (int): the number of divisions along each coordinate direction.
        cutoff (float): the value at which the states get cut off.
Returns:
None
Example:
        >>> from BZI.plots import ScatterPlotMultiple
        >>> states = [0, 1]
        >>> ndivisions = 11
        >>> ScatterPlotMultiple(func, states, ndivisions)
"""
kxs = np.linspace(-1./2, 1./2, ndivisions)
kys = np.linspace(-1./2, 1./2, ndivisions)
kzs = [0.]
kxlist = [kxs[np.mod(i,len(kxs))] for i in range(len(kxs)*len(kys))]
kylist = [kxs[int(i/len(kxs))] for i in range(len(kxs)*len(kys))]
kpts = [[kx, ky, kz] for kx in kxs for ky in kys for kz in kzs]
all_estates = [func(kpt) for kpt in kpts]
prows = int(np.sqrt(len(states)))
pcols = int(np.ceil(len(states)/prows))
p = 0
    if cutoff is None:
for n in states:
p += 1
estates = np.array([], dtype=complex)
for es in all_estates:
estates = np.append(np.real(estates), es[n])
ax = plt.subplot(prows,pcols,p,projection="3d")
ax.scatter(kxlist, kylist, estates,s=.5);
else:
for n in states:
p += 1
estates = np.array([], dtype=complex)
for es in all_estates:
if es[n] > cutoff:
estates = np.append(np.real(estates), 0.)
else:
estates = np.append(np.real(estates), es[n])
ax = plt.subplot(prows,pcols,p,projection="3d")
ax.scatter(kxlist, kylist, estates,s=.5);
plt.show()
def scatter_plot_pp(EPM, states, nbands, ndivisions, grid_vectors,
plane_value, offset, cutoff=None):
"""Plot the energy states of a multivalued toy pseudo-potential using
matplotlib's function scatter.
Args:
EPM (:py:obj:`BZI.pseudopots.EmpiricalPseudopotential`): a pseudopotential object.
states (list): a list of states to plot.
nbands (int): the number of bands to include. No value in states can be
greater than nbands.
ndivisions (int): the number of divisions along each coordinate direction.
grid_vectors(list): two integers that indicate which reciprocal lattice
vectors to take as the plane over which to plot the band structure.
plane_value (float): the value along the axis perpendicular to the plane
            at which to plot. It is given in reciprocal lattice coordinates.
offset (numpy.ndarray): the offset of the grid in lattice coordinates.
        cutoff (float): the value at which the states get cut off.
Returns:
None
"""
g1 = EPM.lattice.reciprocal_vectors[:, grid_vectors[0]]/ndivisions
g2 = EPM.lattice.reciprocal_vectors[:, grid_vectors[1]]/ndivisions
orth_vec = np.setdiff1d([0, 1, 2], grid_vectors)[0]
orthogonal_vector = EPM.lattice.reciprocal_vectors[orth_vec]
# Distance in the direction orthogonal to the plane.
d = plane_value*orthogonal_vector
grid = []
kxlist = []
kylist = []
for i,j in product(range(ndivisions+1), repeat=2):
grid.append(i*g1 + j*g2 + d + offset)
kxlist.append(grid[-1][grid_vectors[0]])
kylist.append(grid[-1][grid_vectors[1]])
all_estates = [EPM.eval(kpt, nbands) for kpt in grid]
prows = int(np.sqrt(len(states)))
pcols = int(np.ceil(len(states)/prows))
p = 0
    if cutoff is None:
for n in states:
p += 1
estates = np.array([], dtype=complex)
for es in all_estates:
estates = np.append(np.real(estates), es[n])
ax = plt.subplot(prows,pcols,p,projection="3d")
ax.scatter(kxlist, kylist, estates,s=.5);
else:
for n in states:
p += 1
estates = np.array([], dtype=complex)
for es in all_estates:
if es[n] > cutoff:
estates = np.append(np.real(estates), 0.)
else:
estates = np.append(np.real(estates), es[n])
ax = plt.subplot(prows,pcols,p,projection="3d")
ax.scatter(kxlist, kylist, estates,s=.5);
plt.show()
return grid
def ScatterPlotSingle(func, ndivisions, cutoff=None):
"""Plot the energy states of a single valued toy pseudo-potential using
matplotlib's function scatter.
Args:
func (function): one of the single valued functions from pseudopots.
ndivisions (int): the number of divisions along each coordinate direction.
        cutoff (float): the value at which the function gets cut off.
Returns:
None
Example:
>>> from BZI.pseudopots import W1
>>> from BZI.plots import ScatterPlotSingle
        >>> ndivisions = 11
        >>> ScatterPlotSingle(W1, ndivisions)
"""
kxs = np.linspace(0, 1, ndivisions)
kys = np.linspace(0, 1, ndivisions)
kzs = [0.]
kxlist = [kxs[np.mod(i,len(kxs))] for i in range(len(kxs)*len(kys))]
kylist = [kxs[int(i/len(kxs))] for i in range(len(kxs)*len(kys))]
kpts = [[kx, ky, kz] for kx in kxs for ky in kys for kz in kzs]
estates = [func(kpt)[0] for kpt in kpts]
    if cutoff is None:
ax = plt.subplot(1,1,1,projection="3d")
ax.scatter(kxlist, kylist, estates,s=.5);
else:
estates, kxlist, kylist = zip(*filter(lambda x: x[0]
<= cutoff, zip(estates,
kxlist, kylist)))
ax = plt.subplot(1,1,1,projection="3d")
ax.scatter(kxlist, kylist, estates,s=.5);
plt.show()
def plot_just_points(mesh_points, ax=None):
"""Plot just the points in a mesh.
Args:
mesh_points (numpy.ndarray or list): a list of points
"""
ngpts = len(mesh_points)
kxlist = [mesh_points[i][0] for i in range(ngpts)]
kylist = [mesh_points[i][1] for i in range(ngpts)]
kzlist = [mesh_points[i][2] for i in range(ngpts)]
if not ax:
ax = plt.subplot(1,1,1,projection="3d")
ax.scatter(kxlist, kylist, kzlist, c="black", s=10)
def plot_mesh(mesh_points, cell_vecs, offset = np.asarray([0.,0.,0.]),
ax=None, indices=None, show=True, save=False, file_name=None,
color="red"):
"""Create a 3D scatter plot of a set of mesh points inside a cell.
Args:
mesh_points (list or numpy.ndarray): a list of mesh points in Cartesian
coordinates.
        cell_vecs (list or numpy.ndarray): a list of vectors that define a cell.
offset (list or numpy.ndarray): the offset of the unit cell, which is
also plotted, in Cartesian coordinates.
ax (matplotlib.axes): an axes object
indices (list or numpy.ndarray): the indices of the points. If
provided, they will be plotted with the mesh points.
show (bool): if true, the plot will be shown.
save (bool): if true, the plot will be saved.
file_name (str): the file name under which the plot is saved. If not
provided the plot is saved as "mesh.pdf".
Returns:
None
"""
ngpts = len(mesh_points)
kxlist = [mesh_points[i][0] for i in range(ngpts)]
kylist = [mesh_points[i][1] for i in range(ngpts)]
kzlist = [mesh_points[i][2] for i in range(ngpts)]
if ax is None:
ax = plt.subplot(1,1,1,projection="3d")
ax.scatter(kxlist, kylist, kzlist, c=color)
# Give the points labels if provided.
if (type(indices) == list or type(indices) == np.ndarray):
for x,y,z,i in zip(kxlist,kylist,kzlist,indices):
ax.text(x,y,z,i)
c1 = cell_vecs[:,0]
c2 = cell_vecs[:,1]
c3 = cell_vecs[:,2]
O = np.asarray([0.,0.,0.])
l1 = zip(O + offset, c1 + offset)
l2 = zip(c2 + offset, c1 + c2 + offset)
l3 = zip(c3 + offset, c1 + c3 + offset)
l4 = zip(c2 + c3 + offset, c1 + c2 + c3 + offset)
l5 = zip(O + offset, c3 + offset)
l6 = zip(c1 + offset, c1 + c3 + offset)
l7 = zip(c2 + offset, c2 + c3 + offset)
l8 = zip(c1 + c2 + offset, c1 + c2 + c3 + offset)
l9 = zip(O + offset, c2 + offset)
l10 = zip(c1 + offset, c1 + c2 + offset)
l11 = zip(c3 + offset, c2 + c3 + offset)
l12 = zip(c1 + c3 + offset, c1 + c2 + c3 + offset)
ls = [l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12]
for l in ls:
ax.plot3D(*l, c="blue")
if show:
plt.show()
elif save:
if file_name:
plt.savefig(file_name + ".pdf")
else:
plt.savefig("mesh.pdf")
return None
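The twelve `zip` pairs `l1`..`l12` in `plot_mesh` enumerate the edges of the parallelepiped by hand. The same edge set can be generated programmatically; `cell_edges` below is a hypothetical helper sketched for illustration, not part of BZI:

```python
import numpy as np
from itertools import product

def cell_edges(cell_vecs, offset=None):
    """Enumerate the 12 edges of the parallelepiped spanned by the columns of
    cell_vecs, as (start, end) point pairs.

    Each edge points along one basis vector and starts at the offset shifted
    by any combination of the other two basis vectors.
    """
    arr = np.asarray(cell_vecs, dtype=float)
    offset = np.zeros(3) if offset is None else np.asarray(offset, dtype=float)
    c = [arr[:, k] for k in range(3)]
    edges = []
    for k in range(3):
        others = [c[m] for m in range(3) if m != k]
        for a, b in product((0, 1), repeat=2):
            start = offset + a * others[0] + b * others[1]
            edges.append((start, start + c[k]))
    return edges
```

For the identity matrix (a unit cube) this yields 12 edges of unit length, matching the hand-built list in `plot_mesh`.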
def plot_bz_mesh(mesh_points, rlat_vecs, ax=None, BZ=None, color="red", limits=None):
"""Plot the k-points inside the Wigner-Seitz construction of the first
Brillouin zone.
Args:
mesh_points (numpy.ndarray): an 2D array of mesh points.
rlat_vecs (numpy.ndarray): an array of reciprocal lattice vectors as columns of a
3x3 array.
ax (matplotlib.pyplot.axes): a matplotlib axes object.
BZ (scipy.spatial.ConvexHull): the Brillouin zone.
color (float): the color of the mesh points.
"""
ngpts = len(mesh_points)
kxlist = [mesh_points[i][0] for i in range(ngpts)]
kylist = [mesh_points[i][1] for i in range(ngpts)]
kzlist = [mesh_points[i][2] for i in range(ngpts)]
if ax is None:
ax = plt.subplot(1,1,1,projection="3d")
# ax.set_aspect('equal')
# ax.auto_scale_xyz([-1, 1], [-1, 1], [-1, 1])
ax.scatter(kxlist, kylist, kzlist, c=color)
if BZ is None:
BZ = find_bz(rlat_vecs)
plot_bz(BZ, ax=ax, limits=limits)
# for simplex in BZ.simplices:
# # We're going to plot lines between the vertices of the simplex.
# # To make sure we make it all the way around, append the first element
# # to the end of the simplex.
# simplex = np.append(simplex, simplex[0])
# simplex_pts = [BZ.points[i] for i in simplex]
# plot_simplex_edges(simplex_pts, ax)
def PlotMeshes(mesh_points_list, cell_vecs, atoms, offset = np.asarray([0.,0.,0.])):
"""Create a 3D scatter plot of a set of mesh points inside a cell.
Args:
mesh_points (list or np.ndarray): a list of mesh points.
        cell_vecs (list or np.ndarray): a list of vectors that define a cell.
Returns:
None
"""
ax = plt.subplot(1,1,1,projection="3d")
colors = ["red", "blue", "green", "black"]
for i,mesh_points in enumerate(mesh_points_list):
ngpts = len(mesh_points)
kxlist = [mesh_points[i][0] for i in range(ngpts)]
kylist = [mesh_points[i][1] for i in range(ngpts)]
kzlist = [mesh_points[i][2] for i in range(ngpts)]
ax.scatter(kxlist, kylist, kzlist, c=colors[atoms[i]])
c1 = cell_vecs[:,0]
c2 = cell_vecs[:,1]
c3 = cell_vecs[:,2]
O = np.asarray([0.,0.,0.])
l1 = zip(O + offset, c1 + offset)
l2 = zip(c2 + offset, c1 + c2 + offset)
l3 = zip(c3 + offset, c1 + c3 + offset)
l4 = zip(c2 + c3 + offset, c1 + c2 + c3 + offset)
l5 = zip(O + offset, c3 + offset)
l6 = zip(c1 + offset, c1 + c3 + offset)
l7 = zip(c2 + offset, c2 + c3 + offset)
l8 = zip(c1 + c2 + offset, c1 + c2 + c3 + offset)
l9 = zip(O + offset, c2 + offset)
l10 = zip(c1 + offset, c1 + c2 + offset)
l11 = zip(c3 + offset, c2 + c3 + offset)
l12 = zip(c1 + c3 + offset, c1 + c2 + c3 + offset)
ls = [l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12]
for l in ls:
ax.plot3D(*l, c="black")
plt.show()
return None
def PlotSphereMesh(mesh_points,r2, offset = np.asarray([0.,0.,0.]),
save=False, show=True):
"""Create a 3D scatter plot of a set of points inside a sphere.
Args:
mesh_points (list or np.ndarray): a list of mesh points.
r2 (float): the squared radius of the sphere
        offset (list or np.ndarray): the offset of the sphere's center in
            Cartesian coordinates.
save (bool): if true, the plot is saved as sphere_mesh.png.
show (bool): if true, the plot is displayed.
Returns:
None
Example:
>>> from BZI.sampling import sphere_pts
>>> from BZI.symmetry import make_rptvecs
>>> from BZI.plots import PlotSphereMesh
>>> import numpy as np
>>> lat_type = "fcc"
>>> lat_const = 10.26
>>> lat_vecs = make_rptvecs(lat_type, lat_const)
>>> r2 = 3.*(2*np.pi/lat_const)**2
>>> offset = [0.,0.,0.]
>>> grid = sphere_pts(lat_vecs,r2,offset)
>>> PlotSphereMesh(grid,r2,offset)
"""
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# Plot the sphere
u = np.linspace(0, 2 * np.pi, 100)
v = np.linspace(0, np.pi, 100)
r = np.sqrt(r2)
x = r * np.outer(np.cos(u), np.sin(v)) + offset[0]
y = r * np.outer(np.sin(u), np.sin(v)) + offset[1]
z = r * np.outer(np.ones(np.size(u)), np.cos(v)) + offset[2]
ax.scatter(x,y,z,s=0.001)
# Plot the points within the sphere.
ngpts = len(mesh_points)
kxlist = [mesh_points[i][0] for i in range(ngpts)]
kylist = [mesh_points[i][1] for i in range(ngpts)]
kzlist = [mesh_points[i][2] for i in range(ngpts)]
ax.set_aspect('equal')
ax.scatter(kxlist, kylist, kzlist, c="black",s=1)
lim = np.sqrt(r2)*1.1
ax.set_xlim(-lim, lim)
ax.set_ylim(-lim, lim)
ax.set_zlim(-lim, lim)
if save:
plt.savefig("sphere_mesh.png")
if show:
plt.show()
return None
def plot_band_structure(materials_list, EPMlist, EPMargs_list, lattice, npts,
neigvals, energy_shift=0.0, energy_limits=False,
fermi_level=False, save=False, show=True, sum_bands=None,
plot_below=False, ax=None):
"""Plot the band structure of a pseudopotential along symmetry paths.
Args:
materials_list (str): a list of materials whose bandstructures will be
plotted. The first string will label figures and files.
EPMlist (function): a list of pseudopotenial functions.
EPMargs_list (list): a list of pseudopotential arguments as dictionaries.
        lattice: the lattice object that supplies the symmetry points and
            symmetry paths.
npts (int): the number of points to plot along each symmetry line.
neigvals (int): the number of lower-most eigenvalues to include
in the plot.
energy_shift (float): energy shift for band structure
energy_limits (list): the energy window for the plot.
fermi_level (float): if provided, the Fermi level will be included
in the plot.
save (bool): if true, the band structure will be saved.
Returns:
Display or save the band structure.
"""
if ax is None:
fig, ax = plt.subplots()
# k-points between symmetry point pairs in Cartesian coordinates.
car_kpoints = sym_path(lattice, npts, cart=True)
# Find the distance of each symmetry path by putting the symmetry point pairs
# that make up a path in lattice coordinates, converting to Cartesian, and then
# taking the norm of the difference of the pairs.
lat_symmetry_paths = np.empty_like(lattice.symmetry_paths, dtype=list)
car_symmetry_paths = np.empty_like(lattice.symmetry_paths, dtype=list)
distances = []
for i,path in enumerate(lattice.symmetry_paths):
for j,sympt in enumerate(path):
lat_symmetry_paths[i][j] = lattice.symmetry_points[sympt]
car_symmetry_paths[i][j] = np.dot(lattice.reciprocal_vectors,
lat_symmetry_paths[i][j])
distances.append(norm(car_symmetry_paths[i][1] - car_symmetry_paths[i][0]))
# Create coordinates for plotting.
lines = []
for i in range(len(distances)):
start = np.sum(distances[:i])
stop = np.sum(distances[:i+1])
if i == (len(distances) - 1):
lines += list(np.linspace(start, stop, npts))
else:
lines += list(np.delete(np.linspace(start, stop, npts),-1))
# Store the energy eigenvalues in an nested array.
nEPM = len(EPMlist)
energies = [[] for i in range(nEPM)]
for i in range(nEPM):
EPM = EPMlist[i]
EPMargs = EPMargs_list[i]
EPMargs["neigvals"] = neigvals
for kpt in car_kpoints:
EPMargs["kpoint"] = kpt
energies[i].append(EPM.eval(**EPMargs) - energy_shift)
# energies[i].append(EPM.eval(**EPMargs))
colors = ["blue", "green", "red", "violet", "orange", "cyan", "black"]
energies = np.array(energies)
if plot_below:
colors = ["red", "blue", "green", "violet", "orange", "cyan", "black"]
for i in range(nEPM):
energies[i][energies[i] > EPMlist[i].fermi_level] = np.nan
# Find the x-axis labels and label locations.
plot_xlabels = [lattice.symmetry_paths[0][0]]
plot_xlabel_pos = [0.]
for i in range(len(lattice.symmetry_paths) - 1):
if (lattice.symmetry_paths[i][1] == lattice.symmetry_paths[i+1][0]):
plot_xlabels.append(lattice.symmetry_paths[i][1])
plot_xlabel_pos.append(np.sum(distances[:i+1]))
else:
plot_xlabels.append(lattice.symmetry_paths[i][1] + "|" +
lattice.symmetry_paths[i+1][0])
plot_xlabel_pos.append(np.sum(distances[:i+1]))
plot_xlabels.append(lattice.symmetry_paths[-1][1])
plot_xlabel_pos.append(np.sum(distances))
# Plot the energy dispersion curves one at a time.
for i in range(nEPM):
if sum_bands is not None:
ienergy = []
for nk in range(len(car_kpoints)):
tmp_energies = np.array(energies[i][nk][:neigvals])
# print("tmp energies: ", tmp_energies)
# print("all included bands: ", tmp_energies[np.where(tmp_energies < sum_bands)])
ienergy.append(np.sum(tmp_energies[np.where(tmp_energies < sum_bands)]))
ax.plot(lines, ienergy, color=colors[i], label="%s"%materials_list[i])
else:
for ne in range(neigvals):
ienergy = []
for nk in range(len(car_kpoints)):
ienergy.append(energies[i][nk][ne])
if ne == 0:
ax.plot(lines, ienergy, color=colors[i], label="%s"%materials_list[i])
else:
ax.plot(lines, ienergy, color=colors[i])
# Plot the Fermi level if provided.
if fermi_level:
ax.axhline(y = EPMlist[0].fermi_level, c="yellow", label="Fermi level")
# Plot a vertical line at the symmetry points with proper labels.
for pos in plot_xlabel_pos:
ax.axvline(x = pos, c="gray")
plt.xticks(plot_xlabel_pos, plot_xlabels, fontsize=14)
# Adjust the energy range if one was provided.
if energy_limits:
ax.set_ylim(energy_limits)
# Adjust the legend.
# lgd = ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
ax.set_xlim([0,np.sum(distances)])
ax.set_xlabel("Symmetry points", fontsize=16)
ax.set_ylabel("Energy (eV)", fontsize=16)
ax.set_title("%s Band Structure" %materials_list[0], fontsize=16)
ax.grid(linestyle="dotted")
if save:
# ax.savefig("%s_band_structure.pdf" %materials_list[0],
# bbox_extra_artists=(lgd,), bbox_inches='tight')
plt.savefig("%s_band_structure.pdf" %materials_list[0],
bbox_inches='tight')
if show:
plt.show()
return None
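The `lines` construction in `plot_band_structure` concatenates per-segment x-coordinates while dropping the endpoint that each segment shares with the next, so symmetry points are not plotted twice. As a standalone sketch (hypothetical helper name):

```python
import numpy as np

def path_coordinates(distances, npts):
    """Cumulative x-coordinates along concatenated symmetry-path segments.

    distances: length of each path segment; npts: points per segment.
    The shared endpoint between consecutive segments is kept only once.
    """
    lines = []
    for i, d in enumerate(distances):
        start = sum(distances[:i])
        seg = np.linspace(start, start + d, npts)
        if i < len(distances) - 1:
            seg = seg[:-1]  # drop the duplicated junction point
        lines.extend(seg.tolist())
    return lines
```

Two segments of lengths 1 and 2 sampled with 3 points each give [0, 0.5, 1, 2, 3]: five coordinates rather than six, because the junction at 1 appears once.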
def PlotVaspBandStructure(file_loc, material, lat_type, lat_consts, lat_angles,
energy_shift=0.0, fermi_level=False, elimits=False,
save=False, show=True):
"""Plot the band structure from a VASP INCAR, KPOINT, and OUTCAR file.
Args:
file_loc (str): the location of the directory with the VASP output files.
The KPOINTS file MUST be in a very particular format: there must be
4 introductory lines, each k-point must be on its own line and there
must only be one space between pairs of k-points. There mustn't be
any space after the last entered k-point.
material (str): the material whose band structure is being plotted.
lat_type (str): the lattice type
energy_shift (float): energy shift for band structure
fermi_level (bool): if true, plot the Fermi level.
elimits (list): the energy window for the plot.
save (bool): if true, the band structure will be saved.
show (bool): if false, return none. This is useful for showing multiple
plots.
Returns:
Display or save the band structure.
"""
# Get the correct symmetry point dictionary.
if lat_type == "fcc":
sympt_dict = fcc_sympts
lat_centering = "face"
elif lat_type == "bcc":
sympt_dict = bcc_sympts
lat_centering = "body"
elif lat_type == "sc":
sympt_dict = sc_sympts
lat_centering = "prim"
else:
raise ValueError("Invalid lattice type")
# Extract the lattice constant from the POSCAR file.
sympt_list = []
with open(file_loc + "POSCAR","r") as file:
lat_const = ""
f = file.readlines()
for c in f[1]:
try:
int(c)
lat_const += c
            except ValueError:
if c == ".":
lat_const += c
if c == "!":
break
continue
lat_const = float(lat_const)
angstrom_to_Bohr = 1.889725989
lat_const *= angstrom_to_Bohr
lat_vecs = make_ptvecs(lat_centering, lat_consts, lat_angles)
rlat_vecs = make_rptvecs(lat_vecs)
nbands = ""
with open(file_loc + "INCAR","r") as file:
f = file.readlines()
for line in f:
if "NBANDS" in line:
for l in line:
try:
int(l)
nbands += l
except ValueError:
continue
    nbands = int(nbands)
# Extract the total number of k-points, number of k-points per path,
# the number of paths and the symmetry points from the KPOINTs file.
with open(file_loc + "KPOINTS","r") as file:
f = file.readlines()
npts_path = int(f[1].split()[0])
        npaths = (len(f) - 3) // 3
nkpoints = int(npts_path*npaths)
sympt_list = []
f = f[4:]
for line in f:
spt = ""
sline = line.strip()
for l in sline:
if (l == " " or
l == "!" or
l == "." or
l == "\t"):
continue
else:
try:
int(l)
                    except ValueError:
spt += l
if spt != "":
sympt_list.append(spt)
for i,sympt in enumerate(sympt_list):
if sympt == "gamma" or sympt == "Gamma":
sympt_list[i] = "G"
# Remove all duplicate symmetry points
unique_sympts = [sympt_list[i] for i in range(0, len(sympt_list), 2)] + [sympt_list[-1]]
# Replace symbols representing points with their lattice coordinates.
lat_sympt_coords = [sympt_dict[sp] for sp in unique_sympts]
car_sympt_coords = [np.dot(rlat_vecs,k) for k in lat_sympt_coords]
with open(file_loc + "OUTCAR", "r") as file:
f = file.readlines()
EFERMI = ""
for line in f:
sline = line.strip()
if "EFERMI" in sline:
for c in sline:
try:
int(c)
EFERMI += c
except ValueError:
if c == ".":
EFERMI += c
EFERMI = float(EFERMI)
id_line = " band No. band energies occupation \n"
with open(file_loc + "OUTCAR", "r") as file:
f = file.readlines()
energies = []
occupancies = []
en_occ = []
lat_kpoints = []
nkpt = 0
nkpts_dr = 0 # number of kpoints with duplicates removed
for i,line in enumerate(f):
if line == id_line:
nkpt += 1
if nkpt % npts_path == 0 and nkpt != nkpoints:
continue
else:
nkpts_dr += 1
energies.append([])
occupancies.append([])
en_occ.append([])
lat_kpoints.append(list(map(float,f[i-1].split()[3:6])))
for j in range(1,nbands+1):
energies[nkpts_dr-1].append(float(f[i+j].split()[1]) - energy_shift)
occupancies[nkpts_dr-1].append(float(f[i+j].split()[2]))
en_occ[nkpts_dr-1].append(energies[nkpts_dr-1][-1]*(
occupancies[nkpts_dr-1][-1]/2))
car_kpoints = [np.dot(rlat_vecs,k) for k in lat_kpoints]
# Find the distances between symmetry points.
nsympts = len(unique_sympts)
sympt_dist = [0] + [norm(car_sympt_coords[i+1]
- car_sympt_coords[i])
for i in range(nsympts - 1)]
# Create coordinates for plotting
lines = []
for i in range(nsympts - 1):
start = np.sum(sympt_dist[:i+1])
stop = np.sum(sympt_dist[:i+2])
if i == (nsympts - 2):
lines += list(np.linspace(start, stop, npts_path))
else:
lines += list(np.delete(np.linspace(start, stop, npts_path),-1))
for nb in range(nbands):
ienergy = []
for nk in range(len(car_kpoints)):
ienergy.append(energies[nk][nb])
if nb == 0:
plt.plot(lines,ienergy, label="VASP Band structure",color="blue")
else:
plt.plot(lines,ienergy,color="blue")
# Plot a vertical line at the symmetry points with proper labels.
for i in range(nsympts):
pos = np.sum(sympt_dist[:i+1])
plt.axvline(x = pos, c="gray")
tick_labels = unique_sympts
tick_locs = [np.sum(sympt_dist[:i+1]) for i in range(nsympts)]
plt.xticks(tick_locs,tick_labels)
# Adjust the energy range if one was provided.
if elimits:
plt.ylim(elimits)
if fermi_level:
plt.axhline(y = EFERMI, c="yellow", label="Fermi level")
# Adjust the legend.
lgd = plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlim([0,np.sum(sympt_dist)])
plt.xlabel("Symmetry points")
plt.ylabel("Energy (eV)")
plt.title("%s Band Structure" %material)
plt.grid(linestyle="dotted")
if save:
plt.savefig("%s_band_struct.pdf" %material,
bbox_extra_artists=(lgd,), bbox_inches='tight')
elif show:
plt.show()
else:
return None
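The digit-by-digit parsing of the POSCAR scaling line above can be written more compactly with a regular expression. A hypothetical sketch (`parse_lattice_constant` is not part of this module):

```python
import re

def parse_lattice_constant(line):
    """Return the first floating-point number on a POSCAR scaling line,
    ignoring anything after a '!' comment marker.

    A regex alternative to the character-by-character parsing loop above;
    a sketch, not the code's exact behavior.
    """
    line = line.split("!")[0]
    match = re.search(r"[-+]?\d+(\.\d*)?|[-+]?\.\d+", line)
    if match is None:
        raise ValueError("no numeric value found in %r" % line)
    return float(match.group(0))
```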
def plot_paths(EPM, npts, save=False):
"""Plot the path along which the band structure is plotted.
"""
# k-points between symmetry point pairs in Cartesian coordinates.
car_kpoints = sym_path(EPM.lattice, npts, cart=True)
# Plot the paths.
fig = plt.figure()
ax = fig.gca(projection='3d')
x = [ckp[0] for ckp in car_kpoints]
y = [ckp[1] for ckp in car_kpoints]
z = [ckp[2] for ckp in car_kpoints]
ax.plot(x,y,z)
# Label the paths.
sympt_labels = list(EPM.lattice.symmetry_points.keys())
sympts = [np.dot(EPM.lattice.reciprocal_vectors, p) for p in
list(EPM.lattice.symmetry_points.values())]
x_list = [sp[0] for sp in sympts]
y_list = [sp[1] for sp in sympts]
z_list = [sp[2] for sp in sympts]
for x,y,z,i in zip(x_list, y_list, z_list, sympt_labels):
ax.text(x,y,z,i)
plt.show()
def create_convergence_plot(EPM, ndivisions, exact_fl, improved, symmetry,
file_names, location, err_correlation=False,
convention="ordinary", degree=None):
"""Create a convergence plot of the total energy fermi level convergence for the
free elecetron model.
Args:
ndivisions (list): a list of integers that gives the size of the grid.
degree (int): the degree of the free electron dispersion relation.
exact_fl (bool): if true fix the Fermi level at the exact value.
improved (bool): if true include the improved tetrahedron method.
symmetry (bool): if true, use symmetry reduction with tetrahedron method.
EPM (obj): an instance of a pseudopotential class. Its name gives
the folder in which results are saved.
file_names (list): a list of file name strings. The first corresponds to
the name of the Fermi level plot; the second to the total energy plot.
location (str): the file path to where the plots are saved.
err_correlation (bool): if true, generate a plot of Fermi level error against
total energy error; file_names must then contain three strings.
"""
if err_correlation:
if len(file_names) != 3:
msg = "There must be three file names when error correlation is included."
raise ValueError(msg.format(err_correlation))
# Lists for storing errors
rec_fl_err = []
rec_te_err = []
tet_fl_err = []
tet_te_err = []
ctet_te_err = []
sym_rec_fl_err = []
sym_rec_te_err = []
sym_tet_fl_err = []
sym_tet_te_err = []
# Figures
fermi_fig, fermi_axes = plt.subplots()
energy_fig, energy_axes = plt.subplots()
# Offsets for the tetrahedron method.
lat_shift = [-1./2]*3
grid_shift = [0,0,0]
# Change the degree for the free electron model.
if degree is not None:
EPM.set_degree(degree)
# Make the Fermi level exact if selected.
if exact_fl:
EPM.fermi_level = EPM.fermi_level_ans
tot_timei = time.time()
for ndivs in ndivisions:
print("Divisions ", ndivs)
t0 = time.time()
# Create the grid for rectangles
grid_consts = np.array(EPM.lattice.constants)*ndivs
grid_angles = [np.pi/2]*3
grid_centering = "prim"
grid_vecs = make_ptvecs(grid_centering, grid_consts, grid_angles)
rgrid_vecs = make_rptvecs(grid_vecs, convention)
offset = np.dot(inv(rgrid_vecs), -np.sum(EPM.lattice.reciprocal_vectors, 1)/2) + (
[0.5]*3)
# Calculate the Fermi level, if applicable, and total energy for the rectangular method.
# Calculate percent error for each.
grid = make_cell_points(EPM.lattice.reciprocal_vectors, rgrid_vecs, offset)
weights = np.ones(len(grid), dtype=int)
# Calculate errors using rectangle method with symmetry.
if symmetry:
sym_grid, sym_weights = find_orbits(grid, EPM.lattice.reciprocal_vectors,
rgrid_vecs, offset)
if not exact_fl:
EPM.fermi_level = rectangular_fermi_level(EPM, sym_grid, sym_weights)
sym_rec_fl_err.append( abs(EPM.fermi_level -
EPM.fermi_level_ans)/EPM.fermi_level_ans*100)
EPM.total_energy = rectangular_method(EPM, sym_grid, list(sym_weights))
sym_rec_te_err.append( abs(EPM.total_energy -
EPM.total_energy_ans)/EPM.total_energy_ans*100)
# Rectangle Fermi level
if not exact_fl:
EPM.fermi_level = rectangular_fermi_level(EPM, grid, weights)
print("rectangles: ", EPM.fermi_level)
rec_fl_err.append( abs(EPM.fermi_level - EPM.fermi_level_ans)/EPM.fermi_level_ans*100)
# Rectangle Total energy
EPM.total_energy = rectangular_method(EPM, grid, weights)
print("rectangles te: ", EPM.total_energy)
# rec_te.append(EPM.total_energy)
rec_te_err.append( abs(EPM.total_energy - EPM.total_energy_ans)/EPM.total_energy_ans*100)
# Calculate grid, tetrahedra, and weights for tetrahedron method.
grid, tetrahedra = grid_and_tetrahedra(EPM, ndivs, lat_shift, grid_shift)
weights = np.ones(len(tetrahedra), dtype=int)
# Calculate errors using tetrahedron method and symmetry reduction.
if symmetry:
irr_tet, tet_weights = find_irreducible_tetrahedra(EPM, tetrahedra, grid)
if not exact_fl:
EPM.fermi_level = calc_fermi_level(EPM, irr_tet, tet_weights, grid, tol=1e-8)
sym_tet_fl_err.append( abs(EPM.fermi_level - EPM.fermi_level_ans)/EPM.fermi_level_ans*100)
EPM.total_energy = calc_total_energy(EPM, irr_tet, tet_weights, grid)
sym_tet_te_err.append( abs(EPM.total_energy - EPM.total_energy_ans)/EPM.total_energy_ans*100)
if not exact_fl:
EPM.fermi_level = calc_fermi_level(EPM, tetrahedra, weights, grid, tol=1e-8)
print("tetrahedra: ", EPM.fermi_level)
tet_fl_err.append( abs(EPM.fermi_level - EPM.fermi_level_ans)/EPM.fermi_level_ans*100)
# Calculate the error for the tetrahedron method.
EPM.total_energy = calc_total_energy(EPM, tetrahedra, weights, grid)
print("tetrahedra te ", EPM.total_energy)
tet_te_err.append( abs(EPM.total_energy - EPM.total_energy_ans)/EPM.total_energy_ans*100)
# Calculate the error for the corrected tetrahedron method.
if improved:
ndiv0 = [ndivs]*3
extended_grid, extended_tetrahedra = get_extended_tetrahedra(EPM, ndivs, lat_shift, grid_shift)
EPM.total_energy = get_corrected_total_energy(EPM, tetrahedra, extended_tetrahedra,
grid, extended_grid, ndiv0)
ctet_te_err.append( abs(EPM.total_energy - EPM.total_energy_ans)/EPM.total_energy_ans*100)
print("run time", time.time() - t0)
# Location where plots are saved.
loc = os.path.join(location, EPM.material)
# Plot errors.
if not exact_fl:
fermi_axes.loglog(np.array(ndivisions)**3, rec_fl_err,label="Rectangles")
fermi_axes.loglog(np.array(ndivisions)**3, tet_fl_err,label="Tetrahedra")
if symmetry:
fermi_axes.loglog(np.array(ndivisions)**3, sym_rec_fl_err,label="Reduced Rectangles")
fermi_axes.loglog(np.array(ndivisions)**3, sym_tet_fl_err,label="Reduced Tetrahedra")
fermi_axes.set_title(EPM.material + " Fermi Level Convergence")
fermi_axes.set_xlabel("Number of k-points")
fermi_axes.set_ylabel("Percent Error")
fermi_axes.legend(loc="best")
fermi_fig_name = os.path.join(loc, file_names[0] + ".pdf")
fermi_fig.savefig(fermi_fig_name)
print("rectangles ", rec_te_err)
print("tetrahedra ", tet_te_err)
energy_axes.loglog(np.array(ndivisions)**3, rec_te_err,label="Rectangles")
energy_axes.loglog(np.array(ndivisions)**3, tet_te_err,label="Tetrahedra")
if improved:
energy_axes.loglog(np.array(ndivisions)**3, ctet_te_err,label="Improved Tetrahedra")
if symmetry:
energy_axes.loglog(np.array(ndivisions)**3, sym_rec_te_err,label="Reduced Rectangles")
energy_axes.loglog(np.array(ndivisions)**3, sym_tet_te_err,label="Reduced Tetrahedra")
energy_axes.set_title(EPM.material + " Total Energy Convergence")
energy_axes.set_xlabel("Number of k-points")
energy_axes.set_ylabel("Percent Error")
energy_axes.legend(loc="best")
energy_fig_name = os.path.join(loc, file_names[1] + ".pdf")
energy_fig.savefig(energy_fig_name)
if (not exact_fl) and err_correlation:
corr_fig = plt.figure()
corr_axes = corr_fig.add_subplot(1,1,1)
corr_axes.scatter(rec_fl_err, rec_te_err, label="Rectangles")
corr_axes.scatter(tet_fl_err, tet_te_err, label="Tetrahedra")
corr_axes.set_xscale("log")
corr_axes.set_yscale("log")
corr_axes.set_title(EPM.material + " Error Correlation")
corr_axes.set_xlabel("Percent Error Fermi Level")
corr_axes.set_ylabel("Percent Error Total Energy")
corr_axes.legend(loc="best")
corr_fig_name = os.path.join(loc, file_names[2] + ".pdf")
corr_fig.savefig(corr_fig_name)
tot_timef = time.time()
print("total elapsed time: ", (tot_timef - tot_timei)/60, " minutes")
def plot_states(EPM, grid, tetrahedra, weights, method, energy_list, quantity, nbands, answer,
title, xlimits, ylimits, labels, bin_size=0.1, show=True, save=False,
root_dir=None):
"""Plot the density of states and the correct density of states.
Args:
EPM (:py:obj:`BZI.pseudopots.EmpiricalPseudopotential`): an instance of a pseudopotential class.
grid (numpy.ndarray): a list of grid points over which the density of states is calculated.
tetrahedra (list or numpy.ndarray): a list of tetrahedra, each identified by a quadruple
of grid indices
weights (numpy.ndarray or list): a list of tetrahedron weights.
method (str): the method used to calculate the density of states.
energy_list (list): a list of energies at which to calculate the density of states.
quantity (str): the quantity to plot. Can be density or number of states.
nbands (int): the number of bands included in the calculation.
answer (function): a function of energy that returns the exact density of states.
title (str): the title of the plot.
xlimits (tuple): the x-axis limits.
ylimits (tuple): the y-axis limits.
labels (tuple): the labels for the plots, first comes the calculated DOS label.
bin_size (float): the size of the energy bins. Only applicable to rectangles.
show (bool): display the density of states.
save (str): save the plot with this file name. If not provided, the plot isn't saved. The
string must include the file format, such as .pdf or .png.
root_dir (str): the root directory where the plot is saved.
"""
if method == "rectangles":
energies = np.sort(np.array([EPM.eval(g, nbands) for g in grid]).flatten())
dE = bin_size
dV = EPM.lattice.reciprocal_volume/len(grid)
V = EPM.lattice.reciprocal_volume
Ei = 0
Ef = 0
dos = [] # density of states
nos = [] # number of states
dos_energies = [] # energies
while max(energies) > Ef:
Ef += dE
dos_energies.append(Ei + (Ef-Ei)/2.)
dos.append( len(energies[(Ei <= energies) & (energies < Ef)])/(len(grid)*dE)*2)
nos.append(np.sum(dos)*dE)
Ei += dE
if EPM.degree:
answer_list = [answer(en, EPM.degree) for en in energy_list]
else:
answer_list = [answer(en) for en in energy_list]
if quantity == "dos":
plt.scatter(dos_energies, dos, label=labels[0], c="blue")
plt.ylabel("Density of States")
elif quantity == "nos":
plt.scatter(dos_energies, nos, label=labels[0], c="blue")
plt.ylabel("Number of States")
else:
msg = "The supported quantities are dos and nos."
raise ValueError(msg.format(quantity))
plt.plot(energy_list, answer_list, label=labels[1], c="black")
plt.xlabel("Energy (eV)")
plt.title(title)
plt.xlim(xlimits[0], xlimits[1])
plt.ylim(ylimits[0], ylimits[1])
plt.legend(loc="best")
if save:
plt.savefig(root_dir + save)
elif show:
plt.show()
elif method == "tetrahedra":
VG = EPM.lattice.reciprocal_volume
VT = VG/len(weights)
dos = np.zeros(len(energy_list))
nos = np.zeros(len(energy_list))
if quantity == "dos":
for i,energy in enumerate(energy_list):
for tet in tetrahedra:
for band in range(nbands):
tet_energies = np.sort([EPM.eval(grid[j], nbands)[band] for j in tet])
dos[i] += density_of_states(VG, VT, tet_energies, energy)
elif quantity == "nos":
for i,energy in enumerate(energy_list):
for tet in tetrahedra:
for band in range(nbands):
tet_energies = np.sort([EPM.eval(grid[j], nbands)[band] for j in tet])
nos[i] += number_of_states(VG, VT, tet_energies, energy)
else:
msg = "The supported quantities are dos and nos."
raise ValueError(msg.format(quantity))
if EPM.degree:
answer_list = [answer(en, EPM.degree) for en in energy_list]
else:
answer_list = [answer(en) for en in energy_list]
if quantity == "dos":
plt.plot(energy_list, dos, label=labels[0], c="blue")
plt.ylabel("Density of States")
else:
plt.plot(energy_list, nos, label=labels[0], c="blue")
plt.ylabel("Number of States")
plt.plot(energy_list, answer_list, label=labels[1], c="black")
plt.xlabel("Energy (eV)")
plt.xlim(xlimits[0], xlimits[1])
plt.ylim(ylimits[0], ylimits[1])
plt.title(title)
plt.legend(loc="best")
if save:
plt.savefig(root_dir + save)
elif show:
plt.show()
else:
return None
else:
msg = "The supported methods are rectangles and tetrahedra."
raise ValueError(msg.format(method))
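The binning in the "rectangles" branch above can be condensed into a standalone helper. A hypothetical sketch in plain Python, normalizing by the number of energy samples rather than the number of grid points, so it is an illustration of the technique rather than the exact code path:

```python
import math

def histogram_dos(energies, bin_size):
    """Bin a flat list of eigenvalues into a density of states.

    Returns bin-center energies and a spin-degenerate DOS estimate,
    mirroring the while-loop binning above.
    """
    nbins = max(1, int(math.ceil(max(energies) / bin_size)))
    counts = [0] * nbins
    for en in energies:
        # Clamp values on the top edge into the last bin.
        counts[min(int(en // bin_size), nbins - 1)] += 1
    centers = [(i + 0.5) * bin_size for i in range(nbins)]
    dos = [c / (len(energies) * bin_size) * 2 for c in counts]
    return centers, dos
```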
def generate_states_data(EPM, nbands, grid, methods_list, weights_list,
energy_list, file_location, file_number,
bin_size=0.1, tetrahedra=None):
"""Generate data needed to plot the density of states and number of states
of a pseudopotential.
Args:
EPM (obj): an instance of a pseudopotential class.
nbands (int): the number of bands included in the calculation.
grid (list or numpy.ndarray): a list of grid points over which the quantity
provided is calculated.
methods_list (str): a list of methods used to calculate the provided quantity.
weights_list (list): a list of symmetry reduction weights. Must be in the same
order as methods_list.
energy_list (list): a list of energies at which the provided quantity is
calculated.
file_location (str): the file path to where the data is saved. Exclude the last
backslash.
file_number (str): the number of the file. This should avoid overwriting
previous data.
bin_size (float): the bin size for the rectangular method.
tetrahedra (list): a list of tetrahedra vertices.
"""
# The file structure is as follows: potential/potential_variations/valency_#
# There may not be any potential variations, such as different degrees for the
# free electron model.
file_prefix = file_location + "/data"
# If the data folder doesn't exist, make one.
if not os.path.isdir(file_prefix):
os.mkdir(file_prefix)
file_prefix += "/" + EPM.name
# If the pseudopotential folder doesn't exist, make one.
if not os.path.isdir(file_prefix):
os.mkdir(file_prefix)
# If the variation of the potential folder doesn't exist, make one.
if EPM.degree:
file_prefix += "/degree_" + str(EPM.degree)
if not os.path.isdir(file_prefix):
os.mkdir(file_prefix)
# If the considered valency folder doesn't exist, make one.
file_prefix += "/" + "valency_" + str(EPM.nvalence_electrons)
if not os.path.isdir(file_prefix):
os.mkdir(file_prefix)
# Generate the energies, density of states, and number of states with
# the rectangular method if it is one of the methods provided.
if "rectangles" in methods_list:
rec_weights = weights_list[0]
# Remove rectangles from methods_list.
methods_list.remove("rectangles")
# Make sure that the method folder exists. If not, make one.
rec_prefix = file_prefix + "/rectangles"
if not os.path.isdir(rec_prefix):
os.mkdir(rec_prefix)
# The energies of the potential at each point in the grid.
rec_energies = np.array(list(chain(*[EPM.eval(grid[i], nbands)*rec_weights[i]
for i in range(len(grid))])))
energies, dos, nos = rec_dos_nos(rec_energies, nbands, bin_size)
# Generate and save the exact values of the quantities provided at the energies
# provided.
exact_dos_list = [EPM.density_of_states(en) for en in energy_list]
exact_nos_list = [EPM.number_of_states(en) for en in energy_list]
# Add all these quantities to the data dictionary and pickle it.
data = {}
data["energies"] = energy_list
data["binned energies"] = energies
data["density of states"] = dos
data["number of states"] = nos
data["analytic density of states"] = exact_dos_list
data["analytic number of states"] = exact_nos_list
with open(rec_prefix + "/run_" + file_number + ".p", "w") as file:
pickle.dump(data, file)
# Generate the energies, density of states, and number of states with
# the rectangular method if it is one of the methods provided.
if "tetrahedra" in methods_list:
# Remove tetrahedra from methods_list.
methods_list.remove("tetrahedra")
# The tetrahedral weights should always come second in the weights list.
if len(weights_list) > 1:
tet_weights = weights_list[1]
else:
tet_weights = weights_list[0]
# Make sure that the method folder exists. If not, make one.
tet_prefix = file_prefix + "/tetrahedra"
if not os.path.isdir(tet_prefix):
os.mkdir(tet_prefix)
# Generate and save the exact values of the quantities provided at the energies
# provided, as well as the numerical values.
energies, dos, nos = tet_dos_nos(EPM, nbands, grid, energy_list, tetrahedra,
tet_weights)
exact_dos_list = [EPM.density_of_states(en) for en in energy_list]
exact_nos_list = [EPM.number_of_states(en) for en in energy_list]
data = {}
data["energies"] = energy_list
data["density of states"] = dos
data["number of states"] = nos
data["analytic density of states"] = exact_dos_list
data["analytic number of states"] = exact_nos_list
with open(tet_prefix + "/run_" + file_number + ".p", "w") as file:
pickle.dump(data, file)
if methods_list != []:
msg = "The allowed methods are 'rectangles' and 'tetrahedra'."
raise ValueError(msg.format(methods_list))
def plot_states_data(EPM, file_location, file_number, file_name, quantity, method,
title, xlimits, ylimits, labels):
"""Plot the density of states or number of states of a pseudopotential with
data retrieved from file.
Args:
file_location (str): the file path to where the data is saved.
file_number (str): the number of the data file to load.
quantity (str): the quantity whose data is plotted.
method (str): the method used to generate the data.
title (str): the title of the plot.
xlimits (tuple): the x-axis limits.
ylimits (tuple): the y-axis limits.
labels (tuple): the labels for the plots; first comes the analytic label,
then the numeric label.
file_name (str): the file name of the saved plot.
"""
if EPM.degree:
data_file = (file_location + "/data/" + EPM.name + "/degree_" + str(EPM.degree) +
"/" "valency_" + str(EPM.nvalence_electrons) + "/" + method +
"/run_" + str(file_number) + ".p")
else:
data_file = (file_location + "/data/" + EPM.name + "/degree_" + "valency_" +
str(EPM.nvalence_electrons) + "/" + method + "/run_" +
str(file_number) + ".p")
file_prefix = file_location + "/plots"
# If the plots directory doesn't exist, make one.
if not os.path.isdir(file_prefix):
os.mkdir(file_prefix)
file_prefix += "/" + EPM.name
# If the pseudopotential folder doesn't exist, make one.
if not os.path.isdir(file_prefix):
os.mkdir(file_prefix)
# If the variation of the potential folder doesn't exist, make one.
if EPM.degree:
file_prefix += "/degree_" + str(EPM.degree)
if not os.path.isdir(file_prefix):
os.mkdir(file_prefix)
# If the considered valency folder doesn't exist, make one.
file_prefix += "/" + "valency_" + str(EPM.nvalence_electrons)
if not os.path.isdir(file_prefix):
os.mkdir(file_prefix)
# Get the dictionary with the data.
print("data", data_file)
data = pickle.load(open(data_file, "r"))
if method == "rectangles":
plt.scatter(data["binned energies"], data[quantity], label=labels[0], c="blue")
else:
plt.scatter(data["energies"], data[quantity], label = labels[0], c="blue")
plt.plot(data["energies"], data["analytic " + quantity], label=labels[1], c="black")
plt.xlabel("Energy (eV)")
ylabel_dict = {"density of states": "Density of States",
"number of states": "Number of States"}
plt.ylabel(ylabel_dict[quantity])
plt.title(title)
plt.xlim(xlimits[0], xlimits[1])
plt.ylim(ylimits[0], ylimits[1])
plt.legend(loc="best")
plt.savefig(file_prefix + "/" + file_name + ".pdf")
def plot_simplex_edges(vertices, axes, color="blue"):
"""Plot the edges of a 3-simplex.
Args:
vertices (numpy.ndarray): a list of simplex vertices.
axes (matplotlib.axes)
"""
if len(vertices) == 3:
vertices = np.append(vertices, [vertices[0]], axis=0)
for i in range(len(vertices)-1):
xstart = vertices[i][0]
xfinish = vertices[i+1][0]
xs = np.linspace(xstart, xfinish, 100)
ystart = vertices[i][1]
yfinish = vertices[i+1][1]
ys = np.linspace(ystart, yfinish, 100)
zstart = vertices[i][2]
zfinish = vertices[i+1][2]
zs = np.linspace(zstart, zfinish, 100)
axes.plot(xs, ys, zs, c=color)
def plot_bz(bz, symmetry_points=None, remove=True, ax=None, color="blue", limits=None):
"""Plot a Brillouin zone.
Args:
bz (scipy.spatial.ConvexHull): a convex hull object.
symmetry_points (dict): a dictionary of symmetry points in Cartesian coordinates.
remove (bool): if True, plot the facets instead of the simplices that make up the
boundary of the Brillouin zone or irreducible Brillouin zone.
ax (matplotlib.axes): an axes object.
limits (list): the axis limits on the plot.
"""
fig = plt.figure()
if ax is None:
ax = fig.gca(projection='3d')
ax.set_aspect('equal')
if symmetry_points is not None:
eps = np.average(list(symmetry_points.values()))*0.05
for spt in symmetry_points.keys():
coords = symmetry_points[spt]
ax.scatter(coords[0], coords[1], coords[2], c="black")
ax.text(coords[0] + eps, coords[1] + eps, coords[2] + eps, spt, size=10)
if remove:
facet_list = []
equations = list(deepcopy(bz.equations))
simplices = list(deepcopy(bz.points[bz.simplices]))
while len(equations) != 0:
equation, equations = equations[-1], equations[:-1]
facet, simplices = simplices[-1], simplices[:-1]
indices = find_point_indices([equation], equations)
if len(indices) > 0:
# Remove duplicate equations from the list of equations
equations_to_remove = [equations[i] for i in indices]
for eq in equations_to_remove:
equations = remove_points([eq], equations)
# Find all simplices that lie on the same plane to get the facet.
simplices_to_remove = []
for index in indices:
facet = np.append(facet, simplices[index], axis=0)
simplices_to_remove.append(simplices[index])
for s in simplices_to_remove:
simplices = remove_points([s], simplices)
# Remove duplicate points on facet.
unique_facet = []
for pt in facet:
if not check_contained([pt], unique_facet):
unique_facet.append(pt)
facet_list.append(orderAngle(unique_facet))
else:
facet_list.append(orderAngle(facet))
for facet in facet_list:
# We want to plot all the edges, so we append the last vertex to the
# beginning of the facet.
facet = np.append(facet, [facet[0]], axis=0)
plot_simplex_edges(facet, ax, color=color)
else:
for simplex in bz.simplices:
# We're going to plot lines between the vertices of the simplex.
# To make sure we make it all the way around, append the first element
# to the end of the simplex.
simplex = np.append(simplex, simplex[0])
simplex_pts = [bz.points[i] for i in simplex]
plot_simplex_edges(simplex_pts, ax, color=color)
if limits is not None:
ax.set_xlim(limits[0])
ax.set_ylim(limits[1])
ax.set_zlim(limits[2])
# plt.close()
def plot_all_bz(lat_vecs, grid=None, sympts=None, ax=None, convention="ordinary"):
"""Plot the Brillouin zone and optionally the irreducible Brillouin zone and points
within the Brillouin zone.
Args:
lat_vecs (numpy.ndarray): a 3x3 array with lattice vectors as columns.
grid (list or numpy.ndarray): a list of list or 2D array of points to plot.
sympts (dict): a dictionary whose keys are the Greek or Roman
letters representing the symmetry points. The corresponding values are the
coordinates of the points in lattice coordinates.
ax (matplotlib.axes): an axes object.
convention (str): the convention for finding the reciprocal lattice vectors.
Options include 'ordinary' and 'angular'.
"""
rlat_vecs = make_rptvecs(lat_vecs, convention=convention)
if ax is None:
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
bz = find_bz(rlat_vecs)
plot_bz(bz, ax=ax)
if grid is not None:
plot_just_points(grid, ax=ax)
if sympts is not None:
# Get the vertices of the IBZ.
ibz_vertices = list(sympts.values())
ibz_vertices = [np.dot(rlat_vecs, v) for v in ibz_vertices]
# Get the symmetry points in Cartesian coordinates.
sympts_cart = {}
for spt in sympts.keys():
sympts_cart[spt] = np.dot(rlat_vecs, sympts[spt])
ibz = ConvexHull(ibz_vertices)
plot_bz(ibz, sympts_cart, ax=ax, color="red")
class Arrow3D(FancyArrowPatch):
def __init__(self, xs, ys, zs, *args, **kwargs):
FancyArrowPatch.__init__(self, (0,0), (0,0), *args, **kwargs)
self._verts3d = xs, ys, zs
def draw(self, renderer):
xs3d, ys3d, zs3d = self._verts3d
xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)
self.set_positions((xs[0],ys[0]),(xs[1],ys[1]))
FancyArrowPatch.draw(self, renderer)
def plot_vecs(vecs, colors, labels, ax=None):
"""Plot a list of vectors."""
xmin = min([vecs[i][0] for i in range(len(vecs))])
xmax = max([vecs[i][0] for i in range(len(vecs))])
ymin = min([vecs[i][1] for i in range(len(vecs))])
ymax = max([vecs[i][1] for i in range(len(vecs))])
zmin = min([vecs[i][2] for i in range(len(vecs))])
zmax = max([vecs[i][2] for i in range(len(vecs))])
if ax is None:
fig = plt.figure(figsize=(15,15))
ax = fig.add_subplot(111, projection='3d')
for i,v in enumerate(vecs):
arrow = Arrow3D([0,v[0]], [0,v[1]], [0,v[2]],
mutation_scale=20, lw=1.5,
arrowstyle="-|>", color=colors[i])
ax.text(v[0], v[1], v[2], labels[i])
ax.add_artist(arrow)
ax.set_xlim(xmin,xmax)
ax.set_ylim(ymin,ymax)
ax.set_zlim(zmin,zmax)
def plot_fermi_surface(EPM, bz_grid, all_energies, eigvals, eps,
marker_sizes, marker_colors, marker_opacities):
"""Plot the Fermi surface of an empirical pseudopotential.
Args:
EPM (object): an empirical pseudopotential object.
bz_grid (list): a list of points within the Brillouin zone.
all_energies (list): a list of eigenvalue energies evaluated at
each point in bz_grid.
eigvals (list): the eigenvalues for which the Fermi surface is plotted.
eps (float): the tolerance within which an energy eigenvalue must lie
in order to be considered part of the Fermi surface.
marker_sizes (list): the marker sizes, one per band.
marker_colors (list): the marker colors, one per band.
marker_opacities (list): the marker opacities, one per band.
"""
colors = ["g", "b", "r", "c", "m", "k", "y"]
bz_grid = np.array(bz_grid)
all_energies = np.array(all_energies)
data = []
# fig = plt.figure()
# ax = fig.add_subplot(111, projection='3d')
# ax.axis('off')
for i in eigvals:
energies = all_energies[:,i]
indices = np.where(abs(energies - EPM.fermi_level) < eps)
if indices[0].size == 0:
print("Band #{} had no eigenvalues near the Fermi level.".format(i))
continue
plot_pts = bz_grid[indices]
x,y,z=zip(*plot_pts)
# ax.scatter(x, y, z, c=marker_colors[i], s=marker_sizes[i], alpha=marker_opacities[i])
# ax.set_aspect('equal')
data.append(go.Scatter3d(x=x,y=y,z=z, mode='markers',
marker=dict(size=marker_sizes.pop(0), color=marker_colors.pop(0)),
opacity=marker_opacities.pop(0)))
layout = go.Layout(scene=dict(
xaxis=dict(
title="",
autorange=True,
showgrid=False,
zeroline=False,
showline=False,
ticks='',
showticklabels=False
),
yaxis=dict(
title="",
autorange=True,
showgrid=False,
zeroline=False,
showline=False,
ticks='',
showticklabels=False
),
zaxis=dict(
title="",
autorange=True,
showgrid=False,
zeroline=False,
showline=False,
ticks='',
showticklabels=False
)
)
)
fig = go.Figure(data=data, layout=layout)
plot(fig, show_link = False)
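The masking step that `np.where` performs above can be written in plain Python. A hypothetical sketch of selecting the points that make up a Fermi shell:

```python
def fermi_shell_points(grid, band_energies, fermi_level, eps):
    """Return the grid points whose band energy lies within eps of the
    Fermi level -- the filtering used to build each Scatter3d trace."""
    return [point for point, energy in zip(grid, band_energies)
            if abs(energy - fermi_level) < eps]
```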
# Source: jerjorg/BZI, BZI/plots.py (Python, GPL-3.0)
"""
OGRe Twitter Interface
:func:`twitter` : method for fetching data from Twitter
"""
import base64
import hashlib
import logging
import sys
import time
from datetime import datetime
from twython import Twython
from ogre.validation import sanitize
from ogre.exceptions import OGReError, OGReLimitError
from snowflake2time.snowflake import snowflake2utc, utc2snowflake
from future.standard_library import hooks
with hooks():
from urllib.request import urlopen # pylint: disable=import-error
def sanitize_twitter(
keys,
media=("image", "text"),
keyword="",
quantity=15,
location=None,
interval=None
): # pylint: disable=too-many-arguments,too-many-locals
"""
Validate and prepare parameters for use in Twitter data retrieval.
.. seealso:: :meth:`ogre.validation.validate` describes the format each
parameter must have.
:type keys: dict
:param keys: Specify Twitter API keys.
Twitter **requires** a "consumer_key" and "access_token".
:type media: tuple
:param media: Specify content mediums to make lowercase and deduplicate.
"image" and "text" are supported mediums.
:type keyword: str
:param keyword: Specify search criteria to incorporate
the requested media in.
:type quantity: int
:param quantity: Specify a quota of results.
:type location: tuple
:param location: Specify a location to format as a Twitter geocode
("<latitude>,<longitude>,<radius><unit>").
:type interval: tuple
:param interval: Specify earliest and latest moments to convert to
Twitter Snowflake IDs.
:raises: ValueError
:rtype: tuple
:returns: Each passed parameter is returned (in order) in the proper format.
"""
clean_keys = {}
for key, value in keys.items():
key = key.lower()
if key not in (
"consumer_key",
"access_token"
):
raise ValueError(
'Valid Twitter keys are "consumer_key" and "access_token".'
)
if not value:
raise ValueError("Twitter API keys are required.")
clean_keys[key] = value
if "consumer_key" not in clean_keys.keys() or \
"access_token" not in clean_keys.keys():
raise ValueError(
'Twitter API keys must include a "consumer_key" and "access_token".'
)
clean_media, clean_keyword, clean_quantity, clean_location, clean_interval = \
sanitize(
media=media,
keyword=keyword,
quantity=quantity,
location=location,
interval=interval
)
kinds = []
if clean_media is not None:
for clean_medium in clean_media:
if clean_medium in ("image", "text"):
kinds.append(clean_medium)
kinds = tuple(kinds)
keywords = clean_keyword
if kinds == ("image",):
keywords += " pic.twitter.com"
elif kinds == ("text",):
keywords += " -pic.twitter.com"
keywords = keywords.strip()
geocode = None
if location is not None and clean_location[2] > 0:
geocode = \
str(clean_location[0]) + "," +\
str(clean_location[1]) + "," +\
str(clean_location[2])+clean_location[3]
period_id = (None, None)
if interval is not None:
period_id = (
utc2snowflake(clean_interval[0]),
utc2snowflake(clean_interval[1])
)
if keywords in ("", "-pic.twitter.com") and geocode is None:
raise ValueError("Specify either a keyword or a location.")
return (
clean_keys,
kinds,
keywords,
clean_quantity,
geocode,
period_id
)
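The interval handling above converts UTC moments with `utc2snowflake`; as a rough sketch (not the module's actual implementation), Twitter Snowflake IDs encode a millisecond timestamp, relative to the Twitter epoch of 1288834974657 ms, in the bits above the 22 low worker/sequence bits:

```python
# Hedged sketch of Snowflake timestamp math; utc2snowflake_sketch and
# snowflake2utc_sketch are hypothetical stand-ins for the module's helpers.
TWITTER_EPOCH_MS = 1288834974657  # ms since the Unix epoch


def utc2snowflake_sketch(utc_seconds):
    """Return the smallest Snowflake ID for a UTC timestamp (in seconds)."""
    return (int(utc_seconds * 1000) - TWITTER_EPOCH_MS) << 22


def snowflake2utc_sketch(snowflake_id):
    """Recover the UTC timestamp (in seconds) encoded in a Snowflake ID."""
    return ((snowflake_id >> 22) + TWITTER_EPOCH_MS) // 1000
```

Round-tripping a timestamp through both helpers loses only the sub-second and worker/sequence detail.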
def twitter(
keys,
media=("image", "text"),
keyword="",
quantity=15,
location=None,
interval=None,
**kwargs
): # pylint: disable=too-many-arguments,too-many-locals,too-many-branches,too-many-statements
"""
Fetch Tweets from the Twitter API.
.. seealso:: :meth:`sanitize_twitter` describes more about
the format each parameter must have.
:type keys: dict
:param keys: Specify an API key and access token.
:type media: tuple
:param media: Specify content mediums to fetch.
"text" or "image" are supported mediums.
:type keyword: str
:param keyword: Specify search criteria.
"Queries can be limited due to complexity."
If this happens, no results will be returned.
To avoid this, follow Twitter Best Practices including
the following:
"Limit your searches to 10 keywords and operators."
:type quantity: int
:param quantity: Specify a quota of results to fetch.
Twitter will return 15 results by default,
and up to 100 can be requested in a single query.
If a number larger than 100 is specified,
the retriever will make multiple queries in an attempt
to satisfy the requested `quantity`,
but this is done on a best effort basis.
Whether the specified number is returned or
not depends on Twitter.
:type location: tuple
:param location: Specify a place (latitude, longitude, radius, unit)
to search.
Since OGRe only returns geotagged results,
the larger the specified radius,
the fewer results will be returned.
This is because of the way Twitter satisfies
geocoded queries.
It uses so-called "fuzzy matching logic" to deduce the
location of Tweets posted publicly without location data.
OGRe filters these out.
:type interval: tuple
:param interval: Specify a period of time (earliest, latest) to search.
"The Search API is not complete index of all Tweets,
but instead an index of recent Tweets."
Twitter's definition of "recent" is rather vague,
but when an interval is not specified,
"that index includes between 6-9 days of Tweets."
:type strict_media: bool
:param strict_media: Specify whether to only return the requested media
(defaults to False).
Setting this to False helps build caches faster at
no additional cost.
For instance, since Twitter automatically sends the
text of a Tweet back, if `("image",)` is passed for
`media`, the text on hand will only be filtered if
`strict_media` is True.
:type secure: bool
:param secure: Specify whether to prefer HTTPS or not (defaults to True).
:type test: bool
:param test: Specify whether the current request is a trial run.
This affects what gets logged and should be accompanied by
the next 3 parameters (`test_message`, `api`, and `network`).
:type test_message: str
:param test_message: Specify a description of the test to log.
This is ignored if the `test` parameter is False.
:type api: callable
:param api: Specify an API access point (for dependency injection).
:type network: callable
:param network: Specify a network access point (for dependency injection).
:raises: OGReError, OGReLimitError, TwythonError
:rtype: list
:returns: GeoJSON Feature(s)
.. seealso:: Visit https://dev.twitter.com/docs/using-search for tips on
how to build queries for Twitter using the `keyword` parameter.
More information may also be found at
https://dev.twitter.com/docs/api/1.1/get/search/tweets.
"""
keychain, kinds, keywords, remaining, geocode, (since_id, max_id) = \
sanitize_twitter(
keys=keys,
media=media,
keyword=keyword,
quantity=quantity,
location=location,
interval=interval
)
modifiers = {
"api": Twython,
"fail_hard": False,
"network": urlopen,
"query_limit": 450, # Twitter allows 450 queries every 15 minutes.
"secure": True,
"strict_media": False
}
for modifier in modifiers:
if kwargs.get(modifier) is not None:
modifiers[modifier] = kwargs[modifier]
qid = hashlib.md5(
(
str(time.time()) +
str(keywords) +
str(remaining) +
str(geocode) +
str(since_id) +
str(max_id) +
str(kwargs)
).encode('utf-8')
).hexdigest()
log = logging.getLogger(__name__)
log.info(qid+" Request: Twitter")
log.debug(
qid+" Status:" +
" media("+str(media)+")" +
" keyword("+str(keywords)+")" +
" quantity("+str(remaining)+")" +
" location("+str(geocode)+")" +
" interval("+str(since_id)+","+str(max_id)+")" +
" kwargs("+str(kwargs)+")"
)
if not kinds or remaining < 1 or modifiers["query_limit"] < 1:
log.info(qid+" Success: No results were requested.")
return []
api = modifiers["api"](
keychain["consumer_key"],
access_token=keychain["access_token"]
)
limits = api.get_application_rate_limit_status()
try:
limit = int(
limits["resources"]["search"]["/search/tweets"]["remaining"]
)
reset = int(
limits["resources"]["search"]["/search/tweets"]["reset"]
)
if limit < 1:
message = "Queries are being limited."
log.info(qid+" Failure: "+message)
if modifiers["fail_hard"]:
raise OGReLimitError(
source="Twitter",
message=message,
reset=reset
)
else:
log.debug(qid+" Status: "+str(limit)+" queries remain.")
if limit < modifiers["query_limit"]:
modifiers["query_limit"] = limit
except KeyError:
log.warning(qid+" Unobtainable Rate Limit")
raise
total = remaining
collection = []
for query in range(modifiers["query_limit"]): # pylint: disable=too-many-nested-blocks
count = min(remaining, 100) # Twitter accepts a max count of 100.
try:
results = api.search(
q=keywords,
count=count,
geocode=geocode,
since_id=since_id,
max_id=max_id
)
except Exception:
log.info(
qid+" Failure: " +
str(query+1)+" queries produced " +
str(len(collection))+" results. " +
str(sys.exc_info()[1])
)
raise
if results.get("statuses") is None:
message = "The request is too complex."
log.info(
qid+" Failure: " +
str(query+1)+" queries produced " +
str(len(collection))+" results. " +
message
)
if modifiers["fail_hard"]:
raise OGReError(source="Twitter", message=message)
break
for tweet in results["statuses"]:
if tweet.get("coordinates") is None or tweet.get("id") is None:
# Tweets must be geotagged and timestamped.
continue
feature = {
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [
tweet["coordinates"]["coordinates"][0],
tweet["coordinates"]["coordinates"][1]
]
},
"properties": {
"source": "Twitter",
"time": datetime.utcfromtimestamp(
snowflake2utc(tweet["id"])
).isoformat()+"Z"
}
}
if "text" in kinds:
if tweet.get("text") is not None:
feature["properties"]["text"] = tweet["text"]
if "image" in kinds:
if not modifiers["strict_media"]:
if tweet.get("text") is not None:
feature["properties"]["text"] = tweet["text"]
if tweet.get("entities", {}).get("media") is not None:
for entity in tweet["entities"]["media"]:
if entity.get("type") is not None:
if entity["type"].lower() == "photo":
media_url = "media_url_https"
if not modifiers["secure"]:
media_url = "media_url"
if entity.get(media_url) is not None:
feature["properties"]["image"] =\
base64.b64encode(
modifiers["network"](
entity[media_url]
).read().encode('utf-8')
)
if len(feature["properties"]) > 2:
collection.append(feature)
remained = remaining
remaining = total-len(collection)
log.debug(
qid+" Status:" +
" 1 query produced "+str(remained-remaining)+" results."
)
if remaining <= 0:
log.info(
qid+" Success: " +
str(query+1)+" queries produced " +
str(len(collection))+" results."
)
break
if results.get("search_metadata", {}).get("next_results") is None:
outcome = "Success" if collection else "Failure"
log.info(
qid+" "+outcome+": " +
str(query+1)+" queries produced " +
str(len(collection))+" results. " +
"No retrievable results remain."
)
break
max_id = int(
results["search_metadata"]["next_results"]
.split("max_id=")[1]
.split("&")[0]
)
if query+1 >= modifiers["query_limit"]:
outcome = "Success" if collection else "Failure"
log.info(
qid+" "+outcome+": " +
str(query+1)+" queries produced " +
str(len(collection))+" results. " +
"No remaining results are retrievable."
)
return collection
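The pagination loop above pulls the next `max_id` out of `search_metadata["next_results"]`; in isolation, that parsing step looks like the following sketch (`next_max_id` is a hypothetical helper, not part of OGRe):

```python
def next_max_id(next_results):
    # Parse max_id out of Twitter's "next_results" query string,
    # e.g. "?max_id=420639486928068608&q=pic.twitter.com".
    return int(next_results.split("max_id=")[1].split("&")[0])
```

Passing this value as `max_id` on the next query keeps successive pages from overlapping.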
dmtucker/ogre | ogre/Twitter.py | Python | lgpl-2.1 | 15081 | ["VisIt"] | 5188c80cf986503975070eb17752311696bcfe239ed385753ebc774d3b82f431
# coding: utf-8
# Copyright 2013 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests that walk through Course Builder pages."""
__author__ = 'Sean Lip'
import __builtin__
import copy
import csv
import datetime
import logging
import os
import re
import shutil
import time
import urllib
import zipfile
import appengine_config
from controllers import lessons
from controllers import sites
from controllers import utils
from controllers.utils import XsrfTokenManager
from models import config
from models import courses
from models import jobs
from models import models
from models import transforms
from models import vfs
from models.courses import Course
import modules.admin.admin
from modules.announcements.announcements import AnnouncementEntity
from tools import verify
from tools.etl import etl
from tools.etl import remote
import actions
from actions import assert_contains
from actions import assert_contains_all_of
from actions import assert_does_not_contain
from actions import assert_equals
from google.appengine.api import memcache
from google.appengine.api import namespace_manager
from google.appengine.ext import db
# A number of data files in a test course.
COURSE_FILE_COUNT = 70
# There is an expectation in our tests of automatic import of data/*.csv files,
# which is achieved below by selecting an alternative factory method.
courses.Course.create_new_default_course = (
courses.Course.custom_new_default_course_for_test)
class InfrastructureTest(actions.TestBase):
"""Test core infrastructure classes agnostic to specific user roles."""
def test_response_content_type_is_application_json_in_utf_8(self):
response = self.testapp.get(
'/rest/config/item?key=gcb_config_update_interval_sec')
self.assertEqual(
'application/javascript; charset=utf-8',
response.headers['Content-Type'])
def test_xsrf_token_manager(self):
"""Test XSRF token operations."""
# os.environ['AUTH_DOMAIN'] = 'test_domain'
# os.environ['APPLICATION_ID'] = 'test app'
# Issue and verify an anonymous user token.
action = 'test-action'
token = utils.XsrfTokenManager.create_xsrf_token(action)
assert '/' in token
assert utils.XsrfTokenManager.is_xsrf_token_valid(token, action)
# Impersonate real user.
os.environ['USER_EMAIL'] = 'test_email'
os.environ['USER_ID'] = 'test_id'
# Issue and verify a real user token.
action = 'test-action'
token = utils.XsrfTokenManager.create_xsrf_token(action)
assert '/' in token
assert utils.XsrfTokenManager.is_xsrf_token_valid(token, action)
# Check forged time stamp invalidates token.
parts = token.split('/')
assert len(parts) == 2
forgery = '%s/%s' % (long(parts[0]) + 1000, parts[1])
assert not forgery == token
assert not utils.XsrfTokenManager.is_xsrf_token_valid(forgery, action)
# Check token properly expires.
action = 'test-action'
time_in_the_past = long(
time.time() - utils.XsrfTokenManager.XSRF_TOKEN_AGE_SECS)
# pylint: disable-msg=protected-access
old_token = utils.XsrfTokenManager._create_token(
action, time_in_the_past)
assert not utils.XsrfTokenManager.is_xsrf_token_valid(old_token, action)
# Clean up.
# del os.environ['APPLICATION_ID']
# del os.environ['AUTH_DOMAIN']
del os.environ['USER_EMAIL']
del os.environ['USER_ID']
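The test above treats tokens as `"<issued_on>/<digest>"` pairs where a forged timestamp or an aged-out token fails validation. A minimal HMAC-based sketch of such a scheme (not the actual `XsrfTokenManager` internals; the secret is a stand-in) could look like:

```python
import hashlib
import hmac
import time

SECRET = b"hypothetical-secret"  # stand-in for the application's XSRF secret


def create_token(action, issued_on=None):
    # Tokens are "<issued_on>/<digest>"; the digest binds the action and
    # issue time to the secret, so changing either part breaks validation.
    issued_on = int(issued_on if issued_on is not None else time.time())
    digest = hmac.new(
        SECRET, ("%s/%s" % (issued_on, action)).encode("utf-8"),
        hashlib.sha256).hexdigest()
    return "%s/%s" % (issued_on, digest)


def is_token_valid(token, action, max_age=86400):
    try:
        issued_on, _ = token.split("/")
    except ValueError:
        return False
    if int(issued_on) + max_age < time.time():
        return False  # token properly expires
    # Recompute the token for the claimed issue time and compare in
    # constant time; a forged timestamp yields a different digest.
    return hmac.compare_digest(token, create_token(action, issued_on))
```

This mirrors the behaviors the assertions check: a `/` in the token, rejection of forged timestamps, and expiry after a maximum age.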
def test_import_course(self):
"""Tests importing one course into another."""
# Setup courses.
sites.setup_courses('course:/a::ns_a, course:/b::ns_b, course:/:/')
# Validate the courses before import.
all_courses = sites.get_all_courses()
dst_app_context_a = all_courses[0]
dst_app_context_b = all_courses[1]
src_app_context = all_courses[2]
dst_course_a = courses.Course(None, app_context=dst_app_context_a)
dst_course_b = courses.Course(None, app_context=dst_app_context_b)
src_course = courses.Course(None, app_context=src_app_context)
assert not dst_course_a.get_units()
assert not dst_course_b.get_units()
assert 11 == len(src_course.get_units())
# Import 1.2 course into 1.3.
errors = []
src_course_out, dst_course_out_a = dst_course_a.import_from(
src_app_context, errors)
if errors:
raise Exception(errors)
assert len(
src_course.get_units()) == len(src_course_out.get_units())
assert len(
src_course_out.get_units()) == len(dst_course_out_a.get_units())
# Import 1.3 course into 1.3.
errors = []
src_course_out_a, dst_course_out_b = dst_course_b.import_from(
dst_app_context_a, errors)
if errors:
raise Exception(errors)
assert src_course_out_a.get_units() == dst_course_out_b.get_units()
# Test delete.
units_to_delete = dst_course_a.get_units()
deleted_count = 0
for unit in units_to_delete:
assert dst_course_a.delete_unit(unit)
deleted_count += 1
dst_course_a.save()
assert deleted_count == len(units_to_delete)
assert not dst_course_a.get_units()
assert not dst_course_a.app_context.fs.list(os.path.join(
dst_course_a.app_context.get_home(), 'assets/js/'))
# Clean up.
sites.reset_courses()
def test_create_new_course(self):
"""Tests creating a new course."""
# Setup courses.
sites.setup_courses('course:/test::ns_test, course:/:/')
# Add several units.
course = courses.Course(None, app_context=sites.get_all_courses()[0])
link = course.add_link()
unit = course.add_unit()
assessment = course.add_assessment()
course.save()
assert course.find_unit_by_id(link.unit_id)
assert course.find_unit_by_id(unit.unit_id)
assert course.find_unit_by_id(assessment.unit_id)
assert 3 == len(course.get_units())
assert assessment.unit_id == 3
# Check unit can be found.
assert unit == course.find_unit_by_id(unit.unit_id)
assert not course.find_unit_by_id(999)
# Update unit.
unit.title = 'Test Title'
course.update_unit(unit)
course.save()
assert 'Test Title' == course.find_unit_by_id(unit.unit_id).title
# Update assessment.
assessment_content = open(os.path.join(
appengine_config.BUNDLE_ROOT,
'assets/js/assessment-Pre.js'), 'rb').readlines()
assessment_content = u''.join(assessment_content)
errors = []
course.set_assessment_content(assessment, assessment_content, errors)
course.save()
assert not errors
assessment_content_stored = course.app_context.fs.get(os.path.join(
course.app_context.get_home(),
course.get_assessment_filename(assessment.unit_id)))
assert assessment_content == assessment_content_stored
# Test adding lessons.
lesson_a = course.add_lesson(unit)
lesson_b = course.add_lesson(unit)
lesson_c = course.add_lesson(unit)
course.save()
assert [lesson_a, lesson_b, lesson_c] == course.get_lessons(
unit.unit_id)
assert lesson_c.lesson_id == 6
# Reorder lessons.
new_order = [
{'id': link.unit_id},
{
'id': unit.unit_id,
'lessons': [
{'id': lesson_b.lesson_id},
{'id': lesson_a.lesson_id},
{'id': lesson_c.lesson_id}]},
{'id': assessment.unit_id}]
course.reorder_units(new_order)
course.save()
assert [lesson_b, lesson_a, lesson_c] == course.get_lessons(
unit.unit_id)
# Move lesson to another unit.
another_unit = course.add_unit()
course.move_lesson_to(lesson_b, another_unit)
course.save()
assert [lesson_a, lesson_c] == course.get_lessons(unit.unit_id)
assert [lesson_b] == course.get_lessons(another_unit.unit_id)
course.delete_unit(another_unit)
course.save()
# Make the course available.
get_environ_old = sites.ApplicationContext.get_environ
def get_environ_new(self):
environ = get_environ_old(self)
environ['course']['now_available'] = True
return environ
sites.ApplicationContext.get_environ = get_environ_new
# Test public/private assessment.
assessment_url = (
'/test/' + course.get_assessment_filename(assessment.unit_id))
assert not assessment.now_available
response = self.get(assessment_url, expect_errors=True)
assert_equals(response.status_int, 403)
assessment = course.find_unit_by_id(assessment.unit_id)
assessment.now_available = True
course.update_unit(assessment)
course.save()
response = self.get(assessment_url)
assert_equals(response.status_int, 200)
# Check delayed assessment deletion.
course.delete_unit(assessment)
response = self.get(assessment_url) # note: file is still available
assert_equals(response.status_int, 200)
course.save()
response = self.get(assessment_url, expect_errors=True)
assert_equals(response.status_int, 404)
# Test public/private activity.
lesson_a = course.find_lesson_by_id(None, lesson_a.lesson_id)
lesson_a.now_available = False
lesson_a.has_activity = True
course.update_lesson(lesson_a)
errors = []
course.set_activity_content(lesson_a, u'var activity = []', errors)
assert not errors
activity_url = (
'/test/' + course.get_activity_filename(None, lesson_a.lesson_id))
response = self.get(activity_url, expect_errors=True)
assert_equals(response.status_int, 403)
lesson_a = course.find_lesson_by_id(None, lesson_a.lesson_id)
lesson_a.now_available = True
course.update_lesson(lesson_a)
course.save()
response = self.get(activity_url)
assert_equals(response.status_int, 200)
# Check delayed activity.
course.delete_lesson(lesson_a)
response = self.get(activity_url) # note: file is still available
assert_equals(response.status_int, 200)
course.save()
response = self.get(activity_url, expect_errors=True)
assert_equals(response.status_int, 404)
# Test that delete removes all child objects.
course.delete_unit(link)
course.delete_unit(unit)
assert not course.delete_unit(assessment)
course.save()
assert not course.get_units()
assert not course.app_context.fs.list(os.path.join(
course.app_context.get_home(), 'assets/js/'))
# Clean up.
sites.ApplicationContext.get_environ = get_environ_old
sites.reset_courses()
def test_unit_lesson_not_available(self):
"""Tests that unavailable units and lessons behave correctly."""
# Setup a new course.
sites.setup_courses('course:/test::ns_test, course:/:/')
config.Registry.test_overrides[
models.CAN_USE_MEMCACHE.name] = True
app_context = sites.get_all_courses()[0]
course = courses.Course(None, app_context=app_context)
# Add a unit that is not available.
unit_1 = course.add_unit()
unit_1.now_available = False
lesson_1_1 = course.add_lesson(unit_1)
lesson_1_1.title = 'Lesson 1.1'
course.update_unit(unit_1)
# Add a unit with some lessons available and some lessons not available.
unit_2 = course.add_unit()
unit_2.now_available = True
lesson_2_1 = course.add_lesson(unit_2)
lesson_2_1.title = 'Lesson 2.1'
lesson_2_1.now_available = False
lesson_2_2 = course.add_lesson(unit_2)
lesson_2_2.title = 'Lesson 2.2'
lesson_2_2.now_available = True
course.update_unit(unit_2)
# Add a unit with all lessons not available.
unit_3 = course.add_unit()
unit_3.now_available = True
lesson_3_1 = course.add_lesson(unit_3)
lesson_3_1.title = 'Lesson 3.1'
lesson_3_1.now_available = False
course.update_unit(unit_3)
# Add a unit that is available.
unit_4 = course.add_unit()
unit_4.now_available = True
lesson_4_1 = course.add_lesson(unit_4)
lesson_4_1.title = 'Lesson 4.1'
lesson_4_1.now_available = True
course.update_unit(unit_4)
# Add an available unit with no lessons.
unit_5 = course.add_unit()
unit_5.now_available = True
course.update_unit(unit_5)
course.save()
assert [lesson_1_1] == course.get_lessons(unit_1.unit_id)
assert [lesson_2_1, lesson_2_2] == course.get_lessons(unit_2.unit_id)
assert [lesson_3_1] == course.get_lessons(unit_3.unit_id)
# Make the course available.
get_environ_old = sites.ApplicationContext.get_environ
def get_environ_new(self):
environ = get_environ_old(self)
environ['course']['now_available'] = True
return environ
sites.ApplicationContext.get_environ = get_environ_new
private_tag = 'id="lesson-title-private"'
# Simulate a student traversing the course.
email = 'test_unit_lesson_not_available@example.com'
name = 'Test Unit Lesson Not Available'
actions.login(email, is_admin=False)
actions.register(self, name)
# Accessing a unit that is not available redirects to the main page.
response = self.get('/test/unit?unit=%s' % unit_1.unit_id)
assert_equals(response.status_int, 302)
response = self.get('/test/unit?unit=%s' % unit_2.unit_id)
assert_equals(response.status_int, 200)
assert_contains('Lesson 2.1', response.body)
assert_contains('This lesson is not available.', response.body)
assert_does_not_contain(private_tag, response.body)
response = self.get('/test/unit?unit=%s&lesson=%s' % (
unit_2.unit_id, lesson_2_2.lesson_id))
assert_equals(response.status_int, 200)
assert_contains('Lesson 2.2', response.body)
assert_does_not_contain('This lesson is not available.', response.body)
assert_does_not_contain(private_tag, response.body)
response = self.get('/test/unit?unit=%s' % unit_3.unit_id)
assert_equals(response.status_int, 200)
assert_contains('Lesson 3.1', response.body)
assert_contains('This lesson is not available.', response.body)
assert_does_not_contain(private_tag, response.body)
response = self.get('/test/unit?unit=%s' % unit_4.unit_id)
assert_equals(response.status_int, 200)
assert_contains('Lesson 4.1', response.body)
assert_does_not_contain('This lesson is not available.', response.body)
assert_does_not_contain(private_tag, response.body)
response = self.get('/test/unit?unit=%s' % unit_5.unit_id)
assert_equals(response.status_int, 200)
assert_does_not_contain('Lesson', response.body)
assert_contains(
'This unit does not contain any lessons.', response.body)
assert_does_not_contain(private_tag, response.body)
actions.logout()
# Simulate an admin traversing the course.
email = 'test_unit_lesson_not_available@example.com_admin'
name = 'Test Unit Lesson Not Available Admin'
actions.login(email, is_admin=True)
actions.register(self, name)
# The course admin can access a unit that is not available.
response = self.get('/test/unit?unit=%s' % unit_1.unit_id)
assert_equals(response.status_int, 200)
assert_contains('Lesson 1.1', response.body)
response = self.get('/test/unit?unit=%s' % unit_2.unit_id)
assert_equals(response.status_int, 200)
assert_contains('Lesson 2.1', response.body)
assert_does_not_contain('This lesson is not available.', response.body)
assert_contains(private_tag, response.body)
response = self.get('/test/unit?unit=%s&lesson=%s' % (
unit_2.unit_id, lesson_2_2.lesson_id))
assert_equals(response.status_int, 200)
assert_contains('Lesson 2.2', response.body)
assert_does_not_contain('This lesson is not available.', response.body)
assert_does_not_contain(private_tag, response.body)
response = self.get('/test/unit?unit=%s' % unit_3.unit_id)
assert_equals(response.status_int, 200)
assert_contains('Lesson 3.1', response.body)
assert_does_not_contain('This lesson is not available.', response.body)
assert_contains(private_tag, response.body)
response = self.get('/test/unit?unit=%s' % unit_4.unit_id)
assert_equals(response.status_int, 200)
assert_contains('Lesson 4.1', response.body)
assert_does_not_contain('This lesson is not available.', response.body)
assert_does_not_contain(private_tag, response.body)
response = self.get('/test/unit?unit=%s' % unit_5.unit_id)
assert_equals(response.status_int, 200)
assert_does_not_contain('Lesson', response.body)
assert_contains(
'This unit does not contain any lessons.', response.body)
assert_does_not_contain(private_tag, response.body)
actions.logout()
# Clean up app_context.
sites.ApplicationContext.get_environ = get_environ_old
def test_custom_assessments(self):
"""Tests that custom assessments are evaluated correctly."""
# Setup a new course.
sites.setup_courses('course:/test::ns_test, course:/:/')
config.Registry.test_overrides[
models.CAN_USE_MEMCACHE.name] = True
app_context = sites.get_all_courses()[0]
course = courses.Course(None, app_context=app_context)
email = 'test_assessments@google.com'
name = 'Test Assessments'
assessment_1 = course.add_assessment()
assessment_1.title = 'first'
assessment_1.now_available = True
assessment_1.weight = 0
assessment_2 = course.add_assessment()
assessment_2.title = 'second'
assessment_2.now_available = True
assessment_2.weight = 0
course.save()
assert course.find_unit_by_id(assessment_1.unit_id)
assert course.find_unit_by_id(assessment_2.unit_id)
assert 2 == len(course.get_units())
# Make the course available.
get_environ_old = sites.ApplicationContext.get_environ
def get_environ_new(self):
environ = get_environ_old(self)
environ['course']['now_available'] = True
return environ
sites.ApplicationContext.get_environ = get_environ_new
first = {'score': '1.00', 'assessment_type': assessment_1.unit_id}
second = {'score': '3.00', 'assessment_type': assessment_2.unit_id}
# Update assessment 1.
assessment_1_content = open(os.path.join(
appengine_config.BUNDLE_ROOT,
'assets/js/assessment-Pre.js'), 'rb').readlines()
assessment_1_content = u''.join(assessment_1_content)
errors = []
course.set_assessment_content(
assessment_1, assessment_1_content, errors)
course.save()
assert not errors
# Update assessment 2.
assessment_2_content = open(os.path.join(
appengine_config.BUNDLE_ROOT,
'assets/js/assessment-Mid.js'), 'rb').readlines()
assessment_2_content = u''.join(assessment_2_content)
errors = []
course.set_assessment_content(
assessment_2, assessment_2_content, errors)
course.save()
assert not errors
# Register.
actions.login(email)
actions.register(self, name)
# Submit assessment 1.
actions.submit_assessment(
self, assessment_1.unit_id, first, base='/test')
student = models.Student.get_enrolled_student_by_email(email)
student_scores = course.get_all_scores(student)
assert len(student_scores) == 2
assert student_scores[0]['id'] == str(assessment_1.unit_id)
assert student_scores[0]['score'] == 1
assert student_scores[0]['title'] == 'first'
assert student_scores[0]['weight'] == 0
assert student_scores[1]['id'] == str(assessment_2.unit_id)
assert student_scores[1]['score'] == 0
assert student_scores[1]['title'] == 'second'
assert student_scores[1]['weight'] == 0
# The overall score is None if there are no weights assigned to any of
# the assessments.
overall_score = course.get_overall_score(student)
assert overall_score is None
# View the student profile page.
response = self.get('/test/student/home')
assert_does_not_contain('Overall course score', response.body)
# Add a weight to the first assessment.
assessment_1.weight = 10
overall_score = course.get_overall_score(student)
assert overall_score == 1
# Submit assessment 2.
actions.submit_assessment(
self, assessment_2.unit_id, second, base='/test')
# We need to reload the student instance, because its properties have
# changed.
student = models.Student.get_enrolled_student_by_email(email)
student_scores = course.get_all_scores(student)
assert len(student_scores) == 2
assert student_scores[1]['score'] == 3
overall_score = course.get_overall_score(student)
assert overall_score == 1
# Change the weight of assessment 2.
assessment_2.weight = 30
overall_score = course.get_overall_score(student)
assert overall_score == int((1 * 10 + 3 * 30) / 40)
# Save all changes.
course.save()
# View the student profile page.
response = self.get('/test/student/home')
assert_contains('assessment-score-first">1</span>', response.body)
assert_contains('assessment-score-second">3</span>', response.body)
assert_contains('Overall course score', response.body)
assert_contains('assessment-score-overall">2</span>', response.body)
# Submitting a lower score for any assessment does not change any of
# the scores, since the system records the maximum score that has ever
# been achieved on any assessment.
first_retry = {'score': '0', 'assessment_type': assessment_1.unit_id}
actions.submit_assessment(
self, assessment_1.unit_id, first_retry, base='/test')
student = models.Student.get_enrolled_student_by_email(email)
student_scores = course.get_all_scores(student)
assert len(student_scores) == 2
assert student_scores[0]['id'] == str(assessment_1.unit_id)
assert student_scores[0]['score'] == 1
overall_score = course.get_overall_score(student)
assert overall_score == int((1 * 10 + 3 * 30) / 40)
actions.logout()
# Clean up app_context.
sites.ApplicationContext.get_environ = get_environ_old
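The overall-score assertions above are consistent with a weighted average that is undefined until at least one assessment carries weight. A hypothetical sketch of that computation (`overall_score_sketch` is an illustration, not the real `Course.get_overall_score`):

```python
def overall_score_sketch(scores_and_weights):
    # Weighted-average sketch matching the assertions above: the overall
    # score is None when no assessment carries any weight, and otherwise
    # the truncated weighted mean of the recorded scores.
    total_weight = sum(weight for _, weight in scores_and_weights)
    if total_weight == 0:
        return None
    weighted_sum = sum(score * weight for score, weight in scores_and_weights)
    return int(weighted_sum / float(total_weight))
```

For the values in the test (score 1 at weight 10, score 3 at weight 30), the truncated mean is `int(100 / 40.0) == 2`.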
def test_datastore_backed_file_system(self):
"""Tests datastore-backed file system operations."""
fs = vfs.AbstractFileSystem(vfs.DatastoreBackedFileSystem('', '/'))
# Check binary file.
src = os.path.join(appengine_config.BUNDLE_ROOT, 'course.yaml')
dst = os.path.join('/', 'course.yaml')
fs.put(dst, open(src, 'rb'))
stored = fs.open(dst)
assert stored.metadata.size == len(open(src, 'rb').read())
assert not stored.metadata.is_draft
assert stored.read() == open(src, 'rb').read()
# Check draft.
fs.put(dst, open(src, 'rb'), is_draft=True)
stored = fs.open(dst)
assert stored.metadata.is_draft
# Check text files with non-ASCII characters and encoding.
foo_js = os.path.join('/', 'assets/js/foo.js')
foo_text = u'This is a test text (тест данные).'
fs.put(foo_js, vfs.string_to_stream(foo_text))
stored = fs.open(foo_js)
assert vfs.stream_to_string(stored) == foo_text
# Check delete.
del_file = os.path.join('/', 'memcache.test')
fs.put(del_file, vfs.string_to_stream(u'test'))
assert fs.isfile(del_file)
fs.delete(del_file)
assert not fs.isfile(del_file)
# Check that opening or deleting a non-existent file does not fail.
assert not fs.open('/foo/bar/baz')
assert not fs.delete('/foo/bar/baz')
# Check new content fully overrides old (with and without memcache).
test_file = os.path.join('/', 'memcache.test')
fs.put(test_file, vfs.string_to_stream(u'test text'))
stored = fs.open(test_file)
assert u'test text' == vfs.stream_to_string(stored)
fs.delete(test_file)
# Check file existence.
assert not fs.isfile('/foo/bar')
assert fs.isfile('/course.yaml')
assert fs.isfile('/assets/js/foo.js')
# Check file listing.
bar_js = os.path.join('/', 'assets/js/bar.js')
fs.put(bar_js, vfs.string_to_stream(foo_text))
baz_js = os.path.join('/', 'assets/js/baz.js')
fs.put(baz_js, vfs.string_to_stream(foo_text))
assert fs.list('/') == sorted([
u'/course.yaml',
u'/assets/js/foo.js', u'/assets/js/bar.js', u'/assets/js/baz.js'])
assert fs.list('/assets') == sorted([
u'/assets/js/foo.js', u'/assets/js/bar.js', u'/assets/js/baz.js'])
assert not fs.list('/foo/bar')
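The VFS test above exercises a simple put/open/delete/isfile/list contract. A minimal in-memory sketch of that contract (not the real `vfs.DatastoreBackedFileSystem`, which also tracks metadata, drafts, and memcache) might be:

```python
import io


class InMemoryFileSystem(object):
    """Hypothetical sketch of the put/open/delete/isfile/list contract."""

    def __init__(self):
        self._files = {}

    def put(self, path, stream):
        # New content fully overrides old, as the test requires.
        self._files[path] = stream.read()

    def open(self, path):
        # Opening a non-existent file returns a falsy value, not an error.
        return self._files.get(path)

    def delete(self, path):
        # Deleting a non-existent file does not fail.
        return self._files.pop(path, None)

    def isfile(self, path):
        return path in self._files

    def list(self, root):
        # Sorted recursive listing of everything under root.
        prefix = root.rstrip('/') + '/'
        return sorted(p for p in self._files if p.startswith(prefix))
```

Usage mirrors the assertions in the test: listings are sorted, and missing paths are falsy for open, delete, and list alike.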
def test_utf8_datastore(self):
"""Test writing to and reading from datastore using UTF-8 content."""
event = models.EventEntity()
event.source = 'test-source'
event.user_id = 'test-user-id'
event.data = u'Test Data (тест данные)'
event.put()
stored_event = models.EventEntity().get_by_id([event.key().id()])
assert 1 == len(stored_event)
assert event.data == stored_event[0].data
def assert_queriable(self, entity, name, date_type=datetime.datetime):
"""Create some entities and check that single-property queries work."""
for i in range(1, 32):
item = entity(
key_name='%s_%s' % (date_type.__class__.__name__, i))
setattr(item, name, date_type(2012, 1, i))
item.put()
# Descending order.
items = entity.all().order('-%s' % name).fetch(1000)
assert len(items) == 31
assert getattr(items[0], name) == date_type(2012, 1, 31)
# Ascending order.
items = entity.all().order('%s' % name).fetch(1000)
assert len(items) == 31
assert getattr(items[0], name) == date_type(2012, 1, 1)
def test_indexed_properties(self):
"""Test whether entities support specific query types."""
# A 'DateProperty' or 'DateTimeProperty' of each persistent entity must
# be indexed, even if the application does not execute any queries that
# rely on the index. The index is still critically important for
# managing data, for example for bulk data download or for incremental
# computations. With the index, the entire table can be processed in
# daily, weekly, etc. chunks, and it is easy to query for new data. If
# we did not have an index, chunking would have to be done by the
# primary key, where it is impossible to separate recently
# added/modified rows from the rest of the data. Having this index adds
# to the cost of datastore writes, but we believe it is important to
# have it. Below we check that all persistent date/datetime properties
# are indexed.
self.assert_queriable(
AnnouncementEntity, 'date', date_type=datetime.date)
self.assert_queriable(models.EventEntity, 'recorded_on')
self.assert_queriable(models.Student, 'enrolled_on')
self.assert_queriable(models.StudentAnswersEntity, 'updated_on')
self.assert_queriable(jobs.DurableJobEntity, 'updated_on')
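The chunked processing that the comment above motivates can be sketched as a small helper that computes day boundaries for indexed range queries; this is a minimal illustration only (the `daily_chunks` helper, and the query shown in its docstring, are hypothetical and not part of this codebase):

```python
import datetime


def daily_chunks(start, end):
    """Yield (day, next_day) pairs covering [start, end) one day at a time.

    Each pair can drive an indexed range query, e.g.
        entity.all().filter('recorded_on >=', day)
                    .filter('recorded_on <', next_day)
    so the whole table is processed in bounded daily slices instead of
    one unbounded scan.
    """
    day = start
    while day < end:
        next_day = day + datetime.timedelta(days=1)
        # Clamp the final chunk so we never step past the requested end.
        yield day, min(next_day, end)
        day = next_day
```

Only a date/datetime index makes such range filters efficient, which is what the assertions below verify.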
def test_assets_and_data(self):
"""Verify semantics of all asset and data files."""
def echo(unused_message):
pass
warnings, errors = verify.Verifier().load_and_verify_model(echo)
assert not errors and not warnings
def test_config_visible_from_any_namespace(self):
"""Test that ConfigProperty is visible from any namespace."""
assert (
config.UPDATE_INTERVAL_SEC.value ==
config.UPDATE_INTERVAL_SEC.default_value)
new_value = config.UPDATE_INTERVAL_SEC.default_value + 5
# Add datastore override for known property.
prop = config.ConfigPropertyEntity(
key_name=config.UPDATE_INTERVAL_SEC.name)
prop.value = str(new_value)
prop.is_draft = False
prop.put()
# Check visible from default namespace.
config.Registry.last_update_time = 0
assert config.UPDATE_INTERVAL_SEC.value == new_value
# Check visible from another namespace.
old_namespace = namespace_manager.get_namespace()
try:
namespace_manager.set_namespace(
'ns-test_config_visible_from_any_namespace')
config.Registry.last_update_time = 0
assert config.UPDATE_INTERVAL_SEC.value == new_value
finally:
namespace_manager.set_namespace(old_namespace)
class AdminAspectTest(actions.TestBase):
"""Test site from the Admin perspective."""
def test_courses_page_for_multiple_courses(self):
"""Tests /admin page showing multiple courses."""
# Setup courses.
sites.setup_courses('course:/aaa::ns_a, course:/bbb::ns_b, course:/:/')
config.Registry.test_overrides[
models.CAN_USE_MEMCACHE.name] = True
# Validate the courses before import.
all_courses = sites.get_all_courses()
dst_app_context_a = all_courses[0]
dst_app_context_b = all_courses[1]
src_app_context = all_courses[2]
# This test requires a read-write file system; if the file system is
# read-only, we cannot run this test.
if (not dst_app_context_a.fs.is_read_write() or
not dst_app_context_b.fs.is_read_write()):
return
course_a = courses.Course(None, app_context=dst_app_context_a)
course_b = courses.Course(None, app_context=dst_app_context_b)
unused_course, course_a = course_a.import_from(src_app_context)
unused_course, course_b = course_b.import_from(src_app_context)
# Rename courses.
dst_app_context_a.fs.put(
dst_app_context_a.get_config_filename(),
vfs.string_to_stream(u'course:\n title: \'Course AAA\''))
dst_app_context_b.fs.put(
dst_app_context_b.get_config_filename(),
vfs.string_to_stream(u'course:\n title: \'Course BBB\''))
# Login.
email = 'test_courses_page_for_multiple_courses@google.com'
actions.login(email, True)
# Check the course listing page.
response = self.testapp.get('/admin')
assert_contains_all_of([
'Course AAA',
'/aaa/dashboard',
'Course BBB',
'/bbb/dashboard'], response.body)
# Clean up.
sites.reset_courses()
def test_python_console(self):
"""Test access rights to the Python console."""
email = 'test_python_console@google.com'
# The default is that the console should be turned off.
self.assertFalse(modules.admin.admin.DIRECT_CODE_EXECUTION_UI_ENABLED)
# Test the console when it is enabled.
modules.admin.admin.DIRECT_CODE_EXECUTION_UI_ENABLED = True
# Check normal user has no access.
actions.login(email)
response = self.testapp.get('/admin?action=console')
assert_equals(response.status_int, 302)
response = self.testapp.post('/admin?action=console')
assert_equals(response.status_int, 302)
# Check delegated admin is turned away with an explanatory message.
os.environ['gcb_admin_user_emails'] = '[%s]' % email
actions.login(email)
response = self.testapp.get('/admin?action=console')
assert_equals(response.status_int, 200)
assert_contains(
'You must be an actual admin user to continue.', response.body)
response = self.testapp.post('/admin?action=console')
assert_equals(response.status_int, 200)
assert_contains(
'You must be an actual admin user to continue.', response.body)
del os.environ['gcb_admin_user_emails']
# Check actual admin has access.
actions.login(email, True)
response = self.testapp.get('/admin?action=console')
assert_equals(response.status_int, 200)
response.form.set('code', 'print "foo" + "bar"')
response = self.submit(response.form)
assert_contains('foobar', response.body)
# Finally, test that the console is not found when it is disabled
modules.admin.admin.DIRECT_CODE_EXECUTION_UI_ENABLED = False
actions.login(email, True)
self.testapp.get('/admin?action=console', status=404)
self.testapp.post('/admin?action=console_run', status=404)
def test_non_admin_has_no_access(self):
"""Test non admin has no access to pages or REST endpoints."""
email = 'test_non_admin_has_no_access@google.com'
actions.login(email)
# Add datastore override.
prop = config.ConfigPropertyEntity(
key_name='gcb_config_update_interval_sec')
prop.value = '5'
prop.is_draft = False
prop.put()
# Check user has no access to specific pages and actions.
response = self.testapp.get('/admin?action=settings')
assert_equals(response.status_int, 302)
response = self.testapp.get(
'/admin?action=config_edit&name=gcb_admin_user_emails')
assert_equals(response.status_int, 302)
response = self.testapp.post(
'/admin?action=config_reset&name=gcb_admin_user_emails')
assert_equals(response.status_int, 302)
# Check user has no rights to GET verb.
response = self.testapp.get(
'/rest/config/item?key=gcb_config_update_interval_sec')
assert_equals(response.status_int, 200)
json_dict = transforms.loads(response.body)
assert json_dict['status'] == 401
assert json_dict['message'] == 'Access denied.'
# Here are the endpoints we want to test: (uri, xsrf_action_name).
endpoints = [
('/rest/config/item', 'config-property-put'),
('/rest/courses/item', 'add-course-put')]
# Check user has no rights to PUT verb.
payload_dict = {}
payload_dict['value'] = '666'
payload_dict['is_draft'] = False
request = {}
request['key'] = 'gcb_config_update_interval_sec'
request['payload'] = transforms.dumps(payload_dict)
for uri, unused_action in endpoints:
response = self.testapp.put(uri + '?%s' % urllib.urlencode(
{'request': transforms.dumps(request)}), {})
assert_equals(response.status_int, 200)
assert_contains('"status": 403', response.body)
# Check user still has no rights to PUT verb even if they somehow
# obtained a valid XSRF token.
for uri, action in endpoints:
request['xsrf_token'] = XsrfTokenManager.create_xsrf_token(action)
response = self.testapp.put(uri + '?%s' % urllib.urlencode(
{'request': transforms.dumps(request)}), {})
assert_equals(response.status_int, 200)
json_dict = transforms.loads(response.body)
assert json_dict['status'] == 401
assert json_dict['message'] == 'Access denied.'
def test_admin_list(self):
"""Test delegation of admin access to another user."""
email = 'test_admin_list@google.com'
actions.login(email)
# Add environment variable override.
os.environ['gcb_admin_user_emails'] = '[%s]' % email
# Add datastore override.
prop = config.ConfigPropertyEntity(
key_name='gcb_config_update_interval_sec')
prop.value = '5'
prop.is_draft = False
prop.put()
# Check user has access now.
response = self.testapp.get('/admin?action=settings')
assert_equals(response.status_int, 200)
# Check overrides are active and have proper management actions.
assert_contains('gcb_admin_user_emails', response.body)
assert_contains('[test_admin_list@google.com]', response.body)
assert_contains(
'/admin?action=config_override&name=gcb_admin_user_emails',
response.body)
assert_contains(
'/admin?action=config_edit&name=gcb_config_update_interval_sec',
response.body)
# Check editor page has proper actions.
response = self.testapp.get(
'/admin?action=config_edit&name=gcb_config_update_interval_sec')
assert_equals(response.status_int, 200)
assert_contains('/admin?action=config_reset', response.body)
assert_contains('name=gcb_config_update_interval_sec', response.body)
# Remove override.
del os.environ['gcb_admin_user_emails']
# Check user has no access.
response = self.testapp.get('/admin?action=settings')
assert_equals(response.status_int, 302)
def test_access_to_admin_pages(self):
"""Test access to admin pages."""
# assert anonymous user has no access
response = self.testapp.get('/admin?action=settings')
assert_equals(response.status_int, 302)
# assert admin user has access
email = 'test_access_to_admin_pages@google.com'
name = 'Test Access to Admin Pages'
actions.login(email, True)
actions.register(self, name)
response = self.testapp.get('/admin')
assert_contains('Power Searching with Google', response.body)
assert_contains('All Courses', response.body)
response = self.testapp.get('/admin?action=settings')
assert_contains('gcb_admin_user_emails', response.body)
assert_contains('gcb_config_update_interval_sec', response.body)
assert_contains('All Settings', response.body)
response = self.testapp.get('/admin?action=perf')
assert_contains('gcb-admin-uptime-sec:', response.body)
assert_contains('In-process Performance Counters', response.body)
response = self.testapp.get('/admin?action=deployment')
assert_contains('application_id: testbed-test', response.body)
assert_contains('About the Application', response.body)
actions.unregister(self)
actions.logout()
# assert not-admin user has no access
actions.login(email)
actions.register(self, name)
response = self.testapp.get('/admin?action=settings')
assert_equals(response.status_int, 302)
def test_multiple_courses(self):
"""Test courses admin page with two courses configured."""
sites.setup_courses(
'course:/foo:/foo-data, course:/bar:/bar-data:nsbar')
email = 'test_multiple_courses@google.com'
actions.login(email, True)
response = self.testapp.get('/admin')
assert_contains('Course Builder > Admin > Courses', response.body)
assert_contains('Total: 2 item(s)', response.body)
# Check course URLs.
assert_contains('<a href="/foo/dashboard">', response.body)
assert_contains('<a href="/bar/dashboard">', response.body)
# Check content locations.
assert_contains('/foo-data', response.body)
assert_contains('/bar-data', response.body)
# Check namespaces.
assert_contains('gcb-course-foo-data', response.body)
assert_contains('nsbar', response.body)
# Clean up.
sites.reset_courses()
def test_add_course(self):
"""Tests adding a new course entry."""
if not self.supports_editing:
return
email = 'test_add_course@google.com'
actions.login(email, True)
# Prepare request data.
payload_dict = {
'name': 'add_new',
'title': u'new course (тест данные)', 'admin_email': 'foo@bar.com'}
request = {}
request['payload'] = transforms.dumps(payload_dict)
request['xsrf_token'] = XsrfTokenManager.create_xsrf_token(
'add-course-put')
# Execute action.
response = self.testapp.put('/rest/courses/item?%s' % urllib.urlencode(
{'request': transforms.dumps(request)}), {})
assert_equals(response.status_int, 200)
# Check response.
json_dict = transforms.loads(transforms.loads(response.body)['payload'])
assert 'course:/add_new::ns_add_new' == json_dict.get('entry')
# Re-execute action; should fail as this would create a duplicate.
response = self.testapp.put('/rest/courses/item?%s' % urllib.urlencode(
{'request': transforms.dumps(request)}), {})
assert_equals(response.status_int, 200)
assert_equals(412, transforms.loads(response.body)['status'])
# Load the course and check its title.
new_app_context = sites.get_all_courses(
'course:/add_new::ns_add_new')[0]
assert_equals(u'new course (тест данные)', new_app_context.get_title())
new_course = courses.Course(None, app_context=new_app_context)
assert not new_course.get_units()
class CourseAuthorAspectTest(actions.TestBase):
"""Tests the site from the Course Author perspective."""
def test_dashboard(self):
"""Test course dashboard."""
email = 'test_dashboard@google.com'
name = 'Test Dashboard'
# Non-admin doesn't have access.
actions.login(email)
response = self.get('dashboard')
assert_equals(response.status_int, 302)
actions.register(self, name)
response = self.get('dashboard')
assert_equals(response.status_int, 302)
actions.logout()
# Admin has access.
actions.login(email, True)
response = self.get('dashboard')
assert_contains('Google > Dashboard > Outline', response.body)
# Tests outline view.
response = self.get('dashboard')
assert_contains('Unit 3 - Advanced techniques', response.body)
assert_contains('data/lesson.csv', response.body)
# Check editability.
if self.supports_editing:
assert_contains('Add Assessment', response.body)
else:
assert_does_not_contain('Add Assessment', response.body)
# Test assets view.
response = self.get('dashboard?action=assets')
assert_contains('Google > Dashboard > Assets', response.body)
assert_contains('assets/css/main.css', response.body)
assert_contains('assets/img/Image1.5.png', response.body)
assert_contains('assets/js/activity-3.2.js', response.body)
# Test settings view.
response = self.get('dashboard?action=settings')
assert_contains(
'Google > Dashboard > Settings', response.body)
assert_contains('course.yaml', response.body)
assert_contains(
'title: \'Power Searching with Google\'', response.body)
assert_contains('locale: \'en_US\'', response.body)
# Check editability.
if self.supports_editing:
assert_contains('create_or_edit_settings', response.body)
else:
assert_does_not_contain('create_or_edit_settings', response.body)
# Tests student statistics view.
response = self.get('dashboard?action=students')
assert_contains(
'Google > Dashboard > Students', response.body)
assert_contains('have not been calculated yet', response.body)
compute_form = response.forms['gcb-compute-student-stats']
response = self.submit(compute_form)
assert_equals(response.status_int, 302)
assert len(self.taskq.GetTasks('default')) == 1
response = self.get('dashboard?action=students')
assert_contains('is running', response.body)
self.execute_all_deferred_tasks()
response = self.get('dashboard?action=students')
assert_contains('were last updated on', response.body)
assert_contains('currently enrolled: 1', response.body)
assert_contains('total: 1', response.body)
# Tests assessment statistics.
old_namespace = namespace_manager.get_namespace()
namespace_manager.set_namespace(self.namespace)
try:
for i in range(5):
student = models.Student(key_name='key-%s' % i)
student.is_enrolled = True
student.scores = transforms.dumps({'test-assessment': i})
student.put()
finally:
namespace_manager.set_namespace(old_namespace)
response = self.get('dashboard?action=students')
compute_form = response.forms['gcb-compute-student-stats']
response = self.submit(compute_form)
self.execute_all_deferred_tasks()
response = self.get('dashboard?action=students')
assert_contains('currently enrolled: 6', response.body)
assert_contains(
'test-assessment: completed 5, average score 2.0', response.body)
def test_trigger_sample_announcements(self):
"""Test course author can trigger adding sample announcements."""
email = 'test_announcements@google.com'
name = 'Test Announcements'
actions.login(email, True)
actions.register(self, name)
response = actions.view_announcements(self)
assert_contains('Example Announcement', response.body)
assert_contains('Welcome to the final class!', response.body)
assert_does_not_contain('No announcements yet.', response.body)
def test_manage_announcements(self):
"""Test course author can manage announcements."""
email = 'test_announcements@google.com'
name = 'Test Announcements'
actions.login(email, True)
actions.register(self, name)
# add new
response = actions.view_announcements(self)
add_form = response.forms['gcb-add-announcement']
response = self.submit(add_form)
assert_equals(response.status_int, 302)
# check edit form rendering
response = self.testapp.get(response.location)
assert_equals(response.status_int, 200)
assert_contains('/rest/announcements/item?key=', response.body)
# check added
response = actions.view_announcements(self)
assert_contains('Sample Announcement (Draft)', response.body)
# delete draft
response = actions.view_announcements(self)
delete_form = response.forms['gcb-delete-announcement-1']
response = self.submit(delete_form)
assert_equals(response.status_int, 302)
# check deleted
assert_does_not_contain('Welcome to the final class!', response.body)
def test_announcements_rest(self):
"""Test REST access to announcements."""
email = 'test_announcements_rest@google.com'
name = 'Test Announcements Rest'
actions.login(email, True)
actions.register(self, name)
response = actions.view_announcements(self)
assert_does_not_contain('My Test Title', response.body)
# REST GET existing item
items = AnnouncementEntity.all().fetch(1)
for item in items:
response = self.get('rest/announcements/item?key=%s' % item.key())
json_dict = transforms.loads(response.body)
assert json_dict['status'] == 200
assert 'message' in json_dict
assert 'payload' in json_dict
payload_dict = transforms.loads(json_dict['payload'])
assert 'title' in payload_dict
assert 'date' in payload_dict
# REST PUT item
payload_dict['title'] = u'My Test Title Мой заголовок теста'
payload_dict['date'] = '2012/12/31'
payload_dict['is_draft'] = True
request = {}
request['key'] = str(item.key())
request['payload'] = transforms.dumps(payload_dict)
# Check XSRF is required.
response = self.put('rest/announcements/item?%s' % urllib.urlencode(
{'request': transforms.dumps(request)}), {})
assert_equals(response.status_int, 200)
assert_contains('"status": 403', response.body)
# Check PUT works.
request['xsrf_token'] = json_dict['xsrf_token']
response = self.put('rest/announcements/item?%s' % urllib.urlencode(
{'request': transforms.dumps(request)}), {})
assert_equals(response.status_int, 200)
assert_contains('"status": 200', response.body)
# Confirm change is visible on the page.
response = self.get('announcements')
assert_contains(
u'My Test Title Мой заголовок теста (Draft)', response.body)
# REST GET a non-existent item
response = self.get('rest/announcements/item?key=not_existent_key')
json_dict = transforms.loads(response.body)
assert json_dict['status'] == 404
class StudentAspectTest(actions.TestBase):
"""Test the site from the Student perspective."""
def test_view_announcements(self):
"""Test student aspect of announcements."""
email = 'test_announcements@google.com'
name = 'Test Announcements'
actions.login(email)
actions.register(self, name)
# Check no announcements yet.
response = actions.view_announcements(self)
assert_does_not_contain('Example Announcement', response.body)
assert_does_not_contain('Welcome to the final class!', response.body)
assert_contains('No announcements yet.', response.body)
actions.logout()
# Login as admin and add announcements.
actions.login('admin@sample.com', True)
actions.register(self, 'admin')
response = actions.view_announcements(self)
actions.logout()
# Check we can see non-draft announcements.
actions.login(email)
response = actions.view_announcements(self)
assert_contains('Example Announcement', response.body)
assert_does_not_contain('Welcome to the final class!', response.body)
assert_does_not_contain('No announcements yet.', response.body)
# Check there is no access to draft announcements via the REST handler.
items = AnnouncementEntity.all().fetch(1000)
for item in items:
response = self.get('rest/announcements/item?key=%s' % item.key())
if item.is_draft:
json_dict = transforms.loads(response.body)
assert json_dict['status'] == 401
else:
assert_equals(response.status_int, 200)
def test_registration(self):
"""Test student registration."""
email = 'test_registration@example.com'
name1 = 'Test Student'
name2 = 'John Smith'
name3 = u'Pavel Simakov (тест данные)'
actions.login(email)
actions.register(self, name1)
actions.check_profile(self, name1)
actions.change_name(self, name2)
actions.unregister(self)
actions.register(self, name3)
actions.check_profile(self, name3)
def test_course_not_available(self):
"""Tests course is only accessible to author when incomplete."""
email = 'test_course_not_available@example.com'
name = 'Test Course Not Available'
actions.login(email)
actions.register(self, name)
# Check preview and static resources are available.
response = self.get('course')
assert_equals(response.status_int, 200)
response = self.get('assets/js/activity-1.4.js')
assert_equals(response.status_int, 200)
# Override course.yaml settings by patching app_context.
get_environ_old = sites.ApplicationContext.get_environ
def get_environ_new(self):
environ = get_environ_old(self)
environ['course']['now_available'] = False
return environ
sites.ApplicationContext.get_environ = get_environ_new
# Check preview and static resources are not available to Student.
response = self.get('course', expect_errors=True)
assert_equals(response.status_int, 404)
response = self.get('assets/js/activity-1.4.js', expect_errors=True)
assert_equals(response.status_int, 404)
# Check preview and static resources are still available to author.
actions.login(email, True)
response = self.get('course')
assert_equals(response.status_int, 200)
response = self.get('assets/js/activity-1.4.js')
assert_equals(response.status_int, 200)
# Clean up app_context.
sites.ApplicationContext.get_environ = get_environ_old
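The patch-and-restore idiom used above (and again in test_registration_closed below) can be wrapped in a context manager so the original get_environ is always restored, even when an assertion fails mid-test; a minimal sketch under that assumption (the `override_environ` helper is hypothetical, not part of this codebase):

```python
import contextlib


@contextlib.contextmanager
def override_environ(app_context_cls, key, value):
    """Temporarily override one 'course' setting for every request.

    Patches app_context_cls.get_environ on entry and restores the
    original method on exit, even if the body raises.
    """
    get_environ_old = app_context_cls.get_environ

    def get_environ_new(self):
        environ = get_environ_old(self)
        environ['course'][key] = value
        return environ

    app_context_cls.get_environ = get_environ_new
    try:
        yield
    finally:
        # Restore unconditionally so later tests see the real settings.
        app_context_cls.get_environ = get_environ_old
```

With such a helper, the manual `get_environ_old` bookkeeping and trailing cleanup line collapse into a single `with` block.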
def test_registration_closed(self):
"""Test student registration when course is full."""
email = 'test_registration_closed@example.com'
name = 'Test Registration Closed'
# Override course.yaml settings by patching app_context.
get_environ_old = sites.ApplicationContext.get_environ
def get_environ_new(self):
environ = get_environ_old(self)
environ['reg_form']['can_register'] = False
return environ
sites.ApplicationContext.get_environ = get_environ_new
# Try to login and register.
actions.login(email)
try:
actions.register(self, name)
raise actions.ShouldHaveFailedByNow(
'Expected to fail: new registrations should not be allowed '
'when registration is closed.')
except actions.ShouldHaveFailedByNow as e:
raise e
except:
pass
# Clean up app_context.
sites.ApplicationContext.get_environ = get_environ_old
def test_permissions(self):
"""Test student permissions, and which pages they can view."""
email = 'test_permissions@example.com'
name = 'Test Permissions'
actions.login(email)
actions.register(self, name)
actions.Permissions.assert_enrolled(self)
actions.unregister(self)
actions.Permissions.assert_unenrolled(self)
actions.register(self, name)
actions.Permissions.assert_enrolled(self)
def test_login_and_logout(self):
"""Test if login and logout behave as expected."""
email = 'test_login_logout@example.com'
actions.Permissions.assert_logged_out(self)
actions.login(email)
actions.Permissions.assert_unenrolled(self)
actions.logout()
actions.Permissions.assert_logged_out(self)
def test_lesson_activity_navigation(self):
"""Test navigation between lesson/activity pages."""
email = 'test_lesson_activity_navigation@example.com'
name = 'Test Lesson Activity Navigation'
actions.login(email)
actions.register(self, name)
response = self.get('unit?unit=1&lesson=1')
assert_does_not_contain('Previous Page', response.body)
assert_contains('Next Page', response.body)
response = self.get('unit?unit=2&lesson=3')
assert_contains('Previous Page', response.body)
assert_contains('Next Page', response.body)
response = self.get('unit?unit=3&lesson=5')
assert_contains('Previous Page', response.body)
assert_does_not_contain('Next Page', response.body)
assert_contains('End', response.body)
def test_attempt_activity_event(self):
"""Test activity attempt generates event."""
email = 'test_attempt_activity_event@example.com'
name = 'Test Attempt Activity Event'
actions.login(email)
actions.register(self, name)
# Enable event recording.
config.Registry.test_overrides[
lessons.CAN_PERSIST_ACTIVITY_EVENTS.name] = True
# Prepare event.
request = {}
request['source'] = 'test-source'
request['payload'] = transforms.dumps({'Alice': u'Bob (тест данные)'})
# Check XSRF token is required.
response = self.post('rest/events?%s' % urllib.urlencode(
{'request': transforms.dumps(request)}), {})
assert_equals(response.status_int, 200)
assert_contains('"status": 403', response.body)
# Check PUT works.
request['xsrf_token'] = XsrfTokenManager.create_xsrf_token(
'event-post')
response = self.post('rest/events?%s' % urllib.urlencode(
{'request': transforms.dumps(request)}), {})
assert_equals(response.status_int, 200)
assert not response.body
# Check event is properly recorded.
old_namespace = namespace_manager.get_namespace()
namespace_manager.set_namespace(self.namespace)
try:
events = models.EventEntity.all().fetch(1000)
assert 1 == len(events)
assert_contains(
u'Bob (тест данные)',
transforms.loads(events[0].data)['Alice'])
finally:
namespace_manager.set_namespace(old_namespace)
# Clean up.
config.Registry.test_overrides = {}
def test_two_students_dont_see_each_other_pages(self):
"""Test a user can't see another user pages."""
email1 = 'user1@foo.com'
name1 = 'User 1'
email2 = 'user2@foo.com'
name2 = 'User 2'
# Login as one user and view 'unit' and other pages, which are not
# cached.
actions.login(email1)
actions.register(self, name1)
actions.Permissions.assert_enrolled(self)
response = actions.view_unit(self)
assert_contains(email1, response.body)
actions.logout()
# Login as another user and check that 'unit' and other pages show
# the correct new email.
actions.login(email2)
actions.register(self, name2)
actions.Permissions.assert_enrolled(self)
response = actions.view_unit(self)
assert_contains(email2, response.body)
actions.logout()
def test_xsrf_defence(self):
"""Test defense against XSRF attack."""
email = 'test_xsrf_defence@example.com'
name = 'Test Xsrf Defence'
actions.login(email)
actions.register(self, name)
response = self.get('student/home')
response.form.set('name', 'My New Name')
response.form.set('xsrf_token', 'bad token')
response = response.form.submit(expect_errors=True)
assert_equals(response.status_int, 403)
def test_autoescaping(self):
"""Test Jinja autoescaping."""
email = 'test_autoescaping@example.com'
name1 = '<script>alert(1);</script>'
name2 = '<script>alert(2);</script>'
actions.login(email)
actions.register(self, name1)
actions.check_profile(self, name1)
actions.change_name(self, name2)
actions.unregister(self)
def test_response_headers(self):
"""Test dynamically-generated responses use proper headers."""
email = 'test_response_headers@example.com'
name = 'Test Response Headers'
actions.login(email)
actions.register(self, name)
response = self.get('student/home')
assert_equals(response.status_int, 200)
assert_contains('must-revalidate', response.headers['Cache-Control'])
assert_contains('no-cache', response.headers['Cache-Control'])
assert_contains('no-cache', response.headers['Pragma'])
assert_contains('Mon, 01 Jan 1990', response.headers['Expires'])
class StaticHandlerTest(actions.TestBase):
"""Check serving of static resources."""
def test_static_files_cache_control(self):
"""Test static/zip handlers use proper Cache-Control headers."""
# Check static handler.
response = self.get('/assets/css/main.css')
assert_equals(response.status_int, 200)
assert_contains('max-age=600', response.headers['Cache-Control'])
assert_contains('public', response.headers['Cache-Control'])
assert_does_not_contain('no-cache', response.headers['Cache-Control'])
# Check zip file handler.
response = self.testapp.get(
'/static/inputex-3.1.0/src/inputex/assets/skins/sam/inputex.css')
assert_equals(response.status_int, 200)
assert_contains('max-age=600', response.headers['Cache-Control'])
assert_contains('public', response.headers['Cache-Control'])
assert_does_not_contain('no-cache', response.headers['Cache-Control'])
class ActivityTest(actions.TestBase):
"""Test for activities."""
def get_activity(self, unit_id, lesson_id, args):
"""Retrieve the activity page for a given unit and lesson id."""
response = self.get('activity?unit=%s&lesson=%s' % (unit_id, lesson_id))
assert_equals(response.status_int, 200)
assert_contains(
'<script src="assets/js/activity-%s.%s.js"></script>' %
(unit_id, lesson_id), response.body)
assert_contains('assets/lib/activity-generic-1.3.js', response.body)
js_response = self.get('assets/lib/activity-generic-1.3.js')
assert_equals(js_response.status_int, 200)
# Extract XSRF token from the page.
match = re.search(r'eventXsrfToken = [\']([^\']+)', response.body)
assert match
xsrf_token = match.group(1)
args['xsrf_token'] = xsrf_token
return response, args
def test_activities(self):
"""Test that activity submissions are handled and recorded correctly."""
email = 'test_activities@google.com'
name = 'Test Activities'
unit_id = 1
lesson_id = 2
activity_submissions = {
'1.2': {
'index': 3,
'type': 'activity-choice',
'value': 3,
'correct': True,
},
}
# Register.
actions.login(email)
actions.register(self, name)
# Enable event recording.
config.Registry.test_overrides[
lessons.CAN_PERSIST_ACTIVITY_EVENTS.name] = True
# Navigate to the course overview page, and check that the unit shows
# no progress yet.
response = self.get('course')
assert_equals(response.status_int, 200)
assert_contains(u'id="progress-notstarted-%s"' % unit_id, response.body)
old_namespace = namespace_manager.get_namespace()
namespace_manager.set_namespace(self.namespace)
try:
response, args = self.get_activity(unit_id, lesson_id, {})
# Check that the current activity shows no progress yet.
assert_contains(
u'id="progress-notstarted-%s"' % lesson_id, response.body)
# Prepare activity submission event.
args['source'] = 'attempt-activity'
lesson_key = '%s.%s' % (unit_id, lesson_id)
assert lesson_key in activity_submissions
args['payload'] = activity_submissions[lesson_key]
args['payload']['location'] = (
'http://localhost:8080/activity?unit=%s&lesson=%s' %
(unit_id, lesson_id))
args['payload'] = transforms.dumps(args['payload'])
# Submit the request to the backend.
response = self.post('rest/events?%s' % urllib.urlencode(
{'request': transforms.dumps(args)}), {})
assert_equals(response.status_int, 200)
assert not response.body
# Check that the current activity shows partial progress.
response, args = self.get_activity(unit_id, lesson_id, {})
assert_contains(
u'id="progress-inprogress-%s"' % lesson_id, response.body)
# Navigate to the course overview page and check that the unit shows
# partial progress.
response = self.get('course')
assert_equals(response.status_int, 200)
assert_contains(
u'id="progress-inprogress-%s"' % unit_id, response.body)
finally:
namespace_manager.set_namespace(old_namespace)
def test_progress(self):
"""Test student activity progress in detail, using the sample course."""
class FakeHandler(object):
def __init__(self, app_context):
self.app_context = app_context
course = Course(FakeHandler(sites.get_all_courses()[0]))
tracker = course.get_progress_tracker()
student = models.Student(key_name='key-test-student')
# Initially, all progress entries should be set to zero.
unit_progress = tracker.get_unit_progress(student)
for key in unit_progress:
assert unit_progress[key] == 0
lesson_progress = tracker.get_lesson_progress(student, 1)
for key in lesson_progress:
assert lesson_progress[key] == 0
# The blocks in Lesson 1.2 with activities are blocks 3 and 6.
# Submitting block 3 should trigger an in-progress update.
tracker.put_block_completed(student, 1, 2, 3)
assert tracker.get_unit_progress(student)['1'] == 1
assert tracker.get_lesson_progress(student, 1)[2] == 1
# Submitting block 6 should trigger a completion update for Lesson 1.2.
tracker.put_block_completed(student, 1, 2, 6)
assert tracker.get_unit_progress(student)['1'] == 1
assert tracker.get_lesson_progress(student, 1)[2] == 2
# Test a lesson with no interactive blocks in its activity. It should
# change its status to 'completed' once it is accessed.
tracker.put_activity_accessed(student, 2, 1)
assert tracker.get_unit_progress(student)['2'] == 1
assert tracker.get_lesson_progress(student, 2)[1] == 2
# Test that a lesson without activities (Lesson 1.1) doesn't count.
# Complete lessons 1.3, 1.4, 1.5 and 1.6; unit 1 should then be marked
# as 'completed' even though we have no events associated with
# Lesson 1.1.
tracker.put_activity_completed(student, 1, 3)
assert tracker.get_unit_progress(student)['1'] == 1
tracker.put_activity_completed(student, 1, 4)
assert tracker.get_unit_progress(student)['1'] == 1
tracker.put_activity_completed(student, 1, 5)
assert tracker.get_unit_progress(student)['1'] == 1
tracker.put_activity_completed(student, 1, 6)
assert tracker.get_unit_progress(student)['1'] == 2
# Test that a unit is not completed until all activity pages have been,
# at least, visited. Unit 6 has 3 lessons; the last one has no
# activity block.
tracker.put_activity_completed(student, 6, 1)
tracker.put_activity_completed(student, 6, 2)
assert tracker.get_unit_progress(student)['6'] == 1
tracker.put_activity_accessed(student, 6, 3)
assert tracker.get_unit_progress(student)['6'] == 2
# Test assessment counters.
pre_id = 'Pre'
tracker.put_assessment_completed(student, pre_id)
progress = tracker.get_or_create_progress(student)
assert tracker.is_assessment_completed(progress, pre_id)
assert tracker.get_assessment_status(progress, pre_id) == 1
tracker.put_assessment_completed(student, pre_id)
progress = tracker.get_or_create_progress(student)
assert tracker.is_assessment_completed(progress, pre_id)
assert tracker.get_assessment_status(progress, pre_id) == 2
tracker.put_assessment_completed(student, pre_id)
progress = tracker.get_or_create_progress(student)
assert tracker.is_assessment_completed(progress, pre_id)
assert tracker.get_assessment_status(progress, pre_id) == 3
# Test that invalid keys do not lead to any updates.
# Invalid assessment id.
fake_id = 'asdf'
tracker.put_assessment_completed(student, fake_id)
progress = tracker.get_or_create_progress(student)
assert not tracker.is_assessment_completed(progress, fake_id)
assert tracker.get_assessment_status(progress, fake_id) is None
# Invalid unit id.
tracker.put_activity_completed(student, fake_id, 1)
progress = tracker.get_or_create_progress(student)
assert tracker.get_activity_status(progress, fake_id, 1) is None
# Invalid lesson id.
fake_numeric_id = 22
tracker.put_activity_completed(student, 1, fake_numeric_id)
progress = tracker.get_or_create_progress(student)
assert tracker.get_activity_status(progress, 1, fake_numeric_id) is None
# Invalid block id.
tracker.put_block_completed(student, 5, 2, fake_numeric_id)
progress = tracker.get_or_create_progress(student)
assert not tracker.is_block_completed(
progress, 5, 2, fake_numeric_id)
class AssessmentTest(actions.TestBase):
"""Test for assessments."""
def test_course_pass(self):
"""Test student passing final exam."""
email = 'test_pass@google.com'
name = 'Test Pass'
post = {'assessment_type': 'Fin', 'score': '100.00'}
# Register.
actions.login(email)
actions.register(self, name)
# Submit answer.
response = actions.submit_assessment(self, 'Fin', post)
assert_equals(response.status_int, 200)
assert_contains('your overall course score of 70%', response.body)
assert_contains('you have passed the course', response.body)
# Check that the result shows up on the profile page.
response = actions.check_profile(self, name)
assert_contains('70', response.body)
assert_contains('100', response.body)
def test_assessments(self):
"""Test assessment scores are properly submitted and summarized."""
course = courses.Course(None, app_context=sites.get_all_courses()[0])
email = 'test_assessments@google.com'
name = 'Test Assessments'
pre_answers = [{'foo': 'bar'}, {'Alice': u'Bob (тест данные)'}]
pre = {
'assessment_type': 'Pre', 'score': '1.00',
'answers': transforms.dumps(pre_answers)}
mid = {'assessment_type': 'Mid', 'score': '2.00'}
fin = {'assessment_type': 'Fin', 'score': '3.00'}
second_mid = {'assessment_type': 'Mid', 'score': '1.00'}
second_fin = {'assessment_type': 'Fin', 'score': '100000'}
# Register.
actions.login(email)
actions.register(self, name)
# Navigate to the course overview page.
response = self.get('course')
assert_equals(response.status_int, 200)
assert_does_not_contain(u'id="progress-completed-Mid', response.body)
assert_contains(u'id="progress-notstarted-Mid', response.body)
old_namespace = namespace_manager.get_namespace()
namespace_manager.set_namespace(self.namespace)
try:
student = models.Student.get_enrolled_student_by_email(email)
# Check that three score objects (corresponding to the Pre, Mid and
# Fin assessments) exist right now, and that they all have zero
# score.
student_scores = course.get_all_scores(student)
assert len(student_scores) == 3
for assessment in student_scores:
assert assessment['score'] == 0
# Submit assessments and check that the score is updated.
actions.submit_assessment(self, 'Pre', pre)
student = models.Student.get_enrolled_student_by_email(email)
student_scores = course.get_all_scores(student)
assert len(student_scores) == 3
for assessment in student_scores:
if assessment['id'] == 'Pre':
assert assessment['score'] > 0
else:
assert assessment['score'] == 0
actions.submit_assessment(self, 'Mid', mid)
student = models.Student.get_enrolled_student_by_email(email)
# Navigate to the course overview page.
response = self.get('course')
assert_equals(response.status_int, 200)
assert_contains(u'id="progress-completed-Pre', response.body)
assert_contains(u'id="progress-completed-Mid', response.body)
assert_contains(u'id="progress-notstarted-Fin', response.body)
# Submit the final assessment.
actions.submit_assessment(self, 'Fin', fin)
student = models.Student.get_enrolled_student_by_email(email)
# Navigate to the course overview page.
response = self.get('course')
assert_equals(response.status_int, 200)
assert_contains(u'id="progress-completed-Fin', response.body)
# Check that the overall-score is non-zero.
assert course.get_overall_score(student)
# Check assessment answers.
answers = transforms.loads(
models.StudentAnswersEntity.get_by_key_name(
student.user_id).data)
assert pre_answers == answers['Pre']
# pylint: disable-msg=g-explicit-bool-comparison
assert [] == answers['Mid']
assert [] == answers['Fin']
# pylint: enable-msg=g-explicit-bool-comparison
# Check that scores are recorded properly.
student = models.Student.get_enrolled_student_by_email(email)
assert int(course.get_score(student, 'Pre')) == 1
assert int(course.get_score(student, 'Mid')) == 2
assert int(course.get_score(student, 'Fin')) == 3
assert (int(course.get_overall_score(student)) ==
int((0.30 * 2) + (0.70 * 3)))
# Try posting a new midcourse exam with a lower score;
# nothing should change.
actions.submit_assessment(self, 'Mid', second_mid)
student = models.Student.get_enrolled_student_by_email(email)
assert int(course.get_score(student, 'Pre')) == 1
assert int(course.get_score(student, 'Mid')) == 2
assert int(course.get_score(student, 'Fin')) == 3
assert (int(course.get_overall_score(student)) ==
int((0.30 * 2) + (0.70 * 3)))
# Now try posting a postcourse exam with a higher score and note
# the changes.
actions.submit_assessment(self, 'Fin', second_fin)
student = models.Student.get_enrolled_student_by_email(email)
assert int(course.get_score(student, 'Pre')) == 1
assert int(course.get_score(student, 'Mid')) == 2
assert int(course.get_score(student, 'Fin')) == 100000
assert (int(course.get_overall_score(student)) ==
int((0.30 * 2) + (0.70 * 100000)))
finally:
namespace_manager.set_namespace(old_namespace)
def remove_dir(dir_name):
"""Delete a directory."""
logging.info('removing folder: %s', dir_name)
if os.path.exists(dir_name):
shutil.rmtree(dir_name)
if os.path.exists(dir_name):
raise Exception('Failed to delete directory: %s' % dir_name)
def clean_dir(dir_name):
"""Clean a directory."""
remove_dir(dir_name)
logging.info('creating folder: %s', dir_name)
os.makedirs(dir_name)
if not os.path.exists(dir_name):
raise Exception('Failed to create directory: %s' % dir_name)
def clone_canonical_course_data(src, dst):
"""Makes a copy of canonical course content."""
clean_dir(dst)
def copytree(name):
shutil.copytree(
os.path.join(src, name),
os.path.join(dst, name))
copytree('assets')
copytree('data')
copytree('views')
shutil.copy(
os.path.join(src, 'course.yaml'),
os.path.join(dst, 'course.yaml'))
# Make all files writable.
for root, unused_dirs, files in os.walk(dst):
for afile in files:
fname = os.path.join(root, afile)
os.chmod(fname, 0o777)
class GeneratedCourse(object):
"""A helper class for a dynamically generated course content."""
@classmethod
def set_data_home(cls, test):
"""All data for this test will be placed here."""
cls.data_home = test.test_tempdir
def __init__(self, ns):
self.path = ns
@property
def namespace(self):
return 'ns%s' % self.path
@property
def title(self):
return u'Power Searching with Google title-%s (тест данные)' % self.path
@property
def unit_title(self):
return u'Interpreting results unit-title-%s (тест данные)' % self.path
@property
def lesson_title(self):
return u'Word order matters lesson-title-%s (тест данные)' % self.path
@property
def head(self):
return '<!-- head-%s -->' % self.path
@property
def css(self):
return '<!-- css-%s -->' % self.path
@property
def home(self):
return os.path.join(self.data_home, 'data-%s' % self.path)
@property
def email(self):
return 'walk_the_course_named_%s@google.com' % self.path
@property
def name(self):
return 'Walk The Course Named %s' % self.path
class MultipleCoursesTestBase(actions.TestBase):
"""Configures several courses for running concurrently."""
def modify_file(self, filename, find, replace):
"""Read, modify and write back the file."""
        with open(filename, 'r') as source:
            text = source.read().decode('utf-8')
        # Make sure the replacement text is not already in the file.
        assert replace not in text
        text = text.replace(find, replace)
        assert replace in text
        with open(filename, 'w') as target:
            target.write(text.encode('utf-8'))
def modify_canonical_course_data(self, course):
"""Modify canonical content by adding unique bits to it."""
self.modify_file(
os.path.join(course.home, 'course.yaml'),
'title: \'Power Searching with Google\'',
'title: \'%s\'' % course.title)
self.modify_file(
os.path.join(course.home, 'data/unit.csv'),
',Interpreting results,',
',%s,' % course.unit_title)
self.modify_file(
os.path.join(course.home, 'data/lesson.csv'),
',Word order matters,',
',%s,' % course.lesson_title)
self.modify_file(
os.path.join(course.home, 'data/lesson.csv'),
',Interpreting results,',
',%s,' % course.unit_title)
self.modify_file(
os.path.join(course.home, 'views/base.html'),
'<head>',
'<head>\n%s' % course.head)
self.modify_file(
os.path.join(course.home, 'assets/css/main.css'),
'html {',
'%s\nhtml {' % course.css)
def prepare_course_data(self, course):
"""Create unique course content for a course."""
clone_canonical_course_data(self.bundle_root, course.home)
self.modify_canonical_course_data(course)
def setUp(self): # pylint: disable-msg=g-bad-name
"""Configure the test."""
super(MultipleCoursesTestBase, self).setUp()
GeneratedCourse.set_data_home(self)
self.course_a = GeneratedCourse('a')
self.course_b = GeneratedCourse('b')
self.course_ru = GeneratedCourse('ru')
# Override BUNDLE_ROOT.
self.bundle_root = appengine_config.BUNDLE_ROOT
appengine_config.BUNDLE_ROOT = GeneratedCourse.data_home
# Prepare course content.
clean_dir(GeneratedCourse.data_home)
self.prepare_course_data(self.course_a)
self.prepare_course_data(self.course_b)
self.prepare_course_data(self.course_ru)
# Setup one course for I18N.
self.modify_file(
os.path.join(self.course_ru.home, 'course.yaml'),
'locale: \'en_US\'',
'locale: \'ru\'')
# Configure courses.
sites.setup_courses('%s, %s, %s' % (
'course:/courses/a:/data-a:nsa',
'course:/courses/b:/data-b:nsb',
'course:/courses/ru:/data-ru:nsru'))
def tearDown(self): # pylint: disable-msg=g-bad-name
"""Clean up."""
sites.reset_courses()
appengine_config.BUNDLE_ROOT = self.bundle_root
super(MultipleCoursesTestBase, self).tearDown()
def walk_the_course(
self, course, first_time=True, is_admin=False, logout=True):
"""Visit a course as a Student would."""
# Check normal user has no access.
actions.login(course.email, is_admin)
# Test schedule.
if first_time:
response = self.testapp.get('/courses/%s/preview' % course.path)
else:
response = self.testapp.get('/courses/%s/course' % course.path)
assert_contains(course.title, response.body)
assert_contains(course.unit_title, response.body)
assert_contains(course.head, response.body)
# Tests static resource.
response = self.testapp.get(
'/courses/%s/assets/css/main.css' % course.path)
assert_contains(course.css, response.body)
if first_time:
# Test registration.
response = self.get('/courses/%s/register' % course.path)
assert_contains(course.title, response.body)
assert_contains(course.head, response.body)
response.form.set('form01', course.name)
response.form.action = '/courses/%s/register' % course.path
response = self.submit(response.form)
assert_contains(course.title, response.body)
assert_contains(course.head, response.body)
assert_contains(course.title, response.body)
assert_contains(
'//groups.google.com/group/My-Course-Announce', response.body)
assert_contains(
'//groups.google.com/group/My-Course', response.body)
# Check lesson page.
response = self.testapp.get(
'/courses/%s/unit?unit=1&lesson=5' % course.path)
assert_contains(course.title, response.body)
assert_contains(course.lesson_title, response.body)
assert_contains(course.head, response.body)
# Check activity page.
response = self.testapp.get(
'/courses/%s/activity?unit=1&lesson=5' % course.path)
assert_contains(course.title, response.body)
assert_contains(course.lesson_title, response.body)
assert_contains(course.head, response.body)
if logout:
actions.logout()
class MultipleCoursesTest(MultipleCoursesTestBase):
"""Test several courses running concurrently."""
def test_courses_are_isolated(self):
"""Test each course serves its own assets, views and data."""
# Pretend students visit courses.
self.walk_the_course(self.course_a)
self.walk_the_course(self.course_b)
self.walk_the_course(self.course_a, first_time=False)
self.walk_the_course(self.course_b, first_time=False)
# Check course namespaced data.
self.validate_course_data(self.course_a)
self.validate_course_data(self.course_b)
# Check default namespace.
assert (
namespace_manager.get_namespace() ==
appengine_config.DEFAULT_NAMESPACE_NAME)
assert not models.Student.all().fetch(1000)
def validate_course_data(self, course):
"""Check course data is valid."""
old_namespace = namespace_manager.get_namespace()
namespace_manager.set_namespace(course.namespace)
try:
students = models.Student.all().fetch(1000)
assert len(students) == 1
for student in students:
assert_equals(course.email, student.key().name())
assert_equals(course.name, student.name)
finally:
namespace_manager.set_namespace(old_namespace)
class I18NTest(MultipleCoursesTestBase):
"""Test courses running in different locales and containing I18N content."""
def test_csv_supports_utf8(self):
"""Test UTF-8 content in CSV file is handled correctly."""
title_ru = u'Найди факты быстрее'
csv_file = os.path.join(self.course_ru.home, 'data/unit.csv')
self.modify_file(
csv_file, ',Find facts faster,', ',%s,' % title_ru)
self.modify_file(
os.path.join(self.course_ru.home, 'data/lesson.csv'),
',Find facts faster,', ',%s,' % title_ru)
rows = []
for row in csv.reader(open(csv_file)):
rows.append(row)
assert title_ru == rows[6][3].decode('utf-8')
response = self.get('/courses/%s/preview' % self.course_ru.path)
assert_contains(title_ru, response.body)
# Tests student perspective.
self.walk_the_course(self.course_ru, first_time=True)
self.walk_the_course(self.course_ru, first_time=False)
# Test course author dashboard.
self.walk_the_course(
self.course_ru, first_time=False, is_admin=True, logout=False)
def assert_page_contains(page_name, text_array):
dashboard_url = '/courses/%s/dashboard' % self.course_ru.path
response = self.get('%s?action=%s' % (dashboard_url, page_name))
for text in text_array:
assert_contains(text, response.body)
assert_page_contains('', [
title_ru, self.course_ru.unit_title, self.course_ru.lesson_title])
assert_page_contains(
'assets', [self.course_ru.title])
assert_page_contains(
'settings', [
self.course_ru.title,
vfs.AbstractFileSystem.normpath(self.course_ru.home)])
# Clean up.
actions.logout()
def test_i18n(self):
"""Test course is properly internationalized."""
response = self.get('/courses/%s/preview' % self.course_ru.path)
assert_contains_all_of(
[u'Войти', u'Регистрация', u'Расписание', u'Курс'], response.body)
class CourseUrlRewritingTestBase(actions.TestBase):
"""Prepare course for using rewrite rules and '/courses/pswg' base URL."""
def setUp(self): # pylint: disable-msg=g-bad-name
super(CourseUrlRewritingTestBase, self).setUp()
self.base = '/courses/pswg'
self.namespace = 'gcb-courses-pswg-tests-ns'
sites.setup_courses('course:%s:/:%s' % (self.base, self.namespace))
def tearDown(self): # pylint: disable-msg=g-bad-name
sites.reset_courses()
super(CourseUrlRewritingTestBase, self).tearDown()
def canonicalize(self, href, response=None):
"""Canonicalize URL's using either <base> or self.base."""
# Check if already canonicalized.
if href.startswith(
self.base) or utils.ApplicationHandler.is_absolute(href):
pass
else:
# Look for <base> tag in the response to compute the canonical URL.
if response:
return super(CourseUrlRewritingTestBase, self).canonicalize(
href, response)
# Prepend self.base to compute the canonical URL.
if not href.startswith('/'):
href = '/%s' % href
href = '%s%s' % (self.base, href)
self.audit_url(href)
return href
class VirtualFileSystemTestBase(actions.TestBase):
"""Prepares a course running on a virtual local file system."""
def setUp(self): # pylint: disable-msg=g-bad-name
"""Configure the test."""
super(VirtualFileSystemTestBase, self).setUp()
GeneratedCourse.set_data_home(self)
# Override BUNDLE_ROOT.
self.bundle_root = appengine_config.BUNDLE_ROOT
appengine_config.BUNDLE_ROOT = GeneratedCourse.data_home
# Prepare course content.
home_folder = os.path.join(GeneratedCourse.data_home, 'data-v')
clone_canonical_course_data(self.bundle_root, home_folder)
# Configure course.
self.namespace = 'nsv'
sites.setup_courses('course:/:/data-vfs:%s' % self.namespace)
# Modify app_context filesystem to map /data-v to /data-vfs.
def after_create(unused_cls, instance):
# pylint: disable-msg=protected-access
instance._fs = vfs.AbstractFileSystem(
vfs.LocalReadOnlyFileSystem(
os.path.join(GeneratedCourse.data_home, 'data-vfs'),
home_folder))
sites.ApplicationContext.after_create = after_create
def tearDown(self): # pylint: disable-msg=g-bad-name
"""Clean up."""
sites.reset_courses()
appengine_config.BUNDLE_ROOT = self.bundle_root
super(VirtualFileSystemTestBase, self).tearDown()
class DatastoreBackedCourseTest(actions.TestBase):
"""Prepares an empty course running on datastore-backed file system."""
def setUp(self): # pylint: disable-msg=g-bad-name
"""Configure the test."""
super(DatastoreBackedCourseTest, self).setUp()
self.supports_editing = True
self.namespace = 'dsbfs'
sites.setup_courses('course:/::%s' % self.namespace)
all_courses = sites.get_all_courses()
assert len(all_courses) == 1
self.app_context = all_courses[0]
def tearDown(self): # pylint: disable-msg=g-bad-name
"""Clean up."""
sites.reset_courses()
super(DatastoreBackedCourseTest, self).tearDown()
def upload_all_in_dir(self, dir_name, files_added):
"""Uploads all files in a folder to vfs."""
root_dir = os.path.join(appengine_config.BUNDLE_ROOT, dir_name)
for root, unused_dirs, files in os.walk(root_dir):
for afile in files:
filename = os.path.join(root, afile)
self.app_context.fs.put(filename, open(filename, 'rb'))
files_added.append(filename)
def init_course_data(self, upload_files):
"""Uploads required course data files into vfs."""
files_added = []
old_namespace = namespace_manager.get_namespace()
try:
namespace_manager.set_namespace(self.namespace)
upload_files(files_added)
# Normalize paths to be identical for Windows and Linux.
files_added_normpath = []
for file_added in files_added:
files_added_normpath.append(
vfs.AbstractFileSystem.normpath(file_added))
assert self.app_context.fs.list(
appengine_config.BUNDLE_ROOT) == sorted(files_added_normpath)
finally:
namespace_manager.set_namespace(old_namespace)
def upload_all_sample_course_files(self, files_added):
"""Uploads all sample course data files into vfs."""
self.upload_all_in_dir('assets', files_added)
self.upload_all_in_dir('views', files_added)
self.upload_all_in_dir('data', files_added)
course_yaml = os.path.join(
appengine_config.BUNDLE_ROOT, 'course.yaml')
self.app_context.fs.put(course_yaml, open(course_yaml, 'rb'))
files_added.append(course_yaml)
class DatastoreBackedCustomCourseTest(DatastoreBackedCourseTest):
"""Prepares a sample course running on datastore-backed file system."""
def test_course_import(self):
"""Test importing of the course."""
# Setup courses.
sites.setup_courses('course:/test::ns_test, course:/:/')
self.namespace = 'ns_test'
self.base = '/test'
config.Registry.test_overrides[
models.CAN_USE_MEMCACHE.name] = True
# Format import payload and URL.
payload_dict = {}
payload_dict['course'] = 'course:/:/'
request = {}
request['payload'] = transforms.dumps(payload_dict)
import_put_url = (
'/test/rest/course/import?%s' % urllib.urlencode(
{'request': transforms.dumps(request)}))
# Check non-logged user has no rights.
response = self.testapp.put(import_put_url, {}, expect_errors=True)
assert_equals(404, response.status_int)
# Login as admin.
email = 'test_course_import@google.com'
name = 'Test Course Import'
actions.login(email, is_admin=True)
# Check course is empty.
response = self.get('/test/dashboard')
assert_equals(200, response.status_int)
assert_does_not_contain('Filter image results by color', response.body)
# Import sample course.
response = self.put(import_put_url, {})
assert_equals(200, response.status_int)
assert_contains('Imported.', response.body)
# Check course is not empty.
response = self.get('/test/dashboard')
assert_contains('Filter image results by color', response.body)
# Check assessment is copied.
response = self.get('/test/assets/js/assessment-21.js')
assert_equals(200, response.status_int)
assert_contains('Humane Society website', response.body)
# Check activity is copied.
response = self.get('/test/assets/js/activity-37.js')
assert_equals(200, response.status_int)
assert_contains('explore ways to keep yourself updated', response.body)
unit_2_title = 'Unit 2 - Interpreting results'
lesson_2_1_title = '2.1 When search results suggest something new'
lesson_2_2_title = '2.2 Thinking more deeply about your search'
# Check units and lessons are indexed correctly.
response = self.get('/test/preview')
assert_contains(unit_2_title, response.body)
actions.register(self, name)
response = self.get('/test/course')
assert_contains(unit_2_title, response.body)
# Unit page.
response = self.get('/test/unit?unit=9')
assert_contains( # A unit title.
unit_2_title, response.body)
assert_contains( # First child lesson without link.
lesson_2_1_title, response.body)
assert_contains( # Second child lesson with link.
lesson_2_2_title, response.body)
assert_contains_all_of( # Breadcrumbs.
['Unit 2</a></li>', 'Lesson 1</li>'], response.body)
        # Activity page.
response = self.get('/test/activity?unit=9&lesson=10')
assert_contains( # A unit title.
unit_2_title, response.body)
assert_contains( # An activity title.
'Lesson 2.1 Activity', response.body)
assert_contains( # First child lesson without link.
lesson_2_1_title, response.body)
assert_contains( # Second child lesson with link.
lesson_2_2_title, response.body)
assert_contains_all_of( # Breadcrumbs.
['Unit 2</a></li>', 'Lesson 1</a></li>'], response.body)
# Clean up.
sites.reset_courses()
config.Registry.test_overrides = {}
def test_get_put_file(self):
"""Test that one can put/get file via REST interface."""
self.init_course_data(self.upload_all_sample_course_files)
email = 'test_get_put_file@google.com'
actions.login(email, True)
response = self.get('dashboard?action=settings')
# Check course.yaml edit form.
compute_form = response.forms['edit_course_yaml']
response = self.submit(compute_form)
assert_equals(response.status_int, 302)
assert_contains(
'dashboard?action=edit_settings&key=%2Fcourse.yaml',
response.location)
response = self.get(response.location)
assert_contains('rest/files/item?key=%2Fcourse.yaml', response.body)
# Get text file.
response = self.get('rest/files/item?key=%2Fcourse.yaml')
assert_equals(response.status_int, 200)
json_dict = transforms.loads(
transforms.loads(response.body)['payload'])
assert '/course.yaml' == json_dict['key']
assert 'text/utf-8' == json_dict['encoding']
        assert (open(os.path.join(
            appengine_config.BUNDLE_ROOT,
            'course.yaml')).read() == json_dict['content'])
def test_empty_course(self):
"""Test course with no assets and the simlest possible course.yaml."""
email = 'test_empty_course@google.com'
actions.login(email, True)
# Check minimal preview page comes up.
response = self.get('preview')
assert_contains('UNTITLED COURSE', response.body)
assert_contains('Registration', response.body)
# Check inheritable files are accessible.
response = self.get('/assets/css/main.css')
        assert (open(os.path.join(
            appengine_config.BUNDLE_ROOT,
            'assets/css/main.css')).read() == response.body)
# Check non-inheritable files are not inherited.
response = self.testapp.get(
'/assets/js/activity-1.3.js', expect_errors=True)
assert_equals(response.status_int, 404)
# Login as admin.
email = 'test_empty_course@google.com'
actions.login(email, True)
response = self.get('dashboard')
# Add unit.
compute_form = response.forms['add_unit']
response = self.submit(compute_form)
response = self.get('/rest/course/unit?key=1')
assert_equals(response.status_int, 200)
# Add lessons.
response = self.get('dashboard')
compute_form = response.forms['add_lesson']
response = self.submit(compute_form)
response = self.get('/rest/course/lesson?key=2')
assert_equals(response.status_int, 200)
# Add assessment.
response = self.get('dashboard')
compute_form = response.forms['add_assessment']
response = self.submit(compute_form)
response = self.get('/rest/course/assessment?key=3')
assert_equals(response.status_int, 200)
# Add link.
response = self.get('dashboard')
compute_form = response.forms['add_link']
response = self.submit(compute_form)
response = self.get('/rest/course/link?key=4')
assert_equals(response.status_int, 200)
def import_sample_course(self):
"""Imports a sample course."""
# Setup courses.
sites.setup_courses('course:/test::ns_test, course:/:/')
# Import sample course.
dst_app_context = sites.get_all_courses()[0]
src_app_context = sites.get_all_courses()[1]
dst_course = courses.Course(None, app_context=dst_app_context)
errors = []
src_course_out, dst_course_out = dst_course.import_from(
src_app_context, errors)
if errors:
raise Exception(errors)
assert len(
src_course_out.get_units()) == len(dst_course_out.get_units())
dst_course_out.save()
# Clean up.
sites.reset_courses()
    def test_imported_course_performance(self):
"""Tests various pages of the imported course."""
self.import_sample_course()
        # Install a clone on '/' so all the tests will treat it as the
        # normal sample course.
sites.setup_courses('course:/::ns_test')
self.namespace = 'ns_test'
# Enable memcache.
config.Registry.test_overrides[
models.CAN_USE_MEMCACHE.name] = True
# Override course.yaml settings by patching app_context.
get_environ_old = sites.ApplicationContext.get_environ
def get_environ_new(self):
environ = get_environ_old(self)
environ['course']['now_available'] = True
return environ
sites.ApplicationContext.get_environ = get_environ_new
def custom_inc(unused_increment=1, context=None):
"""A custom inc() function for cache miss counter."""
self.keys.append(context)
self.count += 1
def assert_cached(url, assert_text, cache_miss_allowed=0):
"""Checks that specific URL supports caching."""
memcache.flush_all()
self.keys = []
self.count = 0
            # Expect cache misses the first time we load the page.
cache_miss_before = self.count
response = self.get(url)
assert_contains(assert_text, response.body)
assert cache_miss_before != self.count
            # Expect no cache misses the second time we load the page.
self.keys = []
cache_miss_before = self.count
response = self.get(url)
assert_contains(assert_text, response.body)
cache_miss_actual = self.count - cache_miss_before
if cache_miss_actual != cache_miss_allowed:
raise Exception(
'Expected %s cache misses, got %s. Keys are:\n%s' % (
cache_miss_allowed, cache_miss_actual,
'\n'.join(self.keys)))
old_inc = models.CACHE_MISS.inc
models.CACHE_MISS.inc = custom_inc
# Walk the site.
email = 'test_units_lessons@google.com'
name = 'Test Units Lessons'
assert_cached('preview', 'Putting it all together')
actions.login(email, True)
assert_cached('preview', 'Putting it all together')
actions.register(self, name)
assert_cached(
'unit?unit=9', 'When search results suggest something new')
assert_cached(
'unit?unit=9&lesson=12', 'Understand options for different media')
# Clean up.
models.CACHE_MISS.inc = old_inc
sites.ApplicationContext.get_environ = get_environ_old
config.Registry.test_overrides = {}
sites.reset_courses()
def test_imported_course(self):
"""Tests various pages of the imported course."""
        # TODO(psimakov): Ideally, this test class should run all aspect tests
        # and they should all pass. However, the ids in the cloned course
        # do not match the ids of the source sample course, and we fetch pages
        # and assert page content using ids. For now, we check a minimal
        # set of pages manually; later, this should run all known tests.
self.import_sample_course()
        # Install a clone on '/' so all the tests will treat it as the
        # normal sample course.
sites.setup_courses('course:/::ns_test')
self.namespace = 'ns_test'
email = 'test_units_lessons@google.com'
name = 'Test Units Lessons'
actions.login(email, True)
response = self.get('preview')
assert_contains('Putting it all together', response.body)
actions.register(self, name)
actions.check_profile(self, name)
actions.view_announcements(self)
# Check unit page without lesson specified.
response = self.get('unit?unit=9')
assert_contains('Interpreting results', response.body)
assert_contains(
'When search results suggest something new', response.body)
        # Check unit page with a lesson specified.
response = self.get('unit?unit=9&lesson=12')
assert_contains('Interpreting results', response.body)
assert_contains(
'Understand options for different media', response.body)
        # Check assessment page.
response = self.get('assessment?name=21')
assert_contains(
'<script src="assets/js/assessment-21.js"></script>', response.body)
# Check activity page.
response = self.get('activity?unit=9&lesson=13')
assert_contains(
'<script src="assets/js/activity-13.js"></script>',
response.body)
# Clean up.
sites.reset_courses()
class DatastoreBackedSampleCourseTest(DatastoreBackedCourseTest):
"""Run all existing tests using datastore-backed file system."""
def setUp(self): # pylint: disable-msg=g-bad-name
super(DatastoreBackedSampleCourseTest, self).setUp()
self.init_course_data(self.upload_all_sample_course_files)
class FakeEnvironment(object):
"""Temporary fake tools.etl.remote.Evironment.
Bypasses making a remote_api connection because webtest can't handle it and
we don't want to bring up a local server for our functional tests. When this
fake is used, the in-process datastore stub will handle RPCs.
TODO(johncox): find a way to make webtest successfully emulate the
remote_api endpoint and get rid of this fake.
"""
def __init__(self, application_id, server, path=None):
        self._application_id = application_id
self._path = path
self._server = server
def establish(self):
pass
class EtlMainTestCase(DatastoreBackedCourseTest):
"""Tests tools/etl/etl.py's main()."""
# Allow access to protected members under test.
# pylint: disable-msg=protected-access
def setUp(self):
"""Configures EtlMainTestCase."""
super(EtlMainTestCase, self).setUp()
self.test_environ = copy.deepcopy(os.environ)
# In etl.main, use test auth scheme to avoid interactive login.
self.test_environ['SERVER_SOFTWARE'] = remote.TEST_SERVER_SOFTWARE
self.archive_path = os.path.join(self.test_tempdir, 'archive.zip')
self.new_course_title = 'New Course Title'
self.url_prefix = '/test'
self.raw = 'course:%s::ns_test' % self.url_prefix
self.swap(os, 'environ', self.test_environ)
self.common_args = [
self.url_prefix, 'myapp', 'localhost:8080', self.archive_path]
self.common_course_args = [etl._TYPE_COURSE] + self.common_args
self.common_datastore_args = [
etl._TYPE_DATASTORE] + self.common_args
self.download_course_args = etl.PARSER.parse_args(
[etl._MODE_DOWNLOAD] + self.common_course_args)
self.upload_course_args = etl.PARSER.parse_args(
[etl._MODE_UPLOAD] + self.common_course_args)
# Set up courses: version 1.3, version 1.2.
sites.setup_courses(self.raw + ', course:/:/')
def tearDown(self):
sites.reset_courses()
super(EtlMainTestCase, self).tearDown()
def create_archive(self):
self.upload_all_sample_course_files([])
self.import_sample_course()
args = etl.PARSER.parse_args(['download'] + self.common_course_args)
etl.main(args, environment_class=FakeEnvironment)
sites.reset_courses()
def create_empty_course(self, raw):
sites.setup_courses(raw)
course = etl._get_course_from(etl._get_requested_context(
sites.get_all_courses(), self.url_prefix))
course.delete_all()
def import_sample_course(self):
"""Imports a sample course."""
# Import sample course.
dst_app_context = sites.get_all_courses()[0]
src_app_context = sites.get_all_courses()[1]
# Patch in a course.yaml.
yaml = copy.deepcopy(courses.DEFAULT_COURSE_YAML_DICT)
yaml['course']['title'] = self.new_course_title
dst_app_context.fs.impl.put(
os.path.join(
appengine_config.BUNDLE_ROOT, etl._COURSE_YAML_PATH_SUFFIX),
etl._ReadWrapper(str(yaml)), is_draft=False)
dst_course = courses.Course(None, app_context=dst_app_context)
errors = []
src_course_out, dst_course_out = dst_course.import_from(
src_app_context, errors)
if errors:
raise Exception(errors)
assert len(
src_course_out.get_units()) == len(dst_course_out.get_units())
dst_course_out.save()
def test_download_course_creates_valid_archive(self):
"""Tests download of course data and archive creation."""
self.upload_all_sample_course_files([])
self.import_sample_course()
etl.main(self.download_course_args, environment_class=FakeEnvironment)
# Don't use Archive and Manifest here because we want to test the raw
# structure of the emitted zipfile.
zip_archive = zipfile.ZipFile(self.archive_path)
manifest = transforms.loads(
zip_archive.open(etl._MANIFEST_FILENAME).read())
self.assertGreaterEqual(
courses.COURSE_MODEL_VERSION_1_3, manifest['version'])
self.assertEqual(
'course:%s::ns_test' % self.url_prefix, manifest['raw'])
for entity in manifest['entities']:
self.assertTrue(entity.has_key('is_draft'))
self.assertTrue(zip_archive.open(entity['path']))
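The assertions above pin down the archive layout: a zip whose JSON manifest lists one entity per file, each carrying a `path` and an `is_draft` flag. A minimal sketch of building and checking an archive with that shape (the manifest field names here are inferred from the test, not from the etl module itself):

```python
import io
import json
import zipfile

# Build an in-memory archive with the manifest layout the test asserts on.
manifest = {
    'version': '1.3',
    'raw': 'course:/test::ns_test',
    'entities': [{'path': 'files/course.yaml', 'is_draft': False}],
}
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('manifest.json', json.dumps(manifest))
    zf.writestr('files/course.yaml', 'course:\n  title: Demo\n')

# Re-open and verify every manifest entity resolves to a member of the zip.
archive = zipfile.ZipFile(io.BytesIO(buf.getvalue()))
loaded = json.loads(archive.open('manifest.json').read())
for entity in loaded['entities']:
    assert 'is_draft' in entity
    assert archive.open(entity['path'])
```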
def test_download_course_errors_if_archive_path_exists_on_disk(self):
self.upload_all_sample_course_files([])
self.import_sample_course()
etl.main(self.download_course_args, environment_class=FakeEnvironment)
self.assertRaises(
SystemExit, etl.main, self.download_course_args,
environment_class=FakeEnvironment)
def test_download_errors_if_course_url_prefix_does_not_exist(self):
sites.reset_courses()
self.assertRaises(
SystemExit, etl.main, self.download_course_args,
environment_class=FakeEnvironment)
def test_download_course_errors_if_course_version_is_pre_1_3(self):
args = etl.PARSER.parse_args(
['download', 'course', '/'] + self.common_course_args[2:])
self.upload_all_sample_course_files([])
self.import_sample_course()
self.assertRaises(
SystemExit, etl.main, args, environment_class=FakeEnvironment)
def test_download_datastore_fails_if_datastore_types_not_in_datastore(self):
download_datastore_args = etl.PARSER.parse_args(
[etl._MODE_DOWNLOAD] + self.common_datastore_args +
['--datastore_types', 'missing'])
self.assertRaises(
SystemExit, etl.main, download_datastore_args,
environment_class=FakeEnvironment)
def test_download_datastore_succeeds(self):
"""Test download of datastore data and archive creation."""
download_datastore_args = etl.PARSER.parse_args(
[etl._MODE_DOWNLOAD] + self.common_datastore_args +
['--datastore_types', 'Student,StudentPropertyEntity'])
context = etl._get_requested_context(
sites.get_all_courses(), download_datastore_args.course_url_prefix)
old_namespace = namespace_manager.get_namespace()
try:
namespace_manager.set_namespace(context.get_namespace_name())
first_student = models.Student(name='first_student')
second_student = models.Student(name='second_student')
first_entity = models.StudentPropertyEntity(
name='first_student_property_entity')
second_entity = models.StudentPropertyEntity(
name='second_student_property_entity')
db.put([first_student, second_student, first_entity, second_entity])
finally:
namespace_manager.set_namespace(old_namespace)
etl.main(
download_datastore_args, environment_class=FakeEnvironment)
archive = etl._Archive(self.archive_path)
archive.open('r')
self.assertEqual(
['Student.json', 'StudentPropertyEntity.json'],
sorted(
[os.path.basename(e.path) for e in archive.manifest.entities]))
student_entity = [
e for e in archive.manifest.entities
if e.path.endswith('Student.json')][0]
entity_entity = [
e for e in archive.manifest.entities
if e.path.endswith('StudentPropertyEntity.json')][0]
# Ensure .json files are deserializable into Python objects.
students = sorted(
transforms.loads(archive.get(student_entity.path))['rows'],
key=lambda d: d['name'])
entities = sorted(
transforms.loads(archive.get(entity_entity.path))['rows'],
key=lambda d: d['name'])
# Spot check their contents.
self.assertEqual(
[model.name for model in [first_student, second_student]],
[student['name'] for student in students])
self.assertEqual(
[model.name for model in [first_entity, second_entity]],
[entity['name'] for entity in entities])
def test_upload_course_fails_if_archive_cannot_be_opened(self):
sites.setup_courses(self.raw)
self.assertRaises(
SystemExit, etl.main, self.upload_course_args,
environment_class=FakeEnvironment)
def test_upload_course_fails_if_archive_course_json_malformed(self):
self.create_archive()
self.create_empty_course(self.raw)
zip_archive = zipfile.ZipFile(self.archive_path, 'a')
zip_archive.writestr(
etl._Archive.get_internal_path(etl._COURSE_JSON_PATH_SUFFIX),
'garbage')
zip_archive.close()
self.assertRaises(
SystemExit, etl.main, self.upload_course_args,
environment_class=FakeEnvironment)
def test_upload_course_fails_if_archive_course_yaml_malformed(self):
self.create_archive()
self.create_empty_course(self.raw)
zip_archive = zipfile.ZipFile(self.archive_path, 'a')
zip_archive.writestr(
etl._Archive.get_internal_path(etl._COURSE_YAML_PATH_SUFFIX),
'{')
zip_archive.close()
self.assertRaises(
SystemExit, etl.main, self.upload_course_args,
environment_class=FakeEnvironment)
def test_upload_course_fails_if_course_with_units_found(self):
self.upload_all_sample_course_files([])
self.import_sample_course()
self.assertRaises(
SystemExit, etl.main, self.upload_course_args,
environment_class=FakeEnvironment)
def test_upload_course_fails_if_no_course_with_url_prefix_found(self):
self.create_archive()
self.assertRaises(
SystemExit, etl.main, self.upload_course_args,
environment_class=FakeEnvironment)
def test_upload_course_succeeds(self):
"""Tests upload of archive contents."""
self.create_archive()
self.create_empty_course(self.raw)
context = etl._get_requested_context(
sites.get_all_courses(), self.upload_course_args.course_url_prefix)
self.assertNotEqual(self.new_course_title, context.get_title())
etl.main(self.upload_course_args, environment_class=FakeEnvironment)
archive = etl._Archive(self.archive_path)
archive.open('r')
context = etl._get_requested_context(
sites.get_all_courses(), self.upload_course_args.course_url_prefix)
filesystem_contents = context.fs.impl.list(appengine_config.BUNDLE_ROOT)
self.assertEqual(
len(archive.manifest.entities), len(filesystem_contents))
self.assertEqual(self.new_course_title, context.get_title())
units = etl._get_course_from(context).get_units()
spot_check_single_unit = [u for u in units if u.unit_id == 9][0]
self.assertEqual('Interpreting results', spot_check_single_unit.title)
for unit in units:
self.assertTrue(unit.title)
for entity in archive.manifest.entities:
full_path = os.path.join(
appengine_config.BUNDLE_ROOT,
etl._Archive.get_external_path(entity.path))
stream = context.fs.impl.get(full_path)
self.assertEqual(entity.is_draft, stream.metadata.is_draft)
def test_upload_datastore_fails(self):
upload_datastore_args = etl.PARSER.parse_args(
[etl._MODE_UPLOAD] + self.common_datastore_args +
['--datastore_types', 'doesnt_matter'])
self.assertRaises(
NotImplementedError, etl.main, upload_datastore_args,
environment_class=FakeEnvironment)
# TODO(johncox): re-enable these tests once we figure out how to make webtest
# play nice with remote_api.
class EtlRemoteEnvironmentTestCase(actions.TestBase):
"""Tests tools/etl/remote.py."""
# Method name determined by superclass. pylint: disable-msg=g-bad-name
def setUp(self):
super(EtlRemoteEnvironmentTestCase, self).setUp()
self.test_environ = copy.deepcopy(os.environ)
# Allow access to protected members under test.
# pylint: disable-msg=protected-access
def disabled_test_can_establish_environment_in_dev_mode(self):
# Stub the call that requires user input so the test runs unattended.
self.swap(__builtin__, 'raw_input', lambda _: 'username')
self.assertEqual(os.environ['SERVER_SOFTWARE'], remote.SERVER_SOFTWARE)
# establish() performs RPC. If it doesn't throw, we're good.
remote.Environment('mycourse', 'localhost:8080').establish()
def disabled_test_can_establish_environment_in_test_mode(self):
self.test_environ['SERVER_SOFTWARE'] = remote.TEST_SERVER_SOFTWARE
self.swap(os, 'environ', self.test_environ)
# establish() performs RPC. If it doesn't throw, we're good.
remote.Environment('mycourse', 'localhost:8080').establish()
class CourseUrlRewritingTest(CourseUrlRewritingTestBase):
"""Run all existing tests using '/courses/pswg' base URL rewrite rules."""
class VirtualFileSystemTest(VirtualFileSystemTestBase):
"""Run all existing tests using virtual local file system."""
class MemcacheTestBase(actions.TestBase):
"""Executes all tests with memcache enabled."""
def setUp(self): # pylint: disable-msg=g-bad-name
super(MemcacheTestBase, self).setUp()
config.Registry.test_overrides = {models.CAN_USE_MEMCACHE.name: True}
def tearDown(self): # pylint: disable-msg=g-bad-name
config.Registry.test_overrides = {}
super(MemcacheTestBase, self).tearDown()
class MemcacheTest(MemcacheTestBase):
"""Executes all tests with memcache enabled."""
class TransformsJsonFileTestCase(actions.TestBase):
"""Tests for models/transforms.py's JsonFile."""
# Method name determined by superclass. pylint: disable-msg=g-bad-name
def setUp(self):
super(TransformsJsonFileTestCase, self).setUp()
# Treat as module-protected. pylint: disable-msg=protected-access
self.path = os.path.join(self.test_tempdir, 'file.json')
self.reader = transforms.JsonFile(self.path)
self.writer = transforms.JsonFile(self.path)
self.first = 1
self.second = {'c': 'c_value', 'd': {'nested': 'e'}}
def tearDown(self):
self.reader.close()
self.writer.close()
super(TransformsJsonFileTestCase, self).tearDown()
def test_round_trip_of_file_with_zero_records(self):
self.writer.open('w')
self.writer.close()
self.reader.open('r')
self.assertEqual([], [entity for entity in self.reader])
self.reader.reset()
self.assertEqual({'rows': []}, self.reader.read())
def test_round_trip_of_file_with_one_record(self):
self.writer.open('w')
self.writer.write(self.first)
self.writer.close()
self.reader.open('r')
self.assertEqual([self.first], [entity for entity in self.reader])
self.reader.reset()
self.assertEqual({'rows': [self.first]}, self.reader.read())
def test_round_trip_of_file_with_multiple_records(self):
self.writer.open('w')
self.writer.write(self.first)
self.writer.write(self.second)
self.writer.close()
self.reader.open('r')
self.assertEqual(
[self.first, self.second], [entity for entity in self.reader])
self.reader.reset()
self.assertEqual(
{'rows': [self.first, self.second]}, self.reader.read())
ALL_COURSE_TESTS = (
StudentAspectTest, AssessmentTest, CourseAuthorAspectTest,
StaticHandlerTest, AdminAspectTest)
MemcacheTest.__bases__ += (InfrastructureTest,) + ALL_COURSE_TESTS
CourseUrlRewritingTest.__bases__ += ALL_COURSE_TESTS
VirtualFileSystemTest.__bases__ += ALL_COURSE_TESTS
DatastoreBackedSampleCourseTest.__bases__ += ALL_COURSE_TESTS
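The `__bases__ +=` assignments above are a small test-matrix trick: the shared course tests are written once and mixed into each infrastructure-specific TestCase after the fact, so every backend re-runs the full suite. A stripped-down sketch of the same pattern, with hypothetical class names:

```python
import unittest

class SharedChecks(object):
    """Test methods written once, reused under several configurations."""
    def test_value_round_trips(self):
        self.assertEqual(self.backend['k'], 'v')

class DictBackendTest(unittest.TestCase):
    """Runs SharedChecks against a plain dict backend."""
    def setUp(self):
        self.backend = {'k': 'v'}

# Mix the shared checks in after the fact, as the course tests do above.
DictBackendTest.__bases__ += (SharedChecks,)
```

The test loader then discovers `test_value_round_trips` on `DictBackendTest` through the extended MRO.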
| supunkamburugamuva/course-builder | tests/functional/tests.py | Python | apache-2.0 | 122,267 | ["VisIt"] | f94b4efcbf12945cf9447b35933fc9e41aca97bdcb2aeb8d9d5b42381e78e699 |
##
## Biskit, a toolkit for the manipulation of macromolecular structures
## Copyright (C) 2004-2018 Raik Gruenberg
##
## This program is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 3 of the
## License, or any later version.
##
## This program is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You find a copy of the GNU General Public License in the file
## license.txt along with this program; if not, write to the Free
## Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
"""
Delphi-based protocol for electrostatic component of free energy of binding.
"""
import numpy as N
import copy
from biskit import PDBModel
from biskit import StdLog
from biskit import AtomCharger
from biskit.exe import Delphi, DelphiError, Reduce
from biskit.dock import Complex
import biskit.tools as T
import biskit.mathUtils as U
class DelphiBindingEnergy( object ):
"""
Determine the electrostatic component of the free energy of binding using
several rounds of Delphi calculations. DelphiBindingEnergy accepts a
binary complex (L{Biskit.Dock.Complex}) as input, performs several house
keeping tasks (optional capping of free terminals, h-bond optimization and
protonation with the reduce program, assignment of Amber partial atomic
charges) and then calls Delphi six times:
1 - 3: one run each for complex, receptor, and ligand with zero ionic
strength
4 - 6: one run each for complex, receptor, and ligand with custom salt
concentration (default: physiological 0.15 M).
The grid position and dimensions are determined once for the complex and
then kept constant between all runs so that receptor, ligand and complex
calculations can indeed be compared to each other.
The free energy of binding is calculated as described in the Delphi
documentation -- see: delphi2004/examples/binding.html -- according to the
energy partitioning scheme:
dG = dG(coulomb) + ddG(solvation) + ddG(ions)
Please have a look at the source code of DelphiBindingEnergy.bindingEnergy()
for a detailed breakup of these terms.
Use:
====
>>> com = Biskit.Complex( 'myrec.pdb', 'mylig.pdb' )
>>> dg = DelphiBindingEnergy( com, verbose=True )
>>> r = dg.run()
Variable r will then receive a dictionary with the free energy of binding
and its three components:
{'dG_kt': free E. in units of kT,
'dG_kcal': free E. in units of kcal / mol,
'dG_kJ': free E. in units of kJ / mol,
'coul': coulomb contribution in kT ,
'solv': solvation contribution in kT,
'ions': salt or ionic contribution in kT }
The modified complex used for the calculation (with hydrogens added and
many other small changes) can be recovered from the DelphiBindingEnergy
instance:
>>> modified_com = dg.delphicom
The run() method assigns the result dictionary to the info record
of both the original and the modified complex. That means:
>>> r1 = com.info['dG_delphi']
>>> r2 = modified_com.info['dG_delphi']
>>> r1 == r2
True
The result of the original DelPhi calculations (with and without salt for
receptor, ligand and complex) are assigned to the info dictionaries of the
complex' receptor and ligand model as well as to the complex itself:
com.info['delphi_0.15salt'] ...delphi run with 0.15M salt on complex
com.info['delphi_0salt'] ...delphi run without salt on complex
com.lig().info['delphi_0.15salt'] ...delphi run with 0.15M salt on ligand
com.lig().info['delphi_0salt'] ...delphi run without salt on ligand
com.rec().info['delphi_0.15salt'] ...delphi run with 0.15M salt on receptor
com.rec().info['delphi_0salt'] ...delphi run without salt on receptor
From there the individual values for solvation, ionic and couloumb
contributions can be recovered. See L{Biskit.Delphi} for a description of
this result dictionary.
Customization:
==============
The most important Delphi parameters (dielectric, salt concentration,
grid scales, ion and probe radius) can be adjusted by passing parameters
to the constructor (see documentation of __init__). The default parameters
should be reasonable. By default,
we create a grid that covers every linear dimension to at least 60%
(perfil=60) and has a density of 2.3 points per Angstroem (scale=2.3).
Such high densities come at much larger computational cost. It is
recommended to test different values and average results.
Note: Any parameters that are not recognized by
DelphiBindingEnergy() will be passed on to the Biskit.Delphi instance
of each Delphi run and, from there, passed on to Biskit.Executor.
The internal pipeline of DelphiBindingEnergy consists of:
* adding hydrogens and capping of protein chain breaks and terminals with
Biskit.Reduce( autocap=True ). The capping is performed by
L{Biskit.PDBCleaner} called from within Reduce.
* mapping Amber partial atomic charges into the structure with
Biskit.AtomCharger()
* setting up the various delphi runs with L{Biskit.Delphi}.
A point worth looking at is the automatic capping of protein chain
breaks and premature terminals with ACE and NME residues. This should
generally be a good idea but the automatic discovery of premature C-termini
or N-termini is guess work at best. See L{Biskit.PDBCleaner} for a more
detailed discussion. You can override the default behaviour by setting
autocap=False (no capping at all, this is now the default) and you can
then provide a complex structure that is already pre-treated by
L{Biskit.Reduce}.
For example:
>>> m = PDBModel('mycomplex.pdb')
>>> m = Reduce( m, capN=[0], capC=[2] )
>>> com = Biskit.Complex( m.takeChains([0]), m.takeChains([1,2]) )
>>> dg = DelphiBindingEnergy( com, protonate=False )
In this case, the original structure would receive an ACE N-terminal
capping of the first chain and the second chain would receive a C-terminal
NME capping residue. The first chain is considered receptor and the second
and third chain are considered ligand.
@see: L{Biskit.PDBCleaner}
@see: L{Biskit.Reduce}
@see: L{Biskit.AtomCharger}
@see: L{Biskit.Delphi}
"""
def __init__(self, com,
protonate=True, addcharge=True,
indi=4.0, exdi=80.0, salt=0.15, ionrad=2, prbrad=1.4,
bndcon=4, scale=2.3, perfil=60,
template=None, topologies=None,
f_charges=None,
verbose=True, debug=False, log=None, tempdir=None, cwd=None,
**kw ):
"""
@param com: complex to analyze
@type com: Biskit.Complex
@param protonate: (re-)build hydrogen atoms with reduce program (True)
see L{Biskit.Reduce}
@type protonate: bool
@param addcharge: Assign partial charges from Amber topologies (True)
@type addcharge: bool
@param indi: interior dielectric (4.0)
@param exdi: exterior dielectric (80.0)
@param salt: salt conc. in M (0.15)
@param ionrad: ion radius (2)
@param prbrad: probe radius (1.4)
@param bndcon: boundary condition (4, delphi default is 2)
@param scale: grid density in points per Angstroem (2.3)
@param perfil: grid fill factor in % (for automatic grid, 60)
@param template: delphi command file template [None=use default]
@type template: str
@param f_radii: alternative delphi atom radii file [None=use default]
@type f_radii: str
@param topologies: alternative list of residue charge/topology files
[default: amber/residues/all*]
@type topologies: [ str ]
@param f_charges: alternative delphi charge file
[default: create custom]
@type f_charges: str
@param kw: additional key=value parameters for Delphi or Executor:
@type kw: key=value pairs
::
debug - 0|1, keep all temporary files (default: 0)
verbose - 0|1, print progress messages to log (log != STDOUT)
node - str, host for calculation (None->local) NOT TESTED
(default: None)
nice - int, nice level (default: 0)
log - Biskit.LogFile, program log (None->STDOUT) (default: None)
"""
self.com = com
self.delphicom = None
self.protonate = protonate
self.addcharge = addcharge
## DELPHI run parameters
self.indi=indi # interior dielectric (4.0)
self.exdi=exdi # exterior dielectric(80.0)
self.salt=salt # salt conc. in M (0.15)
self.ionrad=ionrad # ion radius (2)
self.prbrad=prbrad # probe radius (1.4)
self.bndcon=bndcon # boundary condition (4, delphi default is 2)
## DELPHI parameters for custom grid
self.scale=scale # grid density in points per Angstroem (2.3)
self.perfil=perfil # grid fill factor in % (for automatic grid, 60)
## DELPHI parameter file and Amber residue definitions or charge file
self.template = template
self.topologies = topologies
self.f_charges = f_charges
## pump everything else into name space, too
self.__dict__.update( kw )
## prepare globally valid grid
self.grid = None
self.verbose = verbose
self.log = log or StdLog()
self.debug = debug
self.tempdir = tempdir
self.cwd = cwd
self.ezero = self.esalt = None # raw results assigned by run()
def prepare( self ):
"""
"""
mrec = self.com.rec()
mlig = self.com.lig()
m = mrec.concat( mlig )
if self.protonate:
if self.verbose:
self.log.add( '\nRe-building hydrogen atoms...' )
tempdir = self.tempdir
if tempdir:
tempdir += '/0_reduce'
r = Reduce( m, tempdir=tempdir, log=self.log,
autocap=False,
debug=self.debug, verbose=self.verbose )
m = r.run()
if self.addcharge:
if self.verbose:
self.log.add( '\nAssigning charges from Amber topologies...')
ac = AtomCharger( log=self.log, verbose=self.verbose )
ac.charge( m )
mrec = m.takeChains( range( mrec.lenChains() ) )
mlig = m.takeChains( range( mrec.lenChains(), m.lenChains() ) )
self.delphicom = Complex( mrec, mlig )
def setupDelphi( self, model, grid={}, **kw ):
"""
"""
params = copy.copy( self.__dict__ )
params.update( kw )
params['protonate'] = False # run reduce once for complex only
params['addcharge'] = False # run AtomCharger once for complex only
d = Delphi( model, **params )
d.setGrid( **grid )
return d
def processThreesome( self, **kw ):
"""
Calculate Delphi energies for rec, lig and complex
@return: result dictionaries for com, rec, lig
@rtype: ( dict, dict, dict )
"""
dcom = self.setupDelphi( self.delphicom.model(), **kw )
grid = dcom.getGrid()
drec = self.setupDelphi( self.delphicom.rec(), grid=grid, **kw )
dlig = self.setupDelphi( self.delphicom.lig(), grid=grid, **kw )
if self.verbose:
self.log.add('\nrunning Delphi for complex...')
r_com = dcom.run()
if self.verbose:
self.log.add('\nrunning Delphi for receptor...')
r_rec = drec.run()
if self.verbose:
self.log.add('\nrunning Delphi for ligand...')
r_lig = dlig.run()
return r_com, r_rec, r_lig
def processSixsome( self, **kw ):
"""
Calculate Delphi energies for rec, lig and complex with and without
salt.
@return: com, rec, lig delphi calculations with and without ions
@rtype: ( {}, {} )
"""
## with ions
if self.verbose:
self.log.add('\nDelphi calculations with %1.3f M salt' % self.salt)
self.log.add('=======================================')
ri_com, ri_rec, ri_lig = self.processThreesome( salt=self.salt, **kw )
## w/o ions
if self.verbose:
self.log.add('\nDelphi calculations with zero ionic strength')
self.log.add('==============================================')
r_com, r_rec, r_lig = self.processThreesome( salt=0.0, **kw )
rsalt = { 'com':ri_com, 'rec':ri_rec, 'lig':ri_lig }
rzero = { 'com':r_com, 'rec':r_rec, 'lig':r_lig }
if self.verbose:
self.log.add('\nRaw results')
self.log.add('============')
self.log.add('zero ionic strength: \n' + str(rzero) )
self.log.add('\nwith salt: \n' + str(rsalt) )
return rzero, rsalt
def bindingEnergy( self, ezero, esalt ):
"""
Calculate electrostatic component of the free energy of binding
according to energy partitioning formula.
@param ezero: delphi energies calculated at zero ionic strength
@type ezero: {'com':{}, 'rec':{}, 'lig':{} }
@param esalt: delphi energies calculated with ions (see salt=)
@type esalt: {'com':{}, 'rec':{}, 'lig':{} }
@return: {'dG_kt': free E. in units of kT,
'dG_kcal': free E. in units of kcal / mol,
'dG_kJ': free E. in units of kJ / mol,
'coul': coulomb contribution in kT ,
'solv': solvation contribution in kT,
'ions': salt or ionic contribution in kT }
@rtype: dict { str : float }
"""
delta_0 = {}
delta_i = {}
for k, v in ezero['com'].items():
delta_0[ k ] = ezero['com'][k] - ezero['rec'][k] - ezero['lig'][k]
delta_i[ k ] = esalt['com'][k] - esalt['rec'][k] - esalt['lig'][k]
dG_coul = delta_0['ecoul']
ddG_rxn = delta_0['erxn']
ddG_ions= delta_i['egrid'] - delta_0['egrid']
dG = dG_coul + ddG_rxn + ddG_ions # in units of kT
dG_kcal = round( dG * 0.593, 3) # kcal / mol
dG_kJ = round( dG * 2.479, 3) # kJ / mol
r = {'dG_kt':dG, 'dG_kcal':dG_kcal, 'dG_kJ':dG_kJ,
'coul':dG_coul, 'solv':ddG_rxn, 'ions':ddG_ions}
return r
def run( self ):
"""
Prepare structures and perform Delphi calculations for complex,
receptor and ligand, each with and without ions (salt). Calculate
free energy of binding according to energy partitioning.
Detailed delphi results are saved into object fields 'ezero' and
'esalt' for calculations without and with ions, respectively.
@return: {'dG_kt': free E. in units of kT,
'dG_kcal': free E. in units of kcal / mol,
'dG_kJ': free E. in units of kJ / mol,
'coul': coulomb contribution in kT ,
'solv': solvation contribution in kT,
'ions': salt or ionic contribution in kT }
@rtype: dict { str : float }
"""
self.prepare()
self.ezero, self.esalt = self.processSixsome()
self.result = self.bindingEnergy( self.ezero, self.esalt )
self.delphicom.info['dG_delphi'] = self.result
self.com.info['dG_delphi'] = self.result
for com in [self.delphicom, self.com]:
key = 'delphi_%4.2fsalt' % self.salt
com.rec_model.info[key] = self.esalt['rec']
com.lig_model.info[key] = self.esalt['lig']
com.lig_model.lig_transformed = None
key = 'delphi_0salt'
com.rec_model.info[key] = self.ezero['rec']
com.lig_model.info[key] = self.ezero['lig']
com.lig_model.lig_transformed = None # reset Complex.lig() cache
com.info[key] = self.ezero['com']
return self.result
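The arithmetic behind `bindingEnergy()` is easy to check by hand. The sketch below walks the partitioning formula dG = dG(coulomb) + ddG(solvation) + ddG(ions) through made-up Delphi energy terms (the numbers are illustrative, not real Delphi output):

```python
# Hypothetical Delphi energy terms (in kT) for complex, receptor and ligand,
# at zero ionic strength (ezero) and with salt (esalt).
ezero = {'com': {'ecoul': -120.0, 'erxn': 310.0, 'egrid': 50.0},
         'rec': {'ecoul':  -70.0, 'erxn': 180.0, 'egrid': 28.0},
         'lig': {'ecoul':  -30.0, 'erxn': 140.0, 'egrid': 20.0}}
esalt = {'com': {'ecoul': -120.0, 'erxn': 309.0, 'egrid': 49.0},
         'rec': {'ecoul':  -70.0, 'erxn': 180.0, 'egrid': 27.5},
         'lig': {'ecoul':  -30.0, 'erxn': 140.0, 'egrid': 20.0}}

def delta(r, k):
    """Complex minus receptor minus ligand, as in bindingEnergy()."""
    return r['com'][k] - r['rec'][k] - r['lig'][k]

dG_coul = delta(ezero, 'ecoul')                          # coulomb term
ddG_rxn = delta(ezero, 'erxn')                           # solvation (reaction field)
ddG_ions = delta(esalt, 'egrid') - delta(ezero, 'egrid')  # ionic contribution

dG_kt = dG_coul + ddG_rxn + ddG_ions    # total, in kT
dG_kcal = round(dG_kt * 0.593, 3)       # 1 kT ~ 0.593 kcal/mol at 298 K
dG_kJ = round(dG_kt * 2.479, 3)         # 1 kT ~ 2.479 kJ/mol
```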
#############
## TESTING
#############
import biskit.test as BT
class Test(BT.BiskitTest):
"""Test class"""
TAGS = [ BT.EXE, BT.LONG ]
def test_bindingE( self ):
"""bindingEnergyDelphi test (Barnase:Barstar)"""
self.com = T.load( T.testRoot() + '/com/ref.complex' )
self.dG = DelphiBindingEnergy( self.com, log=self.log, scale=1.2,
verbose=self.local )
self.r = self.dG.run()
## self.assertAlmostEqual( self.r['dG_kt'], 21., 0 )
self.assertTrue( abs(self.r['dG_kt'] - 24.6) < 4 )
if self.local:
self.log.add(
'\nFinal result: dG = %3.2f kcal/mol'%self.r['dG_kcal'])
def test_errorcase1( self ):
"""bindinEnergyDelphi test (error case 01)"""
self.m = PDBModel( T.testRoot() + '/delphi/case01.pdb' )
rec = self.m.takeChains( [0,1] )
lig = self.m.takeChains( [2] )
self.com = Complex( rec, lig )
self.dG = DelphiBindingEnergy( self.com, log = self.log, scale=0.5,
verbose=self.local )
self.r = self.dG.run()
if self.local:
self.log.add(
'\nFinal result: dG = %3.2f kcal/mol'%self.r['dG_kcal'])
if __name__ == '__main__':
BT.localTest(debug=False)
| graik/biskit | biskit/dock/delphiBindingEnergy.py | Python | gpl-3.0 | 18,273 | ["Amber"] | dbe28284421a5c606f683134f7a8da13a1a9a854ca51854b4cbd10393b564361 |
|
"""
Class encapsulating the management of repository dependencies installed or being installed
into Galaxy from the Tool Shed.
"""
import json
import logging
import os
import urllib
import urllib2
from galaxy.util import asbool
from tool_shed.galaxy_install.tools import tool_panel_manager
from tool_shed.util import common_util
from tool_shed.util import container_util
from tool_shed.util import encoding_util
from tool_shed.util import shed_util_common as suc
log = logging.getLogger( __name__ )
class RepositoryDependencyInstallManager( object ):
def __init__( self, app ):
self.app = app
def build_repository_dependency_relationships( self, repo_info_dicts, tool_shed_repositories ):
"""
Build relationships between installed tool shed repositories and other installed
tool shed repositories upon which they depend. These relationships are defined in
the repository_dependencies entry for each dictionary in the received list of repo_info_dicts.
Each of these dictionaries is associated with a repository in the received tool_shed_repositories
list.
"""
install_model = self.app.install_model
log.debug( "Building repository dependency relationships..." )
for repo_info_dict in repo_info_dicts:
for name, repo_info_tuple in repo_info_dict.items():
description, \
repository_clone_url, \
changeset_revision, \
ctx_rev, \
repository_owner, \
repository_dependencies, \
tool_dependencies = \
suc.get_repo_info_tuple_contents( repo_info_tuple )
if repository_dependencies:
for key, val in repository_dependencies.items():
if key in [ 'root_key', 'description' ]:
continue
d_repository = None
repository_components_tuple = container_util.get_components_from_key( key )
components_list = suc.extract_components_from_tuple( repository_components_tuple )
d_toolshed, d_name, d_owner, d_changeset_revision = components_list[ 0:4 ]
for tsr in tool_shed_repositories:
# Get the tool_shed_repository defined by name, owner and changeset_revision. This is
# the repository that will be dependent upon each of the tool shed repositories contained in
# val. We'll need to check tool_shed_repository.tool_shed as well if/when repository dependencies
# across tool sheds is supported.
if tsr.name == d_name and tsr.owner == d_owner and tsr.changeset_revision == d_changeset_revision:
d_repository = tsr
break
if d_repository is None:
# The dependent repository is not in the received list so look in the database.
d_repository = self.get_or_create_tool_shed_repository( d_toolshed,
d_name,
d_owner,
d_changeset_revision )
# Process each repository_dependency defined for the current dependent repository.
for repository_dependency_components_list in val:
required_repository = None
rd_toolshed, \
rd_name, \
rd_owner, \
rd_changeset_revision, \
rd_prior_installation_required, \
rd_only_if_compiling_contained_td = \
common_util.parse_repository_dependency_tuple( repository_dependency_components_list )
# Get the tool_shed_repository defined by rd_name, rd_owner and rd_changeset_revision. This
# is the repository that will be required by the current d_repository.
# TODO: Check tool_shed_repository.tool_shed as well when repository dependencies across tool sheds is supported.
for tsr in tool_shed_repositories:
if tsr.name == rd_name and tsr.owner == rd_owner and tsr.changeset_revision == rd_changeset_revision:
required_repository = tsr
break
if required_repository is None:
# The required repository is not in the received list so look in the database.
required_repository = self.get_or_create_tool_shed_repository( rd_toolshed,
rd_name,
rd_owner,
rd_changeset_revision )
# Ensure there is a repository_dependency relationship between d_repository and required_repository.
rrda = None
for rd in d_repository.repository_dependencies:
if rd.id == required_repository.id:
rrda = rd
break
if not rrda:
# Make sure required_repository is in the repository_dependency table.
repository_dependency = self.get_repository_dependency_by_repository_id( install_model,
required_repository.id )
if not repository_dependency:
log.debug( 'Creating new repository_dependency record for installed revision %s of repository: %s owned by %s.' % \
( str( required_repository.installed_changeset_revision ),
str( required_repository.name ),
str( required_repository.owner ) ) )
repository_dependency = install_model.RepositoryDependency( tool_shed_repository_id=required_repository.id )
install_model.context.add( repository_dependency )
install_model.context.flush()
# Build the relationship between the d_repository and the required_repository.
rrda = install_model.RepositoryRepositoryDependencyAssociation( tool_shed_repository_id=d_repository.id,
repository_dependency_id=repository_dependency.id )
install_model.context.add( rrda )
install_model.context.flush()
def create_repository_dependency_objects( self, tool_path, tool_shed_url, repo_info_dicts, install_repository_dependencies=False,
no_changes_checked=False, tool_panel_section_id=None, new_tool_panel_section_label=None ):
"""
Discover all repository dependencies and make sure all tool_shed_repository and
associated repository_dependency records exist as well as the dependency relationships
between installed repositories. This method is called when uninstalled repositories
are being reinstalled. If the user elected to install repository dependencies, all
items in the all_repo_info_dicts list will be processed. However, if repository
dependencies are not to be installed, only those items contained in the received
repo_info_dicts list will be processed.
"""
install_model = self.app.install_model
log.debug( "Creating repository dependency objects..." )
# The following list will be maintained within this method to contain all created
# or updated tool shed repositories, including repository dependencies that may not
# be installed.
all_created_or_updated_tool_shed_repositories = []
# There will be a one-to-one mapping between items in 3 lists:
# created_or_updated_tool_shed_repositories, tool_panel_section_keys
# and filtered_repo_info_dicts. The 3 lists will filter out repository
# dependencies that are not to be installed.
created_or_updated_tool_shed_repositories = []
tool_panel_section_keys = []
# Repositories will be filtered (e.g., if already installed, if elected
# to not be installed, etc), so filter the associated repo_info_dicts accordingly.
filtered_repo_info_dicts = []
# Discover all repository dependencies and retrieve information for installing
# them. Even if the user elected to not install repository dependencies we have
# to make sure all repository dependency objects exist so that the appropriate
# repository dependency relationships can be built.
all_required_repo_info_dict = self.get_required_repo_info_dicts( tool_shed_url, repo_info_dicts )
all_repo_info_dicts = all_required_repo_info_dict.get( 'all_repo_info_dicts', [] )
if not all_repo_info_dicts:
# No repository dependencies were discovered so process the received repositories.
all_repo_info_dicts = [ rid for rid in repo_info_dicts ]
for repo_info_dict in all_repo_info_dicts:
# If the user elected to install repository dependencies, all items in the
# all_repo_info_dicts list will be processed. However, if repository dependencies
# are not to be installed, only those items contained in the received repo_info_dicts
# list will be processed but the all_repo_info_dicts list will be used to create all
# defined repository dependency relationships.
if self.is_in_repo_info_dicts( repo_info_dict, repo_info_dicts ) or install_repository_dependencies:
for name, repo_info_tuple in repo_info_dict.items():
can_update_db_record = False
description, \
repository_clone_url, \
changeset_revision, \
ctx_rev, \
repository_owner, \
repository_dependencies, \
tool_dependencies = \
suc.get_repo_info_tuple_contents( repo_info_tuple )
# See if the repository has an existing record in the database.
repository_db_record, installed_changeset_revision = \
suc.repository_was_previously_installed( self.app, tool_shed_url, name, repo_info_tuple, from_tip=False )
if repository_db_record:
if repository_db_record.status in [ install_model.ToolShedRepository.installation_status.INSTALLED,
install_model.ToolShedRepository.installation_status.CLONING,
install_model.ToolShedRepository.installation_status.SETTING_TOOL_VERSIONS,
install_model.ToolShedRepository.installation_status.INSTALLING_REPOSITORY_DEPENDENCIES,
install_model.ToolShedRepository.installation_status.INSTALLING_TOOL_DEPENDENCIES,
install_model.ToolShedRepository.installation_status.LOADING_PROPRIETARY_DATATYPES ]:
debug_msg = "Skipping installation of revision %s of repository '%s' because it was installed " % \
( str( changeset_revision ), str( repository_db_record.name ) )
debug_msg += "with the (possibly updated) revision %s and its current installation status is '%s'." % \
( str( installed_changeset_revision ), str( repository_db_record.status ) )
log.debug( debug_msg )
can_update_db_record = False
else:
if repository_db_record.status in [ install_model.ToolShedRepository.installation_status.ERROR,
install_model.ToolShedRepository.installation_status.NEW,
install_model.ToolShedRepository.installation_status.UNINSTALLED ]:
# The current tool shed repository is not currently installed, so we can update its
# record in the database.
name = repository_db_record.name
installed_changeset_revision = repository_db_record.installed_changeset_revision
metadata_dict = repository_db_record.metadata
dist_to_shed = repository_db_record.dist_to_shed
can_update_db_record = True
elif repository_db_record.status in [ install_model.ToolShedRepository.installation_status.DEACTIVATED ]:
# The current tool shed repository is deactivated, so updating its database record
# is not necessary - just activate it.
log.debug( "Reactivating deactivated tool_shed_repository '%s'." % str( repository_db_record.name ) )
self.app.installed_repository_manager.activate_repository( repository_db_record )
# No additional updates to the database record are necessary.
can_update_db_record = False
elif repository_db_record.status not in [ install_model.ToolShedRepository.installation_status.NEW ]:
# Set changeset_revision here so suc.create_or_update_tool_shed_repository will find
# the previously installed and uninstalled repository instead of creating a new record.
changeset_revision = repository_db_record.installed_changeset_revision
self.reset_previously_installed_repository( repository_db_record )
can_update_db_record = True
else:
# No record exists in the database for the repository currently being processed.
installed_changeset_revision = changeset_revision
metadata_dict = {}
dist_to_shed = False
can_update_db_record = True
if can_update_db_record:
# The database record for the tool shed repository currently being processed can be updated.
# Get the repository metadata to see where it was previously located in the tool panel.
tpm = tool_panel_manager.ToolPanelManager( self.app )
if repository_db_record and repository_db_record.metadata:
tool_section, tool_panel_section_key = \
tpm.handle_tool_panel_selection( toolbox=self.app.toolbox,
metadata=repository_db_record.metadata,
no_changes_checked=no_changes_checked,
tool_panel_section_id=tool_panel_section_id,
new_tool_panel_section_label=new_tool_panel_section_label )
else:
# We're installing a new tool shed repository that does not yet have a database record.
tool_panel_section_key, tool_section = \
tpm.handle_tool_panel_section( self.app.toolbox,
tool_panel_section_id=tool_panel_section_id,
new_tool_panel_section_label=new_tool_panel_section_label )
tool_shed_repository = \
suc.create_or_update_tool_shed_repository( app=self.app,
name=name,
description=description,
installed_changeset_revision=installed_changeset_revision,
ctx_rev=ctx_rev,
repository_clone_url=repository_clone_url,
metadata_dict={},
status=install_model.ToolShedRepository.installation_status.NEW,
current_changeset_revision=changeset_revision,
owner=repository_owner,
dist_to_shed=False )
if tool_shed_repository not in all_created_or_updated_tool_shed_repositories:
all_created_or_updated_tool_shed_repositories.append( tool_shed_repository )
# Only append the tool shed repository to the list of created_or_updated_tool_shed_repositories if
# it is supposed to be installed.
if install_repository_dependencies or self.is_in_repo_info_dicts( repo_info_dict, repo_info_dicts ):
if tool_shed_repository not in created_or_updated_tool_shed_repositories:
# Keep the one-to-one mapping between items in 3 lists.
created_or_updated_tool_shed_repositories.append( tool_shed_repository )
tool_panel_section_keys.append( tool_panel_section_key )
filtered_repo_info_dicts.append( repo_info_dict )
# Build repository dependency relationships even if the user chose to not install repository dependencies.
self.build_repository_dependency_relationships( all_repo_info_dicts, all_created_or_updated_tool_shed_repositories )
return created_or_updated_tool_shed_repositories, tool_panel_section_keys, all_repo_info_dicts, filtered_repo_info_dicts
def get_or_create_tool_shed_repository( self, tool_shed, name, owner, changeset_revision ):
"""
Return a tool shed repository database record defined by the combination of
tool shed, repository name, repository owner and changeset_revision or
installed_changeset_revision. A new tool shed repository record will be
created if one is not located.
"""
install_model = self.app.install_model
# We store the port in the database.
tool_shed = common_util.remove_protocol_from_tool_shed_url( tool_shed )
# This method is used only in Galaxy, not the tool shed.
repository = suc.get_repository_for_dependency_relationship( self.app, tool_shed, name, owner, changeset_revision )
if not repository:
tool_shed_url = common_util.get_tool_shed_url_from_tool_shed_registry( self.app, tool_shed )
repository_clone_url = os.path.join( tool_shed_url, 'repos', owner, name )
ctx_rev = suc.get_ctx_rev( self.app, tool_shed_url, name, owner, changeset_revision )
repository = suc.create_or_update_tool_shed_repository( app=self.app,
name=name,
description=None,
installed_changeset_revision=changeset_revision,
ctx_rev=ctx_rev,
repository_clone_url=repository_clone_url,
metadata_dict={},
status=install_model.ToolShedRepository.installation_status.NEW,
current_changeset_revision=None,
owner=owner,
dist_to_shed=False )
return repository
def get_repository_dependencies_for_installed_tool_shed_repository( self, app, repository ):
"""
Send a request to the appropriate tool shed to retrieve the dictionary of repository dependencies defined
for the received repository which is installed into Galaxy. This method is called only from Galaxy.
"""
tool_shed_url = common_util.get_tool_shed_url_from_tool_shed_registry( app, str( repository.tool_shed ) )
params = '?name=%s&owner=%s&changeset_revision=%s' % ( str( repository.name ),
str( repository.owner ),
str( repository.changeset_revision ) )
url = common_util.url_join( tool_shed_url,
'repository/get_repository_dependencies%s' % params )
try:
raw_text = common_util.tool_shed_get( app, tool_shed_url, url )
except Exception, e:
print "The URL\n%s\nraised the exception:\n%s\n" % ( url, str( e ) )
return ''
if len( raw_text ) > 2:
encoded_text = json.loads( raw_text )
text = encoding_util.tool_shed_decode( encoded_text )
else:
text = ''
return text
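The query string above is assembled by manual string interpolation. A minimal sketch of the same request URL built with the stdlib instead (the endpoint path mirrors the one used above; the host is a placeholder, not a real Tool Shed):

```python
# Sketch only: build the get_repository_dependencies URL with urlencode so
# parameter values are escaped. Works on Python 2 and 3.
try:
    from urllib.parse import urlencode        # Python 3
except ImportError:
    from urllib import urlencode              # Python 2

def build_dependency_url(tool_shed_url, name, owner, changeset_revision):
    # A list of tuples keeps the parameter order deterministic.
    params = urlencode([('name', name),
                        ('owner', owner),
                        ('changeset_revision', changeset_revision)])
    return '%s/repository/get_repository_dependencies?%s' % (tool_shed_url.rstrip('/'), params)

url = build_dependency_url('https://toolshed.example.org', 'fastqc', 'devteam', 'abc123')
```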
def get_repository_dependency_by_repository_id( self, install_model, decoded_repository_id ):
return install_model.context.query( install_model.RepositoryDependency ) \
.filter( install_model.RepositoryDependency.table.c.tool_shed_repository_id == decoded_repository_id ) \
.first()
def get_required_repo_info_dicts( self, tool_shed_url, repo_info_dicts ):
"""
Inspect the list of repo_info_dicts for repository dependencies and append a repo_info_dict for each of
them to the list. All repository_dependency entries in each of the received repo_info_dicts include
all required repositories, so only one pass through this method is required to retrieve all repository
dependencies.
"""
all_required_repo_info_dict = {}
all_repo_info_dicts = []
if repo_info_dicts:
# We'll send tuples of ( tool_shed, repository_name, repository_owner, changeset_revision ) to the tool
# shed to discover repository ids.
required_repository_tups = []
for repo_info_dict in repo_info_dicts:
if repo_info_dict not in all_repo_info_dicts:
all_repo_info_dicts.append( repo_info_dict )
for repository_name, repo_info_tup in repo_info_dict.items():
description, \
repository_clone_url, \
changeset_revision, \
ctx_rev, \
repository_owner, \
repository_dependencies, \
tool_dependencies = \
suc.get_repo_info_tuple_contents( repo_info_tup )
if repository_dependencies:
for key, val in repository_dependencies.items():
if key in [ 'root_key', 'description' ]:
continue
repository_components_tuple = container_util.get_components_from_key( key )
components_list = suc.extract_components_from_tuple( repository_components_tuple )
# Skip listing a repository dependency if it is required only to compile a tool dependency
# defined for the dependent repository since in this case, the repository dependency is really
# a dependency of the dependent repository's contained tool dependency, and only if that
# tool dependency requires compilation.
# For backward compatibility to the 12/20/12 Galaxy release.
prior_installation_required = 'False'
only_if_compiling_contained_td = 'False'
if len( components_list ) == 4:
prior_installation_required = 'False'
only_if_compiling_contained_td = 'False'
elif len( components_list ) == 5:
prior_installation_required = components_list[ 4 ]
only_if_compiling_contained_td = 'False'
if not asbool( only_if_compiling_contained_td ):
if components_list not in required_repository_tups:
required_repository_tups.append( components_list )
for components_list in val:
try:
only_if_compiling_contained_td = components_list[ 5 ]
except IndexError:
only_if_compiling_contained_td = 'False'
# Skip listing a repository dependency if it is required only to compile a tool dependency
# defined for the dependent repository (see above comment).
if not asbool( only_if_compiling_contained_td ):
if components_list not in required_repository_tups:
required_repository_tups.append( components_list )
else:
# We have a single repository with no dependencies.
components_list = [ tool_shed_url, repository_name, repository_owner, changeset_revision ]
required_repository_tups.append( components_list )
if required_repository_tups:
# The value of required_repository_tups is a list of tuples, so we need to encode it.
encoded_required_repository_tups = []
for required_repository_tup in required_repository_tups:
# Convert every item in required_repository_tup to a string.
required_repository_tup = [ str( item ) for item in required_repository_tup ]
encoded_required_repository_tups.append( encoding_util.encoding_sep.join( required_repository_tup ) )
encoded_required_repository_str = encoding_util.encoding_sep2.join( encoded_required_repository_tups )
encoded_required_repository_str = encoding_util.tool_shed_encode( encoded_required_repository_str )
if suc.is_tool_shed_client( self.app ):
# Handle secure / insecure Tool Shed URL protocol changes and port changes.
tool_shed_url = common_util.get_tool_shed_url_from_tool_shed_registry( self.app, tool_shed_url )
url = common_util.url_join( tool_shed_url, '/repository/get_required_repo_info_dict' )
# Fix for handling 307 redirect not being handled nicely by urllib2.urlopen when the urllib2.Request has data provided
url = urllib2.urlopen( urllib2.Request( url ) ).geturl()
request = urllib2.Request( url, data=urllib.urlencode( dict( encoded_str=encoded_required_repository_str ) ) )
response = urllib2.urlopen( request ).read()
if response:
try:
required_repo_info_dict = json.loads( response )
except Exception, e:
log.exception( e )
return all_repo_info_dicts
required_repo_info_dicts = []
for k, v in required_repo_info_dict.items():
if k == 'repo_info_dicts':
encoded_dict_strings = required_repo_info_dict[ 'repo_info_dicts' ]
for encoded_dict_str in encoded_dict_strings:
decoded_dict = encoding_util.tool_shed_decode( encoded_dict_str )
required_repo_info_dicts.append( decoded_dict )
else:
if k not in all_required_repo_info_dict:
all_required_repo_info_dict[ k ] = v
else:
if v and not all_required_repo_info_dict[ k ]:
all_required_repo_info_dict[ k ] = v
if required_repo_info_dicts:
for required_repo_info_dict in required_repo_info_dicts:
# Each required_repo_info_dict has a single entry, and all_repo_info_dicts is a list
# of dictionaries, each of which has a single entry. We'll check keys here rather than
# the entire dictionary because a dictionary entry in all_repo_info_dicts will include
# lists of discovered repository dependencies, but these lists will be empty in the
# required_repo_info_dict since dependency discovery has not yet been performed for these
# dictionaries.
required_repo_info_dict_key = required_repo_info_dict.keys()[ 0 ]
all_repo_info_dicts_keys = [ d.keys()[ 0 ] for d in all_repo_info_dicts ]
if required_repo_info_dict_key not in all_repo_info_dicts_keys:
all_repo_info_dicts.append( required_repo_info_dict )
all_required_repo_info_dict[ 'all_repo_info_dicts' ] = all_repo_info_dicts
return all_required_repo_info_dict
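The encoding step above joins each required-repository tuple with one separator and the tuples with a second, before encoding the whole string for transport. A standalone sketch of that two-level join, with made-up separator strings (Galaxy's real values live in `encoding_util` and may differ):

```python
# Sketch of the two-level tuple encoding used above. SEP1/SEP2 are
# illustrative placeholders, not Galaxy's actual separators.
SEP1 = '__sep1__'
SEP2 = '__sep2__'

def encode_tuples(tups):
    # Join items within a tuple with SEP1, and tuples with SEP2.
    return SEP2.join(SEP1.join(str(item) for item in tup) for tup in tups)

def decode_tuples(encoded):
    # Invert the join: split on SEP2 first, then SEP1.
    return [chunk.split(SEP1) for chunk in encoded.split(SEP2)]

tups = [('toolshed.example.org', 'repo_a', 'owner_a', 'rev1'),
        ('toolshed.example.org', 'repo_b', 'owner_b', 'rev2')]
roundtrip = decode_tuples(encode_tuples(tups))
```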
def is_in_repo_info_dicts( self, repo_info_dict, repo_info_dicts ):
"""Return True if the received repo_info_dict is contained in the list of received repo_info_dicts."""
for name, repo_info_tuple in repo_info_dict.items():
for rid in repo_info_dicts:
for rid_name, rid_repo_info_tuple in rid.items():
if rid_name == name:
if len( rid_repo_info_tuple ) == len( repo_info_tuple ):
for item in rid_repo_info_tuple:
if item not in repo_info_tuple:
return False
return True
return False
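The membership test above treats two repo_info_dicts as equal when they share a key and every element of one tuple appears in the other, regardless of order. A simplified standalone version of that semantics (hypothetical helper name, not Galaxy's API):

```python
# Simplified sketch of the containment check above: order-insensitive
# comparison of same-length tuples keyed by repository name.
def is_in_dicts(d, dicts):
    for name, tup in d.items():
        for other in dicts:
            for other_name, other_tup in other.items():
                if other_name == name and len(other_tup) == len(tup):
                    # Mirrors the original: decide at the first name match.
                    return all(item in tup for item in other_tup)
    return False

a = {'repo': ('url', 'owner', 'rev')}
b = {'repo': ('rev', 'url', 'owner')}    # same items, different order
c = {'repo': ('url', 'owner', 'other')}  # one differing item
```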
def reset_previously_installed_repository( self, repository ):
"""
Reset the attributes of a tool_shed_repository that was previously installed.
The repository will be in some state other than INSTALLED, so all attributes
will be set to the default NEW state. This will enable the repository to be
freshly installed.
"""
debug_msg = "Resetting tool_shed_repository '%s' for installation.\n" % str( repository.name )
debug_msg += "The current state of the tool_shed_repository is:\n"
debug_msg += "deleted: %s\n" % str( repository.deleted )
debug_msg += "tool_shed_status: %s\n" % str( repository.tool_shed_status )
debug_msg += "uninstalled: %s\n" % str( repository.uninstalled )
debug_msg += "status: %s\n" % str( repository.status )
debug_msg += "error_message: %s\n" % str( repository.error_message )
log.debug( debug_msg )
repository.deleted = False
repository.tool_shed_status = None
repository.uninstalled = False
repository.status = self.app.install_model.ToolShedRepository.installation_status.NEW
repository.error_message = None
self.app.install_model.context.add( repository )
self.app.install_model.context.flush()
|
mikel-egana-aranguren/SADI-Galaxy-Docker
|
galaxy-dist/lib/tool_shed/galaxy_install/repository_dependencies/repository_dependency_manager.py
|
Python
|
gpl-3.0
| 33,928
|
[
"Galaxy"
] |
f50e5089d981af4aef2a84b8c4ac78fd53df92121051fe1e45398a29f37790f4
|
__author__ = 'lovci'
"""a tool to extract ranges from indexed .maf file, adapted from bx-python / scripts / maf_tile.py and \
http://biopython.org/wiki/Phylo_cookbook"""
import bx
import bx.align.maf
from Bio import Phylo
import pyfasta
from collections import defaultdict
import os, sys
import socket
if "tscc" in socket.gethostname():
conservation_basedir = "/projects/ps-yeolab/conservation"
genome_basedir = "/projects/ps-yeolab/genomes"
else:
#haven't set up this system yet... download and index .maf files
raise NotImplementedError
class MafRangeGetter(object):
def intervals_from_mask(self, mask ):
start = 0
last = mask[0]
for i in range( 1, len( mask ) ):
if mask[i] != last:
yield start, i, last
start = i
last = mask[i]
yield start, len(mask), last
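`intervals_from_mask` run-length encodes the mask into half-open `(start, end, value)` intervals. A standalone copy of the same generator, for illustration outside the class:

```python
# Standalone copy of intervals_from_mask: yields (start, end, value) runs
# over a list, with half-open [start, end) coordinates.
def intervals_from_mask(mask):
    start = 0
    last = mask[0]
    for i in range(1, len(mask)):
        if mask[i] != last:
            yield start, i, last
            start = i
            last = mask[i]
    yield start, len(mask), last

# -1 marks unaligned positions; block indices mark tiled positions.
runs = list(intervals_from_mask([-1, -1, 0, 0, 0, 1]))
```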
def tile_interval(self, chrom, start, end, strand):
"""where the magic happens... tile blocks from a .maf file to make a contiguous alignment
for species other than the pivot species, reference (chromosome) names are masked, as the blocks
from which a tiled interval may originate may not be on the same reference in an ortholog"""
base_len = end - start
ref_src = self.species + "." + chrom
blocks = self.index.get( ref_src, start, end )
# From low to high score
blocks.sort( lambda a, b: cmp( a.score, b.score ) )
mask = [ -1 ] * base_len
ref_src_size = None
if len(blocks) > 0 and blocks[0] is not None:
for i, block in enumerate( blocks ):
ref = block.get_component_by_src_start( ref_src )
ref_src_size = ref.src_size
assert ref.strand == "+"
slice_start = max( start, ref.start )
slice_end = min( end, ref.end )
for j in range( slice_start, slice_end ):
mask[j-start] = i
tiled = [ [] for _ in self.sources ]
for ss, ee, index in self.intervals_from_mask( mask ):
if index < 0: # masked portions: fill from the reference genome sequence
fetch_start = start + ss - 1
fetch_stop = start + ee - 1
x = self.fasta[chrom][fetch_start:fetch_stop]
tiled[0].append(x)
for row in tiled[1:]: #unaligned other species
row.append( "-" * ( ee - ss ) )
else:
slice_start = start + ss
slice_end = start + ee
block = blocks[index]
ref = block.get_component_by_src_start( ref_src )
sliced = block.slice_by_component( ref, slice_start, slice_end )
sliced = sliced.limit_to_species( self.sources )
sliced.remove_all_gap_columns()
for i, src in enumerate( self.sources ):
comp = sliced.get_component_by_src_start( src )
if comp:
tiled[i].append( comp.text )
else:
tiled[i].append( "-" * sliced.text_size )
aln = bx.align.Alignment()
s = list()
for i, name in enumerate( self.sources ):
for n, s in enumerate(tiled[i]):
# Note: str.replace is literal text replacement; "\s" removes backslash-s sequences, not whitespace.
tiled[i][n] = s.strip().replace("\s", "")
text = str("".join( tiled[i]))
size = len( text ) - text.count( "-" )
if i == 0:
if ref_src_size is None: ref_src_size = len(self.fasta[chrom])
c = bx.align.Component( ref_src, start, end-start, "+", ref_src_size, text )
else:
c = bx.align.Component( name + ".fake", 0, size, "?", size, text )
aln.add_component( c )
if strand == "-":
aln = aln.reverse_complement()
return aln
def _lookup_tree(self):
"""
lookup dictionary for nodes
from: http://biopython.org/wiki/Phylo_cookbook
"""
names = {}
for clade in self.tree.find_clades():
if clade.name:
if clade.name in names:
raise ValueError("Duplicate key: %s" % clade.name)
names[clade.name] = clade
self.treeNodes = names
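The lookup built above maps clade names to nodes and refuses duplicates. The same duplicate-safe pattern, sketched with plain objects instead of Bio.Phylo clades:

```python
# Sketch of the duplicate-safe name lookup above, using plain objects in
# place of Bio.Phylo clades.
class Node(object):
    def __init__(self, name):
        self.name = name

def build_lookup(nodes):
    names = {}
    for node in nodes:
        if node.name:
            if node.name in names:
                raise ValueError("Duplicate key: %s" % node.name)
            names[node.name] = node
    return names

lookup = build_lookup([Node('hg19'), Node('mm10')])
```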
def _phylogenetic_distance_table(self):
"""precompute distances"""
distance = defaultdict()
for species in self.treeNodes.keys():
bl = 0
for b in self.tree.get_path(self.treeNodes[species]):
bl += b.branch_length
distance[species] = bl
self.phyloD = distance
def plot_tree(self):
"""show phylogenetic tree"""
import pylab
f = pylab.figure(figsize=(8,8))
ax = f.add_subplot(111)
y = Phylo.draw(self.tree, axes=ax)
class ce10MafRangeGetter(MafRangeGetter):
def __init__(self):
self.species = "ce10"
super(ce10MafRangeGetter, self).__init__()
chrs = ["I", "II", "III", "IV", "V", "X", "M"]
maf_files = [(os.path.join(conservation_basedir, "ce10_7way/", "multiz7way/", "chr" + c + ".maf.bz2"))\
for c in chrs]
self.index = bx.align.maf.MultiIndexed(maf_files, keep_open=True, parse_e_rows=True, use_cache=True)
self.fasta = pyfasta.Fasta(os.path.join(genome_basedir, "ce10/chromosomes/all.fa"), flatten_inplace=True)
treeFile = os.path.join(conservation_basedir, "ce10_7way/", "multiz7way/", "ce10.7way.nh")
self.tree = Phylo.read(treeFile, 'newick')
self.sources = [i.name for i in self.tree.get_terminals()]
self.tree.root_with_outgroup({"name":"ce10"})
self.tree.ladderize()
self._lookup_tree() #make tree search-able
self._phylogenetic_distance_table() #calculate the distance from each species to ce10
class hg19MafRangeGetter(MafRangeGetter):
def __init__(self):
self.species = "hg19"
super(hg19MafRangeGetter, self).__init__()
chrs = map(str, range(1,23)) + ["X", "Y", "M"]
maf_files = [os.path.join(conservation_basedir, "hg19_100way/multiz100way/maf/chr" + c + ".maf.bz2")\
for c in chrs]
self.index = bx.align.maf.MultiIndexed( maf_files, keep_open=True, parse_e_rows=True, use_cache=True )
self.fasta = pyfasta.Fasta(os.path.join(genome_basedir, "hg19/chromosomes/all.fa"), flatten_inplace=True)
treeFile = os.path.join(conservation_basedir, "hg19_100way/multiz100way/hg19.100way.nh")
self.tree = Phylo.read(treeFile, 'newick')
self.sources = [i.name for i in self.tree.get_terminals()]
self.tree.root_with_outgroup({"name":"hg19"})
self.tree.ladderize()
self._lookup_tree() #make tree search-able
self._phylogenetic_distance_table() #calculate the distance from each species to hg19
def revcom(s):
import string
complements = string.maketrans('acgtrymkbdhvACGTRYMKBDHV-', 'tgcayrkmvhdbTGCAYRKMVHDB-')
s = s.translate(complements)[::-1]
return s
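`revcom` above uses Python 2's `string.maketrans`; on Python 3 the same table lives on `str`. A Python 3 equivalent, kept separate so the original Python 2 function is untouched:

```python
# Python 3 equivalent of revcom: string.maketrans became str.maketrans.
# Handles IUPAC ambiguity codes and gaps, like the original table.
def revcom3(s):
    complements = str.maketrans('acgtrymkbdhvACGTRYMKBDHV-',
                                'tgcayrkmvhdbTGCAYRKMVHDB-')
    return s.translate(complements)[::-1]
```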
def chop_maf(maf, maxSize=2500, overlap=6, id = "none", verbose=False):
"""make smaller bits (<=maxSize) from a big maf range with little bits overlapping by $overlap nt"""
i = 0
j = maxSize
fullSize = len(maf.components[0].text)
while i < fullSize:
if i > 0 and verbose:
pDone = 100*float(i)/fullSize
sys.stderr.write( "chunking id:%s from %d-%d, %3.2f \r" %(id, i, j, pDone) )
yield maf.slice(i,j)
i = j - overlap
j = i + maxSize
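The index arithmetic in `chop_maf` can be seen on a plain string: each chunk starts `overlap` characters before the previous chunk ended. A sketch (hypothetical helper, not operating on maf objects):

```python
# The chunking arithmetic of chop_maf applied to a string: consecutive
# chunks share `overlap` characters.
def chop(text, max_size, overlap):
    i, j = 0, max_size
    while i < len(text):
        yield text[i:j]
        i = j - overlap
        j = i + max_size

chunks = list(chop('abcdefghij', 4, 1))
```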
def test():
"""for gpratt"""
getter = ce10MafRangeGetter()
lin41 = getter.tile_interval("chrI", 9335957, 9341889, '-')
assert lin41.slice(0,100).components[0].text == \
'ATGGCGA---------CCATC------------GTGCCATGCTCATTGGAGAAAGAAGAAGGAGCAC---------CATCA-------------------'
|
YeoLab/gscripts
|
gscripts/conservation/maf_handler.py
|
Python
|
mit
| 7,886
|
[
"Biopython"
] |
cc65ee2b55ecc33b50fc45d1bba1605236e2e68157db4d777df11f6c2acf28a5
|
#!/usr/bin/env python
# coding=utf-8
import glob
import click
import os
import json
import datetime
import re
import csv
from requests.exceptions import ConnectionError
from exchangelib import DELEGATE, IMPERSONATION, Account, Credentials, ServiceAccount, \
EWSDateTime, EWSTimeZone, Configuration, NTLM, CalendarItem, Message, \
Mailbox, Attendee, Q, ExtendedProperty, FileAttachment, ItemAttachment, \
HTMLBody, Build, Version
sendmail_secret = None
with open(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'secrets.json')) as data_file:
sendmail_secret = (json.load(data_file))['sendmail_win']
TO_REGISTER = 'Confirmed (to register)'
def dump_csv(res, output_filename, from_date):
keys = res[0].keys()
final_output_filename = '_'.join(['Output_sendmail',
output_filename,
from_date.strftime('%y%m%d'),
datetime.datetime.now().strftime('%H%M')
]) + '.csv'
with open(final_output_filename, 'w', newline='', encoding='utf-8') as output_file:
dict_writer = csv.DictWriter(output_file, keys)
dict_writer.writeheader()
dict_writer.writerows(res)
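`dump_csv` uses the `csv.DictWriter` header-then-rows pattern. The same pattern against an in-memory buffer instead of a file (stdlib only; column names here are illustrative):

```python
# The DictWriter pattern used by dump_csv, written to an in-memory buffer.
import csv
import io

def rows_to_csv(rows):
    keys = list(rows[0].keys())
    buf = io.StringIO()
    writer = csv.DictWriter(buf, keys)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

text = rows_to_csv([{'hotel': 'A', 'ref': '1'},
                    {'hotel': 'B', 'ref': '2'}])
```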
@click.command()
@click.option('--filename', default='output_hotel_ref_')
@click.option('--email', default='no-reply@gta-travel.com')
def sendmail_win_cs(filename, email):
target_filename = filename + '*.csv'
newest = max(glob.iglob(target_filename), key=os.path.getctime)
print('newest file: ' + newest)
today_date = datetime.datetime.now().strftime('%y%m%d')
try:
newest_date = re.search( filename + r'(\d+)', newest).group(1)
except AttributeError:
newest_date = ''
print('newest date: ' + newest_date)
if newest_date != today_date:
print('Error: newest date != today date; manual intervention needed.')
return
print('Setting account..')
# Username in WINDOMAIN\username format. Office365 wants usernames in PrimarySMTPAddress
# ('myusername@example.com') format. UPN format is also supported.
credentials = Credentials(username='APACNB\\809452', password=sendmail_secret['password'])
print('Discovering..')
# If the server doesn't support autodiscover, use a Configuration object to set the server
# location:
config = Configuration(server='emailuk.kuoni.com', credentials=credentials)
try:
account = Account(primary_smtp_address=email, config=config,
autodiscover=False, access_type=DELEGATE)
except ConnectionError as e:
print('Fatal: Connection Error.. aborted..')
return
print('Logged in as: ' + str(email))
recipient_email = 'yu.leng@gta-travel.com'
recipient_email1 = 'Alex.Sha@gta-travel.com'
recipient_email2 = 'will.he@gta-travel.com'
recipient_email3 = 'Crystal.liu@gta-travel.com'
recipient_email4 = 'lily.yu@gta-travel.com'
recipient_email6 = 'intern.shanghai@gta-travel.com'
recipient_email5 = 'Emilie.wang@gta-travel.com'
body_text = 'FYI\n' + \
'Best\n' + \
'-Yu'
title_text = '[[[ Ctrip hotel reference ]]]'
# Or, if you want a copy in e.g. the 'Sent' folder
m = Message(
account=account,
folder=account.sent,
sender=Mailbox(email_address=email),
author=Mailbox(email_address=email),
subject=title_text,
body=body_text,
# to_recipients=[Mailbox(email_address=recipient_email1),
# Mailbox(email_address=recipient_email2),
# Mailbox(email_address=recipient_email3)
# ]
# to_recipients=[Mailbox(email_address=recipient_email1),
# Mailbox(email_address=recipient_email2),
# Mailbox(email_address=recipient_email3),
# Mailbox(email_address=recipient_email4),
# Mailbox(email_address=recipient_email5)
# ]
to_recipients=[Mailbox(email_address=recipient_email1),
Mailbox(email_address=recipient_email2),
Mailbox(email_address=recipient_email3),
Mailbox(email_address=recipient_email4)
]
)
with open(newest, 'rb') as f:
update_csv = FileAttachment(name=newest, content=f.read())
m.attach(update_csv)
m.send_and_save()
print('Message sent.. ')
if __name__ == '__main__':
sendmail_win_cs()
|
Fatman13/gta_swarm
|
sendmail_win_cs.py
|
Python
|
mit
| 4,113
|
[
"CRYSTAL"
] |
fdac5709d591a57589188639f353e6f16e4f9510242f017eb1c56b901a77c491
|
from time import time
try:
from cStringIO import StringIO # Faster, where available
except:
from StringIO import StringIO
import sys
from tests.web2unittest import Web2UnitTest
from gluon import current
try:
from twill import get_browser
from twill import set_output
from twill.browser import *
except ImportError:
raise NameError("Twill not installed")
try:
import mechanize
# from mechanize import BrowserStateError
# from mechanize import ControlNotFoundError
except ImportError:
raise NameError("Mechanize not installed")
class BrokenLinkTest(Web2UnitTest):
""" Selenium Unit Test """
def __init__(self):
Web2UnitTest.__init__(self)
self.b = get_browser()
self.b_data = StringIO()
set_output(self.b_data)
self.clearRecord()
# This string must exist in the URL for it to be followed
# Useful to avoid going to linked sites
self.homeURL = self.url
# Link used to identify a URL to a ticket
self.url_ticket = "/admin/default/ticket/"
# Tuple of strings; any URL containing one of them will be ignored
# Useful to avoid dynamic URLs that trigger the same functionality
self.include_ignore = ("_language=",
"logout",
"appadmin",
"admin"
)
# tuple of strings that should be removed from the URL before storing
# Typically this will be some variables passed in via the URL
self.strip_url = ("?_next=",
)
self.maxDepth = 16 # sanity check
self.setUser("test@example.com/eden")
self.linkDepth = []
def clearRecord(self):
# list of links that return a http_code other than 200
# with the key being the URL and the value the http code
self.brokenLinks = dict()
# List of links visited (key) with the parent
self.urlParentList = dict()
# List of links visited (key) with the depth
self.urlList = dict()
# List of urls for each model
self.model_url = dict()
self.totalLinks = 0
def setDepth(self, depth):
self.maxDepth = depth
def setUser(self, user):
self.credentials = user.split(",")
def login(self, credentials):
if credentials == "UNAUTHENTICATED":
url = "%s/default/user/logout" % self.homeURL
self.b.go(url)
return True
try:
(self.user, self.password) = credentials.split("/",1)
except ValueError:
msg = "Unable to split %s into a user name and password" % credentials
self.reporter(msg)
return False
url = "%s/default/user/login" % self.homeURL
self.b.go(url)
forms = self.b.get_all_forms()
for form in forms:
try:
if form["_formname"] == "login":
self.b._browser.form = form
form["email"] = self.user
form["password"] = self.password
self.b.submit("Login")
# If login is successful then should be redirected to the homepage
return self.b.get_url()[len(self.homeURL):] == "/default/index"
except:
# This should be a mechanize.ControlNotFoundError, but
# for some unknown reason that isn't caught on Windows or Mac
pass
return False
def runTest(self):
"""
Test to find all exposed links and check the http code returned.
This test doesn't run any javascript so some false positives
will be found.
The test can also display a histogram depicting the number of
links found at each depth.
"""
for user in self.credentials:
self.clearRecord()
if self.login(user):
self.visitLinks()
def visitLinks(self):
url = self.homeURL
to_visit = [url]
start = time()
for depth in range(self.maxDepth):
if len(to_visit) == 0:
break
self.linkDepth.append(len(to_visit))
self.totalLinks += len(to_visit)
visit_start = time()
url_visited = "%d urls" % len(to_visit)
to_visit = self.visit(to_visit, depth)
msg = "%.2d Visited %s in %.3f seconds, %d more urls found" % (depth, url_visited, time()-visit_start, len(to_visit))
self.reporter(msg)
if self.config.verbose == 2:
if self.stdout.isatty(): # terminal should support colour
msg = "%.2d Visited \033[1;32m%s\033[0m in %.3f seconds, \033[1;31m%d\033[0m more urls found" % (depth, url_visited, time()-visit_start, len(to_visit))
print >> self.stdout, msg
if len(to_visit) > 0:
self.linkDepth.append(len(to_visit))
finish = time()
self.report()
self.reporter("Finished took %.3f seconds" % (finish - start))
self.report_link_depth()
# self.report_model_url()
def add_to_model(self, url, depth, parent):
start = url.find(self.homeURL) + len(self.homeURL)
end = url.find("/",start)
model = url[start:end]
if model in self.model_url:
self.model_url[model].append((url, depth, parent))
else:
self.model_url[model] = [(url, depth, parent)]
def visit(self, url_list, depth):
repr_list = [".pdf", ".xls", ".rss", ".kml"]
to_visit = []
for visited_url in url_list:
index_url = visited_url[len(self.homeURL):]
# Find out if the page can be visited
open_novisit = False
for repr in repr_list:
if repr in index_url:
open_novisit = True
break
try:
if open_novisit:
self.b._journey("open_novisit", visited_url)
http_code = self.b.get_code()
if http_code != 200: # an error situation
self.b.go(visited_url)
http_code = self.b.get_code()
else:
self.b.go(visited_url)
http_code = self.b.get_code()
except Exception as e:
import traceback
print traceback.format_exc()
self.brokenLinks[index_url] = ("-","Exception raised")
continue
http_code = self.b.get_code()
if http_code != 200:
url = "<a href=%s target=\"_blank\">URL</a>" % (visited_url)
self.brokenLinks[index_url] = (http_code,url)
elif open_novisit:
continue
links = []
try:
if self.b._browser.viewing_html():
links = self.b._browser.links()
else:
continue
except Exception as e:
import traceback
print traceback.format_exc()
self.brokenLinks[index_url] = ("-","Exception raised")
continue
for link in (links):
url = link.absolute_url
if url.find(self.url_ticket) != -1:
# A ticket was raised so...
# capture the details and add to brokenLinks
if current.test_config.html:
ticket = "<a href=%s target=\"_blank\">Ticket</a> at <a href=%s target=\"_blank\">URL</a>" % (url,visited_url)
else:
ticket = "Ticket: %s" % url
self.brokenLinks[index_url] = (http_code,ticket)
break # no need to check any other links on this page
if url.find(self.homeURL) == -1:
continue
ignore_link = False
for ignore in self.include_ignore:
if url.find(ignore) != -1:
ignore_link = True
break
if ignore_link:
continue
for strip in self.strip_url:
location = url.find(strip)
if location != -1:
url = url[0:location]
if url not in self.urlList:
self.urlList[url] = depth
self.urlParentList[url[len(self.homeURL):]] = visited_url
self.add_to_model(url, depth, visited_url)
if url not in to_visit:
to_visit.append(url)
return to_visit
def report(self):
# print "Visited pages"
# n = 1
# for (url, depth) in self.urlList.items():
# print "%d. depth %d %s" % (n, depth, url,)
# n += 1
self.reporter("%d URLs visited" % self.totalLinks)
self.reporter("Broken Links")
n = 1
for (url, result) in self.brokenLinks.items():
http_code = result[0]
try:
parent = self.urlParentList[url]
if current.test_config.html:
parent = "<a href=%s target=\"_blank\">Parent</a>" % (parent)
except:
parent = "unknown"
if len(result) == 1:
self.reporter("%3d. (%s) %s called from %s" % (n,
http_code,
url,
parent
)
)
else:
self.reporter("%3d. (%s-%s) %s called from %s" % (n,
http_code,
result[1],
url,
parent
)
)
n += 1
def report_link_depth(self):
"""
Method to draw a histogram of the number of new links
discovered at each depth.
(i.e. show how many links are required to reach a link)
"""
try:
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
self.FigureCanvas = FigureCanvas
from matplotlib.figure import Figure
self.Figure = Figure
from numpy import arange
except ImportError:
return
self.reporter("Analysis of link depth")
fig = Figure(figsize=(4, 2.5))
# Draw a histogram
width = 0.9
rect = [0.12, 0.08, 0.9, 0.85]
ax = fig.add_axes(rect)
left = arange(len(self.linkDepth))
plot = ax.bar(left, self.linkDepth, width=width)
# Add the x axis labels
ax.set_xticks(left+(width*0.5))
ax.set_xticklabels(left)
chart = StringIO()
canvas = self.FigureCanvas(fig)
canvas.print_figure(chart)
image = chart.getvalue()
import base64
base64Img = base64.b64encode(image)
image = "<img src=\"data:image/png;base64,%s\">" % base64Img
self.reporter(image)
def report_model_url(self):
print "Report breakdown by module"
for (model, value) in self.model_url.items():
print model
for ud in value:
url = ud[0]
depth = ud[1]
parent = ud[2]
tabs = "\t" * depth
print "%s %s-%s (parent url - %s)" % (tabs, depth, url, parent)
| ashwyn/eden-message_parser | modules/tests/smoke/broken_links.py | Python | mit | 11,841 | ["VisIt"] | cb18ace14edcca187e776c1345faaabd81d936cbee3ff6d0727c3a7aeabf0d43 |
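The `visit()` method above harvests page links through mechanize's browser object (`self.b._browser.links()`). The same harvesting step can be sketched with only the standard library; the `extract_links` function and `_LinkParser` class are illustrative names, not part of the Eden test suite:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class _LinkParser(HTMLParser):
    """Collect href attributes from anchor tags."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page URL,
                    # mirroring mechanize's link.absolute_url
                    self.links.append(urljoin(self.base_url, value))

def extract_links(html, base_url):
    parser = _LinkParser(base_url)
    parser.feed(html)
    return parser.links
```

`visit()` would then keep only the links that start with `homeURL` and truncate any suffix listed in `strip_url`, exactly as the loop above does.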
import numpy as np
import math
from scipy import stats
import matplotlib.pyplot as plt
import matplotlib.gridspec
from sklearn.mixture import GMM
def exp_max_test(max_wij_resid_sum=0.0001):
# Author: Andrew Schechtman-Rook
# This is an attempt to implement the expectation-maximization algorithm from 4.4.3.
# First I'll try to implement it for Gaussians, but I think it would also work for a sum of exponentials
true_a = np.array([400,500,600])
true_mu = np.array([0.5,0.0,-0.2])
true_sig = np.array([0.1,0.3,0.5])
xvals = np.zeros(0)
for i in range(len(true_a)):
xvals = np.append(xvals,np.random.normal(loc=true_mu[i],scale=true_sig[i],size=true_a[i]))
numgaussians_guess = 3
sig_guess = np.ones(numgaussians_guess)*np.std(xvals,ddof=1)
a_guess = np.ones(numgaussians_guess)/float(numgaussians_guess)
mu_guess = xvals[np.random.randint(0,len(xvals),size=numgaussians_guess)]
wij_guess = compute_wij(a_guess,mu_guess,sig_guess,xvals)
#print sig_guess
#print a_guess
#print mu_guess
converged = False
while not converged:
new_a = compute_a(wij_guess)
new_mu = compute_mu(wij_guess,xvals)
new_sig = compute_sig(wij_guess,new_mu,xvals)
new_wij = compute_wij(new_a,new_mu,new_sig,xvals)
wij_resids = np.abs(new_wij-wij_guess)
sum_resids = np.sum(wij_resids)
a_guess = new_a
mu_guess = new_mu
sig_guess = new_sig
wij_guess = new_wij
if sum_resids <= max_wij_resid_sum:
converged = True
plot_xes = np.linspace(xvals.min(),xvals.max(),1000)
plot_yes = plot_xes*0
for i in range(len(a_guess)):
plot_yes += a_guess[i]*np.exp(-(plot_xes-mu_guess[i])**2/(2*sig_guess[i]**2))/(sig_guess[i]*np.sqrt(2*math.pi))
sorted_true = np.argsort(true_a)
sorted_guess = np.argsort(a_guess)
print true_a[sorted_true]/float(np.sum(true_a)),a_guess[sorted_guess]
print true_mu[sorted_true],mu_guess[sorted_guess]
print true_sig[sorted_true],sig_guess[sorted_guess]
ax = plt.figure().add_subplot(111)
ax.plot(plot_xes,plot_yes,ls='-',color='black',lw=3)
ax.hist(xvals,30,normed=True,histtype='stepfilled',alpha=0.4)
ax.figure.savefig('exp_max_test.png',dpi=300)
def compute_wij(a,mu,sig,xvals):
gaussvals = np.zeros((len(a),len(xvals)))
gauss_sum = np.zeros(len(xvals))
for i in range(len(a)):
gaussvals[i,:] = a[i]*np.exp(-(xvals-mu[i])**2/(2*sig[i]**2))/(sig[i]*np.sqrt(2*math.pi))
gauss_sum += gaussvals[i,:]
return gaussvals/gauss_sum
def compute_a(wij):
return np.sum(wij,axis=1)/float(wij.shape[1])
def compute_mu(wij,xvals):
numerator = np.sum(wij*xvals,axis=1)
denominator = np.sum(wij,axis=1)
return numerator/denominator
def compute_sig(wij,mu,xvals):
# Argument order matches the call in exp_max_test; broadcasting mu
# against xvals replaces the shape-mismatched vstack/transpose version
numerator = np.sum(wij*(xvals-mu[:,np.newaxis])**2,axis=1)
denominator = np.sum(wij,axis=1)
return np.sqrt(numerator/denominator)
def make_fig_4_1():
# Author: Jake VanderPlas
# Modified by Andrew Schechtman-Rook
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
#setup_text_plots(fontsize=8, usetex=True)
#------------------------------------------------------------
# Generate Dataset
np.random.seed(1)
N = 50
L0 = 10
dL = 0.2
t = np.linspace(0, 1, N)
L_obs = np.random.normal(L0, dL, N)
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(5, 5))
fig.subplots_adjust(left=0.1, right=0.95, wspace=0.05,
bottom=0.1, top=0.95, hspace=0.05)
y_vals = [L_obs, L_obs, L_obs, L_obs + 0.5 - t ** 2]
y_errs = [dL, dL * 2, dL / 2, dL]
titles = ['correct errors',
'overestimated errors',
'underestimated errors',
'incorrect model']
for i in range(4):
ax = fig.add_subplot(2, 2, 1 + i, xticks=[])
# compute the mean and the chi^2/dof
mu = np.mean(y_vals[i])
z = (y_vals[i] - mu) / y_errs[i]
chi2 = np.sum(z ** 2)
chi2dof = chi2 / (N - 1)
# compute the standard deviations of chi^2/dof
sigma = np.sqrt(2. / (N - 1))
nsig = (chi2dof - 1) / sigma
# plot the points with errorbars
ax.errorbar(t, y_vals[i], y_errs[i], fmt='.k', ecolor='gray', lw=1)
ax.plot([-0.1, 1.3], [L0, L0], ':k', lw=1)
# Add labels and text
ax.text(0.95, 0.95, titles[i], ha='right', va='top',
transform=ax.transAxes,
bbox=dict(boxstyle='round', fc='w', ec='k'))
ax.text(0.02, 0.02, r'$\hat{\mu} = %.2f$' % mu, ha='left', va='bottom',
transform=ax.transAxes)
ax.text(0.98, 0.02,
r'$\chi^2_{\rm dof} = %.2f\, (%.2g\,\sigma)$' % (chi2dof, nsig),
ha='right', va='bottom', transform=ax.transAxes)
# set axis limits
ax.set_xlim(-0.05, 1.05)
ax.set_ylim(8.6, 11.4)
# set ticks and labels
ax.yaxis.set_major_locator(plt.MultipleLocator(1))
if i > 1:
ax.set_xlabel('observations')
if i % 2 == 0:
ax.set_ylabel('Luminosity')
else:
ax.yaxis.set_major_formatter(plt.NullFormatter())
fig.savefig('chap_4-make_fig_4_1.eps')
def make_fig_4_2():
# Author: Jake VanderPlas
# Modified by Andrew Schechtman-Rook
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
#------------------------------------------------------------
# Set up the dataset.
# We'll use scikit-learn's Gaussian Mixture Model to sample
# data from a mixture of Gaussians. The usual way of using
# this involves fitting the mixture to data: we'll see that
# below. Here we'll set the internal means, covariances,
# and weights by-hand.
np.random.seed(1)
gmm = GMM(3, n_iter=1)
gmm.means_ = np.array([[-1], [0], [3]])
gmm.covars_ = np.array([[1.5], [1], [0.5]]) ** 2
gmm.weights_ = np.array([0.3, 0.5, 0.2])
X = gmm.sample(10000)
#------------------------------------------------------------
# Learn the best-fit GMM models
# Here we'll use GMM in the standard way: the fit() method
# uses an Expectation-Maximization approach to find the best
# mixture of Gaussians for the data
# fit models with 1-10 components
N = np.arange(1, 11)
models = [None for i in range(len(N))]
for i in range(len(N)):
models[i] = GMM(N[i]).fit(X)
# compute the AIC and the BIC
AIC = [m.aic(X) for m in models]
BIC = [m.bic(X) for m in models]
#------------------------------------------------------------
# Plot the results
# We'll use three panels:
# 1) data + best-fit mixture
# 2) AIC and BIC vs number of components
# 3) probability that a point came from each component
fig = plt.figure(figsize=(9,3))
#fig.subplots_adjust(left=0.12, right=0.97, bottom=0.21, top=0.9, wspace=0.5)
gs = matplotlib.gridspec.GridSpec(1,3)
# plot 1: data + best-fit mixture
ax = fig.add_subplot(gs[0,0])
M_best = models[np.argmin(AIC)]
x = np.linspace(-6, 6, 1000)
logprob, responsibilities = M_best.eval(x)
pdf = np.exp(logprob)
pdf_individual = responsibilities * pdf[:, np.newaxis]
ax.hist(X, 30, normed=True, histtype='stepfilled', alpha=0.4)
ax.plot(x, pdf, '-k')
ax.plot(x, pdf_individual, '--k')
ax.text(0.04, 0.96, "Best-fit Mixture",
ha='left', va='top', transform=ax.transAxes)
ax.set_xlabel('$x$')
ax.set_ylabel('$p(x)$')
# plot 2: AIC and BIC
ax = fig.add_subplot(gs[0,1])
ax.plot(N, AIC, '-k', label='AIC')
ax.plot(N, BIC, '--k', label='BIC')
ax.set_xlabel('n. components')
ax.set_ylabel('information criterion')
ax.legend(loc=2)
# plot 3: posterior probabilities for each component
ax = fig.add_subplot(gs[0,2])
p = M_best.predict_proba(x)
p = p[:, (1, 0, 2)] # rearrange order so the plot looks better
p = p.cumsum(1).T
ax.fill_between(x, 0, p[0], color='gray', alpha=0.3)
ax.fill_between(x, p[0], p[1], color='gray', alpha=0.5)
ax.fill_between(x, p[1], 1, color='gray', alpha=0.7)
ax.set_xlim(-6, 6)
ax.set_ylim(0, 1)
ax.set_xlabel('$x$')
ax.set_ylabel(r'$p({\rm class}|x)$')
ax.text(-5, 0.3, 'class 1', rotation='vertical')
ax.text(0, 0.5, 'class 2', rotation='vertical')
ax.text(3, 0.3, 'class 3', rotation='vertical')
fig.tight_layout()
fig.savefig('chap_4-make_fig_4_2.png',dpi=300)
if __name__ == '__main__':
exp_max_test()
| AndrewRook/machine_learning | chap_4.py | Python | mit | 10,072 | ["Gaussian"] | be95992b4973c95e2551d90afa29c85a07e92794f044793d89f7ef812f800040 |
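`exp_max_test()` above iterates the E and M steps until the responsibilities w_ij stop changing. The same updates for a 1-D Gaussian mixture can be restated as a compact self-contained sketch; the `em_1d` name, initial values, and fixed iteration count are illustrative choices, not taken from the file above:

```python
import numpy as np

def em_1d(xvals, mu, sig, a, n_iter=200):
    """Run n_iter EM updates for a 1-D Gaussian mixture.

    mu, sig, a are the initial component means, widths, and weights."""
    for _ in range(n_iter):
        # E step: responsibilities w_ij (components in rows, points in columns)
        norm = sig[:, None] * np.sqrt(2 * np.pi)
        g = a[:, None] * np.exp(-(xvals - mu[:, None]) ** 2 / (2 * sig[:, None] ** 2)) / norm
        w = g / g.sum(axis=0)
        # M step: weighted updates of the weights, means, and widths
        a = w.sum(axis=1) / len(xvals)
        mu = (w * xvals).sum(axis=1) / w.sum(axis=1)
        sig = np.sqrt((w * (xvals - mu[:, None]) ** 2).sum(axis=1) / w.sum(axis=1))
    return a, mu, sig

# Two well-separated components; a fixed seed keeps the run reproducible
rng = np.random.RandomState(0)
x = np.append(rng.normal(0.0, 1.0, 500), rng.normal(5.0, 1.0, 500))
a, mu, sig = em_1d(x, mu=np.array([1.0, 4.0]),
                   sig=np.array([1.0, 1.0]), a=np.array([0.5, 0.5]))
```

With well-separated components the recovered means land close to the true 0 and 5, and the weights stay near 0.5 each.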
import numpy as np
from ase.dft.stm import STM
from ase.dft.dos import DOS
from ase.dft.wannier import Wannier
def monkhorst_pack(size):
if np.less_equal(size, 0).any():
raise ValueError('Illegal size: %s' % list(size))
kpts = np.indices(size).transpose((1, 2, 3, 0)).reshape((-1, 3))
return (kpts + 0.5) / size - 0.5
def get_distribution_moment(x, y, order=0):
"""Return the moment of nth order of distribution.
1st and 2nd order moments of a band correspond to the band's
center and width respectively.
For integration, the trapezoid rule is used.
"""
x = np.asarray(x)
y = np.asarray(y)
if order==0:
return np.trapz(y, x)
elif isinstance(order, int):
return np.trapz(x**order * y, x) / np.trapz(y, x)
elif hasattr(order, '__iter__'):
return [get_distribution_moment(x, y, n) for n in order]
else:
raise ValueError('Illegal order: %s' % order)
| freephys/python_ase | ase/dft/__init__.py | Python | gpl-3.0 | 966 | ["ASE"] | af738643c8041cb92774439bb60013a3351d4dba1fe0e38a5a03179c6c1c8bf3 |
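`get_distribution_moment()` above defines a band's center and width via its 1st and 2nd moments, integrated with the trapezoid rule. A quick numerical check on a Gaussian-shaped band (the values are illustrative; the local `trapz` helper spells the rule out rather than depending on the `np.trapz`/`np.trapezoid` rename across NumPy versions):

```python
import numpy as np

def trapz(y, x):
    # Composite trapezoid rule over a (possibly non-uniform) grid
    dx = np.diff(x)
    return np.sum(dx * (y[1:] + y[:-1]) / 2.0)

# A Gaussian "band" centered at e0 with width w: the 1st moment should
# recover e0 and the 2nd moment should recover e0**2 + w**2
e0, w = -1.5, 0.4
x = np.linspace(-5, 2, 2001)
y = np.exp(-(x - e0) ** 2 / (2 * w ** 2))

norm = trapz(y, x)                     # 0th moment (area under the band)
center = trapz(x * y, x) / norm        # 1st moment: band center
second = trapz(x ** 2 * y, x) / norm   # 2nd moment: center**2 + width**2
```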
# -*- coding: utf-8 -*-
# vi:si:et:sw=4:sts=4:ts=4
##
## Copyright (C) 2013 Async Open Source <http://www.async.com.br>
## All rights reserved
##
## This program is free software; you can redistribute it and/or modify
## it under the terms of the GNU Lesser General Public License as published by
## the Free Software Foundation; either version 2 of the License, or
## (at your option) any later version.
##
## This program is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
## GNU Lesser General Public License for more details.
##
## You should have received a copy of the GNU Lesser General Public License
## along with this program; if not, write to the Free Software
## Foundation, Inc., or visit: http://www.gnu.org/.
##
## Author(s): Stoq Team <stoq-devel@async.com.br>
##
##
import collections
import datetime
import decimal
import gtk
from kiwi import ValueUnset
from kiwi.currency import currency
from kiwi.datatypes import ValidationError
from kiwi.python import Settable
from kiwi.ui.forms import PriceField, NumericField
from kiwi.ui.objectlist import Column
import pango
from storm.expr import And, Eq, Or
from stoqlib.api import api
from stoqlib.database.expr import Field
from stoqlib.domain.person import Employee
from stoqlib.domain.sellable import Sellable
from stoqlib.domain.views import SellableFullStockView
from stoqlib.domain.workorder import (WorkOrder, WorkOrderItem,
WorkOrderHistoryView)
from stoqlib.gui.base.dialogs import run_dialog
from stoqlib.gui.dialogs.batchselectiondialog import BatchDecreaseSelectionDialog
from stoqlib.gui.dialogs.credentialsdialog import CredentialsDialog
from stoqlib.gui.editors.baseeditor import BaseEditorSlave, BaseEditor
from stoqlib.gui.editors.noteeditor import NoteEditor
from stoqlib.gui.wizards.abstractwizard import SellableItemSlave
from stoqlib.lib.dateutils import localtoday
from stoqlib.lib.decorators import cached_property
from stoqlib.lib.defaults import QUANTITY_PRECISION, MAX_INT
from stoqlib.lib.formatters import format_quantity, format_sellable_description
from stoqlib.lib.translation import stoqlib_gettext
_ = stoqlib_gettext
class _WorkOrderItemBatchSelectionDialog(BatchDecreaseSelectionDialog):
# When indicating the batches to reserve items below, make sure the user
# doesn't select more batches than he selected to reserve.
validate_max_quantity = True
class _WorkOrderItemEditor(BaseEditor):
model_name = _(u'Work order item')
model_type = WorkOrderItem
confirm_widgets = ['price', 'quantity', 'quantity_reserved']
@cached_property()
def fields(self):
return collections.OrderedDict(
price=PriceField(_(u'Price'), proxy=True, mandatory=True),
quantity=NumericField(_(u'Quantity'), proxy=True, mandatory=True),
quantity_reserved=NumericField(_(u'Reserved quantity')),
)
def __init__(self, store, model, visual_mode=False):
self._original_quantity_decreased = model.quantity_decreased
self.manager = None
BaseEditor.__init__(self, store, model, visual_mode=visual_mode)
self.price.set_icon_activatable(gtk.ENTRY_ICON_PRIMARY,
activatable=True)
#
# BaseEditor
#
def setup_proxies(self):
unit = self.model.sellable.unit
digits = QUANTITY_PRECISION if unit and unit.allow_fraction else 0
for widget in [self.quantity, self.quantity_reserved]:
widget.set_digits(digits)
self.quantity.set_range(1, MAX_INT)
# If there's a sale, we can't change its quantity, but we can
# reserve/return_to_stock them. On the other hand, if there's no sale,
# the quantity_reserved must be in sync with quantity
# *Only products with stock control can be reserved
storable = self.model.sellable.product_storable
if self.model.order.sale_id is not None and storable:
self.price.set_sensitive(False)
self.quantity.set_sensitive(False)
self.quantity_reserved.set_range(0, self.model.quantity)
else:
self.quantity_reserved.set_range(0, MAX_INT)
self.quantity_reserved.set_visible(False)
self.fields['quantity_reserved'].label_widget.set_visible(False)
# We need to add quantity_reserved to a proxy or else its validate
# method won't do anything
self.add_proxy(
Settable(quantity_reserved=self.model.quantity_decreased),
['quantity_reserved'])
def on_confirm(self):
diff = (self.quantity_reserved.read() -
self._original_quantity_decreased)
if diff == 0:
return
elif diff < 0:
self.model.return_to_stock(-diff)
return
storable = self.model.sellable.product_storable
# This can only happen for diff > 0. If the product is marked to
# control batches, no decrease should have been made without
# specifying a batch on the item
if storable and storable.is_batch and self.model.batch is None:
# The only way self.model.batch is None is that this item
# was created on a sale quote and thus it has a sale_item
sale_item = self.model.sale_item
batches = run_dialog(
_WorkOrderItemBatchSelectionDialog, self, self.store,
model=storable, quantity=diff)
if not batches:
return
for s_item in [sale_item] + sale_item.set_batches(batches):
wo_item = WorkOrderItem.get_from_sale_item(self.store,
s_item)
if wo_item.batch is not None:
wo_item.reserve(wo_item.quantity)
elif storable:
self.model.reserve(diff)
#
# Private
#
def _validate_quantity(self, value):
storable = self.model.sellable.product_storable
if storable is None:
return
if self.model.batch is not None:
balance = self.model.batch.get_balance_for_branch(
self.model.order.branch)
else:
balance = storable.get_balance_for_branch(self.model.order.branch)
if value > self._original_quantity_decreased + balance:
return ValidationError(
_(u"This quantity is not available in stock"))
#
# Callbacks
#
def on_price__validate(self, widget, value):
if value <= 0:
return ValidationError(_(u"The price must be greater than 0"))
sellable = self.model.sellable
self.manager = self.manager or api.get_current_user(self.store)
# FIXME: Because of the design of the editor, the client
# could not be set yet.
category = self.model.order.client and self.model.order.client.category
valid_data = sellable.is_valid_price(value, category, self.manager)
if not valid_data['is_valid']:
return ValidationError(
(_(u'Max discount for this product is %.2f%%.') %
valid_data['max_discount']))
def on_price__icon_press(self, entry, icon_pos, event):
if icon_pos != gtk.ENTRY_ICON_PRIMARY:
return
# Ask for the credentials of a different user that can possibly allow a
# bigger discount.
self.manager = run_dialog(CredentialsDialog, self, self.store)
if self.manager:
self.price.validate(force=True)
def on_quantity__content_changed(self, entry):
# Check that the 'quantity' widget is valid before updating 'quantity_reserved'.
# We need to do this because the 'validate' signal of 'quantity_reserved'
# is not emitted when we force an update of that widget
if self.quantity.validate() is not ValueUnset:
self.quantity_reserved.update(entry.read())
def on_quantity__validate(self, entry, value):
if value <= 0:
return ValidationError("The quantity must be greater than 0")
return self._validate_quantity(value)
def on_quantity_reserved__validate(self, widget, value):
return self._validate_quantity(value)
class _WorkOrderItemSlave(SellableItemSlave):
model_type = WorkOrder
summary_label_text = '<b>%s</b>' % api.escape(_("Total:"))
sellable_view = SellableFullStockView
item_editor = _WorkOrderItemEditor
validate_stock = True
validate_price = True
value_column = 'price'
batch_selection_dialog = BatchDecreaseSelectionDialog
def __init__(self, store, parent, model=None, visual_mode=False):
super(_WorkOrderItemSlave, self).__init__(store, parent, model=model,
visual_mode=visual_mode)
# If the workorder already has a sale, we cannot add items directly
# to the work order, but must use the sale editor to do so.
self.hide_add_button()
if model.sale_id:
self.hide_del_button()
self.hide_item_addition_toolbar()
self.slave.set_message(
_(u"This order is related to a sale. Edit the sale if you "
u"need to change the items"))
# If the work order is not in its original branch, don't allow
# the user to edit it (the edit is used to change quantity or
# reserve/return_to_stock them)
if model.branch_id != model.current_branch_id:
self.hide_del_button()
self.hide_item_addition_toolbar()
self.hide_edit_button()
#
# SellableItemSlave
#
def get_columns(self, editable=True):
return [
Column('sellable.code', title=_(u'Code'),
data_type=str, visible=False),
Column('sellable.barcode', title=_(u'Barcode'),
data_type=str, visible=False),
Column('sellable.description', title=_(u'Description'),
data_type=str, expand=True,
format_func=self._format_description, format_func_data=True),
Column('price', title=_(u'Price'),
data_type=currency),
Column('quantity', title=_(u'Quantity'),
data_type=decimal.Decimal, format_func=format_quantity),
Column('quantity_decreased', title=_(u'Consumed quantity'),
data_type=decimal.Decimal, format_func=format_quantity),
Column('total', title=_(u'Total'),
data_type=currency),
]
def get_remaining_quantity(self, sellable, batch=None):
# The original get_remaining_quantity will take items of the same
# sellable on the list here and discount them from the balance. We
# can't allow that since, unlike other SellableItemSlave subclasses,
# the stock is decreased as soon as the item is added.
storable = sellable.product_storable
if storable:
return storable.get_balance_for_branch(self.model.branch)
else:
return None
def get_saved_items(self):
return self.model.order_items
def get_order_item(self, sellable, price, quantity, batch=None):
item = self.model.add_sellable(sellable, price=price,
quantity=quantity, batch=batch)
# Storable items added here are consumed at the same time
storable = item.sellable.product_storable
if storable:
item.reserve(quantity)
return item
def get_sellable_view_query(self):
return (self.sellable_view,
# FIXME: How to do this using sellable_view.find_by_branch ?
And(Or(Field('_stock_summary', 'branch_id') == self.model.branch.id,
Eq(Field('_stock_summary', 'branch_id'), None)),
Sellable.get_available_sellables_query(self.store)))
def get_batch_items(self):
# FIXME: Since the item will have its stock synchronized above
# (on sellable_selected) and thus have its stock decreased,
# we can't pass anything here. Find a better way to do this
return []
#
# Private
#
def _format_description(self, item, data):
return format_sellable_description(item.sellable, item.batch)
class WorkOrderOpeningSlave(BaseEditorSlave):
gladefile = 'WorkOrderOpeningSlave'
model_type = WorkOrder
proxy_widgets = [
'defect_reported',
'open_date',
]
#
# BaseEditorSlave
#
def setup_proxies(self):
# Set sensitivity before adding the proxy, otherwise, the open date will
# be changed.
if not api.sysparam.get_bool('ALLOW_OUTDATED_OPERATIONS'):
self.open_date.set_sensitive(False)
self.add_proxy(self.model, self.proxy_widgets)
class WorkOrderQuoteSlave(BaseEditorSlave):
gladefile = 'WorkOrderQuoteSlave'
model_type = WorkOrder
proxy_widgets = [
'defect_detected',
'quote_responsible',
'description',
'estimated_cost',
'estimated_finish',
'estimated_hours',
'estimated_start',
]
#: If we should show an entry for the description
#: (allowing it to be set or changed).
show_description_entry = False
#
# BaseEditorSlave
#
def setup_proxies(self):
self._new_model = False
self._fill_quote_responsible_combo()
if not self.show_description_entry:
self.description.hide()
self.description_lbl.hide()
self.add_proxy(self.model, self.proxy_widgets)
def on_attach(self, editor):
self._new_model = not editor.edit_mode
#
# Private
#
def _fill_quote_responsible_combo(self):
employees = Employee.get_active_employees(self.store)
self.quote_responsible.prefill(api.for_person_combo(employees))
#
# Callbacks
#
def on_estimated_start__validate(self, widget, value):
if (self._new_model and value < localtoday() and
not api.sysparam.get_bool('ALLOW_OUTDATED_OPERATIONS')):
return ValidationError(u"The start date cannot be in the past")
self.estimated_finish.validate(force=True)
def on_estimated_finish__validate(self, widget, value):
if (self._new_model and value < localtoday() and
not api.sysparam.get_bool('ALLOW_OUTDATED_OPERATIONS')):
return ValidationError(u"The end date cannot be in the past")
estimated_start = self.estimated_start.read()
if estimated_start and value < estimated_start:
return ValidationError(
_(u"The finish date must be after the start date"))
class WorkOrderExecutionSlave(BaseEditorSlave):
gladefile = 'WorkOrderExecutionSlave'
model_type = WorkOrder
proxy_widgets = [
'execution_responsible',
]
#
# BaseEditorSlave
#
def __init__(self, parent, *args, **kwargs):
self.parent = parent
BaseEditorSlave.__init__(self, *args, **kwargs)
def setup_proxies(self):
self._fill_execution_responsible_combo()
self.proxy = self.add_proxy(self.model, self.proxy_widgets)
def setup_slaves(self):
self.sellable_item_slave = _WorkOrderItemSlave(
self.store, self.parent, self.model, visual_mode=self.visual_mode)
self.attach_slave('sellable_item_holder', self.sellable_item_slave)
#
# Private
#
def _fill_execution_responsible_combo(self):
employees = Employee.get_active_employees(self.store)
self.execution_responsible.prefill(api.for_person_combo(employees))
class WorkOrderHistorySlave(BaseEditorSlave):
"""Slave responsible to show the history of a |workorder|"""
gladefile = 'WorkOrderHistorySlave'
model_type = WorkOrder
#
# Public API
#
def update_items(self):
"""Update the items on the list
Useful when a history entry is created while using this slave
and we want it to show up here immediately.
"""
self.details_list.add_list(
WorkOrderHistoryView.find_by_work_order(self.store, self.model))
#
# BaseEditorSlave
#
def setup_proxies(self):
self.details_btn.set_sensitive(False)
# TODO: Show a tooltip for each row displaying the reason
self.details_list.set_columns([
Column('date', _(u"Date"), data_type=datetime.datetime, sorted=True),
Column('user_name', _(u"Who"), data_type=str, expand=True,
ellipsize=pango.ELLIPSIZE_END),
Column('what', _(u"What"), data_type=str, expand=True),
Column('old_value', _(u"Old value"), data_type=str, visible=False),
Column('new_value', _(u"New value"), data_type=str),
Column('notes', _(u"Notes"), data_type=str,
format_func=self._format_notes,
ellipsize=pango.ELLIPSIZE_END)])
self.update_items()
#
# Private
#
def _format_notes(self, notes):
return notes.split('\n')[0]
def _show_details(self, item):
parent = self.get_toplevel().get_toplevel()
# XXX: The window here is not decorated on gnome-shell, and because of
# the visual_mode it gets no buttons. What to do?
run_dialog(NoteEditor, parent, self.store, model=item,
attr_name='notes', title=_(u"Notes"), visual_mode=True)
#
# Callbacks
#
def on_details_list__row_activated(self, details_list, item):
if self.details_btn.get_sensitive():
self._show_details(item)
def on_details_list__selection_changed(self, details_list, item):
self.details_btn.set_sensitive(bool(item and item.notes))
def on_details_btn__clicked(self, button):
selected = self.details_list.get_selected()
self._show_details(selected)
| andrebellafronte/stoq | stoqlib/gui/slaves/workorderslave.py | Python | gpl-2.0 | 18,157 | ["VisIt"] | 00bf890853ad480ca01cb2bf37bb7f876ca9846ad33c0accb9ef3fd3d18770ec |
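`_WorkOrderItemEditor.on_confirm()` above maps the difference between the edited and original reserved quantity to either a `reserve` or a `return_to_stock` call. That decision can be isolated as a pure function (a hypothetical helper for illustration, not part of stoqlib):

```python
def stock_adjustment(original_reserved, new_reserved):
    """Return the stock action implied by an edit of the reserved quantity.

    Mirrors the diff logic in on_confirm: a positive diff reserves more
    stock, a negative diff returns the excess to stock, zero means no-op.
    """
    diff = new_reserved - original_reserved
    if diff == 0:
        return None
    if diff < 0:
        return ("return_to_stock", -diff)
    return ("reserve", diff)
```

The batch-selection branch in `on_confirm` only matters for the positive-diff case, when the storable controls batches and no batch was recorded on the item.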
#!/usr/bin/python
#
# @author: Gaurav Rastogi (grastogi@avinetworks.com)
# Eric Anderson (eanderson@avinetworks.com)
# module_check: supported
# Avi Version: 17.1.1
#
# Copyright: (c) 2017 Gaurav Rastogi, <grastogi@avinetworks.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: avi_httppolicyset
author: Gaurav Rastogi (grastogi@avinetworks.com)
short_description: Module for setup of HTTPPolicySet Avi RESTful Object
description:
- This module is used to configure HTTPPolicySet object
- more examples at U(https://github.com/avinetworks/devops)
requirements: [ avisdk ]
version_added: "2.4"
options:
state:
description:
- The state that should be applied on the entity.
default: present
choices: ["absent", "present"]
avi_api_update_method:
description:
- Default method for object update is HTTP PUT.
- Setting to patch will override that behavior to use HTTP PATCH.
version_added: "2.5"
default: put
choices: ["put", "patch"]
avi_api_patch_op:
description:
- Patch operation to use when using avi_api_update_method as patch.
version_added: "2.5"
choices: ["add", "replace", "delete"]
cloud_config_cksum:
description:
- Checksum of cloud configuration for pool.
- Internally set by cloud connector.
created_by:
description:
- Creator name.
description:
description:
- User defined description for the object.
http_request_policy:
description:
- Http request policy for the virtual service.
http_response_policy:
description:
- Http response policy for the virtual service.
http_security_policy:
description:
- Http security policy for the virtual service.
is_internal_policy:
description:
- Boolean flag to set is_internal_policy.
- Default value when not specified in API or module is interpreted by Avi Controller as False.
name:
description:
- Name of the http policy set.
required: true
tenant_ref:
description:
- It is a reference to an object of type tenant.
url:
description:
- Avi controller URL of the object.
uuid:
description:
- Uuid of the http policy set.
extends_documentation_fragment:
- avi
'''
EXAMPLES = """
- name: Create an HTTP policy set to switch between testpool1 and testpool2
avi_httppolicyset:
controller: 10.10.27.90
username: admin
password: AviNetworks123!
name: test-HTTP-Policy-Set
tenant_ref: admin
http_request_policy:
rules:
- index: 1
enable: true
name: test-test1
match:
path:
match_case: INSENSITIVE
match_str:
- /test1
match_criteria: EQUALS
switching_action:
action: HTTP_SWITCHING_SELECT_POOL
status_code: HTTP_LOCAL_RESPONSE_STATUS_CODE_200
pool_ref: "/api/pool?name=testpool1"
- index: 2
enable: true
name: test-test2
match:
path:
match_case: INSENSITIVE
match_str:
- /test2
match_criteria: CONTAINS
switching_action:
action: HTTP_SWITCHING_SELECT_POOL
status_code: HTTP_LOCAL_RESPONSE_STATUS_CODE_200
pool_ref: "/api/pool?name=testpool2"
is_internal_policy: false
"""
RETURN = '''
obj:
description: HTTPPolicySet (api/httppolicyset) object
returned: success, changed
type: dict
'''
from ansible.module_utils.basic import AnsibleModule
try:
from ansible.module_utils.network.avi.avi import (
avi_common_argument_spec, HAS_AVI, avi_ansible_api)
except ImportError:
HAS_AVI = False
def main():
argument_specs = dict(
state=dict(default='present',
choices=['absent', 'present']),
avi_api_update_method=dict(default='put',
choices=['put', 'patch']),
avi_api_patch_op=dict(choices=['add', 'replace', 'delete']),
cloud_config_cksum=dict(type='str',),
created_by=dict(type='str',),
description=dict(type='str',),
http_request_policy=dict(type='dict',),
http_response_policy=dict(type='dict',),
http_security_policy=dict(type='dict',),
is_internal_policy=dict(type='bool',),
name=dict(type='str', required=True),
tenant_ref=dict(type='str',),
url=dict(type='str',),
uuid=dict(type='str',),
)
argument_specs.update(avi_common_argument_spec())
module = AnsibleModule(
argument_spec=argument_specs, supports_check_mode=True)
if not HAS_AVI:
return module.fail_json(msg=(
'Avi python API SDK (avisdk>=17.1) is not installed. '
'For more details visit https://github.com/avinetworks/sdk.'))
return avi_ansible_api(module, 'httppolicyset',
set([]))
if __name__ == '__main__':
main()
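The try/except import guard above is a common Ansible pattern for optional SDK dependencies. A minimal, self-contained sketch of the same pattern (the module and message names here are hypothetical, not part of the Avi SDK):

```python
# Optional-dependency guard: record availability at import time and fail
# gracefully at run time instead of crashing with an ImportError.
HAS_SDK = True
try:
    import hypothetical_sdk  # stand-in for avisdk; not a real package
except ImportError:
    HAS_SDK = False

def run_module():
    """Mimics the fail_json early exit used by the module above."""
    if not HAS_SDK:
        return {'failed': True, 'msg': 'SDK is not installed.'}
    return {'failed': False, 'changed': False}
```

Because the guard runs at import time, the module can still be loaded (and its documentation inspected) on hosts where the SDK is absent.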
|
ravibhure/ansible
|
lib/ansible/modules/network/avi/avi_httppolicyset.py
|
Python
|
gpl-3.0
| 5,380
|
[
"VisIt"
] |
c28b2270028688ed2a0b8f80e9bf78a8efa1bc8697b7ba1d3b3603ee6ac181ea
|
import matplotlib.pyplot as pl
import numpy as np
import copy
import pickle
import sys
sys.setrecursionlimit(2000)
import morphologyReader as morphR
import neuronModels as neurM
import functionFitter as funF
## parameters
Veq = -65. # mV
tmax = 300. # ms
dt = .1 # ms
K = 4
## initialization #####################################################################
## Step 0: initialize the morphology
# Specify the path to an '.swc' file.
morphfile = 'morphologies/ball_and_stick_taper.swc'
# Define the ion channel distributions for dendrites and soma. Here the neuron model is
# passive.
d_distr = {'L': {'type': 'fit', 'param': [Veq, 50.], 'E': Veq, 'calctype': 'pas'}}
s_distr = {'L': {'type': 'fit', 'param': [Veq, 50.], 'E': Veq, 'calctype': 'pas'}}
# initialize a greensTree. Here, all the quantities are stored to compute the GF in the
# frequency domain (algorithm of Koch and Poggio, 1985).
greenstree = morphR.greensTree(morphfile, soma_distr=s_distr, ionc_distr=d_distr, cnodesdistr='all')
# initialize a greensFunctionCalculator using the previously created greensTree. This class
# stores all variables necessary to compute the GF in a format fit for simulation, either
# the plain time domain or with the partial fraction decomposition.
gfcalc = morphR.greensFunctionCalculator(greenstree)
gfcalc.set_impedances_logscale(fmax=7, base=10, num=200)
# Now a list of input locations needs to be defined. For the sparse reformulation, the
# first location needs to be the soma
inlocs = [ {'node': 1, 'x': .5, 'ID': 0}, {'node': 4, 'x': .5, 'ID': 1}, {'node': 5, 'x': .5, 'ID': 2},
{'node': 6, 'x': .5, 'ID': 3}, {'node': 7, 'x': .5, 'ID': 4}, {'node': 8, 'x': .5, 'ID': 5},
{'node': 9, 'x': .5, 'ID': 6}]
## Steps 1,2,3 and 4:
# find sets of nearest neighbours, computes the necessary GF kernels, then computes the
# sparse kernels and then fits the partial fraction decomposition using the VF algorithm.
alphas, gammas, pairs, Ms = gfcalc.kernelSet_sparse(inlocs, FFT=False, kernelconstants=True)
## Step 4 bis: compute the vectors that will be used in the simulation
prep = neurM.preprocessor()
mat_dict_hybrid = prep.construct_volterra_matrices_hybrid(dt, alphas, gammas, K, pprint=False)
## Examples of steps that happen within the kernelSet_sparse function
## Step 1: example to find the nearest neighbours
NNs, _ = gfcalc.greenstree.get_nearest_neighbours(inlocs, add_leaves=False, reduced=False)
## Step 2: example of finding a kernel
g_example = gfcalc.greenstree.calc_greensfunction(inlocs[0], inlocs[1], voltage=True)
## Step 4: example of computing a partial fraction decomposition
FEF = funF.fExpFitter()
alpha_example, gamma_example, pair_example, rms = FEF.fitFExp_increment(gfcalc.s, g_example, \
rtol=1e-8, maxiter=50, realpoles=False, constrained=True, zerostart=False)
# # plot the kernel example
# pl.figure('kernel example')
# pl.plot(gfcalc.s.imag, g_example.real, 'b')
# pl.plot(gfcalc.s.imag, g_example.imag, 'r')
#######################################################################################
## Simulation #########################################################################
# define a synapse and a spiketime
synapseparams = [{'node': 9, 'x': .5, 'ID': 0, 'tau1': .2, 'tau2': 3., 'E_r': 0., 'weight': 5.*1e-3}]
spiketimes = [{'ID': 0, 'spks': [10.]}]
# ion channel conductances at integration points, in this example, there is only leak which is
# already incorporated in the GF
gs_point = {inloc['ID']: {'L': 0.} for inloc in inlocs}
es_point = {inloc['ID']: {'L': -65.} for inloc in inlocs}
gcalctype_point = {inloc['ID']: {'L': 'pas'} for inloc in inlocs}
# create an SGF neuron
SGFneuron = neurM.integratorneuron(inlocs, synapseparams, [], gs_point, es_point, gcalctype_point,
E_eq=Veq, nonlinear=False)
# run the simulation
SGFres = SGFneuron.run_volterra_hybrid(tmax, dt, spiketimes, mat_dict=mat_dict_hybrid)
# run a neuron simulation for comparison
NEURONneuron = neurM.NeuronNeuron(greenstree, dt=dt, truemorph=True, factorlambda=10.)
NEURONneuron.add_double_exp_synapses(copy.deepcopy(synapseparams))
NEURONneuron.set_spiketrains(spiketimes)
NEURres = NEURONneuron.run(tdur=tmax, pprint=False)
#######################################################################################
## plot trace
pl.figure('simulation')
pl.plot(NEURres['t'], NEURres['vmsoma'], 'r-', label=r'NEURON soma')
pl.plot(NEURres['t'], NEURres[0], 'b-', label=r'NEURON syn')
pl.plot(SGFres['t'], SGFres['Vm'][0,:], 'r--', lw=1.7, label=r'SGF soma')
pl.plot(SGFres['t'], SGFres['Vm'][-1,:], 'b--', lw=1.7, label=r'SGF syn')
pl.xlabel(r'$t$ (ms)')
pl.ylabel(r'$V_m$ (mV)')
pl.legend(loc=0)
pl.show()
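The partial fraction (sum-of-exponentials) kernel fits computed above are what make the simulation cheap: a convolution with k(t) = sum_j g_j*exp(-a_j*t) can be updated recursively per time step instead of re-summing the whole history. A toy sketch of that idea (a simple Euler update, not the repo's actual Volterra integrator):

```python
import math

def conv_exponential(signal, alphas, gammas, dt):
    """Recursively evaluate y(t) = int k(t-s) x(s) ds for k(t) = sum_j g_j*exp(-a_j*t)."""
    states = [0.0] * len(alphas)
    out = []
    for x in signal:
        for j, (a, g) in enumerate(zip(alphas, gammas)):
            # each exponential mode decays by exp(-a*dt) and absorbs the new input
            states[j] = states[j] * math.exp(-a * dt) + g * x * dt
        out.append(sum(states))
    return out

# constant input drives a single mode toward its steady state g/a
y = conv_exponential([1.0] * 2000, alphas=[1.0], gammas=[1.0], dt=0.01)
```

With a constant input and a single mode, the output relaxes toward the steady state g/a; the cost per step is O(number of modes), independent of the history length.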
|
WillemWybo/SGF_formalism
|
demo.py
|
Python
|
mit
| 4,706
|
[
"NEURON"
] |
c25d9bd3edc58939be7fed7db86277b4451c2ec744c1877965aaadd3613d89a4
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import vtk
import vtk.test.Testing
import math
data_2008 = [10822, 10941, 9979, 10370, 9460, 11228, 15093, 12231, 10160, 9816, 9384, 7892]
data_2009 = [9058, 9474, 9979, 9408, 8900, 11569, 14688, 12231, 10294, 9585, 8957, 8590]
data_2010 = [9058, 10941, 9979, 10270, 8900, 11228, 14688, 12231, 10160, 9585, 9384, 8590]
class TestBarGraph(vtk.test.Testing.vtkTest):
def testBarGraph(self):
"Test if bar graphs can be built with python"
# Set up a 2D scene, add an XY chart to it
view = vtk.vtkContextView()
view.GetRenderer().SetBackground(1.0,1.0,1.0)
view.GetRenderWindow().SetSize(400,300)
chart = vtk.vtkChartXY()
view.GetScene().AddItem(chart)
# Create a table with some points in it
table = vtk.vtkTable()
arrMonth = vtk.vtkIntArray()
arrMonth.SetName("Month")
arr2008 = vtk.vtkIntArray()
arr2008.SetName("2008")
arr2009 = vtk.vtkIntArray()
arr2009.SetName("2009")
arr2010 = vtk.vtkIntArray()
arr2010.SetName("2010")
numMonths = 12
for i in range(0,numMonths):
arrMonth.InsertNextValue(i + 1)
arr2008.InsertNextValue(data_2008[i])
arr2009.InsertNextValue(data_2009[i])
arr2010.InsertNextValue(data_2010[i])
table.AddColumn(arrMonth)
table.AddColumn(arr2008)
table.AddColumn(arr2009)
table.AddColumn(arr2010)
        # Now add the bar plots with appropriate colors
line = chart.AddPlot(2)
line.SetInputData(table,0,1)
line.SetColor(0,255,0,255)
line = chart.AddPlot(2)
line.SetInputData(table,0,2)
line.SetColor(255,0,0,255)
line = chart.AddPlot(2)
line.SetInputData(table,0,3)
line.SetColor(0,0,255,255)
view.GetRenderWindow().SetMultiSamples(0)
view.GetRenderWindow().Render()
img_file = "TestBarGraph.png"
vtk.test.Testing.compareImage(view.GetRenderWindow(),vtk.test.Testing.getAbsImagePath(img_file),threshold=25)
vtk.test.Testing.interact()
if __name__ == "__main__":
vtk.test.Testing.main([(TestBarGraph, 'test')])
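The test passes or fails through `vtk.test.Testing.compareImage`, which renders the chart and compares it against a stored baseline image within a pixel threshold. A hypothetical, dependency-free sketch of that baseline-comparison idea (flat intensity lists stand in for real images):

```python
def images_match(img_a, img_b, threshold=25):
    """Pass when the mean absolute pixel difference stays under the threshold."""
    diff = sum(abs(a - b) for a, b in zip(img_a, img_b)) / len(img_a)
    return diff <= threshold

baseline = [10, 20, 30, 40]   # hypothetical stored baseline intensities
rendered = [12, 18, 33, 41]   # hypothetical freshly rendered output
ok = images_match(baseline, rendered)
```

A nonzero threshold tolerates small rendering differences across platforms and graphics drivers while still catching real regressions.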
|
hlzz/dotfiles
|
graphics/VTK-7.0.0/Charts/Core/Testing/Python/TestBarGraph.py
|
Python
|
bsd-3-clause
| 2,326
|
[
"VTK"
] |
ef905e458138d26d4adfef6636b0a477af813c6f9c7b339307052069282d6486
|
#!/usr/bin/env python
from vtk import *
addStringLabel = vtkProgrammableFilter()
def computeLabel():
input = addStringLabel.GetInput()
output = addStringLabel.GetOutput()
output.ShallowCopy(input)
# Create output array
vertexArray = vtkStringArray()
vertexArray.SetName("label")
vertexArray.SetNumberOfTuples(output.GetNumberOfVertices())
    # Loop through all the vertices, setting a label string in the new attribute array
for i in range(output.GetNumberOfVertices()):
label = '%02d' % (i)
vertexArray.SetValue(i, label)
# Add the new attribute array to the output graph
output.GetVertexData().AddArray(vertexArray)
addStringLabel.SetExecuteMethod(computeLabel)
source = vtkRandomGraphSource()
source.SetNumberOfVertices(15)
source.SetIncludeEdgeWeights(True)
addStringLabel.SetInputConnection(source.GetOutputPort())
conn_comp = vtkBoostConnectedComponents()
bi_conn_comp = vtkBoostBiconnectedComponents()
conn_comp.SetInputConnection(addStringLabel.GetOutputPort())
bi_conn_comp.SetInputConnection(conn_comp.GetOutputPort())
# Extract the vertex data from the graph into a table
vertexDataTable = vtkDataObjectToTable()
vertexDataTable.SetInputConnection(bi_conn_comp.GetOutputPort())
vertexDataTable.SetFieldType(3) # Vertex data
# Make a tree out of connected/biconnected components
toTree = vtkTableToTreeFilter()
toTree.AddInputConnection(vertexDataTable.GetOutputPort())
tree1 = vtkGroupLeafVertices()
tree1.AddInputConnection(toTree.GetOutputPort())
tree1.SetInputArrayToProcess(0,0, 0, 4, "component")
tree1.SetInputArrayToProcess(1,0, 0, 4, "label")
tree2 = vtkGroupLeafVertices()
tree2.AddInputConnection(tree1.GetOutputPort())
tree2.SetInputArrayToProcess(0,0, 0, 4, "biconnected component")
tree2.SetInputArrayToProcess(1,0, 0, 4, "label")
# Create a tree ring view on connected/biconnected components
view1 = vtkTreeRingView()
view1.SetTreeFromInputConnection(tree2.GetOutputPort())
view1.SetGraphFromInputConnection(bi_conn_comp.GetOutputPort())
view1.SetLabelPriorityArrayName("GraphVertexDegree")
view1.SetAreaColorArrayName("VertexDegree")
view1.SetAreaLabelArrayName("label")
view1.SetAreaHoverArrayName("label")
view1.SetAreaLabelVisibility(True)
view1.SetBundlingStrength(.5)
view1.SetLayerThickness(.5)
view1.Update()
view1.SetColorEdges(True)
view1.SetEdgeColorArrayName("edge weight")
view2 = vtkGraphLayoutView()
view2.AddRepresentationFromInputConnection(bi_conn_comp.GetOutputPort())
view2.SetVertexLabelArrayName("label")
view2.SetVertexLabelVisibility(True)
view2.SetVertexColorArrayName("label")
view2.SetColorVertices(True)
view2.SetLayoutStrategyToSimple2D()
# Apply a theme to the views
theme = vtkViewTheme.CreateOceanTheme()
view1.ApplyViewTheme(theme)
view2.ApplyViewTheme(theme)
theme.FastDelete()
view1.GetRenderWindow().SetSize(600, 600)
view1.ResetCamera()
view1.Render()
view2.GetRenderWindow().SetSize(600, 600)
view2.ResetCamera()
view2.Render()
view1.GetInteractor().Start()
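`vtkBoostConnectedComponents` assigns each vertex a component label, which the pipeline above then groups into a tree. A plain-Python sketch of the same labelling via breadth-first search (illustrative only; the VTK filter delegates to Boost's implementation):

```python
from collections import deque

def connected_components(n, edges):
    """Return a component label per vertex for an undirected graph."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    label = [-1] * n
    comp = 0
    for start in range(n):
        if label[start] != -1:
            continue                      # already reached from another seed
        queue = deque([start])
        label[start] = comp
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if label[w] == -1:
                    label[w] = comp
                    queue.append(w)
        comp += 1
    return label

labels = connected_components(5, [(0, 1), (1, 2), (3, 4)])
```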
|
HopeFOAM/HopeFOAM
|
ThirdParty-0.1/ParaView-5.0.1/VTK/Examples/Infovis/Python/graph_tree_ring.py
|
Python
|
gpl-3.0
| 2,950
|
[
"VTK"
] |
6b441c827d05e796c1c9717089898cdc2c6f4a7d9bd160b308b86001ca4f19c6
|
# -*- coding: utf-8 -*-
# vi:si:et:sw=4:sts=4:ts=4
##
## Copyright (C) 2013 Async Open Source <http://www.async.com.br>
## All rights reserved
##
## This program is free software; you can redistribute it and/or modify
## it under the terms of the GNU General Public License as published by
## the Free Software Foundation; either version 2 of the License, or
## (at your option) any later version.
##
## This program is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
## GNU General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with this program; if not, write to the Free Software
## Foundation, Inc., or visit: http://www.gnu.org/.
##
## Author(s): Stoq Team <stoq-devel@async.com.br>
##
__tests__ = 'stoqlib/gui/dialogs/daterangedialog.py'
import datetime
from stoqlib.gui.dialogs.daterangedialog import DateRangeDialog, date_range
from stoqlib.gui.test.uitestutils import GUITest
class TestDateRangeDialog(GUITest):
def test_create(self):
dialog = DateRangeDialog()
self.check_dialog(dialog, 'dialog-date-range')
def test_confirm(self):
dialog = DateRangeDialog()
start = end = datetime.date(2013, 1, 1)
dialog.date_filter.set_state(start=start, end=end)
dialog.confirm()
self.assertEqual(dialog.retval, date_range(start=start, end=end))
dialog = DateRangeDialog()
start = datetime.date(2013, 1, 1)
end = datetime.date(2013, 2, 1)
dialog.date_filter.set_state(start=start, end=end)
dialog.confirm()
self.assertEqual(dialog.retval, date_range(start=start, end=end))
|
tiagocardosos/stoq
|
stoqlib/gui/test/test_daterangedialog.py
|
Python
|
gpl-2.0
| 1,789
|
[
"VisIt"
] |
a409ac25cd89a98e6501903cd7e06f4c6226162f0da7f6f942c99d0a33990381
|
import numpy as np
import sys, random, math
from mpi4py import MPI
"""
Performs Widom insertions inside LAMMPS using the Python wrapper lammps.py.
Usage: python widom_ins.py infile ninsert nsteps molfile
"""
def run(steps):
lmp.command("run %s" % int(steps))
def grabpe():
pe = lmp.extract_compute("thermo_pe",0,0)/natoms
return pe
def grabvol():
vol = (lmp.extract_global("boxxhi",1)-lmp.extract_global("boxxlo",1))**3
return vol
# Define Units
kb = 0.0019872041 # kcal/(mol*K)
T = 314.1 # K
beta = (1.0/(kb*T))
"""
convfactor takes:
1 kcal
- * ---- => kbar
A^3 mol
"""
convfactor = 69.47511748
# MPI Section
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
# Argument Parser
if len(sys.argv) != 5:
print("Usage: python widom_ins.py infile ninsert nsteps molfile")
sys.exit(1)
infile = sys.argv[1]
ninsert = int(sys.argv[2])
nsteps = int(sys.argv[3])
molfile = sys.argv[4]
# Generate Random Number Seed
random.seed(27413)
# Define Random Number Generator
def randnum():
rand = ''.join(random.sample("0123456789",6))
return rand
# Import LAMMPS and initialize it
from lammps import lammps
lmp = lammps()
# Read LAMMPS inputfile line by line
lines = open(infile, 'r').readlines()
for line in lines: lmp.command(line)
run(0)
lmp.command("variable e equal pe")
# Read initial information
natoms = lmp.extract_global("natoms", 0)
inite = grabpe()
vol = grabvol()
# Print Basic info to screen.
if rank == 0:
print("%s atoms in the system" % natoms)
print("%s is the initial energy (kcal/mol)" % inite)
run(100000)
lmp.command("molecule ins %s toff 2" % molfile)
# Initialize
vol = []
nrho = []
Bi = []
# Begin Step Loop
for i in range(nsteps):
    # Zero the energy values
bolz = []
# Move to new config
run(1000)
E_h2o = grabpe()
vol.append(grabvol())
nrho.append(natoms/vol[i])
    # Loop over insertion attempts.
for j in range(ninsert):
lmp.command("create_atoms 0 random 1 %s NULL mol ins %s" % (randnum(),randnum()))
lmp.command("group del type 3")
run(0)
ins_e = grabpe()
del_e = ins_e - E_h2o
bolz.append(math.exp(-beta*del_e))
lmp.command("delete_atoms group del")
    # Calculate quantities for this step.
Bi.append(np.average(bolz)*vol[i])
# Open the Logfile
logfile = "log.widom"
log = open(logfile,'w')
if rank == 0:
log.write("Step MuEx H\n")
av_vol = np.average(vol)
av_nrho = np.average(nrho)
for i in range(nsteps):
MuEx = -kb*T*math.log(Bi[i]/av_vol)
H = av_nrho/beta*math.exp(beta*MuEx)
if rank == 0:
log.write("%s %s %s\n" % ((i+1),MuEx, H*convfactor))
log.flush()
MuEx = -kb*T*math.log(np.average(Bi)/av_vol)
H = av_nrho/beta*math.exp(beta*MuEx)
log.close()
if rank == 0:
print("MuEx(kcal/mol) %s" % (MuEx))
print("Henry(kbar) %s" % (H*convfactor))
|
piskuliche/Python-Code-Collection
|
Monte-Carlo/widom/widom_ins.py
|
Python
|
gpl-3.0
| 2,917
|
[
"LAMMPS"
] |
45df32f68e0579875a7240b469db67f5ad549be8bf962e446401e806e2ff6cf6
|
# ./gen_mlp_init.py
# script generating NN initialization for training with TNet
#
# author: Karel Vesely
#
import math, random
import sys
from optparse import OptionParser
parser = OptionParser()
parser.add_option('--dim', dest='dim', help='d1:d2:d3 layer dimensions in the network')
parser.add_option('--gauss', dest='gauss', help='use gaussian noise for weights', action='store_true', default=False)
parser.add_option('--negbias', dest='negbias', help='use uniform [-4.1,-3.9] for bias (default all 0.0)', action='store_true', default=False)
(options, args) = parser.parse_args()
if(options.dim == None):
parser.print_help()
sys.exit(1)
dimStrL = options.dim.split(':')
dimL = []
for i in range(len(dimStrL)):
dimL.append(int(dimStrL[i]))
for layer in range(len(dimL)-1):
print '<recurrent>', dimL[layer+1], dimL[layer]
print 'm', dimL[layer+1], dimL[layer]+dimL[layer+1]
for row in range(dimL[layer+1]):
for col in range(dimL[layer]+dimL[layer+1]):
if(options.gauss):
print 0.1*random.gauss(0.0,1.0),
else:
print random.random()/5.0-0.1,
print
print 'v', dimL[layer+1]
for idx in range(dimL[layer+1]):
if(options.negbias):
print random.random()/5.0-4.1,
else:
print '0.0',
print
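The two initialisation ranges used above are easy to verify in isolation: weights are drawn uniformly from [-0.1, 0.1) and, with `--negbias`, biases from [-4.1, -3.9). A small sketch checking those ranges (written for Python 3, unlike the Python 2 script above):

```python
import random

random.seed(0)

def uniform_weight():
    # matches "random.random()/5.0 - 0.1" above: uniform in [-0.1, 0.1)
    return random.random() / 5.0 - 0.1

def negative_bias():
    # matches the --negbias option: uniform in [-4.1, -3.9)
    return random.random() / 5.0 - 4.1

weights = [uniform_weight() for _ in range(1000)]
biases = [negative_bias() for _ in range(1000)]
```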
|
troylee/nnet-asr
|
tools/init/gen_recurrent_init.py
|
Python
|
apache-2.0
| 1,278
|
[
"Gaussian"
] |
61563f75f3268f4fb3c3dc93ce91123f7abd5b35be44800506d227d4fbf78ca2
|
"""
Views of pages.
NOTE: Put new Ajax-only actions into main/xhr_handlers.py.
"""
import json
import os
import re
import urllib
from django.conf import settings
from django.contrib.auth import login
from django.contrib.auth.decorators import login_required
from django.contrib.auth.models import User
from django.core.files.base import ContentFile
from django.core.files.storage import default_storage
from django.core.urlresolvers import reverse
from django.http import Http404
from django.http import HttpResponse
from django.http import HttpResponseBadRequest
from django.http import HttpResponseRedirect
from django.shortcuts import get_object_or_404
from django.shortcuts import render
from registration.backends.simple.views import RegistrationView
from main.adapters import adapt_model_instance_to_frontend
from main.adapters import adapt_model_to_frontend
from main.models import AlignmentGroup
from main.models import Contig
from main.models import Dataset
from main.models import Project
from main.models import ReferenceGenome
from main.models import ExperimentSample
from main.models import ExperimentSampleToAlignment
from main.models import VariantSet
from main.model_utils import get_dataset_with_type
from pipeline.pipeline_runner import run_pipeline
from utils.import_util import import_variant_set_from_vcf
from utils.jbrowse_util import compile_tracklist_json
# Tags used to indicate which tab we are on.
TAB_ROOT__DATA = 'DATA'
TAB_ROOT__ANALYZE = 'ANALYZE'
def home_view(request):
"""The main landing page.
"""
context = {}
return render(request, 'home.html', context)
def demo_splash_view(request):
"""The main landing page.
"""
context = {}
return render(request, settings.DEMO_SPLASH, context)
@login_required
def project_list_view(request):
"""The list of projects.
"""
# NOTE: The 'project_list' template variable is provided via
# the custom context processor main.context_processors.common_data.
context = {}
return render(request, 'project_list.html', context)
@login_required
def project_create_view(request):
"""View where a user creates a new Project.
"""
context = {}
if request.POST:
try:
project_name = request.POST.get('title')
existing_proj_count = Project.objects.filter(
owner=request.user.get_profile(),
title=project_name).count()
assert existing_proj_count == 0, (
'Project with that name already exists.')
project = Project.objects.create(
owner=request.user.get_profile(),
title=project_name)
# Redirect to "Create Alignment" page.
redirect_url = '%s?is_new_project=1' % reverse(
'main.views.alignment_create_view',
args=(project.uid,))
return HttpResponseRedirect(redirect_url)
except Exception as e:
context['error_string'] = str(e)
return render(request, 'project_create.html', context)
@login_required
def project_view(request, project_uid):
"""Overview of a single project.
"""
project = get_object_or_404(Project, owner=request.user.get_profile(),
uid=project_uid)
# Initial javascript data.
init_js_data = json.dumps({
'entity': adapt_model_instance_to_frontend(project)
})
context = {
'project': project,
'init_js_data': init_js_data,
'tab_root': TAB_ROOT__DATA
}
return render(request, 'project.html', context)
VALID_ANALYZE_SUB_VIEWS = set([
'contigs',
'genes',
'sets',
'variants'
])
@login_required
def tab_root_analyze(request, project_uid, alignment_group_uid=None, sub_view=None):
"""The analyze view. Subviews are loaded in Javascript.
"""
context = {}
project = get_object_or_404(Project, owner=request.user.get_profile(),
uid=project_uid)
# Initial javascript data to establish initial state.
init_js_data = {
'project': adapt_model_instance_to_frontend(project)
}
alignment_group = None
if alignment_group_uid is not None:
alignment_group = get_object_or_404(AlignmentGroup,
uid=alignment_group_uid,
reference_genome__project__owner=request.user.get_profile())
init_js_data['alignmentGroup'] = adapt_model_instance_to_frontend(
alignment_group)
context['active_alignment_group'] = alignment_group
ref_genome = alignment_group.reference_genome
init_js_data['refGenome'] = adapt_model_instance_to_frontend(
ref_genome)
context['active_ref_genome'] = ref_genome
if sub_view is not None:
if not sub_view in VALID_ANALYZE_SUB_VIEWS:
raise Http404
init_js_data['subView'] = sub_view
context['sub_view'] = sub_view
ref_genomes_with_alignments = [
rg for rg in ReferenceGenome.objects.filter(project=project)
if rg.alignmentgroup_set.exists()]
context.update({
'project': project,
'init_js_data': json.dumps(init_js_data),
'tab_root': TAB_ROOT__ANALYZE,
'ref_genomes_with_alignments': ref_genomes_with_alignments,
'genome_finish_enabled': settings.FLAG__GENOME_FINISHING_ENABLED
})
return render(request, 'tab_root_analyze.html', context)
@login_required
def reference_genome_list_view(request, project_uid):
"""Shows the ReferenceGenomes and handles creating new
ReferenceGenomes when requested.
"""
project = get_object_or_404(Project, owner=request.user.get_profile(),
uid=project_uid)
init_js_data = json.dumps({
'entity': adapt_model_instance_to_frontend(project)
})
context = {
'project': project,
'tab_root': TAB_ROOT__DATA,
'init_js_data': init_js_data,
}
return render(request, 'reference_genome_list.html', context)
@login_required
def reference_genome_view(request, project_uid, ref_genome_uid):
"""Overview of a single ReferenceGenome.
"""
project = get_object_or_404(Project, owner=request.user.get_profile(),
uid=project_uid)
reference_genome = ReferenceGenome.objects.get(project=project,
uid=ref_genome_uid)
init_js_data = json.dumps({
'entity': adapt_model_instance_to_frontend(reference_genome)
})
context = {
'project': project,
'tab_root': TAB_ROOT__DATA,
'reference_genome': reference_genome,
'reference_genome_uid': reference_genome.uid,
'has_fasta': reference_genome.dataset_set.filter(
type=Dataset.TYPE.REFERENCE_GENOME_FASTA).exists(),
'has_genbank': reference_genome.dataset_set.filter(
type=Dataset.TYPE.REFERENCE_GENOME_GENBANK).exists(),
'jbrowse_link': reference_genome.get_client_jbrowse_link(),
'init_js_data': init_js_data
}
return render(request, 'reference_genome.html', context)
@login_required
def contig_view(request, project_uid, contig_uid):
"""Overview of a single Contig.
"""
project = get_object_or_404(Project, owner=request.user.get_profile(),
uid=project_uid)
contig = Contig.objects.get(
parent_reference_genome__project=project, uid=contig_uid)
init_js_data = json.dumps({
'entity': adapt_model_instance_to_frontend(contig)
})
context = {
'project': project,
'tab_root': TAB_ROOT__DATA,
'contig': contig,
'init_js_data': init_js_data,
}
return render(request, 'contig.html', context)
@login_required
def sample_list_view(request, project_uid):
project = get_object_or_404(Project, owner=request.user.get_profile(),
uid=project_uid)
error_string = None
init_js_data = json.dumps({
'entity': adapt_model_instance_to_frontend(project)
})
context = {
'project': project,
'tab_root': TAB_ROOT__DATA,
'init_js_data': init_js_data,
'error_string': error_string
}
return render(request, 'sample_list.html', context)
@login_required
def fastqc_view(request, project_uid, sample_uid, read_num):
project = get_object_or_404(Project, owner=request.user.get_profile(),
uid=project_uid)
sample = get_object_or_404(ExperimentSample, project=project,
uid=sample_uid)
if int(read_num) == 1:
dataset_type = Dataset.TYPE.FASTQC1_HTML
elif int(read_num) == 2:
dataset_type = Dataset.TYPE.FASTQC2_HTML
else:
raise Exception('Read number must be 1 or 2')
fastqc_dataset = get_dataset_with_type(sample, dataset_type)
response = HttpResponse(mimetype="text/html")
for line in open(fastqc_dataset.get_absolute_location()):
response.write(line)
return response
@login_required
def alignment_list_view(request, project_uid):
project = get_object_or_404(Project, owner=request.user.get_profile(),
uid=project_uid)
init_js_data = json.dumps({
'entity': adapt_model_instance_to_frontend(project)
})
context = {
'project': project,
'tab_root': TAB_ROOT__DATA,
'init_js_data': init_js_data,
}
return render(request, 'alignment_list.html', context)
@login_required
def alignment_view(request, project_uid, alignment_group_uid):
"""View of a single AlignmentGroup.
"""
project = get_object_or_404(Project, owner=request.user.get_profile(),
uid=project_uid)
alignment_group = get_object_or_404(AlignmentGroup,
reference_genome__project=project, uid=alignment_group_uid)
# Initial javascript data.
init_js_data = json.dumps({
'project': adapt_model_instance_to_frontend(project),
'alignment_group': adapt_model_instance_to_frontend(alignment_group)
})
context = {
'project': project,
'tab_root': TAB_ROOT__DATA,
'alignment_group': alignment_group,
'experiment_sample_to_alignment_list_json': adapt_model_to_frontend(
ExperimentSampleToAlignment,
{'alignment_group': alignment_group}),
'init_js_data': init_js_data,
'flag_genome_finishing_enabled': int(settings.FLAG__GENOME_FINISHING_ENABLED)
}
return render(request, 'alignment.html', context)
@login_required
def alignment_error_log(request, project_uid, alignment_group_uid):
"""Shows the combined error log for an alignment.
"""
project = get_object_or_404(Project, owner=request.user.get_profile(),
uid=project_uid)
ag = AlignmentGroup.objects.get(
uid=alignment_group_uid,
reference_genome__project__uid=project_uid)
raw_data = ag.get_combined_error_log_data()
context = {
'project': project,
'tab_root': TAB_ROOT__DATA,
'raw_data': raw_data
}
return render(request, 'alignment_error_log.html', context)
@login_required
def alignment_create_view(request, project_uid):
"""Displays the view for creating a new alignment.
    Also handles the POST request that actually creates the alignment.
"""
project = get_object_or_404(Project, owner=request.user.get_profile(),
uid=project_uid)
# Validate and kick off new alignment.
if request.POST:
return _start_new_alignment(request, project)
# Show the data that renders the create view.
init_js_data = json.dumps({
'entity': adapt_model_instance_to_frontend(project)
})
is_new_project = bool(request.GET.get('is_new_project', False))
context = {
'project': project,
'tab_root': TAB_ROOT__DATA,
'init_js_data': init_js_data,
'is_new_project': is_new_project
}
return render(request, 'alignment_create.html', context)
def _start_new_alignment(request, project):
"""Delegate function that handles logic of kicking off alignment.
"""
# Parse the data from the request body.
request_data = json.loads(request.body)
# Make sure the required keys are present.
REQUIRED_KEYS = [
'name', 'refGenomeUidList', 'sampleUidList', 'skipHetOnly',
'callAsHaploid']
if not all(key in request_data for key in REQUIRED_KEYS):
return HttpResponseBadRequest("Invalid request. Missing keys.")
try:
# Parse the data and look up the relevant model instances.
alignment_group_name = request_data['name']
assert len(alignment_group_name), "Name required."
ref_genome_list = ReferenceGenome.objects.filter(
project=project,
uid__in=request_data['refGenomeUidList'])
assert (len(ref_genome_list) ==
len(request_data['refGenomeUidList'])), (
"Invalid reference genome uid(s).")
assert len(ref_genome_list) == 1, (
"Exactly one reference genome must be provided.")
ref_genome = ref_genome_list[0]
# Make sure AlignmentGroup has a unique name, because run_pipeline
# will re-use an alignment based on label, reference genome,
# aligner. We are currently hard-coding the aligner to BWA.
assert AlignmentGroup.objects.filter(
label=alignment_group_name,
reference_genome=ref_genome).count() == 0, (
"Please pick unique alignment name.")
sample_list = ExperimentSample.objects.filter(
project=project,
uid__in=request_data['sampleUidList'])
assert len(sample_list) == len(request_data['sampleUidList']), (
"Invalid expeirment sample uid(s).")
assert len(sample_list) > 0, "At least one sample required."
# Populate alignment options.
alignment_options = dict()
if request_data['skipHetOnly']:
alignment_options['skip_het_only'] = True
if request_data['callAsHaploid']:
alignment_options['call_as_haploid'] = True
# Kick off alignments.
run_pipeline(
alignment_group_name,
ref_genome, sample_list,
alignment_options=alignment_options)
# Success. Return a redirect response.
response_data = {
'redirect': reverse(
'main.views.alignment_list_view',
args=(project.uid,)),
}
except Exception as e:
response_data = {
'error': str(e)
}
return HttpResponse(json.dumps(response_data),
content_type='application/json')
@login_required
def sample_alignment_error_view(request, project_uid, alignment_group_uid,
sample_alignment_uid):
project = get_object_or_404(Project, owner=request.user.get_profile(),
uid=project_uid)
sample_alignment = ExperimentSampleToAlignment.objects.get(
uid=sample_alignment_uid,
alignment_group__reference_genome__project__uid=project_uid)
# Get the path of the error file.
data_dir = sample_alignment.get_model_data_dir()
error_file_dir = os.path.join(data_dir, 'bwa_align.error')
if os.path.exists(error_file_dir):
with open(error_file_dir, 'rb') as fh:
            # Effectively tail the file, showing approximately the last
# 100 lines (~100 bytes / line) = 10000 bytes.
# First figure out size of file by jumping to end.
fh.seek(0, os.SEEK_END)
eof = fh.tell()
# Now jump back up to 100 lines * ~100 bytes / line, or beginning.
fh.seek(max(eof - 10000, 0), os.SEEK_SET)
# Read the data from here to end.
raw_data = fh.read()
else:
raw_data = 'undefined'
context = {
'project': project,
'tab_root': TAB_ROOT__DATA,
'raw_data': raw_data
}
return render(request, 'sample_alignment_error_view.html', context)
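The seek-from-end logic in `sample_alignment_error_view` is a bounded tail read: it never loads more than ~10000 bytes regardless of log size. A standalone sketch of the same helper (a `BytesIO` stand-in replaces the real log file):

```python
import io
import os

def tail_bytes(fh, nbytes=10000):
    """Return at most the final nbytes of a seekable binary file object."""
    fh.seek(0, os.SEEK_END)                      # find the file size
    eof = fh.tell()
    fh.seek(max(eof - nbytes, 0), os.SEEK_SET)   # back up, but not past the start
    return fh.read()

log = io.BytesIO(b"line\n" * 100)                # stand-in for bwa_align.error
chunk = tail_bytes(log, 20)
```

Files shorter than the window are returned whole, since the seek target is clamped to offset 0.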
@login_required
def variant_set_list_view(request, project_uid):
project = get_object_or_404(Project, owner=request.user.get_profile(),
uid=project_uid)
error_string = ''
# If a POST, then we are creating a new variant set.
if request.method == 'POST':
# Common validation.
# TODO: Most of this should be validated client-side.
if ('refGenomeID' not in request.POST or
request.POST['refGenomeID'] == ''):
error_string = 'No reference genome selected.'
elif ('variantSetName' not in request.POST or
request.POST['variantSetName'] == ''):
error_string = 'No variant set name.'
if not error_string:
ref_genome_uid = request.POST['refGenomeID']
variant_set_name = request.POST['variantSetName']
# Handling depending on which form was submitted.
if request.POST['create-set-type'] == 'from-file':
error_string = _create_variant_set_from_file(request, project,
ref_genome_uid, variant_set_name)
elif request.POST['create-set-type'] == 'empty':
error_string = _create_variant_set_empty(project,
ref_genome_uid, variant_set_name)
else:
return HttpResponseBadRequest("Invalid request.")
# Grab all the ReferenceGenomes for this project
# (for choosing which ref genome a new variant set goes into).
ref_genome_list = ReferenceGenome.objects.filter(project=project)
# Initial javascript data.
init_js_data = json.dumps({
'entity': adapt_model_instance_to_frontend(project)
})
context = {
'project': project,
'tab_root': TAB_ROOT__DATA,
'ref_genome_list' : ref_genome_list,
'init_js_data' : init_js_data,
'error_string': error_string
}
return render(request, 'variant_set_list.html', context)
def _create_variant_set_from_file(request, project, ref_genome_uid,
variant_set_name):
"""Creates a variant set from file.
Returns:
A string indicating any errors that occurred. If no errors, then
the empty string.
"""
error_string = ''
path = default_storage.save('tmp/tmp_varset.vcf',
ContentFile(request.FILES['vcfFile'].read()))
variant_set_file = os.path.join(settings.MEDIA_ROOT, path)
try:
ref_genome = ReferenceGenome.objects.get(project=project,
uid=ref_genome_uid)
import_variant_set_from_vcf(ref_genome, variant_set_name,
variant_set_file)
except Exception as e:
error_string = 'Import error: ' + str(e)
finally:
os.remove(variant_set_file)
return error_string
def _create_variant_set_empty(project, ref_genome_uid, variant_set_name):
"""Creates an empty variant set.
Returns:
A string indicating any errors that occurred. If no errors, then
the empty string.
"""
error_string = ''
ref_genome = ReferenceGenome.objects.get(
project=project,
uid=ref_genome_uid)
variant_set = VariantSet.objects.create(
reference_genome=ref_genome,
label=variant_set_name)
return error_string
@login_required
def variant_set_view(request, project_uid, variant_set_uid):
"""Data view for a single VariantSet which includes options for taking
actions on the VariantSet.
"""
project = get_object_or_404(Project, owner=request.user.get_profile(),
uid=project_uid)
variant_set = get_object_or_404(VariantSet,
reference_genome__project=project,
uid=variant_set_uid)
# Initial javascript data.
init_js_data = json.dumps({
'entity': {
'uid': variant_set.uid,
'project': adapt_model_instance_to_frontend(project),
'refGenomeUid': variant_set.reference_genome.uid,
}
})
context = {
'project': project,
'tab_root': TAB_ROOT__DATA,
'variant_set': variant_set,
'init_js_data': init_js_data,
'is_print_mage_oligos_enabled':
settings.FLAG__PRINT_MAGE_OLIGOS_ENABLED,
'is_generate_new_reference_genome_enabled':
settings.FLAG__GENERATE_NEW_REFERENCE_GENOME_ENABLED
}
return render(request, 'variant_set.html', context)
@login_required
def compile_jbrowse_and_redirect(request):
"""Compiles the jbrowse tracklist and redirects to jbrowse
"""
# First, grab the data string and get the project and ref genome from it
data_string = request.GET.get('data')
regexp_str = (r'/jbrowse/gd_data/projects/(?P<project_uid>\w+)' +
r'/ref_genomes/(?P<ref_genome_uid>\w+)/jbrowse')
uid_match = re.match(regexp_str, data_string)
regexp_str = (r'/jbrowse/gd_data/projects/(?P<project_uid>\w+)' +
r'/contigs/(?P<contig_uid>\w+)/jbrowse')
contig_uid_match = re.match(regexp_str, data_string)
assert uid_match or contig_uid_match, \
"Incorrect URL passed in data key to redirect_jbrowse"
if uid_match:
reference_genome = get_object_or_404(ReferenceGenome,
project__owner=request.user.get_profile(),
uid=uid_match.group('ref_genome_uid'))
# Recompile the tracklist from components and symlink subdirs
compile_tracklist_json(reference_genome)
else:
contig = get_object_or_404(Contig,
parent_reference_genome__project__owner=(
request.user.get_profile()),
uid=contig_uid_match.group('contig_uid'))
# Recompile the tracklist from components and symlink subdirs
compile_tracklist_json(contig)
# Finally, pass the GET along to the jbrowse static page.
get_values = urllib.urlencode(request.GET).replace('+', '%20')
maybe_force_nginx = ''
if settings.DEBUG_FORCE_JBROWSE_NGINX:
maybe_force_nginx = 'http://localhost'
full_redirect_url = (maybe_force_nginx + '/jbrowse/index.html' + '?' +
get_values)
return HttpResponseRedirect(full_redirect_url)
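The two regexes in `compile_jbrowse_and_redirect` recover uids from the jbrowse data path via named groups; a minimal sketch of the same pattern, using one of the patterns above (the sample uids are made up):

```python
import re

# Same shape as the first pattern in the view above.
JBROWSE_RE = (r'/jbrowse/gd_data/projects/(?P<project_uid>\w+)'
              r'/ref_genomes/(?P<ref_genome_uid>\w+)/jbrowse')

match = re.match(JBROWSE_RE,
                 '/jbrowse/gd_data/projects/ab12/ref_genomes/cd34/jbrowse')
# Named groups expose each uid by name rather than by position.
project_uid = match.group('project_uid')        # 'ab12'
ref_genome_uid = match.group('ref_genome_uid')  # 'cd34'
```

`re.match` returns `None` on a non-matching path, which is what the view's `assert uid_match or contig_uid_match` check relies on.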
class RegistrationViewWrapper(RegistrationView):
"""
For now, if there are no users present, allow registration.
Once the first user is created, disallow registration.
"""
def registration_allowed(self, request):
if not User.objects.count():
return True
return super(RegistrationViewWrapper, self).registration_allowed(
request)
def get_success_url(self, request, user):
'''
Log in the new user and take them to the project list.
'''
login(request, user)
return '/'
if settings.RUNNING_ON_EC2:
from boto.utils import get_instance_metadata
@login_required
def ec2_info_view(request):
"""
boto.utils.get_instance_metadata() will block if not running on EC2.
"""
m = get_instance_metadata()
password_path = os.path.join(settings.PWD, "../password.txt")
if os.path.isfile(password_path):
with open(password_path) as f:
password = f.read().strip()
else:
password = "Not found from '%s'" % password_path
context = {
'hostname': m['public-hostname'],
'instance_type': m['instance-type'],
'instance_id': m['instance-id'],
'password': password
}
return render(request, 'ec2_info_view.html', context)
# End of churchlab/millstone : genome_designer/main/views.py (Python, MIT license)
"""
NetCDF reader/writer module.
This module is used to read and create NetCDF files. NetCDF files are
accessed through the `netcdf_file` object. Data written to and from NetCDF
files are contained in `netcdf_variable` objects. Attributes are given
as member variables of the `netcdf_file` and `netcdf_variable` objects.
This module implements the Scientific.IO.NetCDF API to read and create
NetCDF files. The same API is also used in the PyNIO and pynetcdf
modules, allowing these modules to be used interchangeably when working
with NetCDF files.
Only NetCDF3 is supported here; for NetCDF4 see
`netCDF4-python <http://unidata.github.io/netcdf4-python/>`__,
which has a similar API.
"""
from __future__ import division, print_function, absolute_import
# TODO:
# * properly implement ``_FillValue``.
# * fix character variables.
# * implement PAGESIZE for Python 2.6?
# The Scientific.IO.NetCDF API allows attributes to be added directly to
# instances of ``netcdf_file`` and ``netcdf_variable``. To differentiate
# between user-set attributes and instance attributes, user-set attributes
# are automatically stored in the ``_attributes`` attribute by overloading
# ``__setattr__``. This is the reason why the code sometimes uses
# ``obj.__dict__['key'] = value``, instead of simply ``obj.key = value``;
# otherwise the key would be inserted into userspace attributes.
__all__ = ['netcdf_file']
import warnings
import weakref
from operator import mul
import mmap as mm
import numpy as np
from numpy.compat import asbytes, asstr
from numpy import fromstring, dtype, empty, array, asarray
from numpy import little_endian as LITTLE_ENDIAN
from functools import reduce
from scipy._lib.six import integer_types, text_type, binary_type
ABSENT = b'\x00\x00\x00\x00\x00\x00\x00\x00'
ZERO = b'\x00\x00\x00\x00'
NC_BYTE = b'\x00\x00\x00\x01'
NC_CHAR = b'\x00\x00\x00\x02'
NC_SHORT = b'\x00\x00\x00\x03'
NC_INT = b'\x00\x00\x00\x04'
NC_FLOAT = b'\x00\x00\x00\x05'
NC_DOUBLE = b'\x00\x00\x00\x06'
NC_DIMENSION = b'\x00\x00\x00\n'
NC_VARIABLE = b'\x00\x00\x00\x0b'
NC_ATTRIBUTE = b'\x00\x00\x00\x0c'
TYPEMAP = {NC_BYTE: ('b', 1),
NC_CHAR: ('c', 1),
NC_SHORT: ('h', 2),
NC_INT: ('i', 4),
NC_FLOAT: ('f', 4),
NC_DOUBLE: ('d', 8)}
REVERSE = {('b', 1): NC_BYTE,
('B', 1): NC_CHAR,
('c', 1): NC_CHAR,
('h', 2): NC_SHORT,
('i', 4): NC_INT,
('f', 4): NC_FLOAT,
('d', 8): NC_DOUBLE,
# these come from asarray(1).dtype.char and asarray('foo').dtype.char,
# used when getting the types from generic attributes.
('l', 4): NC_INT,
('S', 1): NC_CHAR}
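Each `NC_*` constant above is simply a 32-bit big-endian integer type tag, which is why `TYPEMAP` can key directly on the raw four bytes read from the file. A small sketch of the round trip through `struct`, restating only constants already defined above:

```python
import struct

# As defined in the module above: NC_INT is the integer 4 as a
# big-endian 32-bit word, and TYPEMAP maps it to a dtype char + size.
NC_INT = b'\x00\x00\x00\x04'
TYPEMAP = {NC_INT: ('i', 4)}

tag = struct.pack('>i', 4)       # b'\x00\x00\x00\x04'
typecode, size = TYPEMAP[tag]    # ('i', 4)
```

The same big-endian convention (`'>'` prefix) is what `_pack_int` / `_unpack_int` below rely on when reading and writing headers.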
class netcdf_file(object):
"""
A file object for NetCDF data.
A `netcdf_file` object has two standard attributes: `dimensions` and
`variables`. The values of both are dictionaries, mapping dimension
names to their associated lengths and variable names to variables,
respectively. Application programs should never modify these
dictionaries.
All other attributes correspond to global attributes defined in the
NetCDF file. Global file attributes are created by assigning to an
attribute of the `netcdf_file` object.
Parameters
----------
filename : string or file-like
string -> filename
mode : {'r', 'w', 'a'}, optional
read-write-append mode, default is 'r'
mmap : None or bool, optional
Whether to mmap `filename` when reading. Default is True
when `filename` is a file name, False when `filename` is a
file-like object. Note that when mmap is in use, data arrays
returned refer directly to the mmapped data on disk, and the
file cannot be closed as long as references to it exist.
version : {1, 2}, optional
version of netcdf to read / write, where 1 means *Classic
format* and 2 means *64-bit offset format*. Default is 1. See
`here <http://www.unidata.ucar.edu/software/netcdf/docs/netcdf/Which-Format.html>`__
for more info.
maskandscale : bool, optional
Whether to automatically scale and/or mask data based on attributes.
Default is False.
Notes
-----
The major advantage of this module over other modules is that it doesn't
require the code to be linked to the NetCDF libraries. This module is
derived from `pupynere <https://bitbucket.org/robertodealmeida/pupynere/>`_.
NetCDF files are a self-describing binary data format. The file contains
metadata that describes the dimensions and variables in the file. More
details about NetCDF files can be found `here
<http://www.unidata.ucar.edu/software/netcdf/docs/netcdf.html>`__. There
are three main sections to a NetCDF data structure:
1. Dimensions
2. Variables
3. Attributes
The dimensions section records the name and length of each dimension used
by the variables. Each variable then indicates which dimensions it uses
and any attributes such as data units, along with containing the data
values for the variable. It is good practice to include a variable with
the same name as a dimension to provide the values for that axis. Lastly,
the attributes section contains additional information such as the name of
the file creator or the instrument used to collect the data.
When writing data to a NetCDF file, there is often the need to indicate the
'record dimension'. A record dimension is the unbounded dimension for a
variable. For example, a temperature variable may have dimensions of
latitude, longitude and time. If one wants to add more temperature data to
the NetCDF file as time progresses, then the temperature variable should
have the time dimension flagged as the record dimension.
In addition, the NetCDF file header contains the position of the data in
the file, so access can be done in an efficient manner without loading
unnecessary data into memory. It uses the ``mmap`` module to create
Numpy arrays mapped to the data on disk, for the same purpose.
Note that when `netcdf_file` is used to open a file with mmap=True
(default for read-only), arrays returned by it refer to data
directly on the disk. The file should not be closed, and cannot be cleanly
closed when asked, if such arrays are alive. You may want to copy data arrays
obtained from a mmapped NetCDF file if they are to be processed after the file
is closed; see the example below.
Examples
--------
To create a NetCDF file:
>>> from scipy.io import netcdf
>>> import numpy as np
>>> f = netcdf.netcdf_file('simple.nc', 'w')
>>> f.history = 'Created for a test'
>>> f.createDimension('time', 10)
>>> time = f.createVariable('time', 'i', ('time',))
>>> time[:] = np.arange(10)
>>> time.units = 'days since 2008-01-01'
>>> f.close()
Note the assignment of ``np.arange(10)`` to ``time[:]``. Exposing the
slice of the time variable allows for the data to be set in the object,
rather than letting ``np.arange(10)`` overwrite the ``time`` variable.
To read the NetCDF file we just created:
>>> from scipy.io import netcdf
>>> f = netcdf.netcdf_file('simple.nc', 'r')
>>> print(f.history)
Created for a test
>>> time = f.variables['time']
>>> print(time.units)
days since 2008-01-01
>>> print(time.shape)
(10,)
>>> print(time[-1])
9
NetCDF files, when opened read-only, return arrays that refer
directly to memory-mapped data on disk:
>>> data = time[:]
>>> data.base.base
<mmap.mmap object at 0x7fe753763180>
If the data is to be processed after the file is closed, it needs
to be copied to main memory:
>>> data = time[:].copy()
>>> f.close()
>>> data.mean()
4.5
A NetCDF file can also be used as context manager:
>>> from scipy.io import netcdf
>>> with netcdf.netcdf_file('simple.nc', 'r') as f:
... print(f.history)
Created for a test
"""
def __init__(self, filename, mode='r', mmap=None, version=1,
maskandscale=False):
"""Initialize netcdf_file from fileobj (str or file-like)."""
if mode not in 'rwa':
raise ValueError("Mode must be either 'r', 'w' or 'a'.")
if hasattr(filename, 'seek'): # file-like
self.fp = filename
self.filename = 'None'
if mmap is None:
mmap = False
elif mmap and not hasattr(filename, 'fileno'):
raise ValueError('Cannot use file object for mmap')
else: # maybe it's a string
self.filename = filename
omode = 'r+' if mode == 'a' else mode
self.fp = open(self.filename, '%sb' % omode)
if mmap is None:
mmap = True
if mode != 'r':
# Cannot read write-only files
mmap = False
self.use_mmap = mmap
self.mode = mode
self.version_byte = version
self.maskandscale = maskandscale
self.dimensions = {}
self.variables = {}
self._dims = []
self._recs = 0
self._recsize = 0
self._mm = None
self._mm_buf = None
if self.use_mmap:
self._mm = mm.mmap(self.fp.fileno(), 0, access=mm.ACCESS_READ)
self._mm_buf = np.frombuffer(self._mm, dtype=np.int8)
self._attributes = {}
if mode in 'ra':
self._read()
def __setattr__(self, attr, value):
# Store user defined attributes in a separate dict,
# so we can save them to file later.
try:
self._attributes[attr] = value
except AttributeError:
pass
self.__dict__[attr] = value
def close(self):
"""Closes the NetCDF file."""
if not self.fp.closed:
try:
self.flush()
finally:
self.variables = {}
if self._mm_buf is not None:
ref = weakref.ref(self._mm_buf)
self._mm_buf = None
if ref() is None:
# self._mm_buf is gc'd, and we can close the mmap
self._mm.close()
else:
# we cannot close self._mm, since self._mm_buf is
# alive and there may still be arrays referring to it
warnings.warn((
"Cannot close a netcdf_file opened with mmap=True, when "
"netcdf_variables or arrays referring to its data still exist. "
"All data arrays obtained from such files refer directly to "
"data on disk, and must be copied before the file can be cleanly "
"closed. (See netcdf_file docstring for more information on mmap.)"
), category=RuntimeWarning)
self._mm = None
self.fp.close()
__del__ = close
def __enter__(self):
return self
def __exit__(self, type, value, traceback):
self.close()
def createDimension(self, name, length):
"""
Adds a dimension to the Dimension section of the NetCDF data structure.
Note that this function merely adds a new dimension that the variables can
reference. The values for the dimension, if desired, should be added as
a variable using `createVariable`, referring to this dimension.
Parameters
----------
name : str
Name of the dimension (Eg, 'lat' or 'time').
length : int
Length of the dimension.
See Also
--------
createVariable
"""
if length is None and self._dims:
raise ValueError("Only first dimension may be unlimited!")
self.dimensions[name] = length
self._dims.append(name)
def createVariable(self, name, type, dimensions):
"""
Create an empty variable for the `netcdf_file` object, specifying its data
type and the dimensions it uses.
Parameters
----------
name : str
Name of the new variable.
type : dtype or str
Data type of the variable.
dimensions : sequence of str
List of the dimension names used by the variable, in the desired order.
Returns
-------
variable : netcdf_variable
The newly created ``netcdf_variable`` object.
This object has also been added to the `netcdf_file` object as well.
See Also
--------
createDimension
Notes
-----
Any dimensions to be used by the variable should already exist in the
NetCDF data structure or should be created by `createDimension` prior to
creating the NetCDF variable.
"""
shape = tuple([self.dimensions[dim] for dim in dimensions])
shape_ = tuple([dim or 0 for dim in shape]) # replace None with 0 for numpy
type = dtype(type)
typecode, size = type.char, type.itemsize
if (typecode, size) not in REVERSE:
raise ValueError("NetCDF 3 does not support type %s" % type)
data = empty(shape_, dtype=type.newbyteorder("B")) # convert to big endian always for NetCDF 3
self.variables[name] = netcdf_variable(
data, typecode, size, shape, dimensions,
maskandscale=self.maskandscale)
return self.variables[name]
def flush(self):
"""
Perform a sync-to-disk flush if the `netcdf_file` object is in write mode.
See Also
--------
sync : Identical function
"""
if hasattr(self, 'mode') and self.mode in 'wa':
self._write()
sync = flush
def _write(self):
self.fp.seek(0)
self.fp.write(b'CDF')
self.fp.write(array(self.version_byte, '>b').tostring())
# Write headers and data.
self._write_numrecs()
self._write_dim_array()
self._write_gatt_array()
self._write_var_array()
def _write_numrecs(self):
# Get highest record count from all record variables.
for var in self.variables.values():
if var.isrec and len(var.data) > self._recs:
self.__dict__['_recs'] = len(var.data)
self._pack_int(self._recs)
def _write_dim_array(self):
if self.dimensions:
self.fp.write(NC_DIMENSION)
self._pack_int(len(self.dimensions))
for name in self._dims:
self._pack_string(name)
length = self.dimensions[name]
self._pack_int(length or 0) # replace None with 0 for record dimension
else:
self.fp.write(ABSENT)
def _write_gatt_array(self):
self._write_att_array(self._attributes)
def _write_att_array(self, attributes):
if attributes:
self.fp.write(NC_ATTRIBUTE)
self._pack_int(len(attributes))
for name, values in attributes.items():
self._pack_string(name)
self._write_values(values)
else:
self.fp.write(ABSENT)
def _write_var_array(self):
if self.variables:
self.fp.write(NC_VARIABLE)
self._pack_int(len(self.variables))
# Sort variable names non-recs first, then recs.
def sortkey(n):
v = self.variables[n]
if v.isrec:
return (-1,)
return v._shape
variables = sorted(self.variables, key=sortkey, reverse=True)
# Set the metadata for all variables.
for name in variables:
self._write_var_metadata(name)
# Now that we have the metadata, we know the vsize of
# each record variable, so we can calculate recsize.
self.__dict__['_recsize'] = sum([
var._vsize for var in self.variables.values()
if var.isrec])
# Set the data for all variables.
for name in variables:
self._write_var_data(name)
else:
self.fp.write(ABSENT)
def _write_var_metadata(self, name):
var = self.variables[name]
self._pack_string(name)
self._pack_int(len(var.dimensions))
for dimname in var.dimensions:
dimid = self._dims.index(dimname)
self._pack_int(dimid)
self._write_att_array(var._attributes)
nc_type = REVERSE[var.typecode(), var.itemsize()]
self.fp.write(asbytes(nc_type))
if not var.isrec:
vsize = var.data.size * var.data.itemsize
vsize += -vsize % 4
else: # record variable
try:
vsize = var.data[0].size * var.data.itemsize
except IndexError:
vsize = 0
rec_vars = len([v for v in self.variables.values()
if v.isrec])
if rec_vars > 1:
vsize += -vsize % 4
self.variables[name].__dict__['_vsize'] = vsize
self._pack_int(vsize)
# Pack a bogus begin, and set the real value later.
self.variables[name].__dict__['_begin'] = self.fp.tell()
self._pack_begin(0)
def _write_var_data(self, name):
var = self.variables[name]
# Set begin in file header.
the_beguine = self.fp.tell()
self.fp.seek(var._begin)
self._pack_begin(the_beguine)
self.fp.seek(the_beguine)
# Write data.
if not var.isrec:
self.fp.write(var.data.tostring())
count = var.data.size * var.data.itemsize
self.fp.write(b'0' * (var._vsize - count))
else: # record variable
# Handle rec vars with shape[0] < nrecs.
if self._recs > len(var.data):
shape = (self._recs,) + var.data.shape[1:]
# Resize in-place does not always work since
# the array might not be single-segment
try:
var.data.resize(shape)
except ValueError:
var.__dict__['data'] = np.resize(var.data, shape).astype(var.data.dtype)
pos0 = pos = self.fp.tell()
for rec in var.data:
# Apparently scalars cannot be converted to big endian. If we
# try to convert a ``=i4`` scalar to, say, '>i4' the dtype
# will remain as ``=i4``.
if not rec.shape and (rec.dtype.byteorder == '<' or
(rec.dtype.byteorder == '=' and LITTLE_ENDIAN)):
rec = rec.byteswap()
self.fp.write(rec.tostring())
# Padding
count = rec.size * rec.itemsize
self.fp.write(b'0' * (var._vsize - count))
pos += self._recsize
self.fp.seek(pos)
self.fp.seek(pos0 + var._vsize)
def _write_values(self, values):
if hasattr(values, 'dtype'):
nc_type = REVERSE[values.dtype.char, values.dtype.itemsize]
else:
types = [(t, NC_INT) for t in integer_types]
types += [
(float, NC_FLOAT),
(str, NC_CHAR)
]
# bytes index into scalars in py3k. Check for "string" types
if isinstance(values, text_type) or isinstance(values, binary_type):
sample = values
else:
try:
sample = values[0] # subscriptable?
except TypeError:
sample = values # scalar
for class_, nc_type in types:
if isinstance(sample, class_):
break
typecode, size = TYPEMAP[nc_type]
dtype_ = '>%s' % typecode
# asarray() dies with bytes and '>c' in py3k. Change to 'S'
dtype_ = 'S' if dtype_ == '>c' else dtype_
values = asarray(values, dtype=dtype_)
self.fp.write(asbytes(nc_type))
if values.dtype.char == 'S':
nelems = values.itemsize
else:
nelems = values.size
self._pack_int(nelems)
if not values.shape and (values.dtype.byteorder == '<' or
(values.dtype.byteorder == '=' and LITTLE_ENDIAN)):
values = values.byteswap()
self.fp.write(values.tostring())
count = values.size * values.itemsize
self.fp.write(b'0' * (-count % 4)) # pad
def _read(self):
# Check magic bytes and version
magic = self.fp.read(3)
if not magic == b'CDF':
raise TypeError("Error: %s is not a valid NetCDF 3 file" %
self.filename)
self.__dict__['version_byte'] = fromstring(self.fp.read(1), '>b')[0]
# Read file headers and set data.
self._read_numrecs()
self._read_dim_array()
self._read_gatt_array()
self._read_var_array()
def _read_numrecs(self):
self.__dict__['_recs'] = self._unpack_int()
def _read_dim_array(self):
header = self.fp.read(4)
if header not in [ZERO, NC_DIMENSION]:
raise ValueError("Unexpected header.")
count = self._unpack_int()
for dim in range(count):
name = asstr(self._unpack_string())
length = self._unpack_int() or None # None for record dimension
self.dimensions[name] = length
self._dims.append(name) # preserve order
def _read_gatt_array(self):
for k, v in self._read_att_array().items():
self.__setattr__(k, v)
def _read_att_array(self):
header = self.fp.read(4)
if header not in [ZERO, NC_ATTRIBUTE]:
raise ValueError("Unexpected header.")
count = self._unpack_int()
attributes = {}
for attr in range(count):
name = asstr(self._unpack_string())
attributes[name] = self._read_values()
return attributes
def _read_var_array(self):
header = self.fp.read(4)
if header not in [ZERO, NC_VARIABLE]:
raise ValueError("Unexpected header.")
begin = 0
dtypes = {'names': [], 'formats': []}
rec_vars = []
count = self._unpack_int()
for var in range(count):
(name, dimensions, shape, attributes,
typecode, size, dtype_, begin_, vsize) = self._read_var()
# http://www.unidata.ucar.edu/software/netcdf/docs/netcdf.html
# Note that vsize is the product of the dimension lengths
# (omitting the record dimension) and the number of bytes
# per value (determined from the type), increased to the
# next multiple of 4, for each variable. If a record
# variable, this is the amount of space per record. The
# netCDF "record size" is calculated as the sum of the
# vsize's of all the record variables.
#
# The vsize field is actually redundant, because its value
# may be computed from other information in the header. The
# 32-bit vsize field is not large enough to contain the size
# of variables that require more than 2^32 - 4 bytes, so
# 2^32 - 1 is used in the vsize field for such variables.
if shape and shape[0] is None: # record variable
rec_vars.append(name)
# The netCDF "record size" is calculated as the sum of
# the vsize's of all the record variables.
self.__dict__['_recsize'] += vsize
if begin == 0:
begin = begin_
dtypes['names'].append(name)
dtypes['formats'].append(str(shape[1:]) + dtype_)
# Handle padding with a virtual variable.
if typecode in 'bch':
actual_size = reduce(mul, (1,) + shape[1:]) * size
padding = -actual_size % 4
if padding:
dtypes['names'].append('_padding_%d' % var)
dtypes['formats'].append('(%d,)>b' % padding)
# Data will be set later.
data = None
else: # not a record variable
# Calculate size to avoid problems with vsize (above)
a_size = reduce(mul, shape, 1) * size
if self.use_mmap:
data = self._mm_buf[begin_:begin_ + a_size].view(dtype=dtype_)
data.shape = shape
else:
pos = self.fp.tell()
self.fp.seek(begin_)
data = fromstring(self.fp.read(a_size), dtype=dtype_)
data.shape = shape
self.fp.seek(pos)
# Add variable.
self.variables[name] = netcdf_variable(
data, typecode, size, shape, dimensions, attributes,
maskandscale=self.maskandscale)
if rec_vars:
# Remove padding when only one record variable.
if len(rec_vars) == 1:
dtypes['names'] = dtypes['names'][:1]
dtypes['formats'] = dtypes['formats'][:1]
# Build rec array.
if self.use_mmap:
rec_array = self._mm_buf[begin:begin + self._recs * self._recsize].view(dtype=dtypes)
rec_array.shape = (self._recs,)
else:
pos = self.fp.tell()
self.fp.seek(begin)
rec_array = fromstring(self.fp.read(self._recs * self._recsize), dtype=dtypes)
rec_array.shape = (self._recs,)
self.fp.seek(pos)
for var in rec_vars:
self.variables[var].__dict__['data'] = rec_array[var]
def _read_var(self):
name = asstr(self._unpack_string())
dimensions = []
shape = []
dims = self._unpack_int()
for i in range(dims):
dimid = self._unpack_int()
dimname = self._dims[dimid]
dimensions.append(dimname)
dim = self.dimensions[dimname]
shape.append(dim)
dimensions = tuple(dimensions)
shape = tuple(shape)
attributes = self._read_att_array()
nc_type = self.fp.read(4)
vsize = self._unpack_int()
begin = [self._unpack_int, self._unpack_int64][self.version_byte - 1]()
typecode, size = TYPEMAP[nc_type]
dtype_ = '>%s' % typecode
return name, dimensions, shape, attributes, typecode, size, dtype_, begin, vsize
def _read_values(self):
nc_type = self.fp.read(4)
n = self._unpack_int()
typecode, size = TYPEMAP[nc_type]
count = n * size
values = self.fp.read(int(count))
self.fp.read(-count % 4) # read padding
if typecode != 'c':
values = fromstring(values, dtype='>%s' % typecode)
if values.shape == (1,):
values = values[0]
else:
values = values.rstrip(b'\x00')
return values
def _pack_begin(self, begin):
if self.version_byte == 1:
self._pack_int(begin)
elif self.version_byte == 2:
self._pack_int64(begin)
def _pack_int(self, value):
self.fp.write(array(value, '>i').tostring())
_pack_int32 = _pack_int
def _unpack_int(self):
return int(fromstring(self.fp.read(4), '>i')[0])
_unpack_int32 = _unpack_int
def _pack_int64(self, value):
self.fp.write(array(value, '>q').tostring())
def _unpack_int64(self):
return fromstring(self.fp.read(8), '>q')[0]
def _pack_string(self, s):
count = len(s)
self._pack_int(count)
self.fp.write(asbytes(s))
self.fp.write(b'0' * (-count % 4)) # pad
def _unpack_string(self):
count = self._unpack_int()
s = self.fp.read(count).rstrip(b'\x00')
self.fp.read(-count % 4) # read padding
return s
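`_pack_string` and `_unpack_string` above implement a length-prefixed byte string padded to a 4-byte boundary (the module pads with ASCII `'0'` bytes, mirrored here). A minimal standalone sketch of the same layout over `io.BytesIO`; the function names are illustrative:

```python
import io
import struct

def pack_string(fp, s):
    """Write a length-prefixed byte string, padded to a 4-byte boundary."""
    fp.write(struct.pack('>i', len(s)))   # 4-byte big-endian length prefix
    fp.write(s)
    fp.write(b'0' * (-len(s) % 4))        # pad to the next 4-byte boundary

def unpack_string(fp):
    count = struct.unpack('>i', fp.read(4))[0]
    s = fp.read(count)
    fp.read(-count % 4)                   # consume (and discard) the padding
    return s

buf = io.BytesIO()
pack_string(buf, b'lat')                  # 4 (prefix) + 3 (data) + 1 (pad) bytes
buf.seek(0)
round_tripped = unpack_string(buf)        # b'lat'
```

The `-count % 4` idiom computes how many bytes are needed to reach the next multiple of 4 (0 when already aligned), and appears throughout the writer above.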
class netcdf_variable(object):
"""
A data object for the `netcdf` module.
`netcdf_variable` objects are constructed by calling the method
`netcdf_file.createVariable` on the `netcdf_file` object. `netcdf_variable`
objects behave much like array objects defined in numpy, except that their
data resides in a file. Data is read by indexing and written by assigning
to an indexed subset; the entire array can be accessed by the index ``[:]``
or (for scalars) by using the methods `getValue` and `assignValue`.
`netcdf_variable` objects also have attribute `shape` with the same meaning
as for arrays, but the shape cannot be modified. There is another read-only
attribute `dimensions`, whose value is the tuple of dimension names.
All other attributes correspond to variable attributes defined in
the NetCDF file. Variable attributes are created by assigning to an
attribute of the `netcdf_variable` object.
Parameters
----------
data : array_like
The data array that holds the values for the variable.
Typically, this is initialized as empty, but with the proper shape.
typecode : dtype character code
Desired data-type for the data array.
size : int
Desired element size for the data array.
shape : sequence of ints
The shape of the array. This should match the lengths of the
variable's dimensions.
dimensions : sequence of strings
The names of the dimensions used by the variable. Must be in the
same order of the dimension lengths given by `shape`.
attributes : dict, optional
Attribute values (any type) keyed by string names. These attributes
become attributes for the netcdf_variable object.
maskandscale : bool, optional
Whether to automatically scale and/or mask data based on attributes.
Default is False.
Attributes
----------
dimensions : list of str
List of names of dimensions used by the variable object.
isrec, shape
Properties
See also
--------
isrec, shape
"""
def __init__(self, data, typecode, size, shape, dimensions,
attributes=None,
maskandscale=False):
self.data = data
self._typecode = typecode
self._size = size
self._shape = shape
self.dimensions = dimensions
self.maskandscale = maskandscale
self._attributes = attributes or {}
for k, v in self._attributes.items():
self.__dict__[k] = v
def __setattr__(self, attr, value):
# Store user defined attributes in a separate dict,
# so we can save them to file later.
try:
self._attributes[attr] = value
except AttributeError:
pass
self.__dict__[attr] = value
def isrec(self):
"""Returns whether the variable has a record dimension or not.
A record dimension is a dimension along which additional data could be
easily appended in the netcdf data structure without much rewriting of
the data file. This attribute is a read-only property of the
`netcdf_variable`.
"""
return bool(self.data.shape) and not self._shape[0]
isrec = property(isrec)
def shape(self):
"""Returns the shape tuple of the data variable.
This is a read-only attribute and cannot be modified in the
same manner as other numpy arrays.
"""
return self.data.shape
shape = property(shape)
def getValue(self):
"""
Retrieve a scalar value from a `netcdf_variable` of length one.
Raises
------
ValueError
If the netcdf variable is an array of length greater than one,
this exception will be raised.
"""
return self.data.item()
def assignValue(self, value):
"""
Assign a scalar value to a `netcdf_variable` of length one.
Parameters
----------
value : scalar
Scalar value (of compatible type) to assign to a length-one netcdf
variable. This value will be written to file.
Raises
------
ValueError
If the input is not a scalar, or if the destination is not a length-one
netcdf variable.
"""
if not self.data.flags.writeable:
# Work-around for a bug in NumPy. Calling itemset() on a read-only
# memory-mapped array causes a seg. fault.
# See NumPy ticket #1622, and SciPy ticket #1202.
# This check for `writeable` can be removed when the oldest version
# of numpy still supported by scipy contains the fix for #1622.
raise RuntimeError("variable is not writeable")
self.data.itemset(value)
def typecode(self):
"""
Return the typecode of the variable.
Returns
-------
typecode : char
            The character typecode of the variable (e.g., 'i' for int).
"""
return self._typecode
def itemsize(self):
"""
Return the itemsize of the variable.
Returns
-------
itemsize : int
            The element size of the variable (e.g., 8 for float64).
"""
return self._size
def __getitem__(self, index):
if not self.maskandscale:
return self.data[index]
data = self.data[index].copy()
missing_value = self._get_missing_value()
data = self._apply_missing_value(data, missing_value)
scale_factor = self._attributes.get('scale_factor')
add_offset = self._attributes.get('add_offset')
if add_offset is not None or scale_factor is not None:
data = data.astype(np.float64)
if scale_factor is not None:
data = data * scale_factor
if add_offset is not None:
data += add_offset
return data
def __setitem__(self, index, data):
if self.maskandscale:
missing_value = (
self._get_missing_value() or
getattr(data, 'fill_value', 999999))
self._attributes.setdefault('missing_value', missing_value)
self._attributes.setdefault('_FillValue', missing_value)
data = ((data - self._attributes.get('add_offset', 0.0)) /
self._attributes.get('scale_factor', 1.0))
data = np.ma.asarray(data).filled(missing_value)
if self._typecode not in 'fd' and data.dtype.kind == 'f':
data = np.round(data)
# Expand data for record vars?
if self.isrec:
if isinstance(index, tuple):
rec_index = index[0]
else:
rec_index = index
if isinstance(rec_index, slice):
recs = (rec_index.start or 0) + len(data)
else:
recs = rec_index + 1
if recs > len(self.data):
shape = (recs,) + self._shape[1:]
# Resize in-place does not always work since
# the array might not be single-segment
try:
self.data.resize(shape)
except ValueError:
self.__dict__['data'] = np.resize(self.data, shape).astype(self.data.dtype)
self.data[index] = data
def _get_missing_value(self):
"""
Returns the value denoting "no data" for this variable.
If this variable does not have a missing/fill value, returns None.
If both _FillValue and missing_value are given, give precedence to
_FillValue. The netCDF standard gives special meaning to _FillValue;
missing_value is just used for compatibility with old datasets.
"""
if '_FillValue' in self._attributes:
missing_value = self._attributes['_FillValue']
elif 'missing_value' in self._attributes:
missing_value = self._attributes['missing_value']
else:
missing_value = None
return missing_value
@staticmethod
def _apply_missing_value(data, missing_value):
"""
Applies the given missing value to the data array.
Returns a numpy.ma array, with any value equal to missing_value masked
out (unless missing_value is None, in which case the original array is
returned).
"""
if missing_value is None:
newdata = data
else:
try:
missing_value_isnan = np.isnan(missing_value)
except (TypeError, NotImplementedError):
# some data types (e.g., characters) cannot be tested for NaN
missing_value_isnan = False
if missing_value_isnan:
mymask = np.isnan(data)
else:
mymask = (data == missing_value)
newdata = np.ma.masked_where(mymask, data)
return newdata
NetCDFFile = netcdf_file
NetCDFVariable = netcdf_variable
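The `scale_factor`/`add_offset` handling in `__getitem__` and `__setitem__` above follows the CDF packing convention: unpacked = packed * scale_factor + add_offset on read, and the inverse (with rounding for integer typecodes) on write. A minimal pure-Python sketch of that round trip, with hypothetical helper names (not part of the scipy API):

```python
def unpack(packed, scale_factor=1.0, add_offset=0.0):
    # CDF convention applied on read: unpacked = packed * scale_factor + add_offset
    return packed * scale_factor + add_offset

def pack(unpacked, scale_factor=1.0, add_offset=0.0):
    # inverse transform applied on write; integer variables are rounded,
    # mirroring the np.round() branch in __setitem__
    return round((unpacked - add_offset) / scale_factor)

stored = pack(21.5, scale_factor=0.5, add_offset=20.0)    # -> 3
value = unpack(stored, scale_factor=0.5, add_offset=20.0)  # -> 21.5
```

The masking step (`_apply_missing_value`) is applied before this transform on read, so fill values are never scaled into misleading "real" data.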
"""
MAP Client, a program to generate detailed musculoskeletal models for OpenSim.
Copyright (C) 2012 University of Auckland
This file is part of MAP Client. (http://launchpad.net/mapclient)
MAP Client is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
MAP Client is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with MAP Client. If not, see <http://www.gnu.org/licenses/>.
"""
import logging
import os
import py_compile
import string
import subprocess
import sys
from mapclient.settings.general import get_log_directory
logger = logging.getLogger(__name__)
MAPCLIENT_PLUGIN_LOCATIONS = {'autosegmentationstep':['Automatic Segmenter', 'https://github.com/mapclient-plugins/autosegmentationstep/archive/master.zip'],
'fieldworkexportstlsurfacestep':['Fieldwork Export STL Surface', 'https://github.com/mapclient-plugins/fieldworkexportstlsurfacestep/archive/master.zip'],
'fieldworkfemurmeasurementstep':['Fieldwork Femur Measurements', 'https://github.com/mapclient-plugins/fieldworkfemurmeasurementstep/archive/master.zip'],
'fieldworkhostmeshfittingstep':['Fieldwork Host Mesh Fitting', 'https://github.com/mapclient-plugins/fieldworkhostmeshfittingstep/archive/master.zip'],
'fieldworkmeshfittingstep':['Fieldwork Mesh Fitting', 'https://github.com/mapclient-plugins/fieldworkmeshfittingstep/archive/master.zip'],
'fieldworkmodeldictmakerstep':['Fieldwork Model Dict Maker', 'https://github.com/mapclient-plugins/fieldworkmodeldictmakerstep/archive/master.zip'],
'fieldworkmodelevaluationstep':['Fieldwork Model Evaluation', 'https://github.com/mapclient-plugins/fieldworkmodelevaluationstep/archive/master.zip'],
'fieldworkmodellandmarkstep':['Fieldwork Model Landmarker', 'https://github.com/mapclient-plugins/fieldworkmodellandmarkstep/archive/master.zip'],
'fieldworkmodelselectorstep':['Fieldwork Model Selector', 'https://github.com/mapclient-plugins/fieldworkmodelselectorstep/archive/master.zip'],
'fieldworkmodelserialiserstep':['Fieldwork Model Serialiser', 'https://github.com/mapclient-plugins/fieldworkmodelserialiserstep/archive/master.zip'],
'fieldworkmodelsourcestep':['Fieldwork Model Source', 'https://github.com/mapclient-plugins/fieldworkmodelsourcestep/archive/master.zip'],
'fieldworkmodeltransformationstep':['Fieldwork Model Transformation', 'https://github.com/mapclient-plugins/fieldworkmodeltransformationstep/archive/master.zip'],
'fieldworkpcmeshfittingstep':['Fieldwork PC Mesh Fitting', 'https://github.com/mapclient-plugins/fieldworkpcmeshfittingstep/archive/master.zip'],
'fieldworkpcregfemur2landmarksstep':['Fieldwork PC-Reg Femur 2 Landmarks', 'https://github.com/mapclient-plugins/fieldworkpcregfemur2landmarksstep/archive/master.zip'],
'fieldworkpcregpelvis2landmarksstep':['Fieldwork PC-Reg Pelvis 2 Landmarks', 'https://github.com/mapclient-plugins/fieldworkpcregpelvis2landmarksstep/archive/master.zip'],
'fittinglogentrymakerstep':['FittingLogEntryMaker', 'https://github.com/mapclient-plugins/fittinglogentrymakerstep/archive/master.zip'],
'giaspcsourcestep':['GIAS PC Source', 'https://github.com/mapclient-plugins/giaspcsourcestep/archive/master.zip'],
'landmarksjoinerstep':['Landmarks Joiner', 'https://github.com/mapclient-plugins/landmarksjoinerstep/archive/master.zip'],
'loadstlstep':['Load STL', 'https://github.com/mapclient-plugins/loadstlstep/archive/master.zip'],
'loadvrmlstep':['Load VRML', 'https://github.com/mapclient-plugins/loadvrmlstep/archive/master.zip'],
'matplotlibstaticplotterstep':['Static Data Plotter', 'https://github.com/mapclient-plugins/matplotlibstaticplotterstep/archive/master.zip'],
'mayaviviewerstep':['Mayavi 3D Model Viewer', 'https://github.com/mapclient-plugins/mayaviviewerstep/archive/master.zip'],
'mocapdataviewerstep':['MOCAP Data Viewer', 'https://github.com/mapclient-plugins/mocapdataviewerstep/archive/master.zip'],
'pelvislandmarkshjcpredictionstep':['Pelvis Landmark HJC Prediction', 'https://github.com/mapclient-plugins/pelvislandmarkshjcpredictionstep/archive/master.zip'],
                              'pointclouddictmakerstep':['Point Cloud Dict Maker', 'https://github.com/mapclient-plugins/pointclouddictmakerstep/archive/master.zip'],
'pointwiserigidregistrationstep':['Point-wise Rigid Registration', 'https://github.com/mapclient-plugins/pointwiserigidregistrationstep/archive/master.zip'],
'segmentationstep':['Segmentation', 'https://github.com/mapclient-plugins/segmentationstep/archive/master.zip'],
'stringsource2step':['String Source 2', 'https://github.com/mapclient-plugins/stringsource2step/archive/master.zip'],
'textwriterstep':['TextWriter', 'https://github.com/mapclient-plugins/textwriterstep/archive/master.zip'],
'transformmodeltoimagespacestep':['Transform Model to Image Space', 'https://github.com/mapclient-plugins/transformmodeltoimagespacestep/archive/master.zip'],
'trcframeselectorstep':['TRC Frame Selector', 'https://github.com/mapclient-plugins/trcframeselectorstep/archive/master.zip'],
'trcsourcestep':['TRC Source', 'https://github.com/mapclient-plugins/trcsourcestep/archive/master.zip'],
'zincdatasourcestep':['Zinc Data Source', 'https://github.com/mapclient-plugins/zincdatasourcestep/archive/master.zip'],
'zincmodelsourcestep':['Zinc Model Source', 'https://github.com/mapclient-plugins/zincmodelsourcestep/archive/master.zip']}
class PluginUpdater:
def __init__(self, parent=None):
self._pluginUpdateDict = {}
self._dependenciesUpdateDict = {}
self._directory = ''
        self._successful_plugin_update = {'init_update_success':False, 'resources_update_success':False, 'syntax_update_success':False, 'indentation_update_success':False}
self._2to3Location = self.locate2to3Script()
def analyseDependencies(self, dependencies, directories):
pass
def updateInitContents(self, plugin, directory):
if plugin in MAPCLIENT_PLUGIN_LOCATIONS:
with open(directory, 'r') as oldInitFile:
contents = oldInitFile.readlines()
with open(directory, 'w') as newInitFile:
for line in contents:
newInitFile.write(line)
if '__author__' in line:
newInitFile.write('__stepname__ = \'' + MAPCLIENT_PLUGIN_LOCATIONS[plugin][0] + '\'' + '\n')
newInitFile.write('__location__ = \'' + MAPCLIENT_PLUGIN_LOCATIONS[plugin][1] + '\'' + '\n\n')
fail_init = self.checkSuccessfulInitUpdate(self._pluginUpdateDict[plugin][5])
if not fail_init:
logger.info('__init__.py file for ' + plugin + ' updated successfully.')
self._successful_plugin_update['init_update_success'] = True
else:
logger.debug('There was a problem updating the ' + plugin + ' __init__.py file.')
def checkSuccessfulInitUpdate(self, directory):
return self.checkPluginInitContents(directory)
def updateResourcesFile(self, plugin, directories):
for directory in directories:
            filename = os.path.basename(directory)
with open(directory, 'r') as oldResourceFile:
contents = oldResourceFile.readlines()
with open(directory, 'w') as newResourceFile:
for line in contents:
if 'qt_resource_data = ' in line or 'qt_resource_name = ' in line or 'qt_resource_struct = ' in line:
line = line.split(' = ')
line = line[0] + ' = b' + line[1]
newResourceFile.write(line)
fail_resource = self.checkSuccessfulResourceUpdate(directory)
if not fail_resource:
logger.info(filename + ' file for \'' + plugin + '\' updated successfully.')
self._successful_plugin_update['resources_update_success'] = True
else:
                logger.debug('There was a problem updating the \'' + plugin + '\' ' + filename + ' file.')
def checkSuccessfulResourceUpdate(self, directory):
return self.checkResourcesFileContents(directory)
def locatePyvenvScript(self):
# Windows
if os.path.isfile(os.path.join(sys.exec_prefix, 'Tools', 'scripts', 'pyvenv.py')):
return os.path.join(sys.exec_prefix, 'Tools', 'scripts', 'pyvenv.py')
# Linux and Mac
elif os.path.isfile(os.path.join(sys.exec_prefix, 'bin', 'pyvenv.py')):
return os.path.join(sys.exec_prefix, 'bin', 'pyvenv.py')
elif os.path.isfile(os.path.join(sys.exec_prefix, 'local', 'bin', 'pyvenv.py')):
return os.path.join(sys.exec_prefix, 'local', 'bin', 'pyvenv.py')
else:
return ''
def locate2to3Script(self):
# Windows
if os.path.isfile(os.path.join(sys.exec_prefix, 'Tools', 'scripts', '2to3.py')):
return os.path.join(sys.exec_prefix, 'Tools', 'scripts', '2to3.py')
# Linux and Mac
elif os.path.isfile(os.path.join(sys.exec_prefix, 'bin', '2to3.py')):
return os.path.join(sys.exec_prefix, 'bin', '2to3.py')
elif os.path.isfile(os.path.join(sys.exec_prefix, 'local', 'bin', '2to3.py')):
return os.path.join(sys.exec_prefix, 'local', 'bin', '2to3.py')
else:
return ''
def set2to3Dir(self, location):
if location:
self._2to3Location = location
def get2to3Dir(self):
return self._2to3Location
def updateSyntax(self, plugin, directory):
# find 2to3 for the system
dir_2to3 = self.get2to3Dir()
with open(os.path.join(get_log_directory(), 'syntax_update_report_' + plugin + '.log'), 'w') as file_out:
try:
subprocess.check_call(['python', dir_2to3, '--output-dir=' + directory, '-W', '-v', '-n', '-w', directory], stdout = file_out, stderr = file_out)
logger.info('Syntax update for \'' + plugin + '\' successful.')
self._successful_plugin_update['syntax_update_success'] = True
except Exception as e:
logger.warning('Syntax update for \'' + plugin + '\' unsuccessful.')
                logger.warning('Reason: ' + str(e))
def fixTabbedIndentation(self, plugin, directories, temp_file):
for moduleDir in directories:
with open(moduleDir, 'r') as module:
contents = module.readlines()
if temp_file:
moduleDir = moduleDir[:-3] + '_temp.py'
with open(moduleDir, 'w') as newModule:
for line in contents:
new_line = ''
whitespace = line[:len(line) - len(line.lstrip())]
if '\t' in whitespace:
whitespace = whitespace.replace('\t', ' ')
new_line += whitespace + line[len(line) - len(line.lstrip()):]
newModule.write(new_line)
if not temp_file:
fail_indentation = self.checkSuccessfulIndentationUpdate(directories)
if not fail_indentation:
logger.info('Indentation for \'' + plugin + '\' updated successfully.')
                self._successful_plugin_update['indentation_update_success'] = True
else:
logger.debug('There was a problem updating the \'' + plugin + '\' plugin indentation.')
def checkPluginInitContents(self, directory):
with open(directory, 'r') as init_file:
contents = init_file.read()
if not ('__stepname__' in contents and '__location__' in contents):
return True
return False
def checkResourcesUpdate(self, directory, resource_files):
resource_updates = False
requires_update = []
directories = []
for filename in resource_files:
for dirpath, _, filenames in os.walk(directory):
if filename + '.py' in filenames:
directories += [os.path.join(dirpath, filename + '.py')]
requires_update += [self.checkResourcesFileContents(os.path.join(dirpath, filename + '.py'))]
                    if not requires_update[-1]:
                        directories.pop()
if True in requires_update:
resource_updates = True
return resource_updates, directories
def checkResourcesFileContents(self, resources_path):
with open(resources_path, 'r') as resourcesFile:
content = resourcesFile.readlines()
for line in content:
if 'qt_resource_data = ' in line:
line_content = line.split(' = ')
data = line_content[1]
if data[0] != 'b' and sys.version_info >= (3, 0):
return True
return False
def pluginUpdateDict(self, modname, update_init, update_resources, update_syntax, update_indentation, initDir, resourcesDir, tabbed_modules):
self._pluginUpdateDict[modname] = [self._directory, update_init, update_resources, update_syntax, update_indentation, initDir, resourcesDir, tabbed_modules]
def getAllModulesDirsInTree(self, directory):
moduleDirs = []
for dirpath, _, filenames in os.walk(directory):
for filename in filenames:
if '.py' == filename[-3:]:
moduleDirs += [os.path.join(dirpath, filename)]
return moduleDirs
def checkModuleSyntax(self, directory):
moduleDirs = self.getAllModulesDirsInTree(directory)
for module in moduleDirs:
try:
py_compile.compile(module, doraise = True)
            except py_compile.PyCompileError as e:
if e.exc_type_name == 'SyntaxError' and sys.version_info >= (3, 0):
return True
return False
def checkTabbedIndentation(self, directory):
modules_to_update = []
moduleDirs = self.getAllModulesDirsInTree(directory)
for module in moduleDirs:
update_required = self.analyseModuleIndentation(module)
if update_required:
modules_to_update += [module]
if modules_to_update:
return True, modules_to_update
else:
return False, modules_to_update
def checkSuccessfulIndentationUpdate(self, directories):
for directory in directories:
fail_update = self.analyseModuleIndentation(directory)
if fail_update:
return True
return False
def analyseModuleIndentation(self, directory):
with open(directory, 'r') as module:
contents = module.readlines()
for line in contents:
for char in line:
if char not in string.whitespace:
break
if char == '\t':
return True
return False
def deleteTempFiles(self, modules):
for module in modules:
module = module[:-3] + '_temp.py'
os.remove(module)
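The tab-handling pair above (`analyseModuleIndentation` detects tabs in leading whitespace; `fixTabbedIndentation` rewrites them as four spaces while leaving embedded tabs alone) can be sketched as two small standalone functions. Helper names here are hypothetical, not part of the MAP Client API:

```python
def has_tab_indentation(lines):
    # True if any line's *leading* whitespace contains a tab
    for line in lines:
        indent = line[:len(line) - len(line.lstrip())]
        if '\t' in indent:
            return True
    return False

def expand_tab_indentation(lines, spaces=4):
    # replace tabs only in the leading whitespace, leaving the rest of
    # the line (e.g. tabs inside string literals) untouched
    fixed = []
    for line in lines:
        split = len(line) - len(line.lstrip())
        fixed.append(line[:split].replace('\t', ' ' * spaces) + line[split:])
    return fixed

src = ['def f():\n', '\treturn 1\n']
print(has_tab_indentation(src))          # True
print(expand_tab_indentation(src)[1])    # '    return 1\n'
```

Restricting the replacement to the indentation region is the important design choice: a blanket `line.replace('\t', '    ')` would silently corrupt tab characters that appear inside code or strings.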
# -*- coding: utf-8 -*-
"""
End-to-end tests for the Account Settings page.
"""
from datetime import datetime
from unittest import skip
import pytest
from bok_choy.page_object import XSS_INJECTION
from pytz import timezone, utc
from common.test.acceptance.pages.common.auto_auth import AutoAuthPage, FULL_NAME
from common.test.acceptance.pages.lms.account_settings import AccountSettingsPage
from common.test.acceptance.pages.lms.dashboard import DashboardPage
from common.test.acceptance.tests.helpers import AcceptanceTest, EventsTestMixin
class AccountSettingsTestMixin(EventsTestMixin, AcceptanceTest):
"""
Mixin with helper methods to test the account settings page.
"""
CHANGE_INITIATED_EVENT_NAME = u"edx.user.settings.change_initiated"
USER_SETTINGS_CHANGED_EVENT_NAME = 'edx.user.settings.changed'
ACCOUNT_SETTINGS_REFERER = u"/account/settings"
def visit_account_settings_page(self, gdpr=False):
"""
Visit the account settings page for the current user, and store the page instance
as self.account_settings_page.
"""
self.account_settings_page = AccountSettingsPage(self.browser)
self.account_settings_page.visit()
self.account_settings_page.wait_for_ajax()
# TODO: LEARNER-4422 - delete when we clean up flags
if gdpr:
self.account_settings_page.browser.get(self.browser.current_url + "?course_experience.gdpr=1")
self.account_settings_page.wait_for_page()
def log_in_as_unique_user(self, email=None, full_name=None, password=None):
"""
Create a unique user and return the account's username and id.
"""
username = "test_{uuid}".format(uuid=self.unique_id[0:6])
auto_auth_page = AutoAuthPage(
self.browser,
username=username,
email=email,
full_name=full_name,
password=password
).visit()
user_id = auto_auth_page.get_user_id()
return username, user_id
def settings_changed_event_filter(self, event):
"""Filter out any events that are not "settings changed" events."""
return event['event_type'] == self.USER_SETTINGS_CHANGED_EVENT_NAME
def expected_settings_changed_event(self, setting, old, new, table=None):
"""A dictionary representing the expected fields in a "settings changed" event."""
return {
'username': self.username,
'referer': self.get_settings_page_url(),
'event': {
'user_id': self.user_id,
'setting': setting,
'old': old,
'new': new,
'truncated': [],
'table': table or 'auth_userprofile'
}
}
def settings_change_initiated_event_filter(self, event):
"""Filter out any events that are not "settings change initiated" events."""
return event['event_type'] == self.CHANGE_INITIATED_EVENT_NAME
def expected_settings_change_initiated_event(self, setting, old, new, username=None, user_id=None):
"""A dictionary representing the expected fields in a "settings change initiated" event."""
return {
'username': username or self.username,
'referer': self.get_settings_page_url(),
'event': {
'user_id': user_id or self.user_id,
'setting': setting,
'old': old,
'new': new,
}
}
def get_settings_page_url(self):
"""The absolute URL of the account settings page given the test context."""
return self.relative_path_to_absolute_uri(self.ACCOUNT_SETTINGS_REFERER)
def assert_no_setting_changed_event(self):
"""Assert no setting changed event has been emitted thus far."""
self.assert_no_matching_events_were_emitted({'event_type': self.USER_SETTINGS_CHANGED_EVENT_NAME})
class DashboardMenuTest(AccountSettingsTestMixin, AcceptanceTest):
"""
Tests that the dashboard menu works correctly with the account settings page.
"""
shard = 8
def test_link_on_dashboard_works(self):
"""
Scenario: Verify that the "Account" link works from the dashboard.
Given that I am a registered user
And I visit my dashboard
And I click on "Account" in the top drop down
Then I should see my account settings page
"""
self.log_in_as_unique_user()
dashboard_page = DashboardPage(self.browser)
dashboard_page.visit()
dashboard_page.click_username_dropdown()
self.assertIn('Account', dashboard_page.username_dropdown_link_text)
dashboard_page.click_account_settings_link()
class AccountSettingsPageTest(AccountSettingsTestMixin, AcceptanceTest):
"""
Tests that verify behaviour of the Account Settings page.
"""
SUCCESS_MESSAGE = 'Your changes have been saved.'
shard = 8
def setUp(self):
"""
Initialize account and pages.
"""
super(AccountSettingsPageTest, self).setUp()
self.full_name = FULL_NAME
self.social_link = ''
self.username, self.user_id = self.log_in_as_unique_user(full_name=self.full_name)
self.visit_account_settings_page()
def test_page_view_event(self):
"""
Scenario: An event should be recorded when the "Account Settings"
page is viewed.
Given that I am a registered user
And I visit my account settings page
Then a page view analytics event should be recorded
"""
actual_events = self.wait_for_events(
event_filter={'event_type': 'edx.user.settings.viewed'}, number_of_matches=1)
self.assert_events_match(
[
{
'event': {
'user_id': self.user_id,
'page': 'account',
'visibility': None
}
}
],
actual_events
)
def test_all_sections_and_fields_are_present(self):
"""
Scenario: Verify that all sections and fields are present on the page.
"""
expected_sections_structure = [
{
'title': 'Basic Account Information',
'fields': [
'Username',
'Full Name',
'Email Address (Sign In)',
'Password',
'Language',
'Country or Region of Residence',
'Time Zone',
]
},
{
'title': 'Additional Information',
'fields': [
'Education Completed',
'Gender',
'Year of Birth',
'Preferred Language',
]
},
{
'title': 'Social Media Links',
'fields': [
'Twitter Link',
'Facebook Link',
'LinkedIn Link',
]
},
{
'title': 'Delete My Account',
'fields': []
},
]
self.assertEqual(self.account_settings_page.sections_structure(), expected_sections_structure)
def _test_readonly_field(self, field_id, title, value):
"""
Test behavior of a readonly field.
"""
self.assertEqual(self.account_settings_page.title_for_field(field_id), title)
self.assertEqual(self.account_settings_page.value_for_readonly_field(field_id), value)
def _test_text_field(
self, field_id, title, initial_value, new_invalid_value, new_valid_values, success_message=SUCCESS_MESSAGE,
assert_after_reload=True
):
"""
Test behaviour of a text field.
"""
self.assertEqual(self.account_settings_page.title_for_field(field_id), title)
self.assertEqual(self.account_settings_page.value_for_text_field(field_id), initial_value)
self.assertEqual(
self.account_settings_page.value_for_text_field(field_id, new_invalid_value), new_invalid_value
)
self.account_settings_page.wait_for_indicator(field_id, 'validation-error')
self.browser.refresh()
self.assertNotEqual(self.account_settings_page.value_for_text_field(field_id), new_invalid_value)
for new_value in new_valid_values:
self.assertEqual(self.account_settings_page.value_for_text_field(field_id, new_value), new_value)
self.account_settings_page.wait_for_message(field_id, success_message)
if assert_after_reload:
self.browser.refresh()
self.assertEqual(self.account_settings_page.value_for_text_field(field_id), new_value)
def _test_dropdown_field(
self,
field_id,
title,
initial_value,
new_values,
success_message=SUCCESS_MESSAGE, # pylint: disable=unused-argument
reloads_on_save=False
):
"""
Test behaviour of a dropdown field.
"""
self.assertEqual(self.account_settings_page.title_for_field(field_id), title)
self.assertEqual(self.account_settings_page.value_for_dropdown_field(field_id, focus_out=True), initial_value)
for new_value in new_values:
self.assertEqual(
self.account_settings_page.value_for_dropdown_field(field_id, new_value, focus_out=True),
new_value
)
# An XHR request is made when changing the field
self.account_settings_page.wait_for_ajax()
if reloads_on_save:
self.account_settings_page.wait_for_loading_indicator()
else:
self.browser.refresh()
self.account_settings_page.wait_for_page()
self.assertEqual(self.account_settings_page.value_for_dropdown_field(field_id, focus_out=True), new_value)
def _test_link_field(self, field_id, title, link_title, field_type, success_message):
"""
        Test behaviour of a link field.
"""
self.assertEqual(self.account_settings_page.title_for_field(field_id), title)
self.assertEqual(self.account_settings_page.link_title_for_link_field(field_id), link_title)
self.account_settings_page.click_on_link_in_link_field(field_id, field_type=field_type)
self.account_settings_page.wait_for_message(field_id, success_message)
def test_username_field(self):
"""
Test behaviour of "Username" field.
"""
self._test_readonly_field('username', 'Username', self.username)
def test_full_name_field(self):
"""
Test behaviour of "Full Name" field.
"""
self._test_text_field(
u'name',
u'Full Name',
self.full_name,
u'@',
[u'<h1>another name<h1>', u'<script>'],
'Full Name cannot contain the following characters: < >',
False
)
def test_email_field(self):
"""
Test behaviour of "Email" field.
"""
email = u"test@example.com"
username, user_id = self.log_in_as_unique_user(email=email)
self.visit_account_settings_page()
self._test_text_field(
u'email',
u'Email Address (Sign In)',
email,
u'test@example.com' + XSS_INJECTION,
[u'me@here.com', u'you@there.com'],
success_message='Click the link in the message to update your email address.',
assert_after_reload=False
)
actual_events = self.wait_for_events(
event_filter=self.settings_change_initiated_event_filter, number_of_matches=2)
self.assert_events_match(
[
self.expected_settings_change_initiated_event(
'email', email, 'me@here.com', username=username, user_id=user_id),
# NOTE the first email change was never confirmed, so old has not changed.
self.expected_settings_change_initiated_event(
'email', email, 'you@there.com', username=username, user_id=user_id),
],
actual_events
)
# Email is not saved until user confirms, so no events should have been
# emitted.
self.assert_no_setting_changed_event()
def test_password_field(self):
"""
Test behaviour of "Password" field.
"""
self._test_link_field(
u'password',
u'Password',
u'Reset Your Password',
u'button',
success_message='Click the link in the message to reset your password.',
)
event_filter = self.expected_settings_change_initiated_event('password', None, None)
self.wait_for_events(event_filter=event_filter, number_of_matches=1)
# Like email, since the user has not confirmed their password change,
# the field has not yet changed, so no events will have been emitted.
self.assert_no_setting_changed_event()
@skip(
'On bokchoy test servers, language changes take a few reloads to fully realize '
'which means we can no longer reliably match the strings in the html in other tests.'
)
def test_language_field(self):
"""
Test behaviour of "Language" field.
"""
self._test_dropdown_field(
u'pref-lang',
u'Language',
u'English',
[u'Dummy Language (Esperanto)', u'English'],
reloads_on_save=True,
)
def test_education_completed_field(self):
"""
Test behaviour of "Education Completed" field.
"""
self._test_dropdown_field(
u'level_of_education',
u'Education Completed',
u'',
[u'Bachelor\'s degree', u''],
)
actual_events = self.wait_for_events(event_filter=self.settings_changed_event_filter, number_of_matches=2)
self.assert_events_match(
[
self.expected_settings_changed_event('level_of_education', None, 'b'),
self.expected_settings_changed_event('level_of_education', 'b', None),
],
actual_events
)
def test_gender_field(self):
"""
Test behaviour of "Gender" field.
"""
self._test_dropdown_field(
u'gender',
u'Gender',
u'',
[u'Female', u''],
)
actual_events = self.wait_for_events(event_filter=self.settings_changed_event_filter, number_of_matches=2)
self.assert_events_match(
[
self.expected_settings_changed_event('gender', None, 'f'),
self.expected_settings_changed_event('gender', 'f', None),
],
actual_events
)
def test_year_of_birth_field(self):
"""
Test behaviour of "Year of Birth" field.
"""
# Note that when we clear the year_of_birth here we're firing an event.
self.assertEqual(self.account_settings_page.value_for_dropdown_field('year_of_birth', '', focus_out=True), '')
expected_events = [
self.expected_settings_changed_event('year_of_birth', None, 1980),
self.expected_settings_changed_event('year_of_birth', 1980, None),
]
with self.assert_events_match_during(self.settings_changed_event_filter, expected_events):
self._test_dropdown_field(
u'year_of_birth',
u'Year of Birth',
u'',
[u'1980', u''],
)
def test_country_field(self):
"""
Test behaviour of "Country or Region" field.
"""
self._test_dropdown_field(
u'country',
u'Country or Region of Residence',
u'',
[u'Pakistan', u'Palau'],
)
def test_time_zone_field(self):
"""
        Test behaviour of "Time Zone" field.
"""
kiev_abbr, kiev_offset = self._get_time_zone_info('Europe/Kiev')
pacific_abbr, pacific_offset = self._get_time_zone_info('US/Pacific')
self._test_dropdown_field(
u'time_zone',
u'Time Zone',
u'Default (Local Time Zone)',
[
u'Europe/Kiev ({abbr}, UTC{offset})'.format(abbr=kiev_abbr, offset=kiev_offset),
u'US/Pacific ({abbr}, UTC{offset})'.format(abbr=pacific_abbr, offset=pacific_offset),
],
)
def _get_time_zone_info(self, time_zone_str):
"""
Helper that returns current time zone abbreviation and UTC offset
and accounts for daylight savings time
"""
time_zone = datetime.now(utc).astimezone(timezone(time_zone_str))
abbr = time_zone.strftime('%Z')
offset = time_zone.strftime('%z')
return abbr, offset
def test_social_links_field(self):
"""
Test behaviour of one of the social media links field.
"""
self._test_text_field(
u'social_links',
u'Twitter Link',
self.social_link,
u'www.google.com/invalidlink',
[u'https://www.twitter.com/edX', self.social_link],
)
def test_linked_accounts(self):
"""
Test that fields for third party auth providers exist.
Currently there is no way to test the whole authentication process
because that would require accounts with the providers.
"""
providers = (
['auth-oa2-facebook', 'Facebook', 'Link Your Account'],
['auth-oa2-google-oauth2', 'Google', 'Link Your Account'],
)
# switch to "Linked Accounts" tab
self.account_settings_page.switch_account_settings_tabs('accounts-tab')
for field_id, title, link_title in providers:
self.assertEqual(self.account_settings_page.title_for_field(field_id), title)
self.assertEqual(self.account_settings_page.link_title_for_link_field(field_id), link_title)
def test_order_history(self):
"""
Test that we can see orders on Order History tab.
"""
# switch to "Order History" tab
self.account_settings_page.switch_account_settings_tabs('orders-tab')
# verify that we are on correct tab
self.assertTrue(self.account_settings_page.is_order_history_tab_visible)
expected_order_data_first_row = {
'number': 'Order Number:\nEdx-123',
'date': 'Date Placed:\nApr 21, 2016',
'price': 'Cost:\n$100.00',
}
expected_order_data_second_row = {
'number': 'Product Name:\nTest Course',
'date': 'Date Placed:\nApr 21, 2016',
'price': 'Cost:\n$100.00',
}
        for field_name, value in expected_order_data_first_row.items():
self.assertEqual(
self.account_settings_page.get_value_of_order_history_row_item('order-Edx-123', field_name)[0], value
)
for field_name, value in expected_order_data_second_row.iteritems():
self.assertEqual(
self.account_settings_page.get_value_of_order_history_row_item('order-Edx-123', field_name)[1], value
)
self.assertTrue(self.account_settings_page.order_button_is_visible('order-Edx-123'))
class AccountSettingsDeleteAccountTest(AccountSettingsTestMixin, AcceptanceTest):
"""
Tests for the account deletion workflow.
"""
def setUp(self):
"""
Initialize account and pages.
"""
super(AccountSettingsDeleteAccountTest, self).setUp()
self.full_name = FULL_NAME
self.social_link = ''
self.password = 'password'
self.username, self.user_id = self.log_in_as_unique_user(full_name=self.full_name, password=self.password)
self.visit_account_settings_page(gdpr=True)
def test_button_visible(self):
self.assertTrue(
self.account_settings_page.is_delete_button_visible
)
def test_delete_modal(self):
self.account_settings_page.click_delete_button()
self.assertTrue(
self.account_settings_page.is_delete_modal_visible
)
self.assertFalse(
self.account_settings_page.delete_confirm_button_enabled()
)
self.account_settings_page.fill_in_password_field(self.password)
self.assertTrue(
self.account_settings_page.delete_confirm_button_enabled()
)
@pytest.mark.a11y
class AccountSettingsA11yTest(AccountSettingsTestMixin, AcceptanceTest):
"""
Class to test account settings accessibility.
"""
def test_account_settings_a11y(self):
"""
Test the accessibility of the account settings page.
"""
self.log_in_as_unique_user()
self.visit_account_settings_page()
self.account_settings_page.a11y_audit.check_for_accessibility_errors()
| ahmedaljazzar/edx-platform | common/test/acceptance/tests/lms/test_account_settings.py | Python | agpl-3.0 | 21,256 | ["VisIt"] | caa10568f0cb428dd592cc45277f54c6c7dbd7d5011225572fe63c2ec25be809 |
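The `_get_time_zone_info` helper in the edX test file above can be sketched standalone. A minimal version, assuming stdlib `tzinfo` objects in place of the pytz-style names (`utc`, `timezone(...)`) that the original file imports elsewhere:

```python
from datetime import datetime, timezone

def get_time_zone_info(tz):
    """Return (abbreviation, UTC offset) for a tzinfo object,
    honouring daylight saving time at the current moment."""
    local = datetime.now(timezone.utc).astimezone(tz)
    return local.strftime('%Z'), local.strftime('%z')

# For plain UTC the abbreviation is 'UTC' and the offset '+0000'.
print(get_time_zone_info(timezone.utc))
```

The original passes a tz database name string and resolves it with `pytz.timezone`; the `%Z`/`%z` formatting logic is the same either way.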
import code
import os
import readline
import sys
from octopus.shell.completer.octopus_rlcompleter import OctopusShellCompleter
from octopus.shell.config.config import config
from octopus.shell.octopus_shell_utils import reload as _reload
class OctopusInteractiveConsole(code.InteractiveConsole):
def __init__(self, octopus_shell, locals=None):
def reload(path=config["steps"]["dir"]):
_reload(octopus_shell, path)
super().__init__(locals={"reload": reload}, filename="<console>")
self.octopus_shell = octopus_shell
def runsource(self, source, filename="<input>", symbol="single"):
if not source:
return False
if source[0] == '!':
return super().runsource(source[1:], filename, symbol)
try:
response = self.octopus_shell.run_command(source)
if source == "quit":
self._save_history()
return super().runsource("exit()", filename, symbol)
except Exception as e:
if "[MultipleCompilationErrorsException]" in str(e):
# incomplete input
return True
response = [str(e)]
if not response:
return False
for line in response:
self.write(line)
self.write('\n')
return False
def write(self, data):
sys.stdout.write(data)
def interact(self, host="localhost", port=6000):
self._load_prompt()
self._load_banner()
self._init_readline()
self._load_history()
super().interact(self.banner)
self._save_history()
def init_file(self):
return config['readline']['init']
def hist_file(self):
return config['readline']['hist']
def _load_banner(self):
base = os.path.dirname(__file__)
path = "data/banner.txt"
fname = os.path.join(base, path)
try:
with open(fname, 'r') as f:
self.banner = f.read()
except OSError:
self.banner = "octopus shell\n"
def _load_prompt(self):
sys.ps1 = "> "
def _init_readline(self):
readline.parse_and_bind("tab: complete")
try:
init_file = self.init_file()
if init_file:
readline.read_init_file(os.path.expanduser(self.init_file()))
except FileNotFoundError:
pass
readline.set_completer(OctopusShellCompleter(self.octopus_shell).complete)
def _load_history(self):
try:
hist_file = self.hist_file()
if hist_file:
readline.read_history_file(os.path.expanduser(hist_file))
except FileNotFoundError:
pass
def _save_history(self):
hist_file = self.hist_file()
if hist_file:
readline.write_history_file(os.path.expanduser(hist_file))
| octopus-platform/octopus | python/octopus-mlutils/octopus/shell/octopus_console.py | Python | lgpl-3.0 | 2,887 | ["Octopus"] | fbb52dd19d3d670bbeab7c8207fc229b3f67ba5881e8db331b7afa6901e4e7a5 |
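The octopus console above follows the standard `code.InteractiveConsole` pattern: override `runsource` to intercept input before (or instead of) normal Python evaluation, returning `True` to request a continuation line and `False` when the command is complete. A minimal sketch of that pattern — `EchoConsole` is illustrative, not part of the octopus code:

```python
import code

class EchoConsole(code.InteractiveConsole):
    """Uppercases input, except '!'-prefixed lines, which are
    evaluated as ordinary Python (mirroring the escape above)."""

    def runsource(self, source, filename="<input>", symbol="single"):
        if source.startswith('!'):
            # Delegate to the real interpreter; it returns True
            # while the statement is still syntactically incomplete.
            return super().runsource(source[1:], filename, symbol)
        self.write(source.upper() + '\n')
        return False  # command handled; show the next prompt

console = EchoConsole()
console.runsource('!x = 2 + 3')   # executed as Python
print(console.locals['x'])        # -> 5
```

Returning `True` from `runsource` is what makes the `MultipleCompilationErrorsException` branch in the octopus code behave like an unfinished statement: the console keeps prompting instead of reporting an error.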
"""User-friendly public interface to polynomial functions. """
from __future__ import print_function, division
from sympy.core import (
S, Basic, Expr, I, Integer, Add, Mul, Dummy, Tuple
)
from sympy.core.mul import _keep_coeff
from sympy.core.symbol import Symbol
from sympy.core.basic import preorder_traversal
from sympy.core.relational import Relational
from sympy.core.sympify import sympify
from sympy.core.decorators import _sympifyit
from sympy.logic.boolalg import BooleanAtom
from sympy.polys.polyclasses import DMP
from sympy.polys.polyutils import (
basic_from_dict,
_sort_gens,
_unify_gens,
_dict_reorder,
_dict_from_expr,
_parallel_dict_from_expr,
)
from sympy.polys.rationaltools import together
from sympy.polys.rootisolation import dup_isolate_real_roots_list
from sympy.polys.groebnertools import groebner as _groebner
from sympy.polys.fglmtools import matrix_fglm
from sympy.polys.monomials import Monomial
from sympy.polys.orderings import monomial_key
from sympy.polys.polyerrors import (
OperationNotSupported, DomainError,
CoercionFailed, UnificationFailed,
GeneratorsNeeded, PolynomialError,
MultivariatePolynomialError,
ExactQuotientFailed,
PolificationFailed,
ComputationFailed,
GeneratorsError,
)
from sympy.utilities import group, sift, public
import sympy.polys
import sympy.mpmath
from sympy.polys.domains import FF, QQ
from sympy.polys.constructor import construct_domain
from sympy.polys import polyoptions as options
from sympy.core.compatibility import iterable
@public
class Poly(Expr):
"""Generic class for representing polynomial expressions. """
__slots__ = ['rep', 'gens']
is_commutative = True
is_Poly = True
def __new__(cls, rep, *gens, **args):
"""Create a new polynomial instance out of something useful. """
opt = options.build_options(gens, args)
if 'order' in opt:
raise NotImplementedError("'order' keyword is not implemented yet")
if iterable(rep, exclude=str):
if isinstance(rep, dict):
return cls._from_dict(rep, opt)
else:
return cls._from_list(list(rep), opt)
else:
rep = sympify(rep)
if rep.is_Poly:
return cls._from_poly(rep, opt)
else:
return cls._from_expr(rep, opt)
@classmethod
def new(cls, rep, *gens):
"""Construct :class:`Poly` instance from raw representation. """
if not isinstance(rep, DMP):
raise PolynomialError(
"invalid polynomial representation: %s" % rep)
elif rep.lev != len(gens) - 1:
raise PolynomialError("invalid arguments: %s, %s" % (rep, gens))
obj = Basic.__new__(cls)
obj.rep = rep
obj.gens = gens
return obj
@classmethod
def from_dict(cls, rep, *gens, **args):
"""Construct a polynomial from a ``dict``. """
opt = options.build_options(gens, args)
return cls._from_dict(rep, opt)
@classmethod
def from_list(cls, rep, *gens, **args):
"""Construct a polynomial from a ``list``. """
opt = options.build_options(gens, args)
return cls._from_list(rep, opt)
@classmethod
def from_poly(cls, rep, *gens, **args):
"""Construct a polynomial from a polynomial. """
opt = options.build_options(gens, args)
return cls._from_poly(rep, opt)
@classmethod
def from_expr(cls, rep, *gens, **args):
"""Construct a polynomial from an expression. """
opt = options.build_options(gens, args)
return cls._from_expr(rep, opt)
@classmethod
def _from_dict(cls, rep, opt):
"""Construct a polynomial from a ``dict``. """
gens = opt.gens
if not gens:
raise GeneratorsNeeded(
"can't initialize from 'dict' without generators")
level = len(gens) - 1
domain = opt.domain
if domain is None:
domain, rep = construct_domain(rep, opt=opt)
else:
for monom, coeff in rep.items():
rep[monom] = domain.convert(coeff)
return cls.new(DMP.from_dict(rep, level, domain), *gens)
@classmethod
def _from_list(cls, rep, opt):
"""Construct a polynomial from a ``list``. """
gens = opt.gens
if not gens:
raise GeneratorsNeeded(
"can't initialize from 'list' without generators")
elif len(gens) != 1:
raise MultivariatePolynomialError(
"'list' representation not supported")
level = len(gens) - 1
domain = opt.domain
if domain is None:
domain, rep = construct_domain(rep, opt=opt)
else:
rep = list(map(domain.convert, rep))
return cls.new(DMP.from_list(rep, level, domain), *gens)
@classmethod
def _from_poly(cls, rep, opt):
"""Construct a polynomial from a polynomial. """
if cls != rep.__class__:
rep = cls.new(rep.rep, *rep.gens)
gens = opt.gens
field = opt.field
domain = opt.domain
if gens and rep.gens != gens:
if set(rep.gens) != set(gens):
return cls._from_expr(rep.as_expr(), opt)
else:
rep = rep.reorder(*gens)
if 'domain' in opt and domain:
rep = rep.set_domain(domain)
elif field is True:
rep = rep.to_field()
return rep
@classmethod
def _from_expr(cls, rep, opt):
"""Construct a polynomial from an expression. """
rep, opt = _dict_from_expr(rep, opt)
return cls._from_dict(rep, opt)
def _hashable_content(self):
"""Allow SymPy to hash Poly instances. """
return (self.rep, self.gens)
def __hash__(self):
return super(Poly, self).__hash__()
@property
def free_symbols(self):
"""
Free symbols of a polynomial expression.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(x**2 + 1).free_symbols
set([x])
>>> Poly(x**2 + y).free_symbols
set([x, y])
>>> Poly(x**2 + y, x).free_symbols
set([x, y])
"""
symbols = set([])
for gen in self.gens:
symbols |= gen.free_symbols
return symbols | self.free_symbols_in_domain
@property
def free_symbols_in_domain(self):
"""
Free symbols of the domain of ``self``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(x**2 + 1).free_symbols_in_domain
set()
>>> Poly(x**2 + y).free_symbols_in_domain
set()
>>> Poly(x**2 + y, x).free_symbols_in_domain
set([y])
"""
domain, symbols = self.rep.dom, set()
if domain.is_Composite:
for gen in domain.symbols:
symbols |= gen.free_symbols
elif domain.is_EX:
for coeff in self.coeffs():
symbols |= coeff.free_symbols
return symbols
@property
def args(self):
"""
Don't mess up with the core.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 + 1, x).args
(x**2 + 1,)
"""
return (self.as_expr(),)
@property
def gen(self):
"""
Return the principal generator.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 + 1, x).gen
x
"""
return self.gens[0]
@property
def domain(self):
"""Get the ground domain of ``self``. """
return self.get_domain()
@property
def zero(self):
"""Return zero polynomial with ``self``'s properties. """
return self.new(self.rep.zero(self.rep.lev, self.rep.dom), *self.gens)
@property
def one(self):
"""Return one polynomial with ``self``'s properties. """
return self.new(self.rep.one(self.rep.lev, self.rep.dom), *self.gens)
@property
def unit(self):
"""Return unit polynomial with ``self``'s properties. """
return self.new(self.rep.unit(self.rep.lev, self.rep.dom), *self.gens)
def unify(f, g):
"""
Make ``f`` and ``g`` belong to the same domain.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> f, g = Poly(x/2 + 1), Poly(2*x + 1)
>>> f
Poly(1/2*x + 1, x, domain='QQ')
>>> g
Poly(2*x + 1, x, domain='ZZ')
>>> F, G = f.unify(g)
>>> F
Poly(1/2*x + 1, x, domain='QQ')
>>> G
Poly(2*x + 1, x, domain='QQ')
"""
_, per, F, G = f._unify(g)
return per(F), per(G)
def _unify(f, g):
g = sympify(g)
if not g.is_Poly:
try:
return f.rep.dom, f.per, f.rep, f.rep.per(f.rep.dom.from_sympy(g))
except CoercionFailed:
raise UnificationFailed("can't unify %s with %s" % (f, g))
if isinstance(f.rep, DMP) and isinstance(g.rep, DMP):
gens = _unify_gens(f.gens, g.gens)
dom, lev = f.rep.dom.unify(g.rep.dom, gens), len(gens) - 1
if f.gens != gens:
f_monoms, f_coeffs = _dict_reorder(
f.rep.to_dict(), f.gens, gens)
if f.rep.dom != dom:
f_coeffs = [dom.convert(c, f.rep.dom) for c in f_coeffs]
F = DMP(dict(list(zip(f_monoms, f_coeffs))), dom, lev)
else:
F = f.rep.convert(dom)
if g.gens != gens:
g_monoms, g_coeffs = _dict_reorder(
g.rep.to_dict(), g.gens, gens)
if g.rep.dom != dom:
g_coeffs = [dom.convert(c, g.rep.dom) for c in g_coeffs]
G = DMP(dict(list(zip(g_monoms, g_coeffs))), dom, lev)
else:
G = g.rep.convert(dom)
else:
raise UnificationFailed("can't unify %s with %s" % (f, g))
cls = f.__class__
def per(rep, dom=dom, gens=gens, remove=None):
if remove is not None:
gens = gens[:remove] + gens[remove + 1:]
if not gens:
return dom.to_sympy(rep)
return cls.new(rep, *gens)
return dom, per, F, G
def per(f, rep, gens=None, remove=None):
"""
Create a Poly out of the given representation.
Examples
========
>>> from sympy import Poly, ZZ
>>> from sympy.abc import x, y
>>> from sympy.polys.polyclasses import DMP
>>> a = Poly(x**2 + 1)
>>> a.per(DMP([ZZ(1), ZZ(1)], ZZ), gens=[y])
Poly(y + 1, y, domain='ZZ')
"""
if gens is None:
gens = f.gens
if remove is not None:
gens = gens[:remove] + gens[remove + 1:]
if not gens:
return f.rep.dom.to_sympy(rep)
return f.__class__.new(rep, *gens)
def set_domain(f, domain):
"""Set the ground domain of ``f``. """
opt = options.build_options(f.gens, {'domain': domain})
return f.per(f.rep.convert(opt.domain))
def get_domain(f):
"""Get the ground domain of ``f``. """
return f.rep.dom
def set_modulus(f, modulus):
"""
Set the modulus of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(5*x**2 + 2*x - 1, x).set_modulus(2)
Poly(x**2 + 1, x, modulus=2)
"""
modulus = options.Modulus.preprocess(modulus)
return f.set_domain(FF(modulus))
def get_modulus(f):
"""
Get the modulus of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 + 1, modulus=2).get_modulus()
2
"""
domain = f.get_domain()
if domain.is_FiniteField:
return Integer(domain.characteristic())
else:
raise PolynomialError("not a polynomial over a Galois field")
def _eval_subs(f, old, new):
"""Internal implementation of :func:`subs`. """
if old in f.gens:
if new.is_number:
return f.eval(old, new)
else:
try:
return f.replace(old, new)
except PolynomialError:
pass
return f.as_expr().subs(old, new)
def exclude(f):
"""
Remove unnecessary generators from ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import a, b, c, d, x
>>> Poly(a + x, a, b, c, d, x).exclude()
Poly(a + x, a, x, domain='ZZ')
"""
J, new = f.rep.exclude()
gens = []
for j in range(len(f.gens)):
if j not in J:
gens.append(f.gens[j])
return f.per(new, gens=gens)
def replace(f, x, y=None):
"""
Replace ``x`` with ``y`` in generators list.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(x**2 + 1, x).replace(x, y)
Poly(y**2 + 1, y, domain='ZZ')
"""
if y is None:
if f.is_univariate:
x, y = f.gen, x
else:
raise PolynomialError(
"syntax supported only in univariate case")
if x == y:
return f
if x in f.gens and y not in f.gens:
dom = f.get_domain()
if not dom.is_Composite or y not in dom.symbols:
gens = list(f.gens)
gens[gens.index(x)] = y
return f.per(f.rep, gens=gens)
raise PolynomialError("can't replace %s with %s in %s" % (x, y, f))
def reorder(f, *gens, **args):
"""
Efficiently apply new order of generators.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(x**2 + x*y**2, x, y).reorder(y, x)
Poly(y**2*x + x**2, y, x, domain='ZZ')
"""
opt = options.Options((), args)
if not gens:
gens = _sort_gens(f.gens, opt=opt)
elif set(f.gens) != set(gens):
raise PolynomialError(
"generators list can differ only up to order of elements")
rep = dict(list(zip(*_dict_reorder(f.rep.to_dict(), f.gens, gens))))
return f.per(DMP(rep, f.rep.dom, len(gens) - 1), gens=gens)
def ltrim(f, gen):
"""
Remove dummy generators from the "left" of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y, z
>>> Poly(y**2 + y*z**2, x, y, z).ltrim(y)
Poly(y**2 + y*z**2, y, z, domain='ZZ')
"""
rep = f.as_dict(native=True)
j = f._gen_to_level(gen)
terms = {}
for monom, coeff in rep.items():
monom = monom[j:]
if monom not in terms:
terms[monom] = coeff
else:
raise PolynomialError("can't left trim %s" % f)
gens = f.gens[j:]
return f.new(DMP.from_dict(terms, len(gens) - 1, f.rep.dom), *gens)
def has_only_gens(f, *gens):
"""
Return ``True`` if ``Poly(f, *gens)`` retains ground domain.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y, z
>>> Poly(x*y + 1, x, y, z).has_only_gens(x, y)
True
>>> Poly(x*y + z, x, y, z).has_only_gens(x, y)
False
"""
indices = set([])
for gen in gens:
try:
index = f.gens.index(gen)
except ValueError:
raise GeneratorsError(
"%s doesn't have %s as generator" % (f, gen))
else:
indices.add(index)
for monom in f.monoms():
for i, elt in enumerate(monom):
if i not in indices and elt:
return False
return True
def to_ring(f):
"""
Make the ground domain a ring.
Examples
========
>>> from sympy import Poly, QQ
>>> from sympy.abc import x
>>> Poly(x**2 + 1, domain=QQ).to_ring()
Poly(x**2 + 1, x, domain='ZZ')
"""
if hasattr(f.rep, 'to_ring'):
result = f.rep.to_ring()
else: # pragma: no cover
raise OperationNotSupported(f, 'to_ring')
return f.per(result)
def to_field(f):
"""
Make the ground domain a field.
Examples
========
>>> from sympy import Poly, ZZ
>>> from sympy.abc import x
>>> Poly(x**2 + 1, x, domain=ZZ).to_field()
Poly(x**2 + 1, x, domain='QQ')
"""
if hasattr(f.rep, 'to_field'):
result = f.rep.to_field()
else: # pragma: no cover
raise OperationNotSupported(f, 'to_field')
return f.per(result)
def to_exact(f):
"""
Make the ground domain exact.
Examples
========
>>> from sympy import Poly, RR
>>> from sympy.abc import x
>>> Poly(x**2 + 1.0, x, domain=RR).to_exact()
Poly(x**2 + 1, x, domain='QQ')
"""
if hasattr(f.rep, 'to_exact'):
result = f.rep.to_exact()
else: # pragma: no cover
raise OperationNotSupported(f, 'to_exact')
return f.per(result)
def retract(f, field=None):
"""
Recalculate the ground domain of a polynomial.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> f = Poly(x**2 + 1, x, domain='QQ[y]')
>>> f
Poly(x**2 + 1, x, domain='QQ[y]')
>>> f.retract()
Poly(x**2 + 1, x, domain='ZZ')
>>> f.retract(field=True)
Poly(x**2 + 1, x, domain='QQ')
"""
dom, rep = construct_domain(f.as_dict(zero=True),
field=field, composite=f.domain.is_Composite or None)
return f.from_dict(rep, f.gens, domain=dom)
def slice(f, x, m, n=None):
"""Take a continuous subsequence of terms of ``f``. """
if n is None:
j, m, n = 0, x, m
else:
j = f._gen_to_level(x)
m, n = int(m), int(n)
if hasattr(f.rep, 'slice'):
result = f.rep.slice(m, n, j)
else: # pragma: no cover
raise OperationNotSupported(f, 'slice')
return f.per(result)
def coeffs(f, order=None):
"""
Returns all non-zero coefficients from ``f`` in lex order.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**3 + 2*x + 3, x).coeffs()
[1, 2, 3]
See Also
========
all_coeffs
coeff_monomial
nth
"""
return [f.rep.dom.to_sympy(c) for c in f.rep.coeffs(order=order)]
def monoms(f, order=None):
"""
Returns all non-zero monomials from ``f`` in lex order.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(x**2 + 2*x*y**2 + x*y + 3*y, x, y).monoms()
[(2, 0), (1, 2), (1, 1), (0, 1)]
See Also
========
all_monoms
"""
return f.rep.monoms(order=order)
def terms(f, order=None):
"""
Returns all non-zero terms from ``f`` in lex order.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(x**2 + 2*x*y**2 + x*y + 3*y, x, y).terms()
[((2, 0), 1), ((1, 2), 2), ((1, 1), 1), ((0, 1), 3)]
See Also
========
all_terms
"""
return [(m, f.rep.dom.to_sympy(c)) for m, c in f.rep.terms(order=order)]
def all_coeffs(f):
"""
Returns all coefficients from a univariate polynomial ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**3 + 2*x - 1, x).all_coeffs()
[1, 0, 2, -1]
"""
return [f.rep.dom.to_sympy(c) for c in f.rep.all_coeffs()]
def all_monoms(f):
"""
Returns all monomials from a univariate polynomial ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**3 + 2*x - 1, x).all_monoms()
[(3,), (2,), (1,), (0,)]
See Also
========
all_terms
"""
return f.rep.all_monoms()
def all_terms(f):
"""
Returns all terms from a univariate polynomial ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**3 + 2*x - 1, x).all_terms()
[((3,), 1), ((2,), 0), ((1,), 2), ((0,), -1)]
"""
return [(m, f.rep.dom.to_sympy(c)) for m, c in f.rep.all_terms()]
def termwise(f, func, *gens, **args):
"""
Apply a function to all terms of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> def func(k, coeff):
... k = k[0]
... return coeff//10**(2-k)
>>> Poly(x**2 + 20*x + 400).termwise(func)
Poly(x**2 + 2*x + 4, x, domain='ZZ')
"""
terms = {}
for monom, coeff in f.terms():
result = func(monom, coeff)
if isinstance(result, tuple):
monom, coeff = result
else:
coeff = result
if coeff:
if monom not in terms:
terms[monom] = coeff
else:
raise PolynomialError(
"%s monomial was generated twice" % monom)
return f.from_dict(terms, *(gens or f.gens), **args)
def length(f):
"""
Returns the number of non-zero terms in ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 + 2*x - 1).length()
3
"""
return len(f.as_dict())
def as_dict(f, native=False, zero=False):
"""
Switch to a ``dict`` representation.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(x**2 + 2*x*y**2 - y, x, y).as_dict()
{(0, 1): -1, (1, 2): 2, (2, 0): 1}
"""
if native:
return f.rep.to_dict(zero=zero)
else:
return f.rep.to_sympy_dict(zero=zero)
def as_list(f, native=False):
"""Switch to a ``list`` representation. """
if native:
return f.rep.to_list()
else:
return f.rep.to_sympy_list()
def as_expr(f, *gens):
"""
Convert a Poly instance to an Expr instance.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> f = Poly(x**2 + 2*x*y**2 - y, x, y)
>>> f.as_expr()
x**2 + 2*x*y**2 - y
>>> f.as_expr({x: 5})
10*y**2 - y + 25
>>> f.as_expr(5, 6)
379
"""
if not gens:
gens = f.gens
elif len(gens) == 1 and isinstance(gens[0], dict):
mapping = gens[0]
gens = list(f.gens)
for gen, value in mapping.items():
try:
index = gens.index(gen)
except ValueError:
raise GeneratorsError(
"%s doesn't have %s as generator" % (f, gen))
else:
gens[index] = value
return basic_from_dict(f.rep.to_sympy_dict(), *gens)
def lift(f):
"""
Convert algebraic coefficients to rationals.
Examples
========
>>> from sympy import Poly, I
>>> from sympy.abc import x
>>> Poly(x**2 + I*x + 1, x, extension=I).lift()
Poly(x**4 + 3*x**2 + 1, x, domain='QQ')
"""
if hasattr(f.rep, 'lift'):
result = f.rep.lift()
else: # pragma: no cover
raise OperationNotSupported(f, 'lift')
return f.per(result)
def deflate(f):
"""
Reduce degree of ``f`` by mapping ``x_i**m`` to ``y_i``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(x**6*y**2 + x**3 + 1, x, y).deflate()
((3, 2), Poly(x**2*y + x + 1, x, y, domain='ZZ'))
"""
if hasattr(f.rep, 'deflate'):
J, result = f.rep.deflate()
else: # pragma: no cover
raise OperationNotSupported(f, 'deflate')
return J, f.per(result)
def inject(f, front=False):
"""
Inject ground domain generators into ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> f = Poly(x**2*y + x*y**3 + x*y + 1, x)
>>> f.inject()
Poly(x**2*y + x*y**3 + x*y + 1, x, y, domain='ZZ')
>>> f.inject(front=True)
Poly(y**3*x + y*x**2 + y*x + 1, y, x, domain='ZZ')
"""
dom = f.rep.dom
if dom.is_Numerical:
return f
elif not dom.is_Poly:
raise DomainError("can't inject generators over %s" % dom)
if hasattr(f.rep, 'inject'):
result = f.rep.inject(front=front)
else: # pragma: no cover
raise OperationNotSupported(f, 'inject')
if front:
gens = dom.symbols + f.gens
else:
gens = f.gens + dom.symbols
return f.new(result, *gens)
def eject(f, *gens):
"""
Eject selected generators into the ground domain.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> f = Poly(x**2*y + x*y**3 + x*y + 1, x, y)
>>> f.eject(x)
Poly(x*y**3 + (x**2 + x)*y + 1, y, domain='ZZ[x]')
>>> f.eject(y)
Poly(y*x**2 + (y**3 + y)*x + 1, x, domain='ZZ[y]')
"""
dom = f.rep.dom
if not dom.is_Numerical:
raise DomainError("can't eject generators over %s" % dom)
n, k = len(f.gens), len(gens)
if f.gens[:k] == gens:
_gens, front = f.gens[k:], True
elif f.gens[-k:] == gens:
_gens, front = f.gens[:-k], False
else:
raise NotImplementedError(
"can only eject front or back generators")
dom = dom.inject(*gens)
if hasattr(f.rep, 'eject'):
result = f.rep.eject(dom, front=front)
else: # pragma: no cover
raise OperationNotSupported(f, 'eject')
return f.new(result, *_gens)
def terms_gcd(f):
"""
Remove GCD of terms from the polynomial ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(x**6*y**2 + x**3*y, x, y).terms_gcd()
((3, 1), Poly(x**3*y + 1, x, y, domain='ZZ'))
"""
if hasattr(f.rep, 'terms_gcd'):
J, result = f.rep.terms_gcd()
else: # pragma: no cover
raise OperationNotSupported(f, 'terms_gcd')
return J, f.per(result)
def add_ground(f, coeff):
"""
Add an element of the ground domain to ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x + 1).add_ground(2)
Poly(x + 3, x, domain='ZZ')
"""
if hasattr(f.rep, 'add_ground'):
result = f.rep.add_ground(coeff)
else: # pragma: no cover
raise OperationNotSupported(f, 'add_ground')
return f.per(result)
def sub_ground(f, coeff):
"""
Subtract an element of the ground domain from ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x + 1).sub_ground(2)
Poly(x - 1, x, domain='ZZ')
"""
if hasattr(f.rep, 'sub_ground'):
result = f.rep.sub_ground(coeff)
else: # pragma: no cover
raise OperationNotSupported(f, 'sub_ground')
return f.per(result)
def mul_ground(f, coeff):
"""
Multiply ``f`` by an element of the ground domain.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x + 1).mul_ground(2)
Poly(2*x + 2, x, domain='ZZ')
"""
if hasattr(f.rep, 'mul_ground'):
result = f.rep.mul_ground(coeff)
else: # pragma: no cover
raise OperationNotSupported(f, 'mul_ground')
return f.per(result)
def quo_ground(f, coeff):
"""
Quotient of ``f`` by an element of the ground domain.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(2*x + 4).quo_ground(2)
Poly(x + 2, x, domain='ZZ')
>>> Poly(2*x + 3).quo_ground(2)
Poly(x + 1, x, domain='ZZ')
"""
if hasattr(f.rep, 'quo_ground'):
result = f.rep.quo_ground(coeff)
else: # pragma: no cover
raise OperationNotSupported(f, 'quo_ground')
return f.per(result)
def exquo_ground(f, coeff):
"""
Exact quotient of ``f`` by an element of the ground domain.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(2*x + 4).exquo_ground(2)
Poly(x + 2, x, domain='ZZ')
>>> Poly(2*x + 3).exquo_ground(2)
Traceback (most recent call last):
...
ExactQuotientFailed: 2 does not divide 3 in ZZ
"""
if hasattr(f.rep, 'exquo_ground'):
result = f.rep.exquo_ground(coeff)
else: # pragma: no cover
raise OperationNotSupported(f, 'exquo_ground')
return f.per(result)
def abs(f):
"""
Make all coefficients in ``f`` positive.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 - 1, x).abs()
Poly(x**2 + 1, x, domain='ZZ')
"""
if hasattr(f.rep, 'abs'):
result = f.rep.abs()
else: # pragma: no cover
raise OperationNotSupported(f, 'abs')
return f.per(result)
def neg(f):
"""
Negate all coefficients in ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 - 1, x).neg()
Poly(-x**2 + 1, x, domain='ZZ')
>>> -Poly(x**2 - 1, x)
Poly(-x**2 + 1, x, domain='ZZ')
"""
if hasattr(f.rep, 'neg'):
result = f.rep.neg()
else: # pragma: no cover
raise OperationNotSupported(f, 'neg')
return f.per(result)
def add(f, g):
"""
Add two polynomials ``f`` and ``g``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 + 1, x).add(Poly(x - 2, x))
Poly(x**2 + x - 1, x, domain='ZZ')
>>> Poly(x**2 + 1, x) + Poly(x - 2, x)
Poly(x**2 + x - 1, x, domain='ZZ')
"""
g = sympify(g)
if not g.is_Poly:
return f.add_ground(g)
_, per, F, G = f._unify(g)
if hasattr(f.rep, 'add'):
result = F.add(G)
else: # pragma: no cover
raise OperationNotSupported(f, 'add')
return per(result)
def sub(f, g):
"""
Subtract two polynomials ``f`` and ``g``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 + 1, x).sub(Poly(x - 2, x))
Poly(x**2 - x + 3, x, domain='ZZ')
>>> Poly(x**2 + 1, x) - Poly(x - 2, x)
Poly(x**2 - x + 3, x, domain='ZZ')
"""
g = sympify(g)
if not g.is_Poly:
return f.sub_ground(g)
_, per, F, G = f._unify(g)
if hasattr(f.rep, 'sub'):
result = F.sub(G)
else: # pragma: no cover
raise OperationNotSupported(f, 'sub')
return per(result)
def mul(f, g):
"""
Multiply two polynomials ``f`` and ``g``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 + 1, x).mul(Poly(x - 2, x))
Poly(x**3 - 2*x**2 + x - 2, x, domain='ZZ')
>>> Poly(x**2 + 1, x)*Poly(x - 2, x)
Poly(x**3 - 2*x**2 + x - 2, x, domain='ZZ')
"""
g = sympify(g)
if not g.is_Poly:
return f.mul_ground(g)
_, per, F, G = f._unify(g)
if hasattr(f.rep, 'mul'):
result = F.mul(G)
else: # pragma: no cover
raise OperationNotSupported(f, 'mul')
return per(result)
def sqr(f):
"""
Square a polynomial ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x - 2, x).sqr()
Poly(x**2 - 4*x + 4, x, domain='ZZ')
>>> Poly(x - 2, x)**2
Poly(x**2 - 4*x + 4, x, domain='ZZ')
"""
if hasattr(f.rep, 'sqr'):
result = f.rep.sqr()
else: # pragma: no cover
raise OperationNotSupported(f, 'sqr')
return f.per(result)
def pow(f, n):
"""
Raise ``f`` to a non-negative power ``n``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x - 2, x).pow(3)
Poly(x**3 - 6*x**2 + 12*x - 8, x, domain='ZZ')
>>> Poly(x - 2, x)**3
Poly(x**3 - 6*x**2 + 12*x - 8, x, domain='ZZ')
"""
n = int(n)
if hasattr(f.rep, 'pow'):
result = f.rep.pow(n)
else: # pragma: no cover
raise OperationNotSupported(f, 'pow')
return f.per(result)
def pdiv(f, g):
"""
Polynomial pseudo-division of ``f`` by ``g``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 + 1, x).pdiv(Poly(2*x - 4, x))
(Poly(2*x + 4, x, domain='ZZ'), Poly(20, x, domain='ZZ'))
"""
_, per, F, G = f._unify(g)
if hasattr(f.rep, 'pdiv'):
q, r = F.pdiv(G)
else: # pragma: no cover
raise OperationNotSupported(f, 'pdiv')
return per(q), per(r)
def prem(f, g):
"""
Polynomial pseudo-remainder of ``f`` by ``g``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 + 1, x).prem(Poly(2*x - 4, x))
Poly(20, x, domain='ZZ')
"""
_, per, F, G = f._unify(g)
if hasattr(f.rep, 'prem'):
result = F.prem(G)
else: # pragma: no cover
raise OperationNotSupported(f, 'prem')
return per(result)
def pquo(f, g):
"""
Polynomial pseudo-quotient of ``f`` by ``g``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 + 1, x).pquo(Poly(2*x - 4, x))
Poly(2*x + 4, x, domain='ZZ')
>>> Poly(x**2 - 1, x).pquo(Poly(2*x - 2, x))
Poly(2*x + 2, x, domain='ZZ')
"""
_, per, F, G = f._unify(g)
if hasattr(f.rep, 'pquo'):
result = F.pquo(G)
else: # pragma: no cover
raise OperationNotSupported(f, 'pquo')
return per(result)
def pexquo(f, g):
"""
Polynomial exact pseudo-quotient of ``f`` by ``g``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 - 1, x).pexquo(Poly(2*x - 2, x))
Poly(2*x + 2, x, domain='ZZ')
>>> Poly(x**2 + 1, x).pexquo(Poly(2*x - 4, x))
Traceback (most recent call last):
...
ExactQuotientFailed: 2*x - 4 does not divide x**2 + 1
"""
_, per, F, G = f._unify(g)
if hasattr(f.rep, 'pexquo'):
try:
result = F.pexquo(G)
except ExactQuotientFailed as exc:
raise exc.new(f.as_expr(), g.as_expr())
else: # pragma: no cover
raise OperationNotSupported(f, 'pexquo')
return per(result)
def div(f, g, auto=True):
"""
Polynomial division with remainder of ``f`` by ``g``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 + 1, x).div(Poly(2*x - 4, x))
(Poly(1/2*x + 1, x, domain='QQ'), Poly(5, x, domain='QQ'))
>>> Poly(x**2 + 1, x).div(Poly(2*x - 4, x), auto=False)
(Poly(0, x, domain='ZZ'), Poly(x**2 + 1, x, domain='ZZ'))
"""
dom, per, F, G = f._unify(g)
retract = False
if auto and dom.has_Ring and not dom.has_Field:
F, G = F.to_field(), G.to_field()
retract = True
if hasattr(f.rep, 'div'):
q, r = F.div(G)
else: # pragma: no cover
raise OperationNotSupported(f, 'div')
if retract:
try:
Q, R = q.to_ring(), r.to_ring()
except CoercionFailed:
pass
else:
q, r = Q, R
return per(q), per(r)
def rem(f, g, auto=True):
"""
Computes the polynomial remainder of ``f`` by ``g``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 + 1, x).rem(Poly(2*x - 4, x))
Poly(5, x, domain='ZZ')
>>> Poly(x**2 + 1, x).rem(Poly(2*x - 4, x), auto=False)
Poly(x**2 + 1, x, domain='ZZ')
"""
dom, per, F, G = f._unify(g)
retract = False
if auto and dom.has_Ring and not dom.has_Field:
F, G = F.to_field(), G.to_field()
retract = True
if hasattr(f.rep, 'rem'):
r = F.rem(G)
else: # pragma: no cover
raise OperationNotSupported(f, 'rem')
if retract:
try:
r = r.to_ring()
except CoercionFailed:
pass
return per(r)
def quo(f, g, auto=True):
"""
Computes polynomial quotient of ``f`` by ``g``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 + 1, x).quo(Poly(2*x - 4, x))
Poly(1/2*x + 1, x, domain='QQ')
>>> Poly(x**2 - 1, x).quo(Poly(x - 1, x))
Poly(x + 1, x, domain='ZZ')
"""
dom, per, F, G = f._unify(g)
retract = False
if auto and dom.has_Ring and not dom.has_Field:
F, G = F.to_field(), G.to_field()
retract = True
if hasattr(f.rep, 'quo'):
q = F.quo(G)
else: # pragma: no cover
raise OperationNotSupported(f, 'quo')
if retract:
try:
q = q.to_ring()
except CoercionFailed:
pass
return per(q)
def exquo(f, g, auto=True):
"""
Computes polynomial exact quotient of ``f`` by ``g``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 - 1, x).exquo(Poly(x - 1, x))
Poly(x + 1, x, domain='ZZ')
>>> Poly(x**2 + 1, x).exquo(Poly(2*x - 4, x))
Traceback (most recent call last):
...
ExactQuotientFailed: 2*x - 4 does not divide x**2 + 1
"""
dom, per, F, G = f._unify(g)
retract = False
if auto and dom.has_Ring and not dom.has_Field:
F, G = F.to_field(), G.to_field()
retract = True
if hasattr(f.rep, 'exquo'):
try:
q = F.exquo(G)
except ExactQuotientFailed as exc:
raise exc.new(f.as_expr(), g.as_expr())
else: # pragma: no cover
raise OperationNotSupported(f, 'exquo')
if retract:
try:
q = q.to_ring()
except CoercionFailed:
pass
return per(q)
def _gen_to_level(f, gen):
"""Returns level associated with the given generator. """
if isinstance(gen, int):
length = len(f.gens)
if -length <= gen < length:
if gen < 0:
return length + gen
else:
return gen
else:
raise PolynomialError("-%s <= gen < %s expected, got %s" %
(length, length, gen))
else:
try:
return f.gens.index(sympify(gen))
except ValueError:
raise PolynomialError(
"a valid generator expected, got %s" % gen)
def degree(f, gen=0):
"""
Returns degree of ``f`` in ``x_j``.
The degree of 0 is negative infinity.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(x**2 + y*x + 1, x, y).degree()
2
>>> Poly(x**2 + y*x + y, x, y).degree(y)
1
>>> Poly(0, x).degree()
-oo
"""
j = f._gen_to_level(gen)
if hasattr(f.rep, 'degree'):
return f.rep.degree(j)
else: # pragma: no cover
raise OperationNotSupported(f, 'degree')
def degree_list(f):
"""
Returns a list of degrees of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(x**2 + y*x + 1, x, y).degree_list()
(2, 1)
"""
if hasattr(f.rep, 'degree_list'):
return f.rep.degree_list()
else: # pragma: no cover
raise OperationNotSupported(f, 'degree_list')
def total_degree(f):
"""
Returns the total degree of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(x**2 + y*x + 1, x, y).total_degree()
2
>>> Poly(x + y**5, x, y).total_degree()
5
"""
if hasattr(f.rep, 'total_degree'):
return f.rep.total_degree()
else: # pragma: no cover
raise OperationNotSupported(f, 'total_degree')
def homogenize(f, s):
"""
Returns the homogeneous polynomial of ``f``.
A homogeneous polynomial is a polynomial in which all monomials with
non-zero coefficients have the same total degree. If you only
want to check if a polynomial is homogeneous, then use
:func:`Poly.is_homogeneous`. If you want not only to check if a
polynomial is homogeneous but also compute its homogeneous order,
then use :func:`Poly.homogeneous_order`.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y, z
>>> f = Poly(x**5 + 2*x**2*y**2 + 9*x*y**3)
>>> f.homogenize(z)
Poly(x**5 + 2*x**2*y**2*z + 9*x*y**3*z, x, y, z, domain='ZZ')
"""
if not isinstance(s, Symbol):
raise TypeError("``Symbol`` expected, got %s" % type(s))
if s in f.gens:
i = f.gens.index(s)
gens = f.gens
else:
i = len(f.gens)
gens = f.gens + (s,)
if hasattr(f.rep, 'homogenize'):
return f.per(f.rep.homogenize(i), gens=gens)
raise OperationNotSupported(f, 'homogenize')
def homogeneous_order(f):
"""
Returns the homogeneous order of ``f``.
A homogeneous polynomial is a polynomial in which all monomials with
non-zero coefficients have the same total degree. This degree is
the homogeneous order of ``f``. If you only want to check if a
polynomial is homogeneous, then use :func:`Poly.is_homogeneous`.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> f = Poly(x**5 + 2*x**3*y**2 + 9*x*y**4)
>>> f.homogeneous_order()
5
"""
if hasattr(f.rep, 'homogeneous_order'):
return f.rep.homogeneous_order()
else: # pragma: no cover
raise OperationNotSupported(f, 'homogeneous_order')
def LC(f, order=None):
"""
Returns the leading coefficient of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(4*x**3 + 2*x**2 + 3*x, x).LC()
4
"""
if order is not None:
return f.coeffs(order)[0]
if hasattr(f.rep, 'LC'):
result = f.rep.LC()
else: # pragma: no cover
raise OperationNotSupported(f, 'LC')
return f.rep.dom.to_sympy(result)
def TC(f):
"""
Returns the trailing coefficient of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**3 + 2*x**2 + 3*x, x).TC()
0
"""
if hasattr(f.rep, 'TC'):
result = f.rep.TC()
else: # pragma: no cover
raise OperationNotSupported(f, 'TC')
return f.rep.dom.to_sympy(result)
def EC(f, order=None):
"""
Returns the last non-zero coefficient of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**3 + 2*x**2 + 3*x, x).EC()
3
"""
if hasattr(f.rep, 'coeffs'):
return f.coeffs(order)[-1]
else: # pragma: no cover
raise OperationNotSupported(f, 'EC')
def coeff_monomial(f, monom):
"""
Returns the coefficient of ``monom`` in ``f`` if present, otherwise zero.
Examples
========
>>> from sympy import Poly, exp
>>> from sympy.abc import x, y
>>> p = Poly(24*x*y*exp(8) + 23*x, x, y)
>>> p.coeff_monomial(x)
23
>>> p.coeff_monomial(y)
0
>>> p.coeff_monomial(x*y)
24*exp(8)
Note that ``Expr.coeff()`` behaves differently, collecting terms
if possible; the Poly must be converted to an Expr to use that
method, however:
>>> p.as_expr().coeff(x)
24*y*exp(8) + 23
>>> p.as_expr().coeff(y)
24*x*exp(8)
>>> p.as_expr().coeff(x*y)
24*exp(8)
See Also
========
nth: more efficient query using exponents of the monomial's generators
"""
return f.nth(*Monomial(monom, f.gens).exponents)
def nth(f, *N):
"""
Returns the ``n``-th coefficient of ``f`` where ``N`` are the
exponents of the generators in the term of interest.
Examples
========
>>> from sympy import Poly, sqrt
>>> from sympy.abc import x, y
>>> Poly(x**3 + 2*x**2 + 3*x, x).nth(2)
2
>>> Poly(x**3 + 2*x*y**2 + y**2, x, y).nth(1, 2)
2
>>> Poly(4*sqrt(x)*y)
Poly(4*y*sqrt(x), y, sqrt(x), domain='ZZ')
>>> _.nth(1, 1)
4
See Also
========
coeff_monomial
"""
if hasattr(f.rep, 'nth'):
result = f.rep.nth(*list(map(int, N)))
else: # pragma: no cover
raise OperationNotSupported(f, 'nth')
return f.rep.dom.to_sympy(result)
def coeff(f, x, n=1, right=False):
# the semantics of coeff_monomial and Expr.coeff are different;
# if someone is working with a Poly, they should be aware of the
# differences and chose the method best suited for the query.
# Alternatively, a pure-polys method could be written here but
# at this time the ``right`` keyword would be ignored because Poly
# doesn't work with non-commutatives.
raise NotImplementedError(
'Either convert to Expr with `as_expr` method '
'to use Expr\'s coeff method or else use the '
'`coeff_monomial` method of Polys.')
def LM(f, order=None):
"""
Returns the leading monomial of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(4*x**2 + 2*x*y**2 + x*y + 3*y, x, y).LM()
x**2*y**0
"""
return Monomial(f.monoms(order)[0], f.gens)
def EM(f, order=None):
"""
Returns the last non-zero monomial of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(4*x**2 + 2*x*y**2 + x*y + 3*y, x, y).EM()
x**0*y**1
"""
return Monomial(f.monoms(order)[-1], f.gens)
def LT(f, order=None):
"""
Returns the leading term of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(4*x**2 + 2*x*y**2 + x*y + 3*y, x, y).LT()
(x**2*y**0, 4)
"""
monom, coeff = f.terms(order)[0]
return Monomial(monom, f.gens), coeff
def ET(f, order=None):
"""
Returns the last non-zero term of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(4*x**2 + 2*x*y**2 + x*y + 3*y, x, y).ET()
(x**0*y**1, 3)
"""
monom, coeff = f.terms(order)[-1]
return Monomial(monom, f.gens), coeff
def max_norm(f):
"""
Returns maximum norm of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(-x**2 + 2*x - 3, x).max_norm()
3
"""
if hasattr(f.rep, 'max_norm'):
result = f.rep.max_norm()
else: # pragma: no cover
raise OperationNotSupported(f, 'max_norm')
return f.rep.dom.to_sympy(result)
def l1_norm(f):
"""
Returns l1 norm of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(-x**2 + 2*x - 3, x).l1_norm()
6
"""
if hasattr(f.rep, 'l1_norm'):
result = f.rep.l1_norm()
else: # pragma: no cover
raise OperationNotSupported(f, 'l1_norm')
return f.rep.dom.to_sympy(result)
def clear_denoms(f, convert=False):
"""
Clear denominators, but keep the ground domain.
Examples
========
>>> from sympy import Poly, S, QQ
>>> from sympy.abc import x
>>> f = Poly(x/2 + S(1)/3, x, domain=QQ)
>>> f.clear_denoms()
(6, Poly(3*x + 2, x, domain='QQ'))
>>> f.clear_denoms(convert=True)
(6, Poly(3*x + 2, x, domain='ZZ'))
"""
if not f.rep.dom.has_Field:
return S.One, f
dom = f.get_domain()
if dom.has_assoc_Ring:
dom = f.rep.dom.get_ring()
if hasattr(f.rep, 'clear_denoms'):
coeff, result = f.rep.clear_denoms()
else: # pragma: no cover
raise OperationNotSupported(f, 'clear_denoms')
coeff, f = dom.to_sympy(coeff), f.per(result)
if not convert or not dom.has_assoc_Ring:
return coeff, f
else:
return coeff, f.to_ring()
def rat_clear_denoms(f, g):
"""
Clear denominators in a rational function ``f/g``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> f = Poly(x**2/y + 1, x)
>>> g = Poly(x**3 + y, x)
>>> p, q = f.rat_clear_denoms(g)
>>> p
Poly(x**2 + y, x, domain='ZZ[y]')
>>> q
Poly(y*x**3 + y**2, x, domain='ZZ[y]')
"""
dom, per, f, g = f._unify(g)
f = per(f)
g = per(g)
if not (dom.has_Field and dom.has_assoc_Ring):
return f, g
a, f = f.clear_denoms(convert=True)
b, g = g.clear_denoms(convert=True)
f = f.mul_ground(b)
g = g.mul_ground(a)
return f, g
def integrate(f, *specs, **args):
"""
Computes indefinite integral of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(x**2 + 2*x + 1, x).integrate()
Poly(1/3*x**3 + x**2 + x, x, domain='QQ')
>>> Poly(x*y**2 + x, x, y).integrate((0, 1), (1, 0))
Poly(1/2*x**2*y**2 + 1/2*x**2, x, y, domain='QQ')
"""
if args.get('auto', True) and f.rep.dom.has_Ring:
f = f.to_field()
if hasattr(f.rep, 'integrate'):
if not specs:
return f.per(f.rep.integrate(m=1))
rep = f.rep
for spec in specs:
if type(spec) is tuple:
gen, m = spec
else:
gen, m = spec, 1
rep = rep.integrate(int(m), f._gen_to_level(gen))
return f.per(rep)
else: # pragma: no cover
raise OperationNotSupported(f, 'integrate')
def diff(f, *specs):
"""
Computes partial derivative of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(x**2 + 2*x + 1, x).diff()
Poly(2*x + 2, x, domain='ZZ')
>>> Poly(x*y**2 + x, x, y).diff((0, 0), (1, 1))
Poly(2*x*y, x, y, domain='ZZ')
"""
if hasattr(f.rep, 'diff'):
if not specs:
return f.per(f.rep.diff(m=1))
rep = f.rep
for spec in specs:
if type(spec) is tuple:
gen, m = spec
else:
gen, m = spec, 1
rep = rep.diff(int(m), f._gen_to_level(gen))
return f.per(rep)
else: # pragma: no cover
raise OperationNotSupported(f, 'diff')
def eval(f, x, a=None, auto=True):
"""
Evaluate ``f`` at ``a`` in the given variable.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y, z
>>> Poly(x**2 + 2*x + 3, x).eval(2)
11
>>> Poly(2*x*y + 3*x + y + 2, x, y).eval(x, 2)
Poly(5*y + 8, y, domain='ZZ')
>>> f = Poly(2*x*y + 3*x + y + 2*z, x, y, z)
>>> f.eval({x: 2})
Poly(5*y + 2*z + 6, y, z, domain='ZZ')
>>> f.eval({x: 2, y: 5})
Poly(2*z + 31, z, domain='ZZ')
>>> f.eval({x: 2, y: 5, z: 7})
45
>>> f.eval((2, 5))
Poly(2*z + 31, z, domain='ZZ')
>>> f(2, 5)
Poly(2*z + 31, z, domain='ZZ')
"""
if a is None:
if isinstance(x, dict):
mapping = x
for gen, value in mapping.items():
f = f.eval(gen, value)
return f
elif isinstance(x, (tuple, list)):
values = x
if len(values) > len(f.gens):
raise ValueError("too many values provided")
for gen, value in zip(f.gens, values):
f = f.eval(gen, value)
return f
else:
j, a = 0, x
else:
j = f._gen_to_level(x)
if not hasattr(f.rep, 'eval'): # pragma: no cover
raise OperationNotSupported(f, 'eval')
try:
result = f.rep.eval(a, j)
except CoercionFailed:
if not auto:
raise DomainError("can't evaluate at %s in %s" % (a, f.rep.dom))
else:
a_domain, [a] = construct_domain([a])
new_domain = f.get_domain().unify_with_symbols(a_domain, f.gens)
f = f.set_domain(new_domain)
a = new_domain.convert(a, a_domain)
result = f.rep.eval(a, j)
return f.per(result, remove=j)
def __call__(f, *values):
"""
Evaluate ``f`` at the given values.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y, z
>>> f = Poly(2*x*y + 3*x + y + 2*z, x, y, z)
>>> f(2)
Poly(5*y + 2*z + 6, y, z, domain='ZZ')
>>> f(2, 5)
Poly(2*z + 31, z, domain='ZZ')
>>> f(2, 5, 7)
45
"""
return f.eval(values)
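Calling a ``Poly`` is partial evaluation: each positional value is substituted for the next generator in order, and the result collapses to a number only once every generator is consumed. A short usage sketch (SymPy assumed installed):

```python
from sympy import Poly
from sympy.abc import x, y, z

f = Poly(2*x*y + 3*x + y + 2*z, x, y, z)

assert f(2) == Poly(5*y + 2*z + 6, y, z)   # x -> 2, still a Poly in y, z
assert f(2, 5) == Poly(2*z + 31, z)        # x -> 2, y -> 5
assert f(2, 5, 7) == 45                    # fully evaluated
```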
def half_gcdex(f, g, auto=True):
"""
Half extended Euclidean algorithm of ``f`` and ``g``.
Returns ``(s, h)`` such that ``h = gcd(f, g)`` and ``s*f = h (mod g)``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> f = x**4 - 2*x**3 - 6*x**2 + 12*x + 15
>>> g = x**3 + x**2 - 4*x - 4
>>> Poly(f).half_gcdex(Poly(g))
(Poly(-1/5*x + 3/5, x, domain='QQ'), Poly(x + 1, x, domain='QQ'))
"""
dom, per, F, G = f._unify(g)
if auto and dom.has_Ring:
F, G = F.to_field(), G.to_field()
if hasattr(f.rep, 'half_gcdex'):
s, h = F.half_gcdex(G)
else: # pragma: no cover
raise OperationNotSupported(f, 'half_gcdex')
return per(s), per(h)
def gcdex(f, g, auto=True):
"""
Extended Euclidean algorithm of ``f`` and ``g``.
Returns ``(s, t, h)`` such that ``h = gcd(f, g)`` and ``s*f + t*g = h``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> f = x**4 - 2*x**3 - 6*x**2 + 12*x + 15
>>> g = x**3 + x**2 - 4*x - 4
>>> Poly(f).gcdex(Poly(g))
(Poly(-1/5*x + 3/5, x, domain='QQ'),
Poly(1/5*x**2 - 6/5*x + 2, x, domain='QQ'),
Poly(x + 1, x, domain='QQ'))
"""
dom, per, F, G = f._unify(g)
if auto and dom.has_Ring:
F, G = F.to_field(), G.to_field()
if hasattr(f.rep, 'gcdex'):
s, t, h = F.gcdex(G)
else: # pragma: no cover
raise OperationNotSupported(f, 'gcdex')
return per(s), per(t), per(h)
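The triple returned by ``gcdex`` satisfies the Bezout identity, which is easy to verify directly (a quick sanity check, assuming SymPy is importable):

```python
from sympy import Poly
from sympy.abc import x

f = Poly(x**4 - 2*x**3 - 6*x**2 + 12*x + 15, x)
g = Poly(x**3 + x**2 - 4*x - 4, x)
s, t, h = f.gcdex(g)

# Bezout identity: s*f + t*g == h == gcd(f, g)
assert s*f + t*g == h
assert h.as_expr() == x + 1
```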
def invert(f, g, auto=True):
"""
Invert ``f`` modulo ``g`` when possible.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 - 1, x).invert(Poly(2*x - 1, x))
Poly(-4/3, x, domain='QQ')
>>> Poly(x**2 - 1, x).invert(Poly(x - 1, x))
Traceback (most recent call last):
...
NotInvertible: zero divisor
"""
dom, per, F, G = f._unify(g)
if auto and dom.has_Ring:
F, G = F.to_field(), G.to_field()
if hasattr(f.rep, 'invert'):
result = F.invert(G)
else: # pragma: no cover
raise OperationNotSupported(f, 'invert')
return per(result)
def revert(f, n):
"""Compute ``f**(-1)`` mod ``x**n``. """
if hasattr(f.rep, 'revert'):
result = f.rep.revert(int(n))
else: # pragma: no cover
raise OperationNotSupported(f, 'revert')
return f.per(result)
def subresultants(f, g):
"""
Computes the subresultant PRS of ``f`` and ``g``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 + 1, x).subresultants(Poly(x**2 - 1, x))
[Poly(x**2 + 1, x, domain='ZZ'),
Poly(x**2 - 1, x, domain='ZZ'),
Poly(-2, x, domain='ZZ')]
"""
_, per, F, G = f._unify(g)
if hasattr(f.rep, 'subresultants'):
result = F.subresultants(G)
else: # pragma: no cover
raise OperationNotSupported(f, 'subresultants')
return list(map(per, result))
def resultant(f, g, includePRS=False):
"""
Computes the resultant of ``f`` and ``g`` via PRS.
If includePRS=True, it includes the subresultant PRS in the result.
Because the PRS is used to calculate the resultant, this is more
efficient than calling :func:`subresultants` separately.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> f = Poly(x**2 + 1, x)
>>> f.resultant(Poly(x**2 - 1, x))
4
>>> f.resultant(Poly(x**2 - 1, x), includePRS=True)
(4, [Poly(x**2 + 1, x, domain='ZZ'), Poly(x**2 - 1, x, domain='ZZ'),
Poly(-2, x, domain='ZZ')])
"""
_, per, F, G = f._unify(g)
if hasattr(f.rep, 'resultant'):
if includePRS:
result, R = F.resultant(G, includePRS=includePRS)
else:
result = F.resultant(G)
else: # pragma: no cover
raise OperationNotSupported(f, 'resultant')
if includePRS:
return (per(result, remove=0), list(map(per, R)))
return per(result, remove=0)
def discriminant(f):
"""
Computes the discriminant of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 + 2*x + 3, x).discriminant()
-8
"""
if hasattr(f.rep, 'discriminant'):
result = f.rep.discriminant()
else: # pragma: no cover
raise OperationNotSupported(f, 'discriminant')
return f.per(result, remove=0)
def dispersionset(f, g=None):
r"""Compute the *dispersion set* of two polynomials.
For two polynomials `f(x)` and `g(x)` with `\deg f > 0`
and `\deg g > 0` the dispersion set `\operatorname{J}(f, g)` is defined as:
.. math::
\operatorname{J}(f, g)
& := \{a \in \mathbb{N}_0 | \gcd(f(x), g(x+a)) \neq 1\} \\
& = \{a \in \mathbb{N}_0 | \deg \gcd(f(x), g(x+a)) \geq 1\}
For a single polynomial one defines `\operatorname{J}(f) := \operatorname{J}(f, f)`.
Examples
========
>>> from sympy import poly
>>> from sympy.polys.dispersion import dispersion, dispersionset
>>> from sympy.abc import x
Dispersion set and dispersion of a simple polynomial:
>>> fp = poly((x - 3)*(x + 3), x)
>>> sorted(dispersionset(fp))
[0, 6]
>>> dispersion(fp)
6
Note that the definition of the dispersion is not symmetric:
>>> fp = poly(x**4 - 3*x**2 + 1, x)
>>> gp = fp.shift(-3)
>>> sorted(dispersionset(fp, gp))
[2, 3, 4]
>>> dispersion(fp, gp)
4
>>> sorted(dispersionset(gp, fp))
[]
>>> dispersion(gp, fp)
-oo
Computing the dispersion also works over field extensions:
>>> from sympy import sqrt
>>> fp = poly(x**2 + sqrt(5)*x - 1, x, domain='QQ<sqrt(5)>')
>>> gp = poly(x**2 + (2 + sqrt(5))*x + sqrt(5), x, domain='QQ<sqrt(5)>')
>>> sorted(dispersionset(fp, gp))
[2]
>>> sorted(dispersionset(gp, fp))
[1, 4]
We can even perform the computations for polynomials
having symbolic coefficients:
>>> from sympy.abc import a
>>> fp = poly(4*x**4 + (4*a + 8)*x**3 + (a**2 + 6*a + 4)*x**2 + (a**2 + 2*a)*x, x)
>>> sorted(dispersionset(fp))
[0, 1]
See Also
========
dispersion
References
==========
1. [ManWright94]_
2. [Koepf98]_
3. [Abramov71]_
4. [Man93]_
"""
from sympy.polys.dispersion import dispersionset
return dispersionset(f, g)
def dispersion(f, g=None):
r"""Compute the *dispersion* of polynomials.
For two polynomials `f(x)` and `g(x)` with `\deg f > 0`
and `\deg g > 0` the dispersion `\operatorname{dis}(f, g)` is defined as:
.. math::
\operatorname{dis}(f, g)
& := \max\{ J(f,g) \cup \{0\} \} \\
& = \max\{ \{a \in \mathbb{N} | \gcd(f(x), g(x+a)) \neq 1\} \cup \{0\} \}
and for a single polynomial `\operatorname{dis}(f) := \operatorname{dis}(f, f)`.
Examples
========
>>> from sympy import poly
>>> from sympy.polys.dispersion import dispersion, dispersionset
>>> from sympy.abc import x
Dispersion set and dispersion of a simple polynomial:
>>> fp = poly((x - 3)*(x + 3), x)
>>> sorted(dispersionset(fp))
[0, 6]
>>> dispersion(fp)
6
Note that the definition of the dispersion is not symmetric:
>>> fp = poly(x**4 - 3*x**2 + 1, x)
>>> gp = fp.shift(-3)
>>> sorted(dispersionset(fp, gp))
[2, 3, 4]
>>> dispersion(fp, gp)
4
>>> sorted(dispersionset(gp, fp))
[]
>>> dispersion(gp, fp)
-oo
Computing the dispersion also works over field extensions:
>>> from sympy import sqrt
>>> fp = poly(x**2 + sqrt(5)*x - 1, x, domain='QQ<sqrt(5)>')
>>> gp = poly(x**2 + (2 + sqrt(5))*x + sqrt(5), x, domain='QQ<sqrt(5)>')
>>> sorted(dispersionset(fp, gp))
[2]
>>> sorted(dispersionset(gp, fp))
[1, 4]
We can even perform the computations for polynomials
having symbolic coefficients:
>>> from sympy.abc import a
>>> fp = poly(4*x**4 + (4*a + 8)*x**3 + (a**2 + 6*a + 4)*x**2 + (a**2 + 2*a)*x, x)
>>> sorted(dispersionset(fp))
[0, 1]
See Also
========
dispersionset
References
==========
1. [ManWright94]_
2. [Koepf98]_
3. [Abramov71]_
4. [Man93]_
"""
from sympy.polys.dispersion import dispersion
return dispersion(f, g)
def cofactors(f, g):
"""
Returns the GCD of ``f`` and ``g`` and their cofactors.
Returns polynomials ``(h, cff, cfg)`` such that ``h = gcd(f, g)``, and
``cff = quo(f, h)`` and ``cfg = quo(g, h)`` are the so-called cofactors
of ``f`` and ``g``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 - 1, x).cofactors(Poly(x**2 - 3*x + 2, x))
(Poly(x - 1, x, domain='ZZ'),
Poly(x + 1, x, domain='ZZ'),
Poly(x - 2, x, domain='ZZ'))
"""
_, per, F, G = f._unify(g)
if hasattr(f.rep, 'cofactors'):
h, cff, cfg = F.cofactors(G)
else: # pragma: no cover
raise OperationNotSupported(f, 'cofactors')
return per(h), per(cff), per(cfg)
def gcd(f, g):
"""
Returns the polynomial GCD of ``f`` and ``g``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 - 1, x).gcd(Poly(x**2 - 3*x + 2, x))
Poly(x - 1, x, domain='ZZ')
"""
_, per, F, G = f._unify(g)
if hasattr(f.rep, 'gcd'):
result = F.gcd(G)
else: # pragma: no cover
raise OperationNotSupported(f, 'gcd')
return per(result)
def lcm(f, g):
"""
Returns polynomial LCM of ``f`` and ``g``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 - 1, x).lcm(Poly(x**2 - 3*x + 2, x))
Poly(x**3 - 2*x**2 - x + 2, x, domain='ZZ')
"""
_, per, F, G = f._unify(g)
if hasattr(f.rep, 'lcm'):
result = F.lcm(G)
else: # pragma: no cover
raise OperationNotSupported(f, 'lcm')
return per(result)
def trunc(f, p):
"""
Reduce ``f`` modulo a constant ``p``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(2*x**3 + 3*x**2 + 5*x + 7, x).trunc(3)
Poly(-x**3 - x + 1, x, domain='ZZ')
"""
p = f.rep.dom.convert(p)
if hasattr(f.rep, 'trunc'):
result = f.rep.trunc(p)
else: # pragma: no cover
raise OperationNotSupported(f, 'trunc')
return f.per(result)
def monic(f, auto=True):
"""
Divides all coefficients by ``LC(f)``.
Examples
========
>>> from sympy import Poly, ZZ
>>> from sympy.abc import x
>>> Poly(3*x**2 + 6*x + 9, x, domain=ZZ).monic()
Poly(x**2 + 2*x + 3, x, domain='QQ')
>>> Poly(3*x**2 + 4*x + 2, x, domain=ZZ).monic()
Poly(x**2 + 4/3*x + 2/3, x, domain='QQ')
"""
if auto and f.rep.dom.has_Ring:
f = f.to_field()
if hasattr(f.rep, 'monic'):
result = f.rep.monic()
else: # pragma: no cover
raise OperationNotSupported(f, 'monic')
return f.per(result)
def content(f):
"""
Returns the GCD of polynomial coefficients.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(6*x**2 + 8*x + 12, x).content()
2
"""
if hasattr(f.rep, 'content'):
result = f.rep.content()
else: # pragma: no cover
raise OperationNotSupported(f, 'content')
return f.rep.dom.to_sympy(result)
def primitive(f):
"""
Returns the content and a primitive form of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(2*x**2 + 8*x + 12, x).primitive()
(2, Poly(x**2 + 4*x + 6, x, domain='ZZ'))
"""
if hasattr(f.rep, 'primitive'):
cont, result = f.rep.primitive()
else: # pragma: no cover
raise OperationNotSupported(f, 'primitive')
return f.rep.dom.to_sympy(cont), f.per(result)
def compose(f, g):
"""
Computes the functional composition of ``f`` and ``g``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 + x, x).compose(Poly(x - 1, x))
Poly(x**2 - x, x, domain='ZZ')
"""
_, per, F, G = f._unify(g)
if hasattr(f.rep, 'compose'):
result = F.compose(G)
else: # pragma: no cover
raise OperationNotSupported(f, 'compose')
return per(result)
def decompose(f):
"""
Computes a functional decomposition of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**4 + 2*x**3 - x - 1, x, domain='ZZ').decompose()
[Poly(x**2 - x - 1, x, domain='ZZ'), Poly(x**2 + x, x, domain='ZZ')]
"""
if hasattr(f.rep, 'decompose'):
result = f.rep.decompose()
else: # pragma: no cover
raise OperationNotSupported(f, 'decompose')
return list(map(f.per, result))
def shift(f, a):
"""
Efficiently compute Taylor shift ``f(x + a)``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 - 2*x + 1, x).shift(2)
Poly(x**2 + 2*x + 1, x, domain='ZZ')
"""
if hasattr(f.rep, 'shift'):
result = f.rep.shift(a)
else: # pragma: no cover
raise OperationNotSupported(f, 'shift')
return f.per(result)
def sturm(f, auto=True):
"""
Computes the Sturm sequence of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**3 - 2*x**2 + x - 3, x).sturm()
[Poly(x**3 - 2*x**2 + x - 3, x, domain='QQ'),
Poly(3*x**2 - 4*x + 1, x, domain='QQ'),
Poly(2/9*x + 25/9, x, domain='QQ'),
Poly(-2079/4, x, domain='QQ')]
"""
if auto and f.rep.dom.has_Ring:
f = f.to_field()
if hasattr(f.rep, 'sturm'):
result = f.rep.sturm()
else: # pragma: no cover
raise OperationNotSupported(f, 'sturm')
return list(map(f.per, result))
def gff_list(f):
"""
Computes greatest factorial factorization of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> f = x**5 + 2*x**4 - x**3 - 2*x**2
>>> Poly(f).gff_list()
[(Poly(x, x, domain='ZZ'), 1), (Poly(x + 2, x, domain='ZZ'), 4)]
"""
if hasattr(f.rep, 'gff_list'):
result = f.rep.gff_list()
else: # pragma: no cover
raise OperationNotSupported(f, 'gff_list')
return [(f.per(g), k) for g, k in result]
def sqf_norm(f):
"""
Computes square-free norm of ``f``.
Returns ``s``, ``f``, ``r``, such that ``g(x) = f(x - s*a)`` and
``r(x) = Norm(g(x))`` is a square-free polynomial over ``K``,
where ``a`` is the generator of the algebraic extension of the
ground domain.
Examples
========
>>> from sympy import Poly, sqrt
>>> from sympy.abc import x
>>> s, f, r = Poly(x**2 + 1, x, extension=[sqrt(3)]).sqf_norm()
>>> s
1
>>> f
Poly(x**2 - 2*sqrt(3)*x + 4, x, domain='QQ<sqrt(3)>')
>>> r
Poly(x**4 - 4*x**2 + 16, x, domain='QQ')
"""
if hasattr(f.rep, 'sqf_norm'):
s, g, r = f.rep.sqf_norm()
else: # pragma: no cover
raise OperationNotSupported(f, 'sqf_norm')
return s, f.per(g), f.per(r)
def sqf_part(f):
"""
Computes square-free part of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**3 - 3*x - 2, x).sqf_part()
Poly(x**2 - x - 2, x, domain='ZZ')
"""
if hasattr(f.rep, 'sqf_part'):
result = f.rep.sqf_part()
else: # pragma: no cover
raise OperationNotSupported(f, 'sqf_part')
return f.per(result)
def sqf_list(f, all=False):
"""
Returns a list of square-free factors of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> f = 2*x**5 + 16*x**4 + 50*x**3 + 76*x**2 + 56*x + 16
>>> Poly(f).sqf_list()
(2, [(Poly(x + 1, x, domain='ZZ'), 2),
(Poly(x + 2, x, domain='ZZ'), 3)])
>>> Poly(f).sqf_list(all=True)
(2, [(Poly(1, x, domain='ZZ'), 1),
(Poly(x + 1, x, domain='ZZ'), 2),
(Poly(x + 2, x, domain='ZZ'), 3)])
"""
if hasattr(f.rep, 'sqf_list'):
coeff, factors = f.rep.sqf_list(all)
else: # pragma: no cover
raise OperationNotSupported(f, 'sqf_list')
return f.rep.dom.to_sympy(coeff), [(f.per(g), k) for g, k in factors]
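Multiplying the square-free factorization back together recovers the original polynomial: ``f == coeff * prod(g**k for g, k in factors)``. A minimal round-trip check (assumes SymPy is available):

```python
from functools import reduce
from sympy import Poly
from sympy.abc import x

f = Poly(2*x**5 + 16*x**4 + 50*x**3 + 76*x**2 + 56*x + 16, x)
coeff, factors = f.sqf_list()

# rebuild f from its square-free factors and content
rebuilt = reduce(lambda p, q: p*q, (g**k for g, k in factors))
assert rebuilt.mul_ground(coeff) == f
```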
def sqf_list_include(f, all=False):
"""
Returns a list of square-free factors of ``f``.
Examples
========
>>> from sympy import Poly, expand
>>> from sympy.abc import x
>>> f = expand(2*(x + 1)**3*x**4)
>>> f
2*x**7 + 6*x**6 + 6*x**5 + 2*x**4
>>> Poly(f).sqf_list_include()
[(Poly(2, x, domain='ZZ'), 1),
(Poly(x + 1, x, domain='ZZ'), 3),
(Poly(x, x, domain='ZZ'), 4)]
>>> Poly(f).sqf_list_include(all=True)
[(Poly(2, x, domain='ZZ'), 1),
(Poly(1, x, domain='ZZ'), 2),
(Poly(x + 1, x, domain='ZZ'), 3),
(Poly(x, x, domain='ZZ'), 4)]
"""
if hasattr(f.rep, 'sqf_list_include'):
factors = f.rep.sqf_list_include(all)
else: # pragma: no cover
raise OperationNotSupported(f, 'sqf_list_include')
return [(f.per(g), k) for g, k in factors]
def factor_list(f):
"""
Returns a list of irreducible factors of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> f = 2*x**5 + 2*x**4*y + 4*x**3 + 4*x**2*y + 2*x + 2*y
>>> Poly(f).factor_list()
(2, [(Poly(x + y, x, y, domain='ZZ'), 1),
(Poly(x**2 + 1, x, y, domain='ZZ'), 2)])
"""
if hasattr(f.rep, 'factor_list'):
try:
coeff, factors = f.rep.factor_list()
except DomainError:
return S.One, [(f, 1)]
else: # pragma: no cover
raise OperationNotSupported(f, 'factor_list')
return f.rep.dom.to_sympy(coeff), [(f.per(g), k) for g, k in factors]
def factor_list_include(f):
"""
Returns a list of irreducible factors of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> f = 2*x**5 + 2*x**4*y + 4*x**3 + 4*x**2*y + 2*x + 2*y
>>> Poly(f).factor_list_include()
[(Poly(2*x + 2*y, x, y, domain='ZZ'), 1),
(Poly(x**2 + 1, x, y, domain='ZZ'), 2)]
"""
if hasattr(f.rep, 'factor_list_include'):
try:
factors = f.rep.factor_list_include()
except DomainError:
return [(f, 1)]
else: # pragma: no cover
raise OperationNotSupported(f, 'factor_list_include')
return [(f.per(g), k) for g, k in factors]
def intervals(f, all=False, eps=None, inf=None, sup=None, fast=False, sqf=False):
"""
Compute isolating intervals for roots of ``f``.
For real roots the Vincent-Akritas-Strzebonski (VAS) continued fractions method is used.
References
==========
1. Alkiviadis G. Akritas and Adam W. Strzebonski: A Comparative Study of Two Real Root
Isolation Methods. Nonlinear Analysis: Modelling and Control, Vol. 10, No. 4, 297-304, 2005.
2. Alkiviadis G. Akritas, Adam W. Strzebonski and Panagiotis S. Vigklas: Improving the
Performance of the Continued Fractions Method Using new Bounds of Positive Roots. Nonlinear
Analysis: Modelling and Control, Vol. 13, No. 3, 265-279, 2008.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 - 3, x).intervals()
[((-2, -1), 1), ((1, 2), 1)]
>>> Poly(x**2 - 3, x).intervals(eps=1e-2)
[((-26/15, -19/11), 1), ((19/11, 26/15), 1)]
"""
if eps is not None:
eps = QQ.convert(eps)
if eps <= 0:
raise ValueError("'eps' must be a positive rational")
if inf is not None:
inf = QQ.convert(inf)
if sup is not None:
sup = QQ.convert(sup)
if hasattr(f.rep, 'intervals'):
result = f.rep.intervals(
all=all, eps=eps, inf=inf, sup=sup, fast=fast, sqf=sqf)
else: # pragma: no cover
raise OperationNotSupported(f, 'intervals')
if sqf:
def _real(interval):
s, t = interval
return (QQ.to_sympy(s), QQ.to_sympy(t))
if not all:
return list(map(_real, result))
def _complex(rectangle):
(u, v), (s, t) = rectangle
return (QQ.to_sympy(u) + I*QQ.to_sympy(v),
QQ.to_sympy(s) + I*QQ.to_sympy(t))
real_part, complex_part = result
return list(map(_real, real_part)), list(map(_complex, complex_part))
else:
def _real(interval):
(s, t), k = interval
return ((QQ.to_sympy(s), QQ.to_sympy(t)), k)
if not all:
return list(map(_real, result))
def _complex(rectangle):
((u, v), (s, t)), k = rectangle
return ((QQ.to_sympy(u) + I*QQ.to_sympy(v),
QQ.to_sympy(s) + I*QQ.to_sympy(t)), k)
real_part, complex_part = result
return list(map(_real, real_part)), list(map(_complex, complex_part))
def refine_root(f, s, t, eps=None, steps=None, fast=False, check_sqf=False):
"""
Refine an isolating interval of a root to the given precision.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 - 3, x).refine_root(1, 2, eps=1e-2)
(19/11, 26/15)
"""
if check_sqf and not f.is_sqf:
raise PolynomialError("only square-free polynomials supported")
s, t = QQ.convert(s), QQ.convert(t)
if eps is not None:
eps = QQ.convert(eps)
if eps <= 0:
raise ValueError("'eps' must be a positive rational")
if steps is not None:
steps = int(steps)
elif eps is None:
steps = 1
if hasattr(f.rep, 'refine_root'):
S, T = f.rep.refine_root(s, t, eps=eps, steps=steps, fast=fast)
else: # pragma: no cover
raise OperationNotSupported(f, 'refine_root')
return QQ.to_sympy(S), QQ.to_sympy(T)
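A hypothetical standalone sketch (not part of this module) combining ``intervals`` and ``refine_root`` as documented above: isolate the real roots of ``x**2 - 3``, then shrink the positive-root interval below a requested width.

```python
from sympy import Poly, Rational
from sympy.abc import x

f = Poly(x**2 - 3, x)

# Isolating intervals: one ((s, t), multiplicity) pair per real root.
ivals = f.intervals()  # -> [((-2, -1), 1), ((1, 2), 1)]

# Refine the interval around sqrt(3) to width <= 1/100.
s, t = f.refine_root(1, 2, eps=Rational(1, 100))
```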
def count_roots(f, inf=None, sup=None):
"""
Return the number of roots of ``f`` in the ``[inf, sup]`` interval.
Examples
========
>>> from sympy import Poly, I
>>> from sympy.abc import x
>>> Poly(x**4 - 4, x).count_roots(-3, 3)
2
>>> Poly(x**4 - 4, x).count_roots(0, 1 + 3*I)
1
"""
inf_real, sup_real = True, True
if inf is not None:
inf = sympify(inf)
if inf is S.NegativeInfinity:
inf = None
else:
re, im = inf.as_real_imag()
if not im:
inf = QQ.convert(inf)
else:
inf, inf_real = list(map(QQ.convert, (re, im))), False
if sup is not None:
sup = sympify(sup)
if sup is S.Infinity:
sup = None
else:
re, im = sup.as_real_imag()
if not im:
sup = QQ.convert(sup)
else:
sup, sup_real = list(map(QQ.convert, (re, im))), False
if inf_real and sup_real:
if hasattr(f.rep, 'count_real_roots'):
count = f.rep.count_real_roots(inf=inf, sup=sup)
else: # pragma: no cover
raise OperationNotSupported(f, 'count_real_roots')
else:
if inf_real and inf is not None:
inf = (inf, QQ.zero)
if sup_real and sup is not None:
sup = (sup, QQ.zero)
if hasattr(f.rep, 'count_complex_roots'):
count = f.rep.count_complex_roots(inf=inf, sup=sup)
else: # pragma: no cover
raise OperationNotSupported(f, 'count_complex_roots')
return Integer(count)
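A brief usage sketch mirroring the docstring above: ``count_roots`` counts roots on a real interval, or inside a complex rectangle when either bound has an imaginary part.

```python
from sympy import Poly, I
from sympy.abc import x

p = Poly(x**4 - 4, x)

# Two real roots (+/- sqrt(2)) lie in [-3, 3].
n_real = p.count_roots(-3, 3)       # -> 2
# One root falls inside the rectangle with corners 0 and 1 + 3*I.
n_rect = p.count_roots(0, 1 + 3*I)  # -> 1
```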
def root(f, index, radicals=True):
"""
Get an indexed root of a polynomial.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> f = Poly(2*x**3 - 7*x**2 + 4*x + 4)
>>> f.root(0)
-1/2
>>> f.root(1)
2
>>> f.root(2)
2
>>> f.root(3)
Traceback (most recent call last):
...
IndexError: root index out of [-3, 2] range, got 3
>>> Poly(x**5 + x + 1).root(0)
RootOf(x**3 - x**2 + 1, 0)
"""
return sympy.polys.rootoftools.RootOf(f, index, radicals=radicals)
def real_roots(f, multiple=True, radicals=True):
"""
Return a list of real roots with multiplicities.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(2*x**3 - 7*x**2 + 4*x + 4).real_roots()
[-1/2, 2, 2]
>>> Poly(x**3 + x + 1).real_roots()
[RootOf(x**3 + x + 1, 0)]
"""
reals = sympy.polys.rootoftools.RootOf.real_roots(f, radicals=radicals)
if multiple:
return reals
else:
return group(reals, multiple=False)
def all_roots(f, multiple=True, radicals=True):
"""
Return a list of real and complex roots with multiplicities.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(2*x**3 - 7*x**2 + 4*x + 4).all_roots()
[-1/2, 2, 2]
>>> Poly(x**3 + x + 1).all_roots()
[RootOf(x**3 + x + 1, 0),
RootOf(x**3 + x + 1, 1),
RootOf(x**3 + x + 1, 2)]
"""
roots = sympy.polys.rootoftools.RootOf.all_roots(f, radicals=radicals)
if multiple:
return roots
else:
return group(roots, multiple=False)
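A quick side-by-side of ``real_roots`` and ``all_roots`` (a hypothetical snippet, assuming the doctest values above): real roots come back as exact numbers with multiplicities repeated, while complex roots are returned symbolically.

```python
from sympy import Poly, Rational
from sympy.abc import x

f = Poly(2*x**3 - 7*x**2 + 4*x + 4)

reals = f.real_roots()                  # -> [-1/2, 2, 2]
allr = Poly(x**3 + x + 1).all_roots()   # one real root plus two complex ones
```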
def nroots(f, n=15, maxsteps=50, cleanup=True, error=False):
"""
Compute numerical approximations of roots of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 - 3).nroots(n=15)
[-1.73205080756888, 1.73205080756888]
>>> Poly(x**2 - 3).nroots(n=30)
[-1.73205080756887729352744634151, 1.73205080756887729352744634151]
"""
if f.is_multivariate:
raise MultivariatePolynomialError(
"can't compute numerical roots of %s" % f)
if f.degree() <= 0:
return []
coeffs = [coeff.evalf(n=n).as_real_imag()
for coeff in f.all_coeffs()]
dps = sympy.mpmath.mp.dps
sympy.mpmath.mp.dps = n
try:
try:
coeffs = [sympy.mpmath.mpc(*coeff) for coeff in coeffs]
except TypeError:
raise DomainError(
"numerical domain expected, got %s" % f.rep.dom)
result = sympy.mpmath.polyroots(
coeffs, maxsteps=maxsteps, cleanup=cleanup, error=error)
if error:
roots, error = result
else:
roots, error = result, None
roots = list(map(sympify, sorted(roots, key=lambda r: (r.real, r.imag))))
finally:
sympy.mpmath.mp.dps = dps
if error is not None:
return roots, sympify(error)
else:
return roots
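A minimal sketch of ``nroots``: the roots come back as sorted numerical approximations at the requested precision (here 15 digits, as in the docstring).

```python
from sympy import Poly
from sympy.abc import x

rts = Poly(x**2 - 3).nroots(n=15)  # -> [-1.73205080756888, 1.73205080756888]
```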
def ground_roots(f):
"""
Compute roots of ``f`` by factorization in the ground domain.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**6 - 4*x**4 + 4*x**3 - x**2).ground_roots()
{0: 2, 1: 2}
"""
if f.is_multivariate:
raise MultivariatePolynomialError(
"can't compute ground roots of %s" % f)
roots = {}
for factor, k in f.factor_list()[1]:
if factor.is_linear:
a, b = factor.all_coeffs()
roots[-b/a] = k
return roots
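Following the loop above, ``ground_roots`` keeps only the roots contributed by linear factors over the ground domain; the quadratic factor below is ignored because it does not split over ZZ.

```python
from sympy import Poly
from sympy.abc import x

# x**6 - 4*x**4 + 4*x**3 - x**2 == x**2 * (x - 1)**2 * (x**2 + 2*x - 1)
r = Poly(x**6 - 4*x**4 + 4*x**3 - x**2).ground_roots()  # -> {0: 2, 1: 2}
```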
def nth_power_roots_poly(f, n):
"""
Construct a polynomial with n-th powers of roots of ``f``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> f = Poly(x**4 - x**2 + 1)
>>> f.nth_power_roots_poly(2)
Poly(x**4 - 2*x**3 + 3*x**2 - 2*x + 1, x, domain='ZZ')
>>> f.nth_power_roots_poly(3)
Poly(x**4 + 2*x**2 + 1, x, domain='ZZ')
>>> f.nth_power_roots_poly(4)
Poly(x**4 + 2*x**3 + 3*x**2 + 2*x + 1, x, domain='ZZ')
>>> f.nth_power_roots_poly(12)
Poly(x**4 - 4*x**3 + 6*x**2 - 4*x + 1, x, domain='ZZ')
"""
if f.is_multivariate:
raise MultivariatePolynomialError(
"must be a univariate polynomial")
N = sympify(n)
if N.is_Integer and N >= 1:
n = int(N)
else:
raise ValueError("'n' must be an integer and n >= 1, got %s" % n)
x = f.gen
t = Dummy('t')
r = f.resultant(f.__class__.from_expr(x**n - t, x, t))
return r.replace(t, x)
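The resultant construction above can be checked against the docstring example: squaring the roots of ``x**4 - x**2 + 1`` (primitive 12th roots of unity) yields the primitive 6th roots, each appearing twice.

```python
from sympy import Poly
from sympy.abc import x

f = Poly(x**4 - x**2 + 1)
g = f.nth_power_roots_poly(2)  # -> Poly(x**4 - 2*x**3 + 3*x**2 - 2*x + 1, x)
```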
def cancel(f, g, include=False):
"""
Cancel common factors in a rational function ``f/g``.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(2*x**2 - 2, x).cancel(Poly(x**2 - 2*x + 1, x))
(1, Poly(2*x + 2, x, domain='ZZ'), Poly(x - 1, x, domain='ZZ'))
>>> Poly(2*x**2 - 2, x).cancel(Poly(x**2 - 2*x + 1, x), include=True)
(Poly(2*x + 2, x, domain='ZZ'), Poly(x - 1, x, domain='ZZ'))
"""
dom, per, F, G = f._unify(g)
if hasattr(F, 'cancel'):
result = F.cancel(G, include=include)
else: # pragma: no cover
raise OperationNotSupported(f, 'cancel')
if not include:
if dom.has_assoc_Ring:
dom = dom.get_ring()
cp, cq, p, q = result
cp = dom.to_sympy(cp)
cq = dom.to_sympy(cq)
return cp/cq, per(p), per(q)
else:
return tuple(map(per, result))
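A short sketch of ``cancel`` in its default (``include=False``) form, which separates the ground-domain content from the cancelled numerator and denominator:

```python
from sympy import Poly
from sympy.abc import x

# Cancel the common factor (x - 1) from (2*x**2 - 2)/(x**2 - 2*x + 1).
c, p, q = Poly(2*x**2 - 2, x).cancel(Poly(x**2 - 2*x + 1, x))
```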
@property
def is_zero(f):
"""
Returns ``True`` if ``f`` is a zero polynomial.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(0, x).is_zero
True
>>> Poly(1, x).is_zero
False
"""
return f.rep.is_zero
@property
def is_one(f):
"""
Returns ``True`` if ``f`` is a unit polynomial.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(0, x).is_one
False
>>> Poly(1, x).is_one
True
"""
return f.rep.is_one
@property
def is_sqf(f):
"""
Returns ``True`` if ``f`` is a square-free polynomial.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 - 2*x + 1, x).is_sqf
False
>>> Poly(x**2 - 1, x).is_sqf
True
"""
return f.rep.is_sqf
@property
def is_monic(f):
"""
Returns ``True`` if the leading coefficient of ``f`` is one.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x + 2, x).is_monic
True
>>> Poly(2*x + 2, x).is_monic
False
"""
return f.rep.is_monic
@property
def is_primitive(f):
"""
Returns ``True`` if the GCD of the coefficients of ``f`` is one.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(2*x**2 + 6*x + 12, x).is_primitive
False
>>> Poly(x**2 + 3*x + 6, x).is_primitive
True
"""
return f.rep.is_primitive
@property
def is_ground(f):
"""
Returns ``True`` if ``f`` is an element of the ground domain.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(x, x).is_ground
False
>>> Poly(2, x).is_ground
True
>>> Poly(y, x).is_ground
True
"""
return f.rep.is_ground
@property
def is_linear(f):
"""
Returns ``True`` if ``f`` is linear in all its variables.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(x + y + 2, x, y).is_linear
True
>>> Poly(x*y + 2, x, y).is_linear
False
"""
return f.rep.is_linear
@property
def is_quadratic(f):
"""
Returns ``True`` if ``f`` is quadratic in all its variables.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(x*y + 2, x, y).is_quadratic
True
>>> Poly(x*y**2 + 2, x, y).is_quadratic
False
"""
return f.rep.is_quadratic
@property
def is_monomial(f):
"""
Returns ``True`` if ``f`` is zero or has only one term.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(3*x**2, x).is_monomial
True
>>> Poly(3*x**2 + 1, x).is_monomial
False
"""
return f.rep.is_monomial
@property
def is_homogeneous(f):
"""
Returns ``True`` if ``f`` is a homogeneous polynomial.
A homogeneous polynomial is a polynomial all of whose monomials with
non-zero coefficients have the same total degree. If you want not
only to check whether a polynomial is homogeneous but also to compute
its homogeneous order, then use :func:`Poly.homogeneous_order`.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(x**2 + x*y, x, y).is_homogeneous
True
>>> Poly(x**3 + x*y, x, y).is_homogeneous
False
"""
return f.rep.is_homogeneous
@property
def is_irreducible(f):
"""
Returns ``True`` if ``f`` has no factors over its domain.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> Poly(x**2 + x + 1, x, modulus=2).is_irreducible
True
>>> Poly(x**2 + 1, x, modulus=2).is_irreducible
False
"""
return f.rep.is_irreducible
@property
def is_univariate(f):
"""
Returns ``True`` if ``f`` is a univariate polynomial.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(x**2 + x + 1, x).is_univariate
True
>>> Poly(x*y**2 + x*y + 1, x, y).is_univariate
False
>>> Poly(x*y**2 + x*y + 1, x).is_univariate
True
>>> Poly(x**2 + x + 1, x, y).is_univariate
False
"""
return len(f.gens) == 1
@property
def is_multivariate(f):
"""
Returns ``True`` if ``f`` is a multivariate polynomial.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x, y
>>> Poly(x**2 + x + 1, x).is_multivariate
False
>>> Poly(x*y**2 + x*y + 1, x, y).is_multivariate
True
>>> Poly(x*y**2 + x*y + 1, x).is_multivariate
False
>>> Poly(x**2 + x + 1, x, y).is_multivariate
True
"""
return len(f.gens) != 1
@property
def is_cyclotomic(f):
"""
Returns ``True`` if ``f`` is a cyclotomic polynomial.
Examples
========
>>> from sympy import Poly
>>> from sympy.abc import x
>>> f = x**16 + x**14 - x**10 + x**8 - x**6 + x**2 + 1
>>> Poly(f).is_cyclotomic
False
>>> g = x**16 + x**14 - x**10 - x**8 - x**6 + x**2 + 1
>>> Poly(g).is_cyclotomic
True
"""
return f.rep.is_cyclotomic
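The boolean properties above are thin wrappers over the internal representation. A hypothetical standalone snippet exercising a few of them side by side, using the doctest values from the docstrings:

```python
from sympy import Poly
from sympy.abc import x, y

checks = (
    Poly(x + 2, x).is_monic,
    not Poly(2*x + 2, x).is_monic,
    not Poly(x**2 - 2*x + 1, x).is_sqf,   # (x - 1)**2 is not square-free
    Poly(x + y + 2, x, y).is_linear,
    Poly(x**2 + x*y, x, y).is_homogeneous,
)
```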
def __abs__(f):
return f.abs()
def __neg__(f):
return f.neg()
@_sympifyit('g', NotImplemented)
def __add__(f, g):
if not g.is_Poly:
try:
g = f.__class__(g, *f.gens)
except PolynomialError:
return f.as_expr() + g
return f.add(g)
@_sympifyit('g', NotImplemented)
def __radd__(f, g):
if not g.is_Poly:
try:
g = f.__class__(g, *f.gens)
except PolynomialError:
return g + f.as_expr()
return g.add(f)
@_sympifyit('g', NotImplemented)
def __sub__(f, g):
if not g.is_Poly:
try:
g = f.__class__(g, *f.gens)
except PolynomialError:
return f.as_expr() - g
return f.sub(g)
@_sympifyit('g', NotImplemented)
def __rsub__(f, g):
if not g.is_Poly:
try:
g = f.__class__(g, *f.gens)
except PolynomialError:
return g - f.as_expr()
return g.sub(f)
@_sympifyit('g', NotImplemented)
def __mul__(f, g):
if not g.is_Poly:
try:
g = f.__class__(g, *f.gens)
except PolynomialError:
return f.as_expr()*g
return f.mul(g)
@_sympifyit('g', NotImplemented)
def __rmul__(f, g):
if not g.is_Poly:
try:
g = f.__class__(g, *f.gens)
except PolynomialError:
return g*f.as_expr()
return g.mul(f)
@_sympifyit('n', NotImplemented)
def __pow__(f, n):
if n.is_Integer and n >= 0:
return f.pow(n)
else:
return f.as_expr()**n
@_sympifyit('g', NotImplemented)
def __divmod__(f, g):
if not g.is_Poly:
g = f.__class__(g, *f.gens)
return f.div(g)
@_sympifyit('g', NotImplemented)
def __rdivmod__(f, g):
if not g.is_Poly:
g = f.__class__(g, *f.gens)
return g.div(f)
@_sympifyit('g', NotImplemented)
def __mod__(f, g):
if not g.is_Poly:
g = f.__class__(g, *f.gens)
return f.rem(g)
@_sympifyit('g', NotImplemented)
def __rmod__(f, g):
if not g.is_Poly:
g = f.__class__(g, *f.gens)
return g.rem(f)
@_sympifyit('g', NotImplemented)
def __floordiv__(f, g):
if not g.is_Poly:
g = f.__class__(g, *f.gens)
return f.quo(g)
@_sympifyit('g', NotImplemented)
def __rfloordiv__(f, g):
if not g.is_Poly:
g = f.__class__(g, *f.gens)
return g.quo(f)
@_sympifyit('g', NotImplemented)
def __div__(f, g):
return f.as_expr()/g.as_expr()
@_sympifyit('g', NotImplemented)
def __rdiv__(f, g):
return g.as_expr()/f.as_expr()
__truediv__ = __div__
__rtruediv__ = __rdiv__
@_sympifyit('g', NotImplemented)
def __eq__(f, g):
if not g.is_Poly:
try:
g = f.__class__(g, f.gens, domain=f.get_domain())
except (PolynomialError, DomainError, CoercionFailed):
return False
if f.gens != g.gens:
return False
if f.rep.dom != g.rep.dom:
try:
dom = f.rep.dom.unify(g.rep.dom, f.gens)
except UnificationFailed:
return False
f = f.set_domain(dom)
g = g.set_domain(dom)
return f.rep == g.rep
@_sympifyit('g', NotImplemented)
def __ne__(f, g):
return not f.__eq__(g)
def __nonzero__(f):
return not f.is_zero
__bool__ = __nonzero__
def eq(f, g, strict=False):
if not strict:
return f.__eq__(g)
else:
return f._strict_eq(sympify(g))
def ne(f, g, strict=False):
return not f.eq(g, strict=strict)
def _strict_eq(f, g):
return isinstance(g, f.__class__) and f.gens == g.gens and f.rep.eq(g.rep, strict=True)
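The dunder methods above let ``Poly`` instances participate in ordinary Python arithmetic; non-``Poly`` operands are coerced when possible, and ``divmod``/``%`` delegate to ``div``/``rem``. A small sketch:

```python
from sympy import Poly
from sympy.abc import x

f = Poly(x**2 + 1, x)
g = Poly(x - 2, x)

h = f + g            # Poly(x**2 + x - 1, x)
q, r = divmod(f, g)  # delegates to f.div(g)
```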
@public
class PurePoly(Poly):
"""Class for representing pure polynomials. """
def _hashable_content(self):
"""Allow SymPy to hash Poly instances. """
return (self.rep,)
def __hash__(self):
return super(PurePoly, self).__hash__()
@property
def free_symbols(self):
"""
Free symbols of a polynomial.
Examples
========
>>> from sympy import PurePoly
>>> from sympy.abc import x, y
>>> PurePoly(x**2 + 1).free_symbols
set()
>>> PurePoly(x**2 + y).free_symbols
set()
>>> PurePoly(x**2 + y, x).free_symbols
set([y])
"""
return self.free_symbols_in_domain
@_sympifyit('g', NotImplemented)
def __eq__(f, g):
if not g.is_Poly:
try:
g = f.__class__(g, f.gens, domain=f.get_domain())
except (PolynomialError, DomainError, CoercionFailed):
return False
if len(f.gens) != len(g.gens):
return False
if f.rep.dom != g.rep.dom:
try:
dom = f.rep.dom.unify(g.rep.dom, f.gens)
except UnificationFailed:
return False
f = f.set_domain(dom)
g = g.set_domain(dom)
return f.rep == g.rep
def _strict_eq(f, g):
return isinstance(g, f.__class__) and f.rep.eq(g.rep, strict=True)
def _unify(f, g):
g = sympify(g)
if not g.is_Poly:
try:
return f.rep.dom, f.per, f.rep, f.rep.per(f.rep.dom.from_sympy(g))
except CoercionFailed:
raise UnificationFailed("can't unify %s with %s" % (f, g))
if len(f.gens) != len(g.gens):
raise UnificationFailed("can't unify %s with %s" % (f, g))
if not (isinstance(f.rep, DMP) and isinstance(g.rep, DMP)):
raise UnificationFailed("can't unify %s with %s" % (f, g))
cls = f.__class__
gens = f.gens
dom = f.rep.dom.unify(g.rep.dom, gens)
F = f.rep.convert(dom)
G = g.rep.convert(dom)
def per(rep, dom=dom, gens=gens, remove=None):
if remove is not None:
gens = gens[:remove] + gens[remove + 1:]
if not gens:
return dom.to_sympy(rep)
return cls.new(rep, *gens)
return dom, per, F, G
@public
def poly_from_expr(expr, *gens, **args):
"""Construct a polynomial from an expression. """
opt = options.build_options(gens, args)
return _poly_from_expr(expr, opt)
def _poly_from_expr(expr, opt):
"""Construct a polynomial from an expression. """
orig, expr = expr, sympify(expr)
if not isinstance(expr, Basic):
raise PolificationFailed(opt, orig, expr)
elif expr.is_Poly:
poly = expr.__class__._from_poly(expr, opt)
opt.gens = poly.gens
opt.domain = poly.domain
if opt.polys is None:
opt.polys = True
return poly, opt
elif opt.expand:
expr = expr.expand()
try:
rep, opt = _dict_from_expr(expr, opt)
except GeneratorsNeeded:
raise PolificationFailed(opt, orig, expr)
monoms, coeffs = list(zip(*list(rep.items())))
domain = opt.domain
if domain is None:
opt.domain, coeffs = construct_domain(coeffs, opt=opt)
else:
coeffs = list(map(domain.from_sympy, coeffs))
rep = dict(list(zip(monoms, coeffs)))
poly = Poly._from_dict(rep, opt)
if opt.polys is None:
opt.polys = False
return poly, opt
@public
def parallel_poly_from_expr(exprs, *gens, **args):
"""Construct polynomials from expressions. """
opt = options.build_options(gens, args)
return _parallel_poly_from_expr(exprs, opt)
def _parallel_poly_from_expr(exprs, opt):
"""Construct polynomials from expressions. """
if len(exprs) == 2:
f, g = exprs
if isinstance(f, Poly) and isinstance(g, Poly):
f = f.__class__._from_poly(f, opt)
g = g.__class__._from_poly(g, opt)
f, g = f.unify(g)
opt.gens = f.gens
opt.domain = f.domain
if opt.polys is None:
opt.polys = True
return [f, g], opt
origs, exprs = list(exprs), []
_exprs, _polys = [], []
failed = False
for i, expr in enumerate(origs):
expr = sympify(expr)
if isinstance(expr, Basic):
if expr.is_Poly:
_polys.append(i)
else:
_exprs.append(i)
if opt.expand:
expr = expr.expand()
else:
failed = True
exprs.append(expr)
if failed:
raise PolificationFailed(opt, origs, exprs, True)
if _polys:
# XXX: this is a temporary solution
for i in _polys:
exprs[i] = exprs[i].as_expr()
try:
reps, opt = _parallel_dict_from_expr(exprs, opt)
except GeneratorsNeeded:
raise PolificationFailed(opt, origs, exprs, True)
for k in opt.gens:
if isinstance(k, Piecewise):
raise PolynomialError("Piecewise generators do not make sense")
coeffs_list, lengths = [], []
all_monoms = []
all_coeffs = []
for rep in reps:
monoms, coeffs = list(zip(*list(rep.items())))
coeffs_list.extend(coeffs)
all_monoms.append(monoms)
lengths.append(len(coeffs))
domain = opt.domain
if domain is None:
opt.domain, coeffs_list = construct_domain(coeffs_list, opt=opt)
else:
coeffs_list = list(map(domain.from_sympy, coeffs_list))
for k in lengths:
all_coeffs.append(coeffs_list[:k])
coeffs_list = coeffs_list[k:]
polys = []
for monoms, coeffs in zip(all_monoms, all_coeffs):
rep = dict(list(zip(monoms, coeffs)))
poly = Poly._from_dict(rep, opt)
polys.append(poly)
if opt.polys is None:
opt.polys = bool(_polys)
return polys, opt
def _update_args(args, key, value):
"""Add a new ``(key, value)`` pair to arguments ``dict``. """
args = dict(args)
if key not in args:
args[key] = value
return args
@public
def degree(f, *gens, **args):
"""
Return the degree of ``f`` in the given variable.
The degree of 0 is negative infinity.
Examples
========
>>> from sympy import degree
>>> from sympy.abc import x, y
>>> degree(x**2 + y*x + 1, gen=x)
2
>>> degree(x**2 + y*x + 1, gen=y)
1
>>> degree(0, x)
-oo
"""
options.allowed_flags(args, ['gen', 'polys'])
try:
F, opt = poly_from_expr(f, *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('degree', 1, exc)
return sympify(F.degree(opt.gen))
@public
def degree_list(f, *gens, **args):
"""
Return a list of degrees of ``f`` in all variables.
Examples
========
>>> from sympy import degree_list
>>> from sympy.abc import x, y
>>> degree_list(x**2 + y*x + 1)
(2, 1)
"""
options.allowed_flags(args, ['polys'])
try:
F, opt = poly_from_expr(f, *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('degree_list', 1, exc)
degrees = F.degree_list()
return tuple(map(Integer, degrees))
@public
def LC(f, *gens, **args):
"""
Return the leading coefficient of ``f``.
Examples
========
>>> from sympy import LC
>>> from sympy.abc import x, y
>>> LC(4*x**2 + 2*x*y**2 + x*y + 3*y)
4
"""
options.allowed_flags(args, ['polys'])
try:
F, opt = poly_from_expr(f, *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('LC', 1, exc)
return F.LC(order=opt.order)
@public
def LM(f, *gens, **args):
"""
Return the leading monomial of ``f``.
Examples
========
>>> from sympy import LM
>>> from sympy.abc import x, y
>>> LM(4*x**2 + 2*x*y**2 + x*y + 3*y)
x**2
"""
options.allowed_flags(args, ['polys'])
try:
F, opt = poly_from_expr(f, *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('LM', 1, exc)
monom = F.LM(order=opt.order)
return monom.as_expr()
@public
def LT(f, *gens, **args):
"""
Return the leading term of ``f``.
Examples
========
>>> from sympy import LT
>>> from sympy.abc import x, y
>>> LT(4*x**2 + 2*x*y**2 + x*y + 3*y)
4*x**2
"""
options.allowed_flags(args, ['polys'])
try:
F, opt = poly_from_expr(f, *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('LT', 1, exc)
monom, coeff = F.LT(order=opt.order)
return coeff*monom.as_expr()
@public
def pdiv(f, g, *gens, **args):
"""
Compute polynomial pseudo-division of ``f`` and ``g``.
Examples
========
>>> from sympy import pdiv
>>> from sympy.abc import x
>>> pdiv(x**2 + 1, 2*x - 4)
(2*x + 4, 20)
"""
options.allowed_flags(args, ['polys'])
try:
(F, G), opt = parallel_poly_from_expr((f, g), *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('pdiv', 2, exc)
q, r = F.pdiv(G)
if not opt.polys:
return q.as_expr(), r.as_expr()
else:
return q, r
@public
def prem(f, g, *gens, **args):
"""
Compute polynomial pseudo-remainder of ``f`` and ``g``.
Examples
========
>>> from sympy import prem
>>> from sympy.abc import x
>>> prem(x**2 + 1, 2*x - 4)
20
"""
options.allowed_flags(args, ['polys'])
try:
(F, G), opt = parallel_poly_from_expr((f, g), *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('prem', 2, exc)
r = F.prem(G)
if not opt.polys:
return r.as_expr()
else:
return r
@public
def pquo(f, g, *gens, **args):
"""
Compute polynomial pseudo-quotient of ``f`` and ``g``.
Examples
========
>>> from sympy import pquo
>>> from sympy.abc import x
>>> pquo(x**2 + 1, 2*x - 4)
2*x + 4
>>> pquo(x**2 - 1, 2*x - 1)
2*x + 1
"""
options.allowed_flags(args, ['polys'])
try:
(F, G), opt = parallel_poly_from_expr((f, g), *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('pquo', 2, exc)
try:
q = F.pquo(G)
except ExactQuotientFailed:
raise ExactQuotientFailed(f, g)
if not opt.polys:
return q.as_expr()
else:
return q
@public
def pexquo(f, g, *gens, **args):
"""
Compute polynomial exact pseudo-quotient of ``f`` and ``g``.
Examples
========
>>> from sympy import pexquo
>>> from sympy.abc import x
>>> pexquo(x**2 - 1, 2*x - 2)
2*x + 2
>>> pexquo(x**2 + 1, 2*x - 4)
Traceback (most recent call last):
...
ExactQuotientFailed: 2*x - 4 does not divide x**2 + 1
"""
options.allowed_flags(args, ['polys'])
try:
(F, G), opt = parallel_poly_from_expr((f, g), *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('pexquo', 2, exc)
q = F.pexquo(G)
if not opt.polys:
return q.as_expr()
else:
return q
@public
def div(f, g, *gens, **args):
"""
Compute polynomial division of ``f`` and ``g``.
Examples
========
>>> from sympy import div, ZZ, QQ
>>> from sympy.abc import x
>>> div(x**2 + 1, 2*x - 4, domain=ZZ)
(0, x**2 + 1)
>>> div(x**2 + 1, 2*x - 4, domain=QQ)
(x/2 + 1, 5)
"""
options.allowed_flags(args, ['auto', 'polys'])
try:
(F, G), opt = parallel_poly_from_expr((f, g), *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('div', 2, exc)
q, r = F.div(G, auto=opt.auto)
if not opt.polys:
return q.as_expr(), r.as_expr()
else:
return q, r
@public
def rem(f, g, *gens, **args):
"""
Compute polynomial remainder of ``f`` and ``g``.
Examples
========
>>> from sympy import rem, ZZ, QQ
>>> from sympy.abc import x
>>> rem(x**2 + 1, 2*x - 4, domain=ZZ)
x**2 + 1
>>> rem(x**2 + 1, 2*x - 4, domain=QQ)
5
"""
options.allowed_flags(args, ['auto', 'polys'])
try:
(F, G), opt = parallel_poly_from_expr((f, g), *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('rem', 2, exc)
r = F.rem(G, auto=opt.auto)
if not opt.polys:
return r.as_expr()
else:
return r
@public
def quo(f, g, *gens, **args):
"""
Compute polynomial quotient of ``f`` and ``g``.
Examples
========
>>> from sympy import quo
>>> from sympy.abc import x
>>> quo(x**2 + 1, 2*x - 4)
x/2 + 1
>>> quo(x**2 - 1, x - 1)
x + 1
"""
options.allowed_flags(args, ['auto', 'polys'])
try:
(F, G), opt = parallel_poly_from_expr((f, g), *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('quo', 2, exc)
q = F.quo(G, auto=opt.auto)
if not opt.polys:
return q.as_expr()
else:
return q
@public
def exquo(f, g, *gens, **args):
"""
Compute polynomial exact quotient of ``f`` and ``g``.
Examples
========
>>> from sympy import exquo
>>> from sympy.abc import x
>>> exquo(x**2 - 1, x - 1)
x + 1
>>> exquo(x**2 + 1, 2*x - 4)
Traceback (most recent call last):
...
ExactQuotientFailed: 2*x - 4 does not divide x**2 + 1
"""
options.allowed_flags(args, ['auto', 'polys'])
try:
(F, G), opt = parallel_poly_from_expr((f, g), *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('exquo', 2, exc)
q = F.exquo(G, auto=opt.auto)
if not opt.polys:
return q.as_expr()
else:
return q
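A hypothetical sketch contrasting the division wrappers above: the ``domain`` flag decides whether coefficient division is allowed, and ``exquo`` succeeds only when the quotient is exact.

```python
from sympy import div, rem, quo, exquo, ZZ, QQ
from sympy.abc import x

q, r = div(x**2 + 1, 2*x - 4, domain=QQ)   # -> (x/2 + 1, 5)
r_zz = rem(x**2 + 1, 2*x - 4, domain=ZZ)   # no division over ZZ -> x**2 + 1
qq = quo(x**2 - 1, x - 1)                  # -> x + 1
eq = exquo(x**2 - 1, x - 1)                # exact quotient -> x + 1
```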
@public
def half_gcdex(f, g, *gens, **args):
"""
Half extended Euclidean algorithm of ``f`` and ``g``.
Returns ``(s, h)`` such that ``h = gcd(f, g)`` and ``s*f = h (mod g)``.
Examples
========
>>> from sympy import half_gcdex
>>> from sympy.abc import x
>>> half_gcdex(x**4 - 2*x**3 - 6*x**2 + 12*x + 15, x**3 + x**2 - 4*x - 4)
(-x/5 + 3/5, x + 1)
"""
options.allowed_flags(args, ['auto', 'polys'])
try:
(F, G), opt = parallel_poly_from_expr((f, g), *gens, **args)
except PolificationFailed as exc:
domain, (a, b) = construct_domain(exc.exprs)
try:
s, h = domain.half_gcdex(a, b)
except NotImplementedError:
raise ComputationFailed('half_gcdex', 2, exc)
else:
return domain.to_sympy(s), domain.to_sympy(h)
s, h = F.half_gcdex(G, auto=opt.auto)
if not opt.polys:
return s.as_expr(), h.as_expr()
else:
return s, h
@public
def gcdex(f, g, *gens, **args):
"""
Extended Euclidean algorithm of ``f`` and ``g``.
Returns ``(s, t, h)`` such that ``h = gcd(f, g)`` and ``s*f + t*g = h``.
Examples
========
>>> from sympy import gcdex
>>> from sympy.abc import x
>>> gcdex(x**4 - 2*x**3 - 6*x**2 + 12*x + 15, x**3 + x**2 - 4*x - 4)
(-x/5 + 3/5, x**2/5 - 6*x/5 + 2, x + 1)
"""
options.allowed_flags(args, ['auto', 'polys'])
try:
(F, G), opt = parallel_poly_from_expr((f, g), *gens, **args)
except PolificationFailed as exc:
domain, (a, b) = construct_domain(exc.exprs)
try:
s, t, h = domain.gcdex(a, b)
except NotImplementedError:
raise ComputationFailed('gcdex', 2, exc)
else:
return domain.to_sympy(s), domain.to_sympy(t), domain.to_sympy(h)
s, t, h = F.gcdex(G, auto=opt.auto)
if not opt.polys:
return s.as_expr(), t.as_expr(), h.as_expr()
else:
return s, t, h
@public
def invert(f, g, *gens, **args):
"""
Invert ``f`` modulo ``g`` when possible.
Examples
========
>>> from sympy import invert
>>> from sympy.abc import x
>>> invert(x**2 - 1, 2*x - 1)
-4/3
>>> invert(x**2 - 1, x - 1)
Traceback (most recent call last):
...
NotInvertible: zero divisor
"""
options.allowed_flags(args, ['auto', 'polys'])
try:
(F, G), opt = parallel_poly_from_expr((f, g), *gens, **args)
except PolificationFailed as exc:
domain, (a, b) = construct_domain(exc.exprs)
try:
return domain.to_sympy(domain.invert(a, b))
except NotImplementedError:
raise ComputationFailed('invert', 2, exc)
h = F.invert(G, auto=opt.auto)
if not opt.polys:
return h.as_expr()
else:
return h
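The extended-GCD family above in one sketch: ``gcdex`` returns Bezout coefficients alongside the gcd, and ``invert`` computes a modular inverse when one exists (values taken from the docstrings).

```python
from sympy import gcdex, invert, Rational
from sympy.abc import x

s, t, h = gcdex(x**4 - 2*x**3 - 6*x**2 + 12*x + 15,
                x**3 + x**2 - 4*x - 4)     # s*f + t*g == h == gcd(f, g)
inv = invert(x**2 - 1, 2*x - 1)            # -> -4/3
```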
@public
def subresultants(f, g, *gens, **args):
"""
Compute subresultant PRS of ``f`` and ``g``.
Examples
========
>>> from sympy import subresultants
>>> from sympy.abc import x
>>> subresultants(x**2 + 1, x**2 - 1)
[x**2 + 1, x**2 - 1, -2]
"""
options.allowed_flags(args, ['polys'])
try:
(F, G), opt = parallel_poly_from_expr((f, g), *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('subresultants', 2, exc)
result = F.subresultants(G)
if not opt.polys:
return [r.as_expr() for r in result]
else:
return result
@public
def resultant(f, g, *gens, **args):
"""
Compute resultant of ``f`` and ``g``.
Examples
========
>>> from sympy import resultant
>>> from sympy.abc import x
>>> resultant(x**2 + 1, x**2 - 1)
4
"""
includePRS = args.pop('includePRS', False)
options.allowed_flags(args, ['polys'])
try:
(F, G), opt = parallel_poly_from_expr((f, g), *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('resultant', 2, exc)
if includePRS:
result, R = F.resultant(G, includePRS=includePRS)
else:
result = F.resultant(G)
if not opt.polys:
if includePRS:
return result.as_expr(), [r.as_expr() for r in R]
return result.as_expr()
else:
if includePRS:
return result, R
return result
@public
def discriminant(f, *gens, **args):
"""
Compute discriminant of ``f``.
Examples
========
>>> from sympy import discriminant
>>> from sympy.abc import x
>>> discriminant(x**2 + 2*x + 3)
-8
"""
options.allowed_flags(args, ['polys'])
try:
F, opt = poly_from_expr(f, *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('discriminant', 1, exc)
result = F.discriminant()
if not opt.polys:
return result.as_expr()
else:
return result
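A quick check of the two wrappers above, with the docstring values: a nonzero resultant means the inputs share no root, and a negative discriminant here signals a complex-conjugate root pair.

```python
from sympy import resultant, discriminant
from sympy.abc import x

res = resultant(x**2 + 1, x**2 - 1)    # -> 4
disc = discriminant(x**2 + 2*x + 3)    # -> -8
```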
@public
def cofactors(f, g, *gens, **args):
"""
Compute GCD and cofactors of ``f`` and ``g``.
Returns polynomials ``(h, cff, cfg)`` such that ``h = gcd(f, g)``, and
``cff = quo(f, h)`` and ``cfg = quo(g, h)`` are the so-called cofactors
of ``f`` and ``g``.
Examples
========
>>> from sympy import cofactors
>>> from sympy.abc import x
>>> cofactors(x**2 - 1, x**2 - 3*x + 2)
(x - 1, x + 1, x - 2)
"""
options.allowed_flags(args, ['polys'])
try:
(F, G), opt = parallel_poly_from_expr((f, g), *gens, **args)
except PolificationFailed as exc:
domain, (a, b) = construct_domain(exc.exprs)
try:
h, cff, cfg = domain.cofactors(a, b)
except NotImplementedError:
raise ComputationFailed('cofactors', 2, exc)
else:
return domain.to_sympy(h), domain.to_sympy(cff), domain.to_sympy(cfg)
h, cff, cfg = F.cofactors(G)
if not opt.polys:
return h.as_expr(), cff.as_expr(), cfg.as_expr()
else:
return h, cff, cfg
@public
def gcd_list(seq, *gens, **args):
"""
Compute GCD of a list of polynomials.
Examples
========
>>> from sympy import gcd_list
>>> from sympy.abc import x
>>> gcd_list([x**3 - 1, x**2 - 1, x**2 - 3*x + 2])
x - 1
"""
seq = sympify(seq)
if not gens and not args:
domain, numbers = construct_domain(seq)
if not numbers:
return domain.zero
elif domain.is_Numerical:
result, numbers = numbers[0], numbers[1:]
for number in numbers:
result = domain.gcd(result, number)
if domain.is_one(result):
break
return domain.to_sympy(result)
options.allowed_flags(args, ['polys'])
try:
polys, opt = parallel_poly_from_expr(seq, *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('gcd_list', len(seq), exc)
if not polys:
if not opt.polys:
return S.Zero
else:
return Poly(0, opt=opt)
result, polys = polys[0], polys[1:]
for poly in polys:
result = result.gcd(poly)
if result.is_one:
break
if not opt.polys:
return result.as_expr()
else:
return result
@public
def gcd(f, g=None, *gens, **args):
"""
Compute GCD of ``f`` and ``g``.
Examples
========
>>> from sympy import gcd
>>> from sympy.abc import x
>>> gcd(x**2 - 1, x**2 - 3*x + 2)
x - 1
"""
if hasattr(f, '__iter__'):
if g is not None:
gens = (g,) + gens
return gcd_list(f, *gens, **args)
elif g is None:
raise TypeError("gcd() takes 2 arguments or a sequence of arguments")
options.allowed_flags(args, ['polys'])
try:
(F, G), opt = parallel_poly_from_expr((f, g), *gens, **args)
except PolificationFailed as exc:
domain, (a, b) = construct_domain(exc.exprs)
try:
return domain.to_sympy(domain.gcd(a, b))
except NotImplementedError:
raise ComputationFailed('gcd', 2, exc)
result = F.gcd(G)
if not opt.polys:
return result.as_expr()
else:
return result
@public
def lcm_list(seq, *gens, **args):
"""
Compute LCM of a list of polynomials.
Examples
========
>>> from sympy import lcm_list
>>> from sympy.abc import x
>>> lcm_list([x**3 - 1, x**2 - 1, x**2 - 3*x + 2])
x**5 - x**4 - 2*x**3 - x**2 + x + 2
"""
seq = sympify(seq)
if not gens and not args:
domain, numbers = construct_domain(seq)
if not numbers:
return domain.one
elif domain.is_Numerical:
result, numbers = numbers[0], numbers[1:]
for number in numbers:
result = domain.lcm(result, number)
return domain.to_sympy(result)
options.allowed_flags(args, ['polys'])
try:
polys, opt = parallel_poly_from_expr(seq, *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('lcm_list', len(seq), exc)
if not polys:
if not opt.polys:
return S.One
else:
return Poly(1, opt=opt)
result, polys = polys[0], polys[1:]
for poly in polys:
result = result.lcm(poly)
if not opt.polys:
return result.as_expr()
else:
return result
@public
def lcm(f, g=None, *gens, **args):
"""
Compute LCM of ``f`` and ``g``.
Examples
========
>>> from sympy import lcm
>>> from sympy.abc import x
>>> lcm(x**2 - 1, x**2 - 3*x + 2)
x**3 - 2*x**2 - x + 2
"""
if hasattr(f, '__iter__'):
if g is not None:
gens = (g,) + gens
return lcm_list(f, *gens, **args)
elif g is None:
raise TypeError("lcm() takes 2 arguments or a sequence of arguments")
options.allowed_flags(args, ['polys'])
try:
(F, G), opt = parallel_poly_from_expr((f, g), *gens, **args)
except PolificationFailed as exc:
domain, (a, b) = construct_domain(exc.exprs)
try:
return domain.to_sympy(domain.lcm(a, b))
except NotImplementedError:
raise ComputationFailed('lcm', 2, exc)
result = F.lcm(G)
if not opt.polys:
return result.as_expr()
else:
return result
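An illustrative check of the classical relation between ``gcd`` and ``lcm``: for the primitive univariate polynomials used in the docstrings above, their product equals ``gcd(f, g)*lcm(f, g)`` exactly.

```python
from sympy import gcd, lcm, expand
from sympy.abc import x

f = x**2 - 1
g = x**2 - 3*x + 2
# gcd * lcm recovers the product for these primitive polynomials
assert expand(gcd(f, g)*lcm(f, g)) == expand(f*g)
```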
@public
def terms_gcd(f, *gens, **args):
"""
Remove GCD of terms from ``f``.
If the ``deep`` flag is True, then the arguments of ``f`` will have
terms_gcd applied to them.
If a fraction is factored out of ``f`` and ``f`` is an Add, then
an unevaluated Mul will be returned so that automatic simplification
does not redistribute it. The hint ``clear``, when set to False, can be
used to prevent such factoring when all coefficients are not fractions.
Examples
========
>>> from sympy import terms_gcd, cos
>>> from sympy.abc import x, y
>>> terms_gcd(x**6*y**2 + x**3*y, x, y)
x**3*y*(x**3*y + 1)
The default action of polys routines is to expand the expression
given to them. terms_gcd follows this behavior:
>>> terms_gcd((3+3*x)*(x+x*y))
3*x*(x*y + x + y + 1)
If this is not desired then the hint ``expand`` can be set to False.
In this case the expression will be treated as though it were comprised
of one or more terms:
>>> terms_gcd((3+3*x)*(x+x*y), expand=False)
(3*x + 3)*(x*y + x)
In order to traverse factors of a Mul or the arguments of other
functions, the ``deep`` hint can be used:
>>> terms_gcd((3 + 3*x)*(x + x*y), expand=False, deep=True)
3*x*(x + 1)*(y + 1)
>>> terms_gcd(cos(x + x*y), deep=True)
cos(x*(y + 1))
Rationals are factored out by default:
>>> terms_gcd(x + y/2)
(2*x + y)/2
Only the y-term had a coefficient that was a fraction; if one
does not want to factor out the 1/2 in cases like this, the
flag ``clear`` can be set to False:
>>> terms_gcd(x + y/2, clear=False)
x + y/2
>>> terms_gcd(x*y/2 + y**2, clear=False)
y*(x/2 + y)
The ``clear`` flag is ignored if all coefficients are fractions:
>>> terms_gcd(x/3 + y/2, clear=False)
(2*x + 3*y)/6
See Also
========
sympy.core.exprtools.gcd_terms, sympy.core.exprtools.factor_terms
"""
orig = sympify(f)
if not isinstance(f, Expr) or f.is_Atom:
return orig
if args.get('deep', False):
new = f.func(*[terms_gcd(a, *gens, **args) for a in f.args])
args.pop('deep')
args['expand'] = False
return terms_gcd(new, *gens, **args)
clear = args.pop('clear', True)
options.allowed_flags(args, ['polys'])
try:
F, opt = poly_from_expr(f, *gens, **args)
except PolificationFailed as exc:
return exc.expr
J, f = F.terms_gcd()
if opt.domain.has_Ring:
if opt.domain.has_Field:
denom, f = f.clear_denoms(convert=True)
coeff, f = f.primitive()
if opt.domain.has_Field:
coeff /= denom
else:
coeff = S.One
term = Mul(*[x**j for x, j in zip(f.gens, J)])
if coeff == 1:
coeff = S.One
if term == 1:
return orig
if clear:
return _keep_coeff(coeff, term*f.as_expr())
# base the clearing on the form of the original expression, not
# the (perhaps) Mul that we have now
coeff, f = _keep_coeff(coeff, f.as_expr(), clear=False).as_coeff_Mul()
return _keep_coeff(coeff, term*f, clear=False)
@public
def trunc(f, p, *gens, **args):
"""
Reduce ``f`` modulo a constant ``p``.
Examples
========
>>> from sympy import trunc
>>> from sympy.abc import x
>>> trunc(2*x**3 + 3*x**2 + 5*x + 7, 3)
-x**3 - x + 1
"""
options.allowed_flags(args, ['auto', 'polys'])
try:
F, opt = poly_from_expr(f, *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('trunc', 1, exc)
result = F.trunc(sympify(p))
if not opt.polys:
return result.as_expr()
else:
return result
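As the doctest above suggests, ``trunc`` reduces each coefficient into the symmetric residue range around zero modulo ``p``; a short sketch:

```python
from sympy import trunc
from sympy.abc import x

f = 2*x**3 + 3*x**2 + 5*x + 7
# modulo 3, coefficients map into {-1, 0, 1}: 2 -> -1, 3 -> 0, 5 -> -1, 7 -> 1
assert trunc(f, 3) == -x**3 - x + 1
```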
@public
def monic(f, *gens, **args):
"""
Divide all coefficients of ``f`` by ``LC(f)``.
Examples
========
>>> from sympy import monic
>>> from sympy.abc import x
>>> monic(3*x**2 + 4*x + 2)
x**2 + 4*x/3 + 2/3
"""
options.allowed_flags(args, ['auto', 'polys'])
try:
F, opt = poly_from_expr(f, *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('monic', 1, exc)
result = F.monic(auto=opt.auto)
if not opt.polys:
return result.as_expr()
else:
return result
@public
def content(f, *gens, **args):
"""
Compute GCD of coefficients of ``f``.
Examples
========
>>> from sympy import content
>>> from sympy.abc import x
>>> content(6*x**2 + 8*x + 12)
2
"""
options.allowed_flags(args, ['polys'])
try:
F, opt = poly_from_expr(f, *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('content', 1, exc)
return F.content()
@public
def primitive(f, *gens, **args):
"""
Compute content and the primitive form of ``f``.
Examples
========
>>> from sympy.polys.polytools import primitive
>>> from sympy.abc import x
>>> primitive(6*x**2 + 8*x + 12)
(2, 3*x**2 + 4*x + 6)
>>> eq = (2 + 2*x)*x + 2
Expansion is performed by default:
>>> primitive(eq)
(2, x**2 + x + 1)
Set ``expand`` to False to shut this off. Note that the
extraction will not be recursive; use the as_content_primitive method
for recursive, non-destructive Rational extraction.
>>> primitive(eq, expand=False)
(1, x*(2*x + 2) + 2)
>>> eq.as_content_primitive()
(2, x*(x + 1) + 1)
"""
options.allowed_flags(args, ['polys'])
try:
F, opt = poly_from_expr(f, *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('primitive', 1, exc)
cont, result = F.primitive()
if not opt.polys:
return cont, result.as_expr()
else:
return cont, result
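A consistency sketch (not part of this module): ``content`` and ``primitive`` agree, and the content times the primitive part reconstructs the input.

```python
from sympy import content, primitive, expand
from sympy.abc import x

f = 6*x**2 + 8*x + 12
c, p = primitive(f)
# the first component of primitive() is exactly the content
assert c == content(f)
assert expand(c*p) == f
```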
@public
def compose(f, g, *gens, **args):
"""
Compute functional composition ``f(g)``.
Examples
========
>>> from sympy import compose
>>> from sympy.abc import x
>>> compose(x**2 + x, x - 1)
x**2 - x
"""
options.allowed_flags(args, ['polys'])
try:
(F, G), opt = parallel_poly_from_expr((f, g), *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('compose', 2, exc)
result = F.compose(G)
if not opt.polys:
return result.as_expr()
else:
return result
@public
def decompose(f, *gens, **args):
"""
Compute functional decomposition of ``f``.
Examples
========
>>> from sympy import decompose
>>> from sympy.abc import x
>>> decompose(x**4 + 2*x**3 - x - 1)
[x**2 - x - 1, x**2 + x]
"""
options.allowed_flags(args, ['polys'])
try:
F, opt = poly_from_expr(f, *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('decompose', 1, exc)
result = F.decompose()
if not opt.polys:
return [r.as_expr() for r in result]
else:
return result
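``decompose`` returns the functional components outermost first, so folding them back through ``compose`` recovers the original polynomial; a small sketch:

```python
from sympy import compose, decompose, expand
from sympy.abc import x

f = x**4 + 2*x**3 - x - 1
parts = decompose(f)
# fold the components outermost-first back into one polynomial
g = parts[0]
for h in parts[1:]:
    g = compose(g, h)
assert expand(g) == f
```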
@public
def sturm(f, *gens, **args):
"""
Compute Sturm sequence of ``f``.
Examples
========
>>> from sympy import sturm
>>> from sympy.abc import x
>>> sturm(x**3 - 2*x**2 + x - 3)
[x**3 - 2*x**2 + x - 3, 3*x**2 - 4*x + 1, 2*x/9 + 25/9, -2079/4]
"""
options.allowed_flags(args, ['auto', 'polys'])
try:
F, opt = poly_from_expr(f, *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('sturm', 1, exc)
result = F.sturm(auto=opt.auto)
if not opt.polys:
return [r.as_expr() for r in result]
else:
return result
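By Sturm's theorem, the number of distinct real roots in ``(a, b]`` equals the drop in sign changes of the sequence evaluated at ``a`` and at ``b``. A sketch using the docstring's polynomial (the ``sign_changes`` helper is illustrative, not part of this module):

```python
from sympy import sturm
from sympy.abc import x

def sign_changes(vals):
    # count sign alternations, skipping exact zeros
    signs = [v for v in vals if v != 0]
    return sum(1 for u, v in zip(signs, signs[1:]) if u*v < 0)

f = x**3 - 2*x**2 + x - 3
seq = sturm(f)
a, b = -10, 10
va = sign_changes([s.subs(x, a) for s in seq])
vb = sign_changes([s.subs(x, b) for s in seq])
# this cubic has exactly one real root in (-10, 10]
assert va - vb == 1
```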
@public
def gff_list(f, *gens, **args):
"""
Compute a list of greatest factorial factors of ``f``.
Examples
========
>>> from sympy import gff_list, ff
>>> from sympy.abc import x
>>> f = x**5 + 2*x**4 - x**3 - 2*x**2
>>> gff_list(f)
[(x, 1), (x + 2, 4)]
>>> (ff(x, 1)*ff(x + 2, 4)).expand() == f
True
"""
options.allowed_flags(args, ['polys'])
try:
F, opt = poly_from_expr(f, *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('gff_list', 1, exc)
factors = F.gff_list()
if not opt.polys:
return [(g.as_expr(), k) for g, k in factors]
else:
return factors
@public
def gff(f, *gens, **args):
"""Compute greatest factorial factorization of ``f``. """
raise NotImplementedError('symbolic falling factorial')
@public
def sqf_norm(f, *gens, **args):
"""
Compute square-free norm of ``f``.
Returns ``s``, ``g``, ``r``, such that ``g(x) = f(x - s*a)`` and
``r(x) = Norm(g(x))`` is a square-free polynomial over ``K``,
where ``a`` is the generator of the algebraic extension of the ground domain.
Examples
========
>>> from sympy import sqf_norm, sqrt
>>> from sympy.abc import x
>>> sqf_norm(x**2 + 1, extension=[sqrt(3)])
(1, x**2 - 2*sqrt(3)*x + 4, x**4 - 4*x**2 + 16)
"""
options.allowed_flags(args, ['polys'])
try:
F, opt = poly_from_expr(f, *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('sqf_norm', 1, exc)
s, g, r = F.sqf_norm()
if not opt.polys:
return Integer(s), g.as_expr(), r.as_expr()
else:
return Integer(s), g, r
@public
def sqf_part(f, *gens, **args):
"""
Compute square-free part of ``f``.
Examples
========
>>> from sympy import sqf_part
>>> from sympy.abc import x
>>> sqf_part(x**3 - 3*x - 2)
x**2 - x - 2
"""
options.allowed_flags(args, ['polys'])
try:
F, opt = poly_from_expr(f, *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('sqf_part', 1, exc)
result = F.sqf_part()
if not opt.polys:
return result.as_expr()
else:
return result
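A sketch of what "square-free part" means: multiplicities are dropped, keeping each distinct irreducible factor exactly once.

```python
from sympy import sqf_part, factor
from sympy.abc import x

f = x**3 - 3*x - 2
# f factors as (x + 1)**2 * (x - 2); the square-free part keeps each factor once
assert factor(f) == (x + 1)**2*(x - 2)
assert factor(sqf_part(f)) == (x + 1)*(x - 2)
```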
def _sorted_factors(factors, method):
"""Sort a list of ``(expr, exp)`` pairs. """
if method == 'sqf':
def key(obj):
poly, exp = obj
rep = poly.rep.rep
return (exp, len(rep), rep)
else:
def key(obj):
poly, exp = obj
rep = poly.rep.rep
return (len(rep), exp, rep)
return sorted(factors, key=key)
def _factors_product(factors):
"""Multiply a list of ``(expr, exp)`` pairs. """
return Mul(*[f.as_expr()**k for f, k in factors])
def _symbolic_factor_list(expr, opt, method):
"""Helper function for :func:`_symbolic_factor`. """
coeff, factors = S.One, []
for arg in Mul.make_args(expr):
if arg.is_Number:
coeff *= arg
continue
elif arg.is_Pow:
base, exp = arg.args
if base.is_Number:
factors.append((base, exp))
continue
else:
base, exp = arg, S.One
try:
poly, _ = _poly_from_expr(base, opt)
except PolificationFailed as exc:
factors.append((exc.expr, exp))
else:
func = getattr(poly, method + '_list')
_coeff, _factors = func()
if _coeff is not S.One:
if exp.is_Integer:
coeff *= _coeff**exp
elif _coeff.is_positive:
factors.append((_coeff, exp))
else:
_factors.append((_coeff, S.One))
if exp is S.One:
factors.extend(_factors)
elif exp.is_integer or len(_factors) == 1:
factors.extend([(f, k*exp) for f, k in _factors])
else:
other = []
for f, k in _factors:
if f.as_expr().is_positive:
factors.append((f, k*exp))
else:
other.append((f, k))
if len(other) == 1:
f, k = other[0]
factors.append((f, k*exp))
else:
factors.append((_factors_product(other), exp))
return coeff, factors
def _symbolic_factor(expr, opt, method):
"""Helper function for :func:`_factor`. """
if isinstance(expr, Expr) and not expr.is_Relational:
if hasattr(expr,'_eval_factor'):
return expr._eval_factor()
coeff, factors = _symbolic_factor_list(together(expr), opt, method)
return _keep_coeff(coeff, _factors_product(factors))
elif hasattr(expr, 'args'):
return expr.func(*[_symbolic_factor(arg, opt, method) for arg in expr.args])
elif hasattr(expr, '__iter__'):
return expr.__class__([_symbolic_factor(arg, opt, method) for arg in expr])
else:
return expr
def _generic_factor_list(expr, gens, args, method):
"""Helper function for :func:`sqf_list` and :func:`factor_list`. """
options.allowed_flags(args, ['frac', 'polys'])
opt = options.build_options(gens, args)
expr = sympify(expr)
if isinstance(expr, Expr) and not expr.is_Relational:
numer, denom = together(expr).as_numer_denom()
cp, fp = _symbolic_factor_list(numer, opt, method)
cq, fq = _symbolic_factor_list(denom, opt, method)
if fq and not opt.frac:
raise PolynomialError("a polynomial expected, got %s" % expr)
_opt = opt.clone(dict(expand=True))
for factors in (fp, fq):
for i, (f, k) in enumerate(factors):
if not f.is_Poly:
f, _ = _poly_from_expr(f, _opt)
factors[i] = (f, k)
fp = _sorted_factors(fp, method)
fq = _sorted_factors(fq, method)
if not opt.polys:
fp = [(f.as_expr(), k) for f, k in fp]
fq = [(f.as_expr(), k) for f, k in fq]
coeff = cp/cq
if not opt.frac:
return coeff, fp
else:
return coeff, fp, fq
else:
raise PolynomialError("a polynomial expected, got %s" % expr)
def _generic_factor(expr, gens, args, method):
"""Helper function for :func:`sqf` and :func:`factor`. """
options.allowed_flags(args, [])
opt = options.build_options(gens, args)
return _symbolic_factor(sympify(expr), opt, method)
def to_rational_coeffs(f):
"""
Try to transform a polynomial to have rational coefficients.
First a rescaling ``x = alpha*y`` is attempted, giving
``f(x) = lc*alpha**n * g(y)`` where ``g`` is a polynomial with
rational coefficients and ``lc`` is the leading coefficient.
If this fails, a translation ``x = y + beta`` with ``f(x) = g(y)``
is tried.
Returns ``None`` if no such ``g`` is found;
``(lc, alpha, None, g)`` in case of rescaling;
``(None, None, beta, g)`` in case of translation.
Notes
=====
Currently the transformation is attempted only when the coefficients
of the polynomial contain square roots but no radicals of higher index.
Examples
========
>>> from sympy import sqrt, Poly, simplify
>>> from sympy.polys.polytools import to_rational_coeffs
>>> from sympy.abc import x
>>> p = Poly(((x**2-1)*(x-2)).subs({x:x*(1 + sqrt(2))}), x, domain='EX')
>>> lc, r, _, g = to_rational_coeffs(p)
>>> lc, r
(7 + 5*sqrt(2), -2*sqrt(2) + 2)
>>> g
Poly(x**3 + x**2 - 1/4*x - 1/4, x, domain='QQ')
>>> r1 = simplify(1/r)
>>> Poly(lc*r**3*(g.as_expr()).subs({x:x*r1}), x, domain='EX') == p
True
"""
from sympy.simplify.simplify import simplify
def _try_rescale(f):
"""
try rescaling ``x -> alpha*x`` to convert f to a polynomial
with rational coefficients.
Returns ``lc, alpha, f`` if the rescaling is successful, where
``lc`` is the leading coefficient, ``alpha`` is the rescaling
factor and ``f`` is the rescaled polynomial; otherwise ``None``.
"""
from sympy.core.add import Add
if not len(f.gens) == 1 or not (f.gens[0]).is_Atom:
return None, f
n = f.degree()
lc = f.LC()
coeffs = f.monic().all_coeffs()[1:]
coeffs = [simplify(coeffx) for coeffx in coeffs]
if coeffs[-2] and not all(coeffx.is_rational for coeffx in coeffs):
rescale1_x = simplify(coeffs[-2]/coeffs[-1])
coeffs1 = []
for i in range(len(coeffs)):
coeffx = simplify(coeffs[i]*rescale1_x**(i + 1))
if not coeffx.is_rational:
break
coeffs1.append(coeffx)
else:
rescale_x = simplify(1/rescale1_x)
x = f.gens[0]
v = [x**n]
for i in range(1, n + 1):
v.append(coeffs1[i - 1]*x**(n - i))
f = Add(*v)
f = Poly(f)
return lc, rescale_x, f
return None
def _try_translate(f):
"""
try translating ``x -> x + alpha`` to convert f to a polynomial
with rational coefficients.
Returns ``alpha, f``; if the translation is successful,
``alpha`` is the translation amount and ``f`` is the shifted
polynomial; otherwise ``alpha`` is ``None``.
"""
from sympy.core.add import Add
if not len(f.gens) == 1 or not (f.gens[0]).is_Atom:
return None, f
n = f.degree()
f1 = f.monic()
coeffs = f1.all_coeffs()[1:]
c = simplify(coeffs[0])
if c and not c.is_rational:
func = Add
if c.is_Add:
args = c.args
func = c.func
else:
args = [c]
sifted = sift(args, lambda z: z.is_rational)
c1, c2 = sifted[True], sifted[False]
alpha = -func(*c2)/n
f2 = f1.shift(alpha)
return alpha, f2
return None
def _has_square_roots(p):
"""
Return True if the coefficients of ``p`` contain square roots but no radicals of higher index
"""
from sympy.core.exprtools import Factors
coeffs = p.coeffs()
has_sq = False
for y in coeffs:
for x in Add.make_args(y):
f = Factors(x).factors
r = [wx.q for wx in f.values() if wx.is_Rational and wx.q >= 2]
if not r:
continue
if min(r) == 2:
has_sq = True
if max(r) > 2:
return False
return has_sq
if f.get_domain().is_EX and _has_square_roots(f):
r = _try_rescale(f)
if r:
return r[0], r[1], None, r[2]
else:
r = _try_translate(f)
if r:
return None, None, r[0], r[1]
return None
def _torational_factor_list(p, x):
"""
helper function to factor polynomial using to_rational_coeffs
Examples
========
>>> from sympy.polys.polytools import _torational_factor_list
>>> from sympy.abc import x
>>> from sympy import sqrt, expand, Mul
>>> p = expand(((x**2-1)*(x-2)).subs({x:x*(1 + sqrt(2))}))
>>> factors = _torational_factor_list(p, x); factors
(-2, [(-x*(1 + sqrt(2))/2 + 1, 1), (-x*(1 + sqrt(2)) - 1, 1), (-x*(1 + sqrt(2)) + 1, 1)])
>>> expand(factors[0]*Mul(*[z[0] for z in factors[1]])) == p
True
>>> p = expand(((x**2-1)*(x-2)).subs({x:x + sqrt(2)}))
>>> factors = _torational_factor_list(p, x); factors
(1, [(x - 2 + sqrt(2), 1), (x - 1 + sqrt(2), 1), (x + 1 + sqrt(2), 1)])
>>> expand(factors[0]*Mul(*[z[0] for z in factors[1]])) == p
True
"""
from sympy.simplify.simplify import simplify
p1 = Poly(p, x, domain='EX')
n = p1.degree()
res = to_rational_coeffs(p1)
if not res:
return None
lc, r, t, g = res
factors = factor_list(g.as_expr())
if lc:
c = simplify(factors[0]*lc*r**n)
r1 = simplify(1/r)
a = []
for z in factors[1:][0]:
a.append((simplify(z[0].subs({x: x*r1})), z[1]))
else:
c = factors[0]
a = []
for z in factors[1:][0]:
a.append((z[0].subs({x: x - t}), z[1]))
return (c, a)
@public
def sqf_list(f, *gens, **args):
"""
Compute a list of square-free factors of ``f``.
Examples
========
>>> from sympy import sqf_list
>>> from sympy.abc import x
>>> sqf_list(2*x**5 + 16*x**4 + 50*x**3 + 76*x**2 + 56*x + 16)
(2, [(x + 1, 2), (x + 2, 3)])
"""
return _generic_factor_list(f, gens, args, method='sqf')
@public
def sqf(f, *gens, **args):
"""
Compute square-free factorization of ``f``.
Examples
========
>>> from sympy import sqf
>>> from sympy.abc import x
>>> sqf(2*x**5 + 16*x**4 + 50*x**3 + 76*x**2 + 56*x + 16)
2*(x + 1)**2*(x + 2)**3
"""
return _generic_factor(f, gens, args, method='sqf')
@public
def factor_list(f, *gens, **args):
"""
Compute a list of irreducible factors of ``f``.
Examples
========
>>> from sympy import factor_list
>>> from sympy.abc import x, y
>>> factor_list(2*x**5 + 2*x**4*y + 4*x**3 + 4*x**2*y + 2*x + 2*y)
(2, [(x + y, 1), (x**2 + 1, 2)])
"""
return _generic_factor_list(f, gens, args, method='factor')
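A round-trip sketch (not part of this module): reassembling the ``(coeff, factors)`` pair returned by ``factor_list`` reproduces the input.

```python
from sympy import factor_list, Mul, expand
from sympy.abc import x, y

f = 2*x**5 + 2*x**4*y + 4*x**3 + 4*x**2*y + 2*x + 2*y
coeff, factors = factor_list(f)
# coeff * product(base**exp) expands back to the original polynomial
assert expand(coeff*Mul(*[base**exp for base, exp in factors])) == f
```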
@public
def factor(f, *gens, **args):
"""
Compute the factorization of the expression ``f`` into irreducibles. (To
factor an integer into primes, use ``factorint``.)
There are two modes implemented: symbolic and formal. If ``f`` is not an
instance of :class:`Poly` and generators are not specified, then the
former mode is used. Otherwise, the formal mode is used.
In symbolic mode, :func:`factor` will traverse the expression tree and
factor its components without any prior expansion, unless an instance
of :class:`Add` is encountered (in this case formal factorization is
used). This way :func:`factor` can handle large or symbolic exponents.
By default, the factorization is computed over the rationals. To factor
over another domain, e.g. an algebraic or finite field, use appropriate
options: ``extension``, ``modulus`` or ``domain``.
Examples
========
>>> from sympy import factor, sqrt
>>> from sympy.abc import x, y
>>> factor(2*x**5 + 2*x**4*y + 4*x**3 + 4*x**2*y + 2*x + 2*y)
2*(x + y)*(x**2 + 1)**2
>>> factor(x**2 + 1)
x**2 + 1
>>> factor(x**2 + 1, modulus=2)
(x + 1)**2
>>> factor(x**2 + 1, gaussian=True)
(x - I)*(x + I)
>>> factor(x**2 - 2, extension=sqrt(2))
(x - sqrt(2))*(x + sqrt(2))
>>> factor((x**2 - 1)/(x**2 + 4*x + 4))
(x - 1)*(x + 1)/(x + 2)**2
>>> factor((x**2 + 4*x + 4)**10000000*(x**2 + 1))
(x + 2)**20000000*(x**2 + 1)
By default, factor deals with an expression as a whole:
>>> eq = 2**(x**2 + 2*x + 1)
>>> factor(eq)
2**(x**2 + 2*x + 1)
If the ``deep`` flag is True then subexpressions will
be factored:
>>> factor(eq, deep=True)
2**((x + 1)**2)
See Also
========
sympy.ntheory.factor_.factorint
"""
f = sympify(f)
if args.pop('deep', False):
partials = {}
muladd = f.atoms(Mul, Add)
for p in muladd:
fac = factor(p, *gens, **args)
if (fac.is_Mul or fac.is_Pow) and fac != p:
partials[p] = fac
return f.xreplace(partials)
try:
return _generic_factor(f, gens, args, method='factor')
except PolynomialError as msg:
if not f.is_commutative:
from sympy.core.exprtools import factor_nc
return factor_nc(f)
else:
raise PolynomialError(msg)
@public
def intervals(F, all=False, eps=None, inf=None, sup=None, strict=False, fast=False, sqf=False):
"""
Compute isolating intervals for roots of ``f``.
Examples
========
>>> from sympy import intervals
>>> from sympy.abc import x
>>> intervals(x**2 - 3)
[((-2, -1), 1), ((1, 2), 1)]
>>> intervals(x**2 - 3, eps=1e-2)
[((-26/15, -19/11), 1), ((19/11, 26/15), 1)]
"""
if not hasattr(F, '__iter__'):
try:
F = Poly(F)
except GeneratorsNeeded:
return []
return F.intervals(all=all, eps=eps, inf=inf, sup=sup, fast=fast, sqf=sqf)
else:
polys, opt = parallel_poly_from_expr(F, domain='QQ')
if len(opt.gens) > 1:
raise MultivariatePolynomialError
for i, poly in enumerate(polys):
polys[i] = poly.rep.rep
if eps is not None:
eps = opt.domain.convert(eps)
if eps <= 0:
raise ValueError("'eps' must be a positive rational")
if inf is not None:
inf = opt.domain.convert(inf)
if sup is not None:
sup = opt.domain.convert(sup)
intervals = dup_isolate_real_roots_list(polys, opt.domain,
eps=eps, inf=inf, sup=sup, strict=strict, fast=fast)
result = []
for (s, t), indices in intervals:
s, t = opt.domain.to_sympy(s), opt.domain.to_sympy(t)
result.append(((s, t), indices))
return result
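Each isolating interval brackets exactly one real root; for a square-free input, a sign change across the endpoints confirms this. A short sketch:

```python
from sympy import intervals
from sympy.abc import x

f = x**2 - 3
roots = intervals(f)
for (a, b), mult in roots:
    # the polynomial changes sign across each isolating interval
    assert f.subs(x, a)*f.subs(x, b) < 0
    assert mult == 1
```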
@public
def refine_root(f, s, t, eps=None, steps=None, fast=False, check_sqf=False):
"""
Refine an isolating interval of a root to the given precision.
Examples
========
>>> from sympy import refine_root
>>> from sympy.abc import x
>>> refine_root(x**2 - 3, 1, 2, eps=1e-2)
(19/11, 26/15)
"""
try:
F = Poly(f)
except GeneratorsNeeded:
raise PolynomialError(
"can't refine a root of %s, not a polynomial" % f)
return F.refine_root(s, t, eps=eps, steps=steps, fast=fast, check_sqf=check_sqf)
@public
def count_roots(f, inf=None, sup=None):
"""
Return the number of roots of ``f`` in the interval ``[inf, sup]``.
If one of ``inf`` or ``sup`` is complex, it will return the number of roots
in the complex rectangle with corners at ``inf`` and ``sup``.
Examples
========
>>> from sympy import count_roots, I
>>> from sympy.abc import x
>>> count_roots(x**4 - 4, -3, 3)
2
>>> count_roots(x**4 - 4, 0, 1 + 3*I)
1
"""
try:
F = Poly(f, greedy=False)
except GeneratorsNeeded:
raise PolynomialError("can't count roots of %s, not a polynomial" % f)
return F.count_roots(inf=inf, sup=sup)
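With no bounds supplied, ``count_roots`` counts all real roots, so it agrees with the length of ``real_roots``; a small sketch:

```python
from sympy import count_roots, real_roots
from sympy.abc import x

f = x**4 - 4
# x**4 - 4 = (x**2 - 2)*(x**2 + 2) has exactly two real roots
assert count_roots(f) == len(real_roots(f)) == 2
```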
@public
def real_roots(f, multiple=True):
"""
Return a list of real roots with multiplicities of ``f``.
Examples
========
>>> from sympy import real_roots
>>> from sympy.abc import x
>>> real_roots(2*x**3 - 7*x**2 + 4*x + 4)
[-1/2, 2, 2]
"""
try:
F = Poly(f, greedy=False)
except GeneratorsNeeded:
raise PolynomialError(
"can't compute real roots of %s, not a polynomial" % f)
return F.real_roots(multiple=multiple)
@public
def nroots(f, n=15, maxsteps=50, cleanup=True, error=False):
"""
Compute numerical approximations of roots of ``f``.
Examples
========
>>> from sympy import nroots
>>> from sympy.abc import x
>>> nroots(x**2 - 3, n=15)
[-1.73205080756888, 1.73205080756888]
>>> nroots(x**2 - 3, n=30)
[-1.73205080756887729352744634151, 1.73205080756887729352744634151]
"""
try:
F = Poly(f, greedy=False)
except GeneratorsNeeded:
raise PolynomialError(
"can't compute numerical roots of %s, not a polynomial" % f)
return F.nroots(n=n, maxsteps=maxsteps, cleanup=cleanup, error=error)
@public
def ground_roots(f, *gens, **args):
"""
Compute roots of ``f`` by factorization in the ground domain.
Examples
========
>>> from sympy import ground_roots
>>> from sympy.abc import x
>>> ground_roots(x**6 - 4*x**4 + 4*x**3 - x**2)
{0: 2, 1: 2}
"""
options.allowed_flags(args, [])
try:
F, opt = poly_from_expr(f, *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('ground_roots', 1, exc)
return F.ground_roots()
@public
def nth_power_roots_poly(f, n, *gens, **args):
"""
Construct a polynomial with n-th powers of roots of ``f``.
Examples
========
>>> from sympy import nth_power_roots_poly, factor, roots
>>> from sympy.abc import x
>>> f = x**4 - x**2 + 1
>>> g = factor(nth_power_roots_poly(f, 2))
>>> g
(x**2 - x + 1)**2
>>> R_f = [ (r**2).expand() for r in roots(f) ]
>>> R_g = roots(g).keys()
>>> set(R_f) == set(R_g)
True
"""
options.allowed_flags(args, [])
try:
F, opt = poly_from_expr(f, *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('nth_power_roots_poly', 1, exc)
result = F.nth_power_roots_poly(n)
if not opt.polys:
return result.as_expr()
else:
return result
@public
def cancel(f, *gens, **args):
"""
Cancel common factors in a rational function ``f``.
Examples
========
>>> from sympy import cancel, sqrt, Symbol
>>> from sympy.abc import x
>>> A = Symbol('A', commutative=False)
>>> cancel((2*x**2 - 2)/(x**2 - 2*x + 1))
(2*x + 2)/(x - 1)
>>> cancel((sqrt(3) + sqrt(15)*A)/(sqrt(2) + sqrt(10)*A))
sqrt(6)/2
"""
from sympy.core.exprtools import factor_terms
options.allowed_flags(args, ['polys'])
f = sympify(f)
if not isinstance(f, (tuple, Tuple)):
if f.is_Number or isinstance(f, Relational) or not isinstance(f, Expr):
return f
f = factor_terms(f, radical=True)
p, q = f.as_numer_denom()
elif len(f) == 2:
p, q = f
elif isinstance(f, Tuple):
return factor_terms(f)
else:
raise ValueError('unexpected argument: %s' % f)
try:
(F, G), opt = parallel_poly_from_expr((p, q), *gens, **args)
except PolificationFailed:
if not isinstance(f, (tuple, Tuple)):
return f
else:
return S.One, p, q
except PolynomialError as msg:
if f.is_commutative and not f.has(Piecewise):
raise PolynomialError(msg)
# Handling of noncommutative and/or piecewise expressions
if f.is_Add or f.is_Mul:
sifted = sift(f.args, lambda x: x.is_commutative and not x.has(Piecewise))
c, nc = sifted[True], sifted[False]
nc = [cancel(i) for i in nc]
return f.func(cancel(f.func._from_args(c)), *nc)
else:
reps = []
pot = preorder_traversal(f)
next(pot)
for e in pot:
# XXX: This should really skip anything that's not Expr.
if isinstance(e, (tuple, Tuple, BooleanAtom)):
continue
try:
reps.append((e, cancel(e)))
pot.skip() # this was handled successfully
except NotImplementedError:
pass
return f.xreplace(dict(reps))
c, P, Q = F.cancel(G)
if not isinstance(f, (tuple, Tuple)):
return c*(P.as_expr()/Q.as_expr())
else:
if not opt.polys:
return c, P.as_expr(), Q.as_expr()
else:
return c, P, Q
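As the code above shows, ``cancel`` also accepts a ``(numerator, denominator)`` pair, in which case it returns the tuple ``(c, numer, denom)`` instead of a single expression; a sketch:

```python
from sympy import cancel
from sympy.abc import x

# expression form: the common factor x - 2 is cancelled
assert cancel((x**2 - 4)/(x - 2)) == x + 2
# tuple form: returns leading coefficient plus cancelled numerator/denominator
c, p, q = cancel((x**2 - 4, x**2 - 4*x + 4))
assert (c, p, q) == (1, x + 2, x - 2)
```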
@public
def reduced(f, G, *gens, **args):
"""
Reduces a polynomial ``f`` modulo a set of polynomials ``G``.
Given a polynomial ``f`` and a set of polynomials ``G = (g_1, ..., g_n)``,
computes a set of quotients ``q = (q_1, ..., q_n)`` and the remainder ``r``
such that ``f = q_1*g_1 + ... + q_n*g_n + r``, where ``r`` vanishes or ``r``
is a completely reduced polynomial with respect to ``G``.
Examples
========
>>> from sympy import reduced
>>> from sympy.abc import x, y
>>> reduced(2*x**4 + y**2 - x**2 + y**3, [x**3 - x, y**3 - y])
([2*x, 1], x**2 + y**2 + y)
"""
options.allowed_flags(args, ['polys', 'auto'])
try:
polys, opt = parallel_poly_from_expr([f] + list(G), *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('reduced', 0, exc)
domain = opt.domain
retract = False
if opt.auto and domain.has_Ring and not domain.has_Field:
opt = opt.clone(dict(domain=domain.get_field()))
retract = True
from sympy.polys.rings import xring
_ring, _ = xring(opt.gens, opt.domain, opt.order)
for i, poly in enumerate(polys):
poly = poly.set_domain(opt.domain).rep.to_dict()
polys[i] = _ring.from_dict(poly)
Q, r = polys[0].div(polys[1:])
Q = [Poly._from_dict(dict(q), opt) for q in Q]
r = Poly._from_dict(dict(r), opt)
if retract:
try:
_Q, _r = [q.to_ring() for q in Q], r.to_ring()
except CoercionFailed:
pass
else:
Q, r = _Q, _r
if not opt.polys:
return [q.as_expr() for q in Q], r.as_expr()
else:
return Q, r
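A sketch verifying the division identity stated in the docstring, ``f = q_1*g_1 + ... + q_n*g_n + r``:

```python
from sympy import reduced, expand
from sympy.abc import x, y

f = 2*x**4 + y**2 - x**2 + y**3
G = [x**3 - x, y**3 - y]
Q, r = reduced(f, G)
# the quotients and remainder reconstruct f exactly
assert expand(sum(q*g for q, g in zip(Q, G)) + r) == expand(f)
```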
@public
def groebner(F, *gens, **args):
"""
Computes the reduced Groebner basis for a set of polynomials.
Use the ``order`` argument to set the monomial ordering that will be
used to compute the basis. Allowed orders are ``lex``, ``grlex`` and
``grevlex``. If no order is specified, it defaults to ``lex``.
For more information on Groebner bases, see the references and the
docstring of ``solve_poly_system()``.
Examples
========
Example taken from [1].
>>> from sympy import groebner
>>> from sympy.abc import x, y
>>> F = [x*y - 2*y, 2*y**2 - x**2]
>>> groebner(F, x, y, order='lex')
GroebnerBasis([x**2 - 2*y**2, x*y - 2*y, y**3 - 2*y], x, y,
domain='ZZ', order='lex')
>>> groebner(F, x, y, order='grlex')
GroebnerBasis([y**3 - 2*y, x**2 - 2*y**2, x*y - 2*y], x, y,
domain='ZZ', order='grlex')
>>> groebner(F, x, y, order='grevlex')
GroebnerBasis([y**3 - 2*y, x**2 - 2*y**2, x*y - 2*y], x, y,
domain='ZZ', order='grevlex')
By default, an improved implementation of the Buchberger algorithm is
used. Optionally, an implementation of the F5B algorithm can be used.
The algorithm can be set using ``method`` flag or with the :func:`setup`
function from :mod:`sympy.polys.polyconfig`:
>>> F = [x**2 - x - 1, (2*x - 1) * y - (x**10 - (1 - x)**10)]
>>> groebner(F, x, y, method='buchberger')
GroebnerBasis([x**2 - x - 1, y - 55], x, y, domain='ZZ', order='lex')
>>> groebner(F, x, y, method='f5b')
GroebnerBasis([x**2 - x - 1, y - 55], x, y, domain='ZZ', order='lex')
References
==========
1. [Buchberger01]_
2. [Cox97]_
"""
return GroebnerBasis(F, *gens, **args)
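An ideal-membership sketch (illustrative, not part of this module): any polynomial built from the generators of ``F`` lies in the ideal and therefore reduces to zero modulo the Groebner basis.

```python
from sympy import groebner, reduced, expand
from sympy.abc import x, y

F = [x*y - 2*y, 2*y**2 - x**2]
G = groebner(F, x, y, order='lex')
# an arbitrary combination of the generators is in the ideal ...
member = expand(x*F[0] + y*F[1])
# ... so its remainder modulo the Groebner basis is zero
_, r = reduced(member, list(G), x, y)
assert r == 0
```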
@public
def is_zero_dimensional(F, *gens, **args):
"""
Checks if the ideal generated by a Groebner basis is zero-dimensional.
The algorithm checks if the set of monomials not divisible by the
leading monomial of any element of ``F`` is bounded.
References
==========
David A. Cox, John B. Little, Donal O'Shea. Ideals, Varieties and
Algorithms, 3rd edition, p. 230
"""
return GroebnerBasis(F, *gens, **args).is_zero_dimensional
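A sketch of the distinction this predicate draws: a system with finitely many solutions generates a zero-dimensional ideal, while a system whose solution set is a curve does not.

```python
from sympy import is_zero_dimensional
from sympy.abc import x, y

# four isolated solutions (+-sqrt(2), +-sqrt(3)) -> zero-dimensional
assert is_zero_dimensional([x**2 - 2, y**2 - 3], x, y)
# the hyperbola x*y = 1 is a curve of solutions -> not zero-dimensional
assert not is_zero_dimensional([x*y - 1], x, y)
```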
@public
class GroebnerBasis(Basic):
"""Represents a reduced Groebner basis. """
def __new__(cls, F, *gens, **args):
"""Compute a reduced Groebner basis for a system of polynomials. """
options.allowed_flags(args, ['polys', 'method'])
try:
polys, opt = parallel_poly_from_expr(F, *gens, **args)
except PolificationFailed as exc:
raise ComputationFailed('groebner', len(F), exc)
from sympy.polys.rings import PolyRing
ring = PolyRing(opt.gens, opt.domain, opt.order)
for i, poly in enumerate(polys):
polys[i] = ring.from_dict(poly.rep.to_dict())
G = _groebner(polys, ring, method=opt.method)
G = [Poly._from_dict(g, opt) for g in G]
return cls._new(G, opt)
@classmethod
def _new(cls, basis, options):
obj = Basic.__new__(cls)
obj._basis = tuple(basis)
obj._options = options
return obj
@property
def args(self):
return (Tuple(*self._basis), Tuple(*self._options.gens))
@property
def exprs(self):
return [poly.as_expr() for poly in self._basis]
@property
def polys(self):
return list(self._basis)
@property
def gens(self):
return self._options.gens
@property
def domain(self):
return self._options.domain
@property
def order(self):
return self._options.order
def __len__(self):
return len(self._basis)
def __iter__(self):
if self._options.polys:
return iter(self.polys)
else:
return iter(self.exprs)
def __getitem__(self, item):
if self._options.polys:
basis = self.polys
else:
basis = self.exprs
return basis[item]
def __hash__(self):
return hash((self._basis, tuple(self._options.items())))
def __eq__(self, other):
if isinstance(other, self.__class__):
return self._basis == other._basis and self._options == other._options
elif iterable(other):
return self.polys == list(other) or self.exprs == list(other)
else:
return False
def __ne__(self, other):
return not self.__eq__(other)
@property
def is_zero_dimensional(self):
"""
Checks if the ideal generated by a Groebner basis is zero-dimensional.
The algorithm checks if the set of monomials not divisible by the
leading monomial of any basis element is bounded.
References
==========
David A. Cox, John B. Little, Donal O'Shea. Ideals, Varieties and
Algorithms, 3rd edition, p. 230
"""
def single_var(monomial):
return sum(map(bool, monomial)) == 1
exponents = Monomial([0]*len(self.gens))
order = self._options.order
for poly in self.polys:
monomial = poly.LM(order=order)
if single_var(monomial):
exponents *= monomial
# If any element of the exponents vector is zero, then there's
# a variable for which there's no degree bound and the ideal
# generated by this Groebner basis isn't zero-dimensional.
return all(exponents)
def fglm(self, order):
"""
Convert a Groebner basis from one ordering to another.
The FGLM algorithm converts reduced Groebner bases of zero-dimensional
ideals from one ordering to another. This method is often used when it
is infeasible to compute a Groebner basis with respect to a particular
ordering directly.
Examples
========
>>> from sympy.abc import x, y
>>> from sympy import groebner
>>> F = [x**2 - 3*y - x + 1, y**2 - 2*x + y - 1]
>>> G = groebner(F, x, y, order='grlex')
>>> list(G.fglm('lex'))
[2*x - y**2 - y + 1, y**4 + 2*y**3 - 3*y**2 - 16*y + 7]
>>> list(groebner(F, x, y, order='lex'))
[2*x - y**2 - y + 1, y**4 + 2*y**3 - 3*y**2 - 16*y + 7]
References
==========
J.C. Faugere, P. Gianni, D. Lazard, T. Mora (1994). Efficient
Computation of Zero-dimensional Groebner Bases by Change of
Ordering
"""
opt = self._options
src_order = opt.order
dst_order = monomial_key(order)
if src_order == dst_order:
return self
if not self.is_zero_dimensional:
raise NotImplementedError("can't convert Groebner bases of ideals with positive dimension")
polys = list(self._basis)
domain = opt.domain
opt = opt.clone(dict(
domain=domain.get_field(),
order=dst_order,
))
from sympy.polys.rings import xring
_ring, _ = xring(opt.gens, opt.domain, src_order)
for i, poly in enumerate(polys):
poly = poly.set_domain(opt.domain).rep.to_dict()
polys[i] = _ring.from_dict(poly)
G = matrix_fglm(polys, _ring, dst_order)
G = [Poly._from_dict(dict(g), opt) for g in G]
if not domain.has_Field:
G = [g.clear_denoms(convert=True)[1] for g in G]
opt.domain = domain
return self._new(G, opt)
def reduce(self, expr, auto=True):
"""
Reduces a polynomial modulo a Groebner basis.
Given a polynomial ``f`` and a set of polynomials ``G = (g_1, ..., g_n)``,
computes a set of quotients ``q = (q_1, ..., q_n)`` and the remainder ``r``
such that ``f = q_1*g_1 + ... + q_n*g_n + r``, where ``r`` vanishes or is
a completely reduced polynomial with respect to ``G``.
Examples
========
>>> from sympy import groebner, expand
>>> from sympy.abc import x, y
>>> f = 2*x**4 - x**2 + y**3 + y**2
>>> G = groebner([x**3 - x, y**3 - y])
>>> G.reduce(f)
([2*x, 1], x**2 + y**2 + y)
>>> Q, r = _
>>> expand(sum(q*g for q, g in zip(Q, G)) + r)
2*x**4 - x**2 + y**3 + y**2
>>> _ == f
True
"""
poly = Poly._from_expr(expr, self._options)
polys = [poly] + list(self._basis)
opt = self._options
domain = opt.domain
retract = False
if auto and domain.has_Ring and not domain.has_Field:
opt = opt.clone(dict(domain=domain.get_field()))
retract = True
from sympy.polys.rings import xring
_ring, _ = xring(opt.gens, opt.domain, opt.order)
for i, poly in enumerate(polys):
poly = poly.set_domain(opt.domain).rep.to_dict()
polys[i] = _ring.from_dict(poly)
Q, r = polys[0].div(polys[1:])
Q = [Poly._from_dict(dict(q), opt) for q in Q]
r = Poly._from_dict(dict(r), opt)
if retract:
try:
_Q, _r = [q.to_ring() for q in Q], r.to_ring()
except CoercionFailed:
pass
else:
Q, r = _Q, _r
if not opt.polys:
return [q.as_expr() for q in Q], r.as_expr()
else:
return Q, r
def contains(self, poly):
"""
Check if ``poly`` belongs to the ideal generated by ``self``.
Examples
========
>>> from sympy import groebner
>>> from sympy.abc import x, y
>>> f = 2*x**3 + y**3 + 3*y
>>> G = groebner([x**2 + y**2 - 1, x*y - 2])
>>> G.contains(f)
True
>>> G.contains(f + 1)
False
"""
return self.reduce(poly)[1] == 0
@public
def poly(expr, *gens, **args):
"""
Efficiently transform an expression into a polynomial.
Examples
========
>>> from sympy import poly
>>> from sympy.abc import x
>>> poly(x*(x**2 + x - 1)**2)
Poly(x**5 + 2*x**4 - x**3 - 2*x**2 + x, x, domain='ZZ')
"""
options.allowed_flags(args, [])
def _poly(expr, opt):
terms, poly_terms = [], []
for term in Add.make_args(expr):
factors, poly_factors = [], []
for factor in Mul.make_args(term):
if factor.is_Add:
poly_factors.append(_poly(factor, opt))
elif factor.is_Pow and factor.base.is_Add and factor.exp.is_Integer:
poly_factors.append(
_poly(factor.base, opt).pow(factor.exp))
else:
factors.append(factor)
if not poly_factors:
terms.append(term)
else:
product = poly_factors[0]
for factor in poly_factors[1:]:
product = product.mul(factor)
if factors:
factor = Mul(*factors)
if factor.is_Number:
product = product.mul(factor)
else:
product = product.mul(Poly._from_expr(factor, opt))
poly_terms.append(product)
if not poly_terms:
result = Poly._from_expr(expr, opt)
else:
result = poly_terms[0]
for term in poly_terms[1:]:
result = result.add(term)
if terms:
term = Add(*terms)
if term.is_Number:
result = result.add(term)
else:
result = result.add(Poly._from_expr(term, opt))
return result.reorder(*opt.get('gens', ()), **args)
expr = sympify(expr)
if expr.is_Poly:
return Poly(expr, *gens, **args)
if 'expand' not in args:
args['expand'] = False
opt = options.build_options(gens, args)
return _poly(expr, opt)
from sympy.functions import Piecewise
# ==== end of ojengwa/sympy :: sympy/polys/polytools.py (Python, bsd-3-clause, 169,982 bytes; keywords: ["Gaussian"]) ====
#!/usr/bin/env python
# Copyright 2014-2018 The PySCF Developers. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Author: Qiming Sun <osirpt.sun@gmail.com>
#
import warnings
import ctypes
import numpy
from pyscf import lib
from pyscf.gto.moleintor import make_loc
BLKSIZE = 104 # must be equal to lib/gto/grid_ao_drv.h
libcgto = lib.load_library('libcgto')
def eval_gto(mol, eval_name, coords,
comp=None, shls_slice=None, non0tab=None, ao_loc=None, out=None):
r'''Evaluate AO function values on the given grids.
Args:
eval_name : str
======================== ====== =======================
Function comp Expression
======================== ====== =======================
"GTOval_sph" 1 |AO>
"GTOval_ip_sph" 3 nabla |AO>
"GTOval_ig_sph" 3 (#C(0 1) g) |AO>
"GTOval_ipig_sph" 9 (#C(0 1) nabla g) |AO>
"GTOval_cart" 1 |AO>
"GTOval_ip_cart" 3 nabla |AO>
"GTOval_ig_cart" 3 (#C(0 1) g)|AO>
"GTOval_sph_deriv1" 4 GTO value and 1st order GTO values
"GTOval_sph_deriv2" 10 All derivatives up to 2nd order
"GTOval_sph_deriv3" 20 All derivatives up to 3rd order
"GTOval_sph_deriv4" 35 All derivatives up to 4th order
"GTOval_sp_spinor" 1 sigma dot p |AO> (spinor basis)
"GTOval_ipsp_spinor" 3 nabla sigma dot p |AO> (spinor basis)
"GTOval_ipipsp_spinor" 9 nabla nabla sigma dot p |AO> (spinor basis)
======================== ====== =======================
mol : an instance of :class:`Mole`
Its ``_atm``, ``_bas`` and ``_env`` attributes supply the int32 and
float64 ndarrays passed as libcint integral function arguments.
coords : 2D array, shape (N,3)
The coordinates of the grids.
Kwargs:
comp : int
Number of the components of the operator
shls_slice : 2-element list
(shl_start, shl_end).
If given, only part of AOs (shl_start <= shell_id < shl_end) are
evaluated. By default, all shells defined in mol will be evaluated.
non0tab : 2D bool array
mask array to indicate whether the AO values are zero. The mask
array can be obtained by calling :func:`dft.gen_grid.make_mask`
out : ndarray
If provided, results are written into this array.
Returns:
2D array of shape (N,nao) or 3D array of shape (\*,N,nao) to store AO
values on grids.
Examples:
>>> mol = gto.M(atom='O 0 0 0; H 0 0 1; H 0 1 0', basis='ccpvdz')
>>> coords = numpy.random.random((100,3)) # 100 random points
>>> ao_value = mol.eval_gto("GTOval_sph", coords)
>>> print(ao_value.shape)
(100, 24)
>>> ao_value = mol.eval_gto("GTOval_ig_sph", coords)
>>> print(ao_value.shape)
(3, 100, 24)
'''
eval_name, comp = _get_intor_and_comp(mol, eval_name, comp)
atm = numpy.asarray(mol._atm, dtype=numpy.int32, order='C')
bas = numpy.asarray(mol._bas, dtype=numpy.int32, order='C')
env = numpy.asarray(mol._env, dtype=numpy.double, order='C')
coords = numpy.asarray(coords, dtype=numpy.double, order='F')
natm = atm.shape[0]
nbas = bas.shape[0]
ngrids = coords.shape[0]
if ao_loc is None:
ao_loc = make_loc(bas, eval_name)
if shls_slice is None:
shls_slice = (0, nbas)
sh0, sh1 = shls_slice
nao = ao_loc[sh1] - ao_loc[sh0]
if 'spinor' in eval_name:
ao = numpy.ndarray((2,comp,nao,ngrids), dtype=numpy.complex128, buffer=out)
else:
ao = numpy.ndarray((comp,nao,ngrids), buffer=out)
if non0tab is None:
non0tab = numpy.ones(((ngrids+BLKSIZE-1)//BLKSIZE,nbas),
dtype=numpy.int8)
drv = getattr(libcgto, eval_name)
drv(ctypes.c_int(ngrids),
(ctypes.c_int*2)(*shls_slice), ao_loc.ctypes.data_as(ctypes.c_void_p),
ao.ctypes.data_as(ctypes.c_void_p),
coords.ctypes.data_as(ctypes.c_void_p),
non0tab.ctypes.data_as(ctypes.c_void_p),
atm.ctypes.data_as(ctypes.c_void_p), ctypes.c_int(natm),
bas.ctypes.data_as(ctypes.c_void_p), ctypes.c_int(nbas),
env.ctypes.data_as(ctypes.c_void_p))
ao = numpy.swapaxes(ao, -1, -2)
if comp == 1:
if 'spinor' in eval_name:
ao = ao[:,0]
else:
ao = ao[0]
return ao
def _get_intor_and_comp(mol, eval_name, comp=None):
if not ('_sph' in eval_name or '_cart' in eval_name or
'_spinor' in eval_name):
if mol.cart:
eval_name = eval_name + '_cart'
else:
eval_name = eval_name + '_sph'
if comp is None:
if '_spinor' in eval_name:
fname = eval_name.replace('_spinor', '')
comp = _GTO_EVAL_FUNCTIONS.get(fname, (None,None))[1]
else:
fname = eval_name.replace('_sph', '').replace('_cart', '')
comp = _GTO_EVAL_FUNCTIONS.get(fname, (None,None))[0]
if comp is None:
warnings.warn('Function %s not found. Set its comp to 1' % eval_name)
comp = 1
return eval_name, comp
_GTO_EVAL_FUNCTIONS = {
# Function name : (comp-for-scalar, comp-for-spinor)
'GTOval' : (1, 1 ),
'GTOval_ip' : (3, 3 ),
'GTOval_ig' : (3, 3 ),
'GTOval_ipig' : (9, 9 ),
'GTOval_deriv0' : (1, 1 ),
'GTOval_deriv1' : (4, 4 ),
'GTOval_deriv2' : (10,10),
'GTOval_deriv3' : (20,20),
'GTOval_deriv4' : (35,35),
'GTOval_sp' : (4, 1 ),
'GTOval_ipsp' : (12,3 ),
'GTOval_ipipsp' : (36,9 ),
}
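# A hedged illustration of the lookup logic in _get_intor_and_comp above:
# a scalar name such as "GTOval_sph_deriv1" has its basis suffix stripped,
# then maps to comp=4 (the AO value plus its three first derivatives)
# through the _GTO_EVAL_FUNCTIONS table.
assert _GTO_EVAL_FUNCTIONS['GTOval_sph_deriv1'.replace('_sph', '')][0] == 4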
if __name__ == '__main__':
from pyscf import gto
mol = gto.M(atom='O 0 0 0; H 0 0 1; H 0 1 0', basis='ccpvdz')
coords = numpy.random.random((100,3))
ao_value = eval_gto(mol, "GTOval_sph", coords)
print(ao_value.shape)
# ==== end of sunqm/pyscf :: pyscf/gto/eval_gto.py (Python, apache-2.0, 6,798 bytes; keywords: ["PySCF"]) ====
#!/usr/bin/env python3
###############################################################
# Copyright 2014 Lawrence Livermore National Security, LLC
# (c.f. AUTHORS, NOTICE.LLNS, COPYING)
#
# This file is part of the Flux resource manager framework.
# For details, see https://github.com/flux-framework.
#
# SPDX-License-Identifier: LGPL-3.0
###############################################################
import os
import errno
import sys
import json
import unittest
import datetime
import signal
import locale
import pathlib
from glob import glob
import yaml
import flux
import flux.kvs
import flux.constants
from flux import job
from flux.job import Jobspec, JobspecV1, ffi, JobID, JobInfo
from flux.job.list import VALID_ATTRS
from flux.job.stats import JobStats
from flux.future import Future
def __flux_size():
return 1
def yaml_to_json(s):
obj = yaml.safe_load(s)
return json.dumps(obj, separators=(",", ":"))
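# A hedged aside on yaml_to_json above: the (",", ":") separators make
# json.dumps emit compact output with no spaces after commas or colons,
# e.g. '{"a":[1,2]}' rather than the default '{"a": [1, 2]}'.
assert json.dumps({"a": [1, 2]}, separators=(",", ":")) == '{"a":[1,2]}'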
class TestJob(unittest.TestCase):
@classmethod
def setUpClass(self):
self.fh = flux.Flux()
self.use_ascii = locale.getlocale()[1] != "UTF-8"
self.jobspec_dir = os.path.abspath(
os.path.join(os.environ["FLUX_SOURCE_DIR"], "t", "jobspec")
)
# get a valid jobspec
basic_jobspec_fname = os.path.join(self.jobspec_dir, "valid", "basic_v1.yaml")
with open(basic_jobspec_fname, "rb") as infile:
basic_yaml = infile.read()
self.basic_jobspec = yaml_to_json(basic_yaml)
def test_00_null_submit(self):
with self.assertRaises(EnvironmentError) as error:
job.submit(ffi.NULL, self.basic_jobspec)
self.assertEqual(error.exception.errno, errno.EINVAL)
with self.assertRaises(EnvironmentError) as error:
job.submit_get_id(ffi.NULL)
self.assertEqual(error.exception.errno, errno.EINVAL)
with self.assertRaises(EnvironmentError) as error:
job.submit(self.fh, ffi.NULL)
self.assertEqual(error.exception.errno, errno.EINVAL)
def test_01_nonstring_submit(self):
with self.assertRaises(TypeError):
job.submit(self.fh, 0)
def test_02_sync_submit(self):
jobid = job.submit(self.fh, self.basic_jobspec)
self.assertGreater(jobid, 0)
def test_03_invalid_construction(self):
for cls in [Jobspec, JobspecV1]:
for invalid_jobspec_filepath in glob(
os.path.join(self.jobspec_dir, "invalid", "*.yaml")
):
with self.assertRaises(
(ValueError, TypeError, yaml.scanner.ScannerError)
) as cm:
cls.from_yaml_file(invalid_jobspec_filepath)
def test_04_valid_construction(self):
for jobspec_filepath in glob(os.path.join(self.jobspec_dir, "valid", "*.yaml")):
Jobspec.from_yaml_file(jobspec_filepath)
def test_05_valid_v1_construction(self):
for jobspec_filepath in glob(
os.path.join(self.jobspec_dir, "valid_v1", "*.yaml")
):
JobspecV1.from_yaml_file(jobspec_filepath)
def test_06_iter(self):
jobspec_fname = os.path.join(self.jobspec_dir, "valid", "use_case_2.4.yaml")
jobspec = Jobspec.from_yaml_file(jobspec_fname)
self.assertEqual(len(list(jobspec)), 7)
self.assertEqual(len([x for x in jobspec if x["type"] == "core"]), 2)
self.assertEqual(len([x for x in jobspec if x["type"] == "slot"]), 2)
def test_07_count(self):
jobspec_fname = os.path.join(self.jobspec_dir, "valid", "use_case_2.4.yaml")
jobspec = Jobspec.from_yaml_file(jobspec_fname)
count_dict = jobspec.resource_counts()
self.assertEqual(count_dict["node"], 1)
self.assertEqual(count_dict["slot"], 11)
self.assertEqual(count_dict["core"], 16)
self.assertEqual(count_dict["memory"], 64)
def test_08_jobspec_submit(self):
jobspec = Jobspec.from_yaml_stream(self.basic_jobspec)
jobid = job.submit(self.fh, jobspec)
self.assertGreater(jobid, 0)
def test_09_valid_duration(self):
"""Test setting Jobspec duration to various valid values"""
jobspec = Jobspec.from_yaml_stream(self.basic_jobspec)
for duration in (100, 100.5):
delta = datetime.timedelta(seconds=duration)
for x in [duration, delta, "{}s".format(duration)]:
jobspec.duration = x
# duration setter converts value to a float
self.assertEqual(jobspec.duration, float(duration))
def test_10_invalid_duration(self):
"""Test setting Jobspec duration to various invalid values and types"""
jobspec = Jobspec.from_yaml_stream(self.basic_jobspec)
for duration in (-100, -100.5, datetime.timedelta(seconds=-5), "10h5m"):
with self.assertRaises(ValueError):
jobspec.duration = duration
for duration in ([], {}):
with self.assertRaises(TypeError):
jobspec.duration = duration
def test_11_cwd_pathlib(self):
jobspec_path = pathlib.PosixPath(self.jobspec_dir) / "valid" / "basic_v1.yaml"
jobspec = Jobspec.from_yaml_file(jobspec_path)
cwd = pathlib.PosixPath("/tmp")
jobspec.cwd = cwd
self.assertEqual(jobspec.cwd, os.fspath(cwd))
def test_12_environment(self):
jobspec = Jobspec.from_yaml_stream(self.basic_jobspec)
new_env = {"HOME": "foo", "foo": "bar"}
jobspec.environment = new_env
self.assertEqual(jobspec.environment, new_env)
def test_13_job_kvs(self):
jobid = job.submit(self.fh, self.basic_jobspec, waitable=True)
job.wait(self.fh, jobid=jobid)
for job_kvs_dir in [
job.job_kvs(self.fh, jobid),
job.job_kvs_guest(self.fh, jobid),
]:
self.assertTrue(isinstance(job_kvs_dir, flux.kvs.KVSDir))
self.assertTrue(flux.kvs.exists(self.fh, job_kvs_dir.path))
self.assertTrue(flux.kvs.isdir(self.fh, job_kvs_dir.path))
def test_14_job_cancel_invalid_args(self):
with self.assertRaises(ValueError):
job.kill(self.fh, "abc")
with self.assertRaises(ValueError):
job.cancel(self.fh, "abc")
with self.assertRaises(OSError):
job.kill(self.fh, 123)
with self.assertRaises(OSError):
job.cancel(self.fh, 123)
def test_15_job_cancel(self):
self.sleep_jobspec = JobspecV1.from_command(["sleep", "1000"])
jobid = job.submit(self.fh, self.sleep_jobspec, waitable=True)
job.cancel(self.fh, jobid)
fut = job.wait_async(self.fh, jobid=jobid).wait_for(5.0)
return_id, success, errmsg = fut.get_status()
self.assertEqual(return_id, jobid)
self.assertFalse(success)
def test_16_job_kill(self):
self.sleep_jobspec = JobspecV1.from_command(["sleep", "1000"])
jobid = job.submit(self.fh, self.sleep_jobspec, waitable=True)
# Wait for shell to fully start to avoid delay in signal
job.event_wait(self.fh, jobid, name="start")
job.event_wait(
self.fh, jobid, name="shell.start", eventlog="guest.exec.eventlog"
)
job.kill(self.fh, jobid, signum=signal.SIGKILL)
fut = job.wait_async(self.fh, jobid=jobid).wait_for(5.0)
return_id, success, errmsg = fut.get_status()
self.assertEqual(return_id, jobid)
self.assertFalse(success)
def test_20_000_job_event_functions_invalid_args(self):
with self.assertRaises(OSError) as cm:
for event in job.event_watch(self.fh, 123):
print(event)
self.assertEqual(cm.exception.errno, errno.ENOENT)
with self.assertRaises(OSError) as cm:
job.event_wait(self.fh, 123, "start")
self.assertEqual(cm.exception.errno, errno.ENOENT)
with self.assertRaises(OSError) as cm:
job.event_wait(None, 123, "start")
self.assertEqual(cm.exception.errno, errno.EINVAL)
def test_20_001_job_event_watch_async(self):
myarg = dict(a=1, b=2)
events = []
def cb(future, arg):
self.assertEqual(arg, myarg)
event = future.get_event()
if event is None:
future.get_flux().reactor_stop()
return
self.assertIsInstance(event, job.EventLogEvent)
events.append(event.name)
jobid = job.submit(self.fh, JobspecV1.from_command(["sleep", "0"]))
self.assertTrue(jobid > 0)
future = job.event_watch_async(self.fh, jobid)
self.assertIsInstance(future, job.JobEventWatchFuture)
future.then(cb, myarg)
rc = self.fh.reactor_run()
self.assertGreaterEqual(rc, 0)
self.assertEqual(len(events), 9)
self.assertEqual(events[0], "submit")
self.assertEqual(events[-1], "clean")
def test_20_002_job_event_watch_no_autoreset(self):
jobid = job.submit(self.fh, JobspecV1.from_command(["sleep", "0"]))
self.assertTrue(jobid > 0)
future = job.event_watch_async(self.fh, jobid)
self.assertIsInstance(future, job.JobEventWatchFuture)
# First event should be "submit"
event = future.get_event(autoreset=False)
self.assertIsInstance(event, job.EventLogEvent)
self.assertEqual(event.name, "submit")
# get_event() again with no reset returns same event:
event = future.get_event(autoreset=False)
self.assertIsInstance(event, job.EventLogEvent)
self.assertEqual(event.name, "submit")
# reset, then get_event() should get next event
future.reset()
event = future.get_event(autoreset=False)
self.assertIsInstance(event, job.EventLogEvent)
self.assertEqual(event.name, "depend")
future.cancel()
def test_20_003_job_event_watch_sync(self):
jobid = job.submit(self.fh, JobspecV1.from_command(["sleep", "0"]))
self.assertTrue(jobid > 0)
future = job.event_watch_async(self.fh, jobid)
self.assertIsInstance(future, job.JobEventWatchFuture)
event = future.get_event()
self.assertIsInstance(event, job.EventLogEvent)
self.assertEqual(event.name, "submit")
future.cancel()
def test_20_004_job_event_watch(self):
jobid = job.submit(self.fh, JobspecV1.from_command(["sleep", "0"]))
self.assertTrue(jobid > 0)
events = []
for event in job.event_watch(self.fh, jobid):
self.assertIsInstance(event, job.EventLogEvent)
self.assertTrue(hasattr(event, "timestamp"))
self.assertTrue(hasattr(event, "name"))
self.assertTrue(hasattr(event, "context"))
self.assertIs(type(event.timestamp), float)
self.assertIs(type(event.name), str)
self.assertIs(type(event.context), dict)
events.append(event.name)
self.assertEqual(len(events), 9)
def test_20_005_job_event_watch_with_cancel(self):
jobid = job.submit(
self.fh, JobspecV1.from_command(["sleep", "3"]), waitable=True
)
self.assertTrue(jobid > 0)
events = []
future = job.event_watch_async(self.fh, jobid)
while True:
event = future.get_event()
if event is None:
break
if event.name == "start":
future.cancel()
events.append(event.name)
self.assertEqual(event, None)
# Should have less than the expected number of events due to cancel
self.assertLess(len(events), 8)
job.cancel(self.fh, jobid)
job.wait(self.fh, jobid)
def test_20_005_1_job_event_watch_with_cancel_stop_true(self):
jobid = job.submit(
self.fh, JobspecV1.from_command(["sleep", "3"]), waitable=True
)
self.assertTrue(jobid > 0)
events = []
future = job.event_watch_async(self.fh, jobid)
def cb(future, events):
event = future.get_event()
if event.name == "start":
future.cancel(stop=True)
events.append(event.name)
future.then(cb, events)
rc = self.fh.reactor_run()
# Last event should be "start"
self.assertEqual(events[-1], "start")
job.cancel(self.fh, jobid)
job.wait(self.fh, jobid)
def test_20_006_job_event_wait(self):
jobid = job.submit(self.fh, JobspecV1.from_command(["sleep", "0"]))
self.assertTrue(jobid > 0)
event = job.event_wait(self.fh, jobid, "start")
self.assertIsInstance(event, job.EventLogEvent)
self.assertEqual(event.name, "start")
event = job.event_wait(
self.fh, jobid, "shell.init", eventlog="guest.exec.eventlog"
)
self.assertIsInstance(event, job.EventLogEvent)
self.assertEqual(event.name, "shell.init")
event = job.event_wait(self.fh, jobid, "clean")
self.assertIsInstance(event, job.EventLogEvent)
self.assertEqual(event.name, "clean")
with self.assertRaises(OSError):
job.event_wait(self.fh, jobid, "foo")
def test_20_007_job_event_wait_exception(self):
event = None
jobid = job.submit(
self.fh, JobspecV1.from_command(["sleep", "0"], num_tasks=128)
)
self.assertTrue(jobid > 0)
try:
event = job.event_wait(self.fh, jobid, "start")
except job.JobException as err:
self.assertEqual(err.severity, 0)
self.assertEqual(err.type, "alloc")
self.assertGreater(err.timestamp, 0.0)
self.assertIs(event, None)
try:
event = job.event_wait(self.fh, jobid, "start", raiseJobException=False)
except OSError as err:
self.assertEqual(err.errno, errno.ENODATA)
self.assertIs(event, None)
def test_21_stdio(self):
"""Test getter/setter methods for stdio properties"""
jobspec = Jobspec.from_yaml_stream(self.basic_jobspec)
for stream in ("stderr", "stdout", "stdin"):
self.assertEqual(getattr(jobspec, stream), None)
for path in ("foo.txt", "bar.md", "foo.json"):
setattr(jobspec, stream, path)
self.assertEqual(getattr(jobspec, stream), path)
with self.assertRaises(TypeError):
setattr(jobspec, stream, None)
def test_22_from_batch_command(self):
"""Test that `from_batch_command` produces a valid jobspec"""
jobid = job.submit(
self.fh, JobspecV1.from_batch_command("#!/bin/sh\nsleep 0", "nested sleep")
)
self.assertGreater(jobid, 0)
# test that a shebang is required
with self.assertRaises(ValueError):
job.submit(
self.fh,
JobspecV1.from_batch_command("sleep 0", "nested sleep with no shebang"),
)
def test_23_from_nest_command(self):
"""Test that `from_batch_command` produces a valid jobspec"""
jobid = job.submit(self.fh, JobspecV1.from_nest_command(["sleep", "0"]))
self.assertGreater(jobid, 0)
def test_24_jobid(self):
"""Test JobID class"""
parse_tests = [
{
"int": 0,
"dec": "0",
"hex": "0x0",
"dothex": "0000.0000.0000.0000",
"kvs": "job.0000.0000.0000.0000",
"f58": "ƒ1",
"words": "academy-academy-academy--academy-academy-academy",
},
{
"int": 1,
"dec": "1",
"hex": "0x1",
"dothex": "0000.0000.0000.0001",
"kvs": "job.0000.0000.0000.0001",
"f58": "ƒ2",
"words": "acrobat-academy-academy--academy-academy-academy",
},
{
"int": 65535,
"dec": "65535",
"hex": "0xffff",
"dothex": "0000.0000.0000.ffff",
"kvs": "job.0000.0000.0000.ffff",
"f58": "ƒLUv",
"words": "nevada-archive-academy--academy-academy-academy",
},
{
"int": 6787342413402046,
"dec": "6787342413402046",
"hex": "0x181d0d4d850fbe",
"dothex": "0018.1d0d.4d85.0fbe",
"kvs": "job.0018.1d0d.4d85.0fbe",
"f58": "ƒuzzybunny",
"words": "cake-plume-nepal--neuron-pencil-academy",
},
{
"int": 18446744073709551614,
"dec": "18446744073709551614",
"hex": "0xfffffffffffffffe",
"dothex": "ffff.ffff.ffff.fffe",
"kvs": "job.ffff.ffff.ffff.fffe",
"f58": "ƒjpXCZedGfVP",
"words": "mustang-analyze-verbal--natural-analyze-verbal",
},
]
for test in parse_tests:
for key in test:
if key == "f58" and self.use_ascii:
continue
jobid_int = test["int"]
jobid = job.JobID(test[key])
self.assertEqual(jobid, jobid_int)
jobid_repr = repr(jobid)
self.assertEqual(jobid_repr, f"JobID({jobid_int})")
if key not in ["int", "dec"]:
# Ensure encode back to same type works
self.assertEqual(getattr(jobid, key), test[key])
def test_25_job_list_attrs(self):
valid_attrs = self.fh.rpc("job-list.list-attrs", "{}").get()["attrs"]
self.assertEqual(set(valid_attrs), set(VALID_ATTRS))
def test_30_job_stats_sync(self):
stats = JobStats(self.fh)
# stats are uninitialized at first:
self.assertEqual(stats.active, -1)
self.assertEqual(stats.inactive, -1)
# synchronous update
stats.update_sync()
self.assertGreater(stats.inactive, 0)
def test_31_job_stats_async(self):
called = [False]
def cb(stats, mykw=None):
called[0] = True
self.assertGreater(stats.inactive, 0)
self.assertEqual(mykw, "mykw")
stats = JobStats(self.fh)
# stats are uninitialized at first:
self.assertEqual(stats.active, -1)
self.assertEqual(stats.inactive, -1)
# asynchronous update, no callback
stats.update()
self.assertEqual(stats.active, -1)
self.assertEqual(stats.inactive, -1)
self.fh.reactor_run()
self.assertGreater(stats.inactive, 0)
self.assertFalse(called[0])
# asynchronous update, with callback
stats.update(callback=cb, mykw="mykw")
self.fh.reactor_run()
self.assertTrue(called[0])
def assertJobInfoEqual(self, x, y, msg=None):
self.assertEqual(x.id, y.id)
self.assertEqual(x.result, y.result)
self.assertEqual(x.returncode, y.returncode)
self.assertEqual(x.waitstatus, y.waitstatus)
if y.t_run == 0.0:
self.assertEqual(x.runtime, y.runtime)
else:
self.assertGreater(x.runtime, 0.0)
self.assertEqual(x.exception.occurred, y.exception.occurred)
if y.exception.occurred:
self.assertEqual(x.exception.type, y.exception.type)
self.assertEqual(x.exception.severity, y.exception.severity)
self.assertEqual(x.exception.note, y.exception.note)
def test_32_job_result(self):
result = {}
ids = []
def cb(future, jobid):
result[jobid] = future
ids.append(job.submit(self.fh, JobspecV1.from_command(["true"])))
ids.append(job.submit(self.fh, JobspecV1.from_command(["false"])))
ids.append(job.submit(self.fh, JobspecV1.from_command(["nosuchprog"])))
ids.append(job.submit(self.fh, JobspecV1.from_command(["sleep", "120"])))
# Submit held job so we can cancel before RUN state
ids.append(job.submit(self.fh, JobspecV1.from_command(["true"]), urgency=0))
job.cancel(self.fh, ids[4])
for jobid in ids:
flux.job.result_async(self.fh, jobid).then(cb, jobid)
def cancel_on_start(future, jobid):
event = future.get_event()
if event is None:
return
if event.name == "shell.start":
job.cancel(self.fh, jobid)
future.cancel()
job.event_watch_async(self.fh, ids[3], eventlog="guest.exec.eventlog").then(
cancel_on_start, ids[3]
)
self.fh.reactor_run()
self.assertEqual(len(result.keys()), len(ids))
self.addTypeEqualityFunc(JobInfo, self.assertJobInfoEqual)
self.assertEqual(
result[ids[0]].get_info(),
JobInfo(
{
"id": ids[0],
"result": flux.constants.FLUX_JOB_RESULT_COMPLETED,
"t_start": 1.0,
"t_run": 2.0,
"t_cleanup": 3.0,
"waitstatus": 0,
"exception_occurred": False,
}
),
)
self.assertEqual(
result[ids[1]].get_info(),
JobInfo(
{
"id": ids[1],
"result": flux.constants.FLUX_JOB_RESULT_FAILED,
"t_submit": 1.0,
"t_run": 2.0,
"t_cleanup": 3.0,
"waitstatus": 256,
"exception_occurred": False,
}
),
)
self.assertEqual(
result[ids[2]].get_info(),
JobInfo(
{
"id": ids[2],
"result": flux.constants.FLUX_JOB_RESULT_FAILED,
"t_submit": 1.0,
"t_run": 2.0,
"t_cleanup": 3.0,
"waitstatus": 32512,
"exception_occurred": True,
"exception_type": "exec",
"exception_note": "task 0: start failed: nosuchprog: "
"No such file or directory",
"exception_severity": 0,
}
),
)
self.assertEqual(
result[ids[3]].get_info(),
JobInfo(
{
"id": ids[3],
"result": flux.constants.FLUX_JOB_RESULT_CANCELED,
"t_submit": 1.0,
"t_run": 2.0,
"t_cleanup": 3.0,
"waitstatus": 36608, # 143<<8
"exception_occurred": True,
"exception_type": "cancel",
"exception_note": "",
"exception_severity": 0,
}
),
)
self.assertEqual(
result[ids[4]].get_info(),
JobInfo(
{
"id": ids[4],
"result": flux.constants.FLUX_JOB_RESULT_CANCELED,
"t_submit": 0.0,
"exception_occurred": True,
"exception_type": "cancel",
"exception_note": "",
"exception_severity": 0,
}
),
)
# synchronous job.result() test
self.assertEqual(job.result(self.fh, ids[3]), result[ids[3]].get_info())
if __name__ == "__main__":
from subflux import rerun_under_flux
if rerun_under_flux(size=__flux_size(), personality="job"):
from pycotap import TAPTestRunner
unittest.main(testRunner=TAPTestRunner())
# vi: ts=4 sw=4 expandtab
# ==== end of chu11/flux-core :: t/python/t0010-job.py (Python, lgpl-3.0, 23,886 bytes; keywords: ["NEURON"]) ====
""" phenny module written as an example of how to write
phenny modules; supposed to accompany phenny_module_howto.md
written by Mozai <moc.iazom@sesom> one Saturday afternoon
Do I even need to put a license statement here? I do? Jeez.
"This work is licensed under the Creative Commons
Attribution-ShareAlike 3.0 Unported License. To view a copy of this
license, visit http://creativecommons.org/licenses/by-sa/3.0/."
No assurances of performance nor of safety are expressed or implied.
"""
import random, time
# delay between utterances; always a good idea with IRC bots.
COOLDOWN = 60 * 15 # 1: testing, 60 * 15 : normal 15 minute cooldown
# list() of names
NAMES = """
John Rose Dave Jade
WV PM AR WQ
Aradia Tavros Sollux Nepeta Karkat Kanaya
Terezi Vriska Equius Gamzee Eridan Feferi
Slick Droog Deuce Boxcars
Jane Roxy Dirk Jake
Damara Rufioh Mituna Meulin Kankri Porrim
Latula Aranea Horuss Kurloz Cronus Meenah
""".split()
# "Cronus": Hussie wrote in his will, just in case he died before
# he could finish Homestuck, that "dualscar would never get
# together with someone." Even a Leijon would not dare meddle.
def crackship(phenny, cmd_in):
" utters a nonsense romance pairing of two characters "
now = time.time()
self = phenny.bot.variables['crackship']
if self.atime + COOLDOWN < now :
random.shuffle(NAMES)
name_l = cmd_in.group(2)
if name_l and name_l not in NAMES:
name_l = name_l.title()
if name_l not in NAMES:
name_l = NAMES[0]
name_r = NAMES[1]
elif name_l == NAMES[0]:
name_r = NAMES[1]
else:
name_r = NAMES[0]
if name_l == 'Cronus':
name_r = 'nobody'
elif name_r == 'Cronus':
name_r = NAMES[2]
phenny.say("I ship %s with %s" % (name_l, name_r))
self.atime = now
# else:
# delay = self.atime + COOLDOWN - now
# phenny.bot.msg(cmd_in.nick,"%d second cooldown" % (delay))
crackship.priority = 'medium'
crackship.event = 'PRIVMSG'
crackship.thread = False # don't bother, non-blocking func call
crackship.rule = r'.*(ship|ships|shipped) (\S+) (?:and|with) \S+'
crackship.commands = ['ship','crackship']
crackship.atime = 0 # better to write to func attributes than 'global'
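The function-attribute trick in the comment above can be shown in isolation. This is a standalone sketch (not part of the phenny module; `throttled` is a hypothetical name) of storing mutable cooldown state on the function object instead of a module-level `global`:

```python
# Standalone sketch of per-function cooldown state via function attributes.
import time

def throttled():
    """Return True at most once per cooldown window, else False."""
    now = time.time()
    if throttled.atime + throttled.cooldown < now:
        throttled.atime = now  # record the last successful call
        return True
    return False

throttled.atime = 0       # epoch of last successful call
throttled.cooldown = 60   # seconds between allowed calls

first, second = throttled(), throttled()
print(first, second)  # True False
```

The state lives on `throttled` itself, so no `global` declaration is needed inside the function body.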
if __name__ == '__main__':
# run 'python crackship.py' to test it
import sys
sys.path.extend(('.','..')) # why is this necessary?
from phennytest import PhennyFake, CommandInputFake
print "--- Testing phenny module"
COOLDOWN = -1
FAKEPHENNY = PhennyFake()
for i in range(6):
FAKECMD = CommandInputFake('.ship')
crackship(FAKEPHENNY, FAKECMD)
print "(shipping John) - ",
FAKECMD = CommandInputFake('.ship John')
crackship(FAKEPHENNY, FAKECMD)
print "(shipping Karkat) - ",
FAKECMD = CommandInputFake('.ship Karkat')
crackship(FAKEPHENNY, FAKECMD)
|
bbot/ircbot
|
doc/crackship.py
|
Python
|
unlicense
| 2,834
|
[
"VisIt"
] |
861b1e4ede31cd86ba64dacf3b296a1933673b280d53fd7aae12653ad60618bc
|
#!/usr/bin/env python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from ase.units import Bohr
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
dat = np.loadtxt('out_0_2dsurfdown.dat')*Bohr
x = dat[0,1:].copy()
y = dat[1:,0].copy()
X, Y = np.meshgrid(x,y)
#ax.plot_surface(X, Y, dat[1:,1:])
ax.plot_wireframe(X, Y, dat[1:,1:])
dat = np.loadtxt('out_0_2dsurfup.dat')*Bohr
x = dat[0,1:].copy()
y = dat[1:,0].copy()
X, Y = np.meshgrid(x,y)
#ax.plot_surface(X, Y, dat[1:,1:])
ax.plot_wireframe(X, Y, dat[1:,1:])
plt.show()
|
fuulish/surf
|
tests/surface/plot_2d_surf.py
|
Python
|
gpl-3.0
| 588
|
[
"ASE"
] |
27b833f8c90b5f023107e04ae25873f83fed50e2f8901eab043daa064e6024a0
|
import os
import numpy as np
from ase import Atom, Atoms
from gpaw import GPAW
from gpaw.test import equal
from gpaw.lrtddft import LrTDDFT
txt = '-'
txt = None
load = True
load = False
xc = 'LDA'
R = 0.7 # approx. experimental bond length
a = 4.0
c = 5.0
H2 = Atoms([Atom('H', (a / 2, a / 2, (c - R) / 2)),
Atom('H', (a / 2, a / 2, (c + R) / 2))],
cell=(a, a, c))
calc = GPAW(xc=xc, nbands=2, spinpol=False, eigensolver='rmm-diis', txt=txt)
H2.set_calculator(calc)
H2.get_potential_energy()
calc.write('H2saved_wfs.gpw', 'all')
calc.write('H2saved.gpw')
wfs_error = calc.wfs.eigensolver.error
#print "-> starting directly after a gs calculation"
lr = LrTDDFT(calc, txt='-')
lr.diagonalize()
#print "-> reading gs with wfs"
gs = GPAW('H2saved_wfs.gpw', txt=txt)
# check that the wfs error is read correctly,
# but take rounding errors into account
assert( abs(calc.wfs.eigensolver.error/gs.wfs.eigensolver.error - 1) < 1e-8)
lr1 = LrTDDFT(gs, xc=xc, txt='-')
lr1.diagonalize()
# check the oscillator strength
assert (abs(lr1[0].get_oscillator_strength()[0] /
lr[0].get_oscillator_strength()[0] -1) < 1e-10)
#print "-> reading gs without wfs"
gs = GPAW('H2saved.gpw', txt=None)
lr2 = LrTDDFT(gs, txt='-')
lr2.diagonalize()
# check the oscillator strength
d = abs(lr2[0].get_oscillator_strength()[0] /
lr[0].get_oscillator_strength()[0] - 1)
assert (d < 2e-3), d
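The assertions in this test compare values by relative error (`abs(a/b - 1) < tol`). A minimal sketch, not part of the GPAW test, showing that this is equivalent to the stdlib's `math.isclose` with a relative tolerance:

```python
# Relative-error check, as used in the test above, versus math.isclose.
import math

a, b = 1.0000000001, 1.0
assert abs(a / b - 1) < 1e-8              # manual ratio form used in this test
assert math.isclose(a, b, rel_tol=1e-8)   # equivalent stdlib form
print("ok")
```

`math.isclose` avoids a division by zero when `b` can be zero, which the ratio form cannot.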
|
robwarm/gpaw-symm
|
gpaw/test/lrtddft2.py
|
Python
|
gpl-3.0
| 1,416
|
[
"ASE",
"GPAW"
] |
10782e4140d47b8882781bfb3c0bdd79cd464d1e4254a0d091e33e8c742109e4
|
from milecastles import Story, Box, ThroughPage, ThroughSequence, ConditionFork, NodeFork, SackChange
# inspects the module to figure out the story name (e.g. corbridge)
storyName = __name__.split(".")[-1]
# create story
story = Story(
uid=storyName,
# choose an alternative startNodeUid and startSack for debugging
#startNodeUid = "stoneFork",
startNodeUid = "landing",
startSack={
"points": 0,
"hours": 0,
"horseFat": 0,
"sword": False,
"armour": False,
"helmet": False,
"recon": False,
}
)
#reset cache: rm templates/t_housesteads*.py
with story:
# populate with boxes
barracksBox = Box( uid="1", label="Box I", description="the Barracks")
battleBox = Box( uid="2", label="Box II", description="the Battle")
westgateBox = Box( uid="3", label="Box III", description="the West Gate")
stablesBox = Box( uid="4", label="Box IV", description="the Stables")
entranceBox = barracksBox
# INTRO
#23456789012345678901234
ThroughSequence(
uid = "landing",
change = SackChange(
reset=story.startSack
),
goalBoxUid = barracksBox.uid,
nextNodeUid = "stables1",
sequence = [
""" Welcome to Milecastles!
Look for numbered boxes
and use your tag to
progress the story...""",
""" You are a member of
the Cavalry stationed
at Housesteads Fort.
News has reached
you of barbarians
approaching
from the West...""",
""" Your quest is to help
the fort repel the
attack using all of
your training...""",
""" You wake up in your
barracks.
MARCH up to the fort
and find the stables
at BOX IV.""",
],
)
#1. MORNING / BATTLE PREP
#23456789012345678901234
ThroughSequence(
uid = "stables1",
goalBoxUid = stablesBox.uid,
nextNodeUid = "stableFork",
sequence = [
""" You arrive at the
stables. The smelly
latrine is nearby.
HOLD YOUR NOSE for
20 seconds. Your
commander orders you
to prepare for battle...""",
""" You find your horse,
who seems a bit hungry.
You also have a
feeling that you should
do some scouting at the
West gate.""",
],
)
#23456789012345678901234
NodeFork(
uid = "stableFork",
choices = {
"food1": """ Collect horse food from
the South Gate""",
"recon1": """ Go to the West Gate to
survey the land""",
},
)
ThroughPage(
uid = "food1",
page = """ You collect a bale of
hay from the supply hut.
CARRY THE HAY back to
the stable to feed
your horse.""",
goalBoxUid = battleBox.uid,
nextNodeUid = "food2",
)
#23456789012345678901234
ThroughPage(
uid = "food2",
change = SackChange(
plus = {"horseFat":40},
),
page = """ Back at the stable,
Your horse munches
greedily. The horse's
stomach is now {{node.change.plus['horseFat']}}%
full.""",
goalBoxUid = stablesBox.uid,
nextNodeUid = "stables2",
)
#23456789012345678901234
ThroughPage(
uid = "recon1",
change = SackChange(
assign = {"recon":True},
),
missTemplate = """ This isn't the
West Gate!
Go to {{node.goalBox.label}}""",
page = """ At the WEST GATE, you
can see distant shapes
coming across the hills.
You MAKE A NOTE
of their position on
your slate and return
to the stables.""",
goalBoxUid = westgateBox.uid,
nextNodeUid = "stables2",
)
#23456789012345678901234
ThroughPage(
uid="stables2",
missTemplate = """ This isn't the stables!
Go to {{node.goalBox.label}}""",
page= """ You and your horse are
eager to enter the
battle. But are you
ready?
...""",
goalBoxUid = stablesBox.uid,
nextNodeUid="stablesCheck1",
)
ConditionFork(
uid= "stablesCheck1",
condition = "sack.sword==False and sack.horseFat==0",
trueNodeUid= "stablesCheck2",
falseNodeUid= "battlePrep1",
)
#23456789012345678901234
ThroughPage(
uid="stablesCheck2",
missTemplate = """ This isn't the stables!
Go to {{node.goalBox.label}}""",
page= """ You don't quite feel
ready yet...
What did you
forget to do?""",
goalBoxUid = stablesBox.uid,
nextNodeUid="stableFork1",
)
#23456789012345678901234
NodeFork(
uid = "stableFork1",
choices = {
"sword": """ Find your sword""",
"feed": """ Feed the horse""",
"groom": """ Find a helmet""",
},
hideChoices = {
"sword": "sack.sword==True",
"groom": "sack.helmet==True",
},
)
#23456789012345678901234
ThroughPage(
uid="feed",
change = SackChange(
plus={"horseFat":40},
),
goalBoxUid = battleBox.uid,
page= """ You lead your horse to
the supply hut. Pretend
to FEED THE HORSE.
The horse's stomach is
now {{node.change.plus['horseFat']}}% more full.""",
nextNodeUid="stablesCheck1",
)
#23456789012345678901234
ThroughPage(
uid="groom",
change = SackChange(
assign={"helmet":True},
),
page= """ Your groom is lazy!
You have to SHOUT AT HIM
before he will fetch
your helmet. Ouch...
It's too small!""",
goalBoxUid = westgateBox.uid,
nextNodeUid="stablesCheck1",
)
#23456789012345678901234
ThroughPage(
uid="sword",
missTemplate = "Head back to {{node.goalBox.label}}",
change=SackChange(
assign={"sword":True},
),
page= """ Your sword is buried
under some old wolf
skins and barrels.
You check the blade. OW!!
It's still sharp!
Now back to the stables.""",
goalBoxUid = barracksBox.uid,
nextNodeUid="stablesCheck1",
)
#2. MORNING INSPECTION / BATTLE
#23456789012345678901234
ThroughSequence(
uid="battlePrep1",
goalBoxUid = stablesBox.uid,
nextNodeUid="southGate1",
missTemplate = """ Quickly! Back to the
stables{{node.goalBox.label}}""",
sequence= [
""" The commander is getting
impatient. You need to
get to the battle.
'HRMPH! Let's check your
things.'""",
""" 'How full is your horse's
stomach?'
You answer:
{{sack.horseFat}}%, sir!""",
""" 'Do you have a sword
that's nice and sharp?'
You answer:
That's {{sack.sword}}, sir!""",
""" 'Are you even wearing
the right helmet?'
You answer:
That's {{sack.helmet}}, sir!""",
""" 'Well, I've seen better,
but we've no more time'.
MARCH to the South Gate
to start the battle'. """,
]
)
ConditionFork(
uid= "southGate1",
condition = "sack.recon == True",
trueNodeUid= "reconYes",
falseNodeUid= "reconNo",
)
#23456789012345678901234
ThroughSequence(
uid="reconYes",
goalBoxUid = battleBox.uid,
nextNodeUid="battling1",
sequence= [
""" You meet the cavalry
to tell them about what
you saw earlier and
discuss the best way
to attack...""",
""" You decide to flank
the enemy and catch
them in a pincer
movement. RIDE YOUR
HORSE out to fight
then come back...""",
]
) #23456789012345678901234
#23456789012345678901234
ThroughSequence(
uid="battling1",
missTemplate = """ Quick! Head back to the
Battle at {{node.goalBox.label}}""",
goalBoxUid = battleBox.uid,
nextNodeUid="stables3",
sequence= [
""" You BOP a barbarian.""",
""" You PAF a barbarian.""",
""" You WHACK a barbarian.""",
""" You CLOMP a barbarian.""",
""" You DISCOMBOBULATE
a barbarian.""",
""" You BIFF a barbarian.""",
""" You PWN a barbarian.""",
""" You PLOK a barbarian.""",
""" You CRUNK a barbarian.""",
""" This is getting rather
tiring! You'd better
take a rest and re-supply
at the stables.""",
]
)
#23456789012345678901234
ThroughSequence(
uid="reconNo",
goalBoxUid = battleBox.uid,
nextNodeUid="stables3",
missTemplate = "Quick! Head back to the BATTLE at {{node.goalBox.label}}",
sequence= [
""" Well...
You didn't take the
chance to scout the
area earlier and
the cavalry take a
beating...""",
""" You get BOPPED by a
barbarian...""",
""" Your horse gets
PAFFED by a
barbarian...""",
""" You get WHACKED by a
barbarian...""",
""" Your horse gets CLOMPED
by a barbarian...""",
""" You get DISCOMBOBULATED
by a barbarian...""",
""" Your horse gets
WHALLOPED by a
barbarian...""",
""" You get CRUNKED by a
barbarian...""",
""" Are you getting the
idea? It's not going
well is it? Better
take a rest, re-supply
and chill with your
horse at the stables...""",
]
)
#3. PM BATTLE PREP
#23456789012345678901234
ThroughSequence(
uid="stables3",
change = SackChange(
minus={"horseFat":20},  # plain sack key, matching the other SackChange calls
),
goalBoxUid = stablesBox.uid,
nextNodeUid="stablesFork3",
sequence = [
""" Tired from the battle,
you head back to the
stables to rest.""",
""" You fall asleep in
the hay for an hour.
When you wake up,
your horse is making
snorting noises...""",
""" The commander is
impatient, but
there's still a bit
of time to prepare
before going back
to the battle...""",
]
)
#23456789012345678901234
NodeFork(
uid = "stablesFork3",
choices = {
"hide1": """ This might be the time
to chicken out and HIDE
in the BARRACKS.""",
"groom2": """ Get armour from the
SOUTH GATE.""",
"feed3": """ Get more horse feed at
the WEST Gate""",
},
)
#23456789012345678901234
ThroughPage(
uid="groom2",
change=SackChange(
assign={"armour":True},
),
page= """ Your groom is asleep.
You wake him up with
a KICK. He presents
your armour. Except...
it's too big! Now
back to the inspection!""",
goalBoxUid = battleBox.uid,
nextNodeUid="stablesCheck3",
)
#23456789012345678901234
ThroughPage(
uid="feed3",
missTemplate = "MARCH over to {{node.goalBox.label}}",
change=SackChange(
plus={"horseFat":40},
),
page=
""" You lead your horse
to the hay bale.
He eats half, meaning
that he's {{node.change.plus['horseFat']}}%
more full. Now go back
to the stables.""",
goalBoxUid = westgateBox.uid,
nextNodeUid="stablesCheck3",
)
#23456789012345678901234
ThroughSequence(
uid="hide1",
goalBoxUid = barracksBox.uid,
nextNodeUid="landing",
missTemplate = "Head over to {{node.goalBox.label}}",
sequence= [
""" You played it safe
and decided to hide
out for the rest of
the battle...""",
""" Curled up in a wolf
skin, you dream of
being the hero or
heroine of the wall...""",
""" you THWACK a barbarian...""",
""" You THWAP a barbarian...""",
""" You WALLOP a barbarian...""",
""" You KERPOW a barbarian...""",
""" You DISMANTLE a barbarian...""",
""" You PUNCTURE a barbarian...""",
""" You CRUSH a barbarian...""",
""" Wow, this is a repetitive
dream...""",
""" The cavalry won the
battle but you were
lazy and your commander
will be FURIOUS!!""",
""" Hold your tag here
to try again or
visit another museum
to play as a different
character. """,
]
)
#4. PM INSPECTION / BATTLE
#23456789012345678901234
ThroughPage(
uid="stablesCheck3",
missTemplate = "Head over to {{node.goalBox.label}}",
page=
""" The forgetful
commander is here.
He wants to make an
inspection (again).
'First, let's look at
what you're wearing...'""",
goalBoxUid = stablesBox.uid,
nextNodeUid="gearCheck1",
)
#23456789012345678901234
ThroughPage(
uid="gearCheck1",
missTemplate = "Head over to {{node.goalBox.label}}",
page=
""" 'Did you find a HELMET?'
That's {{sack.helmet}} sir!
'Are you wearing sturdy
ARMOUR?'
That's {{sack.armour}} sir!""",
goalBoxUid = stablesBox.uid,
nextNodeUid="gearCheck2",
)
ConditionFork(
uid= "gearCheck2",
condition = "sack.armour == True and sack.helmet == True",
trueNodeUid= "goodGear",
falseNodeUid= "badGear",
)
#23456789012345678901234
ThroughPage(
uid="goodGear",
page=
""" 'YOU WILL MAKE THE
CAVALRY PROUD!
(Even though you look
a bit funny)
GOOD WORK! NOW, ONWARDS
TO BATTLE! HRMMM""",
goalBoxUid = stablesBox.uid,
nextNodeUid="horseCheck",
)
ThroughPage(
uid="badGear",
page=
""" 'YOU'RE A DISGRACE!
I'VE SEEN BETTER
DRESSED PONIES THAN
YOU!!! Oh well, I'll
have to let you pass
just this once...'""",
goalBoxUid = stablesBox.uid,
nextNodeUid="horseCheck",
)
#23456789012345678901234
ThroughPage(
uid = "horseCheck",
goalBoxUid = stablesBox.uid,
change = SackChange(
trigger = "sack.horseFat >= 80",
plus = {"horseFat":10},
),
page = """
{% if node.change.triggered %}
{% if node.change.completed %}
'LET ME LOOK AT YOUR
HORSE... OH DEAR
HE'S EATEN TOO MUCH!!'
Your horse's stomach is
{{sack.horseFat}}% full.
He's too fat to ride!
{% else %}
'LET ME SEE YOUR HORSE!
HMM, HE'S IN GOOD SHAPE!
Your horse's stomach is
a perfect {{node.change.plus['horseFat']}}% full.
He's Healthy and ready to
go to battle!
{% endif %}
{% else %}
'LET ME SEE YOUR HORSE!
HMM, HE'S IN GOOD SHAPE!
Your horse's stomach is
a perfect {{node.change.plus['horseFat']}}% full.
He's Healthy and ready to
go to battle! {% endif %}
""",
nextNodeUid = "endingCheck",
)
# CHECK THIS FORK!
ConditionFork(
uid= "endingCheck",
condition = "sack.horseFat>=90",
trueNodeUid= "badEnding",
falseNodeUid= "battling2",
)
#23456789012345678901234
ThroughSequence(
uid="battling2",
missTemplate = """ Quick! Head back to the
BATTLE at {{node.goalBox.label}}""",
goalBoxUid = battleBox.uid,
nextNodeUid="goodEnding",
sequence= [
""" You passed the
inspection!
RIDE out with your
fellow cavalry to
save the day!""",
""" You THWACK a barbarian..""",
""" You PINCH a barbarian...""",
""" You WHACK a barbarian...""",
""" You KERPOW a barbarian...""",
""" You DISCOMBOBULATE
a barbarian...""",
""" You PUNCTURE a barbarian
...""",
""" You CRUSH a barbarian...""",
""" Are you getting the
idea yet?""",
""" The cavalry won!!
You're tired now, but
you still manage to go
and celebrate at the
barracks.""",
]
)
#5. ENDINGS
#23456789012345678901234
ThroughPage(
uid="goodEnding",
page="""You are a legend! You
helped defend the fort!
HOLD YOUR TAG
to play again or
visit another museum
to play as a different
character!""",
goalBoxUid = barracksBox.uid,
nextNodeUid="landing",
)
ThroughPage(
uid="badEnding",
page="""That was a poor show...
You survived the battle
but your horse is fat
and you will be demoted.
USE YOUR TAG at BOX I
to try again!""",
goalBoxUid = stablesBox.uid,
nextNodeUid="landing",
)
|
cefn/avatap
|
python/stories/housesteads.py
|
Python
|
agpl-3.0
| 16,431
|
[
"VisIt"
] |
a4633e4c27d81677d1e3245d24647e361a070b1b6afe4882ace0b4b7b0121836
|
#!/usr/bin/env python3
'''
This script tests the correctness of the algorithm implementation
using the technique called 'stress testing'. It pseudorandomly generates
test cases for the algorithm, and then uses 'networkx' Python library
to compute the right answer. Then it calls the app to get its answers
and checks if they match.
'''
import sys
import math
import random
import argparse
import subprocess
import networkx
from networkx import dijkstra_path_length
from networkx.generators.random_graphs import gnm_random_graph
AUTO_MAX_EDGES = 1000
AUTO_MAX_VERTICES = 250
APP_TIMEOUT_SEC = 5
MAX_WEIGHT = 20000
# General base exception
class Error(Exception): pass
# Exception used by call_app when call timeout is expired
class TimeoutExpiredError(Error): pass
def generate_graph(n_vert, n_edges):
'''
Generate graph that consists of `n_vert` (int) vertices
and `n_edges` (int) edges. Return `networkx.classes.graph.Graph`.
'''
graph = gnm_random_graph(n_vert, n_edges, random.random(), True)
for v1, v2 in graph.edges():
w = random.randrange(1, MAX_WEIGHT)
graph.edge[v1][v2]["weight"] = w
return graph
def shortest_path(graph, src, dst):
'''
Return shortest path length on `graph` (`networkx.classes.graph.Graph`)
from `src` (int) to `dst` (int). If path does not exist, return -1.
'''
try:
return dijkstra_path_length(graph, src, dst)
except networkx.exception.NetworkXNoPath:
return -1
def call_app(app, graph, queries):
'''
Call `app` (path string), feeding `graph` (`networkx.classes.graph.Graph`)
to its stdin, then `queries` (tuples of source and destination
vertex indices). Return list of ints, where i-th element denotes
the length of the shortest path corresponding to queries[i].
'''
stdin = make_stdin(graph, queries)
try:
pipe = subprocess.Popen([app], stdout=subprocess.PIPE, stderr=subprocess.PIPE,
stdin=subprocess.PIPE)
except FileNotFoundError as exc:
raise Error(exc.strerror)
try:
res, err = pipe.communicate(stdin.encode("ascii"), timeout=APP_TIMEOUT_SEC)
except subprocess.TimeoutExpired:
raise TimeoutExpiredError("timeout expired while calling the app")
else:
lines = res.decode("ascii").split()
return [int(x) for x in lines[1:]]
def make_stdin(graph, queries):
'''
Transform `graph` (`networkx.classes.graph.Graph`) and `queries` (tuples
of source and destination vertex indices) into a string representation
suitable for the app stdin.
'''
return "p sp {n_vert} {n_edges}\n" \
"{edges}\n" \
"{n_queries}\n" \
"{queries}\n" \
.format(
n_vert=len(graph.nodes()), n_edges=len(graph.edges()),
edges="\n".join("a {} {} {}".format(v1+1, v2+1, graph.edge[v1][v2]["weight"])
for v1, v2 in graph.edges()),
n_queries=len(queries),
queries="\n".join("{} {}".format(q[0]+1, q[1]+1) for q in queries)
)
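For reference, here is a hand-built sketch (a hypothetical helper, not part of the test script) of the DIMACS-style text that `make_stdin` produces for a two-vertex graph with one edge and one query. Vertex indices are shifted from 0-based to 1-based, matching the `+1` offsets above:

```python
# Build the same "p sp ..." text by hand for a tiny graph, without networkx.
def make_stdin_example():
    edges = [(0, 1, 7)]    # (v1, v2, weight), 0-based indices
    queries = [(0, 1)]
    lines = ["p sp 2 1"]   # header: problem type, n_vert, n_edges
    lines += ["a {} {} {}".format(v1 + 1, v2 + 1, w) for v1, v2, w in edges]
    lines.append(str(len(queries)))
    lines += ["{} {}".format(s + 1, d + 1) for s, d in queries]
    return "\n".join(lines) + "\n"

print(make_stdin_example())
# p sp 2 1
# a 1 2 7
# 1
# 1 2
```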
def make_report(graph, queries, true_answers, my_answers=None):
'''
Make a human-readable report on the test case. Parameters:
+ `graph` — `networkx.classes.graph.Graph`;
+ `queries` — tuples of source and destination vertex indices;
+ `true_answers` — a list of ints, where i-th int denotes the shortest
path length corresponding to queries[i];
+ `my_answers` — same as `true_answers`, but returned from the app.
Return: string.
'''
s = "STDIN:\n{}EXPECTED OUTPUT:\n{}".format(
make_stdin(graph, queries), "\n".join(str(x) for x in true_answers))
if my_answers is not None:
s += "\nREAL OUTPUT:\n{}".format("\n".join(str(x) for x in my_answers))
return s
def main():
''' Entry point of the application. '''
parser = argparse.ArgumentParser()
parser.add_argument("app", help="path to the application")
parser.add_argument("-t", "--tests", type=int, default=100,
help="number of tests to perform")
parser.add_argument("-q", "--queries", type=int, default=10,
help="queries per test")
parser.add_argument("-v", "--vertices", type=int, help="number of graph vertices")
parser.add_argument("-e", "--edges", type=int, help="number of edges")
parser.add_argument("-r", "--random", type=int, default=42,
help="PRNG seed")
args = parser.parse_args()
random.seed(args.random)
for i in range(args.tests):
n_vert = args.vertices if args.vertices is not None \
else random.randrange(1, AUTO_MAX_VERTICES)
n_edges = args.edges if args.edges is not None \
else random.randrange(min(AUTO_MAX_EDGES, (n_vert*(n_vert-1))//2)+1)
graph = generate_graph(n_vert, n_edges)
queries = [(random.randrange(n_vert), random.randrange(n_vert))
for _ in range(args.queries)]
true_answers = [shortest_path(graph, src, dst) for src, dst in queries]
try:
my_answers = call_app(args.app, graph, queries)
except TimeoutExpiredError as exc:
print("Timeout expired on test #{}".format(i+1))
print(make_report(graph, queries, true_answers))
sys.exit(1)
except Error as exc:
print("Failed to call app: {}".format(exc))
sys.exit(2)
if true_answers == my_answers:
print("Passed tests: {}/{}".format(i+1, args.tests), end="\r")
else:
print("Failed test #{}".format(i+1))
print(make_report(graph, queries, true_answers, my_answers))
sys.exit(3)
print("")
sys.exit(0)
if __name__ == "__main__":
main()
|
vnetserg/contraction
|
test.py
|
Python
|
mit
| 5,845
|
[
"ASE"
] |
cbd301ce7b46b3c426e9abbfea0898a7e4ea616d15e0cfdfa0a8b18be0d46727
|
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Handles directives.
This converter removes the directive functions from the code and moves the
information they specify into AST annotations. It is a specialized form of
static analysis, one that is specific to AutoGraph.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import gast
from tensorflow.contrib.autograph.core import converter
from tensorflow.contrib.autograph.lang import directives
from tensorflow.contrib.autograph.pyct import anno
from tensorflow.python.util import tf_inspect
ENCLOSING_LOOP = 'enclosing_loop'
def _map_args(call_node, function):
"""Maps AST call nodes to the actual function's arguments.
Args:
call_node: ast.Call
function: Callable[..., Any], the actual function matching call_node
Returns:
Dict[Text, ast.AST], mapping each of the function's argument names to
the respective AST node.
"""
args = call_node.args
kwds = {kwd.arg: kwd.value for kwd in call_node.keywords}
return tf_inspect.getcallargs(function, *args, **kwds)
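The name-binding that `_map_args` relies on can be demonstrated with the stdlib's `inspect.getcallargs`, which `tf_inspect.getcallargs` wraps. A minimal sketch (the stand-in function below is illustrative, not the real directive implementation):

```python
# inspect.getcallargs maps positional and keyword arguments onto a
# callable's parameter names, exactly what _map_args needs per directive.
import inspect

def set_loop_options(parallel_iterations=None, back_prop=True):
    """Stand-in for a directive function."""

bound = inspect.getcallargs(set_loop_options, 10, back_prop=False)
print(bound)  # {'parallel_iterations': 10, 'back_prop': False}
```

In the converter, the values in this mapping are AST nodes rather than plain Python values, but the binding logic is the same.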
class DirectivesTransformer(converter.Base):
"""Parses compiler directives and converts them into AST annotations."""
def _process_symbol_directive(self, call_node, directive):
if len(call_node.args) < 1:
raise ValueError('"%s" requires a positional first argument'
' as the target' % directive.__name__)
target = call_node.args[0]
defs = anno.getanno(target, anno.Static.ORIG_DEFINITIONS)
for def_ in defs:
def_.directives[directive] = _map_args(call_node, directive)
return call_node
def _process_statement_directive(self, call_node, directive):
if self.local_scope_level < 1:
raise ValueError(
'"%s" must be used inside a statement' % directive.__name__)
target = self.get_local(ENCLOSING_LOOP)
node_anno = anno.getanno(target, converter.AgAnno.DIRECTIVES, {})
node_anno[directive] = _map_args(call_node, directive)
anno.setanno(target, converter.AgAnno.DIRECTIVES, node_anno)
return call_node
def visit_Expr(self, node):
if isinstance(node.value, gast.Call):
call_node = node.value
if anno.hasanno(call_node.func, 'live_val'):
live_val = anno.getanno(call_node.func, 'live_val')
if live_val is directives.set_element_type:
call_node = self._process_symbol_directive(call_node, live_val)
elif live_val is directives.set_loop_options:
call_node = self._process_statement_directive(call_node, live_val)
else:
return self.generic_visit(node)
return None # Directive calls are not output in the generated code.
return self.generic_visit(node)
# TODO(mdan): This will be insufficient for other control flow.
# That means that if we ever have a directive that affects things other than
# loops, we'll need support for parallel scopes, or have multiple converters.
def _track_and_visit_loop(self, node):
self.enter_local_scope()
self.set_local(ENCLOSING_LOOP, node)
node = self.generic_visit(node)
self.exit_local_scope()
return node
def visit_While(self, node):
return self._track_and_visit_loop(node)
def visit_For(self, node):
return self._track_and_visit_loop(node)
def transform(node, ctx):
return DirectivesTransformer(ctx).visit(node)
|
jart/tensorflow
|
tensorflow/contrib/autograph/converters/directives.py
|
Python
|
apache-2.0
| 4,040
|
[
"VisIt"
] |
7bb33dc3a47f6d5caee9ccef24c9d0a72344aa6639b97ac66831b5257f421ba3
|
# Copyright (C) 2019 The ESPResSo project
#
# This file is part of ESPResSo.
#
# ESPResSo is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ESPResSo is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import unittest as ut
import importlib_wrapper
import numpy as np
# make simulation deterministic
np.random.seed(42)
benchmark, skipIfMissingFeatures = importlib_wrapper.configure_and_import(
"@BENCHMARKS_DIR@/lj.py", cmd_arguments=["--particles_per_core", "800"],
measurement_steps=400, n_iterations=2, min_skin=0.225, max_skin=0.225)
@skipIfMissingFeatures
class Sample(ut.TestCase):
system = benchmark.system
if __name__ == "__main__":
ut.main()
|
pkreissl/espresso
|
testsuite/scripts/benchmarks/test_lj.py
|
Python
|
gpl-3.0
| 1,177
|
[
"ESPResSo"
] |
e0bf61376a0e985858e71854c392c808803f9a108651104ed667a1ef8c30d5ed
|
# -*- coding: utf-8 -*-
"""
This module contains routines to convert from SACData and yt DataSets into
tvtk fields via mayavi.
"""
import numpy as np
from mayavi.tools.sources import vector_field, scalar_field
__all__ = ['get_sacdata_mlab', 'get_yt_mlab', 'process_next_step_yt',
'process_next_step_sacdata', 'yt_to_mlab_vector']
def get_sacdata_mlab(f, cube_slice, flux=True):
"""
Reads in useful variables from a hdf5 file to vtk data structures
Parameters
----------
f : hdf5 file handle
SAC HDF5 file
flux : boolean
Read variables for flux calculation?
cube_slice : np.slice
Slice to apply to the arrays
Returns
-------
bfield, vfield[, density, valf, cs, beta]
"""
#Do this before convert_B
if flux:
va_f = f.get_va()
cs_f = f.get_cs()
thermal_p,mag_p = f.get_thermalp(beta=True)
beta_f = mag_p / thermal_p
#Convert B to Tesla
f.convert_B()
# Create TVTK datasets
bfield = vector_field(f.w_sac['b3'][cube_slice] * 1e3,
f.w_sac['b2'][cube_slice] * 1e3,
f.w_sac['b1'][cube_slice] * 1e3,
name="Magnetic Field",figure=None)
vfield = vector_field(f.w_sac['v3'][cube_slice] / 1e3,
f.w_sac['v2'][cube_slice] / 1e3,
f.w_sac['v1'][cube_slice] / 1e3,
name="Velocity Field",figure=None)
if flux:
density = scalar_field(f.w_sac['rho'][cube_slice],
name="Density", figure=None)
valf = scalar_field(va_f, name="Alfven Speed", figure=None)
cs = scalar_field(cs_f, name="Sound Speed", figure=None)
beta = scalar_field(beta_f, name="Beta", figure=None)
return bfield, vfield, density, valf, cs, beta
else:
return bfield, vfield
def yt_to_mlab_vector(ds, xkey, ykey, zkey, cube_slice=np.s_[:,:,:],
field_name=""):
"""
Convert three yt keys to a mlab vector field
Parameters
----------
ds : yt dataset
The dataset to get the data from.
xkey, ykey, zkey : string
yt field names for the three vector components.
cube_slice : numpy slice
The array slice to crop the yt fields with.
field_name : string
The mlab name for the field.
Returns
-------
field : mayavi vector field
"""
cg = ds.index.grids[0]
return vector_field(cg[xkey][cube_slice],
cg[ykey][cube_slice],
cg[zkey][cube_slice],
name=field_name,figure=None)
def yt_to_mlab_scalar(ds, key, cube_slice=np.s_[:,:,:], field_name=""):
"""
Convert one yt key to a mlab scalar field
Parameters
----------
ds : yt dataset
The dataset to get the data from.
key : string
yt field names for the three vector components.
cube_slice : numpy slice
The array slice to crop the yt fields with.
field_name : string
The mlab name for the field.
Returns
-------
field : mayavi scalar field
"""
cg = ds.index.grids[0]
return scalar_field(cg[key][cube_slice], name=field_name, figure=None)
def update_yt_to_mlab_vector(field, ds, xkey, ykey, zkey,
cube_slice=np.s_[:,:,:]):
"""
Update a mayavi field based on a yt dataset.
Parameters
----------
field : mayavi vector field
The field to update.
ds : yt dataset
The dataset to get the data from.
xkey, ykey, zkey : string
yt field names for the three vector components.
cube_slice : numpy slice
The array slice to crop the yt fields with.
Returns
-------
field : mayavi vector field
"""
cg = ds.index.grids[0]
# Update Datasets
field.set(vector_data = np.rollaxis(np.array([cg[xkey][cube_slice],
cg[ykey][cube_slice],
cg[zkey][cube_slice]]),
0, 4))
return field
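The `np.rollaxis` step used above deserves a note: stacking three `(nx, ny, nz)` component arrays gives shape `(3, nx, ny, nz)`, and rolling axis 0 to the end produces `(nx, ny, nz, 3)`, the per-point vector layout that mayavi's `vector_data` setter expects. An illustrative sketch (not from the original module):

```python
# Shape transformation behind the vector_data update above.
import numpy as np

vx = np.zeros((4, 5, 6))           # x component on a 4x5x6 grid
vy = np.ones((4, 5, 6))            # y component
vz = np.full((4, 5, 6), 2.0)       # z component
vec = np.rollaxis(np.array([vx, vy, vz]), 0, 4)
print(vec.shape)  # (4, 5, 6, 3)
```

`np.moveaxis(stacked, 0, -1)` is the modern equivalent of this `rollaxis` call.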
def get_yt_mlab(ds, cube_slice, flux=True):
"""
Reads useful variables from yt into vtk data structures, converting them
to SI units before returning.
Parameters
----------
ds : yt dataset
with derived fields
flux : boolean
Read variables for flux calculation?
cube_slice : np.slice
Slice to apply to the arrays
Returns
-------
if flux:
bfield, vfield, density, valf, cs, beta
else:
bfield, vfield
"""
cg = ds.index.grids[0]
# Create TVTK datasets
bfield = yt_to_mlab_vector(ds, 'mag_field_x', 'mag_field_y', 'mag_field_z',
cube_slice=cube_slice,
field_name="Magnetic Field")
vfield = yt_to_mlab_vector(ds, 'velocity_x', 'velocity_y', 'velocity_z',
cube_slice=cube_slice,
field_name="Velocity Field")
if flux:
density = scalar_field(cg['density'][cube_slice] * 1e-3,
name="Density", figure=None)
valf = scalar_field(cg['alfven_speed'] * 1e-2, name="Alfven Speed", figure=None)
cs = scalar_field(cg['sound_speed'] * 1e-2, name="Sound Speed", figure=None)
beta = scalar_field(cg['plasma_beta'], name="Beta", figure=None)
return bfield, vfield, density, valf, cs, beta
else:
return bfield, vfield
def process_next_step_yt(ds, cube_slice, bfield, vfield, density, valf, cs, beta):
"""
Update all mayavi sources using a yt dataset, in SI units.
Parameters
----------
ds : yt dataset
The dataset to use to update the mayavi fields
cube_slice : np.s\_
A array slice to cut the yt fields with
bfield, vfield, density, valf, cs, beta : mayavi sources
The sources to update
Returns
-------
bfield, vfield, density, valf, cs, beta : mayavi sources
The updated sources
"""
cg = ds.index.grids[0]
# Update Datasets
bfield.set(vector_data = np.rollaxis(np.array([cg['mag_field_x'][cube_slice] * 1e4,
cg['mag_field_y'][cube_slice] * 1e4,
cg['mag_field_z'][cube_slice] * 1e4]),
0, 4))
vfield.set(vector_data = np.rollaxis(np.array([cg['velocity_x'][cube_slice] * 1e-2,
cg['velocity_y'][cube_slice] * 1e-2,
cg['velocity_z'][cube_slice] * 1e-2]),
0, 4))
valf.set(scalar_data = cg['alfven_speed'][cube_slice] * 1e-2)
cs.set(scalar_data = cg['sound_speed'][cube_slice] * 1e-2)
beta.set(scalar_data = cg['plasma_beta'][cube_slice])
density.set(scalar_data = cg['density'][cube_slice] * 1e-3)
return bfield, vfield, density, valf, cs, beta
def process_next_step_sacdata(f, cube_slice, bfield, vfield, density, valf, cs, beta):
"""
Update all mayavi sources using a SACData instance
Parameters
----------
f : SACData instance
The dataset to use to update the mayavi fields
cube_slice : np.s_
The array slice to crop the yt fields with.
bfield, vfield, density, valf, cs, beta : mayavi sources
The sources to update
Returns
-------
bfield, vfield, density, valf, cs, beta : mayavi sources
The updated sources
"""
va_f = f.get_va()
cs_f = f.get_cs()
thermal_p,mag_p = f.get_thermalp(beta=True)
beta_f = mag_p / thermal_p
density_f = f.w_sac['rho']
f.convert_B()
# Update Datasets
bfield.set(vector_data = np.rollaxis(np.array([f.w_sac['b3'][cube_slice] * 1e3,
f.w_sac['b2'][cube_slice] * 1e3,
f.w_sac['b1'][cube_slice] * 1e3]),
0, 4))
vfield.set(vector_data = np.rollaxis(np.array([f.w_sac['v3'][cube_slice] / 1e3,
f.w_sac['v2'][cube_slice] / 1e3,
f.w_sac['v1'][cube_slice] / 1e3]),
0, 4))
valf.set(scalar_data = va_f)
cs.set(scalar_data = cs_f)
beta.set(scalar_data = beta_f)
density.set(scalar_data = density_f)
return bfield, vfield, density, valf, cs, beta
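The stacking pattern used repeatedly above (three scalar component arrays rolled into a trailing vector axis) can be sketched with plain numpy; the component arrays here are placeholders standing in for the yt grid fields:

```python
import numpy as np

# Placeholder component arrays standing in for yt grid fields such as
# cg['mag_field_x'][cube_slice]; each has shape (nx, ny, nz).
nx, ny, nz = 4, 5, 6
bx = np.ones((nx, ny, nz))
by = np.full((nx, ny, nz), 2.0)
bz = np.full((nx, ny, nz), 3.0)

# np.array([bx, by, bz]) stacks the components on a new leading axis,
# giving shape (3, nx, ny, nz); np.rollaxis(..., 0, 4) moves that axis
# to the end, giving (nx, ny, nz, 3), the layout expected by vector_data.
vec = np.rollaxis(np.array([bx, by, bz]), 0, 4)

print(vec.shape)     # (4, 5, 6, 3)
print(vec[0, 0, 0])  # [1. 2. 3.]
```

The same transposition could be written with the more modern `np.moveaxis(stack, 0, -1)`.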
|
Cadair/pysac
|
pysac/analysis/tube3D/process_utils.py
|
Python
|
bsd-2-clause
| 8,784
|
[
"Mayavi",
"VTK"
] |
a442d496db2b880cba471c839df27a2a97fe3c68556f0049d559f75c6412e285
|
#!/usr/bin/env python
"""
forcing.py
Functions for generating ROMS forcing files (atmosphere, tides,
rivers, etc.)
Written by Brian Powell on 02/09/16
Copyright (c)2010--2021 University of Hawaii under the MIT-License.
"""
import numpy as np
from datetime import datetime
import netCDF4
import seapy
from collections import namedtuple
from rich.progress import track
# Define a named tuple to store raw data for the gridder
forcing_data = namedtuple('forcing_data', 'field ratio offset')
fields = ("Tair", "Qair", "Pair", "rain",
"Uwind", "Vwind", "lwrad_down", "swrad")
gfs_url = "http://oos.soest.hawaii.edu/thredds/dodsC/hioos/model/atm/ncep_global/NCEP_Global_Atmospheric_Model_best.ncd"
gfs_map = {
"pad": 1.0,
"frc_lat": "latitude",
"frc_lon": "longitude",
"frc_time": "time",
"Tair": forcing_data("tmp2m", 1, -273.15),
"Pair": forcing_data("prmslmsl", 0.01, 0),
"Qair": forcing_data("rh2m", 1, 0),
"rain": forcing_data("pratesfc", 1, 0),
"Uwind": forcing_data("ugrd10m", 1, 0),
"Vwind": forcing_data("vgrd10m", 1, 0),
"lwrad_down": forcing_data("dlwrfsfc", 1, 0),
"swrad": forcing_data("dswrfsfc", 1, 0)
}
ncep_map = {
"pad": 2.0,
"frc_lat": "lat",
"frc_lon": "lon",
"frc_time": "time",
"Tair": forcing_data("air", 1, -273.15),
"Pair": forcing_data("slp", 0.01, 0),
"Qair": forcing_data("rhum", 1, 0),
"rain": forcing_data("prate", 1, 0),
"Uwind": forcing_data("uwnd", 1, 0),
"Vwind": forcing_data("vwnd", 1, 0),
"lwrad_down": forcing_data("dlwrf", 1, 0),
"swrad": forcing_data("dswrf", 1, 0)
}
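Each map entry's `ratio` and `offset` feed a simple linear conversion when a source field is copied into the output file; a minimal sketch of that arithmetic (the raw input values here are made up):

```python
from collections import namedtuple

forcing_data = namedtuple('forcing_data', 'field ratio offset')

# Entries mirroring the maps above: Tair converts Kelvin to Celsius,
# Pair converts Pa to hPa.
tair = forcing_data("tmp2m", 1, -273.15)
pair = forcing_data("prmslmsl", 0.01, 0)

# The output loop applies: roms_value = source_value * ratio + offset
tair_roms = 300.0 * tair.ratio + tair.offset      # ~26.85 degC
pair_roms = 101325.0 * pair.ratio + pair.offset   # ~1013.25 hPa
```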
def gen_bulk_forcing(infile, fields, outfile, grid, start_time, end_time,
epoch=seapy.default_epoch, clobber=False, cdl=None):
"""
Given a source file (or URL) and a dictionary that maps the
source fields to the ROMS fields, generate a new bulk forcing
file for ROMS.
Parameters
----------
infile: string,
The source file (or URL) to load the data from
fields: dict,
A dictionary of the fields to load and process. The dictionary
is composed of:
"frc_lat":STRING name of latitude field in forcing file
"frc_lon":STRING name of longitude field in forcing file
"frc_time":STRING name of time field in forcing file
"frc_time_units":STRING optional, supply units of frc time
field in forcing file
keys of ROMS bulk forcing field names (Tair, Pair, Qair,
rain, Uwind, Vwind, lwrad_down, swrad) each with an
array of values of a named tuple (forcing_data) with the
following fields:
field: STRING value of the forcing field to use
ratio: FLOAT value to multiply with the source data
offset: FLOAT value to add to the source data
outfile: string,
Name of the output file to create
grid: seapy.model.grid or string,
Grid to use for selecting spatial region
start_time: datetime,
Starting time of data to process
end_time: datetime,
Ending time of data to process
epoch: datetime,
Epoch to use for ROMS times
clobber: bool optional,
Delete existing file or not, default False
cdl: string, optional,
Use the specified CDL file as the definition for the new
netCDF file.
Returns
-------
None: Generates an output file of bulk forcings
Examples
--------
To generate GFS forcing for the grid "mygrid.nc" for the year
2014, then use the standard GFS map definitions (and even
the built-in GFS archive url):
>>> seapy.roms.forcing.gen_bulk_forcing(seapy.roms.forcing.gfs_url,
seapy.roms.forcing.gfs_map, 'my_forcing.nc',
'mygrid.nc', datetime.datetime(2014,1,1),
datetime.datetime(2014,12,31))
NCEP reanalysis is trickier as the files are broken up by
variable type; hence, a separate file will be created for
each variable. We can use wildcards, though, to put together
multiple time periods (e.g., 2014 through 2015).
>>> seapy.roms.forcing.gen_bulk_forcing("uwnd.10m.*nc",
seapy.roms.forcing.ncep_map, 'ncep_frc.nc',
'mygrid.nc', datetime.datetime(2014,1,1),
datetime.datetime(2015,12,31), clobber=True)
>>> seapy.roms.forcing.gen_bulk_forcing("vwnd.10m.*nc",
seapy.roms.forcing.ncep_map, 'ncep_frc.nc',
'mygrid.nc', datetime.datetime(2014,1,1),
datetime.datetime(2015,12,31), clobber=False)
>>> seapy.roms.forcing.gen_bulk_forcing("air.2m.*nc",
seapy.roms.forcing.ncep_map, 'ncep_frc.nc',
'mygrid.nc', datetime.datetime(2014,1,1),
datetime.datetime(2015,12,31), clobber=False)
>>> seapy.roms.forcing.gen_bulk_forcing("dlwrf.sfc.*nc",
seapy.roms.forcing.ncep_map, 'ncep_frc.nc',
'mygrid.nc', datetime.datetime(2014,1,1),
datetime.datetime(2015,12,31), clobber=False)
>>> seapy.roms.forcing.gen_bulk_forcing("dswrf.sfc.*nc",
seapy.roms.forcing.ncep_map, 'ncep_frc.nc',
'mygrid.nc', datetime.datetime(2014,1,1),
datetime.datetime(2015,12,31), clobber=False)
>>> seapy.roms.forcing.gen_bulk_forcing("prate.sfc.*nc",
seapy.roms.forcing.ncep_map, 'ncep_frc.nc',
'mygrid.nc', datetime.datetime(2014,1,1),
datetime.datetime(2015,12,31), clobber=False)
>>> seapy.roms.forcing.gen_bulk_forcing("rhum.sig995.*nc",
seapy.roms.forcing.ncep_map, 'ncep_frc_rhum_slp.nc',
'mygrid.nc', datetime.datetime(2014,1,1),
datetime.datetime(2015,12,31), clobber=True)
>>> seapy.roms.forcing.gen_bulk_forcing("slp.*nc",
seapy.roms.forcing.ncep_map, 'ncep_frc_rhum_slp.nc',
'mygrid.nc', datetime.datetime(2014,1,1),
datetime.datetime(2015,12,31), clobber=False)
Two forcing files, 'ncep_frc.nc' and 'ncep_frc_rhum_slp.nc', are
generated for use with ROMS. NOTE: You will have to use 'ncks'
to eliminate the empty forcing fields between the two files
to prevent ROMS from loading them.
"""
# Load the grid
grid = seapy.model.asgrid(grid)
# Open the Forcing data
forcing = seapy.netcdf(infile)
# Gather the information about the forcing
if 'frc_time_units' in fields:
frc_time = netCDF4.num2date(forcing.variables[fields['frc_time']][:],
fields['frc_time_units'])
else:
frc_time = seapy.roms.num2date(forcing, fields['frc_time'])
# Figure out the time records that are required
time_list = np.where(np.logical_and(frc_time >= start_time,
frc_time <= end_time))[0]
if time_list.size == 0:
raise Exception("Cannot find valid times")
# Get the latitude and longitude ranges
minlat = np.floor(np.min(grid.lat_rho)) - fields['pad']
maxlat = np.ceil(np.max(grid.lat_rho)) + fields['pad']
minlon = np.floor(np.min(grid.lon_rho)) - fields['pad']
maxlon = np.ceil(np.max(grid.lon_rho)) + fields['pad']
frc_lon = forcing.variables[fields['frc_lon']][:]
frc_lat = forcing.variables[fields['frc_lat']][:]
# Make the forcing lat/lon on 2D grid
if frc_lon.ndim == 3:
frc_lon = np.squeeze(frc_lon[0, :, :])
frc_lat = np.squeeze(frc_lat[0, :, :])
elif frc_lon.ndim == 1:
frc_lon, frc_lat = np.meshgrid(frc_lon, frc_lat)
# Find the values in our region
if not grid.east():
frc_lon[frc_lon > 180] -= 360
region_list = np.where(np.logical_and.reduce((
frc_lon <= maxlon,
frc_lon >= minlon,
frc_lat <= maxlat,
frc_lat >= minlat)))
if region_list[0].size == 0:
raise Exception("Cannot find valid region")
eta_list = np.s_[np.min(region_list[0]):np.max(region_list[0]) + 1]
xi_list = np.s_[np.min(region_list[1]):np.max(region_list[1]) + 1]
frc_lon = frc_lon[eta_list, xi_list]
frc_lat = frc_lat[eta_list, xi_list]
# Create the output file
out = seapy.roms.ncgen.create_frc_bulk(outfile, lat=frc_lat.shape[0],
lon=frc_lon.shape[1], reftime=epoch,
clobber=clobber, cdl=cdl)
out.variables['frc_time'][:] = seapy.roms.date2num(
frc_time[time_list], out, 'frc_time')
out.variables['lat'][:] = frc_lat
out.variables['lon'][:] = frc_lon
# Loop over the fields and fill out the output file
for f in track(list(set(fields.keys()) & (out.variables.keys()))):
if hasattr(fields[f], 'field') and fields[f].field in forcing.variables:
out.variables[f][:] = \
forcing.variables[fields[f].field][time_list, eta_list, xi_list] * \
fields[f].ratio + fields[f].offset
out.sync()
out.close()
def gen_direct_forcing(his_file, frc_file, cdl=None):
"""
Generate a direct forcing file from a history (or other ROMS output) file. It
requires that sustr, svstr, shflux, and ssflux (or swflux), along with salt, be
available. The generated forcing file contains: sustr, svstr, swflux, and ssflux.
Parameters
----------
his_file: string,
The ROMS history (or other) file(s) (can use wildcards) that contains the fields to
make forcing from
frc_file: string,
The output forcing file
cdl: string, optional,
Use the specified CDL file as the definition for the new
netCDF file.
Returns
-------
None: Generates an output file of direct forcings
"""
import os
infile = seapy.netcdf(his_file)
ref, _ = seapy.roms.get_reftime(infile)
# Create the output file
nc = seapy.roms.ncgen.create_frc_direct(frc_file,
eta_rho=infile.dimensions[
'eta_rho'].size,
xi_rho=infile.dimensions[
'xi_rho'].size,
reftime=ref,
clobber=True,
title="Forcing from " +
os.path.basename(his_file),
cdl=cdl)
# Copy the data over
time = seapy.roms.num2date(infile, 'ocean_time')
nc.variables['frc_time'][:] = seapy.roms.date2num(time, nc, 'frc_time')
for x in track(seapy.chunker(range(len(time)), 1000)):
nc.variables['SSS'][x, :, :] = seapy.convolve_mask(
infile.variables['salt'][x, -1, :, :], copy=False)
if 'EminusP' in infile.variables:
nc.variables['swflux'][x, :, :] = seapy.convolve_mask(
infile.variables['EminusP'][x, :, :], copy=False) * 86400
elif 'swflux' in infile.variables:
nc.variables['swflux'][x, :, :] = seapy.convolve_mask(
infile.variables['swflux'][x, :, :], copy=False)
else:
nc.variables['swflux'][x, :, :] = seapy.convolve_mask(
infile.variables['ssflux'][x, :, :]
/ nc.variables['SSS'][x, :, :], copy=False)
nc.sync()
for f in ("sustr", "svstr", "shflux", "swrad"):
if f in infile.variables:
nc.variables[f][x, :, :] = seapy.convolve_mask(
infile.variables[f][x, :, :], copy=False)
nc.sync()
for f in ("lat_rho", "lat_u", "lat_v", "lon_rho", "lon_u", "lon_v"):
if f in infile.variables:
nc.variables[f][:] = infile.variables[f][:]
nc.close()
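The salt-flux fallback in gen_direct_forcing (taken when neither EminusP nor swflux is present in the history file) divides ssflux by the sea-surface salinity; a minimal numpy sketch of that branch, with the mask convolution omitted and made-up field values:

```python
import numpy as np

# Made-up surface fields: sea-surface salinity and surface salt flux.
sss = np.array([[35.0, 34.0],
                [36.0, 35.5]])
ssflux = np.array([[3.5e-4, 0.0],
                   [-3.6e-4, 7.1e-4]])

# Fallback branch: derive the surface freshwater flux from the salt flux.
swflux = ssflux / sss
```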
|
powellb/seapy
|
seapy/roms/forcing.py
|
Python
|
mit
| 11,817
|
[
"Brian",
"NetCDF"
] |
c645d15e93af17afb3630b948f25cce4a84c48f2da97bf0bbed53b6a8cdb5e40
|
from ase import *
h2 = Atoms('H2', positions=[(0, 0, 0), (0, 0, 1.1)],
calculator=EMT())
print(h2.calc.get_numeric_forces(h2))
print(h2.get_forces())
|
freephys/python_ase
|
ase/test/h2.py
|
Python
|
gpl-3.0
| 161
|
[
"ASE"
] |
356b9015e9d264f276c884974c9790ef759fe519683b3a134bd8b8ffaaa2add6
|
from math import sqrt
from typing import List
import pytest
from guacamol.common_scoring_functions import IsomerScoringFunction, SMARTSScoringFunction
from guacamol.score_modifier import GaussianModifier
from guacamol.scoring_function import BatchScoringFunction, ArithmeticMeanScoringFunction, GeometricMeanScoringFunction
from guacamol.utils.math import geometric_mean
class MockScoringFunction(BatchScoringFunction):
"""
Mock scoring function that returns values from an array given in the constructor.
"""
def __init__(self, values: List[float]) -> None:
super().__init__()
self.values = values
self.index = 0
def raw_score_list(self, smiles_list: List[str]) -> List[float]:
start = self.index
self.index += len(smiles_list)
end = self.index
return self.values[start:end]
def test_isomer_scoring_function_uses_geometric_mean_by_default():
scoring_function = IsomerScoringFunction('C2H4')
assert scoring_function.mean_function == geometric_mean
def test_isomer_scoring_function_returns_one_for_correct_molecule():
c11h24_arithmetic = IsomerScoringFunction('C11H24', mean_function='arithmetic')
c11h24_geometric = IsomerScoringFunction('C11H24', mean_function='geometric')
# all those smiles fit the formula C11H24
smiles1 = 'CCCCCCCCCCC'
smiles2 = 'CC(CCC)CCCCCC'
smiles3 = 'CCCCC(CC(C)CC)C'
assert c11h24_arithmetic.score(smiles1) == 1.0
assert c11h24_arithmetic.score(smiles2) == 1.0
assert c11h24_arithmetic.score(smiles3) == 1.0
assert c11h24_geometric.score(smiles1) == 1.0
assert c11h24_geometric.score(smiles2) == 1.0
assert c11h24_geometric.score(smiles3) == 1.0
def test_isomer_scoring_function_penalizes_additional_atoms():
c11h24_arithmetic = IsomerScoringFunction('C11H24', mean_function='arithmetic')
c11h24_geometric = IsomerScoringFunction('C11H24', mean_function='geometric')
# all those smiles are C11H24O
smiles1 = 'CCCCCCCCCCCO'
smiles2 = 'CC(CCC)COCCCCC'
smiles3 = 'CCCCOC(CC(C)CC)C'
# the penalty corresponds to a deviation of 1.0 from the gaussian modifier for the total number of atoms
n_atoms_score = GaussianModifier(mu=0, sigma=2)(1.0)
c_score = 1.0
h_score = 1.0
expected_score_arithmetic = (n_atoms_score + c_score + h_score) / 3.0
expected_score_geometric = (n_atoms_score * c_score * h_score) ** (1 / 3)
assert c11h24_arithmetic.score(smiles1) == pytest.approx(expected_score_arithmetic)
assert c11h24_arithmetic.score(smiles2) == pytest.approx(expected_score_arithmetic)
assert c11h24_arithmetic.score(smiles3) == pytest.approx(expected_score_arithmetic)
assert c11h24_geometric.score(smiles1) == pytest.approx(expected_score_geometric)
assert c11h24_geometric.score(smiles2) == pytest.approx(expected_score_geometric)
assert c11h24_geometric.score(smiles3) == pytest.approx(expected_score_geometric)
def test_isomer_scoring_function_penalizes_incorrect_number_atoms():
c12h24_arithmetic = IsomerScoringFunction('C12H24', mean_function='arithmetic')
c12h24_geometric = IsomerScoringFunction('C12H24', mean_function='geometric')
# all those smiles fit the formula C11H24O
smiles1 = 'CCCCCCCCOCCC'
smiles2 = 'CC(CCOC)CCCCCC'
smiles3 = 'COCCCC(CC(C)CC)C'
# the penalty corresponds to a deviation of 1.0 from the gaussian modifier in the number of C atoms
c_score = GaussianModifier(mu=0, sigma=1)(1.0)
n_atoms_score = 1.0
h_score = 1.0
expected_score_arithmetic = (n_atoms_score + c_score + h_score) / 3.0
expected_score_geometric = (n_atoms_score * c_score * h_score) ** (1 / 3)
assert c12h24_arithmetic.score(smiles1) == pytest.approx(expected_score_arithmetic)
assert c12h24_arithmetic.score(smiles2) == pytest.approx(expected_score_arithmetic)
assert c12h24_arithmetic.score(smiles3) == pytest.approx(expected_score_arithmetic)
assert c12h24_geometric.score(smiles1) == pytest.approx(expected_score_geometric)
assert c12h24_geometric.score(smiles2) == pytest.approx(expected_score_geometric)
assert c12h24_geometric.score(smiles3) == pytest.approx(expected_score_geometric)
def test_isomer_scoring_function_invalid_molecule():
sf = IsomerScoringFunction('C60')
assert sf.score('CCCinvalid') == sf.corrupt_score
def test_smarts_function():
mol1 = 'COc1cc(N(C)CCN(C)C)c(NC(=O)C=C)cc1Nc2nccc(n2)c3cn(C)c4ccccc34'
mol2 = 'Cc1c(C)c2OC(C)(COc3ccc(CC4SC(=O)NC4=O)cc3)CCc2c(C)c1O'
smarts = '[#7;h1]c1ncccn1'
scofu1 = SMARTSScoringFunction(target=smarts)
scofu_inv = SMARTSScoringFunction(target=smarts, inverse=True)
assert scofu1.score(mol1) == 1.0
assert scofu1.score(mol2) == 0.0
assert scofu_inv.score(mol1) == 0.0
assert scofu_inv.score(mol2) == 1.0
assert scofu1.score_list([mol1])[0] == 1.0
assert scofu1.score_list([mol2])[0] == 0.0
def test_arithmetic_mean_scoring_function():
# define a scoring function returning the mean from two mock functions
# and assert that it returns the correct values.
weight_1 = 0.4
weight_2 = 0.6
mock_values_1 = [0.232, 0.665, 0.0, 1.0, 0.993]
mock_values_2 = [0.010, 0.335, 0.8, 0.3, 0.847]
mock_1 = MockScoringFunction(mock_values_1)
mock_2 = MockScoringFunction(mock_values_2)
scoring_function = ArithmeticMeanScoringFunction(scoring_functions=[mock_1, mock_2],
weights=[weight_1, weight_2])
smiles = ['CC'] * 5
scores = scoring_function.score_list(smiles)
expected = [weight_1 * v1 + weight_2 * v2 for v1, v2 in zip(mock_values_1, mock_values_2)]
assert scores == expected
def test_geometric_mean_scoring_function():
# define a scoring function returning the geometric mean from two mock functions
# and assert that it returns the correct values.
mock_values_1 = [0.232, 0.665, 0.0, 1.0, 0.993]
mock_values_2 = [0.010, 0.335, 0.8, 0.3, 0.847]
mock_1 = MockScoringFunction(mock_values_1)
mock_2 = MockScoringFunction(mock_values_2)
scoring_function = GeometricMeanScoringFunction(scoring_functions=[mock_1, mock_2])
smiles = ['CC'] * 5
scores = scoring_function.score_list(smiles)
expected = [sqrt(v1 * v2) for v1, v2 in zip(mock_values_1, mock_values_2)]
assert scores == expected
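Per molecule, the two mean-combination tests above reduce to one-line formulas; a minimal sketch reusing the first pair of mock values and the weights from the arithmetic-mean test:

```python
from math import sqrt

# First pair of mock scores and the weights used in the arithmetic-mean test.
v1, v2 = 0.232, 0.010
w1, w2 = 0.4, 0.6

# ArithmeticMeanScoringFunction combines the per-molecule scores linearly...
arithmetic = w1 * v1 + w2 * v2

# ...while GeometricMeanScoringFunction takes the (unweighted) geometric mean.
geometric = sqrt(v1 * v2)
```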
|
BenevolentAI/guacamol
|
tests/test_scoring_functions.py
|
Python
|
mit
| 6,376
|
[
"Gaussian"
] |
8ae029204ed0b0348379e65c691b10ce6d4e603cfe60ad43b1e76327fbba75a3
|
#!/usr/bin/env python
'''
MO integrals in PBC code
'''
import numpy
from pyscf import ao2mo
from pyscf.pbc import gto, scf, tools
cell = gto.M(
a = numpy.eye(3)*3.5668,
atom = '''C 0. 0. 0.
C 0.8917 0.8917 0.8917
C 1.7834 1.7834 0.
C 2.6751 2.6751 0.8917
C 1.7834 0. 1.7834
C 2.6751 0.8917 2.6751
C 0. 1.7834 1.7834
C 0.8917 2.6751 2.6751''',
basis = 'gth-szv',
pseudo = 'gth-pade',
verbose = 4,
)
#
# MO integrals at Gamma point.
#
mf = scf.RHF(cell).run()
eri = mf.with_df.ao2mo(mf.mo_coeff)
#
# These integrals are real. They can be transformed under the 8-fold
# permutation symmetry.
#
eri = ao2mo.restore(8, eri, mf.mo_coeff.shape[1])
#
# MO integrals at arbitrary k-point.
#
nk = [2,2,2]
kpts = cell.make_kpts(nk)
kmf = scf.KRHF(cell, kpts).run()
nmo = kmf.mo_coeff[0].shape[1]
kconserv = tools.get_kconserv(cell, kpts)
nkpts = len(kpts)
for kp in range(nkpts):
for kq in range(nkpts):
for kr in range(nkpts):
ks = kconserv[kp,kq,kr]
eri_kpt = kmf.with_df.ao2mo([kmf.mo_coeff[i] for i in (kp,kq,kr,ks)],
[kpts[i] for i in (kp,kq,kr,ks)])
eri_kpt = eri_kpt.reshape([nmo]*4)
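get_kconserv encodes crystal-momentum conservation: for each (kp, kq, kr) it returns the ks such that kp - kq + kr - ks equals a reciprocal-lattice vector. A toy 1D analogue on a uniform nk-point mesh (index arithmetic only, not the real pyscf routine):

```python
import numpy as np

# On a uniform mesh, momentum conservation reduces to modular index
# arithmetic: ks = (kp - kq + kr) mod nk.
nk = 4
kconserv = np.empty((nk, nk, nk), dtype=int)
for kp in range(nk):
    for kq in range(nk):
        for kr in range(nk):
            kconserv[kp, kq, kr] = (kp - kq + kr) % nk
```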
|
gkc1000/pyscf
|
examples/pbc/30-mo_integrals.py
|
Python
|
apache-2.0
| 1,370
|
[
"PySCF"
] |
81628f91de436e17e572879b76736f0960834375570f64ef4a2965131ac72d4e
|
# encoding: utf-8
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding field 'ChangeRequest.request_type'
db.add_column('core_changerequest', 'request_type', self.gf('django.db.models.fields.CharField')(default='earlier_date', max_length=100), keep_default=False)
def backwards(self, orm):
# Deleting field 'ChangeRequest.request_type'
db.delete_column('core_changerequest', 'request_type')
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'core.authprofile': {
'Meta': {'object_name': 'AuthProfile'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'patient': ('django.db.models.fields.related.OneToOneField', [], {'to': "orm['core.Patient']", 'unique': 'True'}),
'user': ('django.db.models.fields.related.OneToOneField', [], {'to': "orm['auth.User']", 'unique': 'True'})
},
'core.changerequest': {
'Meta': {'object_name': 'ChangeRequest'},
'created_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'request': ('django.db.models.fields.TextField', [], {}),
'request_type': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'status': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'updated_at': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'visit': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['core.Visit']"})
},
'core.clinic': {
'Meta': {'object_name': 'Clinic'},
'active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'te_id': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '2'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'clinic'", 'null': 'True', 'to': "orm['auth.User']"})
},
'core.event': {
'Meta': {'object_name': 'Event'},
'created_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'updated_at': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
},
'core.historicalpatient': {
'Meta': {'ordering': "('-history_id',)", 'object_name': 'HistoricalPatient'},
'active_msisdn': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['core.MSISDN']", 'null': 'True', 'blank': 'True'}),
'age': ('django.db.models.fields.IntegerField', [], {}),
'created_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'deceased': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'deleted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'disclosed': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'history_date': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'history_id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'history_type': ('django.db.models.fields.CharField', [], {'max_length': '1'}),
'id': ('django.db.models.fields.IntegerField', [], {'db_index': 'True', 'blank': 'True'}),
'language': ('django.db.models.fields.related.ForeignKey', [], {'default': '1', 'to': "orm['core.Language']"}),
'last_clinic': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['core.Clinic']", 'null': 'True', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100', 'blank': 'True'}),
'opted_in': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'owner': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"}),
'regiment': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'}),
'risk_profile': ('django.db.models.fields.FloatField', [], {'null': 'True', 'blank': 'True'}),
'sex': ('django.db.models.fields.CharField', [], {'max_length': '3'}),
'surname': ('django.db.models.fields.CharField', [], {'max_length': '100', 'blank': 'True'}),
'te_id': ('django.db.models.fields.CharField', [], {'max_length': '10', 'db_index': 'True'}),
'updated_at': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
},
'core.historicalvisit': {
'Meta': {'ordering': "('-history_id',)", 'object_name': 'HistoricalVisit'},
'clinic': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['core.Clinic']"}),
'comment': ('django.db.models.fields.TextField', [], {'default': "''"}),
'created_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'date': ('django.db.models.fields.DateField', [], {}),
'deleted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'history_date': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'history_id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'history_type': ('django.db.models.fields.CharField', [], {'max_length': '1'}),
'id': ('django.db.models.fields.IntegerField', [], {'db_index': 'True', 'blank': 'True'}),
'patient': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['core.Patient']"}),
'status': ('django.db.models.fields.CharField', [], {'max_length': '1'}),
'te_visit_id': ('django.db.models.fields.CharField', [], {'max_length': '20', 'null': 'True', 'db_index': 'True'}),
'updated_at': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'visit_type': ('django.db.models.fields.CharField', [], {'max_length': '80', 'blank': 'True'})
},
'core.language': {
'Meta': {'object_name': 'Language'},
'attended_message': ('django.db.models.fields.TextField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'missed_message': ('django.db.models.fields.TextField', [], {}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'}),
'tomorrow_message': ('django.db.models.fields.TextField', [], {}),
'twoweeks_message': ('django.db.models.fields.TextField', [], {})
},
'core.msisdn': {
'Meta': {'ordering': "['-id']", 'object_name': 'MSISDN'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'msisdn': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '32'})
},
'core.patient': {
'Meta': {'ordering': "['created_at']", 'object_name': 'Patient'},
'active_msisdn': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['core.MSISDN']", 'null': 'True', 'blank': 'True'}),
'age': ('django.db.models.fields.IntegerField', [], {}),
'created_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'deceased': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'deleted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'disclosed': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'language': ('django.db.models.fields.related.ForeignKey', [], {'default': '1', 'to': "orm['core.Language']"}),
'last_clinic': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['core.Clinic']", 'null': 'True', 'blank': 'True'}),
'msisdns': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'contacts'", 'symmetrical': 'False', 'to': "orm['core.MSISDN']"}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100', 'blank': 'True'}),
'opted_in': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'owner': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"}),
'regiment': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'}),
'risk_profile': ('django.db.models.fields.FloatField', [], {'null': 'True', 'blank': 'True'}),
'sex': ('django.db.models.fields.CharField', [], {'max_length': '3'}),
'surname': ('django.db.models.fields.CharField', [], {'max_length': '100', 'blank': 'True'}),
'te_id': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '10'}),
'updated_at': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
},
'core.pleasecallme': {
'Meta': {'object_name': 'PleaseCallMe'},
'clinic': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'pcms'", 'null': 'True', 'to': "orm['core.Clinic']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'message': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'msisdn': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'pcms'", 'to': "orm['core.MSISDN']"}),
'notes': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'reason': ('django.db.models.fields.CharField', [], {'default': "'nc'", 'max_length': '2'}),
'timestamp': ('django.db.models.fields.DateTimeField', [], {}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"})
},
'core.visit': {
'Meta': {'ordering': "['date']", 'object_name': 'Visit'},
'clinic': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['core.Clinic']"}),
'comment': ('django.db.models.fields.TextField', [], {'default': "''"}),
'created_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'date': ('django.db.models.fields.DateField', [], {}),
'deleted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'patient': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['core.Patient']"}),
'status': ('django.db.models.fields.CharField', [], {'max_length': '1'}),
'te_visit_id': ('django.db.models.fields.CharField', [], {'max_length': '20', 'unique': 'True', 'null': 'True'}),
'updated_at': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'visit_type': ('django.db.models.fields.CharField', [], {'max_length': '80', 'blank': 'True'})
}
}
complete_apps = ['core']
|
praekelt/txtalert
|
txtalert/core/migrations/0013_auto__add_field_changerequest_request_type.py
|
Python
|
gpl-3.0
| 14,968
|
[
"VisIt"
] |
cd60ace17f6396ca34cbb03356b0ec8df4ced5916549d52cc6d09083ed3f0742
|
#!/usr/bin/env python
import vtk
from vtk.test import Testing
from vtk.util.misc import vtkGetDataRoot
VTK_DATA_ROOT = vtkGetDataRoot()
# Create the RenderWindow, Renderer
#
ren1 = vtk.vtkRenderer()
renWin = vtk.vtkRenderWindow()
renWin.AddRenderer(ren1)
iren = vtk.vtkRenderWindowInteractor()
iren.SetRenderWindow(renWin)
aPlane = vtk.vtkPlaneSource()
aPlane.SetCenter(-100,-100,-100)
aPlane.SetOrigin(-100,-100,-100)
aPlane.SetPoint1(-90,-100,-100)
aPlane.SetPoint2(-100,-90,-100)
aPlane.SetNormal(0,-1,1)
imageIn = vtk.vtkPNMReader()
imageIn.SetFileName("" + str(VTK_DATA_ROOT) + "/Data/earth.ppm")
texture = vtk.vtkTexture()
texture.SetInputConnection(imageIn.GetOutputPort())
texturePlane = vtk.vtkTextureMapToPlane()
texturePlane.SetInputConnection(aPlane.GetOutputPort())
texturePlane.AutomaticPlaneGenerationOn()
planeMapper = vtk.vtkPolyDataMapper()
planeMapper.SetInputConnection(texturePlane.GetOutputPort())
texturedPlane = vtk.vtkActor()
texturedPlane.SetMapper(planeMapper)
texturedPlane.SetTexture(texture)
# Add the actors to the renderer, set the background and size
#
ren1.AddActor(texturedPlane)
#ren1 SetBackground 1 1 1
renWin.SetSize(200,200)
renWin.Render()
renWin.Render()
# prevent the tk window from showing up then start the event loop
# --- end of script --
|
hlzz/dotfiles
|
graphics/VTK-7.0.0/Filters/Texture/Testing/Python/AutomaticPlaneGeneration.py
|
Python
|
bsd-3-clause
| 1,327
|
[
"VTK"
] |
1953250503e9f1f960ef58238cf0765c7c4397bd5eada00fe5127c60dbef05e3
|