{
"filename": "index.md",
"repo_name": "jiffyclub/palettable",
"repo_path": "palettable_extracted/palettable-master/docs/index.md",
"type": "Markdown"
}
---
title: Palettable
layout: home
content: []
tagline: Color palettes for Python
---
Palettable (formerly brewer2mpl) is a library of color palettes for Python.
It's written in pure Python with no dependencies,
but it can supply color maps for [matplotlib](http://matplotlib.org/).
You can use Palettable to customize matplotlib plots or
supply colors for a web application.
Palettable has color palettes from:
- [CartoColors][cartocolors]
- [cmocean][cmocean]
- [Colorbrewer2][colorbrewer]
- [Cubehelix][cubehelix]
- [Light & Bartlein][lightbartlein]
- [matplotlib][matplotlib]
- [MyCarta][mycarta]
- [Plotly][plotly]
- [Scientific][scientific]
- [Tableau][tableau]
- The [Wes Anderson Palettes][wesanderson] blog
# Documentation
## Installation
Palettable is available on PyPI for installation via pip:
`pip install palettable`.
Palettable is compatible with Python 2.6, 2.7, and 3.
## Finding Palettes
Palettes are pre-built and loaded at import time.
They have a naming convention of `<Name>_<number of colors>`.
For example, the Colorbrewer2 palette Dark2 with seven colors is
named `Dark2_7`.
Every palette also has a reversed version with the same name plus
the suffix `_r` (e.g. `Dark2_7_r`).
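The naming convention and the `_r` reversal can be sketched in plain Python (the palette name and RGB tuples below are hypothetical stand-ins, not actual palette data):

```python
# Hypothetical palette data, structured like Palettable's palettes.
name = "Dark2"
colors = [(27, 158, 119), (217, 95, 2), (117, 112, 179)]

# The "<Name>_<number of colors>" naming convention.
palette_name = "%s_%d" % (name, len(colors))
print(palette_name)  # Dark2_3

# A "_r" palette holds the same colors in reverse order.
reversed_colors = list(reversed(colors))
print(reversed_colors[0])  # (117, 112, 179)
```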
The modules with palettes are:
- [`palettable.cartocolors.diverging`][cartocolors/diverging]
- [`palettable.cartocolors.qualitative`][cartocolors/qualitative]
- [`palettable.cartocolors.sequential`][cartocolors/sequential]
- [`palettable.cmocean.diverging`][cmocean/diverging]
- [`palettable.cmocean.sequential`][cmocean/sequential]
- [`palettable.colorbrewer.diverging`][colorbrewer/diverging]
- [`palettable.colorbrewer.qualitative`][colorbrewer/qualitative]
- [`palettable.colorbrewer.sequential`][colorbrewer/sequential]
- [`palettable.lightbartlein.diverging`][lightbartlein/diverging]
- [`palettable.lightbartlein.sequential`][lightbartlein/sequential]
- [`palettable.matplotlib`][matplotlib]
- [`palettable.mycarta`][mycarta]
- [`palettable.plotly.diverging`][plotly/diverging]
- [`palettable.plotly.qualitative`][plotly/qualitative]
- [`palettable.plotly.sequential`][plotly/sequential]
- [`palettable.scientific.diverging`][scientific/diverging]
- [`palettable.scientific.sequential`][scientific/sequential]
- [`palettable.tableau`][tableau]
- [`palettable.wesanderson`][wesanderson]
The `Dark2_7` palette could be imported via:
```python
from palettable.colorbrewer.qualitative import Dark2_7
```
## Palette Interface
All the palette instances have a common interface including these attributes:
`name`
: The name of the palette.
`type`
: One of `'diverging'`, `'qualitative'`, or `'sequential'`.
`number`
: The number of defined colors in the palette.
`colors`
: The defined colors in the palette as a list of RGB tuples
in the range 0-255.
`hex_colors`
: Colors as a list of hex strings (e.g. '#A912F4').
`mpl_colors`
: Colors as a list of RGB tuples in the range 0-1 as used by matplotlib.
`mpl_colormap`
: A continuous, interpolated matplotlib
[`LinearSegmentedColormap`](http://matplotlib.org/api/colors_api.html#matplotlib.colors.LinearSegmentedColormap).
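The relationship between `colors`, `hex_colors`, and `mpl_colors` is simple scaling and formatting; a minimal sketch using hypothetical 0-255 RGB tuples (not pulled from an actual palette):

```python
# Hypothetical 0-255 RGB tuples, shaped like a palette's `colors` attribute.
colors = [(27, 158, 119), (217, 95, 2)]

# Equivalent of `hex_colors`: "#RRGGBB" strings.
hex_colors = ["#%02X%02X%02X" % rgb for rgb in colors]
print(hex_colors)  # ['#1B9E77', '#D95F02']

# Equivalent of `mpl_colors`: floats in the range 0-1, as matplotlib expects.
mpl_colors = [tuple(c / 255 for c in rgb) for rgb in colors]
print(mpl_colors[0])
```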
Palettes also have these methods:
`get_mpl_colormap`
: Use this method to get a matplotlib color map and pass custom keyword
arguments to
[`LinearSegmentedColormap.from_list`](http://matplotlib.org/api/colors_api.html#matplotlib.colors.LinearSegmentedColormap.from_list).
`show_as_blocks`
: Show the defined colors of the palette in the IPython Notebook.
Requires [ipythonblocks][] to be installed.
`show_discrete_image`
: Show the defined colors of the palette in the IPython Notebook.
Requires [matplotlib][] to be installed.
`show_continuous_image`
: Show the continuous, interpolated palette in the IPython Notebook.
Requires [matplotlib][] to be installed.
`save_discrete_image`
: Save an image of the defined colors of the palette to a file.
Requires [matplotlib][] to be installed.
`save_continuous_image`
: Save an image of the continuous, interpolated palette to a file.
Requires [matplotlib][] to be installed.
## Cookbook
### matplotlib Color Cycle
matplotlib follows a default color cycle when drawing a plot.
This can be modified as described
[in this example](http://matplotlib.org/examples/color/color_cycle_demo.html).
To substitute a Palettable palette use the `.mpl_colors` attribute:
```python
ax.set_prop_cycle('color', palettable.colorbrewer.qualitative.Dark2_8.mpl_colors)
```
### matplotlib Colormap
Many matplotlib functions and methods take a `cmap` argument.
You can use the `.mpl_colormap` attribute with this:
```python
from palettable.colorbrewer.sequential import Blues_8
ax.imshow(data, cmap=Blues_8.mpl_colormap)
```
Note that the colorbrewer2 color palettes are all available in [matplotlib][]
already.
See the full list of available color maps in matplotlib here:
[http://matplotlib.org/examples/color/colormaps_reference.html](http://matplotlib.org/examples/color/colormaps_reference.html).
### matplotlib Discrete Colormap
The `.mpl_colormap` attribute is a continuous, interpolated map.
If you want to make a discrete color map you can use matplotlib's
[`ListedColormap`](http://matplotlib.org/api/colors_api.html#matplotlib.colors.ListedColormap):
```python
from matplotlib.colors import ListedColormap
import palettable

cmap = ListedColormap(palettable.colorbrewer.qualitative.Dark2_7.mpl_colors)
```
### plotly
Plotly accepts colors as lists of hex strings, so you can pass a palette's `hex_colors` attribute directly:
```python
import numpy as np
import palettable
import plotly.express as px
data = np.arange(9).reshape(3, 3)
color = palettable.scientific.sequential.Batlow_9
px.imshow(data, color_continuous_scale=color.hex_colors)
```
# Contact
Palettable is on GitHub at
[https://github.com/jiffyclub/palettable](https://github.com/jiffyclub/palettable).
Please report issues there.
{
"filename": "about.md",
"repo_name": "google/jax",
"repo_path": "jax_extracted/jax-main/docs/about.md",
"type": "Markdown"
}
(about-the-project)=
# About the project
The JAX project is led by the JAX core team. We develop in the open,
and welcome open-source contributions from across the community. We
frequently see contributions from [Google
DeepMind](https://deepmind.google/), Alphabet more broadly,
[NVIDIA](https://docs.nvidia.com/deeplearning/frameworks/jax-release-notes/overview.html),
and elsewhere.
At the heart of the project is the [JAX
core](http://github.com/google/jax) library, which focuses on the
fundamentals of machine learning and numerical computing, at scale.
When [developing](#development) the core, we want to maintain agility
and a focused scope, so we lean heavily on a surrounding [modular
technology stack](#components). First, we design the `jax` module
to be
[composable](https://github.com/jax-ml/jax?tab=readme-ov-file#transformations)
and
[extensible](https://jax.readthedocs.io/en/latest/jax.extend.html), so
that a wide variety of domain-specific libraries can thrive outside of
it in a decentralized manner. Second, we lean heavily on a modular
backend stack (compiler and runtime) to target different
accelerators. Whether you are [writing a new domain-specific library
built with JAX](#upstack), or looking to [support
new hardware](#downstack), you can often
contribute these with *minimal to no modifications* to the JAX core
codebase.
Many of JAX's core contributors have roots in open-source software and
in research, in fields spanning computer science and the natural
sciences. We strive to continuously enable the cutting edge of machine
learning and numerical computing---across all compute platforms and
accelerators---and to discover the truths of array programming at
scale.
(development)=
## Open development
JAX's day-to-day development takes place in the open on GitHub, using
pull requests, the issue tracker, discussions, and [JAX Enhancement
Proposals
(JEPs)](https://jax.readthedocs.io/en/latest/jep/index.html). Reading
and participating in these is a good way to get involved. We also
maintain [developer
notes](https://jax.readthedocs.io/en/latest/contributor_guide.html)
that cover JAX's internal design.
The JAX core team determines whether to accept changes and
enhancements. Maintaining a simple decision-making structure currently
helps us develop at the speed of the research frontier. Open
development is a core value of ours, and we may adapt to a more
intricate decision structure over time (e.g. with designated area
owners) if/when it becomes useful to do so.
For more see [contributing to
JAX](https://jax.readthedocs.io/en/latest/contributing.html).
(components)=
## A modular stack
To enable (a) a growing community of users across numerical domains,
and (b) an advancing hardware landscape, we lean heavily on
**modularity**.
(upstack)=
### Libraries built on JAX
While the JAX core library focuses on the fundamentals, we want to
encourage domain-specific libraries and tools to be built on top of
JAX. Indeed, [many
libraries](https://jax.readthedocs.io/en/latest/#ecosystem) have
emerged around JAX to offer higher-level features and extensions.
How do we encourage such decentralized development? We guide it with
several technical choices. First, JAX's main API focuses on basic
building blocks (e.g. numerical primitives, NumPy operations, arrays,
and transformations), encouraging auxiliary libraries to develop
utilities as needed for their domain. In addition, JAX exposes a
handful of more advanced APIs for
[customization](https://jax.readthedocs.io/en/latest/notebooks/Custom_derivative_rules_for_Python_code.html)
and
[extensibility](https://jax.readthedocs.io/en/latest/jax.extend.html). Libraries
can [lean on these
APIs](https://jax.readthedocs.io/en/latest/building_on_jax.html) in
order to use JAX as an internal means of implementation, to integrate
more with its transformations like autodiff, and more.
Projects across the JAX ecosystem are developed in a distributed and
often open fashion. They are not governed by the JAX core team, even
though sometimes team members contribute to them or maintain contact
with their developers.
(downstack)=
### A pluggable backend
We want JAX to run on CPUs, GPUs, TPUs, and other hardware platforms
as they emerge. To encourage unhindered support of JAX on new
platforms, the JAX core emphasizes modularity in its backend too.
To manage hardware devices and memory, and for compilation to such
devices, JAX calls out to the open [XLA
compiler](https://openxla.org/) and the [PJRT
runtime](https://github.com/openxla/xla/tree/main/xla/pjrt/c#pjrt---uniform-device-api). Both
of these are projects external to the JAX core, governed and
maintained by OpenXLA (again, with frequent contributions from and
discussion with the JAX core developers).
XLA aims for interoperability across accelerators (e.g. by ingesting
[StableHLO](https://openxla.org/stablehlo) as input) and PJRT offers
extensibility through a plug-in device API. Adding support for new
devices is done by implementing a backend lowering for XLA, and
implementing a plug-in device API defined by PJRT. If you're looking
to contribute to compilation, or to supporting new hardware, we
encourage you to contribute at the XLA and PJRT layers.
These open system components allow third parties to support JAX on new
accelerator platforms, *without requiring changes in the JAX
core*. There are several plug-ins in development today. For example, a
team at Apple is working on a PJRT plug-in to get [JAX running on
Apple Metal](https://developer.apple.com/metal/jax/).
{
"filename": "bulkstamp.py",
"repo_name": "mhammond/pywin32",
"repo_path": "pywin32_extracted/pywin32-main/win32/scripts/VersionStamp/bulkstamp.py",
"type": "Python"
}
#
# bulkstamp.py:
# Stamp versions on all files that can be found in a given tree.
#
# USAGE: python bulkstamp.py <version> <root directory> <descriptions>
#
# Example: python bulkstamp.py 103 ..\win32\Build\ desc.txt
#
# <version> corresponds to the build number. It will be concatenated with
# the major and minor version numbers found in the description file.
#
# Description information is pulled from an input text file with lines of
# the form:
#
# <basename> <white space> <description>
#
# For example:
#
# PyWinTypes.dll Common types for Python on Win32
# etc
#
# The product's name, major, and minor versions are specified as:
#
#   product <white space> <value>
#   major <white space> <value>
#   minor <white space> <value>
#
# The tags are case-sensitive.
#
# Any line beginning with "#" will be ignored. Empty lines are okay.
#
import fnmatch
import os
import sys
from collections.abc import Mapping
from optparse import Values

try:
    import win32verstamp
except ModuleNotFoundError:
    # If run with pywin32 not already installed
    sys.path.append(os.path.abspath(__file__ + "/../../../Lib"))
    import win32verstamp

g_patterns = [
    "*.dll",
    "*.pyd",
    "*.exe",
    "*.ocx",
]

def walk(vars: Mapping[str, str], debug, descriptions, dirname, names) -> int:
    """Returns the number of stamped files."""
    numStamped = 0
    for name in names:
        for pat in g_patterns:
            if fnmatch.fnmatch(name, pat):
                # Handle the "_d" thing.
                pathname = os.path.join(dirname, name)
                base, ext = os.path.splitext(name)
                if base.endswith("_d"):
                    name = base[:-2] + ext
                is_dll = ext.lower() != ".exe"
                if os.path.normcase(name) in descriptions:
                    description = descriptions[os.path.normcase(name)]
                    try:
                        options = Values(
                            {**vars, "description": description, "dll": is_dll}
                        )
                        win32verstamp.stamp(pathname, options)
                        numStamped += 1
                        # print("Stamped", pathname)
                    except OSError as exc:
                        print(
                            "Could not stamp",
                            pathname,
                            "Error",
                            exc.winerror,
                            "-",
                            exc.strerror,
                        )
                else:
                    print("WARNING: description not provided for:", name)
                    # skip branding this - assume already branded or handled elsewhere
    return numStamped

def load_descriptions(fname, vars):
    retvars: dict[str, str] = {}
    descriptions = {}
    with open(fname, "r") as f:
        lines = f.readlines()
    for i in range(len(lines)):
        line = lines[i].strip()
        if line != "" and line[0] != "#":
            idx1 = line.find(" ")
            idx2 = line.find("\t")
            # Use whichever separator appears first; either may be absent (-1).
            if idx1 == -1 or (idx2 != -1 and idx2 < idx1):
                idx1 = idx2
            if idx1 == -1:
                print("ERROR: bad syntax in description file at line %d." % (i + 1))
                sys.exit(1)
            key = line[:idx1]
            val = line[idx1:].strip()
            if key in vars:
                retvars[key] = val
            else:
                descriptions[key] = val
    if "product" not in retvars:
        print("ERROR: description file is missing the product name.")
        sys.exit(1)
    if "major" not in retvars:
        print("ERROR: description file is missing the major version number.")
        sys.exit(1)
    if "minor" not in retvars:
        print("ERROR: description file is missing the minor version number.")
        sys.exit(1)
    return retvars, descriptions

def scan(build, root: str, desc, **custom_vars):
    try:
        build = int(build)
    except ValueError:
        print("ERROR: build number is not a number: %s" % build)
        sys.exit(1)
    debug = 0  ### maybe fix this one day
    varList = ["major", "minor", "sub", "company", "copyright", "trademarks", "product"]
    vars, descriptions = load_descriptions(desc, varList)
    vars["build"] = build
    vars.update(custom_vars)
    numStamped = 0
    for directory, dirnames, filenames in os.walk(root):
        numStamped += walk(vars, debug, descriptions, directory, filenames)
    print(f"Stamped {numStamped} files.")


if __name__ == "__main__":
    if len(sys.argv) != 4:
        print("ERROR: incorrect invocation. See script's header comments.")
        sys.exit(1)
    scan(*sys.argv[1:])
{
"filename": "shell.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/Pygments/py2/pygments/lexers/shell.py",
"type": "Python"
}
# -*- coding: utf-8 -*-
"""
    pygments.lexers.shell
    ~~~~~~~~~~~~~~~~~~~~~

    Lexers for various shells.

    :copyright: Copyright 2006-2019 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""
import re

from pygments.lexer import Lexer, RegexLexer, do_insertions, bygroups, \
    include, default, this, using, words
from pygments.token import Punctuation, \
    Text, Comment, Operator, Keyword, Name, String, Number, Generic
from pygments.util import shebang_matches

__all__ = ['BashLexer', 'BashSessionLexer', 'TcshLexer', 'BatchLexer',
           'SlurmBashLexer', 'MSDOSSessionLexer', 'PowerShellLexer',
           'PowerShellSessionLexer', 'TcshSessionLexer', 'FishShellLexer']

line_re = re.compile('.*?\n')

class BashLexer(RegexLexer):
    """
    Lexer for (ba|k|z|)sh shell scripts.

    .. versionadded:: 0.6
    """

    name = 'Bash'
    aliases = ['bash', 'sh', 'ksh', 'zsh', 'shell']
    filenames = ['*.sh', '*.ksh', '*.bash', '*.ebuild', '*.eclass',
                 '*.exheres-0', '*.exlib', '*.zsh',
                 '.bashrc', 'bashrc', '.bash_*', 'bash_*', 'zshrc', '.zshrc',
                 'PKGBUILD']
    mimetypes = ['application/x-sh', 'application/x-shellscript', 'text/x-shellscript']

    tokens = {
        'root': [
            include('basic'),
            (r'`', String.Backtick, 'backticks'),
            include('data'),
            include('interp'),
        ],
        'interp': [
            (r'\$\(\(', Keyword, 'math'),
            (r'\$\(', Keyword, 'paren'),
            (r'\$\{#?', String.Interpol, 'curly'),
            (r'\$[a-zA-Z_]\w*', Name.Variable),  # user variable
            (r'\$(?:\d+|[#$?!_*@-])', Name.Variable),  # builtin
            (r'\$', Text),
        ],
        'basic': [
            (r'\b(if|fi|else|while|do|done|for|then|return|function|case|'
             r'select|continue|until|esac|elif)(\s*)\b',
             bygroups(Keyword, Text)),
            (r'\b(alias|bg|bind|break|builtin|caller|cd|command|compgen|'
             r'complete|declare|dirs|disown|echo|enable|eval|exec|exit|'
             r'export|false|fc|fg|getopts|hash|help|history|jobs|kill|let|'
             r'local|logout|popd|printf|pushd|pwd|read|readonly|set|shift|'
             r'shopt|source|suspend|test|time|times|trap|true|type|typeset|'
             r'ulimit|umask|unalias|unset|wait)(?=[\s)`])',
             Name.Builtin),
            (r'\A#!.+\n', Comment.Hashbang),
            (r'#.*\n', Comment.Single),
            (r'\\[\w\W]', String.Escape),
            (r'(\b\w+)(\s*)(\+?=)', bygroups(Name.Variable, Text, Operator)),
            (r'[\[\]{}()=]', Operator),
            (r'<<<', Operator),  # here-string
            (r'<<-?\s*(\'?)\\?(\w+)[\w\W]+?\2', String),
            (r'&&|\|\|', Operator),
        ],
        'data': [
            (r'(?s)\$?"(\\.|[^"\\$])*"', String.Double),
            (r'"', String.Double, 'string'),
            (r"(?s)\$'(\\\\|\\[0-7]+|\\.|[^'\\])*'", String.Single),
            (r"(?s)'.*?'", String.Single),
            (r';', Punctuation),
            (r'&', Punctuation),
            (r'\|', Punctuation),
            (r'\s+', Text),
            (r'\d+\b', Number),
            (r'[^=\s\[\]{}()$"\'`\\<&|;]+', Text),
            (r'<', Text),
        ],
        'string': [
            (r'"', String.Double, '#pop'),
            (r'(?s)(\\\\|\\[0-7]+|\\.|[^"\\$])+', String.Double),
            include('interp'),
        ],
        'curly': [
            (r'\}', String.Interpol, '#pop'),
            (r':-', Keyword),
            (r'\w+', Name.Variable),
            (r'[^}:"\'`$\\]+', Punctuation),
            (r':', Punctuation),
            include('root'),
        ],
        'paren': [
            (r'\)', Keyword, '#pop'),
            include('root'),
        ],
        'math': [
            (r'\)\)', Keyword, '#pop'),
            (r'[-+*/%^|&]|\*\*|\|\|', Operator),
            (r'\d+#\d+', Number),
            (r'\d+#(?! )', Number),
            (r'\d+', Number),
            include('root'),
        ],
        'backticks': [
            (r'`', String.Backtick, '#pop'),
            include('root'),
        ],
    }

    def analyse_text(text):
        if shebang_matches(text, r'(ba|z|)sh'):
            return 1
        if text.startswith('$ '):
            return 0.2

class SlurmBashLexer(BashLexer):
    """
    Lexer for (ba|k|z|)sh Slurm scripts.

    .. versionadded:: 2.4
    """

    name = 'Slurm'
    aliases = ['slurm', 'sbatch']
    filenames = ['*.sl']
    mimetypes = []
    EXTRA_KEYWORDS = {'srun'}

    def get_tokens_unprocessed(self, text):
        for index, token, value in BashLexer.get_tokens_unprocessed(self, text):
            if token is Text and value in self.EXTRA_KEYWORDS:
                yield index, Name.Builtin, value
            elif token is Comment.Single and 'SBATCH' in value:
                yield index, Keyword.Pseudo, value
            else:
                yield index, token, value

class ShellSessionBaseLexer(Lexer):
    """
    Base lexer for simplistic shell sessions.

    .. versionadded:: 2.1
    """

    _venv = re.compile(r'^(\([^)]*\))(\s*)')

    def get_tokens_unprocessed(self, text):
        innerlexer = self._innerLexerCls(**self.options)

        pos = 0
        curcode = ''
        insertions = []
        backslash_continuation = False

        for match in line_re.finditer(text):
            line = match.group()

            if backslash_continuation:
                curcode += line
                backslash_continuation = curcode.endswith('\\\n')
                continue

            venv_match = self._venv.match(line)
            if venv_match:
                venv = venv_match.group(1)
                venv_whitespace = venv_match.group(2)
                insertions.append((len(curcode),
                                   [(0, Generic.Prompt.VirtualEnv, venv)]))
                if venv_whitespace:
                    insertions.append((len(curcode),
                                       [(0, Text, venv_whitespace)]))
                line = line[venv_match.end():]

            m = self._ps1rgx.match(line)
            if m:
                # To support output lexers (say diff output), the output
                # needs to be broken by prompts whenever the output lexer
                # changes.
                if not insertions:
                    pos = match.start()
                insertions.append((len(curcode),
                                   [(0, Generic.Prompt, m.group(1))]))
                curcode += m.group(2)
                backslash_continuation = curcode.endswith('\\\n')
            elif line.startswith(self._ps2):
                insertions.append((len(curcode),
                                   [(0, Generic.Prompt, line[:len(self._ps2)])]))
                curcode += line[len(self._ps2):]
                backslash_continuation = curcode.endswith('\\\n')
            else:
                if insertions:
                    toks = innerlexer.get_tokens_unprocessed(curcode)
                    for i, t, v in do_insertions(insertions, toks):
                        yield pos+i, t, v
                yield match.start(), Generic.Output, line
                insertions = []
                curcode = ''
        if insertions:
            for i, t, v in do_insertions(insertions,
                                         innerlexer.get_tokens_unprocessed(curcode)):
                yield pos+i, t, v

class BashSessionLexer(ShellSessionBaseLexer):
    """
    Lexer for simplistic shell sessions.

    .. versionadded:: 1.1
    """

    name = 'Bash Session'
    aliases = ['console', 'shell-session']
    filenames = ['*.sh-session', '*.shell-session']
    mimetypes = ['application/x-shell-session', 'application/x-sh-session']

    _innerLexerCls = BashLexer
    _ps1rgx = re.compile(
        r'^((?:(?:\[.*?\])|(?:\(\S+\))?(?:| |sh\S*?|\w+\S+[@:]\S+(?:\s+\S+)' \
        r'?|\[\S+[@:][^\n]+\].+))\s*[$#%])(.*\n?)')
    _ps2 = '>'

class BatchLexer(RegexLexer):
    """
    Lexer for the DOS/Windows Batch file format.

    .. versionadded:: 0.7
    """

    name = 'Batchfile'
    aliases = ['bat', 'batch', 'dosbatch', 'winbatch']
    filenames = ['*.bat', '*.cmd']
    mimetypes = ['application/x-dos-batch']

    flags = re.MULTILINE | re.IGNORECASE

    _nl = r'\n\x1a'
    _punct = r'&<>|'
    _ws = r'\t\v\f\r ,;=\xa0'
    _space = r'(?:(?:(?:\^[%s])?[%s])+)' % (_nl, _ws)
    _keyword_terminator = (r'(?=(?:\^[%s]?)?[%s+./:[\\\]]|[%s%s(])' %
                           (_nl, _ws, _nl, _punct))
    _token_terminator = r'(?=\^?[%s]|[%s%s])' % (_ws, _punct, _nl)
    _start_label = r'((?:(?<=^[^:])|^[^:]?)[%s]*)(:)' % _ws
    _label = r'(?:(?:[^%s%s%s+:^]|\^[%s]?[\w\W])*)' % (_nl, _punct, _ws, _nl)
    _label_compound = (r'(?:(?:[^%s%s%s+:^)]|\^[%s]?[^)])*)' %
                       (_nl, _punct, _ws, _nl))
    _number = r'(?:-?(?:0[0-7]+|0x[\da-f]+|\d+)%s)' % _token_terminator
    _opword = r'(?:equ|geq|gtr|leq|lss|neq)'
    _string = r'(?:"[^%s"]*(?:"|(?=[%s])))' % (_nl, _nl)
    _variable = (r'(?:(?:%%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|'
                 r'[^%%:%s]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%%%s^]|'
                 r'\^[^%%%s])[^=%s]*=(?:[^%%%s^]|\^[^%%%s])*)?)?%%))|'
                 r'(?:\^?![^!:%s]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:'
                 r'[^!%s^]|\^[^!%s])[^=%s]*=(?:[^!%s^]|\^[^!%s])*)?)?\^?!))' %
                 (_nl, _nl, _nl, _nl, _nl, _nl, _nl, _nl, _nl, _nl, _nl, _nl))
    _core_token = r'(?:(?:(?:\^[%s]?)?[^"%s%s%s])+)' % (_nl, _nl, _punct, _ws)
    _core_token_compound = r'(?:(?:(?:\^[%s]?)?[^"%s%s%s)])+)' % (_nl, _nl,
                                                                  _punct, _ws)
    _token = r'(?:[%s]+|%s)' % (_punct, _core_token)
    _token_compound = r'(?:[%s]+|%s)' % (_punct, _core_token_compound)
    _stoken = (r'(?:[%s]+|(?:%s|%s|%s)+)' %
               (_punct, _string, _variable, _core_token))
    def _make_begin_state(compound, _core_token=_core_token,
                          _core_token_compound=_core_token_compound,
                          _keyword_terminator=_keyword_terminator,
                          _nl=_nl, _punct=_punct, _string=_string,
                          _space=_space, _start_label=_start_label,
                          _stoken=_stoken, _token_terminator=_token_terminator,
                          _variable=_variable, _ws=_ws):
        rest = '(?:%s|%s|[^"%%%s%s%s])*' % (_string, _variable, _nl, _punct,
                                            ')' if compound else '')
        rest_of_line = r'(?:(?:[^%s^]|\^[%s]?[\w\W])*)' % (_nl, _nl)
        rest_of_line_compound = r'(?:(?:[^%s^)]|\^[%s]?[^)])*)' % (_nl, _nl)
        set_space = r'((?:(?:\^[%s]?)?[^\S\n])*)' % _nl
        suffix = ''
        if compound:
            _keyword_terminator = r'(?:(?=\))|%s)' % _keyword_terminator
            _token_terminator = r'(?:(?=\))|%s)' % _token_terminator
            suffix = '/compound'
        return [
            ((r'\)', Punctuation, '#pop') if compound else
             (r'\)((?=\()|%s)%s' % (_token_terminator, rest_of_line),
              Comment.Single)),
            (r'(?=%s)' % _start_label, Text, 'follow%s' % suffix),
            (_space, using(this, state='text')),
            include('redirect%s' % suffix),
            (r'[%s]+' % _nl, Text),
            (r'\(', Punctuation, 'root/compound'),
            (r'@+', Punctuation),
            (r'((?:for|if|rem)(?:(?=(?:\^[%s]?)?/)|(?:(?!\^)|'
             r'(?<=m))(?:(?=\()|%s)))(%s?%s?(?:\^[%s]?)?/(?:\^[%s]?)?\?)' %
             (_nl, _token_terminator, _space,
              _core_token_compound if compound else _core_token, _nl, _nl),
             bygroups(Keyword, using(this, state='text')),
             'follow%s' % suffix),
            (r'(goto%s)(%s(?:\^[%s]?)?/(?:\^[%s]?)?\?%s)' %
             (_keyword_terminator, rest, _nl, _nl, rest),
             bygroups(Keyword, using(this, state='text')),
             'follow%s' % suffix),
            (words(('assoc', 'break', 'cd', 'chdir', 'cls', 'color', 'copy',
                    'date', 'del', 'dir', 'dpath', 'echo', 'endlocal', 'erase',
                    'exit', 'ftype', 'keys', 'md', 'mkdir', 'mklink', 'move',
                    'path', 'pause', 'popd', 'prompt', 'pushd', 'rd', 'ren',
                    'rename', 'rmdir', 'setlocal', 'shift', 'start', 'time',
                    'title', 'type', 'ver', 'verify', 'vol'),
                   suffix=_keyword_terminator), Keyword, 'follow%s' % suffix),
            (r'(call)(%s?)(:)' % _space,
             bygroups(Keyword, using(this, state='text'), Punctuation),
             'call%s' % suffix),
            (r'call%s' % _keyword_terminator, Keyword),
            (r'(for%s(?!\^))(%s)(/f%s)' %
             (_token_terminator, _space, _token_terminator),
             bygroups(Keyword, using(this, state='text'), Keyword),
             ('for/f', 'for')),
            (r'(for%s(?!\^))(%s)(/l%s)' %
             (_token_terminator, _space, _token_terminator),
             bygroups(Keyword, using(this, state='text'), Keyword),
             ('for/l', 'for')),
            (r'for%s(?!\^)' % _token_terminator, Keyword, ('for2', 'for')),
            (r'(goto%s)(%s?)(:?)' % (_keyword_terminator, _space),
             bygroups(Keyword, using(this, state='text'), Punctuation),
             'label%s' % suffix),
            (r'(if(?:(?=\()|%s)(?!\^))(%s?)((?:/i%s)?)(%s?)((?:not%s)?)(%s?)' %
             (_token_terminator, _space, _token_terminator, _space,
              _token_terminator, _space),
             bygroups(Keyword, using(this, state='text'), Keyword,
                      using(this, state='text'), Keyword,
                      using(this, state='text')), ('(?', 'if')),
            (r'rem(((?=\()|%s)%s?%s?.*|%s%s)' %
             (_token_terminator, _space, _stoken, _keyword_terminator,
              rest_of_line_compound if compound else rest_of_line),
             Comment.Single, 'follow%s' % suffix),
            (r'(set%s)%s(/a)' % (_keyword_terminator, set_space),
             bygroups(Keyword, using(this, state='text'), Keyword),
             'arithmetic%s' % suffix),
            (r'(set%s)%s((?:/p)?)%s((?:(?:(?:\^[%s]?)?[^"%s%s^=%s]|'
             r'\^[%s]?[^"=])+)?)((?:(?:\^[%s]?)?=)?)' %
             (_keyword_terminator, set_space, set_space, _nl, _nl, _punct,
              ')' if compound else '', _nl, _nl),
             bygroups(Keyword, using(this, state='text'), Keyword,
                      using(this, state='text'), using(this, state='variable'),
                      Punctuation),
             'follow%s' % suffix),
            default('follow%s' % suffix)
        ]
    def _make_follow_state(compound, _label=_label,
                           _label_compound=_label_compound, _nl=_nl,
                           _space=_space, _start_label=_start_label,
                           _token=_token, _token_compound=_token_compound,
                           _ws=_ws):
        suffix = '/compound' if compound else ''
        state = []
        if compound:
            state.append((r'(?=\))', Text, '#pop'))
        state += [
            (r'%s([%s]*)(%s)(.*)' %
             (_start_label, _ws, _label_compound if compound else _label),
             bygroups(Text, Punctuation, Text, Name.Label, Comment.Single)),
            include('redirect%s' % suffix),
            (r'(?=[%s])' % _nl, Text, '#pop'),
            (r'\|\|?|&&?', Punctuation, '#pop'),
            include('text')
        ]
        return state
    def _make_arithmetic_state(compound, _nl=_nl, _punct=_punct,
                               _string=_string, _variable=_variable, _ws=_ws):
        op = r'=+\-*/!~'
        state = []
        if compound:
            state.append((r'(?=\))', Text, '#pop'))
        state += [
            (r'0[0-7]+', Number.Oct),
            (r'0x[\da-f]+', Number.Hex),
            (r'\d+', Number.Integer),
            (r'[(),]+', Punctuation),
            (r'([%s]|%%|\^\^)+' % op, Operator),
            (r'(%s|%s|(\^[%s]?)?[^()%s%%^"%s%s%s]|\^[%s%s]?%s)+' %
             (_string, _variable, _nl, op, _nl, _punct, _ws, _nl, _ws,
              r'[^)]' if compound else r'[\w\W]'),
             using(this, state='variable')),
            (r'(?=[\x00|&])', Text, '#pop'),
            include('follow')
        ]
        return state
    def _make_call_state(compound, _label=_label,
                         _label_compound=_label_compound):
        state = []
        if compound:
            state.append((r'(?=\))', Text, '#pop'))
        state.append((r'(:?)(%s)' % (_label_compound if compound else _label),
                      bygroups(Punctuation, Name.Label), '#pop'))
        return state

    def _make_label_state(compound, _label=_label,
                          _label_compound=_label_compound, _nl=_nl,
                          _punct=_punct, _string=_string, _variable=_variable):
        state = []
        if compound:
            state.append((r'(?=\))', Text, '#pop'))
        state.append((r'(%s?)((?:%s|%s|\^[%s]?%s|[^"%%^%s%s%s])*)' %
                      (_label_compound if compound else _label, _string,
                       _variable, _nl, r'[^)]' if compound else r'[\w\W]', _nl,
                       _punct, r')' if compound else ''),
                      bygroups(Name.Label, Comment.Single), '#pop'))
        return state
    def _make_redirect_state(compound,
                             _core_token_compound=_core_token_compound,
                             _nl=_nl, _punct=_punct, _stoken=_stoken,
                             _string=_string, _space=_space,
                             _variable=_variable, _ws=_ws):
        stoken_compound = (r'(?:[%s]+|(?:%s|%s|%s)+)' %
                           (_punct, _string, _variable, _core_token_compound))
        return [
            (r'((?:(?<=[%s%s])\d)?)(>>?&|<&)([%s%s]*)(\d)' %
             (_nl, _ws, _nl, _ws),
             bygroups(Number.Integer, Punctuation, Text, Number.Integer)),
            (r'((?:(?<=[%s%s])(?<!\^[%s])\d)?)(>>?|<)(%s?%s)' %
             (_nl, _ws, _nl, _space, stoken_compound if compound else _stoken),
             bygroups(Number.Integer, Punctuation, using(this, state='text')))
        ]
    tokens = {
        'root': _make_begin_state(False),
        'follow': _make_follow_state(False),
        'arithmetic': _make_arithmetic_state(False),
        'call': _make_call_state(False),
        'label': _make_label_state(False),
        'redirect': _make_redirect_state(False),
        'root/compound': _make_begin_state(True),
        'follow/compound': _make_follow_state(True),
        'arithmetic/compound': _make_arithmetic_state(True),
        'call/compound': _make_call_state(True),
        'label/compound': _make_label_state(True),
        'redirect/compound': _make_redirect_state(True),
        'variable-or-escape': [
            (_variable, Name.Variable),
            (r'%%%%|\^[%s]?(\^!|[\w\W])' % _nl, String.Escape)
        ],
        'string': [
            (r'"', String.Double, '#pop'),
            (_variable, Name.Variable),
            (r'\^!|%%', String.Escape),
            (r'[^"%%^%s]+|[%%^]' % _nl, String.Double),
            default('#pop')
        ],
        'sqstring': [
            include('variable-or-escape'),
            (r'[^%]+|%', String.Single)
        ],
        'bqstring': [
            include('variable-or-escape'),
            (r'[^%]+|%', String.Backtick)
        ],
        'text': [
            (r'"', String.Double, 'string'),
            include('variable-or-escape'),
            (r'[^"%%^%s%s%s\d)]+|.' % (_nl, _punct, _ws), Text)
        ],
        'variable': [
            (r'"', String.Double, 'string'),
            include('variable-or-escape'),
            (r'[^"%%^%s]+|.' % _nl, Name.Variable)
        ],
        'for': [
            (r'(%s)(in)(%s)(\()' % (_space, _space),
             bygroups(using(this, state='text'), Keyword,
                      using(this, state='text'), Punctuation), '#pop'),
            include('follow')
        ],
        'for2': [
            (r'\)', Punctuation),
            (r'(%s)(do%s)' % (_space, _token_terminator),
             bygroups(using(this, state='text'), Keyword), '#pop'),
            (r'[%s]+' % _nl, Text),
            include('follow')
        ],
        'for/f': [
            (r'(")((?:%s|[^"])*?")([%s%s]*)(\))' % (_variable, _nl, _ws),
             bygroups(String.Double, using(this, state='string'), Text,
                      Punctuation)),
            (r'"', String.Double, ('#pop', 'for2', 'string')),
            (r"('(?:%%%%|%s|[\w\W])*?')([%s%s]*)(\))" % (_variable, _nl, _ws),
             bygroups(using(this, state='sqstring'), Text, Punctuation)),
            (r'(`(?:%%%%|%s|[\w\W])*?`)([%s%s]*)(\))' % (_variable, _nl, _ws),
             bygroups(using(this, state='bqstring'), Text, Punctuation)),
            include('for2')
        ],
        'for/l': [
            (r'-?\d+', Number.Integer),
            include('for2')
        ],
        'if': [
            (r'((?:cmdextversion|errorlevel)%s)(%s)(\d+)' %
             (_token_terminator, _space),
             bygroups(Keyword, using(this, state='text'),
                      Number.Integer), '#pop'),
            (r'(defined%s)(%s)(%s)' % (_token_terminator, _space, _stoken),
             bygroups(Keyword, using(this, state='text'),
                      using(this, state='variable')), '#pop'),
            (r'(exist%s)(%s%s)' % (_token_terminator, _space, _stoken),
             bygroups(Keyword, using(this, state='text')), '#pop'),
            (r'(%s%s)(%s)(%s%s)' % (_number, _space, _opword, _space, _number),
             bygroups(using(this, state='arithmetic'), Operator.Word,
                      using(this, state='arithmetic')), '#pop'),
            (_stoken, using(this, state='text'), ('#pop', 'if2')),
        ],
        'if2': [
            (r'(%s?)(==)(%s?%s)' % (_space, _space, _stoken),
             bygroups(using(this, state='text'), Operator,
                      using(this, state='text')), '#pop'),
            (r'(%s)(%s)(%s%s)' % (_space, _opword, _space, _stoken),
             bygroups(using(this, state='text'), Operator.Word,
                      using(this, state='text')), '#pop')
        ],
        '(?': [
            (_space, using(this, state='text')),
            (r'\(', Punctuation, ('#pop', 'else?', 'root/compound')),
            default('#pop')
        ],
        'else?': [
            (_space, using(this, state='text')),
            (r'else%s' % _token_terminator, Keyword, '#pop'),
            default('#pop')
        ]
    }

class MSDOSSessionLexer(ShellSessionBaseLexer):
"""
Lexer for simplistic MSDOS sessions.
.. versionadded:: 2.1
"""
name = 'MSDOS Session'
aliases = ['doscon']
filenames = []
mimetypes = []
_innerLexerCls = BatchLexer
_ps1rgx = re.compile(r'^([^>]*>)(.*\n?)')
_ps2 = 'More? '
class TcshLexer(RegexLexer):
"""
Lexer for tcsh scripts.
.. versionadded:: 0.10
"""
name = 'Tcsh'
aliases = ['tcsh', 'csh']
filenames = ['*.tcsh', '*.csh']
mimetypes = ['application/x-csh']
tokens = {
'root': [
include('basic'),
(r'\$\(', Keyword, 'paren'),
(r'\$\{#?', Keyword, 'curly'),
(r'`', String.Backtick, 'backticks'),
include('data'),
],
'basic': [
(r'\b(if|endif|else|while|then|foreach|case|default|'
r'continue|goto|breaksw|end|switch|endsw)\s*\b',
Keyword),
(r'\b(alias|alloc|bg|bindkey|break|builtins|bye|caller|cd|chdir|'
r'complete|dirs|echo|echotc|eval|exec|exit|fg|filetest|getxvers|'
r'glob|getspath|hashstat|history|hup|inlib|jobs|kill|'
r'limit|log|login|logout|ls-F|migrate|newgrp|nice|nohup|notify|'
r'onintr|popd|printenv|pushd|rehash|repeat|rootnode|popd|pushd|'
r'set|shift|sched|setenv|setpath|settc|setty|setxvers|shift|'
r'source|stop|suspend|source|suspend|telltc|time|'
r'umask|unalias|uncomplete|unhash|universe|unlimit|unset|unsetenv|'
r'ver|wait|warp|watchlog|where|which)\s*\b',
Name.Builtin),
(r'#.*', Comment),
(r'\\[\w\W]', String.Escape),
(r'(\b\w+)(\s*)(=)', bygroups(Name.Variable, Text, Operator)),
(r'[\[\]{}()=]+', Operator),
(r'<<\s*(\'?)\\?(\w+)[\w\W]+?\2', String),
(r';', Punctuation),
],
'data': [
(r'(?s)"(\\\\|\\[0-7]+|\\.|[^"\\])*"', String.Double),
(r"(?s)'(\\\\|\\[0-7]+|\\.|[^'\\])*'", String.Single),
(r'\s+', Text),
(r'[^=\s\[\]{}()$"\'`\\;#]+', Text),
(r'\d+(?= |\Z)', Number),
(r'\$#?(\w+|.)', Name.Variable),
],
'curly': [
(r'\}', Keyword, '#pop'),
(r':-', Keyword),
(r'\w+', Name.Variable),
(r'[^}:"\'`$]+', Punctuation),
(r':', Punctuation),
include('root'),
],
'paren': [
(r'\)', Keyword, '#pop'),
include('root'),
],
'backticks': [
(r'`', String.Backtick, '#pop'),
include('root'),
],
}
class TcshSessionLexer(ShellSessionBaseLexer):
"""
Lexer for Tcsh sessions.
.. versionadded:: 2.1
"""
name = 'Tcsh Session'
aliases = ['tcshcon']
filenames = []
mimetypes = []
_innerLexerCls = TcshLexer
_ps1rgx = re.compile(r'^([^>]+>)(.*\n?)')
_ps2 = '? '
class PowerShellLexer(RegexLexer):
"""
For Windows PowerShell code.
.. versionadded:: 1.5
"""
name = 'PowerShell'
aliases = ['powershell', 'posh', 'ps1', 'psm1']
filenames = ['*.ps1', '*.psm1']
mimetypes = ['text/x-powershell']
flags = re.DOTALL | re.IGNORECASE | re.MULTILINE
keywords = (
'while validateset validaterange validatepattern validatelength '
'validatecount until trap switch return ref process param parameter in '
'if global: function foreach for finally filter end elseif else '
'dynamicparam do default continue cmdletbinding break begin alias \\? '
'% #script #private #local #global mandatory parametersetname position '
'valuefrompipeline valuefrompipelinebypropertyname '
'valuefromremainingarguments helpmessage try catch throw').split()
operators = (
'and as band bnot bor bxor casesensitive ccontains ceq cge cgt cle '
'clike clt cmatch cne cnotcontains cnotlike cnotmatch contains '
'creplace eq exact f file ge gt icontains ieq ige igt ile ilike ilt '
'imatch ine inotcontains inotlike inotmatch ireplace is isnot le like '
'lt match ne not notcontains notlike notmatch or regex replace '
'wildcard').split()
verbs = (
'write where watch wait use update unregister unpublish unprotect '
'unlock uninstall undo unblock trace test tee take sync switch '
'suspend submit stop step start split sort skip show set send select '
'search scroll save revoke resume restore restart resolve resize '
'reset request repair rename remove register redo receive read push '
'publish protect pop ping out optimize open new move mount merge '
'measure lock limit join invoke install initialize import hide group '
'grant get format foreach find export expand exit enter enable edit '
'dismount disconnect disable deny debug cxnew copy convertto '
'convertfrom convert connect confirm compress complete compare close '
'clear checkpoint block backup assert approve aggregate add').split()
aliases_ = (
'ac asnp cat cd cfs chdir clc clear clhy cli clp cls clv cnsn '
'compare copy cp cpi cpp curl cvpa dbp del diff dir dnsn ebp echo epal '
'epcsv epsn erase etsn exsn fc fhx fl foreach ft fw gal gbp gc gci gcm '
'gcs gdr ghy gi gjb gl gm gmo gp gps gpv group gsn gsnp gsv gu gv gwmi '
'h history icm iex ihy ii ipal ipcsv ipmo ipsn irm ise iwmi iwr kill lp '
'ls man md measure mi mount move mp mv nal ndr ni nmo npssc nsn nv ogv '
'oh popd ps pushd pwd r rbp rcjb rcsn rd rdr ren ri rjb rm rmdir rmo '
'rni rnp rp rsn rsnp rujb rv rvpa rwmi sajb sal saps sasv sbp sc select '
'set shcm si sl sleep sls sort sp spjb spps spsv start sujb sv swmi tee '
'trcm type wget where wjb write').split()
commenthelp = (
'component description example externalhelp forwardhelpcategory '
'forwardhelptargetname functionality inputs link '
'notes outputs parameter remotehelprunspace role synopsis').split()
tokens = {
'root': [
# we need to count pairs of parentheses for correct highlight
# of '$(...)' blocks in strings
(r'\(', Punctuation, 'child'),
(r'\s+', Text),
(r'^(\s*#[#\s]*)(\.(?:%s))([^\n]*$)' % '|'.join(commenthelp),
bygroups(Comment, String.Doc, Comment)),
(r'#[^\n]*?$', Comment),
(r'(<|&lt;)#', Comment.Multiline, 'multline'),
(r'@"\n', String.Heredoc, 'heredoc-double'),
(r"@'\n.*?\n'@", String.Heredoc),
# escaped syntax
(r'`[\'"$@-]', Punctuation),
(r'"', String.Double, 'string'),
(r"'([^']|'')*'", String.Single),
(r'(\$|@@|@)((global|script|private|env):)?\w+',
Name.Variable),
(r'(%s)\b' % '|'.join(keywords), Keyword),
(r'-(%s)\b' % '|'.join(operators), Operator),
(r'(%s)-[a-z_]\w*\b' % '|'.join(verbs), Name.Builtin),
(r'(%s)\s' % '|'.join(aliases_), Name.Builtin),
(r'\[[a-z_\[][\w. `,\[\]]*\]', Name.Constant), # .net [type]s
(r'-[a-z_]\w*', Name),
(r'\w+', Name),
(r'[.,;@{}\[\]$()=+*/\\&%!~?^`|<>-]|::', Punctuation),
],
'child': [
(r'\)', Punctuation, '#pop'),
include('root'),
],
'multline': [
(r'[^#&.]+', Comment.Multiline),
(r'#(>|&gt;)', Comment.Multiline, '#pop'),
(r'\.(%s)' % '|'.join(commenthelp), String.Doc),
(r'[#&.]', Comment.Multiline),
],
'string': [
(r"`[0abfnrtv'\"$`]", String.Escape),
(r'[^$`"]+', String.Double),
(r'\$\(', Punctuation, 'child'),
(r'""', String.Double),
(r'[`$]', String.Double),
(r'"', String.Double, '#pop'),
],
'heredoc-double': [
(r'\n"@', String.Heredoc, '#pop'),
(r'\$\(', Punctuation, 'child'),
(r'[^@\n]+"]', String.Heredoc),
(r".", String.Heredoc),
]
}
class PowerShellSessionLexer(ShellSessionBaseLexer):
"""
Lexer for simplistic Windows PowerShell sessions.
.. versionadded:: 2.1
"""
name = 'PowerShell Session'
aliases = ['ps1con']
filenames = []
mimetypes = []
_innerLexerCls = PowerShellLexer
_ps1rgx = re.compile(r'^(PS [^>]+> )(.*\n?)')
_ps2 = '>> '
class FishShellLexer(RegexLexer):
"""
Lexer for Fish shell scripts.
.. versionadded:: 2.1
"""
name = 'Fish'
aliases = ['fish', 'fishshell']
filenames = ['*.fish', '*.load']
mimetypes = ['application/x-fish']
tokens = {
'root': [
include('basic'),
include('data'),
include('interp'),
],
'interp': [
(r'\$\(\(', Keyword, 'math'),
(r'\(', Keyword, 'paren'),
(r'\$#?(\w+|.)', Name.Variable),
],
'basic': [
(r'\b(begin|end|if|else|while|break|for|in|return|function|block|'
r'case|continue|switch|not|and|or|set|echo|exit|pwd|true|false|'
r'cd|count|test)(\s*)\b',
bygroups(Keyword, Text)),
(r'\b(alias|bg|bind|breakpoint|builtin|command|commandline|'
r'complete|contains|dirh|dirs|emit|eval|exec|fg|fish|fish_config|'
r'fish_indent|fish_pager|fish_prompt|fish_right_prompt|'
r'fish_update_completions|fishd|funced|funcsave|functions|help|'
r'history|isatty|jobs|math|mimedb|nextd|open|popd|prevd|psub|'
r'pushd|random|read|set_color|source|status|trap|type|ulimit|'
r'umask|vared|fc|getopts|hash|kill|printf|time|wait)\s*\b(?!\.)',
Name.Builtin),
(r'#.*\n', Comment),
(r'\\[\w\W]', String.Escape),
(r'(\b\w+)(\s*)(=)', bygroups(Name.Variable, Text, Operator)),
(r'[\[\]()=]', Operator),
(r'<<-?\s*(\'?)\\?(\w+)[\w\W]+?\2', String),
],
'data': [
(r'(?s)\$?"(\\\\|\\[0-7]+|\\.|[^"\\$])*"', String.Double),
(r'"', String.Double, 'string'),
(r"(?s)\$'(\\\\|\\[0-7]+|\\.|[^'\\])*'", String.Single),
(r"(?s)'.*?'", String.Single),
(r';', Punctuation),
(r'&|\||\^|<|>', Operator),
(r'\s+', Text),
(r'\d+(?= |\Z)', Number),
(r'[^=\s\[\]{}()$"\'`\\<&|;]+', Text),
],
'string': [
(r'"', String.Double, '#pop'),
(r'(?s)(\\\\|\\[0-7]+|\\.|[^"\\$])+', String.Double),
include('interp'),
],
'paren': [
(r'\)', Keyword, '#pop'),
include('root'),
],
'math': [
(r'\)\)', Keyword, '#pop'),
(r'[-+*/%^|&]|\*\*|\|\|', Operator),
(r'\d+#\d+', Number),
(r'\d+#(?! )', Number),
(r'\d+', Number),
include('root'),
],
}
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@Pygments@py2@pygments@lexers@shell.py@.PATH_END.py
|
{
"filename": "_variant.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/layout/scene/zaxis/title/font/_variant.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class VariantValidator(_plotly_utils.basevalidators.EnumeratedValidator):
def __init__(
self,
plotly_name="variant",
parent_name="layout.scene.zaxis.title.font",
**kwargs,
):
super(VariantValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "plot"),
values=kwargs.pop(
"values",
[
"normal",
"small-caps",
"all-small-caps",
"all-petite-caps",
"petite-caps",
"unicase",
],
),
**kwargs,
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@layout@scene@zaxis@title@font@_variant.py@.PATH_END.py
|
{
"filename": "_textcase.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/scatter/marker/colorbar/title/font/_textcase.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class TextcaseValidator(_plotly_utils.basevalidators.EnumeratedValidator):
def __init__(
self,
plotly_name="textcase",
parent_name="scatter.marker.colorbar.title.font",
**kwargs,
):
super(TextcaseValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "colorbars"),
values=kwargs.pop("values", ["normal", "word caps", "upper", "lower"]),
**kwargs,
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@scatter@marker@colorbar@title@font@_textcase.py@.PATH_END.py
|
{
"filename": "doc_gen_1d.md",
"repo_name": "threeML/astromodels",
"repo_path": "astromodels_extracted/astromodels-master/scripts/doc_gen_1d.md",
"type": "Markdown"
}
|
---
jupyter:
jupytext:
formats: ipynb,md
text_representation:
extension: .md
format_name: markdown
format_version: '1.3'
jupytext_version: 1.11.2
kernelspec:
display_name: Python 3
language: python
name: python3
---
# func_title
```python nbsphinx="hidden" tags=[]
%%capture
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.simplefilter("ignore")
from astromodels.functions.function import _known_functions
from jupyterthemes import jtplot
jtplot.style(context="talk", fscale=1, ticks=True, grid=False)
%matplotlib inline
```
```python nbsphinx="hidden" tags=["parameters"]
func_name = "TbAbs"
x_scale="log"
y_scale="log"
linear_range = False
wide_energy_range = False
```
```python nbsphinx="hidden" tags=[]
func = _known_functions[func_name]()
if wide_energy_range:
energy_grid = np.geomspace(1e2,1e4,500)
else:
energy_grid = np.geomspace(2e-1,1e1,1000)
if linear_range:
energy_grid = np.linspace(-5,5,1000)
blue = "#4152E3"
red = "#E3414B"
green = "#41E39E"
```
## Description
```python
func.display()
```
## Shape
The shape of the function.
*If this is not a photon model but a prior or linear function, ignore the units, as these docs are auto-generated*
```python tags=["nbsphinx-thumbnail"]
fig, ax = plt.subplots()
ax.plot(energy_grid, func(energy_grid), color=blue)
ax.set_xlabel("energy (keV)")
ax.set_ylabel("photon flux")
ax.set_xscale(x_scale)
ax.set_yscale(y_scale)
```
## F$_{\nu}$
The F$_{\nu}$ shape of the photon model
*if this is not a photon model, please ignore this auto-generated plot*
```python
fig, ax = plt.subplots()
ax.plot(energy_grid, energy_grid * func(energy_grid), red)
ax.set_xlabel("energy (keV)")
ax.set_ylabel(r"energy flux (F$_{\nu}$)")
ax.set_xscale(x_scale)
ax.set_yscale(y_scale)
```
## $\nu$F$_{\nu}$
The $\nu$F$_{\nu}$ shape of the photon model
*if this is not a photon model, please ignore this auto-generated plot*
```python
fig, ax = plt.subplots()
ax.plot(energy_grid, energy_grid**2 * func(energy_grid), color=green)
ax.set_xlabel("energy (keV)")
ax.set_ylabel(r"$\nu$F$_{\nu}$")
ax.set_xscale(x_scale)
ax.set_yscale(y_scale)
```
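The three panels above differ only by powers of energy: the photon flux $N(E)$, the energy flux F$_{\nu} = E \cdot N(E)$, and $\nu$F$_{\nu} = E^2 \cdot N(E)$. As a quick sanity check of those conversions, here is a minimal sketch using a toy power-law spectrum — the helper functions and the $N(E) = E^{-2}$ form are illustrative assumptions, not part of astromodels:

```python
# Toy check of the flux conversions used in the plots above.
# Assumption: a simple power law N(E) = E**index stands in for an
# astromodels function; these helpers are hypothetical.

def photon_flux(energy, index=-2.0):
    """Toy power-law photon spectrum N(E) = E**index."""
    return energy ** index

def energy_flux(energy, index=-2.0):
    """F_nu = E * N(E)."""
    return energy * photon_flux(energy, index)

def nu_f_nu(energy, index=-2.0):
    """nu F_nu = E**2 * N(E)."""
    return energy ** 2 * photon_flux(energy, index)

# For index = -2, nu F_nu is flat: E**2 * E**-2 = 1 at every energy,
# which is why a -2 power law appears horizontal in a nu F_nu panel.
grid = [0.5, 1.0, 2.0, 10.0]
print([nu_f_nu(E) for E in grid])  # → [1.0, 1.0, 1.0, 1.0]
```

This is also a handy way to read the panels: a spectrum harder than $E^{-2}$ rises in $\nu$F$_{\nu}$, a softer one falls.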
|
threeMLREPO_NAMEastromodelsPATH_START.@astromodels_extracted@astromodels-master@scripts@doc_gen_1d.md@.PATH_END.py
|
{
"filename": "copy_injection_recovery.py",
"repo_name": "ThibeauWouters/TurboPE-BNS",
"repo_path": "TurboPE-BNS_extracted/TurboPE-BNS-main/injections/outdir_NRTv2/injection_81/copy_injection_recovery.py",
"type": "Python"
}
|
"""
Idea: try different learning rate schemes to try and fix the injections
"""
import psutil
p = psutil.Process()
p.cpu_affinity([0])
import os
os.environ['CUDA_VISIBLE_DEVICES'] = "3"
os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = "0.10"
import numpy as np
import argparse
# Regular imports
import argparse
import copy
import numpy as np
from astropy.time import Time
import time
import shutil
import json
import jax
jax.config.update("jax_enable_x64", True)
import jax.numpy as jnp
from jimgw.jim import Jim
from jimgw.single_event.detector import H1, L1, V1
from jimgw.single_event.likelihood import HeterodynedTransientLikelihoodFD, TransientLikelihoodFD
from jimgw.single_event.waveform import RippleTaylorF2, RippleIMRPhenomD_NRTidalv2, RippleIMRPhenomD_NRTidalv2_no_taper
from jimgw.prior import Uniform, Composite
import utils # our plotting and postprocessing utilities script
import optax
# Names of the parameters and their ranges for sampling parameters for the injection
NAMING = ['M_c', 'q', 's1_z', 's2_z', 'lambda_1', 'lambda_2', 'd_L', 't_c', 'phase_c', 'cos_iota', 'psi', 'ra', 'sin_dec']
PRIOR = {
"M_c": [0.8759659737275101, 2.6060030916165484],
"q": [0.5, 1.0],
"s1_z": [-0.05, 0.05],
"s2_z": [-0.05, 0.05],
"lambda_1": [0.0, 5000.0],
"lambda_2": [0.0, 5000.0],
"d_L": [30.0, 300.0],
"t_c": [-0.1, 0.1],
"phase_c": [0.0, 2 * jnp.pi],
"cos_iota": [-1.0, 1.0],
"psi": [0.0, jnp.pi],
"ra": [0.0, 2 * jnp.pi],
"sin_dec": [-1, 1]
}
################
### ARGPARSE ###
################
# TODO save these into a new file
def get_parser(**kwargs):
add_help = kwargs.get("add_help", True)
parser = argparse.ArgumentParser(
description="Perform an injection recovery.",
add_help=add_help,
)
# TODO os does not use them
# parser.add_argument(
# "--GPU-device",
# type=int,
# default=0,
# help="Select GPU index to use.",
# )
# parser.add_argument(
# "--GPU-memory-fraction",
# type=float,
# default=0.5,
# help="Select percentage of GPU memory to use.",
# )
parser.add_argument(
"--outdir",
type=str,
default="./outdir/",
help="Output directory for the injection.",
)
parser.add_argument(
"--load-existing-config",
type=bool,
default=False,
help="Whether to load and redo an existing injection (True) or to generate a new set of parameters (False).",
)
parser.add_argument(
"--N",
type=str,
default="",
help="Number (or generically, a custom identifier) of this injection, used to locate the output directory. If an empty string is passed (default), we generate a new injection.",
)
parser.add_argument(
"--SNR-threshold",
type=float,
default=12,
help="Skip injections with SNR below this threshold.",
)
parser.add_argument(
"--waveform-approximant",
type=str,
default="TaylorF2",
help="Which waveform approximant to use. Recommended to use TaylorF2 for now, NRTidalv2 might still be a bit unstable.",
)
parser.add_argument(
"--relative-binning-binsize",
type=int,
default=100,
help="Number of bins for the relative binning.",
)
parser.add_argument(
"--relative-binning-ref-params-equal-true-params",
type=bool,
default=True,
help="Whether to set the reference parameters in the relative binning code to injection parameters.",
)
parser.add_argument(
"--save-training-chains",
type=bool,
default=False,
help="Whether to save training chains or not (can be very large!)",
)
parser.add_argument(
"--eps-mass-matrix",
type=float,
default=1e-6,
help="Overall scale factor to rescale the step size of the local sampler.",
)
parser.add_argument(
"--which-local-sampler",
type=str,
default="MALA",
help="Which local sampler to use.",
)
parser.add_argument(
"--smart-initial-guess",
type=bool,
default=False,
help="Distribute the walkers around the injected parameters. TODO change this to reference parameters found by the relative binning code.",
)
parser.add_argument(
"--use-scheduler",
type=bool,
default=True,
help="Use a learning rate scheduler instead of a fixed learning rate.",
)
parser.add_argument(
"--stopping-criterion-global-acc",
type=float,
default=1.0,
help="Stop the run once we reach this global acceptance rate.",
)
parser.add_argument(
"--save-likelihood",
type=bool,
default=False,
help="Whether to save the likelihood object",
)
parser.add_argument(
"--tight-Mc-prior",
type=bool,
default=False,
help="Whether to use a tight prior on the Mc values or not",
)
# # TODO this has to be implemented
# parser.add_argument(
# "--autotune_local_sampler",
# type=bool,
# default=False,
# help="TODO Still has to be implemented! Specify whether to use autotuning for the local sampler.",
# )
return parser
####################
### Script setup ###
####################
def body(args):
"""
Run an injection and recovery. To get an explanation of the hyperparameters, go to:
- jim hyperparameters: https://github.com/ThibeauWouters/jim/blob/8cb4ef09fefe9b353bfb89273a4bc0ee52060d72/src/jimgw/jim.py#L26
- flowMC hyperparameters: https://github.com/ThibeauWouters/flowMC/blob/ad1a32dcb6984b2e178d7204a53d5da54b578073/src/flowMC/sampler/Sampler.py#L40
"""
start_time = time.time()
# TODO move and get these as arguments
# Deal with the hyperparameters
naming = NAMING
HYPERPARAMETERS = {
"flowmc":
{
"n_loop_training": 400,
"n_loop_production": 50,
"n_local_steps": 5,
"n_global_steps": 400,
"n_epochs": 50,
"n_chains": 1000,
"learning_rate": 0.001, # using a scheduler below
"max_samples": 50000,
"momentum": 0.9,
"batch_size": 50000,
"use_global": True,
"logging": True,
"keep_quantile": 0.0,
"local_autotune": None,
"train_thinning": 10,
"output_thinning": 30,
"n_sample_max": 10000,
"precompile": False,
"verbose": False,
"outdir": args.outdir,
"stopping_criterion_global_acc": args.stopping_criterion_global_acc,
"which_local_sampler": "MALA"
},
"jim":
{
"seed": 0,
"n_chains": 1000,
"num_layers": 10,
"hidden_size": [128, 128],
"num_bins": 8,
}
}
flowmc_hyperparameters = HYPERPARAMETERS["flowmc"]
jim_hyperparameters = HYPERPARAMETERS["jim"]
hyperparameters = {**flowmc_hyperparameters, **jim_hyperparameters}
# TODO can I just replace this with update dict?
for key, value in args.__dict__.items():
if key in hyperparameters:
hyperparameters[key] = value
### POLYNOMIAL SCHEDULER
if args.use_scheduler:
print("Using polynomial learning rate scheduler")
total_epochs = hyperparameters["n_epochs"] * hyperparameters["n_loop_training"]
start = int(total_epochs / 10)
start_lr = 1e-3
end_lr = 1e-5
power = 4.0
schedule_fn = optax.polynomial_schedule(start_lr, end_lr, power, total_epochs-start, transition_begin=start)
hyperparameters["learning_rate"] = schedule_fn
print(f"Saving output to {args.outdir}")
# Fetch waveform used
supported_waveforms = ["TaylorF2", "NRTidalv2", "IMRPhenomD_NRTidalv2"]
if args.waveform_approximant not in supported_waveforms:
print(f"Waveform approximant {args.waveform_approximant} not supported. Supported waveforms are {supported_waveforms}. Changing to TaylorF2.")
args.waveform_approximant = "TaylorF2"
if args.waveform_approximant == "TaylorF2":
ripple_waveform_fn = RippleTaylorF2
elif args.waveform_approximant in ["IMRPhenomD_NRTidalv2", "NRTv2", "NRTidalv2"]:
ripple_waveform_fn = RippleIMRPhenomD_NRTidalv2
else:
raise ValueError(f"Waveform approximant {args.waveform_approximant} not supported.")
# Before main code, check if outdir is correct dir format TODO improve with sys?
if args.outdir[-1] != "/":
args.outdir += "/"
outdir = f"{args.outdir}injection_{args.N}/"
# Get the prior bounds, both as 1D and 2D arrays
prior_ranges = jnp.array([PRIOR[name] for name in naming])
prior_low, prior_high = prior_ranges[:, 0], prior_ranges[:, 1]
bounds = np.array(list(PRIOR.values()))
# Now go over to creating parameters, and potentially check SNR cutoff
network_snr = 0.0
print(f"The SNR threshold parameter is set to {args.SNR_threshold}")
while network_snr < args.SNR_threshold:
# Generate the parameters or load them from an existing file
if args.load_existing_config:
config_path = f"{outdir}config.json"
print(f"Loading existing config, path: {config_path}")
config = json.load(open(config_path))
else:
print(f"Generating new config")
config = utils.generate_config(prior_low, prior_high, naming, args.N, args.outdir)
key = jax.random.PRNGKey(config["seed"])
# Save the given script hyperparams
with open(f"{outdir}script_args.json", 'w') as json_file:
json.dump(args.__dict__, json_file)
# Start injections
print("Injecting signals . . .")
waveform = ripple_waveform_fn(f_ref=config["fref"])
# Create frequency grid
freqs = jnp.arange(
config["fmin"],
config["f_sampling"] / 2, # the maximum frequency is half the sampling frequency (Nyquist)
1. / config["duration"]
)
# convert injected mass ratio to eta, and apply arccos and arcsin
q = config["q"]
eta = q / (1 + q) ** 2
iota = float(jnp.arccos(config["cos_iota"]))
dec = float(jnp.arcsin(config["sin_dec"]))
# Setup the timing setting for the injection
epoch = config["duration"] - config["post_trigger_duration"]
gmst = Time(config["trigger_time"], format='gps').sidereal_time('apparent', 'greenwich').rad
# Array of injection parameters
true_param = {
'M_c': config["M_c"], # chirp mass
'eta': eta, # symmetric mass ratio 0 < eta <= 0.25
's1_z': config["s1_z"], # aligned spin of primary component s1_z.
's2_z': config["s2_z"], # aligned spin of secondary component s2_z.
'lambda_1': config["lambda_1"], # tidal deformability of primary component lambda_1.
'lambda_2': config["lambda_2"], # tidal deformability of secondary component lambda_2.
'd_L': config["d_L"], # luminosity distance
't_c': config["t_c"], # timeshift w.r.t. trigger time
'phase_c': config["phase_c"], # merging phase
'iota': iota, # inclination angle
'psi': config["psi"], # polarization angle
'ra': config["ra"], # right ascension
'dec': dec # declination
}
# Get the true parameter values for the plots
truths = copy.deepcopy(true_param)
truths["eta"] = q
truths = np.fromiter(truths.values(), dtype=float)
detector_param = {
'ra': config["ra"],
'dec': dec,
'gmst': gmst,
'psi': config["psi"],
'epoch': epoch,
't_c': config["t_c"],
}
print(f"The injected parameters are {true_param}")
# Generating the geocenter waveform
h_sky = waveform(freqs, true_param)
# Setup interferometers
ifos = [H1, L1, V1]
psd_files = ["./psds/psd.txt", "./psds/psd.txt", "./psds/psd_virgo.txt"]
# inject signal into ifos
for idx, ifo in enumerate(ifos):
key, subkey = jax.random.split(key)
ifo.inject_signal(
subkey,
freqs,
h_sky,
detector_param,
psd_file=psd_files[idx] # note: the function load_psd actually loads the asd
)
print("Signal injected")
# Compute the SNR
h1_snr = utils.compute_snr(H1, h_sky, detector_param)
l1_snr = utils.compute_snr(L1, h_sky, detector_param)
v1_snr = utils.compute_snr(V1, h_sky, detector_param)
network_snr = np.sqrt(h1_snr**2 + l1_snr**2 + v1_snr**2)
# If the SNR is too low, we need to generate new parameters
if network_snr < args.SNR_threshold:
print(f"Network SNR is less than {args.SNR_threshold}, generating new parameters")
if args.load_existing_config:
raise ValueError("SNR is less than threshold, but loading existing config. This should not happen!")
print("H1 SNR:", h1_snr)
print("L1 SNR:", l1_snr)
print("V1 SNR:", v1_snr)
print("Network SNR:", network_snr)
print(f"Saving network SNR")
with open(outdir + 'network_snr.txt', 'w') as file:
file.write(str(network_snr))
print("Start prior setup")
# Priors without transformation
if args.tight_Mc_prior:
print("INFO: Using a tight chirp mass prior")
true_mc = true_param["M_c"]
Mc_prior = Uniform(true_mc - 0.1, true_mc + 0.1, naming=['M_c'])
else:
Mc_prior = Uniform(prior_low[0], prior_high[0], naming=['M_c'])
q_prior = Uniform(prior_low[1], prior_high[1], naming=['q'],
transforms={
'q': (
'eta',
lambda params: params['q'] / (1 + params['q']) ** 2
)
}
)
s1z_prior = Uniform(prior_low[2], prior_high[2], naming=['s1_z'])
s2z_prior = Uniform(prior_low[3], prior_high[3], naming=['s2_z'])
lambda_1_prior = Uniform(prior_low[4], prior_high[4], naming=['lambda_1'])
lambda_2_prior = Uniform(prior_low[5], prior_high[5], naming=['lambda_2'])
dL_prior = Uniform(prior_low[6], prior_high[6], naming=['d_L'])
tc_prior = Uniform(prior_low[7], prior_high[7], naming=['t_c'])
phic_prior = Uniform(prior_low[8], prior_high[8], naming=['phase_c'])
cos_iota_prior = Uniform(prior_low[9], prior_high[9], naming=["cos_iota"],
transforms={
"cos_iota": (
"iota",
lambda params: jnp.arccos(
jnp.arcsin(jnp.sin(params["cos_iota"] / 2 * jnp.pi)) * 2 / jnp.pi
),
)
},
)
psi_prior = Uniform(prior_low[10], prior_high[10], naming=["psi"])
ra_prior = Uniform(prior_low[11], prior_high[11], naming=["ra"])
sin_dec_prior = Uniform(prior_low[12], prior_high[12], naming=["sin_dec"],
transforms={
"sin_dec": (
"dec",
lambda params: jnp.arcsin(
jnp.arcsin(jnp.sin(params["sin_dec"] / 2 * jnp.pi)) * 2 / jnp.pi
),
)
},
)
# Save the prior bounds
print("Saving prior bounds")
utils.save_prior_bounds(prior_low, prior_high, outdir)
# Compose the prior
prior_list = [
Mc_prior,
q_prior,
s1z_prior,
s2z_prior,
lambda_1_prior,
lambda_2_prior,
dL_prior,
tc_prior,
phic_prior,
cos_iota_prior,
psi_prior,
ra_prior,
sin_dec_prior,
]
complete_prior = Composite(prior_list)
bounds = jnp.array([[p.xmin, p.xmax] for p in complete_prior.priors])
print("Finished prior setup")
print("Initializing likelihood")
if args.relative_binning_ref_params_equal_true_params:
ref_params = true_param
print("Using the true parameters as reference parameters for the relative binning")
else:
ref_params = None
print("Will search for reference waveform for relative binning")
# ### TODO remove
# # Explicitly fix relative binning for NRTidalv2
# if args.waveform_approximant in ["IMRPhenomD_NRTidalv2", "NRTidalv2"]:
# # ## TODO this might be broken?
# # # # Explicitly set the f_min and f_max used there
# # # relbin_kwargs = {"f_min": config["fmin"], "f_max": config["f_sampling"] / 2}
# # relbin_kwargs = {}
# # # Set the reference parameters at the ideal location for not breaking relative binning
# # print("Setting the reference parameters to not break the relative binning for NRTidalv2")
# # ref_params = true_param
# # ref_params["lambda_1"] = 1.0
# # ref_params["lambda_2"] = 1.0
# print("Now, the reference parameters are: ")
# print(ref_params)
# else:
# relbin_kwargs = {}
relbin_kwargs = {}
if args.waveform_approximant == "IMRPhenomD_NRTidalv2":
print("Using IMRPhenomD_NRTidalv2 no taper as the reference waveform for the likelihood")
reference_waveform = RippleIMRPhenomD_NRTidalv2_no_taper(f_ref=config["fref"])
else:
reference_waveform = waveform
likelihood = HeterodynedTransientLikelihoodFD(
ifos,
prior=complete_prior,
bounds=bounds,
n_bins = args.relative_binning_binsize,
waveform=waveform,
reference_waveform=reference_waveform,
trigger_time=config["trigger_time"],
duration=config["duration"],
post_trigger_duration=config["post_trigger_duration"],
ref_params=ref_params,
**relbin_kwargs
)
if args.save_likelihood:
print(f"INFO: Saving the likelihood to {outdir}")
import pickle
with open(f'{outdir}likelihood.pickle', 'wb') as handle:
pickle.dump(likelihood, handle, protocol=pickle.HIGHEST_PROTOCOL)
# Save the ref params
utils.save_relative_binning_ref_params(likelihood, outdir)
# Generate arguments for the local sampler
mass_matrix = jnp.eye(len(prior_list))
for idx, prior in enumerate(prior_list):
mass_matrix = mass_matrix.at[idx, idx].set(prior.xmax - prior.xmin) # fetch the prior range
local_sampler_arg = {'step_size': mass_matrix * args.eps_mass_matrix} # set the overall step size
hyperparameters["local_sampler_arg"] = local_sampler_arg
# Create jim object
jim = Jim(
likelihood,
complete_prior,
**hyperparameters
)
if args.smart_initial_guess:
n_chains = hyperparameters["n_chains"]
n_dim = len(prior_list)
initial_guess = utils.generate_smart_initial_guess(gmst, [H1, L1, V1], true_param, n_chains, n_dim, prior_low, prior_high)
# Plot it
utils.plot_chains(initial_guess, "initial_guess", outdir, truths = truths)
else:
initial_guess = jnp.array([])
### Finally, do the sampling
jim.sample(jax.random.PRNGKey(24), initial_guess = initial_guess)
# === Show results, save output ===
# Print a summary to screen:
jim.print_summary()
# Save and plot the results of the run
# - training phase
name = outdir + f'results_training.npz'
print(f"Saving samples to {name}")
state = jim.Sampler.get_sampler_state(training = True)
chains, log_prob, local_accs, global_accs, loss_vals = state["chains"], state["log_prob"], state["local_accs"], state["global_accs"], state["loss_vals"]
local_accs = jnp.mean(local_accs, axis=0)
global_accs = jnp.mean(global_accs, axis=0)
if args.save_training_chains:
np.savez(name, log_prob=log_prob, local_accs=local_accs, global_accs=global_accs, loss_vals=loss_vals, chains=chains)
else:
np.savez(name, log_prob=log_prob, local_accs=local_accs, global_accs=global_accs, loss_vals=loss_vals)
utils.plot_accs(local_accs, "Local accs (training)", "local_accs_training", outdir)
utils.plot_accs(global_accs, "Global accs (training)", "global_accs_training", outdir)
utils.plot_loss_vals(loss_vals, "Loss", "loss_vals", outdir)
utils.plot_log_prob(log_prob, "Log probability (training)", "log_prob_training", outdir)
# - production phase
name = outdir + f'results_production.npz'
state = jim.Sampler.get_sampler_state(training = False)
chains, log_prob, local_accs, global_accs = state["chains"], state["log_prob"], state["local_accs"], state["global_accs"]
local_accs = jnp.mean(local_accs, axis=0)
global_accs = jnp.mean(global_accs, axis=0)
np.savez(name, chains=chains, log_prob=log_prob, local_accs=local_accs, global_accs=global_accs)
utils.plot_accs(local_accs, "Local accs (production)", "local_accs_production", outdir)
utils.plot_accs(global_accs, "Global accs (production)", "global_accs_production", outdir)
utils.plot_log_prob(log_prob, "Log probability (production)", "log_prob_production", outdir)
# Plot the chains as corner plots
utils.plot_chains(chains, "chains_production", outdir, truths = truths)
# Save the NF and show a plot of samples from the flow
print("Saving the NF")
jim.Sampler.save_flow(outdir + "nf_model")
name = outdir + 'results_NF.npz'
chains = jim.Sampler.sample_flow(10_000)
np.savez(name, chains = chains)
# Finally, copy over this script to the outdir for reproducibility
shutil.copy2(__file__, outdir + "copy_injection_recovery.py")
print("Saving the jim hyperparameters")
jim.save_hyperparameters(outdir = outdir)
end_time = time.time()
runtime = end_time - start_time
print(f"Time taken: {runtime} seconds ({runtime/60} minutes)")
print("Saving runtime")
with open(outdir + 'runtime.txt', 'w') as file:
file.write(str(runtime))
print("Finished injection recovery successfully!")
############
### MAIN ###
############
def main(given_args = None):
parser = get_parser()
args = parser.parse_args()
print(given_args)
# Update with given args
if given_args is not None:
args.__dict__.update(given_args)
if args.load_existing_config and args.N == "":
raise ValueError("If load_existing_config is True, you need to specify the N argument to locate the existing injection. ")
print("------------------------------------")
print("Arguments script:")
for key, value in args.__dict__.items():
print(f"{key}: {value}")
print("------------------------------------")
print("Starting main code")
# If no N is given, fetch N from the structure of outdir
if len(args.N) == 0:
N = utils.get_N(args.outdir)
args.N = N
# TODO fix that os uses these
# import os
# os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = str(args.GPU_memory_fraction)
# os.environ['CUDA_VISIBLE_DEVICES'] = str(args.GPU_device)
# print(f"Running on GPU {args.GPU_device}")
# Execute the script
body(args)
if __name__ == "__main__":
main()
{
"filename": "Tutorial-SEOBNR_Constant_Coefficients.ipynb",
"repo_name": "zachetienne/nrpytutorial",
"repo_path": "nrpytutorial_extracted/nrpytutorial-master/in_progress-SEOBNR/Tutorial-SEOBNR_Constant_Coefficients.ipynb",
"type": "Jupyter Notebook"
}
# The Spinning Effective One-Body Constant Coefficients
## Author: Tyler Knowles
## This module documents the reduced spinning effective one-body constant coefficients with terms calibrated to numerical relativity simulations.
**Notebook Status:** <font color='red'><b> In progress </b></font>
**Validation Notes:** This module is under active development -- do ***not*** use the resulting code for scientific applications. In the future, this module will be validated against the LALSuite [SEOBNRv3/SEOBNRv3_opt code]( https://git.ligo.org/lscsoft/lalsuite) that was reviewed and approved for LIGO parameter estimation by the LIGO Scientific Collaboration.
## Introduction
### The Physical System of Interest
Consider two compact objects (e.g. black holes or neutron stars) with masses $m_{1}$, $m_{2}$ (in solar masses) and spin angular momenta ${\bf S}_{1}$, ${\bf S}_{2}$ in a binary system. The spinning effective one-body ("SEOB") Hamiltonian $H_{\rm real}$ (see [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.69)) describes the dynamics of this system. Certain terms of this $H_{\rm real}$ are calibrated to numerical relativity simulations, and we document those terms here.
The fits to numerical relativity rely on the following physical parameters.
1. Let $m_{1}$, $m_{2}$ be the masses of the compact objects in units of solar mass. We then define the symmetric mass ratio $\eta$ of the system to be (see discussion preceding [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.1))
\begin{equation*}
\eta = \frac{ m_{1} m_{2} }{ \left( m_{1} + m_{2} \right)^{2} }.
\end{equation*}
1. Let ${\bf S}_{1}$, ${\bf S}_{2}$ be the spins of the compact objects in units of FIXME. Combining [BB2010](https://arxiv.org/abs/0912.3517) Equations (5.2), (5.64), and (5.67) we have that the spin of the deformed Kerr background is
\begin{equation*}
{\bf S}_{\rm Kerr} = {\bf S}_{1} + {\bf S}_{2}.
\end{equation*}
From [BB2010](https://arxiv.org/abs/0912.3517) Equation (4.9), we then define the parameter $a$ which appears in metric potentials for the Kerr spacetime as
\begin{equation*}
a = \frac{ \left\lvert {\bf S}_{\rm Kerr} \right\rvert }{ M^{2} }
\end{equation*}
where $M = m_{1} + m_{2}$.
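These two definitions translate directly into code. Below is a minimal sketch (the helper names are ours, not part of this notebook's code):

```python
def symmetric_mass_ratio(m1, m2):
    """eta = m1*m2 / (m1 + m2)**2 for component masses in solar masses."""
    return (m1 * m2) / (m1 + m2) ** 2

def kerr_spin_parameter(S1, S2, m1, m2):
    """a = |S_Kerr| / M**2, with S_Kerr = S1 + S2 and M = m1 + m2."""
    S_kerr = [s1 + s2 for s1, s2 in zip(S1, S2)]  # componentwise vector sum
    M = m1 + m2
    return sum(s * s for s in S_kerr) ** 0.5 / M ** 2
```

For an equal-mass binary, `symmetric_mass_ratio` returns its maximum value of $0.25$.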
We also use the Euler–Mascheroni constant $\gamma$, [hard-coded in LALSuite](https://lscsoft.docs.ligo.org/lalsuite/lal/group___l_a_l_constants__h.html) with the following value:
\begin{equation*}
\gamma = 0.577215664901532860606512090082402431
\end{equation*}
Please note that throughout this notebook we adopt the following conventions:
1. $c = G = 1$ where $c$ is the speed of light in a vacuum and $G$ is Newton's gravitational constant, and
1. $m_{1} \ge m_{2}$.
### Citations
Throughout this module, we refer to
* [Buonanno, Chen, and Damour (2006)](https://arxiv.org/abs/gr-qc/0508067) as BCD2006.
LALSuite line numbers are taken from Git commit bba40f2 (see [LALSuite's GitLab page](https://git.ligo.org/lscsoft/lalsuite)).
```python
# Constants of fit to numerical relativity for the spinning effective one-body formulation
# Import necessary NumPy, SymPy, and SEOBNR modules
import numpy as np
# Compute fits to numerical relativity
def compute_const_coeffs(eta, EMgamma, a):
# Define frequently-used constants
asq = a*a
pisq = np.pi*np.pi
# Define constants that determine the fitting parameter K
# See the discussion in https://arxiv.org/pdf/1311.2544.pdf between Equations (3) and (4)
K0 = 1.712
K1 = -1.803949138004582
K2 = -39.77229225266885
K3 = 103.16588921239249
# Compute the fitting parameter K
# See https://arxiv.org/abs/0912.3517 Equation (5.67) and the discussion following Equation 6.9
# as well as https://arxiv.org/pdf/1311.2544.pdf
K = K0 + K1*eta + K2*eta*eta + K3*eta*eta*eta
# Define more frequently-used constants
EtaKm1 = eta*K - 1.
EtaKm1sq = EtaKm1*EtaKm1
# Compute the Post-Newtonian coefficients
# See https://arxiv.org/abs/0912.3517 Equations (5.77) to (5.81) and
# https://arxiv.org/pdf/1311.2544.pdf Equation (2)
Delta0 = K*(EtaKm1 - 1.)
Delta1 = -2.*(Delta0 + K)*EtaKm1
Delta1sq = Delta1*Delta1
Delta1cu = Delta1*Delta1sq
Delta1ft = Delta1cu*Delta1
Delta2 = 0.5*Delta1*(Delta1 - 4.*EtaKm1) - asq*EtaKm1sq*Delta0
Delta2sq = Delta2*Delta2
Delta3 = -Delta1cu/3. + Delta1*Delta2 + Delta1sq*EtaKm1 - 2.*(Delta2 - EtaKm1)*EtaKm1 - asq*Delta1*EtaKm1sq
Delta4 = 1./12.*(6*asq*(Delta1sq - 2*Delta2)*EtaKm1sq + 3*Delta1ft - 8*EtaKm1*Delta1cu - 12*Delta2*Delta1sq
+ 12*(2*EtaKm1*Delta2 + Delta3)*Delta1 + 12*(94./3. - 41./32.*pisq)*EtaKm1sq
+ 6*(Delta2*Delta2 - 4*Delta3*EtaKm1))
Delta5 = EtaKm1sq*(-4237./60. + 128./5.*EMgamma + 2275./512.*pisq - asq*(Delta1cu - 3.*Delta1*Delta2 + 3.*Delta3)/3.
- (Delta1ft*Delta1 - 5.*Delta1cu*Delta2 + 5.*Delta1*Delta2sq + 5.*Delta1sq*Delta3
- 5.*Delta2*Delta3 - 5.*Delta1*Delta4)/(5.*EtaKm1sq) + (Delta1ft - 4.*Delta1sq*Delta2
+ 2.*Delta2sq + 4.*Delta1*Delta3 - 4.*Delta4)/(2*EtaKm1) + (256./5.)*np.log(2))
Delta5l = (64./5.)*EtaKm1sq
# Spin-orbit (dSO) and spin-spin (dSS) adjustable parameters, calibrated to numerical relativity
dSO = -74.71 - 156.*eta + 627.5*eta*eta
dSS = 8.127 - 154.2*eta + 830.8*eta*eta
return K, Delta0, Delta1, Delta2, Delta3, Delta4, Delta5, Delta5l, dSO, dSS
```
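The fitting parameter $K$ defined in the cell above can also be checked in isolation. The following standalone sketch (the name `K_fit` is ours; the coefficients are copied from the cell) evaluates the cubic fit discussed between Equations (3) and (4) of arXiv:1311.2544:

```python
def K_fit(eta):
    """Cubic fit K(eta) to numerical relativity, per arXiv:1311.2544."""
    K0 = 1.712
    K1 = -1.803949138004582
    K2 = -39.77229225266885
    K3 = 103.16588921239249
    return K0 + K1 * eta + K2 * eta * eta + K3 * eta * eta * eta
```

In the test-particle limit $\eta = 0$, the fit reduces to $K_{0} = 1.712$.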
### Output this notebook to $\LaTeX$-formatted PDF file
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-SEOBNR_Constant_Coefficients.pdf](Tutorial-SEOBNR_Constant_Coefficients.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```python
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-SEOBNR_Constant_Coefficients")
```
{
"filename": "using_ipython.ipynb",
"repo_name": "Jashcraf/poke",
"repo_path": "poke_extracted/poke-main/docs/_build/html/notebooks/using_ipython.ipynb",
"type": "Jupyter Notebook"
}
# Using Jupyter Notebooks & Raytracer Specifics
_written by Jaren Ashcraft_
Talking to raytracers via their APIs is not without its headaches. Jupyter Notebooks were not supported by Poke for some time because running the `trace_raysets` method would crash the Jupyter kernel.
In this tutorial we cover how to run Poke entirely from Jupyter notebooks to trace rays in Zemax and CODE V, as well as some other aspects of Poke that are raytracer-specific.
## Ansys Zemax OpticStudio
[Ansys Zemax OpticStudio (ZOS)](https://www.ansys.com/products/optics-vr/ansys-zemax-opticstudio?utm_source=google&utm_medium=ppc&utm_campaign=product&utm_content=digital_optics_opticsstudio-rsa_trial_request_search-ad_en_global&utm_term=zemax%20opticstudio&campaignid=7013g000000cXF7AAM&creative=643132945089&keyword=zemax%20opticstudio&matchtype=e&network=g&device=c&s_kwcid=AL!17240!3!643132945089!e!!g!!zemax%20opticstudio&gclid=CjwKCAjw38SoBhB6EiwA8EQVLsM_LHeRhgA2SUfIU9kpZWRUOotDApRJ3NYs1HW2UXxW3L1wN5xJFBoCfS8QAvD_BwE) is a commercial ray tracer that is fairly commonplace in astronomy and is one of the "industry standard" ray tracers. ZOS is what Poke was originally built on, so we have a long(ish) history of working with its API. Before using Poke with ZOS there are a few things to note:
- Poke relies on the `Raytrace.dll` written by Michael Humphreys in [this Zemax Knowledgebase article](https://support.zemax.com/hc/en-us/articles/1500005576882-Batch-Processing-of-Ray-Trace-Data-using-ZOS-API-in-MATLAB-or-Python). Previously, to perform a batch ray trace one had to loop over the results, which slowed the runtime considerably. The `Raytrace.dll` does this all in compiled C# code, so it is done much faster.
- Poke also utilizes Michael Humphreys' [zosapi package](https://github.com/x68507/zosapi/), which essentially installs the ZOS-API boilerplate into your site-packages so that you don't have to copy it into every script you want to write. This is installed when Poke is installed on your device.
Now, to use Poke with a ZOS optical system in Jupyter notebooks we will start by setting up a Rayfront with one of our example files.
```python
from poke.poke_core import Rayfront
pth = "C:/Users/UASAL-OPTICS/Desktop/poke/test_files/PL&OS_CassegrainJonesPupil.zmx"
coating = 0.73677 + 1j*5.77450 # Al at 600nm
nrays = 64
wavelength = 0.6e-6
pupil_radius = 8323.3e-3/2
max_fov = 1e-3
# define surfaces
s1 = {
'surf':1,
'coating':coating,
'mode':'reflect'
}
s2 = {
'surf':2,
'coating':coating,
'mode':'reflect'
}
rf = Rayfront(nrays,wavelength,pupil_radius,max_fov)
rf.as_polarized([s1,s2])
```
norm fov = [0. 0.]
base ray shape (4, 3096)
Now we must initialize a connection to ZOS by importing the `zosapi` package
```python
import zosapi
zos = zosapi.App() # establish the connection
```
We can then proceed to carry out our simulation as normal
```python
rf.trace_rayset(pth)
```
tracing with global coordinates
tracing with global coordinates
1 Raysets traced through 2 surfaces
```python
import poke.plotting as plot
# let's compute a Jones pupil
rf.compute_jones_pupil()
plot.jones_pupil(rf)
```

## SYNOPSYS CODE V
SYNOPSYS CODE V (CODE V) is another industry-standard commercial ray tracer, and the one that I learned lens design on. CODE V's Python API is COM-interface driven, which means that we talk to CODE V from Python by sending commands to the command line. This was somewhat limiting from a performance point of view, because (as far as I know) there isn't a way in the API to ask CODE V to trace many rays at once, just one ray at a time over the command line. As the number of rays increases this gets expensive very quickly, so I had to think of another way of doing so.
`Rayfront.trace_raysets` now calls the faster `poke.raytrace.trace_through_cv` by default. This function does the following:
- opens a file called `intermediate_raytrace.seq` in `C:/CVUSER/`
- writes a macro in the file to create an input array of rays
- sends the input array to RAYRSI
- reads the output of RAYRSI to a buffer
- saves the buffer as a text file `intermediate_output.txt`
- executes the macro
- deletes `intermediate_output.txt` and `intermediate_raytrace.seq`
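The file lifecycle in those steps can be sketched as follows (a hypothetical helper; the actual implementation lives in `poke.raytrace.trace_through_cv`, and the real macro drives RAYRSI through the COM interface):

```python
import os

def run_with_intermediate(cvuser, macro_text, execute):
    """Write the .seq macro, run it via `execute`, and always clean up afterwards."""
    seq = os.path.join(cvuser, "intermediate_raytrace.seq")
    out = os.path.join(cvuser, "intermediate_output.txt")
    with open(seq, "w") as f:
        # the macro builds the input ray array, sends it to RAYRSI,
        # and dumps the output buffer to the text file
        f.write(macro_text)
    try:
        return execute(seq, out)  # stand-in for the CODE V COM call
    finally:
        for path in (seq, out):  # mirror the final cleanup step above
            if os.path.exists(path):
                os.remove(path)
```

Wrapping the cleanup in `finally` guarantees the intermediate files are deleted even if the ray trace raises an exception.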
To demo this, we simply replicate the steps above
```python
pth = "C:/Users/UASAL-OPTICS/Desktop/poke/test_files/PLOS_CassegrainJonesPupil.seq"
rf = Rayfront(nrays,wavelength,pupil_radius,max_fov,circle=False)
rf.as_polarized([s1,s2])
```
norm fov = [0. 0.]
base ray shape (4, 4096)
```python
rf.trace_rayset(pth)
```
in C:/Users/UASAL-OPTICS/Desktop/poke/test_files/PLOS_CassegrainJonesPupil.seq
CODE V warning: Warning: Buffer number 0 does not exist. Nothing deleted.
CODE V warning: Warning: Solves may be affected by a change in the reference wavelength
global coordinate reference set to surface 1
maxrays = 4096
CODE V warning: Warning: Buffer number 1 does not exist. Nothing deleted.
1 Raysets traced through 2 surfaces
```python
import poke.plotting as plot
rf.compute_jones_pupil()
plot.jones_pupil(rf)
```

And with a quick scan of our CVUSER directory we can see that there were no files of the type we saved remaining in the directory!
```python
import os
directory_files = os.listdir('C:/CVUSER/')
failed = False
for file in directory_files:
if (file == 'intermediate_output.txt') or (file == 'intermediate_raytrace.seq'):
failed = True
print(failed)
```
False
{
"filename": "Star_disk.py",
"repo_name": "bretonr/Icarus",
"repo_path": "Icarus_extracted/Icarus-master/Icarus/Core/Star_disk.py",
"type": "Python"
}
# Licensed under a 3-clause BSD style license - see LICENSE
from __future__ import print_function, division
__all__ = ["Star_disk"]
from ..Utils.import_modules import *
from .. import Utils
from .Star import Star
######################## class Star_disk ########################
class Star_disk(Star):
""" Star_disk(Star)
This class allows one to determine the flux of a star
in a binary system using an atmosphere grid. It is derived
from the Star class.
The noticeable difference is that this class contains the tools
to add the flux contribution from a disk in the system.
For the moment the contribution is restricted to a constant flux.
"""
def __init__(self, *args, **kwargs):
Star.__init__(self, *args, **kwargs)
def Flux_disk(self, phase, gravscale=None, atmo_grid=None, disk=0.):
"""Flux_disk(phase, gravscale=None, atmo_grid=None, disk=0.)
Return the flux interpolated from the atmosphere grid.
Adds a constant flux contribution due to a disk.
phase: orbital phase (in orbital fraction; 0: companion
in front, 0.5: companion behind).
gravscale (optional): gravitational scaling parameter.
atmo_grid (optional): atmosphere grid instance used to
calculate the flux.
disk (=0.): constant flux contribution from the accretion disk.
>>> self.Flux_disk(phase)
flux
"""
# print( "Flux_disk" )
if atmo_grid is None:
atmo_grid = self.atmo_grid
if gravscale is None:
gravscale = self._Gravscale()
mu = self._Mu(phase)
inds = (mu > 0).nonzero()[0]
# fsum = 0.
# for i in inds:
# fsum = fsum + self.area[i] * atmo_grid.Get_flux(self.logteff[i],self.logg[i]+gravscale,mu[i])
# if fsum.ndim == 2:
# fsum = self.area[inds].reshape((inds.size,1)) * fsum
# else:
# fsum = self.area[inds] * fsum
# fsum = fsum.sum(axis=0)
fsum = atmo_grid.Get_flux(self.logteff[inds],self.logg[inds]+gravscale,mu[inds],self.area[inds])
return fsum + disk
def Flux_disk_Keff(self, phase, gravscale=None, atmo_grid=None, disk=0.):
"""Flux_disk_Keff(phase, gravscale=None, atmo_grid=None, disk=0.)
Return the flux interpolated from the atmosphere grid.
Adds a constant flux contribution due to a disk.
Also returns the effective velocity of the star.
phase: orbital phase (in orbital fraction; 0: companion
in front, 0.5: companion behind).
gravscale (optional): gravitational scaling parameter.
atmo_grid (optional): atmosphere grid instance used to
calculate the flux.
disk (=0.): constant flux contribution from the accretion disk.
>>> self.Flux_disk_Keff(phase)
flux, Keff
"""
# print( "Flux_disk_Keff" )
if atmo_grid is None:
atmo_grid = self.atmo_grid
if gravscale is None:
gravscale = self._Gravscale()
mu = self._Mu(phase)
v = self._Velocity_surface(phase)
inds = (mu > 0).nonzero()[0]
# fsum = 0.
# for i in inds:
# fsum = fsum + self.area[i] * atmo_grid.Get_flux(self.logteff[i],self.logg[i]+gravscale,mu[i])
# if fsum.ndim == 2:
# fsum = self.area[inds].reshape((inds.size,1)) * fsum
# else:
# fsum = self.area[inds] * fsum
# fsum = fsum.sum(axis=0)
fsum, Keff = atmo_grid.Get_flux_Keff(self.logteff[inds],self.logg[inds]+gravscale,mu[inds],self.area[inds],v[inds])
return fsum + disk, Keff*cts.c
def Mag_flux_disk(self, phase, gravscale=None, a=None, atmo_grid=None, disk=0.):
"""Mag_flux_disk(phase, gravscale=None, a=None, atmo_grid=None, disk=0.)
Returns the magnitude interpolated from the atmosphere grid.
The flux is added to a constant quantity, which simulates the contribution
from an overly simplistic accretion disk.
phase: orbital phase (in orbital fraction; 0: companion
in front, 0.5: companion behind).
gravscale (optional): gravitational scaling parameter.
a (optional): orbital separation. If not provided, derives it
from q and asini (provided when creating the instance of
the class).
atmo_grid (optional): atmosphere grid instance to work from to
calculate the flux.
disk (=0.): the contribution from the accretion disk around the neutron star.
It is assumed to be a constant flux value that is added to the companion's
emission.
>>> self.Mag_flux_disk(phase)
mag_flux_disk
"""
# print( "Mag_flux_disk" )
if atmo_grid is None:
atmo_grid = self.atmo_grid
if a is not None:
proj = self._Proj(a)
else:
proj = self._Proj(self.separation)
if gravscale is None:
gravscale = self._Gravscale()
return -2.5*np.log10(self.Flux_disk(phase, gravscale=gravscale, atmo_grid=atmo_grid, disk=disk) * proj) + atmo_grid.meta['zp']
######################## class Star_disk ########################
{
"filename": "test_hookcaller.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/pluggy/py3/tests/test_hookcaller.py",
"type": "Python"
}
from typing import Callable
from typing import List
from typing import Sequence
from typing import TypeVar
import pytest
from pluggy import HookimplMarker
from pluggy import HookspecMarker
from pluggy import PluginManager
from pluggy import PluginValidationError
from pluggy._hooks import _HookCaller
from pluggy._hooks import HookImpl
hookspec = HookspecMarker("example")
hookimpl = HookimplMarker("example")
@pytest.fixture
def hc(pm: PluginManager) -> _HookCaller:
class Hooks:
@hookspec
def he_method1(self, arg: object) -> None:
pass
pm.add_hookspecs(Hooks)
return pm.hook.he_method1
FuncT = TypeVar("FuncT", bound=Callable[..., object])
class AddMeth:
def __init__(self, hc: _HookCaller) -> None:
self.hc = hc
def __call__(
self,
tryfirst: bool = False,
trylast: bool = False,
hookwrapper: bool = False,
wrapper: bool = False,
) -> Callable[[FuncT], FuncT]:
def wrap(func: FuncT) -> FuncT:
hookimpl(
tryfirst=tryfirst,
trylast=trylast,
hookwrapper=hookwrapper,
wrapper=wrapper,
)(func)
self.hc._add_hookimpl(
HookImpl(None, "<temp>", func, func.example_impl), # type: ignore[attr-defined]
)
return func
return wrap
@pytest.fixture
def addmeth(hc: _HookCaller) -> AddMeth:
return AddMeth(hc)
def funcs(hookmethods: Sequence[HookImpl]) -> List[Callable[..., object]]:
return [hookmethod.function for hookmethod in hookmethods]
def test_adding_nonwrappers(hc: _HookCaller, addmeth: AddMeth) -> None:
@addmeth()
def he_method1() -> None:
pass
@addmeth()
def he_method2() -> None:
pass
@addmeth()
def he_method3() -> None:
pass
assert funcs(hc.get_hookimpls()) == [he_method1, he_method2, he_method3]
def test_adding_nonwrappers_trylast(hc: _HookCaller, addmeth: AddMeth) -> None:
@addmeth()
def he_method1_middle() -> None:
pass
@addmeth(trylast=True)
def he_method1() -> None:
pass
@addmeth()
def he_method1_b() -> None:
pass
assert funcs(hc.get_hookimpls()) == [he_method1, he_method1_middle, he_method1_b]
def test_adding_nonwrappers_trylast3(hc: _HookCaller, addmeth: AddMeth) -> None:
@addmeth()
def he_method1_a() -> None:
pass
@addmeth(trylast=True)
def he_method1_b() -> None:
pass
@addmeth()
def he_method1_c() -> None:
pass
@addmeth(trylast=True)
def he_method1_d() -> None:
pass
assert funcs(hc.get_hookimpls()) == [
he_method1_d,
he_method1_b,
he_method1_a,
he_method1_c,
]
def test_adding_nonwrappers_trylast2(hc: _HookCaller, addmeth: AddMeth) -> None:
@addmeth()
def he_method1_middle() -> None:
pass
@addmeth()
def he_method1_b() -> None:
pass
@addmeth(trylast=True)
def he_method1() -> None:
pass
assert funcs(hc.get_hookimpls()) == [he_method1, he_method1_middle, he_method1_b]
def test_adding_nonwrappers_tryfirst(hc: _HookCaller, addmeth: AddMeth) -> None:
@addmeth(tryfirst=True)
def he_method1() -> None:
pass
@addmeth()
def he_method1_middle() -> None:
pass
@addmeth()
def he_method1_b() -> None:
pass
assert funcs(hc.get_hookimpls()) == [he_method1_middle, he_method1_b, he_method1]
def test_adding_wrappers_ordering(hc: _HookCaller, addmeth: AddMeth) -> None:
@addmeth(hookwrapper=True)
def he_method1():
yield
@addmeth(wrapper=True)
def he_method1_fun():
yield
@addmeth()
def he_method1_middle():
return
@addmeth(hookwrapper=True)
def he_method3_fun():
yield
@addmeth(hookwrapper=True)
def he_method3():
yield
assert funcs(hc.get_hookimpls()) == [
he_method1_middle,
he_method1,
he_method1_fun,
he_method3_fun,
he_method3,
]
def test_adding_wrappers_ordering_tryfirst(hc: _HookCaller, addmeth: AddMeth) -> None:
@addmeth(hookwrapper=True, tryfirst=True)
def he_method1():
yield
@addmeth(hookwrapper=True)
def he_method2():
yield
@addmeth(wrapper=True, tryfirst=True)
def he_method3():
yield
assert funcs(hc.get_hookimpls()) == [he_method2, he_method1, he_method3]
def test_adding_wrappers_complex(hc: _HookCaller, addmeth: AddMeth) -> None:
assert funcs(hc.get_hookimpls()) == []
@addmeth(hookwrapper=True, trylast=True)
def m1():
yield
assert funcs(hc.get_hookimpls()) == [m1]
@addmeth()
def m2() -> None:
...
assert funcs(hc.get_hookimpls()) == [m2, m1]
@addmeth(trylast=True)
def m3() -> None:
...
assert funcs(hc.get_hookimpls()) == [m3, m2, m1]
@addmeth(hookwrapper=True)
def m4() -> None:
...
assert funcs(hc.get_hookimpls()) == [m3, m2, m1, m4]
@addmeth(wrapper=True, tryfirst=True)
def m5():
yield
assert funcs(hc.get_hookimpls()) == [m3, m2, m1, m4, m5]
@addmeth(tryfirst=True)
def m6() -> None:
...
assert funcs(hc.get_hookimpls()) == [m3, m2, m6, m1, m4, m5]
@addmeth()
def m7() -> None:
...
assert funcs(hc.get_hookimpls()) == [m3, m2, m7, m6, m1, m4, m5]
@addmeth(wrapper=True)
def m8():
yield
assert funcs(hc.get_hookimpls()) == [m3, m2, m7, m6, m1, m4, m8, m5]
@addmeth(trylast=True)
def m9() -> None:
...
assert funcs(hc.get_hookimpls()) == [m9, m3, m2, m7, m6, m1, m4, m8, m5]
@addmeth(tryfirst=True)
def m10() -> None:
...
assert funcs(hc.get_hookimpls()) == [m9, m3, m2, m7, m6, m10, m1, m4, m8, m5]
@addmeth(hookwrapper=True, trylast=True)
def m11() -> None:
...
assert funcs(hc.get_hookimpls()) == [m9, m3, m2, m7, m6, m10, m11, m1, m4, m8, m5]
@addmeth(wrapper=True)
def m12():
yield
assert funcs(hc.get_hookimpls()) == [
m9,
m3,
m2,
m7,
m6,
m10,
m11,
m1,
m4,
m8,
m12,
m5,
]
@addmeth()
def m13() -> None:
...
assert funcs(hc.get_hookimpls()) == [
m9,
m3,
m2,
m7,
m13,
m6,
m10,
m11,
m1,
m4,
m8,
m12,
m5,
]
def test_hookspec(pm: PluginManager) -> None:
class HookSpec:
@hookspec()
def he_myhook1(arg1) -> None:
pass
@hookspec(firstresult=True)
def he_myhook2(arg1) -> None:
pass
@hookspec(firstresult=False)
def he_myhook3(arg1) -> None:
pass
pm.add_hookspecs(HookSpec)
assert pm.hook.he_myhook1.spec is not None
assert not pm.hook.he_myhook1.spec.opts["firstresult"]
assert pm.hook.he_myhook2.spec is not None
assert pm.hook.he_myhook2.spec.opts["firstresult"]
assert pm.hook.he_myhook3.spec is not None
assert not pm.hook.he_myhook3.spec.opts["firstresult"]
@pytest.mark.parametrize("name", ["hookwrapper", "optionalhook", "tryfirst", "trylast"])
@pytest.mark.parametrize("val", [True, False])
def test_hookimpl(name: str, val: bool) -> None:
@hookimpl(**{name: val}) # type: ignore[misc,call-overload]
def he_myhook1(arg1) -> None:
pass
if val:
assert he_myhook1.example_impl.get(name)
else:
assert not hasattr(he_myhook1, name)
def test_hookrelay_registry(pm: PluginManager) -> None:
"""Verify hook caller instances are registered by name onto the relay
and can be likewise unregistered."""
class Api:
@hookspec
def hello(self, arg: object) -> None:
"api hook 1"
pm.add_hookspecs(Api)
hook = pm.hook
assert hasattr(hook, "hello")
assert repr(hook.hello).find("hello") != -1
class Plugin:
@hookimpl
def hello(self, arg):
return arg + 1
plugin = Plugin()
pm.register(plugin)
out = hook.hello(arg=3)
assert out == [4]
assert not hasattr(hook, "world")
pm.unregister(plugin)
assert hook.hello(arg=3) == []
def test_hookrelay_registration_by_specname(pm: PluginManager) -> None:
"""Verify hook caller instances may also be registered by specifying a
specname option to the hookimpl"""
class Api:
@hookspec
def hello(self, arg: object) -> None:
"api hook 1"
pm.add_hookspecs(Api)
hook = pm.hook
assert hasattr(hook, "hello")
assert len(pm.hook.hello.get_hookimpls()) == 0
class Plugin:
@hookimpl(specname="hello")
def foo(self, arg: int) -> int:
return arg + 1
plugin = Plugin()
pm.register(plugin)
out = hook.hello(arg=3)
assert out == [4]
def test_hookrelay_registration_by_specname_raises(pm: PluginManager) -> None:
"""Verify using specname still raises the types of errors during registration as it
would have without using specname."""
class Api:
@hookspec
def hello(self, arg: object) -> None:
"api hook 1"
pm.add_hookspecs(Api)
# make sure a bad signature still raises an error when using specname
class Plugin:
@hookimpl(specname="hello")
def foo(self, arg: int, too, many, args) -> int:
return arg + 1
with pytest.raises(PluginValidationError):
pm.register(Plugin())
# make sure check_pending still fails if specname doesn't have a
# corresponding spec. EVEN if the function name matches one.
class Plugin2:
@hookimpl(specname="bar")
def hello(self, arg: int) -> int:
return arg + 1
pm.register(Plugin2())
with pytest.raises(PluginValidationError):
pm.check_pending()
def test_hook_conflict(pm: PluginManager) -> None:
class Api1:
@hookspec
def conflict(self) -> None:
"""Api1 hook"""
class Api2:
@hookspec
def conflict(self) -> None:
"""Api2 hook"""
pm.add_hookspecs(Api1)
with pytest.raises(ValueError) as exc:
pm.add_hookspecs(Api2)
assert str(exc.value) == (
"Hook 'conflict' is already registered within namespace "
"<class '__tests__.test_hookcaller.test_hook_conflict.<locals>.Api1'>"
)
{
"filename": "yolov8.md",
"repo_name": "ultralytics/ultralytics",
"repo_path": "ultralytics_extracted/ultralytics-main/docs/en/models/yolov8.md",
"type": "Markdown"
}
---
comments: true
description: Discover YOLOv8, the latest advancement in real-time object detection, optimizing performance with an array of pre-trained models for diverse tasks.
keywords: YOLOv8, real-time object detection, YOLO series, Ultralytics, computer vision, advanced object detection, AI, machine learning, deep learning
---
# Ultralytics YOLOv8
## Overview
YOLOv8 is the latest iteration in the YOLO series of real-time object detectors, offering cutting-edge performance in terms of accuracy and speed. Building upon the advancements of previous YOLO versions, YOLOv8 introduces new features and optimizations that make it an ideal choice for various [object detection](https://www.ultralytics.com/glossary/object-detection) tasks in a wide range of applications.

<p align="center">
<br>
<iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/Na0HvJ4hkk0"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Ultralytics YOLOv8 Model Overview
</p>
## Key Features
- **Advanced Backbone and Neck Architectures:** YOLOv8 employs state-of-the-art backbone and neck architectures, resulting in improved [feature extraction](https://www.ultralytics.com/glossary/feature-extraction) and object detection performance.
- **Anchor-free Split Ultralytics Head:** YOLOv8 adopts an anchor-free split Ultralytics head, which contributes to better accuracy and a more efficient detection process compared to anchor-based approaches.
- **Optimized Accuracy-Speed Tradeoff:** With a focus on maintaining an optimal balance between accuracy and speed, YOLOv8 is suitable for real-time object detection tasks in diverse application areas.
- **Variety of Pre-trained Models:** YOLOv8 offers a range of pre-trained models to cater to various tasks and performance requirements, making it easier to find the right model for your specific use case.
## Supported Tasks and Modes
The YOLOv8 series offers a diverse range of models, each specialized for specific tasks in computer vision. These models are designed to cater to various requirements, from object detection to more complex tasks like [instance segmentation](https://www.ultralytics.com/glossary/instance-segmentation), pose/keypoints detection, oriented object detection, and classification.
Each variant of the YOLOv8 series is optimized for its respective task, ensuring high performance and accuracy. Additionally, these models are compatible with various operational modes including [Inference](../modes/predict.md), [Validation](../modes/val.md), [Training](../modes/train.md), and [Export](../modes/export.md), facilitating their use in different stages of deployment and development.
| Model | Filenames | Task | Inference | Validation | Training | Export |
| ----------- | -------------------------------------------------------------------------------------------------------------- | -------------------------------------------- | --------- | ---------- | -------- | ------ |
| YOLOv8 | `yolov8n.pt` `yolov8s.pt` `yolov8m.pt` `yolov8l.pt` `yolov8x.pt` | [Detection](../tasks/detect.md) | ✅ | ✅ | ✅ | ✅ |
| YOLOv8-seg | `yolov8n-seg.pt` `yolov8s-seg.pt` `yolov8m-seg.pt` `yolov8l-seg.pt` `yolov8x-seg.pt` | [Instance Segmentation](../tasks/segment.md) | ✅ | ✅ | ✅ | ✅ |
| YOLOv8-pose | `yolov8n-pose.pt` `yolov8s-pose.pt` `yolov8m-pose.pt` `yolov8l-pose.pt` `yolov8x-pose.pt` `yolov8x-pose-p6.pt` | [Pose/Keypoints](../tasks/pose.md) | ✅ | ✅ | ✅ | ✅ |
| YOLOv8-obb | `yolov8n-obb.pt` `yolov8s-obb.pt` `yolov8m-obb.pt` `yolov8l-obb.pt` `yolov8x-obb.pt` | [Oriented Detection](../tasks/obb.md) | ✅ | ✅ | ✅ | ✅ |
| YOLOv8-cls | `yolov8n-cls.pt` `yolov8s-cls.pt` `yolov8m-cls.pt` `yolov8l-cls.pt` `yolov8x-cls.pt` | [Classification](../tasks/classify.md) | ✅ | ✅ | ✅ | ✅ |
This table provides an overview of the YOLOv8 model variants, highlighting their applicability in specific tasks and their compatibility with various operational modes such as Inference, Validation, Training, and Export. It showcases the versatility and robustness of the YOLOv8 series, making them suitable for a variety of applications in [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv).
## Performance Metrics
!!! performance
=== "Detection (COCO)"
See [Detection Docs](../tasks/detect.md) for usage examples with these models trained on [COCO](../datasets/detect/coco.md), which include 80 pre-trained classes.
| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ------------------------------------------------------------------------------------ | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n.pt) | 640 | 37.3 | 80.4 | 0.99 | 3.2 | 8.7 |
| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s.pt) | 640 | 44.9 | 128.4 | 1.20 | 11.2 | 28.6 |
| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m.pt) | 640 | 50.2 | 234.7 | 1.83 | 25.9 | 78.9 |
| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l.pt) | 640 | 52.9 | 375.2 | 2.39 | 43.7 | 165.2 |
| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x.pt) | 640 | 53.9 | 479.1 | 3.53 | 68.2 | 257.8 |
=== "Detection (Open Images V7)"
See [Detection Docs](../tasks/detect.md) for usage examples with these models trained on [Open Image V7](../datasets/detect/open-images-v7.md), which include 600 pre-trained classes.
| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ----------------------------------------------------------------------------------------- | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-oiv7.pt) | 640 | 18.4 | 142.4 | 1.21 | 3.5 | 10.5 |
| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-oiv7.pt) | 640 | 27.7 | 183.1 | 1.40 | 11.4 | 29.7 |
| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-oiv7.pt) | 640 | 33.6 | 408.5 | 2.26 | 26.2 | 80.6 |
| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-oiv7.pt) | 640 | 34.9 | 596.9 | 2.43 | 44.1 | 167.4 |
| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-oiv7.pt) | 640 | 36.3 | 860.6 | 3.56 | 68.7 | 260.6 |
=== "Segmentation (COCO)"
See [Segmentation Docs](../tasks/segment.md) for usage examples with these models trained on [COCO](../datasets/segment/coco.md), which include 80 pre-trained classes.
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| -------------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
| [YOLOv8s-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 |
| [YOLOv8m-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 |
| [YOLOv8l-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | 572.4 | 2.79 | 46.0 | 220.5 |
| [YOLOv8x-seg](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | 712.1 | 4.02 | 71.8 | 344.1 |
=== "Classification (ImageNet)"
See [Classification Docs](../tasks/classify.md) for usage examples with these models trained on [ImageNet](../datasets/classify/imagenet.md), which include 1000 pre-trained classes.
| Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
| -------------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | ------------------------------ | ----------------------------------- | ------------------ | ------------------------ |
| [YOLOv8n-cls](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-cls.pt) | 224 | 69.0 | 88.3 | 12.9 | 0.31 | 2.7 | 4.3 |
| [YOLOv8s-cls](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-cls.pt) | 224 | 73.8 | 91.7 | 23.4 | 0.35 | 6.4 | 13.5 |
| [YOLOv8m-cls](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-cls.pt) | 224 | 76.8 | 93.5 | 85.4 | 0.62 | 17.0 | 42.7 |
| [YOLOv8l-cls](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-cls.pt) | 224 | 76.8 | 93.5 | 163.0 | 0.87 | 37.5 | 99.7 |
| [YOLOv8x-cls](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-cls.pt) | 224 | 79.0 | 94.6 | 232.0 | 1.01 | 57.4 | 154.8 |
=== "Pose (COCO)"
See [Pose Estimation Docs](../tasks/pose.md) for usage examples with these models trained on [COCO](../datasets/pose/coco.md), which include 1 pre-trained class, 'person'.
| Model | size<br><sup>(pixels) | mAP<sup>pose<br>50-95 | mAP<sup>pose<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ---------------------------------------------------------------------------------------------------- | --------------------- | --------------------- | ------------------ | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n-pose](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-pose.pt) | 640 | 50.4 | 80.1 | 131.8 | 1.18 | 3.3 | 9.2 |
| [YOLOv8s-pose](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-pose.pt) | 640 | 60.0 | 86.2 | 233.2 | 1.42 | 11.6 | 30.2 |
| [YOLOv8m-pose](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-pose.pt) | 640 | 65.0 | 88.8 | 456.3 | 2.00 | 26.4 | 81.0 |
| [YOLOv8l-pose](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-pose.pt) | 640 | 67.6 | 90.0 | 784.5 | 2.59 | 44.4 | 168.6 |
| [YOLOv8x-pose](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-pose.pt) | 640 | 69.2 | 90.2 | 1607.1 | 3.73 | 69.4 | 263.2 |
| [YOLOv8x-pose-p6](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-pose-p6.pt) | 1280 | 71.6 | 91.2 | 4088.7 | 10.04 | 99.1 | 1066.4 |
=== "OBB (DOTAv1)"
See [Oriented Detection Docs](../tasks/obb.md) for usage examples with these models trained on [DOTAv1](../datasets/obb/dota-v2.md#dota-v10), which include 15 pre-trained classes.
| Model | size<br><sup>(pixels) | mAP<sup>test<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
|----------------------------------------------------------------------------------------------|-----------------------| -------------------- | -------------------------------- | ------------------------------------- | -------------------- | ----------------- |
| [YOLOv8n-obb](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-obb.pt) | 1024 | 78.0 | 204.77 | 3.57 | 3.1 | 23.3 |
| [YOLOv8s-obb](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-obb.pt) | 1024 | 79.5 | 424.88 | 4.07 | 11.4 | 76.3 |
| [YOLOv8m-obb](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-obb.pt) | 1024 | 80.5 | 763.48 | 7.61 | 26.4 | 208.6 |
| [YOLOv8l-obb](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-obb.pt) | 1024 | 80.7 | 1278.42 | 11.83 | 44.5 | 433.8 |
| [YOLOv8x-obb](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-obb.pt) | 1024 | 81.36 | 1759.10 | 13.23 | 69.5 | 676.7 |
## Usage Examples
This section provides simple YOLOv8 training and inference examples. For full documentation on these and other [modes](../modes/index.md), see the [Predict](../modes/predict.md), [Train](../modes/train.md), [Val](../modes/val.md) and [Export](../modes/export.md) docs pages.
Note that the example below is for YOLOv8 [Detect](../tasks/detect.md) models for object detection. For additional supported tasks, see the [Segment](../tasks/segment.md), [Classify](../tasks/classify.md), [OBB](../tasks/obb.md) and [Pose](../tasks/pose.md) docs.
!!! example
=== "Python"
[PyTorch](https://www.ultralytics.com/glossary/pytorch) pretrained `*.pt` models as well as configuration `*.yaml` files can be passed to the `YOLO()` class to create a model instance in python:
```python
from ultralytics import YOLO
# Load a COCO-pretrained YOLOv8n model
model = YOLO("yolov8n.pt")
# Display model information (optional)
model.info()
# Train the model on the COCO8 example dataset for 100 epochs
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
# Run inference with the YOLOv8n model on the 'bus.jpg' image
results = model("path/to/bus.jpg")
```
=== "CLI"
CLI commands are available to directly run the models:
```bash
# Load a COCO-pretrained YOLOv8n model and train it on the COCO8 example dataset for 100 epochs
yolo train model=yolov8n.pt data=coco8.yaml epochs=100 imgsz=640
# Load a COCO-pretrained YOLOv8n model and run inference on the 'bus.jpg' image
yolo predict model=yolov8n.pt source=path/to/bus.jpg
```
## Citations and Acknowledgements
!!! tip "Ultralytics YOLOv8 Publication"
Ultralytics has not published a formal research paper for YOLOv8 due to the rapidly evolving nature of the models. We focus on advancing the technology and making it easier to use, rather than producing static documentation. For the most up-to-date information on YOLO architecture, features, and usage, please refer to our [GitHub repository](https://github.com/ultralytics/ultralytics) and [documentation](https://docs.ultralytics.com/).
If you use the YOLOv8 model or any other software from this repository in your work, please cite it using the following format:
!!! quote ""
=== "BibTeX"
```bibtex
@software{yolov8_ultralytics,
author = {Glenn Jocher and Ayush Chaurasia and Jing Qiu},
title = {Ultralytics YOLOv8},
version = {8.0.0},
year = {2023},
url = {https://github.com/ultralytics/ultralytics},
orcid = {0000-0001-5950-6979, 0000-0002-7603-6750, 0000-0003-3783-7069},
license = {AGPL-3.0}
}
```
Please note that the DOI is pending and will be added to the citation once it is available. YOLOv8 models are provided under [AGPL-3.0](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) and [Enterprise](https://www.ultralytics.com/license) licenses.
## FAQ
### What is YOLOv8 and how does it differ from previous YOLO versions?
YOLOv8 is the latest iteration in the Ultralytics YOLO series, designed to improve real-time object detection performance with advanced features. Unlike earlier versions, YOLOv8 incorporates an **anchor-free split Ultralytics head**, state-of-the-art backbone and neck architectures, and offers an optimized [accuracy](https://www.ultralytics.com/glossary/accuracy)-speed tradeoff, making it ideal for diverse applications. For more details, check the [Overview](#overview) and [Key Features](#key-features) sections.
### How can I use YOLOv8 for different computer vision tasks?
YOLOv8 supports a wide range of computer vision tasks, including object detection, instance segmentation, pose/keypoints detection, oriented object detection, and classification. Each model variant is optimized for its specific task and compatible with various operational modes like [Inference](../modes/predict.md), [Validation](../modes/val.md), [Training](../modes/train.md), and [Export](../modes/export.md). Refer to the [Supported Tasks and Modes](#supported-tasks-and-modes) section for more information.
### What are the performance metrics for YOLOv8 models?
YOLOv8 models achieve state-of-the-art performance across various benchmarking datasets. For instance, the YOLOv8n model achieves a mAP (mean Average Precision) of 37.3 on the COCO dataset and a speed of 0.99 ms on A100 TensorRT. Detailed performance metrics for each model variant across different tasks and datasets can be found in the [Performance Metrics](#performance-metrics) section.
### How do I train a YOLOv8 model?
Training a YOLOv8 model can be done using either Python or CLI. Below are examples for training a model using a COCO-pretrained YOLOv8 model on the COCO8 dataset for 100 [epochs](https://www.ultralytics.com/glossary/epoch):
!!! example
=== "Python"
```python
from ultralytics import YOLO
# Load a COCO-pretrained YOLOv8n model
model = YOLO("yolov8n.pt")
# Train the model on the COCO8 example dataset for 100 epochs
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
yolo train model=yolov8n.pt data=coco8.yaml epochs=100 imgsz=640
```
For further details, visit the [Training](../modes/train.md) documentation.
### Can I benchmark YOLOv8 models for performance?
Yes, YOLOv8 models can be benchmarked for performance in terms of speed and accuracy across various export formats. You can use PyTorch, ONNX, TensorRT, and more for benchmarking. Below are example commands for benchmarking using Python and CLI:
!!! example
=== "Python"
```python
from ultralytics.utils.benchmarks import benchmark
# Benchmark on GPU
benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)
```
=== "CLI"
```bash
yolo benchmark model=yolov8n.pt data='coco8.yaml' imgsz=640 half=False device=0
```
For additional information, check the [Performance Metrics](#performance-metrics) section.
|
ultralyticsREPO_NAMEultralyticsPATH_START.@ultralytics_extracted@ultralytics-main@docs@en@models@yolov8.md@.PATH_END.py
|
{
"filename": "README.md",
"repo_name": "exo-cesm/CESM2.1.3",
"repo_path": "CESM2.1.3_extracted/CESM2.1.3-main/O2_Earth_analogues/cases/10pc_PAL_O2_CH4_em0.1/README.md",
"type": "Markdown"
}
|
# 10% PAL CH4 em0.1 case setup instructions
## Warning - a separate CESM build is required to run this case
This case is a perturbation of the 10% PAL case. Instead of a fixed CH<sub>4</sub> mixing ratio, a fixed flux roughly equal to the present-day flux of 50 Tg/yr of CH<sub>4</sub> was specified at the lower boundary. A build in which CH<sub>4</sub> is removed from the greenhouse gas checklist is required. This check takes place in the CESM source code, in CESM/components/cam/bld/build-namelist.
## If setting the case up on ARC4:
Run buildcase_10pc_O2_CH4_em0.1_ARC4. A case will be created with the name b.e21.BWma1850.f19_g17.10pc_o2.CH4_em0.1.my_case.001. Change the name in buildcase_10pc_O2_CH4_em0.1_ARC4 if you would like a different name.
The restart file for the 10% PAL case is currently at /nobackup/Alternative_Earths/restarts/Ten_pc_O2_CH4_em0.1/
A case will be created with the project set to planet and the job queue set to planet.q. If switching to the main queue, run ./xmlchange in the case directory to set the project to blank (no input) and the job queue to 40core-192G.q.
A user_nl_cam example will be copied into the case directory. This specifies the CH<sub>4</sub> emissions file and removes the fixed mixing ratio.
The following modified files will be placed in SourceMods/src.cam/:
chemistry.F90; mo_tgcm_ubc.F90; upper_bc.F90; mo_jshort.F90; mo_photo.F90
These source mods are important for modifying the upper boundary conditions, and for updating the absorption in the Schumann-Runge bands. Note that Cooke et al. 2022 did not have the updated absorption in the Schumann-Runge bands.
## If starting from scratch:
The resolution used was 2.5 by 1.875 degrees (f19_g17)
Example case setup command:
./create_newcase --compset BWma1850 --res f19_g17 --case ~/b.e21.BWma1850.f19_g17.10pc_o2.CH4_em0.1.001 --run-unsupported
The case should be run out for 5 days. Then, the surface mixing ratio of O<sub>2</sub> should be scaled to 0.0021. This can be done using the following command on the restart file and initial condition file (\*cam.r.\* and \*cam.i.\*).
ncap2 -O -s "O2=O2\*0.1" $infile $outfile
You will need to ensure the correct lower boundary condition (LBC) file is being used. LBC files are found at /nobackup/Alternative_Earths/LBC_files/. The LBC is the standard file, except with a constant mixing ratio for O<sub>2</sub> imposed. At 1% PAL and lower O<sub>2</sub> levels, the O<sub>2</sub> mixing ratio may decrease due to photolysis.
Ensure that the following modified files are placed in SourceMods/src.cam/:
chemistry.F90; mo_tgcm_ubc.F90; upper_bc.F90; mo_jshort.F90; mo_photo.F90
|
exo-cesmREPO_NAMECESM2.1.3PATH_START.@CESM2.1.3_extracted@CESM2.1.3-main@O2_Earth_analogues@cases@10pc_PAL_O2_CH4_em0.1@README.md@.PATH_END.py
|
{
"filename": "__init__.py",
"repo_name": "sbi-dev/sbi",
"repo_path": "sbi_extracted/sbi-main/sbi/samplers/score/__init__.py",
"type": "Python"
}
|
from sbi.samplers.score.correctors import Corrector, get_corrector
from sbi.samplers.score.predictors import Predictor, get_predictor
from sbi.samplers.score.score import Diffuser
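For intuition, score-based samplers of this kind pair a predictor step (integrating the reverse-time dynamics using the score) with a corrector step (a Langevin MCMC move using the same score). The NumPy sketch below is illustrative only; it is not sbi's `Predictor`/`Corrector` implementation, and the step sizes are hypothetical:

```python
import numpy as np


def predictor_corrector_step(x, score_fn, dt=0.01, snr=0.1, rng=None):
    """One illustrative predictor-corrector update on a batch of samples x."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Predictor: Euler-Maruyama step driven by the score.
    x = x + score_fn(x) * dt + np.sqrt(dt) * rng.standard_normal(x.shape)
    # Corrector: one unadjusted Langevin step with step size eps.
    eps = snr * dt
    x = x + eps * score_fn(x) + np.sqrt(2.0 * eps) * rng.standard_normal(x.shape)
    return x
```

With the score of a standard normal, `score_fn = lambda x: -x`, repeated steps keep samples near the target distribution while preserving the batch shape.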
|
sbi-devREPO_NAMEsbiPATH_START.@sbi_extracted@sbi-main@sbi@samplers@score@__init__.py@.PATH_END.py
|
{
"filename": "__init__.py",
"repo_name": "gammapy/enrico",
"repo_path": "enrico_extracted/enrico-master/enrico/__init__.py",
"type": "Python"
}
|
gammapyREPO_NAMEenricoPATH_START.@enrico_extracted@enrico-master@enrico@__init__.py@.PATH_END.py
|
|
{
"filename": "_legendgroup.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/validators/densitymapbox/_legendgroup.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class LegendgroupValidator(_plotly_utils.basevalidators.StringValidator):
def __init__(
self, plotly_name="legendgroup", parent_name="densitymapbox", **kwargs
):
super(LegendgroupValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "style"),
role=kwargs.pop("role", "info"),
**kwargs
)
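Validators like this one delegate to a base class that coerces and checks a single figure property. Stripped of the plotly internals, the pattern looks roughly like the self-contained sketch below (hypothetical stand-in classes, not `_plotly_utils` itself):

```python
class StringValidator:
    """Minimal stand-in for a plotly-style string property validator."""

    def __init__(self, plotly_name, parent_name, **kwargs):
        self.plotly_name = plotly_name
        self.parent_name = parent_name
        self.edit_type = kwargs.pop("edit_type", "none")

    def validate_coerce(self, value):
        # Accept None or string-like values; reject anything else.
        if value is None or isinstance(value, str):
            return value
        raise ValueError(
            f"Invalid value for {self.parent_name}.{self.plotly_name}: {value!r}"
        )


class LegendgroupValidator(StringValidator):
    def __init__(self, plotly_name="legendgroup", parent_name="densitymapbox", **kwargs):
        super().__init__(
            plotly_name,
            parent_name,
            edit_type=kwargs.pop("edit_type", "style"),
            **kwargs,
        )
```

The subclass only supplies its property name, parent, and default `edit_type`; all coercion logic lives in the base class, which is why the real validator files are so uniform.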
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py2@plotly@validators@densitymapbox@_legendgroup.py@.PATH_END.py
|
{
"filename": "main.py",
"repo_name": "cdslaborg/paramonte",
"repo_path": "paramonte_extracted/paramonte-main/example/fortran/pm_distNorm/getNormLogPDF/main.py",
"type": "Python"
}
|
#!/usr/bin/env python
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import glob
import sys
linewidth = 2
fontsize = 17
marker ={ "CK" : "-"
, "IK" : "."
, "RK" : "-"
}
xlab = { "CK" : "X ( real/imaginary components )"
, "IK" : "X ( integer-valued )"
, "RK" : "X ( real-valued )"
}
legends = [ r"$\mu = 0.0,~\sigma = 3.0$"
, r"$\mu = 0.0,~\sigma = 1.0$"
, r"$\mu = 0.0,~\sigma = 0.3$"
, r"$\mu = -2.,~\sigma = 1.0$"
]
for kind in ["IK", "CK", "RK"]:
pattern = "*." + kind + ".txt"
fileList = glob.glob(pattern)
if len(fileList) == 1:
df = pd.read_csv(fileList[0], delimiter = ",")
fig = plt.figure(figsize = 1.25 * np.array([6.4, 4.8]), dpi = 200)
ax = plt.subplot()
if kind == "CK":
plt.plot( df.values[:, 0]
, df.values[:,1:5]
, marker[kind]
, linewidth = linewidth
#, color = "r"
)
plt.plot( df.values[:, 1]
, df.values[:,1:5]
, marker[kind]
, linewidth = linewidth
#, color = "blue"
)
else:
plt.plot( df.values[:, 0]
, df.values[:,1:5]
, marker[kind]
, linewidth = linewidth
#, color = "r"
)
ax.legend ( legends
, fontsize = fontsize
)
plt.xticks(fontsize = fontsize - 2)
plt.yticks(fontsize = fontsize - 2)
ax.set_xlabel(xlab[kind], fontsize = 17)
ax.set_ylabel("Probability Density Function (PDF)", fontsize = 17)
plt.grid(visible = True, which = "both", axis = "both", color = "0.85", linestyle = "-")
ax.tick_params(axis = "y", which = "minor")
ax.tick_params(axis = "x", which = "minor")
plt.savefig(fileList[0].replace(".txt",".png"))
elif len(fileList) > 1:
sys.exit("Ambiguous file list exists.")
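The curves drawn by this script are normal probability densities; for reference, the density being plotted can be evaluated directly in NumPy. This is a standalone check, not part of the ParaMonte example:

```python
import numpy as np


def norm_pdf(x, mu=0.0, sigma=1.0):
    """Normal probability density: exp(-z**2 / 2) / (sigma * sqrt(2 * pi))."""
    z = (np.asarray(x) - mu) / sigma
    return np.exp(-0.5 * z * z) / (sigma * np.sqrt(2.0 * np.pi))
```

For the legend entries above, e.g. mu = -2, sigma = 1, the curve peaks at x = -2 with height 1 / sqrt(2 * pi) ≈ 0.3989.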
|
cdslaborgREPO_NAMEparamontePATH_START.@paramonte_extracted@paramonte-main@example@fortran@pm_distNorm@getNormLogPDF@main.py@.PATH_END.py
|
{
"filename": "README.md",
"repo_name": "statsmodels/statsmodels",
"repo_path": "statsmodels_extracted/statsmodels-main/statsmodels/tsa/statespace/tests/results/frbny_nowcast/Nowcasting/README.md",
"type": "Markdown"
}
|
The code in this folder is based on the Federal Reserve Bank of New York code
found at https://github.com/FRBNY-TimeSeriesAnalysis/Nowcasting, which was
downloaded as of commit 19f365cab8269e3aac3faa11ad091d6e913c5c43. Only the
files from that repository which were required for generating the test results
are included here.
In addition, the following files from the original package have been modified
(use `git diff` against the above repository to see the changes):
- functions/dfm.m
- functions/update_nowcast.m
The following files are not a part of the original package:
- test_DFM_blocks.m
- test_DFM.m
- test_news_blocks.m
- test_news.m
- test_spec_blocks.xls
- test_spec.xls
|
statsmodelsREPO_NAMEstatsmodelsPATH_START.@statsmodels_extracted@statsmodels-main@statsmodels@tsa@statespace@tests@results@frbny_nowcast@Nowcasting@README.md@.PATH_END.py
|
{
"filename": "__init__.py",
"repo_name": "langchain-ai/langchain",
"repo_path": "langchain_extracted/langchain-master/libs/community/langchain_community/tools/openai_dalle_image_generation/__init__.py",
"type": "Python"
}
|
"""Tool to generate an image using DALLE OpenAI V1 SDK."""
from langchain_community.tools.openai_dalle_image_generation.tool import (
OpenAIDALLEImageGenerationTool,
)
__all__ = ["OpenAIDALLEImageGenerationTool"]
|
langchain-aiREPO_NAMElangchainPATH_START.@langchain_extracted@langchain-master@libs@community@langchain_community@tools@openai_dalle_image_generation@__init__.py@.PATH_END.py
|
{
"filename": "comparison_plots.py",
"repo_name": "NannyML/nannyml",
"repo_path": "nannyml_extracted/nannyml-main/tests/plots/comparison_plots.py",
"type": "Python"
}
|
# Author: Niels Nuyttens <niels@nannyml.com>
#
# License: Apache Software License 2.0
"""Test comparison plots."""
import pytest
from pytest_lazyfixture import lazy_fixture
from nannyml.chunk import Chunker, SizeBasedChunker
from nannyml.datasets import load_synthetic_binary_classification_dataset
from nannyml.drift.multivariate.data_reconstruction import DataReconstructionDriftCalculator
from nannyml.drift.multivariate.data_reconstruction import Result as DataReconstructionResult
from nannyml.drift.univariate import Result as UnivariateDriftResult
from nannyml.drift.univariate import UnivariateDriftCalculator
from nannyml.exceptions import InvalidArgumentsException
from nannyml.performance_calculation import PerformanceCalculator
from nannyml.performance_calculation import Result as RealizedPerformanceResult
from nannyml.performance_estimation.confidence_based import CBPE
from nannyml.performance_estimation.confidence_based import Result as CBPEResult
@pytest.fixture(scope='module')
def chunker() -> Chunker: # noqa: D103
return SizeBasedChunker(chunk_size=5000)
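Conceptually, a size-based chunker just slices the data into consecutive fixed-size windows (the last window may be shorter). A minimal sketch of that idea, illustrative only and not NannyML's `SizeBasedChunker` implementation:

```python
def chunk_by_size(rows, chunk_size):
    """Split rows into consecutive chunks of chunk_size; last may be shorter."""
    return [rows[i:i + chunk_size] for i in range(0, len(rows), chunk_size)]
```

So with `chunk_size=5000`, a 57,000-row analysis set yields eleven full chunks and one partial chunk of 2,000 rows.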
@pytest.fixture(scope='module')
def data_reconstruction_results(chunker) -> DataReconstructionResult: # noqa: D103
ref, ana, _ = load_synthetic_binary_classification_dataset()
column_names = [col for col in ref.columns if col not in ['identifier', 'work_home_actual', 'timestamp']]
calc = DataReconstructionDriftCalculator(column_names=column_names, chunker=chunker).fit(ref)
result = calc.calculate(ana)
assert isinstance(result, DataReconstructionResult)
return result
@pytest.fixture(scope='module')
def univariate_drift_results(chunker) -> UnivariateDriftResult: # noqa: D103
ref, ana, _ = load_synthetic_binary_classification_dataset()
column_names = [col for col in ref.columns if col not in ['identifier', 'work_home_actual', 'timestamp']]
calc = UnivariateDriftCalculator(column_names=column_names, chunker=chunker).fit(ref)
result = calc.calculate(ana)
assert isinstance(result, UnivariateDriftResult)
return result
@pytest.fixture(scope='module')
def realized_performance_results(chunker) -> RealizedPerformanceResult: # noqa: D103
ref, ana, tgt = load_synthetic_binary_classification_dataset()
calc = PerformanceCalculator(
metrics=['f1'],
y_pred_proba='y_pred_proba',
y_pred='y_pred',
y_true='work_home_actual',
problem_type='classification',
chunker=chunker,
).fit(ref)
result = calc.calculate(ana.merge(tgt), on='identifier')
assert isinstance(result, RealizedPerformanceResult)
return result
@pytest.fixture(scope='module')
def cbpe_results(chunker) -> CBPEResult: # noqa: D103
ref, ana, tgt = load_synthetic_binary_classification_dataset()
est = CBPE(
metrics=['f1'],
y_pred_proba='y_pred_proba',
y_pred='y_pred',
y_true='work_home_actual',
problem_type='classification',
chunker=chunker,
).fit(ref)
result = est.estimate(ana)
assert isinstance(result, CBPEResult)
return result
@pytest.mark.parametrize(
'result_1, result_2',
[
(lazy_fixture('univariate_drift_results'), lazy_fixture('data_reconstruction_results')),
(lazy_fixture('univariate_drift_results'), lazy_fixture('realized_performance_results')),
(lazy_fixture('univariate_drift_results'), lazy_fixture('cbpe_results')),
(lazy_fixture('data_reconstruction_results'), lazy_fixture('univariate_drift_results')),
(lazy_fixture('data_reconstruction_results'), lazy_fixture('realized_performance_results')),
(lazy_fixture('data_reconstruction_results'), lazy_fixture('cbpe_results')),
(lazy_fixture('realized_performance_results'), lazy_fixture('univariate_drift_results')),
(lazy_fixture('realized_performance_results'), lazy_fixture('data_reconstruction_results')),
(lazy_fixture('realized_performance_results'), lazy_fixture('cbpe_results')),
(lazy_fixture('cbpe_results'), lazy_fixture('univariate_drift_results')),
(lazy_fixture('cbpe_results'), lazy_fixture('data_reconstruction_results')),
(lazy_fixture('cbpe_results'), lazy_fixture('realized_performance_results')),
],
)
def test_comparison_plot_raises_no_exceptions(result_1, result_2): # noqa: D103
try:
_ = result_1.compare(result_2).plot().show()
except Exception as exc:
pytest.fail(f"an unexpected exception occurred: {exc}")
def test_comparison_plot_comparing_multiple_metrics_raises_invalid_arguments_exception(chunker, cbpe_results): # noqa: D103, E501
ref, ana, tgt = load_synthetic_binary_classification_dataset()
calc = PerformanceCalculator(
metrics=['f1', 'roc_auc'],
y_pred_proba='y_pred_proba',
y_pred='y_pred',
y_true='work_home_actual',
problem_type='classification',
chunker=chunker,
).fit(ref)
drift_results = calc.calculate(ana.merge(tgt), on='identifier')
with pytest.raises(
InvalidArgumentsException,
match="you're comparing 2 metrics to 2 metrics, but should only compare 1 to 1 at a time. "
"Please filter yourresults first using `result.filter()",
):
drift_results.compare(cbpe_results).plot().show()
|
NannyMLREPO_NAMEnannymlPATH_START.@nannyml_extracted@nannyml-main@tests@plots@comparison_plots.py@.PATH_END.py
|
{
"filename": "discretization_test.py",
"repo_name": "fchollet/keras",
"repo_path": "keras_extracted/keras-master/keras/src/layers/preprocessing/discretization_test.py",
"type": "Python"
}
|
import os
import numpy as np
import pytest
from absl.testing import parameterized
from tensorflow import data as tf_data
from keras.src import backend
from keras.src import layers
from keras.src import models
from keras.src import testing
from keras.src.saving import saving_api
from keras.src.testing.test_utils import named_product
class DiscretizationTest(testing.TestCase):
def test_discretization_basics(self):
self.run_layer_test(
layers.Discretization,
init_kwargs={
"bin_boundaries": [0.0, 0.5, 1.0],
},
input_shape=(2, 3),
expected_output_shape=(2, 3),
expected_num_trainable_weights=0,
expected_num_non_trainable_weights=0,
expected_num_seed_generators=0,
expected_num_losses=0,
supports_masking=False,
run_training_check=False,
)
def test_adapt_flow(self):
layer = layers.Discretization(num_bins=4)
layer.adapt(
np.random.random((32, 3)),
)
output = layer(np.array([[0.0, 0.1, 0.3]]))
        self.assertEqual(backend.standardize_dtype(output.dtype), "int32")
@parameterized.named_parameters(
named_product(
[
{
"testcase_name": "int",
"output_mode": "int",
"input_array": [[-1.0, 0.0, 0.1, 0.8, 1.2]],
"expected_output": [[0, 1, 1, 2, 3]],
},
{
"testcase_name": "one_hot_rank_1",
"output_mode": "one_hot",
"input_array": [0.1, 0.8],
"expected_output": [[0, 1, 0, 0], [0, 0, 1, 0]],
},
{
"testcase_name": "multi_hot_rank_2",
"output_mode": "multi_hot",
"input_array": [[0.1, 0.8]],
"expected_output": [[0, 1, 1, 0]],
},
{
"testcase_name": "one_hot_rank_3",
"output_mode": "one_hot",
"input_array": [[[0.15, 0.75], [0.85, 0.45]]],
"expected_output": [
[
[[0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]],
[[0.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 0.0]],
]
],
},
{
"testcase_name": "multi_hot_rank_3",
"output_mode": "multi_hot",
"input_array": [[[0.15, 0.75], [0.85, 0.45]]],
"expected_output": [
[[0.0, 1.0, 1.0, 0.0], [0.0, 1.0, 1.0, 0.0]]
],
},
{
"testcase_name": "count",
"output_mode": "count",
"input_array": [[0.1, 0.8, 0.9]],
"expected_output": [[0, 1, 2, 0]],
},
],
sparse=(
[True, False] if backend.SUPPORTS_SPARSE_TENSORS else [False]
),
)
)
def test_correctness(
self, output_mode, input_array, expected_output, sparse
):
if output_mode == "int" and sparse:
pytest.skip("sparse=True cannot be combined with output_mode=int")
input_array = np.array(input_array)
expected_output = np.array(expected_output)
layer = layers.Discretization(
bin_boundaries=[0.0, 0.5, 1.0],
output_mode=output_mode,
sparse=sparse,
)
output = layer(input_array)
self.assertSparse(output, sparse)
self.assertTrue(backend.is_tensor(output))
self.assertAllClose(output, expected_output)
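The `"int"` output mode above is equivalent to bin indexing with NumPy's `np.digitize` against the same boundaries — a quick standalone sanity check, not part of the Keras test suite:

```python
import numpy as np

# Same boundaries and inputs as the "int" test case above.
boundaries = [0.0, 0.5, 1.0]
x = np.array([-1.0, 0.0, 0.1, 0.8, 1.2])

# np.digitize returns, for each element, the count of boundaries <= it,
# which matches the layer's expected output [0, 1, 1, 2, 3].
bins = np.digitize(x, boundaries)
```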
def test_tf_data_compatibility(self):
# With fixed bins
layer = layers.Discretization(
bin_boundaries=[0.0, 0.35, 0.5, 1.0], dtype="float32"
)
x = np.array([[-1.0, 0.0, 0.1, 0.2, 0.4, 0.5, 1.0, 1.2, 0.98]])
self.assertAllClose(layer(x), np.array([[0, 1, 1, 1, 2, 3, 4, 4, 3]]))
ds = tf_data.Dataset.from_tensor_slices(x).batch(1).map(layer)
for output in ds.take(1):
output = output.numpy()
self.assertAllClose(output, np.array([[0, 1, 1, 1, 2, 3, 4, 4, 3]]))
# With adapt flow
layer = layers.Discretization(num_bins=4)
layer.adapt(
np.random.random((32, 3)),
)
x = np.array([[0.0, 0.1, 0.3]])
ds = tf_data.Dataset.from_tensor_slices(x).batch(1).map(layer)
for output in ds.take(1):
output.numpy()
def test_saving(self):
# With fixed bins
layer = layers.Discretization(bin_boundaries=[0.0, 0.35, 0.5, 1.0])
model = models.Sequential(
[
layers.Input((2,)),
layer,
]
)
fpath = os.path.join(self.get_temp_dir(), "model.keras")
model.save(fpath)
model = saving_api.load_model(fpath)
x = np.array([[-1.0, 0.0, 0.1, 0.2, 0.4, 0.5, 1.0, 1.2, 0.98]])
self.assertAllClose(layer(x), np.array([[0, 1, 1, 1, 2, 3, 4, 4, 3]]))
# With adapt flow
layer = layers.Discretization(num_bins=4)
layer.adapt(
np.random.random((32, 3)),
)
ref_input = np.random.random((1, 2))
ref_output = layer(ref_input)
model = models.Sequential(
[
layers.Input((2,)),
layer,
]
)
fpath = os.path.join(self.get_temp_dir(), "model.keras")
model.save(fpath)
model = saving_api.load_model(fpath)
self.assertAllClose(layer(ref_input), ref_output)
|
fcholletREPO_NAMEkerasPATH_START.@keras_extracted@keras-master@keras@src@layers@preprocessing@discretization_test.py@.PATH_END.py
|
{
"filename": "_bgcolor.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/histogram2dcontour/hoverlabel/_bgcolor.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class BgcolorValidator(_plotly_utils.basevalidators.ColorValidator):
def __init__(
self,
plotly_name="bgcolor",
parent_name="histogram2dcontour.hoverlabel",
**kwargs,
):
super(BgcolorValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
array_ok=kwargs.pop("array_ok", True),
edit_type=kwargs.pop("edit_type", "none"),
**kwargs,
)
|
{
"filename": "__init__.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/graph_objs/densitymapbox/__init__.py",
"type": "Python"
}
|
import sys
if sys.version_info < (3, 7):
from ._colorbar import ColorBar
from ._hoverlabel import Hoverlabel
from ._stream import Stream
from . import colorbar
from . import hoverlabel
else:
from _plotly_utils.importers import relative_import
__all__, __getattr__, __dir__ = relative_import(
__name__,
[".colorbar", ".hoverlabel"],
["._colorbar.ColorBar", "._hoverlabel.Hoverlabel", "._stream.Stream"],
)
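On Python 3.7+ the file above defers submodule imports via `relative_import`, which relies on module-level `__getattr__` (PEP 562). A minimal stdlib-only sketch of the idea (names are illustrative):

```python
import types


def make_lazy(name, loaders):
    """Build a module whose attributes are produced on first access.

    loaders maps attribute name -> zero-argument callable producing the value.
    """
    mod = types.ModuleType(name)

    def __getattr__(attr):
        if attr in loaders:
            value = loaders[attr]()
            setattr(mod, attr, value)  # cache so __getattr__ runs only once
            return value
        raise AttributeError(attr)

    mod.__getattr__ = __getattr__
    return mod


lazy = make_lazy("demo", {"answer": lambda: 42})
assert lazy.answer == 42
assert "answer" in lazy.__dict__  # cached after the first lookup
```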
|
{
"filename": "SpectralDecomposer.py",
"repo_name": "jdhenshaw/scousepy",
"repo_path": "scousepy_extracted/scousepy-master/scousepy/SpectralDecomposer.py",
"type": "Python"
}
|
# Licensed under an MIT open source license - see LICENSE
import numpy as np
from .colors import *
import time
from astropy import log
import warnings
import matplotlib.pyplot as plt
class Decomposer(object):
"""
A class containing various methods for decomposing individual spectra
Parameters
----------
spectral_axis : array
An array of the spectral axis
spectrum : array
The spectrum
rms : number
An estimate of the rms
Attributes
----------
fittype : string
A string describing the pyspeckit fitter
guesses : list
A list containing the initial guesses to the fit
guesses_updated : list
Used if the best-fitting solution is to be compared with a parent
spectrum (as in scouse)
psktemplate : instance of pyspeckit's Spectrum class
A template spectrum generated using pyspeckit
pskspectrum : instance of pyspeckit's Spectrum class
This is the spectrum that will be fit
modeldict : dictionary
A dictionary describing the best-fitting solution
validfit : bool
Whether or not the fit is valid. Used if the best-fitting solution is to
be compared with a parent spectrum (as in scouse)
tol : list
list of tolerance values used to compare the best-fitting solution to
that of its parent spectrum
res : number
the channel spacing
method : string
The fitting method used. Current options:
parent: the fitting method predominantly used by scouse, where a
spectrum has been fit using initial guesses from a parent
spectrum
dspec: Where a spectrum has been fit using input guesses from
derivative spectroscopy
manual: Where a spectrum has been fit manually using pyspeckit's
interactive fitter
"""
def __init__(self,spectral_axis,spectrum,rms):
self.spectral_axis=spectral_axis
self.spectrum=spectrum
self.rms=rms
self.fittype=None
self.guesses=None
self.guesses_from_parent=None
self.guesses_updated=None
self.psktemplate=None
self.pskspectrum=None
self.modeldict=None
self.validfit=False
self.tol=None
self.res=None
self.method=None
self.fit_updated=False
self.residuals_shown=False
self.guesses=None
self.happy=False
self.conditions=None
def fit_spectrum_with_guesses(self, guesses, fittype='gaussian', method='dspec'):
"""
Fitting method used when using scouse as a standalone fitter. It takes
guesses supplied by dspec and calls on pyspeckit to fit the spectrum
Parameters
----------
guesses : list
a list containing the initial guesses for the fit parameters
fittype : string
A string describing the pyspeckit fitter
"""
self.method=method
self.fittype=fittype
self.guesses=guesses
self.fit_a_spectrum()
self.get_model_information()
def fit_spectrum_from_parent(self,guesses,guesses_parent,tol,res,fittype='gaussian',method='parent'):
"""
The fitting method most commonly used by scouse. This method will fit
a spectrum and compare the result against another model. Most commonly
a model describing a lower resolution parent spectrum
Parameters
----------
guesses : list
a list containing the initial guesses for the fit parameters
guesses_parent : list
a list containing the model parameters of the parent
tol : list
list of tolerance values used to compare the best-fitting solution to
that of its parent spectrum
res : number
the channel spacing
fittype : string
A string describing the pyspeckit fitter
"""
self.method=method
self.fittype=fittype
self.guesses=guesses
self.guesses_parent=guesses_parent
self.tol=tol
self.res=res
if self.psktemplate is not None:
self.update_template()
else:
self.create_a_spectrum()
self.fit_a_spectrum()
errors=np.copy(self.pskspectrum.specfit.modelerrs)
errors=[np.nan if error is None else error for error in errors ]
errors=np.asarray([np.nan if np.invert(np.isfinite(error)) else error for error in errors ])
if np.any(np.invert(np.isfinite(errors))):
guesses = np.copy(self.pskspectrum.specfit.modelpars)
# adding this in a loop to ensure numpy doesn't spit an error out
rounding = []
for guess in guesses:
if guess !=0.0:
if np.floor(np.log10(np.abs(guess)))<0.0:
rounding.append(np.abs(np.floor(np.log10(np.abs(guess)))))
else:
rounding.append(1.0)
else:
rounding.append(1.0)
self.guesses = np.asarray([np.around(guess,decimals=int(rounding[i])) for i, guess in enumerate(guesses)])
# first get the number of parameters and components
nparams=np.size(self.pskspectrum.specfit.fitter.parnames)
ncomponents=np.size(self.guesses)/nparams
# remove any instances of nans
for i in range(int(ncomponents)):
component = self.guesses[int((i * nparams)) : int((i * nparams) + nparams)]
if np.any(~np.isfinite(np.asarray(component))):
self.guesses[int((i * nparams)) : int((i * nparams) + nparams)] = 0.0
# remove any instances of negative intensity
for i in range(int(ncomponents)):
component = self.guesses[int((i*nparams)):int((i*nparams)+nparams)]
if np.sum([1 for number in component if number < 0.0]) >= 1:
self.guesses[int((i*nparams)):int((i*nparams)+nparams)] = 0.0
# for spectra with more than one component we want to set the component
# with the lowest amplitude to zero as well (this could be the same
# component)
if ncomponents > 1:
# identify where amplitude is in paranames
namelist = ['tex', 'amp', 'amplitude', 'peak', 'tant', 'tmb']
foundname = [pname in namelist for pname in self.pskspectrum.specfit.fitter.parnames]
foundname = np.array(foundname)
idx=np.where(foundname==True)[0]
idx=idx[0]
# Now get the amplitudes
amplist=np.asarray([self.guesses[int(i*nparams)+idx] for i in range(int(ncomponents))])
# identify the lowest amplitude
idx = np.where(amplist==np.min(amplist))[0]
idx = idx[0]
self.guesses[int((idx*nparams)):int((idx*nparams)+nparams)] = 0.0
self.guesses = self.guesses[(self.guesses != 0.0)]
# size of guesses must be !=0.0 and a multiple of nparams
remainder = np.size(self.guesses) % nparams
is_divisible = remainder == 0
if np.size(self.guesses) != 0 and is_divisible:
#self.psktemplate=None
#self.pskspectrum=None
if self.psktemplate is not None:
self.update_template()
else:
self.create_a_spectrum()
self.fit_a_spectrum()
self.get_model_information()
self.check_against_parent()
if not self.validfit:
self.modeldict={}
self.psktemplate=None
self.pskspectrum=None
def fit_spectrum_manually(self, fittype='gaussian'):
"""
Method used to manually fit a spectrum
Parameters
----------
fittype : string
A string describing the pyspeckit fitter
"""
plt.ion()
self.method='manual'
self.fittype=fittype
self.interactive_fitter()
with warnings.catch_warnings():
warnings.simplefilter('ignore', category=DeprecationWarning)
while not self.happy:
try:
# using just a few little bits of plt.pause below
plt.gcf().canvas.draw()
plt.gcf().canvas.start_event_loop(0.1)
time.sleep(0.1)
except KeyboardInterrupt:
break
plt.ioff()
self.get_model_information()
def interactive_fitter(self):
"""
Interactive fitter - the interactive fitting process controlled by
fit_spectrum_manually. Starts with the interactive fitter with
fit_updated= false. The user can fit the spectrum. Pressing enter will
initialise the fit (fit_updated=True). Pressing enter again will
accept the fit.
"""
old_log = log.level
log.setLevel('ERROR')
if not self.fit_updated:
self.fit_updated=False
# Interactive fitting with pyspeckit
self.pskspectrum.plotter(xmin=np.min(self.spectral_axis),
xmax=np.max(self.spectral_axis),)
self.pskspectrum.plotter.figure.canvas.callbacks.disconnect(3)
self.pskspectrum.specfit.clear_all_connections()
assert self.pskspectrum.plotter._active_gui is None
# interactive fitting
self.fit_a_spectrum_interactively()
assert self.pskspectrum.plotter._active_gui is not None
self.residuals_shown=False
else:
self.fit_updated=True
self.pskspectrum.plotter(xmin=np.min(self.spectral_axis),
xmax=np.max(self.spectral_axis),)
# disable mpl key commands (especially 'q')
self.pskspectrum.plotter.figure.canvas.callbacks.disconnect(3)
self.pskspectrum.specfit.clear_all_connections()
assert self.pskspectrum.plotter._active_gui is None
if None in self.guesses:
raise ValueError(colors.fg._red_+"Encountered a 'None' value in"+
" guesses"+colors._endc_)
# non interactive - display the fit
self.fit_a_spectrum()
self.pskspectrum.specfit.plot_fit(show_components=True)
self.pskspectrum.specfit.plotresiduals(axis=self.pskspectrum.plotter.axis,
clear=False,
color='g',
label=False)
assert self.pskspectrum.plotter._active_gui is None
self.residuals_shown=True
self.printable_format()
print("Options:"
"\n"
"1) If you are happy with this fit, press Enter."
"\n"
"2) If not, press 'f' to re-enter the interactive fitter.")
log.setLevel(old_log)
with warnings.catch_warnings():
warnings.simplefilter('ignore', category=DeprecationWarning)
if plt.matplotlib.rcParams['interactive']:
self.happy = None
self.pskspectrum.plotter.axis.figure.canvas.mpl_connect('key_press_event',self.interactive_callback)
else:
plt.show()
self.happy = self.interactive_callback('noninteractive')
#
if not hasattr(self.pskspectrum.specfit, 'fitter'):
raise ValueError("No fitter available for the spectrum."
" This can occur if you have plt.ion() set"
" or if you did not fit the spectrum."
)
return
def interactive_callback(self, event):
"""
A 'callback function' to be triggered when the user selects a fit.
Parameters
----------
event : interactive event
"""
if plt.matplotlib.rcParams['interactive']:
if hasattr(event, 'key'):
# Enter to continue
if event.key == 'enter':
if self.residuals_shown:
print("")
print("'enter' key acknowledged."+
colors.fg._lightgreen_+" Solution accepted"+colors._endc_+".")
self.happy = True
self.pskspectrum.specfit.clear_all_connections()
self.pskspectrum.plotter.disconnect()
plt.close(self.pskspectrum.plotter.figure.number)
assert self.pskspectrum.plotter._active_gui is None
else:
print("")
print("'enter' key acknowledged."+
colors.fg._cyan_+" Showing fit and residuals"+colors._endc_+".")
self.fit_updated=True
self.guesses = self.pskspectrum.specfit.parinfo.values
self.interactive_fitter()
# To re-enter the fitter
elif event.key in ('f', 'F'):
print("")
print("'f' key acknowledged."+
colors.fg._lightred_+" Re-entering interactive fitter"+colors._endc_+".")
self.residuals_shown = False
# to indicate that all components have been selected
elif event.key in ('d','D','3',3):
# The fit has been performed interactively, but we also
# want to print out the nicely-formatted additional
# information
self.pskspectrum.specfit.button3action(event)
print("'d' key acknowledged."+
colors.fg._cyan_+" Guess initialized"+colors._endc_+".")
print('')
print("Options:"
"\n"
"1) To lock the fit and display residuals, press Enter."
"\n"
"2) Press 'f' to re-enter the interactive fitter.")
self.happy = None
else:
self.happy = None
elif hasattr(event, 'button') and event.button in ('d','D','3',3):
# The fit has been performed interactively, but we also
# want to print out the nicely-formatted additional
# information
print("'d' key acknowledged."+
colors.fg._cyan_+" Guess initialized"+colors._endc_+".")
print('')
print("Options:"
"\n"
"1) To lock the fit and display residuals, press Enter."
"\n"
"2) Press 'f' to re-enter the interactive fitter.")
self.happy = None
else:
self.happy = None
else:
# this should only happen if not triggered by a callback
assert event == 'noninteractive'
self.printable_format()
h = input("Are you happy with the fit? (y/n): ")
self.happy = h in ['True', 'T', 'true', '1', 't', 'y', 'yes', 'Y', 'Yes']
print("")
self.fit_updated=True
return self.happy
def fit_a_spectrum(self):
"""
Fits a spectrum
"""
with warnings.catch_warnings():
warnings.simplefilter('ignore')
old_log = log.level
log.setLevel('ERROR')
# HACK: this is a workaround for lmfit getting shirty
try:
self.pskspectrum.specfit(interactive=False,
clear_all_connections=True,
xmin=np.min(self.spectral_axis),
xmax=np.max(self.spectral_axis),
fittype = self.fittype,
guesses = self.guesses,
verbose=True,
use_lmfit=True)
except ValueError:
self.pskspectrum.specfit(interactive=False,
clear_all_connections=True,
xmin=np.min(self.spectral_axis),
xmax=np.max(self.spectral_axis),
fittype = self.fittype,
guesses = self.guesses,
verbose=True,
use_lmfit=False)
log.setLevel(old_log)
def fit_a_spectrum_interactively(self):
"""
Fits a spectrum interactively
"""
with warnings.catch_warnings():
warnings.simplefilter('ignore')
old_log = log.level
log.setLevel('ERROR')
self.pskspectrum.specfit(interactive=True,
print_message=True,
xmin=np.min(self.spectral_axis),
xmax=np.max(self.spectral_axis),
fittype = self.fittype,
verbose=False,
use_lmfit=True,
show_components=True)
log.setLevel(old_log)
def create_a_template(self,unit='',xarrkwargs={}):
"""
generates an instance of pyspeckit's Spectrum class
Parameters
----------
x : array
spectral axis
y : array
the spectrum
rms : number
estimate of the rms
unit : str
unit of the spectral axis
xarrkwargs : dictionary
key word arguments describing the spectral axis
"""
from pyspeckit import Spectrum
spectrum=np.zeros_like(self.spectral_axis,dtype='float')
error_spectrum=np.ones_like(self.spectral_axis,dtype='float')
with warnings.catch_warnings():
warnings.simplefilter('ignore')
old_log = log.level
log.setLevel('ERROR')
self.psktemplate = Spectrum(data=spectrum,
error=error_spectrum,
xarr=self.spectral_axis,
doplot=False,
unit=unit,
xarrkwargs=xarrkwargs,
verbose=False,
)
log.setLevel(old_log)
def create_a_spectrum(self,unit='',xarrkwargs={}):
"""
generates an instance of pyspeckit's Spectrum class
Parameters
----------
x : array
spectral axis
y : array
the spectrum
rms : number
estimate of the rms
unit : str
unit of the spectral axis
xarrkwargs : dictionary
key word arguments describing the spectral axis
"""
from pyspeckit import Spectrum
import astropy.units as u
with warnings.catch_warnings():
warnings.simplefilter('ignore')
old_log = log.level
log.setLevel('ERROR')
self.pskspectrum = Spectrum(data=np.ma.masked_where(np.isnan(u.Quantity(self.spectrum).value) + np.isinf(u.Quantity(self.spectrum).value), u.Quantity(self.spectrum).value),
error=np.ma.masked_where(np.isnan(u.Quantity(np.ones_like(self.spectrum)*self.rms).value) + np.isinf(u.Quantity(np.ones_like(self.spectrum)*self.rms).value), u.Quantity(np.ones_like(self.spectrum)*self.rms).value),
xarr=self.spectral_axis,
doplot=False,
unit=unit,
xarrkwargs=xarrkwargs,
verbose=False,
)
log.setLevel(old_log)
def update_template(self):
"""
updates a template spectrum with the spectrum values
"""
import astropy.units as u
import copy
# create a copy of the template
self.pskspectrum=copy.copy(self.psktemplate)
# update important values
self.pskspectrum.specfit.Spectrum = self.pskspectrum
self.pskspectrum.data = np.ma.masked_where(np.isnan(u.Quantity(self.spectrum).value) + np.isinf(u.Quantity(self.spectrum).value), u.Quantity(self.spectrum).value)
self.pskspectrum.error = np.ma.masked_where(np.isnan(u.Quantity(np.ones_like(self.spectrum)*self.rms).value) + np.isinf(u.Quantity(np.ones_like(self.spectrum)*self.rms).value), u.Quantity(np.ones_like(self.spectrum)*self.rms).value)
self.pskspectrum.specfit.spectofit = np.ma.masked_where(np.isnan(u.Quantity(self.spectrum).value) + np.isinf(u.Quantity(self.spectrum).value), u.Quantity(self.spectrum).value)
self.pskspectrum.specfit.errspec = np.ma.masked_where(np.isnan(u.Quantity(np.ones_like(self.spectrum)*self.rms).value) + np.isinf(u.Quantity(np.ones_like(self.spectrum)*self.rms).value), u.Quantity(np.ones_like(self.spectrum)*self.rms).value)
def get_model_information(self):
"""
Framework for model solution dictionary
"""
self.modeldict={}
if (None in self.pskspectrum.specfit.modelerrs):
self.modeldict['fittype']=None
self.modeldict['parnames']=self.pskspectrum.specfit.fitter.parnames
self.modeldict['ncomps']=0
self.modeldict['params']=np.zeros(len(self.pskspectrum.specfit.modelerrs))
self.modeldict['errors']=np.zeros(len(self.pskspectrum.specfit.modelerrs))
if np.ma.is_masked(self.pskspectrum.error):
idnonmasked=np.where(~self.pskspectrum.error.mask)[0]
if np.size(idnonmasked)==0:
self.modeldict['rms']=np.nan
else:
self.modeldict['rms']=self.pskspectrum.error[idnonmasked[0]]
else:
self.modeldict['rms']=self.pskspectrum.error[0]
self.modeldict['residstd']= np.std(self.pskspectrum.data)
self.modeldict['chisq']=0.0
self.modeldict['dof']=0.0
self.modeldict['redchisq']=0.0
self.modeldict['AIC']=0.0
self.modeldict['fitconverge'] = self.fit_converge()
self.modeldict['method']=self.method
else:
self.modeldict['fittype']=self.pskspectrum.specfit.fittype
self.modeldict['parnames']=self.pskspectrum.specfit.fitter.parnames
self.modeldict['ncomps']=int(self.pskspectrum.specfit.npeaks)
self.modeldict['params']=self.pskspectrum.specfit.modelpars
self.modeldict['errors']=self.pskspectrum.specfit.modelerrs
if np.ma.is_masked(self.pskspectrum.error):
idnonmasked=np.where(~self.pskspectrum.error.mask)[0]
if np.size(idnonmasked)==0:
self.modeldict['rms']=np.nan
else:
self.modeldict['rms']=self.pskspectrum.error[idnonmasked[0]]
else:
self.modeldict['rms']=self.pskspectrum.error[0]
self.modeldict['residstd']= np.std(self.pskspectrum.specfit.residuals)
self.modeldict['chisq']=self.pskspectrum.specfit.chi2
self.modeldict['dof']=self.pskspectrum.specfit.dof
self.modeldict['redchisq']=self.pskspectrum.specfit.chi2/self.pskspectrum.specfit.dof
self.modeldict['AIC']=self.get_aic()
self.modeldict['fitconverge'] = self.fit_converge()
self.modeldict['method']=self.method
def get_aic(self):
"""
Computes the AIC value
"""
from astropy.stats import akaike_info_criterion_lsq as aic
mod = np.zeros([len(self.pskspectrum.xarr), int(self.pskspectrum.specfit.npeaks)])
for k in range(int(self.pskspectrum.specfit.npeaks)):
modparams = self.pskspectrum.specfit.modelpars[(k*len(self.pskspectrum.specfit.fitter.parnames)):(k*len(self.pskspectrum.specfit.fitter.parnames))+len(self.pskspectrum.specfit.fitter.parnames)]
mod[:,k] = self.pskspectrum.specfit.get_model_frompars(self.pskspectrum.xarr, modparams)
totmod = np.nansum(mod, axis=1)
res=self.pskspectrum.data-totmod
ssr=np.nansum((res)**2.0)
return aic(ssr, (int(self.pskspectrum.specfit.npeaks)*len(self.pskspectrum.specfit.fitter.parnames)), len(self.pskspectrum.xarr))
def check_against_parent(self):
"""
"""
self.guesses_updated=np.asarray(self.modeldict['params'])
condition_passed = np.zeros(5, dtype='bool')
condition_passed = self.check_ncomps(condition_passed)
if condition_passed[0]:
condition_passed=self.check_finite(condition_passed)
if (condition_passed[0]) and (condition_passed[1]):
condition_passed=self.check_rms(condition_passed)
if (condition_passed[0]) and (condition_passed[1]) and (condition_passed[2]):
condition_passed=self.check_dispersion(condition_passed)
if (condition_passed[0]) and (condition_passed[1]) and (condition_passed[2]) and (condition_passed[3]):
condition_passed=self.check_velocity(condition_passed)
if np.all(condition_passed):
if int(np.size(self.guesses_updated)/np.size(self.modeldict['parnames'])) == 1:
self.validfit = True
else:
self.check_distinct()
self.conditions=condition_passed
def check_ncomps(self, condition_passed):
"""
Check to see if the number of components in the fit has changed beyond
a reasonable amount
"""
nparams=np.size(self.modeldict['parnames'])
ncomponents_parent=np.size(self.guesses_parent)/nparams
ncomponents_child=np.size(self.guesses_updated)/nparams
ncompdiff = np.abs(ncomponents_parent-ncomponents_child)
if ncompdiff > self.tol[0]:
condition_passed[0]=False
self.guesses_updated=[]
else:
condition_passed[0]=True
return condition_passed
def check_finite(self, condition_passed):
"""
Check that the parameters of the best-fitting components are finite
"""
nparams=np.size(self.modeldict['parnames'])
ncomponents=np.size(self.guesses_updated)/nparams
# Now check all components to see if they are finite
for i in range(int(ncomponents)):
# find violating components
if np.any(np.isnan(self.guesses_updated[int(i*nparams):int(i*nparams)+nparams])):
self.guesses_updated[int((i*nparams)):int((i*nparams)+nparams)] = 0.0
violating_comps = (self.guesses_updated==0.0)
if np.any(violating_comps):
condition_passed[1]=False
else:
condition_passed[1]=True
self.guesses_updated = self.guesses_updated[(self.guesses_updated != 0.0)]
return condition_passed
def check_rms(self,condition_passed):
"""
Check the rms of the best-fitting model components
Parameters
----------
condition_passed : list
boolean list indicating which quality control steps have been satisfied
"""
# Find where in the parameter array the "amplitude" is located. Make this
# general to allow for other models
namelist = ['tex', 'amp', 'amplitude', 'peak', 'tant', 'tmb']
foundname = [pname in namelist for pname in self.modeldict['parnames']]
foundname = np.array(foundname)
idx=np.where(foundname==True)[0]
idx=idx[0]
nparams=np.size(self.modeldict['parnames'])
ncomponents=np.size(self.guesses_updated)/nparams
# Now check all components to see if they are above the rms threshold
for i in range(int(ncomponents)):
if (self.guesses_updated[int(i*nparams)+idx] < self.rms*self.tol[1]):
self.guesses_updated[int((i*nparams)):int((i*nparams)+nparams)] = 0.0
violating_comps = (self.guesses_updated==0.0)
if np.any(violating_comps):
condition_passed[2]=False
else:
condition_passed[2]=True
self.guesses_updated = self.guesses_updated[(self.guesses_updated != 0.0)]
return condition_passed
def check_dispersion(self,condition_passed):
"""
Check the fwhm of the best-fitting model components
Parameters
----------
condition_passed : list
boolean list indicating which quality control steps have been satisfied
"""
fwhmconv = 2.*np.sqrt(2.*np.log(2.))
# Find where the velocity dispersion is located in the parameter array
namelist = ['dispersion', 'width', 'fwhm']
foundname = [pname in namelist for pname in self.modeldict['parnames']]
foundname = np.array(foundname)
idx=np.where(foundname==True)[0]
idx=idx[0]
nparams=np.size(self.modeldict['parnames'])
ncomponents=np.size(self.guesses_updated)/nparams
for i in range(int(ncomponents)):
# Find the closest matching component in the parent SAA model
diff = self.find_closest_match(i, nparams)
idmin = np.where(diff == np.min(diff))[0]
idmin = idmin[0]
# Work out the relative change in velocity dispersion
relchange = self.guesses_updated[int((i*nparams)+idx)]/self.guesses_parent[int((idmin*nparams)+idx)]
if relchange < 1.:
relchange = 1./relchange
# Does this satisfy the criteria
if (self.guesses_updated[int((i*nparams)+idx)]*fwhmconv < self.res*self.tol[2]) or \
(relchange > self.tol[3]):
# set to zero
self.guesses_updated[int((i*nparams)):int((i*nparams)+nparams)] = 0.0
violating_comps = (self.guesses_updated==0.0)
if np.any(violating_comps):
condition_passed[3]=False
else:
condition_passed[3]=True
self.guesses_updated = self.guesses_updated[(self.guesses_updated != 0.0)]
return condition_passed
def check_velocity(self,condition_passed):
"""
Check the centroid velocity of the best-fitting model components
Parameters
----------
condition_passed : list
boolean list indicating which quality control steps have been satisfied
"""
# Find where the peak is located in the parameter array
namelist = ['velocity', 'shift', 'centroid', 'center']
foundname = [pname in namelist for pname in self.modeldict['parnames']]
foundname = np.array(foundname)
idxv=np.where(foundname==True)[0]
idxv=idxv[0]
# Find where the velocity dispersion is located in the parameter array
namelist = ['dispersion', 'width', 'fwhm']
foundname = [pname in namelist for pname in self.modeldict['parnames']]
foundname = np.array(foundname)
idxd=np.where(foundname==True)[0]
idxd=idxd[0]
nparams=np.size(self.modeldict['parnames'])
ncomponents=np.size(self.guesses_updated)/nparams
for i in range(int(ncomponents)):
# Find the closest matching component in the parent SAA model
diff = self.find_closest_match(i, nparams)
idmin = np.where(diff == np.min(diff))[0]
idmin = idmin[0]
# Limits for tolerance
lower_lim = self.guesses_parent[int((idmin*nparams)+idxv)]-(self.tol[4]*self.guesses_parent[int((idmin*nparams)+idxd)])
upper_lim = self.guesses_parent[int((idmin*nparams)+idxv)]+(self.tol[4]*self.guesses_parent[int((idmin*nparams)+idxd)])
# Does this satisfy the criteria
if (self.guesses_updated[(i*nparams)+idxv] < lower_lim) or \
(self.guesses_updated[(i*nparams)+idxv] > upper_lim):
# set to zero
self.guesses_updated[int((i*nparams)):int((i*nparams)+nparams)] = 0.0
violating_comps = (self.guesses_updated==0.0)
if np.any(violating_comps):
condition_passed[4]=False
else:
condition_passed[4]=True
self.guesses_updated = self.guesses_updated[(self.guesses_updated != 0.0)]
return condition_passed
def check_distinct(self):
"""
Check to see if component pairs can be distinguished in velocity
"""
# Find where the peak is located in the parameter array
namelist = ['tex', 'amp', 'amplitude', 'peak', 'tant', 'tmb']
foundname = [pname in namelist for pname in self.modeldict['parnames']]
foundname = np.array(foundname)
idxp=np.where(foundname==True)[0]
idxp=idxp[0]
# Find where the peak is located in the parameter array
namelist = ['velocity', 'shift', 'centroid', 'center']
foundname = [pname in namelist for pname in self.modeldict['parnames']]
foundname = np.array(foundname)
idxv=np.where(foundname==True)[0]
idxv=idxv[0]
# Find where the velocity dispersion is located in the parameter array
namelist = ['dispersion', 'width', 'fwhm']
foundname = [pname in namelist for pname in self.modeldict['parnames']]
foundname = np.array(foundname)
idxd=np.where(foundname==True)[0]
idxd=idxd[0]
fwhmconv = 2.*np.sqrt(2.*np.log(2.))
nparams=np.size(self.modeldict['parnames'])
ncomponents=np.size(self.guesses_updated)/nparams
intlist = [self.guesses_updated[int((i*nparams)+idxp)] for i in range(int(ncomponents))]
velolist = [self.guesses_updated[int((i*nparams)+idxv)] for i in range(int(ncomponents))]
displist = [self.guesses_updated[int((i*nparams)+idxd)] for i in range(int(ncomponents))]
diff = np.zeros(int(ncomponents))
validvs = np.ones(int(ncomponents))
for i in range(int(ncomponents)):
if validvs[i] != 0.0:
# Calculate the velocity difference between all components
for j in range(int(ncomponents)):
diff[j] = abs(velolist[i]-velolist[j])
diff[(diff==0.0)] = np.nan
# Find the minimum difference (i.e. the adjacent component)
idmin = np.where(diff==np.nanmin(diff))[0]
idmin = idmin[0]
adjacent_intensity = intlist[idmin]
adjacent_velocity = velolist[idmin]
adjacent_dispersion = displist[idmin]
# Get the separation between each component and its neighbour
sep = np.abs(velolist[i] - adjacent_velocity)
# Calculate the allowed separation between components
min_allowed_sep = np.min(np.array([displist[i], adjacent_dispersion]))*fwhmconv*self.tol[5]
if sep > min_allowed_sep:
if validvs[idmin] !=0.0:
validvs[i] = 1.0
validvs[idmin] = 1.0
else:
validvs[i] = 1.0
validvs[idmin] = 0.0
intlist[idmin] = 0.0
velolist[idmin] = 0.0
displist[idmin] = 0.0
else:
# If the components do not satisfy the criteria then average
# them and use the new quantities as input guesses
validvs[i] = 1.0
validvs[idmin] = 0.0
intlist[i] = np.mean([intlist[i], adjacent_intensity])
velolist[i] = np.mean([velolist[i], adjacent_velocity])
displist[i] = np.mean([displist[i], adjacent_dispersion])
intlist[idmin] = 0.0
velolist[idmin] = 0.0
displist[idmin] = 0.0
for i in range(int(ncomponents)):
self.guesses_updated[(i*nparams)+idxp] = intlist[i]
self.guesses_updated[(i*nparams)+idxv] = velolist[i]
self.guesses_updated[(i*nparams)+idxd] = displist[i]
violating_comps = (self.guesses_updated==0.0)
if np.any(violating_comps):
self.validfit=False
else:
self.validfit=True
self.guesses_updated = self.guesses_updated[(self.guesses_updated != 0.0)]
def find_closest_match(self,i,nparams):
"""
Find the closest matching component in the parent SAA model to the current
component in the best-fitting model.
Parameters
----------
i : number
index for params
nparams : number
number of parameters in the pyspeckit model
"""
diff = np.zeros(int(np.size(self.guesses_parent)/nparams))
for j in range(int(np.size(self.guesses_parent)/nparams)):
pdiff = 0.0
for k in range(nparams):
pdiff+=(self.guesses_updated[int((i*nparams)+k)] - self.guesses_parent[int((j*nparams)+k)])**2.
diff[j] = np.sqrt(pdiff)
return diff
def fit_converge(self):
if None in self.pskspectrum.specfit.modelerrs:
return False
else:
return True
def printable_format(self):
"""
Parameters
----------
"""
specfit=self.pskspectrum.specfit
print("")
print("-----------------------------------------------------")
print("")
print("Model type: {0}".format(specfit.fittype))
print("")
print(("Number of components: {0}").format(specfit.npeaks))
print("")
compcount=0
if not self.fit_converge():
print(colors.fg._yellow_+"WARNING: Minimisation failed to converge. Please "
"\nrefit manually. "+colors._endc_)
print("")
for i in range(0, int(specfit.npeaks)):
parlow = int((i*len(specfit.fitter.parnames)))
parhigh = int((i*len(specfit.fitter.parnames))+len(specfit.fitter.parnames))
parrange = np.arange(parlow,parhigh)
for j in range(0, len(specfit.fitter.parnames)):
print(("{0}: {1} +/- {2}").format(specfit.fitter.parnames[j], \
np.around(specfit.modelpars[parrange[j]],
decimals=5), \
np.around(specfit.modelerrs[parrange[j]],
decimals=5)))
print("")
compcount+=1
print(("chisq: {0}").format(np.around(specfit.chi2, decimals=2)))
print(("redchisq: {0}").format(np.around(specfit.chi2/specfit.dof, decimals=2)))
print(("AIC: {0}").format(np.around(self.get_aic(), decimals=2)))
print("-----------------------------------------------------")
print("")
def event_loop():
fig = plt.gcf()
while plt.fignum_exists(fig.number):
try:
# using just a few little bits of plt.pause below
plt.gcf().canvas.draw()
plt.gcf().canvas.start_event_loop(0.1)
time.sleep(0.1)
except KeyboardInterrupt:
break
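The parent-comparison checks above repeatedly pair each child component with its nearest parent component via `find_closest_match`. A standalone sketch of that logic with NumPy (parameter values here are illustrative):

```python
import numpy as np


def find_closest_match(guesses_child, guesses_parent, i, nparams):
    """Euclidean distance in parameter space from child component i to each
    parent component, mirroring Decomposer.find_closest_match."""
    parent = np.asarray(guesses_parent, dtype=float).reshape(-1, nparams)
    child = np.asarray(guesses_child, dtype=float).reshape(-1, nparams)
    return np.sqrt(np.sum((parent - child[i]) ** 2, axis=1))


# A two-component parent (amp, velocity, dispersion) and a one-component child:
parent = [1.0, 0.0, 1.0, 2.0, 5.0, 1.5]
child = [1.9, 4.8, 1.4]
diff = find_closest_match(child, parent, 0, 3)
# The second parent component (index 1) is closest in parameter space.
assert int(np.argmin(diff)) == 1
```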
|
{
"filename": "jb08_analysis.ipynb",
"repo_name": "esa/thermonets",
"repo_path": "thermonets_extracted/thermonets-main/notebooks/jb08_analysis.ipynb",
"type": "Jupyter Notebook"
}
|
```python
import sys
sys.path.append("../")
import thermonets as tn
import datetime
import numpy as np
import matplotlib.pyplot as plt
import pyatmos
import cartopy.crs as ccrs
from tqdm import tqdm
```
The EOP file 'finals2000A.all' in /Users/dario.izzo/src/iers/ is already the latest.
The Leap Second file 'Leap_Second.dat' in /Users/dario.izzo/src/iers/ is already the latest.
### 1 - Load the dataset
The dataset of atmospheric values for the requested altitude ranges can be created by running `scripts/generate_jb08_db.py`; it is loaded here.
```python
# Here we load the db the model was trained upon.
# note that columns are (len 22):
# day, month, year, hour, minute, second, microsecond, alt [km], lat [deg], lon [deg], sun ra [deg], sun dec [deg], f107, f107A, s107, s107A, m107, m107A, y107, y107A, dDstdT, density [kg/m^3]
db = np.loadtxt("../dbs/jb08_db.txt", delimiter=",", skiprows=1)
np.random.shuffle(db)
print(f"Shape of database is: {db.shape}")
```
Shape of database is: (1000000, 22)
### 2 - Perform model inference
The `JB-08` neural model trained in `/notebooks/jb-08.py` is bundled with the thermonets module (it is loaded upon import) and is used here to reproduce the various atmospheric values in the database it was trained on.
NOTE: The following cell takes roughly one minute and could easily be sped up by vectorizing across the inputs; unfortunately, the API we chose for ``jb08_tn`` only allows vectorization over h, lat, and lon, hence the slowdown.
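Each loop iteration first converts the calendar columns of a database row into a day-of-year and a seconds-into-day value. A minimal, self-contained sketch of that conversion (the function name is illustrative):

```python
import datetime

def row_to_doy_sid(day, month, year, hour, minute, second, microsecond):
    """Day-of-year (doy) and seconds-into-day (sid) from calendar columns."""
    date = datetime.datetime(int(year), int(month), int(day),
                             int(hour), int(minute), int(second), int(microsecond))
    doy = date.timetuple().tm_yday
    sid = hour * 3600 + minute * 60 + second + microsecond / 1e6
    return doy, sid

doy, sid = row_to_doy_sid(1, 2, 2020, 12, 30, 15, 0)
assert doy == 32          # 1 February is day 32 of the year
assert sid == 45015.0     # 12h*3600 + 30m*60 + 15s
```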
```python
from tqdm import tqdm
# we extract the target density:
true = db[:, -1]
# and now we do inference and extract the predicted one:
predicted = []
for item in tqdm(db):
date = datetime.datetime(
year=int(item[2]),
month=int(item[1]),
day=int(item[0]),
hour=int(item[3]),
minute=int(item[4]),
second=int(item[5]),
microsecond=int(item[6]),
)
doy = date.timetuple().tm_yday
sid = item[3] * 3600 + item[4] * 60 + item[5] + item[6] / 1e6
predicted.append(
tn.jb08_tn(
hs=item[7],
lons=np.deg2rad(item[9]),
lats=np.deg2rad(item[8]),
f107=item[12],
f107a=item[13],
s107=item[14],
s107a=item[15],
m107=item[16],
m107a=item[17],
y107=item[18],
y107a=item[19],
dDstdT=item[20],
doy=doy,
sid=sid,
)
)
predicted = np.array(predicted).flatten()
```
0%| | 0/1000000 [00:00<?, ?it/s]
100%|██████████| 1000000/1000000 [00:43<00:00, 22880.13it/s]
We also compute the global-fit predictions on the database and their relative error.
```python
predicted_global_fit = tn.rho_approximation(
h=db[:, 7], params=tn.best_global_fit_jb08, backend="numpy"
)
rel_err_global_fit = ((predicted_global_fit - true) / true) * 100
```
```python
rel_err = ((predicted - true) / true) * 100
rel_err_global_fit = ((predicted_global_fit - true) / true) * 100
print("MAPE from NN model: ", np.mean(np.abs(rel_err)))
print("MAPE from Global Fit model: ", np.mean(np.abs(rel_err_global_fit)))
```
MAPE from NN model: 1.4269376787027257
MAPE from Global Fit model: 64.48631625347689
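The MAPE figures above follow directly from the relative errors; a minimal sketch of the computation as a helper (the name `mape` is ours, not part of thermonets):

```python
import numpy as np

def mape(predicted, true):
    """Mean absolute percentage error, in percent."""
    return np.mean(np.abs((predicted - true) / true)) * 100.0

# Toy check: a uniform 2% over-prediction yields a 2% MAPE.
true = np.array([1.0, 2.0, 4.0])
print(mape(true * 1.02, true))
```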
### 3 - Plots of the errors
```python
fig, ax = plt.subplots(1, 2, figsize=(12, 5))
cm = plt.colormaps["RdYlBu"]
rel_err = ((predicted - true) / true) * 100
c = (db[::99, 12] + 1) / 2
sc = ax[0].scatter(db[::99, 7], np.abs(rel_err[::99]), alpha=0.2, c=c, s=5, cmap=cm)
ax[0].set_yscale("log")
ax[0].set_xscale("log")
ax[0].set_yticks([1, 10, 100], ["1%", "10%", "100%"])
ax[0].set_xticks([200, 300, 400, 500, 600], ["200", "300", "400", "500", "600"])
ax[0].set_title("Relative error (thermoNET)")
ax[0].set_xlabel("Altitude [km]")
ax[0].set_ylabel("Relative error in percent")
ax[1].hist(rel_err, bins=100, density=False, alpha=0.5)
ax[1].hist(rel_err_global_fit, bins=100, density=False, alpha=0.5);
```

### 4 - Thermospheric map error
```python
# We first set the resolution of the longitude/latitude grid
n_grid = 50
# Altitude in km
alt = 300.0
# Date
date = datetime.datetime(2018, 4, 22, 6, 13, 35)
```
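The cells below fill `true[i, j]` with `i` running over longitude and `j` over latitude, which is why the plotting grid is built with `np.meshgrid(..., indexing="ij")`. A minimal sketch of that axis convention on a tiny grid:

```python
import numpy as np

n = 3  # tiny grid for illustration
longitude = np.linspace(-180, 180, n)
latitude = np.linspace(90, -90, n)

# indexing="ij": axis 0 follows the first argument (longitude),
# axis 1 follows the second (latitude).
lon_grid, lat_grid = np.meshgrid(longitude, latitude, indexing="ij")
print(lon_grid.shape)   # (3, 3)
print(lon_grid[0, :])   # longitude is constant along axis 1
```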
```python
# We build the model entries:
latitude = np.linspace(90, -90, n_grid)
longitude = np.linspace(-180, 180, n_grid)
f107, f107a, s107, s107a, m107, m107a, y107, y107a, dDstdT, doy, sid = (
tn.get_jb08_attributes(date)
)
# we download and read the space-weather data needed by pyatmos:
swfile = pyatmos.download_sw_jb2008()
swdata = pyatmos.read_sw_jb2008(swfile)
# we extract the target density (running JB-08):
true = np.zeros((n_grid, n_grid))
for i, lon in enumerate(tqdm(longitude)):
for j, lat in enumerate(latitude):
true[i, j] = pyatmos.jb2008(date, (lat, lon, alt), swdata).rho
predicted = tn.jb08_tn(
hs=[alt],
lons=np.deg2rad(longitude),
lats=np.deg2rad(latitude),
f107=f107,
f107a=f107a,
s107=s107,
s107a=s107a,
m107=m107,
m107a=m107a,
y107=y107,
y107a=y107a,
dDstdT=dDstdT,
doy=doy,
sid=sid,
)
```
The Space Weather files 'SOLFSMY.TXT' and 'DTCFILE.TXT' in /Users/dario.izzo/src/sw-data/ are already the latest.
The Space Weather files 'SOLFSMY.TXT' and 'DTCFILE.TXT' in /Users/dario.izzo/src/sw-data/ are already the latest.
0%| | 0/50 [00:00<?, ?it/s]
100%|██████████| 50/50 [00:06<00:00, 7.21it/s]
```python
# I need the grid for the plotting:
longitude_grid, latitude_grid = np.meshgrid(longitude, latitude, indexing="ij")
# on the longitude x latitude grid, we compute the relative error (in %)
rel_err = ((predicted.squeeze() - true) / true) * 100
# we print to screen the mean absolute percentage error on the globe map:
print(
f"Average absolute relative percentage error on globe map: {abs(rel_err).mean()} %"
)
# we now store the minimum and maximum density values (between target & prediction), for plotting purposes:
vmin = min([true.min(), predicted.min()])
vmax = max([true.max(), predicted.max()])
# we now create a figure with a globe projection on top:
fig, ax = plt.subplots(
figsize=(18, 12),
nrows=2,
ncols=2,
subplot_kw={"projection": ccrs.Mollweide(central_longitude=1)},
)
# we flatten the axes and hide the unused last panel
ax = ax.ravel()
ax[-1].axis("off")
# we plot JB-08 on the first figure:
ax[0].pcolormesh(
longitude_grid,
latitude_grid,
true,
transform=ccrs.PlateCarree(),
vmin=vmin,
vmax=vmax,
)
ax[0].set_global()
ax[0].coastlines()
# ax[0].gridlines()
ax[0].set_title("JB-08")
# the NN prediction on the second:
im2 = ax[1].pcolormesh(
longitude_grid,
latitude_grid,
predicted.squeeze(),
transform=ccrs.PlateCarree(),
vmin=vmin,
vmax=vmax,
)
ax[1].set_global()
ax[1].coastlines()
# ax[1].gridlines()
ax[1].set_title("NN Prediction")
# we add a shared colorbar for the first two figures:
cax1 = fig.add_axes([0.93, 0.6, 0.02, 0.2]) # [left, bottom, width, height]
cbar1 = plt.colorbar(im2, orientation="vertical", fraction=0.035, cax=cax1)
cbar1.set_label("Thermospheric Density [kg/m$^3$]")
# we finally plot the relative error in the second row
im3 = ax[2].pcolormesh(
longitude_grid,
latitude_grid,
rel_err.reshape((n_grid, n_grid)),
transform=ccrs.PlateCarree(),
cmap="inferno",
)
ax[2].set_global()
ax[2].coastlines()
# ax[2].gridlines()
# and we add the colorbar for that:
cax2 = fig.add_axes([0.51, 0.1, 0.02, 0.3]) # [left, bottom, width, height]
cbar1 = plt.colorbar(im3, orientation="vertical", fraction=0.035, cax=cax2)
cbar1.set_label("Relative Error [%]")
```
Average absolute relative percentage error on globe map: 1.0293767778758436 %

{
"filename": "base.py",
"repo_name": "langchain-ai/langchain",
"repo_path": "langchain_extracted/langchain-master/libs/langchain/langchain/schema/callbacks/tracers/base.py",
"type": "Python"
}
from langchain_core.tracers.base import BaseTracer, TracerException
__all__ = ["TracerException", "BaseTracer"]
{
"filename": "sweet_cat.py",
"repo_name": "sczesla/PyAstronomy",
"repo_path": "PyAstronomy_extracted/PyAstronomy-master/src/pyasl/resBased/sweet_cat.py",
"type": "Python"
}
from __future__ import print_function, division
from PyAstronomy.pyaC import pyaPermanent as pp
from PyAstronomy.pyaC import pyaErrors as PE
import os
import gzip
from PyAstronomy.pyasl import _ic
import ssl
class SWEETCat(pp.PyAUpdateCycle):
"""
Access the SWEET-Cat catalog.
The SWEET-Cat catalog provides parameters for planet host stars.
The following data are provided
=========== =================================== ======
Column Description Unit
----------- ----------------------------------- ------
star Name of the star
hd The HD number (if available)
ra The right ascension hms
dec The declination dms
vmag V magnitude mag
ervmag Error on V magnitude mag
par Parallax mas
erpar Error on parallax mas
parsource If par is from Simbad or calculated
teff Effective temperature K
erteff Error on effective temperature K
logg Surface gravity cgs
erlogg Error on surface gravity cgs
logglc Surface gravity from LC cgs
erlogglc Error on surface gravity from LC cgs
vt Micro turbulence km/s
ervt Error on micro turbulence km/s
metal Metallicity ([Fe/H])
ermetal Error on metallicity
mass Mass calculated from Torres et al. Solar
ermass Error on calculated mass Solar
author Author of source
link Link to paper in ADS
source 1 means CAUP's method. 0 otherwise
update When the parameters were updated
comment1 Special comment
comment2 Blank
=========== =================================== ======
Detailed information can be found here:
https://www.astro.up.pt/resources/sweet-cat/
and in the associated publications.
Attributes
----------
data : pandas data frame
The catalog data
"""
def _downloadData(self):
"""
Download SWEETCAT and write it to file
"""
urls = ['https://www.astro.up.pt/resources/sweet-cat/download.php', \
"https://raw.githubusercontent.com/iastro-pt/SWEET-Cat/master/WEBSITE_online_EU.rdb"]
dfn = self._fs.composeFilename(self.dataFileName)
success = False
ex = []
for url in urls:
try:
# The next two lines by-pass certificate verification
context = ssl._create_unverified_context()
self._fs.downloadToFile(url, dfn, clobber=True, verbose=False, openMethod=gzip.open,
context=context)
success = True
except AttributeError:
# Python version does not support ssl._create_unverified_context()
self._fs.downloadToFile(
url, dfn, clobber=True, verbose=False, openMethod=gzip.open)
success = True
except Exception as e:
# "Handle" unexpected error
ex.append((url,e))
if success:
break
if not success:
raise(PE.PyANetworkError(
"Could not download SWEET-Cat data. Received the following errors:\n\n" + \
''.join(["URL: "+x[0]+"\n"+str(x[1])+"\n\n" for x in ex]) ))
self.source_url = url
def _read_sweetcat(self):
"""
Read SWEETCat into a pandas DataFrame
"""
if not _ic.check["pandas"]:
raise(PE.PyARequiredImport("Could not import pandas module.",
solution="Please install pandas (http://pandas.pydata.org). Please also check the dependencies."))
else:
import pandas as pd
ffn = self._fs.requestFile(self.dataFileName, 'r', gzip.open)
self.data = pd.read_csv(
ffn, sep='\t', names=self.names, na_values=['~'])
def __init__(self, skipUpdate=False):
self.dataFileName = os.path.join(
"pyasl", "resBased", "sweetcat.csv.gz")
configFilename = os.path.join("pyasl", "resBased", "sweetcat.cfg")
pp.PyAUpdateCycle.__init__(self, configFilename, "sweetcatupdate")
self.names = ['star', 'hd', 'ra', 'dec', 'vmag', 'ervmag', 'par', 'erpar',
'parsource', 'teff', 'erteff', 'logg', 'erlogg', 'logglc',
'erlogglc', 'vt', 'ervt', 'metal', 'ermetal', 'mass', 'ermass',
'author', 'source', 'update', 'comment']
# Check whether data file exists
self._fs = pp.PyAFS()
if (self.needsUpdate() or (not self._fs.fileExists(self.dataFileName))) and (not skipUpdate):
# Data needs update
print("Downloading exoplanet data from SWEET-Cat archive")
self._update(self._downloadData)
print("Saved data to file: ",
self.dataFileName, " in data directory,")
print(" which has been configured as: ", self._fs.dpath)
print("By default, the data will be downloaded anew every 7 days.")
print("You can use the `changeDownloadCycle` to change this behavior.")
self._read_sweetcat()
def downloadData(self):
"""
Trigger download of data.
"""
self._update(self._downloadData)
self._read_sweetcat()
{
"filename": "trainer.py",
"repo_name": "fchollet/keras",
"repo_path": "keras_extracted/keras-master/keras/src/backend/jax/trainer.py",
"type": "Python"
}
import collections
import itertools
import jax
import numpy as np
from keras.src import backend
from keras.src import callbacks as callbacks_module
from keras.src import optimizers as optimizers_module
from keras.src import tree
from keras.src.backend import distribution_lib as jax_distribution_lib
from keras.src.distribution import distribution_lib
from keras.src.trainers import trainer as base_trainer
from keras.src.trainers.data_adapters import array_slicing
from keras.src.trainers.data_adapters import data_adapter_utils
from keras.src.trainers.epoch_iterator import EpochIterator
from keras.src.utils import traceback_utils
class JAXTrainer(base_trainer.Trainer):
def __init__(self):
super().__init__()
self.train_function = None
self.test_function = None
self.predict_function = None
self._jax_state_synced = True
def compute_loss_and_updates(
self,
trainable_variables,
non_trainable_variables,
metrics_variables,
x,
y,
sample_weight,
training=False,
optimizer_variables=None,
):
"""This method is stateless and is intended for use with jax.grad."""
kwargs = {}
if self._call_has_training_arg:
kwargs["training"] = training
# Run stateless forward pass
y_pred, non_trainable_variables, losses = self.stateless_call(
trainable_variables,
non_trainable_variables,
x,
return_losses=True,
**kwargs,
)
if losses:
# Make forward pass losses available to compute_loss.
self._losses_override.clear()
self._losses_override = losses
loss, variables = self.stateless_compute_loss(
trainable_variables,
non_trainable_variables,
metrics_variables,
x=x,
y=y,
y_pred=y_pred,
sample_weight=sample_weight,
training=training,
)
if losses:
self._losses_override.clear()
(trainable_variables, non_trainable_variables, metrics_variables) = (
variables
)
# Handle loss scaling
unscaled_loss = loss
if training and self.optimizer is not None:
# Scale loss with a StatelessScope, to use an update scale variable.
mapping = list(zip(self.optimizer.variables, optimizer_variables))
with backend.StatelessScope(state_mapping=mapping):
loss = self.optimizer.scale_loss(loss)
return loss, (
unscaled_loss,
y_pred,
non_trainable_variables,
metrics_variables,
)
def train_step(self, state, data):
(
trainable_variables,
non_trainable_variables,
optimizer_variables,
metrics_variables,
) = state
x, y, sample_weight = data_adapter_utils.unpack_x_y_sample_weight(data)
grad_fn = jax.value_and_grad(
self.compute_loss_and_updates, has_aux=True
)
(loss, aux), grads = grad_fn(
trainable_variables,
non_trainable_variables,
metrics_variables,
x,
y,
sample_weight,
training=True,
optimizer_variables=optimizer_variables,
)
(unscaled_loss, y_pred, non_trainable_variables, metrics_variables) = (
aux
)
(
trainable_variables,
optimizer_variables,
) = self.optimizer.stateless_apply(
optimizer_variables, grads, trainable_variables
)
with backend.StatelessScope(
state_mapping=[
(ref_v, v)
for ref_v, v in zip(self.metrics_variables, metrics_variables)
]
) as scope:
self._loss_tracker.update_state(
unscaled_loss, sample_weight=tree.flatten(x)[0].shape[0]
)
logs = self.compute_metrics(x, y, y_pred, sample_weight)
new_metrics_variables = []
for ref_v in self.metrics_variables:
new_v = scope.get_current_value(ref_v)
if new_v is None:
new_v = ref_v.value
new_metrics_variables.append(new_v)
metrics_variables = new_metrics_variables
state = self._enforce_jax_state_sharding(
trainable_variables,
non_trainable_variables,
optimizer_variables,
metrics_variables,
)
return logs, state
def test_step(self, state, data):
(
trainable_variables,
non_trainable_variables,
metrics_variables,
) = state
x, y, sample_weight = data_adapter_utils.unpack_x_y_sample_weight(data)
loss, aux = self.compute_loss_and_updates(
trainable_variables,
non_trainable_variables,
metrics_variables,
x,
y,
sample_weight,
training=False,
)
(unscaled_loss, y_pred, non_trainable_variables, metrics_variables) = (
aux
)
with backend.StatelessScope(
state_mapping=[
(ref_v, v)
for ref_v, v in zip(self.metrics_variables, metrics_variables)
]
) as scope:
self._loss_tracker.update_state(
unscaled_loss, sample_weight=tree.flatten(x)[0].shape[0]
)
logs = self.compute_metrics(x, y, y_pred, sample_weight)
new_metrics_variables = []
for ref_v in self.metrics_variables:
new_v = scope.get_current_value(ref_v)
if new_v is None:
new_v = ref_v.value
new_metrics_variables.append(new_v)
metrics_variables = new_metrics_variables
(
trainable_variables,
non_trainable_variables,
_,
metrics_variables,
) = self._enforce_jax_state_sharding(
trainable_variables=trainable_variables,
non_trainable_variables=non_trainable_variables,
optimizer_variables=None,
metrics_variables=metrics_variables,
)
state = (
trainable_variables,
non_trainable_variables,
metrics_variables,
)
return logs, state
def predict_step(self, state, data):
trainable_variables, non_trainable_variables = state
kwargs = {}
if self._call_has_training_arg:
kwargs["training"] = False
x, _, _ = data_adapter_utils.unpack_x_y_sample_weight(data)
outputs, non_trainable_variables = self.stateless_call(
trainable_variables, non_trainable_variables, x, **kwargs
)
(
_,
non_trainable_variables,
_,
_,
) = self._enforce_jax_state_sharding(
trainable_variables=None,
non_trainable_variables=non_trainable_variables,
optimizer_variables=None,
metrics_variables=None,
)
return outputs, non_trainable_variables
def _make_function(self, step_function, concatenate_outputs=False):
if self.steps_per_execution > 1:
if concatenate_outputs:
def concatenate(outputs):
output = outputs[0]
for next_output in outputs[1:]:
output = tree.map_structure(
lambda t1, t2: jax.numpy.concatenate([t1, t2]),
output,
next_output,
)
return output
if not self.run_eagerly and self.jit_compile:
concatenate = jax.jit(concatenate)
def iterator_step(state, iterator):
data = next(iterator)
outputs, state = step_function(state, data)
outputs = [outputs]
try:
for _ in range(self.steps_per_execution - 1):
data = next(iterator)
_outputs, state = step_function(state, data)
outputs.append(_outputs)
except StopIteration:
pass
outputs = concatenate(outputs)
return outputs, state
else:
def iterator_step(state, iterator):
data = next(iterator)
outputs, state = step_function(state, data)
try:
for _ in range(self.steps_per_execution - 1):
data = next(iterator)
outputs, state = step_function(state, data)
except StopIteration:
pass
return outputs, state
else:
def iterator_step(state, iterator):
return step_function(state, next(iterator))
return iterator_step
def make_train_function(self, force=False):
if self.train_function is not None and not force:
return
if not self.run_eagerly and self.jit_compile:
# Note that we mark the state to be donated to jax,
# so that jax will reuse the memory buffer for outputs.
# This will reduce the memory usage of the training function by
# half.
train_step = jax.jit(self.train_step, donate_argnums=0)
else:
train_step = self.train_step
step_function = self._make_function(train_step)
self.train_function = step_function
def make_test_function(self, force=False):
if self.test_function is not None and not force:
return
if not self.run_eagerly and self.jit_compile:
# Note that we mark the state to be donated to jax,
# so that jax will reuse the memory buffer for outputs.
# This will reduce the memory usage of the training function by
# half.
test_step = jax.jit(self.test_step, donate_argnums=0)
else:
test_step = self.test_step
step_function = self._make_function(test_step)
self.test_function = step_function
def make_predict_function(self, force=False):
if self.predict_function is not None and not force:
return self.predict_function
def predict_step(state, data):
outputs, non_trainable_variables = self.predict_step(state, data)
return outputs, (state[0], non_trainable_variables)
if not self.run_eagerly and self.jit_compile:
predict_step = jax.jit(predict_step)
_step_function = self._make_function(
predict_step, concatenate_outputs=True
)
def step_function(state, iterator):
outputs, state = _step_function(state, iterator)
return outputs, state[1]
self.predict_function = step_function
@traceback_utils.filter_traceback
def fit(
self,
x=None,
y=None,
batch_size=None,
epochs=1,
verbose="auto",
callbacks=None,
validation_split=0.0,
validation_data=None,
shuffle=True,
class_weight=None,
sample_weight=None,
initial_epoch=0,
steps_per_epoch=None,
validation_steps=None,
validation_batch_size=None,
validation_freq=1,
):
self._assert_compile_called("fit")
# TODO: respect compiled trainable state
self._eval_epoch_iterator = None
if validation_split and validation_data is None:
# Create the validation data using the training data. Only supported
# for TF/numpy/jax arrays.
(
(x, y, sample_weight),
validation_data,
) = array_slicing.train_validation_split(
(x, y, sample_weight), validation_split=validation_split
)
if validation_data is not None:
(
val_x,
val_y,
val_sample_weight,
) = data_adapter_utils.unpack_x_y_sample_weight(validation_data)
# Create an iterator that yields batches for one epoch.
epoch_iterator = JAXEpochIterator(
x=x,
y=y,
sample_weight=sample_weight,
batch_size=batch_size,
steps_per_epoch=steps_per_epoch,
shuffle=shuffle,
class_weight=class_weight,
steps_per_execution=self.steps_per_execution,
)
self._symbolic_build(iterator=epoch_iterator)
epoch_iterator.reset()
# Container that configures and calls callbacks.
if not isinstance(callbacks, callbacks_module.CallbackList):
callbacks = callbacks_module.CallbackList(
callbacks,
add_history=True,
add_progbar=verbose != 0,
verbose=verbose,
epochs=epochs,
steps=epoch_iterator.num_batches,
model=self,
)
self._record_training_state_sharding_spec()
self.make_train_function()
self.stop_training = False
training_logs = {}
callbacks.on_train_begin()
initial_epoch = self._initial_epoch or initial_epoch
for epoch in range(initial_epoch, epochs):
self.reset_metrics()
callbacks.on_epoch_begin(epoch)
self._jax_state_synced = True
with epoch_iterator.catch_stop_iteration():
for step, iterator in epoch_iterator:
# Callbacks
callbacks.on_train_batch_begin(step)
# Train step
if self._jax_state_synced:
# The state may have been synced by a callback.
state = self._get_jax_state(
trainable_variables=True,
non_trainable_variables=True,
optimizer_variables=True,
metrics_variables=True,
purge_model_variables=True,
)
self._jax_state_synced = False
logs, state = self.train_function(state, iterator)
(
trainable_variables,
non_trainable_variables,
optimizer_variables,
metrics_variables,
) = state
# Setting _jax_state enables callbacks to force a state sync
# if they need to.
self._jax_state = {
"trainable_variables": trainable_variables,
"non_trainable_variables": non_trainable_variables,
"optimizer_variables": optimizer_variables,
"metrics_variables": metrics_variables,
}
# Dispatch callbacks. This takes care of async dispatch.
callbacks.on_train_batch_end(step, logs)
if self.stop_training:
# Stop training if a callback has set
# this flag in on_(train_)batch_end.
break
# Reattach state to the model (if not already done by a callback).
# NOTE: doing this after each step would be a big performance
# bottleneck.
self.jax_state_sync()
# Override with model metrics instead of last step logs if needed.
        # The jax spmd_mode is needed in a multi-process context, since
        # the metrics values are replicated and we don't want to do an
        # all-gather; we only need the local copy of the value.
with jax.spmd_mode("allow_all"):
epoch_logs = dict(self._get_metrics_result_or_logs(logs))
# Run validation.
if validation_data is not None and self._should_eval(
epoch, validation_freq
):
# Create JAXEpochIterator for evaluation and cache it.
if getattr(self, "_eval_epoch_iterator", None) is None:
self._eval_epoch_iterator = JAXEpochIterator(
x=val_x,
y=val_y,
sample_weight=val_sample_weight,
batch_size=validation_batch_size or batch_size,
steps_per_execution=self.steps_per_execution,
steps_per_epoch=validation_steps,
shuffle=False,
)
val_logs = self.evaluate(
x=val_x,
y=val_y,
sample_weight=val_sample_weight,
batch_size=validation_batch_size or batch_size,
steps=validation_steps,
callbacks=callbacks,
return_dict=True,
_use_cached_eval_dataset=True,
)
val_logs = {
"val_" + name: val for name, val in val_logs.items()
}
epoch_logs.update(val_logs)
callbacks.on_epoch_end(epoch, epoch_logs)
training_logs = epoch_logs
if self.stop_training:
break
if (
isinstance(self.optimizer, optimizers_module.Optimizer)
and epochs > 0
):
self.optimizer.finalize_variable_values(self.trainable_weights)
# If _eval_epoch_iterator exists, delete it after all epochs are done.
if getattr(self, "_eval_epoch_iterator", None) is not None:
del self._eval_epoch_iterator
callbacks.on_train_end(logs=training_logs)
self._jax_state = None
return self.history
@traceback_utils.filter_traceback
def evaluate(
self,
x=None,
y=None,
batch_size=None,
verbose="auto",
sample_weight=None,
steps=None,
callbacks=None,
return_dict=False,
**kwargs,
):
self._assert_compile_called("evaluate")
# TODO: respect compiled trainable state
use_cached_eval_dataset = kwargs.pop("_use_cached_eval_dataset", False)
if kwargs:
raise ValueError(f"Arguments not recognized: {kwargs}")
if use_cached_eval_dataset:
epoch_iterator = self._eval_epoch_iterator
else:
# Create an iterator that yields batches of input/target data.
epoch_iterator = JAXEpochIterator(
x=x,
y=y,
sample_weight=sample_weight,
batch_size=batch_size,
steps_per_epoch=steps,
shuffle=False,
steps_per_execution=self.steps_per_execution,
)
self._symbolic_build(iterator=epoch_iterator)
epoch_iterator.reset()
# Container that configures and calls callbacks.
if not isinstance(callbacks, callbacks_module.CallbackList):
callbacks = callbacks_module.CallbackList(
callbacks,
add_history=True,
add_progbar=verbose != 0,
verbose=verbose,
epochs=1,
steps=epoch_iterator.num_batches,
model=self,
)
self._record_training_state_sharding_spec()
self.make_test_function()
self.stop_evaluating = False
callbacks.on_test_begin()
logs = {}
self.reset_metrics()
self._jax_state_synced = True
with epoch_iterator.catch_stop_iteration():
for step, iterator in epoch_iterator:
callbacks.on_test_batch_begin(step)
if self._jax_state_synced:
# The state may have been synced by a callback.
state = self._get_jax_state(
trainable_variables=True,
non_trainable_variables=True,
metrics_variables=True,
purge_model_variables=True,
)
self._jax_state_synced = False
logs, state = self.test_function(state, iterator)
(
trainable_variables,
non_trainable_variables,
metrics_variables,
) = state
# Setting _jax_state enables callbacks to force a state sync
# if they need to.
self._jax_state = {
# I wouldn't recommend modifying non-trainable model state
# during evaluate(), but it's allowed.
"trainable_variables": trainable_variables,
"non_trainable_variables": non_trainable_variables,
"metrics_variables": metrics_variables,
}
# Dispatch callbacks. This takes care of async dispatch.
callbacks.on_test_batch_end(step, logs)
if self.stop_evaluating:
break
# Reattach state back to model (if not already done by a callback).
self.jax_state_sync()
        # The jax spmd_mode is needed in a multi-process context, since
        # the metrics values are replicated and we don't want to do an
        # all-gather; we only need the local copy of the value.
with jax.spmd_mode("allow_all"):
logs = self._get_metrics_result_or_logs(logs)
callbacks.on_test_end(logs)
self._jax_state = None
if return_dict:
return logs
return self._flatten_metrics_in_order(logs)
@traceback_utils.filter_traceback
def predict(
self, x, batch_size=None, verbose="auto", steps=None, callbacks=None
):
# Create an iterator that yields batches of input data.
epoch_iterator = JAXEpochIterator(
x=x,
batch_size=batch_size,
steps_per_epoch=steps,
shuffle=False,
steps_per_execution=self.steps_per_execution,
)
if not all(layer.built for layer in self._flatten_layers()):
# Build the model on one batch of data.
for _, iterator in epoch_iterator:
# Build model
x, _, _ = data_adapter_utils.unpack_x_y_sample_weight(
next(iterator)
)
with backend.StatelessScope():
self(x)
break
epoch_iterator.reset()
# Container that configures and calls callbacks.
if not isinstance(callbacks, callbacks_module.CallbackList):
callbacks = callbacks_module.CallbackList(
callbacks,
add_history=True,
add_progbar=verbose != 0,
verbose=verbose,
epochs=1,
steps=epoch_iterator.num_batches,
model=self,
)
self._record_training_state_sharding_spec()
self.make_predict_function()
self.stop_predicting = False
callbacks.on_predict_begin()
def append_to_outputs(batch_outputs, outputs):
if outputs is None:
outputs = tree.map_structure(
lambda batch_output: [batch_output],
batch_outputs,
)
else:
tree.map_structure_up_to(
batch_outputs,
lambda output, batch_output: output.append(batch_output),
outputs,
batch_outputs,
)
return outputs
self._jax_state_synced = True
outputs = None
non_trainable_variables = None
with epoch_iterator.catch_stop_iteration():
for step, iterator in epoch_iterator:
callbacks.on_predict_batch_begin(step)
if self._jax_state_synced:
# The state may have been synced by a callback.
state = self._get_jax_state(
trainable_variables=True,
non_trainable_variables=True,
)
self._purge_model_variables(non_trainable_variables=True)
self._jax_state_synced = False
else:
state = (state[0], non_trainable_variables)
batch_outputs, non_trainable_variables = self.predict_function(
state, iterator
)
outputs = append_to_outputs(batch_outputs, outputs)
# Dispatch callbacks. This takes care of async dispatch.
callbacks.on_predict_batch_end(step, {"outputs": batch_outputs})
if self.stop_predicting:
break
self._jax_state = {
# I wouldn't recommend modifying non-trainable model state
# during predict(), but it's allowed.
"non_trainable_variables": non_trainable_variables,
}
self.jax_state_sync()
callbacks.on_predict_end()
self._jax_state = None
return tree.map_structure_up_to(batch_outputs, np.concatenate, outputs)
def train_on_batch(
self,
x,
y=None,
sample_weight=None,
class_weight=None,
return_dict=False,
):
self._assert_compile_called("train_on_batch")
if class_weight is not None:
if sample_weight is not None:
raise ValueError(
"Arguments `sample_weight` and `class_weight` "
"cannot be specified at the same time. "
f"Received: sample_weight={sample_weight}, "
f"class_weight={class_weight}"
)
sample_weight = data_adapter_utils.class_weight_to_sample_weights(
y, class_weight
)
def data():
yield _distribute_data((x, y, sample_weight))
# Maybe build model
self._symbolic_build(data_batch=next(data()))
self._record_training_state_sharding_spec()
self.make_train_function()
# Train step
state = self._get_jax_state(
trainable_variables=True,
non_trainable_variables=True,
optimizer_variables=True,
metrics_variables=True,
purge_model_variables=False,
)
self._jax_state_synced = False
logs, state = self.train_function(state, data())
# State sync
(
trainable_variables,
non_trainable_variables,
optimizer_variables,
metrics_variables,
) = state
self._jax_state = {
"trainable_variables": trainable_variables,
"non_trainable_variables": non_trainable_variables,
"optimizer_variables": optimizer_variables,
"metrics_variables": metrics_variables,
}
self.jax_state_sync()
# Format return values
logs = tree.map_structure(lambda x: np.array(x), logs)
if return_dict:
return logs
return self._flatten_metrics_in_order(logs)
def test_on_batch(
self,
x,
y=None,
sample_weight=None,
return_dict=False,
):
self._assert_compile_called("test_on_batch")
def data():
yield _distribute_data((x, y, sample_weight))
# Maybe build model
self._symbolic_build(data_batch=next(data()))
self._record_training_state_sharding_spec()
self.make_test_function()
# Test step
state = self._get_jax_state(
trainable_variables=True,
non_trainable_variables=True,
metrics_variables=True,
purge_model_variables=False,
)
self._jax_state_synced = False
logs, state = self.test_function(state, data())
# State sync
trainable_variables, non_trainable_variables, metrics_variables = state
self._jax_state = {
"trainable_variables": trainable_variables,
"non_trainable_variables": non_trainable_variables,
"metrics_variables": metrics_variables,
}
self.jax_state_sync()
# Format return values.
logs = tree.map_structure(lambda x: np.array(x), logs)
if return_dict:
return logs
return self._flatten_metrics_in_order(logs)
def predict_on_batch(self, x):
if not all(layer.built for layer in self._flatten_layers()):
# Build model
with backend.StatelessScope():
self(x)
self._record_training_state_sharding_spec()
self.make_predict_function()
state = self._get_jax_state(
trainable_variables=True,
non_trainable_variables=True,
metrics_variables=False,
purge_model_variables=False,
)
self._jax_state_synced = False
def data():
yield (x,)
batch_outputs, non_trainable_variables = self.predict_function(
state, data()
)
self._jax_state = {
"non_trainable_variables": non_trainable_variables,
}
self.jax_state_sync()
batch_outputs = tree.map_structure(lambda x: np.array(x), batch_outputs)
return batch_outputs
def jax_state_sync(self):
if not getattr(self, "_jax_state", None) or self._jax_state_synced:
return
trainable_variables = self._jax_state.get("trainable_variables", None)
non_trainable_variables = self._jax_state.get(
"non_trainable_variables", None
)
optimizer_variables = self._jax_state.get("optimizer_variables", None)
metrics_variables = self._jax_state.get("metrics_variables", None)
if trainable_variables:
for ref_v, v in zip(self.trainable_variables, trainable_variables):
ref_v.assign(v)
if non_trainable_variables:
for ref_v, v in zip(
self.non_trainable_variables, non_trainable_variables
):
ref_v.assign(v)
if optimizer_variables:
for ref_v, v in zip(self.optimizer.variables, optimizer_variables):
ref_v.assign(v)
if metrics_variables:
for ref_v, v in zip(self.metrics_variables, metrics_variables):
ref_v.assign(v)
self._jax_state_synced = True
def _record_training_state_sharding_spec(self):
self._trainable_variable_shardings = [
v.value.sharding for v in self.trainable_variables
]
self._non_trainable_variable_shardings = [
v.value.sharding for v in self.non_trainable_variables
]
if hasattr(self, "optimizer") and self.optimizer is not None:
self._optimizer_variable_shardings = [
v.value.sharding for v in self.optimizer.variables
]
else:
self._optimizer_variable_shardings = []
self._metrics_variable_shardings = [
v.value.sharding for v in self.metrics_variables
]
def _enforce_jax_state_sharding(
self,
trainable_variables=None,
non_trainable_variables=None,
optimizer_variables=None,
metrics_variables=None,
):
"""Enforce the sharding spec constraint for all the training state.
Since the output of the train/eval step will be used as inputs to next
step, we need to ensure that they have the same sharding spec, so that
jax.jit won't have to recompile the train/eval function.
Note that this function will also rely on the recorded sharding spec
for each of states.
This function is expected to be called within the jitted train/eval
function, especially around the end of the function.
"""
trainable_variables = trainable_variables or []
non_trainable_variables = non_trainable_variables or []
optimizer_variables = optimizer_variables or []
metrics_variables = metrics_variables or []
for i in range(len(trainable_variables)):
trainable_variables[i] = jax.lax.with_sharding_constraint(
trainable_variables[i], self._trainable_variable_shardings[i]
)
for i in range(len(non_trainable_variables)):
non_trainable_variables[i] = jax.lax.with_sharding_constraint(
non_trainable_variables[i],
self._non_trainable_variable_shardings[i],
)
for i in range(len(optimizer_variables)):
optimizer_variables[i] = jax.lax.with_sharding_constraint(
optimizer_variables[i], self._optimizer_variable_shardings[i]
)
for i in range(len(metrics_variables)):
metrics_variables[i] = jax.lax.with_sharding_constraint(
metrics_variables[i], self._metrics_variable_shardings[i]
)
return (
trainable_variables,
non_trainable_variables,
optimizer_variables,
metrics_variables,
)
def _purge_model_variables(
self,
trainable_variables=False,
non_trainable_variables=False,
optimizer_variables=False,
metrics_variables=False,
):
"""Remove all the model variable for memory saving.
During JAX training, since the training function are stateless, we have
to pass in and get the model weights over and over, during which the
copy of the weights that attached to the Variable are still and
occupying extra memory. We remove those variable to save memory (for
better memory utilization) at the beginning of the epoch, and reattach
the value back to variables at the end of the epoch, via
`jax_state_sync()`.
"""
if trainable_variables:
for v in self.trainable_variables:
v._value = None
if non_trainable_variables:
for v in self.non_trainable_variables:
v._value = None
if optimizer_variables:
for v in self.optimizer.variables:
v._value = None
if metrics_variables:
for v in self.metrics_variables:
v._value = None
def _get_jax_state(
self,
trainable_variables=False,
non_trainable_variables=False,
optimizer_variables=False,
metrics_variables=False,
purge_model_variables=False,
):
state = []
if trainable_variables:
state.append([v.value for v in self.trainable_variables])
if non_trainable_variables:
state.append([v.value for v in self.non_trainable_variables])
if optimizer_variables:
state.append([v.value for v in self.optimizer.variables])
if metrics_variables:
state.append([v.value for v in self.metrics_variables])
if purge_model_variables:
self._purge_model_variables(
trainable_variables=trainable_variables,
non_trainable_variables=non_trainable_variables,
optimizer_variables=optimizer_variables,
metrics_variables=metrics_variables,
)
return tuple(state)
def _distribute_data(data, layouts=None):
distribution = distribution_lib.distribution()
if distribution is not None:
if layouts is None:
layouts = tree.map_structure(
lambda d: distribution.get_data_layout(d.shape),
data,
)
return tree.map_structure(
jax_distribution_lib.distribute_data_input, data, layouts
)
return tree.map_structure(jax.device_put, data)
class JAXEpochIterator(EpochIterator):
def __next__(self):
return next(self._epoch_iterator)
def _get_iterator(self):
distribution = distribution_lib.distribution()
if distribution is not None:
return self._get_distributed_iterator(distribution)
return self._prefetch_numpy_iterator(
self.data_adapter.get_jax_iterator()
)
def _get_distributed_iterator(self, distribution):
"""Lazily compute layouts to reduce host to device transfer latency."""
layouts = None
for data in self.data_adapter.get_jax_iterator():
if layouts is None:
layouts = tree.map_structure(
lambda d: jax_distribution_lib._to_jax_layout(
distribution.get_data_layout(d.shape)
),
data,
)
yield _distribute_data(data, layouts)
def _prefetch_numpy_iterator(self, numpy_iterator):
"""Shard and prefetch batches on device.
Most of the implementation has been borrowed from
`flax.jax_utils.prefetch_to_device`
This utility takes an iterator and returns a new iterator which fills an
on device prefetch buffer. Eager prefetching can improve the performance
of training loops significantly by overlapping compute and data
transfer.
"""
queue = collections.deque()
# If you're training on GPUs, 2 is generally the best choice because
# this guarantees that you can overlap a training step on GPU with a
# data prefetch step on CPU.
def enqueue(n=2):
for data in itertools.islice(numpy_iterator, n):
queue.append(_distribute_data(data))
enqueue(n=2) # TODO: should we make `n` configurable?
while queue:
yield queue.popleft()
enqueue(1)
# C code generation of GRHD Equations
## Authors: Phil Chang & Zach Etienne
### Formatting improvements courtesy Brandon Clark
## This notebook demonstrates the C code generation of General Relativistic HydroDynamics (GRHD) equations in conservative form using a specific state vector ${\boldsymbol{\mathcal{U}}}=(\rho_*,\tilde{S},\tilde{\tau})$. The process of transitioning between primitive and conservative variables, computing flux and source terms, as well as performing Lorentz boosts and other transformations is detailed.
[comment]: <> (omit white space somehow?)
$\newcommand{\be}{\begin{equation}}$
$\newcommand{\ee}{\end{equation}}$
$\newcommand{\grad}{{\boldsymbol{\nabla}}}$
$\newcommand{\vel}{{\boldsymbol{v}}}$
$\newcommand{\mom}{{\boldsymbol{p}}}$
$\newcommand{\ddt}[1]{{\frac{\partial #1}{\partial t}}}$
$\newcommand{\ddx}[1]{{\frac{\partial #1}{\partial x}}}$
$\newcommand{\state}{{\boldsymbol{\mathcal{U}}}}$
$\newcommand{\charge}{{\boldsymbol{U}}}$
$\newcommand{\psicharge}{{\boldsymbol{\psi}}}$
$\newcommand{\lapse}{\alpha}$
$\newcommand{\shift}{\boldsymbol{\beta}}$
$\newcommand{\rhostar}{{\rho_*}}$
$\newcommand{\tautilde}{{\tilde{\tau}}}$
$\newcommand{\Svectilde}{{\tilde{\boldsymbol{S}}}}$
$\newcommand{\rtgamma}{{\sqrt{\gamma}}}$
$\newcommand{\T}[2]{{T^{#1 #2}}}$
$\newcommand{\uvec}{{\boldsymbol{u}}}$
$\newcommand{\Vvec}{{\boldsymbol{\mathcal{V}}}}$
$\newcommand{\vfluid}{{\boldsymbol{v}_{\rm f}}}$
$\newcommand{\vVal}{{\tilde{\boldsymbol{v}}}}$
$\newcommand{\flux}{{\boldsymbol{\mathcal{F}}}}$
$\newcommand{\fluxV}{{\boldsymbol{F}}}$
$\newcommand{\source}{{\boldsymbol{\mathcal{S}}}}$
$\newcommand{\sourceV}{{\boldsymbol{S}}}$
$\newcommand{\area}{{\boldsymbol{A}}}$
$\newcommand{\normal}{{\hat{\boldsymbol{n}}}}$
$\newcommand{\pt}{{\boldsymbol{p}}}$
$\newcommand{\nb}{{\boldsymbol{n}}}$
$\newcommand{\meshv}{{\boldsymbol{w}}}$
$\newcommand{\facev}{{\boldsymbol{\tilde{w}}_{ij}}}$
$\newcommand{\facer}{{\boldsymbol{\tilde{r}}_{ij}}}$
$\newcommand{\meshr}{{\boldsymbol{r}}}$
$\newcommand{\cmr}{{\boldsymbol{c}}}$
## Introduction:
We start out with the **GRHD** equations in conservative form with the state vector $\state=(\rhostar, \Svectilde, \tautilde)$:
\begin{equation}
\ddt{\state} + \grad\cdot\flux = \source,
\end{equation}
where $\rhostar = \lapse\rho\rtgamma u^0$, $\Svectilde = \rhostar h \uvec$, $\tautilde = \lapse^2\rtgamma \T00 - \rhostar$. The associated set of primitive variables are $(\rho, \vel, \epsilon)$, which are the rest mass density, fluid 3-velocity, and internal energy (measured in the rest frame).
The flux, $\flux$ is given by
\begin{equation}
\flux=\left(\rhostar \vel,\ \lapse\rtgamma\T{j}{\beta}g_{\beta i},\ \lapse^2\rtgamma\T{0}{j} - \rhostar\vel\right),
\end{equation}
where $\vel$ is the 3-velocity, $\source = (0, \frac 1 2 \lapse\rtgamma \T{\alpha}{\beta}g_{\alpha\beta,i}, s)$ is the source term, and
\begin{equation}
s = \lapse\rtgamma\left[\left(\T00\beta^i\beta^j + 2\T0i\beta^j\right)K_{ij} - \left(\T00\beta^i + \T0i\right)\partial_i\lapse\right]
\end{equation}
The stress-energy tensor for a perfect fluid is written as
\begin{equation}
\T{\mu}{\nu} = \rho h u^{\mu} u^{\nu} + P g^{\mu\nu},
\end{equation}
where $h = 1 + \epsilon + P/\rho$ is the specific enthalpy and $u^{\mu}$ are the respective components of the four velocity.
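As a quick sanity check of these definitions (a standalone sketch, not part of the NRPy+ pipeline), one can build the perfect-fluid $T^{\mu\nu}$ symbolically in flat space and confirm that the components reduce to the familiar special-relativistic expressions:

```python
# Hypothetical flat-space check: T^{mu nu} = rho*h*u^mu*u^nu + P*g^{mu nu}
# with the inverse Minkowski metric, for motion along x only.
import sympy as sp

rho, P, eps, vx = sp.symbols("rho P epsilon v_x", positive=True)
h = 1 + eps + P / rho                 # specific enthalpy
W = 1 / sp.sqrt(1 - vx**2)            # Lorentz factor (= u^0 in flat space)
u = [W, W * vx, 0, 0]                 # four-velocity u^mu
eta = sp.diag(-1, 1, 1, 1)            # inverse Minkowski metric g^{mu nu}

T = sp.Matrix(4, 4, lambda m, n: rho * h * u[m] * u[n] + P * eta[m, n])

# T^{00} = rho*h*W^2 - P and T^{0x} = rho*h*W^2*v_x, as expected
assert sp.simplify(T[0, 0] - (rho * h * W**2 - P)) == 0
assert sp.simplify(T[0, 1] - rho * h * W**2 * vx) == 0
```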
Noting that the mass $\flux$ is defined in terms of $\rhostar$ and $\vel$, we need to first find a mapping between $\vel$ and $u$.
### Alternative formulation
The Athena++ folks have an alternative formulation that might be superior.
Begin with the continuity equation
\begin{equation}
\grad_{\mu}\rho u^{\mu} = 0,
\end{equation}
where $\grad$ is the covariant derivative. This can be mapped directly to
\begin{equation}
\partial_{0} \sqrt{-g}\rho u^0 + \partial_i\sqrt{-g} \rho u^0 v^i = 0
\end{equation}
which we can identify with $\rhostar = \alpha\rtgamma \rho u^0$ because $\sqrt{-g} = \alpha\rtgamma$.
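The identification $\sqrt{-g} = \lapse\rtgamma$ can be verified symbolically; the sketch below (assuming the standard ADM 3+1 form of the 4-metric, with a diagonal spatial metric to keep the determinant tractable) checks the equivalent identity $\det g_{\mu\nu} = -\lapse^2 \det\gamma_{ij}$:

```python
# Sketch (assumption: standard ADM 3+1 decomposition of the 4-metric):
# g_00 = -alpha^2 + beta_k beta^k, g_0i = beta_i, g_ij = gamma_ij,
# which implies det(g) = -alpha^2 det(gamma), i.e. sqrt(-g) = alpha*sqrt(gamma).
import sympy as sp

alpha = sp.symbols("alpha", positive=True)
bU = sp.Matrix(sp.symbols("betaX betaY betaZ"))             # shift beta^i
gamma = sp.diag(*sp.symbols("gxx gyy gzz", positive=True))  # diagonal gamma_{ij}

bD = gamma * bU                                             # beta_i = gamma_{ij} beta^j
g4 = sp.zeros(4, 4)
g4[0, 0] = -alpha**2 + (bD.T * bU)[0]
for i in range(3):
    g4[0, i + 1] = g4[i + 1, 0] = bD[i]
    for j in range(3):
        g4[i + 1, j + 1] = gamma[i, j]

assert sp.simplify(g4.det() + alpha**2 * gamma.det()) == 0
```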
Now the second equation is the conservation of energy-momentum which we write as
\begin{equation}
\grad_{\nu}T^{\nu}_{\mu} = 0
\end{equation}
writing this out we have
\begin{equation}
\partial_0 g_{\mu\alpha}T^{\alpha 0} + \partial_i g_{\mu\alpha}T^{\alpha i} - \Gamma_{\mu\alpha}^{\gamma} g_{\gamma\beta}T^{\alpha\beta} = 0
\end{equation}
Noting that
\begin{equation}
\Gamma^{\alpha}_{\beta\gamma} = \frac 1 2 g^{\alpha\delta}\left(\partial_{\gamma}g_{\beta\delta} + \partial_{\beta}g_{\gamma\delta} - \partial_{\delta}g_{\beta\gamma}\right)
\end{equation}
Writing this all out, we note the last term is
\begin{equation}
\Gamma_{\mu\alpha}^{\gamma} g_{\gamma\beta}T^{\alpha\beta} =
\frac 1 2 g^{\gamma\delta}\left(\partial_{\alpha}g_{\mu\delta} + \partial_{\mu}g_{\alpha \delta} - \partial_{\delta}g_{\mu \alpha}\right) T_{\gamma}^{\alpha} =
\frac 1 2 \left(\partial_{\alpha}g_{\mu\delta} + \partial_{\mu}g_{\alpha \delta} - \partial_{\delta}g_{\mu \alpha}\right)
T^{\alpha\delta}
\end{equation}
We sum over $\alpha$ and $\delta$; the first and last terms in the parentheses are antisymmetric under $\alpha \leftrightarrow \delta$, while $T^{\alpha\delta}$ is symmetric, so those terms cancel and we have
\begin{equation}
\Gamma_{\mu\alpha}^{\gamma} g_{\gamma\beta}T^{\alpha\beta} = \frac 1 2 \partial_{\mu}g_{\alpha \delta} T^{\alpha\delta}
\end{equation}
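The cancellation used here is purely an index-symmetry statement: the contraction of any combination antisymmetric in $(\alpha,\delta)$ with the symmetric $T^{\alpha\delta}$ vanishes. A small symbolic check (generic matrices standing in for $\partial_\alpha g_{\mu\delta}$ and $T^{\alpha\delta}$; not part of the notebook's pipeline):

```python
# Check that (A_{ad} - A_{da}) T^{ad} = 0 for arbitrary A and symmetric T,
# which is the step that leaves only (1/2) d_mu g_{ad} T^{ad} above.
import sympy as sp

n = 4
# generic symmetric T^{alpha delta}
T = sp.Matrix(n, n, lambda a, d: sp.Symbol(f"T_{min(a, d)}{max(a, d)}"))
# generic stand-in for d_alpha g_{mu delta} at one fixed mu
A = sp.Matrix(n, n, lambda a, d: sp.Symbol(f"A_{a}{d}"))

contraction = sum((A[a, d] - A[d, a]) * T[a, d]
                  for a in range(n) for d in range(n))
assert sp.expand(contraction) == 0
```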
Thus we have
\begin{equation}
\partial_0 T^{0}_{\mu} + \partial_i T^{i}_{\mu} = \frac 1 2 \partial_{\mu}g_{\alpha \delta} T^{\alpha\delta}
\end{equation}
For $\mu = (1,2,3)$, we almost recover the equations of the standard formulation
\begin{equation}
\partial_0 \rho h u^0 u_i + \partial_j T^j_i = \frac 1 2 \partial_{i}g_{\alpha \delta} T^{\alpha\delta},
\end{equation}
which modulo factors of $\lapse\rtgamma$ in front is the same as the "standard" equations.
The $T^0_0$ term is more interesting. Here we have
\begin{equation}
\partial_0 \left(\rho h u^0 u_0 + P\right) + \partial_j T^j_0 = \frac 1 2 \partial_{0}g_{\alpha \delta} T^{\alpha\delta},
\end{equation}
However, the disadvantage is that we need the time derivative of the metric.
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows:
1. [Step 1](#mapping): Primitive to Conservative Mapping
1. [Step 2](#zach): Compute $u^0$ from the Valencia 3-velocity (Zach step)
1. [Step 3](#flux): Compute the flux
1. [Step 4](#source): Source Terms
1. [Step 5](#rotation): Rotation
1. [Step 6](#solver): Conservative to Primitive Solver
1. [Step 7](#lorentz): Lorentz Boosts
1. [Step 8](#TmunuSph): Compute $T^{\mu\nu}$ in Cartesian Coordinates
1. [Step 9](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='mapping'></a>
# Step 1: Primitive to Conservative Mapping
$$\label{mapping}$$
We want to construct a mapping from the primitives to the conserved variables, following Zach's notebook:
\begin{equation}
(\rho, \vel, \epsilon) \rightarrow (\rhostar = \lapse\rho\rtgamma u^0, \Svectilde = \rhostar h \uvec, \tautilde = \lapse^2\rtgamma \T00 - \rhostar).
\end{equation}
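Before the full NRPy+ machinery below, it may help to see the same map in the flat-space limit ($\lapse=1$, $\beta^i=0$, $\gamma_{ij}=\delta_{ij}$), where $u^0$ reduces to the Lorentz factor $W$. The following is a minimal sketch under that assumption (the function name `prim2con_flat` is illustrative, not from the notebook):

```python
# Flat-space (special-relativistic) limit of the primitive -> conservative map.
import numpy as np

def prim2con_flat(rho, v, eps, P):
    """rho, eps, P scalars; v a 3-vector. Returns (rho_star, S_tilde, tau_tilde)."""
    v = np.asarray(v, dtype=float)
    W = 1.0 / np.sqrt(1.0 - v @ v)             # u^0 -> Lorentz factor W
    h = 1.0 + eps + P / rho                    # specific enthalpy
    rho_star = rho * W                         # alpha*sqrt(gamma)*rho*u^0 -> rho*W
    S_tilde = rho * h * W**2 * v               # rho_star * h * u_i
    tau_tilde = rho * h * W**2 - P - rho_star  # alpha^2*sqrt(gamma)*T^00 - rho_star
    return rho_star, S_tilde, tau_tilde

rs, S, tau = prim2con_flat(rho=1.0, v=[0.0, 0.0, 0.0], eps=0.5, P=0.2)
```

With $v=0$ this gives $\rhostar = \rho$, $\Svectilde = 0$, and $\tautilde = \rho\epsilon$, as expected.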
```python
import GRHD.equations as Ge # NRPy: Implementation of GRHD equations in Cartesian coordinates
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
from outputC import outputC # NRPy+: Basic C code output functionality
# declare gammaDD
gammaDD = ixp.zerorank2()
components = ["xx", "xy", "xz", "yy", "yz", "zz"]
names = ""
for comp in components :
names = names + "mi.gamDD{0} ".format(comp)
gxx, gxy, gxz, gyy, gyz, gzz = sp.symbols( names)
gammaDD[0][0] = gxx
gammaDD[0][1] = gxy
gammaDD[0][2] = gxz
gammaDD[1][0] = gxy
gammaDD[1][1] = gyy
gammaDD[1][2] = gyz
gammaDD[2][0] = gxz
gammaDD[2][1] = gyz
gammaDD[2][2] = gzz
#declare alpha
alpha = sp.symbols( "mi.alpha")
#declare beta
betaU = ixp.zerorank1()
for i, comp in enumerate(["X", "Y", "Z"]) :
betaU[i] = sp.symbols( "mi.beta{0}".format(comp), real=True)
#now get the primitives
rho_b, epsilon, P = sp.symbols("rho ie p")
#get the 3-velocities
vU = ixp.zerorank1()
for i, comp in enumerate( ["vx", "vy", "vz"]) :
vU[i] = sp.symbols("{0}".format(comp))
Ge.u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha,betaU,gammaDD, vU)
u4U = Ge.u4U_ito_vU
# Zach says: Probably want to adopt speed-limited vU[i], Ge.rescaledvU[i], here, a la:
# for i in range(3):
# ... vU[i] = Ge.rescaledvU[i]
# First compute stress-energy tensor T4UU and T4UD:
Ge.compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U)
Ge.compute_T4UD(gammaDD,betaU,alpha, Ge.T4UU)
# Next sqrt(gamma)
Ge.compute_sqrtgammaDET(gammaDD)
# Compute conservative variables in terms of primitive variables
Ge.compute_rho_star( alpha, Ge.sqrtgammaDET, rho_b, u4U)
Ge.compute_tau_tilde(alpha, Ge.sqrtgammaDET, Ge.T4UU,Ge.rho_star)
Ge.compute_S_tildeD( alpha, Ge.sqrtgammaDET, Ge.T4UD)
# Zach says: Why only output u^x? Debugging reasons?
outputC([u4U[1], Ge.rho_star, Ge.S_tildeD[0], Ge.S_tildeD[1], Ge.S_tildeD[2], Ge.tau_tilde],
["u4U1", "con[iRhoStar]", "con[iSx]", "con[iSy]", "con[iSz]", "con[iTau]"],
filename="NRPY+prim2Con.h", params="outCverbose=False")
!cat NRPY+prim2Con.h
outputC([Ge.sqrtgammaDET*alpha], ["detg"], filename="NRPY+detg.h")
!cat NRPY+detg.h
gammaUU, gammabarDet = ixp.symm_matrix_inverter3x3(gammaDD)
outputC([gammaUU[0][0],gammaUU[0][1],gammaUU[0][2],gammaUU[1][1],gammaUU[1][2],gammaUU[2][2]],
[ "gamUUxx", "gamUUxy", "gamUUxz", "gamUUyy", "gamUUyz", "gamUUzz"],
filename="NRPY+gamUU.h")
!cat NRPY+gamUU.h
```
Wrote to file "NRPY+prim2Con.h"
{
const double tmp_0 = (1.0/((GAMMA_SPEED_LIMIT)*(GAMMA_SPEED_LIMIT)));
const double tmp_3 = (1.0/((mi.alpha)*(mi.alpha)));
const double tmp_4 = mi.betaX + vx;
const double tmp_5 = mi.gamDDxx*tmp_3*((tmp_4)*(tmp_4));
const double tmp_7 = mi.betaY + vy;
const double tmp_8 = mi.gamDDyy*tmp_3*((tmp_7)*(tmp_7));
const double tmp_10 = mi.betaZ + vz;
const double tmp_11 = mi.gamDDzz*((tmp_10)*(tmp_10))*tmp_3;
const double tmp_14 = mi.gamDDxy*tmp_3*tmp_4*tmp_7;
const double tmp_15 = mi.gamDDxz*tmp_10*tmp_3*tmp_4;
const double tmp_16 = mi.gamDDyz*tmp_10*tmp_3*tmp_7;
const double tmp_20 = (1.0/2.0)*fabs(-tmp_0 - tmp_11 - 2*tmp_14 - 2*tmp_15 - 2*tmp_16 - tmp_5 - tmp_8 + 1);
const double tmp_21 = (1.0/2.0)*tmp_0 - 1.0/2.0*tmp_11 - tmp_14 - tmp_15 - tmp_16 + tmp_20 - 1.0/2.0*tmp_5 - 1.0/2.0*tmp_8 + 1.0/2.0;
const double tmp_22 = (1.0/sqrt(tmp_21));
const double tmp_23 = sqrt((-1.0/2.0*tmp_0 + (1.0/2.0)*tmp_11 + tmp_14 + tmp_15 + tmp_16 - tmp_20 + (1.0/2.0)*tmp_5 + (1.0/2.0)*tmp_8 + 1.0/2.0)/(TINYDOUBLE + tmp_11 + 2*tmp_14 + 2*tmp_15 + 2*tmp_16 + tmp_5 + tmp_8));
const double tmp_24 = -mi.betaX + tmp_23*tmp_4;
const double tmp_25 = sqrt(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*((mi.gamDDyz)*(mi.gamDDyz)) - ((mi.gamDDxy)*(mi.gamDDxy))*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - ((mi.gamDDxz)*(mi.gamDDxz))*mi.gamDDyy);
const double tmp_26 = rho*tmp_22*tmp_25;
const double tmp_27 = p*tmp_3;
const double tmp_28 = rho*tmp_3*(ie + p/rho + 1)/tmp_21;
const double tmp_29 = -tmp_27 + tmp_28;
const double tmp_30 = mi.betaX*tmp_27 + tmp_24*tmp_28;
const double tmp_31 = mi.betaY*tmp_27 + tmp_28*(-mi.betaY + tmp_23*tmp_7);
const double tmp_32 = mi.betaZ*tmp_27 + tmp_28*(-mi.betaZ + tmp_10*tmp_23);
const double tmp_33 = mi.alpha*tmp_25;
u4U1 = tmp_22*tmp_24/mi.alpha;
con[iRhoStar] = tmp_26;
con[iSx] = tmp_33*(mi.gamDDxx*tmp_30 + mi.gamDDxy*tmp_31 + mi.gamDDxz*tmp_32 + tmp_29*(mi.betaX*mi.gamDDxx + mi.betaY*mi.gamDDxy + mi.betaZ*mi.gamDDxz));
con[iSy] = tmp_33*(mi.gamDDxy*tmp_30 + mi.gamDDyy*tmp_31 + mi.gamDDyz*tmp_32 + tmp_29*(mi.betaX*mi.gamDDxy + mi.betaY*mi.gamDDyy + mi.betaZ*mi.gamDDyz));
con[iSz] = tmp_33*(mi.gamDDxz*tmp_30 + mi.gamDDyz*tmp_31 + mi.gamDDzz*tmp_32 + tmp_29*(mi.betaX*mi.gamDDxz + mi.betaY*mi.gamDDyz + mi.betaZ*mi.gamDDzz));
con[iTau] = ((mi.alpha)*(mi.alpha))*tmp_25*tmp_29 - tmp_26;
}
Wrote to file "NRPY+detg.h"
/*
* Original SymPy expression:
* "detg = mi.alpha*sqrt(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy)"
*/
{
detg = mi.alpha*sqrt(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*((mi.gamDDyz)*(mi.gamDDyz)) - ((mi.gamDDxy)*(mi.gamDDxy))*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - ((mi.gamDDxz)*(mi.gamDDxz))*mi.gamDDyy);
}
Wrote to file "NRPY+gamUU.h"
/*
* Original SymPy expressions:
* "[gamUUxx = (mi.gamDDyy*mi.gamDDzz - mi.gamDDyz**2)/(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy),
* gamUUxy = (-mi.gamDDxy*mi.gamDDzz + mi.gamDDxz*mi.gamDDyz)/(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy),
* gamUUxz = (mi.gamDDxy*mi.gamDDyz - mi.gamDDxz*mi.gamDDyy)/(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy),
* gamUUyy = (mi.gamDDxx*mi.gamDDzz - mi.gamDDxz**2)/(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy),
* gamUUyz = (-mi.gamDDxx*mi.gamDDyz + mi.gamDDxy*mi.gamDDxz)/(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy),
* gamUUzz = (mi.gamDDxx*mi.gamDDyy - mi.gamDDxy**2)/(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy)]"
*/
{
const double tmp_5 = (1.0/(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*((mi.gamDDyz)*(mi.gamDDyz)) - ((mi.gamDDxy)*(mi.gamDDxy))*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - ((mi.gamDDxz)*(mi.gamDDxz))*mi.gamDDyy));
gamUUxx = tmp_5*(mi.gamDDyy*mi.gamDDzz - ((mi.gamDDyz)*(mi.gamDDyz)));
gamUUxy = tmp_5*(-mi.gamDDxy*mi.gamDDzz + mi.gamDDxz*mi.gamDDyz);
gamUUxz = tmp_5*(mi.gamDDxy*mi.gamDDyz - mi.gamDDxz*mi.gamDDyy);
gamUUyy = tmp_5*(mi.gamDDxx*mi.gamDDzz - ((mi.gamDDxz)*(mi.gamDDxz)));
gamUUyz = tmp_5*(-mi.gamDDxx*mi.gamDDyz + mi.gamDDxy*mi.gamDDxz);
gamUUzz = tmp_5*(mi.gamDDxx*mi.gamDDyy - ((mi.gamDDxy)*(mi.gamDDxy)));
}
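As a numerical sanity check (illustrative, not generated by NRPy+), the cofactor expressions written to `NRPY+gamUU.h` can be mirrored in NumPy and verified to invert an arbitrary symmetric positive-definite 3-metric:

```python
# Verify gamma^{ik} gamma_{kj} = delta^i_j for the adjugate/determinant form
# of the inverse used in NRPY+gamUU.h, on a sample symmetric 3-metric.
import numpy as np

gammaDD = np.array([[2.0, 0.3, 0.1],
                    [0.3, 1.5, 0.2],
                    [0.1, 0.2, 1.1]])   # symmetric, positive definite

det = np.linalg.det(gammaDD)
# upper triangle of the cofactor matrix, mirroring the generated expressions
gammaUU = np.array([
    [gammaDD[1, 1]*gammaDD[2, 2] - gammaDD[1, 2]**2,
     gammaDD[0, 2]*gammaDD[1, 2] - gammaDD[0, 1]*gammaDD[2, 2],
     gammaDD[0, 1]*gammaDD[1, 2] - gammaDD[0, 2]*gammaDD[1, 1]],
    [0.0,
     gammaDD[0, 0]*gammaDD[2, 2] - gammaDD[0, 2]**2,
     gammaDD[0, 1]*gammaDD[0, 2] - gammaDD[0, 0]*gammaDD[1, 2]],
    [0.0, 0.0,
     gammaDD[0, 0]*gammaDD[1, 1] - gammaDD[0, 1]**2]]) / det
# fill in the symmetric lower triangle
gammaUU[1, 0] = gammaUU[0, 1]
gammaUU[2, 0] = gammaUU[0, 2]
gammaUU[2, 1] = gammaUU[1, 2]

assert np.allclose(gammaUU @ gammaDD, np.eye(3))
```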
<a id='flux'></a>
# Step 3: Compute the flux
$$\label{flux}$$
The fluxes are as follows
\begin{equation}
\frac{\partial}{\partial t}
\begin{pmatrix}
\rhostar\\
\Svectilde\\
\tautilde
\end{pmatrix} + \frac{\partial}{\partial x^j}\begin{pmatrix} \rhostar v^j\\
\lapse\rtgamma T^j_i\\ \lapse^2\rtgamma T^{0j} - \rhostar v^j
\end{pmatrix} = \begin{pmatrix} 0 \\ \frac 1 2 \lapse\rtgamma T^{\alpha\beta}g_{\alpha\beta,i} \\ s \end{pmatrix}
\end{equation}
so the flux is
\begin{equation}
\mathcal{F} = \begin{pmatrix} \rhostar v^i \\ \lapse\rtgamma T^i_k \\ \lapse^2\rtgamma T^{0i} - \rhostar v^i
\end{pmatrix}
\end{equation}
In the moving-mesh formalism, the flux is taken along the x direction only, so we have
\begin{equation}
\mathcal{F} = \begin{pmatrix} \rhostar v^1 \\ \lapse\rtgamma T^1_k \\ \lapse^2\rtgamma T^{01} - \rhostar v^1
\end{pmatrix}
\end{equation}
Note that we will need to rotate $T^{\mu\nu}$ and $g_{\mu\nu}$ to get the right orientation.
In order to do this, we must first compute the stress energy tensor:
\begin{equation}
T^{\mu\nu} = \rho h u^{\mu}u^{\nu} + Pg^{\mu\nu}, \qquad T^{ij} = \rho h (u^0)^2 v^i v^j + P g^{ij}
\end{equation}
```python
# Next compute fluxes of conservative variables
Ge.compute_rho_star_fluxU( vU, Ge.rho_star)
Ge.compute_tau_tilde_fluxU(alpha, Ge.sqrtgammaDET, vU,Ge.T4UU,Ge.rho_star)
Ge.compute_S_tilde_fluxUD( alpha, Ge.sqrtgammaDET, Ge.T4UD)
normD = ixp.zerorank1()
normD[0], normD[1], normD[2] = sp.symbols("norm[0] norm[1] norm[2]", real=True)
faceVelU = ixp.zerorank1()
faceVelU[0], faceVelU[1], faceVelU[2] = sp.symbols("faceVelocity[0] faceVelocity[1] faceVelocity[2]", real=True)
# Zach says: don't forget to limit the velocities after they are computed!
faceVelNorm = sp.sympify(0)
for i in range(3) :
faceVelNorm += normD[i]*faceVelU[i]
exprArray = []
nameArray = []
exprArray.append( Ge.rho_star)
nameArray.append( "temp_rho_star")
exprArray.append( Ge.T4UU[0][1])
nameArray.append( "temp_T4UU01")
rho_star_flux = sp.sympify(0)
for i in range(3) :
rho_star_flux += Ge.rho_star_fluxU[i]*normD[i]
rho_star_flux -= Ge.rho_star*faceVelNorm
exprArray.append( rho_star_flux)
nameArray.append( "flux[iRhoStar]")
tau_tilde_flux = sp.sympify(0)
for i in range(3) :
tau_tilde_flux += Ge.tau_tilde_fluxU[i]*normD[i]
tau_tilde_flux -= Ge.tau_tilde*faceVelNorm
S_tilde_fluxD = ixp.zerorank1()
for i in range(3) :
S_tilde_fluxD[i] -= Ge.S_tildeD[i]*faceVelNorm
for j in range(3) :
S_tilde_fluxD[i] += Ge.S_tilde_fluxUD[j][i]*normD[j]
for j, comp in enumerate(["x","y", "z"]) :
exprArray.append( S_tilde_fluxD[j])
nameArray.append( "flux[iS{0}]".format(comp))
exprArray.append( tau_tilde_flux)
nameArray.append( "flux[iTau]")
#for expr, name in zip( exprArray, nameArray) :
# print( name)
outputC(exprArray, nameArray, filename="NRPY+calFlux.h", params="outCverbose=False")
!cat NRPY+calFlux.h
```
Wrote to file "NRPY+calFlux.h"
{
const double tmp_6 = mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*((mi.gamDDyz)*(mi.gamDDyz)) - ((mi.gamDDxy)*(mi.gamDDxy))*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - ((mi.gamDDxz)*(mi.gamDDxz))*mi.gamDDyy;
const double tmp_7 = sqrt(tmp_6);
const double tmp_8 = (1.0/((GAMMA_SPEED_LIMIT)*(GAMMA_SPEED_LIMIT)));
const double tmp_11 = (1.0/((mi.alpha)*(mi.alpha)));
const double tmp_12 = mi.betaX + vx;
const double tmp_13 = mi.gamDDxx*tmp_11*((tmp_12)*(tmp_12));
const double tmp_15 = mi.betaY + vy;
const double tmp_16 = mi.gamDDyy*tmp_11*((tmp_15)*(tmp_15));
const double tmp_18 = mi.betaZ + vz;
const double tmp_19 = mi.gamDDzz*tmp_11*((tmp_18)*(tmp_18));
const double tmp_22 = tmp_11*tmp_12*tmp_15;
const double tmp_24 = mi.gamDDxz*tmp_11*tmp_12*tmp_18;
const double tmp_25 = mi.gamDDyz*tmp_11*tmp_15*tmp_18;
const double tmp_26 = 2*mi.gamDDxy*tmp_22;
const double tmp_29 = (1.0/2.0)*fabs(-tmp_13 - tmp_16 - tmp_19 - 2*tmp_24 - 2*tmp_25 - tmp_26 - tmp_8 + 1);
const double tmp_30 = -mi.gamDDxy*tmp_22 - 1.0/2.0*tmp_13 - 1.0/2.0*tmp_16 - 1.0/2.0*tmp_19 - tmp_24 - tmp_25 + tmp_29 + (1.0/2.0)*tmp_8 + 1.0/2.0;
const double tmp_31 = rho*tmp_7/sqrt(tmp_30);
const double tmp_32 = p*tmp_11;
const double tmp_33 = sqrt((mi.gamDDxy*tmp_22 + (1.0/2.0)*tmp_13 + (1.0/2.0)*tmp_16 + (1.0/2.0)*tmp_19 + tmp_24 + tmp_25 - tmp_29 - 1.0/2.0*tmp_8 + 1.0/2.0)/(TINYDOUBLE + tmp_13 + tmp_16 + tmp_19 + 2*tmp_24 + 2*tmp_25 + tmp_26));
const double tmp_34 = -mi.betaX + tmp_12*tmp_33;
const double tmp_35 = rho*tmp_11*(ie + p/rho + 1)/tmp_30;
const double tmp_36 = tmp_34*tmp_35;
const double tmp_37 = mi.betaX*tmp_32 + tmp_36;
const double tmp_41 = faceVelocity[0]*norm[0] + faceVelocity[1]*norm[1] + faceVelocity[2]*norm[2];
const double tmp_42 = mi.betaX*mi.gamDDxx + mi.betaY*mi.gamDDxy + mi.betaZ*mi.gamDDxz;
const double tmp_43 = -tmp_32 + tmp_35;
const double tmp_44 = -mi.betaY + tmp_15*tmp_33;
const double tmp_46 = mi.betaY*tmp_32 + tmp_35*tmp_44;
const double tmp_47 = -mi.betaZ + tmp_18*tmp_33;
const double tmp_48 = mi.betaZ*tmp_32 + tmp_35*tmp_47;
const double tmp_49 = mi.alpha*tmp_7;
const double tmp_50 = tmp_41*tmp_49;
const double tmp_51 = (1.0/(tmp_6));
const double tmp_52 = p*(-((mi.betaX)*(mi.betaX))*tmp_11 + tmp_51*(mi.gamDDyy*mi.gamDDzz - ((mi.gamDDyz)*(mi.gamDDyz)))) + ((tmp_34)*(tmp_34))*tmp_35;
const double tmp_54 = p*(-mi.betaX*mi.betaY*tmp_11 + tmp_51*(-mi.gamDDxy*mi.gamDDzz + mi.gamDDxz*mi.gamDDyz)) + tmp_36*tmp_44;
const double tmp_56 = p*(-mi.betaX*mi.betaZ*tmp_11 + tmp_51*(mi.gamDDxy*mi.gamDDyz - mi.gamDDxz*mi.gamDDyy)) + tmp_36*tmp_47;
const double tmp_58 = norm[0]*tmp_49;
const double tmp_59 = p*(-((mi.betaY)*(mi.betaY))*tmp_11 + tmp_51*(mi.gamDDxx*mi.gamDDzz - ((mi.gamDDxz)*(mi.gamDDxz)))) + tmp_35*((tmp_44)*(tmp_44));
const double tmp_60 = p*(-mi.betaY*mi.betaZ*tmp_11 + tmp_51*(-mi.gamDDxx*mi.gamDDyz + mi.gamDDxy*mi.gamDDxz)) + tmp_35*tmp_44*tmp_47;
const double tmp_61 = norm[1]*tmp_49;
const double tmp_62 = p*(-((mi.betaZ)*(mi.betaZ))*tmp_11 + tmp_51*(mi.gamDDxx*mi.gamDDyy - ((mi.gamDDxy)*(mi.gamDDxy)))) + tmp_35*((tmp_47)*(tmp_47));
const double tmp_63 = norm[2]*tmp_49;
const double tmp_64 = mi.betaX*mi.gamDDxy + mi.betaY*mi.gamDDyy + mi.betaZ*mi.gamDDyz;
const double tmp_66 = mi.betaX*mi.gamDDxz + mi.betaY*mi.gamDDyz + mi.betaZ*mi.gamDDzz;
const double tmp_67 = ((mi.alpha)*(mi.alpha))*tmp_7;
temp_rho_star = tmp_31;
temp_T4UU01 = tmp_37;
flux[iRhoStar] = norm[0]*tmp_31*vx + norm[1]*tmp_31*vy + norm[2]*tmp_31*vz - tmp_31*tmp_41;
flux[iSx] = -tmp_50*(mi.gamDDxx*tmp_37 + mi.gamDDxy*tmp_46 + mi.gamDDxz*tmp_48 + tmp_42*tmp_43) + tmp_58*(mi.gamDDxx*tmp_52 + mi.gamDDxy*tmp_54 + mi.gamDDxz*tmp_56 + tmp_37*tmp_42) + tmp_61*(mi.gamDDxx*tmp_54 + mi.gamDDxy*tmp_59 + mi.gamDDxz*tmp_60 + tmp_42*tmp_46) + tmp_63*(mi.gamDDxx*tmp_56 + mi.gamDDxy*tmp_60 + mi.gamDDxz*tmp_62 + tmp_42*tmp_48);
flux[iSy] = -tmp_50*(mi.gamDDxy*tmp_37 + mi.gamDDyy*tmp_46 + mi.gamDDyz*tmp_48 + tmp_43*tmp_64) + tmp_58*(mi.gamDDxy*tmp_52 + mi.gamDDyy*tmp_54 + mi.gamDDyz*tmp_56 + tmp_37*tmp_64) + tmp_61*(mi.gamDDxy*tmp_54 + mi.gamDDyy*tmp_59 + mi.gamDDyz*tmp_60 + tmp_46*tmp_64) + tmp_63*(mi.gamDDxy*tmp_56 + mi.gamDDyy*tmp_60 + mi.gamDDyz*tmp_62 + tmp_48*tmp_64);
flux[iSz] = -tmp_50*(mi.gamDDxz*tmp_37 + mi.gamDDyz*tmp_46 + mi.gamDDzz*tmp_48 + tmp_43*tmp_66) + tmp_58*(mi.gamDDxz*tmp_52 + mi.gamDDyz*tmp_54 + mi.gamDDzz*tmp_56 + tmp_37*tmp_66) + tmp_61*(mi.gamDDxz*tmp_54 + mi.gamDDyz*tmp_59 + mi.gamDDzz*tmp_60 + tmp_46*tmp_66) + tmp_63*(mi.gamDDxz*tmp_56 + mi.gamDDyz*tmp_60 + mi.gamDDzz*tmp_62 + tmp_48*tmp_66);
flux[iTau] = norm[0]*(-tmp_31*vx + tmp_37*tmp_67) + norm[1]*(-tmp_31*vy + tmp_46*tmp_67) + norm[2]*(-tmp_31*vz + tmp_48*tmp_67) - tmp_41*(-tmp_31 + tmp_43*tmp_67);
}
<a id='source'></a>
# Step 4: Source Terms
$$\label{source}$$
The source terms for mass, momentum, and energy are:
\begin{equation}
\source = (0, \frac 1 2 \lapse\rtgamma \T{\alpha}{\beta}g_{\alpha\beta,i}, s),
\end{equation}
Even for a time-stationary metric $s \neq 0$ in general, so we will defer its treatment to the next section. As for the rest, we need to define derivatives of the metric; suppose these have already been computed. Then the code for the source terms is:
```python
# FIXME: Assume static spacetime with KDD = betaU = betaU_dD = 0
KDD = ixp.zerorank2()
betaU = ixp.zerorank1()
betaU_dD = ixp.zerorank2()
# Set+evaluate derivatives of alpha, performing 2nd-order finite difference
alpha_dD = ixp.zerorank1()
h = sp.symbols("h")
for i in range(3) :
alpha_plus, alpha_minus = sp.symbols("mi_plus[{0}].alpha mi_minus[{0}].alpha".format(i))
alpha_dD[i] = (alpha_plus - alpha_minus)/(2*h)
# Set+evaluate derivatives of gamma_{ij}, performing 2nd-order finite difference
gammaDD_dD = ixp.zerorank3()
components = ["xx", "xy", "xz", "yy", "yz", "zz"]
for i in range(3) :
names_plus = ""
names_minus = ""
for comp in components :
names_plus = names_plus + "mi_plus[{0}].gamDD{1} ".format(i, comp)
names_minus = names_minus + "mi_minus[{0}].gamDD{1} ".format(i, comp)
gxx_plus, gxy_plus, gxz_plus, gyy_plus, gyz_plus, gzz_plus = sp.symbols( names_plus)
gxx_minus, gxy_minus, gxz_minus, gyy_minus, gyz_minus, gzz_minus = sp.symbols( names_minus)
gammaDD_dD[0][0][i] = (gxx_plus - gxx_minus)/(2*h)
gammaDD_dD[0][1][i] = (gxy_plus - gxy_minus)/(2*h)
gammaDD_dD[0][2][i] = (gxz_plus - gxz_minus)/(2*h)
gammaDD_dD[1][0][i] = (gxy_plus - gxy_minus)/(2*h)
gammaDD_dD[1][1][i] = (gyy_plus - gyy_minus)/(2*h)
gammaDD_dD[1][2][i] = (gyz_plus - gyz_minus)/(2*h)
gammaDD_dD[2][0][i] = (gxz_plus - gxz_minus)/(2*h)
gammaDD_dD[2][1][i] = (gyz_plus - gyz_minus)/(2*h)
gammaDD_dD[2][2][i] = (gzz_plus - gzz_minus)/(2*h)
# Compute g_{mu nu, i} based on ADM quantities & derivatives defined above
Ge.compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDD_dD,betaU_dD,alpha_dD)
# Compute source terms for tau tilde & S tilde:
Ge.compute_s_source_term(KDD,betaU,alpha, Ge.sqrtgammaDET,alpha_dD, Ge.T4UU)
Ge.compute_S_tilde_source_termD( alpha, Ge.sqrtgammaDET,Ge.g4DD_zerotimederiv_dD, Ge.T4UU)
exprArray = []
nameArray = []
#momentum terms
for i in range(3) :
exprArray.append( Ge.S_tilde_source_termD[i])
nameArray.append( "vSource[{0}]".format(i))
#tau term
exprArray.append( Ge.s_source_term)
nameArray.append( "eSource")
outputC( exprArray, nameArray, filename="NRPY+calSources.h", params="outCverbose=False")
!cat NRPY+calSources.h
```
Wrote to file "NRPY+calSources.h"
{
const double tmp_0 = (1.0/(h));
const double tmp_1 = (1.0/2.0)*tmp_0;
const double tmp_2 = tmp_1*(-mi_minus[0].alpha + mi_plus[0].alpha);
const double tmp_9 = mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*((mi.gamDDyz)*(mi.gamDDyz)) - ((mi.gamDDxy)*(mi.gamDDxy))*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - ((mi.gamDDxz)*(mi.gamDDxz))*mi.gamDDyy;
const double tmp_10 = sqrt(tmp_9);
const double tmp_12 = (1.0/((mi.alpha)*(mi.alpha)));
const double tmp_13 = p*tmp_12;
const double tmp_14 = (1.0/((GAMMA_SPEED_LIMIT)*(GAMMA_SPEED_LIMIT)));
const double tmp_16 = mi.betaX + vx;
const double tmp_17 = mi.gamDDxx*tmp_12*((tmp_16)*(tmp_16));
const double tmp_19 = mi.betaY + vy;
const double tmp_20 = mi.gamDDyy*tmp_12*((tmp_19)*(tmp_19));
const double tmp_22 = mi.betaZ + vz;
const double tmp_23 = mi.gamDDzz*tmp_12*((tmp_22)*(tmp_22));
const double tmp_26 = tmp_12*tmp_16*tmp_19;
const double tmp_28 = mi.gamDDxz*tmp_12*tmp_16*tmp_22;
const double tmp_29 = mi.gamDDyz*tmp_12*tmp_19*tmp_22;
const double tmp_30 = 2*mi.gamDDxy*tmp_26;
const double tmp_33 = (1.0/2.0)*fabs(-tmp_14 - tmp_17 - tmp_20 - tmp_23 - 2*tmp_28 - 2*tmp_29 - tmp_30 + 1);
const double tmp_34 = rho*tmp_12*(ie + p/rho + 1)/(-mi.gamDDxy*tmp_26 + (1.0/2.0)*tmp_14 - 1.0/2.0*tmp_17 - 1.0/2.0*tmp_20 - 1.0/2.0*tmp_23 - tmp_28 - tmp_29 + tmp_33 + 1.0/2.0);
const double tmp_35 = ((mi.alpha)*(mi.alpha))*tmp_10*(-tmp_13 + tmp_34);
const double tmp_36 = (1.0/(tmp_9));
const double tmp_37 = sqrt((mi.gamDDxy*tmp_26 - 1.0/2.0*tmp_14 + (1.0/2.0)*tmp_17 + (1.0/2.0)*tmp_20 + (1.0/2.0)*tmp_23 + tmp_28 + tmp_29 - tmp_33 + 1.0/2.0)/(TINYDOUBLE + tmp_17 + tmp_20 + tmp_23 + 2*tmp_28 + 2*tmp_29 + tmp_30));
const double tmp_38 = -mi.betaX + tmp_16*tmp_37;
const double tmp_39 = mi.alpha*tmp_10;
const double tmp_40 = (1.0/4.0)*tmp_0*tmp_39;
const double tmp_41 = tmp_40*(p*(-((mi.betaX)*(mi.betaX))*tmp_12 + tmp_36*(mi.gamDDyy*mi.gamDDzz - ((mi.gamDDyz)*(mi.gamDDyz)))) + tmp_34*((tmp_38)*(tmp_38)));
const double tmp_42 = -mi.betaY + tmp_19*tmp_37;
const double tmp_43 = tmp_40*(p*(-((mi.betaY)*(mi.betaY))*tmp_12 + tmp_36*(mi.gamDDxx*mi.gamDDzz - ((mi.gamDDxz)*(mi.gamDDxz)))) + tmp_34*((tmp_42)*(tmp_42)));
const double tmp_44 = -mi.betaZ + tmp_22*tmp_37;
const double tmp_45 = tmp_40*(p*(-((mi.betaZ)*(mi.betaZ))*tmp_12 + tmp_36*(mi.gamDDxx*mi.gamDDyy - ((mi.gamDDxy)*(mi.gamDDxy)))) + tmp_34*((tmp_44)*(tmp_44)));
const double tmp_47 = tmp_34*tmp_38;
const double tmp_48 = tmp_1*tmp_39;
const double tmp_49 = tmp_48*(p*(-mi.betaX*mi.betaY*tmp_12 + tmp_36*(-mi.gamDDxy*mi.gamDDzz + mi.gamDDxz*mi.gamDDyz)) + tmp_42*tmp_47);
const double tmp_50 = tmp_48*(p*(-mi.betaX*mi.betaZ*tmp_12 + tmp_36*(mi.gamDDxy*mi.gamDDyz - mi.gamDDxz*mi.gamDDyy)) + tmp_44*tmp_47);
const double tmp_52 = tmp_48*(p*(-mi.betaY*mi.betaZ*tmp_12 + tmp_36*(-mi.gamDDxx*mi.gamDDyz + mi.gamDDxy*mi.gamDDxz)) + tmp_34*tmp_42*tmp_44);
const double tmp_53 = tmp_1*(-mi_minus[1].alpha + mi_plus[1].alpha);
const double tmp_54 = tmp_1*(-mi_minus[2].alpha + mi_plus[2].alpha);
vSource[0] = -tmp_2*tmp_35 + tmp_41*(-mi_minus[0].gamDDxx + mi_plus[0].gamDDxx) + tmp_43*(-mi_minus[0].gamDDyy + mi_plus[0].gamDDyy) + tmp_45*(-mi_minus[0].gamDDzz + mi_plus[0].gamDDzz) + tmp_49*(-mi_minus[0].gamDDxy + mi_plus[0].gamDDxy) + tmp_50*(-mi_minus[0].gamDDxz + mi_plus[0].gamDDxz) + tmp_52*(-mi_minus[0].gamDDyz + mi_plus[0].gamDDyz);
vSource[1] = -tmp_35*tmp_53 + tmp_41*(-mi_minus[1].gamDDxx + mi_plus[1].gamDDxx) + tmp_43*(-mi_minus[1].gamDDyy + mi_plus[1].gamDDyy) + tmp_45*(-mi_minus[1].gamDDzz + mi_plus[1].gamDDzz) + tmp_49*(-mi_minus[1].gamDDxy + mi_plus[1].gamDDxy) + tmp_50*(-mi_minus[1].gamDDxz + mi_plus[1].gamDDxz) + tmp_52*(-mi_minus[1].gamDDyz + mi_plus[1].gamDDyz);
vSource[2] = -tmp_35*tmp_54 + tmp_41*(-mi_minus[2].gamDDxx + mi_plus[2].gamDDxx) + tmp_43*(-mi_minus[2].gamDDyy + mi_plus[2].gamDDyy) + tmp_45*(-mi_minus[2].gamDDzz + mi_plus[2].gamDDzz) + tmp_49*(-mi_minus[2].gamDDxy + mi_plus[2].gamDDxy) + tmp_50*(-mi_minus[2].gamDDxz + mi_plus[2].gamDDxz) + tmp_52*(-mi_minus[2].gamDDyz + mi_plus[2].gamDDyz);
eSource = tmp_39*(tmp_2*(-mi.betaX*tmp_13 - tmp_47) + tmp_53*(-mi.betaY*tmp_13 - tmp_34*tmp_42) + tmp_54*(-mi.betaZ*tmp_13 - tmp_34*tmp_44));
}
<a id='solver'></a>
# Step 6: Conservative to Primitive Solver
$$\label{solver}$$
We now discuss reverse mapping from conservative to primitive variables.
Given the lapse, shift vector, and $\rtgamma$, the mapping from primitive to conserved variables is straightforward. However, the reverse is not as simple. In GRMHD, the conservative-to-primitive solver is further complicated by the inclusion of the magnetic field, leading to rather sophisticated root-finding strategies. The failure rates of these algorithms are low (??), but since the solver may be executed several times per timestep at every gridpoint, even a low per-call failure rate can give an unacceptable collective failure rate. Fortunately, for purely polytropic equations of state, e.g., $P\propto\rho^{\Gamma_1}$, the conservative-to-primitive solver is greatly simplified.
To construct the conservative-to-primitive variable solver, we restrict ourselves to polytropic equations of states
\begin{equation}
P = P_0\left(\frac{\rho}{\rho_0}\right)^{\Gamma_1} \quad\textrm{and}\quad \epsilon = \epsilon_0\left(\frac{\rho}{\rho_0}\right)^{\Gamma_1-1},
\end{equation}
where $P_0$, $\rho_0$, and $\epsilon_0$ are the fiducial pressure, density, and internal energy, and we have used the relation $P = (\Gamma_1 - 1)\rho\epsilon$.
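As a quick consistency check (a SymPy sketch, not part of the notebook's generated code), the two expressions above reduce to the $\Gamma$-law relation $P = (\Gamma_1 - 1)\rho\epsilon$ once the fiducial values satisfy $\epsilon_0 = P_0/\left[(\Gamma_1-1)\rho_0\right]$:

```python
import sympy as sp

rho, rho0, P0, eps0, Gamma1 = sp.symbols("rho rho_0 P_0 epsilon_0 Gamma_1", positive=True)

# Polytropic pressure and specific internal energy from the equations above
P = P0 * (rho / rho0) ** Gamma1
eps = eps0 * (rho / rho0) ** (Gamma1 - 1)

# Substituting the fiducial relation epsilon_0 = P_0 / ((Gamma_1 - 1) rho_0)
# makes P = (Gamma_1 - 1) rho epsilon hold identically
eps_sub = eps.subs(eps0, P0 / ((Gamma1 - 1) * rho0))
assert sp.simplify(P - (Gamma1 - 1) * rho * eps_sub) == 0
```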
For such a polytropic equation of state, the energy equation is redundant, and effectively we are only concerned with the continuity and momentum equations. The conservative variables of concern are $\rhostar$ and $\Svectilde$. Noting that the shift, the lapse $\alpha$, and $\rtgamma$ are provided by the Einstein field equation solver, we can write
\begin{equation}
u^0 = \frac{\rhostar}{\alpha\rtgamma\rho} = u^0(\rho) \quad\textrm{and}\quad \uvec = \frac{\Svectilde}{\alpha\rtgamma\rho h} = \uvec(\rho).
\end{equation}
Noting that the four-velocity satisfies $u^2 = g_{\mu\nu}u^{\mu}u^{\nu} = g_{00}u^0u^0 + 2g_{0i}u^0\uvec^i + g_{ij}\uvec^i\uvec^j = -1$, we have
\begin{equation}
0 = f(\rho)\equiv \alpha^2\gamma\rho^2h^2 + \left(-\lapse^2 + \shift\cdot\shift\right)\rhostar^2h^2 + 2h\rhostar\shift\cdot\Svectilde + \Svectilde\cdot\Svectilde,
\end{equation}
which is an implicit equation for either $\rho$ or $u^0$, where $h(\rho) = 1 + \Gamma_1\epsilon(\rho)$ and $\rho = \rhostar/(\alpha\rtgamma u^0)$; it can be solved by standard nonlinear root-finding algorithms, e.g., Newton-Raphson.
We put this all together to define a function, $f(\rho)$, whose root we will find via Newton-Raphson.
Several checks must be performed:
1. $\rhostar > 0$ : This check is performed at the very beginning
2. $\rho > \rho_{\rm min}$ : This check is performed after the fact
3. $u^0 < \alpha^{-1}\Gamma_{\rm max}$ : This check is performed after the fact as well
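To illustrate the root-finding step (a minimal sketch with hypothetical numbers, not the generated C solver), consider the flat-space, pressureless limit $\alpha = 1$, $\beta^i = 0$, $\gamma_{ij} = \delta_{ij}$, $h = 1$, in which $f(\rho) = \rhostar^2\left(1 - \rhostar^2/\rho^2\right) + \tilde{S}^2$ has the exact root $\rho = \rhostar^2/\sqrt{\rhostar^2 + \tilde{S}^2}$:

```python
# Newton-Raphson with a centered finite-difference derivative; the function
# f below is the flat-space, pressureless specialization described above.
def newton_raphson(f, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        dx = 1e-6 * max(abs(x), 1.0)
        fp = (f(x + dx) - f(x - dx)) / (2.0 * dx)  # finite-difference f'(x)
        step = f(x) / fp
        x -= step
        if abs(step) < tol * abs(x):
            break
    return x

rho_star, S2 = 1.3, 0.7  # hypothetical conserved data
f = lambda rho: rho_star**2 * (1.0 - (rho_star / rho) ** 2) + S2

rho_root = newton_raphson(f, x0=rho_star)
exact = rho_star**2 / (rho_star**2 + S2) ** 0.5
assert abs(rho_root - exact) < 1e-10
```

The same loop applies to the full $f(\rho)$ written to `NRPY+rhoRoot.h` below; only the function body changes.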
```python
DIM = 3
# Declare rank-1 contravariant ("v") vector
vU = ixp.declarerank1("vU")
shiftU = ixp.zerorank1()
rho, gamma1 = sp.symbols("rho gamma")
Sx, Sy, Sz = sp.symbols("con[iSx] con[iSy] con[iSz]")
p0, rho0, rhostar = sp.symbols("p_0 rho_0 rhostar")
# Declare rank-2 covariant gmunu
#gammaDD = ixp.declarerank2("gammaDD","sym01")
StildeD = ixp.declarerank1("StildeD")
lapse, beta_x, beta_y, beta_z = sp.symbols( "mi.alpha mi.betaX mi.betaY mi.betaZ")
rtgamma = Ge.sqrtgammaDET
shiftU[0] = beta_x
shiftU[1] = beta_y
shiftU[2] = beta_z
StildeD[0] = Sx
StildeD[1] = Sy
StildeD[2] = Sz
# gamma = rtgamma*rtgamma <- unused
lapse2 = lapse*lapse
uU0 = rhostar/(lapse*rtgamma*rho)
epsilon = p0/rho0*(rho/rho0)**(gamma1 - 1)/(gamma1 - 1)
h = 1 + gamma1*epsilon
beta2 = 0.
for i in range(DIM) :
for j in range(DIM) :
beta2 += gammaDD[i][j] * shiftU[i]*shiftU[j]
betaDotStilde = 0
for i in range(DIM) :
betaDotStilde += shiftU[i]*StildeD[i]
# Note that this is |Stilde|^2, where the absolute value denotes
# that this is not a proper contraction of Stilde_i, as
# Stilde^i is NOT equal to gamma^{ij} Stilde_j (to understand
# why this is, notice that Stilde_i is proportional to the
# *4D* stress-energy tensor.)
Stilde2 = 0
for i in range(DIM) :
for j in range(DIM) :
Stilde2 += gammaUU[i][j] * StildeD[i]*StildeD[j]
f = rhostar**2*h**2 + (-lapse2 + beta2)*rhostar**2.*h**2.*uU0**2 + 2.*h*rhostar*betaDotStilde*uU0 + Stilde2
outputC(f,"rootRho",filename="NRPY+rhoRoot.h")
outputC(Stilde2, "Stilde2", filename="NRPY+Stilde2.h")
!cat NRPY+rhoRoot.h
!cat NRPY+Stilde2.h
```
Wrote to file "NRPY+rhoRoot.h"
Wrote to file "NRPY+Stilde2.h"
/*
* Original SymPy expression:
* "rootRho = con[iSx]**2*(mi.gamDDyy*mi.gamDDzz - mi.gamDDyz**2)/(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy) + 2*con[iSx]*con[iSy]*(-mi.gamDDxy*mi.gamDDzz + mi.gamDDxz*mi.gamDDyz)/(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy) + 2*con[iSx]*con[iSz]*(mi.gamDDxy*mi.gamDDyz - mi.gamDDxz*mi.gamDDyy)/(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy) + con[iSy]**2*(mi.gamDDxx*mi.gamDDzz - mi.gamDDxz**2)/(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy) + 2*con[iSy]*con[iSz]*(-mi.gamDDxx*mi.gamDDyz + mi.gamDDxy*mi.gamDDxz)/(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy) + con[iSz]**2*(mi.gamDDxx*mi.gamDDyy - mi.gamDDxy**2)/(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy) + rhostar**2*(gamma*p_0*(rho/rho_0)**(gamma - 1)/(rho_0*(gamma - 1)) + 1)**2 + rhostar**2*(2.0*gamma*p_0*(rho/rho_0)**(gamma - 1)/(rho_0*(gamma - 1)) + 2.0)*(con[iSx]*mi.betaX + con[iSy]*mi.betaY + con[iSz]*mi.betaZ)/(mi.alpha*rho*sqrt(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy)) + rhostar**4.0*(gamma*p_0*(rho/rho_0)**(gamma - 1)/(rho_0*(gamma - 1)) + 1)**2.0*(-mi.alpha**2 + mi.betaX**2*mi.gamDDxx + 2*mi.betaX*mi.betaY*mi.gamDDxy + 2*mi.betaX*mi.betaZ*mi.gamDDxz + mi.betaY**2*mi.gamDDyy + 2*mi.betaY*mi.betaZ*mi.gamDDyz + 
mi.betaZ**2*mi.gamDDzz)/(mi.alpha**2*rho**2*(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy))"
*/
{
const double tmp_1 = (1.0/(rho_0));
const double tmp_3 = gamma*p_0*tmp_1*pow(rho*tmp_1, gamma - 1)/(gamma - 1);
const double tmp_11 = mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*((mi.gamDDyz)*(mi.gamDDyz)) - ((mi.gamDDxy)*(mi.gamDDxy))*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - ((mi.gamDDxz)*(mi.gamDDxz))*mi.gamDDyy;
const double tmp_12 = (1.0/(tmp_11));
const double tmp_13 = 2*con[iSx]*tmp_12;
rootRho = ((con[iSx])*(con[iSx]))*tmp_12*(mi.gamDDyy*mi.gamDDzz - ((mi.gamDDyz)*(mi.gamDDyz))) + ((con[iSy])*(con[iSy]))*tmp_12*(mi.gamDDxx*mi.gamDDzz - ((mi.gamDDxz)*(mi.gamDDxz))) + 2*con[iSy]*con[iSz]*tmp_12*(-mi.gamDDxx*mi.gamDDyz + mi.gamDDxy*mi.gamDDxz) + con[iSy]*tmp_13*(-mi.gamDDxy*mi.gamDDzz + mi.gamDDxz*mi.gamDDyz) + ((con[iSz])*(con[iSz]))*tmp_12*(mi.gamDDxx*mi.gamDDyy - ((mi.gamDDxy)*(mi.gamDDxy))) + con[iSz]*tmp_13*(mi.gamDDxy*mi.gamDDyz - mi.gamDDxz*mi.gamDDyy) + ((rhostar)*(rhostar))*((tmp_3 + 1)*(tmp_3 + 1)) + ((rhostar)*(rhostar))*(2.0*tmp_3 + 2.0)*(con[iSx]*mi.betaX + con[iSy]*mi.betaY + con[iSz]*mi.betaZ)/(mi.alpha*rho*sqrt(tmp_11)) + ((rhostar)*(rhostar)*(rhostar)*(rhostar))*tmp_12*((tmp_3 + 1)*(tmp_3 + 1))*(-((mi.alpha)*(mi.alpha)) + ((mi.betaX)*(mi.betaX))*mi.gamDDxx + 2*mi.betaX*mi.betaY*mi.gamDDxy + 2*mi.betaX*mi.betaZ*mi.gamDDxz + ((mi.betaY)*(mi.betaY))*mi.gamDDyy + 2*mi.betaY*mi.betaZ*mi.gamDDyz + ((mi.betaZ)*(mi.betaZ))*mi.gamDDzz)/(((mi.alpha)*(mi.alpha))*((rho)*(rho)));
}
/*
* Original SymPy expression:
* "Stilde2 = con[iSx]**2*(mi.gamDDyy*mi.gamDDzz - mi.gamDDyz**2)/(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy) + 2*con[iSx]*con[iSy]*(-mi.gamDDxy*mi.gamDDzz + mi.gamDDxz*mi.gamDDyz)/(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy) + 2*con[iSx]*con[iSz]*(mi.gamDDxy*mi.gamDDyz - mi.gamDDxz*mi.gamDDyy)/(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy) + con[iSy]**2*(mi.gamDDxx*mi.gamDDzz - mi.gamDDxz**2)/(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy) + 2*con[iSy]*con[iSz]*(-mi.gamDDxx*mi.gamDDyz + mi.gamDDxy*mi.gamDDxz)/(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy) + con[iSz]**2*(mi.gamDDxx*mi.gamDDyy - mi.gamDDxy**2)/(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*mi.gamDDyz**2 - mi.gamDDxy**2*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - mi.gamDDxz**2*mi.gamDDyy)"
*/
{
const double tmp_5 = (1.0/(mi.gamDDxx*mi.gamDDyy*mi.gamDDzz - mi.gamDDxx*((mi.gamDDyz)*(mi.gamDDyz)) - ((mi.gamDDxy)*(mi.gamDDxy))*mi.gamDDzz + 2*mi.gamDDxy*mi.gamDDxz*mi.gamDDyz - ((mi.gamDDxz)*(mi.gamDDxz))*mi.gamDDyy));
const double tmp_6 = 2*con[iSx]*tmp_5;
Stilde2 = ((con[iSx])*(con[iSx]))*tmp_5*(mi.gamDDyy*mi.gamDDzz - ((mi.gamDDyz)*(mi.gamDDyz))) + ((con[iSy])*(con[iSy]))*tmp_5*(mi.gamDDxx*mi.gamDDzz - ((mi.gamDDxz)*(mi.gamDDxz))) + 2*con[iSy]*con[iSz]*tmp_5*(-mi.gamDDxx*mi.gamDDyz + mi.gamDDxy*mi.gamDDxz) + con[iSy]*tmp_6*(-mi.gamDDxy*mi.gamDDzz + mi.gamDDxz*mi.gamDDyz) + ((con[iSz])*(con[iSz]))*tmp_5*(mi.gamDDxx*mi.gamDDyy - ((mi.gamDDxy)*(mi.gamDDxy))) + con[iSz]*tmp_6*(mi.gamDDxy*mi.gamDDyz - mi.gamDDxz*mi.gamDDyy);
}
The root solve above finds $\rho$, which then allows us to get
\begin{equation}
u^0 = \frac{\rhostar}{\alpha\rtgamma\rho}\quad\textrm{and}\quad \vel = \frac{\uvec}{u^0} = \frac{\Svectilde}{\rhostar h(\rho)}.
\end{equation}
and thus we can find the rest of the primitives.
```python
#rhostar = sp.symbols("rhostar")
#StildeU = ixp.declarerank1("StildeU")
velU = ixp.zerorank1()
#lapse, rtgamma, rho, gamma1, c = sp.symbols("lapse rtgamma rho gamma1 c")
rho, rhostar = sp.symbols("testPrim[iRho] con[iRhoStar]")
u0 = rhostar/(lapse*rtgamma*rho)
epsilon = p0/rho0*(rho/rho0)**(gamma1 - 1)/(gamma1 - 1)
h = 1. + gamma1*epsilon
for i in range(DIM) :
for j in range(DIM) :
velU[i] += gammaUU[i][j]*StildeD[j]/(rhostar * h)/u0
outputC([h,u0,velU[0],velU[1],velU[2]], ["h", "u0","testPrim[ivx]", "testPrim[ivy]", "testPrim[ivz]"],filename="NRPY+getv.h")
```
Wrote to file "NRPY+getv.h"
<a id='lorentz'></a>
# Step 7: Lorentz Boosts
$$\label{lorentz}$$
We need to boost to the frame of the moving face. The boost is
\begin{equation}
B(\beta) =\begin{pmatrix}
\gamma & -\beta\gamma n_x & -\beta\gamma n_y & -\beta\gamma n_z \\
-\beta\gamma n_x & 1 + (\gamma-1)n_x^2 & (\gamma-1)n_x n_y & (\gamma-1)n_x n_z\\
-\beta\gamma n_y & (\gamma-1)n_y n_x & 1 + (\gamma-1)n_y^2 & (\gamma-1)n_y n_z\\
-\beta\gamma n_z & (\gamma-1)n_z n_x & (\gamma-1)n_z n_y & 1 + (\gamma-1)n_z^2
\end{pmatrix}
\end{equation}
And the boost acts as $X' = B(\beta) X$, where $X'$ and $X$ are four-vectors.
So the rest of this is straightforward.
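As a sketch (assuming $c = 1$; this NumPy helper is hypothetical and not part of the notebook), the boost matrix above and its defining property $B(-\beta) = B(\beta)^{-1}$ look like:

```python
import numpy as np

def boost_matrix(beta_vec):
    """Lorentz boost B(beta) for a 3-velocity beta_vec = beta * n, |n| = 1."""
    beta_vec = np.asarray(beta_vec, dtype=float)
    beta = np.linalg.norm(beta_vec)
    B = np.eye(4)
    if beta == 0.0:
        return B
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    n = beta_vec / beta
    B[0, 0] = gamma
    B[0, 1:] = B[1:, 0] = -beta * gamma * n  # first row and column
    B[1:, 1:] = np.eye(3) + (gamma - 1.0) * np.outer(n, n)
    return B

# Boosting and then boosting back must recover the original four-vector:
X = np.array([1.0, 0.2, -0.1, 0.4])
B = boost_matrix([0.3, 0.0, 0.0])
Binv = boost_matrix([-0.3, 0.0, 0.0])
assert np.allclose(Binv @ (B @ X), X)
```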
<a id='TmunuSph'></a>
# Step 8: Compute $T^{\mu\nu}$ in Cartesian Coordinates
$$\label{TmunuSph}$$
```python
# declare gammaDD
gammaDD = ixp.zerorank2()
components = ["xx", "xy", "xz", "yy", "yz", "zz"]
names = ""
for comp in components :
names = names + "mi.gamDD{0} ".format(comp)
g11, g12, g13, g22, g23, g33 = sp.symbols( names)
gammaDD[0][0] = g11
gammaDD[0][1] = g12
gammaDD[0][2] = g13
gammaDD[1][0] = g12
gammaDD[1][1] = g22
gammaDD[1][2] = g23
gammaDD[2][0] = g13
gammaDD[2][1] = g23
gammaDD[2][2] = g33
#declare alpha
alpha = sp.symbols( "mi.alpha")
#declare beta
betaU = ixp.zerorank1()
for i, comp in enumerate(["X", "Y", "Z"]) :
betaU[i] = 0 #NEED A BETTER WAY sp.symbols( "mi.beta{0}".format(comp), real=True)
#now get the primitives
rho_b, epsilon, P = sp.symbols("rho ie press")
#get the 3-velocities
vU = ixp.zerorank1()
for i, comp in enumerate( ["vx", "vy", "vz"]) :
vU[i] = sp.symbols("{0}".format(comp))
Ge.u4U_in_terms_of_vU__rescale_vU_by_applying_speed_limit(alpha,betaU,gammaDD, vU)
u4U = Ge.u4U_ito_vU
# First compute stress-energy tensor T4UU and T4UD in Cartesian coordinates:
Ge.compute_T4UU(gammaDD,betaU,alpha, rho_b,P,epsilon,u4U)
Ge.compute_T4UD(gammaDD,betaU,alpha, Ge.T4UU)
outputC([Ge.T4UU[0][0],Ge.T4UU[1][1],Ge.T4UU[2][2],Ge.T4UU[3][3] ], ["T4UU_diag[0]", "T4UU_diag[1]","T4UU_diag[2]", "T4UU_diag[3]"],filename="NRPY+getT4UU.h")
```
Wrote to file "NRPY+getT4UU.h"
<a id='latex_pdf_output'></a>
# Step 9: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-GRHD_Equations-Cartesian-c-code.pdf](Tutorial-GRHD_Equations-Cartesian-c-code.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```python
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-GRHD_Equations-Cartesian-c-code")
```
Created Tutorial-GRHD_Equations-Cartesian-c-code.tex, and compiled LaTeX
file to PDF file Tutorial-GRHD_Equations-Cartesian-c-code.pdf
{
"filename": "cm_redis_corr.py",
"repo_name": "HERA-Team/hera_mc",
"repo_path": "hera_mc_extracted/hera_mc-main/hera_mc/cm_redis_corr.py",
"type": "Python"
}
# -*- mode: python; coding: utf-8 -*-
# Copyright 2018 the HERA Collaboration
# Licensed under the 2-clause BSD license.
"""Methods for locating the correlator and handling various system aspects."""
import json
import time
import warnings
import redis
from . import cm_sysutils, correlator, mc
REDIS_CMINFO_HASH = "cminfo"
REDIS_CORR_HASH = "corr:map"
def cminfo_redis_snap(cminfo):
"""
Build a dictionary of correlator mappings.
Use hera_mc's get_cminfo_correlator method to build a dictionary
of correlator mappings for redis insertion.
Parameters
----------
cminfo : dict
Dictionary as returned from get_cminfo_correlator()
Returns
-------
snap_to_ant : dict
Dictionary mapping the snaps to the antennas
ant_to_snap : dict
Dictionary mapping each antenna to its snap
"""
snap_to_ant = {}
ant_to_snap = {}
all_snap_inputs = {}
snap_to_serial = {}
for _n, ant in enumerate(cminfo["antenna_numbers"]):
name = cminfo["antenna_names"][_n]
corr_inp = cminfo["correlator_inputs"][_n]
snap_snr = cminfo["snap_serial_numbers"][_n]
ant_to_snap[ant] = {}
for _i, psnap in enumerate(corr_inp):
pol = psnap.lower()[0]
if pol not in ["e", "n"] or ">" not in psnap:
warnings.warn(f"{psnap} is not an allowed correlator input")
continue
snap_input, snap_hostname = psnap.split(">")
channel = int(snap_input[1:]) // 2 # divide by 2 because ADC is in demux 2
ant_to_snap[ant][pol] = {"host": snap_hostname, "channel": channel}
snap_to_ant.setdefault(snap_hostname, [None] * 6)
snap_to_ant[snap_hostname][channel] = name + pol.upper()
all_snap_inputs.setdefault(snap_hostname, [])
all_snap_inputs[snap_hostname].append(snap_input)
try: # if already present make sure it agrees
if snap_to_serial[snap_hostname] != snap_snr[_i]:
msg = "Snap hostname-to-serial inconsistent:\n"
msg += (
"\tCurrent for antpol {}{} -> hostname={}, serial={}\n".format(
ant, pol, snap_hostname, snap_snr[_i]
)
)
msg += "\tStored -> hostname={}, serial={}".format(
snap_hostname, snap_to_serial[snap_hostname]
)
raise ValueError(msg)
except KeyError: # if not present, add it
snap_to_serial[snap_hostname] = snap_snr[_i]
for key, value in all_snap_inputs.items():
all_snap_inputs[key] = sorted(value)
return snap_to_ant, ant_to_snap, all_snap_inputs, snap_to_serial
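# Illustrative sketch (not used by hera_mc): the per-antpol parsing that
# cminfo_redis_snap applies to a correlator-input string such as "e6>snap23"
# (a hypothetical hostname).  The leading character is the feed polarization,
# the digits before ">" give the ADC input, and integer division by 2 yields
# the SNAP channel because the ADC runs in demux 2.
def _parse_corr_input_sketch(psnap):
    if psnap.lower()[0] not in ("e", "n") or ">" not in psnap:
        return None  # mirrors the warn-and-skip branch above
    snap_input, snap_hostname = psnap.split(">")
    return snap_hostname, snap_input[0].lower(), int(snap_input[1:]) // 2
# e.g. _parse_corr_input_sketch("e6>snap23") -> ("snap23", "e", 3)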
def set_redis_cminfo(
redishost=correlator.DEFAULT_REDIS_ADDRESS, session=None, testing=False
):
"""
Write config info to redis database for the correlator.
Parameters
----------
redishost : None or str
Hostname for the redis database. If None uses default
session : None or hera_mc session
Session for hera_mc instance. None uses default
testing : bool
If True, will use the testing_ hash in redis
"""
# This is retained so that explicitly providing redishost=None has the desired behavior
if redishost is None: # pragma: no cover
redishost = correlator.DEFAULT_REDIS_ADDRESS
redis_pool = redis.ConnectionPool(host=redishost)
rsession = redis.Redis(connection_pool=redis_pool)
# Write cminfo content into redis (cminfo)
with mc.MCSessionWrapper(session=session, testing=testing) as session:
h = cm_sysutils.Handling(session=session)
cminfo = h.get_cminfo_correlator()
redhkey = {}
for key, value in cminfo.items():
redhkey[key] = json.dumps(value)
redis_hash = REDIS_CMINFO_HASH
if testing:
redis_hash = "testing_" + REDIS_CMINFO_HASH
rsession.hset(redis_hash, mapping=redhkey)
if testing:
rsession.expire(redis_hash, 300)
# Write correlator mappings to redis (corr:map)
redis_hash = REDIS_CORR_HASH
if testing:
redis_hash = "testing_" + REDIS_CORR_HASH
snap_to_ant, ant_to_snap, all_snap_inputs, snap_to_serial = cminfo_redis_snap(
cminfo
)
redhkey = {}
redhkey["snap_to_ant"] = json.dumps(snap_to_ant)
redhkey["ant_to_snap"] = json.dumps(ant_to_snap)
redhkey["all_snap_inputs"] = json.dumps(all_snap_inputs)
redhkey["snap_to_serial"] = json.dumps(snap_to_serial)
redhkey["update_time"] = time.time()
redhkey["update_time_str"] = time.ctime(redhkey["update_time"])
rsession.hset(redis_hash, mapping=redhkey)
if testing:
rsession.expire(redis_hash, 300)
{
"filename": "tfsa-2022-016.md",
"repo_name": "tensorflow/tensorflow",
"repo_path": "tensorflow_extracted/tensorflow-master/tensorflow/security/advisory/tfsa-2022-016.md",
"type": "Markdown"
}
## TFSA-2022-016: Undefined behavior in `SparseTensorSliceDataset`
### CVE Number
CVE-2022-21736
### Impact
The [implementation of `SparseTensorSliceDataset`](https://github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/core/kernels/data/sparse_tensor_slice_dataset_op.cc#L227-L292) has undefined behavior: under certain conditions it can be made to dereference a `nullptr` value:
```python
import tensorflow as tf
import numpy as np
tf.raw_ops.SparseTensorSliceDataset(
indices=[[]],
values=[],
dense_shape=[1,1])
```
The 3 input arguments represent a sparse tensor. However, there are some preconditions that these arguments must satisfy but these are not validated in the implementation.
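The checks below sketch the kind of validation that is missing; they are illustrative (written against NumPy arrays), not the actual patch:

```python
import numpy as np

def validate_sparse_components(indices, values, dense_shape):
    """Hypothetical precondition checks for an (indices, values, dense_shape) triple."""
    indices = np.asarray(indices)
    values = np.asarray(values)
    dense_shape = np.asarray(dense_shape)
    if indices.ndim != 2:
        raise ValueError("indices must be a rank-2 tensor")
    if indices.shape[0] != values.shape[0]:
        raise ValueError("one value is required per row of indices")
    if indices.shape[1] != dense_shape.shape[0]:
        raise ValueError("index rank must match the length of dense_shape")
    return True
```

The proof-of-concept inputs above fail the second check: `indices=[[]]` contains one (empty) row, while `values=[]` supplies zero values.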
### Patches
We have patched the issue in GitHub commit [965b97e4a9650495cda5a8c210ef6684b4b9eceb](https://github.com/tensorflow/tensorflow/commit/965b97e4a9650495cda5a8c210ef6684b4b9eceb).
The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.
### For more information
Please consult [our security guide](https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md) for more information regarding the security model and how to contact us with issues and questions.
### Attribution
This vulnerability has been reported by Faysal Hossain Shezan from University of Virginia.
{
"filename": "run_gPhoton.py",
"repo_name": "rbuehler/vasca",
"repo_path": "vasca_extracted/vasca-main/vasca/examples/run_gPhoton.py",
"type": "Python"
}
#!/usr/bin/env python
# coding: utf-8
import gPhoton
import numpy as np
import os
from astropy import units as uu
from vasca.region import Region
from astropy.table import Table
from vasca.resource_manager import ResourceManager
region_name = "ALL_10-800" # "TDS" # "WD" #"MDIS_10-800" # _ELAISN1
t_binning = 5 # Binning in seconds, if -1 "visit binning"
t_name = "" if t_binning < 0 else "_" + str(t_binning)
# srcs_ids = [
# 193067,
# 432606,
# 535864,
# 451644,
# 1551422,
# 541266,
# 581995,
# 625693,
# 187856,
# 8215,
# 494782,
# 166179,
# 172775,
# 34658,
# 98746,
# 1521738,
# 2136829,
# 297278,
# 426363,
# 426330,
# 151796,
# 305192,
# 259271,
# 388172,
# 265150,
# 54184,
# 472623,
# 419001,
# 25273,
# 26195,
# 32448,
# 199832,
# ] # WD ALL_10-800
srcs_ids = [357455] # Massive pulsating WD
rm = ResourceManager()
outdir = rm.get_path("gal_gphoton", "sas_cloud")
region_fname = (
"./vasca_pipeline/" + region_name + "/region_" + region_name + "_cat.fits"
)
ron = (6 * uu.arcsec).to(uu.deg).value # Radius of signal annulus
roff1 = (10 * uu.arcsec).to(uu.deg).value # Inner radius of background ring
roff2 = (15 * uu.arcsec).to(uu.deg).value # Outer radius of background ring
bands = ["NUV", "FUV"] # "NUV",
rg = Region()
rg.load_from_fits(region_fname)
# Subselect sources based on choice
if len(srcs_ids) > 0:
rg.tt_sources.add_index("rg_src_id")
idx_srcs = rg.tt_sources.loc_indices["rg_src_id", srcs_ids]
tt_srcs = Table(rg.tt_sources[idx_srcs])
else:
tt_srcs = rg.tt_sources
for band in bands:
print("--- Running for band", band)
for rg_src_id in srcs_ids:
tt_src = tt_srcs[tt_srcs["rg_src_id"] == rg_src_id]
# Load file
fname_base = (
"gPhoton_ra"
+ str(round(tt_src["ra"][0], 3))
+ "_dec"
+ str(round(tt_src["dec"][0], 3))
)
outfile_fin = outdir + "/" + fname_base + "_" + band.lower() + "_fin.npy"
outfile_app = outdir + "/" + fname_base + "_" + band.lower() + t_name + "_app.npy"
# Run or load gFind
if os.path.isfile(outfile_fin):
print("Loading file", outfile_fin)
dd_gfind = np.load(outfile_fin, allow_pickle="TRUE").item()
t_bins = list(zip(dd_gfind[band]["t0"], dd_gfind[band]["t1"]))
print("Number of time bins:", len(t_bins))
else:
print("Running query with gFind for:\n", tt_src)
dd_gfind = gPhoton.gFind(
band=band, skypos=[tt_src["ra"][0], tt_src["dec"][0]]
) # ,maxgap=100.,minexp=100.
t_bins = list(zip(dd_gfind[band]["t0"], dd_gfind[band]["t1"]))
np.save(outfile_fin, dd_gfind)
#If binning with fixed time bins is requested, define bin times
if t_binning >0:
t_bins_start = []
t_bins_end = []
for bin in t_bins:
bins_fine = np.arange(bin[0], bin[1], t_binning)
t_bins_start.extend(bins_fine[:-1])
t_bins_end.extend(bins_fine[1:])
t_bins = list(zip(t_bins_start,t_bins_end))
print(t_bins)
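# Illustrative helper (not called by this script): the fixed-width rebinning
# above, factored out.  Each (t0, t1) range is cut into consecutive sub-bins
# of width dt; a trailing remainder shorter than dt is dropped, exactly as
# with np.arange above.
def rebin_time_ranges(t_ranges, dt):
    starts, ends = [], []
    for t0, t1 in t_ranges:
        edges = np.arange(t0, t1, dt)
        starts.extend(edges[:-1])
        ends.extend(edges[1:])
    return list(zip(starts, ends))
# e.g. rebin_time_ranges([(0.0, 12.0)], 5) -> [(0.0, 5.0), (5.0, 10.0)]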
# Run or load gApperture
if os.path.isfile(outfile_app):
print("gAperture file already found, not running", outfile_app)
else:
print("Running lightcurve with gAperture..")
dd_gaperture = gPhoton.gAperture(
band=band,
skypos=[tt_src["ra"][0], tt_src["dec"][0]],
radius=ron,
annulus=[roff1, roff2],
tranges=t_bins,
)
dd_gph = {"gAperture": dd_gaperture, "gFind": dd_gfind}
np.save(outfile_app, dd_gph)
print("..done")
{
"filename": "plotbandpass3.py",
"repo_name": "nicokurtovic/SIMIO",
"repo_path": "SIMIO_extracted/SIMIO-main/codes/analysis_scripts/plotbandpass3.py",
"type": "Python"
}
#########################################################################
#
# plotbandpass3.py
#
# This is meant to be a generic task to display CASA bandpass solutions
# with options to overlay them in various combinations. It is meant to
# work the 'new' (casa 3.4) calibration tables, and (unlike plotbandpass
# and plotbandpass2) to allow mixed spws (e.g. TDM and FDM for ALMA).
# This function uses the msmd tool when run in casa 4.1.0, but for older
# versions, it still relies on
# the ValueMapping class written by Stuartt Corder in analysisUtils.py.
# It is designed to be called from inside the analysisUtils namespace:
# import analysisUtils as au
# au.plotbandpass()
# or directly on its own:
# import plotbandpass3 as p
# p.plotbandpass()
#
# (not to be confused with plotbpass.py, which was written for a specific
# purpose of analyzing ALMA bandpass stability)
#
# Todd R. Hunter September 2012
#
# for regressions, see plotbandpass_regression.py
#
from __future__ import print_function # prevents adding old-style print statements
PLOTBANDPASS_REVISION_STRING = "$Id: plotbandpass3.py,v 1.210 2021/02/19 18:10:02 thunter Exp $"
import pylab as pb
import math, os, sys, re, inspect
import time as timeUtilities
import numpy as np
import analysisUtils as au
try:
import casalith
casaVersion = casalith.version_string()
except:
import casadef # necessary to read the casa version strings
if casadef.casa_version >= '5.0.0':
import casa as mycasa
if 'cutool' in dir(mycasa):
cu = mycasa.cutool()
casaVersion = '.'.join([str(i) for i in cu.version()[:-1]]) + '-' + str(cu.version()[-1])
else:
casaVersion = mycasa.casa['build']['version'].split()[0]
else:
casaVersion = casadef.casa_version
try:
# will work if casaVersion < '5.9.9' or there is a local file taskinit.py
from taskinit import * # needed for things like tb.open
# print("plotbandpass3: imported casatools using taskinit *")
except:
from casatasks import casalog
from casatasks import gaincal
from casatasks import tclean
from casatools import measures as metool
from casatools import table as tbtool
from casatools import atmosphere as attool
from casatools import msmetadata as msmdtool
from casatools import image as iatool
from casatools import ms as mstool
from casatools import quanta as qatool
from casaplotms import plotms
# print("plotbandpass3: imported casatasks and casatools individually")
from matplotlib.ticker import MultipleLocator, FormatStrFormatter, ScalarFormatter
import matplotlib.transforms
import inspect
try:
from six.moves import input as raw_input
except:
pass # raw_input works in older versions where six.moves is unavailable
TOP_MARGIN = 0.25 # Used if showatm=T or showtksy=T
BOTTOM_MARGIN = 0.25 # Used if showfdm=T
MAX_ATM_CALC_CHANNELS = 512
# This is a color sequence found online which has distinguishable colors
markeredgewidth=0.0
overlayColors = [
[0.00, 0.00, 0.00], # black
[0.00, 0.00, 1.00], # blue
[0.00, 0.50, 0.00], # green
[1.00, 0.00, 0.00], # red
[0.00, 0.75, 0.75], # cyan
# [0.75, 0.00, 0.75], # magenta, avoid because it's the same as atmcolor
[0.25, 0.25, 0.25], # gray
[0.75, 0.25, 0.25], # brick
[0.95, 0.95, 0.00], # yellow
[0.25, 0.25, 0.75], # light blue
# [0.75, 0.75, 0.75], this color is invisible for some reason
[0.00, 1.00, 0.00], # bright green
[0.76, 0.57, 0.17], # olive
[0.54, 0.63, 0.22],
[0.34, 0.57, 0.92],
[1.00, 0.10, 0.60],
# [0.88, 0.75, 0.73], invisible
[0.10, 0.49, 0.47],
[0.66, 0.34, 0.65],
[0.99, 0.41, 0.23]]
overlayColors += overlayColors + overlayColors # 17*3 = 51 total colors
overlayColors += overlayColors + overlayColors # try to support antenna,time
overlayColors += overlayColors + overlayColors # try to support antenna,time
overlayColors += overlayColors + overlayColors # try to support antenna,time
overlayColors += overlayColors + overlayColors # try to support antenna,time
# Enumeration to keep track of plot pages
PAGE_ANT = 0
PAGE_SPW = 1
PAGE_TIME = 2
PAGE_AP = 3
# Used to parse command line arguments
myValidCharacterList = ['~', ',', ' ', '*',] + [str(m) for m in range(10)]
myValidCharacterListWithBang = ['~', ',', ' ', '*', '!',] + [str(m) for m in range(10)]
LARGE_POSITIVE = +1e20
LARGE_NEGATIVE = -1e20
maxAntennaNamesAcrossTheTop = 17
maxTimesAcrossTheTop = 13 # 17 for HH:MM, reduced by 1 below for subplot=11
antennaVerticalSpacing = 0.018 # 0.016
antennaHorizontalSpacing = 0.05
xstartTitle = 0.07
ystartTitle = 0.955
xstartPolLabel = 0.05
ystartOverlayLegend = 0.931
opaqueSky = 270. # Kelvin, used for scaling TebbSky
developerEmail = "thunter@nrao.edu"
#class Polarization:
# taken from Stokes.h in casa
# (Undefined, I,Q,U,V,RR,RL,LR,LL,XX,XY,YX,YY) = range(13)
def lineno():
"""Returns the current line number in our program."""
return inspect.currentframe().f_back.f_lineno
def println(linenumber, mystring):
global terminal
mystring = "v%s_%04d: " % (version(False).split()[2], linenumber) + mystring
print(mystring)
if (len(mystring) > 0):
mystring += '\n'
try: # check if global is defined
terminal.write(mystring)
terminal.flush()
except:
return
def version(showfile=True):
"""
Returns the CVS revision number.
"""
myversion = "$Id: plotbandpass3.py,v 1.210 2021/02/19 18:10:02 thunter Exp $"
if (showfile):
print("Loaded from %s" % (__file__))
return myversion
def refTypeToString(rtype):
rtypes = ['REST','LSRK','LSRD','BARY','GEO','TOPO','GALACTO','LGROUP','CMB']
return(rtypes[rtype])
def corrTypeToString(ptype):
ptypes = ['Undefined','I','Q','U','V','RR','RL','LR','LL','XX','XY','YX','YY']
mystring = ptypes[ptype]
# print "mystring = %s" % (mystring)
return(mystring)
def buildAntString(antID,msFound,msAnt):
if (msFound):
antstring = msAnt[antID]
else:
antstring = '%02d' % (antID)
return(antstring)
def makeplot(figfile,msFound,msAnt,overlayAntennas,pages,pagectr,density,
interactive,antennasToPlot,spwsToPlot,overlayTimes,overlayBasebands,
locationCalledFrom, xant, ispw, subplot, resample='1', debug=False,
figfileSequential=False, figfileNumber=0):
if (type(figfile) == str):
        if (figfile.find('/')>=0):
            directory = os.path.dirname(figfile)
            if (os.path.exists(directory) == False):
                print("Making directory = ", directory)
                os.makedirs(directory)
if (debug):
print("makeplot(%d): pagectr=%d, len(pages)=%d, len(spwsToPlot)=%d, len(figfile)=%d, figfileNumber=%d, xant=%d, msAnt=%s, antennasToPlot=%s, pages(ANT,SPW,TIME,AP)=" % (locationCalledFrom,
pagectr, len(pages),len(spwsToPlot), len(figfile), figfileNumber, xant, str(msAnt), str(antennasToPlot)), pages)
if (pages[pagectr][PAGE_SPW] >= len(spwsToPlot)):
        # necessary for test86: overlay='spw' of spectral scan dataset, to avoid indexing beyond the
        # end of the array in the case that the final frame is of a baseband with n spws, and
        # earlier frames had >n spws  (2014-04-08)
ispw = spwsToPlot[-1]
if debug:
print("setting ispw to final (because %d >= %d)" % (pages[pagectr][PAGE_SPW],len(spwsToPlot)))
else:
# CAS-8285: Added 'if' to be sure to use ispw passed in for single-panel plots, but
# use the original behavior for multi-panel plots simply to preserve the pngfile
# naming scheme (i.e. including the spw name of lower right panel) to match old
# regressions. Should probably remove this whole 'else' block someday, if I don't
# mind if future multi-panel filenames contain spw name of upper left panel.
if (subplot != 11 or overlayBasebands): # Add only this line for CAS-8285.
ispw = spwsToPlot[pages[pagectr][PAGE_SPW]]
if debug:
print("setting ispw to spwsToPlot[pages[pagectr=%d]=%d[PAGE_SPW]] = %d" % (pagectr,pages[pagectr][PAGE_SPW],ispw))
t = pages[pagectr][PAGE_TIME] # + 1
if (subplot == 11):
antstring = buildAntString(xant, msFound, msAnt) # fix for CAS-8285
else:
# this causes png file to be named for the antenna in the upper left corner, rather than lower right corner
antstring = buildAntString(antennasToPlot[pages[pagectr][PAGE_ANT]], msFound, msAnt) # original behavior
figfile = figfile.split('.png')[0] # this will be added back later
if (figfileSequential):
plotfilename = figfile + '.%03d' % (figfileNumber)
else:
if (msFound):
if (overlayAntennas and overlayTimes):
plotfilename = figfile+'.spw%02d'%(ispw)
elif (overlayAntennas):
plotfilename = figfile+'.spw%02d'%(ispw)+'.t%02d'%(t)
elif (overlayTimes):
plotfilename = figfile+'.'+antstring+'.spw%02d'%(ispw)
else:
plotfilename = figfile+'.'+antstring+'.spw%02d'%(ispw)+'.t%02d'%(t)
else:
if (overlayAntennas and overlayTimes):
plotfilename = figfile+'.spw%02d'%(ispw)
elif (overlayAntennas):
plotfilename = figfile+'.spw%02d'%(ispw)+'.t%02d'%(t)
elif (overlayTimes):
plotfilename = figfile+'.ant'+antstring+'.spw%02d'%(ispw)
else:
plotfilename = figfile+'.ant'+antstring+'.spw%02d'%(ispw)+'.t%02d'%(t)
if (int(resample) > 1):
plotfilename += '.resample%d.png' % (int(resample))
else:
plotfilename += '.png'
    # the 'interactive' test was disabled long ago (the condition was always True), so always report the filename
    print("Building %s" % (plotfilename))
pb.savefig(plotfilename, format='png', dpi=density)
return(plotfilename)
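The png naming scheme assembled above (antenna, then spw, then time index) can be illustrated with made-up values; the antenna name 'DA41', spw 3, and time index 2 below are purely hypothetical:

```python
figfile, antstring, ispw, t = 'bandpass', 'DA41', 3, 2
# the non-overlay case: figfile.<antenna>.spwNN.tNN.png
plotfilename = figfile + '.' + antstring + '.spw%02d' % (ispw) + '.t%02d' % (t) + '.png'
print(plotfilename)
```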
def utdatestring(mjdsec):
(mjd, dateTimeString) = au.mjdSecondsToMJDandUT(mjdsec)
tokens = dateTimeString.split()
return(tokens[0])
def mjdsecArrayToUTString(timerangeListTimes):
"""
accepts [4866334935, 4866335281] etc.
returns '08:04:10, 09:03:00' etc.
"""
timerangeListTimesString = ''
for t in timerangeListTimes:
timerangeListTimesString += utstring(t,3) + ' '
return(timerangeListTimesString)
def utstring(mjdsec, xframeStart=110):
(mjd, dateTimeString) = au.mjdSecondsToMJDandUT(mjdsec)
tokens = dateTimeString.split()
    hoursMinutes = tokens[1][:-3]        # drop the ':SS' seconds field
    hoursMinutesSeconds = tokens[1]
if (xframeStart == 110): # 2011-01-01 UT 00:00
return(tokens[0]+' '+tokens[2]+' '+hoursMinutes)
elif (xframeStart == 3):
return(hoursMinutesSeconds)
else: # 00:00
return(hoursMinutes)
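The au.mjdSecondsToMJDandUT call used by the three helpers above lives in analysisUtils; a simplified pure-Python stand-in (ignoring leap seconds, so it can differ slightly from the CASA result) looks like:

```python
from datetime import datetime, timedelta

MJD_EPOCH = datetime(1858, 11, 17)  # MJD zero point, in UT

def mjdsec_to_ut(mjdsec):
    """Convert MJD seconds to (mjd, 'YYYY-MM-DD HH:MM:SS'); leap seconds ignored."""
    mjd = mjdsec / 86400.0
    t = MJD_EPOCH + timedelta(seconds=mjdsec)
    return mjd, t.strftime('%Y-%m-%d %H:%M:%S')

print(mjdsec_to_ut(4866334935.0))
```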
def openBpolyFile(caltable, debug=False):
mytb = au.createCasaTool(tbtool)
mytb.open(caltable)
desc = mytb.getdesc()
if ('POLY_MODE' in desc):
polyMode = mytb.getcol('POLY_MODE')
print("This is a BPOLY solution = %s" % (polyMode[0]))
polyType = mytb.getcol('POLY_TYPE')
scaleFactor = mytb.getcol('SCALE_FACTOR')
antenna1 = mytb.getcol('ANTENNA1')
times = mytb.getcol('TIME')
cal_desc_id = mytb.getcol('CAL_DESC_ID')
nRows = len(polyType)
for pType in polyType:
if (pType != 'CHEBYSHEV'):
                print("I do not recognize polynomial type = %s" % (pType))
return
# Here we assume that all spws have been solved with the same mode
uniqueTimesBP = np.unique(mytb.getcol('TIME'))
nUniqueTimesBP = len(uniqueTimesBP)
print("There are %d unique times in the BPOLY solution:" % (nUniqueTimesBP))
if (nUniqueTimesBP == 2):
print("differing by %g seconds" % (uniqueTimesBP[1]-uniqueTimesBP[0]))
mystring = ''
for u in uniqueTimesBP:
mystring += '%.3f, ' % (u)
print(mystring)
nPolyAmp = mytb.getcol('N_POLY_AMP')
nPolyPhase = mytb.getcol('N_POLY_PHASE')
frequencyLimits = mytb.getcol('VALID_DOMAIN')
increments = 0.001*(frequencyLimits[1,:]-frequencyLimits[0,:])
frequenciesGHz = []
for i in range(len(increments)):
freqs = (1e-9)*np.arange(frequencyLimits[0,i],frequencyLimits[1,i],increments[i])
frequenciesGHz.append(freqs)
polynomialAmplitude = []
polynomialPhase = []
for i in range(len(polyMode)):
polynomialAmplitude.append([1])
polynomialPhase.append([0])
if (polyMode[i] == 'A&P' or polyMode[i] == 'A'):
polynomialAmplitude[i] = mytb.getcell('POLY_COEFF_AMP',i)[0][0][0]
if (polyMode[i] == 'A&P' or polyMode[i] == 'P'):
polynomialPhase[i] = mytb.getcell('POLY_COEFF_PHASE',i)[0][0][0]
mytb.close()
mytb.open(caltable+'/CAL_DESC')
nSpws = len(mytb.getcol('NUM_SPW'))
spws = mytb.getcol('SPECTRAL_WINDOW_ID')
spwBP = []
for c in cal_desc_id:
spwBP.append(spws[0][c])
mytb.close()
        nPolarizations = len(polynomialAmplitude[0]) // nPolyAmp[0]  # integer division (Python 3 compatible)
if (debug):
print("(3)Set nPolarizations = %s" % (str(nPolarizations)))
# This value is overridden by the new function doPolarizations in ValueMapping.
# print "Inferring %d polarizations from size of polynomial array" % (nPolarizations)
return([polyMode, polyType, nPolyAmp, nPolyPhase, scaleFactor, nRows, nSpws, nUniqueTimesBP,
uniqueTimesBP, nPolarizations, frequencyLimits, increments, frequenciesGHz,
polynomialPhase, polynomialAmplitude, times, antenna1, cal_desc_id, spwBP])
else:
mytb.close()
return([])
# end of openBpolyFile()
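The coefficients returned by openBpolyFile() describe CHEBYSHEV polynomials over the VALID_DOMAIN frequency range; evaluating one can be sketched with numpy (the coefficients and frequency range below are made up for illustration):

```python
import numpy as np
from numpy.polynomial import chebyshev

coeffs = [1.0, 0.05, -0.02]               # hypothetical POLY_COEFF_AMP values
freqGHz = np.linspace(100.0, 102.0, 5)    # hypothetical VALID_DOMAIN, in GHz
# map the frequency axis onto [-1, 1], the natural Chebyshev domain
x = 2.0 * (freqGHz - freqGHz[0]) / (freqGHz[-1] - freqGHz[0]) - 1.0
amp = chebyshev.chebval(x, coeffs)
print(amp)
```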
def displayTimesArray(uniqueTimesPerFieldPerSpw):
"""
Displays an array of MJD second timestamps as UT timestamps
"""
legendString = ''
for s in uniqueTimesPerFieldPerSpw:
legendString += "["
for f in s:
legendString += "["
for t in f:
legendString += "%s" % utstring(t,3)
if (t != f[-1]):
legendString += ", "
legendString += "]"
if (f != s[-1]):
legendString += ', '
legendString += "], "
if (s != uniqueTimesPerFieldPerSpw[-1]):
legendString += ', '
print(legendString)
def checkPolsToPlot(polsToPlot, corr_type_string):
firstFailure = 0
for pol in polsToPlot:
        if (pol not in corr_type_string):
print("Polarization product %s is not in the ms" % (pol))
firstFailure += 1
if (pol in ['XX','YY']):
polsToPlot = ['RR','LL']
else:
polsToPlot = ['XX','YY']
break
if (firstFailure>0):
print("Looking for instead: ", polsToPlot)
for pol in polsToPlot:
        if (pol not in corr_type_string):
print("Polarization product %s is not in the ms" % (pol))
firstFailure += 1
if (pol in ['XX']):
polsToPlot = ['YY']
elif (pol in ['YY']):
polsToPlot = ['XX']
elif (pol in ['RR']):
polsToPlot = ['LL']
elif (pol in ['LL']):
polsToPlot = ['RR']
break
if (firstFailure > 1):
print("Looking for instead: ", polsToPlot)
for pol in polsToPlot:
        if (pol not in corr_type_string):
print("Polarization product %s is not in the ms" % (pol))
return([])
return(polsToPlot)
def getCorrType(msName, spwsToPlot, debug):
"""
Open the DATA_DESCRIPTION_ID table. Find the polarization_id of the first
spw in the list of spwsToPlot, then read the CORR_TYPE from POLARIZATION
table.
"""
mytb = au.createCasaTool(tbtool)
mytb.open(msName+'/DATA_DESCRIPTION')
spws = mytb.getcol('SPECTRAL_WINDOW_ID')
polarization_id = mytb.getcol('POLARIZATION_ID')
mytb.close()
pol_id = 0
telescopeName = au.getObservatoryName(msName)
mytb.open(msName+'/POLARIZATION')
for myspw in spwsToPlot:
if (debug):
print("looking for %d in %s" % (myspw, str(spws)))
row = list(spws).index(myspw)
if (row >= 0):
pol_id = polarization_id[row]
corr_type = mytb.getcell('CORR_TYPE',pol_id)
if (corr_type[0] >= 5 or (telescopeName.find('ALMA')<0 and telescopeName.find('VLA')<0)):
# Undefined, I, Q, U, V, which ALMA and VLA never use
# Need to allow non-VLA, non-ALMA to stop here
break
# num_corr = mytb.getcol('NUM_CORR')
mytb.close()
corr_type_string = []
if (len(corr_type) == 4):
print("This is a 4-polarization dataset.")
if (corr_type[0] in [5,6,7,8]):
corr_type = [5,8]
elif (corr_type[0] in [9,10,11,12]):
corr_type = [9,12]
else:
print("Unsupported polarization types = ", corr_type)
return(corr_type, corr_type_string)
# This overrides the len(gain_table) because it can have length=2 even when only 1 pol present
nPolarizations = len(corr_type)
if (debug):
print("getCorrType(): (2)Set nPolarizations = %d" % nPolarizations)
for ct in corr_type:
corr_type_string.append(corrTypeToString(ct))
print("corr_types = ", corr_type, " = ", corr_type_string)
return(corr_type, corr_type_string, nPolarizations)
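The CORR_TYPE integers decoded above follow the casacore Stokes enumeration, so the reduction of a 4-polarization set to the parallel hands can be checked directly:

```python
ptypes = ['Undefined','I','Q','U','V','RR','RL','LR','LL','XX','XY','YX','YY']
corr_type = [9, 10, 11, 12]          # a full linear-polarization set: XX,XY,YX,YY
if corr_type[0] in [9, 10, 11, 12]:
    corr_type = [9, 12]              # keep only the parallel hands
print([ptypes[ct] for ct in corr_type])
```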
def writeArgument(f,name,arg):
if (type(arg) == str):
s = "%-18s = '%s'" % (name,arg)
t = "%s='%s'" % (name,arg)
else:
s = "%-18s = %s" % (name,str(arg))
t = "%s=%s" % (name,arg)
f.write(s+'\n')
return(t)
def resampleSolution(x,y,resample=1):
"""
Takes a solution (quantity y vs. channel x) and expands the number
of points via linear interpolation
"""
newx = []
for i in range(len(x)):
newx.append(x[i])
if (i < len(x)-1):
for j in range(1,resample):
newx.append((x[i]*(resample-j) + x[i+1]*j)/(1.0*resample))
newx = np.array(newx)
    newy = np.interp(newx, x, y)   # linear interpolation onto the expanded grid
return(newx,newy)
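The loop in resampleSolution() builds the expanded channel grid point by point; for a uniform input axis the same result comes from np.linspace plus np.interp (a standalone check with made-up data):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.array([10.0, 20.0, 10.0])
resample = 2
# each original interval is subdivided into `resample` pieces
newx = np.linspace(x[0], x[-1], (len(x) - 1) * resample + 1)
newy = np.interp(newx, x, y)
print(newx, newy)
```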
def channelDifferences(y, x, resample=1):
"""
Takes a vector, and computes the channel-to-channel derivative.
Optionally, it will also resample the data and compute the
derivative.
- Todd Hunter
"""
x = np.array(x)
y = np.array(y)
if (len(x) > 1):
channelWidth = x[1]-x[0]
d = (np.diff(y)/np.diff(x))
newy = d*channelWidth
newx = (x[1:]+x[:-1])/2. # midpoints of input x-axis
else:
newx = x
newy = y
if (resample > 1):
x,y = resampleSolution(x,y,resample)
if (len(x) > 1):
channelWidth = x[1]-x[0]
d = (np.diff(y)/np.diff(x))
resy = d*channelWidth
resx = (x[1:]+x[:-1])/2. # midpoints of input x-axis
else:
resx = x
resy = y
else:
resy = newy
resx = newx
return(newy, newx, resy, resx)
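For a uniform channel axis, the derivative computed in channelDifferences() reduces to a scaled np.diff evaluated at bin midpoints; a minimal standalone version with toy data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # channel axis (uniform spacing)
y = np.array([0.0, 1.0, 3.0, 6.0])   # solution values
channelWidth = x[1] - x[0]
newy = (np.diff(y) / np.diff(x)) * channelWidth   # channel-to-channel differences
newx = (x[1:] + x[:-1]) / 2.0                     # midpoints of the input axis
print(newy, newx)
```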
def drawOverlayTimeLegends(xframe,firstFrame,xstartTitle,ystartTitle,caltable,titlesize,
fieldIndicesToPlot,ispwInCalTable,uniqueTimesPerFieldPerSpw,
timerangeListTimes, solutionTimeThresholdSeconds,debugSloppyMatch,
ystartOverlayLegend,debug,mysize, fieldsToPlot,myUniqueColor,
timeHorizontalSpacing, fieldIndex,overlayColors,
antennaVerticalSpacing, overlayAntennas, timerangeList, caltableTitle,
mytime, scansToPlot, scansForUniqueTimes):
"""
Draws the legend at the top of the page, if it is the correct time to do so,
including the overlayTimes, the 'UT' label, and the caltable name.
"""
if (xframe == firstFrame):
# draw title including caltable name
pb.text(xstartTitle, ystartTitle, caltableTitle, size=titlesize,
color='k', transform=pb.gcf().transFigure)
# support multi-fields with overlay='time'
uTPFPS = [] # stands for uniqueTimesPerFieldPerSpw
uTPFPStimerange = []
# Find all timerange integers for all fields, not just the ones that were plotted
allTimeranges = []
for f in range(len(uniqueTimesPerFieldPerSpw[ispwInCalTable])):
for t in uniqueTimesPerFieldPerSpw[ispwInCalTable][f]:
if (t in timerangeListTimes):
allTimeranges.append(list(timerangeListTimes).index(t))
for f in fieldIndicesToPlot:
# print "uniqueTimesPerFieldPerSpw[spw=%d][field=%d] = " % (ispwInCalTable,f), uniqueTimesPerFieldPerSpw[ispwInCalTable][f]
for t in uniqueTimesPerFieldPerSpw[ispwInCalTable][f]:
matched, mymatch = sloppyMatch(t, timerangeListTimes, solutionTimeThresholdSeconds,
myprint=debugSloppyMatch, whichone=True)
if (matched):
uTPFPS.append(t)
uTPFPStimerange.append(mymatch)
allTimeranges = list(np.sort(np.unique(allTimeranges)))
idx = np.argsort(uTPFPS)
uTPFPStimerange = np.array(uTPFPStimerange)[idx]
uTPFPS = np.sort(uTPFPS)
if (debug):
print("uTPFPS = ", np.array(uTPFPS, dtype='int'))
print("uTPFPStimerange = ", np.array(uTPFPStimerange, dtype='int'))
print("scansForUniqueTimes = ", scansForUniqueTimes)
print("timerangeList = ", timerangeList)
print("allTimeranges = ", allTimeranges)
print("fieldsToPlot = ", fieldsToPlot)
print("fieldIndex = ", fieldIndex)
            print("fieldIndicesToPlot = ", fieldIndicesToPlot)
timeFormat = 3 # HH:MM:SS
maxTimesAcross = maxTimesAcrossTheTop
if (firstFrame == 111):
maxTimesAcross -= 2
for a in range(len(uTPFPS)):
legendString = utstring(uTPFPS[a],timeFormat)
if (debug): print("----> %d: Defined legendString: %s" % (a,legendString))
if (a==0):
pb.text(xstartTitle-0.03, ystartOverlayLegend, 'UT',color='k',fontsize=mysize,
transform=pb.gcf().transFigure)
if (a < maxTimesAcross):
x0 = xstartTitle + (a*timeHorizontalSpacing)
y0 = ystartOverlayLegend
else:
# start going down the righthand side
x0 = xstartTitle + (maxTimesAcross*timeHorizontalSpacing)
y0 = ystartOverlayLegend-(a-maxTimesAcross)*antennaVerticalSpacing
if (True):
if (debug):
print("3)checking time %d" % (a))
if (sloppyMatch(uTPFPS[a],timerangeListTimes,solutionTimeThresholdSeconds,
mytime, scansToPlot, scansForUniqueTimes,
debugSloppyMatch)):
myUniqueTime = uTPFPS[a]
if (debug):
print("3)setting myUniqueTime to %d" % (myUniqueTime))
if (debug): print("----> Drawing legendString: %s" % (legendString))
if ((len(fieldsToPlot) > 1 or len(timerangeList) > 1) and overlayAntennas==False):
# having overlayAntennas==False here will force all time labels to be black (as desired)
if (debug):
if uTPFPStimerange[a] not in allTimeranges:
print("%s is not in allTimeranges=%s" % (uTPFPStimerange[a], allTimeranges))
else:
print("allTimeranges.index(%d) = %d" % (a,allTimeranges.index(uTPFPStimerange[a])))
# old method
# print "len(uTPFPS)=%d, a=%d, len(myUniqueColor)=%d, overlayColors[%d]=%s" % (len(uTPFPS),a,len(myUniqueColor),timerangeList[uTPFPStimerange[a]],str(overlayColors[timerangeList[uTPFPStimerange[a]]]))
# pb.text(x0, y0, legendString,color=overlayColors[timerangeList[uTPFPStimerange[a]]],fontsize=mysize,
# transform=pb.gcf().transFigure)
print("len(uTPFPS)=%d, a=%d, len(myUniqueColor)=%d, overlayColors[%d]=%s" % (len(uTPFPS),a,len(myUniqueColor),timerangeList[allTimeranges.index(uTPFPStimerange[a])],str(overlayColors[timerangeList[allTimeranges.index(uTPFPStimerange[a])]])))
# old method
# pb.text(x0, y0, legendString,color=overlayColors[timerangeList[a]],fontsize=mysize,
# transform=pb.gcf().transFigure)
if uTPFPStimerange[a] not in allTimeranges:
print("%s is not in allTimeranges=%s, skipping text label" % (uTPFPStimerange[a], allTimeranges))
else:
pb.text(x0, y0, legendString,color=overlayColors[timerangeList[allTimeranges.index(uTPFPStimerange[a])]],fontsize=mysize,
transform=pb.gcf().transFigure)
else:
pb.text(x0, y0, legendString,fontsize=mysize, transform=pb.gcf().transFigure)
def lineNumber():
"""Returns the current line number in our program."""
return inspect.currentframe().f_back.f_lineno
def drawAtmosphereAndFDM(showatm, showtsky, atmString, subplotRows, mysize, TebbSky,
TebbSkyImage,plotrange, xaxis, atmchan, atmfreq, transmission,
subplotCols, showatmPoints,xframe, channels,LO1,atmchanImage,
atmfreqImage,transmissionImage, firstFrame,showfdm,nChannels,
tableFormat,
originalSpw_casa33, chanFreqGHz_casa33,originalSpw,chanFreqGHz,
overlayTimes, overlayAntennas, xant, antennasToPlot, overlaySpws,
baseband, showBasebandNumber, basebandDict, overlayBasebands,
drewAtmosphere, showtsys=False, Trx=None):
"""
If requested by the user at the command line, draw the atmospheric curve
and the FDM window locations.
"""
mylineno = lineNumber()
ylim = pb.ylim() # CAS-8655
if ((showatm or showtsky) and len(atmString) > 0):
ylim = DrawAtmosphere(showatm, showtsky, subplotRows, atmString,
mysize, TebbSky, plotrange, xaxis, atmchan,
atmfreq, transmission, subplotCols,
showatmPoints=showatmPoints, xframe=xframe, channels=channels,
mylineno=mylineno,xant=xant, overlaySpws=overlaySpws,
overlayBasebands=overlayBasebands, drewAtmosphere=drewAtmosphere,
loc=201, showtsys=showtsys, Trx=Trx)
if (LO1 is not None):
# Now draw the image band
ylim = DrawAtmosphere(showatm,showtsky, subplotRows, atmString,
mysize, TebbSkyImage, plotrange, xaxis,
atmchanImage, atmfreqImage, transmissionImage,
subplotCols, LO1, xframe, firstFrame, showatmPoints, channels=channels,
mylineno=mylineno,xant=xant, overlaySpws=overlaySpws,
overlayBasebands=overlayBasebands,drewAtmosphere=drewAtmosphere,
loc=202, showtsys=showtsys, Trx=Trx)
if (xaxis.find('freq')>=0 and showfdm and nChannels <= 256):
if (tableFormat == 33):
showFDM(originalSpw_casa33, chanFreqGHz_casa33, baseband,
showBasebandNumber, basebandDict)
else:
showFDM(originalSpw, chanFreqGHz, baseband, showBasebandNumber,
basebandDict)
ylim = pb.ylim() # CAS-11062 need to pass the new wider limits back up to calling function
return ylim # CAS-8655
def DrawPolarizationLabelsForOverlayTime(xstartPolLabel,ystartPolLabel,corr_type,polsToPlot,
channeldiff,ystartMadLabel,subplotRows,gamp_mad,
mysize,
ampmarkstyle,markersize,ampmarkstyle2, gamp_std):
"""
Currently this is only called for amp vs. X plots. The corresponding code for phase
vs. X plots is still inside plotbandpass(). But this is okay because overlay='time'
is mainly intended for Tsys plots.
"""
# print "DrawPolarizationLabelsForOverlayTime"
x0 = xstartPolLabel
y0 = ystartPolLabel
if (corrTypeToString(corr_type[0]) in polsToPlot):
if (channeldiff > 0):
pb.text(x0, ystartMadLabel-0.03*subplotRows*0,
corrTypeToString(corr_type[0])+' MAD = %.4f, St.Dev = %.4f'%(gamp_mad[0]['mad'], gamp_std[0]['std']),
color='k',size=mysize, transform=pb.gca().transAxes)
if (ampmarkstyle.find('-')>=0):
pb.text(x0, y0, corrTypeToString(corr_type[0])+' solid', color='k',
size=mysize, transform=pb.gca().transAxes)
else:
pb.text(x0+0.02, y0, corrTypeToString(corr_type[0]), color='k',
size=mysize, transform=pb.gca().transAxes)
pdesc = pb.plot([x0-0.1], [y0], '%sk'%ampmarkstyle, markersize=markersize,
scalex=False,scaley=False, transform=pb.gca().transAxes,markeredgewidth=markeredgewidth)
if (len(corr_type) > 1):
if (corrTypeToString(corr_type[1]) in polsToPlot):
if (channeldiff > 0):
pb.text(x0, ystartMadLabel-0.03*subplotRows*1,
corrTypeToString(corr_type[1])+' MAD = %.4f, St.Dev = %.4f'%(gamp_mad[1]['mad'], gamp_std[1]['std']),
color='k',size=mysize, transform=pb.gca().transAxes)
if (ampmarkstyle2.find('--')>=0):
pb.text(x0, y0-0.03*subplotRows, corrTypeToString(corr_type[1])+' dashed',
color='k', size=mysize, transform=pb.gca().transAxes)
else:
pb.text(x0, y0-0.03*subplotRows, corrTypeToString(corr_type[1]), # removed +0.02*xrange on 11-Mar-2014
color='k', size=mysize, transform=pb.gca().transAxes)
pdesc = pb.plot([x0-0.1], [y0-0.03*subplotRows], '%sk'%ampmarkstyle2,
markersize=markersize, scalex=False,scaley=False,
transform=pb.gca().transAxes,markeredgewidth=markeredgewidth)
def countDigitsInXTickLabels(adesc):
"""
This was supposed to detect when x-axis labels overlap, and reduce the
number of major ticks accordingly.
    I have not gotten this to work yet; it only reads back '' as each label.
"""
xticklabels = adesc.get_xticklabels() # both of these give blank labels?
# xticklabels = pb.gca().xaxis.get_majorticklabels() # both of these give blank labels?
print("xticklabels = %s" % (str(xticklabels)))
for l in range(len(xticklabels)):
print("l = ", xticklabels[l])
digits = 0
for l in range(len(xticklabels)):
digits += len('%g' % (float(xticklabels[l].get_text())))
print("%d digits in %d labels" % (digits,len(xticklabels)))
def GetFieldIdsForFieldName(token, vm, mymsmd, msFields):
if (casaVersion >= '4.1.0'):
if (mymsmd != '' and mymsmd is not None):
myid = mymsmd.fieldsforname(token)[0]
else:
myid = list(msFields).index(token)
else:
myid = vm.getFieldIdsForFieldName(token)
return(myid)
def GetFieldNamesForFieldId(u, vm, mymsmd, msFields):
if (casaVersion >= '4.1.0'):
if (mymsmd != '' and mymsmd is not None):
myFieldName = mymsmd.namesforfields(u)[0]
else:
myFieldName = msFields[u]
else:
myFieldName = vm.getFieldNamesForFieldId(u)
return(myFieldName)
def computeHighestSpwIndexInSpwsToPlotThatHasCurrentScan(spwsToPlot, scansToPlotPerSpw, scan):
highestSpwIndex = -1
for i,spw in enumerate(spwsToPlot):
if (scan in scansToPlotPerSpw[spw]):
highestSpwIndex = i
return(highestSpwIndex)
def madOfDiff(solution):
"""
This function is used to decide which of two curves has more scatter, and hence should
be plotted first (i.e. shown in the background) when overlaying two solutions.
Added as part of CAS-9474 to do a better job of the selection
"""
if (len(solution) < 4):
return au.MAD(np.diff(solution))
else:
        start = len(solution) // 4
        stop = (len(solution) * 3) // 4
        ydata = np.array(solution[start:stop+1])
return au.MAD(np.diff(ydata))
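au.MAD above is the analysisUtils median absolute deviation; a standalone equivalent is shown below (the 0.6745 scaling, which makes the MAD comparable to a Gaussian sigma, is stated here as an assumption about the analysisUtils convention):

```python
import numpy as np

def mad(a, c=0.6745):
    """Median absolute deviation, scaled so that mad ~ sigma for Gaussian noise."""
    a = np.asarray(a)
    return np.median(np.abs(a - np.median(a))) / c

value = mad(np.diff([1.0, 2.0, 4.0, 7.0, 11.0]))
print(value)
```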
DEFAULT_PLATFORMING_THRESHOLD = 10.0 # unused if platformingSigma != 0
def plotbandpass3(caltable='', antenna='', field='', spw='', yaxis='amp',
xaxis='chan', figfile='', plotrange=[0,0,0,0], help=False,
caltable2='', overlay='', showflagged=False, timeranges='',
buildpdf=False, caltable3='', markersize=3, density=108,
interactive=True, showpoints='auto', showlines='auto', # 20 args
subplot='22', zoom='', poln='', showatm=False, pwv='auto',
gs='gs', convert='convert', chanrange='',
solutionTimeThresholdSeconds=30.0, debug=False, vm='',
phase='', ms='', showtsky=False, showfdm=False,showatmfield='',
lo1=None, showimage=False, showatmPoints=False, parentms='', # 40 args
pdftk='pdftk', channeldiff=False, edge=8, resample=1, vis='',
platformingThreshold=DEFAULT_PLATFORMING_THRESHOLD,
platformingSigma=5.0, basebands=None, showBasebandNumber=False,
scans='', figfileSequential=False, groupByBaseband=False,
cleanup=False, caltable2amplitudeOffset=0, xcolor='b',
ycolor='g', chanrangeSetXrange=False,
overlaySpwDistinguish='', asciiFile=False,
maxAtmCalcChannels=MAX_ATM_CALC_CHANNELS,maxAltitude=60,
firstPlot=0, Feff=0.99, SBGain=0.99, Trx='auto', showtsys=False): # 66 args
"""
This is a tool to plot bandpass solutions faster than plotcal. It is
designed to work on both the old style and new style cal tables. The
source code is in plotbandpass3.py. For more
detailed help, run au.plotbandpass3(help=True) or see examples at:
http://casaguides.nrao.edu/index.php?title=Plotbandpass
-- Todd Hunter
"""
print("%s" % (PLOTBANDPASS_REVISION_STRING))
DEBUG = debug
if (help):
print("Usage: plotbandpass(caltable='', antenna='', field='', spw='', yaxis='amp',")
print(" xaxis='chan', figfile='', plotrange=[0,0,0,0], help=False, caltable2='',")
print(" overlay='', showflagged=False, timeranges='', buildpdf=False, caltable3='',")
print(" markersize=3, density=108, interactive=True, showpoints='auto',")
print(" showlines='auto', subplot='22', zoom='', poln='', showatm=False, pwv='auto',")
print(" gs='gs', convert='convert', chanrange='', debug=False, vm='',")
print(" solutionTimeThresholdSeconds=30.0, phase='', ms='', showtsky=False,")
print(" showfdm=False, showatmfield='', lo1=None, showimage=False,")
print(" showatmPoints=False, parentms='', pdftk='pdftk', channeldiff=False,")
print(" edge=8, resample=1, vis='',platformingThreshold=%f," % (DEFAULT_PLATFORMING_THRESHOLD))
print(" platformingSigma=%.1f, basebands=None, showBasebandNumber=False," % (5.0))
print(" scans='', figfileSequential=False, groupByBaseband=False,")
print(" cleanup=False, caltable2amplitudeOffset=0, xcolor='b',")
print(" ycolor='g', chanrangeSetXrange=False)")
print(" antenna: must be ID (int or string or list), or a single antenna name or list")
print(" asciiFile: if True, then also dump an ascii file of the amplitude spectrum")
print(" basebands: show only spws from the specified baseband or list of basebands (default:None=all)")
print(" buildpdf: True/False, if True and figfile is set, assemble pngs into a pdf")
print(" caltable: a bandpass table, of type B or BPOLY")
print(" caltable2: a second cal table, of type BPOLY or B, to overlay on a B table")
print(" caltable2amplitudeOffset: constant value to add to caltable2")
print(" caltable3: a third cal table, of type BPOLY, to overlay on the first two")
print(" chanrange: set xrange ('5~100' or '80%') over which to autoscale y-axis for xaxis='freq'")
print(" chanrangeSetXrange: if True, then chanrange also sets the xrange to display")
print(" channeldiff: set to value > 0 (sigma) to plot derivatives of amplitude")
print(" cleanup: remove pngs after making pdf when buildpdf=True")
print(" convert: full path for convert command (in case it's not found)")
print(" density: dpi to use in creating PNGs and PDFs (default=108)")
print(" edge: the number of edge channels to ignore in finding outliers (for channeldiff>0)")
print(" field: must be an ID, source name, or list thereof; can use trailing *: 'J*'")
print(" figfile: the base_name of the png files to save: base_name.antX.spwY.png")
print(" figfileSequential: naming scheme, False: name by spw/antenna (default)")
print(" True: figfile.1.png, figfile.2.png, etc.")
print(" firstPlot: relevant for overlaying caltables; 1 -> plot first solution 1st, etc.")
print(" groupByBaseband: group spws for display by baseband")
print(" gs: full path for ghostscript command (in case it's not found)")
print(" help: print this message")
print(" interactive: if False, then figfile will run to completion automatically")
print(" lo1: specify the LO1 setting (in GHz) for the observation (used if showimage=T)")
print(" overlay: 'antenna','time','antenna,time','spw', or 'baseband'")
print(" makes 1 plot with different items in different colors")
print(" overlaySpwDistinguish: '','color','width','color,width'; use 'width2' for wider")
print(" showflagged: show the values of data, even if flagged")
print(" markersize: size of points (default=3)")
print(" ms: name of the ms for this table, in case it does not match the string in the caltable")
print(" parentms: name of the parent ms, in case the ms has been previously split")
print(" pdftk: full path for pdftk command (in case it's not found)")
print(" phase: the y-axis limits to use for phase plots when yaxis='both'")
print(" platformingSigma: declare platforming if the amplitude derivative exceeds this many times the MAD")
print(" platformingThreshold: if platformingSigma=0, then declare platforming if the amplitude")
print(" derivative exceeds this percentage of the median")
print(" plotrange: define axis limits: [x0,x1,y0,y1] where 0,0 means auto")
print(" poln: polarizations to plot (e.g. 'XX','YY','RR','LL' or '' for both)")
print(" pwv: define the pwv to use for the showatm option: 'auto' or value in mm")
print(" resample: channel expansion factor to use when computing MAD of derivative (for channeldiff>0)")
print(" scans: show only solutions for the specified scans (int, list, or string)")
print(" showatm: compute and overlay the atmospheric transmission curve")
print(" showatmfield: for overlay='time', use first observation of this fieldID or name")
print(" showatmPoints: draw atmospheric curve with points instead of a line")
print(" showBasebandNumber: put the BBC_NO in the title of each plot")
print(" showfdm: when showing TDM spws with xaxis='freq', draw locations of FDM spws.")
print(" If showBasebandNumber=True, then show all FDM spws regardless of baseband")
print(" showimage: also show the atmospheric curve for the image sideband (in black)")
print(" showtsky: compute and overlay the sky temperature curve instead of transmission")
print(" showlines: draw lines connecting the data (default=T for amp, F for phase)")
print(" showpoints: draw points for the data (default=F for amp, T for phase)")
print(" solutionTimeThresholdSeconds: consider 2 solutions simultaneous if within this interval (default=30)")
print(" spw: must be single ID or list or range (e.g. 0~4, not the original ID)")
print(" subplot: 11..81,22,32 or 42 for RowsxColumns (default=22), any 3rd digit is ignored")
print(" timeranges: show only these timeranges, the first timerange being 0")
print(" vm: the result from ValueMapping('my.ms'), or as returned from a previous call to plotbandpass")
print(" xaxis: 'chan' or 'freq'")
print(" xcolor: color for XX polarization points (default = blue)")
print(" yaxis: 'amp', 'tsys', 'phase', or 'both' amp&phase == 'ap'; append 'db' for dB")
print(" ycolor: color for YY polarization points (default = green)")
print(" zoom: 'intersect' will zoom to overlap region of caltable with caltable2")
return(vm)
mytimestamp = timeUtilities.time()
if type(poln) == list:
# fix for CASA6 which forces string to be a list of strings of length one
if len(poln) == 1:
poln = poln[0]
debugSloppyMatch = debug
doneOverlayTime = False # changed from True on 08-nov-2012
missingCalWVRErrorPrinted = False
if (ms == '' and vis != ''):
# make au.plotbandpass compatible with casapy's argument list (which uses vis instead of ms)
ms = vis # this is the only use of 'vis' from the command line
# initialize the arguments to DrawAtmosphereAndFDM()
TebbSky = None
TebbSkyImage = None
Tsys = None
TsysImage = None
atmchan = None
atmfreq = None
transmission = None
atmchanImage = None
atmfreqImage = None
transmissionImage = None
originalSpw_casa33 = None
originalSpw = None
chanFreqGHz_casa33 = None
chanFreqGHz = None
# initialize arguments to DrawPolarizationLabelsForOverlayTime()
gamp_mad = None
gamp_std = None
figfileNumber = 0 # only used if figfileSequential == True
# Write a .last file
cmd = 'plotbandpass'
if (os.access(os.getcwd(),os.W_OK)):
if (os.path.exists('plotbandpass.last') == False or os.access('plotbandpass.last',os.W_OK)):
lastfile = open('%s.last'%cmd, 'w')
lastfile.write('taskname = "%s"\n'%cmd)
cmd += '(' + writeArgument(lastfile, "caltable", caltable)
cmd += ',' + writeArgument(lastfile, "antenna" , antenna)
cmd += ',' + writeArgument(lastfile, "field" , field)
cmd += ',' + writeArgument(lastfile, "spw" , spw)
cmd += ',' + writeArgument(lastfile, "yaxis", yaxis)
cmd += ',' + writeArgument(lastfile, "xaxis", xaxis)
cmd += ',' + writeArgument(lastfile, "figfile", figfile)
cmd += ',' + writeArgument(lastfile, "plotrange" , plotrange)
cmd += ',' + writeArgument(lastfile, "help", help)
cmd += ',' + writeArgument(lastfile, "caltable2", caltable2)
cmd += ',' + writeArgument(lastfile, "overlay", overlay)
cmd += ',' + writeArgument(lastfile, "showflagged", showflagged)
cmd += ',' + writeArgument(lastfile, "timeranges", timeranges)
cmd += ',' + writeArgument(lastfile, "buildpdf", buildpdf)
cmd += ',' + writeArgument(lastfile, "caltable3", caltable3)
cmd += ',' + writeArgument(lastfile, "markersize", markersize)
cmd += ',' + writeArgument(lastfile, "density", density)
cmd += ',' + writeArgument(lastfile, "interactive", interactive)
cmd += ',' + writeArgument(lastfile, "showpoints", showpoints)
cmd += ',' + writeArgument(lastfile, "showlines", showlines)
cmd += ',' + writeArgument(lastfile, "subplot", subplot)
cmd += ',' + writeArgument(lastfile, "zoom", zoom)
cmd += ',' + writeArgument(lastfile, "poln", poln)
cmd += ',' + writeArgument(lastfile, "showatm", showatm)
cmd += ',' + writeArgument(lastfile, "showatmfield", showatmfield)
cmd += ',' + writeArgument(lastfile, "pwv", pwv)
cmd += ',' + writeArgument(lastfile, "gs", gs)
cmd += ',' + writeArgument(lastfile, "convert", convert)
cmd += ',' + writeArgument(lastfile, "chanrange", chanrange)
cmd += ',' + writeArgument(lastfile, "chanrangeSetXrange", chanrangeSetXrange)
cmd += ',' + writeArgument(lastfile, "solutionTimeThresholdSeconds", solutionTimeThresholdSeconds)
cmd += ',' + writeArgument(lastfile, "debug", debug)
cmd += ',' + writeArgument(lastfile, "vm", vm)
cmd += ',' + writeArgument(lastfile, "phase", phase)
cmd += ',' + writeArgument(lastfile, "ms", ms)
cmd += ',' + writeArgument(lastfile, "parentms", parentms)
cmd += ',' + writeArgument(lastfile, "lo1", lo1)
cmd += ',' + writeArgument(lastfile, "showimage", showimage)
cmd += ',' + writeArgument(lastfile, "showtsky", showtsky)
cmd += ',' + writeArgument(lastfile, "showatmPoints", showatmPoints)
cmd += ',' + writeArgument(lastfile, "showfdm", showfdm)
cmd += ',' + writeArgument(lastfile, "pdftk", pdftk)
cmd += ',' + writeArgument(lastfile, "channeldiff", channeldiff)
cmd += ',' + writeArgument(lastfile, "edge", edge)
cmd += ',' + writeArgument(lastfile, "resample", resample)
cmd += ',' + writeArgument(lastfile, "vis", vis)
cmd += ',' + writeArgument(lastfile, "platformingThreshold", platformingThreshold)
cmd += ',' + writeArgument(lastfile, "platformingSigma", platformingSigma)
cmd += ',' + writeArgument(lastfile, "basebands", basebands)
cmd += ',' + writeArgument(lastfile, "showBasebandNumber", showBasebandNumber)
cmd += ',' + writeArgument(lastfile, "scans", scans)
cmd += ',' + writeArgument(lastfile, "figfileSequential", figfileSequential)
cmd += ',' + writeArgument(lastfile, "groupByBaseband", groupByBaseband)
cmd += ',' + writeArgument(lastfile, "cleanup", cleanup)
cmd += ',' + writeArgument(lastfile, "caltable2amplitudeOffset", caltable2amplitudeOffset)
cmd += ',' + writeArgument(lastfile, "xcolor", xcolor)
cmd += ',' + writeArgument(lastfile, "ycolor", ycolor) + ')'
lastfile.write('#%s\n'%(cmd))
lastfile.close()
# if (platformingThreshold != DEFAULT_PLATFORMING_THRESHOLD and channeldiff==False):
# channeldiff = 10
LO1 = None # Fix for SCOPS-4877
lo1s = None # Fix for SCOPS-4877
if (showimage == False):
LO1 = lo1 = None
elif (lo1 is not None):
if (lo1 > 1e6):
# convert from Hz to GHz
lo1 *= 1e-9
if (showatm and showtsky):
print("You have selected both showatm and showtsky! Defaulting to showatm=True only.")
showtsky = False
if (showatm==False and showtsky==False and showatmfield!=''):
print("Defaulting to showatm=True because showatmfield was specified.")
showatm = True
if (showatm==False and showtsky==False and showimage==True):
print("Defaulting to showatm=True because showimage was True.")
showatm = True
if showtsys:
showtsky = True
if (overlay.find('time') < 0 and showatmfield != ''):
print("The showatmfield only has meaning for overlay='time'.")
return(vm)
if (plotrange=='' or plotrange==[]):
plotrange = [0,0,0,0]
if (type(plotrange) != list):
print("plotrange must be an array: [0,1,-180,180]")
return(vm)
if (len(plotrange) < 4):
print("plotrange must be an array: [0,1,-180,180]")
return(vm)
if (phase != ''):
if (type(phase) != list):
print("phase must be either '' or 2 values: [x,y]")
return(vm)
if (len(phase) != 2):
print("phase must be either '' or 2 values: [x,y]")
return(vm)
if (edge < 0):
print("edge must be >= 0")
return(vm)
if (resample < 1):
print("resample must be an integer >= 1")
return(vm)
resample = int(resample)
solutionTimeThresholdSeconds = float(solutionTimeThresholdSeconds) # so it also accepts a string
if (buildpdf and figfile==''):
print("With buildpdf=True, you must specify figfile='yourFileName' (.png will be appended if necessary).")
return(vm)
if (interactive==False and figfile=='' and channeldiff == False):
print("With interactive=False and channeldiff=False, you must specify figfile='yourFileName' (.png will be appended if necessary).")
return(vm)
pxl = 0 # polarization number to use for setting xlimits if plotrange=[0,0...]
chanrangePercent = None
if (type(chanrange) != str):
if (type(chanrange) != list):
print("Chanrange must be a string or list: '8~120' or [8,120]")
return(vm)
elif (len(chanrange) != 2):
print("Chanrange must be a string or list: '8~120' or [8,120]")
return(vm)
elif ((type(chanrange[0]) != int) or (type(chanrange[1]) != int)):
print("Chanrange list members must be integers, not ", type(chanrange[0]), type(chanrange[1]))
return(vm)
elif (len(chanrange) < 1):
chanrange = [0,0]
else:
if (chanrange.find('%')>0):
chanrangePercent = float(chanrange.split('%')[0])
if (debug): print("******************* Set chanrangePercent to %f, chanrangeSetXrange=" % (chanrangePercent), chanrangeSetXrange)
if (chanrangePercent >= 100 or chanrangePercent <= 0):
chanrangePercent = None
chanrange = [0,0]
elif (chanrange.find('~')>=0):
tokens = chanrange.split('~')
if (len(tokens) < 2):
print("Invalid chanrange, too few tokens")
return(vm)
try:
chanrange = [int(tokens[0]),int(tokens[1])]
if (DEBUG):
print("Using chanrange = ", chanrange)
except:
print("Invalid chanrange, not integers")
return(vm)
else:
print("Invalid chanrange, no tilde or percent sign found")
return(vm)
if (xaxis.find('chan')>=0):
print("The chanrange parameter is only valid for xaxis='freq', and only if the plotrange is [0,0,0,0].")
return(vm)
if (chanrange[0] < 0):
print("Invalid chanrange, cannot be negative")
return(vm)
if ((chanrange[0] != 0 or chanrange[1] != 0 or chanrangePercent is not None) and
(plotrange[0] != 0 or plotrange[1] != 0 or plotrange[2] != 0 or plotrange[3] != 0)):
print("If chanrange is specified, then plotrange must be all zeros.")
return(vm)
if (pwv==''):
pwv = 1.0
if (type(poln) != list):
poln = poln.upper()
if (poln == 'X'):
poln = 'XX'
if (poln == 'Y'):
poln = 'YY'
if (poln == 'X,Y' or poln=='Y,X'):
poln = 'XX,YY'
if (poln == 'R'):
poln = 'RR'
if (poln == 'L'):
poln = 'LL'
if (poln == 'R,L' or poln=='L,R'):
poln = 'RR,LL'
# Parse the polarizations to plot from the command line
# Prior to opening the .ms (later), we cannot tell which products are actually present
useAllPols = False
if (poln == ''):
useAllPols = True
polsToPlot = ['XX','YY'] # assume ALMA initially
elif (type(poln) == list):
polsToPlot = poln # it is already a list of polarization strings
else:
if ((poln in ['','RR','RL','LR','LL','XX','XY','YX','YY','RR,LL','XX,YY']) == False):
print("Unrecognized polarization option = ", poln)
return(vm)
if (poln.find(',')>0):
polsToPlot = poln.split(',')
else:
polsToPlot = [poln]
if ((overlay in ['antenna', 'spw', 'time', 'baseband', '', 'antenna,time', 'time,antenna']) == False): # try to support antenna,time
print("Unrecognized option for overlay: only 'antenna', 'spw', 'baseband', 'time' and 'antenna,time' are supported.")
return(vm)
allowedFrames = [11,21,31,41,51,61,71,81,22,32,42] # [11,22,32,42]
if (int(subplot) > 100):
# This will accept 111, 221, 321, 421, etc.
subplot = int(subplot/10)
if ((int(subplot) in allowedFrames)==False):
print("Subplot choice (rows x columns) must be one of ", allowedFrames)
print("(with an optional trailing digit that is ignored).")
return(vm)
if ((int(subplot) % 2) == 1):
timeHorizontalSpacing = 0.06*1.3 # *1.3 is for HH:MM:SS
else:
timeHorizontalSpacing = 0.05*1.3 # *1.3 is for HH:MM:SS
if (yaxis.find('both')<0 and yaxis.find('ap')<0 and yaxis.find('tsys')<0 and
yaxis.find('amp')<0 and yaxis.find('phase')<0):
print("Invalid yaxis. Must be 'amp', 'tsys', 'phase' or 'both'.")
return(vm)
if (yaxis.find('tsys')>=0):
yaxis = 'amp'
if (xaxis.find('chan')<0 and xaxis.find('freq')<0):
print("Invalid xaxis. Must be 'chan' or 'freq'.")
return(vm)
if (showatm and showtsky):
print("showatm=True and showtsky=True are mutually exclusive options")
return(vm)
if (showfdm and xaxis.find('freq')<0):
print("The option showfdm=True requires xaxis='freq'.")
return(vm)
# Plotting settings
minPhaseRange = 0.2
plotfiles = []
if (int(subplot) % 2 == 1):
mysize = '10'
titlesize = 10
elif (int(subplot) == 22 or int(subplot) == 32):
mysize = '8'
titlesize = 8
else:
mysize = '7'
titlesize = 8
maxCharsBeforeReducingTitleFontSize = 72
if (type(subplot) == str):
subplot = int(subplot)
if (subplot not in allowedFrames):
print("Invalid subplot = %d. Valid options are: " % (subplot), allowedFrames)
return(vm)
xframeStart = int(subplot)*10 # i.e. 110 or 220 or 420
firstFrame = xframeStart + 1
lastFrame = xframeStart + int(subplot/10)*(subplot%10)
bottomRowFrames = [111,212,313,414,515,616,717,818,223,224,325,326,427,428] # try to make this more general
leftColumnFrames = [111,211,212,311,312,313,411,412,413,414,511,512,513,514,515,611,612,613,614,615,616,
711,712,713,714,715,716,717,811,812,813,814,815,816,817,818,221,223,321,323,325,421,423,425,427]
rightColumnFrames = [111,211,212,311,312,313,411,412,413,414,511,512,513,514,515,611,612,613,614,615,616,
711,712,713,714,715,716,717,811,812,813,814,815,816,817,818,222,224,322,324,326,422,424,426,428]
subplotCols = subplot % 10
subplotRows = int(subplot/10)
ystartPolLabel = 1.0-0.04*subplotRows
ystartMadLabel = 0.04*subplotRows
if (subplotCols == 1):
fstringLimit = 40 # character length of multi-field overlay title string
elif (subplotCols == 2):
fstringLimit = 12 # character length of multi-field overlay title string
if (debug):
print("0)setting xframe to %d" % (xframeStart))
xframe = xframeStart
previousSubplot = xframe
# print "Using markersize = ", markersize
pcolor = [xcolor,ycolor]
x2color = 'k'
y2color = 'c'
p2color = ['k','c']
x3color = 'm'
y3color = 'r'
p3color = ['m','r']
if (showpoints == 'auto'):
if (showlines == 'auto'):
ampmarkstyle = '-'
phasemarkstyle = '.'
if (len(polsToPlot) == 1):
ampmarkstyle2 = '-'
else:
ampmarkstyle2 = '--'
phasemarkstyle2 = 'o'
elif (showlines == False):
ampmarkstyle = '.'
ampmarkstyle2 = 'o'
phasemarkstyle = '.'
phasemarkstyle2 = 'o'
else:
ampmarkstyle = '-'
phasemarkstyle = '-'
if (len(polsToPlot) == 1):
ampmarkstyle2 = '-'
phasemarkstyle2 = '-'
else:
ampmarkstyle2 = '--'
phasemarkstyle2 = '--'
elif (showpoints == True):
if (showlines == 'auto'):
ampmarkstyle = '.-'
phasemarkstyle = '.'
if (len(polsToPlot) == 1):
ampmarkstyle2 = 'o-'
else:
ampmarkstyle2 = 'o--'
phasemarkstyle2 = 'o'
elif (showlines == False):
ampmarkstyle = '.'
ampmarkstyle2 = 'o'
phasemarkstyle = '.'
phasemarkstyle2 = 'o'
else:
ampmarkstyle = '.-'
phasemarkstyle = '.-'
if (len(polsToPlot) == 1):
ampmarkstyle2 = 'o-'
phasemarkstyle2 = 'o-'
else:
ampmarkstyle2 = 'o--'
phasemarkstyle2 = 'o--'
else: # showpoints == False
if (showlines == False):
print("You must set either showpoints or showlines to True or 'auto'; assuming showlines=True.")
ampmarkstyle = '-'
phasemarkstyle = '-'
if (len(polsToPlot) == 1):
ampmarkstyle2 = '-'
phasemarkstyle2 = '-'
else:
ampmarkstyle2 = '--'
phasemarkstyle2 = '--'
ampmarkstyles = [ampmarkstyle,ampmarkstyle2]
phasemarkstyles = [phasemarkstyle,phasemarkstyle2]
# bpoly solutions should always be shown as lines, not dots or dots+lines
bpolymarkstyle = '-'
amplitudeWithPhase = (yaxis.find('both')>=0 or yaxis.find('ap')>=0)
if (amplitudeWithPhase):
myhspace = 0.30
if (overlay.find('antenna')>=0 or overlay.find('time')>=0 or overlay.find('spw')>=0):
print("Option overlay='antenna', 'spw' or 'time' is incompatible with yaxis='both'. Pick either amp or phase.")
return(vm)
if (subplotRows % 2 == 1):
print("Option yaxis=='both' is incompatible with odd numbers of rows, change subplot")
return(vm)
else:
myhspace = 0.30
if (int(subplot/10) > 2):
myhspace = 0.4
if (int(subplot/10) > 3):
myhspace = 0.6
mywspace = 0.25
# Now open the Bandpass solution table
if (len(caltable) < 1):
print("You need to specify a caltable.")
return(vm)
if (caltable[-1] == '/'):
print("Stripping off the trailing '/' from the caltable name.")
caltable = caltable[:-1]
if not os.path.exists(caltable):
print("Caltable does not exist.")
return
mytb = au.createCasaTool(tbtool)
try:
mytb.open(caltable)
except:
print("Could not open the caltable = %s" % (caltable))
return(vm)
if (caltable[0] != '/'):
# print this so when someone sends me a bug report I can find their data!
try:
print("caltable = %s:%s/%s" % (os.uname()[1], os.getcwd(), caltable))
except:
print("caltable = localhost:%s/%s" % (os.getcwd(), caltable))
else:
try:
print("caltable = %s:%s" % (os.uname()[1], caltable))
except:
print("caltable = localhost:%s" % (caltable))
if (len(caltable) > 90):
caltableTitle = '...' + caltable[-90:]
else:
caltableTitle = caltable
names = mytb.colnames()
if ('UVW' in names):
print("This appears to be a measurement set, not a caltable. Aborting.")
return(vm)
casalog.post(cmd)
ant = mytb.getcol('ANTENNA1')
fields = mytb.getcol('FIELD_ID')
# if (DEBUG):
# print "FIELD_ID column = ", fields
validFields = False
for f in fields:
if (f != -1):
validFields = True
if (validFields == False):
print("The field_id is -1 (invalid) for all rows of this caltable.")
print("Did you remember to run interpTsys.assignFieldAndScanToSolution()?")
return(vm)
try:
flags = {}
complete = True
for f in range(len(fields)):
if mytb.iscelldefined('FLAG',f):
flags[f] = mytb.getcell('FLAG',f)
else:
complete = False
print("Warning: Missing data in the FLAG column, table may not be complete.")
break
except:
pass
if not complete:
print("Missing data in the FLAG column.")
print("If it is a solution file, does it contain solutions for both TDM and FDM spws?")
if 0 not in flags:
print("Are you sure this is a bandpass solution file, or is it the .ms?")
return(vm)
# if (debug): print "%d: flags.keys() = %s" % (len(flags.keys()), str(flags.keys()))
times = mytb.getcol('TIME')
intervals = mytb.getcol('INTERVAL')
if ('SPECTRAL_WINDOW_ID' not in names):
# This is an old-style CASA cal table.
tableFormat = 33
msAnt = []
cal_desc_id = mytb.getcol('CAL_DESC_ID')
VisCal = (mytb.info())['subType']
if (VisCal == "BPOLY"):
print("This appears to be a BPOLY cal table written in the casa 3.3/3.4 style.")
else:
print("This appears to be an old-format cal table from casa 3.3 or earlier.")
if (debug): print("VisCal = ", VisCal)
mytb.close()
ParType = "unknown" # i.e. not Complex
mytb.open(caltable+'/CAL_DESC')
originalSpws = mytb.getcol('SPECTRAL_WINDOW_ID') # [[0,1,2,3]]
if debug: print("originalSpws = ", originalSpws)
originalSpw = originalSpws[0] # [0,1,2,3]
if debug: print("originalSpw = ", originalSpw)
msName = mytb.getcol('MS_NAME')[0]
if debug: print("msName in table = ", msName)
if (ms != ''):
msName = ms
# This appears to be the channel range extracted from the original spw, but is
# only present in B solutions.
if (VisCal == "BPOLY"):
originalChannelStart = np.zeros(len(originalSpw))
else:
originalChannelRange = mytb.getcol('CHAN_RANGE')
originalChannelStart = originalChannelRange[0][0][:][0]
mytb.close()
try:
mytb.open(msName+'/SPECTRAL_WINDOW')
refFreq = mytb.getcol('REF_FREQUENCY')
net_sideband = mytb.getcol('NET_SIDEBAND')
measFreqRef = mytb.getcol('MEAS_FREQ_REF')
originalSpw_casa33 = range(len(measFreqRef))
chanFreqGHz_casa33 = [] # used by showFDM
for i in originalSpw_casa33:
# The array shapes can vary.
chanFreqGHz_casa33.append(1e-9 * mytb.getcell('CHAN_FREQ',i))
mytb.close()
except:
print("2) Could not open the associated measurement set tables (%s). Will not translate antenna names." % (msName))
# print "I will assume ALMA data: XX, YY, and refFreq=first channel."
# corr_type_string = ['XX','YY']
# corr_type = [9,12]
else: # 3.4
tableFormat = 34
cal_desc_id = mytb.getcol('SPECTRAL_WINDOW_ID')
cal_scans = mytb.getcol('SCAN_NUMBER')
unique_cal_scans = np.unique(cal_scans)
cal_scans_per_spw = {}
for myspw in np.unique(cal_desc_id):
cal_scans_per_spw[myspw] = np.unique(cal_scans[np.where(myspw == cal_desc_id)[0]])
if (debug):
print("spw %d: scans %s" % (myspw,str(cal_scans_per_spw[myspw])))
ParType = mytb.getkeyword('ParType') # string = 'Complex'
msName = mytb.getkeyword('MSName')
VisCal = mytb.getkeyword('VisCal') # string = 'B TSYS'
PolBasis = mytb.getkeyword('PolBasis') # string = 'LINEAR'
if False:
# the following will fail if there is a space in the ms name
spectralWindowTable = mytb.getkeyword('SPECTRAL_WINDOW').split()[1]
antennaTable = mytb.getkeyword('ANTENNA').split()[1]
fieldTable = mytb.getkeyword('FIELD').split()[1]
else:
# these will work if there is space in the ms name: Feb 19, 2021
spectralWindowTable = ' '.join(mytb.getkeyword('SPECTRAL_WINDOW').split()[1:])
antennaTable = ' '.join(mytb.getkeyword('ANTENNA').split()[1:])
fieldTable = ' '.join(mytb.getkeyword('FIELD').split()[1:])
mytb.close()
print("spectralWindowTable = ", spectralWindowTable)
mytb.open(spectralWindowTable)
chanFreqGHz = []
originalSpws = range(len(mytb.getcol('MEAS_FREQ_REF')))
originalSpw = originalSpws # may need to do a global replace of this <------------------------
originalSpwNames = mytb.getcol('NAME')
for i in originalSpws:
# The array shapes can vary.
chanFreqGHz.append(1e-9 * mytb.getcell('CHAN_FREQ',i))
mytb.close()
# CAS-6801 changes
mytb.open(antennaTable)
msAnt = mytb.getcol('NAME')
mytb.close()
mytb.open(fieldTable)
msFields = mytb.getcol('NAME')
mytb.close()
if (VisCal == 'K Jones'):
delay = True
showpoints = True
ampmarkstyle = '.'
ampmarkstyle2 = 'o'
if (markersize < 8): markersize = 8
else:
delay = False
# Now open the associated ms tables via ValueMapping
mymsmd = ''
# msAnt = [] # comment this out when CAS-6801 changes are in place
observatoryName = ''
if (vm == ''):
if (debug): print("msName = %s." % (msName))
if (os.path.exists(msName) or os.path.exists(os.path.dirname(caltable)+'/'+msName)):
if (os.path.exists(msName) == False):
msName = os.path.dirname(caltable)+'/'+msName
if (debug): print("found msName = %s." % (msName))
if (casaVersion >= '4.1.0'):
mymsmd = au.createCasaTool(msmdtool)
if (debug): print("Running mymsmd on %s..." % (msName))
mymsmd.open(msName)
donetime = timeUtilities.time()
if (debug): print("%.1f sec elapsed" % (donetime-mytimestamp))
mytimestamp = timeUtilities.time()
if (debug): print("time = %s" % (str(mytimestamp)))
msAnt = mymsmd.antennanames(range(mymsmd.nantennas()))
if (debug): print("msAnt = %s" % (str(msAnt)))
# msFields = mymsmd.namesforfields(range(mymsmd.nfields())) # bombs if split has been run on subset of fields
if casaVersion >= '4.5':
msFields = mymsmd.fieldnames() # used to be namesforfields(), but that is broken in early CASA 6
else: # msmd.fieldnames() was not available until 4.5.0
msFields = mymsmd.namesforfields()
observatoryName = mymsmd.observatorynames()[0]
print("Available antennas = %s" % (str(msAnt)))
else: # old CASA
try:
print("Running ValueMapping on %s..." % (msName))
print("(This can take a while; try the vm option next time: vm=au.plotbandpass(..)")
print(" then, on subsequent calls: au.plotbandpass(..,vm=vm)")
vm = au.ValueMapping(msName)
donetime = timeUtilities.time()
if (debug): print("%.1f sec elapsed" % (donetime-mytimestamp))
mytimestamp = timeUtilities.time()
msAnt = vm.antennaNamesForAntennaIds
msFields = vm.fieldNamesForFieldIds
print("Available antennas = ", msAnt)
observatoryName = au.getObservatoryName(msName)
except:
print("1) Could not open the associated measurement set tables (%s). Will not translate antenna names or frequencies." % (msName))
else: # ms does not exist
if (ms=='' and tableFormat < 34):
print("Could not find the associated measurement set (%s). Will not translate antenna names or frequencies." % (msName))
elif (ms != ''):
# Use the ms name passed in from the command line
msName = ms
# print "************* 2) Set msName to ", msName
if (casaVersion >= '4.1.0'):
mymsmd = au.createCasaTool(msmdtool)
if (debug): print("Running mymsmd on %s..." % (msName))
mymsmd.open(msName)
donetime = timeUtilities.time()
if (debug): print("%.1f sec elapsed" % (donetime-mytimestamp))
mytimestamp = timeUtilities.time()
if (debug): print("time = %s" % (str(mytimestamp)))
msAnt = mymsmd.antennanames(range(mymsmd.nantennas()))
if (debug): print("msAnt = %s" % (str(msAnt)))
# msFields = mymsmd.namesforfields(range(mymsmd.nfields())) # bombs if split has been run on subset of fields
msFields = mymsmd.namesforfields()
observatoryName = mymsmd.observatorynames()[0]
print("Available antennas = %s" % (str(msAnt)))
else:
try:
print("Running ValueMapping on %s..." % (msName))
print("(This can take a minute, try using the vm option next time)")
vm = au.ValueMapping(msName)
donetime = timeUtilities.time()
if (debug): print("%.1f sec elapsed" % (donetime-mytimestamp))
mytimestamp = timeUtilities.time()
msAnt = vm.antennaNamesForAntennaIds
msFields = vm.fieldNamesForFieldIds
print("Available antennas = ", msAnt)
observatoryName = au.getObservatoryName(msName)
except:
print("1b) Could not open the associated measurement set tables (%s). Will not translate antenna names or channels to frequencies." % (msName))
else:
# vm was specified on the command line
if (msName.find(vm.getInputMs()) < 0):
if (msName.find(vm.getInputMs().split('/')[-1]) < 0):
print("WARNING: There is a mismatch between the ms name in the ValueMapping")
print("structure provided and the ms name in the bandpass table:")
print(" %s vs. %s" % (vm.getInputMs(), msName))
print("Using the name passed in via the ms parameter.")
else:
msName = vm.getInputMs()
print("WARNING: There is a mismatch in the directory path for the ms between")
print("the ValueMapping structure and the bandpass table.")
print("Using the path from the ValueMapping result provided via the vm parameter.")
msAnt = vm.antennaNamesForAntennaIds
msFields = vm.fieldNamesForFieldIds
if (casaVersion >= '4.1.0'):
# This is necessary to get the baseband labelling correct.
mymsmd = au.createCasaTool(msmdtool)
if (debug): print("Running mymsmd on %s..." % (msName))
mymsmd.open(msName)
msFound = False
if (len(msAnt) > 0):
msFields = list(msFields) # necessary to avoid having to index with extra 0: msFields[fieldIndex][0]
msFieldsUnique = np.unique(msFields)
msFound = True # will also be true if necessary information is found in the caltable subtables
print("Fields in ms = ", msFields)
else:
msFields = []
if (tableFormat == 33 and msFound): # casa 3.3
# Now open the associated ms tables via ValueMapping to figure out channel freqs
chanFreqGHz = []
for ictr in range(len(originalSpw)):
if debug: print("ictr = ", ictr)
if (casaVersion >= '4.1.0'):
if debug: print("nspw = %d, np.max(originalSpw) = %d" % (mymsmd.nspw(False),np.max(originalSpw)))
if (mymsmd.nspw(False) < np.max(originalSpw)): # waiting on CAS-4285
# Then there was an extra split
i = ictr
else:
i = originalSpw[ictr]
nchan = mymsmd.nchan(i)
if (nchan > 1):
missingFrequencyWidth = originalChannelStart[ictr]*(mymsmd.chanfreqs(i)[-1]-mymsmd.chanfreqs(i)[0])/(nchan-1)
else:
missingFrequencyWidth = 0
if (missingFrequencyWidth > 0):
if (DEBUG):
print("Correcting for channels flagged prior to running bandpass by %f GHz" % (missingFrequencyWidth*1e-9))
newfreqs = 1e-9*(mymsmd.chanfreqs(i)) + missingFrequencyWidth*1e-9
else:
if debug: print("len(vm.spwInfo) = %d, np.max(originalSpw) = %d" % (len(vm.spwInfo),np.max(originalSpw)))
if (len(vm.spwInfo) < np.max(originalSpw)):
# Then there was an extra split
i = ictr
else:
i = originalSpw[ictr]
nchan = len(vm.spwInfo[i]["chanFreqs"])
if (nchan > 1):
missingFrequencyWidth = originalChannelStart[ictr]*(vm.spwInfo[i]["chanFreqs"][-1]-vm.spwInfo[i]["chanFreqs"][0])/(nchan-1)
else:
missingFrequencyWidth = 0
if (missingFrequencyWidth > 0):
if (DEBUG):
print("Correcting for channels flagged prior to running bandpass by %f GHz" % (missingFrequencyWidth*1e-9))
newfreqs = 1e-9*(vm.spwInfo[i]["chanFreqs"]) + missingFrequencyWidth*1e-9
# if (debug): print "Appending onto chanFreqGHz: ", newfreqs
chanFreqGHz.append(newfreqs)
uniqueSpwsInCalTable = np.unique(cal_desc_id)
# initial calculation for final message if not all spws appear with overlay='antenna'
uniqueTimes = sloppyUnique(np.unique(times), 1.0)
nUniqueTimes = len(uniqueTimes)
if (nUniqueTimes == 1):
solutionTimeSpread = 0
else:
solutionTimeSpread = np.max(uniqueTimes)-np.min(uniqueTimes)
print("Found solutions with %d unique times (within a threshold of 1.0 second)." % (nUniqueTimes))
uniqueTimes = sloppyUnique(np.unique(times), solutionTimeThresholdSeconds)
nUniqueTimes = len(uniqueTimes)
if (nUniqueTimes == 1):
print("Found solutions with %d unique time (within a threshold of %g seconds)." % (nUniqueTimes,solutionTimeThresholdSeconds))
else:
print("Found solutions with %d unique times (within a threshold of %g seconds)." % (nUniqueTimes,solutionTimeThresholdSeconds))
displayTimesArray([[uniqueTimes]])
scansForUniqueTimes = []
if (tableFormat >= 34):
if (len(unique_cal_scans) == 1):
print("Found solutions with %d unique scan number %s" % (len(unique_cal_scans), str(unique_cal_scans)))
else:
print("Found solutions with %d unique scan numbers %s" % (len(unique_cal_scans), str(unique_cal_scans)))
scansForUniqueTimes, nUniqueTimes = computeScansForUniqueTimes(uniqueTimes, cal_scans, times, unique_cal_scans)
elif (scans != ''):
print("Selection by scan is not supported for old-style tables that do not have the scan number filled.")
return
uniqueTimesCopy = uniqueTimes[:]
mystring = ''
if (debug):
for u in uniqueTimes:
mystring += '%.6f, ' % (u)
print(mystring)
uniqueAntennaIds = np.unique(ant)
uniqueFields = np.unique(fields)
nFields = len(uniqueFields)
spwlist = []
uniqueTimesPerFieldPerSpw = []
for s in uniqueSpwsInCalTable:
uniqueTimesPerField = []
for f in uniqueFields:
timelist = []
for row in range(len(fields)):
if (fields[row] == f and cal_desc_id[row] == s):
if (sloppyMatch(times[row], timelist, solutionTimeThresholdSeconds) == False):
timelist.append(times[row])
spwlist.append(cal_desc_id)
uniqueTimesPerField.append(timelist)
uniqueTimesPerFieldPerSpw.append(uniqueTimesPerField)
# print "uniqueTimesPerFieldPerSpw (len=%d) = %s" % (len(uniqueTimesPerFieldPerSpw), str(uniqueTimesPerFieldPerSpw))
# Parse the spws to plot from the command line
if (spw==''):
spwsToPlot = list(uniqueSpwsInCalTable)
else:
if (type(spw) == str):
if (spw.find('!')>=0):
print("The ! modifier is not (yet) supported")
return(vm)
tokens = spw.split(',')
spwsToPlot = []
for token in tokens:
if (len(token) > 0):
if (token.find('*')>=0):
spwsToPlot = list(uniqueSpwsInCalTable)
break
elif (token.find('~')>0):
(start,finish) = token.split('~')
spwsToPlot += range(int(start),int(finish)+1)
else:
spwsToPlot.append(int(token))
elif (type(spw) == list):
spwsToPlot = np.sort(spw)
else:
spwsToPlot = [spw]
if (len(uniqueSpwsInCalTable) > 1):
print("%d spws in the solution = " % (len(uniqueSpwsInCalTable)), uniqueSpwsInCalTable)
else:
print("%d spw in the solution = " % (len(uniqueSpwsInCalTable)), uniqueSpwsInCalTable)
keepSpwsToPlot = spwsToPlot[:]
for myspw in spwsToPlot:
if (myspw not in uniqueSpwsInCalTable):
print("WARNING: spw %d is not in the solution. Removing it from the list to plot." % (myspw))
print("Available spws = ", uniqueSpwsInCalTable)
keepSpwsToPlot.remove(myspw)
if (casaVersion >= '4.1.0' and mymsmd != ''):
# nonwvrspws = list(set(range(mymsmd.nspw())).difference(set(mymsmd.wvrspws())))
if (myspw not in range(mymsmd.nspw())):
print("FATAL: spw %d is not even in the ms. There might be a bug in your script." % (myspw))
return
elif (myspw in mymsmd.wvrspws()):
print("WARNING: spw %d is a WVR spw." % (myspw))
return
spwsToPlot = keepSpwsToPlot[:]
if (spwsToPlot == []):
print("FATAL: no spws to plot")
return
originalSpwsToPlot = computeOriginalSpwsToPlot(spwsToPlot, originalSpw, tableFormat, debug)
# Now generate the list of minimal basebands that contain the spws to be plotted
if (casaVersion >= '4.1.0' and msFound):
allBasebands = []
if (mymsmd != ''):
try:
for spw in originalSpwsToPlot:
mybaseband = mymsmd.baseband(spw)
if (debug): print("appending: spw=%d -> bb=%d" % (spw,mybaseband))
allBasebands.append(mybaseband)
allBasebands = np.unique(allBasebands)
basebandDict = getBasebandDict(msName,caltable=caltable,mymsmd=mymsmd) # needed later by showFDM()
except:
basebandDict = {}
print("This dataset (%s) does not have a BBC_NO column in the SPECTRAL_WINDOW_TABLE." % (msName))
else:
basebandDict = {}
telescopeName = getTelescopeNameFromCaltable(caltable)
print("Measurement set not found.")
if (basebandDict == {}):
if (overlay.find('spw') >= 0):
print("As such, since the ms cannot be found, overlay='spw' is not supported, but overlay='baseband' should work.")
return
if (showfdm):
print("As such, since the ms cannot be found, showfdm=True is not supported.")
showfdm = False
if (showBasebandNumber):
print("As such, since the ms cannot be found, showBasebandNumber=True is not supported.")
showBasebandNumber = False
elif (msFound==False):
allBasebands = [1,2,3,4]
else:
basebandDict = getBasebandDict(msName,caltable=caltable,mymsmd=mymsmd) # needed later by showFDM()
allBasebands = []
for spw in originalSpwsToPlot:
mybaseband = [key for key in basebandDict if spw in basebandDict[key]]
if (len(mybaseband)>0): allBasebands.append(mybaseband[0])
allBasebands = np.unique(allBasebands)
if (len(allBasebands) == 0):
allBasebands = [1,2,3,4]
if (debug):
print("================ allBasebands = ", allBasebands)
if (basebands is None or basebands == ''):
basebands = allBasebands
elif (type(basebands) == str):
basebands = [int(s) for s in basebands.split(',')]
elif (type(basebands) != list):
# it is a single integer
basebands = [basebands]
for baseband in basebands:
if (baseband not in allBasebands):
print("Baseband %d is not in the dataset (only %s)" % (baseband,str(allBasebands)))
return
if (msFound):
msFieldsList = str(np.array(msFields)[uniqueFields])
else:
msFieldsList = 'unknown'
print("%d field(s) in the solution = %s = %s" % (len(uniqueFields), uniqueFields,msFieldsList))
# Figure out which kind of Bandpass solution this is.
bOverlay = False # Am I trying to overlay a second B-type solution?
if (os.path.exists(caltable) == False):
print("Caltable does not exist = %s" % (caltable))
return(vm)
try:
([polyMode, polyType, nPolyAmp, nPolyPhase, scaleFactor, nRows, nSpws, nUniqueTimesBP, uniqueTimesBP,
nPolarizations, frequencyLimits, increments, frequenciesGHz, polynomialPhase,
polynomialAmplitude, timesBP, antennasBP, cal_desc_idBP, spwBP]) = openBpolyFile(caltable, debug)
bpoly = True
bpolyOverlay = bpolyOverlay2 = False
if (xaxis.find('chan') >= 0):
print("Sorry, but BPOLY solutions cannot be plotted with xaxis='chan'. Proceeding with xaxis='freq'.")
xaxis = 'freq'
if (chanrange[0] != 0 or chanrange[1] != 0 or chanrangePercent is not None):
print("The chanrange parameter only applies if the first caltable is a B solution, not a BPOLY.")
return(vm)
if (len(caltable2) > 0):
try:
# figure out if the next file is a BPOLY or another B solution to pick the proper error message.
([polyMode, polyType, nPolyAmp, nPolyPhase, scaleFactor, nRows, nSpws, nUniqueTimesBP, uniqueTimesBP,
nPolarizations, frequencyLimits, increments, frequenciesGHz, polynomialPhase,
polynomialAmplitude, timesBP, antennasBP, cal_desc_idBP, spwBP]) = openBpolyFile(caltable2, debug)
print("Sorry, but you cannot overlay two BPOLY solutions (unless caltable is a B solution and caltable2 and 3 are BPOLYs).")
except:
print("Sorry, but for overlays, caltable must be a B solution, while caltable2 and 3 can be either type.")
return(vm)
except:
print("caltable: This is a %s solution." % (VisCal))
bpoly = bpolyOverlay = bpolyOverlay2 = False
# Now check if there is a second file to overlay
if (len(caltable2) > 0):
if (os.path.exists(caltable2) == False):
print("Caltable2 does not exist = %s" % (caltable2))
return(vm)
try:
# figure out if the next file is a BPOLY or another B solution
if (debug): print("Calling openBpolyFile('%s')" % (caltable2))
([polyMode, polyType, nPolyAmp, nPolyPhase, scaleFactor, nRows, nSpws, nUniqueTimesBP, uniqueTimesBP,
nPolarizations, frequencyLimits, increments, frequenciesGHz, polynomialPhase,
polynomialAmplitude, timesBP, antennasBP, cal_desc_idBP, spwBP]) = openBpolyFile(caltable2, debug)
if (debug): print("Done")
bpolyOverlay = True
if (debug): print("Overlay the BPOLY solution")
if (xaxis.find('chan')>=0):
                    print("Sorry, but overlay of BPOLY is currently possible only with xaxis='freq'")
return(vm)
if (len(caltable3) > 0):
if (os.path.exists(caltable3) == False):
print("Caltable3 does not exist = %s" % (caltable3))
return(vm)
bpolyOverlay2 = True
if (debug): print("Overlay the second BPOLY solution")
([polyMode2, polyType2, nPolyAmp2, nPolyPhase2, scaleFactor2, nRows2, nSpws2,
nUniqueTimesBP2, uniqueTimesBP2,
nPolarizations2, frequencyLimits2, increments2, frequenciesGHz2, polynomialPhase2,
polynomialAmplitude2, timesBP2, antennasBP2, cal_desc_idBP2, spwBP2]) = openBpolyFile(caltable3, debug)
except:
# this is another B solution
print("Overlay another %s solution" % (VisCal))
bOverlay = True
if (xaxis.find('freq')<0):
print("Currently, you must use xaxis='freq' to overlay two B solutions.")
return(vm)
if (len(caltable3) > 0):
print("You cannot overlay caltable3 because caltable2 is a B solution.")
return(vm)
elif (len(caltable3) > 0):
print("You cannot have a caltable3 argument without a caltable2 argument.")
return(vm)
if (overlay.find('antenna')>=0):
overlayAntennas = True
if (bpoly == True):
print("The overlay of times or antennas is not supported with BPOLY solutions")
return(vm)
if (len(caltable2)>0):
            print("The overlay of times or antennas is not supported when overlaying a B or BPOLY solution")
return(vm)
print("Will overlay solutions from different antennas")
else:
overlayAntennas = False
if (overlay.find('time')>=0):
overlayTimes = True
if (bpoly == True):
print("The overlay of times or antennas is not supported with BPOLY solutions")
return(vm)
if (len(caltable2)>0):
            print("The overlay of times or antennas is not supported when overlaying a B or BPOLY solution")
return(vm)
print("Will overlay solutions from different times")
else:
overlayTimes = False
if (overlay.find('spw')>=0):
if (tableFormat < 34):
print("Overlay spw may not work reliably for old cal tables")
overlaySpws = True
if (bpoly == True):
print("The overlay of times, antennas, or spws is not supported with BPOLY solutions")
return(vm)
if (len(caltable2)>0):
            print("The overlay of times, antennas, or spws is not supported when overlaying a B or BPOLY solution")
return(vm)
print("Will overlay solutions from different spws within a baseband")
else:
overlaySpws = False
if (overlay.find('baseband')>=0):
if (tableFormat < 34):
print("Overlay baseband may not work reliably for old cal tables")
overlayBasebands = True
if (bpoly == True):
print("The overlay of times, antennas, spws, or basebands is not supported with BPOLY solutions")
return(vm)
if (len(caltable2)>0):
            print("The overlay of times, antennas, spws, or basebands is not supported when overlaying a B or BPOLY solution")
return(vm)
print("Will overlay solutions from all spws regardless of baseband")
else:
overlayBasebands = False
if (bOverlay):
# Now open the Bandpass solution table
try:
mytb.open(caltable2)
except:
print("Could not open the second caltable = %s" % (caltable2))
return(vm)
names = mytb.colnames()
ant2 = mytb.getcol('ANTENNA1')
fields2 = mytb.getcol('FIELD_ID')
times2 = mytb.getcol('TIME')
if ('SPECTRAL_WINDOW_ID' not in names):
if ('SNR' not in names):
print("This does not appear to be a cal table.")
return(vm)
else:
tableFormat2 = 33
print("This appears to be an old-format cal table from casa 3.3 or earlier.")
cal_desc_id2 = mytb.getcol('CAL_DESC_ID')
VisCal2 = (mytb.info())['subType']
mytb.close()
ParType = "unknown" # i.e. not Complex
                mytb.open(caltable2+'/CAL_DESC')
originalSpws2 = mytb.getcol('SPECTRAL_WINDOW_ID') # [[0,1,2,3]]
originalSpw2 = originalSpws2[0] # [0,1,2,3]
msName2 = mytb.getcol('MS_NAME')[0]
mytb.close()
# Now open the associated ms tables via ValueMapping to figure out channel freqs
chanFreqGHz2 = []
for ictr in range(len(originalSpw2)):
if debug: print("ictr = ", ictr)
if (casaVersion >= '4.1.0'):
if debug: print("mymsmd.nspw() = %d, np.max(originalSpw) = %d" % (mymsmd.nspw(),np.max(originalSpw2)))
if (mymsmd.nspw() < np.max(originalSpw2)):
# Then there was an extra split
i = ictr
else:
i = originalSpw2[ictr]
nchan = len(mymsmd.chanfreqs(i))
if (nchan > 1):
missingFrequencyWidth = originalChannelStart[ictr]*(mymsmd.chanfreqs(i)[-1]-mymsmd.chanfreqs(i)[0])/(mymsmd.nchan(i))
else:
missingFrequencyWidth = 0
if (missingFrequencyWidth > 0):
if (DEBUG):
print("Correcting for channels flagged prior to running bandpass by %f GHz" % (missingFrequencyWidth*1e-9))
newfreqs = 1e-9*(mymsmd.chanfreqs(i)) + missingFrequencyWidth*1e-9
# if debug: print "Appending onto chanFreqGHz: ", newfreqs
chanFreqGHz2.append(newfreqs)
else:
if debug: print("len(vm.spwInfo) = %d, np.max(originalSpw) = %d" % (len(vm.spwInfo),np.max(originalSpw2)))
if (len(vm.spwInfo) < np.max(originalSpw2)):
# Then there was an extra split
i = ictr
else:
i = originalSpw2[ictr]
nchan = len(vm.spwInfo[i]["chanFreqs"])
if (nchan > 1):
missingFrequencyWidth = originalChannelStart[ictr]*(vm.spwInfo[i]["chanFreqs"][-1]-vm.spwInfo[i]["chanFreqs"][0])/(nchan-1)
else:
missingFrequencyWidth = 0
if (missingFrequencyWidth > 0):
if (DEBUG):
print("Correcting for channels flagged prior to running bandpass by %f GHz" % (missingFrequencyWidth*1e-9))
newfreqs = 1e-9*(vm.spwInfo[i]["chanFreqs"]) + missingFrequencyWidth*1e-9
# if debug: print "Appending onto chanFreqGHz: ", newfreqs
chanFreqGHz2.append(newfreqs)
else:
tableFormat2 = 34
cal_desc_id2 = mytb.getcol('SPECTRAL_WINDOW_ID')
msName2 = mytb.getkeyword('MSName')
ParType2 = mytb.getkeyword('ParType') # string = 'Complex'
VisCal2 = mytb.getkeyword('VisCal') # string = 'B TSYS'
PolBasis2 = mytb.getkeyword('PolBasis') # string = 'LINEAR'
spectralWindowTable2 = mytb.getkeyword('SPECTRAL_WINDOW').split()[1]
mytb.close()
mytb.open(spectralWindowTable2)
chanFreqGHz2 = []
originalSpws2 = range(len(mytb.getcol('MEAS_FREQ_REF')))
for i in originalSpws2:
# The array shapes can vary.
chanFreqGHz2.append(1e-9 * mytb.getcell('CHAN_FREQ',i))
            originalSpw2 = originalSpws2  # may want to do a global replace of this
uniqueSpwsInCalTable2 = np.unique(cal_desc_id2)
mytb.open(caltable2)
if 'FLAG' in mytb.colnames():
flags2 = {}
for f in range(len(fields2)):
if mytb.iscelldefined('FLAG',f):
flags2[f] = mytb.getcell('FLAG',f)
else:
                    flags2[f] = np.zeros(len(chanFreqGHz2[cal_desc_id2[f]]))  # size by this row's spw
else:
print("bOverlay: No Flag column found. Are you sure this is a bandpass solution file, or is it the .ms?")
print("If it is a solution file, does it contain solutions for both TDM and FDM spws?")
return(vm)
uniqueTimes2 = sloppyUnique(np.unique(times2), solutionTimeThresholdSeconds)
nUniqueTimes2 = len(uniqueTimes2)
# print "Found %d solutions in time: MJD seconds = " % (nUniqueTimes2), uniqueTimes2
spacing = ''
for i in range(1,nUniqueTimes2):
spacing += '%.0f, ' % (np.abs(uniqueTimes2[i]-uniqueTimes2[i-1]))
print("Found %d solutions in time, spaced by seconds: " % (nUniqueTimes2), spacing)
displayTimesArray([[uniqueTimes2]])
uniqueAntennaIds2 = np.unique(ant2)
uniqueFields2 = np.unique(fields2)
nFields2 = len(uniqueFields2)
if (debug): print("(boverlay) original unique spws in the second dataset = ", np.unique(originalSpw2))
uniqueTimesPerFieldPerSpw2 = []
for s in uniqueSpwsInCalTable2:
uniqueTimesPerField2 = []
for f in uniqueFields2:
timelist2 = []
for row in range(len(fields2)):
if (fields2[row] == f and cal_desc_id2[row] == s):
if (sloppyMatch(times2[row], timelist2, solutionTimeThresholdSeconds,
myprint=False) == False):
timelist2.append(times2[row])
uniqueTimesPerField2.append(timelist2)
uniqueTimesPerFieldPerSpw2.append(uniqueTimesPerField2)
if debug:
print("uniqueTimesPerFieldPerSpw2 = ") #, uniqueTimesPerFieldPerSpw2
displayTimesArray(uniqueTimesPerFieldPerSpw2)
if debug:
print("%d spw(s) in the second solution = " % (len(uniqueSpwsInCalTable2)), uniqueSpwsInCalTable2)
if (msFound):
msFieldsList = str(np.array(msFields)[uniqueFields2])
else:
msFieldsList = 'unknown'
print("%d field(s) in the solution = %s = %s" % (len(uniqueFields2), uniqueFields2, msFieldsList))
# Parse the timeranges field from the command line
if timeranges != '':
timerangesWasSpecified = True
else:
timerangesWasSpecified = False
if (type(timeranges) == str):
# a list of timeranges was given
tokens = timeranges.split(',')
timerangeList = []
removeTime = []
for token in tokens:
if (len(token) > 0):
if (token.find('!')==0):
                    timerangeList = list(range(len(uniqueTimes)))  # list, so later append/+= works
removeTime.append(int(token[1:]))
elif (token.find('~')>0):
(start,finish) = token.split('~')
timerangeList += range(int(start),int(finish)+1)
else:
timerangeList.append(int(token))
timerangeList = np.array(timerangeList)
for rt in removeTime:
timerangeList = timerangeList[np.where(timerangeList != rt)[0]]
timerangeList = list(timerangeList)
if (len(timerangeList) < 1):
if (len(removeTime) > 0):
print("Too many negated timeranges -- there are none left to plot.")
return
else:
# then a blank list was specified
            timerangeList = list(range(len(uniqueTimes)))
elif (type(timeranges) == list):
# it's already a list of integers
timerangeList = timeranges
else:
# It's a single, integer entry
timerangeList = [timeranges]
if (timerangesWasSpecified and scans != ''): # CAS-8489
if (type(scans) == list or type(scans) == np.ndarray):
myscan = int(scans[0])
else:
myscan = int(str(scans).split(',')[0])
if (myscan not in scansForUniqueTimes):
print("No rows for scan %d, only " % (myscan), np.unique(scansForUniqueTimes))
return
timerangeOffset = scansForUniqueTimes.index(myscan)
timerangeList = np.array(timerangeList) + timerangeOffset
        print("Since both timeranges and scans were specified, generated new effective timerangeList: ", timerangeList)
if (max(timerangeList) >= len(uniqueTimes)):
print("Invalid timerange. Solution has %d times (%d~%d)" % (len(uniqueTimes),0,len(uniqueTimes)-1))
return
timerangeListTimes = np.array(uniqueTimes)[timerangeList]
if (tableFormat == 33 or scansForUniqueTimes == []):
# SMA data with scan numbers of -1 has empty list for scansForUniqueTimes
scansToPlot = []
if (scans != ''):
print("Selection by scan is not possible for this dataset.")
return
else:
if (debug): print("scansForUniqueTimes = %s" % (str(scansForUniqueTimes)))
scansToPlot = np.array(scansForUniqueTimes)[timerangeList]
if (np.unique(scansToPlot)[0] == -1):
# scan numbers are not correct in this new-style cal table
scansToPlot = []
if (scans != ''):
print("Selection by scan number is not possible with this dataset.")
return
if (scans != '' and scans != []):
if (type(scans) == list):
scansToPlot = scans
elif (type(scans) == str):
scansToPlot = [int(a) for a in scans.split(',')]
else:
scansToPlot = [scans]
for scan in scansToPlot:
if (scan not in scansForUniqueTimes):
print("Scan %d is not in any solution" % (scan))
return
print("scans to plot: %s" % (str(scansToPlot)))
scansToPlotPerSpw = {}
for myspw in np.unique(cal_desc_id):
scansToPlotPerSpw[myspw] = []
for scan in scansToPlot:
for myspw in np.unique(cal_desc_id):
if (scan in cal_scans_per_spw[myspw]):
scansToPlotPerSpw[myspw].append(scan)
# remove spws that do not have any scans to be plotted
# but only for tables that have a scan number column, and not filled with all -1
if (tableFormat > 33 and scansForUniqueTimes != []):
for myspw in np.unique(cal_desc_id):
if (debug):
print("scans to plot for spw %d: %s" % (myspw, scansToPlotPerSpw[myspw]))
if (scansToPlotPerSpw[myspw] == []):
indexDelete = np.where(np.array(spwsToPlot)==myspw)[0]
# print "indexDelete = ", indexDelete
if (len(indexDelete) > 0):
spwsToPlot = list(np.delete(spwsToPlot, indexDelete[0]))
print("spwsToPlot = ", spwsToPlot)
timerangeListTimesString = mjdsecArrayToUTString(timerangeListTimes)
print("UT times to plot: ", timerangeListTimesString)
    print("%d MJD seconds to plot: " % (len(timerangeListTimes)), str([int(b) for b in timerangeListTimes]))
if (len(timerangeListTimes) > len(np.unique(scansToPlot))):
# fix for CAS-9474
uniqueScansToPlot, idx = np.unique(scansToPlot, return_index=True)
if (len(uniqueScansToPlot) < len(scansToPlot)):
# If the solution time for one spw differs by more than solutionTimeThresholdSeconds from
# another spw, then we will get 2 identical entries for the same scan, and thus duplicate
# plots. So, remove one.
print("Engaging fix for CAS-9474")
scansToPlot = uniqueScansToPlot
timerangeListTimes = list(np.array(timerangeListTimes)[idx])
timerangeList = list(np.array(timerangeList)[idx])
timerangeListTimesString = mjdsecArrayToUTString(timerangeListTimes)
print("Revised scans to plot: %s" % (str(scansToPlot)))
print("Revised UT times to plot: ", timerangeListTimesString)
            print("%d MJD seconds to plot: " % (len(timerangeListTimes)), str([int(b) for b in timerangeListTimes]))
    print("Corresponding time IDs (0-based):", timerangeList)
# Check for mismatch
if (bpolyOverlay):
if (len(timerangeListTimes) > nUniqueTimesBP):
print("There are more timeranges (%d) to plot from %s than exist in the caltable2=%s (%d)" % (len(timerangeListTimes), caltable,caltable2, nUniqueTimesBP))
for i in timerangeList:
if (sloppyMatch(timerangeListTimes[i],uniqueTimesBP[0],
solutionTimeThresholdSeconds, mytime,
scansToPlot, scansForUniqueTimes, False)):
print("Try adding 'timeranges=%d'" % (i+1))
return(vm)
if (bpolyOverlay2):
if (len(timerangeListTimes) > nUniqueTimesBP2):
print("There are more timeranges to plot (%d) from %s than exist in the caltable3=%s (%d)" % (len(timerangeListTimes), caltable, caltable3, nUniqueTimesBP2))
return(vm)
# Parse the antenna string to emulate plotms
if (type(antenna) == str):
if (len(antenna) == sum([m in myValidCharacterListWithBang for m in antenna])):
# a simple list of antenna numbers was given
tokens = antenna.split(',')
antlist = []
removeAntenna = []
for token in tokens:
if (len(token) > 0):
if (token.find('*')==0 and len(token)==1):
antlist = uniqueAntennaIds
break
elif (token.find('!')==0):
                    antlist = list(uniqueAntennaIds)  # list copy so later append/+= works
removeAntenna.append(int(token[1:]))
elif (token.find('~')>0):
(start,finish) = token.split('~')
antlist += range(int(start),int(finish)+1)
else:
antlist.append(int(token))
antlist = np.array(antlist)
for rm in removeAntenna:
antlist = antlist[np.where(antlist != rm)[0]]
antlist = list(antlist)
if (len(antlist) < 1 and len(removeAntenna)>0):
print("Too many negated antennas -- there are no antennas left to plot.")
return(vm)
else:
# The antenna name (or list of names) was specified
tokens = antenna.split(',')
if (msFound):
antlist = []
removeAntenna = []
for token in tokens:
if (casaVersion >= '4.1.0'):
if (token in msAnt):
antlist = list(antlist) # needed in case preceding antenna had ! modifier
# antlist.append(mymsmd.antennaids(token)[0]) # This crashes if ms was not actually found
antlist.append(list(msAnt).index(token)) # This alternative should work in all cases
elif (token[0] == '!'):
if (token[1:] in msAnt):
antlist = uniqueAntennaIds
# removeAntenna.append(mymsmd.antennaids(token[1:])) # This crashes if ms was not actually found
removeAntenna.append(list(msAnt).index(token[1:])) # This alternative should work in all cases
else:
print("Antenna %s is not in the ms. It contains: " % (token), msAnt)
return(vm)
else:
print("Antenna %s is not in the ms. It contains: " % (token), msAnt)
return(vm)
else:
if (token in vm.uniqueAntennas):
antlist = list(antlist) # needed in case preceding antenna had ! modifier
antlist.append(vm.getAntennaIdsForAntennaName(token))
elif (token[0] == '!'):
if (token[1:] in vm.uniqueAntennas):
antlist = uniqueAntennaIds
removeAntenna.append(vm.getAntennaIdsForAntennaName(token[1:]))
else:
print("Antenna %s is not in the ms. It contains: " % (token), vm.uniqueAntennas)
return(vm)
else:
print("Antenna %s is not in the ms. It contains: " % (token), vm.uniqueAntennas)
return(vm)
antlist = np.array(antlist)
for rm in removeAntenna:
antlist = antlist[np.where(antlist != rm)[0]]
antlist = list(antlist)
if (len(antlist) < 1 and len(removeAntenna)>0):
print("Too many negated antennas -- there are no antennas left to plot.")
return(vm)
else:
                print("Antennas cannot be specified by name if the ms is not found.")
return(vm)
elif (type(antenna) == list):
# it's a list of integers
antlist = antenna
else:
# It's a single, integer entry
antlist = [antenna]
if (len(antlist) > 0):
antennasToPlot = np.intersect1d(uniqueAntennaIds,antlist)
else:
antennasToPlot = uniqueAntennaIds
if (len(antennasToPlot) < 2 and overlayAntennas):
print("More than 1 antenna is required for overlay='antenna'.")
return
# Parse the field string to emulate plotms
removeField = []
if (type(field) == str):
if (len(field) == sum([m in myValidCharacterListWithBang for m in field])):
# a list of field numbers was given
tokens = field.split(',')
fieldlist = []
for token in tokens:
if (token.find('*')>=0):
fieldlist = uniqueFields
break
elif (token.find('!')==0):
                    fieldlist = list(uniqueFields)  # list copy so later append works
removeField.append(int(token[1:]))
elif (len(token) > 0):
if (token.find('~')>0):
(start,finish) = token.split('~')
fieldlist += range(int(start),int(finish)+1)
else:
fieldlist.append(int(token))
fieldlist = np.array(fieldlist)
for rm in removeField:
fieldlist = fieldlist[np.where(fieldlist != rm)[0]]
fieldlist = list(fieldlist)
if (len(fieldlist) < 1 and len(removeField)>0):
print("Too many negated fields -- there are no fields left to plot.")
return(vm)
else:
# The field name (or list of names, or wildcard) was specified
tokens = field.split(',')
if (msFound):
fieldlist = []
removeField = []
for token in tokens:
myloc = token.find('*')
if (myloc > 0): # saw wildcard in name
for u in uniqueFields:
# Sept 2014: will crash if ms is not found
myFieldName = GetFieldNamesForFieldId(u,vm,mymsmd,msFields)[:myloc]
if (token[:myloc]==myFieldName[:myloc]):
if (DEBUG):
print("Found wildcard match = %s" % GetFieldNamesForFieldId(u,vm,mymsmd,msFields))
fieldlist.append(u)
else:
if (DEBUG):
print("No wildcard match of %s with = %s" % (token[:myloc], GetFieldNamesForFieldId(u,vm,mymsmd,msFields)))
elif (myloc==0): # saw wildcard at start of name
for u in uniqueFields:
fieldlist.append(u)
elif (token in msFields):
fieldlist = list(fieldlist) # needed in case preceding field had ! modifier
fieldlist.append(GetFieldIdsForFieldName(token,vm,mymsmd,msFields))
elif (token[0] == '!'):
if (fieldlist == []):
for u in uniqueFields: # uniqueFields is a list of IDs
fieldlist.append(u)
if (token[1:] in msFields):
removeField.append(GetFieldIdsForFieldName(token[1:],vm,mymsmd,msFields))
else:
print("1) Field %s is not in the ms. It contains: " % (token), uniqueFields, msFieldsUnique)
return(vm)
else:
print("2) Field %s is not in the ms. It contains: " % (token), uniqueFields, msFieldsUnique)
return(vm)
fieldlist = np.array(fieldlist)
for rm in removeField:
fieldlist = fieldlist[np.where(fieldlist != rm)[0]]
fieldlist = list(fieldlist)
if (len(fieldlist) < 1 and len(removeField)>0):
print("Too many negated fields -- there are no fields left to plot.")
return(vm)
else:
                print("Fields cannot be specified by name if the ms is not found.")
return(vm)
elif (type(field) == list):
# it's a list of integers
fieldlist = field
else:
# It's a single, integer entry
fieldlist = [field]
if (len(fieldlist) > 0):
if (DEBUG):
print("Finding intersection of ", uniqueFields, fieldlist)
fieldsToPlot = np.intersect1d(uniqueFields,np.array(fieldlist))
if (bOverlay):
fieldsToPlot = np.intersect1d(np.union1d(uniqueFields,uniqueFields2),np.array(fieldlist))
if (len(fieldsToPlot) < 1):
print("Requested field not found in solution")
return(vm)
else:
fieldsToPlot = uniqueFields # use all fields if none are specified
if (bOverlay):
fieldsToPlot = np.union1d(uniqueFields,uniqueFields2)
if (DEBUG):
print("bOverlay = ", bOverlay)
print("set fieldsToPlot to uniqueFields = ", fieldsToPlot)
fieldIndicesToPlot = []
if (showatmfield == ''):
showatmfield = fieldsToPlot[0]
else:
if (str.isdigit(str(showatmfield))):
showatmfield = int(str(showatmfield))
if (showatmfield not in fieldsToPlot):
print("The showatmfield (%d) is not in the list of fields to plot: " %(showatmfield), fieldsToPlot)
return(vm)
else:
showatmfieldName = showatmfield
showatmfield = GetFieldIdsForFieldName(showatmfield,vm,mymsmd,msFields)
if (list(showatmfield) == []):
print("The showatmfield (%s) is not in the ms." %(showatmfieldName))
return(vm)
if (type(showatmfield) == np.ndarray):
# more than one field IDs exist for this source name, so pick the first
showatmfield = showatmfield[0]
if (showatmfield not in fieldsToPlot):
print("The showatmfield (%d=%s) is not in the list of fields to plot: " %(showatmfield, showatmfieldName), fieldsToPlot)
return(vm)
for i in fieldsToPlot:
match = np.where(i==uniqueFields)[0]
if (len(match) < 1 and bOverlay):
match = np.where(i==uniqueFields2)[0]
fieldIndicesToPlot.append(match[0])
print("Antennas to plot = ", antennasToPlot)
print("spws to plot = ", spwsToPlot)
# I moved the following line upward to line 1483 on 06-Aug-2013.
# originalSpwsToPlot = computeOriginalSpwsToPlot(spwsToPlot, originalSpw, tableFormat, debug)
if (debug): print("original spws to plot = ", originalSpwsToPlot)
print("Field IDs to plot: ", fieldsToPlot)
# print "Field indices to plot: ", fieldIndicesToPlot
redisplay = False
myap = 0 # this variable is necessary to make the 'b' option work for
# subplot=11, yaxis=both. It keeps track of whether 'amp' or
# 'phase' was the first plot on the page.
# I added the call to pb.ion() because Remy suggested it.
if (pb.isinteractive()):
# print "pylab is set to interactive"
wasInteractive = True
else:
# print "pylab is set to non-interactive"
wasInteractive = False
if (interactive):
if (wasInteractive==False):
print("Turning on interactive pylab for the duration of this task.")
pb.ion()
else:
if (wasInteractive):
print("Turning off interactive pylab for the duration of this task.")
pb.ioff()
    # But the call to pb.figure() causes another window to open every time.
# pb.figure()
newylimits = [LARGE_POSITIVE, LARGE_NEGATIVE]
if casaVersion > '5.8':
pb.style.use('classic')
if (bpoly):
# The number of polarizations cannot be reliably inferred from the shape of
# the GAIN column in the caltable. Must use the shape of the DATA column
# in the ms.
if (msFound):
# (corr_type, corr_type_string, nPolarizations) = getCorrType(msName,spwsToPlot,debug)
(corr_type, corr_type_string, nPolarizations) = getCorrType(msName,originalSpwsToPlot,debug)
else:
print("With no ms available, I will assume ALMA data: XX, YY, and refFreq=first channel.")
chanFreqGHz = []
corr_type_string = ['XX','YY']
corr_type = [9,12]
nPolarizations = 2
nPolarizations2 = nPolarizations
if (corr_type_string == []):
return(vm)
polsToPlot = checkPolsToPlot(polsToPlot, corr_type_string)
if (polsToPlot == []):
return(vm)
pb.clf() # BPOLY case
if casaVersion > '5.8':
figdesc = pb.gcf()
figsize = pb.rcParams['figure.figsize']
# print("figsize = ", figsize)
# This leaves a white blank space, but text no longer overlaps.
figdesc.set_size_inches([11.2,8.4])
# Here we are only plotting one BPOLY solution, no overlays implemented.
overlayAntennas = False
# rows in the table are: antennas 0..nAnt for first spw, antennas 0..nAnt
# for 2nd spw...
pagectr = 0
pages = []
xctr = 0
newpage = 1
while (xctr < len(antennasToPlot)):
xant = antennasToPlot[xctr]
antstring = buildAntString(xant,msFound,msAnt)
spwctr = 0
spwctrFirstToPlot = spwctr
while (spwctr < len(spwsToPlot)):
ispw = spwsToPlot[spwctr]
mytime = 0
while (mytime < nUniqueTimes):
if (len(uniqueTimes) > 0 and (mytime not in timerangeList)):
if (debug):
print("@@@@@@@@@@@@@@@ Skipping mytime=%d" % (mytime))
mytime += 1
continue
if (newpage == 1):
pages.append([xctr,spwctr,mytime,0])
if (debug):
print("1) appending [%d,%d,%d,%d]" % (xctr,spwctr,mytime,0))
newpage = 0
antennaString = 'Ant%2d: %s, ' % (xant,antstring)
# fieldIndex = -1 # use this to tell whether we found any match for CAS-7753
for index in range(nRows):
# Find this antenna, spw, and timerange combination in the table
if (xant==ant[index] and sloppyMatch(uniqueTimes[mytime],times[index],solutionTimeThresholdSeconds,
mytime, scansToPlotPerSpw[ispw], scansForUniqueTimes,
myprint=debugSloppyMatch) and
(ispw == cal_desc_id[index]) and (fields[index] in fieldsToPlot)):
fieldIndex = np.where(fields[index] == uniqueFields)[0]
if (type(fieldIndex) == list or type(fieldIndex) == np.ndarray):
fieldIndex = fieldIndex[0]
validDomain = [frequencyLimits[0,index], frequencyLimits[1,index]]
if (msFound):
fieldString = msFields[uniqueFields[fieldIndex]] # [0]
else:
fieldString = str(field)
timeString = ', t%d/%d %s' % (mytime,nUniqueTimes-1,utstring(uniqueTimes[mytime],3))
if (scansForUniqueTimes != []):
if (scansForUniqueTimes[mytime]>=0):
timeString = ', scan%d %s' % (scansForUniqueTimes[mytime],utstring(uniqueTimes[mytime],3))
if ((yaxis.find('amp')>=0 or amplitudeWithPhase) and myap==0):
xframe += 1
myUniqueColor = []
if (debug):
print("v) incrementing xframe to %d" % xframe)
adesc = pb.subplot(xframe)
previousSubplot = xframe
if (ispw==originalSpw[ispw]):
# all this was added mistakenly here. If it causes a bug, remove it.
if (overlayTimes and len(fieldsToPlot) > 1):
indices = fstring = ''
for f in fieldIndicesToPlot:
if (f != fieldIndicesToPlot[0]):
indices += ','
fstring += ','
indices += str(uniqueFields[f])
if (msFound):
fstring += msFields[uniqueFields[f]]
if (len(fstring) > fstringLimit):
fstring = fstring[0:fstringLimit] + '...'
pb.title("%sspw%2d, fields %s: %s%s" % (antennaString,ispw,
indices, fstring, timeString), size=titlesize)
else:
pb.title("%sspw%2d, field %d: %s%s" % (antennaString,ispw,
uniqueFields[fieldIndex],fieldString,timeString), size=titlesize)
else:
if (overlayTimes and len(fieldsToPlot) > 1):
indices = fstring = ''
for f in fieldIndicesToPlot:
if (f != fieldIndicesToPlot[0]):
indices += ','
fstring += ','
indices += str(uniqueFields[f])
if (msFound):
fstring += msFields[uniqueFields[f]]
if (len(fstring) > fstringLimit):
fstring = fstring[0:fstringLimit] + '...'
pb.title("%sspw%2d (%d), fields %s: %s%s" % (antennaString,ispw,originalSpw[ispw],
indices, fstring, timeString), size=titlesize)
else:
pb.title("%sspw%2d (%d), field %d: %s%s" % (antennaString,ispw,originalSpw[ispw],
uniqueFields[fieldIndex],fieldString,timeString), size=titlesize)
amplitudeSolutionX = np.real(scaleFactor[index])+calcChebyshev(polynomialAmplitude[index][0:nPolyAmp[index]], validDomain, frequenciesGHz[index]*1e+9)
amplitudeSolutionY = np.real(scaleFactor[index])+calcChebyshev(polynomialAmplitude[index][nPolyAmp[index]:2*nPolyAmp[index]], validDomain, frequenciesGHz[index]*1e+9)
amplitudeSolutionX += 1 - np.mean(amplitudeSolutionX)
amplitudeSolutionY += 1 - np.mean(amplitudeSolutionY)
if (yaxis.lower().find('db') >= 0):
amplitudeSolutionX = 10*np.log10(amplitudeSolutionX)
amplitudeSolutionY = 10*np.log10(amplitudeSolutionY)
if (nPolarizations == 1):
pb.plot(frequenciesGHz[index], amplitudeSolutionX, '%s%s'%(xcolor,bpolymarkstyle),markeredgewidth=markeredgewidth,markersize=markersize)
else:
pb.plot(frequenciesGHz[index], amplitudeSolutionX, '%s%s'%(xcolor,bpolymarkstyle), frequenciesGHz[index], amplitudeSolutionY, '%s%s'%(ycolor,bpolymarkstyle),markeredgewidth=markeredgewidth,markersize=markersize)
if (plotrange[0] != 0 or plotrange[1] != 0):
SetNewXLimits([plotrange[0],plotrange[1]],loc=0)
if (plotrange[2] != 0 or plotrange[3] != 0):
SetNewYLimits([plotrange[2],plotrange[3]],loc=0)
xlim=pb.xlim()
ylim=pb.ylim()
ResizeFonts(adesc,mysize)
adesc.xaxis.grid(True,which='major')
adesc.yaxis.grid(True,which='major')
if (yaxis.lower().find('db')>=0):
pb.ylabel('Amplitude (dB)', size=mysize)
else:
pb.ylabel('Amplitude', size=mysize)
pb.xlabel('Frequency (GHz) (%d channels)'%(len(frequenciesGHz[index])), size=mysize)
if (xframe == firstFrame):
DrawBottomLegendPageCoords(msName, uniqueTimes[mytime], mysize, figfile)
pb.text(xstartTitle, ystartTitle,
'%s (degamp=%d, degphase=%d)'%(caltable,nPolyAmp[index]-1,
nPolyPhase[index]-1),size=mysize,
transform=pb.gcf().transFigure)
# draw polarization labels (BPOLY case)
x0 = xstartPolLabel
y0 = ystartPolLabel
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
pb.text(x0, y0-0.03*subplotRows*p, corrTypeToString(corr_type[p])+'',
color=pcolor[p],size=mysize, transform=pb.gca().transAxes)
if (xframe == 111 and amplitudeWithPhase):
if (len(figfile) > 0):
# We need to make a new figure page
plotfiles.append(makeplot(figfile,msFound,msAnt,
overlayAntennas,pages,pagectr,
density,interactive,antennasToPlot,
spwsToPlot,overlayTimes,overlayBasebands,
0,xant,ispw,subplot,resample,
debug,figfileSequential,figfileNumber))
figfileNumber += 1
donetime = timeUtilities.time()
if (interactive):
pb.draw()
# myinput = raw_input("(%.1f sec) Press return for next page (b for backwards, q to quit): "%(donetime-mytimestamp))
myinput = raw_input("Press return for next page (b for backwards, q to quit): ")
else:
myinput = ''
skippingSpwMessageSent = 0
mytimestamp = timeUtilities.time()
if (myinput.find('q') >= 0):
# showFinalMessage(overlayAntennas, solutionTimeSpread, nUniqueTimes)
return(vm)
if (myinput.find('b') >= 0):
if (pagectr > 0):
pagectr -= 1
#redisplay the current page by setting ctrs back to the value they had at start of that page
xctr = pages[pagectr][PAGE_ANT]
spwctr = pages[pagectr][PAGE_SPW]
mytime = pages[pagectr][PAGE_TIME]
myap = pages[pagectr][PAGE_AP]
xant = antennasToPlot[xctr]
antstring = buildAntString(xant,msFound,msAnt)
ispw = spwsToPlot[spwctr]
redisplay = True
else:
pagectr += 1
if (pagectr >= len(pages)):
pages.append([xctr,spwctr,mytime,1])
if (debug):
print("2) appending [%d,%d,%d,%d]" % (xctr,spwctr,mytime,1))
newpage = 0
pb.clf()
if (yaxis.find('phase')>=0 or amplitudeWithPhase):
xframe += 1
myUniqueColor = []
if (debug):
print("w) incrementing xframe to %d" % xframe)
adesc = pb.subplot(xframe)
previousSubplot = xframe
if (ispw==originalSpw[ispw]):
pb.title("%sspw%2d, field %d: %s%s" % (antennaString,ispw,
uniqueFields[fieldIndex],fieldString,timeString), size=titlesize)
else:
pb.title("%sspw%2d (%d), field %d: %s%s" % (antennaString,ispw,originalSpw[ispw],
uniqueFields[fieldIndex],fieldString,timeString), size=titlesize)
phaseSolutionX = calcChebyshev(polynomialPhase[index][0:nPolyPhase[index]], validDomain, frequenciesGHz[index]*1e+9) * 180/math.pi
phaseSolutionY = calcChebyshev(polynomialPhase[index][nPolyPhase[index]:2*nPolyPhase[index]], validDomain, frequenciesGHz[index]*1e+9) * 180/math.pi
if (nPolarizations == 1):
pb.plot(frequenciesGHz[index], phaseSolutionX, '%s%s'%(xcolor,bpolymarkstyle),markeredgewidth=markeredgewidth,markersize=markersize)
else:
pb.plot(frequenciesGHz[index], phaseSolutionX, '%s%s'%(xcolor,bpolymarkstyle), frequenciesGHz[index], phaseSolutionY, '%s%s'%(ycolor,bpolymarkstyle),markeredgewidth=markeredgewidth,markersize=markersize)
ResizeFonts(adesc,mysize)
adesc.xaxis.grid(True,which='major')
adesc.yaxis.grid(True,which='major')
pb.ylabel('Phase (deg)', size=mysize)
pb.xlabel('Frequency (GHz) (%d channels)'%(len(frequenciesGHz[index])), size=mysize)
if (plotrange[0] != 0 or plotrange[1] != 0):
SetNewXLimits([plotrange[0],plotrange[1]],loc=1)
if (plotrange[2] != 0 or plotrange[3] != 0):
SetNewYLimits([plotrange[2],plotrange[3]],loc=1)
if (amplitudeWithPhase and phase != ''):
if (phase[0] != 0 or phase[1] != 0):
SetNewYLimits(phase,loc=2)
if (xframe == firstFrame):
pb.text(xstartTitle, ystartTitle,
'%s (degamp=%d, degphase=%d)'%(caltableTitle,
nPolyAmp[index]-1,nPolyPhase[index]-1),
size=mysize, transform=pb.gcf().transFigure)
# draw polarization labels (BPOLY case)
x0 = xstartPolLabel
y0 = ystartPolLabel
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
pb.text(x0, y0-0.03*p*subplotRows, corrTypeToString(corr_type[p])+'',
color=pcolor[p],size=mysize, transform=pb.gca().transAxes)
# end of 'for' loop over rows
# Probable fix for CAS-7753, not installed because bpoly not used much
# if (fieldIndex == -1 and newpage==0):
# newpage = 1
# pages = pages[:len(pages)-1]
redisplay = False
pb.subplots_adjust(hspace=myhspace, wspace=mywspace)
if (xframe == lastFrame):
if (len(figfile) > 0):
plotfiles.append(makeplot(figfile,msFound,msAnt,
overlayAntennas,pages,pagectr,
density,interactive,antennasToPlot,
spwsToPlot,overlayTimes,overlayBasebands,
1,xant,ispw,subplot,resample,debug,
figfileSequential,figfileNumber))
figfileNumber += 1
donetime = timeUtilities.time()
if (interactive):
pb.draw()
# myinput = raw_input("(%.1f sec) Press return for next page (b for backwards, q to quit): "%(donetime-mytimestamp))
myinput = raw_input("Press return for next page (b for backwards, q to quit): ")
else:
myinput = ''
skippingSpwMessageSent = 0
mytimestamp = timeUtilities.time()
if (myinput.find('q') >= 0):
# showFinalMessage(overlayAntennas, solutionTimeSpread, nUniqueTimes)
return(vm)
if (myinput.find('b') >= 0):
if (pagectr > 0):
pagectr -= 1
#redisplay the current page by setting ctrs back to the value they had at start of that page
xctr = pages[pagectr][PAGE_ANT]
spwctr = pages[pagectr][PAGE_SPW]
mytime = pages[pagectr][PAGE_TIME]
myap = pages[pagectr][PAGE_AP]
xant = antennasToPlot[xctr]
antstring = buildAntString(xant,msFound,msAnt)
ispw = spwsToPlot[spwctr]
redisplay = True
else:
pagectr += 1
if (pagectr >= len(pages)):
newpage = 1
else:
newpage = 0
if (debug):
print("1)Setting xframe to %d" % xframeStart)
xframe = xframeStart
if (xctr+1 < len(antennasToPlot)):
# don't clear the final plot when finished
pb.clf()
if (spwctr+1<len(spwsToPlot) or mytime+1<nUniqueTimes):
# don't clear the final plot when finished
pb.clf()
pb.subplots_adjust(hspace=myhspace, wspace=mywspace)
if (redisplay == False):
mytime += 1
if (debug):
print("Incrementing mytime to %d" % (mytime))
# end while(mytime)
if (redisplay == False):
spwctr +=1
if (debug):
print("Incrementing spwctr to %d" % (spwctr))
# end while(spwctr)
if (redisplay == False):
xctr += 1
# end while(xctr) for BPOLY
if (len(figfile) > 0 and pagectr<len(pages)):
plotfiles.append(makeplot(figfile,msFound,msAnt,overlayAntennas,pages,
pagectr,density,interactive,antennasToPlot,
spwsToPlot,overlayTimes,overlayBasebands,
2,xant,ispw,subplot,resample,debug,
figfileSequential,figfileNumber))
figfileNumber += 1
if (len(plotfiles) > 0 and buildpdf):
pdfname = figfile+'.pdf'
filelist = ''
plotfiles = np.unique(plotfiles)
for i in range(len(plotfiles)):
cmd = '%s -density %d %s %s.pdf' % (convert,density,plotfiles[i],plotfiles[i].split('.png')[0])
print("Running command = %s" % (cmd))
mystatus = os.system(cmd)
if (mystatus != 0):
break
if (cleanup):
os.system('rm -f %s' % (plotfiles[i]))
filelist += plotfiles[i].split('.png')[0] + '.pdf '
if (mystatus != 0):
print("ImageMagick is missing, no PDF created")
buildpdf = False
if (buildpdf):
# The following 2 lines reduce the total number of characters on the command line, which
# was apparently a problem at JAO for Liza.
filelist = ' '.join(au.pruneFilelist(filelist.split()))
pdfname = au.pruneFilelist([pdfname])[0]
cmd = '%s %s cat output %s' % (pdftk, filelist, pdfname)
print("Running command = %s" % (cmd))
mystatus = os.system(cmd)
if (mystatus != 0):
cmd = '%s -q -sPAPERSIZE=letter -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=%s %s' % (gs,pdfname,filelist)
print("Running command = %s" % (cmd))
mystatus = os.system(cmd)
if (mystatus == 0):
print("PDF left in %s" % (pdfname))
os.system("rm -f %s" % filelist)
else:
                        print("Both pdftk and ghostscript are missing, no PDF created")
else:
print("Not building PDF (buildpdf=%s, len(plotfiles)=%d)" % (str(buildpdf), len(plotfiles)))
return(vm)
# endif bpoly
#################
# bpoly == false
#################
msFound = False
mytb.open(caltable)
uniqueScanNumbers = sorted(np.unique(mytb.getcol('SCAN_NUMBER')))
if (ParType == 'Complex'): # casa >= 3.4
gain = {}
for f in range(len(fields)):
if not mytb.iscelldefined('CPARAM',f): break
gain[f] = mytb.getcell('CPARAM',f)
else: # casa 3.3
gain = {}
# gain = mytb.getcol('FPARAM') # e.g. (2, 128, 576)
if ('FPARAM' in mytb.colnames()):
for f in range(len(fields)):
if not mytb.iscelldefined('FPARAM',f): break
gain[f] = mytb.getcell('FPARAM',f)
else:
for f in range(len(fields)):
gain[f] = mytb.getcell('GAIN',f)
nPolarizations = len(gain[0])
if (debug):
print("(1)Set nPolarizations = %d" % nPolarizations)
ggx = {}
for g in range(len(gain)):
ggx[g] = gain[g][0]
if (nPolarizations == 2):
ggy = {}
for g in range(len(gain)):
ggy[g] = gain[g][1]
mytb.close()
if (debug):
print("nPolarizations = ", nPolarizations)
nRows = len(gain)
if (bOverlay):
mytb.open(caltable2)
gain2 = {}
if (ParType == 'Complex'):
# gain2 = mytb.getcol('CPARAM') # will bomb if shape varies
if (debug): print("ParType = Complex")
for f in range(len(fields2)):
# if (debug): print "getting row %d/%d" % (f,len(fields))
gain2[f] = mytb.getcell('CPARAM',f)
else:
# gain2 = mytb.getcol('FPARAM') # will bomb if shape varies
if (debug): print("ParType != Complex, it is %s" % (ParType))
for f in range(len(fields2)):
if (tableFormat2 == 34):
if mytb.iscelldefined('FPARAM',f):
gain2[f] = mytb.getcell('FPARAM',f)
else:
print("No data in row %d of 'FPARAM'" % (f))
else:
if mytb.iscelldefined('GAIN',f):
gain2[f] = mytb.getcell('GAIN',f)
else:
print("No data in row %d of 'GAIN'" % (f))
mytb.close()
ggx2 = {}
for g in range(len(gain2)):
# print "Appending to ggx: ", gain2[g][0]
ggx2[g] = gain2[g][0]
nPolarizations2 = len(gain2[0])
if (nPolarizations == 2):
ggy2 = {}
for g in range(len(gain2)):
if (debug): print("len(gain2[%d]) = %d" % (g,len(gain2[g])))
ggy2[g] = gain2[g][1]
nRows2 = len(gain2)
if (debug): print("nRows2 = ", nRows2)
if (tableFormat == 34):
# CAS-6801, unfortunately corr_type is not available in the caltable
mytb.open(caltable)
# the following join will support ms names that contain spaces
spectralWindowTable = ' '.join(mytb.getkeyword('SPECTRAL_WINDOW').split()[1:])
if ('OBSERVATION' in mytb.getkeywords()):
observationTable = ' '.join(mytb.getkeyword('OBSERVATION').split()[1:])
else:
observationTable = None
mytb.open(spectralWindowTable)
refFreq = mytb.getcol('REF_FREQUENCY')
net_sideband = mytb.getcol('NET_SIDEBAND')
measFreqRef = mytb.getcol('MEAS_FREQ_REF')
mytb.close()
corr_type = None
if (os.path.exists(msName)):
try:
(corr_type, corr_type_string, nPolarizations) = getCorrType(msName,originalSpwsToPlot,debug)
except:
print("4) Could not open the associated measurement set tables (%s)." % (msName))
if (corr_type is None):
if (observationTable is None):
corr_type, corr_type_string, nPolarizations = getCorrTypeByAntennaName(msAnt[0].lower())
else:
telescope = getTelescopeNameFromCaltableObservationTable(observationTable)
if (telescope.find('ALMA') >= 0):
print("Using telescope name (%s) to set the polarization type." % (telescope))
corr_type_string = ['XX','YY']
corr_type = [9,12]
elif (telescope.find('VLA') >= 0):
print("Using telescope name (%s) to set the polarization type." % (telescope))
corr_type_string = ['RR','LL']
corr_type = [5,8]
else:
                    corr_type, corr_type_string, nPolarizations = getCorrTypeByAntennaName(msAnt[0].lower())
else:
try:
if (DEBUG):
print("Trying to open %s" % (msName+'/SPECTRAL_WINDOW'))
mytb.open(msName+'/SPECTRAL_WINDOW')
refFreq = mytb.getcol('REF_FREQUENCY')
net_sideband = mytb.getcol('NET_SIDEBAND')
measFreqRef = mytb.getcol('MEAS_FREQ_REF')
mytb.close()
if (debug): print("Calling getCorrtype('%s',%s,%s)" % (msName,str(originalSpwsToPlot),str(debug)))
(corr_type, corr_type_string, nPolarizations) = getCorrType(msName,originalSpwsToPlot,debug)
if (corr_type_string == []):
return(vm)
except:
print("4) Could not open the associated measurement set tables (%s). Will not translate antenna names." % (msName))
print("I will assume ALMA data: XX, YY, and refFreq=first channel.")
# chanFreqGHz = [] # comment out on 2014-04-08
corr_type_string = ['XX','YY']
corr_type = [9,12]
if (len(polsToPlot) > len(corr_type)):
# Necessary for SMA (single-pol) data
polsToPlot = corr_type_string
print("Polarizations to plot = ", polsToPlot)
polsToPlot = checkPolsToPlot(polsToPlot, corr_type_string)
if (polsToPlot == []):
return(vm)
if (len(msAnt) > 0):
msFound = True
if ((showatm == True or showtsky==True) and (vm=='' and mymsmd=='')):
print("Because I could not open the .ms, I will use default weather for showatm.")
# showatm = False
# showtsky = False
else:
if (xaxis.find('freq')>=0 and tableFormat==33):
print("Because I could not open the .ms and this is an old caltable, you cannot use xaxis='freq'.")
return(vm)
if (showatm == True or showtsky==True):
print("Because I could not open the .ms, you cannot use showatm or showtsky.")
return(vm)
if (bpoly == False):
if (debug):
print("nPolarizations = ", nPolarizations)
print("nFields = %d = " % (nFields), uniqueFields)
if (bOverlay and debug):
print("nPolarizations2 = ", nPolarizations2)
print("nFields2 = %d = " % (nFields2), uniqueFields2)
print("nRows2 = ", nRows2)
uniqueAntennaIds = np.sort(np.unique(ant))
yPhaseLabel = 'Phase (deg)'
tsysPercent = True
ampPercent = True
if (VisCal.lower().find('tsys') >= 0):
if (channeldiff > 0):
if (tsysPercent):
yAmplitudeLabel = "Tsys derivative (%_of_median/channel)"
else:
yAmplitudeLabel = "Tsys derivative (K/channel)"
else:
yAmplitudeLabel = "Tsys (K)"
elif delay:
yAmplitudeLabel = 'Delay (nsec)'
else:
if (yaxis.lower().find('db')>=0):
yAmplitudeLabel = "Amplitude (dB)"
else:
if (channeldiff > 0):
if (ampPercent):
yAmplitudeLabel = "Amp derivative (%_of_median/channel)"
else:
yAmplitudeLabel = "Amplitude derivative"
yPhaseLabel = 'Phase derivative (deg/channel)'
else:
yAmplitudeLabel = "Amplitude"
madsigma = channeldiff # for option channeldiff>0, sets threshold for finding outliers
ampMin = LARGE_POSITIVE
ampMax = LARGE_NEGATIVE
PHASE_ABS_SUM_THRESHOLD = 2e-3 # in degrees, used to avoid printing MAD statistics for refant
pb.clf()
if casaVersion > '5.8':
figdesc = pb.gcf()
figsize = pb.rcParams['figure.figsize']
# print("figsize = ", figsize)
# This leaves a white blank space, but text no longer overlaps.
figdesc.set_size_inches([11.2,8.4])
drewAtmosphere = False
TDMisSecond = False
pagectr = 0
newpage = 1
pages = []
xctr = 0
myap = 0 # determines whether an amp or phase plot starts the page (in the case of 'both')
# zero means amplitude, 1 means phase
redisplay = False
matchctr = 0
myUniqueColor = []
# for the overlay=antenna case, start by assuming the first antenna is not flagged
firstUnflaggedAntennaToPlot = 0
lastUnflaggedAntennaToPlot = len(antennasToPlot)-1
computedAtmSpw = -1
computedAtmTime = -1
computedAtmField = -1
skippingSpwMessageSent = 0
atmString = ''
if (showimage and lo1 is None):
# We only need to run this once per execution.
getLOsReturnValue = au.getLOs(msName, verbose=False)
if (getLOsReturnValue != []):
lo1s = au.interpretLOs(msName,parentms,spwsForIntent=spwsToPlot, mymsmd=mymsmd, verbose=debug)
foundLO1Message = [] # Initialize so that message is only displayed once per spw
if debug:
print("getLOsReturnValue = ", getLOsReturnValue)
if (channeldiff>0):
# build blank dictionary: madstats['DV01']['spw']['time']['pol']['amp' or 'phase' or both]
# where spw, time, pol are each integers
if (len(msAnt) > 0):
madstats = dict.fromkeys(msAnt)
else:
madstats = dict.fromkeys(['Ant '+str(i) for i in range(len(uniqueAntennaIds))])
# print "msAnt = ", msAnt
# print "spwsToPlot = ", spwsToPlot
        mskeys = list(madstats.keys())  # wrap in list() so indexing works in python 3 as well as 2
        for i in range(len(mskeys)):
            madstats[mskeys[i]] = dict.fromkeys(spwsToPlot)
            for j in range(len(spwsToPlot)):
                madstats[mskeys[i]][spwsToPlot[j]] = dict.fromkeys(timerangeList) # dict.fromkeys(range(len(uniqueTimes)))
                for k in timerangeList: # range(len(uniqueTimes)):
                    madstats[mskeys[i]][spwsToPlot[j]][k] = dict.fromkeys(range(nPolarizations))
                    for l in range(nPolarizations):
                        if (yaxis == 'both'):
                            madstats[mskeys[i]][spwsToPlot[j]][k][l] = {'amp': None, 'phase': None, 'ampstd': None, 'phasestd': None}
                        elif (yaxis == 'phase'):
                            madstats[mskeys[i]][spwsToPlot[j]][k][l] = {'phase': None, 'phasestd': None}
                        else:
                            # this includes tsys and amp
                            madstats[mskeys[i]][spwsToPlot[j]][k][l] = {'amp': None, 'ampstd': None}
madstats['platforming'] = {}
# print "madstats = ", madstats
myinput = ''
atmEverBeenCalculated = False
spwsToPlotInBaseband = []
frequencyRangeToPlotInBaseband = []
if (basebands == []):
# MS is too old to have BBC_NO
spwsToPlotInBaseband = [spwsToPlot]
frequencyRangeToPlotInBaseband = [callFrequencyRangeForSpws(mymsmd, spwsToPlot, vm, debug, caltable)]
basebands = [0]
elif (overlayBasebands):
if (list(spwsToPlot) != list(uniqueSpwsInCalTable)):
# then spws were requested, so treat them all as if in the same baseband, and
# ignore the basebands parameter
print("Ignoring the basebands parameter because spws were specified = %s" % (str(spwsToPlot)))
elif (np.array_equal(np.sort(basebands), np.sort(allBasebands)) == False):
# Allow the basebands parameter to select the spws
basebandSpwsToPlot = []
for baseband in basebands:
myspws = list(getSpwsForBaseband(vis=msName, mymsmd=mymsmd, bb=baseband, caltable=caltable))
basebandSpwsToPlot += myspws
spwsToPlot = np.intersect1d(basebandSpwsToPlot, spwsToPlot)
print("selected basebands %s have spwsToPlot = %s" % (str(basebands),str(spwsToPlot)))
else:
if (debug): print("No spws or basebands selected, will overlay all spws from all basebands.")
spwsToPlotInBaseband = [spwsToPlot]
frequencyRangeToPlotInBaseband = [callFrequencyRangeForSpws(mymsmd, spwsToPlot, vm, debug, caltable)]
basebands = [0]
else:
for baseband in basebands:
myspwlist = []
for spw in spwsToPlot:
if (casaVersion >= '4.1.0' and msFound):
if (mymsmd.baseband(originalSpwsToPlot[list(spwsToPlot).index(spw)]) == baseband):
myspwlist.append(spw)
else:
# need to write a function to retrieve baseband (if I ever run casa 4.0 again)
# if (spw != 0):
myspwlist.append(spw)
spwsToPlotInBaseband.append(myspwlist)
frequencyRangeToPlotInBaseband.append(callFrequencyRangeForSpws(mymsmd, myspwlist,vm, debug, caltable))
print("basebands to plot = %s" % (str(basebands)))
firstTimeMatch = -1 # Aug 5, 2013
if (overlaySpws or overlayBasebands):
groupByBaseband = True
if (groupByBaseband and overlaySpws==False and overlayBasebands==False):
showBasebandNumber = True
# Basic nested 'while' loop structure is:
# - antennas
# - baseband (if necessary)
# - spw
# - time
# - for i in rows
while (xctr < len(antennasToPlot)):
if (debug): print("at top of xctr loop: %d" % (xctr))
xant = antennasToPlot[xctr]
bbctr = 0
spwctr = 0
spwctrFirstToPlot = 0
antstring = buildAntString(xant,msFound,msAnt)
alreadyPlottedAmp = False # needed for (overlay='baseband', yaxis='both') CAS-6477
if (antstring.isdigit()):
Antstring = "Ant%s" % antstring
else:
Antstring = antstring
finalSpwWasFlagged = False # inserted on 22-Apr-2014 for g25.27
while ((bbctr < len(spwsToPlotInBaseband) and groupByBaseband) or
(spwctr < len(spwsToPlot) and groupByBaseband==False)
):
if (debug): print("at top of bbctr/spwctr loop with bbctr=%d, spwctr=%d" % (bbctr,spwctr))
if (groupByBaseband):
baseband = basebands[bbctr]
spwsToPlot = spwsToPlotInBaseband[bbctr]
if (debug): print("setting spwsToPlot for baseband %d (bbctr=%d) to %s" % (baseband, bbctr, str(spwsToPlot)))
else:
baseband = 0
if (casaVersion >= '4.1.0'):
if (getBasebandDict(msName,caltable=caltable, mymsmd=mymsmd) != {}):
try:
baseband = mymsmd.baseband(originalSpwsToPlot[spwctr])
if (baseband not in basebands):
spwctr += 1 # was bbctr += 1
if (debug): print("A)incrementing spwctr")
# baseband = mymsmd.baseband(spwsToPlot[spwctr])
# if (baseband not in basebands and bbctr+1 < len(spwsToPlotInBaseband)):
# bbctr += 1
continue
except:
pass
if (debug):
if (overlayBasebands):
print("Regardless of baseband (%s), plotting all spws: %s" % (basebands,str(spwsToPlot)))
else:
print("Showing baseband %d containing spws:" % (baseband))
print(str(spwsToPlotInBaseband[bbctr]))
if (bbctr < len(spwsToPlotInBaseband)):
if (debug):
print("A) spwctr=%d, bbctr=%d < len(spwsToPlotInBaseband)=%d" % (spwctr,bbctr,len(spwsToPlotInBaseband)))
spwctr = 0
spwctrFirstToPlot = spwctr
firstSpwMatch = -1
while (spwctr < len(spwsToPlot)):
if (debug): print("at top of spwctr loop, spwctr=%d" % (spwctr))
allTimesFlaggedOnThisSpw = True # used only by overlay='time'
if (groupByBaseband == False):
baseband = 0
if (casaVersion >= '4.1.0'):
if (getBasebandDict(msName,caltable=caltable,mymsmd=mymsmd) != {}):
try:
baseband = mymsmd.baseband(originalSpwsToPlot[spwctr])
if (baseband not in basebands):
# print "spw %d=%d: baseband %d is not in %s" % (spwsToPlot[spwctr],originalSpwsToPlot[spwctr], baseband, basebands)
if (debug): print("B)incrementing spwctr to %d" % (spwctr+1))
spwctr += 1
continue
except:
pass
ispw = spwsToPlot[spwctr]
ispwInCalTable = list(uniqueSpwsInCalTable).index(ispw)
mytime = 0
if (debug):
print("+++++++ set mytime=0 for ispw=%d, len(chanFreqGHz) = %d" % (ispw, len(chanFreqGHz)))
if (overlayAntennas):
xctr = -1
if (overlayTimes):
# since the times/scans can vary between spws, redefine nUniqueTimes for each spw
nUniqueTimes = len(uniqueTimesCopy)
uniqueTimes = uniqueTimesCopy[:]
uniqueTimesForSpw = []
testtime = 0
while (testtime < nUniqueTimes):
if (ispw in cal_desc_id[np.where(uniqueTimes[testtime] == times)[0]]):
uniqueTimesForSpw.append(uniqueTimes[testtime])
testtime += 1
uniqueTimes = uniqueTimesForSpw[:]
if (tableFormat >= 34):
scansForUniqueTimes, nUniqueTimes = computeScansForUniqueTimes(uniqueTimes, cal_scans, times, unique_cal_scans)
else:
nUniqueTimes = len(uniqueTimes)
# currentSpwctr = spwctr # commented out on 2014-04-04 to match task for task01 regression
if (overlaySpws or overlayBasebands):
if (xctr >= firstUnflaggedAntennaToPlot):
if (debug):
print("xctr=%d >= firstUnflaggedAntennaToPlot=%d" % (xctr, firstUnflaggedAntennaToPlot))
spwctr -= 1
firstTimeMatch = -1
finalTimeMatch = -1 # for CAS-7820
while (mytime < nUniqueTimes):
finalTimerangeFlagged = False # 04-Aug-2014
if (debug):
print("at top of mytime loop: mytime = %d < %d" % (mytime,nUniqueTimes))
print("timerangeList = %s" % (str(timerangeList)))
print("timerangeListTimes = %s" % (str(timerangeListTimes)))
print("debugSloppyMatch = %s" % (str(debugSloppyMatch)))
print("solutionTimeThresholdSeconds = %s" % (str(solutionTimeThresholdSeconds)))
# if ((scansToPlot == scansToPlotPerSpw[ispw]).all() == False and False):
# print " scansToPlot = ", scansToPlot
# print "scansToPlotPerSpw[%2d] = " % (ispw), scansToPlotPerSpw[ispw]
if (len(timerangeList) > 0 and
(sloppyMatch(uniqueTimes[mytime],timerangeListTimes,solutionTimeThresholdSeconds,
mytime, scansToPlot, scansForUniqueTimes, myprint=debugSloppyMatch)==False)): # task version
# mytime, scansToPlotPerSpw[ispw], scansForUniqueTimes, myprint=debugSloppyMatch)==False)): # au version (with this: test 85 has infinite loop)
if (debug):
print("Skipping mytime %d because it is not in %s, or scan %d is not in %s" % (mytime, str(timerangeList),scansForUniqueTimes[mytime],scansToPlotPerSpw[ispw]))
mytime += 1
if (debug): print(" 0006 incrementing mytime to ", mytime)
if (mytime == nUniqueTimes and overlayTimes and overlayAntennas):
# added March 14, 2013 to support the case when final timerange is flagged
doneOverlayTime = False
if (debug):
                                print("$$$$$$$$$$$$$$$$$$$$$$ Setting doneOverlayTime=False (xframe=%d)" % (xframe))
else:
if (debug):
print("Not setting doneOverlayTime=False because either mytime(%d) != nUniqueTimes(%d) or we are not overlaying time and antenna" % (mytime,nUniqueTimes))
continue
if (overlayAntennas):
xctr += 1
if (xctr >= len(antennasToPlot)):
xctr = 0
# if (mytime == 0):
# if (debug): print "@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ setting firstTimeMatch = -1"
# firstTimeMatch = -1 # Aug 5, 2013
xant = antennasToPlot[xctr]
antstring = buildAntString(xant,msFound,msAnt)
if (debug):
print("mytime=%d, Set xant to %d" % (mytime,xant))
antennaString = ''
else:
antennaString = 'Ant%2d: %s, ' % (xant,antstring)
if (overlaySpws or overlayBasebands):
if (debug): print("C)incrementing spwctr to %d" % (spwctr+1))
spwctr += 1
if (spwctr >= len(spwsToPlot)):
spwctr = 0 # added on 2014-04-04 to match the task
if (xctr < firstUnflaggedAntennaToPlot):
xctr += 1
if (xctr == len(antennasToPlot)):
break
xant = antennasToPlot[xctr]
antstring = buildAntString(xant,msFound,msAnt)
if (debug):
print("mytime=%d, Set xant to %d" % (mytime,xant))
antennaString = 'Ant%2d: %s, ' % (xant,antstring)
# if (overlaySpws): # commented out on 2014-04-04 to match task for task01 regression
# spwctr = currentSpwctr # commented out on 2014-04-04 to match task for task01 regression
# break # added on Sep 23, 2013 so that mytime gets incremented # commented out on 2014-04-04 to match task for task01 regression
if (overlayBasebands):
# Added on 7/29/2014 to fix infinite loop in uid___A002_X652932_X20fb bandpass
if (mytime == nUniqueTimes): # +0 fixes regression 1
spwctr = len(spwsToPlot)
if (debug):
print("Breaking because spwctr=%d == len(spwsToPlot)=%d" % (spwctr,len(spwsToPlot)))
break
ispw = spwsToPlot[spwctr]
if (ispw not in uniqueSpwsInCalTable):
                            print("spw %d is not in the caltable (spws in caltable = %s)" % (ispw, str(uniqueSpwsInCalTable)))
                            return(vm)
ispwInCalTable = list(uniqueSpwsInCalTable).index(ispw)
if (debug):
print("----------------------------- spwctr=%d, ispw set to %d, xctr=%d" % (spwctr,ispw,xctr))
if (newpage==1):
# add the current page (being created here) to the list
pages.append([xctr,spwctr,mytime,0])
if (debug):
print("top: appending [%d,%d,%d,%d]" % (xctr,spwctr,mytime,0))
newpage = 0
gplotx = []
gploty = []
channels = []
xchannels = []
ychannels = []
frequencies = []
xfrequencies = []
yfrequencies = []
channels2 = []
xchannels2 = []
ychannels2 = []
frequencies2 = []
xfrequencies2 = []
yfrequencies2 = []
gplotx2 = []
gploty2 = []
xflag = []
yflag = []
xflag2 = []
yflag2 = []
matchFound = False
matchField = -1
matchRow = -1
matchTime = -1
if (debug): print("looping over all nRows = %d" % (nRows))
for i in range(nRows):
if (overlayTimes or overlayAntennas or len(fieldsToPlot)>1 or
(nFields>1 and len(fieldlist)<nFields)):
# When there are multiple fields, then matching by scan causes the first
# matching solution to be displayed every time. So use the original method
# of matching by time until I think of something better.
sm = sloppyMatch(uniqueTimes[mytime],times[i],solutionTimeThresholdSeconds,myprint=False)
else:
if (overlayBasebands):
sTP = scansToPlot
else:
sTP = scansToPlotPerSpw[ispw]
sm = sloppyMatch(uniqueTimes[mytime],times[i],solutionTimeThresholdSeconds,
# mytime, scansToPlot, scansForUniqueTimes, myprint=False) # task version
mytime, sTP, scansForUniqueTimes, myprint=False) # au version
if ((ant[i]==xant) and (cal_desc_id[i]==ispw) and sm
and (mytime in timerangeList) # this test was added to support multiFieldInTimeOverlay
):
if debug: print("len(chanFreqGHz)=%d, ispw=%d" % (len(chanFreqGHz),ispw))
if (msFound or tableFormat==34):
if (len(chanFreqGHz[ispw]) == 1 and delay==False):
if ((skippingSpwMessageSent & (1<<ispw)) == 0):
print("Skipping spw=%d because it has only 1 channel." % (ispw))
skippingSpwMessageSent |= (1<<ispw)
break
if (fields[i] in fieldsToPlot):
interval = intervals[i] # used for CalcAtmTransmission
myFieldIndex = np.where(fields[i] == uniqueFields)[0]
if (type(myFieldIndex) == list or type(myFieldIndex) == np.ndarray):
myFieldIndex = myFieldIndex[0]
if (debug):
print("%d Found match at field,ant,spw,mytime,time = %d(index=%d),%d,%d,%d,%f=%s" % (matchctr,fields[i],myFieldIndex,xant,ispw,mytime,uniqueTimes[mytime],utstring(uniqueTimes[mytime],4)))
if (matchFound):
if (myFieldIndex == matchField and matchTime==times[i]):
print("WARNING: multiple rows for field=%d,ant=%d,spw=%d,scan=%d,time=%d=%.0f=%s,row=%d. Only showing the first one." % (fields[i],xant,ispw,scansForUniqueTimes[mytime],mytime,uniqueTimes[mytime],utstring(uniqueTimes[mytime],3),i))
else:
matchFound = True
fieldIndex = myFieldIndex
matchField = myFieldIndex
matchTime = times[i]
matchRow = i
if (msFound or tableFormat==34):
nChannels = len(chanFreqGHz[ispw])
solnChannels = len(ggx[i])
if (nChannels != solnChannels):
print("Mismatch in table between number of solution channels (%d) and spw channels (%d) for spw %d." % (solnChannels,nChannels,ispw))
return
else:
nChannels = len(ggx[0])
xflag.append(flags[i][0][:])
yflag.append(flags[i][1][:])
BRowNumber = i
for j in range(nChannels): # len(chanFreqGHz[ispw])):
channels.append(j) # both flagged and unflagged
if (msFound or tableFormat==34):
frequencies.append(chanFreqGHz[ispw][j])
if (j==0 and debug):
print("found match: ispw=%d, j=%d, len(chanFreqGHz)=%d, chanFreqGHz[0]=%f" % (ispw,j, len(chanFreqGHz),chanFreqGHz[ispw][0]))
if (showflagged or (showflagged == False and flags[i][0][j]==0)):
gplotx.append(ggx[i][j])
xchannels.append(j)
if (msFound or tableFormat==34):
xfrequencies.append(chanFreqGHz[ispw][j])
if (nPolarizations == 2):
if (showflagged or (showflagged == False and flags[i][1][j]==0)):
gploty.append(ggy[i][j])
ychannels.append(j)
if (msFound or tableFormat==34):
yfrequencies.append(chanFreqGHz[ispw][j])
elif (debug and False):
if (mytime not in timerangeList):
print("mytime %f not in %s" % (mytime,str(timerangeList)))
if (ant[i]!=xant):
print("ant: %d != %d" % (ant[i],xant))
if (cal_desc_id[i]!=ispw):
print("cal_desc_id[i]!=ispw %d != %d" % (cal_desc_id[i],ispw))
if (sm == False):
print("sm failed")
# end 'for i in range(nRows)'
# So, fieldIndex needs to be the correct one when we exit this loop
if (debug):
print("finished loop over nRows: matchFound = ", matchFound)
if (not matchFound and newpage==0):
if (subplot==11 or (subplot!=11 and firstTimeMatch==-1 and firstSpwMatch==-1)):
# Fix for CAS-7753 (subplot==11)
# for subplot=22, firstTimeMatch and firstSpwMatch gives spw17 and 23 (or gives 17 and 39)
# the firstTimeMatch part was needed for regression 65: different antennas having different solution times
# the firstSpwMatch was needed to keep the first spw in the filename rather than the final spw, e.g. regression 14
# print "Applying fix for CAS-7753"
newpage = 1
pages = pages[:len(pages)-1]
myspw = originalSpw[ispw]
if (msFound):
if debug:
print("myspw=", myspw)
print("len(refFreq)=", len(refFreq))
if (myspw >= len(refFreq)):
myspw = ispw
if (msFound and refFreq[myspw]*1e-9 > 60):
# Then this cannot be EVLA data. But I should really check the telescope name!
# Then again, Band 1 may not have USB/LSB distinction (nor Band 2 for that matter).
if debug and False:
print("frequencies = ", frequencies)
print("xfrequencies = ", xfrequencies)
print("chanFreqGHz[%d] = " % (ispw), chanFreqGHz[ispw])
# if (refFreq[myspw]*1e-9 > np.mean(frequencies)):
if (refFreq[myspw]*1e-9 > np.mean(chanFreqGHz[ispw])): # this is safer (since frequencies might be [])
sideband = -1
xlabelString = "%s LSB Frequency (GHz) (%d channels)" % (refTypeToString(measFreqRef[myspw]),len(xchannels))
else:
sideband = +1
xlabelString = "%s USB Frequency (GHz) (%d channels)" % (refTypeToString(measFreqRef[myspw]),len(xchannels))
else:
sideband = -1
xlabelString = "Frequency (GHz) (%d channels)" % (len(xchannels))
if ((len(frequencies)>0) and (chanrange[1] > len(frequencies))):
print("Invalid chanrange (%d-%d) for spw%d in caltable1. Valid range = 0-%d" % (chanrange[0],chanrange[1],ispw,len(frequencies)-1))
return(vm)
pchannels = [xchannels,ychannels]
pfrequencies = [xfrequencies,yfrequencies]
gplot = [gplotx,gploty]
# We only need to compute the atmospheric transmission if:
# * we have been asked to show it,
# * there is a non-trivial number of channels,
# * the current field is the one for which we should calculate it (if times are being overlaid)
# But this will cause no atmcurve to appear if that field is flagged on the first
# antenna; so, I added the atmEverBeenCalculated flag to deal with this.
# * the previous calculation is not identical to what this one will be
#
if ((showatm or showtsky) and (len(xchannels)>1 or len(ychannels)>1) and
# this insures a plot if first fieldsToPlot is missing
((uniqueFields[fieldIndex]==showatmfield or (uniqueFields[fieldIndex] in fieldsToPlot and overlayTimes)) or
overlayTimes==False or atmEverBeenCalculated==False) and
((overlayTimes==False and computedAtmField!=fieldIndex) or (computedAtmSpw!=ispw) or
(overlayTimes==False and computedAtmTime!=mytime))):
atmEverBeenCalculated = True
# print "Calcing Atm: len(xchannels)=%d, len(ychannels=%d), uniqueFields[%d]=%d ?= %d" % (len(xchannels), len(ychannels),fieldIndex,uniqueFields[fieldIndex],showatmfield)
# print " computedAtmField=%d, fieldIndex=%d; computedAtmSpw=%d, ispw=%d; computedAtmTime=%d,mytime=%d" % (computedAtmField, fieldIndex, computedAtmSpw, ispw, computedAtmTime,mytime)
if (True): # support showatm for overlay='antenna,time'
# print "Running CalcAtm: CAF=%d, CAS=%d, CAT=%d, xant=%d" % (computedAtmField, computedAtmSpw, computedAtmTime, xant)
if (type(fieldIndex) == list or type(fieldIndex) == np.ndarray):
computedAtmField = fieldIndex[0]
else:
computedAtmField = fieldIndex
computedAtmSpw = ispw
computedAtmTime = mytime
atmtime = timeUtilities.time()
asdm = ''
(atmfreq,atmchan,transmission,pwvmean,atmairmass,TebbSky,missingCalWVRErrorPrinted,Tsys) = \
CalcAtmTransmission(channels, frequencies, xaxis, pwv,
vm, msName, asdm, xant, uniqueTimes[mytime],
interval, uniqueFields[fieldIndex],
refFreq[originalSpw[ispw]],
net_sideband[originalSpw[ispw]], mytime,
missingCalWVRErrorPrinted, mymsmd, caltable,
verbose=DEBUG, maxAtmCalcChannels=maxAtmCalcChannels,
maxAltitude=maxAltitude, Feff=Feff, SBGain=SBGain,
Trx=Trx, showtsys=showtsys)
if showtsys:
TebbSky = Tsys
if (showimage):
if (lo1 is not None):
LO1 = lo1
else:
if (getLOsReturnValue == []):
if (lo1 is None):
print("Because you do not have the ASDM_RECEIVER table, if you want the showimage")
print("option to work, then you must specify the LO1 frequency with lo1=.")
# return(vm)
LO1 = lo1
else:
if (lo1s is None or lo1s == {}):
print("Failed to get LO1, disabling showimage. Alternatively, you can use printLOsFromASDM and supply the lo1 parameter to plotbandpass.")
showimage = False
LO1 = None
else:
if (originalSpw[ispw] not in lo1s.keys()):
print("There is a problem in reading the LO1 values, cannot showimage for this dataset.")
                                                print("originalSpw[%d]=%d is not in lo1s.keys() (len(lo1s)=%d)" % (ispw, originalSpw[ispw], len(lo1s)))
showimage = False
LO1 = None
else:
LO1 = lo1s[originalSpw[ispw]]*1e-9
if (ispw not in foundLO1Message):
print("For spw %d (%d), found LO1 = %.6f GHz" % (ispw,originalSpw[ispw],LO1))
foundLO1Message.append(ispw)
if (LO1 is not None):
frequenciesImage = list(2*LO1 - np.array(frequencies))
xfrequenciesImage = list(2*LO1 - np.array(pfrequencies[0]))
yfrequenciesImage = list(2*LO1 - np.array(pfrequencies[1]))
pfrequenciesImage = [xfrequenciesImage, yfrequenciesImage]
(atmfreqImage,atmchanImage,transmissionImage,pwvmean,atmairmass,TebbSkyImage,missingCalWVRErrorPrinted,TsysImage) = \
CalcAtmTransmission(channels, frequenciesImage, xaxis,
pwv, vm, msName, asdm, xant, uniqueTimes[mytime],
interval, uniqueFields[fieldIndex],
refFreq[originalSpw[ispw]],
net_sideband[originalSpw[ispw]], mytime,
missingCalWVRErrorPrinted, mymsmd,
caltable, verbose=DEBUG,
maxAtmCalcChannels=maxAtmCalcChannels,
maxAltitude=maxAltitude, Feff=Feff, SBGain=SBGain,
Trx=Trx, showtsys=showtsys)
if showtsys:
TebbSkyImage = TsysImage
atmfreqImage = list(2*LO1 - np.array(atmfreqImage))
# atmfreqImage.reverse()
atmchanImage.reverse()
if (overlayTimes):
atmString = 'PWV %.2fmm, airmass %.2f, maxAlt %.0fkm (field %d)' % (pwvmean,atmairmass,maxAltitude,uniqueFields[fieldIndex])
else:
atmString = 'PWV %.2fmm, airmass %.3f, maxAlt %.0fkm' % (pwvmean,atmairmass,maxAltitude)
elif (False):
print("Skipping CalcAtm: xframe=%d, len(xchannels)=%d, len(ychannels=%d), uniqueFields[%d]=%d ?= %d" % (xframe,len(xchannels), len(ychannels),fieldIndex,uniqueFields[fieldIndex],showatmfield))
print(" computedAtmField=%d, fieldIndex=%d, computedAtmSpw=%d, ispw=%d" % (computedAtmField,fieldIndex,computedAtmSpw,ispw))
print(" computedAtmTime=%d, mytime=%d, xant=%d" % (computedAtmTime, mytime, xant))
if (bOverlay):
for i in range(nRows2):
if (overlayTimes or overlayAntennas or len(fieldsToPlot)>1 or
(nFields>1 and len(fieldlist)<nFields)):
# Not having this path causes Tsys table overlays to behave like overlay='antenna,time'
# for caltable2.
sm = sloppyMatch(uniqueTimes2[mytime],times2[i],solutionTimeThresholdSeconds,myprint=False)
else:
if debug:
if (mytime >= len(uniqueTimes2)):
print("mytime=%d is too long for uniqueTimes2=%d" % (mytime, len(uniqueTimes2)))
if (i >= len(times2)):
print("i=%d is too long for uniqueTimes2=%d" % (i, len(times2)))
if (ispw >= len(scansToPlotPerSpw)):
print("ispw=%d is too long for uniqueTimes2=%d" % (ispw, len(scansToPlotPerSpw)))
if (mytime >= len(uniqueTimes2)):
# Fix for CAS-9474: avoid calling sloppyMatch because it will crash.
# Setting sm=False will result in an abort: "no amp data found in second solution."
sm = False
else:
sm = sloppyMatch(uniqueTimes2[mytime],times2[i],solutionTimeThresholdSeconds,
mytime, scansToPlotPerSpw[ispw], scansForUniqueTimes, # au version
myprint=False)
if ((ant2[i]==xant) and (cal_desc_id2[i]==ispw) and sm
and (mytime in timerangeList) # added to match first caltable logic on 2014-04-09
):
if (fields2[i] in fieldsToPlot):
xflag2.append(flags2[i][0][:])
yflag2.append(flags2[i][1][:])
# With solint='2ch' or more, the following loop should not be over
# chanFreqGHz2 but over the channels in the solution.
# print "len(chanFreqGHz2[%d])=%d" % (ispw,len(chanFreqGHz2[ispw]))
for j in range(len(chanFreqGHz2[ispw])):
channels2.append(j)
frequencies2.append(chanFreqGHz2[ispw][j])
# print "len(chanFreqGHz2[%d])=%d, i=%d,j=%d, len(ggx2)=%d, len(ggx2[0])=%d, shape(ggx2) = " % (ispw,len(chanFreqGHz2[ispw]),i,j,len(ggx2),len(ggx2[0])), np.shape(np.array(ggx2))
if (showflagged or (showflagged == False and flags2[i][0][j]==0)):
gplotx2.append(ggx2[i][j])
xchannels2.append(j)
xfrequencies2.append(chanFreqGHz2[ispw][j])
# print "appending %f to xfrequencies2" % (chanFreqGHz2[ispw][j])
else:
if (debug):
print("********* flags2[%d][0][%d] = %d, showflagged=" % (i,j,flags2[i][0][j]), showflagged)
if (nPolarizations2 == 2):
if (showflagged or (showflagged == False and flags2[i][1][j]==0)):
gploty2.append(ggy2[i][j])
ychannels2.append(j)
yfrequencies2.append(chanFreqGHz2[ispw][j])
# end 'for i'
pchannels2 = [xchannels2,ychannels2]
pfrequencies2 = [xfrequencies2,yfrequencies2]
gplot2 = [gplotx2,gploty2]
# Need to rewrite the xlabel to show the total channel numbers from both caltables. Note that xaxis must be 'freq' for bOverlay
if (msFound and refFreq[myspw]*1e-9 > 60):
# Then this cannot be EVLA data. But I should really check the telescope name!
# Then again, Band 1 may not have USB/LSB distinction (nor Band 2 for that matter).
if (refFreq[myspw]*1e-9 > np.mean(chanFreqGHz2[ispw])): # this is safer (since frequencies might be [])
sideband = -1
xlabelString = "%s LSB Frequency (GHz) (%d, %d channels)" % (refTypeToString(measFreqRef[myspw]),len(xchannels),len(xchannels2))
else:
sideband = +1
xlabelString = "%s USB Frequency (GHz) (%d, %d channels)" % (refTypeToString(measFreqRef[myspw]),len(xchannels),len(xchannels2))
else:
sideband = -1
xlabelString = "Frequency (GHz) (%d, %d channels)" % (len(xchannels),len(xchannels2))
#
# Don't check here, because chanrange refers only to caltable, not caltable2.
# if (len(frequencies2)>0 and (chanrange[1] > len(frequencies2))):
# print "Invalid chanrange for spw%d in caltable2. Valid range = 0-%d" % (ispw,len(frequencies2))
# return(vm)
# Prevents crash if long value is set for solutionTimeThresholdSeconds, but prints a lot of
# messages for Tsys with overlay='antenna'.
# if (len(xchannels) < 1):
# print "No unflagged data found for (ant,spw,mytime,time) = %d,%d,%d,%.1f=%s" % (xant,ispw,mytime,uniqueTimes[mytime],utstring(uniqueTimes[mytime],4))
# matchFound = False
if (matchFound==False):
if ((overlayAntennas==False and overlaySpws==False and overlayBasebands==False) or
(overlayAntennas and xctr+1 >= len(antennasToPlot)) or
((overlaySpws or overlayBasebands) and spwctr+1 >= len(spwsToPlot))):
mytime += 1
# print " 0005 incrementing mytime to ", mytime
if (debug):
print("a) xctr=%d, Incrementing mytime to %d/%d because NO MATCH FOUND ============" % (xctr, mytime,nUniqueTimes))
continue
# The following variable allows color legend of UT times to match line plot
myUniqueTime = []
if (True): # multiFieldsWithOverlayTime
# support multi-fields with overlay='time'
uTPFPS = []
for f in fieldIndicesToPlot:
for t in uniqueTimesPerFieldPerSpw[ispwInCalTable][f]:
if (sloppyMatch(t, timerangeListTimes, solutionTimeThresholdSeconds,
mytime, scansToPlotPerSpw[ispw], scansForUniqueTimes, # au version
# mytime, scansToPlot, scansForUniqueTimes, # task version
myprint=debugSloppyMatch
)):
uTPFPS.append(t)
uTPFPS = np.sort(uTPFPS)
ctr = 0
for t in uTPFPS:
if (debug and False):
print("1)checking time %d" % (t))
if (overlayTimes or overlayAntennas):
sm = sloppyMatch(uniqueTimes[mytime],times[i],solutionTimeThresholdSeconds,myprint=False)
else:
sm = sloppyMatch(t, uniqueTimes[mytime], solutionTimeThresholdSeconds,
# mytime, scansToPlot, scansForUniqueTimes, # task version
mytime, scansToPlotPerSpw[ispw], scansForUniqueTimes, # au version
myprint=debugSloppyMatch
)
if (sm):
if (debug):
print("1)setting myUniqueTime to %d" % (mytime))
myUniqueTime = mytime
ctr += 1
if (debug): print("Found a match")
# if (ctr > len(fieldIndicesToPlot) and bOverlay==False):
# print("multi-field time overlay *************** why are there more matches (%d) than fields (%d)?" % (ctr,len(fieldIndicesToPlot)))
# print "Overlay antenna %d, myUniqueTime=%d" % (xctr, myUniqueTime)
if (xframe == xframeStart):
if (debug):
print("Clearing the page")
pb.clf()
xflag = [item for sublist in xflag for item in sublist]
yflag = [item for sublist in yflag for item in sublist]
# pflag = [xflag, yflag]
# flagfrequencies = [frequencies, frequencies2]
antstring = buildAntString(xant,msFound,msAnt)
if (msFound):
fieldString = msFields[uniqueFields[fieldIndex]] # [0]
else:
fieldString = str(field)
if (overlayTimes):
timeString =''
else:
timeString = ', t%d/%d %s' % (mytime,nUniqueTimes-1,utstring(uniqueTimes[mytime],3))
if (scansForUniqueTimes != []):
if (scansForUniqueTimes[mytime]>=0):
timeString = ', scan%d %s' % (scansForUniqueTimes[mytime],utstring(uniqueTimes[mytime],3))
spwString = buildSpwString(overlaySpws, overlayBasebands, spwsToPlot,
ispw, originalSpw[ispw], observatoryName,
baseband, showBasebandNumber)
titleString = "%sspw%s, field %d: %s%s" % (antennaString,spwString,uniqueFields[fieldIndex],fieldString,timeString)
if (sum(xflag)==nChannels and sum(yflag)==nChannels and showflagged==False):
if (overlayTimes):
print("Skip %s, xant=%2d (%s) for time%d=%s all solutions flagged" % (antstring, xant, titleString,mytime,utstring(uniqueTimes[mytime],3)))
# need to set doneOverlayTime = True if this is the final time,
# otherwise, we get "subplot number exceeds total subplots" at line 2427
# but we need to draw the labels at the top of the page, else they will not get done
if (debug):
print("########## uniqueTimes[%d]=%d, timerangeListTimes[-1]=%d" % (mytime,uniqueTimes[mytime],timerangeListTimes[-1]))
print("########## scansToPlotPerSpw[%d]=%s, scansForUniqueTimes=%s" % (ispw,str(scansToPlotPerSpw[ispw]),str(scansForUniqueTimes)))
if (len(scansToPlotPerSpw[ispw]) < 1):
sTPPS = []
else:
# sTPPS = [scansToPlot[ispw][-1]]# added [[-1]] on 2014-04-04 task version
sTPPS = [scansToPlotPerSpw[ispw][-1]]# added [[-1]] on 2014-04-04 au version
if (sloppyMatch(timerangeListTimes[-1], uniqueTimes[mytime],
solutionTimeThresholdSeconds,
mytime, sTPPS, scansForUniqueTimes,
myprint=debugSloppyMatch
)):
if (overlayAntennas == False or xant==antennasToPlot[-1]): # 11-Mar-2014
doneOverlayTime = True # 08-Nov-2012
finalTimerangeFlagged = True
if (debug):
print("###### set doneOverlayTime = True, xant=%d, lastUnflaggedAntennaToPlot=%d, drewAtmosphere=%s" % (xant,lastUnflaggedAntennaToPlot,drewAtmosphere))
# draw labels
# try adding the following 'if' statement on Jun 18, 2013; it works.
# if (drewAtmosphere==False or overlayAntennas==False):
# Add the 'and not' case to prevent extra atm/fdms shown if one spw's solutions are all flagged
if (drewAtmosphere==False or (overlayAntennas==False and not allTimesFlaggedOnThisSpw)):
drawOverlayTimeLegends(xframe,firstFrame,xstartTitle,ystartTitle,
caltable,titlesize,fieldIndicesToPlot,
ispwInCalTable,uniqueTimesPerFieldPerSpw,
timerangeListTimes,
solutionTimeThresholdSeconds,
debugSloppyMatch,
ystartOverlayLegend,debug,mysize,
fieldsToPlot,myUniqueColor,
timeHorizontalSpacing,fieldIndex,
overlayColors,
antennaVerticalSpacing, overlayAntennas,
timerangeList, caltableTitle, mytime,
scansToPlotPerSpw[ispw], scansForUniqueTimes) # au version
if LO1 is None and type(lo1s) == dict: # Fix for SCOPS-4877
LO1 = lo1s[myspw] # Fix for SCOPS-4877
# CAS-8655
newylimits = drawAtmosphereAndFDM(showatm,showtsky,atmString,subplotRows,mysize,
TebbSky,TebbSkyImage,plotrange, xaxis,atmchan,
atmfreq,transmission,subplotCols,showatmPoints,
xframe, channels,LO1,atmchanImage,atmfreqImage,
transmissionImage,firstFrame,showfdm,nChannels,
tableFormat,originalSpw_casa33,
chanFreqGHz_casa33,originalSpw,chanFreqGHz,
overlayTimes, overlayAntennas, xant,
antennasToPlot, overlaySpws, baseband,
showBasebandNumber, basebandDict,
overlayBasebands, drewAtmosphere, showtsys)
drewAtmosphere = True
if (xctr == firstUnflaggedAntennaToPlot or overlayAntennas==False): # changed xant->xctr on 11-mar-2014
DrawPolarizationLabelsForOverlayTime(xstartPolLabel,ystartPolLabel,corr_type,polsToPlot,
channeldiff,ystartMadLabel,subplotRows,gamp_mad,mysize,
ampmarkstyle,markersize,ampmarkstyle2,gamp_std)
elif (overlayAntennas):
if (debug):
print("xant=%d, firstUnflaggedAntennaToPlot=%d" % (xant,firstUnflaggedAntennaToPlot))
else: # not overlaying times
print("Skipping %s, xant=%d, ispw=%d (%s) all solutions flagged" % (antstring, xant, ispw, titleString))
if ((overlaySpws or overlayBasebands) and spwctr==spwctrFirstToPlot):
spwctrFirstToPlot += 1
if ((overlaySpws or overlayBasebands) and ispw==spwsToPlotInBaseband[bbctr][-1]):
if (debug): print("The final spw was flagged!!!!!!!!!!!!!!")
finalSpwWasFlagged = True # inserted on 22-Apr-2014 for g25.27
if (myinput == 'b'):
redisplay = False # This prevents infinite loop when hitting 'b' on first screen when ant0 flagged. 2013-03-08
if (overlayAntennas==False and overlayBasebands==False): # 07/30/2014 added overlayBasebands==False
if (doneOverlayTime==False or overlayTimes==False): # added on 08-Nov-2012
finalSpwWasFlagged = False # Added on 23-Apr-2014 for regression61
mytime += 1
# print " 0003 incrementing mytime to ", mytime
if (debug):
print("F) all solutions flagged --> incrementing mytime to %d" % mytime)
if (overlayAntennas):
if (xctr == firstUnflaggedAntennaToPlot):
firstUnflaggedAntennaToPlot += 1
if (firstUnflaggedAntennaToPlot >= len(antennasToPlot)):
firstUnflaggedAntennaToPlot = 0
if not finalSpwWasFlagged: # Added first test on 23-Apr-2014 for regression61
mytime += 1
# print " 0002 incrementing mytime to ", mytime
if (debug):
print(" A) incrementing mytime to ", mytime)
print("----- Resetting firstUnflaggedAntennaToPlot from %d to %d" % (firstUnflaggedAntennaToPlot-1, firstUnflaggedAntennaToPlot))
print("----- = antenna %d" % (antennasToPlot[firstUnflaggedAntennaToPlot]))
if (mytime < nUniqueTimes): # Add this 'if' conditional on 9-22-2015 for CAS-7839
continue # Try adding this statement on Apr 2, 2012 to fix bug.
mytime -= 1 # Add this on 9-22-2015 for CAS-7839
if (overlaySpws or overlayBasebands):
if (xctr == firstUnflaggedAntennaToPlot):
firstUnflaggedAntennaToPlot += 1
if (firstUnflaggedAntennaToPlot >= len(antennasToPlot)):
firstUnflaggedAntennaToPlot = 0
if not finalSpwWasFlagged: # Added on 22-Apr-2014 for g25.27 dataset antenna='4'
if (overlayBasebands == False or spwctr>len(spwsToPlot)): # Added on 7/30/2014 for regression 96
mytime += 1
# print " 0001 incrementing mytime to ", mytime
if (debug):
print(" B) incrementing mytime to ", mytime)
print("----- Resetting firstUnflaggedAntennaToPlot from %d to %d" % (firstUnflaggedAntennaToPlot-1, firstUnflaggedAntennaToPlot))
print("----- = antenna %d" % (antennasToPlot[firstUnflaggedAntennaToPlot]))
if (not finalSpwWasFlagged): # add this test on Apr 22, 2014 to prevent crash on g25.27 dataset with antenna='4,5'
continue # Try this 'continue' on Apr 2, 2012 to fix bug -- works.
if (overlayAntennas==False and subplot==11
and not finalSpwWasFlagged # inserted on 22-Apr-2014 for g25.27
and not finalTimerangeFlagged # inserted on 04-Aug-2014 for CAS-6812
):
# added the case (subplot==11) on April 22, 2012 to prevent crash
# on multi-antenna subplot=421
if (debug):
print("####### removing [%d,%d,%d,%d] len(pages) was %d" % (pages[len(pages)-1][PAGE_ANT],
pages[len(pages)-1][PAGE_SPW],
pages[len(pages)-1][PAGE_TIME],
pages[len(pages)-1][PAGE_AP],len(pages)))
pages = pages[0:len(pages)-1]
newpage = 1
if (overlayAntennas==False):
if (doneOverlayTime==False # inserted on 08-Nov-2012
and not finalSpwWasFlagged): # inserted on 22-Apr-2014 for g25.27
if (debug): print("========== continuing before plotting mytime=%d" % (mytime))
continue
elif (debug):
print("=========== Not continuing because doneOverlayTime=",doneOverlayTime)
else:
allTimesFlaggedOnThisSpw = False
if (debug):
print("Not all the data are flagged. doneOverlayTime=%s" % (str(doneOverlayTime)))
if (firstSpwMatch == -1):
firstSpwMatch = spwctr
if (firstTimeMatch == -1):
firstTimeMatch = mytime
if (debug):
print("@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ Setting firstTimeMatch from -1 to ", firstTimeMatch)
# The following was needed to support overlay='antenna,time' for showatm for QA2 report (CAS-7820)
if (finalTimeMatch == -1 or finalTimeMatch < mytime):
if (debug):
print("@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ Setting finalTimeMatch from %d to %d" % (finalTimeMatch, mytime))
finalTimeMatch = mytime
######### Here is the amplitude plotting ############
if (yaxis.find('amp')>=0 or yaxis.find('both')>=0 or yaxis.find('ap')>=0) and doneOverlayTime==False:
if (overlayBasebands and amplitudeWithPhase): # CAS-6477
                  if (xframe % 10 != 0 and alreadyPlottedAmp): # i.e. not at the start of a row; the old float(xframe/10) test relied on Python-2 integer division
xframe -= 2
if (debug):
if (mytime < len(uniqueTimes)):
print("amp: xctr=%d, xant=%d, myap=%d, mytime=%d(%s), firstTimeMatch=%d, bOverlay=" % (xctr, xant, myap, mytime, utstring(uniqueTimes[mytime],3), firstTimeMatch), bOverlay)
if (myap==1):
if (overlayTimes == False or mytime==firstTimeMatch):
if ((overlaySpws == False and overlayBasebands==False) or
spwctr == spwctrFirstToPlot or
# (spwctr==lowestSpwInFirstScan and str(xframe)[-1] == '0') or
spwctr>len(spwsToPlot)):
if (overlayAntennas==False or xctr==firstUnflaggedAntennaToPlot
or xctr==antennasToPlot[-1]): # 2012-05-24, to fix the case where all ants flagged on one timerange
xframe += 1
if (debug):
print("y) incrementing xframe to %d" % xframe)
if (debug):
print("mytime=%d == firstTimeMatch=%d" % (mytime, firstTimeMatch))
print("xctr=%d == firstUnflaggedAntennaToPlot=%d, antennastoPlot[-1]=%d" % (xctr, firstUnflaggedAntennaToPlot,antennasToPlot[-1]))
print("spwctr=%d >? len(spwsToPlot)=%d" % (spwctr, len(spwsToPlot)))
myUniqueColor = []
newylimits = [LARGE_POSITIVE, LARGE_NEGATIVE]
else: # (myap == 0)
if (overlayTimes == False or mytime==firstTimeMatch):
if ((overlaySpws == False and overlayBasebands==False) or
spwctr==spwctrFirstToPlot or
# (spwctr==lowestSpwInFirstScan and str(xframe)[-1] == '0') or
(overlayBasebands and amplitudeWithPhase) or # CAS-6477
spwctr>len(spwsToPlot)):
if (overlayAntennas==False or xctr==firstUnflaggedAntennaToPlot
or xctr>antennasToPlot[-1]): # 2012-05-24, to fix the case where all ants flagged on one timerange
xframe += 1
if (debug):
print("Y) incrementing xframe to %d" % xframe)
if (debug):
print("mytime=%d == firstTimeMatch=%d" % (mytime, firstTimeMatch))
print("xctr=%d == firstUnflaggedAntennaToPlot=%d, antennasToPlot[-1]=%d" % (xctr, firstUnflaggedAntennaToPlot,antennasToPlot[-1]))
print("spwctr=%d >? len(spwsToPlot)=%d" % (spwctr, len(spwsToPlot)))
myUniqueColor = []
newylimits = [LARGE_POSITIVE, LARGE_NEGATIVE]
if (debug):
print("myap=%d, mytime == firstTimeMatch" % myap, firstTimeMatch)
else:
if (debug): print("4)Not incrementing xframe from %d" % (xframe))
else:
if (debug): print("2)Not incrementing xframe from %d (spwctr=%d >? len(spwsToPlot)=%d) or (spwctr=%d == spwctrFirstToPlot=%d)" % (xframe,spwctr,len(spwsToPlot),spwctr,spwctrFirstToPlot))
else:
if (debug): print("1)Not incrementing xframe from %d" % (xframe))
if (debug):
print("$$$$$$$$$$$$$$$$$$$$$$$ ready to plot amp on xframe %d" % (xframe))
if casaVersion < '5.9.9':
adesc = pb.subplot(xframe) # there is no deprecation warning in older CASA
if (previousSubplot != xframe):
if casaVersion >= '5.9.9':
adesc = pb.subplot(xframe) # avoid deprecation warning in CASA6 when xframe already was opened
drewAtmosphere = False
previousSubplot = xframe
alreadyPlottedAmp = True # needed for (overlay='baseband', yaxis='both') CAS-6477
# pb.hold(overlayAntennas or overlayTimes or overlaySpws or overlayBasebands) # not available in CASA6, but never needed
if (delay):
gampx = gplotx
else:
gampx = np.abs(gplotx)
if (nPolarizations == 2):
if (delay):
gampy = gploty
else:
gampy = np.abs(gploty)
if (yaxis.lower().find('db') >= 0):
gamp = [10*np.log10(gampx), 10*np.log10(gampy)]
else:
if (channeldiff>0):
if (xaxis == 'chan'):
gamp0, newx0, gamp0res, newx0res = channelDifferences(gampx, pchannels[0], resample)
gamp1, newx1, gamp1res, newx1res = channelDifferences(gampy, pchannels[1], resample)
pchannels = [newx0, newx1]
else:
gamp0, newx0, gamp0res, newx0res = channelDifferences(gampx, pfrequencies[0], resample)
gamp1, newx1, gamp1res, newx1res = channelDifferences(gampy, pfrequencies[1], resample)
pfrequencies = [newx0, newx1]
gamp = [gamp0, gamp1]
gampres = [gamp0res, gamp1res]
if (VisCal.lower().find('tsys') >= 0 and tsysPercent):
gamp = [100*gamp0/np.median(gampx), 100*gamp1/np.median(gampy)]
gampres = [100*gamp0res/np.median(gampx), 100*gamp1res/np.median(gampy)]
elif (VisCal.lower().find('tsys') < 0 and ampPercent):
gamp = [100*gamp0/np.median(gampx), 100*gamp1/np.median(gampy)]
gampres = [100*gamp0res/np.median(gampx), 100*gamp1res/np.median(gampy)]
gamp_mad = [madInfo(gampres[0],madsigma,edge,ispw,xant,0), madInfo(gampres[1],madsigma,edge,ispw,xant,1)]
gamp_std = [stdInfo(gampres[0],madsigma,edge,ispw,xant,0), stdInfo(gampres[1],madsigma,edge,ispw,xant,1)]
if (platformingSigma > 0):
platformingThresholdX = gamp_mad[0]['mad']*platformingSigma
platformingThresholdY = gamp_mad[1]['mad']*platformingSigma
else:
platformingThresholdX = platformingThreshold
platformingThresholdY = platformingThreshold
gamp_platforming = [platformingCheck(gamp[0],platformingThresholdX),
platformingCheck(gamp[1],platformingThresholdY)]
for p in [0,1]:
# print "gamp_mad[%d] = " % (p), gamp_mad[p]
# print "madstats[%s][%d] = " % (Antstring,ispw), madstats[Antstring][ispw]
madstats[Antstring][ispw][mytime][p]['amp'] = gamp_mad[p]['mad']
madstats[Antstring][ispw][mytime][p]['ampstd'] = gamp_std[p]['std']
if (gamp_platforming[p]):
if (Antstring not in madstats['platforming'].keys()):
madstats['platforming'][Antstring] = {}
if (ispw not in madstats['platforming'][Antstring].keys()):
madstats['platforming'][Antstring][ispw] = {}
if (p not in madstats['platforming'][Antstring][ispw].keys()):
madstats['platforming'][Antstring][ispw][p] = []
madstats['platforming'][Antstring][ispw][p].append(uniqueTimes[mytime])
if (gamp_mad[p]['nchan'] > 0):
print("%s, Pol %d, spw %2d, %s, amp: %4d points exceed %.1f sigma (worst=%.2f at chan %d)" % (Antstring, p, ispw, utstring(uniqueTimes[mytime],0), gamp_mad[p]['nchan'], madsigma, gamp_mad[p]['outlierValue'], gamp_mad[p]['outlierChannel']+pchannels[p][0]))
else:
gamp = [gampx,gampy]
else:
if (yaxis.lower().find('db') >= 0):
gamp = [10*np.log10(gampx)]
else:
if (channeldiff>0):
if (xaxis == 'chan'):
gamp0, newx0, gamp0res, newx0res = channelDifferences(gampx, pchannels[0], resample)
pchannels = [newx0]
else:
gamp0, newx0, gamp0res, newx0res = channelDifferences(gampx, pfrequencies[0], resample)
pfrequencies = [newx0]
gamp = [gamp0]
gampres = [gamp0res]
if (VisCal.lower().find('tsys') >= 0 and tsysPercent):
gamp = [100*gamp0/np.median(gampx)]
gampres = [100*gamp0res/np.median(gampx)]
elif (VisCal.lower().find('tsys') < 0 and ampPercent):
gamp = [100*gamp0/np.median(gampx)]
gampres = [100*gamp0res/np.median(gampx)]
p = 0
gamp_mad = [madInfo(gampres[p], madsigma,edge,ispw,xant,p)]
gamp_std = [stdInfo(gampres[p], madsigma,edge,ispw,xant,p)]
if (platformingSigma > 0):
platformingThresholdX = gamp_mad[0]['mad']*platformingSigma
else:
platformingThresholdX = platformingThreshold
gamp_platforming = [platformingCheck(gamp[p], platformingThresholdX)]
madstats[Antstring][ispw][mytime][p]['amp'] = gamp_mad[p]['mad']
madstats[Antstring][ispw][mytime][p]['ampstd'] = gamp_std[p]['std']
if (gamp_platforming[p]):
if (Antstring not in madstats['platforming'].keys()):
madstats['platforming'][Antstring] = {}
if (ispw not in madstats['platforming'][Antstring].keys()):
madstats['platforming'][Antstring][ispw] = {}
if (p not in madstats['platforming'][Antstring][ispw].keys()):
madstats['platforming'][Antstring][ispw][p] = []
madstats['platforming'][Antstring][ispw][p].append(mytime)
if (gamp_mad[p]['nchan'] > 0):
print("%s, Pol %d, spw %2d, %s, amp: %4d points exceed %.1f sigma (worst=%.2f at chan %d)" % (Antstring, p, ispw, utstring(uniqueTimes[mytime],0), gamp_mad[p]['nchan'], madsigma, gamp_mad[p]['outlierValue'], gamp_mad[p]['outlierChannel']+pchannels[p][0]))
else:
gamp = [gampx]
if (bOverlay):
if (delay):
gampx2 = gplotx2 + caltable2amplitudeOffset
else:
gampx2 = np.abs(gplotx2) + caltable2amplitudeOffset
if (nPolarizations == 2):
if (delay):
gampy2 = gploty2 + caltable2amplitudeOffset
else:
gampy2 = np.abs(gploty2) + caltable2amplitudeOffset
if (yaxis.lower().find('db') >= 0):
gamp2 = [10*np.log10(gampx2), 10*np.log10(gampy2)]
else:
if (channeldiff>0):
if (xaxis == 'chan'):
gamp2_0, newx0, gamp2_0res, newx0res = channelDifferences(gampx2, pchannels2[0], resample)
gamp2_1, newx1, gamp2_1res, newx1res = channelDifferences(gampy2, pchannels2[1], resample)
pchannels2 = [newx0, newx1]
else:
gamp2_0, newx0, gamp2_0res, newx0res = channelDifferences(gampx2, pfrequencies2[0], resample)
gamp2_1, newx1, gamp2_1res, newx1res = channelDifferences(gampy2, pfrequencies2[1], resample)
pfrequencies2 = [newx0, newx1]
gamp2 = [gamp2_0, gamp2_1]
gamp2res = [gamp2_0res, gamp2_1res]
if (VisCal.lower().find('tsys') >= 0 and tsysPercent):
gamp2 = [100*gamp2_0/np.median(gampx2), 100*gamp2_1/np.median(gampy2)]
gamp2res = [100*gamp2_0res/np.median(gampx2), 100*gamp2_1res/np.median(gampy2)]
elif (VisCal.lower().find('tsys') < 0 and ampPercent):
gamp2 = [100*gamp2_0/np.median(gampx2), 100*gamp2_1/np.median(gampy2)]
gamp2res = [100*gamp2_0res/np.median(gampx2), 100*gamp2_1res/np.median(gampy2)]
else:
gamp2 = [gampx2, gampy2]
else:
if (yaxis.lower().find('db') >= 0):
gamp2 = [10*np.log10(gampx2)]
else:
if (channeldiff>0):
if (xaxis == 'chan'):
gamp2_0, newx0, gamp2_0res, newx0res = channelDifferences(gampx2, pchannels[0], resample)
pchannels2 = [newx0]
else:
gamp2_0, newx0, gamp2_0res, newx0res = channelDifferences(gampx2, pfrequencies[0], resample)
pfrequencies2 = [newx0]
gamp2 = [gamp2_0]
gamp2res = [gamp2_0res]
if (VisCal.lower().find('tsys') >= 0 and tsysPercent):
gamp2 = [100*gamp2_0/np.median(gampx2)]
gamp2res = [100*gamp2_0res/np.median(gampx2)]
elif (VisCal.lower().find('tsys') < 0 and ampPercent):
gamp2 = [100*gamp2_0/np.median(gampx2)]
gamp2res = [100*gamp2_0res/np.median(gampx2)]
else:
gamp2 = [gampx2]
if (xaxis.find('chan')>=0 or (msFound==False and tableFormat==33)): # 'amp'
if (debug):
print("amp: plot vs. channel **********************")
# pb.hold(True) # not available in CASA6, but never needed
for p in range(nPolarizations):
if (overlayAntennas or overlayTimes):
if (corr_type_string[p] in polsToPlot):
pdesc = pb.plot(pchannels[p],gamp[p],'%s'%ampmarkstyles[p],
markersize=markersize,
markerfacecolor=overlayColors[xctr],markeredgewidth=markeredgewidth)
newylimits = recalcYlimits(plotrange,newylimits,gamp[p])
if (overlayAntennas and overlayTimes==False):
pb.setp(pdesc, color=overlayColors[xctr])
elif (overlayTimes and overlayAntennas==False):
pb.setp(pdesc, color=overlayColors[mytime])
elif (overlayTimes and overlayAntennas): # try to support time,antenna
if (debug):
print("p=%d, len(fieldsToPlot)=%d, len(timerangeList)=%d" % (p,len(fieldsToPlot),len(timerangeList)))
if (len(fieldsToPlot) > 1 or len(timerangeList)>1):
# The third 'or' below is needed if pol='0' is flagged on antenna 0. -- 2012/10/12
if (p==0 or len(polsToPlot)==1 or myUniqueColor==[]):
myUniqueColor.append(overlayColors[len(myUniqueColor)])
pb.setp(pdesc, color=myUniqueColor[-1])
else:
if (corr_type_string[p] in polsToPlot):
# print "pcolor[%d]=%s" % (p,pcolor)
pb.plot(pchannels[p],gamp[p],'%s%s'%(pcolor[p],ampmarkstyle), markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimits(plotrange,newylimits,gamp[p])
if asciiFile:
asciiDesc = open('pol%d_vs_chan.txt' % p,'w')
for i in range(len(pchannels[p])):
asciiDesc.write('%f %f\n'%(pchannels[p][i], gamp[p][i]))
asciiDesc.close()
if (sum(xflag)>0):
myxrange = np.max(channels)-np.min(channels)
SetNewXLimits([np.min(channels)-myxrange/20, np.max(channels)+myxrange/20],loc=2)
# print "amp: Resetting xaxis channel range to counteract flagged data"
if (xframe in bottomRowFrames or (xctr+1==len(antennasToPlot) and ispw==spwsToPlot[-1])):
pb.xlabel("Channels (%d)" % (len(pchannels[p])), size=mysize)
elif (xaxis.find('freq')>=0): # amp
if (bOverlay):
# pb.hold(True)# not available in CASA6, but never needed
myxrange = np.abs(xfrequencies[0]-xfrequencies[-1])
try:
xrange2 = np.abs(xfrequencies2[0]-xfrequencies2[-1])
except:
print("No amp data found in second solution for spw %d. Try increasing the solutionTimeThresholdSeconds above %.0f." % (ispw,solutionTimeThresholdSeconds))
print("If this doesn't work, email the developer (%s)." % (developerEmail))
return(vm)
                  if (np.abs(myxrange/xrange2 - 1) > 0.05 + len(xflag)/len(xchannels)): # tolerance 0.05 plus flagged fraction; 2000/1875-1 = 0.0666 is the TDM/FDM width ratio
# These line widths are optimal for visualizing FDM over TDM
# print "*** Solutions differ in frequency width"
width1 = 1
width2 = 4
# solutions differ in frequency width
if (myxrange < xrange2):
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
pdesc = pb.plot(pfrequencies[p], gamp[p], '%s%s'%(pcolor[p],ampmarkstyle), linewidth=width2, markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gamp[p], sideband,plotrange,xchannels,debug,1,chanrangePercent)
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
pdesc = pb.plot(pfrequencies2[p], gamp2[p], '%s%s'%(p2color[p],ampmarkstyle), linewidth=width1, markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gamp2[p], sideband,plotrange,xchannels2,debug,2,chanrangePercent)
else:
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
pdesc = pb.plot(pfrequencies2[p], gamp2[p], '%s%s'%(p2color[p],ampmarkstyle), linewidth=width2, markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gamp2[p], sideband,plotrange,xchannels2,debug,3,chanrangePercent)
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
pdesc = pb.plot(pfrequencies[p], gamp[p], '%s%s'%(pcolor[p],ampmarkstyle), linewidth=width1, markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gamp[p], sideband,plotrange,xchannels,debug,4,chanrangePercent)
else:
width1 = 1
width2 = 2 # Just enough to distinguish one plotted line from the other.
# solutions may be different level of smoothing, so plot highest rms first
if (madOfDiff(gamp[0]) < madOfDiff(gamp2[0]) and firstPlot != 1):
# plot second solution first
# print "**** MAD of caltable=%f < caltable2=%f" % (madOfDiff(gamp[0]), madOfDiff(gamp2[0]))
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
pdesc = pb.plot(pfrequencies2[p], gamp2[p], '%s%s'%(p2color[p],ampmarkstyle), linewidth=width1, markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gamp2[p], sideband,plotrange,xchannels2,debug,5,chanrangePercent)
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
pdesc = pb.plot(pfrequencies[p], gamp[p], '%s%s'%(pcolor[p],ampmarkstyle), linewidth=width2, markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gamp[p], sideband,plotrange,xchannels,debug,6,chanrangePercent)
else:
# plot first solution first
# print "**** MAD of caltable=%f >= caltable2=%f" % (madOfDiff(gamp[0]), madOfDiff(gamp2[0]))
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
pdesc = pb.plot(pfrequencies[p], gamp[p], '%s%s'%(pcolor[p],ampmarkstyle), linewidth=width2, markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gamp[p], sideband,plotrange,xchannels,debug,7,chanrangePercent)
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
pdesc = pb.plot(pfrequencies2[p], gamp2[p], '%s%s'%(p2color[p],ampmarkstyle), linewidth=width1, markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gamp2[p], sideband,plotrange,xchannels2,debug,8,chanrangePercent)
# must set new limits after plotting 'amp'
if (zoom=='intersect'):
if (myxrange < xrange2):
SetNewXLimits([min(xfrequencies[0],xfrequencies[-1])-myxrange*0.1, max(xfrequencies[0],xfrequencies[-1])+myxrange*0.1],loc=3)
SetLimits(plotrange, chanrange, newylimits, channels, frequencies,
pfrequencies, ampMin, ampMax, xaxis, pxl, chanrangeSetXrange,
chanrangePercent,loc=101)
else:
# print "len(xfrequencies2) = ", len(xfrequencies2)
SetNewXLimits([min(xfrequencies2[0],xfrequencies2[-1])-xrange2*0.1, max(xfrequencies2[0],xfrequencies2[-1])+xrange2*0.1],loc=4)
slstatus = SetLimits(plotrange, chanrange, newylimits, channels, frequencies2,
pfrequencies2, ampMin, ampMax, xaxis, pxl,
chanrangeSetXrange, chanrangePercent,loc=102)
else:
if (myxrange < xrange2):
SetLimits(plotrange, chanrange, newylimits, channels, frequencies,
pfrequencies, ampMin, ampMax, xaxis, pxl,
chanrangeSetXrange, chanrangePercent,loc=103)
else:
SetLimits(plotrange, chanrange, newylimits, channels, frequencies2,
pfrequencies2, ampMin, ampMax, xaxis, pxl,
chanrangeSetXrange, chanrangePercent,loc=104)
# draw polarization and spw labels
if (xframe == firstFrame):
# draw title including caltable name
caltableList = 'c1=' + caltable + ', c2=' + caltable2 # + ' (%s)'%(utstring(uniqueTimes2[mytime],3))
pb.text(xstartTitle, ystartTitle, caltableList, size=titlesize,
color='k', transform=pb.gcf().transFigure)
if (caltable2amplitudeOffset != 0):
pb.text(xstartTitle, 0.935, 'c2 amplitude offset = %.3f' % (caltable2amplitudeOffset),
color='k',size=titlesize,transform=pb.gcf().transFigure)
elif (bpolyOverlay):
if (debug):
print("in bpolyOverlay **********************************")
matches1 = []
for tbp in range(len(timesBP)):
if (sloppyMatch(uniqueTimes[mytime], timesBP[tbp], solutionTimeThresholdSeconds,
mytime, scansToPlotPerSpw[ispw], scansForUniqueTimes, # au version
myprint=debugSloppyMatch
)):
matches1.append(tbp)
matches1 = np.array(matches1)
if (len(matches1) < 1):
print("No time match found between %.1f and " % (uniqueTimes[mytime]), timesBP)
print("If you are sure the solutions correspond to the same data, you can set solutionTimeThresholdSeconds>=%.0f" % (1+np.ceil(np.abs(timesBP[0]-uniqueTimes[mytime]))))
return(vm)
matches2 = np.where(xant == np.array(antennasBP))[0]
if (len(matches2) < 1):
print("No antenna match found: ", xant, antennasBP)
if (tableFormat == 33):
matches3 = np.where(ispw == np.array(cal_desc_idBP))[0]
if (len(matches3) < 1):
print("No spw match found: %d not in " % (ispw), cal_desc_idBP)
else:
matches3 = np.where(ispw == np.array(spwBP))[0]
if (len(matches3) < 1):
print("No spw match found: %d not in " % (ispw), spwBP)
matches12 = np.intersect1d(matches1,matches2)
if (len(matches12) < 1):
print("No time+antenna match between: ", matches1, matches2)
matches = np.intersect1d(matches12, matches3)
if (len(matches) < 1):
print("No time+antenna+spw match between: ", matches12, matches3)
try:
index = matches[0]
if (debug):
print("Match = %d ***********************************" % (index))
        except IndexError:  # matches is empty: no time+antenna+spw match
print("No match found for time=%.6f, xant=%d, ispw=%d" % (uniqueTimes[mytime],xant,ispw))
print("antennasBP = ", antennasBP)
print("cal_desc_idBP = ", cal_desc_idBP)
print("timesBP = ")
for i in timesBP:
print("%.6f, " % i)
return(vm)
validDomain = [frequencyLimits[0,index], frequencyLimits[1,index]]
cc = calcChebyshev(polynomialAmplitude[index][0:nPolyAmp[index]], validDomain, frequenciesGHz[index]*1e+9)
fa = np.array(frequenciesGHz[index])
if (xfrequencies[0] < xfrequencies[-1]):
matches = np.where(fa>xfrequencies[0])[0]
matches2 = np.where(fa<xfrequencies[-1])[0]
else:
matches = np.where(fa>xfrequencies[-1])[0]
matches2 = np.where(fa<xfrequencies[0])[0]
if (len(matches) < 1):
print("looking for %f-%f GHz inside %f-%f" % (xfrequencies[0],xfrequencies[-1],fa[0],fa[-1]))
amplitudeSolutionX = np.mean(gampx)*(cc-np.mean(cc)+1)
cc = calcChebyshev(polynomialAmplitude[index][nPolyAmp[index]:2*nPolyAmp[index]], validDomain, frequenciesGHz[index]*1e+9)
if (debug):
print("nPol=%d, len(xfrequencies)=%d, len(yfrequencies)=%d" % (nPolarizations,len(xfrequencies),len(yfrequencies)))
if (nPolarizations > 1):
if (yfrequencies[0] < yfrequencies[-1]):
matches = np.where(fa>yfrequencies[0])[0]
matches2 = np.where(fa<yfrequencies[-1])[0]
else:
matches = np.where(fa>yfrequencies[-1])[0]
matches2 = np.where(fa<yfrequencies[0])[0]
amplitudeSolutionY = np.mean(gampy)*(cc-np.mean(cc)+1)
if (bpolyOverlay2):
validDomain = [frequencyLimits2[0,index], frequencyLimits2[1,index]]
cc = calcChebyshev(polynomialAmplitude2[index][0:nPolyAmp2[index]],
validDomain, frequenciesGHz2[index]*1e+9)
fa = np.array(frequenciesGHz2[index])
if (xfrequencies[0] < xfrequencies[-1]):
matches = np.where(fa>xfrequencies[0])[0]
matches2 = np.where(fa<xfrequencies[-1])[0]
else:
matches = np.where(fa>xfrequencies[-1])[0]
matches2 = np.where(fa<xfrequencies[0])[0]
amplitudeSolution2X = np.mean(gampx)*(cc-np.mean(cc)+1)
cc = calcChebyshev(polynomialAmplitude2[index][nPolyAmp2[index]:2*nPolyAmp2[index]],
validDomain, frequenciesGHz2[index]*1e+9)
fa = np.array(frequenciesGHz2[index])
if (debug):
print("nPol=%d, len(xfrequencies)=%d, len(yfrequencies)=%d" % (nPolarizations,len(xfrequencies),len(yfrequencies)))
if (nPolarizations > 1):
if (yfrequencies[0] < yfrequencies[-1]):
matches = np.where(fa>yfrequencies[0])[0]
matches2 = np.where(fa<yfrequencies[-1])[0]
else:
matches = np.where(fa>yfrequencies[-1])[0]
matches2 = np.where(fa<yfrequencies[0])[0]
amplitudeSolution2Y = np.mean(gampy)*(cc-np.mean(cc)+1)
# pb.hold(True)# not available in CASA6, but never needed
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
pdesc = pb.plot(pfrequencies[p], gamp[p],'%s%s'%(pcolor[p],ampmarkstyle), markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gamp[p], sideband,plotrange,xchannels,debug,9,chanrangePercent)
if (corrTypeToString(corr_type[0]) in polsToPlot):
pdesc = pb.plot(frequenciesGHz[index], amplitudeSolutionX,'%s%s'%(p2color[0],bpolymarkstyle),markeredgewidth=markeredgewidth,markersize=markersize)
newylimits = recalcYlimitsFreq(chanrange, newylimits, amplitudeSolutionX, sideband,plotrange,xchannels,debug,10,chanrangePercent)
pdesc = pb.plot(frequenciesGHz2[index], amplitudeSolution2X, '%s%s'%(p3color[0],bpolymarkstyle),markeredgewidth=markeredgewidth,markersize=markersize)
newylimits = recalcYlimitsFreq(chanrange, newylimits, amplitudeSolution2X, sideband,plotrange,xchannels2,debug,11,chanrangePercent)
if (nPolarizations == 2):
if (corrTypeToString(corr_type[1]) in polsToPlot):
pdesc = pb.plot(frequenciesGHz[index], amplitudeSolutionY,'%s%s'%(p2color[1],bpolymarkstyle),markeredgewidth=markeredgewidth,markersize=markersize)
newylimits = recalcYlimitsFreq(chanrange, newylimits, amplitudeSolutionY, sideband,plotrange,ychannels,debug,12,chanrangePercent)
pdesc = pb.plot(frequenciesGHz2[index], amplitudeSolution2Y, '%s%s'%(p3color[1],bpolymarkstyle),markeredgewidth=markeredgewidth,markersize=markersize)
newylimits = recalcYlimitsFreq(chanrange, newylimits, amplitudeSolution2Y, sideband,plotrange,ychannels2,debug,13,chanrangePercent)
else:
# pb.hold(True)# not available in CASA6, but never needed
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
pdesc = pb.plot(pfrequencies[p], gamp[p],'%s%s'%(pcolor[p],ampmarkstyle), markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gamp[p], sideband,plotrange,xchannels,debug,10,chanrangePercent)
if (corrTypeToString(corr_type[0]) in polsToPlot):
pdesc = pb.plot(frequenciesGHz[index], amplitudeSolutionX,'%s%s'%(p2color[0],bpolymarkstyle),markeredgewidth=markeredgewidth,markersize=markersize)
newylimits = recalcYlimitsFreq(chanrange, newylimits, amplitudeSolutionX, sideband,plotrange,xchannels,debug,14,chanrangePercent)
if (nPolarizations == 2):
if (corrTypeToString(corr_type[1]) in polsToPlot):
pdesc = pb.plot(frequenciesGHz[index], amplitudeSolutionY,'%s%s'%(p2color[1],bpolymarkstyle),markeredgewidth=markeredgewidth,markersize=markersize)
newylimits = recalcYlimitsFreq(chanrange, newylimits, amplitudeSolutionY, sideband,plotrange,ychannels,debug,15,chanrangePercent)
# endif (bpolyOverlay2)
else:
# we are not overlaying any B or polynomial solutions 'amp vs. freq'
if (showflagged):
# Also show the flagged data to see where the flags are
# pb.hold(True) # Matches line 2326 for xaxis='chan' # not available in CASA6, but never needed
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
if (overlayAntennas or overlayTimes):
pdesc1 = pb.plot(pfrequencies[p], gamp[p], '%s'%ampmarkstyles[p], markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gamp[p], sideband,plotrange,xchannels,debug,16,chanrangePercent)
if (overlayAntennas and overlayTimes==False):
pb.setp(pdesc1, color=overlayColors[xctr])
elif (overlayTimes and overlayAntennas==False):
pb.setp(pdesc1, color=overlayColors[mytime])
elif (overlayTimes and overlayAntennas): # try to support antenna,time
if (myUniqueTime != []):
pb.setp(pdesc1, color=overlayColors[myUniqueTime])
# The third 'or' below is needed if pol='0' is flagged on antenna 0. -- 2012/10/12 (original spot)
if (p==0 or len(polsToPlot)==1 or myUniqueColor==[]):
myUniqueColor.append(overlayColors[len(myUniqueColor)])
pb.setp(pdesc1, color=myUniqueColor[-1])
else:
pdesc = pb.plot(pfrequencies[p], gamp[p], '%s%s'%(pcolor[p],ampmarkstyles[p]), markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gamp[p], sideband,plotrange,xchannels,debug,17,chanrangePercent)
else: # showing only unflagged data 'amp vs. freq'
# pb.hold(True) # not available in CASA6, but never needed
for p in range(nPolarizations):
if (debug):
print("p=%d, polsToPlot=%s, len(fieldsToPlot)=%d, len(timerangeList)=%d, myUniqueTime=" % (p,str(polsToPlot),len(fieldsToPlot),len(timerangeList)), myUniqueTime)
if (corrTypeToString(corr_type[p]) in polsToPlot):
if (len(gamp[p]) == 0): # Try this on Apr 2, 2012
# print "=============== Skipping flagged data on antenna %d = %s" % (xant,antstring)
continue
if (overlayAntennas or overlayTimes):
pdesc = pb.plot(pfrequencies[p], gamp[p], '%s'%ampmarkstyles[p],
markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gamp[p], sideband,
plotrange,xchannels,debug,18,chanrangePercent)
# print " antenna=%s, pol=%d, ispw=%d" % (antstring,p,ispw)
if (overlayAntennas and overlayTimes==False):
pb.setp(pdesc, color=overlayColors[xctr])
elif (overlayTimes and overlayAntennas==False):
pb.setp(pdesc, color=overlayColors[mytime])
elif (overlayTimes and overlayAntennas): # try to support antenna,time
if (myUniqueTime != []):
pb.setp(pdesc, color=overlayColors[myUniqueTime])
# The third 'or' below is needed if pol='0' is flagged on antenna 0. -- 2012/10/12 (original spot)
if (p==0 or len(polsToPlot)==1 or myUniqueColor==[]):
myUniqueColor.append(overlayColors[len(myUniqueColor)])
if (debug):
print("myUniqueColor = ", myUniqueColor)
pb.setp(pdesc, color=myUniqueColor[-1])
elif (overlaySpws):
if overlaySpwDistinguish.find('color') >= 0:
mycolor = overlayColors[list(spwsToPlot).index(ispw)+p*len(spwsToPlot)]
if overlaySpwDistinguish.find('width') >= 0:
if overlaySpwDistinguish.find('width2') >= 0:
linewidth = 1+2*list(spwsToPlot).index(ispw)
else:
linewidth = 1+list(spwsToPlot).index(ispw)
else:
linewidth = 1
elif overlaySpwDistinguish.find('width') >= 0:
mycolor = [xcolor,ycolor][p]
if overlaySpwDistinguish.find('width2') >= 0:
linewidth = 1+2*list(spwsToPlot).index(ispw)
else:
linewidth = 1+list(spwsToPlot).index(ispw)
else:
mycolor = [xcolor,ycolor][p]
linewidth = 1
pdesc = pb.plot(pfrequencies[p], gamp[p], '%s'%(ampmarkstyles[0]), lw=linewidth, color=mycolor,
markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gamp[p], sideband,
plotrange,xchannels,debug,-18,chanrangePercent)
# print " antenna=%s, pol=%d, ispw=%d" % (antstring,p,ispw)
else: # show unflagged solutions, no overlay
if (corrTypeToString(corr_type[p]) in polsToPlot):
# since there is no overlay, don't use dashed line, so zero here ---------v
pdesc = pb.plot(pfrequencies[p], gamp[p], '%s%s'%(pcolor[p],ampmarkstyles[0]),markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gamp[p], sideband,plotrange,xchannels,debug,19,chanrangePercent)
                          if asciiFile:
                              # use a context manager so the file is closed even if a write fails
                              with open('pol%d_vs_freq.txt' % p, 'w') as asciiDesc:
                                  for i in range(len(pfrequencies[p])):
                                      asciiDesc.write('%f %f\n' % (pfrequencies[p][i], gamp[p][i]))
# print "newylimits for amp = ", newylimits
if (sum(xflag)>0):
# print "amp: Resetting xaxis frequency range to counteract flagged data"
myxrange = np.max(frequencies)-np.min(frequencies)
SetNewXLimits([np.min(frequencies)-0.15*myxrange, np.max(frequencies)+0.15*myxrange],loc=5)
              if (True or (xframe in bottomRowFrames) or (xctr+1==len(antennasToPlot) and ispw==spwsToPlot[-1])):
                  # always True because spw might change between top row and bottom row of frames
                  pb.xlabel(xlabelString, size=mysize)
# endif (xaxis=='chan' elif xaxis=='freq' for 'amp')
if (overlayTimes):
timeString =''
else:
if (len(uniqueTimes) > mytime):
timeString = ', t%d/%d %s' % (mytime,nUniqueTimes-1,utstring(uniqueTimes[mytime],3))
if (scansForUniqueTimes != []):
if (scansForUniqueTimes[mytime]>=0):
timeString = ', scan%d %s' % (scansForUniqueTimes[mytime],utstring(uniqueTimes[mytime],3))
spwString = buildSpwString(overlaySpws, overlayBasebands,
spwsToPlot, ispw, originalSpw[ispw],
observatoryName, baseband,
showBasebandNumber)
if (overlayTimes and len(fieldsToPlot) > 1):
indices = fstring = ''
for f in fieldIndicesToPlot:
if (f != fieldIndicesToPlot[0]):
indices += ','
fstring += ','
indices += str(uniqueFields[f])
if (msFound):
fstring += msFields[uniqueFields[f]]
if (len(fstring) > fstringLimit):
fstring = fstring[0:fstringLimit] + '...'
titleString = "%sspw%s, fields %s: %s%s" % (antennaString,spwString,
indices, fstring, timeString)
else:
titleString = "%sspw%s, field %d: %s%s" % (antennaString,spwString,uniqueFields[fieldIndex],
fieldString,timeString)
              tsize = titlesize-int(len(titleString)/max(1,int(maxCharsBeforeReducingTitleFontSize/subplotCols)))  # max(1,...) avoids ZeroDivisionError
pb.title(titleString, size=tsize)
if (abs(plotrange[0]) > 0 or abs(plotrange[1]) > 0):
SetNewXLimits([plotrange[0],plotrange[1]],loc=6)
else:
# Here is 1st place where we eliminate white space on right and left edge of the plots: 'amp'
if (xaxis.find('chan')>=0):
SetNewXLimits([channels[0],channels[-1]],loc=7)
else:
if (zoom != 'intersect'):
if (overlaySpws or overlayBasebands):
# print "frequencyRangeToPlotInBaseband[%d]=" % (bbctr), frequencyRangeToPlotInBaseband[bbctr]
SetNewXLimits(frequencyRangeToPlotInBaseband[bbctr],loc=8)
else:
SetNewXLimits([frequencies[0], frequencies[-1]],loc=9)
if (bOverlay):
if (xrange2 > myxrange+0.1 and zoom != 'intersect'):
TDMisSecond = True
if (abs(plotrange[2]) > 0 or abs(plotrange[3]) > 0):
SetNewYLimits([plotrange[2],plotrange[3]],loc=3)
ResizeFonts(adesc,mysize)
adesc.xaxis.grid(True,which='major')
adesc.yaxis.grid(True,which='major')
pb.ylabel(yAmplitudeLabel, size=mysize)
pb.subplots_adjust(hspace=myhspace, wspace=mywspace)
xlim = pb.xlim()
ylim = pb.ylim()
myxrange = xlim[1]-xlim[0]
yrange = ylim[1]-ylim[0]
# print "amp: ylim, yrange = ", ylim, yrange
if (overlayAntennas == False and overlayTimes == False and bOverlay == False and
((overlaySpws == False and overlayBasebands == False) or
(spwctr==spwctrFirstToPlot or overlaySpwDistinguish.find('color')>=0))):
# draw polarization labels for no overlay, or overlaySpws/overlayBasebands
x0 = xstartPolLabel
y0 = ystartPolLabel
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
if overlaySpwDistinguish.find('color') >= 0:
mySpwIndex = list(spwsToPlot).index(ispw)
mycolor = overlayColors[mySpwIndex+p*len(spwsToPlot)]
pb.text(x0+mySpwIndex*0.04, y0-subplotRows*p*0.03, corrTypeToString(corr_type[p]),
color=mycolor,size=mysize, transform=pb.gca().transAxes)
elif spwctr==spwctrFirstToPlot or (not overlaySpws and not overlayBasebands):
# no need to plot it more than once in the same position
pb.text(x0, y0-subplotRows*p*0.03, corrTypeToString(corr_type[p]),
color=pcolor[p],size=mysize, transform=pb.gca().transAxes)
elif debug:
print("Not drawing polarization labels because %d != %d" % (spwctr,spwctrFirstToPlot))
if (channeldiff > 0):
pb.text(x0, ystartMadLabel-0.03*subplotRows*p,
corrTypeToString(corr_type[p])+' MAD = %.4f, St.Dev = %.4f'%(gamp_mad[p]['mad'],gamp_std[p]['std']),
color=pcolor[p],size=mysize, transform=pb.gca().transAxes)
if (xframe == firstFrame):
# draw title including caltable name
caltableList = caltableTitle
if (bpolyOverlay):
caltableList += ', ' + caltable2 + ' (degamp=%d, degphase=%d)'%(nPolyAmp[index]-1,nPolyPhase[index]-1)
if (bpolyOverlay2):
caltableList += ', ' + caltable3 + ' (degamp=%d, degphase=%d)'%(nPolyAmp2[index]-1,nPolyPhase2[index]-1)
pb.text(xstartTitle, ystartTitle, caltableList, size=titlesize,
color='k', transform=pb.gcf().transFigure)
elif (overlayAntennas==True and xant==antennasToPlot[-1] and bOverlay == False # ):
and overlayTimes==False): # try to support antenna,time avoid antenna labels 'amp'
# We do this last, because by then, the limits will be stable.
x0 = xstartPolLabel
y0 = ystartPolLabel
# draw polarization labels for overlayAntennas
if (corrTypeToString(corr_type[0]) in polsToPlot):
if (channeldiff > 0):
pb.text(x0, ystartMadLabel-0.03*subplotRows*0,
corrTypeToString(corr_type[0])+' MAD = %.4f, St.Dev = %.4f'%(gamp_mad[0]['mad'],gamp_std[0]['std']),
color=overlayColors[0],size=mysize, transform=pb.gca().transAxes)
if (ampmarkstyle.find('-')>=0):
pb.text(x0, y0, corrTypeToString(corr_type[0])+' solid', color=overlayColors[0],size=mysize,
transform=pb.gca().transAxes)
else:
pb.text(x0+0.02, y0, corrTypeToString(corr_type[0]), color=overlayColors[0],size=mysize,
transform=pb.gca().transAxes)
pdesc = pb.plot([x0-0.01], [y0], '%sk'%ampmarkstyle, markersize=markersize,
scalex=False,scaley=False, transform=pb.gca().transAxes,markeredgewidth=markeredgewidth)
if (len(corr_type) > 1):
if (corrTypeToString(corr_type[1]) in polsToPlot):
if (channeldiff > 0):
pb.text(x0, ystartMadLabel-0.03*subplotRows*1,
corrTypeToString(corr_type[1])+' MAD = %.4f, St.Dev = %.4f'%(gamp_mad[1]['mad'],gamp_std[1]['std']),
color=overlayColors[0],size=mysize, transform=pb.gca().transAxes)
if (ampmarkstyle2.find('--')>=0):
pb.text(x0, y0-0.03*subplotRows, corrTypeToString(corr_type[1])+' dashed',
color=overlayColors[0],size=mysize, transform=pb.gca().transAxes)
else:
pb.text(x0+0.02, y0-0.03*subplotRows, corrTypeToString(corr_type[1]),
color=overlayColors[0],size=mysize, transform=pb.gca().transAxes)
pdesc = pb.plot([x0-0.01], [y0-0.03*subplotRows], '%sk'%ampmarkstyle2,
markersize=markersize, scalex=False,scaley=False,markeredgewidth=markeredgewidth)
if (xframe == firstFrame):
# draw title including caltable name
pb.text(xstartTitle, ystartTitle, caltableTitle, size=titlesize, color='k',
transform=pb.gcf().transFigure)
DrawAntennaNames(msAnt, antennasToPlot, msFound, mysize)
elif (overlayTimes==True and bOverlay == False
and overlayAntennas==False): # try to support antenna,time
doneOverlayTime = True # assumed until proven otherwise in the 'for' loop
for f in fieldIndicesToPlot:
if (len(uniqueTimesPerFieldPerSpw[ispwInCalTable][f]) > 0):
if ((uniqueTimes[mytime] < uniqueTimesPerFieldPerSpw[ispwInCalTable][f][-1]-solutionTimeThresholdSeconds) and
(uniqueTimes[mytime] < timerangeListTimes[-1])):
if (debug):
print("-----------Not finished because %.0f < %.0f-%d for fieldIndex=%d and <%.0f" % (uniqueTimes[mytime], uniqueTimesPerFieldPerSpw[ispwInCalTable][f][-1], solutionTimeThresholdSeconds, f, timerangeListTimes[-1]))
print("-----------ispwInCalTable=%d, mytime=%d, len(uniqueTimes) = %d" % (ispwInCalTable, mytime, len(uniqueTimes)))
doneOverlayTime = False
if (debug):
print("------doneOverlayTime = %s" % (str(doneOverlayTime)))
if (doneOverlayTime):
# either it is the last time of any times in solution, or the last time
# in the list of times to plot
if (debug):
print("****** on last time = %d for last fieldIndex %d or %d>=%d" % (mytime,fieldIndex,mytime,timerangeList[-1]))
mytime = nUniqueTimes-1
# We do this last, because by then, the limits will be broad enough and stable.
# draw polarization labels
DrawPolarizationLabelsForOverlayTime(xstartPolLabel,ystartPolLabel,corr_type,polsToPlot,
channeldiff,ystartMadLabel,subplotRows,gamp_mad,mysize,
ampmarkstyle,markersize,ampmarkstyle2, gamp_std)
if (xframe == firstFrame):
# draw title including caltable name
pb.text(xstartTitle, ystartTitle, caltableTitle, size=titlesize,
color='k', transform=pb.gcf().transFigure)
drawOverlayTimeLegends(xframe,firstFrame,xstartTitle,ystartTitle,caltable,
titlesize,fieldIndicesToPlot,ispwInCalTable,
uniqueTimesPerFieldPerSpw,
timerangeListTimes, solutionTimeThresholdSeconds,
debugSloppyMatch,ystartOverlayLegend,debug,mysize,
fieldsToPlot,myUniqueColor,timeHorizontalSpacing,
fieldIndex,overlayColors, antennaVerticalSpacing,
overlayAntennas, timerangeList, caltableTitle,
mytime, scansToPlotPerSpw[ispw], scansForUniqueTimes)
elif (overlayAntennas and overlayTimes): # Oct 23, 2012
# This will only happen for overlay='antenna,time'
if (debug):
print("In overlay antenna,time case, xframe=%d, firstFrame=%d, mytime=%d, firstTimeMatch=%d, xctr=%d, firstUnflaggedAntennaToPlot=%d" % (xframe,firstFrame,mytime,firstTimeMatch,xctr,firstUnflaggedAntennaToPlot))
# if (xframe == firstFrame and mytime == 0 and xctr==firstUnflaggedAntennaToPlot and bOverlay==False):
if (xframe == firstFrame and mytime == firstTimeMatch and xctr==firstUnflaggedAntennaToPlot and bOverlay==False): # bug fix on 2015-08-19 for CAS-7820
# draw title including caltable name
pb.text(xstartTitle, ystartTitle, caltableTitle, size=titlesize, color='k',
transform=pb.gcf().transFigure)
DrawBottomLegendPageCoords(msName, uniqueTimes[mytime], mysize, figfile)
# Adding the following 'for' loop on Mar 13, 2013 to support the case of
# single time range with overlay='antenna,time'
if (xant==antennasToPlot[-1]):
doneOverlayTime = True # assumed until proven otherwise in the 'for' loop
for f in fieldIndicesToPlot:
if (len(uniqueTimesPerFieldPerSpw[ispwInCalTable][f]) > 0):
if ((uniqueTimes[mytime] < uniqueTimesPerFieldPerSpw[ispwInCalTable][f][-1]-solutionTimeThresholdSeconds) and
(uniqueTimes[mytime] < timerangeListTimes[-1])):
if (debug):
print("-----------Not finished because mytime=%d, uniqueTimes[%d]=%.0f < %.0f-%d for fieldIndex=%d and <%.0f" % (mytime,mytime,uniqueTimes[mytime], uniqueTimesPerFieldPerSpw[ispwInCalTable][f][-1], solutionTimeThresholdSeconds, f, timerangeListTimes[-1]))
print("-----------ispwInCalTable=%d, mytime=%d, len(uniqueTimes) = %d" % (ispwInCalTable, mytime, len(uniqueTimes)))
doneOverlayTime = False
if (debug):
print("------doneOverlayTime = %s" % (str(doneOverlayTime)))
if (doneOverlayTime):
if (debug):
print("3412: doneOverlayTime=True, drawOverlayTimeLegends()")
# This is necessary for the case that no antennas were flagged for the single timerange selected
drawOverlayTimeLegends(xframe,firstFrame,xstartTitle,ystartTitle,caltable,titlesize,
fieldIndicesToPlot,ispwInCalTable,uniqueTimesPerFieldPerSpw,
timerangeListTimes, solutionTimeThresholdSeconds,
debugSloppyMatch,ystartOverlayLegend,debug,mysize,
fieldsToPlot,myUniqueColor,timeHorizontalSpacing,
fieldIndex,overlayColors, antennaVerticalSpacing,
overlayAntennas, timerangeList, caltableTitle,
mytime, scansToPlotPerSpw[ispw], scansForUniqueTimes)
else:
if (debug):
print("xant=%d != antennasToPlot[-1]=%d" % (xant, antennasToPlot[-1]))
# Here is 2nd place where we eliminate any white space on the right and left edge of the plots: 'amp'
#
if (abs(plotrange[2]) > 0 or abs(plotrange[3]) > 0):
SetNewYLimits([plotrange[2],plotrange[3]],loc=4)
if (plotrange[0]==0 and plotrange[1]==0):
if (xaxis.find('chan')>=0):
SetNewXLimits([channels[0],channels[-1]],loc=10)
else:
if (zoom != 'intersect'):
if (overlaySpws or overlayBasebands):
SetNewXLimits(frequencyRangeToPlotInBaseband[bbctr],loc=11)
else:
SetNewXLimits([frequencies[0], frequencies[-1]],loc=12)
if (bOverlay):
# print "Checking if %f >= %f" % (xrange2, myxrange)
if (xrange2 >= myxrange and zoom != 'intersect'):
# This is necessary if caltable2=TDM and caltable=FDM
SetNewXLimits([frequencies2[0], frequencies2[-1]],loc=13)
if (xrange2 > myxrange+0.1 and zoom != 'intersect'):
TDMisSecond = True
else:
SetNewXLimits([plotrange[0], plotrange[1]],loc=14)
if (debug): print("done SetNewXLimits")
# I need the following line for chanrange to work
if (chanrange[0] != 0 or chanrange[1] != 0 or chanrangePercent is not None):
SetLimits(plotrange, chanrange, newylimits, channels, frequencies,
pfrequencies, ampMin, ampMax, xaxis, pxl, chanrangeSetXrange,
chanrangePercent, loc=105)
# Finally, draw the atmosphere and FDM windows, if requested. 'amp'
if ((overlayAntennas==False and overlayTimes==False) or
(overlayAntennas==True and overlayTimes==False and xant==antennasToPlot[-1]) or
(overlayTimes==True and overlayAntennas==False and doneOverlayTime) or
# (xant==antennasToPlot[-1] and doneOverlayTime and mytime==nUniqueTimes-1) # support showatm with overlay='antenna,time'
(overlayTimes and overlayAntennas and # Aug 5, 2013
# xant==antennasToPlot[-1] and doneOverlayTime and mytime==nUniqueTimes-1
xant==antennasToPlot[-1] and doneOverlayTime and mytime==finalTimeMatch # 2015-08-19 for CAS-7820
and not drewAtmosphere) # added on 2014-12-04 to support case of a flagged antenna CAS-7187
):
if (overlayTimes and overlayAntennas and debug):
print("xant=%d, antennasToPlot[-1]=%d, doneOverlayTime=%s" % (xant, antennasToPlot[-1], str(doneOverlayTime)))
if ((showatm or showtsky) and len(atmString) > 0):
DrawAtmosphere(showatm, showtsky, subplotRows, atmString,
mysize, TebbSky, plotrange, xaxis, atmchan,
atmfreq, transmission, subplotCols,
showatmPoints=showatmPoints, xframe=xframe,
channels=channels,mylineno=lineNumber(),
overlaySpws=overlaySpws,
overlayBasebands=overlayBasebands,
drewAtmosphere=drewAtmosphere,loc=203,
showtsys=showtsys, Trx=Trx)
if (LO1 is not None):
# Now draw the image band
DrawAtmosphere(showatm,showtsky, subplotRows, atmString,
mysize, TebbSkyImage, plotrange, xaxis,
atmchanImage, atmfreqImage, transmissionImage,
subplotCols, LO1, xframe, firstFrame, showatmPoints,
channels=channels,mylineno=lineNumber(),
overlaySpws=overlaySpws,
overlayBasebands=overlayBasebands,
drewAtmosphere=drewAtmosphere,loc=204,
showtsys=showtsys, Trx=Trx)
drewAtmosphere = True
if (xaxis.find('freq')>=0 and showfdm and nChannels <= 256):
if (debug): # amplitude section
print("calling showFDM(amplitude), ispw=%d, overlayAntennas=%s, overlayTimes=%s, xant=%d, antennasToPlot[-1]=%d, doneOverlayTime=%s" % (ispw, str(overlayAntennas), str(overlayTimes), xant, antennasToPlot[-1], str(doneOverlayTime)))
if (tableFormat == 33):
showFDM(originalSpw_casa33, chanFreqGHz_casa33, baseband, showBasebandNumber, basebandDict)
else:
showFDM(originalSpw, chanFreqGHz, baseband, showBasebandNumber, basebandDict)
else:
if (overlayTimes and overlayAntennas and debug):
print("xant=%d, antennasToPlot[-1]=%d, doneOverlayTime=%s, mytime=%d, finalTimeMatch=%d" % (xant, antennasToPlot[-1], str(doneOverlayTime), mytime, finalTimeMatch))
if (debug): print("done drawAtmosphere/FDM check")
if (bOverlay):
# draw polarization labels for bOverlay
x0 = xstartPolLabel
y0 = ystartPolLabel
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
pb.text(x0, y0-p*0.03*subplotRows, corrTypeToString(corr_type[p])+'-c1',
color=pcolor[p],size=mysize,transform=pb.gca().transAxes)
pb.text(x0, y0-p*0.03*subplotRows-0.06*subplotRows, corrTypeToString(corr_type[p])+'-c2',
color=p2color[p],size=mysize,transform=pb.gca().transAxes)
if (debug): print("done pol labels")
if (bpolyOverlay and xaxis.find('freq')>=0):
# draw polarization labels for bpolyOverlay
x0 = xstartPolLabel
y0 = ystartPolLabel
if (x2color != xcolor):
for p in range(nPolarizations):
if (corrTypeToString(corr_type[0]) in polsToPlot):
pb.text(x0+0.1, y0-p*0.03*subplotRows, corrTypeToString(corr_type[p]), color=p2color[p],
size=mysize,transform=pb.gca().transAxes)
if (bpolyOverlay2):
for p in range(nPolarizations):
if (corrTypeToString(corr_type[0]) in polsToPlot):
pb.text(x0+0.2, y0-p*0.03*subplotRows, corrTypeToString(corr_type[p]),
color=p3color[p], size=mysize,transform=pb.gca().transAxes)
myIndexTime = uniqueTimesPerFieldPerSpw[ispwInCalTable][fieldIndex][-1]
if (debug): print("running sloppyMatch")
matched,mymatch = sloppyMatch(myIndexTime,uniqueTimes,
solutionTimeThresholdSeconds,
mytime, scansToPlotPerSpw[ispw], # task version and au version
scansForUniqueTimes,myprint=debug,
whichone=True)
if (debug):
print("1)done sloppyMatch, mytime=%d, scansForUniqueTimes=%s" % (mytime,str(scansForUniqueTimes)))
print("ispw=%d" % (ispw))
print("len(scansToPlotPerSpw)=%d" % (len(scansToPlotPerSpw)))
# The latter condition is needed to support the scans/timeranges parameters.
if (matched==False and scansForUniqueTimes[mytime] in scansToPlotPerSpw[ispw]):
print("---------- Did not find %f within %.0f seconds of anything in %s" % (myIndexTime,solutionTimeThresholdSeconds,str(uniqueTimes)))
print("---------- uniqueTimesPerFieldPerSpw = %s" % (str(uniqueTimesPerFieldPerSpw)))
print("Try re-running with a smaller solutionTimeThresholdSeconds (currently %f)" % (solutionTimeThresholdSeconds))
return
else:
# we are on the final time to be plotted
# if (debug): print "on the final time = %d (scan=%d)" % (mytime,scansForUniqueTimes[mytime])
mytimeTest = mytime==nUniqueTimes-1 # mytime==myIndexTime # mytime==mymatch
if ((xframe == 111 and amplitudeWithPhase) or
# Following case is needed to make subplot=11 to work for: try to support overlay='antenna,time': amp
(xframe == lastFrame and overlayTimes and overlayAntennas and
xctr+1==len(antennasToPlot) and
# mytime+1==len(uniqueTimes) and # this worked for nspw <= 4
mytimeTest and
spwctr<len(spwsToPlot))):
if (debug):
print(":::: xframe=%d == lastFrame=%d, amplitudeWithPhase=" % (xframe, lastFrame), amplitudeWithPhase)
print(":::: xctr+1=%d == len(antennasToPlot)=%d" % (xctr+1,len(antennasToPlot)))
print(":::: mytimeTest = %s (%d==%d)" % (mytimeTest, mytime, mymatch))
print(":::: spwctr=%d < len(spwsToPlot)=%d" % (spwctr,len(spwsToPlot)))
if (len(figfile) > 0):
plotfiles.append(makeplot(figfile,msFound,msAnt,
overlayAntennas,pages,pagectr,
density,interactive,antennasToPlot,
spwsToPlot,overlayTimes,overlayBasebands,
3,xant,ispw,subplot,resample,
debug,
figfileSequential,figfileNumber))
figfileNumber += 1
donetime = timeUtilities.time()
drewAtmosphere = False # needed for CAS-7187 (subplot=11)
if (interactive):
pb.draw()
# myinput = raw_input(":(%.1f sec) Press return for next page (b for backwards, q to quit): "%(donetime-mytimestamp))
                  try:
                      myinput = raw_input("Press return for next page (b for backwards, q to quit): ")
                  except NameError:  # Python 3 (CASA 6) removed raw_input
                      myinput = input("Press return for next page (b for backwards, q to quit): ")
else:
myinput = ''
skippingSpwMessageSent = 0
mytimestamp = timeUtilities.time()
if (myinput.find('q') >= 0):
showFinalMessage(overlayAntennas, solutionTimeSpread, nUniqueTimes)
return(vm)
if (myinput.find('b') >= 0):
if (pagectr > 0):
pagectr -= 1
#redisplay the current page by setting ctrs back to the value they had at start of that page
xctr = pages[pagectr][PAGE_ANT]
spwctr = pages[pagectr][PAGE_SPW]
mytime = pages[pagectr][PAGE_TIME]
myap = pages[pagectr][PAGE_AP]
xant = antennasToPlot[xctr]
antstring = buildAntString(xant,msFound,msAnt)
ispw = spwsToPlot[spwctr]
# print "Returning to [%d,%d,%d,%d]" % (xctr,spwctr,mytime,myap)
redisplay = True
if (xctr==pages[0][PAGE_ANT] and spwctr==pages[0][PAGE_SPW] and mytime==pages[0][PAGE_TIME] and pages[0][PAGE_AP]==myap):
pb.clf()
if (debug):
print("2)Setting xframe to %d" % xframeStart)
xframe = xframeStart
myUniqueColor = []
continue
else:
pagectr += 1
if (pagectr >= len(pages)):
# print "spwctr=%d, yaxis=%s, xframe=%d, lastFrame=%d, xctr+1=%d, len(antennasToPlot)=%d" % (spwctr, yaxis, xframe, lastFrame, xctr+1,len(antennasToPlot))
if (xframe == lastFrame and overlayTimes and overlayAntennas and xctr+1==len(antennasToPlot) and
yaxis=='amp'):
# I'm not sure why this works, but is needed to fix CAS-7154
myspwctr = spwctr+1
else:
myspwctr = spwctr
pages.append([xctr,myspwctr,mytime,1])
if (debug):
print("amp: appending [%d,%d,%d,%d]" % (xctr,myspwctr,mytime,1))
newpage = 0
pb.clf()
if (debug):
print("3)Setting xframe to %d" % xframeStart)
xframe = xframeStart
myUniqueColor = []
else:
if (debug):
print("::: Not done page: Not checking whether we need to set xframe=xframeStart")
print("::: xframe=%d ?= lastFrame=%d, amplitudeWithPhase=" % (xframe, lastFrame), amplitudeWithPhase)
print("::: xctr+1=%d ?= len(antennasToPlot)=%d" % (xctr+1,len(antennasToPlot)))
print(":::: mytimeTest = %s" % (mytimeTest))
print("::: spwctr=%d ?< len(spwsToPlot)=%d" % (spwctr,len(spwsToPlot)))
#################################################
######### Here is the phase plotting ############
#################################################
if (yaxis.find('phase')>=0 or amplitudeWithPhase) and doneOverlayTime==False:
          if (channeldiff > 0):
              # np.diff reduces nchan by 1, so rebuild the per-polarization channel and frequency lists
              pchannels = [xchannels,ychannels]
              pfrequencies = [xfrequencies,yfrequencies]
              if (bOverlay):
                  pchannels2 = [xchannels2,ychannels2]
                  pfrequencies2 = [xfrequencies2,yfrequencies2]
if (overlayTimes == False or mytime==firstTimeMatch):
if ((overlaySpws == False and overlayBasebands==False) or spwctr==spwctrFirstToPlot or
spwctr>spwsToPlot[-1] or
(overlayBasebands and amplitudeWithPhase)) :# CAS-6477
if (overlayAntennas==False or xctr==firstUnflaggedAntennaToPlot
or xctr>antennasToPlot[-1]): # 2012-05-24, to fix the case where all ants flagged on one timerange
xframe += 1
if (debug):
print("u) incrementing xframe to %d" % xframe)
myUniqueColor = []
newylimits = [LARGE_POSITIVE, LARGE_NEGATIVE]
if (phase != ''):
if ((phase[0] != 0 or phase[1] != 0) and amplitudeWithPhase):
newylimits = phase
if (debug):
print("$$$$$$$$$$$$$$$$$$$$$$$ ready to plot phase on xframe %d" % (xframe))
if casaVersion < '5.9.9':
adesc = pb.subplot(xframe) # there is no deprecation warning in older CASA
if (previousSubplot != xframe):
if casaVersion >= '5.9.9':
adesc = pb.subplot(xframe) # avoid deprecation warning in CASA6 when xframe already was opened
drewAtmosphere = False
previousSubplot = xframe
# pb.hold(overlayAntennas or overlayTimes) # not available in CASA6, but never needed
gphsx = np.arctan2(np.imag(gplotx),np.real(gplotx))*180.0/math.pi
if (nPolarizations == 2):
gphsy = np.arctan2(np.imag(gploty),np.real(gploty))*180.0/math.pi
if (debug):
print("np.shape(gplotx), (gphsx) = ", np.shape(gplotx), np.shape(gphsx))
print("np.shape(gploty), (gphsy) = ", np.shape(gploty), np.shape(gphsy))
if (channeldiff>0):
if (xaxis == 'chan'):
gphs0, newx0, gphs0res, newx0res = channelDifferences(gphsx, pchannels[0], resample)
gphs1, newx1, gphs1res, newx1res = channelDifferences(gphsy, pchannels[1], resample)
pchannels = [newx0,newx1]
else:
gphs0, newx0, gphs0res, newx0res = channelDifferences(gphsx, pfrequencies[0], resample)
gphs1, newx1, gphs1res, newx1res = channelDifferences(gphsy, pfrequencies[1], resample)
pfrequencies = [newx0,newx1]
gphs = [gphs0, gphs1]
gphsres = [gphs0res, gphs1res]
gphs_mad = [madInfo(gphsres[0],madsigma,edge,ispw,xant,0), madInfo(gphsres[1],madsigma,edge,ispw,xant,1)]
gphs_std = [stdInfo(gphsres[0],madsigma,edge,ispw,xant,0), stdInfo(gphsres[1],madsigma,edge,ispw,xant,1)]
for p in [0,1]:
madstats[Antstring][ispw][mytime][p]['phase'] = gphs_mad[p]['mad']
madstats[Antstring][ispw][mytime][p]['phasestd'] = gphs_std[p]['std']
if (gphs_mad[p]['nchan'] > 0):
checkAbsSum = np.sum(np.abs(gphs[p]))
if (checkAbsSum < PHASE_ABS_SUM_THRESHOLD):
if (debug): print("%s, Pol %d, spw %d, %s, phs: not printing because abs sum of all values near zero (%f)" % (Antstring, p, ispw, utstring(uniqueTimes[mytime],0), checkAbsSum))
else:
print("%s, Pol %d, spw %2d, %s, phs: %4d points exceed %.1f sigma (worst=%.2f at chan %d)" % (Antstring, p, ispw, utstring(uniqueTimes[mytime],0), gphs_mad[p]['nchan'], madsigma, gphs_mad[p]['outlierValue'], gphs_mad[p]['outlierChannel']+pchannels[p][0]))
else:
gphs = [gphsx,gphsy]
else: # 1-pol
if (channeldiff>0):
if (xaxis == 'chan'):
gphs0, newx0, gphs0res, newx0res = channelDifferences(gphsx, pchannels[0], resample)
pchannels = [newx0]
else:
gphs0, newx0, gphs0res, newx0res = channelDifferences(gphsx, pfrequencies[0], resample)
pfrequencies = [newx0]
gphs = [gphs0]
gphsres = [gphs0res]
p = 0
gphs_mad = [madInfo(gphsres[p], madsigma, edge, ispw,xant,p)]
gphs_std = [stdInfo(gphsres[p], madsigma, edge, ispw,xant,p)]
madstats[Antstring][ispw][mytime][p]['phase'] = gphs_mad[p]['mad']
madstats[Antstring][ispw][mytime][p]['phasestd'] = gphs_std[p]['std']
if (gphs_mad[p]['nchan'] > 0):
checkAbsSum = np.sum(np.abs(gphs[p]))
if (checkAbsSum < PHASE_ABS_SUM_THRESHOLD):
if (debug): print("%s, Pol %d, spw %d, %s, phs: not printing because abs sum of all values near zero (%f)" % (Antstring, p, ispw, utstring(uniqueTimes[mytime],0), checkAbsSum))
else:
print("%s, Pol %d, spw %2d, %s, phs: %4d points exceed %.1f sigma (worst=%.2f at chan %d)" % (Antstring, p, ispw, utstring(uniqueTimes[mytime],0), gphs_mad[p]['nchan'], madsigma, gphs_mad[p]['outlierValue'], gphs_mad[p]['outlierChannel']+pchannels[p][0]))
else:
gphs = [gphsx]
if (bOverlay):
if (debug):
print("computing phase for second table")
gphsx2 = np.arctan2(np.imag(gplotx2),np.real(gplotx2))*180.0/math.pi
if (nPolarizations == 2):
gphsy2 = np.arctan2(np.imag(gploty2),np.real(gploty2))*180.0/math.pi
if (channeldiff>0):
if (xaxis == 'chan'):
gphs2_0, newx0, gphs2_0res, newx0res = channelDifferences(gphsx2, pchannels2[0], resample)
gphs2_1, newx1, gphs2_1res, newx1res = channelDifferences(gphsy2, pchannels2[1], resample)
pchannels2 = [newx0, newx1]
else:
gphs2_0, newx0, gphs2_0res, newx0res = channelDifferences(gphsx2, pfrequencies2[0], resample)
gphs2_1, newx1, gphs2_1res, newx1res = channelDifferences(gphsy2, pfrequencies2[1], resample)
pfrequencies2 = [newx0, newx1]
gphs2 = [gphs2_0, gphs2_1]
gphs2res = [gphs2_0res, gphs2_1res]
else:
gphs2 = [gphsx2, gphsy2]
else:
if (channeldiff>0):
if (xaxis == 'chan'):
gphs2_0, newx0, gphs2_0res, newx0res = channelDifferences(gphsx2, pchannels2[0], resample)
pchannels2 = [newx0]
else:
gphs2_0, newx0, gphs2_0res, newx0res = channelDifferences(gphsx2, pfrequencies2[0], resample)
pfrequencies2 = [newx0]
gphs2 = [gphs2_0]
gphs2res = [gphs2_0res]
else:
gphs2 = [gphsx2]
if (debug):
print("bOverlay is FALSE ===========================")
if (xaxis.find('chan')>=0 or len(xfrequencies) < 1): # 'phase'
# pb.hold(True) # not available in CASA6, but never needed
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
if (overlayAntennas or overlayTimes):
pdesc = pb.plot(pchannels[p],gphs[p],'%s'%(phasemarkstyles[p]),markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimits(plotrange,newylimits,gphs[p]) # 10/27/2011
if (newylimits[1]-newylimits[0] < minPhaseRange):
newylimits = [-minPhaseRange,minPhaseRange]
if (phase != ''):
if ((phase[0] != 0 or phase[1] != 0) and amplitudeWithPhase):
newylimits = phase
if (overlayAntennas and overlayTimes==False):
pb.setp(pdesc, color=overlayColors[xctr])
elif (overlayTimes and overlayAntennas==False):
pb.setp(pdesc, color=overlayColors[mytime])
elif (overlayTimes): # try to support antenna,time
if (myUniqueTime != []):
pb.setp(pdesc, color=overlayColors[myUniqueTime])
# The third 'or' below is needed if pol='0' is flagged on antenna 0. -- 2012/10/12 (original spot)
if (p==0 or len(polsToPlot)==1 or myUniqueColor==[]):
myUniqueColor.append(overlayColors[len(myUniqueColor)])
pb.setp(pdesc, color=myUniqueColor[-1])
else:
pb.plot(pchannels[p],gphs[p],'%s%s'%(pcolor[p],phasemarkstyles[0]), markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimits(plotrange,newylimits,gphs[p]) # 10/27/2011
if (newylimits[1]-newylimits[0] < minPhaseRange):
newylimits = [-minPhaseRange,minPhaseRange]
if (phase != ''):
if ((phase[0] != 0 or phase[1] != 0) and amplitudeWithPhase):
newylimits = phase
if (sum(xflag)>0):
# print "phase: Resetting xaxis channel range to counteract flagged data"
myxrange = np.max(channels)-np.min(channels)
SetNewXLimits([np.min(channels)-myxrange/20, np.max(channels)+myxrange/20],loc=15)
if (xframe in bottomRowFrames or (xctr+1==len(antennasToPlot) and ispw==spwsToPlot[-1])):
pb.xlabel("Channels (%d)"%(len(pchannels[p])), size=mysize)
elif (xaxis.find('freq')>=0): # 'phase'
if (bOverlay):
# pb.hold(True) # not available in CASA6, but never needed
if (debug):
print("Preparing to plot phase from %f-%f for pols:" % (xfrequencies[0],xfrequencies[-1]),polsToPlot)
print("Preparing to plot phase from %f-%f for pols:" % (pfrequencies[p][0],pfrequencies[p][-1]),polsToPlot)
print("Preparing to plot phase from %f-%f for pols:" % (pfrequencies2[p][0],pfrequencies2[p][-1]),polsToPlot)
myxrange = np.abs(xfrequencies[0]-xfrequencies[-1])
try:
xrange2 = np.abs(xfrequencies2[0]-xfrequencies2[-1])
except:
print("No phase data found in second solution. Try increasing the solutionTimeThresholdSeconds above %.0f." % (solutionTimeThresholdSeconds))
print("If this doesn't work, email the developer (%s)." % (developerEmail))
return(vm)
if (np.abs(myxrange/xrange2 - 1) > 0.05 + len(xflag)/len(xchannels)): # 0.0666 (= 2000/1875 - 1) is the FDM/TDM bandwidth ratio minus 1
# These line widths are optimal for visualizing FDM over TDM
width1 = 1
width2 = 4
# solutions differ in frequency width, so show the narrower one first
if (myxrange < xrange2):
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
if (debug): print("pb.plot 1")
pb.plot(pfrequencies[p], gphs[p], '%s%s'%(pcolor[p],phasemarkstyle), linewidth=width2, markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gphs[p], sideband,plotrange,xchannels,debug,20,chanrangePercent)
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
if (debug): print("pb.plot 2")
pb.plot(pfrequencies2[p], gphs2[p], '%s%s'%(p2color[p],phasemarkstyle), linewidth=width1, markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gphs2[p], sideband,plotrange,xchannels2,debug,21,chanrangePercent)
else:
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
if (debug): print("pb.plot 3")
pb.plot(pfrequencies2[p], gphs2[p], '%s%s'%(p2color[p],phasemarkstyle), linewidth=width2, markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gphs2[p], sideband,plotrange,xchannels2,debug,22,chanrangePercent)
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
if (debug): print("pb.plot 4")
pb.plot(pfrequencies[p], gphs[p], '%s%s'%(pcolor[p],phasemarkstyle), linewidth=width1, markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gphs[p], sideband,plotrange,xchannels,debug,23,chanrangePercent)
else:
width1 = 1
width2 = 1
# solutions may be different level of smoothing, so plot highest rms first
# pb.hold(True) # not available in CASA6, but never needed
if (madOfDiff(gphsx) < madOfDiff(gphsx2)):
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
if (debug): print("pb.plot 5")
pb.plot(pfrequencies2[p], gphs2[p], '%s%s'%(p2color[p],phasemarkstyle), linewidth=width1, markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gphs2[p], sideband,plotrange,xchannels2,debug,24,chanrangePercent)
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
if (debug): print("pb.plot 6")
pb.plot(pfrequencies[p], gphs[p], '%s%s'%(pcolor[p],phasemarkstyle), linewidth=width2, markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gphs[p], sideband,plotrange,xchannels,debug,25,chanrangePercent)
else:
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
if (debug): print("pb.plot 7")
pb.plot(pfrequencies[p], gphs[p], '%s%s'%(pcolor[p],phasemarkstyle), linewidth=width2, markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gphs[p], sideband,plotrange,xchannels,debug,26,chanrangePercent)
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
if (debug): print("pb.plot 9")
pb.plot(pfrequencies2[p], gphs2[p], '%s%s'%(p2color[p],phasemarkstyle), linewidth=width1, markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gphs2[p], sideband,plotrange,xchannels2,debug,27,chanrangePercent)
# must set new limits after plotting 'phase'
(y0,y1) = pb.ylim()
if (y1-y0 < minPhaseRange):
# this must come before defining ticks
SetNewYLimits([-minPhaseRange,minPhaseRange],loc=5)
if (zoom=='intersect'):
if (myxrange < xrange2):
SetNewXLimits([min(xfrequencies[0],xfrequencies[-1])-myxrange*0.1, max(xfrequencies[0],xfrequencies[-1])+myxrange*0.1],loc=16)
SetLimits(plotrange, chanrange, newylimits, channels, frequencies,
pfrequencies, ampMin, ampMax, xaxis,pxl,
chanrangeSetXrange, chanrangePercent,loc=106)
else:
SetNewXLimits([min(xfrequencies2[0],xfrequencies2[-1])-xrange2*0.1, max(xfrequencies2[0],xfrequencies2[-1])+xrange2*0.1],loc=17)
SetLimits(plotrange, chanrange, newylimits, channels, frequencies2,
pfrequencies2, ampMin, ampMax, xaxis,pxl,
chanrangeSetXrange, chanrangePercent,loc=107)
else:
if (myxrange < xrange2):
SetLimits(plotrange, chanrange, newylimits, channels, frequencies,
pfrequencies, ampMin, ampMax, xaxis,pxl,
chanrangeSetXrange, chanrangePercent,loc=108)
else:
SetLimits(plotrange, chanrange, newylimits, channels, frequencies2,
pfrequencies2, ampMin, ampMax, xaxis,pxl,
chanrangeSetXrange, chanrangePercent,loc=109)
# draw polarization and spw labels
if (xframe == firstFrame):
# draw title including caltable name
caltableList = 'c1=' + caltable + ', c2=' + caltable2 # + ' (%s)'%(utstring(uniqueTimes2[mytime],3))
pb.text(xstartTitle, ystartTitle, caltableList, size=titlesize,
color='k', transform=pb.gcf().transFigure)
if (caltable2amplitudeOffset != 0):
pb.text(xstartTitle, 0.935, 'c2 amplitude offset = %.3f' % (caltable2amplitudeOffset),
color='k',size=titlesize,transform=pb.gcf().transFigure)
elif (bpolyOverlay):
matches1 = []
for tbp in range(len(timesBP)):
if (sloppyMatch(uniqueTimes[mytime], timesBP[tbp], solutionTimeThresholdSeconds,
mytime, scansToPlotPerSpw[ispw], scansForUniqueTimes,
myprint=debugSloppyMatch)):
matches1.append(tbp)
matches1 = np.array(matches1)
# print "time matches: matches1 = ", matches1
if (len(matches1) < 1):
print("No time match found")
print("If you are sure the solutions correspond to the same data, you can set solutionTimeThresholdSeconds=%.0f" % (1+np.ceil(np.abs(timesBP[0]-uniqueTimes[mytime]))))
return(vm)
# matches1 = np.where(np.floor(uniqueTimes[mytime]) == np.floor(np.array(timesBP)))[0]
matches2 = np.where(xant == np.array(antennasBP))[0]
if (len(matches2) < 1):
print("No antenna match found: ", xant, antennasBP)
# print "antenna matches: matches2 = ", matches2
if (tableFormat == 33):
matches3 = np.where(ispw == np.array(cal_desc_idBP))[0]
if (len(matches3) < 1):
print("No spw match found: %d not in "% (ispw), cal_desc_idBP)
else:
matches3 = np.where(ispw == np.array(spwBP))[0]
if (len(matches3) < 1):
print("No spw match found: %d not in " % (ispw), spwBP)
# print "spw matches: matches3 = ", matches3
matches12 = np.intersect1d(matches1,matches2)
if (len(matches12) < 1):
print("No match between: ", matches1, matches2)
# print "antenna&time matches: matches12 = ", matches12
matches = np.intersect1d(matches12, matches3)
if (len(matches) < 1):
print("No match between: ", matches12, matches3)
# print "antenna&time&spw matches: matches = ", matches
try:
index = matches[0] # holds the row number of the matching solution in the BPOLY table
except:
print("No match found for time=%.6f, xant=%d, ispw=%d" % (uniqueTimes[mytime],xant,ispw))
print("antennasBP = ", antennasBP)
print("cal_desc_idBP = ", cal_desc_idBP)
print("timesBP = ")
for i in timesBP:
print("%.6f, " % i)
return(vm)
# print "phase: Using index = %d/%d (mytime=%d), domain=%.3f,%.3f" % (index,len(polynomialPhase),mytime,frequencyLimits[0,index]*1e-9,frequencyLimits[1,index]*1e-9)
if (debug): print("BRowNumber = %d, BPolyRowNumber = %d" % (BRowNumber, index))
validDomain = [frequencyLimits[0,index], frequencyLimits[1,index]]
cc = calcChebyshev(polynomialPhase[index][0:nPolyPhase[index]], validDomain, frequenciesGHz[index]*1e+9) * 180/math.pi
fa = np.array(frequenciesGHz[index])
if (xfrequencies[0] < xfrequencies[-1]):
matches = np.where(fa>xfrequencies[0])[0]
matches2 = np.where(fa<xfrequencies[-1])[0]
else:
matches = np.where(fa>xfrequencies[-1])[0]
matches2 = np.where(fa<xfrequencies[0])[0]
# print "xfrequencies[0] = %f, xfrequencies[-1] = %f" % (xfrequencies[0], xfrequencies[-1])
# print "len(matches)=%d, len(matches2)=%d" % (len(matches), len(matches2))
# print "fa = ", fa
mymean = complexMeanDeg(np.array(cc)[matches[0]:matches2[-1]+1])
phaseSolutionX = np.mean(gphsx) - mymean + cc
cc = calcChebyshev(polynomialPhase[index][nPolyPhase[index]:2*nPolyPhase[index]], validDomain, frequenciesGHz[index]*1e+9) * 180/math.pi
if (nPolarizations > 1):
if (yfrequencies[0] < yfrequencies[-1]):
matches = np.where(fa>yfrequencies[0])[0]
matches2 = np.where(fa<yfrequencies[-1])[0]
else:
matches = np.where(fa>yfrequencies[-1])[0]
matches2 = np.where(fa<yfrequencies[0])[0]
mymean = complexMeanDeg(np.array(cc)[matches[0]:matches2[-1]+1])
phaseSolutionY = np.mean(gphsy) - mymean + cc
if (bpolyOverlay2):
validDomain = [frequencyLimits2[0,index], frequencyLimits2[1,index]]
cc = calcChebyshev(polynomialPhase2[index][0:nPolyPhase2[index]], validDomain,
frequenciesGHz2[index]*1e+9) * 180/math.pi
fa = np.array(frequenciesGHz2[index])
if (xfrequencies[0] < xfrequencies[-1]):
matches = np.where(fa>xfrequencies[0])[0]
matches2 = np.where(fa<xfrequencies[-1])[0]
else:
matches = np.where(fa>xfrequencies[-1])[0]
matches2 = np.where(fa<xfrequencies[0])[0]
mymean = complexMeanDeg(np.array(cc)[matches[0]:matches2[-1]+1])
phaseSolution2X = np.mean(gphsx) + cc - mymean
cc = calcChebyshev(polynomialPhase2[index][nPolyPhase2[index]:2*nPolyPhase2[index]],
validDomain, frequenciesGHz2[index]*1e+9) * 180/math.pi
if (yfrequencies[0] < yfrequencies[-1]):
matches = np.where(fa>yfrequencies[0])[0]
matches2 = np.where(fa<yfrequencies[-1])[0]
else:
matches = np.where(fa>yfrequencies[-1])[0]
matches2 = np.where(fa<yfrequencies[0])[0]
mymean = complexMeanDeg(np.array(cc)[matches[0]:matches2[-1]+1])
phaseSolution2Y = np.mean(gphsy) + cc - mymean
# pb.hold(True) # not available in CASA6, but never needed
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
pb.plot(pfrequencies[p], gphs[p],'%s%s'%(pcolor[p],phasemarkstyle), markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gphs[p], sideband,plotrange,xchannels,debug,28,chanrangePercent)
if (corrTypeToString(corr_type[0]) in polsToPlot):
pb.plot(frequenciesGHz[index],phaseSolutionX,'%s%s'%(x2color,bpolymarkstyle),markeredgewidth=markeredgewidth,markersize=markersize)
newylimits = recalcYlimitsFreq(chanrange, newylimits, phaseSolutionX, sideband,plotrange,xchannels,debug,29,chanrangePercent)
pb.plot(frequenciesGHz2[index],phaseSolution2X,'%s%s'%(x3color,bpolymarkstyle),markeredgewidth=markeredgewidth,markersize=markersize)
newylimits = recalcYlimitsFreq(chanrange, newylimits, phaseSolution2X, sideband,plotrange,xchannels2,debug,30,chanrangePercent)
if (nPolarizations == 2):
if (corrTypeToString(corr_type[1]) in polsToPlot):
pb.plot(frequenciesGHz[index],phaseSolutionY,'%s%s'%(y2color,bpolymarkstyle),markeredgewidth=markeredgewidth,markersize=markersize)
newylimits = recalcYlimitsFreq(chanrange, newylimits, phaseSolutionY, sideband,plotrange,xchannels,debug,31,chanrangePercent)
pb.plot(frequenciesGHz2[index],phaseSolution2Y,'%s%s'%(y3color,bpolymarkstyle),markeredgewidth=markeredgewidth,markersize=markersize)
newylimits = recalcYlimitsFreq(chanrange, newylimits, phaseSolution2Y, sideband,plotrange,xchannels2,debug,32,chanrangePercent)
else:
# pb.hold(True) # not available in CASA6, but never needed
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
pb.plot(pfrequencies[p], gphs[p],'%s%s'%(pcolor[p],phasemarkstyle), markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gphs[p], sideband,plotrange,xchannels,debug,33,chanrangePercent)
if (corrTypeToString(corr_type[0]) in polsToPlot):
pb.plot(frequenciesGHz[index],phaseSolutionX,'%s%s'%(x2color,bpolymarkstyle),markeredgewidth=markeredgewidth,markersize=markersize)
newylimits = recalcYlimitsFreq(chanrange, newylimits, phaseSolutionX, sideband,plotrange,xchannels,debug,34,chanrangePercent)
if (nPolarizations == 2):
if (corrTypeToString(corr_type[1]) in polsToPlot):
pb.plot(frequenciesGHz[index],phaseSolutionY,'%s%s'%(y2color,bpolymarkstyle),markeredgewidth=markeredgewidth,markersize=markersize)
newylimits = recalcYlimitsFreq(chanrange, newylimits, phaseSolutionY, sideband,plotrange,xchannels,debug,35,chanrangePercent)
# endif (bpolyOverlay2)
# Adding the following 4 lines on March 14, 2013
(y0,y1) = pb.ylim()
if (y1-y0 < minPhaseRange):
# this must come before defining ticks
SetNewYLimits([-minPhaseRange,minPhaseRange],loc=6)
else:
# we are not overlaying any B or polynomial solutions 'phase vs. freq'
# pb.hold(True) # not available in CASA6, but never needed
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
if (overlayAntennas or overlayTimes):
pdesc = pb.plot(pfrequencies[p], gphs[p],'%s'%(phasemarkstyles[p]), markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gphs[p], sideband,plotrange,xchannels,debug,36,chanrangePercent) # Apr 2, 2012
# print "Got newylimits = ", newylimits
if (overlayAntennas and overlayTimes==False):
pb.setp(pdesc, color=overlayColors[xctr])
elif (overlayTimes and overlayAntennas==False):
pb.setp(pdesc, color=overlayColors[mytime])
elif (overlayTimes): # try to support antenna,time
if (myUniqueTime != []):
pb.setp(pdesc, color=overlayColors[myUniqueTime])
# The third 'or' below is needed if pol='0' is flagged on antenna 0. -- 2012/10/12 (original spot)
if (p==0 or len(polsToPlot)==1 or myUniqueColor==[]):
myUniqueColor.append(overlayColors[len(myUniqueColor)])
pb.setp(pdesc, color=myUniqueColor[-1])
else:
pb.plot(pfrequencies[p], gphs[p],'%s%s'%(pcolor[p],phasemarkstyles[0]), markersize=markersize,markeredgewidth=markeredgewidth)
newylimits = recalcYlimitsFreq(chanrange, newylimits, gphs[p], sideband, plotrange,xchannels,debug,37,chanrangePercent)
if (sum(xflag)>0):
# print "phase frame %d: Resetting xaxis frequency range to counteract flagged data" % (xframe)
myxrange = np.max(frequencies)-np.min(frequencies)
SetNewXLimits([np.min(frequencies)-0.15*myxrange, np.max(frequencies)+0.15*myxrange],loc=18)
if (len(gphs[p]) > 0):
if (np.max(gphs[p]) < minPhaseRange and np.min(gphs[p]) > -minPhaseRange):
SetNewYLimits([-minPhaseRange,minPhaseRange],loc=7)
#endif bOverlay
pb.xlabel(xlabelString, size=mysize)
#endif xaxis='chan'/freq for 'phase'
if (overlayTimes):
timeString =''
else:
timeString = ', t%d/%d %s' % (mytime,nUniqueTimes-1,utstring(uniqueTimes[mytime],3))
if (scansForUniqueTimes != []):
if (scansForUniqueTimes[mytime]>=0):
timeString = ', scan%d %s' % (scansForUniqueTimes[mytime],utstring(uniqueTimes[mytime],3))
spwString = buildSpwString(overlaySpws, overlayBasebands,
spwsToPlot, ispw, originalSpw[ispw],
observatoryName, baseband,
showBasebandNumber)
titleString = "%sspw%s, field %d: %s%s" % (antennaString,
spwString,uniqueFields[fieldIndex],fieldString,timeString)
tsize = titlesize-int(len(titleString)/int(maxCharsBeforeReducingTitleFontSize/subplotCols))
pb.title(titleString, size=tsize)
if (abs(plotrange[0]) > 0 or abs(plotrange[1]) > 0):
SetNewXLimits([plotrange[0],plotrange[1]],loc=19)
# Here is 1st place where we eliminate any white space on the right and left edge of the plots: 'phase'
else:
if (xaxis.find('chan')>=0):
SetNewXLimits([channels[0],channels[-1]],loc=20)
else:
if (zoom != 'intersect'):
if (overlaySpws or overlayBasebands):
SetNewXLimits(frequencyRangeToPlotInBaseband[bbctr],loc=21)
else:
SetNewXLimits([frequencies[0], frequencies[-1]],loc=22)
if (bOverlay):
if (xrange2 > myxrange+0.1 and zoom != 'intersect'):
TDMisSecond = True
if (abs(plotrange[2]) > 0 or abs(plotrange[3]) > 0):
if (amplitudeWithPhase == False or phase == ''):
SetNewYLimits([plotrange[2],plotrange[3]],loc=8)
if (amplitudeWithPhase and phase != ''):
if (phase[0] != 0 or phase[1] != 0):
SetNewYLimits(phase,loc=9)
(y0,y1) = pb.ylim()
if (y1-y0 < minPhaseRange):
# this must come before defining ticks
SetNewYLimits([-minPhaseRange,minPhaseRange],loc=10)
SetNewYLimits(newylimits,loc=11) # added 10/2/2012 for the case of only 1 data point
if (amplitudeWithPhase and phase != ''):
if (phase[0] != 0 or phase[1] != 0):
SetNewYLimits(phase,loc=12)
(y0,y1) = pb.ylim()
ResizeFonts(adesc,mysize)
adesc.xaxis.grid(True,which='major')
adesc.yaxis.grid(True,which='major')
pb.ylabel(yPhaseLabel, size=mysize)
pb.subplots_adjust(hspace=myhspace, wspace=mywspace)
ylim = pb.ylim()
xlim = pb.xlim()
myxrange = xlim[1]-xlim[0]
yrange = ylim[1]-ylim[0]
# print "phase: ylim, yrange = ", ylim, yrange
myap = 0
if (overlayAntennas == False and overlayTimes == False and bOverlay == False and
((overlaySpws == False and overlayBasebands == False) or spwctr==spwctrFirstToPlot)):
# draw polarization labels for no overlay
x0 = xstartPolLabel
y0 = ystartPolLabel
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
pb.text(x0, y0-0.03*subplotRows*p, corrTypeToString(corr_type[p]), color=pcolor[p],
size=mysize, transform=pb.gca().transAxes)
if (channeldiff > 0):
pb.text(x0, ystartMadLabel-0.03*subplotRows*p,
corrTypeToString(corr_type[p])+' MAD = %.4f, St.Dev = %.4f'%(gphs_mad[p]['mad'],gphs_std[p]['std']),
color=pcolor[p],size=mysize, transform=pb.gca().transAxes)
if (xframe == firstFrame):
# draw title including caltable name
caltableList = caltableTitle
if (bpolyOverlay):
caltableList += ', ' + caltable2 + ' (degamp=%d, degphase=%d)'%(nPolyAmp[index]-1,nPolyPhase[index]-1)
if (bpolyOverlay2):
caltableList += ', ' + caltable3 + ' (degamp=%d, degphase=%d)'%(nPolyAmp2[index]-1,nPolyPhase2[index]-1)
pb.text(xstartTitle, ystartTitle, caltableList, size=titlesize,
color='k', transform=pb.gcf().transFigure)
elif (overlayAntennas==True and xant==antennasToPlot[-1] and bOverlay==False
and overlayTimes==False): # try to support antenna,time avoid antenna labels 'phase'
# We do this last, because by then, the limits will be stable.
# draw polarization labels for overlayAntennas
x0 = xstartPolLabel
y0 = ystartPolLabel
if (corrTypeToString(corr_type[0]) in polsToPlot):
if (channeldiff > 0):
p = 0  # pol index for the stats label; mirrors the overlayTimes branch
pb.text(x0, ystartMadLabel-0.03*subplotRows*p,
corrTypeToString(corr_type[p])+' MAD = %.4f, St.Dev = %.4f'%(gphs_mad[p]['mad'],gphs_std[p]['std']),
color=overlayColors[0], size=mysize, transform=pb.gca().transAxes)
if (phasemarkstyle.find('-')>=0):
pb.text(x0, y0-0.03*subplotRows*0, corrTypeToString(corr_type[0])+' solid', color=overlayColors[0],
fontsize=mysize, transform=pb.gca().transAxes)
else:
pb.text(x0+0.02, y0-0.03*subplotRows*0, corrTypeToString(corr_type[0]), color=overlayColors[0],
fontsize=mysize, transform=pb.gca().transAxes)
pdesc = pb.plot([x0], [y0+0.015*subplotRows-0*0.03*subplotRows], '%sk'%phasemarkstyle, markersize=markersize,
scalex=False,scaley=False, transform=pb.gca().transAxes,markeredgewidth=markeredgewidth)
if (len(corr_type) > 1):
if (corrTypeToString(corr_type[1]) in polsToPlot):
if (channeldiff > 0):
p = 1  # pol index for the stats label; mirrors the overlayTimes branch
pb.text(x0, ystartMadLabel-0.03*subplotRows*p,
corrTypeToString(corr_type[p])+' MAD = %.4f, St.Dev = %.4f'%(gphs_mad[p]['mad'],gphs_std[p]['std']),
color=overlayColors[0], size=mysize, transform=pb.gca().transAxes)
if (phasemarkstyle2.find('--')>=0):
pb.text(x0, y0-0.03*subplotRows*1, corrTypeToString(corr_type[1])+' dashed', color=overlayColors[0],
fontsize=mysize, transform=pb.gca().transAxes)
else:
pb.text(x0+0.02, y0-0.03*subplotRows*1, corrTypeToString(corr_type[1]), color=overlayColors[0],
fontsize=mysize, transform=pb.gca().transAxes)
pdesc = pb.plot([x0], [y0+0.015*subplotRows-0.03*subplotRows*1],'%sk'%phasemarkstyle2, markersize=markersize,
scalex=False,scaley=False, transform=pb.gca().transAxes,markeredgewidth=markeredgewidth)
if (xframe == firstFrame):
# draw title including caltable name
pb.text(xstartTitle, ystartTitle, caltableTitle, size=titlesize, color='k',
transform=pb.gcf().transFigure)
DrawAntennaNames(msAnt, antennasToPlot, msFound, mysize)
elif (overlayTimes==True and bOverlay == False
and overlayAntennas==False): # try to support antenna,time
doneOverlayTime = True # assumed until proven otherwise in the 'for' loop
for f in fieldIndicesToPlot:
if (uniqueTimes[mytime] < uniqueTimesPerFieldPerSpw[ispwInCalTable][f][-1]-solutionTimeThresholdSeconds and
uniqueTimes[mytime] < timerangeListTimes[-1]):
doneOverlayTime = False
if (debug):
print("------doneOverlayTime = %s" % (str(doneOverlayTime)))
if (doneOverlayTime):
# either it is the last time of any times in solution, or the last time
# in the list of times to plot
mytime = nUniqueTimes-1
# draw polarization labels for overlayTimes
# We do this last, because by then, the limits will be broad enough and stable.
x0 = xstartPolLabel
y0 = ystartPolLabel
if (corrTypeToString(corr_type[0]) in polsToPlot):
if (channeldiff > 0):
p = 0
pb.text(x0, ystartMadLabel-0.03*subplotRows*p,
corrTypeToString(corr_type[p])+' MAD = %.4f, St.Dev = %.4f'%(gphs_mad[p]['mad'],gphs_std[p]['std']),
color='k', size=mysize, transform=pb.gca().transAxes)
if (phasemarkstyle.find('-')>=0):
pb.text(x0, y0, corrTypeToString(corr_type[0])+' solid', color='k',
fontsize=mysize, transform=pb.gca().transAxes)
else:
pb.text(x0+0.02, y0, corrTypeToString(corr_type[0]), color='k',
fontsize=mysize, transform=pb.gca().transAxes)
pdesc = pb.plot([x0], [y0+0.015*subplotRows], '%sk'%phasemarkstyle, markersize=markersize,
scalex=False,scaley=False, transform=pb.gca().transAxes,markeredgewidth=markeredgewidth)
if (len(corr_type) > 1):
if (corrTypeToString(corr_type[1]) in polsToPlot):
if (channeldiff > 0):
p = 1
pb.text(x0, ystartMadLabel-0.03*subplotRows*p,
corrTypeToString(corr_type[p])+' MAD = %.4f, St.Dev = %.4f'%(gphs_mad[p]['mad'],gphs_std[p]['std']),
color='k', size=mysize, transform=pb.gca().transAxes)
if (phasemarkstyle2.find('--')>=0):
pb.text(x0, y0-0.03*subplotRows, corrTypeToString(corr_type[1])+' dashed',
color='k',fontsize=mysize, transform=pb.gca().transAxes)
else:
pb.text(x0+0.02, y0-0.03*subplotRows, corrTypeToString(corr_type[1]),
color='k', fontsize=mysize, transform=pb.gca().transAxes)
pdesc = pb.plot([x0], [y0+0.015*subplotRows-0.03*subplotRows], '%sk'%phasemarkstyle2,
markersize=markersize, scalex=False,scaley=False, transform=pb.gca().transAxes,markeredgewidth=markeredgewidth)
if (xframe == firstFrame):
# draw title including caltable name
pb.text(xstartTitle, ystartTitle, caltableTitle, size=titlesize, color='k',
transform=pb.gcf().transFigure)
drawOverlayTimeLegends(xframe,firstFrame,xstartTitle,ystartTitle,caltable,
titlesize,fieldIndicesToPlot,ispwInCalTable,
uniqueTimesPerFieldPerSpw,
timerangeListTimes, solutionTimeThresholdSeconds,
debugSloppyMatch,ystartOverlayLegend,debug,mysize,
fieldsToPlot,myUniqueColor,timeHorizontalSpacing,
fieldIndex,overlayColors, antennaVerticalSpacing,
overlayAntennas, timerangeList, caltableTitle,
mytime, scansToPlotPerSpw[ispw], scansForUniqueTimes)
elif (overlayAntennas and overlayTimes): # Oct 23, 2012
# This will only happen for: try to support overlay='antenna,time' for 'phase'
if (xframe == firstFrame and mytime==0 and xctr==firstUnflaggedAntennaToPlot and bOverlay==False):
# draw title including caltable name
pb.text(xstartTitle, ystartTitle, caltableTitle, size=titlesize, color='k',
transform=pb.gcf().transFigure)
DrawBottomLegendPageCoords(msName, uniqueTimes[mytime], mysize, figfile)
#endif (overlayAntennas == False and overlayTimes == False and bOverlay == False)
# Here is 2nd place where we eliminate any white space on the right and left edge of the plots: 'phase'
if (abs(plotrange[2]) > 0 or abs(plotrange[3]) > 0):
if (amplitudeWithPhase == False or phase == ''):
SetNewYLimits([plotrange[2],plotrange[3]],loc=13)
if (phase != '' and amplitudeWithPhase):
if (phase[0] != 0 or phase[1] != 0):
SetNewYLimits(phase,loc=14)
if (plotrange[0]==0 and plotrange[1]==0):
if (xaxis.find('chan')>=0):
SetNewXLimits([channels[0],channels[-1]],loc=23)
else:
if (zoom != 'intersect'):
if (overlaySpws or overlayBasebands):
SetNewXLimits(frequencyRangeToPlotInBaseband[bbctr],loc=24)
else:
SetNewXLimits([frequencies[0], frequencies[-1]],loc=25)
if (bOverlay):
if (xrange2 >= myxrange and zoom != 'intersect'):
# This is necessary if caltable2=TDM and caltable=FDM
SetNewXLimits([frequencies2[0], frequencies2[-1]],loc=26)
if (xrange2 > myxrange+0.1 and zoom != 'intersect'):
TDMisSecond = True
else:
SetNewXLimits([plotrange[0], plotrange[1]],loc=27)
# I need the following line for chanrange to work
if (chanrange[0] != 0 or chanrange[1] != 0):
SetLimits(plotrange, chanrange, newylimits, channels, frequencies,
pfrequencies, ampMin, ampMax, xaxis,pxl, chanrangeSetXrange, chanrangePercent,loc=110)
# Finally, draw the atmosphere and FDM windows, if requested. ' phase'
if ((overlayAntennas==False and overlayTimes==False) or
(overlayAntennas==True and overlayTimes==False and xant==antennasToPlot[-1]) or
(overlayTimes==True and overlayAntennas==False and doneOverlayTime) or
(xant==antennasToPlot[-1] and doneOverlayTime)
):
if ((showatm or showtsky) and len(atmString)>0):
DrawAtmosphere(showatm, showtsky, subplotRows, atmString,
mysize, TebbSky, plotrange, xaxis, atmchan,
atmfreq, transmission, subplotCols,
showatmPoints=showatmPoints, xframe=xframe,
channels=channels, mylineno=lineNumber(),
overlaySpws=overlaySpws,
overlayBasebands=overlayBasebands,
drewAtmosphere=drewAtmosphere,loc=205,
showtsys=showtsys, Trx=Trx)
if (LO1 is not None):
DrawAtmosphere(showatm,showtsky, subplotRows, atmString,
mysize, TebbSky, plotrange, xaxis, atmchanImage,
atmfreqImage, transmissionImage, subplotCols,
LO1, xframe, firstFrame, showatmPoints,
channels=channels, mylineno=lineNumber(),
overlaySpws=overlaySpws,
overlayBasebands=overlayBasebands,
drewAtmosphere=True, loc=206,showtsys=showtsys, Trx=Trx)
drewAtmosphere = True
if (xaxis.find('freq')>=0 and showfdm and nChannels <= 256):
if debug: # phase section
print("calling showFDM(phase), ispw=%d, overlayAntennas=%s, overlayTimes=%s, xant=%d, antennasToPlot[-1]=%d, doneOverlayTime=%s" % (ispw, str(overlayAntennas), str(overlayTimes), xant, antennasToPlot[-1], str(doneOverlayTime)))
if (tableFormat == 33):
showFDM(originalSpw_casa33, chanFreqGHz_casa33, baseband, showBasebandNumber, basebandDict) # 'phase'
else:
showFDM(originalSpw, chanFreqGHz, baseband, showBasebandNumber, basebandDict)
if (bOverlay):
# draw polarization labels for bOverlay
x0 = xstartPolLabel
y0 = ystartPolLabel
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
pb.text(x0, y0-p*0.03*subplotRows, corrTypeToString(corr_type[p])+'-c1',
color=pcolor[p],size=mysize,transform=pb.gca().transAxes)
pb.text(x0, y0-(p*0.03+0.06)*subplotRows, corrTypeToString(corr_type[p])+'-c2',
color=p2color[p],size=mysize, transform=pb.gca().transAxes)
if (bpolyOverlay and xaxis.find('freq')>=0):
# draw polarization labels for bpolyOverlay
x0 = xstartPolLabel
y0 = ystartPolLabel
if (xcolor != x2color):
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
pb.text(x0+0.1, y0-p*0.03*subplotRows, corrTypeToString(corr_type[p]), color=p2color[p],
size=mysize, transform=pb.gca().transAxes)
if (bpolyOverlay2):
for p in range(nPolarizations):
if (corrTypeToString(corr_type[p]) in polsToPlot):
pb.text(x0+0.2, y0-p*0.03*subplotRows, corrTypeToString(corr_type[p]), color=p3color[p],
size=mysize, transform=pb.gca().transAxes)
# endif (yaxis='phase')
redisplay = False
if (xframe == lastFrame):
if (debug):
print("*** mytime+1=%d, nUniqueTimes=%d, timerangeList[-1]=%d, doneOverlayTime=%s" % (mytime+1, nUniqueTimes,timerangeList[-1],doneOverlayTime))
print("*** xant=%d, antennasToPlot[-1]=%d, overlayAntennas=%s, overlayTimes=%s" % (xant,antennasToPlot[-1],overlayAntennas,overlayTimes))
print("*** xframe=%d, lastFrame=%d, xctr=%d, pagectr=%d, spwctr=%d, len(antennasToPlot)=%d, len(spwsToPlot)=%d" % (xframe,lastFrame,xctr,pagectr,spwctr,len(antennasToPlot), len(spwsToPlot)))
myIndexTime = uniqueTimesPerFieldPerSpw[ispwInCalTable][fieldIndex][-1]
matched,mymatch = sloppyMatch(myIndexTime,uniqueTimes,solutionTimeThresholdSeconds,
mytime, scansToPlotPerSpw[ispw], scansForUniqueTimes, # au version
# mytime, scansToPlot, scansForUniqueTimes, # task version
whichone=True)
# The latter condition is needed to support the scans/timeranges parameters.
if (matched==False and scansForUniqueTimes[mytime] in scansToPlotPerSpw[ispw]):
print("-------- Did not find %f in %s" % (myIndexTime,str(uniqueTimes)))
print("Try re-running with a smaller solutionTimeThresholdSeconds (currently %f)" % (solutionTimeThresholdSeconds))
return
else:
# we are on the final time to be plotted
if (debug):
# print "xframe==lastFrame: on the final time = %d (scan=%d)" % (mytime,scansForUniqueTimes[mytime])
print("spwctr=%d len(spwsToPlot)-1=%d, spwsToPlot=" % (spwctr,len(spwsToPlot)-1), spwsToPlot)
mytimeTest = mytime==nUniqueTimes-1 # mytime==myIndexTime # mytime==mymatch
if (debug):
print("mytimeTest = %s" % (mytimeTest))
# print "len(scansToPlotPerSpw)=%d, ispw=%d, len(scansForUniqueTimes)=%d, mytime=%d" % (len(scansToPlotPerSpw),ispw,len(scansForUniqueTimes),mytime)
# if (len(scansToPlotPerSpw)>ispw):
# print "len(scansToPlotPerSpw[ispw]) = %d" % (len(scansToPlotPerSpw[ispw]))
if (scansForUniqueTimes == []):
# old 3.3 cal tables will land here
scanTest = False
scanTest2 = False
else:
if (debug):
print("ispw=%d len(scansToPlotPerSpw[ispw])=%d mytime=%d, len(scansForUniqueTimes)=%d, scansForUniqueTimes[%d]=%d" % (ispw,len(scansToPlotPerSpw[ispw]),mytime,len(scansForUniqueTimes),mytime,scansForUniqueTimes[mytime]))
print("scansToPlotPerSpw = ", scansToPlotPerSpw)
if (len(scansToPlotPerSpw[ispw]) == 0):
scanTest = False
else:
scanTest = (scansToPlotPerSpw[ispw][-1]==scansForUniqueTimes[mytime])
highestSpwIndexInSpwsToPlotThatHasCurrentScan = \
computeHighestSpwIndexInSpwsToPlotThatHasCurrentScan(spwsToPlot, scansToPlotPerSpw, scansForUniqueTimes[mytime])
if (highestSpwIndexInSpwsToPlotThatHasCurrentScan == -1):
scanTest2 = False
else:
scanTest2 = (spwctr == highestSpwIndexInSpwsToPlotThatHasCurrentScan)
if ((overlayAntennas==False and overlayTimes==False and overlaySpws==False and overlayBasebands==False)
# either it is the last time of any, or the last time in the list of times to plot
or (overlayAntennas==False and overlaySpws==False and overlayBasebands==False and (mytime+1==nUniqueTimes or mytime == timerangeList[-1])) # or mytimeTest)) # mytimeTest removed on July 25,2013
or (xant==antennasToPlot[-1] and overlayAntennas==True and overlayTimes==False and overlaySpws==False and overlayBasebands==False)
# inserted mytimeTest below (on Sep 16, 2013 for spectral scan dataset) but breaks VLA
# or ((mytimeTest or spwctr==len(spwsToPlot)-1) and (overlaySpws or overlayBasebands) and overlayAntennas==False and overlayTimes==False)
# The following case is needed to prevent frame=225 in test86 (spectral scan dataset with overlay='spw')
# and the lack of showing of 7 of 8 of the spws in final frame of test61. scanTest2 matches both cases.
or (scanTest and scanTest2 and overlaySpws and overlayAntennas==False and overlayTimes==False)
or ((spwctr==len(spwsToPlot)-1) and (overlayBasebands or overlaySpws) and overlayAntennas==False and overlayTimes==False)
# following case is needed for scans parameter with overlay='time'
or (overlayTimes and scanTest and overlayAntennas==False)
# Following case is needed to make subplot=11 to work for: try to support overlay='antenna,time' : 'phase'
or (xframe == lastFrame and overlayTimes and overlayAntennas and
xctr+1==len(antennasToPlot) and
mytimeTest and
spwctr<len(spwsToPlot))
or (doneOverlayTime and overlayTimes==True
and overlayAntennas==False
)
):
if (debug):
print("^^^^^^^^^^^^^ ispw=%d, mytime=%d, len(uniqueTimes)=%d, nUniqueTimes=%d, mytimeTest=%s" % (ispw, mytime,len(uniqueTimes),nUniqueTimes,mytimeTest))
print(" xctr+1=%d, len(antennasToPlot)=%d, doneOverlayTime=%s, scanTest=%s, scanTest2=%s" % (xctr+1, len(antennasToPlot), doneOverlayTime, scanTest, scanTest2))
DrawBottomLegendPageCoords(msName, uniqueTimes[mytime], mysize, figfile)
# added len(pages)>0 on July 30, 2013 to prevent crash when called with single
# antenna and subplot=11 and all solutions flagged.
if (len(figfile) > 0 and len(pages) > 0):
plotfiles.append(makeplot(figfile,msFound,msAnt,
overlayAntennas,pages,pagectr,
density,interactive,antennasToPlot,
spwsToPlot,overlayTimes,overlayBasebands,
4,xant,ispw,subplot,resample,
debug,
figfileSequential,figfileNumber))
figfileNumber += 1
myinput = ''
donetime = timeUtilities.time()
drewAtmosphere = False # needed for CAS-7187 (subplot=11)
if (interactive):
pb.draw()
# myinput = raw_input("(%.1f sec) Press return for next screen (b for backwards, q to quit): "%(donetime-mytimestamp))
myinput = raw_input("*Press return for next page (b for backwards, q to quit): ")
else:
myinput = ''
skippingSpwMessageSent = 0
mytimestamp = timeUtilities.time()
if (myinput.find('q') >= 0):
mytime = len(uniqueTimes)
spwctr = len(spwsToPlot)
xctr = len(antennasToPlot)
bbctr = len(spwsToPlotInBaseband)
break
if (debug):
print("4)Setting xframe to %d" % xframeStart)
xframe = xframeStart
myUniqueColor = []
pb.subplots_adjust(hspace=myhspace, wspace=mywspace)
if (myinput.find('b') >= 0):
if (pagectr > 0):
pagectr -= 1
#redisplay the current page by setting ctrs back to the value they had at start of that page
xctr = pages[pagectr][PAGE_ANT]
spwctr = pages[pagectr][PAGE_SPW]
mytime = pages[pagectr][PAGE_TIME]
myap = pages[pagectr][PAGE_AP]
xant = antennasToPlot[xctr]
antstring = buildAntString(xant,msFound,msAnt)
ispw = spwsToPlot[spwctr]
# print "Returning to [%d,%d,%d,%d]" % (xctr,spwctr,mytime,myap)
redisplay = True
else:
pagectr += 1
if (pagectr >= len(pages)):
newpage = 1
else:
newpage = 0
if (overlayTimes==True and
sloppyMatch(uniqueTimesPerFieldPerSpw[ispwInCalTable][fieldIndex][-1],
uniqueTimes[mytime],solutionTimeThresholdSeconds,
mytime, scansToPlotPerSpw[ispw], scansForUniqueTimes,
myprint=debugSloppyMatch)):
# be sure to avoid any more loops through mytime which will cause 'b' button to fail
mytime = nUniqueTimes
else:
if (debug):
print("Not going to new page, uniqueTimes[mytime]=%.8f, uniqueTimesPerFieldPerSpw[ispwInCalTable][fieldIndex=%d][-1]=%.8f" % (uniqueTimes[mytime], fieldIndex, uniqueTimesPerFieldPerSpw[ispwInCalTable][fieldIndex][-1]))
print("spwctr=%d ?== (len(spwsToPlot)-1)=%d" % (spwctr,len(spwsToPlot)-1))
if (redisplay == False):
if ((overlayAntennas and xctr+1 >= len(antennasToPlot)) or
((overlaySpws or overlayBasebands) and spwctr+1 >= len(spwsToPlot)) or
(overlayAntennas==False and overlaySpws==False and overlayBasebands==False)):
mytime += 1
# print " 0004 incrementing mytime to ", mytime
if (debug):
print("AT BOTTOM OF LOOP: Incrementing mytime to %d (nUniqueTimes=%d), setting firstUnflaggedAntennaToPlot to 0" % (mytime,nUniqueTimes))
firstUnflaggedAntennaToPlot = 0 # try this
if (debug):
print("AT BOTTOM OF LOOP: Setting firstUnflaggedAntennaToPlot = 0")
doneOverlayTime = False # added on 08-nov-2012
if (overlayBasebands and (uniqueScanNumbers == sorted(scansToPlot))):
if (debug): print("Breaking because scans not specified")
break
# end of while(mytime) loop endwhile mytime
if (redisplay == False):
spwctr += 1
if (debug):
print("---------------------------------------- Incrementing spwctr to %d, spwsToPlot=" % (spwctr), spwsToPlot)
if (spwctr < len(spwsToPlot)):
print("---------------------------------------- ispw = %d" % (spwsToPlot[spwctr]))
else:
print("---------------------------------------- done the spws in this baseband (%d)" % (baseband))
# else:
# print "redisplay = True"
# end of while(spwctr) loop
if (debug): print("B)incrementing bbctr to %d" % (bbctr+1))
bbctr += 1
# end of while(bbctr or spwctr) loop endwhile(bbctr or spwctr)
if (debug): print("B)finalSpwWasFlagged = %s, len(figfile)=%d" % (finalSpwWasFlagged,len(figfile)))
if (xant >= antennasToPlot[-1] and xframe != xframeStart):
# this is the last antenna, so make a final plot
if (len(figfile) > 0):
plotfiles.append(makeplot(figfile,msFound,msAnt,overlayAntennas,
pages,pagectr,density,interactive,antennasToPlot,
spwsToPlot,overlayTimes,overlayBasebands,
5,xant,ispw,subplot,resample,debug,
figfileSequential,figfileNumber))
figfileNumber += 1
if (redisplay == False):
xctr += 1
if (debug):
print("---------------------------------------- Incrementing xctr to %d (xframe=%d)" % (xctr,xframe))
if (overlayAntennas):
if (debug):
print("Breaking out of antenna loop because we are done -------------------")
break
# end of while(xant) loop
pb.draw()
if (len(plotfiles) == 1 and figfileSequential):
# rename the single file to remove ".000"
newplotfiles = [plotfiles[0].split('.000.png')[0]+'.png']
print("renaming %s to %s" % (plotfiles[0],newplotfiles[0]))
os.system('mv %s %s' % (plotfiles[0],newplotfiles[0]))
plotfiles = newplotfiles
if (len(plotfiles) > 0 and buildpdf):
pdfname = figfile+'.pdf'
filelist = ''
plotfiles = np.unique(plotfiles)
for i in range(len(plotfiles)):
cmd = '%s -density %d %s %s.pdf' % (convert,density,plotfiles[i],plotfiles[i].split('.png')[0])
print("Running command = %s" % (cmd))
mystatus = os.system(cmd)
if (mystatus != 0):
# Try MacOS typical location
convert = '/opt/local/bin/convert'
cmd = '%s -density %d %s %s.pdf' % (convert,density,plotfiles[i],plotfiles[i].split('.png')[0])
print("Running command = %s" % (cmd))
mystatus = os.system(cmd)
if (mystatus != 0):
print("ImageMagick's convert command not found, no PDF built")
buildpdf = False
break
if (cleanup):
os.system('rm -f %s' % (plotfiles[i]))
filelist += plotfiles[i].split('.png')[0] + '.pdf '
if (buildpdf and (len(plotfiles)>1 or not figfileSequential)):
# The following 2 lines reduce the total number of characters on the command line, which
# was apparently a problem at JAO for Liza.
filelist = ' '.join(au.pruneFilelist(filelist.split()))
pdfname = au.pruneFilelist([pdfname])[0]
cmd = '%s %s cat output %s' % (pdftk, filelist, pdfname)
print("Running command = %s" % (cmd))
mystatus = os.system(cmd)
if (mystatus != 0):
cmd = '%s -q -sPAPERSIZE=letter -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=%s %s' % (gs,pdfname,filelist)
print("Running command = %s" % (cmd))
mystatus = os.system(cmd)
gs = '/opt/local/bin/gs'
if (mystatus != 0):
cmd = '%s -q -sPAPERSIZE=letter -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=%s %s' % (gs,pdfname,filelist)
print("Running command = %s" % (cmd))
mystatus = os.system(cmd)
if (mystatus == 0):
print("PDF left in %s" % (pdfname))
os.system("rm -f %s" % filelist)
else:
print("Both pdftk and ghostscript are missing, so no PDF built.")
else:
print("Not building PDF (buildpdf=%s, len(plotfiles)=%d, figfileSequential=%s)" % (str(buildpdf), len(plotfiles), figfileSequential))
else:
print("Not building PDF (buildpdf=%s, len(plotfiles)=%d)" % (str(buildpdf), len(plotfiles)))
if (wasInteractive and interactive==False):
print("Restoring interactive pylab.")
pb.ion()
if (wasInteractive==False and interactive):
print("Restoring non-interactive pylab.")
pb.ioff()
if (debug):
print("lastUnflaggedAntennaToPlot = ", lastUnflaggedAntennaToPlot)
showFinalMessage(overlayAntennas, solutionTimeSpread, nUniqueTimes)
if (channeldiff>0):
# Compute median over all antennas, or at least those completed before 'q' was hit
madstats['median'] = dict.fromkeys(spwsToPlot)
spwvalue = {}
spwvalue['amp'] = []
spwvalue['phase'] = []
for j in spwsToPlot:
madstats['median'][j] = dict.fromkeys(timerangeList) # dict.fromkeys(range(len(uniqueTimes)))
for k in timerangeList: # range(len(uniqueTimes)):
madstats['median'][j][k] = dict.fromkeys(range(nPolarizations))
for l in range(nPolarizations):
if (yaxis == 'both'):
madstats['median'][j][k][l] = {'amp': None, 'phase': None}
elif (yaxis == 'phase'):
madstats['median'][j][k][l] = {'phase': None}
else:
# this includes tsys and amp
madstats['median'][j][k][l] = {'amp': None}
for m in madstats['median'][j][k][l].keys():
value = []
for i in madstats.keys(): # loop over antennas
if (i != 'median' and i != 'platforming'):
if (madstats[i][j][k][l][m] is not None):
# print "madstats[%s][%d][%d][%d][%s] = " % (i,j,k,l,m), madstats[i][j][k][l][m]
value.append(madstats[i][j][k][l][m])
spwvalue[m].append(madstats[i][j][k][l][m])
madstats['median'][j][k][l][m] = np.median(value)
# now add another spw which is the median over spw,time,polarization
if (yaxis == 'both'):
madstats['median']['median']={'amp': np.median(spwvalue['amp']),
'phase': np.median(spwvalue['phase'])}
elif (yaxis == 'phase'):
madstats['median']['median'] = {'phase': np.median(spwvalue['phase'])}
else:
madstats['median']['median'] = {'amp': np.median(spwvalue['amp'])}
return(madstats)
else:
if (msFound and mymsmd != ''):
mymsmd.close()
return(vm)
# end of plotbandpass3
def getTelescopeNameFromCaltable(caltable):
mytb = au.createCasaTool(tbtool)
mytb.open(caltable)
if ('OBSERVATION' in mytb.getkeywords()):
observationTable = mytb.getkeyword('OBSERVATION').split()[1]
else:
observationTable = None
mytb.close()
if (observationTable is None):
return('')
else:
return(getTelescopeNameFromCaltableObservationTable(observationTable))
def getTelescopeNameFromCaltableObservationTable(observationTable):
mytb = au.createCasaTool(tbtool)
mytb.open(observationTable)
telescope = mytb.getcell('TELESCOPE_NAME')
mytb.close()
return(telescope)
def getCorrTypeByAntennaName(firstAntenna):
"""
This function is used only if the OBSERVATION table of the caltable is blank and the MS is unavailable.
"""
print("Using antenna name (%s) to set the polarization type." % (firstAntenna))
if (firstAntenna.find('ea') >= 0):
corr_type_string = ['RR','LL']
corr_type = [5,8]
elif (firstAntenna.find('dv') >= 0 or firstAntenna.find('da') >= 0 or
firstAntenna.find('pm') >= 0):
corr_type_string = ['XX','YY']
corr_type = [9,12]
else: # SMA
corr_type_string = ['XX']
corr_type = [9]
return(corr_type, corr_type_string, len(corr_type))
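# The prefix-to-polarization fallback above can be sketched as a standalone
# helper (hypothetical name, not part of this module): VLA antennas ('ea')
# imply circular products, ALMA antennas ('dv'/'da'/'pm') imply linear ones.

```python
def corr_type_for_antenna(name):
    """Guess correlation products from an antenna name prefix."""
    if 'ea' in name:                                 # VLA: circular feeds
        return [5, 8], ['RR', 'LL']
    if any(p in name for p in ('dv', 'da', 'pm')):   # ALMA: linear feeds
        return [9, 12], ['XX', 'YY']
    return [9], ['XX']                               # fallback (e.g. SMA)

print(corr_type_for_antenna('dv11'))
```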
def showFinalMessage(overlayAntennas, solutionTimeSpread, nUniqueTimes):
if (overlayAntennas and solutionTimeSpread > 0 and nUniqueTimes==1):
print("If not all spws were shown, then try setting solutionTimeThreshold=%.0f seconds" % (solutionTimeSpread+1))
def computeOriginalSpwsToPlot(spwsToPlot, originalSpws, tableFormat, debug):
if (tableFormat > 33):
# New caltables use the same numbering as the original ms
return(spwsToPlot)
else:
originalSpwsToPlot = []
for spw in spwsToPlot:
originalSpwsToPlot.append(originalSpws[spw])
return(list(originalSpwsToPlot))
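# The renumbering above can be illustrated with a minimal sketch (hypothetical
# names): pre-3.4 caltables renumber spws, so indices must be translated back
# through the original-spw map, while newer tables keep the MS numbering.

```python
def original_spws(spws_to_plot, original_spws_map, table_format):
    """Translate caltable spw indices back to MS spw ids for old-format tables."""
    if table_format > 33:
        # New caltables use the same numbering as the original MS
        return list(spws_to_plot)
    return [original_spws_map[spw] for spw in spws_to_plot]

result = original_spws([0, 1], {0: 17, 1: 19}, 33)
```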
def computeScansForUniqueTimes(uniqueTimes, cal_scans, times, unique_cal_scans,
debug=False):
"""
The original implementation assumed that individual scans map to
individual times with no overlap.
"""
scansForUniqueTimes = []
nUniqueTimes = len(uniqueTimes)
for uT in uniqueTimes:
if (debug): print("-------- Checking uniqueTime = %s" % (str(uT)))
scansForUniqueTimes.append(cal_scans[list(times).index(uT)])
if (len(unique_cal_scans) == 1):
if (unique_cal_scans[0] != -1):
if (len(scansForUniqueTimes) != len(np.unique(scansForUniqueTimes))):
if debug:
print("Because there are multiple timestamps per scan, I will not assume there is a one-to-one match.")
else:
nUniqueTimes = len(np.unique(scansForUniqueTimes))
else:
# This 3.4 table does not have the scan numbers populated
scansForUniqueTimes = []
print("Because the scan numbers are either not filled in this table, or the solutions span multiple scans, I will use timestamps instead.")
else:
if (len(scansForUniqueTimes) != len(np.unique(scansForUniqueTimes))):
if debug:
print("Because there are multiple timestamps per scan, I will not assume there is a one-to-one match.")
else:
nUniqueTimes = len(np.unique(scansForUniqueTimes))
return(scansForUniqueTimes, nUniqueTimes)
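# The core lookup in computeScansForUniqueTimes can be sketched on its own
# (hypothetical helper name): each unique timestamp is mapped to the scan
# number of the first caltable row carrying that timestamp.

```python
def scans_for_times(unique_times, cal_scans, times):
    """Map each unique timestamp to the scan number of its first matching row."""
    times = list(times)
    return [cal_scans[times.index(t)] for t in unique_times]

scans = scans_for_times([10.0, 20.0], [3, 3, 4], [10.0, 15.0, 20.0])
```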
def calcChebyshev(coeff, validDomain, x):
"""
Given a set of coefficients,
this method evaluates a Chebyshev approximation.
"""
if (type(x) == float or type(x) == int):
    x = [x]
x = np.asarray(x, dtype=float)  # a plain list would break the arithmetic below
myxrange = validDomain[1] - validDomain[0]
x = -1 + 2*(x-validDomain[0])/myxrange
coeff[0] = 0
if (True):
try:
# python 2.7
v = np.polynomial.chebyshev.chebval(x,coeff)
except:
# python 2.6
v = np.polynomial.chebval(x,coeff)
else:
# manual approach, before I found chebval()
v = np.zeros(len(x))
if (len(coeff) > 0):
v += coeff[0] * 1
if (len(coeff) > 1):
v += coeff[1] * (x)
if (len(coeff) > 2):
v += coeff[2] * (2*x**2 - 1)
if (len(coeff) > 3):
v += coeff[3] * (4*x**3 - 3*x)
if (len(coeff) > 4):
v += coeff[4] * (8*x**4 - 8*x**2 + 1)
if (len(coeff) > 5):
v += coeff[5] * (16*x**5 - 20*x**3 + 5*x)
if (len(coeff) > 6):
v += coeff[6] * (32*x**6 - 48*x**4 + 18*x**2 - 1)
if (len(coeff) > 7):
v += coeff[7] * (64*x**7 -112*x**5 + 56*x**3 - 7*x)
if (len(coeff) > 8):
v += coeff[8] * (128*x**8 -256*x**6 +160*x**4 - 32*x**2 + 1)
if (len(coeff) > 9):
v += coeff[9] * (256*x**9 -576*x**7 +432*x**5 - 120*x**3 + 9*x)
if (len(coeff) > 10):
print("Chebyshev polynomials with degree > 10 are not implemented")
return(v)
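# calcChebyshev rescales x from the valid domain onto [-1, 1] and zeroes the
# constant coefficient before evaluating; a minimal standalone sketch
# (hypothetical name) of the same evaluation:

```python
import numpy as np

def cheb_eval(coeff, domain, x):
    """Evaluate a Chebyshev series after mapping x from `domain` onto [-1, 1]."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    x = -1 + 2*(x - domain[0]) / (domain[1] - domain[0])
    c = np.array(coeff, dtype=float)
    c[0] = 0  # drop the constant term, as calcChebyshev does
    return np.polynomial.chebyshev.chebval(x, c)

# With coeff=[1, 2] only 2*T1(x) = 2*x survives; the domain endpoint maps to x=1.
v = cheb_eval([1.0, 2.0], [0.0, 10.0], 10.0)
```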
def ResizeFonts(adesc,fontsize):
# print "Called ResizeFonts()"
yFormat = ScalarFormatter(useOffset=False)
adesc.yaxis.set_major_formatter(yFormat)
adesc.xaxis.set_major_formatter(yFormat)
pb.setp(adesc.get_xticklabels(), fontsize=fontsize)
pb.setp(adesc.get_yticklabels(), fontsize=fontsize)
def complexMeanRad(phases):
# convert back to real and imaginary, take mean, then convert back to phase
meanSin = np.mean(np.sin(phases))
meanCos = np.mean(np.cos(phases))
return(180*np.arctan2(meanSin, meanCos)/math.pi)
def complexMeanDeg(phases):
    # convert to radians, average the unit vectors, then convert the mean back to degrees
    phases = np.asarray(phases)*math.pi/180.  # avoid mutating the caller's array in place
    meanSin = np.mean(np.sin(phases))
    meanCos = np.mean(np.cos(phases))
    return(180*np.arctan2(meanSin, meanCos)/math.pi)
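# Why the unit-vector average matters: near the +/-180 deg wrap, a naive
# arithmetic mean is badly biased. A self-contained sketch (hypothetical name)
# of the same circular mean used by complexMeanDeg:

```python
import math
import numpy as np

def circular_mean_deg(phases_deg):
    """Mean of angles via the unit-vector average, avoiding wrap-around bias."""
    rad = np.radians(phases_deg)
    return math.degrees(math.atan2(np.mean(np.sin(rad)), np.mean(np.cos(rad))))

# A naive mean of [179, -179] gives 0; the circular mean gives 180.
m = circular_mean_deg([179.0, -179.0])
```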
def CalcAtmTransmission(chans,freqs,xaxis,pwv,vm,vis,asdm,antenna,timestamp,
interval,field,refFreqInTable, net_sideband=0,
mytime=0, missingCalWVRErrorPrinted=False, mymsmd='',
caltable='', verbose=False,
maxAtmCalcChannels=MAX_ATM_CALC_CHANNELS,
maxAltitude=60.0, atmType=1, h0=1.0, dP=5.0, dPm=1.1,
Feff=0.99, SBGain=0.99, Tamb=273.0, Trx=None, showtsys=False):
"""
Uses the CASA 'at' tool to compute atmospheric model.
chans: list of all channels, regardless of whether they are flagged
freqs: list of requested frequencies (in GHz) corresponding to chans
xaxis: what we are plotting on the xaxis: 'chan' or 'freq'
if 'chan', then we reverse the direction of the data if necessary (LSB)
pwv: 'auto', or a value in mm
vm: '' in the modern era of msmd, or an old valueMapping structure
vis: measurement set
asdm: ASDM, passed to getMedianPWV, only if vis directory does not contain any
of the following: ASDM_CALWVR, ASDM_CALATMOSPHERE, or CalWVR.xml
antenna: name string, ID string, or integer ID (passed only to getWeather)
timestamp: in MJDsec; used with interval to determine timerange to pass to getMedianPWV
interval: in sec; used with timestamp to determine timerange to pass to getMedianPWV
field: integer ID; used to find scan observation time to pass to getWeather
refFreqInTable: in Hz; used along with net_sideband to determine usb/lsb
mytime: only used to show how the function was called (which timestamp)
missingCalWVRErrorPrinted: Boolean to prevent multiple printings of same warning
mymsmd: existing instance of msmd tool
caltable: only used to get telescope name if mymsmd=='' and vm==''
maxAtmCalcChannels: set this lower to prevent longer processing times
atmType: tropical=1, midLatitudeSummer=2, midLatitudeWinter=3
Trx: receiver temperature to use when computing Tsys, None -> use au.receiverTrxSpec
"""
if (casaVersion >= '4.1.0' and mymsmd != ''):
telescopeName = mymsmd.observatorynames()[0]
elif (vm != ''):
telescopeName = au.getObservatoryName(vis)
elif (os.path.exists(caltable)):
telescopeName = getTelescopeNameFromCaltable(caltable)
else:
telescopeName = 'ALMA'
if (telescopeName.find('ALMA') >= 0):
defaultPWV = 1.0 # a reasonable value for ALMA in case it cannot be found
elif (telescopeName.find('VLA') >= 0):
defaultPWV = 5.0
else:
defaultPWV = 5.0
if (type(pwv) == str):
if (pwv.find('auto')>=0):
if (os.path.exists(vis+'/ASDM_CALWVR') or os.path.exists(vis+'/ASDM_CALATMOSPHERE') or
os.path.exists('CalWVR.xml')):
if (verbose):
print("*** Computing atmospheric transmission using measured PWV, field %d, time %d (%f). ***" % (field,mytime,timestamp))
timerange = [timestamp-interval/2, timestamp+interval/2]
if (os.path.exists(vis+'/ASDM_CALWVR') or os.path.exists(vis+'/ASDM_CALATMOSPHERE')):
if (verbose):
print("Calling au.getMedianPWV('%s',%s,'%s',verbose=False) " % (vis,timerange,asdm))
[pwvmean, pwvstd] = au.getMedianPWV(vis,timerange,asdm,verbose=False)
else:
if (verbose):
print("Calling au.getMedianPWV('%s',%s,asdm='',verbose=False) " % (vis,timerange))
[pwvmean, pwvstd] = au.getMedianPWV('.',timerange,asdm='',verbose=False)
if (verbose):
print("retrieved pwvmean = %f" % pwvmean)
retrievedPWV = pwvmean
if (pwvmean < 0.00001):
pwvmean = defaultPWV
else:
pwvmean = defaultPWV
if (missingCalWVRErrorPrinted == False):
missingCalWVRErrorPrinted = True
if (telescopeName.find('ALMA')>=0):
print("No ASDM_CALWVR, ASDM_CALATMOSPHERE, or CalWVR.xml table found. Using PWV %.1fmm." % pwvmean)
else:
print("This telescope has no WVR to provide a PWV measurement. Using PWV %.1fmm." % pwvmean)
else:
try:
pwvmean = float(pwv)
except:
pwvmean = defaultPWV
else:
try:
pwvmean = float(pwv)
print("Using supplied PWV = %f mm" % pwvmean)
except:
pwvmean = defaultPWV
if (verbose):
print("Using PWV = %.3f mm" % pwvmean)
# default values in case we can't find them below
airmass = 1.5
P = 563.0
H = 20.0
T = 273.0
roundedScanTimes = []
if (casaVersion >= '4.1.0' and mymsmd != ''):
if (type(field) == list or type(field) == type(np.ndarray(0))):
field = field[0]
if (verbose):
print("Looking for scans for field integer = %d, type(field)=%s" % (field,str(type(field))))
if (casaVersion >= '4.4' and casaVersion < '4.6'):
myscans = []
myobsids = []
for obsid in range(mymsmd.nobservations()):
newScans = list(mymsmd.scansforfield(field, obsid=obsid))
for myNewScan in newScans:
roundedScanTimes.append(np.unique(np.round(mymsmd.timesforscan(myNewScan, obsid=obsid))))
myscans += newScans
myobsids += len(newScans)*[obsid]
myscans = np.array(myscans, dtype=np.int32)
else:
myscans = mymsmd.scansforfield(field)
for myscan in myscans:
roundedScanTimes.append(np.unique(np.round(mymsmd.timesforscan(myscan))))
# This method was much slower and not necessary. Removed for CAS-8065
# scantimes = mymsmd.timesforscans(myscans) # is often longer than the scans array
# roundedScanTimes = np.unique(np.round(scantimes,0))
# if (verbose):
# print "Running getScansForTimes (%d -> %d)" % (len(scantimes), len(roundedScanTimes))
# myscans,roundedScanTimes = getScansForTimes(mymsmd,roundedScanTimes) # be sure that each scantime has a scan associated, round to nearest second to save time (esp. for single dish data)
# if (verbose):
# print "done"
else:
if (verbose):
print("Looking for scans for field integer = %d, type(field)=%s" % (field,str(type(field))))
myscans = []
if (vm != ''):
myscans = vm.getScansForFieldID(field)
scantimes = vm.getTimesForScans(myscans)
roundedScanTimes = scantimes
if (verbose):
print("For field %s, Got scans = " % str(field), np.unique(myscans))
mindiff = 1e20
bestscan = 1
for i in range(len(roundedScanTimes)):
stime = roundedScanTimes[i]
meantime = np.mean(stime)
tdiff = np.abs(meantime-timestamp)
if (tdiff < mindiff):
bestscan = myscans[i]
mindiff = tdiff
if (casaVersion >= '4.4' and casaVersion < '4.6'):
bestscan_obsid = myobsids[i]
if (verbose):
print("For timestamp=%.1f, got closest scan = %d, %.0f sec away" %(timestamp, bestscan,mindiff))
if (vm != '' or mymsmd != ''):
if (casaVersion >= '4.4' and casaVersion < '4.6'):
weatherResult = au.getWeather(vis,bestscan,antenna,verbose,vm,mymsmd,obsid=bestscan_obsid,getSolarDirection=False)
else:
weatherResult = au.getWeather(vis,bestscan,antenna,verbose,vm,mymsmd,getSolarDirection=False)
else:
weatherResult = None
if (weatherResult is None):
conditions = {}
P = 786 # VLA value, if no ms weather is found
H = 20
else:
[conditions,myTimes,vm] = weatherResult
P = conditions['pressure']
H = conditions['humidity']
T = conditions['temperature']+273.15
if (P <= 0.0):
P = 563
if (H <= 0.0):
H = 20
if ((vm!='' or mymsmd!='') and ('elevation' in conditions.keys()) == False):
# Someone cleared the POINTING table, so calculate elevation from Ra/Dec/MJD
if (mymsmd != '' and casaVersion >= '4.1.0'):
if (casaVersion >= '4.4' and casaVersion < '4.6'):
if (verbose):
print("mymsmd.fieldsforscan(bestscan) = ", mymsmd.fieldsforscan(bestscan,obsid=bestscan_obsid))
myfieldId = mymsmd.fieldsforscan(bestscan,obsid=bestscan_obsid)[0]
myscantime = np.mean(mymsmd.timesforscan(bestscan,obsid=bestscan_obsid))
else:
if (verbose):
print("mymsmd.fieldsforscan(bestscan) = ", mymsmd.fieldsforscan(bestscan))
myfieldId = mymsmd.fieldsforscan(bestscan)[0]
myscantime = np.mean(mymsmd.timesforscan(bestscan))
telescopeName = mymsmd.observatorynames()[0]
if (len(telescopeName) < 1):
telescopeName = 'ALMA'
elif (vm != ''):
myfieldId = vm.getFieldIdsForFieldName(vm.getFieldsForScan(bestscan))
myscantime = np.mean(vm.getTimesForScans(bestscan))
telescopeName = au.getObservatoryName(vis)
else:
myfieldId = 0
telescopeName = 'VLA'
mydirection = au.getRADecForField(vis, myfieldId)
if (verbose):
print("Scan = %d, time = %.1f, Field = %d, direction = " % (bestscan, myscantime, myfieldId), mydirection)
if (len(telescopeName) < 1):
telescopeName = 'ALMA'
myazel = au.computeAzElFromRADecMJD(mydirection, myscantime/86400., telescopeName, verbose=False)
conditions['elevation'] = myazel[1] * 180/math.pi
conditions['azimuth'] = myazel[0] * 180/math.pi
if (verbose):
print("Computed elevation = %.1f deg" % (conditions['elevation']))
if (verbose):
if ('elevation' in conditions.keys()):
print("CalcAtm: found elevation=%f (airmass=%.3f) for scan:" % (conditions['elevation'],1/np.sin(conditions['elevation']*np.pi/180.)), bestscan)
print("P,H,T = %f,%f,%f" % (P,H,T))
if ('elevation' not in conditions.keys()):
print("Using 45 deg elevation since the actual value is unavailable.")
airmass = 1.0/math.cos(45*math.pi/180.)
elif (conditions['elevation'] <= 3):
print("Using 45 deg elevation instead of %f" % (conditions['elevation']))
airmass = 1.0/math.cos(45*math.pi/180.)
else:
airmass = 1.0/math.cos((90-conditions['elevation'])*math.pi/180.)
numchan = len(freqs)
# Set the reference freq to be the middle of the middle two channels
reffreq=0.5*(freqs[int(numchan/2)-1]+freqs[int(numchan/2)])
originalnumchan = numchan
while (numchan > maxAtmCalcChannels):
numchan //= 2
# print "Reduce numchan to ", numchan
chans = range(0,originalnumchan,(originalnumchan//numchan))
chansep = (freqs[-1]-freqs[0])/(numchan-1)
nbands = 1
myqa = au.createCasaTool(qatool)
fCenter = au.create_casa_quantity(myqa,reffreq,'GHz')
fResolution = au.create_casa_quantity(myqa,chansep,'GHz')
fWidth = au.create_casa_quantity(myqa,numchan*chansep,'GHz')
try:
myat = au.createCasaTool(attool)
needToCloseAT = True
except: # CASA < 5.0.0
needToCloseAT = False
myat = at
result = myat.initAtmProfile(humidity=H, temperature=au.create_casa_quantity(myqa,T,"K"),
altitude=au.create_casa_quantity(myqa,5059,"m"),
pressure=au.create_casa_quantity(myqa,P,'mbar'),
atmType=atmType,
h0=au.create_casa_quantity(myqa, h0,"km"),
maxAltitude=au.create_casa_quantity(myqa, maxAltitude,"km"),
dP=au.create_casa_quantity(myqa, dP,"mbar"),
dPm=dPm
)
if type(result) == tuple:
# CASA 6 (in CASA 5, it is simply one string)
result = result[0]
if verbose:
au.printNumberOfAtmosphericLayers(result)
myat.initSpectralWindow(nbands,fCenter,fWidth,fResolution)
myat.setUserWH2O(au.create_casa_quantity(myqa,pwvmean,'mm'))
# myat.setAirMass() # This does not affect the opacity, but it does affect TebbSky, so do it manually.
n = myat.getNumChan()
if (casaVersion < '4.0.0'):
dry = np.array(myat.getDryOpacitySpec(0)['dryOpacity'])
wet = np.array(myat.getWetOpacitySpec(0)['wetOpacity'].value)
TebbSky = []
for chan in range(n): # do NOT use numchan here, use n
TebbSky.append(myat.getTebbSky(nc=chan, spwid=0).value)
TebbSky = np.array(TebbSky)
# readback the values to be sure they got set
# rf = myat.getRefFreq().value
# cs = myat.getChanSep().value # MHz
else:
dry = np.array(myat.getDryOpacitySpec(0)[1])
wet = np.array(myat.getWetOpacitySpec(0)[1]['value'])
TebbSky = myat.getTebbSkySpec(spwid=0)[1]['value']
# readback the values to be sure they got set
# rf = myqa.convert(myat.getRefFreq(),'GHz')['value']
# cs = myqa.convert(myat.getChanSep(),'MHz')['value'] # MHz
# for an even-numbered-channel spw (0..127), the center is 63.5, not 128/2=64
# where 0 = middle of first channel and 127 = middle of final channel
transmission = np.exp(-airmass*(wet+dry))
TebbSky *= (1-np.exp(-airmass*(wet+dry)))/(1-np.exp(-wet-dry))
if (refFreqInTable*1e-9>np.mean(freqs)):
if ((net_sideband % 2) == 0):
sense = 1
else:
sense = 2
else:
if ((net_sideband % 2) == 0):
sense = 2
else:
sense = 1
if (sense == 1):
if (xaxis.find('chan')>=0):
trans = np.zeros(len(transmission))
Tebb = np.zeros(len(TebbSky))
for i in range(len(transmission)):
trans[i] = transmission[len(transmission)-1-i]
Tebb[i] = TebbSky[len(TebbSky)-1-i]
transmission = trans
TebbSky = Tebb
if showtsys:
if Trx is None or Trx == 'auto':
Trx = au.receiverTrxSpec(au.getBand(freqs[0]*1e9))
Tsys = (Feff*TebbSky + (1.0-Feff)*Tamb + Trx) * ((1.0 + (1.0-SBGain)) / (Feff*np.exp(-airmass*(wet+dry))))
else:
Tsys = None
# Be sure that number of frequencies matched number of transmission values - CAS-10123
numchan = len(transmission)
chans = list(range(len(transmission))) # python 3 requires list() because .reverse() is later called in the parent function
startFreq = myqa.convert(myat.getChanFreq(0),'GHz')['value']
endFreq = myqa.convert(myat.getChanFreq(numchan-1),'GHz')['value']
if needToCloseAT:
myat.close()
myqa.done()
freq = np.linspace(startFreq, endFreq, numchan)
# print "startFreq=%f endFreq=%f " % (startFreq, endFreq)
# chansepGHz = cs*0.001
# if sense == 2:
# freq = np.linspace(rf-((numchan-1)/2.)*chansepGHz, rf+((numchan-1)/2.)*chansepGHz, numchan)
# else:
# freq = np.linspace(rf+((numchan-1)/2.)*chansepGHz,
# rf-((numchan-1)/2.)*chansepGHz, numchan)
return(freq, chans, transmission, pwvmean, airmass, TebbSky,
missingCalWVRErrorPrinted, Tsys)
def checkForNaNs(mylist):
nans = 0
for x in mylist:
if (x != x):
nans += 1
return(nans)
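# Illustrative sketch (not part of the original module): checkForNaNs relies
# on the IEEE-754 rule that NaN is the only value for which x != x, so NaNs
# can be counted without numpy.isnan. A hypothetical standalone version:
def _countNaNsDemo(mylist):
    # count entries that fail self-equality, i.e. NaNs
    return sum(1 for x in mylist if x != x)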
def RescaleTrans(trans, lim, subplotRows, lo1=None, xframe=0,
overlaySpws=False, overlayBasebands=False):
# Input: the array of transmission or TebbSky values and current limits
# Returns: arrays of the rescaled transmission values and the zero point
# values in units of the frame, and in amplitude.
debug = False
yrange = lim[1]-lim[0]
if (lo1 is None):
labelgap = 0.6 # Use this fraction of the margin for the PWV ATM label
else:
labelgap = 0.5 # Use this fraction of the margin to separate the top
# curve from the upper y-axis
y2 = lim[1] - labelgap*yrange*TOP_MARGIN/(1.0+TOP_MARGIN)
y1 = lim[1] - yrange*TOP_MARGIN/(1.0+TOP_MARGIN)
transmissionRange = np.max(trans)-np.min(trans)
if (overlaySpws or overlayBasebands) and False:
if (transmissionRange < 1):
# Then a transmission range from 0..1 was passed in
transmissionRange = 1.0
else:
# Then a TebbSky was passed in (in Kelvin)
transmissionRange = 290
else:
if (transmissionRange < 0.05):
# force there to be a minimum range of transmission display
# overemphasize tiny ozone lines
transmissionRange = 0.05
if (transmissionRange > 1 and transmissionRange < 10):
# force there to be a minimum range of Tebbsky (10K) to display
transmissionRange = 10
# convert transmission to amplitude
newtrans = y2 - (y2-y1)*(np.max(trans)-trans)/transmissionRange
# Use edge values
edgeValueTransmission = trans[-1]
otherEdgeValueTransmission = trans[0]
# Now convert the edge channels' transmission values into amplitude
edgeValueAmplitude = y2 - (y2-y1)*(np.max(trans)-trans[-1])/transmissionRange
otherEdgeValueAmplitude = y2 - (y2-y1)*(np.max(trans)-trans[0])/transmissionRange
# Now convert amplitude to frame units, offsetting downward by half
# the font size
fontoffset = 0.01*subplotRows
edgeValueFrame = (edgeValueAmplitude - lim[0])/yrange - fontoffset
otherEdgeValueFrame = (otherEdgeValueAmplitude - lim[0])/yrange - fontoffset
# scaleFactor is how large the plot is from the bottom x-axis
# up to the labelgap, in units of the transmissionRange
scaleFactor = (1+TOP_MARGIN*(1-labelgap)) / (TOP_MARGIN*(1-labelgap))
# compute the transmission at the bottom of the plot, and label it
y0transmission = np.max(trans) - transmissionRange*scaleFactor
y0transmissionFrame = 0
y0transmissionAmplitude = lim[0]
if (y0transmission <= 0):
# If the bottom of the plot is below zero transmission, then label
# the location of zero transmission instead.
if (debug):
print("--------- y0transmission original = %f, (y1,y2)=(%f,%f)" % (y0transmission,y1,y2))
y0transmissionAmplitude = y1-(y2-y1)*(np.min(trans)/transmissionRange)
y0transmissionFrame = (y0transmissionAmplitude-lim[0]) / (lim[1]-lim[0])
y0transmission = 0
if (debug):
print("-------- xframe=%d, scaleFactor = " % (xframe), scaleFactor)
print("edgeValueFrame, other = ", edgeValueFrame, otherEdgeValueFrame)
print("edgeValueTransmission, other = ", edgeValueTransmission, otherEdgeValueTransmission)
print("edgeValueAmplitude, otherEdgeValueAmplitude = ", edgeValueAmplitude, otherEdgeValueAmplitude)
print("y0transmission = %f, y0transmissionFrame = %f" % (y0transmission,y0transmissionFrame))
print("y0transmissionAmplitude = ", y0transmissionAmplitude)
print("transmissionRange = ", transmissionRange)
return(newtrans, edgeValueFrame, y0transmission, y0transmissionFrame,
otherEdgeValueFrame, edgeValueTransmission,
otherEdgeValueTransmission, edgeValueAmplitude,
otherEdgeValueAmplitude, y0transmissionAmplitude)
def RescaleX(chans, lim, plotrange, channels):
# This function is now only used by DrawAtmosphere when xaxis='chan'.
# It is only really necessary when len(chans)>MAX_ATM_CALC_CHANNELS.
# - September 2012
# If the user specified a plotrange, then rescale to this range,
# otherwise rescale to the automatically-determined range.
# chans = 0..N where N=number of channels in the ATM_CALC
# channels = 0..X where X=number of channels in the spw, regardless of flagging
if (len(chans) != len(channels)):
if (chans[1] > chans[0]):
atmchanrange = chans[-1]-chans[0]
else:
atmchanrange = chans[0]-chans[-1]
if (channels[1] > channels[0]):
chanrange = channels[-1]-channels[0]
else:
chanrange = channels[0]-channels[-1]
newchans = np.array(chans)*chanrange/atmchanrange
return(newchans)
else:
return(chans)
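# Illustrative sketch (not part of the original module): RescaleX maps the
# ATM calculation's channel axis onto the spw channel axis with a single
# linear scale factor. A hypothetical standalone version of that mapping:
def _rescaleChanDemo(chans, channels):
    # ratio of the spw channel span to the ATM-calc channel span
    atmchanrange = abs(chans[-1] - chans[0])
    chanrange = abs(channels[-1] - channels[0])
    return [c * chanrange / atmchanrange for c in chans]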
def recalcYlimitsFreq(chanrange, ylimits, amp, sideband,plotrange,xchannels,
debug=False,location=0, chanrangePercent=None):
# Used by plots with xaxis='freq'
# xchannels are the actual channel numbers of unflagged data, i.e. displayed points
# amp is actual data plotted
# This function is often overridden by SetLimits which is typically called afterward.
# It is not clear that it ever takes effect, but it might in some cases. - 2015-03-02
ylim_debug = False
amp = np.array(amp)
if (debug):
print("recalcYlimitsFreq(loc=%d): len(xchannels) = %d" % (location, len(xchannels)))
if (len(amp) < 1):
return(pb.ylim())
if (chanrange[0]==0 and chanrange[1] == 0 and plotrange[2] == 0 and plotrange[3]==0
and chanrangePercent is None):
if (len(amp) == 1):
if (ylim_debug):
print("amp = ", amp)
ylimits = [amp[0]-0.2, amp[0]+0.2]
else:
newmin = np.min(amp)
newmax = np.max(amp)
newmin = np.min([ylimits[0],newmin])
newmax = np.max([ylimits[1],newmax])
ylimits = [newmin, newmax]
elif ((abs(chanrange[0]) > 0 or abs(chanrange[1]) > 0)):
plottedChannels = np.intersect1d(xchannels, range(chanrange[0],chanrange[1]+1))
if (len(plottedChannels) < 1):
return(ylimits)
mylist = np.arange(xchannels.index(plottedChannels[0]), 1+xchannels.index(plottedChannels[-1]))
if (mylist[-1] >= len(amp)):
# prevent crash if many channels are flagged
return(ylimits)
if (ylim_debug):
print("Starting with limits = ", ylimits)
print("Examining channels: ", mylist)
print("len(amp): %d" % (len(amp)))
print("type(amp) = %s" % (str(type(amp))))
print("Examining values: amp[mylist] = %s" % (str(amp[mylist])))
newmin = np.min(amp[mylist])
newmax = np.max(amp[mylist])
newmin = np.min([ylimits[0],newmin])
newmax = np.max([ylimits[1],newmax])
# The following presents a problem with overlays, as it keeps widening forever
# newmin -= 0.05*(newmax-newmin)
# newmax += 0.05*(newmax-newmin)
ylimits = [newmin, newmax]
elif (chanrangePercent is not None):
startFraction = (100-chanrangePercent)*0.5*0.01
stopFraction = 1-(100-chanrangePercent)*0.5*0.01
if (xchannels == []):
# prevent crash if many channels are flagged: 2015-04-13
return(ylimits)
cr0 = int(np.round(np.max(xchannels)*startFraction))
cr1 = int(np.round(np.max(xchannels)*stopFraction))
# print "Plotting %.0f%% of channels %d through %d" % (chanrangePercent, cr0, cr1)
plottedChannels = np.intersect1d(xchannels, range(cr0, cr1+1))
if (len(plottedChannels) < 1):
return(ylimits)
mylist = np.arange(xchannels.index(plottedChannels[0]), 1+xchannels.index(plottedChannels[-1]))
if (mylist[-1] >= len(amp)):
# prevent crash if many channels are flagged
return(ylimits)
if (ylim_debug):
print("Starting with limits = ", ylimits)
print("Examining channels: ", mylist)
print("len(amp): %d" % (len(amp)))
print("type(amp) = %s" % (str(type(amp))))
print("Examining values: amp[mylist] = %s" % (str(amp[mylist])))
newmin = np.min(amp[mylist])
newmax = np.max(amp[mylist])
newmin = np.min([ylimits[0],newmin])
newmax = np.max([ylimits[1],newmax])
ylimits = [newmin, newmax]
if (ylim_debug):
print("Returning from loc=%d with limits = %s" % (location, str(ylimits)))
return ylimits
def recalcYlimits(plotrange, ylimits, amp):
# Used by plots with xaxis='chan'
if (len(amp) < 1):
return(pb.ylim())
if ((abs(plotrange[0]) > 0 or abs(plotrange[1]) > 0) and (plotrange[2] == 0 and plotrange[3] == 0)):
x0 = plotrange[0]
x1 = plotrange[1]
if (x0 < 0):
x0 = 0
if (x1 > len(amp)-1):
x1 = len(amp)-1
if (len(amp) > x1 and x0 < x1):
newmin = np.min(amp[x0:x1])
newmax = np.max(amp[x0:x1])
newmin = np.min([ylimits[0],newmin])
newmax = np.max([ylimits[1],newmax])
ylimits = [newmin, newmax]
else:
ylimits = pb.ylim() # added on 10/27/2011
# print "current ylimits = ", ylimits
return(ylimits)
def SetNewYLimits(newylimits, loc=-1):
if (False):
print("loc=%d, Entered SetNewYLimits with " % (loc), newylimits)
newrange = newylimits[1]-newylimits[0]
if (newrange > 0):
pb.ylim([newylimits[0], newylimits[1]])
def SetNewXLimits(newxlimits, loc=-1):
# print "Entered SetNewXLimits from location=%d with range = %.3f" % (loc,np.max(newxlimits)-np.min(newxlimits))
myxrange = np.abs(newxlimits[1]-newxlimits[0])
if (myxrange == 0):
myxrange = 0.001
mybuffer = 0.01
if (newxlimits[0] < newxlimits[1]):
pb.xlim([newxlimits[0]-myxrange*mybuffer,newxlimits[1]+myxrange*mybuffer] )
else:
# print "Swapping xlimits order"
pb.xlim(newxlimits[1]-myxrange*mybuffer, newxlimits[0]+myxrange*mybuffer)
def sloppyMatch(newvalue, mylist, threshold, mytime=None, scansToPlot=[],
scansForUniqueTimes=[], myprint=False, whichone=False):
"""
If scan numbers are present, perform an exact match, otherwise compare the
time stamps of the solutions.
"""
debug = myprint
if (debug):
print("sloppyMatch: scansToPlot = %s" % (str(scansToPlot)))
mymatch = None
if (len(scansToPlot) > 0):
if (mytime >= len(scansForUniqueTimes)):
print("sloppyMatch() mytime is too large: mytime=%d >= len(scansForUniqueTimes)=%d: " % (mytime, len(scansForUniqueTimes)), scansForUniqueTimes)
matched = scansForUniqueTimes[mytime] in scansToPlot
if (whichone or myprint):
myscan = scansForUniqueTimes[mytime]
if (myscan in scansToPlot):
mymatch = list(scansToPlot).index(myscan)
if (matched == False and myprint==True):
print("sloppyMatch: %d is not in %s" % (myscan, list(scansToPlot)))
elif (myprint==True):
print("sloppyMatch: %d is in %s" % (myscan, list(scansToPlot)))
else:
matched = False
if (type(mylist) != list and type(mylist)!=np.ndarray):
mylist = [mylist]
mymatch = -1
for i in range(len(mylist)):
v = mylist[i]
if (abs(newvalue-v) < threshold):
matched = True
mymatch = i
if (matched == False and myprint==True):
print("sloppyMatch: %.0f is not within %.0f of anything in %s" % (newvalue,threshold, str([int(round(b)) for b in mylist])))
elif (myprint==True):
print("sloppyMatch: %.0f is within %.0f of something in %s" % (newvalue,threshold, str([int(round(b)) for b in mylist])))
if (whichone == False):
return(matched)
else:
return(matched,mymatch)
def sloppyUnique(t, thresholdSeconds):
"""
Takes a list of numbers and returns a list of unique values, subject to a threshold difference.
"""
# start with the first entry, and only add a new entry if it is more than the threshold from prior
sloppyList = [t[0]]
for i in range(1,len(t)):
keepit = True
for j in range(0,i):
if (abs(t[i]-t[j]) < thresholdSeconds):
keepit = False
if (keepit):
sloppyList.append(t[i])
# print "sloppyUnique returns %d values from the original %d" % (len(sloppyList), len(t))
return(sloppyList)
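# Illustrative sketch (not part of the original module): sloppyUnique keeps a
# value only if it differs by at least the threshold from *every* earlier
# value in the original list (not just from previously kept values):
def _sloppyUniqueDemo(t, thresholdSeconds):
    kept = [t[0]]
    for i in range(1, len(t)):
        if all(abs(t[i] - t[j]) >= thresholdSeconds for j in range(i)):
            kept.append(t[i])
    return kept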
def SetLimits(plotrange, chanrange, newylimits, channels, frequencies, pfrequencies,
ampMin, ampMax, xaxis, pxl, chanrangeSetXrange, chanrangePercent=None,
debug=False, loc=-1):
"""
This is the place where chanrange actually takes effect.
"""
if debug:
print("SetLimits(%d): freq[0, -1]=" % (loc), frequencies[0], frequencies[-1])
print("SetLimits(%d): newylimits=" % (loc), newylimits)
if (abs(plotrange[0]) > 0 or abs(plotrange[1]) > 0):
SetNewXLimits([plotrange[0],plotrange[1]],loc=30)
if (plotrange[2] == 0 and plotrange[3] == 0):
# reset the ylimits based on the channel range shown (selected via plotrange)
SetNewYLimits(newylimits,loc=15)
else: # set xlimits to full range
if (xaxis.find('chan')>=0):
SetNewXLimits([channels[0],channels[-1]],loc=31)
else:
if (chanrangeSetXrange or (chanrange[0]==0 and chanrange[1]==0 and chanrangePercent is None)): # CAS-7965
SetNewXLimits([frequencies[0], frequencies[-1]],loc=32)
if (chanrange[0] != 0 or chanrange[1] != 0 or chanrangePercent is not None):
# reset the ylimits based on the channel range specified (selected via chanrange)
if (newylimits != [LARGE_POSITIVE, LARGE_NEGATIVE]):
SetNewYLimits(newylimits,loc=16)
# print "pxl=%d, chanrange[0]=%d, chanrange[1]=%d, shape(pfreq), shape(freq)=" % (pxl, chanrange[0], chanrange[1]), np.shape(pfrequencies),np.shape(frequencies)
# Use frequencies instead of pfrequencies, because frequencies are not flagged and
# will continue to work if chanrange is specified and data are flagged.
if chanrangeSetXrange:
if (chanrangePercent is None):
try:
SetNewXLimits([frequencies[chanrange[0]], frequencies[chanrange[1]]],loc=33) # Apr 3, 2012
# print "chanrangePercent=None, Set new xlimits from channels %d thru %d" % (chanrange[0],chanrange[1]-1)
except:
print("a)Invalid chanrange (%d-%d). Valid range = 0-%d" % (chanrange[0],chanrange[1],len(frequencies)-1))
return(-1)
else:
startFraction = (100-chanrangePercent)*0.5*0.01
stopFraction = 1-(100-chanrangePercent)*0.5*0.01
cr0 = int(np.round(np.max(channels)*startFraction))
cr1 = int(np.round(np.max(channels)*stopFraction))
try:
SetNewXLimits([frequencies[cr0], frequencies[cr1]],loc=34)
# print "Set new xlimits from channels %d thru %d" % (cr0,cr1)
except:
print("b)Invalid chanrange (%d-%d). Valid range = 0-%d" % (cr0,cr1,len(frequencies)-1))
return(-1)
if (abs(plotrange[2]) > 0 or abs(plotrange[3]) > 0):
SetNewYLimits([plotrange[2],plotrange[3]],loc=17)
return(0)
def showFDM(originalSpw, chanFreqGHz, baseband, showBasebandNumber, basebandDict):
"""
Draws a horizontal bar indicating the location of FDM spws in the dataset.
originalSpw: should contain all spws in the dataset, not just the ones
in the caltable
baseband: the baseband of the current spw
showBasebandNumber: force the display of all FDM spws, and their baseband number
basebandDict: {1:[17,19], 2:[21,23], etc.} or {} for really old datasets
"""
# add some space at the bottom -- Apr 25, 2012
ylim = pb.ylim()
yrange = ylim[1]-ylim[0]
pb.ylim([ylim[0]-BOTTOM_MARGIN*yrange, ylim[1]])
sdebug = False
if (sdebug):
print("Showing FDM (%d)" % (len(originalSpw)), originalSpw)
print("baseband = %d, basebandDict = %s" % (baseband, str(basebandDict)))
fdmctr = -1
x0,x1 = pb.xlim()
y0,y1 = pb.ylim()
yrange = y1 - y0
myxrange = x1 - x0
# pb.hold(True) # not available in CASA6, but never needed
labelAbove = False # False means label to the right
for i in range(len(originalSpw)):
nchan = len(chanFreqGHz[i])
# latter 3 values are for ACA with FPS enabled
if (nchan >= 15 and nchan not in [256,128,64,32,16,248,124,62]):
# use .get() so that an empty basebandDict (e.g. if showfdm=True but no ms was found) does not crash
if (originalSpw[i] in basebandDict.get(baseband, []) or showBasebandNumber):
fdmctr += 1
verticalOffset = fdmctr*0.04*yrange
y1a = y0 + 0.03*yrange + verticalOffset
if (labelAbove):
y2 = y1a + 0.01*yrange
else:
y2 = y1a - 0.016*yrange
# print "chan=%d: Drawing line at y=%f (y0=%f) from x=%f to %f" % (len(chanFreqGHz[i]),
# y1a,y0,chanFreqGHz[i][0], chanFreqGHz[i][-1])
f0 = chanFreqGHz[i][0]
f1 = chanFreqGHz[i][-1]
if (f1 < f0):
swap = f1
f1 = f0
f0 = swap
v0 = np.max([f0,x0])
v1 = np.min([f1,x1])
if (v1 > v0):
if (labelAbove):
xlabel = 0.5*(v0+v1)
if (xlabel < x0):
xlabel = x0
if (xlabel > x1):
xlabel = x1
else:
xlabel = v1+0.02*myxrange
pb.plot([v0,v1], [y1a,y1a], '-',
linewidth=4, color=overlayColors[fdmctr],markeredgewidth=markeredgewidth)
if (showBasebandNumber):
mybaseband = [key for key in basebandDict if i in basebandDict[key]]
if (len(mybaseband) > 0):
pb.text(xlabel, y2, "spw%d(bb%d)"%(i,mybaseband[0]), size=7)
else:
pb.text(xlabel, y2, "spw%d(bb?)"%(i), size=7)
else:
pb.text(xlabel, y2, "spw%d"%(i), size=7)
if (sdebug): print("Plotting spw %d (%d), xlabel=%f, y2=%f, y1a=%f, y0=%f, y1=%f" % (i, originalSpw[i], xlabel, y2, y1a, y0, y1))
else:
if (sdebug): print("Not plotting spw %d (%d) because %f < %f" % (i,originalSpw[i],v0,v1))
else:
if (sdebug): print("Not plotting spw %d (%d) because it is not in baseband %d (%s)" % (i,originalSpw[i],baseband,basebandDict[baseband]))
else:
if (sdebug and False): print("Not plotting spw %d (%d) because fewer than 256 channels (%d)" % (i,originalSpw[i],nchan))
if (fdmctr > -1):
pb.ylim([y0,y1])
pb.xlim([x0,x1])
def DrawAtmosphere(showatm, showtsky, subplotRows, atmString, mysize,
TebbSky, plotrange, xaxis, atmchan, atmfreq, transmission,
subplotCols, lo1=None, xframe=0, firstFrame=0,
showatmPoints=False, channels=[0], mylineno=-1,xant=-1,
overlaySpws=False, overlayBasebands=False,
drewAtmosphere=False, loc=-1, showtsys=False, Trx=None):
"""
Draws atmospheric transmission or Tsky on an amplitude vs. chan or freq plot.
"""
xlim = pb.xlim()
ylim = pb.ylim()
myxrange = xlim[1]-xlim[0]
yrange = ylim[1]-ylim[0]
if (not drewAtmosphere and not overlayBasebands): # CAS-8489 final
if (lo1 is None):
# add some space at the top -- Apr 16, 2012
pb.ylim([ylim[0], ylim[1]+TOP_MARGIN*yrange])
else:
pb.ylim([ylim[0], ylim[1]+TOP_MARGIN*yrange*0.5])
ylim = pb.ylim()
yrange = ylim[1]-ylim[0]
#
ystartPolLabel = 1.0-0.04*subplotRows
if (lo1 is None):
transmissionColor = 'm'
tskyColor = 'm'
else:
transmissionColor = 'k'
tskyColor = 'k'
if (showatmPoints):
atmline = '.'
else:
atmline = '-'
if (showatm or showtsky):
if (showatm):
atmcolor = transmissionColor
else:
atmcolor = tskyColor
if (lo1 is None and not drewAtmosphere):
pb.text(0.20, ystartPolLabel, atmString, color=atmcolor, size=mysize, transform=pb.gca().transAxes)
if (showtsky):
if showtsys:
# center the Tsky curve vertically within the current plot limits
scaleFactor = 0.5*(ylim[1]+ylim[0]) - np.mean(TebbSky)
rescaledY = TebbSky + scaleFactor # avoid modifying TebbSky in place
else:
rescaledY, edgeYvalue, zeroValue, zeroYValue, otherEdgeYvalue, edgeT, otherEdgeT, edgeValueAmplitude, otherEdgeValueAmplitude, zeroValueAmplitude = RescaleTrans(TebbSky, ylim, subplotRows, lo1, xframe, overlaySpws, overlayBasebands)
else:
rescaledY, edgeYvalue, zeroValue, zeroYValue, otherEdgeYvalue, edgeT, otherEdgeT, edgeValueAmplitude, otherEdgeValueAmplitude, zeroValueAmplitude = RescaleTrans(transmission, ylim, subplotRows, lo1, xframe, overlaySpws, overlayBasebands)
if (overlayBasebands and xaxis.find('freq')>=0):
# use axis coordinates for y-axis only so that transmission can be on common scale
trans = matplotlib.transforms.blended_transform_factory(pb.gca().transData, pb.gca().transAxes)
if showtsky:
pb.plot(atmfreq, TebbSky/300., '%s%s'%(atmcolor,atmline),
markeredgewidth=markeredgewidth, transform=trans)
else:
pb.plot(atmfreq, transmission, '%s%s'%(atmcolor,atmline),
markeredgewidth=markeredgewidth, transform=trans)
else:
# use user coordinates
if (xaxis.find('chan')>=0):
rescaledX = RescaleX(atmchan, xlim, plotrange, channels)
# rescaledX = atmchan
pb.plot(rescaledX, rescaledY,'%s%s'%(atmcolor,atmline),markeredgewidth=markeredgewidth)
elif (xaxis.find('freq')>=0):
# print "Calling plot(xmean=%f, ymean=%f)" % (np.mean(atmfreq), np.mean(rescaledY))
pb.plot(atmfreq, rescaledY, '%s%s'%(atmcolor,atmline),markeredgewidth=markeredgewidth)
if (lo1 is None):
xEdgeLabel = 1.01
else:
xEdgeLabel = -0.10*subplotCols # avoids overwriting y-axis label (same for first and subsequent frames)
SetNewXLimits(xlim,loc=35) # necessary for zoom='intersect'
if (not overlayBasebands): # CAS-8489 final
SetNewYLimits(ylim,loc=18)
# Now draw the percentage on right edge of plot
if (not drewAtmosphere):
if (overlayBasebands and xaxis.find('freq')>=0): # CAS-8489 final
trans = matplotlib.transforms.blended_transform_factory(pb.gca().transData, pb.gca().transAxes)
zeroValue = 0
zeroValueAmplitude = 0
edgeValueAmplitude = 1
if (showtsky):
edgeT = 300
if (lo1 is None):
pb.text(xlim[1]+0.06*myxrange/subplotCols, edgeValueAmplitude,
'%.0fK'%(edgeT), color=atmcolor, size=mysize, transform=trans)
pb.text(xlim[1]+0.06*myxrange/subplotCols, zeroValueAmplitude,
'%.0fK'%(zeroValue), color=atmcolor, transform=trans,
size=mysize)
else:
pb.text(xEdgeLabel, edgeValueAmplitude,'%.0fK'%(edgeT),
color=atmcolor,
size=mysize, transform=pb.gca().transAxes)
pb.text(xEdgeLabel, zeroValueAmplitude,'%.0fK'%(zeroValue),
color=atmcolor,
size=mysize, transform=pb.gca().transAxes)
else:
# showatm=True
edgeT = 1
if (lo1 is None):
pb.text(xlim[1]+0.05*myxrange/subplotCols, edgeValueAmplitude,
'%.0f%%'%(edgeT*100), color=atmcolor, size=mysize,
transform=trans, va='center')
pb.text(xlim[1]+0.05*myxrange/subplotCols, zeroValueAmplitude,
'%.0f%%'%(zeroValue*100), color=atmcolor, transform=trans,
size=mysize, va='center')
else:
pb.text(xEdgeLabel, edgeValueAmplitude,'%.0f%%'%(edgeT*100),
color=atmcolor, va='center',
size=mysize, transform=pb.gca().transAxes)
pb.text(xEdgeLabel, zeroValueAmplitude,'%.0f%%'%(zeroValue*100),
color=atmcolor, va='center',
size=mysize, transform=pb.gca().transAxes)
elif not showtsys:
if (showtsky):
if (lo1 is None):
# This must be done in user coordinates since another curve
# is plotted following this one.
pb.text(xlim[1]+0.06*myxrange/subplotCols, edgeValueAmplitude,
'%.0fK'%(edgeT), color=atmcolor, size=mysize)
pb.text(xlim[1]+0.06*myxrange/subplotCols, zeroValueAmplitude,
'%.0fK'%(zeroValue), color=atmcolor,
size=mysize)
else:
# This can remain in axes units since it is the final plot.
pb.text(xEdgeLabel, otherEdgeYvalue,'%.0fK'%(otherEdgeT),
color=atmcolor,
size=mysize, transform=pb.gca().transAxes)
pb.text(xEdgeLabel, zeroYValue,'%.0fK'%(zeroValue),
color=atmcolor,
size=mysize, transform=pb.gca().transAxes)
else:
# showatm=True
if (lo1 is None):
# This must be done in user coordinates since another curve
# is plotted following this one.
pb.text(xlim[1]+0.05*myxrange/subplotCols, edgeValueAmplitude,
'%.0f%%'%(edgeT*100), color=atmcolor, size=mysize)
pb.text(xlim[1]+0.05*myxrange/subplotCols, zeroValueAmplitude,
'%.0f%%'%(zeroValue*100), color=atmcolor,
size=mysize)
else:
# This can remain in axes units since it is the final plot.
pb.text(xEdgeLabel, otherEdgeYvalue,'%.0f%%'%(otherEdgeT*100),
color=atmcolor,
size=mysize, transform=pb.gca().transAxes)
pb.text(xEdgeLabel, zeroYValue,'%.0f%%'%(zeroValue*100),
color=atmcolor,
size=mysize, transform=pb.gca().transAxes)
if (lo1 is not None):
if (xframe == firstFrame):
pb.text(+1.04-0.04*subplotCols, -0.07*subplotRows,
'Signal SB', color='m', size=mysize,
transform=pb.gca().transAxes)
pb.text(-0.03-0.08*subplotCols, -0.07*subplotRows,
'Image SB', color='k', size=mysize,
transform=pb.gca().transAxes)
return ylim # CAS-8655
def DrawBottomLegendPageCoords(msName, uniqueTimesMytime, mysize, figfile):
msName = msName.split('/')[-1]
bottomLegend = msName + ' ObsDate=' + utdatestring(uniqueTimesMytime)
if (os.path.basename(figfile).find('regression') == 0):
regression = True
else:
regression = False
if (regression == False):
# strip off the seconds from the time to make space for casaVersion
bottomLegend += ' plotbandpass3 v' \
+ PLOTBANDPASS_REVISION_STRING.split()[2] + ': ' \
+ PLOTBANDPASS_REVISION_STRING.split()[3] + ' ' \
+ PLOTBANDPASS_REVISION_STRING.split()[4][:-3] + ', C' + casaVersion
# The following should be used going forward, as it is better for long VLA names
pb.text(0.04, 0.02, bottomLegend, size=mysize, transform=pb.gcf().transFigure)
# pb.text(0.1, 0.02, bottomLegend, size=mysize, transform=pb.gcf().transFigure)
def DrawAntennaNames(msAnt, antennasToPlot, msFound, mysize):
for a in range(len(antennasToPlot)):
if (msFound):
legendString = msAnt[antennasToPlot[a]]
else:
legendString = str(antennasToPlot[a])
if (a<maxAntennaNamesAcrossTheTop):
x0 = xstartTitle+(a*antennaHorizontalSpacing)
y0 = ystartOverlayLegend
else:
# start going down the righthand side
x0 = xstartTitle+(maxAntennaNamesAcrossTheTop*antennaHorizontalSpacing)
y0 = ystartOverlayLegend-(a-maxAntennaNamesAcrossTheTop)*antennaVerticalSpacing
pb.text(x0, y0, legendString,color=overlayColors[a],fontsize=mysize,
transform=pb.gcf().transFigure)
def stdInfo(a, sigma=3, edge=0, spw=-1, xant=-1, pol=-1):
"""
Computes the standard deviation of a list, then returns the value, plus the
number and list of channels that exceed sigma*std, and the worst outlier.
"""
info = {}
if (edge >= len(a)//2): # protect against too large of an edge value (integer division for python 3)
originalEdge = edge
if (len(a) % 2 == 0):
edge = len(a)//2 - 1 # use middle 2 points
else:
edge = len(a)//2 # use central point
if (edge < 0):
edge = 0
print("stdInfo: WARNING edge value is too large for spw%d xant%d pol%d, reducing it from %d to %d." % (spw, xant, pol, originalEdge, edge))
info['std'] = np.std(a[edge:len(a)-edge])
chan = []
outlierValue = 0
outlierChannel = None
for i in range(edge,len(a)-edge):
if (np.abs(a[i]) > sigma*info['std']):
chan.append(i)
if (np.abs(a[i]) > np.abs(outlierValue)):
outlierValue = a[i]
outlierChannel = i
info['nchan'] = len(chan)
info['chan'] = chan
info['outlierValue'] = outlierValue/info['std']
info['outlierChannel'] = outlierChannel
return(info)
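# Illustrative sketch (not part of the original module): the core of stdInfo
# is flagging channels whose absolute value exceeds sigma times the standard
# deviation of the (edge-trimmed) spectrum. A hypothetical reduced version:
def _stdOutlierDemo(a, sigma=2):
    import numpy as np
    a = np.array(a)
    s = np.std(a)  # population std; np.std defaults to ddof=0
    return [i for i in range(len(a)) if np.abs(a[i]) > sigma * s]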
def madInfo(a, madsigma=3, edge=0, spw=-1, xant=-1, pol=-1):
"""
Computes the MAD of a list, then returns the value, plus the number and
list of channels that exceed madsigma*MAD, and the worst outlier.
"""
info = {}
if (edge >= len(a)//2): # protect against too large of an edge value (integer division for python 3)
originalEdge = edge
if (len(a) % 2 == 0):
edge = len(a)//2 - 1 # use middle 2 points
else:
edge = len(a)//2 # use central point
if (edge < 0):
edge = 0
print("madInfo: WARNING edge value is too large for spw%d xant%d pol%d, reducing it from %d to %d." % (spw, xant, pol, originalEdge, edge))
info['mad'] = mad(a[edge:len(a)-edge])
chan = []
outlierValue = 0
outlierChannel = None
for i in range(edge,len(a)-edge):
if (np.abs(a[i]) > madsigma*info['mad']):
chan.append(i)
if (np.abs(a[i]) > np.abs(outlierValue)):
outlierValue = a[i]
outlierChannel = i
info['nchan'] = len(chan)
info['chan'] = chan
info['outlierValue'] = outlierValue/info['mad']
info['outlierChannel'] = outlierChannel
return(info)
def platformingCheck(a, threshold=DEFAULT_PLATFORMING_THRESHOLD):
"""
Checks for values outside the range of +-threshold.
Meant to be passed an amplitude spectrum.
"""
info = {}
startChan = int(len(a)/32.) - 1 # int() so that these can be used as channel indices in python 3
endChan = int(len(a)*31/32.) + 1
# print "Checking channels %d-%d for platforming" % (startChan,endChan)
if (startChan <= 0 or endChan >= len(a)):
return(False)
middleChan = (startChan+endChan)//2
channelRange1 = np.arange(startChan,middleChan+1)
channelRange2 = np.arange(endChan,middleChan,-1)
platforming = False
awayFromEdge = False
for i in channelRange1:
if (np.abs(a[i]) > threshold):
if (awayFromEdge):
# print "a[%d]=%f" % (i,a[i])
platforming = True
return(platforming)
else:
awayFromEdge = True
awayFromEdge = False
for i in channelRange2:
if (np.abs(a[i]) > threshold):
if (awayFromEdge):
platforming = True
return(platforming)
else:
awayFromEdge = True
return(platforming)
def mad(a, c=0.6745, axis=0):
"""
Median Absolute Deviation along given axis of an array:
median(abs(a - median(a))) / c
c = 0.6745 is the constant to convert from MAD to std; it is used by
default
"""
a = np.array(a)
good = (a==a)
a = np.asarray(a, np.float64)
if a.ndim == 1:
d = np.median(a[good])
m = np.median(np.fabs(a[good] - d)) / c
# print "mad = %f" % (m)
else:
d = np.median(a[good], axis=axis)
# I don't want the array to change so I have to copy it?
if axis > 0:
aswp = np.swapaxes(a[good],0,axis)
else:
aswp = a[good]
m = np.median(np.fabs(aswp - d), axis=0) / c
return m
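# Illustrative sketch (not part of the original module): dividing the median
# absolute deviation by c=0.6745 makes it a consistent estimator of the
# standard deviation for Gaussian-distributed data. A quick 1-D check of the
# formula used above:
def _madDemo(a, c=0.6745):
    import numpy as np
    a = np.asarray(a, np.float64)
    return np.median(np.fabs(a - np.median(a))) / c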
def plotbandpassStats(caltable, chanavg=[], channeldiff=5, title='', usetask=False, resample=True, edge=0):
"""
Calls plotbandpass on a list of caltables with the following naming scheme:
caltable+'_smoothXch'
with the channeldiff option (to compute derivative statistics) and plots
the resulting MADs vs. the level of channel averaging.
chanavg: an integer list -- if not specified, then it will search for
the caltables and build it automatically
usetask: if True, use the casa task plotbandpass rather than analysisUtils version
resample: if True (default), then resample solution appropriately before computing statistics
"""
if (usetask):
from plotbandpass_cli import plotbandpass_cli as plotbandpass
caltableBase = os.path.basename(caltable)
caltableDirectory = os.path.dirname(caltable)
if (chanavg == []):
if (caltableDirectory == ''):
caltableDirectory = '.'
myfiles = os.listdir(caltableDirectory)
for f in myfiles:
if (f.find(caltable) == 0 and f.find('.plots') < 0):
tokens = f.split('_smooth')
if (len(tokens) < 2):
continue
chavg = int(tokens[1].split('ch')[0])
chanavg.append(chavg)
print("found %d files: %s" % (len(chanavg), str(chanavg)))
chanavg = np.sort(chanavg) # necessary for plotting connect-the-dots lines
print("using chanavg = %s" % (str(chanavg)))
stats = {}
mytb = au.createCasaTool(tbtool)
mytb.open(caltable+'_smooth%dch'%(chanavg[0]))
try:
vis = mytb.getkeyword('MSName')
vm = au.ValueMapping(vis)
except:
vis = caltable.split('.')[0]
vm = ''
mytb.close()
res = 1
for ch in chanavg:
if (resample):
res = ch
if (usetask):
print("Calling plotbandpass('%s_smooth%dch',yaxis='both',interactive=False,channeldiff='%d',resample='%d',edge='%d')" % (caltable, ch, channeldiff, res, edge))
stats[ch] = plotbandpass(caltable+'_smooth%dch'%(ch),yaxis='both',interactive=False,channeldiff=channeldiff,resample=res,edge=edge)
else:
print("Calling plotbandpass3('%s_smooth%dch',yaxis='both',interactive=False,channeldiff='%d',vm=vm,resample='%d',edge='%d')" % (caltable, ch, channeldiff, res, edge))
stats[ch] = plotbandpass3(caltable+'_smooth%dch'%(ch),yaxis='both',interactive=False,channeldiff=channeldiff,vm=vm,resample=res,edge=edge)
pb.clf()
# pb.hold(True) # not available in CASA6, but never needed
c=['b','c','g','r']
showdata = True
showmedian = True
plots = ['phase','amp']
for p in range(len(plots)):
mystats = []
pb.subplot(2,1,p+1)
variable = plots[p]
for ch in chanavg:
if (showdata):
for spw in stats[ch]['median'].keys():
if (spw != 'median'):
for pol in stats[ch]['median'][spw][0].keys():
pb.plot([ch],[stats[ch]['median'][spw][0][pol][variable]],'.',color=c[spw % len(c)],markeredgewidth=markeredgewidth) # modulo prevents IndexError for spw >= len(c)
else:
pb.plot([ch],[stats[ch]['median'][spw][variable]],'+',color='k',markeredgewidth=markeredgewidth)
mystats.append(stats[ch]['median'][spw][variable])
if (showmedian):
pb.plot(chanavg,mystats,'o-',color='k',markeredgewidth=markeredgewidth)
pb.xlabel('channel average')
pb.ylabel('MAD of %s derivative' % (variable))
spw = list(stats[ch]['median'].keys())[0] # get first spw; list() needed for python 3
if (spw == 'median'):
spw = list(stats[ch]['median'].keys())[1]
if (p==0):
if (vm == ''):
pb.title('%s, %s' % (vis.split('.')[0], title))
else:
pb.title('%s (spw %d = %.0f MHz, %d channels)' % (vis.split('.')[0], spw,
vm.spwInfo[spw]['bandwidth']*1e-6,
vm.spwInfo[spw]['numChannels']))
pb.draw()
if (resample):
pngname = caltableBase + '.resample.stats.png'
else:
pngname = caltableBase + '.stats.png'
pb.savefig(pngname)
print("Left plot in: %s" % (pngname))
def callFrequencyRangeForSpws(mymsmd, spwlist, vm, debug=False, caltable=None):
"""
Returns the min and max frequency of a list of spws.
"""
if (mymsmd != '' and casaVersion >= '4.1.0'):
return(frequencyRangeForSpws(mymsmd,spwlist))
else:
freqs = []
if (type(vm) != str):
if (debug):
print("vm.spwInfo.keys() = ", vm.spwInfo.keys())
for spw in spwlist:
freqs += list(vm.spwInfo[spw]["chanFreqs"])
else:
mytb = au.createCasaTool(tbtool)
try:
mytb.open(caltable+'/SPECTRAL_WINDOW')
chanfreq = []
if (len(spwlist) == 0): # CAS-8489b
originalSpws = range(len(mytb.getcol('MEAS_FREQ_REF')))
spwlist = originalSpws
if (debug):
print("getting frequencies of spws = ", spwlist)
for i in spwlist: # CAS-8489b
# The array shapes can vary.
chanfreq.append(mytb.getcell('CHAN_FREQ',i))
for cf in chanfreq:
freqs += list(cf)
mytb.close()
except Exception:
pass # the caltable may lack a readable SPECTRAL_WINDOW subtable
if (freqs == []):
return(0,0)
else:
return(np.min(freqs)*1e-9, np.max(freqs)*1e-9)
def frequencyRangeForSpws(mymsmd, spwlist):
"""
Returns the min and max frequency of a list of spws.
"""
allfreqs = []
for spw in spwlist:
allfreqs += list(mymsmd.chanfreqs(spw))
if (len(allfreqs) == 0):
print("len allfreqs = zero, spwlist = %s" % (str(spwlist)))
return(0,0)
return(np.min(allfreqs)*1e-9, np.max(allfreqs)*1e-9)
def buildSpwString(overlaySpws, overlayBasebands, spwsToPlot, ispw, originalSpw,
observatoryName, baseband, showBasebandNumber):
if (overlayBasebands):
spwString = ' all'
elif (overlaySpws and len(spwsToPlot)>1):
if (observatoryName.find('ALMA') >= 0 or observatoryName.find('ACA') >= 0):
# show a list of all spws
spwString = str(spwsToPlot).replace(' ','').strip('[').strip(']')
else:
# show the range of spw numbers
spwString = '%2d-%2d' % (np.min(spwsToPlot),np.max(spwsToPlot))
elif (ispw==originalSpw):
spwString = '%2d' % (ispw)
else:
spwString = '%2d (%d)' % (ispw,originalSpw)
if (overlayBasebands==False):
spwString=appendBasebandNumber(spwString, baseband, showBasebandNumber)
return(spwString)
def appendBasebandNumber(spwString, baseband, showBasebandNumber):
if (showBasebandNumber):
spwString += ', bb%d' % (baseband)
return(spwString)
def getSpwsForBaseband(bb, vis=None, mymsmd=None, nonchanavg=True, caltable=None):
needToClose = False
if (casadef.subversion_revision >= '25753' and vis is not None):
if (os.path.exists(vis)):
if (mymsmd is None or mymsmd == ''):
needToClose = True
mymsmd = au.createCasaTool(msmdtool)
mymsmd.open(vis)
s = mymsmd.spwsforbaseband(bb)
else:
s = mymsmd.spwsforbaseband(bb)
else:
s = getBasebandDict(vis=vis, caltable=caltable, mymsmd=mymsmd)[bb]
else:
s = getBasebandDict(vis=vis, caltable=caltable, mymsmd=mymsmd)[bb]
spws = []
for spw in s:
if (mymsmd.nchan(spw) > 1 or nonchanavg==False):
spws.append(spw)
if needToClose:
mymsmd.close()
return(spws)
def getBasebandDict(vis=None, spwlist=[], caltable=None, mymsmd=None):
"""
Builds a dictionary with baseband numbers as the keys and the
associated spws as the values. The optional parameter spwlist can
be used to restrict the contents of the dictionary.
Note: This is obsoleted by msmd.spwsforbaseband(-1)
"""
bbdict = {}
if (vis is not None):
if (os.path.exists(vis)):
bbs = au.getBasebandNumbers(vis)
elif (caltable is not None):
bbs = au.getBasebandNumbersFromCaltable(caltable)
else:
print("Must specify either vis or caltable")
return
elif (caltable is not None):
bbs = au.getBasebandNumbersFromCaltable(caltable)
else:
print("Must specify either vis or caltable")
return
if (type(bbs) == int): # old datasets will bomb on msmd.baseband()
return(bbdict)
needToClose = False
if (casaVersion >= '4.1.0' and vis is not None):
if (os.path.exists(vis)):
if mymsmd is None or mymsmd == '':
needToClose = True
mymsmd = au.createCasaTool(msmdtool)
mymsmd.open(vis)
if (spwlist == []):
nspws = mymsmd.nspw()
spwlist = range(nspws)
for spw in spwlist:
bbc_no = mymsmd.baseband(spw)
if (bbc_no not in bbdict.keys()):
bbdict[bbc_no] = [spw]
else:
bbdict[bbc_no].append(spw)
if needToClose:
mymsmd.close()
if (bbdict == {}):
# read from spw table
ubbs = np.unique(bbs)
for bb in ubbs:
bbdict[bb] = []
for i in range(len(bbs)):
bbdict[bbs[i]].append(i)
return(bbdict)
#def getScansForTimes(mymsmd, scantimes):
# myscans = []
# myscantimes = []
# scantimes = au.splitListIntoContiguousLists(scantimes)
# for t in scantimes:
# mean_t = np.mean(t)
# range_t = (1+np.max(t)-np.min(t))*0.5
# scans_t = mymsmd.scansfortimes(mean_t, range_t)
# if (len(scans_t) > 0):
# scan = scans_t[0]
# myscans.append(scan)
# myscantimes.append(t)
# return(myscans, myscantimes)
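A minimal, self-contained sketch (not part of the original module) of the min/max frequency logic used by `frequencyRangeForSpws` above: channel frequencies in Hz are pooled across the listed spws and returned as a `(min, max)` pair in GHz, with `(0, 0)` as the empty-input sentinel. The `chanfreqs_by_spw` dict stands in for the `msmd.chanfreqs(spw)` lookups.

```python
def frequency_range(chanfreqs_by_spw, spwlist):
    """Return (min, max) frequency in GHz over the listed spws.

    chanfreqs_by_spw: dict mapping spw id -> list of channel freqs in Hz.
    """
    allfreqs = []
    for spw in spwlist:
        allfreqs.extend(chanfreqs_by_spw[spw])
    if not allfreqs:
        return (0, 0)  # same sentinel the original returns for empty input
    # convert Hz -> GHz
    return (min(allfreqs) * 1e-9, max(allfreqs) * 1e-9)

# Example: two spws centered near 100 GHz
spws = {0: [100.0e9, 100.1e9], 1: [102.0e9, 102.5e9]}
print(frequency_range(spws, [0, 1]))  # approximately (100.0, 102.5)
```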
|
nicokurtovicREPO_NAMESIMIOPATH_START.@SIMIO_extracted@SIMIO-main@codes@analysis_scripts@plotbandpass3.py@.PATH_END.py
|
{
"filename": "version.py",
"repo_name": "CosmicFish/CosmicFish",
"repo_path": "CosmicFish_extracted/CosmicFish-master/bundled/doxygen/src/version.py",
"type": "Python"
}
|
#
# script to read the version information from `../configure`
# relevant lines are starting with:
# `doxygen_version_major`
# `doxygen_version_minor`
# `doxygen_version_revision`
# `doxygen_version_mmn`
# the collected information is written to: `../VERSION` and `../src/version.cpp`
#
import sys
import os
#
# set 'default' values
#
major = 0
minor = 0
revision = 0
mnt = 'NO'
#
# open input file
# read file and get relevant information
# close
#
f = open('../configure', 'r')
for line in f:
# check if line can match (saves 3 comparisons)
if (line.startswith('doxygen_version')):
if (line.startswith('doxygen_version_major')):
major = line.replace('doxygen_version_major=','')
elif (line.startswith('doxygen_version_minor')):
minor = line.replace('doxygen_version_minor=','')
elif (line.startswith('doxygen_version_revision')):
revision = line.replace('doxygen_version_revision=','')
elif (line.startswith('doxygen_version_mmn')):
mnt = line.replace('doxygen_version_mmn=','')
f.close()
# strip superfluous '\n'
major = major.replace('\n','')
minor = minor.replace('\n','')
revision = revision.replace('\n','')
mnt = mnt.replace('\n','')
#
# open output files
# write relevant information
# close files
#
f1 = open('../VERSION','w')
f2 = open(os.path.join(sys.argv[1],'version.cpp'),'w')
if (mnt == 'NO'):
f1.write(major + '.' + minor + '.' + revision)
f2.write('char versionString[]="' + major + '.' + minor + '.' + revision + '";')
else:
f1.write(major + '.' + minor + '.' + revision + '-' + mnt)
f2.write('char versionString[]="' + major + '.' + minor + '.' + revision + '-' + mnt + '";')
f1.close()
f2.close()
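The parsing step above can be condensed into a single function; this is a sketch with hypothetical names (`parse_version`, `info`), not the script's actual interface. It collects the `doxygen_version_*` assignments from configure-style lines and builds the same version string the script writes to `../VERSION`.

```python
def parse_version(lines):
    # defaults mirror the script's 'default' values
    info = {'major': '0', 'minor': '0', 'revision': '0', 'mmn': 'NO'}
    for line in lines:
        if not line.startswith('doxygen_version_'):
            continue  # saves comparisons, as in the original
        key, _, value = line.partition('=')
        info[key.replace('doxygen_version_', '')] = value.strip()
    version = '{major}.{minor}.{revision}'.format(**info)
    if info['mmn'] != 'NO':
        version += '-' + info['mmn']  # maintenance suffix, if any
    return version

sample = [
    'doxygen_version_major=1',
    'doxygen_version_minor=8',
    'doxygen_version_revision=14',
    'doxygen_version_mmn=NO',
]
print(parse_version(sample))  # 1.8.14
```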
|
CosmicFishREPO_NAMECosmicFishPATH_START.@CosmicFish_extracted@CosmicFish-master@bundled@doxygen@src@version.py@.PATH_END.py
|
{
"filename": "gigachat.py",
"repo_name": "langchain-ai/langchain",
"repo_path": "langchain_extracted/langchain-master/libs/community/langchain_community/chat_models/gigachat.py",
"type": "Python"
}
|
from __future__ import annotations
import logging
from typing import (
TYPE_CHECKING,
Any,
AsyncIterator,
Dict,
Iterator,
List,
Mapping,
Optional,
Type,
)
from langchain_core._api.deprecation import deprecated
from langchain_core.callbacks import (
AsyncCallbackManagerForLLMRun,
CallbackManagerForLLMRun,
)
from langchain_core.language_models.chat_models import (
BaseChatModel,
agenerate_from_stream,
generate_from_stream,
)
from langchain_core.messages import (
AIMessage,
AIMessageChunk,
BaseMessage,
BaseMessageChunk,
ChatMessage,
ChatMessageChunk,
FunctionMessage,
FunctionMessageChunk,
HumanMessage,
HumanMessageChunk,
SystemMessage,
SystemMessageChunk,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_community.llms.gigachat import _BaseGigaChat
if TYPE_CHECKING:
import gigachat.models as gm
logger = logging.getLogger(__name__)
def _convert_dict_to_message(message: gm.Messages) -> BaseMessage:
from gigachat.models import FunctionCall, MessagesRole
additional_kwargs: Dict = {}
if function_call := message.function_call:
if isinstance(function_call, FunctionCall):
additional_kwargs["function_call"] = dict(function_call)
elif isinstance(function_call, dict):
additional_kwargs["function_call"] = function_call
if message.role == MessagesRole.SYSTEM:
return SystemMessage(content=message.content)
elif message.role == MessagesRole.USER:
return HumanMessage(content=message.content)
elif message.role == MessagesRole.ASSISTANT:
return AIMessage(content=message.content, additional_kwargs=additional_kwargs)
else:
raise TypeError(f"Got unknown role {message.role} {message}")
def _convert_message_to_dict(message: gm.BaseMessage) -> gm.Messages:
from gigachat.models import Messages, MessagesRole
if isinstance(message, SystemMessage):
return Messages(role=MessagesRole.SYSTEM, content=message.content)
elif isinstance(message, HumanMessage):
return Messages(role=MessagesRole.USER, content=message.content)
elif isinstance(message, AIMessage):
return Messages(
role=MessagesRole.ASSISTANT,
content=message.content,
function_call=message.additional_kwargs.get("function_call", None),
)
elif isinstance(message, ChatMessage):
return Messages(role=MessagesRole(message.role), content=message.content)
elif isinstance(message, FunctionMessage):
return Messages(role=MessagesRole.FUNCTION, content=message.content)
else:
raise TypeError(f"Got unknown type {message}")
def _convert_delta_to_message_chunk(
_dict: Mapping[str, Any], default_class: Type[BaseMessageChunk]
) -> BaseMessageChunk:
role = _dict.get("role")
content = _dict.get("content") or ""
additional_kwargs: Dict = {}
if _dict.get("function_call"):
function_call = dict(_dict["function_call"])
if "name" in function_call and function_call["name"] is None:
function_call["name"] = ""
additional_kwargs["function_call"] = function_call
if role == "user" or default_class == HumanMessageChunk:
return HumanMessageChunk(content=content)
elif role == "assistant" or default_class == AIMessageChunk:
return AIMessageChunk(content=content, additional_kwargs=additional_kwargs)
elif role == "system" or default_class == SystemMessageChunk:
return SystemMessageChunk(content=content)
elif role == "function" or default_class == FunctionMessageChunk:
return FunctionMessageChunk(content=content, name=_dict["name"])
elif role or default_class == ChatMessageChunk:
return ChatMessageChunk(content=content, role=role) # type: ignore[arg-type]
else:
return default_class(content=content) # type: ignore[call-arg]
@deprecated(
since="0.3.5",
removal="1.0",
alternative_import="langchain_gigachat.GigaChat",
)
class GigaChat(_BaseGigaChat, BaseChatModel):
"""`GigaChat` large language models API.
To use, you should pass login and password to access GigaChat API or use token.
Example:
.. code-block:: python
from langchain_community.chat_models import GigaChat
giga = GigaChat(credentials=..., scope=..., verify_ssl_certs=False)
"""
def _build_payload(self, messages: List[BaseMessage], **kwargs: Any) -> gm.Chat:
from gigachat.models import Chat
payload = Chat(
messages=[_convert_message_to_dict(m) for m in messages],
)
payload.functions = kwargs.get("functions", None)
payload.model = self.model
if self.profanity_check is not None:
payload.profanity_check = self.profanity_check
if self.temperature is not None:
payload.temperature = self.temperature
if self.top_p is not None:
payload.top_p = self.top_p
if self.max_tokens is not None:
payload.max_tokens = self.max_tokens
if self.repetition_penalty is not None:
payload.repetition_penalty = self.repetition_penalty
if self.update_interval is not None:
payload.update_interval = self.update_interval
if self.verbose:
logger.warning("Giga request: %s", payload.dict())
return payload
def _create_chat_result(self, response: Any) -> ChatResult:
generations = []
for res in response.choices:
message = _convert_dict_to_message(res.message)
finish_reason = res.finish_reason
gen = ChatGeneration(
message=message,
generation_info={"finish_reason": finish_reason},
)
generations.append(gen)
if finish_reason != "stop":
logger.warning(
"Giga generation stopped with reason: %s",
finish_reason,
)
if self.verbose:
logger.warning("Giga response: %s", message.content)
llm_output = {"token_usage": response.usage, "model_name": response.model}
return ChatResult(generations=generations, llm_output=llm_output)
def _generate(
self,
messages: List[BaseMessage],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
stream: Optional[bool] = None,
**kwargs: Any,
) -> ChatResult:
should_stream = stream if stream is not None else self.streaming
if should_stream:
stream_iter = self._stream(
messages, stop=stop, run_manager=run_manager, **kwargs
)
return generate_from_stream(stream_iter)
payload = self._build_payload(messages, **kwargs)
response = self._client.chat(payload)
return self._create_chat_result(response)
async def _agenerate(
self,
messages: List[BaseMessage],
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
stream: Optional[bool] = None,
**kwargs: Any,
) -> ChatResult:
should_stream = stream if stream is not None else self.streaming
if should_stream:
stream_iter = self._astream(
messages, stop=stop, run_manager=run_manager, **kwargs
)
return await agenerate_from_stream(stream_iter)
payload = self._build_payload(messages, **kwargs)
response = await self._client.achat(payload)
return self._create_chat_result(response)
def _stream(
self,
messages: List[BaseMessage],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> Iterator[ChatGenerationChunk]:
payload = self._build_payload(messages, **kwargs)
for chunk in self._client.stream(payload):
if not isinstance(chunk, dict):
chunk = chunk.dict()
if len(chunk["choices"]) == 0:
continue
choice = chunk["choices"][0]
content = choice.get("delta", {}).get("content", {})
chunk = _convert_delta_to_message_chunk(choice["delta"], AIMessageChunk)
finish_reason = choice.get("finish_reason")
generation_info = (
dict(finish_reason=finish_reason) if finish_reason is not None else None
)
if run_manager:
run_manager.on_llm_new_token(content)
yield ChatGenerationChunk(message=chunk, generation_info=generation_info)
async def _astream(
self,
messages: List[BaseMessage],
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> AsyncIterator[ChatGenerationChunk]:
payload = self._build_payload(messages, **kwargs)
async for chunk in self._client.astream(payload):
if not isinstance(chunk, dict):
chunk = chunk.dict()
if len(chunk["choices"]) == 0:
continue
choice = chunk["choices"][0]
content = choice.get("delta", {}).get("content", {})
chunk = _convert_delta_to_message_chunk(choice["delta"], AIMessageChunk)
finish_reason = choice.get("finish_reason")
generation_info = (
dict(finish_reason=finish_reason) if finish_reason is not None else None
)
if run_manager:
await run_manager.on_llm_new_token(content)
yield ChatGenerationChunk(message=chunk, generation_info=generation_info)
|
langchain-aiREPO_NAMElangchainPATH_START.@langchain_extracted@langchain-master@libs@community@langchain_community@chat_models@gigachat.py@.PATH_END.py
|
{
"filename": "plot_stats.py",
"repo_name": "spacetelescope/hst_cosmic_rays",
"repo_path": "hst_cosmic_rays_extracted/hst_cosmic_rays-master/pipeline/utils/plot_stats.py",
"type": "Python"
}
|
#!/usr/bin/env python
from collections import defaultdict
from itertools import chain
from astropy.time import Time
from astropy.visualization import ImageNormalize, LinearStretch, \
LogStretch, ZScaleInterval, SqrtStretch
import dask.array as da
import costools
from collections.abc import Iterable  # moved out of collections in Python 3.3, removed in 3.10
import h5py
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rc,rcParams
rc('font', weight='bold', family='sans-serif')
from matplotlib.backends.backend_pdf import PdfPages
from mpl_toolkits.basemap import Basemap
import pandas as pd
import sunpy
import sunpy.timeseries
import sunpy.data.sample
plt.style.use('ggplot')
class PlotData(object):
def __init__(self, fname, instr, subgrp, flist):
self.fname = fname
self.instr = instr.upper()
self.flist = flist
self.data = None
self.data_no_saa = None
self.data_df = None
self.subgrp = subgrp
self.num_images = 0
self.map = None
self.ax = None
self.fig = None
# Detector sizes in cm^2
self.detector_size ={'ACS_WFC': 37.748,
'ACS_HRC': 4.624 ,
'WFC3_UVIS': 37.804,
'WFC3_IR': 3.331,
'STIS_CCD': 4.624,
'WFPC2':5.76}
self.detector_readtime = {'ACS_WFC': 1,
'ACS_HRC': 30,
'WFC3_UVIS': 1.,
'WFC3_IR' : None,
'STIS_CCD': 29.0/2.,
'WFPC2': 60./2.}
def read_rate(self):
data_out = defaultdict(list)
for f in self.flist:
print('Analyzing {}'.format(f))
fobj = h5py.File(f, mode='r')
sgrp = fobj[self.instr.upper()+'/'+self.subgrp]
for key in sgrp.keys():
data_out['obs_name'].append(key)
dset = sgrp[key][()]  # h5py's .value accessor is deprecated
attrs = sgrp[key].attrs
exptime = attrs['exptime']
factor = (exptime + self.detector_readtime[self.instr.upper()]) \
/ exptime
if isinstance(dset, Iterable):
dset = np.nanmedian(dset)
# Multiply by correction factor to account for various instrument
# readouts
data_out[self.subgrp].append(factor * dset /
self.detector_size[self.instr])
for at_key in attrs.keys():
val = attrs[at_key]
if at_key == 'date':
val = Time(val, format='iso')
elif at_key == 'latitude' or at_key == 'longitude' \
or at_key == 'altitude':
try:
data_out['start_{}'.format(at_key)].append(val[0])
except IndexError as e:
data_out['start_{}'.format(at_key)].append(val)
try:
data_out['end_{}'.format(at_key)].append(val[-1])
except IndexError as e:
data_out['end_{}'.format(at_key)].append(val)
continue
data_out['{}'.format(at_key)].append(val)
data_out['mjd'] = [val.mjd for val in data_out['date']]
date_index = pd.DatetimeIndex([val.iso for val in data_out['date']])
self.data_df = pd.DataFrame(data_out, index = date_index)
self.data_df.sort_index(inplace=True)
def perform_SAA_cut(self):
saa = [list(t) for t in zip(*costools.saamodel.saaModel(5))]
saa[0].append(saa[0][0])
saa[1].append(saa[1][0])
saa = np.asarray(saa)
saa_eastern = (39.0, -30.0) # lon/lat
saa_western = (267.0, -20.0)
saa_northern = (312.0, 1.0)
mask = (self.data_df['longitude'] > saa_eastern[0]) &\
(self.data_df['longitude'] < saa_western[0]) &\
(self.data_df['latitude'] > saa_northern[1])
cut = self.data_df[mask]
self.data_no_saa = cut
return mask
def read_data(self):
data_out = defaultdict(list)
for f in self.flist:
fobj = h5py.File(f, mode='r')
sgrp = fobj[self.instr.upper()+'/'+self.subgrp]
for key in sgrp.keys():
dset = sgrp[key][:][1]
attrs = sgrp[key].attrs
data_out['var_average'].append(np.nanmedian(dset))
for at_key in attrs.keys():
val = attrs[at_key]
if at_key == 'date':
val = Time(val, format='iso')
data_out['{}'.format(at_key)].append(val)
date_index = pd.DatetimeIndex([val.iso for val in data_out['date']])
self.data_df = pd.DataFrame(data_out, index = date_index)
def plot_data(self, ax=None, bins=30, logx=True, logy=True,
range=[0, 3], fill_value=-1,c='r'):
""" Plot a histogram of the data; the defaults are tuned for CR sizes.
Parameters
----------
ax : matplotlib Axes to draw on (a new figure is created if None)
bins : number of histogram bins
logx, logy : if True, log-scale the data (x) or the counts axis (y)
range : histogram range (applied after any log scaling)
fill_value : value substituted for invalid (NaN/inf) entries
c : line color
Returns
-------
"""
# Read in the data if it doesn't exist already
tmp = []
if '_' in self.instr:
label = self.instr.replace('_','/')
else:
label = self.instr
if self.data is None:
for f in self.flist:
print('Analyzing file {}'.format(f))
with h5py.File(f, mode='r') as fobj:
subgrp_ = fobj[self.instr.upper()+'/'+self.subgrp]
keys = list(subgrp_.keys())
self.num_images += len(keys)
for name in keys:
if logx:
dset = np.log10(subgrp_[name][:][1])
else:
dset = subgrp_[name][:][1]
tmp.append(da.from_array(dset,chunks=(20000)))
x = da.concatenate(tmp, axis=0)
print(x.shape)
self.data = da.ma.fix_invalid(x, fill_value=fill_value)
h, edges = da.histogram(self.data, bins=bins,
range=range)
hist = h.compute()
# Create an axis if one doesn't exist yet
lw = 1.75
if ax is None:
self.fig, self.ax = plt.subplots(figsize=(7,5),
nrows=1,
ncols=1)
# Normalize by the highest count
# self.data[subgrp] /= np.nanmax(self.data[subgrp])
if logy:
# self.ax.step(edges[:-1], h.compute(), color='r')
self.ax.semilogy(edges[:-1], hist,
label=label,
drawstyle='steps-mid', color=c, lw=lw)
else:
self.ax.step(edges[:-1], hist,
label=label,
where='mid', color=c,lw=lw)
else:
self.ax = ax
if logy:
# self.ax.step(edges[:-1], h.compute(), color='r')
self.ax.semilogy(edges[:-1], hist,
label=label,
drawstyle='steps-mid', color=c,lw=lw)
else:
self.ax.step(edges[:-1], hist,
label=label,
where='mid', color=c,lw=lw)
self.ax.tick_params(axis='both', which='major',
labelsize=10, width=2)
self.ax.legend(loc='best')
def plot_rate_vs_time(self, ax= None, ptype='rolling',
window='20D', min_periods=20, i=0, saa_exclude=False):
if self.data_df is None:
self.read_rate()
if saa_exclude:
flags = self.data_no_saa.exptime.gt(200)
df1 = self.data_no_saa[['incident_cr_rate','mjd']][flags]
else:
flags = self.data_df.exptime.gt(200)
df1 = self.data_df[['incident_cr_rate','mjd']][flags]
if ptype == 'rolling':
averaged = df1.rolling(window=window, min_periods=min_periods).median()
elif ptype == 'resample':
averaged = df1.resample(rule=window).median()
else:
averaged = df1
avg_no_nan = averaged.dropna()
if ax is None:
self.fig, self.ax = plt.subplots(figsize=(7,5),
nrows=1,
ncols=1)
else:
self.ax = ax
# Normalize the CR rate by the detector size
CB_color_cycle = ['#377eb8', '#ff7f00', '#4daf4a',
'#f781bf', '#a65628', '#984ea3',
'#999999', '#e41a1c', '#dede00']
self.ax.scatter([Time(val, format='mjd').to_datetime()
for val in avg_no_nan['mjd']],
avg_no_nan['incident_cr_rate'],
label=self.instr.replace('_','/'),
s=2,
color=CB_color_cycle[i])
# self.ax.set_xlabel('Date')
self.ax.set_ylabel('Cosmic Ray Rate [CR/s/cm^2]')
self.ax.set_title('Smoothed Cosmic Ray Rate')
# plt.savefig('cr_incidence_rolling_average.png',)
# plt.show()
def plot_solar_cycle(self, variable=None, ax = None, smoothed=False):
noaa = sunpy.timeseries.TimeSeries(sunpy.data.sample.NOAAINDICES_TIMESERIES,
source='NOAAIndices')
print(noaa.columns)
if variable is None:
noaa.peek(type='sunspot RI', ax=ax)
else:
noaa.peek(type=variable, ax=ax)
return noaa
def draw_map(self, map=None, scale=0.2):
if map is None:
pass
else:
self.map=map
# Set the background map up
self.map.shadedrelief(scale=scale)
# Draw the meridians
# lats and longs are returned as a dictionary
lats = self.map.drawparallels(np.linspace(-90, 90, 13),
labels=[True, True, False, False])
lons = self.map.drawmeridians(np.linspace(-180, 180, 13),
labels=[False, False, False, True])
# keys contain the plt.Line2D instances
lat_lines = chain(*(tup[1][0] for tup in lats.items()))
lon_lines = chain(*(tup[1][0] for tup in lons.items()))
all_lines = chain(lat_lines, lon_lines)
# cycle through these lines and set the desired style
for line in all_lines:
line.set(linestyle='-', alpha=0.3, color='w')
def plot_hst_loc(self, i = 5, df = None, key='start', save=False):
self.fig = plt.figure(figsize=(9, 7))
# Get the model for the SAA
self.map = Basemap(projection='cyl')
self.draw_map()
# Generate an SAA contour
saa = [list(t) for t in zip(*costools.saamodel.saaModel(i))]
# Ensure the polygon representing the SAA is a closed curve by adding
# the starting points to the end of the list of lat/lon coords
saa[0].append(saa[0][0])
saa[1].append(saa[1][0])
self.map.plot(saa[1], saa[0],
c='r',
latlon=True,
label='SAA contour {}'.format(i))
if df is None:
lat, lon, rate = self.data_df['{}_latitude'.format(key)], \
self.data_df['{}_longitude'.format(key)], \
self.data_df['incident_cr_rate']
else:
lat, lon, rate = df['{}_latitude'.format(key)], \
df['{}_longitude'.format(key)], \
df['incident_cr_rate']
norm = ImageNormalize(rate,
stretch=LinearStretch(),
vmin=0.75, vmax=2)
scat = self.map.scatter(lon, lat,
marker='o',
s=10,
latlon=True,
c=rate, alpha=0.5,
norm = norm,
cmap='Reds')
cax = self.fig.add_axes([0.1, 0.1, 0.8, 0.05])
cbar = self.fig.colorbar(scat, cax=cax, orientation='horizontal')
cbar.set_label('cosmic rays/s/cm^2', fontweight='bold')
cbar.ax.set_xticklabels(cbar.ax.get_xticklabels(), rotation=45)
if save:
self.fig.savefig('lat_lon_{}.png'.format(key),
format='png',
dpi=300)
plt.show()
def get_full_path(self, obsnames):
data_out = defaultdict(list)
for f in self.flist:
print('Analyzing {}'.format(f))
fobj = h5py.File(f, mode='r')
sgrp = fobj[self.instr.upper()+'/'+self.subgrp]
for key in sgrp.keys():
if key not in obsnames:
continue
dset = sgrp[key][()]  # h5py's .value accessor is deprecated
attrs = sgrp[key].attrs
num_intervals = len(attrs['latitude'])
data_out['obs_name'].append([key]*num_intervals)
data_out['exptime'].append([attrs['exptime']]*num_intervals)
exptime = attrs['exptime']
factor = (exptime + self.detector_readtime[self.instr.upper()]) \
/ exptime
if isinstance(dset, Iterable):
dset = np.nanmedian(dset)
# Multiply by correction factor to account for various instrument
# readouts
data_out[self.subgrp].append([factor * dset /
self.detector_size[self.instr]] *\
num_intervals)
data_out['latitude'].append(attrs['latitude'])
data_out['longitude'].append(attrs['longitude'])
data_out['altitude'].append(attrs['altitude'])
for key in data_out.keys():
data_out[key] = [val for dset in data_out[key] for val in dset]
return data_out
def plot_full_path(self, df, i):
self.fig = plt.figure(figsize=(9, 7))
# Get the model for the SAA
self.map = Basemap(projection='cyl')
self.draw_map()
# Generate an SAA contour
saa = [list(t) for t in zip(*costools.saamodel.saaModel(i))]
# Ensure the polygon representing the SAA is a closed curve by adding
# the starting points to the end of the list of lat/lon coords
saa[0].append(saa[0][0])
saa[1].append(saa[1][0])
self.map.plot(saa[1], saa[0],
c='r',
latlon=True,
label='SAA contour {}'.format(i))
self.map.scatter(df['longitude'],df['latitude'], latlon=True)
plt.show()
if __name__ == '__main__':
pass # no main() is defined in this module; it is intended to be imported
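The boundary test in `perform_SAA_cut` above can be sketched without pandas; this stand-alone version (hypothetical names) applies the same three comparisons, with the same eastern/western/northern boundary values, to keep only points outside the rough South Atlantic Anomaly box.

```python
# Boundary values copied from perform_SAA_cut
SAA_EAST_LON = 39.0
SAA_WEST_LON = 267.0
SAA_NORTH_LAT = 1.0

def outside_saa(lon, lat):
    # a point survives the cut only if it clears all three boundaries
    return SAA_EAST_LON < lon < SAA_WEST_LON and lat > SAA_NORTH_LAT

points = [(100.0, 10.0), (300.0, -20.0), (50.0, -15.0)]
kept = [p for p in points if outside_saa(*p)]
print(kept)  # only the first point survives
```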
|
spacetelescopeREPO_NAMEhst_cosmic_raysPATH_START.@hst_cosmic_rays_extracted@hst_cosmic_rays-master@pipeline@utils@plot_stats.py@.PATH_END.py
|
{
"filename": "set_scale_and_bias__desc.md",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/catboost/docs/en/_includes/work_src/reusage-python/set_scale_and_bias__desc.md",
"type": "Markdown"
}
|
Set the scale and bias.
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@catboost@docs@en@_includes@work_src@reusage-python@set_scale_and_bias__desc.md@.PATH_END.py
|
{
"filename": "local_fields.py",
"repo_name": "yt-project/yt",
"repo_path": "yt_extracted/yt-main/yt/fields/local_fields.py",
"type": "Python"
}
|
from collections.abc import Callable
from functools import partial
from typing import Any, TypeVar
from yt.funcs import is_sequence
from yt.utilities.logger import ytLogger as mylog
from .field_info_container import FieldInfoContainer
from .field_plugin_registry import register_field_plugin
# workaround mypy not being comfortable around decorator preserving signatures
# adapted from
# https://github.com/python/mypy/issues/1551#issuecomment-253978622
TFun = TypeVar("TFun", bound=Callable[..., Any])
class LocalFieldInfoContainer(FieldInfoContainer):
def add_field(
self, name, function, sampling_type, *, force_override=False, **kwargs
):
from yt.fields.field_functions import validate_field_function
validate_field_function(function)
if isinstance(name, str) or not is_sequence(name):
# the base method only accepts proper tuple field keys
# and is only used internally, while this method is exposed to users
# and is documented as usable with single strings as name
if sampling_type == "particle":
ftype = "all"
else:
ftype = "gas"
name = (ftype, name)
# Handle the case where the field has already been added.
if not force_override and name in self:
mylog.warning(
"Field %s already exists. To override use `force_override=True`.",
name,
)
return super().add_field(
name, function, sampling_type, force_override=force_override, **kwargs
)
# Empty FieldInfoContainer
local_fields = LocalFieldInfoContainer(None, [], None)
# we define two handles, essentially pointing to the same function but documented differently
# yt.add_field() is meant to be used directly, while yt.derived_field is documented
# as a decorator.
add_field = local_fields.add_field
class derived_field:
# implement a decorator accepting keyword arguments to be passed down to add_field
def __init__(self, **kwargs) -> None:
self._kwargs = kwargs
def __call__(self, f: Callable) -> Callable:
partial(local_fields.add_field, function=f)(**self._kwargs)
return f
@register_field_plugin
def setup_local_fields(registry, ftype="gas", slice_info=None):
# This is easy. We just update with the contents of the local_fields field
# info container, and since they are not mutable in any real way, we are
# fine.
# Note that we actually don't care about the ftype here.
for f in local_fields:
registry._show_field_errors.append(f)
registry.update(local_fields)
|
yt-projectREPO_NAMEytPATH_START.@yt_extracted@yt-main@yt@fields@local_fields.py@.PATH_END.py
|
{
"filename": "shadow.py",
"repo_name": "einsteinpy/einsteinpy",
"repo_path": "einsteinpy_extracted/einsteinpy-main/src/einsteinpy/plotting/rays/shadow.py",
"type": "Python"
}
|
import numpy as np
from matplotlib import pyplot as plt
class ShadowPlotter:
"""
Class for plotting and visualising shadows
"""
def __init__(self, shadow, is_line_plot=True):
"""
Constructor for plotter.
Parameters
----------
shadow : ~einsteinpy.rays.Shadow
The shadow object
is_line_plot : bool, optional
If the plot is a line plot or a contour plot. Defaults to True.
"""
self.shadow = shadow
self.is_intensity_plot = is_line_plot
def plot(self):
"""
Plots the shadow.
"""
if self.is_intensity_plot:
plt.plot(self.shadow.fb1, self.shadow.intensity, "r")
plt.plot(self.shadow.fb2, self.shadow.intensity, "r")
plt.xlabel("Impact Parameter (b)")
plt.ylabel("Intensity (Emissivity)")
plt.title("Intensity Plot")
else:
theta1 = np.linspace(0, 2 * np.pi, len(self.shadow.fb1))
self.r1, self.theta1 = np.meshgrid(self.shadow.fb1, theta1)
self.values1, self.values2 = np.meshgrid(
self.shadow.intensity, self.shadow.intensity
)
def show(self):
"""
Shows the plot.
"""
if self.is_intensity_plot:
plt.show()
else:
xx = self.r1 * np.cos(self.theta1)
yy = self.r1 * np.sin(self.theta1)
plt.figure(figsize=(7, 7))
plt.pcolormesh(xx, yy, self.values1, cmap=plt.cm.afmhot, shading="gouraud")
plt.title("Schwarzschild Black Hole")
plt.gca().set_aspect("equal", adjustable="box")
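The `show` method above maps each polar grid point to Cartesian coordinates before plotting; the conversion is just (x, y) = (r cos θ, r sin θ). A stdlib-only sketch of that single step:

```python
import math

def polar_to_cartesian(r, theta):
    # same mapping as xx = r1*cos(theta1), yy = r1*sin(theta1) in show()
    return (r * math.cos(theta), r * math.sin(theta))

x, y = polar_to_cartesian(2.0, math.pi / 2)
print(round(x, 10), round(y, 10))  # x ~ 0, y ~ 2
```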
|
einsteinpyREPO_NAMEeinsteinpyPATH_START.@einsteinpy_extracted@einsteinpy-main@src@einsteinpy@plotting@rays@shadow.py@.PATH_END.py
|
{
"filename": "_textsrc.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/validators/choropleth/_textsrc.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class TextsrcValidator(_plotly_utils.basevalidators.SrcValidator):
def __init__(self, plotly_name="textsrc", parent_name="choropleth", **kwargs):
super(TextsrcValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "none"),
role=kwargs.pop("role", "info"),
**kwargs
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py2@plotly@validators@choropleth@_textsrc.py@.PATH_END.py
|
{
"filename": "_closesrc.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/candlestick/_closesrc.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class ClosesrcValidator(_plotly_utils.basevalidators.SrcValidator):
def __init__(self, plotly_name="closesrc", parent_name="candlestick", **kwargs):
super(ClosesrcValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "none"),
**kwargs,
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@candlestick@_closesrc.py@.PATH_END.py
|
{
"filename": "base.py",
"repo_name": "lwa-project/lsl",
"repo_path": "lsl_extracted/lsl-main/lsl/reader/base.py",
"type": "Python"
}
|
"""
Python module that contains the base FrameHeader, FramePayload, and Frame
classes for all of the LSL readers.
.. versionadded:: 2.0.0
"""
import copy
import pytz
import numpy as np
from functools import total_ordering
from textwrap import fill as tw_fill
from datetime import datetime, timedelta
from astropy.time import Time as AstroTime, TimeDelta as AstroDelta
from lsl.common import dp as dp_common
from lsl.astro import MJD_OFFSET
__version__ = '0.4'
__all__ = ['FrameHeaderBase', 'FramePayloadBase', 'FrameBase', 'FrameTimestamp', 'CI8']
def _build_repr(name, attrs=[]):
name = '.'.join(name.split('.')[-2:])
output = "<%s" % name
first = True
for key,value in attrs:
output += "%s %s=%s" % (('' if first else ','), key, value)
first = False
output += ">"
return output
class FrameHeaderBase(object):
"""
Base class for all lsl.reader FrameHeader-type objects.
"""
_header_attrs = []
def __repr__(self):
n = self.__class__.__module__+'.'+self.__class__.__name__
a = [(attr,getattr(self, attr, None)) for attr in self._header_attrs]
return tw_fill(_build_repr(n,a), subsequent_indent=' ')
class FramePayloadBase(object):
"""
Base class for all lsl.reader FramePayload-type objects.
"""
_payload_attrs = []
def __init__(self, data):
self._data = data
def __repr__(self):
n = self.__class__.__module__+'.'+self.__class__.__name__
a = [(attr,getattr(self, attr, None)) for attr in self._payload_attrs]
if self._data is not None:
a.append(('dtype',str(self._data.dtype)))
a.append(('shape',str(self._data.shape)))
return tw_fill(_build_repr(n,a), subsequent_indent=' ')
@property
def data(self):
return self._data
@total_ordering
class FrameBase(object):
"""
Base class for all lsl.reader Frame-type objects.
"""
_header_class = FrameHeaderBase
_payload_class = FramePayloadBase
def __init__(self, header=None, payload=None, valid=True):
if header is None:
self.header = self._header_class()
else:
if not isinstance(header, self._header_class):
raise TypeError(f"Expected header of type '{self._header_class.__name__}' but found '{type(header).__name__}'")
self.header = header
if payload is None:
self.payload = self._payload_class()
else:
if not isinstance(payload, self._payload_class):
raise TypeError(f"Expected payload of type '{self._payload_class.__name__}' but found '{type(payload).__name__}'")
self.payload = payload
self.valid = valid
def __repr__(self):
n = self.__class__.__module__+'.'+self.__class__.__name__
a = [('header',repr(self.header).replace(',\n ', ', ')),
('payload',repr(self.payload).replace(',\n ', ', ')),
('valid', self.valid)]
return tw_fill(_build_repr(n,a), subsequent_indent=' ')
def __add__(self, y):
"""
Add the data sections of two frames together or add a number
to every element in the data section.
.. note::
In the case where a frame is given the weights are
ignored.
"""
if not isinstance(y, (FrameBase, int, float, complex, np.ndarray)):
raise TypeError(f"Unsupported type: '{type(y).__name__}'")
newFrame = copy.deepcopy(self)
newFrame += y
return newFrame
def __iadd__(self, y):
"""
In-place add the data sections of two frames together or add
a number to every element in the data section.
.. note::
In the case where a frame is given the weights are
ignored.
"""
if not isinstance(y, (FrameBase, int, float, complex, np.ndarray)):
raise TypeError(f"Unsupported type: '{type(y).__name__}'")
try:
self.payload._data += y.payload._data
except AttributeError:
self.payload._data += self.payload._data.dtype.type(y)
return self
def __sub__(self, y):
"""
Subtract the data sections of two frames or subtract a number
from every element in the data section.
.. note::
In the case where a frame is given the weights are
ignored.
"""
if not isinstance(y, (FrameBase, int, float, complex, np.ndarray)):
raise TypeError(f"Unsupported type: '{type(y).__name__}'")
newFrame = copy.deepcopy(self)
newFrame -= y
return newFrame
def __isub__(self, y):
"""
In-place subtract the data sections of two frames or subtract
a number from every element in the data section.
.. note::
In the case where a frame is given the weights are
ignored.
"""
if not isinstance(y, (FrameBase, int, float, complex, np.ndarray)):
raise TypeError(f"Unsupported type: '{type(y).__name__}'")
try:
self.payload._data -= y.payload._data
except AttributeError:
self.payload._data -= self.payload._data.dtype.type(y)
return self
def __mul__(self, y):
"""
Multiply the data sections of two frames together or multiply
a number to every element in the data section.
.. note::
In the case where a frame is given the weights are
ignored.
"""
if not isinstance(y, (FrameBase, int, float, complex, np.ndarray)):
raise TypeError("Unsupported type '%s'" % type(y).__name__)
newFrame = copy.deepcopy(self)
newFrame *= y
return newFrame
def __imul__(self, y):
"""
In-place multiply the data sections of two frames together or
multiply a number to every element in the data section.
.. note::
In the case where a frame is given the weights are
ignored.
"""
if not isinstance(y, (FrameBase, int, float, complex, np.ndarray)):
raise TypeError("Unsupported type '%s'" % type(y).__name__)
try:
self.payload._data *= y.payload._data
except AttributeError:
self.payload._data *= self.payload._data.dtype.type(y)
return self
def __floordiv__(self, y):
"""
Divide the data sections of two frames together or divide
a number into every element in the data section.
.. note::
In the case where a frame is given the weights are
ignored.
"""
if not isinstance(y, (FrameBase, int, float, complex, np.ndarray)):
raise TypeError("Unsupported type '%s'" % type(y).__name__)
newFrame = copy.deepcopy(self)
newFrame //= y
return newFrame
def __ifloordiv__(self, y):
"""
In-place divide the data sections of two frames together or
divide a number into every element in the data section.
.. note::
In the case where a frame is given the weights are
ignored.
"""
if not isinstance(y, (FrameBase, int, float, complex, np.ndarray)):
raise TypeError("Unsupported type '%s'" % type(y).__name__)
try:
self.payload._data //= y.payload._data
except AttributeError:
self.payload._data //= self.payload._data.dtype.type(y)
return self
def __truediv__(self, y):
"""
Divide the data sections of two frames together or divide
a number into every element in the data section.
.. note::
In the case where a frame is given the weights are
ignored.
"""
if not isinstance(y, (FrameBase, int, float, complex, np.ndarray)):
raise TypeError("Unsupported type '%s'" % type(y).__name__)
newFrame = copy.deepcopy(self)
newFrame /= y
return newFrame
def __itruediv__(self, y):
"""
In-place divide the data sections of two frames together or
divide a number into every element in the data section.
.. note::
In the case where a frame is given the weights are
ignored.
"""
if not isinstance(y, (FrameBase, int, float, complex, np.ndarray)):
raise TypeError("Unsupported type '%s'" % type(y).__name__)
try:
self.payload._data /= y.payload._data
except AttributeError:
self.payload._data /= self.payload._data.dtype.type(y)
return self
def __div__(self, y):
return self.__floordiv__(y)
def __idiv__(self, y):
return self.__ifloordiv__(y)
def __eq__(self, y):
"""
Check if the time tags of two frames are equal or if the time
tag is equal to a particular value.
"""
tX = self.time
if isinstance(y, FrameBase):
tY = y.time
elif isinstance(y, (int, float, np.integer, np.floating, FrameTimestamp)):
tY = y
else:
raise TypeError(f"Unsupported type: '{type(y).__name__}'")
if tX == tY:
return True
else:
return False
def __lt__(self, y):
"""
Check if the time tag of the first frame is less than that of a
second frame or if the time tag is less than a particular value.
"""
tX = self.time
if isinstance(y, FrameBase):
tY = y.time
elif isinstance(y, (int, float, np.integer, np.floating, FrameTimestamp)):
tY = y
else:
raise TypeError(f"Unsupported type: '{type(y).__name__}'")
if tX < tY:
return True
else:
return False
@total_ordering
class FrameTimestamp(object):
"""
Class to represent the UNIX timestamp of a data frame as an integer
number of seconds and a fractional number of seconds.
"""
def __init__(self, si=0, sf=0.0):
if isinstance(si, AstroTime):
self._time = si
if sf != 0.0:
self._time += AstroDelta(sf, format='sec')
else:
if isinstance(si, (float, np.floating)):
sf = sf + (si - int(si))
si = int(si)
# Make sure sf is [0.0, 1.0)
if sf >= 1:
sfi = int(sf)
sff = sf - sfi
si += sfi
sf = sff
elif sf < 0:
sfi = int(sf) - 1
sff = sf - sfi
si += sfi
sf = sff
self._time = AstroTime(si, sf, format='unix', scale='utc')
@classmethod
def now(cls):
"""
Create a new FrameTimestamp instance for the current time as determined
from `astropy.time.Time.now()`.
"""
return cls(AstroTime.now())
@classmethod
def from_dp_timetag(cls, value, offset=0):
"""
Create a new FrameTimestamp instance from a raw DP timetag with an optional
offset.
"""
tt = int(value) - offset
s = tt // int(dp_common.fS)
f = (tt - s*int(dp_common.fS)) / dp_common.fS
return cls(s, f)
@classmethod
def from_mjd_mpm(cls, mjd, mpm):
"""
Create a new FrameTimestamp from a MJD/MPM (milliseconds past midnight) pair.
"""
t = AstroTime(mjd, mpm//1000 / 86400, format='mjd', scale='utc')
f = (mpm % 1000) / 1000.0
return cls(t, f)
@classmethod
def from_pulsar_mjd(cls, mjd, mjd_frac, sec_frac):
"""
Create a new FrameTimestamp from a three-element tuple of integer number
of MJD days, integer seconds since 0h on that day, and fractional seconds.
"""
t = AstroTime(mjd, mjd_frac/86400, format='mjd', scale='utc')
return cls(t, sec_frac)
@classmethod
def from_astropy(cls, t):
"""
Create a new FrameTimestamp from an astropy.time.Time object.
"""
return cls(t, 0.0)
def __str__(self):
dt = self._time.datetime
return str(dt)
def __repr__(self):
t = self._time.unix
return "<FrameTimestamp i=%i, f=%.9f (%.9f %.9f)>" % (int(t), t-int(t), self._time.jd1, self._time.jd2)
def __format__(self, format_spec):
if format_spec == '' or format_spec[-1] == 's':
return format(str(self), format_spec)
elif format_spec[-1] in ('e', 'E', 'f', 'F', 'g', 'G', 'n'):
return format(float(self), format_spec)
elif format_spec[-1] in ('d', 'n'):
t = self._time.unix
return format(int(t), format_spec)
else:
raise TypeError("unsupported format string passed to %s.__format__" % type(self).__name__)
def __float__(self):
return float(self._time.unix)
def __getitem__(self, i):
t = self._time.unix
if i == 0:
return int(t)
elif i == 1:
return t - int(t)
else:
raise IndexError
def __add__(self, other):
if isinstance(other, (int, float, np.integer, np.floating)):
t = self._time + AstroDelta(other, format='sec')
return FrameTimestamp(t)
elif isinstance(other, timedelta):
t = self._time + AstroDelta(other, format='datetime')
return FrameTimestamp(t)
else:
raise TypeError(f"Unsupported type: '{type(other).__name__}'")
def __iadd__(self, other):
if isinstance(other, (int, float, np.integer, np.floating)):
self._time += AstroDelta(other, format='sec')
return self
elif isinstance(other, timedelta):
self._time += AstroDelta(other, format='datetime')
return self
else:
raise TypeError(f"Unsupported type: '{type(other).__name__}'")
def __sub__(self, other):
if isinstance(other, FrameTimestamp):
diff = self._time - other._time
return diff.sec
elif isinstance(other, (int, float, np.integer, np.floating)):
t = self._time - AstroDelta(other, format='sec')
return FrameTimestamp(t)
elif isinstance(other, timedelta):
t = self._time - AstroDelta(other, format='datetime')
return FrameTimestamp(t)
else:
raise TypeError(f"Unsupported type: '{type(other).__name__}'")
def __isub__(self, other):
if isinstance(other, (int, float, np.integer, np.floating)):
self._time -= AstroDelta(other, format='sec')
return self
elif isinstance(other, timedelta):
self._time -= AstroDelta(other, format='datetime')
return self
else:
raise TypeError(f"Unsupported type: '{type(other).__name__}'")
def __eq__(self, y):
if isinstance(y, FrameTimestamp):
return self._time == y._time
elif isinstance(y, (int, np.integer, float, np.floating)):
return float(self) == y
else:
raise TypeError(f"Unsupported type: '{type(y).__name__}'")
def __lt__(self, y):
if isinstance(y, FrameTimestamp):
return self._time < y._time
elif isinstance(y, (int, np.integer, float, np.floating)):
return float(self) < y
else:
raise TypeError(f"Unsupported type: '{type(y).__name__}'")
@property
def unix(self):
"""
UNIX timestamp as a floating point value.
"""
return self._time.unix
@property
def jd(self):
"""
UTC JD as a floating point value.
"""
return self._time.jd
@property
def mjd(self):
"""
UTC MJD as a floating point value.
"""
return self._time.mjd
@property
def pulsar_mjd(self):
"""
UTC MJD as three-element tuple of integer number of MJD days,
integer number of seconds since 0h on that day, and fractional seconds.
"""
j1i = int(self._time.jd1)
j1f = self._time.jd1 - j1i
j2i = int(self._time.jd2)
j2f = self._time.jd2 - j2i
mjd = j1i + j2i - MJD_OFFSET
mjd_frac1 = j1f + (mjd - int(mjd))
mjd_frac2 = j2f
mjd = int(mjd)
while mjd_frac1 >= 1.0:
mjd += 1
mjd_frac1 -= 1
while mjd_frac2 >= 1.0:
mjd += 1
mjd_frac2 -= 1
while mjd_frac1 < 0.0:
mjd -= 1
mjd_frac1 += 1
while mjd_frac2 < 0.0:
mjd -= 1
mjd_frac2 += 1
day_sec1 = mjd_frac1 * 86400
day_sec2 = mjd_frac2 * 86400
day_sec = int(day_sec1) + int(day_sec2)
if day_sec > 86400:
mjd += 1
day_sec -= 86400
sec_frac = (day_sec1 - int(day_sec1)) + (day_sec2 - int(day_sec2))
return mjd, day_sec, sec_frac
@property
def tai_mjd(self):
"""
TAI MJD as a floating point value.
"""
return self._time.tai.mjd
@property
def dp_timetag(self):
"""
Timestamp as a DP timetag (ticks of a 196 MHz clock since UTC midnight
on January 1, 1970).
"""
sec = int(self._time.unix)
sec_frac = self._time - AstroTime(sec, format='unix', scale='utc')
tt = sec * int(dp_common.fS)
tt = tt + int(round(sec_frac.sec*dp_common.fS, 2))
return tt
@property
def datetime(self):
"""
Timestamp as a naive `datetime.datetime` instance in UTC.
"""
return self._time.datetime
@property
def utc_datetime(self):
"""
Timestamp as a time zone-aware datetime instance in UTC.
"""
return pytz.utc.localize(self.datetime)
@property
def astropy(self):
"""
Timestamp as an `astropy.time.Time` instance.
"""
return self._time
CI8 = np.dtype([('re', 'i1'), ('im', 'i1')])
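A short sketch of how the `CI8` dtype defined above might be consumed. The dtype itself is reproduced here so the snippet stands alone; the promotion to `complex64` is an assumption about typical downstream use, not part of the module:

```python
import numpy as np

# Mirrors lsl.reader.base.CI8: one complex sample packed as two signed
# 8-bit integers (real part, imaginary part).
CI8 = np.dtype([('re', 'i1'), ('im', 'i1')])

raw = np.zeros(4, dtype=CI8)
raw['re'] = [1, 2, 3, 4]
raw['im'] = [-1, 0, 1, 2]

# Promote the packed int8 pairs to complex64 for downstream math.
data = raw['re'].astype(np.float32) + 1j * raw['im'].astype(np.float32)
```

Keeping samples in `CI8` halves memory relative to `complex64` at the cost of this explicit promotion step.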
|
|
{
"filename": "GLGradientLegendItem.py",
"repo_name": "3fon3fonov/exostriker",
"repo_path": "exostriker_extracted/exostriker-main/exostriker/lib/pyqtgraph/opengl/items/GLGradientLegendItem.py",
"type": "Python"
}
|
from ... import functions as fn
from ...colormap import ColorMap
from ...Qt import QtCore, QtGui
from ..GLGraphicsItem import GLGraphicsItem
__all__ = ['GLGradientLegendItem']
class GLGradientLegendItem(GLGraphicsItem):
"""
Displays legend colorbar on the screen.
"""
def __init__(self, parentItem=None, **kwds):
"""
Arguments:
pos: position of the colorbar on the screen, from the top left corner, in pixels
size: size of the colorbar without the text, in pixels
gradient: a pg.ColorMap used to color the colorbar
labels: a dict of "text":value to display next to the colorbar.
The value corresponds to a position in the gradient from 0 to 1.
fontColor: sets the color of the texts. Accepts any single argument accepted by
:func:`~pyqtgraph.mkColor`
#Todo:
size as percentage
legend title
"""
super().__init__(parentItem=parentItem)
glopts = kwds.pop("glOptions", "additive")
self.setGLOptions(glopts)
self.pos = (10, 10)
self.size = (10, 100)
self.fontColor = QtGui.QColor(QtCore.Qt.GlobalColor.white)
# setup a default trivial gradient
stops = (0.0, 1.0)
self.gradient = ColorMap(pos=stops, color=(0.0, 1.0))
self._gradient = None
self.labels = {str(x) : x for x in stops}
self.setData(**kwds)
def setData(self, **kwds):
args = ["size", "pos", "gradient", "labels", "fontColor"]
for k in kwds.keys():
if k not in args:
raise Exception(
"Invalid keyword argument: %s (allowed arguments are %s)"
% (k, str(args))
)
self.antialias = False
for key in kwds:
value = kwds[key]
if key == 'fontColor':
value = fn.mkColor(value)
elif key == 'gradient':
self._gradient = None
setattr(self, key, value)
##todo: add title
##todo: take more gradient types
self.update()
def paint(self):
self.setupGLState()
if self._gradient is None:
self._gradient = self.gradient.getGradient()
barRect = QtCore.QRectF(self.pos[0], self.pos[1], self.size[0], self.size[1])
self._gradient.setStart(barRect.bottomLeft())
self._gradient.setFinalStop(barRect.topLeft())
painter = QtGui.QPainter(self.view())
painter.fillRect(barRect, self._gradient)
painter.setPen(self.fontColor)
for labelText, labelPosition in self.labels.items():
## todo: draw ticks, position text properly
x = 1.1 * self.size[0] + self.pos[0]
y = self.size[1] - labelPosition * self.size[1] + self.pos[1] + 8
##todo: fonts
painter.drawText(QtCore.QPointF(x, y), labelText)
painter.end()
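The label placement in `paint()` reduces to a linear map from a gradient position in [0, 1] to a screen y coordinate, with position 1.0 at the top of the bar. A Qt-free sketch of that arithmetic (the function name and the `offset=8` text nudge are lifted from the loop above, but the helper itself is illustrative):

```python
def label_y(position, size, pos, offset=8):
    """Map a gradient position in [0, 1] to a screen y pixel, mirroring
    GLGradientLegendItem.paint(); position 1.0 lands at the bar's top."""
    return size[1] - position * size[1] + pos[1] + offset

# Default geometry from __init__: pos=(10, 10), size=(10, 100).
assert label_y(0.0, (10, 100), (10, 10)) == 118  # bottom of the bar
assert label_y(1.0, (10, 100), (10, 10)) == 18   # top of the bar
```

This matches the gradient orientation set in `paint()`, where `setStart` is the rect's bottom-left and `setFinalStop` its top-left.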
|
|
{
"filename": "pgm.py",
"repo_name": "jmeyers314/linmix",
"repo_path": "linmix_extracted/linmix-master/docs/pgm/pgm.py",
"type": "Python"
}
|
from matplotlib import rc
rc("font", family="serif", size=10)
rc("text", usetex=True)
import daft
figshape = (6.5, 4)
figorigin = (-1.5, -0.5)
pgm = daft.PGM(figshape, figorigin)
pgm.add_node(daft.Node("y", r"y", 1, 0, observed=True))
pgm.add_node(daft.Node("x", r"x", 2, 0, observed=True))
pgm.add_node(daft.Node("eta", r"$\eta$", 1, 1))
pgm.add_node(daft.Node("xi", r"$\xi$", 2, 1))
pgm.add_node(daft.Node("alpha", r"$\alpha$", 0, 0))
pgm.add_node(daft.Node("beta", r"$\beta$", 0, 1))
pgm.add_node(daft.Node("sigsqr", r"$\sigma^2$", 0, 2))
pgm.add_node(daft.Node("pi", r"$\pi$", 2, 2))
pgm.add_node(daft.Node("mu", r"$\mu$", 3, 1))
pgm.add_node(daft.Node("tausqr", r"$\tau^2$", 3, 2))
pgm.add_node(daft.Node("mu0", r"$\mu_0$", 3, 0))
pgm.add_node(daft.Node("usqr", r"$u^2$", 4, 1))
pgm.add_node(daft.Node("wsqr", r"$w^2$", 4, 2))
pgm.add_node(daft.Node("prior_alpha", r"U($-\infty$, $\infty$)", -1, 0, fixed=True))
pgm.add_node(daft.Node("prior_beta", r"U($-\infty$, $\infty$)", -1, 1, fixed=True))
pgm.add_node(daft.Node("prior_sigsqr", r"U(0, $\infty$)", -1, 2, fixed=True))
pgm.add_node(daft.Node("prior_mu0", r"U(min(x), max(x))", 4, 0, fixed=True))
# pgm.add_node(daft.Node("prior_mu0", r"U($-\infty$, $\infty$)", 4, 0, fixed=True))
pgm.add_node(daft.Node("prior_wsqr", r"U(0, $\infty$)", 4, 3, fixed=True))
pgm.add_node(daft.Node("prior_pi", r"Dirichlet(1, ..., 1)", 2, 3, fixed=True))
pgm.add_edge("xi", "x")
pgm.add_edge("eta", "x")
pgm.add_edge("xi", "eta")
pgm.add_edge("eta", "y")
pgm.add_edge("xi", "y")
pgm.add_edge("alpha", "eta")
pgm.add_edge("beta", "eta")
pgm.add_edge("sigsqr", "eta")
pgm.add_edge("pi", "xi")
pgm.add_edge("mu", "xi")
pgm.add_edge("tausqr", "xi")
pgm.add_edge("mu0", "mu")
pgm.add_edge("usqr", "mu")
pgm.add_edge("wsqr", "usqr")
pgm.add_edge("wsqr", "tausqr")
pgm.add_edge("prior_alpha", "alpha")
pgm.add_edge("prior_beta", "beta")
pgm.add_edge("prior_sigsqr", "sigsqr")
pgm.add_edge("prior_mu0", "mu0")
pgm.add_edge("prior_wsqr", "wsqr")
pgm.add_edge("prior_pi", "pi")
pgm.render()
pgm.figure.savefig("pgm.png", dpi=300)
|
|
{
"filename": "magnitudes.py",
"repo_name": "HETDEX/elixer",
"repo_path": "elixer_extracted/elixer-main/elixer/cosmolopy/magnitudes.py",
"type": "Python"
}
|
"""Conversions between fluxes, luminosities and AB magnitudes.
"""
import math
import numpy
import cosmolopy.distance as cd
import cosmolopy.constants as cc
"""AB Magnitude zero point."""
MAB0 = -2.5 * numpy.log10(3631.e-23)
def nu_lambda(coordinate):
"""Convert between frequency and wavelength, nu to lambda or
lambda to nu.
Either:
given `lambda` returns 'nu' or
given `nu` returns `lambda`.
Units are:
`Hz` for nu and `Ang` for `lambda`.
Works because `nu = c/lambda` and `lambda = c/nu`, and I use `c`
in units of `Angs/s`.
Usage
-----
>>> from cosmolopy import magnitudes
>>> nu = magnitudes.nu_lambda(1216.)
>>> lam = magnitudes.nu_lambda(nu)
>>> lam
1216.0
"""
return (cc.c_light_cm_s / cc.angstrom_cm) / coordinate
def f_nu_lambda(flux, coordinate):
"""Convert f_nu to f_lambda or f_lambda to f_nu.
Either:
given `f_lambda` and `lambda` returns `f_nu` and 'nu' or
given `f_nu` and `nu` returns `f_lambda` and `lambda`.
Units are:
`erg s^-1 cm^-2 Hz^-1` for f_nu and
`erg s^-1 cm^-2 Ang^-1` for `f_lambda`.
Works because `f_nu = f_lambda * lambda**2/c` and `f_lambda = f_nu
* nu**2/c`, and I use `c` in units of `Angs/s`.
Usage
-----
>>> from cosmolopy import magnitudes
>>> fnu, nu = magnitudes.f_nu_lambda(2.0, 1216.)
>>> flam, lam = magnitudes.f_nu_lambda(fnu, nu)
>>> flam, lam
(2.0, 1216.0)
"""
return (flux * coordinate**2. / (cc.c_light_cm_s / cc.angstrom_cm),
(cc.c_light_cm_s / cc.angstrom_cm) / coordinate)
def f_nu_from_magAB(magAB):
"""Convert apparent magnitude into flux (erg s^-1 cm^-2 Hz^-1).
Usage
-----
Check that the AB magnitude zero point is 3631 Jy:
>>> from cosmolopy import magnitudes
>>> "%.4g" % (magnitudes.f_nu_from_magAB(0.0)/1e-23)
'3631'
"""
f_nu = 10.**((magAB + MAB0)/(-2.5))
return f_nu
def L_nu_from_magAB(magAB):
"""Convert absolute magnitude into luminosity (erg s^-1 Hz^-1).
Usage
-----
Check that the AB magnitude zero point is 3631 Jy:
>>> from cosmolopy import magnitudes
>>> import math
>>> L_nu = magnitudes.L_nu_from_magAB(0.0)
>>> "%.4g" % (L_nu/(1e-23 * 4. * math.pi * (10*cc.pc_cm)**2))
'3631'
"""
const = 4. * math.pi * (10. * cc.pc_cm)**2.
L_nu = const * 10.**((magAB + MAB0)/(-2.5))
return L_nu
def magnitude_AB_from_L_nu(luminosity_nu):
"""Convert luminosity (erg s^-1 Hz^-1) into absolute magnitude.
Usage
-----
Check that the AB magnitude zero point is 3631 Jy:
>>> import numpy, math
>>> from cosmolopy import magnitudes, cc
>>> L_nu = 3631e-23 * (4. * math.pi * (10*cc.pc_cm)**2)
>>> "%.3f" % numpy.abs(magnitudes.magnitude_AB_from_L_nu(L_nu))
'0.000'
"""
const = 4. * math.pi * (10. * cc.pc_cm)**2.
magAB = -2.5 * numpy.log10(luminosity_nu/const) - MAB0
return magAB
def distance_modulus(z, **cosmo):
"""Distance modulus mu = m-M.
The distance modulus is the difference between the apparent and
absolute magnitudes,
mu = 5 log(d/10 pc)
Usage
-----
>>> from cosmolopy import fidcosmo, magnitudes
>>> "mu(z=6) = %.4g" % magnitudes.distance_modulus(6.0, **fidcosmo)
'mu(z=6) = 48.86'
"""
dl = cd.luminosity_distance(z, **cosmo)
mu = 5 * numpy.log10(dl/(10e-6))
return mu
def magnitude_AB(z, f_lambda, wavelength, **cosmo):
"""The apparent and absolute AB magnitude given a flux.
Inputs
------
z: array or scalar
the redshift of the source. Set to None to get absolute
magnitude from a luminosity.
f_lambda: array or scalar
observed flux from the source in units of erg s^-1 cm^-2 Ang^-1
wavelength: array or scalar
the observed wavelength of the flux measurement(s) in Angstroms
Returns
-------
Returns ab (apparent), and AB (absolute) magnitudes.
Notes
-----
Note that here you pass fluxes that are per unit wavelength, not
per unit frequency. To get the absolute magnitude for a
*luminosity* specified in units of erg s^-1 Ang^-1, set z=None.
Usage
-----
Check that the AB magnitude zero point is 3631 Jy:
>>> from cosmolopy import fidcosmo, magnitudes, cc, cd
>>> import numpy, math
>>> L_nu = 3631e-23 * (4. * math.pi * (10*cc.pc_cm)**2)
>>> nu = magnitudes.nu_lambda(1216.)
>>> L_lambda, lamb = magnitudes.f_nu_lambda(L_nu, nu)
>>> mAB, MAB = magnitudes.magnitude_AB(None, L_lambda, 1216., **fidcosmo)
>>> "%.3f" % numpy.abs(MAB)
'0.000'
Find the apparent (and absolute, which should be zero) magnitudes
of a 3631 Jy source at z=6.0:
>>> from cosmolopy import fidcosmo, magnitudes, cc, cd
>>> import numpy, math
>>> L_nu = 3631e-23 * (4. * math.pi * (10*cc.pc_cm)**2)
>>> nu = magnitudes.nu_lambda(1216.)
>>> L_lambda, lamb = magnitudes.f_nu_lambda(L_nu, nu)
>>> dl = cd.luminosity_distance(6.0, **fidcosmo)
>>> f_lambda = L_lambda/(4. * math.pi * (dl*cc.Mpc_cm)**2 * (1. + 6.0))
>>> mAB, MAB = magnitudes.magnitude_AB(6.0, f_lambda, 7.*1216., **fidcosmo)
>>> "%.3f, %.3f" % (mAB, MAB)
'48.865, 0.000'
"""
# Distance modulus mu = m-M
if z is None:
mu = 0.0
z = 0
f_lambda = f_lambda / (4. * math.pi * (10. * cc.pc_cm)**2.)
else:
mu = distance_modulus(z, **cosmo)
# Correction to the flux due to redshifted differential wavelength.
f_rest = f_lambda * (1+z)
# Rest wavelength and frequency.
lambda_rest = wavelength/(1+z)
nu_rest = cc.c_light_cm_s / (lambda_rest * cc.angstrom_cm)
# Observed frequency.
nu_0 = cc.c_light_cm_s / (wavelength * cc.angstrom_cm)
# Apparent AB magnitude.
ab_app = -2.5 * numpy.log10(f_rest * (lambda_rest / nu_rest)) - MAB0
# Absolute magnitude
ab_abs = ab_app - mu
return ab_app, ab_abs
def magnitude_AB1450(z, f_lambda, wavelength, nu_power=-0.5, **cosmo):
"""Extrapolate to the AB magnitude at 1450 Angstroms.
Inputs
------
z: array or scalar
the redshift of the source
f_lambda: array or scalar
observed flux from the source in units of erg s^-1 cm^-2 Ang^-1
wavelength: array or scalar
the observed wavelength of the flux measurement(s) in Angstroms.
nu_power:
the powerlaw index (f_nu ~ nu^nu_power) used to extrapolate
the flux to 1450 Angstroms.
Returns
-------
Apparent and absolute magnitudes extrapolated to 1450 Angstroms.
Notes
-----
Follows Fan et al. 2003:
We extrapolate the continuum to rest-frame 1450A, assuming a
continuum shape f_nu ~ nu^-0.5 to calculate AB_1450.
Usage
-----
Find the apparent and absolute rest-frame 1450 Angstrom magnitudes
of source with a flux of 3631 Jy at rest-frame 1216 Angstroms at
z=6.0:
>>> from cosmolopy import fidcosmo, magnitudes, cc, cd
>>> import numpy, math
>>> L_nu = 3631e-23 * (4. * math.pi * (10*cc.pc_cm)**2)
>>> nu = magnitudes.nu_lambda(1216.)
>>> L_lambda, lamb = magnitudes.f_nu_lambda(L_nu, nu)
>>> dl = cd.luminosity_distance(6.0, **fidcosmo)
>>> f_lambda = L_lambda/(4. * math.pi * (dl*cc.Mpc_cm)**2 * (1. + 6.0))
>>> mAB, MAB = magnitudes.magnitude_AB1450(6.0, f_lambda, 7.*1216.,
... **fidcosmo)
>>> "%.3f, %.3f" % (mAB, MAB)
'48.769, -0.096'
And is that offset from an absolute magnitude of zero consistent
with our assumed powerlaw index?
>>> "%.3f" %(-2.5 * numpy.log10((1216./1450)**0.5))
'0.096'
"""
# correction to the flux due to redshifted differential wavelength
f_rest = f_lambda * (1+z)
# rest wavelength and frequency
lambda_rest = wavelength/(1+z)
nu_rest = cc.c_light_cm_s / (lambda_rest * cc.angstrom_cm)
# rest flux per unit freq.
f_nu_rest = f_rest * (lambda_rest / nu_rest)
nu_1450 = cc.c_light_cm_s / (1450 * cc.angstrom_cm)
f_nu_1450 = f_nu_rest * (nu_1450/nu_rest)**nu_power
# apparent AB magnitude
ab_app = -2.5 * numpy.log10(f_nu_1450) - MAB0
# distance modulus mu = m-M
mu = distance_modulus(z, **cosmo)
# absolute magnitude
ab_abs = ab_app - mu
return ab_app, ab_abs
if __name__ == "__main__":
import doctest
doctest.testmod()
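The zero-point relation `magAB = -2.5 log10(f_nu) - MAB0` used throughout the module can be verified in a few lines. This standalone sketch re-derives `MAB0` from its definition above rather than importing cosmolopy; the helper name is illustrative:

```python
import math

# AB magnitude zero point, as defined at the top of the module:
# 3631 Jy expressed in erg s^-1 cm^-2 Hz^-1.
MAB0 = -2.5 * math.log10(3631.e-23)

def mag_AB_from_f_nu(f_nu):
    """Apparent AB magnitude from a flux density in erg s^-1 cm^-2 Hz^-1."""
    return -2.5 * math.log10(f_nu) - MAB0

# A 3631 Jy source sits at the zero point by construction,
# and a 10x fainter source is 2.5 magnitudes fainter.
assert abs(mag_AB_from_f_nu(3631e-23)) < 1e-9
assert abs(mag_AB_from_f_nu(3631e-24) - 2.5) < 1e-9
```

This is the same identity the module's doctests exercise via `f_nu_from_magAB(0.0)`.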
|
|
{
"filename": "label--short-desc-evaluation.md",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/catboost/docs/en/_includes/work_src/reusage/label--short-desc-evaluation.md",
"type": "Markdown"
}
|
The target variables (in other words, the objects' label values) for the evaluation dataset.
|
|
{
"filename": "image_grad_test_base.py",
"repo_name": "tensorflow/tensorflow",
"repo_path": "tensorflow_extracted/tensorflow-master/tensorflow/python/ops/image_grad_test_base.py",
"type": "Python"
}
|
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for Python ops defined in image_grad.py."""
from absl.testing import parameterized
import numpy as np
from tensorflow.python.eager import backprop
from tensorflow.python.eager import context
from tensorflow.python.framework import config
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import errors_impl
from tensorflow.python.framework import test_util
from tensorflow.python.ops import array_ops_stack
from tensorflow.python.ops import gen_image_ops
from tensorflow.python.ops import gradient_checker_v2
from tensorflow.python.ops import image_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.platform import test
@test_util.for_all_test_methods(test_util.disable_xla,
'align_corners=False not supported by XLA')
class ResizeNearestNeighborOpTestBase(test.TestCase):
TYPES = [np.float16, np.float32, np.float64, dtypes.bfloat16.as_numpy_dtype]
def testShapeIsCorrectAfterOp(self):
in_shape = [1, 2, 2, 1]
out_shape = [1, 4, 6, 1]
for nptype in self.TYPES:
x = np.arange(0, 4).reshape(in_shape).astype(nptype)
input_tensor = constant_op.constant(x, shape=in_shape)
resize_out = image_ops.resize_nearest_neighbor(input_tensor,
out_shape[1:3])
with self.cached_session():
self.assertEqual(out_shape, list(resize_out.get_shape()))
resize_out = self.evaluate(resize_out)
self.assertEqual(out_shape, list(resize_out.shape))
def testGradFromResizeToLargerInBothDims(self):
in_shape = [1, 2, 3, 1]
out_shape = (1, 4, 6, 1)
for nptype in self.TYPES:
x = np.arange(0, 6).reshape(in_shape).astype(nptype)
def resize_nn(t, shape=out_shape):
return image_ops.resize_nearest_neighbor(t, shape[1:3])
with self.cached_session():
input_tensor = constant_op.constant(x, shape=in_shape)
err = gradient_checker_v2.max_error(
*gradient_checker_v2.compute_gradient(
resize_nn, [input_tensor], delta=1 / 8))
self.assertLess(err, 1e-3)
def testGradFromResizeToSmallerInBothDims(self):
in_shape = [1, 4, 6, 1]
out_shape = (1, 2, 3, 1)
for nptype in self.TYPES:
x = np.arange(0, 24).reshape(in_shape).astype(nptype)
def resize_nn(t, shape=out_shape):
return image_ops.resize_nearest_neighbor(t, shape[1:3])
with self.cached_session():
input_tensor = constant_op.constant(x, shape=in_shape)
err = gradient_checker_v2.max_error(
*gradient_checker_v2.compute_gradient(
resize_nn, [input_tensor], delta=1 / 8))
self.assertLess(err, 1e-3)
def testCompareGpuVsCpu(self):
in_shape = [1, 4, 6, 3]
out_shape = (1, 8, 16, 3)
for nptype in self.TYPES:
x = np.arange(0, np.prod(in_shape)).reshape(in_shape).astype(nptype)
for align_corners in [True, False]:
def resize_nn(t, shape=out_shape, align_corners=align_corners):
return image_ops.resize_nearest_neighbor(
t, shape[1:3], align_corners=align_corners)
with self.cached_session(use_gpu=False):
input_tensor = constant_op.constant(x, shape=in_shape)
grad_cpu = gradient_checker_v2.compute_gradient(
resize_nn, [input_tensor], delta=1 / 8)
with self.cached_session():
input_tensor = constant_op.constant(x, shape=in_shape)
grad_gpu = gradient_checker_v2.compute_gradient(
resize_nn, [input_tensor], delta=1 / 8)
self.assertAllClose(grad_cpu, grad_gpu, rtol=1e-5, atol=1e-5)
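For intuition about what the gradient checks above exercise: with `align_corners=False`, nearest-neighbor resizing maps output pixel `i` to input index `floor(i * in / out)`. A pure-NumPy sketch of that mapping (an illustration of the indexing, not TensorFlow's implementation):

```python
import numpy as np

def resize_nn(x, out_h, out_w):
    """Nearest-neighbor resize of an HxW array using the
    align_corners=False index map floor(i * in / out)."""
    in_h, in_w = x.shape
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return x[rows[:, None], cols]  # broadcast to an (out_h, out_w) gather

x = np.arange(4).reshape(2, 2)
y = resize_nn(x, 4, 4)          # 2x upsample: each input pixel repeats 2x2
assert y.shape == (4, 4)
assert y[0, 0] == x[0, 0] and y[3, 3] == x[1, 1]
```

Because each output pixel is a pure gather, the gradient scatters each incoming value back to its source pixel, which is what the `compute_gradient` checks above verify numerically.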
class ResizeBilinearOpTestBase(test.TestCase, parameterized.TestCase):
def _itGen(self, smaller_shape, larger_shape):
up_sample = (smaller_shape, larger_shape)
down_sample = (larger_shape, smaller_shape)
pass_through = (larger_shape, larger_shape)
shape_pairs = (up_sample, down_sample, pass_through)
# Align corners is deprecated in TF2.0, but align_corners==False is not
# supported by XLA.
options = [(True, False)]
if not test_util.is_xla_enabled():
options += [(False, True), (False, False)]
for align_corners, half_pixel_centers in options:
for in_shape, out_shape in shape_pairs:
yield in_shape, out_shape, align_corners, half_pixel_centers
def _getJacobians(self,
in_shape,
out_shape,
align_corners=False,
half_pixel_centers=False,
dtype=np.float32,
use_gpu=False,
force_gpu=False):
with self.cached_session(use_gpu=use_gpu, force_gpu=force_gpu):
# Input values should not influence gradients
x = np.arange(np.prod(in_shape)).reshape(in_shape).astype(dtype)
input_tensor = constant_op.constant(x, shape=in_shape)
def func(in_tensor):
return image_ops.resize_bilinear(
in_tensor,
out_shape[1:3],
align_corners=align_corners,
half_pixel_centers=half_pixel_centers)
return gradient_checker_v2.compute_gradient(func, [input_tensor])
@parameterized.parameters(set((True, context.executing_eagerly())))
def _testShapesParameterized(self, use_tape):
TEST_CASES = [[1, 1], [2, 3], [5, 4]] # pylint: disable=invalid-name
for batch_size, channel_count in TEST_CASES:
smaller_shape = [batch_size, 2, 3, channel_count]
larger_shape = [batch_size, 4, 6, channel_count]
for in_shape, out_shape, _, _ in self._itGen(smaller_shape, larger_shape):
with test_util.AbstractGradientTape(use_tape=use_tape) as tape:
# Input values should not influence shapes
x = np.arange(np.prod(in_shape)).reshape(in_shape).astype(np.float32)
input_tensor = constant_op.constant(x, shape=in_shape)
tape.watch(input_tensor)
resized_tensor = image_ops.resize_bilinear(input_tensor,
out_shape[1:3])
self.assertEqual(out_shape, list(resized_tensor.get_shape()))
grad_tensor = tape.gradient(resized_tensor, input_tensor)
self.assertEqual(in_shape, list(grad_tensor.get_shape()))
with self.cached_session():
resized_values = self.evaluate(resized_tensor)
self.assertEqual(out_shape, list(resized_values.shape))
grad_values = self.evaluate(grad_tensor)
self.assertEqual(in_shape, list(grad_values.shape))
@parameterized.parameters({
'batch_size': 1,
'channel_count': 1
}, {
'batch_size': 4,
'channel_count': 3
}, {
'batch_size': 3,
'channel_count': 2
})
def testGradients(self, batch_size, channel_count):
smaller_shape = [batch_size, 2, 3, channel_count]
larger_shape = [batch_size, 5, 6, channel_count]
for in_shape, out_shape, align_corners, half_pixel_centers in \
self._itGen(smaller_shape, larger_shape):
jacob_a, jacob_n = self._getJacobians(in_shape, out_shape, align_corners,
half_pixel_centers)
threshold = 5e-3
self.assertAllClose(jacob_a, jacob_n, threshold, threshold)
def testTypes(self):
in_shape = [1, 4, 6, 1]
out_shape = [1, 2, 3, 1]
for use_gpu in [False, True]:
for dtype in [
np.float16, np.float32, np.float64, dtypes.bfloat16.as_numpy_dtype
]:
jacob_a, jacob_n = self._getJacobians(
in_shape, out_shape, dtype=dtype, use_gpu=use_gpu)
if dtype in (np.float16, dtypes.bfloat16.as_numpy_dtype):
# Compare fp16/bf16 analytical gradients to fp32 numerical gradients,
# since fp16/bf16 numerical gradients are too imprecise unless great
# care is taken with choosing the inputs and the delta. This is
# a weaker, but pragmatic, check (in particular, it does not test
# the op itself, only its gradient).
_, jacob_n = self._getJacobians(
in_shape, out_shape, dtype=np.float32, use_gpu=use_gpu)
threshold = 1e-3
if dtype == np.float64:
threshold = 1e-5
self.assertAllClose(jacob_a, jacob_n, threshold, threshold)
@parameterized.parameters(set((True, context.executing_eagerly())))
def testGradOnUnsupportedType(self, use_tape):
in_shape = [1, 4, 6, 1]
out_shape = [1, 2, 3, 1]
with test_util.AbstractGradientTape(use_tape=use_tape) as tape:
x = np.arange(0, 24).reshape(in_shape).astype(np.uint8)
input_tensor = constant_op.constant(x, shape=in_shape)
tape.watch(input_tensor)
resize_out = image_ops.resize_bilinear(input_tensor, out_shape[1:3])
with self.cached_session():
grad = tape.gradient(resize_out, [input_tensor])
self.assertEqual([None], grad)
def _gpuVsCpuCase(self, in_shape, out_shape, align_corners,
half_pixel_centers, dtype):
grad = {}
for use_gpu in [False, True]:
grad[use_gpu] = self._getJacobians(
in_shape,
out_shape,
align_corners,
half_pixel_centers,
dtype=dtype,
use_gpu=use_gpu)
threshold = 1e-4
# Note that this is comparing both analytical and numerical Jacobians
self.assertAllClose(grad[False], grad[True], rtol=threshold, atol=threshold)
@parameterized.parameters({
'batch_size': 1,
'channel_count': 1
}, {
'batch_size': 2,
'channel_count': 3
}, {
'batch_size': 5,
'channel_count': 4
})
def testCompareGpuVsCpu(self, batch_size, channel_count):
smaller_shape = [batch_size, 4, 6, channel_count]
larger_shape = [batch_size, 8, 16, channel_count]
for params in self._itGen(smaller_shape, larger_shape):
self._gpuVsCpuCase(*params, dtype=np.float32)
def testCompareGpuVsCpuFloat64(self):
in_shape = [1, 5, 7, 1]
out_shape = [1, 9, 11, 1]
# Note that there is no 16-bit floating-point format registered for GPU
self._gpuVsCpuCase(
in_shape,
out_shape,
align_corners=True,
half_pixel_centers=False,
dtype=np.float64)
class ResizeBicubicOpTestBase(test.TestCase, parameterized.TestCase):
"""Tests resize bicubic ops."""
def testShapeIsCorrectAfterOp(self):
in_shape = [1, 2, 2, 1]
out_shape = [1, 4, 6, 1]
x = np.arange(0, 4).reshape(in_shape).astype(np.float32)
for align_corners in [True, False]:
input_tensor = constant_op.constant(x, shape=in_shape)
resize_out = image_ops.resize_bicubic(
input_tensor, out_shape[1:3], align_corners=align_corners)
with self.cached_session():
self.assertEqual(out_shape, list(resize_out.get_shape()))
resize_out = self.evaluate(resize_out)
self.assertEqual(out_shape, list(resize_out.shape))
def testGradFromResizeToLargerInBothDims(self):
in_shape = [1, 2, 3, 1]
out_shape = [1, 4, 6, 1]
x = np.arange(0, 6).reshape(in_shape).astype(np.float32)
input_tensor = constant_op.constant(x, shape=in_shape)
for align_corners in [True, False]:
def func(input_tensor, align_corners=align_corners):
return image_ops.resize_bicubic(
input_tensor, out_shape[1:3], align_corners=align_corners)
with self.cached_session():
err = gradient_checker_v2.max_error(
*gradient_checker_v2.compute_gradient(func, [input_tensor]))
self.assertLess(err, 1e-3)
def testGradFromResizeToSmallerInBothDims(self):
in_shape = [1, 4, 6, 1]
out_shape = [1, 2, 3, 1]
x = np.arange(0, 24).reshape(in_shape).astype(np.float32)
input_tensor = constant_op.constant(x, shape=in_shape)
for align_corners in [True, False]:
def func(input_tensor, align_corners=align_corners):
return image_ops.resize_bicubic(
input_tensor, out_shape[1:3], align_corners=align_corners)
with self.cached_session():
err = gradient_checker_v2.max_error(
*gradient_checker_v2.compute_gradient(func, [input_tensor]))
self.assertLess(err, 1e-3)
@parameterized.parameters(set((True, context.executing_eagerly())))
def testGradOnUnsupportedType(self, use_tape):
with test_util.AbstractGradientTape(use_tape=use_tape) as tape:
in_shape = [1, 4, 6, 1]
out_shape = [1, 2, 3, 1]
x = np.arange(0, 24).reshape(in_shape).astype(np.uint8)
input_tensor = constant_op.constant(x, shape=in_shape)
tape.watch(input_tensor)
resize_out = image_ops.resize_bicubic(input_tensor, out_shape[1:3])
with self.cached_session():
grad = tape.gradient(resize_out, [input_tensor])
self.assertEqual([None], grad)
class ScaleAndTranslateOpTestBase(test.TestCase):
"""Tests scale and translate op."""
def testGrads(self):
in_shape = [1, 2, 3, 1]
out_shape = [1, 4, 6, 1]
x = np.arange(0, 6).reshape(in_shape).astype(np.float32)
kernel_types = [
'lanczos1', 'lanczos3', 'lanczos5', 'gaussian', 'box', 'triangle',
'keyscubic', 'mitchellcubic'
]
scales = [(1.0, 1.0), (0.37, 0.47), (2.1, 2.1)]
translations = [(0.0, 0.0), (3.14, 1.19), (2.1, 3.1), (100.0, 200.0)]
for scale in scales:
for translation in translations:
for kernel_type in kernel_types:
for antialias in [True, False]:
with self.cached_session():
input_tensor = constant_op.constant(x, shape=in_shape)
def scale_trans(input_tensor,
scale=scale,
translation=translation,
kernel_type=kernel_type,
antialias=antialias):
# pylint: disable=cell-var-from-loop
return image_ops.scale_and_translate(
input_tensor,
out_shape[1:3],
scale=constant_op.constant(scale),
translation=constant_op.constant(translation),
kernel_type=kernel_type,
antialias=antialias)
err = gradient_checker_v2.max_error(
*gradient_checker_v2.compute_gradient(scale_trans,
[input_tensor]))
self.assertLess(err, 1e-3)
def testIdentityGrads(self):
"""Tests that gradients for a scale of 1.0 are all ones for some kernels."""
in_shape = [1, 2, 3, 1]
out_shape = [1, 4, 6, 1]
x = np.arange(0, 6).reshape(in_shape).astype(np.float32)
kernel_types = ['lanczos1', 'lanczos3', 'lanczos5', 'triangle', 'keyscubic']
scale = (1.0, 1.0)
translation = (0.0, 0.0)
antialias = True
for kernel_type in kernel_types:
with self.cached_session():
input_tensor = constant_op.constant(x, shape=in_shape)
with backprop.GradientTape() as tape:
tape.watch(input_tensor)
scale_and_translate_out = image_ops.scale_and_translate(
input_tensor,
out_shape[1:3],
scale=constant_op.constant(scale),
translation=constant_op.constant(translation),
kernel_type=kernel_type,
antialias=antialias)
grad = tape.gradient(scale_and_translate_out, input_tensor)[0]
grad_v = self.evaluate(grad)
self.assertAllClose(np.ones_like(grad_v), grad_v)
class CropAndResizeOpTestBase(test.TestCase):
def testShapeIsCorrectAfterOp(self):
batch = 2
image_height = 3
image_width = 4
crop_height = 4
crop_width = 5
depth = 2
num_boxes = 2
image_shape = [batch, image_height, image_width, depth]
crop_size = [crop_height, crop_width]
crops_shape = [num_boxes, crop_height, crop_width, depth]
image = np.arange(0, batch * image_height * image_width *
depth).reshape(image_shape).astype(np.float32)
boxes = np.array([[0, 0, 1, 1], [.1, .2, .7, .8]], dtype=np.float32)
box_ind = np.array([0, 1], dtype=np.int32)
crops = image_ops.crop_and_resize(
constant_op.constant(image, shape=image_shape),
constant_op.constant(boxes, shape=[num_boxes, 4]),
constant_op.constant(box_ind, shape=[num_boxes]),
constant_op.constant(crop_size, shape=[2]))
with self.session():
self.assertEqual(crops_shape, list(crops.get_shape()))
crops = self.evaluate(crops)
self.assertEqual(crops_shape, list(crops.shape))
def _randomUniformAvoidAnchors(self, low, high, anchors, radius, num_samples):
"""Generate samples that are far enough from a set of anchor points.
We generate uniform samples in [low, high], then reject those that are less
than radius away from any point in anchors. We stop after we have accepted
num_samples samples.
Args:
low: The lower end of the interval.
high: The upper end of the interval.
anchors: A list of length num_crops with anchor points to avoid.
radius: Distance threshold for the samples from the anchors.
num_samples: How many samples to produce.
Returns:
samples: A list of length num_samples with the accepted samples.
"""
self.assertTrue(low < high)
self.assertTrue(radius >= 0)
num_anchors = len(anchors)
# Make sure that at least half of the interval is not forbidden.
self.assertTrue(2 * radius * num_anchors < 0.5 * (high - low))
anchors = np.reshape(anchors, num_anchors)
samples = []
while len(samples) < num_samples:
sample = np.random.uniform(low, high)
if np.all(np.fabs(sample - anchors) > radius):
samples.append(sample)
return samples
def testGradRandomBoxes(self):
"""Test that the gradient is correct for randomly generated boxes.
The mapping is piecewise differentiable with respect to the box coordinates.
The points where the function is not differentiable are those which are
mapped to image pixels, i.e., the normalized y coordinates in
np.linspace(0, 1, image_height) and normalized x coordinates in
np.linspace(0, 1, image_width). Make sure that the box coordinates are
sufficiently far away from those rectangular grid centers that are points of
discontinuity, so that the finite difference Jacobian is close to the
computed one.
"""
np.random.seed(1) # Make it reproducible.
delta = 1e-3
radius = 2 * delta
low, high = -0.5, 1.5 # Also covers the case of extrapolation.
image_height = 4
for image_width in range(1, 3):
for crop_height in range(1, 3):
for crop_width in range(2, 4):
for depth in range(1, 3):
for num_boxes in range(1, 3):
batch = num_boxes
image_shape = [batch, image_height, image_width, depth]
crop_size = [crop_height, crop_width]
image = np.arange(0, batch * image_height * image_width *
depth).reshape(image_shape).astype(np.float32)
boxes = []
for _ in range(num_boxes):
# pylint: disable=unbalanced-tuple-unpacking
y1, y2 = self._randomUniformAvoidAnchors(
low, high, np.linspace(0, 1, image_height), radius, 2)
x1, x2 = self._randomUniformAvoidAnchors(
low, high, np.linspace(0, 1, image_width), radius, 2)
# pylint: enable=unbalanced-tuple-unpacking
boxes.append([y1, x1, y2, x2])
boxes = np.array(boxes, dtype=np.float32)
box_ind = np.arange(batch, dtype=np.int32)
image_tensor = constant_op.constant(image, shape=image_shape)
boxes_tensor = constant_op.constant(boxes, shape=[num_boxes, 4])
box_ind_tensor = constant_op.constant(box_ind, shape=[num_boxes])
def crop_resize(image_tensor, boxes_tensor):
# pylint: disable=cell-var-from-loop
return image_ops.crop_and_resize(
image_tensor, boxes_tensor, box_ind_tensor,
constant_op.constant(crop_size, shape=[2]))
with test_util.device(use_gpu=True):
with self.cached_session():
# pylint: disable=cell-var-from-loop
if (config.is_op_determinism_enabled() and
test_util.is_gpu_available()):
with self.assertRaises(errors_impl.UnimplementedError):
gradient_checker_v2.compute_gradient(
lambda x: crop_resize(x, boxes_tensor),
[image_tensor])
with self.assertRaises(errors_impl.UnimplementedError):
gradient_checker_v2.compute_gradient(
lambda x: crop_resize(image_tensor, x),
[boxes_tensor])
else:
err1 = gradient_checker_v2.max_error(
*gradient_checker_v2.compute_gradient(
lambda x: crop_resize(x, boxes_tensor),
[image_tensor]))
err2 = gradient_checker_v2.max_error(
*gradient_checker_v2.compute_gradient(
lambda x: crop_resize(image_tensor, x),
[boxes_tensor]))
err = max(err1, err2)
self.assertLess(err, 2e-3)
@test_util.run_all_in_graph_and_eager_modes
class RGBToHSVOpTestBase(test.TestCase):
TYPES = [np.float32, np.float64]
def testShapeIsCorrectAfterOp(self):
in_shape = [2, 20, 30, 3]
out_shape = [2, 20, 30, 3]
for nptype in self.TYPES:
x = np.random.randint(0, high=255, size=[2, 20, 30, 3]).astype(nptype)
rgb_input_tensor = constant_op.constant(x, shape=in_shape)
hsv_out = gen_image_ops.rgb_to_hsv(rgb_input_tensor)
with self.cached_session():
self.assertEqual(out_shape, list(hsv_out.get_shape()))
hsv_out = self.evaluate(hsv_out)
self.assertEqual(out_shape, list(hsv_out.shape))
def testRGBToHSVGradSimpleCase(self):
def f(x):
return gen_image_ops.rgb_to_hsv(x)
for nptype in self.TYPES:
# Building a simple input tensor to avoid any discontinuity
x = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.8,
0.9]]).astype(nptype)
rgb_input_tensor = constant_op.constant(x, shape=x.shape)
# Computing Analytical and Numerical gradients of f(x)
analytical, numerical = gradient_checker_v2.compute_gradient(
f, [rgb_input_tensor])
self.assertAllClose(numerical, analytical, atol=1e-4)
def testRGBToHSVGradRandomCase(self):
def f(x):
return gen_image_ops.rgb_to_hsv(x)
np.random.seed(0)
# Building a simple input tensor to avoid any discontinuity
x = np.random.rand(1, 5, 5, 3).astype(np.float32)
rgb_input_tensor = constant_op.constant(x, shape=x.shape)
# Computing Analytical and Numerical gradients of f(x)
self.assertLess(
gradient_checker_v2.max_error(
*gradient_checker_v2.compute_gradient(f, [rgb_input_tensor])), 1e-4)
def testRGBToHSVGradSpecialCaseRGreatest(self):
# This test tests a specific subset of the input space
# with a dummy function implemented with native TF operations.
in_shape = [2, 10, 20, 3]
def f(x):
return gen_image_ops.rgb_to_hsv(x)
def f_dummy(x):
# This dummy function is an implementation of RGB to HSV using
# primitive TF functions for one particular case when R>G>B.
r = x[..., 0]
g = x[..., 1]
b = x[..., 2]
# Since MAX = r and MIN = b, we get the following h,s,v values.
v = r
s = 1 - math_ops.div_no_nan(b, r)
h = 60 * math_ops.div_no_nan(g - b, r - b)
h = h / 360
return array_ops_stack.stack([h, s, v], axis=-1)
# Building a custom input tensor where R>G>B
x_reds = np.ones((in_shape[0], in_shape[1], in_shape[2])).astype(np.float32)
x_greens = 0.5 * np.ones(
(in_shape[0], in_shape[1], in_shape[2])).astype(np.float32)
x_blues = 0.2 * np.ones(
(in_shape[0], in_shape[1], in_shape[2])).astype(np.float32)
x = np.stack([x_reds, x_greens, x_blues], axis=-1)
rgb_input_tensor = constant_op.constant(x, shape=in_shape)
# Computing Analytical and Numerical gradients of f(x)
analytical, numerical = gradient_checker_v2.compute_gradient(
f, [rgb_input_tensor])
# Computing Analytical and Numerical gradients of f_dummy(x)
analytical_dummy, numerical_dummy = gradient_checker_v2.compute_gradient(
f_dummy, [rgb_input_tensor])
self.assertAllClose(numerical, analytical, atol=1e-4)
self.assertAllClose(analytical_dummy, analytical, atol=1e-4)
self.assertAllClose(numerical_dummy, numerical, atol=1e-4)
if __name__ == '__main__':
test.main()
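The `f_dummy` helper above hard-codes the `max == R`, `min == B` branch of the RGB-to-HSV conversion. Outside TensorFlow, the same formulas can be cross-checked against the standard library's `colorsys` module; this is a minimal sketch, and the sample channel values are illustrative only:

```python
# Sanity check of the R > G > B branch used by f_dummy, against colorsys.
import colorsys

def hsv_r_greatest(r, g, b):
    """HSV for the special case max == r and min == b (i.e. r > g > b)."""
    v = r
    s = 1.0 - b / r
    h = (60.0 * (g - b) / (r - b)) / 360.0  # hue normalized to [0, 1]
    return h, s, v

# Same channel ordering as the test's custom input (R=1.0, G=0.5, B=0.2).
expected = colorsys.rgb_to_hsv(1.0, 0.5, 0.2)
computed = hsv_r_greatest(1.0, 0.5, 0.2)
assert all(abs(e - c) < 1e-12 for e, c in zip(expected, computed))
```

For (1.0, 0.5, 0.2) both paths agree on (h, s, v) = (0.0625, 0.8, 1.0), which is the regime the `testRGBToHSVGradSpecialCaseRGreatest` input tensor is built to stay in.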
|
tensorflowREPO_NAMEtensorflowPATH_START.@tensorflow_extracted@tensorflow-master@tensorflow@python@ops@image_grad_test_base.py@.PATH_END.py
|
{
"filename": "example_client.py",
"repo_name": "xraypy/xraylarch",
"repo_path": "xraylarch_extracted/xraylarch-master/examples/rpc/example_client.py",
"type": "Python"
}
|
#!/usr/bin/env python
from six.moves.xmlrpc_client import ServerProxy
import time
import json
from larch.utils.jsonutils import decode4js
s = ServerProxy('http://127.0.0.1:4966')
print('Available Methods from XML-RPC server: ', s.system.listMethods())
s.larch('m = 222.3')
s.larch('g = group(x=linspace(0, 10, 11))')
s.larch('g.z = cos(g.x)')
# show and print will be done in server process of course!!!
s.larch('show(g)')
s.larch('print( g.z[3:10])')
print( '== Messages:')
print( s.get_messages())
print( '==')
gx = decode4js(s.get_data('g.z'))
print( 'm = ', s.get_data('m'))
print( 'g.x = ', s.get_data('g.x'))
print('gx = ', gx, type(gx), gx.dtype)
# could tell server to exit!
# s.exit()
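The client above assumes a larch server already listening on port 4966. The same request/response pattern can be sketched self-contained with the standard library (Python 3 module names); the throwaway server and its `echo` method below are illustrative only, not part of the larch API:

```python
# Minimal XML-RPC round trip using only the standard library.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def rpc_echo_roundtrip(text):
    """Start a throwaway server on a free port, call it once, shut it down."""
    server = SimpleXMLRPCServer(('127.0.0.1', 0), logRequests=False)
    server.register_introspection_functions()       # enables system.listMethods()
    server.register_function(lambda s: s, 'echo')   # hypothetical demo method
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    try:
        proxy = ServerProxy('http://127.0.0.1:%d' % server.server_address[1])
        assert 'echo' in proxy.system.listMethods()
        return proxy.echo(text)
    finally:
        server.shutdown()

print(rpc_echo_roundtrip('hello'))  # → hello
```

Binding to port 0 lets the OS pick a free port, which is read back from `server.server_address`; the real larch server pins port 4966 instead so clients know where to connect.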
|
xraypyREPO_NAMExraylarchPATH_START.@xraylarch_extracted@xraylarch-master@examples@rpc@example_client.py@.PATH_END.py
|
{
"filename": "data_factory.py",
"repo_name": "glue-viz/glue",
"repo_path": "glue_extracted/glue-main/glue/plugins/dendro_viewer/data_factory.py",
"type": "Python"
}
|
"""
Load files created by the astrodendro package.
astrodendro must be installed in order to use this loader
"""
import numpy as np
from astrodendro import Dendrogram
from glue.core.data_factories.hdf5 import is_hdf5
from glue.core.data_factories.fits import is_fits
from glue.core.data_factories.helpers import data_label
from glue.core.data import Data
from glue.config import data_factory
__all__ = ['load_dendro', 'is_dendro']
def is_dendro(file, **kwargs):
if is_hdf5(file):
import h5py
# Use a context manager so the HDF5 file handle is closed after the check.
with h5py.File(file, 'r') as f:
return 'data' in f and 'index_map' in f and 'newick' in f
elif is_fits(file):
from astropy.io import fits
with fits.open(file, ignore_missing_end=True) as hdulist:
# For recent versions of astrodendro the HDUs have a recognizable
# set of names.
if 'DATA' in hdulist and 'INDEX_MAP' in hdulist and 'NEWICK' in hdulist:
return True
# For older versions of astrodendro, the HDUs did not have names
# Here we use heuristics to figure out if this is likely to be a
# dendrogram. Specifically, there should be three HDU extensions.
# The primary HDU should be empty, HDU 1 and HDU 2 should have
# matching shapes, and HDU 3 should have a 1D array. Also, if the
# HDUs do have names then this is not a dendrogram since the old
# files did not have names
# This branch can be removed once we think most dendrogram files
# will have HDU names.
if len(hdulist) != 4:
return False
if hdulist[1].name != '' or hdulist[2].name != '' or hdulist[3].name != '':
return False
if hdulist[0].data is not None:
return False
if hdulist[1].data is None or hdulist[2].data is None or hdulist[3].data is None:
return False
if hdulist[1].data.shape != hdulist[2].data.shape:
return False
if hdulist[3].data.ndim != 1:
return False
# We're probably ok, so return True
return True
else:
return False
@data_factory(label='Dendrogram', identifier=is_dendro, priority=1000)
def load_dendro(filename):
"""
Load a dendrogram saved by the astrodendro package
:param file: Path to a dendrogram file
:returns: A list of 2 glue Data objects: the original dataset, and dendrogram.
"""
label = data_label(filename)
dg = Dendrogram.load_from(filename)
structs = np.arange(len(dg))
parent = np.array([dg[i].parent.idx
if dg[i].parent is not None else -1
for i in structs])
height = np.array([dg[i].height for i in structs])
pk = np.array([dg[i].get_peak(True)[1] for i in structs])
dendro = Data(parent=parent,
height=height,
peak=pk,
label="{} [dendrogram]".format(label))
im = Data(intensity=dg.data,
structure=dg.index_map,
label="{} [data]".format(label))
im.join_on_key(dendro, 'structure', dendro.pixel_component_ids[0])
return [dendro, im]
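`load_dendro()` above flattens the dendrogram into a `parent` array (the index of each structure's parent, with -1 marking trunks). A small pure-Python sketch of how such an array is walked back into a tree; the five-element sample array is made up for illustration:

```python
# Rebuild child lists from a flat parent-index array like the one
# load_dendro() stores in the 'parent' component. -1 marks a trunk.
def children_from_parents(parent):
    children = {i: [] for i in range(len(parent))}
    roots = []
    for idx, par in enumerate(parent):
        if par == -1:
            roots.append(idx)
        else:
            children[par].append(idx)
    return roots, children

# Hypothetical 5-structure dendrogram: 0 is the trunk, 1 and 2 branch
# from it, and 3 and 4 are leaves hanging off branch 1.
roots, children = children_from_parents([-1, 0, 0, 1, 1])
print(roots)        # → [0]
print(children[1])  # → [3, 4]
```

This flat encoding is what lets glue store the hierarchy as an ordinary numeric component alongside `height` and `peak`.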
|
glue-vizREPO_NAMEgluePATH_START.@glue_extracted@glue-main@glue@plugins@dendro_viewer@data_factory.py@.PATH_END.py
|
{
"filename": "mosaic.py",
"repo_name": "vterron/lemon",
"repo_path": "lemon_extracted/lemon-master/mosaic.py",
"type": "Python"
}
|
#! /usr/bin/env python2
# Copyright (c) 2012 Victor Terron. All rights reserved.
# Institute of Astrophysics of Andalusia, IAA-CSIC
#
# This file is part of LEMON.
#
# LEMON is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from __future__ import division
description = """
Use the Montage (Montage Astronomical Image Mosaic Engine) toolkit [1] to
assemble the input FITS images into a composite mosaic that preserves their
flux calibration and positional fidelity. This is a high-level interface to
mosaic(), a convenience function of the montage-wrapper module which runs the
mosaicking process from start to end. The input FITS images, all of which must
have been astrometrically calibrated, are reprojected onto a common coordinate
system and combined into a mosaic.
Montage is an extremely powerful toolkit, whose algorithms preserve the
astrometric and photometric accuracy of the input images and perform background
rectification in such a fashion that its impact on the photometric quality of
the data is almost negligible [2]. For example, according to the results of an
accuracy testing, 99.7% of the sources in re-projected synthetic images were
within 0.1% of the original flux [3]. There are, however, certain assumptions
of which you should be aware. For example, Montage assumes that the input
images are all calibrated to an absolute energy scale and that any
discrepancies between the images are due to variations in their background
levels that are terrestrial or instrumental in origin [4].
Note that montage_wrapper is not a replacement for the IPAC Montage mosaicking
software, whose commands (such as mAdd or mProject) must be present in PATH.
[1] http://montage.ipac.caltech.edu/
[2] http://adsabs.harvard.edu/abs/2003ASPC..295..343B
[3] http://montage.ipac.caltech.edu/docs/accuracy.html
[4] http://montage.ipac.caltech.edu/docs/algorithms.html
"""
import atexit
import montage_wrapper as montage
import multiprocessing
import optparse
import os
import os.path
import shutil
import sys
import tempfile
# LEMON modules
import customparser
import defaults
import fitsimage
import keywords
import style
import util
parser = customparser.get_parser(description)
parser.usage = "%prog [OPTION]... INPUT_IMGS... OUTPUT_IMG"
parser.add_option(
"--overwrite",
action="store_true",
dest="overwrite",
help="overwrite output image if it already exists",
)
parser.add_option(
"--background-match",
action="store_true",
dest="background_match",
help="include a background-matching step, thus removing "
"any discrepancies in brightness or background. Note that, "
"although an amazing feature of Montage, this makes the "
"assembling of the images take remarkably longer.",
)
parser.add_option(
"--no-reprojection",
action="store_false",
dest="reproject",
default=True,
help="do not reproject the mosaic so that North is up.",
)
parser.add_option(
"--combine",
action="store",
dest="combine",
default="mean",
help="how FITS images are combined - this should be one "
"of 'mean', 'median', or 'count'. For more details on how "
"Montage performs co-addition, see [4] [default: %default]",
)
parser.add_option(
"--filter",
action="store",
type="passband",
dest="filter",
default=None,
help="do not combine all the FITS files given as input, "
"but only those taken in this photometric filter. " + defaults.desc["filter"],
)
parser.add_option(
"--cores",
action="store",
type="int",
dest="ncores",
default=multiprocessing.cpu_count(),
help="the number of MPI (Message Passing Interface) "
"processes to use with the Montage commands that support "
"parallelization. Note that this requires that the MPI "
"versions of the Montage commands be installed, which is "
"not the case by default. This option defaults to the "
"number of CPUs in the system, which are automatically "
"detected [default: %default]",
)
key_group = optparse.OptionGroup(parser, "FITS Keywords", keywords.group_description)
key_group.add_option(
"--filterk",
action="store",
type="str",
dest="filterk",
default=keywords.filterk,
help="keyword for the name of the filter of the "
"observation. This keyword is not necessary (and, "
"therefore, its value irrelevant) if the --filter "
"option is not used, since it is only in that "
"scenario that we have to read the filter from "
"the header of each FITS image [default: %default]",
)
parser.add_option_group(key_group)
customparser.clear_metavars(parser)
def main(arguments=None):
"""main() function, encapsulated in a method to allow for easy invocation.
This method follows Guido van Rossum's suggestions on how to write Python
main() functions in order to make them more flexible. By encapsulating the
main code of the script in a function and making it take an optional
argument the script can be called not only from other modules, but also
from the interactive Python prompt.
Guido van Rossum - Python main() functions:
http://www.artima.com/weblogs/viewpost.jsp?thread=4829
Keyword arguments:
arguments - the list of command line arguments passed to the script.
"""
if arguments is None:
arguments = sys.argv[1:] # ignore argv[0], the script name
(options, args) = parser.parse_args(args=arguments)
# Print the help and abort the execution if there are fewer than three
# positional arguments left, as the user must specify at least two FITS
# images and the output mosaic into which they are assembled.
if len(args) < 3:
parser.print_help()
return 2 # used for command line syntax errors
else:
assert len(args) >= 3
input_paths = set(args[:-1])
output_path = args[-1]
# Refuse to overwrite the output FITS file unless explicitly instructed to
# do so. Note that, if the --overwrite option is given, we do not need to
# delete the existing file: it will be silently overwritten when the output
# of montage.mosaic() is shutil.move()'d to the output path.
if os.path.exists(output_path):
if not options.overwrite:
msg = "%sError. The output file '%s' already exists."
print msg % (style.prefix, output_path)
print style.error_exit_message
return 1
# Workaround for a bug in montage.mosaic() that raises an error ('mpirun
# has exited due to process rank [...] without calling "finalize"...') if
# mpi = True and background_match = True. Until this is fixed, we can only
# use one core if the --background-match option is given by the user.
if options.background_match and options.ncores > 1:
options.ncores = 1
for msg in (
"{0}Warning: --background-match is incompatible with --cores > 1.",
"{0}Setting the --cores option to a value of one.",
"{0}This is a workaround for a known bug in montage-wrapper:",
"{0}https://github.com/astropy/montage-wrapper/issues/18",
):
print msg.format(style.prefix)
print
# Map each filter to a list of FITSImage objects
files = fitsimage.InputFITSFiles()
msg = "%sMaking sure the %d input paths are FITS images..."
print msg % (style.prefix, len(input_paths))
util.show_progress(0.0)
for index, path in enumerate(input_paths):
# fitsimage.FITSImage.__init__() raises fitsimage.NonStandardFITS if
# one of the paths is not a standard-conforming FITS file.
try:
img = fitsimage.FITSImage(path)
# If we do not need to know the photometric filter (because the
# --filter was not given) do not read it from the FITS header.
# Instead, use None. This means that 'files', a dictionary, will
# only have a key, None, mapping to all the input FITS images.
if options.filter:
pfilter = img.pfilter(options.filterk)
else:
pfilter = None
files[pfilter].append(img)
except fitsimage.NonStandardFITS:
print
msg = "'%s' is not a standard FITS file"
raise fitsimage.NonStandardFITS(msg % path)
percentage = (index + 1) / len(input_paths) * 100
util.show_progress(percentage)
print # progress bar doesn't include newline
# The --filter option allows the user to specify which FITS files, among
# all those received as input, must be combined: only those images taken
# in the options.filter photometric filter.
if options.filter:
msg = "%s%d different photometric filters were detected:"
print msg % (style.prefix, len(files.keys()))
for pfilter, images in sorted(files.iteritems()):
msg = "%s %s: %d files (%.2f %%)"
percentage = len(images) / len(files) * 100
print msg % (style.prefix, pfilter, len(images), percentage)
msg = "%sIgnoring images not taken in the '%s' photometric filter..."
print msg % (style.prefix, options.filter),
sys.stdout.flush()
discarded = 0
for pfilter, images in files.items():
if pfilter != options.filter:
discarded += len(images)
del files[pfilter]
if not files:
print
msg = "%sError. No image was taken in the '%s' filter."
print msg % (style.prefix, options.filter)
print style.error_exit_message
return 1
else:
print "done."
msg = "%s%d images taken in the '%s' filter, %d were discarded."
print msg % (style.prefix, len(files), options.filter, discarded)
# montage.mosaic() silently ignores those FITS images that have no WCS
# information in their headers, and also raises a rather cryptic exception
# (mMakeHdr: Invalid table file) if none of them has been astrometrically
# solved. Instead of ignoring some images without warning or showing a
# confusing error message that makes it almost impossible to understand
# what may be failing, use FITSImage.center_wcs() to make sure that all the
# images have WCS information, raising NoWCSInformationError otherwise.
for img in files:
# May raise NoWCSInformationError
img.center_wcs()
# montage.mosaic() requires as first argument the directory containing the
# input FITS images but, in order to maintain the same syntax across all
# LEMON commands, we receive them as command-line arguments. Thus, create a
# temporary directory and symlink from it the input images. Hard links are
# not an option because os.link() will raise "OSError: [Errno 18] Invalid
# cross-device link" if the temporary directory is created in a different
# partition.
pid = os.getpid()
suffix = "_LEMON_%d_mosaic" % pid
kwargs = dict(suffix=suffix + "_input")
input_dir = tempfile.mkdtemp(**kwargs)
atexit.register(util.clean_tmp_files, input_dir)
for img in files:
path = img.path
source = os.path.abspath(path)
basename = os.path.basename(path)
link_name = os.path.join(input_dir, basename)
os.symlink(source, link_name)
# The output of montage.mosaic() is another directory, to which several
# files are written, so we need the path to a second temporary directory.
# Delete it before calling mosaic(), as otherwise it will raise IOError
# ("Output directory already exists").
kwargs = dict(suffix=suffix + "_output")
output_dir = tempfile.mkdtemp(**kwargs)
atexit.register(util.clean_tmp_files, output_dir)
os.rmdir(output_dir)
kwargs = dict(
background_match=options.background_match,
combine=options.combine,
bitpix=-64,
)
if options.ncores > 1:
kwargs["mpi"] = True # use MPI whenever possible
kwargs["n_proc"] = options.ncores # number of MPI processes
montage.mosaic(input_dir, output_dir, **kwargs)
# montage.mosaic() writes several files to the output directory, but we are
# only interested in one of them: 'mosaic.fits', the mosaic FITS image.
MOSAIC_OUTPUT = "mosaic.fits"
src = os.path.join(output_dir, MOSAIC_OUTPUT)
if options.reproject:
print "%sReproject mosaic to point North..." % style.prefix,
sys.stdout.flush()
kwargs = dict(north_aligned=True, silent_cleanup=True)
montage.reproject(src, output_path, **kwargs)
print "done."
else:
# No reprojection, move mosaic to the output path
shutil.move(src, output_path)
print "%sYou're done ^_^" % style.prefix
return 0
if __name__ == "__main__":
sys.exit(main())
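The temporary-directory staging done above (symlinking the input images so Montage sees a single input directory) can be sketched as a standalone helper. This is an illustrative sketch, not part of LEMON; `stage_inputs` and its suffix are made-up names.

```python
import atexit
import os
import shutil
import tempfile

def stage_inputs(paths, suffix="_example_mosaic_input"):
    """Symlink each input file into a fresh temporary directory.

    Symlinks are used instead of hard links because os.link() fails with
    EXDEV ("Invalid cross-device link") when the temporary directory is
    created on a different partition than the inputs.
    """
    tmp_dir = tempfile.mkdtemp(suffix=suffix)
    # Best-effort cleanup at interpreter exit (ignore_errors=True).
    atexit.register(shutil.rmtree, tmp_dir, True)
    for path in paths:
        source = os.path.abspath(path)
        link_name = os.path.join(tmp_dir, os.path.basename(path))
        os.symlink(source, link_name)
    return tmp_dir
```

Each link points back at the absolute path of the original file, so the directory can be handed to `montage.mosaic()` without copying any data.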
|
{
"filename": "mosaic.py",
"repo_name": "vterron/lemon",
"repo_path": "lemon_extracted/lemon-master/mosaic.py",
"type": "Python"
}
|
{
"filename": "uws_jdl.py",
"repo_name": "ParisAstronomicalDataCentre/OPUS",
"repo_path": "OPUS_extracted/OPUS-master/uws_server/uws_jdl.py",
"type": "Python"
}
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) 2016 by Mathieu Servillat
# Licensed under MIT (https://github.com/mservillat/uws-server/blob/master/LICENSE)
"""
Interfaces between UWS server and job description
"""
import os
import collections
import inspect
import copy
import json
import yaml
import lxml.etree as ETree
import glob
import datetime as dt
from .settings import *
# ---------
# jdl.content structure (class or dict?)
# job.jobname
'''
jdl.content = {
'name': 'test',
'annotation': 'test annotation',
'version': '1.2',
'group': '',
'type': '',
'subtype': '',
'doculink': '', # 'url' --> 'doculink'
'contact_name': 'contact name',
'contact_email': 'contact@email.com',
'parameters': {},
'generated': {},
'used': {}, # add
'executionduration': 1,
'quote': 1,
}
parameters[pname] = {
'type': p.get('type'), # --> should be changed to 'datatype'
'required': p.get('required'), # type="no_query" in VOTable
'default': p.get('default'), # value in VOTable
'unit': p.get('unit'), # add
'ucd': p.get('ucd'), # add
'utype': p.get('utype'), # add
'min': p.get('min'), # add
'max': p.get('max'), # add
'options': p.get('options'), # add
'annotation': list(p)[0].text,
}
used[pname] = { # add!
'default': r.get('default'),
'content_type': r.get('content_type'), # xtype in VOTable (?)
'annotation': list(r)[0].text,
}
generated[rname] = {
'default': r.get('default'),
'content_type': r.get('content_type'),
'annotation': list(r)[0].text,
}
'''
test_job = {
'name': 'test',
'annotation': 'test annotation',
'version': '1.2',
'group': '',
'type': '',
'subtype': '',
'doculink': '',
'contact_name': 'contact name',
'contact_email': 'contact@email.com',
'parameters': {'input': {'datatype': 'xs:string', 'default': 'test_'}},
'generated': {'output': {'content_type': 'text/plain'}},
'used': {},
'executionduration': 1,
'quote': 1,
}
# ---------
# Job Description Language
class JDLFile(object):
"""
Manage job description. This class defines required functions executed
by the UWS server: save(), read().
"""
def __init__(self, jdl_path=JDL_PATH, scripts_path=SCRIPTS_PATH):
self.content = dict(
control_parameters=CONTROL_PARAMETERS,
control_parameters_keys=CONTROL_PARAMETERS_KEYS,
)
self.extension = ''
self.jdl_path = '.'
self.scripts_path = '.'
# content = dict(
# control_parameters=CONTROL_PARAMETERS,
# control_parameters_keys=CONTROL_PARAMETERS_KEYS,
# )
# extension = ''
# jdl_path = '.'
# scripts_path = '.'
def _get_filename(self, jobname, jobid=None):
fn = '{}/{}{}'.format(self.jdl_path, jobname, self.extension)
if jobid:
fn_jobid = '{}/{}/{}{}'.format(JOBDATA_PATH, jobid, jobname, self.extension)
if os.path.isfile(fn_jobid):
fn = fn_jobid
# logger.info('JDL filename: ' + fn)
return fn
def get_jobnames(self):
flist = glob.glob('{}/*{}'.format(self.jdl_path, self.extension))
# Check if JDL file exists on server?
jobnames_jdl = [f.split('/')[-1].split(self.extension)[0] for f in flist]
jobnames_all = [j for j in jobnames_jdl if os.path.isfile('{}/{}.sh'.format(self.scripts_path, j))]
return jobnames_all
def save(self, jobname):
"""Save job description to file"""
pass
def read(self, jobname):
"""Read job description from file"""
pass
def valid_xml_char_ordinal(self, c):
codepoint = ord(c)
# conditions ordered by presumed frequency
return (
0x20 <= codepoint <= 0xD7FF or
codepoint in (0x9, 0xA, 0xD) or
0xE000 <= codepoint <= 0xFFFD or
0x10000 <= codepoint <= 0x10FFFF
)
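The predicate above implements the valid-character ranges of the XML 1.0 specification. A small standalone version (hypothetical names; a sketch of the same checks) can be used to scrub strings before handing them to an XML serializer:

```python
def valid_xml_char(c):
    """True if c is a character allowed by the XML 1.0 spec."""
    codepoint = ord(c)
    return (
        0x20 <= codepoint <= 0xD7FF
        or codepoint in (0x9, 0xA, 0xD)
        or 0xE000 <= codepoint <= 0xFFFD
        or 0x10000 <= codepoint <= 0x10FFFF
    )

def strip_invalid_xml(text):
    """Drop characters that would make an XML document ill-formed."""
    return ''.join(c for c in text if valid_xml_char(c))

# NUL is rejected, but tab (0x9) is one of the allowed control chars:
# strip_invalid_xml('a\x00b\tc') -> 'ab\tc'
```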
def save_script(self, jobname, script):
script_fname = '{}/{}.sh'.format(self.scripts_path, jobname)
with open(script_fname, 'w') as f:
f.write(script.replace('\r', ''))
logger.info('Job script saved: ' + script_fname)
def read_script(self, jobname):
script_fname = '{}/{}.sh'.format(self.scripts_path, jobname)
if os.path.isfile(script_fname):
with open(script_fname, 'r') as f:
self.content['script'] = f.read()
else:
logger.warning('Script not found for {}'.format(jobname))
def set_from_post(self, post, user):
#now = dt.datetime.now()
# Read form
keys = list(post.keys())
jobname = post.get('name').split('/')[-1]
# Create parameters dict
params = collections.OrderedDict()
iparam = 1
while 'param_name_' + str(iparam) in keys:
pname = post.get('param_name_' + str(iparam), '')
if pname:
params[pname] = {
'datatype': post.get('param_datatype_' + str(iparam), ''),
'default': post.get('param_default_' + str(iparam), ''),
'required': (post.get('param_required_' + str(iparam), '') == 'on'),
'annotation': post.get('param_annotation_' + str(iparam), ''),
}
poptions = post.get('param_options_' + str(iparam), '')
if poptions:
params[pname]['options'] = poptions
patts = post.get('param_attributes_' + str(iparam), '')
if patts:
for patt in patts.split(' '):
if '=' in patt:
pattk, pattv = patt.split('=')
params[pname][pattk] = pattv
iparam += 1
# Create used dict
used = collections.OrderedDict()
iused = 1
while 'used_name_' + str(iused) in keys:
pname = post.get('used_name_' + str(iused), '')
if pname:
ptype = post.get('used_contenttype_' + str(iused), '')
# TODO: do a getall for all options
pisfile = post.get('used_isfile_' + str(iused), '')
purl = post.get('used_url_' + str(iused), '')
if pisfile == 'File':
# '$ID' is a literal placeholder, substituted later by the server
purl = 'file://$ID'
used[pname] = {
'content_type': ptype, # ', '.join(ptype),
'multiplicity': post.get('used_multiplicity_' + str(iused), ''),
'default': post.get('used_default_' + str(iused), ''),
'annotation': post.get('used_annotation_' + str(iused), ''),
'url': purl,
}
iused += 1
# Create results dict
results = collections.OrderedDict()
iresult = 1
while 'generated_name_' + str(iresult) in keys:
rname = post.get('generated_name_' + str(iresult), '')
if rname:
ptype = post.get('generated_contenttype_' + str(iresult), '')
results[rname] = {
'content_type': ptype,
'multiplicity': post.get('generated_multiplicity_' + str(iresult), ''),
'default': post.get('generated_default_' + str(iresult), ''),
'annotation': post.get('generated_annotation_' + str(iresult), ''),
}
results.move_to_end(rname)
iresult += 1
# Create job.content structure
self.content.update({
'name': jobname,
'annotation': post.get('annotation', jobname),
#'description': post.get('description'),
'doculink': post.get('doculink', ''),
'url': post.get('url', ''),
'group': post.get('group', ''),
'type': post.get('type', ''),
'subtype': post.get('subtype', ''),
'version': post.get('version', '') or '1', # now.strftime(DT_FMT),
'contact_name': post.get('contact_name', '') or user.name,
'contact_email': post.get('contact_email', ''),
'parameters': params,
'generated': results,
'used': used,
'executionDuration': post.get('executionDuration', EXECUTION_DURATION_DEF),
'quote': post.get('quote', ''),
'script': post.get('script', ''),
})
class JSONFile(JDLFile):
def __init__(self, jdl_path=JDL_PATH, scripts_path=SCRIPTS_PATH):
self.content = dict(
control_parameters=CONTROL_PARAMETERS,
control_parameters_keys=CONTROL_PARAMETERS_KEYS,
)
self.extension = '.json'
self.jdl_path = os.path.join(jdl_path, 'json')
self.scripts_path = scripts_path
def save(self, jobname):
"""Save job description to file"""
raw_jobname = jobname.split('/')[-1] # remove tmp/ prefix
js = json.dumps(self.content, indent=4)
jdl_fname = self._get_filename(jobname)
with open(jdl_fname, 'w') as f:
f.write(js)
logger.info('JSON saved: ' + jdl_fname)
# Write script file
self.save_script(jobname, self.content['script'])
def read(self, jobname):
"""Read job description from file"""
raw_jobname = jobname.split('/')[-1] # remove tmp/ prefix
fname = self._get_filename(jobname)
with open(fname, 'r') as f:
#self.content = json.load(f)
self.content.update(yaml.safe_load(f))
# Load script in job_def
if 'script' not in self.content:
self.read_script(jobname)
class VOTFile(JDLFile):
datatype_vo2xs = {
"boolean": 'xs:boolean',
"unsignedByte": 'xs:unsignedByte',
"short": 'xs:short',
"int": 'xs:integer',
"long": 'xs:long',
"char": 'xs:string',
"float": 'xs:float',
"double": 'xs:double'
}
datatype_xs2vo = {
"xs:boolean": 'boolean',
"xs:unsignedByte": 'unsignedByte',
"xs:short": 'short',
"xs:int": 'int',
"xs:integer": 'int',
"xs:long": 'long',
"xs:string": 'char',
"xs:float": 'float',
"xs:double": 'double',
'xs:anyURI': 'char',
'file': 'file'
}
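The two tables above translate between the XSD datatype names used in the JDL dictionaries and VOTable primitive datatypes. A quick consistency check (a sketch duplicating the mappings, not importing the class) shows that every VOTable type round-trips through its XSD counterpart:

```python
datatype_vo2xs = {
    "boolean": "xs:boolean", "unsignedByte": "xs:unsignedByte",
    "short": "xs:short", "int": "xs:integer", "long": "xs:long",
    "char": "xs:string", "float": "xs:float", "double": "xs:double",
}
datatype_xs2vo = {
    "xs:boolean": "boolean", "xs:unsignedByte": "unsignedByte",
    "xs:short": "short", "xs:int": "int", "xs:integer": "int",
    "xs:long": "long", "xs:string": "char", "xs:float": "float",
    "xs:double": "double", "xs:anyURI": "char", "file": "file",
}

# Every VOTable primitive maps to an XSD type that maps straight back;
# the reverse direction is lossy ("xs:int" and "xs:integer" both become
# "int", and "xs:anyURI" is serialized as "char").
for vo, xs in datatype_vo2xs.items():
    assert datatype_xs2vo[xs] == vo
```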
def __init__(self, jdl_path=JDL_PATH, scripts_path=SCRIPTS_PATH):
self.content = dict(
control_parameters=CONTROL_PARAMETERS,
control_parameters_keys=CONTROL_PARAMETERS_KEYS,
)
self.extension = '_vot.xml'
self.jdl_path = os.path.join(jdl_path, 'votable')
self.scripts_path = scripts_path
self.xmlns_uris = {
'xmlns': 'http://www.ivoa.net/xml/VOTable/v1.3',
'xmlns:xsi': 'http://www.w3.org/2001/XMLSchema-instance',
'xsi:schemaLocation':
'http://www.ivoa.net/xml/VOTable/v1.3 http://www.ivoa.net/xml/VOTable/v1.3',
}
def save(self, jobname):
"""Save job description to VOTable file"""
raw_jobname = jobname.split('/')[-1] # remove tmp/ prefix
# VOTable root
xmlns = self.xmlns_uris['xmlns']
xsi = self.xmlns_uris['xmlns:xsi']
jdl_tree = ETree.Element('VOTABLE', attrib={
'version': '1.3',
'{' + xsi + '}schemaLocation':
'http://www.ivoa.net/xml/VOTable/v1.3 http://www.ivoa.net/xml/VOTable/v1.3',
}, nsmap={'xsi': xsi, None: xmlns})
resource = ETree.SubElement(jdl_tree, 'RESOURCE', attrib={
'ID': raw_jobname,
'name': raw_jobname,
'type': "meta",
'utype': "voprov:ActivityDescription"
})
# Job attributes
if self.content['annotation']:
ETree.SubElement(resource, 'DESCRIPTION').text = self.content['annotation'] # .decode() # not needed in Python 3
# TODO: automatic list of attributes from jdl.content
for key in ['doculink', 'type', 'subtype', 'version']:
#'<PARAM name="{key}" datatype="char" arraysize="*" value="{value}" utype="voprov:ActivityDescription.{key}"/>'.format(key=key, value=self.content.get(key, '')))
ETree.SubElement(resource, 'PARAM', attrib={
'name': key,
'value': str(self.content.get(key, '')),
'arraysize': "*",
'datatype': "char",
'utype': 'voprov:ActivityDescription.{}'.format(key),
})
# Contact information
for key in ['name', 'email']:
ETree.SubElement(resource, 'PARAM', attrib={
'name': 'contact_' + key,
'value': self.content.get('contact_{}'.format(key), ''),
'arraysize': "*",
'datatype': "char",
'utype': 'voprov:Agent.{}'.format(key),
})
# UWS parameters
for key in ['executionDuration', 'quote']:
ETree.SubElement(resource, 'PARAM', attrib={
'name': key,
'value': str(self.content.get(key, 1)),
'datatype': "int",
'utype': 'uws:Job.{}'.format(key),
})
# Script
script = ETree.SubElement(resource, 'PARAM', attrib={
'name': 'script',
'value': self.content.get('script', ''),
'arraysize': "*",
'datatype': "char",
})
#script_d = ETree.SubElement(script, 'DESCRIPTION').text = ETree.CDATA(self.content['script'])
#logger.debug(script_d.text)
# Python 3
# Insert groups
group_params = ETree.SubElement(resource, 'GROUP', attrib={
'name': "InputParams",
})
group_used = ETree.SubElement(resource, 'GROUP', attrib={
'name': "Used",
})
group_generated = ETree.SubElement(resource, 'GROUP', attrib={
'name': "Generated",
})
# Prepare InputParams group
if 'parameters' in self.content:
for pname, p in self.content['parameters'].items():
param_attrib = {
'ID': pname,
'name': pname,
'datatype': self.datatype_xs2vo[p['datatype']],
'value': p.get('default'),
}
if param_attrib['datatype'] == 'file':
param_attrib['xtype'] = 'application/octet-stream'
param_attrib['datatype'] = 'char'
if param_attrib['datatype'] == 'char':
param_attrib['arraysize'] = '*'
if str(p['required']).lower() == 'false':
param_attrib['type'] = 'no_query'
for attr in ['unit', 'ucd', 'utype']:
if p.get(attr, False):
param_attrib[attr] = p[attr]
param = ETree.Element('PARAM', attrib=param_attrib)
pdesc = p.get('annotation', '')
# .encode(encoding='utf-8', errors='ignore')
# pdesc_clean = ''.join(c for c in pdesc if self.valid_xml_char_ordinal(c))
# logger.debug(pdesc)
# logger.debug(pdesc_clean)
ETree.SubElement(param, 'DESCRIPTION').text = pdesc
if p.get('min', False) or p.get('max', False) or p.get('options', False):
values = ETree.SubElement(param, 'VALUES')
if p.get('min', False):
ETree.SubElement(values, 'MIN', attrib={'value': p['min']})
if p.get('max', False):
ETree.SubElement(values, 'MAX', attrib={'value': p['max']})
if p.get('options', False):
for o in p['options'].split(','):
ETree.SubElement(values, 'OPTION', attrib={'value': o})
group_params.append(param)
# Prepare used block
used_attr = [
'role',
'multiplicity',
'default',
'content_type',
'url',
]
used_utypes = {
'role': 'voprov:UsedDescription.role',
'multiplicity': 'voprov:UsedDescription.multiplicity',
'default': 'voprov:Entity.id',
'content_type': 'voprov:EntityDescription.content_type',
'url': 'voprov:EntityDescription.url',
}
if 'used' in self.content:
for pname, pdict in self.content['used'].items():
attrib={
'name': pname,
'utype': 'voprov:UsedDescription',
}
if pname in self.content['parameters']:
attrib['ref'] = pname
used = ETree.Element('GROUP', attrib=attrib)
ETree.SubElement(used, 'DESCRIPTION').text = pdict.get('annotation', '')
for edattr in used_attr:
ETree.SubElement(used, 'PARAM', attrib={
'name': edattr,
'value': pdict.get(edattr, ''),
'arraysize': "*",
'datatype': "char",
'utype': used_utypes.get(edattr, ''),
})
group_used.append(used)
# Prepare results block
gen_attr = [
'role',
'multiplicity',
'default',
'content_type'
]
gen_utypes = {
'role': 'voprov:WasGeneratedByDescription.role',
'multiplicity': 'voprov:WasGeneratedByDescription.multiplicity',
'default': 'voprov:Entity.id',
'content_type': 'voprov:EntityDescription.content_type',
'url': 'voprov:EntityDescription.url',
}
if 'generated' in self.content:
for rname, rdict in self.content['generated'].items():
attrib={
'name': rname,
'utype': 'voprov:WasGeneratedBy',
}
if rname in self.content['parameters']:
attrib['ref'] = rname
result = ETree.Element('GROUP', attrib=attrib)
ETree.SubElement(result, 'DESCRIPTION').text = rdict.get('annotation', '')
for edattr in gen_attr:
ETree.SubElement(result, 'PARAM', attrib={
'name': edattr,
'value': rdict.get(edattr, ''),
'arraysize': "*",
'datatype': "char",
'utype': gen_utypes.get(edattr, ''),
})
group_generated.append(result)
# Write file
jdl_content = ETree.tostring(jdl_tree, pretty_print=True)
jdl_fname = self._get_filename(jobname)
with open(jdl_fname, 'wb') as f:
f.write(jdl_content)
logger.info('JDL saved as VOTable: ' + jdl_fname)
# Write script file
self.save_script(jobname, self.content['script'])
def read(self, jobname, jobid=None):
"""Read job description from VOTable file"""
raw_jobname = jobname.split('/')[-1] # remove tmp/ prefix
if jobname == 'test_':
self.content.update(test_job)
elif self.content.get('name') != jobname:
fname = self._get_filename(jobname, jobid=jobid)
# '{}/{}{}'.format(JDL_PATH, job.jobname, self.extension)
groups = {
'InputParams': 'parameters',
'Used': 'used',
'Generated': 'generated'
}
try:
with open(fname, 'r') as f:
jdl_string = f.read()
jdl_tree = ETree.fromstring(jdl_string)
#print jdl_tree
# Get default namespace
xmlns = '{' + jdl_tree.nsmap[None] + '}'
#print xmlns
# Read parameters description
resource_block = jdl_tree.find(".//{}RESOURCE".format(xmlns))
#print resource_block
job_def = {
'name': resource_block.get('name'),
'parameters': collections.OrderedDict(),
'generated': collections.OrderedDict(),
'used': collections.OrderedDict()
}
for elt in resource_block.getchildren():
if elt.tag == '{}DESCRIPTION'.format(xmlns):
job_def['annotation'] = elt.text
#print elt.text
if elt.tag == '{}LINK'.format(xmlns):
job_def['doculink'] = elt.get('href')
if elt.tag == '{}PARAM'.format(xmlns):
# TODO: set datatype of value in the dictionary?
#print elt.get('name'), elt.get('value')
job_def[elt.get('name')] = elt.get('value', '')
#logger.debug(elt.get('value', ''))
#for subelt in elt:
# if subelt.tag == '{}DESCRIPTION'.format(xmlns):
# logger.debug(subelt.text)
if elt.tag == '{}GROUP'.format(xmlns):
group = groups[elt.get('name')]
#print group
order = 0
keys = []
if group == 'parameters':
for p in elt:
if p.tag == '{}PARAM'.format(xmlns):
order += 1
name = p.get('name')
keys.append(name)
#print name, p.get('datatype', 'char')
pdatatype = p.get('datatype', 'char')
pxtype = p.get('xtype', None)
if pxtype == 'application/octet-stream':
pdatatype = 'file'
else:
pdatatype = self.datatype_vo2xs[pdatatype]
prequired = 'true'
if p.get('type') == 'no_query': # type="no_query" in VOTable
prequired = 'false'
item = {
'datatype': pdatatype,
'required': prequired,
'default': p.get('value'),
'unit': p.get('unit', ''),
'ucd': p.get('ucd', ''),
'utype': p.get('utype', ''),
}
for pp in p:
if pp.tag == '{}DESCRIPTION'.format(xmlns):
item['annotation'] = pp.text
if pp.tag == '{}VALUES'.format(xmlns):
options = []
for ppp in pp:
if ppp.tag == '{}MIN'.format(xmlns):
item['min'] = ppp.get('value')
if ppp.tag == '{}MAX'.format(xmlns):
item['max'] = ppp.get('value')
if ppp.tag == '{}OPTION'.format(xmlns):
options.append(ppp.get('value'))
item['options'] = ','.join(options)
job_def[group][name] = item
if group == 'used':
for p in elt:
order += 1
name = p.get('name')
keys.append(name)
ref = p.get('ref')
item = {
'datatype': 'xs:string', # may be changed below
'annotation': '', # filled below
'url': '', # default to ''
}
if ref:
item['datatype'] = job_def.get('parameters').get(ref).get('datatype', item['datatype'])
item['default'] = job_def.get('parameters').get(ref).get('default')
item['annotation'] = job_def.get('parameters').get(ref).get('annotation', item['annotation'])
for pp in p:
if pp.tag == '{}PARAM'.format(xmlns):
if pp.get('name'):
item[pp.get('name')] = pp.get('value')
if pp.tag == '{}DESCRIPTION'.format(xmlns):
item['annotation'] = pp.text
if pp.tag == '{}LINK'.format(xmlns):
purl = pp.get('href')
item['url'] = purl
if 'file://' in purl:
item['datatype'] = 'file'
job_def[group][name] = item
if group == 'generated':
for p in elt:
order += 1
name = p.get('name')
keys.append(name)
ref = p.get('ref')
item = {
'annotation': '', # filled below
}
if ref:
item['default'] = job_def.get('parameters').get(ref).get('default')
item['annotation'] = job_def.get('parameters').get(ref).get('annotation')
for pp in p:
if pp.tag == '{}PARAM'.format(xmlns):
if pp.get('name'):
item[pp.get('name')] = pp.get('value')
if pp.tag == '{}DESCRIPTION'.format(xmlns):
item['annotation'] = pp.text
job_def[group][name] = item
job_def[group + '_keys'] = keys
# Log votable access
# frame, filename, line_number, function_name, lines, index = inspect.stack()[1]
# logger.debug('VOTable read at {} ({}:{}): {}'.format(function_name, filename, line_number, fname))
except IOError:
# if file does not exist, continue and return an empty dict
logger.debug('VOTable not found for job {}'.format(jobname))
raise UserWarning('VOTable not found for job {}'.format(jobname))
except Exception as e:
logger.error('{}'.format(e))
raise
# return {}
self.content.update(job_def)
# Load script in job_def
if 'script' not in self.content:
self.read_script(jobname)
#logger.info('JDL loaded for job: {} {}'.format(jobname, jobid))
#else:
#logger.debug('JDL already loaded for job: {} {}'.format(jobname, jobid))
def save_old(self, jobname):
"""Save job description to VOTable file"""
raw_jobname = jobname.split('/')[-1] # remove tmp/ prefix
# VOTable root
xmlns = self.xmlns_uris['xmlns']
xsi = self.xmlns_uris['xmlns:xsi']
jdl_tree = ETree.Element('VOTABLE', attrib={
'version': '1.3',
'{' + xsi + '}schemaLocation':
'http://www.ivoa.net/xml/VOTable/v1.3 http://www.ivoa.net/xml/VOTable/v1.3',
}, nsmap={'xsi': xsi, None: xmlns})
resource = ETree.SubElement(jdl_tree, 'RESOURCE', attrib={
'ID': raw_jobname,
'name': raw_jobname,
'type': "meta",
'utype': "voprov:ActivityDescription"
})
# Job attributes
if self.content['annotation']:
ETree.SubElement(resource, 'DESCRIPTION').text = self.content['annotation'] # .decode()
# TODO: automatic list of attributes from jdl.content
job_attr = []
for key in ['annotation', 'doculink', 'type', 'subtype', 'version']:
job_attr.append('<PARAM name="{key}" datatype="char" arraysize="*" value="{value}" utype="voprov:ActivityDescription.{key}"/>'.format(key=key, value=self.content.get(key, '')))
# ,
# '<PARAM name="subtype" datatype="char" arraysize="*" value="{}" utype="voprov:ActivityDescription.subtype"/>'.format(self.content.get('job_subtype', '')),
# '<PARAM name="annotation" datatype="char" arraysize="*" value="{}" utype="voprov:ActivityDescription.annotation"/>'.format(self.content.get('annotation', raw_jobname)),
# '<PARAM name="version" datatype="float" value="{}" utype="voprov:ActivityDescription.version"/>'.format(self.content.get('version', '')),
# '<PARAM name="doculink" datatype="float" value="{}" utype="voprov:ActivityDescription.doculink"/>'.format(self.content.get('doculink', '')),
job_attr.append('<PARAM name="contact_name" datatype="char" arraysize="*" value="{}" utype="voprov:Agent.name"/>'.format(self.content.get('contact_name', '')))
job_attr.append('<PARAM name="contact_email" datatype="char" arraysize="*" value="{}" utype="voprov:Agent.email"/>'.format(self.content.get('contact_email', '')))
job_attr.append('<PARAM name="executionduration" datatype="int" value="{}" utype="uws:Job.executionduration"/>'.format(self.content.get('executionduration', '1')))
job_attr.append('<PARAM name="quote" datatype="int" value="{}" utype="uws:Job.quote"/>'.format(self.content.get('quote', '1')))
for attr in job_attr:
resource.append(ETree.fromstring(attr))
# Insert groups
group_params = ETree.SubElement(resource, 'GROUP', attrib={
'name': "InputParams",
'utype': "voprov:Parameter",
})
group_used = ETree.SubElement(resource, 'GROUP', attrib={
'name': "Used",
'utype': "voprov:Used",
})
group_generated = ETree.SubElement(resource, 'GROUP', attrib={
'name': "Generated",
'utype': "voprov:WasGeneratedBy",
})
# Prepare InputParams group
if 'parameters' in self.content:
for pname, p in self.content['parameters'].items():
param_attrib = {
'ID': pname,
'name': pname,
'datatype': self.datatype_xs2vo[p['datatype']],
'value': p.get('default'),
}
if param_attrib['datatype'] == 'file':
param_attrib['xtype'] = 'application/octet-stream'
param_attrib['datatype'] = 'char'
if param_attrib['datatype'] == 'char':
param_attrib['arraysize'] = '*'
if str(p['required']).lower() == 'false':
param_attrib['type'] = 'no_query'
# 'required': str(p['required']),
# 'content_type': 'text/plain'
param = ETree.Element('PARAM', attrib=param_attrib)
pdesc = p.get('annotation', '')
# .encode(encoding='utf-8', errors='ignore')
#pdesc_clean = ''.join(c for c in pdesc if self.valid_xml_char_ordinal(c))
#logger.debug(pdesc)
#logger.debug(pdesc_clean)
ETree.SubElement(param, 'DESCRIPTION').text = pdesc
if p.get('min', False) or p.get('max', False) or p.get('options', False):
values = ETree.SubElement(param, 'VALUES')
if p.get('min', False):
ETree.SubElement(values, 'MIN', attrib={'value': p['min']})
if p.get('max', False):
ETree.SubElement(values, 'MAX', attrib={'value': p['max']})
if p.get('options', False):
for o in p['options'].split(','):
ETree.SubElement(values, 'OPTION', attrib={'value': o})
group_params.append(param)
# Prepare used block
if 'used' in self.content:
for pname, p in self.content['used'].items():
attrib={
'name': pname,
'datatype': 'char',
'arraysize': '*',
'value': p.get('default'),
'xtype': p.get('content_type'),
'utype': 'voprov:Entity',
}
if pname in self.content['parameters']:
attrib['ref'] = pname
used = ETree.Element('PARAM', attrib=attrib)
url_attrib = {
'content-role': 'location',
'href': p.get('url'),
}
ETree.SubElement(used, 'LINK', attrib=url_attrib)
ETree.SubElement(used, 'DESCRIPTION').text = p.get('annotation', '')
group_used.append(used)
# Prepare results block
if 'generated' in self.content:
for rname, r in self.content['generated'].items():
attrib={
'name': rname,
'datatype': 'char',
'arraysize': '*',
'value': r['default'],
'xtype': r['content_type'],
'utype': 'voprov:Entity',
}
if rname in self.content['parameters']:
attrib['ref'] = rname
result = ETree.Element('PARAM', attrib=attrib)
ETree.SubElement(result, 'DESCRIPTION').text = r.get('annotation', '')
group_generated.append(result)
# Write file
jdl_content = ETree.tostring(jdl_tree, pretty_print=True)
jdl_fname = self._get_filename(jobname)
with open(jdl_fname, 'wb') as f:
f.write(jdl_content)
logger.info('JDL saved as VOTable: ' + jdl_fname)
def read_old(self, jobname):
"""Read job description from VOTable file"""
raw_jobname = jobname.split('/')[-1] # remove tmp/ prefix
fname = self._get_filename(jobname)
# '{}/{}{}'.format(JDL_PATH, job.jobname, self.extension)
groups = {
'InputParams': 'parameters',
'Used': 'used',
'Generated': 'generated'
}
try:
with open(fname, 'r') as f:
jdl_string = f.read()
#print jdl_string
jdl_tree = ETree.fromstring(jdl_string)
#print jdl_tree
# Get default namespace
xmlns = '{' + jdl_tree.nsmap[None] + '}'
#print xmlns
# Read parameters description
resource_block = jdl_tree.find(".//{}RESOURCE[@ID='{}']".format(xmlns, raw_jobname))
#print resource_block
job_def = {
'name': resource_block.get('name'),
'parameters': collections.OrderedDict(),
'generated': collections.OrderedDict(),
'used': collections.OrderedDict()
}
for elt in resource_block.getchildren():
if elt.tag == '{}DESCRIPTION'.format(xmlns):
job_def['annotation'] = elt.text
#print elt.text
if elt.tag == '{}LINK'.format(xmlns):
job_def['doculink'] = elt.get('href')
if elt.tag == '{}PARAM'.format(xmlns):
# TODO: set datatype of value in the dictionary?
#print elt.get('name'), elt.get('value')
job_def[elt.get('name')] = elt.get('value', '')
if elt.tag == '{}GROUP'.format(xmlns):
group = groups[elt.get('name')]
#print group
order = 0
keys = []
if group == 'parameters':
for p in elt:
if p.tag == '{}PARAM'.format(xmlns):
order += 1
name = p.get('name')
keys.append(name)
#print name, p.get('datatype', 'char')
pdatatype = p.get('datatype', 'char')
pxtype = p.get('xtype', None)
if pxtype == 'application/octet-stream':
pdatatype = 'file'
else:
pdatatype = self.datatype_vo2xs[pdatatype]
prequired = 'true'
if p.get('type') == 'no_query': # type="no_query" in VOTable
prequired = 'false'
item = {
'datatype': pdatatype,
'required': prequired,
'default': p.get('value'),
'unit': p.get('unit', ''),
'ucd': p.get('ucd', ''),
'utype': p.get('utype', ''),
}
for pp in p:
if pp.tag == '{}DESCRIPTION'.format(xmlns):
item['annotation'] = pp.text
if pp.tag == '{}VALUES'.format(xmlns):
options = []
for ppp in pp:
if ppp.tag == '{}MIN'.format(xmlns):
item['min'] = ppp.get('value')
if ppp.tag == '{}MAX'.format(xmlns):
item['max'] = ppp.get('value')
if ppp.tag == '{}OPTION'.format(xmlns):
options.append(ppp.get('value'))
item['options'] = ','.join(options)
job_def[group][name] = item
if group == 'used':
for p in elt:
order += 1
name = p.get('name')
keys.append(name)
ref = p.get('ref')
if ref:
item = {
'datatype': job_def.get('parameters').get(ref).get('datatype', 'xs:string'),
'default': job_def.get('parameters').get(ref).get('default'),
'content_type': p.get('xtype'),
'annotation': job_def.get('parameters').get(ref).get('annotation'),
'url': '',
}
else:
item = {
'datatype': 'xs:string', # may be changed below
'default': p.get('value'),
'content_type': p.get('xtype'),
'annotation': '', # filled below
'url': '',
}
for pp in p:
if pp.tag == '{}DESCRIPTION'.format(xmlns):
if not ref:
item['annotation'] = pp.text
if pp.tag == '{}LINK'.format(xmlns):
purl = pp.get('href')
item['url'] = purl
if 'file://' in purl:
item['datatype'] = 'file'
job_def[group][name] = item
if group == 'generated':
for p in elt:
order += 1
name = p.get('name')
keys.append(name)
ref = p.get('ref')
if ref:
item = {
'default': job_def.get('parameters').get(ref).get('default'),
'content_type': p.get('xtype'),
'annotation': job_def.get('parameters').get(ref).get('annotation'),
}
else:
item = {
'default': p.get('value'),
'content_type': p.get('xtype'),
'annotation': '', # filled below
}
for pp in p:
if pp.tag == '{}DESCRIPTION'.format(xmlns):
item['annotation'] = pp.text
job_def[group][name] = item
job_def[group + '_keys'] = keys
# Log votable access
# frame, filename, line_number, function_name, lines, index = inspect.stack()[1]
# logger.debug('VOTable read at {} ({}:{}): {}'.format(function_name, filename, line_number, fname))
except IOError:
# if file does not exist, continue and return an empty dict
logger.debug('VOTable not found for job {}'.format(jobname))
raise UserWarning('VOTable not found for job {}'.format(jobname))
except Exception as e:
logger.error('{}'.format(e))
raise
# return {}
self.content.update(job_def)
class WADLFile(JDLFile):
def __init__(self, jdl_path=JDL_PATH, scripts_path=SCRIPTS_PATH):
self.content = dict(
control_parameters=CONTROL_PARAMETERS,
control_parameters_keys=CONTROL_PARAMETERS_KEYS,
)
self.extension = '.wadl'
self.jdl_path = os.path.join(jdl_path, 'wadl')
self.scripts_path = scripts_path
self.xmlns_uris = {
'xmlns:wadl': 'http://wadl.dev.java.net/2009/02',
'xmlns:uws': 'http://www.ivoa.net/xml/UWS/v1.0',
'xmlns:xlink': 'http://www.w3.org/1999/xlink',
'xmlns:xsi': 'http://www.w3.org/2001/XMLSchema-instance',
'xsi:schemaLocation':
'http://wadl.dev.java.net/2009/02 '
'http://www.w3.org/Submission/wadl/wadl.xsd',
}
def save(self, jobname):
"""Save job description to WADL file"""
raw_jobname = jobname
raw_jobname = raw_jobname.split('/')[-1] # remove tmp/ prefix
# Prepare parameter blocks
jdl_params = []
jdl_popts = []
for pname, p in self.content['parameters'].items():
# pline = '<param style="query" name="{}" type="{}" required="{}" default="{}"><doc>{}</doc></param>' \
# ''.format(pname, p['type'], p['required'], p['default'], p.get('annotation', ''))
pelt_attrib = {
'style': 'query',
'name': pname,
'type': p['datatype'],
'required': str(p['required']),
'default': p['default']
}
pelt = ETree.Element('param', attrib=pelt_attrib)
ETree.SubElement(pelt, 'doc').text = p.get('annotation', '')
jdl_params.append(pelt)
# line = '<option value="{}" content_type="text/plain"><doc>{}</doc></option>' \
# ''.format(pname, p['annotation'])
poelt = ETree.Element('option', attrib={
'value': pname,
'content_type': 'text/plain'
})
ETree.SubElement(poelt, 'doc').text = p.get('annotation', '')
jdl_popts.append(poelt)
# Prepare result block
jdl_ropts = []
for rname, r in self.content['generated'].items():
# rline = '<option value="{}" content_type="{}" default="{}"><doc>{}</doc></option>' \
# ''.format(rname, r['content_type'], r['default'], r.get('annotation', ''))
roelt = ETree.Element('option', attrib={
'value': rname,
'content_type': r['content_type'],
'default': r['default']
})
ETree.SubElement(roelt, 'doc').text = r.get('annotation', '')
jdl_ropts.append(roelt)
# Read WADL UWS template as XML Tree
filename = '{}/uws_template.wadl'.format(JDL_PATH)
with open(filename, 'r') as f:
jdl_string = f.read()
jdl_tree = ETree.fromstring(jdl_string)
xmlns = '{' + jdl_tree.nsmap[None] + '}'
# Insert raw_jobname as the resource path
joblist_block = jdl_tree.find(".//{}resource[@id='joblist']".format(xmlns))
joblist_block.set('path', raw_jobname)
joblist_block.set('doculink', self.content['doculink'])
joblist_block.set('contact_name', self.content['contact_name'])
#joblist_block.set('contact_affil', self.content['contact_affil'])
joblist_block.set('contact_email', self.content['contact_email'])
# Insert job description
job_list_description_block = jdl_tree.find(".//{}doc[@title='description']".format(xmlns))
job_list_description_block.text = self.content['annotation'] # .decode()
# Insert parameters
params_block = {}
for block in ['create_job_parameters', 'control_job_parameters', 'set_job_parameters']:
params_block[block] = jdl_tree.find(".//{}request[@id='{}']".format(xmlns, block))
for pelt in jdl_params:
params_block[block].append(copy.copy(pelt))
# Insert parameters as options
param_opts_block = jdl_tree.find(".//{}param[@name='parameter-name']".format(xmlns))
for poelt in jdl_popts:
param_opts_block.append(poelt)
# Insert results as options
result_opts_block = jdl_tree.find(".//{}param[@name='result-id']".format(xmlns))
for roelt in jdl_ropts:
result_opts_block.append(roelt)
# Insert default execution duration
execdur_block = jdl_tree.find(".//{}param[@name='EXECUTIONDURATION']".format(xmlns))
execdur_block.set('default', self.content['executionduration'])
# Insert default quote
quote_block = jdl_tree.find(".//{}representation[@id='quote']".format(xmlns))
quote_block.set('default', self.content['quote'])
jdl_content = ETree.tostring(jdl_tree, pretty_print=True)
jdl_fname = self._get_filename(jobname)
with open(jdl_fname, 'w') as f:
f.write(jdl_content)
logger.info('WADL saved: ' + jdl_fname)
def read(self, jobname):
"""Read job description from WADL file"""
fname = self._get_filename(jobname)
# '{}/{}{}'.format(JDL_PATH, job.jobname, self.extension)
parameters = collections.OrderedDict()
results = collections.OrderedDict()
used = collections.OrderedDict()
try:
with open(fname, 'r') as f:
jdl_string = f.read()
jdl_tree = ETree.fromstring(jdl_string)
# Get default namespace
xmlns = '{' + jdl_tree.nsmap[None] + '}'
# Read parameters description
params_block = jdl_tree.find(".//{}request[@id='create_job_parameters']".format(xmlns))
            for p in params_block:
pname = p.get('name')
if pname not in ['PHASE', None]:
parameters[pname] = {
'datatype': p.get('type'),
'required': p.get('required'),
'default': p.get('default'),
'annotation': list(p)[0].text,
}
for attr in ['min', 'max', 'choices']:
if p.get(attr):
parameters[pname][attr] = p.get(attr)
if p.get('type') in ['xs:anyURI', 'file']:
item = {
'default': parameters[pname]['default'],
'content_type': '',
'annotation': parameters[pname]['annotation']
}
used[pname] = item
# Read results description
results_block = jdl_tree.find(".//{}param[@name='result-id']".format(xmlns))
            for r in results_block:
if r.get('value') not in [None]:
ctype = r.get('content_type')
if not ctype:
ctype = r.get('mediaType')
results[r.get('value')] = {
'content_type': ctype,
'default': r.get('default'),
'annotation': list(r)[0].text,
}
job_def = {
'name': jobname,
'parameters': parameters,
'used': used,
'generated': results,
}
# Read job description
joblist_description_block = jdl_tree.find(".//{}doc[@title='description']".format(xmlns))
job_def['annotation'] = joblist_description_block.text
# Read job attributes
joblist_block = jdl_tree.find(".//{}resource[@id='joblist']".format(xmlns))
job_def['doculink'] = joblist_block.get('doculink')
job_def['contact_name'] = joblist_block.get('contact_name')
#job_def['contact_affil'] = joblist_block.get('contact_affil')
job_def['contact_email'] = joblist_block.get('contact_email')
# Read execution duration
execdur_block = jdl_tree.find(".//{}param[@name='EXECUTIONDURATION']".format(xmlns))
job_def['executionduration'] = execdur_block.get('default')
# Read default quote
quote_block = jdl_tree.find(".//{}representation[@id='quote']".format(xmlns))
job_def['quote'] = quote_block.get('default')
# Log wadl access
frame, filename, line_number, function_name, lines, index = inspect.stack()[1]
# logger.debug('WADL read at {} ({}:{}): {}'.format(function_name, filename, line_number, fname))
except IOError:
# if file does not exist, continue and return an empty dict
logger.debug('WADL not found for job {}'.format(jobname))
raise UserWarning('WADL not found for job {}'.format(jobname))
# return {}
self.content.update(job_def)
# ----------
# Convert JDL File
def update_vot(jobname):
logger.info('Updating VOTFile for {}'.format(jobname))
vot = VOTFile()
vot.read_old(jobname)
vot.content['executionDuration'] = vot.content['executionduration']
vot.save(jobname)
vot.read(jobname)
def wadl2vot(jobname):
logger.info('Convert WADLFile to VOTFile for {}'.format(jobname))
wadl = WADLFile()
vot = VOTFile()
wadl.read(jobname)
vot.content = wadl.content
vot.save(jobname)
vot.read(jobname) # test if file can be read
def vot2json(jobname):
logger.info('Convert VOTFile to JSONFile for {}'.format(jobname))
js = JSONFile()
vot = VOTFile()
vot.read(jobname)
js.content = vot.content
js.save(jobname)
js.read(jobname) # test if file can be read
# ---------
# Import PAR files (ftools, ctools)
def read_par(jobname):
"""
Read .par file that contains the parameter list for a given tool
The format of the file is from ftools/pfile, also used by ctools,
it is an ASCII file with the following columns:
name, type, parameter mode, default value, lower limit, upper limit, prompt
See: http://heasarc.gsfc.nasa.gov/ftools/others/pfiles.html
"""
type_dict = {
'b': 'xs:boolean',
'i': 'xs:long',
'r': 'xs:double',
's': 'xs:string',
'f': 'xs:string'
}
filename = jobname+'.par'
from astropy.io import ascii
cnames = ['name', 'type', 'mode', 'default', 'lower', 'upper', 'prompt']
data = ascii.read(filename, data_start=0, names=cnames)
job_par = data
for p in job_par:
# Set if parameter is required (mode q and a)
required = 'false'
if ('q' in p['mode']) or ('a' in p['mode']):
required = 'true'
# If param is an integer or a real, add lower and upper limits (if not 0,0)
lowup = ''
        if (('i' in p['type']) or ('r' in p['type'])) and (str(p['lower']) != '0' and str(p['upper']) != '0'):
lowup = ' lower="%s" upper="%s"' % (p['lower'], p['upper'])
# If param is a string (but not 'mode'), does it have limited choices?
choices = ''
if ('s' in p['type']) and (p['lower'] != '') and (p['name'] != 'mode'):
choices = ' choices="%s"' % (p['lower'])
# Write param block to file
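The element construction above uses lxml's `ETree` API; the same pattern works with the standard library's `xml.etree.ElementTree`. A minimal sketch under that assumption (the `build_param` helper and sample values are hypothetical, not part of the original code):

```python
import xml.etree.ElementTree as ETree

def build_param(pname, p):
    # Build a <param style="query"> element with a nested <doc> child,
    # mirroring the element construction used in the code above.
    pelt = ETree.Element('param', attrib={
        'style': 'query',
        'name': pname,
        'type': p['datatype'],
        'required': str(p['required']),
        'default': p['default'],
    })
    ETree.SubElement(pelt, 'doc').text = p.get('annotation', '')
    return pelt

elt = build_param('RA', {'datatype': 'xs:double', 'required': True,
                         'default': '0.0', 'annotation': 'Right ascension'})
xml = ETree.tostring(elt, encoding='unicode')
# -> '<param style="query" name="RA" ...><doc>Right ascension</doc></param>'
```

Serializing with `encoding='unicode'` returns a `str` rather than `bytes`, which avoids the text/binary mismatch when writing the result with `open(..., 'w')`.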
{
  "filename": "_showtickprefix.py",
  "repo_name": "catboost/catboost",
  "repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/validators/layout/polar/radialaxis/_showtickprefix.py",
  "type": "Python"
}
import _plotly_utils.basevalidators
class ShowtickprefixValidator(_plotly_utils.basevalidators.EnumeratedValidator):
def __init__(
self,
plotly_name="showtickprefix",
parent_name="layout.polar.radialaxis",
**kwargs
):
super(ShowtickprefixValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "plot"),
role=kwargs.pop("role", "style"),
values=kwargs.pop("values", ["all", "first", "last", "none"]),
**kwargs
)
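The validator above restricts `showtickprefix` to four enumerated values. A standalone sketch of that enumerated-validation idea (this is a simplified stand-in, not plotly's actual `EnumeratedValidator` implementation):

```python
class EnumeratedValidator:
    """Minimal stand-in for an enumerated validator: accept only values
    from a fixed set, raising ValueError for anything else."""

    def __init__(self, plotly_name, values):
        self.plotly_name = plotly_name
        self.values = values

    def validate_coerce(self, v):
        # Reject values outside the allowed enumeration.
        if v not in self.values:
            raise ValueError(
                f"Invalid value {v!r} for {self.plotly_name!r}; "
                f"expected one of {self.values}")
        return v

v = EnumeratedValidator("showtickprefix", ["all", "first", "last", "none"])
v.validate_coerce("first")   # returns "first"
```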
{
  "filename": "plot_label_propagation_digits_active_learning.py",
  "repo_name": "scikit-learn/scikit-learn",
  "repo_path": "scikit-learn_extracted/scikit-learn-main/examples/semi_supervised/plot_label_propagation_digits_active_learning.py",
  "type": "Python"
}
"""
========================================
Label Propagation digits active learning
========================================
Demonstrates an active learning technique to learn handwritten digits
using label propagation.
We start by training a label propagation model with only 10 labeled points,
then we select the top five most uncertain points to label. Next, we train
with 15 labeled points (original 10 + 5 new ones). We repeat this process
four times to have a model trained with 30 labeled examples. Note you can
increase this to label more than 30 by changing `max_iterations`. Labeling
more than 30 can be useful to get a sense for the speed of convergence of
this active learning technique.
A plot will appear showing the top 5 most uncertain digits for each iteration
of training. These may or may not contain mistakes, but we will train the next
model with their true labels.
"""
# Authors: The scikit-learn developers
# SPDX-License-Identifier: BSD-3-Clause
import matplotlib.pyplot as plt
import numpy as np
from scipy import stats
from sklearn import datasets
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.semi_supervised import LabelSpreading
digits = datasets.load_digits()
rng = np.random.RandomState(0)
indices = np.arange(len(digits.data))
rng.shuffle(indices)
X = digits.data[indices[:330]]
y = digits.target[indices[:330]]
images = digits.images[indices[:330]]
n_total_samples = len(y)
n_labeled_points = 40
max_iterations = 5
unlabeled_indices = np.arange(n_total_samples)[n_labeled_points:]
f = plt.figure()
for i in range(max_iterations):
if len(unlabeled_indices) == 0:
print("No unlabeled items left to label.")
break
y_train = np.copy(y)
y_train[unlabeled_indices] = -1
lp_model = LabelSpreading(gamma=0.25, max_iter=20)
lp_model.fit(X, y_train)
predicted_labels = lp_model.transduction_[unlabeled_indices]
true_labels = y[unlabeled_indices]
cm = confusion_matrix(true_labels, predicted_labels, labels=lp_model.classes_)
print("Iteration %i %s" % (i, 70 * "_"))
print(
"Label Spreading model: %d labeled & %d unlabeled (%d total)"
% (n_labeled_points, n_total_samples - n_labeled_points, n_total_samples)
)
print(classification_report(true_labels, predicted_labels))
print("Confusion matrix")
print(cm)
# compute the entropies of transduced label distributions
pred_entropies = stats.distributions.entropy(lp_model.label_distributions_.T)
# select up to 5 digit examples that the classifier is most uncertain about
uncertainty_index = np.argsort(pred_entropies)[::-1]
uncertainty_index = uncertainty_index[
np.isin(uncertainty_index, unlabeled_indices)
][:5]
# keep track of indices that we get labels for
delete_indices = np.array([], dtype=int)
# for more than 5 iterations, visualize the gain only on the first 5
if i < 5:
f.text(
0.05,
(1 - (i + 1) * 0.183),
"model %d\n\nfit with\n%d labels" % ((i + 1), i * 5 + 10),
size=10,
)
for index, image_index in enumerate(uncertainty_index):
image = images[image_index]
# for more than 5 iterations, visualize the gain only on the first 5
if i < 5:
sub = f.add_subplot(5, 5, index + 1 + (5 * i))
sub.imshow(image, cmap=plt.cm.gray_r, interpolation="none")
sub.set_title(
"predict: %i\ntrue: %i"
% (lp_model.transduction_[image_index], y[image_index]),
size=10,
)
sub.axis("off")
# labeling 5 points, remote from labeled set
(delete_index,) = np.where(unlabeled_indices == image_index)
delete_indices = np.concatenate((delete_indices, delete_index))
unlabeled_indices = np.delete(unlabeled_indices, delete_indices)
n_labeled_points += len(uncertainty_index)
f.suptitle(
(
"Active learning with Label Propagation.\nRows show 5 most "
"uncertain labels to learn with the next model."
),
y=1.15,
)
plt.subplots_adjust(left=0.2, bottom=0.03, right=0.9, top=0.9, wspace=0.2, hspace=0.85)
plt.show()
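The uncertainty selection above ranks unlabeled samples by the entropy of their transduced label distributions and picks the top few. The same idea in a dependency-free sketch (the `most_uncertain` helper and toy distributions are illustrative, not part of the example):

```python
import math

def entropy(dist):
    # Shannon entropy of a discrete probability distribution.
    return -sum(p * math.log(p) for p in dist if p > 0)

def most_uncertain(label_distributions, k):
    # Indices of the k samples whose predicted label distribution has
    # the highest entropy, i.e. the most "spread out" predictions.
    ents = [entropy(d) for d in label_distributions]
    return sorted(range(len(ents)), key=lambda i: ents[i], reverse=True)[:k]

dists = [
    [0.98, 0.01, 0.01],   # confident prediction
    [0.34, 0.33, 0.33],   # very uncertain
    [0.70, 0.20, 0.10],   # in between
]
most_uncertain(dists, 2)   # -> [1, 2]
```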
{
  "filename": "wiserf.py",
  "repo_name": "ChrisBeaumont/brut",
  "repo_path": "brut_extracted/brut-master/bubbly/wiserf.py",
  "type": "Python"
}
import os
import PyWiseRF
import cloud
import numpy as np
if cloud.running_on_cloud():
os.environ['WISERF_ROOT'] = '/home/picloud/WiseRF-1.5.9-linux-x86_64-rc2'
def test():
x = np.random.random((5, 5))
y = np.array([1, 1, 1, 0, 0])
clf = WiseRF().fit(x, y)
return clf
class WiseRF(PyWiseRF.WiseRF):
def decision_function(self, x):
p = self.predict_proba(x)
return p[:, 1] - p[:, 0]
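`decision_function` above turns binary class probabilities into a signed margin, `p1 - p0`, positive when class 1 is favoured. A pure-Python sketch of the same computation on plain lists (no numpy, names are illustrative):

```python
def decision_function(proba):
    # For binary class probabilities [p0, p1] per sample, the margin
    # p1 - p0 is positive when class 1 is favoured, negative otherwise.
    return [p1 - p0 for p0, p1 in proba]

decision_function([[0.25, 0.75], [0.75, 0.25]])   # -> [0.5, -0.5]
```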
{
  "filename": "enforcement_filters.py",
  "repo_name": "langchain-ai/langchain",
  "repo_path": "langchain_extracted/langchain-master/libs/community/langchain_community/chains/pebblo_retrieval/enforcement_filters.py",
  "type": "Python"
}
"""
Identity & Semantic Enforcement filters for PebbloRetrievalQA chain:
This module contains methods for applying Identity and Semantic Enforcement filters
in the PebbloRetrievalQA chain.
These filters are used to control the retrieval of documents based on authorization and
semantic context.
The Identity Enforcement filter ensures that only authorized identities can access
certain documents, while the Semantic Enforcement filter controls document retrieval
based on semantic context.
The methods in this module are designed to work with different types of vector stores.
"""
import logging
from typing import Any, List, Optional, Union
from langchain_core.vectorstores import VectorStoreRetriever
from langchain_community.chains.pebblo_retrieval.models import (
AuthContext,
SemanticContext,
)
logger = logging.getLogger(__name__)
PINECONE = "Pinecone"
QDRANT = "Qdrant"
PGVECTOR = "PGVector"
PINECONE_VECTOR_STORE = "PineconeVectorStore"
SUPPORTED_VECTORSTORES = {PINECONE, QDRANT, PGVECTOR, PINECONE_VECTOR_STORE}
def clear_enforcement_filters(retriever: VectorStoreRetriever) -> None:
"""
Clear the identity and semantic enforcement filters in the retriever search_kwargs.
"""
if retriever.vectorstore.__class__.__name__ == PGVECTOR:
search_kwargs = retriever.search_kwargs
if "filter" in search_kwargs:
filters = search_kwargs["filter"]
_pgvector_clear_pebblo_filters(
search_kwargs, filters, "authorized_identities"
)
_pgvector_clear_pebblo_filters(
search_kwargs, filters, "pebblo_semantic_topics"
)
_pgvector_clear_pebblo_filters(
search_kwargs, filters, "pebblo_semantic_entities"
)
def set_enforcement_filters(
retriever: VectorStoreRetriever,
auth_context: Optional[AuthContext],
semantic_context: Optional[SemanticContext],
) -> None:
"""
Set identity and semantic enforcement filters in the retriever.
"""
# Clear existing enforcement filters
clear_enforcement_filters(retriever)
if auth_context is not None:
_set_identity_enforcement_filter(retriever, auth_context)
if semantic_context is not None:
_set_semantic_enforcement_filter(retriever, semantic_context)
def _apply_qdrant_semantic_filter(
search_kwargs: dict, semantic_context: Optional[SemanticContext]
) -> None:
"""
Set semantic enforcement filter in search_kwargs for Qdrant vectorstore.
"""
try:
from qdrant_client.http import models as rest
except ImportError as e:
raise ValueError(
"Could not import `qdrant-client.http` python package. "
"Please install it with `pip install qdrant-client`."
) from e
# Create a semantic enforcement filter condition
semantic_filters: List[
Union[
rest.FieldCondition,
rest.IsEmptyCondition,
rest.IsNullCondition,
rest.HasIdCondition,
rest.NestedCondition,
rest.Filter,
]
] = []
if (
semantic_context is not None
and semantic_context.pebblo_semantic_topics is not None
):
semantic_topics_filter = rest.FieldCondition(
key="metadata.pebblo_semantic_topics",
match=rest.MatchAny(any=semantic_context.pebblo_semantic_topics.deny),
)
semantic_filters.append(semantic_topics_filter)
if (
semantic_context is not None
and semantic_context.pebblo_semantic_entities is not None
):
semantic_entities_filter = rest.FieldCondition(
key="metadata.pebblo_semantic_entities",
match=rest.MatchAny(any=semantic_context.pebblo_semantic_entities.deny),
)
semantic_filters.append(semantic_entities_filter)
# If 'filter' already exists in search_kwargs
if "filter" in search_kwargs:
existing_filter: rest.Filter = search_kwargs["filter"]
# Check if existing_filter is a qdrant-client filter
if isinstance(existing_filter, rest.Filter):
# If 'must_not' condition exists in the existing filter
if isinstance(existing_filter.must_not, list):
# Warn if 'pebblo_semantic_topics' or 'pebblo_semantic_entities'
# filter is overridden
new_must_not_conditions: List[
Union[
rest.FieldCondition,
rest.IsEmptyCondition,
rest.IsNullCondition,
rest.HasIdCondition,
rest.NestedCondition,
rest.Filter,
]
] = []
# Drop semantic filter conditions if already present
for condition in existing_filter.must_not:
if hasattr(condition, "key"):
if condition.key == "metadata.pebblo_semantic_topics":
continue
if condition.key == "metadata.pebblo_semantic_entities":
continue
new_must_not_conditions.append(condition)
# Add semantic enforcement filters to 'must_not' conditions
existing_filter.must_not = new_must_not_conditions
existing_filter.must_not.extend(semantic_filters)
else:
# Set 'must_not' condition with semantic enforcement filters
existing_filter.must_not = semantic_filters
else:
raise TypeError(
"Using dict as a `filter` is deprecated. "
"Please use qdrant-client filters directly: "
"https://qdrant.tech/documentation/concepts/filtering/"
)
else:
# If 'filter' does not exist in search_kwargs, create it
search_kwargs["filter"] = rest.Filter(must_not=semantic_filters)
def _apply_qdrant_authorization_filter(
search_kwargs: dict, auth_context: Optional[AuthContext]
) -> None:
"""
Set identity enforcement filter in search_kwargs for Qdrant vectorstore.
"""
try:
from qdrant_client.http import models as rest
except ImportError as e:
raise ValueError(
"Could not import `qdrant-client.http` python package. "
"Please install it with `pip install qdrant-client`."
) from e
if auth_context is not None:
        # Create an identity enforcement filter condition
identity_enforcement_filter = rest.FieldCondition(
key="metadata.authorized_identities",
match=rest.MatchAny(any=auth_context.user_auth),
)
else:
return
# If 'filter' already exists in search_kwargs
if "filter" in search_kwargs:
existing_filter: rest.Filter = search_kwargs["filter"]
# Check if existing_filter is a qdrant-client filter
if isinstance(existing_filter, rest.Filter):
# If 'must' exists in the existing filter
if existing_filter.must:
new_must_conditions: List[
Union[
rest.FieldCondition,
rest.IsEmptyCondition,
rest.IsNullCondition,
rest.HasIdCondition,
rest.NestedCondition,
rest.Filter,
]
] = []
# Drop 'authorized_identities' filter condition if already present
for condition in existing_filter.must:
if (
hasattr(condition, "key")
and condition.key == "metadata.authorized_identities"
):
continue
new_must_conditions.append(condition)
# Add identity enforcement filter to 'must' conditions
existing_filter.must = new_must_conditions
existing_filter.must.append(identity_enforcement_filter)
else:
# Set 'must' condition with identity enforcement filter
existing_filter.must = [identity_enforcement_filter]
else:
raise TypeError(
"Using dict as a `filter` is deprecated. "
"Please use qdrant-client filters directly: "
"https://qdrant.tech/documentation/concepts/filtering/"
)
else:
# If 'filter' does not exist in search_kwargs, create it
search_kwargs["filter"] = rest.Filter(must=[identity_enforcement_filter])
def _apply_pinecone_semantic_filter(
search_kwargs: dict, semantic_context: Optional[SemanticContext]
) -> None:
"""
Set semantic enforcement filter in search_kwargs for Pinecone vectorstore.
"""
# Check if semantic_context is provided
if semantic_context is not None:
if semantic_context.pebblo_semantic_topics is not None:
# Add pebblo_semantic_topics filter to search_kwargs
search_kwargs.setdefault("filter", {})["pebblo_semantic_topics"] = {
"$nin": semantic_context.pebblo_semantic_topics.deny
}
if semantic_context.pebblo_semantic_entities is not None:
# Add pebblo_semantic_entities filter to search_kwargs
search_kwargs.setdefault("filter", {})["pebblo_semantic_entities"] = {
"$nin": semantic_context.pebblo_semantic_entities.deny
}
def _apply_pinecone_authorization_filter(
search_kwargs: dict, auth_context: Optional[AuthContext]
) -> None:
"""
Set identity enforcement filter in search_kwargs for Pinecone vectorstore.
"""
if auth_context is not None:
search_kwargs.setdefault("filter", {})["authorized_identities"] = {
"$in": auth_context.user_auth
}
def _apply_pgvector_filter(
search_kwargs: dict, filters: Optional[Any], pebblo_filter: dict
) -> None:
"""
Apply pebblo filters in the search_kwargs filters.
"""
if isinstance(filters, dict):
if len(filters) == 1:
# The only operators allowed at the top level are $and, $or, and $not
# First check if an operator or a field
key, value = list(filters.items())[0]
if key.startswith("$"):
# Then it's an operator
if key.lower() not in ["$and", "$or", "$not"]:
raise ValueError(
f"Invalid filter condition. Expected $and, $or or $not "
f"but got: {key}"
)
if not isinstance(value, list):
raise ValueError(
f"Expected a list, but got {type(value)} for value: {value}"
)
# Here we handle the $and, $or, and $not operators(Semantic filters)
if key.lower() == "$and":
# Add pebblo_filter to the $and list as it is
value.append(pebblo_filter)
elif key.lower() == "$not":
# Check if pebblo_filter is an operator or a field
_key, _value = list(pebblo_filter.items())[0]
if _key.startswith("$"):
                        # Then it's an operator
                        if _key.lower() == "$not":
                            # It's a Semantic filter, add its value to filters
value.append(_value)
logger.warning(
"Adding $not operator to the existing $not operator"
)
return
else:
# Only $not operator is supported in pebblo_filter
raise ValueError(
f"Invalid filter key. Expected '$not' but got: {_key}"
)
else:
# Then it's a field(Auth filter), move filters into $and
search_kwargs["filter"] = {"$and": [filters, pebblo_filter]}
return
elif key.lower() == "$or":
search_kwargs["filter"] = {"$and": [filters, pebblo_filter]}
else:
# Then it's a field and we can check pebblo_filter now
# Check if pebblo_filter is an operator or a field
_key, _ = list(pebblo_filter.items())[0]
if _key.startswith("$"):
                        # Then it's an operator
if _key.lower() == "$not":
# It's a $not operator(Semantic filter), move filters into $and
search_kwargs["filter"] = {"$and": [filters, pebblo_filter]}
return
else:
# Only $not operator is allowed in pebblo_filter
raise ValueError(
f"Invalid filter key. Expected '$not' but got: {_key}"
)
else:
# Then it's a field(This handles Auth filter)
filters.update(pebblo_filter)
return
elif len(filters) > 1:
# Then all keys have to be fields (they cannot be operators)
for key in filters.keys():
if key.startswith("$"):
raise ValueError(
f"Invalid filter condition. Expected a field but got: {key}"
)
# filters should all be fields and we can check pebblo_filter now
# Check if pebblo_filter is an operator or a field
_key, _ = list(pebblo_filter.items())[0]
if _key.startswith("$"):
                # Then it's an operator
if _key.lower() == "$not":
# It's a $not operator(Semantic filter), move filters into '$and'
search_kwargs["filter"] = {"$and": [filters, pebblo_filter]}
return
else:
# Only $not operator is supported in pebblo_filter
raise ValueError(
f"Invalid filter key. Expected '$not' but got: {_key}"
)
else:
# Then it's a field(This handles Auth filter)
filters.update(pebblo_filter)
return
else:
# Got an empty dictionary for filters, set pebblo_filter in filter
search_kwargs.setdefault("filter", {}).update(pebblo_filter)
elif filters is None:
# If filters is None, set pebblo_filter as a new filter
search_kwargs.setdefault("filter", {}).update(pebblo_filter)
else:
raise ValueError(
f"Invalid filter. Expected a dictionary/None but got type: {type(filters)}"
)
def _pgvector_clear_pebblo_filters(
search_kwargs: dict, filters: dict, pebblo_filter_key: str
) -> None:
"""
Remove pebblo filters from the search_kwargs filters.
"""
if isinstance(filters, dict):
if len(filters) == 1:
# The only operators allowed at the top level are $and, $or, and $not
# First check if an operator or a field
key, value = list(filters.items())[0]
if key.startswith("$"):
# Then it's an operator
# Validate the operator's key and value type
if key.lower() not in ["$and", "$or", "$not"]:
raise ValueError(
f"Invalid filter condition. Expected $and, $or or $not "
f"but got: {key}"
)
elif not isinstance(value, list):
raise ValueError(
f"Expected a list, but got {type(value)} for value: {value}"
)
# Here we handle the $and, $or, and $not operators
if key.lower() == "$and":
# Remove the pebblo filter from the $and list
for i, _filter in enumerate(value):
if pebblo_filter_key in _filter:
# This handles Auth filter
value.pop(i)
break
# Check for $not operator with Semantic filter
if "$not" in _filter:
sem_filter_found = False
# This handles Semantic filter
for j, nested_filter in enumerate(_filter["$not"]):
if pebblo_filter_key in nested_filter:
if len(_filter["$not"]) == 1:
# If only one filter is left,
# then remove the $not operator
value.pop(i)
else:
value[i]["$not"].pop(j)
sem_filter_found = True
break
if sem_filter_found:
break
if len(value) == 1:
# If only one filter is left, then remove the $and operator
search_kwargs["filter"] = value[0]
elif key.lower() == "$not":
# Remove the pebblo filter from the $not list
for i, _filter in enumerate(value):
if pebblo_filter_key in _filter:
# This removes Semantic filter
value.pop(i)
break
if len(value) == 0:
# If no filter is left, then unset the filter
search_kwargs["filter"] = {}
elif key.lower() == "$or":
# If $or, pebblo filter will not be present
return
else:
# Then it's a field, check if it's a pebblo filter
if key == pebblo_filter_key:
filters.pop(key)
return
elif len(filters) > 1:
# Then all keys have to be fields (they cannot be operators)
if pebblo_filter_key in filters:
# This handles Auth filter
filters.pop(pebblo_filter_key)
return
else:
# Got an empty dictionary for filters, ignore the filter
return
elif filters is None:
# If filters is None, ignore the filter
return
else:
raise ValueError(
f"Invalid filter. Expected a dictionary/None but got type: {type(filters)}"
)
def _apply_pgvector_semantic_filter(
search_kwargs: dict, semantic_context: Optional[SemanticContext]
) -> None:
"""
Set semantic enforcement filter in search_kwargs for PGVector vectorstore.
"""
# Check if semantic_context is provided
if semantic_context is not None:
_semantic_filters = []
filters = search_kwargs.get("filter")
if semantic_context.pebblo_semantic_topics is not None:
# Add pebblo_semantic_topics filter to search_kwargs
topic_filter: dict = {
"pebblo_semantic_topics": {
"$eq": semantic_context.pebblo_semantic_topics.deny
}
}
_semantic_filters.append(topic_filter)
if semantic_context.pebblo_semantic_entities is not None:
# Add pebblo_semantic_entities filter to search_kwargs
entity_filter: dict = {
"pebblo_semantic_entities": {
"$eq": semantic_context.pebblo_semantic_entities.deny
}
}
_semantic_filters.append(entity_filter)
if len(_semantic_filters) > 0:
semantic_filter: dict = {"$not": _semantic_filters}
_apply_pgvector_filter(search_kwargs, filters, semantic_filter)
def _apply_pgvector_authorization_filter(
search_kwargs: dict, auth_context: Optional[AuthContext]
) -> None:
"""
Set identity enforcement filter in search_kwargs for PGVector vectorstore.
"""
if auth_context is not None:
auth_filter: dict = {"authorized_identities": {"$eq": auth_context.user_auth}}
filters = search_kwargs.get("filter")
_apply_pgvector_filter(search_kwargs, filters, auth_filter)
def _set_identity_enforcement_filter(
retriever: VectorStoreRetriever, auth_context: Optional[AuthContext]
) -> None:
"""
Set identity enforcement filter in search_kwargs.
This method sets the identity enforcement filter in the search_kwargs
of the retriever based on the type of the vectorstore.
"""
search_kwargs = retriever.search_kwargs
if retriever.vectorstore.__class__.__name__ in [PINECONE, PINECONE_VECTOR_STORE]:
_apply_pinecone_authorization_filter(search_kwargs, auth_context)
elif retriever.vectorstore.__class__.__name__ == QDRANT:
_apply_qdrant_authorization_filter(search_kwargs, auth_context)
elif retriever.vectorstore.__class__.__name__ == PGVECTOR:
_apply_pgvector_authorization_filter(search_kwargs, auth_context)
def _set_semantic_enforcement_filter(
retriever: VectorStoreRetriever, semantic_context: Optional[SemanticContext]
) -> None:
"""
Set semantic enforcement filter in search_kwargs.
This method sets the semantic enforcement filter in the search_kwargs
of the retriever based on the type of the vectorstore.
"""
search_kwargs = retriever.search_kwargs
if retriever.vectorstore.__class__.__name__ == PINECONE:
_apply_pinecone_semantic_filter(search_kwargs, semantic_context)
elif retriever.vectorstore.__class__.__name__ == QDRANT:
_apply_qdrant_semantic_filter(search_kwargs, semantic_context)
elif retriever.vectorstore.__class__.__name__ == PGVECTOR:
_apply_pgvector_semantic_filter(search_kwargs, semantic_context)
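`_apply_pgvector_filter` merges an enforcement filter into whatever user filter already exists, in the common case by wrapping both under `$and`. A deliberately simplified sketch of just that core rule (hypothetical `merge_filter` helper; it skips the operator-vs-field validation the real function performs):

```python
def merge_filter(search_kwargs, pebblo_filter):
    # Simplified merging rule: if a user filter is already present,
    # combine it with the enforcement filter under "$and"; otherwise
    # install the enforcement filter directly.
    existing = search_kwargs.get("filter")
    if existing:
        search_kwargs["filter"] = {"$and": [existing, pebblo_filter]}
    else:
        search_kwargs["filter"] = dict(pebblo_filter)
    return search_kwargs

kwargs = {"filter": {"topic": {"$eq": "finance"}}}
merge_filter(kwargs, {"authorized_identities": {"$eq": ["group-a"]}})
# kwargs["filter"] is now
# {"$and": [{"topic": {"$eq": "finance"}},
#           {"authorized_identities": {"$eq": ["group-a"]}}]}
```

Wrapping under `$and` preserves the user's original filter while guaranteeing the enforcement condition is also applied to every retrieval.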
{
  "filename": "README.md",
  "repo_name": "danielrd6/ifscube",
  "repo_path": "ifscube_extracted/ifscube-master/README.md",
  "type": "Markdown"
}
# ifscube
A set of python scripts and functions to analyse and process integral
field spectroscopy data cubes.
Author: Daniel Ruschel Dutra
Website: https://github.com/danielrd6/ifscube/
## Acknowledging IFSCube
If you use IFSCube in your research, and feel that it has contributed
significantly to your work, please consider citing the following paper
[Ruschel-Dutra et al. 2021](https://ui.adsabs.harvard.edu/abs/2021MNRAS.507...74R/abstract),
which has been the main driver for the development of the code,
and the Zenodo DOI
[](https://doi.org/10.5281/zenodo.4065550).
## Documentation
The full documentation to IFSCube can be found in the docs directory, or
online at https://ifscube.readthedocs.io/en/latest/.
{
  "filename": "Image.py",
  "repo_name": "catboost/catboost",
  "repo_path": "catboost_extracted/catboost-master/contrib/python/Pillow/py3/PIL/Image.py",
  "type": "Python"
}
#
# The Python Imaging Library.
# $Id$
#
# the Image class wrapper
#
# partial release history:
# 1995-09-09 fl Created
# 1996-03-11 fl PIL release 0.0 (proof of concept)
# 1996-04-30 fl PIL release 0.1b1
# 1999-07-28 fl PIL release 1.0 final
# 2000-06-07 fl PIL release 1.1
# 2000-10-20 fl PIL release 1.1.1
# 2001-05-07 fl PIL release 1.1.2
# 2002-03-15 fl PIL release 1.1.3
# 2003-05-10 fl PIL release 1.1.4
# 2005-03-28 fl PIL release 1.1.5
# 2006-12-02 fl PIL release 1.1.6
# 2009-11-15 fl PIL release 1.1.7
#
# Copyright (c) 1997-2009 by Secret Labs AB. All rights reserved.
# Copyright (c) 1995-2009 by Fredrik Lundh.
#
# See the README file for information on usage and redistribution.
#
from __future__ import annotations
import atexit
import builtins
import io
import logging
import math
import os
import re
import struct
import sys
import tempfile
import warnings
from collections.abc import Callable, MutableMapping
from enum import IntEnum
from pathlib import Path
try:
from defusedxml import ElementTree
except ImportError:
ElementTree = None
# VERSION was removed in Pillow 6.0.0.
# PILLOW_VERSION was removed in Pillow 9.0.0.
# Use __version__ instead.
from . import (
ExifTags,
ImageMode,
TiffTags,
UnidentifiedImageError,
__version__,
_plugins,
)
from ._binary import i32le, o32be, o32le
from ._util import DeferredError, is_path
logger = logging.getLogger(__name__)
class DecompressionBombWarning(RuntimeWarning):
pass
class DecompressionBombError(Exception):
pass
# Limit to around a quarter gigabyte for a 24-bit (3 bpp) image
MAX_IMAGE_PIXELS = int(1024 * 1024 * 1024 // 4 // 3)
try:
# If the _imaging C module is not present, Pillow will not load.
# Note that other modules should not refer to _imaging directly;
# import Image and use the Image.core variable instead.
# Also note that Image.core is not a publicly documented interface,
# and should be considered private and subject to change.
from . import _imaging as core
if __version__ != getattr(core, "PILLOW_VERSION", None):
msg = (
"The _imaging extension was built for another version of Pillow or PIL:\n"
f"Core version: {getattr(core, 'PILLOW_VERSION', None)}\n"
f"Pillow version: {__version__}"
)
raise ImportError(msg)
except ImportError as v:
core = DeferredError.new(ImportError("The _imaging C module is not installed."))
# Explanations for ways that we know we might have an import error
if str(v).startswith("Module use of python"):
# The _imaging C module is present, but not compiled for
# the right version (windows only). Print a warning, if
# possible.
warnings.warn(
"The _imaging extension was built for another version of Python.",
RuntimeWarning,
)
elif str(v).startswith("The _imaging extension"):
warnings.warn(str(v), RuntimeWarning)
# Fail here anyway. Don't let people run with a mostly broken Pillow.
# see docs/porting.rst
raise
USE_CFFI_ACCESS = False
try:
import cffi
except ImportError:
cffi = None
def isImageType(t):
"""
Checks if an object is an image object.
.. warning::
This function is for internal use only.
:param t: object to check if it's an image
:returns: True if the object is an image
"""
return hasattr(t, "im")
#
# Constants
# transpose
class Transpose(IntEnum):
FLIP_LEFT_RIGHT = 0
FLIP_TOP_BOTTOM = 1
ROTATE_90 = 2
ROTATE_180 = 3
ROTATE_270 = 4
TRANSPOSE = 5
TRANSVERSE = 6
# transforms (also defined in Imaging.h)
class Transform(IntEnum):
AFFINE = 0
EXTENT = 1
PERSPECTIVE = 2
QUAD = 3
MESH = 4
# resampling filters (also defined in Imaging.h)
class Resampling(IntEnum):
NEAREST = 0
BOX = 4
BILINEAR = 2
HAMMING = 5
BICUBIC = 3
LANCZOS = 1
_filters_support = {
Resampling.BOX: 0.5,
Resampling.BILINEAR: 1.0,
Resampling.HAMMING: 1.0,
Resampling.BICUBIC: 2.0,
Resampling.LANCZOS: 3.0,
}
# dithers
class Dither(IntEnum):
NONE = 0
ORDERED = 1 # Not yet implemented
RASTERIZE = 2 # Not yet implemented
FLOYDSTEINBERG = 3 # default
# palettes/quantizers
class Palette(IntEnum):
WEB = 0
ADAPTIVE = 1
class Quantize(IntEnum):
MEDIANCUT = 0
MAXCOVERAGE = 1
FASTOCTREE = 2
LIBIMAGEQUANT = 3
module = sys.modules[__name__]
for enum in (Transpose, Transform, Resampling, Dither, Palette, Quantize):
for item in enum:
setattr(module, item.name, item.value)
if hasattr(core, "DEFAULT_STRATEGY"):
DEFAULT_STRATEGY = core.DEFAULT_STRATEGY
FILTERED = core.FILTERED
HUFFMAN_ONLY = core.HUFFMAN_ONLY
RLE = core.RLE
FIXED = core.FIXED
# --------------------------------------------------------------------
# Registries
ID = []
OPEN = {}
MIME = {}
SAVE = {}
SAVE_ALL = {}
EXTENSION = {}
DECODERS = {}
ENCODERS = {}
# --------------------------------------------------------------------
# Modes
_ENDIAN = "<" if sys.byteorder == "little" else ">"
def _conv_type_shape(im):
m = ImageMode.getmode(im.mode)
shape = (im.height, im.width)
extra = len(m.bands)
if extra != 1:
shape += (extra,)
return shape, m.typestr
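# An illustrative sketch of the shape logic in `_conv_type_shape` above: a
# single-band image maps to a 2-D (height, width) array, while a multi-band
# image gains a trailing channel axis. The sizes and band counts below are
# arbitrary example values, not taken from a real image.

```python
def demo_conv_shape(height, width, n_bands):
    # Start with the 2-D spatial shape.
    shape = (height, width)
    # Multi-band modes (e.g. RGB) append a channel axis.
    if n_bands != 1:
        shape += (n_bands,)
    return shape
```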
MODES = ["1", "CMYK", "F", "HSV", "I", "L", "LAB", "P", "RGB", "RGBA", "RGBX", "YCbCr"]
# raw modes that may be memory mapped. NOTE: if you change this, you
# may have to modify the stride calculation in map.c too!
_MAPMODES = ("L", "P", "RGBX", "RGBA", "CMYK", "I;16", "I;16L", "I;16B")
def getmodebase(mode):
"""
Gets the "base" mode for a given mode. This function returns "L" for
images that contain grayscale data, and "RGB" for images that
contain color data.
:param mode: Input mode.
:returns: "L" or "RGB".
:exception KeyError: If the input mode was not a standard mode.
"""
return ImageMode.getmode(mode).basemode
def getmodetype(mode):
"""
Gets the storage type mode. Given a mode, this function returns a
single-layer mode suitable for storing individual bands.
:param mode: Input mode.
:returns: "L", "I", or "F".
:exception KeyError: If the input mode was not a standard mode.
"""
return ImageMode.getmode(mode).basetype
def getmodebandnames(mode):
"""
Gets a list of individual band names. Given a mode, this function returns
a tuple containing the names of individual bands (use
:py:meth:`~PIL.Image.getmodetype` to get the mode used to store each
individual band).
:param mode: Input mode.
:returns: A tuple containing band names. The length of the tuple
gives the number of bands in an image of the given mode.
:exception KeyError: If the input mode was not a standard mode.
"""
return ImageMode.getmode(mode).bands
def getmodebands(mode):
"""
Gets the number of individual bands for this mode.
:param mode: Input mode.
:returns: The number of bands in this mode.
:exception KeyError: If the input mode was not a standard mode.
"""
return len(ImageMode.getmode(mode).bands)
# --------------------------------------------------------------------
# Helpers
_initialized = 0
def preinit():
"""
Explicitly loads BMP, GIF, JPEG, PPM and PNG file format drivers.
It is called when opening or saving images.
"""
global _initialized
if _initialized >= 1:
return
try:
from . import BmpImagePlugin
assert BmpImagePlugin
except ImportError:
pass
try:
from . import GifImagePlugin
assert GifImagePlugin
except ImportError:
pass
try:
from . import JpegImagePlugin
assert JpegImagePlugin
except ImportError:
pass
try:
from . import PpmImagePlugin
assert PpmImagePlugin
except ImportError:
pass
try:
from . import PngImagePlugin
assert PngImagePlugin
except ImportError:
pass
_initialized = 1
def init():
"""
Explicitly initializes the Python Imaging Library. This function
loads all available file format drivers.
It is called when opening or saving images if :py:meth:`~preinit()` is
insufficient, and by :py:meth:`~PIL.features.pilinfo`.
"""
global _initialized
if _initialized >= 2:
return 0
parent_name = __name__.rpartition(".")[0]
for plugin in _plugins:
try:
logger.debug("Importing %s", plugin)
__import__(f"{parent_name}.{plugin}", globals(), locals(), [])
except ImportError as e:
logger.debug("Image: failed to import %s: %s", plugin, e)
if OPEN or SAVE:
_initialized = 2
return 1
# --------------------------------------------------------------------
# Codec factories (used by tobytes/frombytes and ImageFile.load)
def _getdecoder(mode, decoder_name, args, extra=()):
# tweak arguments
if args is None:
args = ()
elif not isinstance(args, tuple):
args = (args,)
try:
decoder = DECODERS[decoder_name]
except KeyError:
pass
else:
return decoder(mode, *args + extra)
try:
# get decoder
decoder = getattr(core, decoder_name + "_decoder")
except AttributeError as e:
msg = f"decoder {decoder_name} not available"
raise OSError(msg) from e
return decoder(mode, *args + extra)
def _getencoder(mode, encoder_name, args, extra=()):
# tweak arguments
if args is None:
args = ()
elif not isinstance(args, tuple):
args = (args,)
try:
encoder = ENCODERS[encoder_name]
except KeyError:
pass
else:
return encoder(mode, *args + extra)
try:
# get encoder
encoder = getattr(core, encoder_name + "_encoder")
except AttributeError as e:
msg = f"encoder {encoder_name} not available"
raise OSError(msg) from e
return encoder(mode, *args + extra)
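# A sketch of the codec lookup pattern shared by `_getdecoder` and
# `_getencoder` above: a registry of Python-level codecs is consulted first,
# and only on a miss is the C core probed by attribute name. The registry,
# core object, and codec names below are illustrative stand-ins.

```python
_demo_registry = {"raw_py": lambda mode: f"py:{mode}"}


class _DemoCore:
    @staticmethod
    def raw_decoder(mode):
        return f"core:{mode}"


def demo_get_codec(name, mode):
    # Registered Python codecs win over the core.
    try:
        codec = _demo_registry[name]
    except KeyError:
        pass
    else:
        return codec(mode)
    # Fall back to an attribute lookup on the core, as the real
    # functions do with getattr(core, name + "_decoder").
    try:
        factory = getattr(_DemoCore, name + "_decoder")
    except AttributeError:
        raise OSError(f"decoder {name} not available")
    return factory(mode)
```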
# --------------------------------------------------------------------
# Simple expression analyzer
class _E:
def __init__(self, scale, offset):
self.scale = scale
self.offset = offset
def __neg__(self):
return _E(-self.scale, -self.offset)
def __add__(self, other):
if isinstance(other, _E):
return _E(self.scale + other.scale, self.offset + other.offset)
return _E(self.scale, self.offset + other)
__radd__ = __add__
def __sub__(self, other):
return self + -other
def __rsub__(self, other):
return other + -self
def __mul__(self, other):
if isinstance(other, _E):
return NotImplemented
return _E(self.scale * other, self.offset * other)
__rmul__ = __mul__
def __truediv__(self, other):
if isinstance(other, _E):
return NotImplemented
return _E(self.scale / other, self.offset / other)
def _getscaleoffset(expr):
a = expr(_E(1, 0))
return (a.scale, a.offset) if isinstance(a, _E) else (0, a)
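# A sketch of how the expression analyzer above works: the caller's lambda is
# probed with a symbolic value that records an affine (scale, offset) pair
# instead of computing numbers. `_DemoE` is an illustrative, reduced copy of
# `_E` (only `__mul__` and `__add__`), not the module's own class.

```python
class _DemoE:
    def __init__(self, scale, offset):
        self.scale = scale
        self.offset = offset

    def __mul__(self, other):
        # Multiplying scales both components.
        return _DemoE(self.scale * other, self.offset * other)

    def __add__(self, other):
        # Adding a constant shifts only the offset.
        return _DemoE(self.scale, self.offset + other)


def demo_getscaleoffset(expr):
    a = expr(_DemoE(1, 0))
    return (a.scale, a.offset) if isinstance(a, _DemoE) else (0, a)


# `lambda x: x * 2 + 10` probes to scale 2, offset 10.
scale_offset = demo_getscaleoffset(lambda x: x * 2 + 10)
```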
# --------------------------------------------------------------------
# Implementation wrapper
class Image:
"""
This class represents an image object. To create
:py:class:`~PIL.Image.Image` objects, use the appropriate factory
functions. There's hardly ever any reason to call the Image constructor
directly.
* :py:func:`~PIL.Image.open`
* :py:func:`~PIL.Image.new`
* :py:func:`~PIL.Image.frombytes`
"""
format: str | None = None
format_description: str | None = None
_close_exclusive_fp_after_loading = True
def __init__(self):
# FIXME: take "new" parameters / other image?
# FIXME: turn mode and size into delegating properties?
self.im = None
self._mode = ""
self._size = (0, 0)
self.palette = None
self.info = {}
self.readonly = 0
self.pyaccess = None
self._exif = None
@property
def width(self):
return self.size[0]
@property
def height(self):
return self.size[1]
@property
def size(self):
return self._size
@property
def mode(self):
return self._mode
def _new(self, im):
new = Image()
new.im = im
new._mode = im.mode
new._size = im.size
if im.mode in ("P", "PA"):
if self.palette:
new.palette = self.palette.copy()
else:
from . import ImagePalette
new.palette = ImagePalette.ImagePalette()
new.info = self.info.copy()
return new
# Context manager support
def __enter__(self):
return self
def _close_fp(self):
if getattr(self, "_fp", False):
if self._fp != self.fp:
self._fp.close()
self._fp = DeferredError(ValueError("Operation on closed image"))
if self.fp:
self.fp.close()
def __exit__(self, *args):
if hasattr(self, "fp"):
if getattr(self, "_exclusive_fp", False):
self._close_fp()
self.fp = None
def close(self):
"""
Closes the file pointer, if possible.
This operation will destroy the image core and release its memory.
The image data will be unusable afterward.
This function is required to close images that have multiple frames or
have not had their file read and closed by the
:py:meth:`~PIL.Image.Image.load` method. See :ref:`file-handling` for
more information.
"""
if hasattr(self, "fp"):
try:
self._close_fp()
self.fp = None
except Exception as msg:
logger.debug("Error closing: %s", msg)
if getattr(self, "map", None):
self.map = None
# Instead of simply setting to None, we're setting up a
# deferred error that will better explain that the core image
# object is gone.
self.im = DeferredError(ValueError("Operation on closed image"))
def _copy(self):
self.load()
self.im = self.im.copy()
self.pyaccess = None
self.readonly = 0
def _ensure_mutable(self):
if self.readonly:
self._copy()
else:
self.load()
def _dump(self, file=None, format=None, **options):
suffix = ""
if format:
suffix = "." + format
if not file:
f, filename = tempfile.mkstemp(suffix)
os.close(f)
else:
filename = file
if not filename.endswith(suffix):
filename = filename + suffix
self.load()
if not format or format == "PPM":
self.im.save_ppm(filename)
else:
self.save(filename, format, **options)
return filename
def __eq__(self, other):
return (
self.__class__ is other.__class__
and self.mode == other.mode
and self.size == other.size
and self.info == other.info
and self.getpalette() == other.getpalette()
and self.tobytes() == other.tobytes()
)
def __repr__(self):
return "<%s.%s image mode=%s size=%dx%d at 0x%X>" % (
self.__class__.__module__,
self.__class__.__name__,
self.mode,
self.size[0],
self.size[1],
id(self),
)
def _repr_pretty_(self, p, cycle):
"""IPython plain text display support"""
# Same as __repr__ but without unpredictable id(self),
# to keep Jupyter notebook `text/plain` output stable.
p.text(
"<%s.%s image mode=%s size=%dx%d>"
% (
self.__class__.__module__,
self.__class__.__name__,
self.mode,
self.size[0],
self.size[1],
)
)
def _repr_image(self, image_format, **kwargs):
"""Helper function for iPython display hook.
:param image_format: Image format.
:returns: image as bytes, saved into the given format.
"""
b = io.BytesIO()
try:
self.save(b, image_format, **kwargs)
except Exception:
return None
return b.getvalue()
def _repr_png_(self):
"""iPython display hook support for PNG format.
:returns: PNG version of the image as bytes
"""
return self._repr_image("PNG", compress_level=1)
def _repr_jpeg_(self):
"""iPython display hook support for JPEG format.
:returns: JPEG version of the image as bytes
"""
return self._repr_image("JPEG")
@property
def __array_interface__(self):
# numpy array interface support
new = {"version": 3}
try:
if self.mode == "1":
# Binary images need to be extended from bits to bytes
# See: https://github.com/python-pillow/Pillow/issues/350
new["data"] = self.tobytes("raw", "L")
else:
new["data"] = self.tobytes()
except Exception as e:
if not isinstance(e, (MemoryError, RecursionError)):
try:
import numpy
from packaging.version import parse as parse_version
except ImportError:
pass
else:
if parse_version(numpy.__version__) < parse_version("1.23"):
warnings.warn(str(e))
raise
new["shape"], new["typestr"] = _conv_type_shape(self)
return new
def __getstate__(self):
im_data = self.tobytes() # load image first
return [self.info, self.mode, self.size, self.getpalette(), im_data]
def __setstate__(self, state):
Image.__init__(self)
info, mode, size, palette, data = state
self.info = info
self._mode = mode
self._size = size
self.im = core.new(mode, size)
if mode in ("L", "LA", "P", "PA") and palette:
self.putpalette(palette)
self.frombytes(data)
def tobytes(self, encoder_name="raw", *args):
"""
Return image as a bytes object.
.. warning::
This method returns the raw image data from the internal
storage. For compressed image data (e.g. PNG, JPEG) use
:meth:`~.save`, with a BytesIO parameter for in-memory
data.
:param encoder_name: What encoder to use. The default is to
use the standard "raw" encoder.
A list of C encoders can be seen under
codecs section of the function array in
:file:`_imaging.c`. Python encoders are
registered within the relevant plugins.
:param args: Extra arguments to the encoder.
:returns: A :py:class:`bytes` object.
"""
# may pass tuple instead of argument list
if len(args) == 1 and isinstance(args[0], tuple):
args = args[0]
if encoder_name == "raw" and args == ():
args = self.mode
self.load()
if self.width == 0 or self.height == 0:
return b""
# unpack data
e = _getencoder(self.mode, encoder_name, args)
e.setimage(self.im)
bufsize = max(65536, self.size[0] * 4) # see RawEncode.c
output = []
while True:
bytes_consumed, errcode, data = e.encode(bufsize)
output.append(data)
if errcode:
break
if errcode < 0:
msg = f"encoder error {errcode} in tobytes"
raise RuntimeError(msg)
return b"".join(output)
def tobitmap(self, name="image"):
"""
Returns the image converted to an X11 bitmap.
.. note:: This method only works for mode "1" images.
:param name: The name prefix to use for the bitmap variables.
:returns: A string containing an X11 bitmap.
:raises ValueError: If the mode is not "1"
"""
self.load()
if self.mode != "1":
msg = "not a bitmap"
raise ValueError(msg)
data = self.tobytes("xbm")
return b"".join(
[
f"#define {name}_width {self.size[0]}\n".encode("ascii"),
f"#define {name}_height {self.size[1]}\n".encode("ascii"),
f"static char {name}_bits[] = {{\n".encode("ascii"),
data,
b"};",
]
)
def frombytes(self, data, decoder_name="raw", *args):
"""
Loads this image with pixel data from a bytes object.
This method is similar to the :py:func:`~PIL.Image.frombytes` function,
but loads data into this image instead of creating a new image object.
"""
if self.width == 0 or self.height == 0:
return
# may pass tuple instead of argument list
if len(args) == 1 and isinstance(args[0], tuple):
args = args[0]
# default format
if decoder_name == "raw" and args == ():
args = self.mode
# unpack data
d = _getdecoder(self.mode, decoder_name, args)
d.setimage(self.im)
s = d.decode(data)
if s[0] >= 0:
msg = "not enough image data"
raise ValueError(msg)
if s[1] != 0:
msg = "cannot decode image data"
raise ValueError(msg)
def load(self):
"""
Allocates storage for the image and loads the pixel data. In
normal cases, you don't need to call this method, since the
Image class automatically loads an opened image when it is
accessed for the first time.
If the file associated with the image was opened by Pillow, then this
method will close it. The exception to this is if the image has
multiple frames, in which case the file will be left open for seek
operations. See :ref:`file-handling` for more information.
:returns: An image access object.
:rtype: :ref:`PixelAccess` or :py:class:`PIL.PyAccess`
"""
if self.im is not None and self.palette and self.palette.dirty:
# realize palette
mode, arr = self.palette.getdata()
self.im.putpalette(mode, arr)
self.palette.dirty = 0
self.palette.rawmode = None
if "transparency" in self.info and mode in ("LA", "PA"):
if isinstance(self.info["transparency"], int):
self.im.putpalettealpha(self.info["transparency"], 0)
else:
self.im.putpalettealphas(self.info["transparency"])
self.palette.mode = "RGBA"
else:
palette_mode = "RGBA" if mode.startswith("RGBA") else "RGB"
self.palette.mode = palette_mode
self.palette.palette = self.im.getpalette(palette_mode, palette_mode)
if self.im is not None:
if cffi and USE_CFFI_ACCESS:
if self.pyaccess:
return self.pyaccess
from . import PyAccess
self.pyaccess = PyAccess.new(self, self.readonly)
if self.pyaccess:
return self.pyaccess
return self.im.pixel_access(self.readonly)
def verify(self):
"""
Verifies the contents of a file. For data read from a file, this
method attempts to determine if the file is broken, without
actually decoding the image data. If this method finds any
problems, it raises suitable exceptions. If you need to load
the image after using this method, you must reopen the image
file.
"""
pass
def convert(
self, mode=None, matrix=None, dither=None, palette=Palette.WEB, colors=256
):
"""
Returns a converted copy of this image. For the "P" mode, this
method translates pixels through the palette. If mode is
omitted, a mode is chosen so that all information in the image
and the palette can be represented without a palette.
The current version supports all possible conversions between
"L", "RGB" and "CMYK". The ``matrix`` argument only supports "L"
and "RGB".
When translating a color image to grayscale (mode "L"),
the library uses the ITU-R 601-2 luma transform::
L = R * 299/1000 + G * 587/1000 + B * 114/1000
The default method of converting a grayscale ("L") or "RGB"
image into a bilevel (mode "1") image uses Floyd-Steinberg
dither to approximate the original image luminosity levels. If
dither is ``None``, all values larger than 127 are set to 255 (white),
all other values to 0 (black). To use other thresholds, use the
:py:meth:`~PIL.Image.Image.point` method.
When converting from "RGBA" to "P" without a ``matrix`` argument,
this passes the operation to :py:meth:`~PIL.Image.Image.quantize`,
and ``dither`` and ``palette`` are ignored.
When converting from "PA", if an "RGBA" palette is present, the alpha
channel from the image will be used instead of the values from the palette.
:param mode: The requested mode. See: :ref:`concept-modes`.
:param matrix: An optional conversion matrix. If given, this
should be 4- or 12-tuple containing floating point values.
:param dither: Dithering method, used when converting from
mode "RGB" to "P" or from "RGB" or "L" to "1".
Available methods are :data:`Dither.NONE` or :data:`Dither.FLOYDSTEINBERG`
(default). Note that this is not used when ``matrix`` is supplied.
:param palette: Palette to use when converting from mode "RGB"
to "P". Available palettes are :data:`Palette.WEB` or
:data:`Palette.ADAPTIVE`.
:param colors: Number of colors to use for the :data:`Palette.ADAPTIVE`
palette. Defaults to 256.
:rtype: :py:class:`~PIL.Image.Image`
:returns: An :py:class:`~PIL.Image.Image` object.
"""
self.load()
has_transparency = "transparency" in self.info
if not mode and self.mode == "P":
# determine default mode
if self.palette:
mode = self.palette.mode
else:
mode = "RGB"
if mode == "RGB" and has_transparency:
mode = "RGBA"
if not mode or (mode == self.mode and not matrix):
return self.copy()
if matrix:
# matrix conversion
if mode not in ("L", "RGB"):
msg = "illegal conversion"
raise ValueError(msg)
im = self.im.convert_matrix(mode, matrix)
new_im = self._new(im)
if has_transparency and self.im.bands == 3:
transparency = new_im.info["transparency"]
def convert_transparency(m, v):
v = m[0] * v[0] + m[1] * v[1] + m[2] * v[2] + m[3] * 0.5
return max(0, min(255, int(v)))
if mode == "L":
transparency = convert_transparency(matrix, transparency)
elif len(mode) == 3:
transparency = tuple(
convert_transparency(matrix[i * 4 : i * 4 + 4], transparency)
for i in range(0, len(transparency))
)
new_im.info["transparency"] = transparency
return new_im
if mode == "P" and self.mode == "RGBA":
return self.quantize(colors)
trns = None
delete_trns = False
# transparency handling
if has_transparency:
if (self.mode in ("1", "L", "I") and mode in ("LA", "RGBA")) or (
self.mode == "RGB" and mode == "RGBA"
):
# Use transparent conversion to promote from transparent
# color to an alpha channel.
new_im = self._new(
self.im.convert_transparent(mode, self.info["transparency"])
)
del new_im.info["transparency"]
return new_im
elif self.mode in ("L", "RGB", "P") and mode in ("L", "RGB", "P"):
t = self.info["transparency"]
if isinstance(t, bytes):
# Dragons. This can't be represented by a single color
warnings.warn(
"Palette images with Transparency expressed in bytes should be "
"converted to RGBA images"
)
delete_trns = True
else:
# get the new transparency color.
# use existing conversions
trns_im = new(self.mode, (1, 1))
if self.mode == "P":
trns_im.putpalette(self.palette)
if isinstance(t, tuple):
err = "Couldn't allocate a palette color for transparency"
try:
t = trns_im.palette.getcolor(t, self)
except ValueError as e:
if str(e) == "cannot allocate more than 256 colors":
# If all 256 colors are in use,
# then there is no need for transparency
t = None
else:
raise ValueError(err) from e
if t is None:
trns = None
else:
trns_im.putpixel((0, 0), t)
if mode in ("L", "RGB"):
trns_im = trns_im.convert(mode)
else:
# can't just retrieve the palette number, got to do it
# after quantization.
trns_im = trns_im.convert("RGB")
trns = trns_im.getpixel((0, 0))
elif self.mode == "P" and mode in ("LA", "PA", "RGBA"):
t = self.info["transparency"]
delete_trns = True
if isinstance(t, bytes):
self.im.putpalettealphas(t)
elif isinstance(t, int):
self.im.putpalettealpha(t, 0)
else:
msg = "Transparency for P mode should be bytes or int"
raise ValueError(msg)
if mode == "P" and palette == Palette.ADAPTIVE:
im = self.im.quantize(colors)
new_im = self._new(im)
from . import ImagePalette
new_im.palette = ImagePalette.ImagePalette(
"RGB", new_im.im.getpalette("RGB")
)
if delete_trns:
# This could possibly happen if we requantize to fewer colors.
# The transparency would be totally off in that case.
del new_im.info["transparency"]
if trns is not None:
try:
new_im.info["transparency"] = new_im.palette.getcolor(trns, new_im)
except Exception:
# if we can't make a transparent color, don't leave the old
# transparency hanging around to mess us up.
del new_im.info["transparency"]
warnings.warn("Couldn't allocate palette entry for transparency")
return new_im
if "LAB" in (self.mode, mode):
other_mode = mode if self.mode == "LAB" else self.mode
if other_mode in ("RGB", "RGBA", "RGBX"):
from . import ImageCms
srgb = ImageCms.createProfile("sRGB")
lab = ImageCms.createProfile("LAB")
profiles = [lab, srgb] if self.mode == "LAB" else [srgb, lab]
transform = ImageCms.buildTransform(
profiles[0], profiles[1], self.mode, mode
)
return transform.apply(self)
# colorspace conversion
if dither is None:
dither = Dither.FLOYDSTEINBERG
try:
im = self.im.convert(mode, dither)
except ValueError:
try:
# normalize source image and try again
modebase = getmodebase(self.mode)
if modebase == self.mode:
raise
im = self.im.convert(modebase)
im = im.convert(mode, dither)
except KeyError as e:
msg = "illegal conversion"
raise ValueError(msg) from e
new_im = self._new(im)
if mode == "P" and palette != Palette.ADAPTIVE:
from . import ImagePalette
new_im.palette = ImagePalette.ImagePalette("RGB", im.getpalette("RGB"))
if delete_trns:
# crash fail if we leave a bytes transparency in an rgb/l mode.
del new_im.info["transparency"]
if trns is not None:
if new_im.mode == "P":
try:
new_im.info["transparency"] = new_im.palette.getcolor(trns, new_im)
except ValueError as e:
del new_im.info["transparency"]
if str(e) != "cannot allocate more than 256 colors":
# If all 256 colors are in use,
# then there is no need for transparency
warnings.warn(
"Couldn't allocate palette entry for transparency"
)
else:
new_im.info["transparency"] = trns
return new_im
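# An illustrative check of the ITU-R 601-2 luma transform documented in
# `convert` above; the RGB triples used here are arbitrary example values.

```python
def demo_luma(r, g, b):
    # L = R * 299/1000 + G * 587/1000 + B * 114/1000
    return r * 299 / 1000 + g * 587 / 1000 + b * 114 / 1000
```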
def quantize(
self,
colors=256,
method=None,
kmeans=0,
palette=None,
dither=Dither.FLOYDSTEINBERG,
):
"""
Convert the image to 'P' mode with the specified number
of colors.
:param colors: The desired number of colors, <= 256
:param method: :data:`Quantize.MEDIANCUT` (median cut),
:data:`Quantize.MAXCOVERAGE` (maximum coverage),
:data:`Quantize.FASTOCTREE` (fast octree),
:data:`Quantize.LIBIMAGEQUANT` (libimagequant; check support
using :py:func:`PIL.features.check_feature` with
``feature="libimagequant"``).
By default, :data:`Quantize.MEDIANCUT` will be used.
The exception to this is RGBA images. :data:`Quantize.MEDIANCUT`
and :data:`Quantize.MAXCOVERAGE` do not support RGBA images, so
:data:`Quantize.FASTOCTREE` is used by default instead.
:param kmeans: Integer
:param palette: Quantize to the palette of given
:py:class:`PIL.Image.Image`.
:param dither: Dithering method, used when converting from
mode "RGB" to "P" or from "RGB" or "L" to "1".
Available methods are :data:`Dither.NONE` or :data:`Dither.FLOYDSTEINBERG`
(default).
:returns: A new image
"""
self.load()
if method is None:
# defaults:
method = Quantize.MEDIANCUT
if self.mode == "RGBA":
method = Quantize.FASTOCTREE
if self.mode == "RGBA" and method not in (
Quantize.FASTOCTREE,
Quantize.LIBIMAGEQUANT,
):
# Caller specified an invalid mode.
msg = (
"Fast Octree (method == 2) and libimagequant (method == 3) "
"are the only valid methods for quantizing RGBA images"
)
raise ValueError(msg)
if palette:
# use palette from reference image
palette.load()
if palette.mode != "P":
msg = "bad mode for palette image"
raise ValueError(msg)
if self.mode not in {"RGB", "L"}:
msg = "only RGB or L mode images can be quantized to a palette"
raise ValueError(msg)
im = self.im.convert("P", dither, palette.im)
new_im = self._new(im)
new_im.palette = palette.palette.copy()
return new_im
im = self._new(self.im.quantize(colors, method, kmeans))
from . import ImagePalette
mode = im.im.getpalettemode()
palette = im.im.getpalette(mode, mode)[: colors * len(mode)]
im.palette = ImagePalette.ImagePalette(mode, palette)
return im
def copy(self) -> Image:
"""
Copies this image. Use this method if you wish to paste things
into an image, but still retain the original.
:rtype: :py:class:`~PIL.Image.Image`
:returns: An :py:class:`~PIL.Image.Image` object.
"""
self.load()
return self._new(self.im.copy())
__copy__ = copy
def crop(self, box=None) -> Image:
"""
Returns a rectangular region from this image. The box is a
4-tuple defining the left, upper, right, and lower pixel
coordinate. See :ref:`coordinate-system`.
Note: Prior to Pillow 3.4.0, this was a lazy operation.
:param box: The crop rectangle, as a (left, upper, right, lower)-tuple.
:rtype: :py:class:`~PIL.Image.Image`
:returns: An :py:class:`~PIL.Image.Image` object.
"""
if box is None:
return self.copy()
if box[2] < box[0]:
msg = "Coordinate 'right' is less than 'left'"
raise ValueError(msg)
elif box[3] < box[1]:
msg = "Coordinate 'lower' is less than 'upper'"
raise ValueError(msg)
self.load()
return self._new(self._crop(self.im, box))
def _crop(self, im, box):
"""
Returns a rectangular region from the core image object im.
This is equivalent to calling im.crop((x0, y0, x1, y1)), but
includes additional sanity checks.
:param im: a core image object
:param box: The crop rectangle, as a (left, upper, right, lower)-tuple.
:returns: A core image object.
"""
x0, y0, x1, y1 = map(int, map(round, box))
absolute_values = (abs(x1 - x0), abs(y1 - y0))
_decompression_bomb_check(absolute_values)
return im.crop((x0, y0, x1, y1))
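# A sketch of the coordinate normalization in `_crop` above: float box
# coordinates are rounded to the nearest integer before the core crop is
# called. The box values below are arbitrary examples.

```python
def demo_normalize_box(box):
    # Round first, then coerce to int, as map(int, map(round, box)) does.
    return tuple(map(int, map(round, box)))
```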
def draft(self, mode, size):
"""
Configures the image file loader so it returns a version of the
image that as closely as possible matches the given mode and
size. For example, you can use this method to convert a color
JPEG to grayscale while loading it.
If any changes are made, returns a tuple with the chosen ``mode`` and
``box`` with coordinates of the original image within the altered one.
Note that this method modifies the :py:class:`~PIL.Image.Image` object
in place. If the image has already been loaded, this method has no
effect.
Note: This method is not implemented for most images. It is
currently implemented only for JPEG and MPO images.
:param mode: The requested mode.
:param size: The requested size in pixels, as a 2-tuple:
(width, height).
"""
pass
def _expand(self, xmargin, ymargin=None):
if ymargin is None:
ymargin = xmargin
self.load()
return self._new(self.im.expand(xmargin, ymargin))
def filter(self, filter):
"""
Filters this image using the given filter. For a list of
available filters, see the :py:mod:`~PIL.ImageFilter` module.
:param filter: Filter kernel.
:returns: An :py:class:`~PIL.Image.Image` object."""
from . import ImageFilter
self.load()
if isinstance(filter, Callable):
filter = filter()
if not hasattr(filter, "filter"):
msg = "filter argument should be ImageFilter.Filter instance or class"
raise TypeError(msg)
multiband = isinstance(filter, ImageFilter.MultibandFilter)
if self.im.bands == 1 or multiband:
return self._new(filter.filter(self.im))
ims = [
self._new(filter.filter(self.im.getband(c))) for c in range(self.im.bands)
]
return merge(self.mode, ims)
def getbands(self):
"""
Returns a tuple containing the name of each band in this image.
For example, ``getbands`` on an RGB image returns ("R", "G", "B").
:returns: A tuple containing band names.
:rtype: tuple
"""
return ImageMode.getmode(self.mode).bands
def getbbox(self, *, alpha_only=True):
"""
Calculates the bounding box of the non-zero regions in the
image.
:param alpha_only: Optional flag, defaulting to ``True``.
If ``True`` and the image has an alpha channel, trim transparent pixels.
Otherwise, trim pixels when all channels are zero.
Keyword-only argument.
:returns: The bounding box is returned as a 4-tuple defining the
left, upper, right, and lower pixel coordinate. See
:ref:`coordinate-system`. If the image is completely empty, this
method returns None.
"""
self.load()
return self.im.getbbox(alpha_only)
def getcolors(self, maxcolors=256):
"""
Returns a list of colors used in this image.
The colors will be in the image's mode. For example, an RGB image will
return a tuple of (red, green, blue) color values, and a P image will
return the index of the color in the palette.
:param maxcolors: Maximum number of colors. If this number is
exceeded, this method returns None. The default limit is
256 colors.
:returns: An unsorted list of (count, pixel) values.
"""
self.load()
if self.mode in ("1", "L", "P"):
h = self.im.histogram()
out = [(h[i], i) for i in range(256) if h[i]]
if len(out) > maxcolors:
return None
return out
return self.im.getcolors(maxcolors)
def getdata(self, band=None):
"""
Returns the contents of this image as a sequence object
containing pixel values. The sequence object is flattened, so
that values for line one follow directly after the values of
line zero, and so on.
Note that the sequence object returned by this method is an
internal PIL data type, which only supports certain sequence
operations. To convert it to an ordinary sequence (e.g. for
printing), use ``list(im.getdata())``.
:param band: What band to return. The default is to return
all bands. To return a single band, pass in the index
value (e.g. 0 to get the "R" band from an "RGB" image).
:returns: A sequence-like object.
"""
self.load()
if band is not None:
return self.im.getband(band)
return self.im # could be abused
def getextrema(self):
"""
Gets the minimum and maximum pixel values for each band in
the image.
:returns: For a single-band image, a 2-tuple containing the
minimum and maximum pixel value. For a multi-band image,
a tuple containing one 2-tuple for each band.
"""
self.load()
if self.im.bands > 1:
return tuple(self.im.getband(i).getextrema() for i in range(self.im.bands))
return self.im.getextrema()
def _getxmp(self, xmp_tags):
def get_name(tag):
return re.sub("^{[^}]+}", "", tag)
def get_value(element):
value = {get_name(k): v for k, v in element.attrib.items()}
children = list(element)
if children:
for child in children:
name = get_name(child.tag)
child_value = get_value(child)
if name in value:
if not isinstance(value[name], list):
value[name] = [value[name]]
value[name].append(child_value)
else:
value[name] = child_value
elif value:
if element.text:
value["text"] = element.text
else:
return element.text
return value
if ElementTree is None:
warnings.warn("XMP data cannot be read without defusedxml dependency")
return {}
else:
root = ElementTree.fromstring(xmp_tags)
return {get_name(root.tag): get_value(root)}
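The helper above strips XML namespaces from tag names and folds attributes, children, and text into nested dicts. A simplified standalone sketch using the stdlib parser (real XMP parsing here goes through defusedxml; unlike `_getxmp`, this version does not merge repeated child names into lists and drops text on mixed attribute/text nodes):

```python
import re
import xml.etree.ElementTree as ElementTree

def get_name(tag):
    # Strip a leading "{namespace}" prefix from a qualified tag name.
    return re.sub(r"^{[^}]+}", "", tag)

def get_value(element):
    # Attributes (namespaces stripped) become dict keys, children recurse;
    # a leaf with no attributes or children collapses to its text.
    value = {get_name(k): v for k, v in element.attrib.items()}
    for child in element:
        value[get_name(child.tag)] = get_value(child)
    return value or element.text

root = ElementTree.fromstring('<r xmlns:x="urn:x"><x:a>hi</x:a></r>')
print(get_value(root))  # {'a': 'hi'}
```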
def getexif(self):
"""
Gets EXIF data from the image.
:returns: an :py:class:`~PIL.Image.Exif` object.
"""
if self._exif is None:
self._exif = Exif()
self._exif._loaded = False
elif self._exif._loaded:
return self._exif
self._exif._loaded = True
exif_info = self.info.get("exif")
if exif_info is None:
if "Raw profile type exif" in self.info:
exif_info = bytes.fromhex(
"".join(self.info["Raw profile type exif"].split("\n")[3:])
)
elif hasattr(self, "tag_v2"):
self._exif.bigtiff = self.tag_v2._bigtiff
self._exif.endian = self.tag_v2._endian
self._exif.load_from_fp(self.fp, self.tag_v2._offset)
if exif_info is not None:
self._exif.load(exif_info)
# XMP tags
if ExifTags.Base.Orientation not in self._exif:
xmp_tags = self.info.get("XML:com.adobe.xmp")
if xmp_tags:
match = re.search(r'tiff:Orientation(="|>)([0-9])', xmp_tags)
if match:
self._exif[ExifTags.Base.Orientation] = int(match[2])
return self._exif
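The Orientation fallback at the end of `getexif` scrapes `tiff:Orientation` out of raw XMP with a regex that accepts both the attribute form (`="6"`) and the element form (`>6<`). Isolated as a sketch:

```python
import re

def xmp_orientation(xmp_tags):
    # Same pattern as getexif's fallback: matches tiff:Orientation="6"
    # as well as <tiff:Orientation>6</tiff:Orientation>.
    match = re.search(r'tiff:Orientation(="|>)([0-9])', xmp_tags)
    return int(match[2]) if match else None

print(xmp_orientation('<x tiff:Orientation="6"/>'))  # 6
print(xmp_orientation("<tiff:Orientation>3</tiff:Orientation>"))  # 3
```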
def _reload_exif(self):
if self._exif is None or not self._exif._loaded:
return
self._exif._loaded = False
self.getexif()
def get_child_images(self):
child_images = []
exif = self.getexif()
ifds = []
if ExifTags.Base.SubIFDs in exif:
subifd_offsets = exif[ExifTags.Base.SubIFDs]
if subifd_offsets:
if not isinstance(subifd_offsets, tuple):
subifd_offsets = (subifd_offsets,)
for subifd_offset in subifd_offsets:
ifds.append((exif._get_ifd_dict(subifd_offset), subifd_offset))
ifd1 = exif.get_ifd(ExifTags.IFD.IFD1)
if ifd1 and ifd1.get(513):
ifds.append((ifd1, exif._info.next))
offset = None
for ifd, ifd_offset in ifds:
current_offset = self.fp.tell()
if offset is None:
offset = current_offset
fp = self.fp
thumbnail_offset = ifd.get(513)
if thumbnail_offset is not None:
try:
thumbnail_offset += self._exif_offset
except AttributeError:
pass
self.fp.seek(thumbnail_offset)
data = self.fp.read(ifd.get(514))
fp = io.BytesIO(data)
with open(fp) as im:
if thumbnail_offset is None:
im._frame_pos = [ifd_offset]
im._seek(0)
im.load()
child_images.append(im)
if offset is not None:
self.fp.seek(offset)
return child_images
def getim(self):
"""
Returns a capsule that points to the internal image memory.
:returns: A capsule object.
"""
self.load()
return self.im.ptr
def getpalette(self, rawmode="RGB"):
"""
Returns the image palette as a list.
:param rawmode: The mode in which to return the palette. ``None`` will
return the palette in its current mode.
.. versionadded:: 9.1.0
:returns: A list of color values [r, g, b, ...], or None if the
image has no palette.
"""
self.load()
try:
mode = self.im.getpalettemode()
except ValueError:
return None # no palette
if rawmode is None:
rawmode = mode
return list(self.im.getpalette(mode, rawmode))
@property
def has_transparency_data(self) -> bool:
"""
Determine if an image has transparency data, whether in the form of an
alpha channel, a palette with an alpha channel, or a "transparency" key
in the info dictionary.
Note that the image might still appear solid if all of the
values are fully opaque.
:returns: A boolean.
"""
return (
self.mode in ("LA", "La", "PA", "RGBA", "RGBa")
or (self.mode == "P" and self.palette.mode.endswith("A"))
or "transparency" in self.info
)
def apply_transparency(self):
"""
If a P mode image has a "transparency" key in the info dictionary,
remove the key and instead apply the transparency to the palette.
Otherwise, the image is unchanged.
"""
if self.mode != "P" or "transparency" not in self.info:
return
from . import ImagePalette
palette = self.getpalette("RGBA")
transparency = self.info["transparency"]
if isinstance(transparency, bytes):
for i, alpha in enumerate(transparency):
palette[i * 4 + 3] = alpha
else:
palette[transparency * 4 + 3] = 0
self.palette = ImagePalette.ImagePalette("RGBA", bytes(palette))
self.palette.dirty = 1
del self.info["transparency"]
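The transparency patch above edits the alpha column of a flat RGBA palette: a bytes table supplies one alpha per palette index, while a single integer marks one index as fully transparent. The same logic on a plain list:

```python
def apply_transparency(palette, transparency):
    # palette is a flat [r, g, b, a, r, g, b, a, ...] list, as returned
    # by getpalette("RGBA"); the alpha for index i lives at i * 4 + 3.
    palette = list(palette)
    if isinstance(transparency, bytes):
        for i, alpha in enumerate(transparency):
            palette[i * 4 + 3] = alpha
    else:
        palette[transparency * 4 + 3] = 0
    return palette

pal = [255, 0, 0, 255, 0, 255, 0, 255]  # red, green (both opaque)
print(apply_transparency(pal, 1))  # [255, 0, 0, 255, 0, 255, 0, 0]
```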
def getpixel(self, xy):
"""
Returns the pixel value at a given position.
:param xy: The coordinate, given as (x, y). See
:ref:`coordinate-system`.
:returns: The pixel value. If the image is a multi-layer image,
this method returns a tuple.
"""
self.load()
if self.pyaccess:
return self.pyaccess.getpixel(xy)
return self.im.getpixel(tuple(xy))
def getprojection(self):
"""
Gets projections onto the x and y axes.
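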
:returns: Two sequences, indicating where there are non-zero
pixels along the X-axis and the Y-axis, respectively.
"""
self.load()
x, y = self.im.getprojection()
return list(x), list(y)
def histogram(self, mask=None, extrema=None):
"""
Returns a histogram for the image. The histogram is returned as a
list of pixel counts, one for each pixel value in the source
image. Counts are grouped into 256 bins for each band, even if
the image has more than 8 bits per band. If the image has more
than one band, the histograms for all bands are concatenated (for
example, the histogram for an "RGB" image contains 768 values).
A bilevel image (mode "1") is treated as a grayscale ("L") image
by this method.
If a mask is provided, the method returns a histogram for those
parts of the image where the mask image is non-zero. The mask
image must have the same size as the image, and be either a
bi-level image (mode "1") or a grayscale image ("L").
:param mask: An optional mask.
:param extrema: An optional tuple of manually-specified extrema.
:returns: A list containing pixel counts.
"""
self.load()
if mask:
mask.load()
return self.im.histogram((0, 0), mask.im)
if self.mode in ("I", "F"):
if extrema is None:
extrema = self.getextrema()
return self.im.histogram(extrema)
return self.im.histogram()
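As the docstring says, multi-band histograms are per-band 256-bin histograms concatenated in band order, so an "RGB" image yields 768 counts. A sketch over a hypothetical list of RGB tuples:

```python
def rgb_histogram(pixels):
    # One 256-bin histogram per band, concatenated R then G then B,
    # matching the 768-value layout described in Image.histogram.
    h = [0] * 768
    for r, g, b in pixels:
        h[r] += 1
        h[256 + g] += 1
        h[512 + b] += 1
    return h

h = rgb_histogram([(0, 128, 255), (0, 128, 255)])
print(h[0], h[256 + 128], h[512 + 255])  # 2 2 2
```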
def entropy(self, mask=None, extrema=None):
"""
Calculates and returns the entropy for the image.
A bilevel image (mode "1") is treated as a grayscale ("L")
image by this method.
If a mask is provided, the method employs the histogram for
those parts of the image where the mask image is non-zero.
The mask image must have the same size as the image, and be
either a bi-level image (mode "1") or a grayscale image ("L").
:param mask: An optional mask.
:param extrema: An optional tuple of manually-specified extrema.
:returns: A float value representing the image entropy
"""
self.load()
if mask:
mask.load()
return self.im.entropy((0, 0), mask.im)
if self.mode in ("I", "F"):
if extrema is None:
extrema = self.getextrema()
return self.im.entropy(extrema)
return self.im.entropy()
def paste(self, im, box=None, mask=None) -> None:
"""
Pastes another image into this image. The box argument is either
a 2-tuple giving the upper left corner, a 4-tuple defining the
left, upper, right, and lower pixel coordinate, or None (same as
(0, 0)). See :ref:`coordinate-system`. If a 4-tuple is given, the size
of the pasted image must match the size of the region.
If the modes don't match, the pasted image is converted to the mode of
this image (see the :py:meth:`~PIL.Image.Image.convert` method for
details).
Instead of an image, the source can be an integer or tuple
containing pixel values. The method then fills the region
with the given color. When creating RGB images, you can
also use color strings as supported by the ImageColor module.
If a mask is given, this method updates only the regions
indicated by the mask. You can use either "1", "L", "LA", "RGBA"
or "RGBa" images (if present, the alpha band is used as mask).
Where the mask is 255, the given image is copied as is. Where
the mask is 0, the current value is preserved. Intermediate
values will mix the two images together, including their alpha
channels if they have them.
See :py:meth:`~PIL.Image.Image.alpha_composite` if you want to
combine images with respect to their alpha channels.
:param im: Source image or pixel value (integer or tuple).
:param box: An optional 4-tuple giving the region to paste into.
If a 2-tuple is used instead, it's treated as the upper left
corner. If omitted or None, the source is pasted into the
upper left corner.
If an image is given as the second argument and there is no
third, the box defaults to (0, 0), and the second argument
is interpreted as a mask image.
:param mask: An optional mask image.
"""
if isImageType(box) and mask is None:
# abbreviated paste(im, mask) syntax
mask = box
box = None
if box is None:
box = (0, 0)
if len(box) == 2:
# upper left corner given; get size from image or mask
if isImageType(im):
size = im.size
elif isImageType(mask):
size = mask.size
else:
# FIXME: use self.size here?
msg = "cannot determine region size; use 4-item box"
raise ValueError(msg)
box += (box[0] + size[0], box[1] + size[1])
if isinstance(im, str):
from . import ImageColor
im = ImageColor.getcolor(im, self.mode)
elif isImageType(im):
im.load()
if self.mode != im.mode:
if self.mode != "RGB" or im.mode not in ("LA", "RGBA", "RGBa"):
# should use an adapter for this!
im = im.convert(self.mode)
im = im.im
self._ensure_mutable()
if mask:
mask.load()
self.im.paste(im, box, mask.im)
else:
self.im.paste(im, box)
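The box handling at the top of `paste` normalizes a missing or 2-tuple box into a full 4-tuple region using the source size. That step in isolation:

```python
def normalize_box(box, size):
    # Expand an upper-left (x, y) corner into (left, upper, right, lower)
    # using the source image's (width, height), as paste() does.
    if box is None:
        box = (0, 0)
    if len(box) == 2:
        box += (box[0] + size[0], box[1] + size[1])
    return box

print(normalize_box((10, 20), (100, 50)))  # (10, 20, 110, 70)
```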
def alpha_composite(self, im, dest=(0, 0), source=(0, 0)):
"""'In-place' analog of Image.alpha_composite. Composites an image
onto this image.
:param im: image to composite over this one
:param dest: Optional 2-tuple (left, top) specifying the upper
left corner in this (destination) image.
:param source: Optional 2-tuple (left, top) for the upper left
corner in the overlay source image, or 4-tuple (left, top, right,
bottom) giving the bounds of the source rectangle.
Performance Note: Not currently implemented in-place in the core layer.
"""
if not isinstance(source, (list, tuple)):
msg = "Source must be a tuple"
raise ValueError(msg)
if not isinstance(dest, (list, tuple)):
msg = "Destination must be a tuple"
raise ValueError(msg)
if len(source) not in (2, 4):
msg = "Source must be a 2 or 4-tuple"
raise ValueError(msg)
if not len(dest) == 2:
msg = "Destination must be a 2-tuple"
raise ValueError(msg)
if min(source) < 0:
msg = "Source must be non-negative"
raise ValueError(msg)
if len(source) == 2:
source = source + im.size
# over image, crop if it's not the whole thing.
if source == (0, 0) + im.size:
overlay = im
else:
overlay = im.crop(source)
# target for the paste
box = dest + (dest[0] + overlay.width, dest[1] + overlay.height)
# destination image. don't copy if we're using the whole image.
if box == (0, 0) + self.size:
background = self
else:
background = self.crop(box)
result = alpha_composite(background, overlay)
self.paste(result, box)
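The compositing itself is delegated to the module-level `alpha_composite`, which applies the Porter-Duff "over" operator. For orientation, here is that per-pixel math on straight (non-premultiplied) RGBA values normalized to 0..1; the actual implementation works on premultiplied 8-bit data in C:

```python
def over(fg, bg):
    # Porter-Duff "over" for one straight-alpha RGBA pixel, values in 0..1:
    # out_a = fa + ba * (1 - fa); colors are alpha-weighted averages.
    fr, fgreen, fb, fa = fg
    br, bgreen, bb, ba = bg
    oa = fa + ba * (1 - fa)
    if oa == 0:
        return (0.0, 0.0, 0.0, 0.0)
    def blend(f, b):
        return (f * fa + b * ba * (1 - fa)) / oa
    return (blend(fr, br), blend(fgreen, bgreen), blend(fb, bb), oa)

# 50% red over opaque blue -> half red, half blue, opaque.
print(over((1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0)))
# (0.5, 0.0, 0.5, 1.0)
```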
def point(self, lut, mode=None):
"""
Maps this image through a lookup table or function.
:param lut: A lookup table, containing 256 (or 65536 if
self.mode=="I" and mode == "L") values per band in the
image. A function can be used instead, it should take a
single argument. The function is called once for each
possible pixel value, and the resulting table is applied to
all bands of the image.
It may also be an :py:class:`~PIL.Image.ImagePointHandler`
object::
class Example(Image.ImagePointHandler):
def point(self, data):
# Return result
:param mode: Output mode (default is same as input). In the
current version, this can only be used if the source image
has mode "L" or "P", and the output has mode "1" or the
source image mode is "I" and the output mode is "L".
:returns: An :py:class:`~PIL.Image.Image` object.
"""
self.load()
if isinstance(lut, ImagePointHandler):
return lut.point(self)
if callable(lut):
# if it isn't a list, it should be a function
if self.mode in ("I", "I;16", "F"):
# check if the function can be used with point_transform
# UNDONE wiredfool -- I think this prevents us from ever doing
# a gamma function point transform on > 8bit images.
scale, offset = _getscaleoffset(lut)
return self._new(self.im.point_transform(scale, offset))
# for other modes, convert the function to a table
lut = [lut(i) for i in range(256)] * self.im.bands
if self.mode == "F":
# FIXME: _imaging returns a confusing error message for this case
msg = "point operation not supported for this mode"
raise ValueError(msg)
if mode != "F":
lut = [round(i) for i in lut]
return self._new(self.im.point(lut, mode))
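For 8-bit modes, a callable `lut` is expanded into a 256-entry table once and then applied to every pixel, which is why the function is called per possible value rather than per pixel. A sketch on a hypothetical flat pixel list:

```python
def apply_point(pixels, lut):
    # Build the table once (as point() does for 8-bit modes),
    # then map each pixel through it.
    if callable(lut):
        lut = [lut(i) for i in range(256)]
    return [lut[p] for p in pixels]

# Double every value, clamped to 255.
print(apply_point([0, 10, 200], lambda i: min(255, i * 2)))  # [0, 20, 255]
```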
def putalpha(self, alpha):
"""
Adds or replaces the alpha layer in this image. If the image
does not have an alpha layer, it's converted to "LA" or "RGBA".
The new layer must be either "L" or "1".
:param alpha: The new alpha layer. This can either be an "L" or "1"
image having the same size as this image, or an integer or
other color value.
"""
self._ensure_mutable()
if self.mode not in ("LA", "PA", "RGBA"):
# attempt to promote self to a matching alpha mode
try:
mode = getmodebase(self.mode) + "A"
try:
self.im.setmode(mode)
except (AttributeError, ValueError) as e:
# do things the hard way
im = self.im.convert(mode)
if im.mode not in ("LA", "PA", "RGBA"):
msg = "alpha channel could not be added"
raise ValueError(msg) from e # sanity check
self.im = im
self.pyaccess = None
self._mode = self.im.mode
except KeyError as e:
msg = "illegal image mode"
raise ValueError(msg) from e
if self.mode in ("LA", "PA"):
band = 1
else:
band = 3
if isImageType(alpha):
# alpha layer
if alpha.mode not in ("1", "L"):
msg = "illegal image mode"
raise ValueError(msg)
alpha.load()
if alpha.mode == "1":
alpha = alpha.convert("L")
else:
# constant alpha
try:
self.im.fillband(band, alpha)
except (AttributeError, ValueError):
# do things the hard way
alpha = new("L", self.size, alpha)
else:
return
self.im.putband(alpha.im, band)
def putdata(self, data, scale=1.0, offset=0.0):
"""
Copies pixel data from a flattened sequence object into the image. The
values should start at the upper left corner (0, 0), continue to the
end of the line, followed directly by the first value of the second
line, and so on. Data will be read until either the image or the
sequence ends. The scale and offset values are used to adjust the
sequence values: **pixel = value*scale + offset**.
:param data: A flattened sequence object.
:param scale: An optional scale value. The default is 1.0.
:param offset: An optional offset value. The default is 0.0.
"""
self._ensure_mutable()
self.im.putdata(data, scale, offset)
def putpalette(self, data, rawmode="RGB"):
"""
Attaches a palette to this image. The image must be a "P", "PA", "L"
or "LA" image.
The palette sequence must contain at most 256 colors, made up of one
integer value for each channel in the raw mode.
For example, if the raw mode is "RGB", then it can contain at most 768
values, made up of red, green and blue values for the corresponding pixel
index in the 256 colors.
If the raw mode is "RGBA", then it can contain at most 1024 values,
containing red, green, blue and alpha values.
Alternatively, an 8-bit string may be used instead of an integer sequence.
:param data: A palette sequence (either a list or a string).
:param rawmode: The raw mode of the palette. Either "RGB", "RGBA", or a mode
that can be transformed to "RGB" or "RGBA" (e.g. "R", "BGR;15", "RGBA;L").
"""
from . import ImagePalette
if self.mode not in ("L", "LA", "P", "PA"):
msg = "illegal image mode"
raise ValueError(msg)
if isinstance(data, ImagePalette.ImagePalette):
palette = ImagePalette.raw(data.rawmode, data.palette)
else:
if not isinstance(data, bytes):
data = bytes(data)
palette = ImagePalette.raw(rawmode, data)
self._mode = "PA" if "A" in self.mode else "P"
self.palette = palette
self.palette.mode = "RGB"
self.load() # install new palette
def putpixel(self, xy, value):
"""
Modifies the pixel at the given position. The color is given as
a single numerical value for single-band images, and a tuple for
multi-band images. In addition to this, RGB and RGBA tuples are
accepted for P and PA images.
Note that this method is relatively slow. For more extensive changes,
use :py:meth:`~PIL.Image.Image.paste` or the :py:mod:`~PIL.ImageDraw`
module instead.
See:
* :py:meth:`~PIL.Image.Image.paste`
* :py:meth:`~PIL.Image.Image.putdata`
* :py:mod:`~PIL.ImageDraw`
:param xy: The pixel coordinate, given as (x, y). See
:ref:`coordinate-system`.
:param value: The pixel value.
"""
if self.readonly:
self._copy()
self.load()
if self.pyaccess:
return self.pyaccess.putpixel(xy, value)
if (
self.mode in ("P", "PA")
and isinstance(value, (list, tuple))
and len(value) in [3, 4]
):
# RGB or RGBA value for a P or PA image
if self.mode == "PA":
alpha = value[3] if len(value) == 4 else 255
value = value[:3]
value = self.palette.getcolor(value, self)
if self.mode == "PA":
value = (value, alpha)
return self.im.putpixel(xy, value)
def remap_palette(self, dest_map, source_palette=None):
"""
Rewrites the image to reorder the palette.
:param dest_map: A list of indexes into the original palette.
e.g. ``[1,0]`` would swap a two item palette, and ``list(range(256))``
is the identity transform.
:param source_palette: Bytes or None.
:returns: An :py:class:`~PIL.Image.Image` object.
"""
from . import ImagePalette
if self.mode not in ("L", "P"):
msg = "illegal image mode"
raise ValueError(msg)
bands = 3
palette_mode = "RGB"
if source_palette is None:
if self.mode == "P":
self.load()
palette_mode = self.im.getpalettemode()
if palette_mode == "RGBA":
bands = 4
source_palette = self.im.getpalette(palette_mode, palette_mode)
else: # L-mode
source_palette = bytearray(i // 3 for i in range(768))
palette_bytes = b""
new_positions = [0] * 256
# pick only the used colors from the palette
for i, oldPosition in enumerate(dest_map):
palette_bytes += source_palette[
oldPosition * bands : oldPosition * bands + bands
]
new_positions[oldPosition] = i
# replace the palette color id of all pixels with the new id
# Palette images are [0..255], mapped through a 1 or 3
# byte/color map. We need to remap the whole image
# from palette 1 to palette 2. New_positions is
# an array of indexes into palette 1. Palette 2 is
# palette 1 with any holes removed.
# We're going to leverage the convert mechanism to use the
# C code to remap the image from palette 1 to palette 2,
# by forcing the source image into 'L' mode and adding a
# mapping 'L' mode palette, then converting back to 'L'
# sans palette thus converting the image bytes, then
# assigning the optimized RGB palette.
# perf reference, 9500x4000 gif, w/~135 colors
# 14 sec prepatch, 1 sec postpatch with optimization forced.
mapping_palette = bytearray(new_positions)
m_im = self.copy()
m_im._mode = "P"
m_im.palette = ImagePalette.ImagePalette(
palette_mode, palette=mapping_palette * bands
)
# possibly set palette dirty, then
# m_im.putpalette(mapping_palette, 'L') # converts to 'P'
# or just force it.
# UNDONE -- this is part of the general issue with palettes
m_im.im.putpalette(palette_mode + ";L", m_im.palette.tobytes())
m_im = m_im.convert("L")
m_im.putpalette(palette_bytes, palette_mode)
m_im.palette = ImagePalette.ImagePalette(palette_mode, palette=palette_bytes)
if "transparency" in self.info:
try:
m_im.info["transparency"] = dest_map.index(self.info["transparency"])
except ValueError:
if "transparency" in m_im.info:
del m_im.info["transparency"]
return m_im
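The remap builds two artifacts: the reordered palette bytes and the `new_positions` table mapping old index to new index, which is what the 'L'-mode conversion trick then applies to the pixel data. The table-building loop in isolation:

```python
def remap(dest_map, source_palette, bands=3):
    # dest_map lists old palette indexes in their new order;
    # new_positions[old] gives the index each old entry moved to.
    palette_bytes = b""
    new_positions = [0] * 256
    for i, old in enumerate(dest_map):
        palette_bytes += source_palette[old * bands : old * bands + bands]
        new_positions[old] = i
    return palette_bytes, new_positions

pal = bytes([255, 0, 0, 0, 255, 0])  # entry 0 = red, entry 1 = green
pb, pos = remap([1, 0], pal)         # swap the two entries
print(pb)  # b'\x00\xff\x00\xff\x00\x00'
print(pos[0], pos[1])  # 1 0
```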
def _get_safe_box(self, size, resample, box):
"""Expands the box so it includes adjacent pixels
that may be used by resampling with the given resampling filter.
"""
filter_support = _filters_support[resample] - 0.5
scale_x = (box[2] - box[0]) / size[0]
scale_y = (box[3] - box[1]) / size[1]
support_x = filter_support * scale_x
support_y = filter_support * scale_y
return (
max(0, int(box[0] - support_x)),
max(0, int(box[1] - support_y)),
min(self.size[0], math.ceil(box[2] + support_x)),
min(self.size[1], math.ceil(box[3] + support_y)),
)
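The safe-box expansion above grows the crop region by the resampling filter's support radius (scaled into source coordinates) and clamps to the image bounds. The same arithmetic with an explicit, hypothetical filter-support argument in place of the `_filters_support` lookup:

```python
import math

def safe_box(size, box, img_size, filter_support):
    # Expand the crop box by the filter's support radius, scaled to
    # source coordinates, then clamp to the image bounds.
    support = filter_support - 0.5
    scale_x = (box[2] - box[0]) / size[0]
    scale_y = (box[3] - box[1]) / size[1]
    sx, sy = support * scale_x, support * scale_y
    return (
        max(0, int(box[0] - sx)),
        max(0, int(box[1] - sy)),
        min(img_size[0], math.ceil(box[2] + sx)),
        min(img_size[1], math.ceil(box[3] + sy)),
    )

# Downscaling a 100x100 region to 50x50 with a support-2.0 filter.
print(safe_box((50, 50), (10, 10, 110, 110), (200, 200), 2.0))
# (7, 7, 113, 113)
```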
def resize(self, size, resample=None, box=None, reducing_gap=None):
"""
Returns a resized copy of this image.
:param size: The requested size in pixels, as a 2-tuple:
(width, height).
:param resample: An optional resampling filter. This can be
one of :py:data:`Resampling.NEAREST`, :py:data:`Resampling.BOX`,
:py:data:`Resampling.BILINEAR`, :py:data:`Resampling.HAMMING`,
:py:data:`Resampling.BICUBIC` or :py:data:`Resampling.LANCZOS`.
If the image has mode "1" or "P", it is always set to
:py:data:`Resampling.NEAREST`. If the image mode specifies a number
of bits, such as "I;16", then the default filter is
:py:data:`Resampling.NEAREST`. Otherwise, the default filter is
:py:data:`Resampling.BICUBIC`. See: :ref:`concept-filters`.
:param box: An optional 4-tuple of floats providing
the source image region to be scaled.
The values must be within (0, 0, width, height) rectangle.
If omitted or None, the entire source is used.
:param reducing_gap: Apply optimization by resizing the image
in two steps. First, reducing the image by integer times
using :py:meth:`~PIL.Image.Image.reduce`.
Second, resizing using regular resampling. The last step
changes the size by no less than a factor of ``reducing_gap``.
``reducing_gap`` may be None (no first step is performed)
or should be greater than 1.0. The bigger ``reducing_gap``,
the closer the result is to fair resampling.
The smaller ``reducing_gap``, the faster the resizing.
With ``reducing_gap`` greater or equal to 3.0, the result is
indistinguishable from fair resampling in most cases.
The default value is None (no optimization).
:returns: An :py:class:`~PIL.Image.Image` object.
"""
if resample is None:
type_special = ";" in self.mode
resample = Resampling.NEAREST if type_special else Resampling.BICUBIC
elif resample not in (
Resampling.NEAREST,
Resampling.BILINEAR,
Resampling.BICUBIC,
Resampling.LANCZOS,
Resampling.BOX,
Resampling.HAMMING,
):
msg = f"Unknown resampling filter ({resample})."
filters = [
f"{filter[1]} ({filter[0]})"
for filter in (
(Resampling.NEAREST, "Image.Resampling.NEAREST"),
(Resampling.LANCZOS, "Image.Resampling.LANCZOS"),
(Resampling.BILINEAR, "Image.Resampling.BILINEAR"),
(Resampling.BICUBIC, "Image.Resampling.BICUBIC"),
(Resampling.BOX, "Image.Resampling.BOX"),
(Resampling.HAMMING, "Image.Resampling.HAMMING"),
)
]
msg += " Use " + ", ".join(filters[:-1]) + " or " + filters[-1]
raise ValueError(msg)
if reducing_gap is not None and reducing_gap < 1.0:
msg = "reducing_gap must be 1.0 or greater"
raise ValueError(msg)
size = tuple(size)
self.load()
if box is None:
box = (0, 0) + self.size
else:
box = tuple(box)
if self.size == size and box == (0, 0) + self.size:
return self.copy()
if self.mode in ("1", "P"):
resample = Resampling.NEAREST
if self.mode in ["LA", "RGBA"] and resample != Resampling.NEAREST:
im = self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode])
im = im.resize(size, resample, box)
return im.convert(self.mode)
self.load()
if reducing_gap is not None and resample != Resampling.NEAREST:
factor_x = int((box[2] - box[0]) / size[0] / reducing_gap) or 1
factor_y = int((box[3] - box[1]) / size[1] / reducing_gap) or 1
if factor_x > 1 or factor_y > 1:
reduce_box = self._get_safe_box(size, resample, box)
factor = (factor_x, factor_y)
if callable(self.reduce):
self = self.reduce(factor, box=reduce_box)
else:
self = Image.reduce(self, factor, box=reduce_box)
box = (
(box[0] - reduce_box[0]) / factor_x,
(box[1] - reduce_box[1]) / factor_y,
(box[2] - reduce_box[0]) / factor_x,
(box[3] - reduce_box[1]) / factor_y,
)
return self._new(self.im.resize(size, resample, box))
def reduce(self, factor, box=None):
"""
Returns a copy of the image reduced ``factor`` times.
If the size of the image is not divisible by ``factor``,
the resulting size will be rounded up.
:param factor: A greater than 0 integer or tuple of two integers
for width and height separately.
:param box: An optional 4-tuple of ints providing
the source image region to be reduced.
The values must be within ``(0, 0, width, height)`` rectangle.
If omitted or ``None``, the entire source is used.
"""
if not isinstance(factor, (list, tuple)):
factor = (factor, factor)
if box is None:
box = (0, 0) + self.size
else:
box = tuple(box)
if factor == (1, 1) and box == (0, 0) + self.size:
return self.copy()
if self.mode in ["LA", "RGBA"]:
im = self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode])
im = im.reduce(factor, box)
return im.convert(self.mode)
self.load()
return self._new(self.im.reduce(factor, box))
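The round-up behavior described in the docstring means a 10x10 image reduced by 3 becomes 4x4. The output-size computation, sketched with ceiling division:

```python
def reduced_size(size, factor):
    # Ceiling division per axis: -(-a // b) == ceil(a / b) for ints.
    fx, fy = (factor, factor) if isinstance(factor, int) else factor
    return (-(-size[0] // fx), -(-size[1] // fy))

print(reduced_size((10, 10), 3))  # (4, 4)
print(reduced_size((10, 10), (2, 5)))  # (5, 2)
```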
def rotate(
self,
angle,
resample=Resampling.NEAREST,
expand=0,
center=None,
translate=None,
fillcolor=None,
):
"""
Returns a rotated copy of this image. This method returns a
copy of this image, rotated the given number of degrees counter
clockwise around its centre.
:param angle: In degrees counter clockwise.
:param resample: An optional resampling filter. This can be
one of :py:data:`Resampling.NEAREST` (use nearest neighbour),
:py:data:`Resampling.BILINEAR` (linear interpolation in a 2x2
environment), or :py:data:`Resampling.BICUBIC` (cubic spline
interpolation in a 4x4 environment). If omitted, or if the image has
mode "1" or "P", it is set to :py:data:`Resampling.NEAREST`.
See :ref:`concept-filters`.
:param expand: Optional expansion flag. If true, expands the output
image to make it large enough to hold the entire rotated image.
If false or omitted, make the output image the same size as the
input image. Note that the expand flag assumes rotation around
the center and no translation.
:param center: Optional center of rotation (a 2-tuple). Origin is
the upper left corner. Default is the center of the image.
:param translate: An optional post-rotate translation (a 2-tuple).
:param fillcolor: An optional color for area outside the rotated image.
:returns: An :py:class:`~PIL.Image.Image` object.
"""
angle = angle % 360.0
# Fast paths regardless of filter, as long as we're not
# translating or changing the center.
if not (center or translate):
if angle == 0:
return self.copy()
if angle == 180:
return self.transpose(Transpose.ROTATE_180)
if angle in (90, 270) and (expand or self.width == self.height):
return self.transpose(
Transpose.ROTATE_90 if angle == 90 else Transpose.ROTATE_270
)
# Calculate the affine matrix. Note that this is the reverse
# transformation (from destination image to source) because we
# want to interpolate the (discrete) destination pixel from
# the local area around the (floating) source pixel.
# The matrix we actually want (note that it operates from the right):
# (1, 0, tx) (1, 0, cx) ( cos a, sin a, 0) (1, 0, -cx)
# (0, 1, ty) * (0, 1, cy) * (-sin a, cos a, 0) * (0, 1, -cy)
# (0, 0, 1) (0, 0, 1) ( 0, 0, 1) (0, 0, 1)
# The reverse matrix is thus:
# (1, 0, cx) ( cos -a, sin -a, 0) (1, 0, -cx) (1, 0, -tx)
# (0, 1, cy) * (-sin -a, cos -a, 0) * (0, 1, -cy) * (0, 1, -ty)
# (0, 0, 1) ( 0, 0, 1) (0, 0, 1) (0, 0, 1)
# In any case, the final translation may be updated at the end to
# compensate for the expand flag.
w, h = self.size
if translate is None:
post_trans = (0, 0)
else:
post_trans = translate
if center is None:
# FIXME These should be rounded to ints?
rotn_center = (w / 2.0, h / 2.0)
else:
rotn_center = center
angle = -math.radians(angle)
matrix = [
round(math.cos(angle), 15),
round(math.sin(angle), 15),
0.0,
round(-math.sin(angle), 15),
round(math.cos(angle), 15),
0.0,
]
def transform(x, y, matrix):
(a, b, c, d, e, f) = matrix
return a * x + b * y + c, d * x + e * y + f
matrix[2], matrix[5] = transform(
-rotn_center[0] - post_trans[0], -rotn_center[1] - post_trans[1], matrix
)
matrix[2] += rotn_center[0]
matrix[5] += rotn_center[1]
if expand:
# calculate output size
xx = []
yy = []
for x, y in ((0, 0), (w, 0), (w, h), (0, h)):
x, y = transform(x, y, matrix)
xx.append(x)
yy.append(y)
nw = math.ceil(max(xx)) - math.floor(min(xx))
nh = math.ceil(max(yy)) - math.floor(min(yy))
# We multiply a translation matrix from the right. Because of its
# special form, this is the same as taking the image of the
# translation vector as new translation vector.
matrix[2], matrix[5] = transform(-(nw - w) / 2.0, -(nh - h) / 2.0, matrix)
w, h = nw, nh
return self.transform(
(w, h), Transform.AFFINE, matrix, resample, fillcolor=fillcolor
)
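With ``expand`` set, the output size computed above is the axis-aligned bounding box of the four rotated corners; rounding the matrix entries to 15 decimal places is what keeps right-angle rotations exact despite floating-point noise in ``sin``/``cos``. A sketch of just the size computation (the rotation direction does not affect the box size):

```python
import math

def expanded_size(w, h, angle_degrees):
    # Transform the four corners with the rotation matrix and take the
    # bounding box, as rotate(expand=True) does. Rounding to 15 places
    # mirrors the matrix construction above, so 90-degree rotations
    # come out exact.
    a = math.radians(angle_degrees)
    cos_a, sin_a = round(math.cos(a), 15), round(math.sin(a), 15)
    xx, yy = [], []
    for x, y in ((0, 0), (w, 0), (w, h), (0, h)):
        xx.append(x * cos_a - y * sin_a)
        yy.append(x * sin_a + y * cos_a)
    nw = math.ceil(max(xx)) - math.floor(min(xx))
    nh = math.ceil(max(yy)) - math.floor(min(yy))
    return nw, nh

print(expanded_size(100, 50, 90))  # (50, 100)
```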
def save(self, fp, format=None, **params) -> None:
"""
Saves this image under the given filename. If no format is
specified, the format to use is determined from the filename
extension, if possible.
Keyword options can be used to provide additional instructions
to the writer. If a writer doesn't recognise an option, it is
silently ignored. The available options are described in the
:doc:`image format documentation
<../handbook/image-file-formats>` for each writer.
You can use a file object instead of a filename. In this case,
you must always specify the format. The file object must
implement the ``seek``, ``tell``, and ``write``
methods, and be opened in binary mode.
:param fp: A filename (string), pathlib.Path object or file object.
:param format: Optional format override. If omitted, the
format to use is determined from the filename extension.
If a file object was used instead of a filename, this
parameter should always be used.
:param params: Extra parameters to the image writer.
:returns: None
:exception ValueError: If the output format could not be determined
from the file name. Use the format option to solve this.
:exception OSError: If the file could not be written. The file
may have been created, and may contain partial data.
"""
filename = ""
open_fp = False
if isinstance(fp, Path):
filename = str(fp)
open_fp = True
elif is_path(fp):
filename = fp
open_fp = True
elif fp == sys.stdout:
try:
fp = sys.stdout.buffer
except AttributeError:
pass
if not filename and hasattr(fp, "name") and is_path(fp.name):
# only set the name for metadata purposes
filename = fp.name
# may mutate self!
self._ensure_mutable()
save_all = params.pop("save_all", False)
self.encoderinfo = params
self.encoderconfig = ()
preinit()
ext = os.path.splitext(filename)[1].lower()
if not format:
if ext not in EXTENSION:
init()
try:
format = EXTENSION[ext]
except KeyError as e:
msg = f"unknown file extension: {ext}"
raise ValueError(msg) from e
if format.upper() not in SAVE:
init()
if save_all:
save_handler = SAVE_ALL[format.upper()]
else:
save_handler = SAVE[format.upper()]
created = False
if open_fp:
created = not os.path.exists(filename)
if params.get("append", False):
# Open also for reading ("+"), because TIFF save_all
# writer needs to go back and edit the written data.
fp = builtins.open(filename, "r+b")
else:
fp = builtins.open(filename, "w+b")
try:
save_handler(self, fp, filename)
except Exception:
if open_fp:
fp.close()
if created:
try:
os.remove(filename)
except PermissionError:
pass
raise
if open_fp:
fp.close()
def seek(self, frame) -> None:
"""
Seeks to the given frame in this sequence file. If you seek
beyond the end of the sequence, the method raises an
``EOFError`` exception. When a sequence file is opened, the
library automatically seeks to frame 0.
See :py:meth:`~PIL.Image.Image.tell`.
If defined, :attr:`~PIL.Image.Image.n_frames` refers to the
number of available frames.
:param frame: Frame number, starting at 0.
:exception EOFError: If the call attempts to seek beyond the end
of the sequence.
"""
# overridden by file handlers
if frame != 0:
msg = "no more images in file"
raise EOFError(msg)
def show(self, title=None):
"""
Displays this image. This method is mainly intended for debugging purposes.
This method calls :py:func:`PIL.ImageShow.show` internally. You can use
:py:func:`PIL.ImageShow.register` to override its default behaviour.
The image is first saved to a temporary file. By default, it will be in
PNG format.
On Unix, the image is then opened using the **xdg-open**, **display**,
**gm**, **eog** or **xv** utility, depending on which one can be found.
On macOS, the image is opened with the native Preview application.
On Windows, the image is opened with the standard PNG display utility.
:param title: Optional title to use for the image window, where possible.
"""
_show(self, title=title)
def split(self):
"""
Split this image into individual bands. This method returns a
tuple of individual image bands from an image. For example,
splitting an "RGB" image creates three new images each
containing a copy of one of the original bands (red, green,
blue).
If you need only one band, the :py:meth:`~PIL.Image.Image.getchannel`
method can be more convenient and faster.
:returns: A tuple containing bands.
"""
self.load()
if self.im.bands == 1:
ims = [self.copy()]
else:
ims = map(self._new, self.im.split())
return tuple(ims)
def getchannel(self, channel):
"""
Returns an image containing a single channel of the source image.
:param channel: What channel to return. Could be index
(0 for "R" channel of "RGB") or channel name
("A" for alpha channel of "RGBA").
:returns: An image in "L" mode.
.. versionadded:: 4.3.0
"""
self.load()
if isinstance(channel, str):
try:
channel = self.getbands().index(channel)
except ValueError as e:
msg = f'The image has no channel "{channel}"'
raise ValueError(msg) from e
return self._new(self.im.getband(channel))
def tell(self) -> int:
"""
Returns the current frame number. See :py:meth:`~PIL.Image.Image.seek`.
If defined, :attr:`~PIL.Image.Image.n_frames` refers to the
number of available frames.
:returns: Frame number, starting with 0.
"""
return 0
def thumbnail(self, size, resample=Resampling.BICUBIC, reducing_gap=2.0):
"""
Make this image into a thumbnail. This method modifies the
image to contain a thumbnail version of itself, no larger than
the given size. This method calculates an appropriate thumbnail
size to preserve the aspect of the image, calls the
:py:meth:`~PIL.Image.Image.draft` method to configure the file reader
(where applicable), and finally resizes the image.
Note that this function modifies the :py:class:`~PIL.Image.Image`
object in place. If you need to use the full resolution image as well,
apply this method to a :py:meth:`~PIL.Image.Image.copy` of the original
image.
:param size: The requested size in pixels, as a 2-tuple:
(width, height).
:param resample: Optional resampling filter. This can be one
of :py:data:`Resampling.NEAREST`, :py:data:`Resampling.BOX`,
:py:data:`Resampling.BILINEAR`, :py:data:`Resampling.HAMMING`,
:py:data:`Resampling.BICUBIC` or :py:data:`Resampling.LANCZOS`.
If omitted, it defaults to :py:data:`Resampling.BICUBIC`.
(was :py:data:`Resampling.NEAREST` prior to version 2.5.0).
See: :ref:`concept-filters`.
:param reducing_gap: Apply optimization by resizing the image
in two steps. First, reducing the image by integer times
using :py:meth:`~PIL.Image.Image.reduce` or
:py:meth:`~PIL.Image.Image.draft` for JPEG images.
Second, resizing using regular resampling. The last step
changes size no less than by ``reducing_gap`` times.
``reducing_gap`` may be None (no first step is performed)
or should be greater than 1.0. The bigger ``reducing_gap``,
the closer the result to the fair resampling.
The smaller ``reducing_gap``, the faster resizing.
With ``reducing_gap`` greater or equal to 3.0, the result is
indistinguishable from fair resampling in most cases.
The default value is 2.0 (very close to fair resampling
while still being faster in many cases).
:returns: None
"""
provided_size = tuple(map(math.floor, size))
def preserve_aspect_ratio():
def round_aspect(number, key):
return max(min(math.floor(number), math.ceil(number), key=key), 1)
x, y = provided_size
if x >= self.width and y >= self.height:
return
aspect = self.width / self.height
if x / y >= aspect:
x = round_aspect(y * aspect, key=lambda n: abs(aspect - n / y))
else:
y = round_aspect(
x / aspect, key=lambda n: 0 if n == 0 else abs(aspect - x / n)
)
return x, y
box = None
if reducing_gap is not None:
size = preserve_aspect_ratio()
if size is None:
return
res = self.draft(None, (size[0] * reducing_gap, size[1] * reducing_gap))
if res is not None:
box = res[1]
if box is None:
self.load()
# load() may have changed the size of the image
size = preserve_aspect_ratio()
if size is None:
return
if self.size != size:
im = self.resize(size, resample, box=box, reducing_gap=reducing_gap)
self.im = im.im
self._size = size
self._mode = self.im.mode
self.readonly = 0
self.pyaccess = None
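The aspect-ratio computation inside ``thumbnail`` can be sketched in isolation. The helper below (``thumbnail_size`` is a hypothetical name, not part of the Pillow API) mirrors the nested ``preserve_aspect_ratio``/``round_aspect`` logic above for a given image size and requested bounding box:

```python
import math


def thumbnail_size(img_size, requested):
    # Pure-Python sketch of preserve_aspect_ratio() from Image.thumbnail():
    # compute the output size that fits inside `requested` while keeping
    # the aspect ratio of `img_size`. Illustrative helper only.
    def round_aspect(number, key):
        # Choose floor or ceil, whichever keeps the aspect ratio closest,
        # but never go below 1 pixel.
        return max(min(math.floor(number), math.ceil(number), key=key), 1)

    width, height = img_size
    x, y = requested
    if x >= width and y >= height:
        return img_size  # already fits: thumbnail() leaves the image alone
    aspect = width / height
    if x / y >= aspect:
        x = round_aspect(y * aspect, key=lambda n: abs(aspect - n / y))
    else:
        y = round_aspect(
            x / aspect, key=lambda n: 0 if n == 0 else abs(aspect - x / n)
        )
    return x, y
```

For a 1000x500 image and a requested (128, 128) box this yields (128, 64): the width is the binding constraint, and the height is rounded to best preserve the 2:1 aspect ratio.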
# FIXME: the different transform methods need further explanation
# instead of bloating the method docs, add a separate chapter.
def transform(
self,
size,
method,
data=None,
resample=Resampling.NEAREST,
fill=1,
fillcolor=None,
) -> Image:
"""
Transforms this image. This method creates a new image with the
given size, and the same mode as the original, and copies data
to the new image using the given transform.
:param size: The output size in pixels, as a 2-tuple:
(width, height).
:param method: The transformation method. This is one of
:py:data:`Transform.EXTENT` (cut out a rectangular subregion),
:py:data:`Transform.AFFINE` (affine transform),
:py:data:`Transform.PERSPECTIVE` (perspective transform),
:py:data:`Transform.QUAD` (map a quadrilateral to a rectangle), or
:py:data:`Transform.MESH` (map a number of source quadrilaterals
in one operation).
It may also be an :py:class:`~PIL.Image.ImageTransformHandler`
object::
class Example(Image.ImageTransformHandler):
def transform(self, size, data, resample, fill=1):
# Return result
It may also be an object with a ``method.getdata`` method
that returns a tuple supplying new ``method`` and ``data`` values::
class Example:
def getdata(self):
method = Image.Transform.EXTENT
data = (0, 0, 100, 100)
return method, data
:param data: Extra data to the transformation method.
:param resample: Optional resampling filter. It can be one of
:py:data:`Resampling.NEAREST` (use nearest neighbour),
:py:data:`Resampling.BILINEAR` (linear interpolation in a 2x2
environment), or :py:data:`Resampling.BICUBIC` (cubic spline
interpolation in a 4x4 environment). If omitted, or if the image
has mode "1" or "P", it is set to :py:data:`Resampling.NEAREST`.
See: :ref:`concept-filters`.
:param fill: If ``method`` is an
:py:class:`~PIL.Image.ImageTransformHandler` object, this is one of
the arguments passed to it. Otherwise, it is unused.
:param fillcolor: Optional fill color for the area outside the
transform in the output image.
:returns: An :py:class:`~PIL.Image.Image` object.
"""
if self.mode in ("LA", "RGBA") and resample != Resampling.NEAREST:
return (
self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode])
.transform(size, method, data, resample, fill, fillcolor)
.convert(self.mode)
)
if isinstance(method, ImageTransformHandler):
return method.transform(size, self, resample=resample, fill=fill)
if hasattr(method, "getdata"):
# compatibility w. old-style transform objects
method, data = method.getdata()
if data is None:
msg = "missing method data"
raise ValueError(msg)
im = new(self.mode, size, fillcolor)
if self.mode == "P" and self.palette:
im.palette = self.palette.copy()
im.info = self.info.copy()
if method == Transform.MESH:
# list of quads
for box, quad in data:
im.__transformer(
box, self, Transform.QUAD, quad, resample, fillcolor is None
)
else:
im.__transformer(
(0, 0) + size, self, method, data, resample, fillcolor is None
)
return im
def __transformer(
self, box, image, method, data, resample=Resampling.NEAREST, fill=1
):
w = box[2] - box[0]
h = box[3] - box[1]
if method == Transform.AFFINE:
data = data[:6]
elif method == Transform.EXTENT:
# convert extent to an affine transform
x0, y0, x1, y1 = data
xs = (x1 - x0) / w
ys = (y1 - y0) / h
method = Transform.AFFINE
data = (xs, 0, x0, 0, ys, y0)
elif method == Transform.PERSPECTIVE:
data = data[:8]
elif method == Transform.QUAD:
# quadrilateral warp. data specifies the four corners
# given as NW, SW, SE, and NE.
nw = data[:2]
sw = data[2:4]
se = data[4:6]
ne = data[6:8]
x0, y0 = nw
As = 1.0 / w
At = 1.0 / h
data = (
x0,
(ne[0] - x0) * As,
(sw[0] - x0) * At,
(se[0] - sw[0] - ne[0] + x0) * As * At,
y0,
(ne[1] - y0) * As,
(sw[1] - y0) * At,
(se[1] - sw[1] - ne[1] + y0) * As * At,
)
else:
msg = "unknown transformation method"
raise ValueError(msg)
if resample not in (
Resampling.NEAREST,
Resampling.BILINEAR,
Resampling.BICUBIC,
):
if resample in (Resampling.BOX, Resampling.HAMMING, Resampling.LANCZOS):
msg = {
Resampling.BOX: "Image.Resampling.BOX",
Resampling.HAMMING: "Image.Resampling.HAMMING",
Resampling.LANCZOS: "Image.Resampling.LANCZOS",
}[resample] + f" ({resample}) cannot be used."
else:
msg = f"Unknown resampling filter ({resample})."
filters = [
f"{filter[1]} ({filter[0]})"
for filter in (
(Resampling.NEAREST, "Image.Resampling.NEAREST"),
(Resampling.BILINEAR, "Image.Resampling.BILINEAR"),
(Resampling.BICUBIC, "Image.Resampling.BICUBIC"),
)
]
msg += " Use " + ", ".join(filters[:-1]) + " or " + filters[-1]
raise ValueError(msg)
image.load()
self.load()
if image.mode in ("1", "P"):
resample = Resampling.NEAREST
self.im.transform2(box, image.im, method, data, resample, fill)
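The ``Transform.EXTENT`` branch above reduces an extent rectangle to an affine transform. That conversion can be shown standalone (``extent_to_affine`` is an illustrative helper, not a Pillow API):

```python
def extent_to_affine(extent, out_size):
    # Sketch of the Transform.EXTENT branch in __transformer: cutting out
    # a rectangle is just an affine transform with pure scaling plus an
    # offset. Illustrative helper only.
    x0, y0, x1, y1 = extent
    w, h = out_size
    xs = (x1 - x0) / w  # horizontal scale: input units per output pixel
    ys = (y1 - y0) / h  # vertical scale
    # Affine data (a, b, c, d, e, f) maps output (x, y) to input
    # (a*x + b*y + c, d*x + e*y + f).
    return (xs, 0, x0, 0, ys, y0)
```

For example, extracting the extent (0, 0, 100, 100) into a 50x50 output is the affine transform (2.0, 0, 0, 0, 2.0, 0): each output pixel samples the input at twice its coordinates.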
def transpose(self, method):
"""
Transpose image (flip or rotate in 90 degree steps)
:param method: One of :py:data:`Transpose.FLIP_LEFT_RIGHT`,
:py:data:`Transpose.FLIP_TOP_BOTTOM`, :py:data:`Transpose.ROTATE_90`,
:py:data:`Transpose.ROTATE_180`, :py:data:`Transpose.ROTATE_270`,
:py:data:`Transpose.TRANSPOSE` or :py:data:`Transpose.TRANSVERSE`.
:returns: Returns a flipped or rotated copy of this image.
"""
self.load()
return self._new(self.im.transpose(method))
def effect_spread(self, distance):
"""
Randomly spread pixels in an image.
:param distance: Distance to spread pixels.
:returns: An :py:class:`~PIL.Image.Image` object.
"""
self.load()
return self._new(self.im.effect_spread(distance))
def toqimage(self):
"""Returns a QImage copy of this image"""
from . import ImageQt
if not ImageQt.qt_is_installed:
msg = "Qt bindings are not installed"
raise ImportError(msg)
return ImageQt.toqimage(self)
def toqpixmap(self):
"""Returns a QPixmap copy of this image"""
from . import ImageQt
if not ImageQt.qt_is_installed:
msg = "Qt bindings are not installed"
raise ImportError(msg)
return ImageQt.toqpixmap(self)
# --------------------------------------------------------------------
# Abstract handlers.
class ImagePointHandler:
"""
Used as a mixin by point transforms
(for use with :py:meth:`~PIL.Image.Image.point`)
"""
pass
class ImageTransformHandler:
"""
Used as a mixin by geometry transforms
(for use with :py:meth:`~PIL.Image.Image.transform`)
"""
pass
# --------------------------------------------------------------------
# Factories
#
# Debugging
def _wedge():
"""Create grayscale wedge (for debugging only)"""
return Image()._new(core.wedge("L"))
def _check_size(size):
"""
Common check to enforce type and sanity check on size tuples
:param size: Should be a 2 tuple of (width, height)
:returns: True, or raises a ValueError
"""
if not isinstance(size, (list, tuple)):
msg = "Size must be a tuple"
raise ValueError(msg)
if len(size) != 2:
msg = "Size must be a tuple of length 2"
raise ValueError(msg)
if size[0] < 0 or size[1] < 0:
msg = "Width and height must be >= 0"
raise ValueError(msg)
return True
def new(mode, size, color=0) -> Image:
"""
Creates a new image with the given mode and size.
:param mode: The mode to use for the new image. See:
:ref:`concept-modes`.
:param size: A 2-tuple, containing (width, height) in pixels.
:param color: What color to use for the image. Default is black.
If given, this should be a single integer or floating point value
for single-band modes, and a tuple for multi-band modes (one value
per band). When creating RGB or HSV images, you can also use color
strings as supported by the ImageColor module. If the color is
None, the image is not initialised.
:returns: An :py:class:`~PIL.Image.Image` object.
"""
_check_size(size)
if color is None:
# don't initialize
return Image()._new(core.new(mode, size))
if isinstance(color, str):
# css3-style specifier
from . import ImageColor
color = ImageColor.getcolor(color, mode)
im = Image()
if mode == "P" and isinstance(color, (list, tuple)) and len(color) in [3, 4]:
# RGB or RGBA value for a P image
from . import ImagePalette
im.palette = ImagePalette.ImagePalette()
color = im.palette.getcolor(color)
return im._new(core.fill(mode, size, color))
def frombytes(mode, size, data, decoder_name="raw", *args) -> Image:
"""
Creates a copy of an image memory from pixel data in a buffer.
In its simplest form, this function takes three arguments
(mode, size, and unpacked pixel data).
You can also use any pixel decoder supported by PIL. For more
information on available decoders, see the section
:ref:`Writing Your Own File Codec <file-codecs>`.
Note that this function decodes pixel data only, not entire images.
If you have an entire image in a string, wrap it in a
:py:class:`~io.BytesIO` object, and use :py:func:`~PIL.Image.open` to load
it.
:param mode: The image mode. See: :ref:`concept-modes`.
:param size: The image size.
:param data: A byte buffer containing raw data for the given mode.
:param decoder_name: What decoder to use.
:param args: Additional parameters for the given decoder.
:returns: An :py:class:`~PIL.Image.Image` object.
"""
_check_size(size)
im = new(mode, size)
if im.width != 0 and im.height != 0:
# may pass tuple instead of argument list
if len(args) == 1 and isinstance(args[0], tuple):
args = args[0]
if decoder_name == "raw" and args == ():
args = mode
im.frombytes(data, decoder_name, args)
return im
def frombuffer(mode, size, data, decoder_name="raw", *args):
"""
Creates an image memory referencing pixel data in a byte buffer.
This function is similar to :py:func:`~PIL.Image.frombytes`, but uses data
in the byte buffer, where possible. This means that changes to the
original buffer object are reflected in this image. Not all modes can
share memory; supported modes include "L", "RGBX", "RGBA", and "CMYK".
Note that this function decodes pixel data only, not entire images.
If you have an entire image file in a string, wrap it in a
:py:class:`~io.BytesIO` object, and use :py:func:`~PIL.Image.open` to load it.
In the current version, the default parameters used for the "raw" decoder
differ from those used for :py:func:`~PIL.Image.frombytes`. This is a
bug, and will probably be fixed in a future release. The current release
issues a warning if you do this; to disable the warning, you should provide
the full set of parameters. See below for details.
:param mode: The image mode. See: :ref:`concept-modes`.
:param size: The image size.
:param data: A bytes or other buffer object containing raw
data for the given mode.
:param decoder_name: What decoder to use.
:param args: Additional parameters for the given decoder. For the
default encoder ("raw"), it's recommended that you provide the
full set of parameters::
frombuffer(mode, size, data, "raw", mode, 0, 1)
:returns: An :py:class:`~PIL.Image.Image` object.
.. versionadded:: 1.1.4
"""
_check_size(size)
# may pass tuple instead of argument list
if len(args) == 1 and isinstance(args[0], tuple):
args = args[0]
if decoder_name == "raw":
if args == ():
args = mode, 0, 1
if args[0] in _MAPMODES:
im = new(mode, (0, 0))
im = im._new(core.map_buffer(data, size, decoder_name, 0, args))
if mode == "P":
from . import ImagePalette
im.palette = ImagePalette.ImagePalette("RGB", im.im.getpalette("RGB"))
im.readonly = 1
return im
return frombytes(mode, size, data, decoder_name, args)
def fromarray(obj, mode=None):
"""
Creates an image memory from an object exporting the array interface
(using the buffer protocol)::
from PIL import Image
import numpy as np
a = np.zeros((5, 5))
im = Image.fromarray(a)
If ``obj`` is not contiguous, then the ``tobytes`` method is called
and :py:func:`~PIL.Image.frombuffer` is used.
In the case of NumPy, be aware that Pillow modes do not always correspond
to NumPy dtypes. Pillow modes only offer 1-bit pixels, 8-bit pixels,
32-bit signed integer pixels, and 32-bit floating point pixels.
Pillow images can also be converted to arrays::
from PIL import Image
import numpy as np
im = Image.open("hopper.jpg")
a = np.asarray(im)
When converting Pillow images to arrays, however, only pixel values are
transferred. This means that P and PA mode images will lose their palette.
:param obj: Object with array interface
:param mode: Optional mode to use when reading ``obj``. Will be determined from
type if ``None``.
This will not be used to convert the data after reading, but will be used to
change how the data is read::
from PIL import Image
import numpy as np
a = np.full((1, 1), 300)
im = Image.fromarray(a, mode="L")
im.getpixel((0, 0)) # 44
im = Image.fromarray(a, mode="RGB")
im.getpixel((0, 0)) # (44, 1, 0)
See: :ref:`concept-modes` for general information about modes.
:returns: An image object.
.. versionadded:: 1.1.6
"""
arr = obj.__array_interface__
shape = arr["shape"]
ndim = len(shape)
strides = arr.get("strides", None)
if mode is None:
try:
typekey = (1, 1) + shape[2:], arr["typestr"]
except KeyError as e:
msg = "Cannot handle this data type"
raise TypeError(msg) from e
try:
mode, rawmode = _fromarray_typemap[typekey]
except KeyError as e:
typekey_shape, typestr = typekey
msg = f"Cannot handle this data type: {typekey_shape}, {typestr}"
raise TypeError(msg) from e
else:
rawmode = mode
if mode in ["1", "L", "I", "P", "F"]:
ndmax = 2
elif mode == "RGB":
ndmax = 3
else:
ndmax = 4
if ndim > ndmax:
msg = f"Too many dimensions: {ndim} > {ndmax}."
raise ValueError(msg)
size = 1 if ndim == 1 else shape[1], shape[0]
if strides is not None:
if hasattr(obj, "tobytes"):
obj = obj.tobytes()
else:
obj = obj.tostring()
return frombuffer(mode, size, obj, "raw", rawmode, 0, 1)
def fromqimage(im):
"""Creates an image instance from a QImage image"""
from . import ImageQt
if not ImageQt.qt_is_installed:
msg = "Qt bindings are not installed"
raise ImportError(msg)
return ImageQt.fromqimage(im)
def fromqpixmap(im):
"""Creates an image instance from a QPixmap image"""
from . import ImageQt
if not ImageQt.qt_is_installed:
msg = "Qt bindings are not installed"
raise ImportError(msg)
return ImageQt.fromqpixmap(im)
_fromarray_typemap = {
# (shape, typestr) => mode, rawmode
# first two members of shape are set to one
((1, 1), "|b1"): ("1", "1;8"),
((1, 1), "|u1"): ("L", "L"),
((1, 1), "|i1"): ("I", "I;8"),
((1, 1), "<u2"): ("I", "I;16"),
((1, 1), ">u2"): ("I", "I;16B"),
((1, 1), "<i2"): ("I", "I;16S"),
((1, 1), ">i2"): ("I", "I;16BS"),
((1, 1), "<u4"): ("I", "I;32"),
((1, 1), ">u4"): ("I", "I;32B"),
((1, 1), "<i4"): ("I", "I;32S"),
((1, 1), ">i4"): ("I", "I;32BS"),
((1, 1), "<f4"): ("F", "F;32F"),
((1, 1), ">f4"): ("F", "F;32BF"),
((1, 1), "<f8"): ("F", "F;64F"),
((1, 1), ">f8"): ("F", "F;64BF"),
((1, 1, 2), "|u1"): ("LA", "LA"),
((1, 1, 3), "|u1"): ("RGB", "RGB"),
((1, 1, 4), "|u1"): ("RGBA", "RGBA"),
# shortcuts:
((1, 1), _ENDIAN + "i4"): ("I", "I"),
((1, 1), _ENDIAN + "f4"): ("F", "F"),
}
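The lookup that ``fromarray`` performs against this table can be sketched without NumPy. The helper below (``mode_from_typekey`` is a hypothetical name) uses an abbreviated copy of the map, keyed the same way: the first two members of the shape are normalised to ``(1, 1)``:

```python
def mode_from_typekey(shape, typestr):
    # Sketch of the typekey lookup in fromarray(): the key is the shape
    # with its first two members set to 1, plus the array-interface
    # typestr. Abbreviated copy of _fromarray_typemap; illustrative only.
    typemap = {
        ((1, 1), "|u1"): ("L", "L"),
        ((1, 1), "<f8"): ("F", "F;64F"),
        ((1, 1, 3), "|u1"): ("RGB", "RGB"),
        ((1, 1, 4), "|u1"): ("RGBA", "RGBA"),
    }
    typekey = (1, 1) + tuple(shape[2:]), typestr
    return typemap[typekey]
```

So a (480, 640, 3) array of unsigned bytes maps to mode "RGB", while a 2-D array of the same dtype maps to "L".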
def _decompression_bomb_check(size):
if MAX_IMAGE_PIXELS is None:
return
pixels = max(1, size[0]) * max(1, size[1])
if pixels > 2 * MAX_IMAGE_PIXELS:
msg = (
f"Image size ({pixels} pixels) exceeds limit of {2 * MAX_IMAGE_PIXELS} "
"pixels, could be decompression bomb DOS attack."
)
raise DecompressionBombError(msg)
if pixels > MAX_IMAGE_PIXELS:
warnings.warn(
f"Image size ({pixels} pixels) exceeds limit of {MAX_IMAGE_PIXELS} pixels, "
"could be decompression bomb DOS attack.",
DecompressionBombWarning,
)
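The two thresholds used above (an error beyond twice the limit, a warning beyond the limit) can be sketched as a small classifier. ``bomb_status`` is an illustrative helper, not part of Pillow:

```python
def bomb_status(size, max_pixels):
    # Sketch of _decompression_bomb_check's thresholds. Assumes
    # max_pixels is not None (the real check returns early in that case).
    pixels = max(1, size[0]) * max(1, size[1])
    if pixels > 2 * max_pixels:
        return "error"  # DecompressionBombError in the real code
    if pixels > max_pixels:
        return "warning"  # DecompressionBombWarning in the real code
    return "ok"
```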
def open(fp, mode="r", formats=None) -> Image:
"""
Opens and identifies the given image file.
This is a lazy operation; this function identifies the file, but
the file remains open and the actual image data is not read from
the file until you try to process the data (or call the
:py:meth:`~PIL.Image.Image.load` method). See
:py:func:`~PIL.Image.new`. See :ref:`file-handling`.
:param fp: A filename (string), pathlib.Path object or a file object.
The file object must implement ``file.read``,
``file.seek``, and ``file.tell`` methods,
and be opened in binary mode. The file object will also seek to zero
before reading.
:param mode: The mode. If given, this argument must be "r".
:param formats: A list or tuple of formats to attempt to load the file in.
This can be used to restrict the set of formats checked.
Pass ``None`` to try all supported formats. You can print the set of
available formats by running ``python3 -m PIL`` or using
the :py:func:`PIL.features.pilinfo` function.
:returns: An :py:class:`~PIL.Image.Image` object.
:exception FileNotFoundError: If the file cannot be found.
:exception PIL.UnidentifiedImageError: If the image cannot be opened and
identified.
:exception ValueError: If the ``mode`` is not "r", or if a ``StringIO``
instance is used for ``fp``.
:exception TypeError: If ``formats`` is not ``None``, a list or a tuple.
"""
if mode != "r":
msg = f"bad mode {repr(mode)}"
raise ValueError(msg)
elif isinstance(fp, io.StringIO):
msg = (
"StringIO cannot be used to open an image. "
"Binary data must be used instead."
)
raise ValueError(msg)
if formats is None:
formats = ID
elif not isinstance(formats, (list, tuple)):
msg = "formats must be a list or tuple"
raise TypeError(msg)
exclusive_fp = False
filename = ""
if isinstance(fp, Path):
filename = str(fp.resolve())
elif is_path(fp):
filename = fp
if filename:
fp = builtins.open(filename, "rb")
exclusive_fp = True
try:
fp.seek(0)
except (AttributeError, io.UnsupportedOperation):
fp = io.BytesIO(fp.read())
exclusive_fp = True
prefix = fp.read(16)
preinit()
accept_warnings = []
def _open_core(fp, filename, prefix, formats):
for i in formats:
i = i.upper()
if i not in OPEN:
init()
try:
factory, accept = OPEN[i]
result = not accept or accept(prefix)
if type(result) in [str, bytes]:
accept_warnings.append(result)
elif result:
fp.seek(0)
im = factory(fp, filename)
_decompression_bomb_check(im.size)
return im
except (SyntaxError, IndexError, TypeError, struct.error):
# Leave disabled by default, spams the logs with image
# opening failures that are entirely expected.
# logger.debug("", exc_info=True)
continue
except BaseException:
if exclusive_fp:
fp.close()
raise
return None
im = _open_core(fp, filename, prefix, formats)
if im is None and formats is ID:
checked_formats = formats.copy()
if init():
im = _open_core(
fp,
filename,
prefix,
tuple(format for format in formats if format not in checked_formats),
)
if im:
im._exclusive_fp = exclusive_fp
return im
if exclusive_fp:
fp.close()
for message in accept_warnings:
warnings.warn(message)
msg = "cannot identify image file %r" % (filename if filename else fp)
raise UnidentifiedImageError(msg)
#
# Image processing.
def alpha_composite(im1, im2):
"""
Alpha composite im2 over im1.
:param im1: The first image. Must have mode RGBA.
:param im2: The second image. Must have mode RGBA, and the same size as
the first image.
:returns: An :py:class:`~PIL.Image.Image` object.
"""
im1.load()
im2.load()
return im1._new(core.alpha_composite(im1.im, im2.im))
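The per-channel operation behind ``alpha_composite`` is the standard Porter-Duff "over" operator. A floating-point sketch (not the exact fixed-point C implementation, and ``composite_channel`` is a hypothetical name):

```python
def composite_channel(c_top, a_top, c_bot, a_bot):
    # One-channel "over" operator, as performed per pixel by
    # alpha_composite. Alphas are normalised to 0..1 and colours are
    # unpremultiplied. Illustrative sketch only.
    a_out = a_top + a_bot * (1 - a_top)
    if a_out == 0:
        return 0.0, 0.0  # fully transparent result
    c_out = (c_top * a_top + c_bot * a_bot * (1 - a_top)) / a_out
    return c_out, a_out
```

A half-transparent white pixel over opaque black, for instance, yields a mid-grey opaque result.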
def blend(im1, im2, alpha):
"""
Creates a new image by interpolating between two input images, using
a constant alpha::
out = image1 * (1.0 - alpha) + image2 * alpha
:param im1: The first image.
:param im2: The second image. Must have the same mode and size as
the first image.
:param alpha: The interpolation alpha factor. If alpha is 0.0, a
copy of the first image is returned. If alpha is 1.0, a copy of
the second image is returned. There are no restrictions on the
alpha value. If necessary, the result is clipped to fit into
the allowed output range.
:returns: An :py:class:`~PIL.Image.Image` object.
"""
im1.load()
im2.load()
return im1._new(core.blend(im1.im, im2.im, alpha))
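The interpolation formula in the ``blend`` docstring, applied to a single 8-bit channel with the clipping the docstring mentions, can be sketched as follows (``blend_pixel`` is an illustrative helper, not the C implementation):

```python
def blend_pixel(p1, p2, alpha):
    # Per-channel form of: out = image1 * (1.0 - alpha) + image2 * alpha,
    # with the result clipped to the 8-bit output range as blend()
    # documents. Illustrative sketch only.
    out = p1 * (1.0 - alpha) + p2 * alpha
    return max(0, min(255, round(out)))
```

Note that alpha values outside 0..1 extrapolate rather than interpolate, which is why the clip is needed.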
def composite(image1, image2, mask):
"""
Create composite image by blending images using a transparency mask.
:param image1: The first image.
:param image2: The second image. Must have the same mode and
size as the first image.
:param mask: A mask image. This image can have mode
"1", "L", or "RGBA", and must have the same size as the
other two images.
"""
image = image2.copy()
image.paste(image1, None, mask)
return image
def eval(image, *args):
"""
Applies the function (which should take one argument) to each pixel
in the given image. If the image has more than one band, the same
function is applied to each band. Note that the function is
evaluated once for each possible pixel value, so you cannot use
random components or other generators.
:param image: The input image.
:param function: A function object, taking one integer argument.
:returns: An :py:class:`~PIL.Image.Image` object.
"""
return image.point(args[0])
def merge(mode, bands):
"""
Merge a set of single band images into a new multiband image.
:param mode: The mode to use for the output image. See:
:ref:`concept-modes`.
:param bands: A sequence containing one single-band image for
each band in the output image. All bands must have the
same size.
:returns: An :py:class:`~PIL.Image.Image` object.
"""
if getmodebands(mode) != len(bands) or "*" in mode:
msg = "wrong number of bands"
raise ValueError(msg)
for band in bands[1:]:
if band.mode != getmodetype(mode):
msg = "mode mismatch"
raise ValueError(msg)
if band.size != bands[0].size:
msg = "size mismatch"
raise ValueError(msg)
for band in bands:
band.load()
return bands[0]._new(core.merge(mode, *[b.im for b in bands]))
# --------------------------------------------------------------------
# Plugin registry
def register_open(id, factory, accept=None) -> None:
"""
Register an image file plugin. This function should not be used
in application code.
:param id: An image format identifier.
:param factory: An image file factory method.
:param accept: An optional function that can be used to quickly
reject images having another format.
"""
id = id.upper()
if id not in ID:
ID.append(id)
OPEN[id] = factory, accept
def register_mime(id, mimetype):
"""
Registers an image MIME type by populating ``Image.MIME``. This function
should not be used in application code.
``Image.MIME`` provides a mapping from image format identifiers to mime
formats, but :py:meth:`~PIL.ImageFile.ImageFile.get_format_mimetype` can
provide a different result for specific images.
:param id: An image format identifier.
:param mimetype: The image MIME type for this format.
"""
MIME[id.upper()] = mimetype
def register_save(id, driver):
"""
Registers an image save function. This function should not be
used in application code.
:param id: An image format identifier.
:param driver: A function to save images in this format.
"""
SAVE[id.upper()] = driver
def register_save_all(id, driver):
"""
Registers an image function to save all the frames
of a multiframe format. This function should not be
used in application code.
:param id: An image format identifier.
:param driver: A function to save images in this format.
"""
SAVE_ALL[id.upper()] = driver
def register_extension(id, extension) -> None:
"""
Registers an image extension. This function should not be
used in application code.
:param id: An image format identifier.
:param extension: An extension used for this format.
"""
EXTENSION[extension.lower()] = id.upper()
def register_extensions(id, extensions):
"""
Registers image extensions. This function should not be
used in application code.
:param id: An image format identifier.
:param extensions: A list of extensions used for this format.
"""
for extension in extensions:
register_extension(id, extension)
def registered_extensions():
"""
Returns a dictionary containing all file extensions belonging
to registered plugins
"""
init()
return EXTENSION
def register_decoder(name, decoder):
"""
Registers an image decoder. This function should not be
used in application code.
:param name: The name of the decoder
:param decoder: A callable(mode, args) that returns an
ImageFile.PyDecoder object
.. versionadded:: 4.1.0
"""
DECODERS[name] = decoder
def register_encoder(name, encoder):
"""
Registers an image encoder. This function should not be
used in application code.
:param name: The name of the encoder
:param encoder: A callable(mode, args) that returns an
ImageFile.PyEncoder object
.. versionadded:: 4.1.0
"""
ENCODERS[name] = encoder
# --------------------------------------------------------------------
# Simple display support.
def _show(image, **options):
from . import ImageShow
ImageShow.show(image, **options)
# --------------------------------------------------------------------
# Effects
def effect_mandelbrot(size, extent, quality):
"""
Generate a Mandelbrot set covering the given extent.
:param size: The requested size in pixels, as a 2-tuple:
(width, height).
:param extent: The extent to cover, as a 4-tuple:
(x0, y0, x1, y1).
:param quality: Quality.
"""
return Image()._new(core.effect_mandelbrot(size, extent, quality))
def effect_noise(size, sigma):
"""
Generate Gaussian noise centered around 128.
:param size: The requested size in pixels, as a 2-tuple:
(width, height).
:param sigma: Standard deviation of noise.
"""
return Image()._new(core.effect_noise(size, sigma))
def linear_gradient(mode):
"""
Generate 256x256 linear gradient from black to white, top to bottom.
:param mode: Input mode.
"""
return Image()._new(core.linear_gradient(mode))
def radial_gradient(mode):
"""
Generate 256x256 radial gradient from black to white, centre to edge.
:param mode: Input mode.
"""
return Image()._new(core.radial_gradient(mode))
# --------------------------------------------------------------------
# Resources
def _apply_env_variables(env=None):
if env is None:
env = os.environ
for var_name, setter in [
("PILLOW_ALIGNMENT", core.set_alignment),
("PILLOW_BLOCK_SIZE", core.set_block_size),
("PILLOW_BLOCKS_MAX", core.set_blocks_max),
]:
if var_name not in env:
continue
var = env[var_name].lower()
units = 1
for postfix, mul in [("k", 1024), ("m", 1024 * 1024)]:
if var.endswith(postfix):
units = mul
var = var[: -len(postfix)]
try:
var = int(var) * units
except ValueError:
warnings.warn(f"{var_name} is not int")
continue
try:
setter(var)
except ValueError as e:
warnings.warn(f"{var_name}: {e}")
_apply_env_variables()
atexit.register(core.clear_cache)
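The suffix handling in ``_apply_env_variables`` can be shown on its own: the ``PILLOW_*`` size variables accept an integer with an optional, case-insensitive "k" (KiB) or "m" (MiB) suffix. ``parse_block_size`` is a hypothetical helper that returns ``None`` where the real code emits a warning:

```python
def parse_block_size(value):
    # Sketch of how _apply_env_variables interprets PILLOW_BLOCK_SIZE
    # and friends. Illustrative helper only.
    value = value.lower()
    units = 1
    for postfix, mul in (("k", 1024), ("m", 1024 * 1024)):
        if value.endswith(postfix):
            units = mul
            value = value[: -len(postfix)]
    try:
        return int(value) * units
    except ValueError:
        return None  # the real code warns and skips the variable
```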
class Exif(MutableMapping):
"""
This class provides read and write access to EXIF image data::
from PIL import Image
im = Image.open("exif.png")
exif = im.getexif() # Returns an instance of this class
Information can be read and written, iterated over or deleted::
print(exif[274]) # 1
exif[274] = 2
for k, v in exif.items():
print("Tag", k, "Value", v) # Tag 274 Value 2
del exif[274]
To access information beyond IFD0, :py:meth:`~PIL.Image.Exif.get_ifd`
returns a dictionary::
from PIL import ExifTags
im = Image.open("exif_gps.jpg")
exif = im.getexif()
gps_ifd = exif.get_ifd(ExifTags.IFD.GPSInfo)
print(gps_ifd)
Other IFDs include ``ExifTags.IFD.Exif``, ``ExifTags.IFD.Makernote``,
``ExifTags.IFD.Interop`` and ``ExifTags.IFD.IFD1``.
:py:mod:`~PIL.ExifTags` also has enum classes to provide names for data::
print(exif[ExifTags.Base.Software]) # PIL
print(gps_ifd[ExifTags.GPS.GPSDateStamp]) # 1999:99:99 99:99:99
"""
endian = None
bigtiff = False
def __init__(self):
self._data = {}
self._hidden_data = {}
self._ifds = {}
self._info = None
self._loaded_exif = None
def _fixup(self, value):
try:
if len(value) == 1 and isinstance(value, tuple):
return value[0]
except Exception:
pass
return value
def _fixup_dict(self, src_dict):
# Helper function
# returns a dict with any single item tuples/lists as individual values
return {k: self._fixup(v) for k, v in src_dict.items()}
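The unwrapping that ``_fixup`` performs is worth seeing standalone: EXIF values are stored as tuples even when there is only one entry, so one-element tuples are collapsed to their single value. ``fixup`` below is an illustrative sketch of the helper above:

```python
def fixup(value):
    # Sketch of Exif._fixup: unwrap one-element tuples, pass everything
    # else (including non-sized values) through unchanged.
    if isinstance(value, tuple) and len(value) == 1:
        return value[0]
    return value
```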
def _get_ifd_dict(self, offset):
try:
# an offset pointer to the location of the nested embedded IFD.
# It should be a long, but may be corrupted.
self.fp.seek(offset)
except (KeyError, TypeError):
pass
else:
from . import TiffImagePlugin
info = TiffImagePlugin.ImageFileDirectory_v2(self.head)
info.load(self.fp)
return self._fixup_dict(info)
def _get_head(self):
version = b"\x2B" if self.bigtiff else b"\x2A"
if self.endian == "<":
head = b"II" + version + b"\x00" + o32le(8)
else:
head = b"MM\x00" + version + o32be(8)
if self.bigtiff:
head += o32le(8) if self.endian == "<" else o32be(8)
head += b"\x00\x00\x00\x00"
return head
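The header bytes built by ``_get_head`` can be reproduced with :py:mod:`struct` in place of Pillow's internal ``o32le``/``o32be`` helpers: a byte-order mark ("II" or "MM"), the version word (0x2A, or 0x2B for BigTIFF), then the offset of the first IFD. ``tiff_head`` is an illustrative sketch mirroring the method above:

```python
import struct


def tiff_head(endian, bigtiff=False):
    # Sketch of Exif._get_head using struct.pack instead of o32le/o32be.
    # Illustrative helper, not the Pillow API.
    version = b"\x2B" if bigtiff else b"\x2A"
    if endian == "<":
        head = b"II" + version + b"\x00" + struct.pack("<I", 8)
    else:
        head = b"MM\x00" + version + struct.pack(">I", 8)
    if bigtiff:
        fmt = "<I" if endian == "<" else ">I"
        head += struct.pack(fmt, 8) + b"\x00\x00\x00\x00"
    return head
```

A little-endian classic TIFF header is the familiar 8 bytes ``II*\x00`` followed by offset 8.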
def load(self, data):
# Extract EXIF information. This is highly experimental,
# and is likely to be replaced with something better in a future
# version.
# The EXIF record consists of a TIFF file embedded in a JPEG
# application marker (!).
if data == self._loaded_exif:
return
self._loaded_exif = data
self._data.clear()
self._hidden_data.clear()
self._ifds.clear()
if data and data.startswith(b"Exif\x00\x00"):
data = data[6:]
if not data:
self._info = None
return
self.fp = io.BytesIO(data)
self.head = self.fp.read(8)
# process dictionary
from . import TiffImagePlugin
self._info = TiffImagePlugin.ImageFileDirectory_v2(self.head)
self.endian = self._info._endian
self.fp.seek(self._info.next)
self._info.load(self.fp)
def load_from_fp(self, fp, offset=None):
self._loaded_exif = None
self._data.clear()
self._hidden_data.clear()
self._ifds.clear()
# process dictionary
from . import TiffImagePlugin
self.fp = fp
if offset is not None:
self.head = self._get_head()
else:
self.head = self.fp.read(8)
self._info = TiffImagePlugin.ImageFileDirectory_v2(self.head)
if self.endian is None:
self.endian = self._info._endian
if offset is None:
offset = self._info.next
self.fp.tell()
self.fp.seek(offset)
self._info.load(self.fp)
def _get_merged_dict(self):
merged_dict = dict(self)
# get EXIF extension
if ExifTags.IFD.Exif in self:
ifd = self._get_ifd_dict(self[ExifTags.IFD.Exif])
if ifd:
merged_dict.update(ifd)
# GPS
if ExifTags.IFD.GPSInfo in self:
merged_dict[ExifTags.IFD.GPSInfo] = self._get_ifd_dict(
self[ExifTags.IFD.GPSInfo]
)
return merged_dict
def tobytes(self, offset=8):
from . import TiffImagePlugin
head = self._get_head()
ifd = TiffImagePlugin.ImageFileDirectory_v2(ifh=head)
for tag, value in self.items():
if tag in [
ExifTags.IFD.Exif,
ExifTags.IFD.GPSInfo,
] and not isinstance(value, dict):
value = self.get_ifd(tag)
if (
tag == ExifTags.IFD.Exif
and ExifTags.IFD.Interop in value
and not isinstance(value[ExifTags.IFD.Interop], dict)
):
value = value.copy()
value[ExifTags.IFD.Interop] = self.get_ifd(ExifTags.IFD.Interop)
ifd[tag] = value
return b"Exif\x00\x00" + head + ifd.tobytes(offset)
def get_ifd(self, tag):
if tag not in self._ifds:
if tag == ExifTags.IFD.IFD1:
if self._info is not None and self._info.next != 0:
self._ifds[tag] = self._get_ifd_dict(self._info.next)
elif tag in [ExifTags.IFD.Exif, ExifTags.IFD.GPSInfo]:
offset = self._hidden_data.get(tag, self.get(tag))
if offset is not None:
self._ifds[tag] = self._get_ifd_dict(offset)
elif tag in [ExifTags.IFD.Interop, ExifTags.IFD.Makernote]:
if ExifTags.IFD.Exif not in self._ifds:
self.get_ifd(ExifTags.IFD.Exif)
tag_data = self._ifds[ExifTags.IFD.Exif][tag]
if tag == ExifTags.IFD.Makernote:
from .TiffImagePlugin import ImageFileDirectory_v2
if tag_data[:8] == b"FUJIFILM":
ifd_offset = i32le(tag_data, 8)
ifd_data = tag_data[ifd_offset:]
makernote = {}
for i in range(0, struct.unpack("<H", ifd_data[:2])[0]):
ifd_tag, typ, count, data = struct.unpack(
"<HHL4s", ifd_data[i * 12 + 2 : (i + 1) * 12 + 2]
)
try:
(
unit_size,
handler,
) = ImageFileDirectory_v2._load_dispatch[typ]
except KeyError:
continue
size = count * unit_size
if size > 4:
(offset,) = struct.unpack("<L", data)
data = ifd_data[offset - 12 : offset + size - 12]
else:
data = data[:size]
if len(data) != size:
warnings.warn(
"Possibly corrupt EXIF MakerNote data. "
f"Expecting to read {size} bytes but only got "
f"{len(data)}. Skipping tag {ifd_tag}"
)
continue
if not data:
continue
makernote[ifd_tag] = handler(
ImageFileDirectory_v2(), data, False
)
self._ifds[tag] = dict(self._fixup_dict(makernote))
elif self.get(0x010F) == "Nintendo":
makernote = {}
for i in range(0, struct.unpack(">H", tag_data[:2])[0]):
ifd_tag, typ, count, data = struct.unpack(
">HHL4s", tag_data[i * 12 + 2 : (i + 1) * 12 + 2]
)
if ifd_tag == 0x1101:
# CameraInfo
(offset,) = struct.unpack(">L", data)
self.fp.seek(offset)
camerainfo = {"ModelID": self.fp.read(4)}
self.fp.read(4)
# Seconds since 2000
camerainfo["TimeStamp"] = i32le(self.fp.read(12))
self.fp.read(4)
camerainfo["InternalSerialNumber"] = self.fp.read(4)
self.fp.read(12)
parallax = self.fp.read(4)
handler = ImageFileDirectory_v2._load_dispatch[
TiffTags.FLOAT
][1]
camerainfo["Parallax"] = handler(
ImageFileDirectory_v2(), parallax, False
)
self.fp.read(4)
camerainfo["Category"] = self.fp.read(2)
makernote = {0x1101: dict(self._fixup_dict(camerainfo))}
self._ifds[tag] = makernote
else:
# Interop
self._ifds[tag] = self._get_ifd_dict(tag_data)
ifd = self._ifds.get(tag, {})
if tag == ExifTags.IFD.Exif and self._hidden_data:
ifd = {
k: v
for (k, v) in ifd.items()
if k not in (ExifTags.IFD.Interop, ExifTags.IFD.Makernote)
}
return ifd
def hide_offsets(self):
for tag in (ExifTags.IFD.Exif, ExifTags.IFD.GPSInfo):
if tag in self:
self._hidden_data[tag] = self[tag]
del self[tag]
def __str__(self):
if self._info is not None:
# Load all keys into self._data
for tag in self._info:
self[tag]
return str(self._data)
def __len__(self):
keys = set(self._data)
if self._info is not None:
keys.update(self._info)
return len(keys)
def __getitem__(self, tag):
if self._info is not None and tag not in self._data and tag in self._info:
self._data[tag] = self._fixup(self._info[tag])
del self._info[tag]
return self._data[tag]
def __contains__(self, tag):
return tag in self._data or (self._info is not None and tag in self._info)
def __setitem__(self, tag, value):
if self._info is not None and tag in self._info:
del self._info[tag]
self._data[tag] = value
def __delitem__(self, tag):
if self._info is not None and tag in self._info:
del self._info[tag]
else:
del self._data[tag]
def __iter__(self):
keys = set(self._data)
if self._info is not None:
keys.update(self._info)
return iter(keys)
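The `Exif._fixup` helper above unwraps the one-element tuples that TIFF readers produce for scalar tags, and `_fixup_dict` applies that to every value in an IFD. A standalone sketch of the same idea (plain functions, no PIL dependency) behaves like this:

```python
def fixup(value):
    # Unwrap a one-element tuple to its bare value; leave anything else alone.
    try:
        if len(value) == 1 and isinstance(value, tuple):
            return value[0]
    except Exception:
        # Scalars without len() fall through unchanged.
        pass
    return value

def fixup_dict(src):
    # Apply fixup to every value, mirroring Exif._fixup_dict above.
    return {k: fixup(v) for k, v in src.items()}

print(fixup((42,)))                       # 42
print(fixup((1, 2)))                      # (1, 2): left unchanged
print(fixup_dict({"a": (7,), "b": [9]}))  # {'a': 7, 'b': [9]}
```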
(end of file: catboost, contrib/python/Pillow/py3/PIL/Image.py)
{
"filename": "igimf_epoch_89.py",
"repo_name": "juzikong/photGalIMF",
"repo_path": "photGalIMF_extracted/photGalIMF-main/simulation_results_from_galaxy_evol/example/igimf_epoch_89.py",
"type": "Python"
}
# File defining a custom IMF (stellar initial mass function).
# The return value is the IMF value for the input stellar mass.
def custom_imf(mass, time):  # there is no time dependence for the IGIMF
if mass < 0.08:
return 0
elif mass < 0.101:
return -881912095498.0004 * mass + 126349388048.4443
elif mass < 0.10201:
return -852999343379.5463 * mass + 123429200084.48044
elif mass < 0.10510100501:
return -771825086636.5791 * mass + 115067381227.91583
elif mass < 0.10828567056280801:
return -698375642355.5582 * mass + 107272041085.80531
elif mass < 0.11156683466653165:
return -631915891670.408 * mass + 100004803063.36438
elif mass < 0.11494742132376223:
return -571780672073.184 * mass + 93229890421.70459
elif mass < 0.11843044313729356:
return -517368120134.2647 * mass + 86913950148.33691
elif mass < 0.12201900399479669:
return -468133647751.17334 * mass + 81025888759.68959
elif mass < 0.12571630183484303:
return -423584491638.08563 * mass + 75536719227.383
elif mass < 0.12952563149674062:
return -383274781503.5729 * mass + 70419418274.561
elif mass < 0.13345038765672337:
return -346801077557.1967 * mass + 65648793339.83267
elif mass < 0.13749406785310975:
return -313798332681.9032 * mass + 61201358553.86909
elif mass < 0.14166027560312686:
return -283936238859.25653 * mass + 57055219118.036606
elif mass < 0.14595272361417722:
return -256915921281.43967 * mass + 53189963515.95951
elif mass < 0.15037523709241038:
return -232466947062.03006 * mass + 49586563027.230675
elif mass < 0.15493175715154747:
return -210344618608.2964 * mass + 46227278048.721466
elif mass < 0.1596263443249965:
return -190327524564.86612 * mass + 43095570762.188644
elif mass < 0.16446318218438824:
return -172215323817.93555 * mass + 40176023718.31866
elif mass < 0.1694465810677574:
return -155826740380.95596 * mass + 37454263936.35827
elif mass < 0.17458098192069152:
return -140997749093.59116 * mass + 34916892145.66835
elif mass < 0.1798709602538704:
return -127579933975.75043 * mass + 32551416820.89501
elif mass < 0.18532123022052294:
return -115439002806.02321 * mass + 30346192685.970474
elif mass < 0.190936648817435:
return -104453443057.75208 * mass + 28290363384.209488
elif mass < 0.19672222021325209:
return -94513305740.80682 * mass + 26373808032.28923
elif mass < 0.20268310020793384:
return -85519104976.9103 * mass + 24587091394.955666
elif mass < 0.20882460082733445:
return -77380822295.08893 * mass + 22921417435.19663
elif mass < 0.21515219505700353:
return -70017005681.73714 * mass + 21368586011.210938
elif mass < 0.2216715217194258:
return -63353954367.92658 * mass + 19920952506.94763
elif mass < 0.22838839049904613:
return -57324981195.24943 * mass + 18571390197.54771
elif mass < 0.23530878711955774:
return -51869745177.25965 * mass + 17313255164.34588
elif mass < 0.24243887867806746:
return -46933647576.609474 * mass + 16140353586.739586
elif mass < 0.24978501914089157:
return -42467285453.54647 * mass + 15046911249.911201
elif mass < 0.2573537550058797:
return -38425957216.49946 * mass + 14027545118.26312
elif mass < 0.26515183113631285:
return -34769215226.14052 * mass + 13077236834.640993
elif mass < 0.27318619677157424:
return -31460460975.12397 * mass + 12191308014.872707
elif mass < 0.2814640117199497:
return -28466578791.900406 * mass + 11365397216.00891
elif mass < 0.28999265273907593:
return -25757604402.43424 * mass + 10595438464.85373
elif mass < 0.29877972010972265:
return -23306425032.748863 * mass + 9877641241.114536
elif mass < 0.30783304440876735:
return -21088508050.685318 * mass + 9208471816.603436
elif mass < 0.31716069348739745:
return -19081655431.023422 * mass + 8584635858.633564
elif mass < 0.32677097966075913:
return -17265781586.4992 * mass + 8003062211.957123
elif mass < 0.33667246711545984:
return -15622712341.196539 * mass + 7460887779.420978
elif mass < 0.34687397954152543:
return -14136003034.26778 * mass + 6955443426.885202
elif mass < 0.3573846079956132:
return -12790773933.533293 * mass + 6484240843.032959
elif mass < 0.36821371900248834:
return -11573561311.646477 * mass + 6044960289.365638
elif mass < 0.3793709629019827:
return -10472182694.378483 * mass + 5635439180.096561
elif mass < 0.39086628244887567:
return -9475614932.292295 * mass + 5253661435.698137
elif mass < 0.40270992167335906:
return -8573883875.54401 * mass + 4897747557.71019
elif mass < 0.41491243500998354:
return -7757964547.586873 * mass + 4565945375.935192
elif mass < 0.4274846967032211:
return -7019690818.683823 * mass + 4256621422.4751673
elif mass < 0.4404379104980254:
return -6351673675.18426 * mass + 3968252890.1399918
elif mass < 0.453783619624026:
return -5747227266.568564 * mass + 3699420135.6402307
elif mass < 0.4675337170822536:
return -5200301990.110294 * mass + 3448799690.66144
elif mass < 0.48170045624356295:
return -4705423943.412504 * mass + 3215157746.403937
elif mass < 0.49629646176819914:
return -4257640138.8508053 * mass + 2997344079.521927
elif mass < 0.5113347408562374:
return -3852468931.589354 * mass + 2794286389.553832
elif mass < 0.5268286948389223:
return -3485855165.0324383 * mass + 2604985019.9686236
elif mass < 0.5427921311212365:
return -3154129584.782682 * mass + 2428508036.838693
elif mass < 0.5592392754863411:
return -2853972114.905384 * mass + 2263986640.914044
elif mass < 0.5761847847728528:
return -2582378628.942507 * mass + 2110610890.5078216
elif mass < 0.593643759936255:
return -2336630883.1087937 * mass + 1967625714.139305
elif mass < 0.6116317595060835:
return -2114269310.7448237 * mass + 1834327193.3037505
elif mass < 0.6301648134508773:
return -1913068405.742814 * mass + 1710059097.0706878
elif mass < 0.6492594374632522:
return -1731014448.562266 * mass + 1594209651.4456558
elif mass < 0.6689326476778262:
return -1566285351.917694 * mass + 1486208527.597762
elif mass < 0.689201975835112:
return -1417232424.4142952 * mass + 1385524034.1202335
elif mass < 0.7100854849048917:
return -1282363869.6177766 * mass + 1291660499.5043395
elif mass < 0.7316017851829947:
return -1160329855.4088144 * mass + 1204155831.9407647
elif mass < 0.7537700508758246:
return -1049909004.1847667 * mass + 1122579244.432484
elif mass < 0.7766100371874131:
return -949996168.7014714 * mass + 1046529134.0237803
elif mass < 0.8001420979242287:
return -859591371.2048419 * mass + 975631104.701447
elif mass < 0.8243872036334308:
return -777789795.1523108 * mass + 909536124.2368587
elif mass < 0.8493669602907274:
return -703772729.355241 * mass + 847918805.8942916
elif mass < 0.8751036285544969:
return -636799373.9069062 * mass + 790475806.5464351
elif mass < 0.9016201436033267:
return -576199425.885282 * mass + 736924333.3106526
elif mass < 0.9289401355746512:
return -521366370.6256449 * mass + 687000751.3549962
elif mass < 0.9570879506226987:
return -471751411.4175549 * mass + 640459286.0192342
elif mass < 0.9860886726145172:
return -426857976.8721611 * mass + 597070812.8618844
elif mass < 1.0159681454834097:
return -378224034.20224994 * mass + 548611192.837717
elif mass < 1.0467529962597026:
return -342742154.28123385 * mass + 512208975.96922934
elif mass < 1.078470658799368:
return -310588893.6146222 * mass + 478222169.8875282
elif mass < 1.1111493982316476:
return -281451988.41692126 * mass + 446490503.8791636
elif mass < 1.1448183361474649:
return -255048468.93246916 * mass + 416864341.73713684
elif mass < 1.1795074765510691:
return -231121911.30959457 * mass + 389203976.12528235
elif mass < 1.215247732598043:
return -209439947.281311 * mass + 363378969.76383543
elif mass < 1.2520709541434965:
return -189792007.4675967 * mass + 339267540.3298471
elif mass < 1.2900099561249987:
return -171987276.38238186 * mass + 316755986.1713393
elif mass < 1.3290985478055424:
return -155852839.28501594 * mass + 295738150.12725586
elif mass < 1.3693715629025982:
return -141232002.87908453 * mass + 276114918.9249596
elif mass < 1.4108648906301098:
return -127982773.5493596 * mass + 257793755.7942201
elif mass < 1.4536155076810928:
return -115976478.35815808 * mass + 240688264.09393674
elif mass < 1.4976615111793377:
return -105096515.40857038 * mass + 224717779.89378324
elif mass < 1.5430421526295828:
return -95237221.43822642 * mass + 209806991.5892322
elif mass < 1.589797872896412:
return -86302845.64635111 * mass + 195885584.7567351
elif mass < 1.6379703382430462:
return -78206619.78770545 * mass + 182887910.57358485
elif mass < 1.6876024774621488:
return -70869915.5029363 * mass + 170752676.23962244
elif mass < 1.7387385201317294:
return -64221480.701078944 * mass + 159422655.94017956
elif mass < 1.791424036030241:
return -58196747.57857353 * mass + 148844420.98790362
elif mass < 1.8457059757459933:
return -52737205.554136746 * mass + 138968087.87036958
elif mass < 1.9016327125170727:
return -47789833.029834844 * mass + 129747083.01574302
elif mass < 1.9592540853390523:
return -43306582.45961368 * mass + 121137923.16692841
elif mass < 2.018621443378911:
return -39243913.72449903 * mass + 113100010.32866889
elif mass < 2.0797876917347353:
return -35562371.28275334 * mass + 105595440.32068571
elif mass < 2.14280733858199:
return -32226200.988276925 * mass + 98588824.03384253
elif mass < 2.2077365437483625:
return -29203002.856017157 * mass + 92047120.54667199
elif mass < 2.2746331687604817:
return -26463416.40203488 * mass + 85939481.3150872
elif mass < 2.3435568284070927:
return -23980835.502444685 * mass + 80237104.70076022
elif mass < 2.4145689438646563:
return -21731150.001893587 * mass + 74913100.15190408
elif mass < 2.4877327974326993:
return -19692511.562272504 * mass + 69942361.39625351
elif mass < 2.5631135889277075:
return -17845121.47753356 * mass + 65301448.04800619
elif mass < 2.640778493785806:
return -16171038.394003037 * mass + 60968475.07059142
elif mass < 2.7207967229260097:
return -14654004.068820138 * mass + 56923009.57401629
elif mass < 2.8032395844273905:
return -13279285.474247394 * mass + 53145974.45994402
elif mass < 2.888180547075125:
return -12033531.714499747 * mass + 49619558.46035055
elif mass < 2.9756953058320486:
return -10904644.36544424 * mass + 46327132.145368725
elif mass < 3.0658618492940652:
return -9881659.977970002 * mass + 43253169.50430538
elif mass < 3.1587605291895247:
return -8954643.603935985 * mass + 40383174.730044276
elif mass < 3.2544741319844963:
return -8114592.310631201 * mass + 37703613.86152058
elif mass < 3.353087952657759:
return -7353347.746728442 * mass + 35201850.96198223
elif mass < 3.4546898707112415:
return -6663516.910575382 * mass + 32866088.532013968
elif mass < 3.559370428483663:
return -6038400.351360663 * mass + 30685311.876376472
elif mass < 3.667222911837147:
return -5471927.105856929 * mass + 28649237.16229057
elif mass < 3.7783434332887245:
return -4958595.738863783 * mass + 26748262.92422453
elif mass < 3.892831017660806:
return -4493420.914756326 * mass + 24973424.786514346
elif mass < 4.010787690326946:
return -4071884.9812498903 * mass + 23316353.19027959
elif mass < 4.132318568131543:
return -3689894.095182401 * mass + 21769233.92531387
elif mass < 4.257531953064498:
return -3343738.464214127 * mass + 20324771.28080991
elif mass < 4.386539428774305:
return -3030056.3183271675 * mass + 18976153.641166396
elif mass < 4.519455960005596:
return -2745801.2612217274 * mass + 17717021.364621893
elif mass < 4.656399995049725:
return -2488212.684538307 * mass + 16541436.793256415
elif mass < 4.797493571299727:
return -2254788.957574303 * mass + 15443856.252927545
elif mass < 4.94286242400368:
return -2043263.1321236799 * mass + 14419103.91111523
elif mass < 5.092636098313417:
return -1851580.9264861692 * mass + 13462347.369371472
elif mass < 5.246948064728412:
return -1677880.7748385717 * mass + 12569074.875304293
elif mass < 5.405935838037748:
return -1520475.7482114013 * mass + 11735074.04662867
elif mass < 5.569741099866129:
return -1377837.171488895 * mass + 10956412.006936198
elif mass < 5.738509824933173:
return -1248579.7773293327 * mass + 10229416.839531496
elif mass < 5.912392411138472:
return -1131448.2528230778 * mass + 9550660.27187066
elif mass < 6.091543813588379:
return -1025305.0482320755 * mass + 8916941.508941254
elif mass < 6.27612368268392:
return -929119.3294146868 * mass + 8325272.139359356
elif mass < 6.466296506392926:
return -841956.966641636 * mass + 7772862.042986926
elif mass < 6.662231756833138:
return -762971.4625816531 * mass + 7257106.233641686
elif mass < 6.8641040412969385:
return -691395.7313472136 * mass + 6775572.574825793
elif mass < 7.072093257852278:
return -626534.6487633276 * mass + 6325990.310560303
elif mass < 7.286384755658459:
return -567758.3015101728 * mass + 5906239.357245501
elif mass < 7.507169500139667:
return -514495.8695739057 * mass + 5514340.306029102
elif mass < 7.734644243163398:
return -466230.0825977056 * mass + 5148445.088564688
elif mass < 7.969011698375493:
return -422492.19629126147 * mass + 4806828.262119563
elif mass < 8.210480721847969:
return -382857.44011296297 * mass + 4487878.872949734
elif mass < 8.459266498200684:
return -346940.8920130764 * mass + 4190092.859566341
elif mass < 8.715590732362662:
return -314393.74017471925 * mass + 3912065.9600711716
elif mass < 8.979681847143983:
return -284899.89544761827 * mass + 3652487.0901141223
elif mass < 9.251775186794292:
return -258172.9215758934 * mass + 3410132.1602485017
elif mass < 9.532113226729347:
return -233953.25340609462 * mass + 3183858.3035202585
elif mass < 9.820945789612473:
return -212005.67606085283 * mass + 2972598.4860824393
elif mass < 10.118530267983521:
return -192117.04059527343 * mass + 2775356.475408798
elif mass < 10.42513185363369:
return -174094.1939521161 * mass + 2591202.1423872104
elif mass < 10.741023773930646:
return -157762.10311133554 * mass + 2419267.0751323723
elif mass < 11.06648753530452:
return -142962.15521672467 * mass + 2258740.483838777
elif mass < 11.401813174111782:
return -129550.61717061569 * mass + 2108865.3773591933
elif mass < 11.747299515100543:
return -117397.23973695925 * mass + 1968934.9934820938
elif mass < 12.103254437707607:
return -106383.99259577072 * mass + 1838289.4660695463
elif mass < 12.469995150424586:
return -96403.91806463679 * mass + 1716312.7133444645
elif mass < 12.847848473477601:
return -87360.09235456264 * mass + 1602429.5326492898
elif mass < 13.237151130072446:
return -79164.6842722866 * mass + 1496102.8879772783
elif mass < 13.638250046464773:
return -71738.10222743743 * mass + 1396831.37748537
elif mass < 14.051502661122703:
return -65008.221260554754 * mass + 1304146.869047045
elif mass < 14.477277243257383:
return -58909.68258489232 * mass + 1217612.2926927358
elif mass < 14.915953221005326:
return -53383.25883957604 * mass + 1136819.579530759
elif mass < 15.367921519555008:
return -48375.27888946114 * mass + 1061387.7374272011
elif mass < 15.833584909519045:
return -43837.10658551415 * mass + 990961.054370338
elif mass < 16.31335836586238:
return -39724.66842374193 * mass + 925207.4210497297
elif mass < 16.807669437706377:
return -35998.02551516624 * mass + 863816.764735992
elif mass < 17.316958629338316:
return -32620.98571012829 * mass + 806499.5870788775
elif mass < 17.841679792765895:
return -29560.752109917725 * mass + 752985.5989275265
elif mass < 18.382300532166493:
return -26787.604552143403 * mass + 703022.4457347527
elif mass < 18.93930262059167:
return -24274.61097653632 * mass + 656374.5175350609
elif mass < 19.51318242929822:
return -21997.365868049095 * mass + 612821.837884678
elif mass < 20.104451370088384:
return -19933.75323709776 * mass + 572159.0265245157
elif mass < 20.71363635105343:
return -18063.73183503036 * mass + 534194.3308734964
elif mass < 21.3412802461267:
return -16369.140518938324 * mass + 498748.721785933
elif mass < 21.987942378864584:
return -14833.521875538247 * mass + 465655.0493083184
elif mass < 22.654199020886562:
return -13441.962391212743 * mass + 434757.25445444183
elif mass < 23.340643905418442:
return -12180.947616004374 * mass + 405909.6332822564
elif mass < 24.047888756396528:
return -11038.230914917563 * mass + 378976.1498013189
elif mass < 24.7765638336041:
return -10002.71453190961 * mass + 353829.79447141325
elif mass < 25.52731849432614:
return -9064.341811481938 * mass + 330351.9852669377
elif mass < 26.300821772022715:
return -8213.999531154454 * mass + 308432.0084826039
elif mass < 27.097762972536774:
return -7443.429396312369 * mass + 287966.49664371257
elif mass < 27.91885228836761:
return -6745.1478378761585 * mass + 268858.94105872384
elif mass < 28.764821431557436:
return -6112.373333903076 * mass + 251019.2367157852
elif mass < 29.63642428575506:
return -5538.96054927282 * mass + 234363.25737670768
elif mass < 30.53443757803772:
return -5019.34065385514 * mass + 218812.45886509324
elif mass < 31.459661571089846:
return -4548.467239534732 * mass + 204293.50867755222
elif mass < 32.41292077635544:
return -4121.767310858169 * mass + 190737.94017148804
elif mass < 33.395064688799785:
return -3735.096873336325 * mass + 178081.8296986817
elif mass < 34.40696854393511:
return -3384.7006880894783 * mass + 166265.495162195
elif mass < 35.44953409778489:
return -3067.175801981311 * mass + 155233.21457503116
elif mass < 36.52369043048187:
return -2779.4384990565954 * mass + 144932.96329337286
elif mass < 37.63039477421591:
return -2518.6943523251293 * mass + 135316.16868532004
elif mass < 38.77063336626942:
return -2282.4110850403245 * mass + 126337.48107813275
elif mass < 39.94542232790075:
return -2098.830304819598 * mass + 119149.66772500594
elif mass < 41.15580856985847:
return -1886.2412854129125 * mass + 110574.58168060276
elif mass < 42.402870725333756:
return -1721.2251525174197 * mass + 103716.39177930114
elif mass < 43.68772011118209:
return -1558.5915916816336 * mass + 96757.97174053216
elif mass < 45.01150171827101:
return -1400.4085968484894 * mass + 89779.67112006168
elif mass < 46.37539523183634:
return -1267.9757671119032 * mass + 83750.65664387631
elif mass < 47.78061608275621:
return -1157.3487629808835 * mass + 78569.80448206994
elif mass < 49.22841653067981:
return -1047.9670308564878 * mass + 73296.4201529144
elif mass < 50.720086779975944:
return -948.9152231708367 * mass + 68376.38739686998
elif mass < 52.256956129496:
return -852.2676650968601 * mass + 63426.434415172145
elif mass < 53.84039415717585:
return -771.6413266114362 * mass + 59164.75518994806
elif mass < 55.47181194053243:
return -698.6369762907558 * mass + 55188.962723190736
elif mass < 57.1526633141425:
return -632.5330787055476 * mass + 51479.82283481699
elif mass < 58.88444616522433:
return -572.6793366195942 * mass + 48019.56213315388
elif mass < 60.668703768476796:
return -518.4840159751182 * mass + 44791.43434745292
elif mass < 62.50702616136541:
return -469.4137314874244 * mass + 41779.9596309389
elif mass < 64.40105156108095:
return -428.70207155685165 * mass + 39207.686490530454
elif mass < 66.35246782443328:
return -388.1554827750738 * mass + 36573.149005620304
elif mass < 68.36301395198143:
return -351.44119004559644 * mass + 34115.354301338746
elif mass < 70.43448163774043:
return -318.1430781657978 * mass + 31816.660841848618
elif mass < 72.5687168658456:
return -288.0456767502926 * mass + 29677.91564178936
elif mass < 74.76762155559759:
return -258.3598336364044 * mass + 27500.73324765288
elif mass < 77.03315525635374:
return -236.07609477009555 * mass + 25816.499419184136
elif mass < 79.36733689377651:
return -213.69788027666138 * mass + 24075.721432927
elif mass < 81.77224656899482:
return -191.6048703181 * mass + 22303.55636399969
elif mass < 84.25002741228194:
return -175.1308349925522 * mass + 20941.720795221983
elif mass < 86.8028874929015:
return -158.52447133121225 * mass + 19528.907705399888
elif mass < 89.4331017868239:
return -143.49062072130545 * mass + 18211.122915502798
elif mass < 92.14301420406645:
return -129.85589276250371 * mass + 16978.53657696608
elif mass < 94.93503967746386:
return -117.53733946730361 * mass + 15832.331460204688
elif mass < 97.81166631473069:
return -106.38621211242004 * mass + 14763.31224564384
elif mass < 100.77545761573333:
return -96.27220727177901 * mass + 13763.303578911187
elif mass < 103.82905475694768:
return -87.11809610786906 * mass + 12830.763433328146
elif mass < 106.97517894513796:
return -78.83277492961786 * mass + 11961.139319274958
elif mass < 110.21663384235457:
return -71.33396883599468 * mass + 11150.205667431943
elif mass < 113.55630806441175:
return -65.25968997331644 * mass + 10474.128501135436
elif mass < 116.99717775507149:
return -58.39214107599081 * mass + 9686.645525210588
elif mass < 120.54230923822792:
return -52.82258508484542 * mass + 9027.137875397693
elif mass < 124.19486175045546:
return -48.33882451770344 * mass + 8480.626537567978
elif mass < 127.95809025635602:
return -43.725775306179074 * mass + 7902.089208774394
elif mass < 131.83534834921383:
return -39.551609523690175 * mass + 7362.731228954252
elif mass < 135.83009123954335:
return -35.774718468081005 * mass + 6859.918666609693
elif mass < 139.94587883419274:
return -32.754930560282254 * mass + 6445.70371016537
elif mass < 144.1863789087476:
return -29.621013927287606 * mass + 6003.374484117547
elif mass < 148.55537037606157:
return -27.133515506848582 * mass + 5642.235489786828
elif mass < 153.05674665382662:
return -24.211271151611026 * mass + 5204.37643183128
elif mass < 157.69451913418422:
return -21.878122000081255 * mass + 4842.8025954939685
elif mass < 162.47282075846914:
return -20.0457000220736 * mass + 4550.067523571768
elif mass < 167.39590970027152:
return -18.37401625079183 * mass + 4275.345142828897
elif mass < 172.46817316009944:
return -16.851098356037333 * mass + 4017.9099444185154
elif mass < 177.69413127502364:
return -15.701958729294592 * mass + 3817.7722681209907
elif mass < 183.07844114678812:
return -14.662876310591198 * mass + 3631.325700812953
elif mass < 188.62590099197692:
return -14.655764398061297 * mass + 3630.4831542698353
elif mass < 194.3414544179348:
return -16.798722448739376 * mass + 4041.2264183852003
elif mass < 198.24771765173531:
return 0 * mass + 0
else:
return 0
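The long `elif` ladder above is a generated piecewise-linear lookup table: each mass bin has a precomputed slope and intercept, and masses outside the tabulated range return zero. The same pattern can be written compactly with `bisect`; the bin edges and coefficients below are hypothetical placeholders for illustration, not the IGIMF values tabulated above:

```python
import bisect

# Hypothetical bin edges and (slope, intercept) pairs -- illustrative only,
# NOT the IGIMF coefficients from the table above.
edges = [0.08, 0.5, 1.0, 100.0]                     # mass bin boundaries
coeffs = [(-2.0, 3.0), (-1.0, 2.5), (-0.5, 2.0)]    # a * mass + b per bin

def piecewise_imf(mass):
    # Outside the tabulated range the IMF is zero, as in custom_imf above.
    if mass < edges[0] or mass >= edges[-1]:
        return 0.0
    # Find the bin whose interval contains this mass, then evaluate its line.
    i = bisect.bisect_right(edges, mass) - 1
    a, b = coeffs[i]
    return a * mass + b
```

This keeps the coefficients as data rather than code, which is easier to regenerate and to audit than a thousand-line `elif` chain.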
{
"filename": "CODE_OF_CONDUCT.md",
"repo_name": "heal-research/operon",
"repo_path": "operon_extracted/operon-main/CODE_OF_CONDUCT.md",
"type": "Markdown"
}
# Code of Conduct
* You will be judged by your contributions first, and your sense of humor
second.
* Nobody owes you anything.
{
"filename": "test_buffer.py",
"repo_name": "mpi4py/mpi4py",
"repo_path": "mpi4py_extracted/mpi4py-master/test/test_buffer.py",
"type": "Python"
}
from mpi4py import MPI
import mpiunittest as unittest
try:
import array
except ImportError:
array = None
class TestBuffer(unittest.TestCase):
def testNewEmpty(self):
buffer = MPI.buffer
buf = buffer()
self.assertEqual(buf.address, 0)
self.assertIsNone(buf.obj)
self.assertEqual(buf.nbytes, 0)
self.assertFalse(buf.readonly)
self.assertEqual(buf.format, 'B')
self.assertEqual(buf.itemsize, 1)
self.assertEqual(len(buf), 0)
buf[:] = 0
buf[:] = buffer()
m = memoryview(buf)
self.assertEqual(m.format, 'B')
self.assertEqual(m.itemsize, 1)
self.assertEqual(m.ndim, 1)
self.assertIs(m.readonly, False)
self.assertEqual(m.shape, (0,))
self.assertEqual(m.strides, (1,))
self.assertEqual(m.tobytes(), b"")
self.assertEqual(m.tolist(), [])
buf.release()
self.assertEqual(buf.address, 0)
self.assertEqual(buf.nbytes, 0)
self.assertFalse(buf.readonly)
def testNewBad(self):
buffer = MPI.buffer
for obj in (None, 0, 0.0, [], (), []):
self.assertRaises(TypeError, buffer, obj)
def testNewBytes(self):
buffer = MPI.buffer
obj = b"abc"
buf = buffer(obj)
self.assertEqual(buf.obj, obj)
self.assertEqual(buf.nbytes, len(obj))
self.assertIs(buf.readonly, True)
with self.assertRaises(TypeError):
buf[:] = 0
def testNewBytearray(self):
buffer = MPI.buffer
obj = bytearray([1,2,3])
buf = buffer(obj)
self.assertEqual(buf.obj, obj)
self.assertEqual(buf.nbytes, len(obj))
self.assertFalse(buf.readonly)
with self.assertRaises(ValueError):
buf[0:1] = buf[1:3]
@unittest.skipIf(array is None, 'array')
def testNewArray(self):
buffer = MPI.buffer
obj = array.array('i', [1,2,3])
buf = buffer(obj)
self.assertEqual(buf.obj, obj)
self.assertEqual(buf.nbytes, len(obj)*obj.itemsize)
self.assertFalse(buf.readonly)
def testAllocate(self):
buffer = MPI.buffer
for size in (0, 1, 2):
buf = buffer.allocate(size)
self.assertEqual(buf.nbytes, size)
self.assertNotEqual(buf.address, 0)
view = memoryview(buf.obj)
self.assertEqual(buf.nbytes, view.nbytes)
for clear in (False, True):
buf = buffer.allocate(1024, clear)
self.assertEqual(buf.nbytes, 1024)
self.assertNotEqual(buf.address, 0)
if clear:
self.assertEqual(buf[0], 0)
self.assertEqual(buf[-1], 0)
self.assertRaises(TypeError, buffer.allocate, None)
self.assertRaises(ValueError, buffer.allocate, -1)
def testFromBufferBad(self):
buffer = MPI.buffer
for obj in (None, 0, 0.0, [], (), []):
self.assertRaises(TypeError, buffer.frombuffer, obj)
def testFromBufferBytes(self):
buffer = MPI.buffer
buf = buffer.frombuffer(b"abc", readonly=True)
self.assertNotEqual(buf.address, 0)
self.assertEqual(type(buf.obj), bytes)
self.assertEqual(buf.obj, b"abc")
self.assertEqual(buf.nbytes, 3)
self.assertTrue (buf.readonly)
self.assertEqual(buf.format, 'B')
self.assertEqual(buf.itemsize, 1)
self.assertEqual(len(buf), 3)
m = memoryview(buf)
self.assertEqual(m.format, 'B')
self.assertEqual(m.itemsize, 1)
self.assertEqual(m.ndim, 1)
self.assertTrue (m.readonly)
self.assertEqual(m.shape, (3,))
self.assertEqual(m.strides, (1,))
self.assertEqual(m.tobytes(), b"abc")
self.assertEqual(m.tolist(), [ord(c) for c in "abc"])
buf.release()
self.assertEqual(buf.address, 0)
self.assertEqual(buf.nbytes, 0)
self.assertFalse(buf.readonly)
@unittest.skipIf(array is None, 'array')
def testFromBufferArrayRO(self):
buffer = MPI.buffer
obj = array.array('B', [1,2,3])
buf = buffer.frombuffer(obj, readonly=True)
self.assertNotEqual(buf.address, 0)
self.assertEqual(type(buf.obj), array.array)
self.assertEqual(buf.nbytes, 3)
self.assertTrue (buf.readonly)
self.assertEqual(buf.format, 'B')
self.assertEqual(buf.itemsize, 1)
self.assertEqual(len(buf), 3)
m = memoryview(buf)
self.assertEqual(m.format, 'B')
self.assertEqual(m.itemsize, 1)
self.assertEqual(m.ndim, 1)
self.assertTrue (m.readonly)
self.assertEqual(m.shape, (3,))
self.assertEqual(m.strides, (1,))
self.assertEqual(m.tobytes(), b"\1\2\3")
self.assertEqual(m.tolist(), [1,2,3])
buf.release()
self.assertEqual(buf.address, 0)
self.assertEqual(buf.nbytes, 0)
self.assertFalse(buf.readonly)
@unittest.skipIf(array is None, 'array')
def testFromBufferArrayRW(self):
buffer = MPI.buffer
obj = array.array('B', [1,2,3])
buf = buffer.frombuffer(obj, readonly=False)
self.assertNotEqual(buf.address, 0)
self.assertEqual(buf.nbytes, 3)
self.assertFalse(buf.readonly)
self.assertEqual(len(buf), 3)
m = memoryview(buf)
self.assertEqual(m.format, 'B')
self.assertEqual(m.itemsize, 1)
self.assertEqual(m.ndim, 1)
self.assertFalse(m.readonly)
self.assertEqual(m.shape, (3,))
self.assertEqual(m.strides, (1,))
self.assertEqual(m.tobytes(), b"\1\2\3")
self.assertEqual(m.tolist(), [1,2,3])
buf[:] = 1
self.assertEqual(obj, array.array('B', [1]*3))
buf[1:] = array.array('B', [7]*2)
self.assertEqual(obj, array.array('B', [1,7,7]))
buf[1:2] = array.array('B', [8]*1)
self.assertEqual(obj, array.array('B', [1,8,7]))
buf.release()
self.assertEqual(buf.address, 0)
self.assertEqual(buf.nbytes, 0)
self.assertFalse(buf.readonly)
@unittest.skipIf(array is None, 'array')
def testFromAddress(self):
buffer = MPI.buffer
obj = array.array('B', [1,2,3])
addr, size = obj.buffer_info()
nbytes = size * obj.itemsize
buf = buffer.fromaddress(addr, nbytes, readonly=False)
self.assertNotEqual(buf.address, 0)
self.assertEqual(buf.nbytes, 3)
self.assertFalse(buf.readonly)
self.assertEqual(len(buf), 3)
m = memoryview(buf)
self.assertEqual(m.format, 'B')
self.assertEqual(m.itemsize, 1)
self.assertEqual(m.ndim, 1)
self.assertFalse(m.readonly)
self.assertEqual(m.shape, (3,))
self.assertEqual(m.strides, (1,))
self.assertEqual(m.tobytes(), b"\1\2\3")
self.assertEqual(m.tolist(), [1,2,3])
buf[:] = 1
self.assertEqual(obj, array.array('B', [1]*3))
buf[1:] = array.array('B', [7]*2)
self.assertEqual(obj, array.array('B', [1,7,7]))
buf[1:2] = array.array('B', [8]*1)
self.assertEqual(obj, array.array('B', [1,8,7]))
buf.release()
self.assertEqual(buf.address, 0)
self.assertEqual(buf.nbytes, 0)
self.assertFalse(buf.readonly)
with self.assertRaises(ValueError):
buffer.fromaddress(addr, -1)
with self.assertRaises(ValueError):
buffer.fromaddress(0, 1)
def testToReadonly(self):
buffer = MPI.buffer
obj = bytearray(b"abc")
buf1 = buffer.frombuffer(obj)
buf2 = buf1.toreadonly()
self.assertFalse(buf1.readonly)
self.assertTrue (buf2.readonly)
self.assertEqual(buf1.address, buf2.address)
self.assertEqual(buf1.obj, buf2.obj)
self.assertEqual(type(buf1.obj), type(buf2.obj))
self.assertEqual(buf1.nbytes, buf2.nbytes)
def testCast(self):
buffer = MPI.buffer
buf = buffer.allocate(2 * 3 * 4)
mem = buf.cast('i')
for i in range(2 * 3):
mem[i] = i
mem = buf.cast('i', (2, 3))
for i in range(2):
for j in range(3):
self.assertEqual(mem[i, j], 3 * i + j)
mem = buf.cast('i', (3, 2))
for i in range(3):
for j in range(2):
self.assertEqual(mem[i, j], 2 * i + j)
def testSequence(self):
n = 16
try:
mem = MPI.Alloc_mem(n, MPI.INFO_NULL)
except NotImplementedError:
self.skipTest('mpi-alloc_mem')
try:
self.assertIs(type(mem), MPI.buffer)
self.assertNotEqual(mem.address, 0)
self.assertEqual(mem.nbytes, n)
self.assertFalse(mem.readonly)
self.assertEqual(len(mem), n)
def delitem(): del mem[n]
def getitem1(): return mem[n]
def getitem2(): return mem[::2]
def getitem3(): return mem[None]
def setitem1(): mem[n] = 0
def setitem2(): mem[::2] = 0
def setitem3(): mem[None] = 0
self.assertRaises(Exception, delitem)
self.assertRaises(IndexError, getitem1)
self.assertRaises(IndexError, getitem2)
self.assertRaises(TypeError, getitem3)
self.assertRaises(IndexError, setitem1)
self.assertRaises(IndexError, setitem2)
self.assertRaises(TypeError, setitem3)
for i in range(n):
mem[i] = i
for i in range(n):
self.assertEqual(mem[i], i)
mem[:] = 0
for i in range(-n, 0):
mem[i] = abs(i)
for i in range(-n, 0):
self.assertEqual(mem[i], abs(i))
mem[:] = 0
for i in range(n):
self.assertEqual(mem[i], 0)
mem[:] = 255
for i in range(n):
self.assertEqual(mem[i], 255)
mem[:n//2] = 1
mem[n//2:] = 0
for i in range(n//2):
self.assertEqual(mem[i], 1)
for i in range(n//2, n):
self.assertEqual(mem[i], 0)
mem[:] = 0
mem[1:5] = b"abcd"
mem[10:13] = b"xyz"
self.assertEqual(mem[0], 0)
for i, c in enumerate("abcd"):
self.assertEqual(mem[1+i], ord(c))
for i in range(5, 10):
self.assertEqual(mem[i], 0)
for i, c in enumerate("xyz"):
self.assertEqual(mem[10+i], ord(c))
for i in range(13, n):
self.assertEqual(mem[i], 0)
self.assertEqual(mem[1:5].tobytes(), b"abcd")
self.assertEqual(mem[10:13].tobytes(), b"xyz")
finally:
MPI.Free_mem(mem)
self.assertEqual(mem.address, 0)
self.assertEqual(mem.nbytes, 0)
self.assertFalse(mem.readonly)
def testBuffering(self):
buf = MPI.Alloc_mem((1<<16)+MPI.BSEND_OVERHEAD)
MPI.Attach_buffer(buf)
try:
with self.catchNotImplementedError(4,1):
MPI.Flush_buffer()
with self.catchNotImplementedError(4,1):
MPI.Iflush_buffer().Wait()
finally:
oldbuf = MPI.Detach_buffer()
self.assertEqual(oldbuf.address, buf.address)
self.assertEqual(oldbuf.nbytes, buf.nbytes)
MPI.Free_mem(buf)
if MPI.BUFFER_AUTOMATIC != 0:
MPI.Attach_buffer(MPI.BUFFER_AUTOMATIC)
bufauto = MPI.Detach_buffer()
self.assertEqual(bufauto, MPI.BUFFER_AUTOMATIC)
def testAttachBufferReadonly(self):
buf = MPI.buffer(b"abc")
self.assertRaises(BufferError, MPI.Attach_buffer, buf)
try:
MPI.buffer
except AttributeError:
unittest.disable(TestBuffer, 'mpi4py-buffer')
if __name__ == '__main__':
unittest.main()
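The slice semantics exercised in `testSequence` above mirror Python's own buffer protocol. A stdlib-only sketch (no mpi4py required) shows the same behavior with `memoryview` over a `bytearray`; note that `MPI.buffer` additionally allows scalar fills like `mem[:] = 0`, which a plain `memoryview` does not.

```python
def buffer_demo(n=16):
    # memoryview over a writable bytearray supports integer indexing,
    # slice assignment from bytes, and tobytes() round-trips, just
    # like the MPI.buffer operations tested above.
    mem = memoryview(bytearray(n))
    mem[1:5] = b"abcd"
    mem[10:13] = b"xyz"
    return mem

mem = buffer_demo()
```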
|
{
"filename": "_tickformatstopdefaults.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/validators/scattermapbox/marker/colorbar/_tickformatstopdefaults.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class TickformatstopdefaultsValidator(_plotly_utils.basevalidators.CompoundValidator):
def __init__(
self,
plotly_name="tickformatstopdefaults",
parent_name="scattermapbox.marker.colorbar",
**kwargs
):
super(TickformatstopdefaultsValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
data_class_str=kwargs.pop("data_class_str", "Tickformatstop"),
data_docs=kwargs.pop(
"data_docs",
"""
""",
),
**kwargs
)
|
{
"filename": "meanvar.py",
"repo_name": "light-curve/light-curve-python",
"repo_path": "light-curve-python_extracted/light-curve-python-master/light-curve/light_curve/light_curve_py/features/meanvar.py",
"type": "Python"
}
|
import numpy as np
from ._base import BaseSingleBandFeature
class MeanVariance(BaseSingleBandFeature):
def _eval_single_band(self, t, m, sigma=None):
return np.std(m, ddof=1) / np.mean(m)
@property
def size_single_band(self):
return 1
__all__ = ("MeanVariance",)
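`MeanVariance` is the sample standard deviation divided by the mean; since `np.std(m, ddof=1)` is the sample estimator, the stdlib `statistics.stdev` reproduces it without numpy. A dependency-free sketch:

```python
import statistics

def mean_variance(m):
    # Matches np.std(m, ddof=1) / np.mean(m): statistics.stdev uses
    # the sample (N-1) estimator, same as ddof=1.
    return statistics.stdev(m) / statistics.mean(m)

ratio = mean_variance([1.0, 2.0, 3.0, 4.0])
```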
|
{
"filename": "elements.py",
"repo_name": "astropy/pyvo",
"repo_path": "pyvo_extracted/pyvo-main/pyvo/utils/xml/elements.py",
"type": "Python"
}
|
# Licensed under a 3-clause BSD style license - see LICENSE.rst
from inspect import getmembers
from functools import partial
import warnings
from astropy.utils.xml import iterparser
from astropy.io.votable.exceptions import warn_or_raise
from pyvo.utils.xml.exceptions import UnknownElementWarning
__all__ = [
"xmlattribute", "xmlelement",
"make_add_complexcontent", "make_add_simplecontent",
"Element", "ElementWithXSIType", "ContentMixin", "parse_for_object"]
def parse_for_object(
source, object_type, pedantic=None, filename=None,
_debug_python_based_parser=False
):
"""
Parses an xml file (or file-like object), and returns a
object of specified object_type. object_type must be a subtype of
`~pyvo.utils.xml.elements.Element` type
Parameters
----------
source : str or readable file-like object
Path or file object containing a tableset xml file.
object_type : type of object to return (a subtype of `~pyvo.utils.xml.elements.Element`)
pedantic : bool, optional
When `True`, raise an error when the file violates the spec,
otherwise issue a warning. Warnings may be controlled using
the standard Python mechanisms. See the `warnings`
module in the Python standard library for more information.
Defaults to False.
filename : str, optional
A filename, URL or other identifier to use in error messages.
If *filename* is None and *source* is a string (i.e. a path),
then *source* will be used as a filename for error messages.
Therefore, *filename* is only required when source is a
file-like object.
Returns
-------
object : `~pyvo.utils.xml.elements.Element` object or None
See also
--------
pyvo.io.vosi.exceptions : The exceptions this function may raise.
"""
config = {
'pedantic': pedantic,
'filename': filename
}
if filename is None and isinstance(source, str):
config['filename'] = source
with iterparser.get_xml_iterator(
source,
_debug_python_based_parser=_debug_python_based_parser
) as iterator:
return object_type(
config=config, pos=(1, 1)).parse(iterator, config)
class xmlattribute(property):
def __init__(self, fget=None, fset=None, fdel=None, doc=None, name=None):
super().__init__(fget, fset, fdel, doc)
if name:
self.name = name
elif fget is not None:
self.name = fget.__name__
else:
raise ValueError(
"xmlattribute either needs a getter or a element name or both")
def __call__(self, fget):
return self.__class__(fget, name=self.name)
def getter(self, fget):
return self.__class__(
fget, self.fset, self.fdel, self.__doc__, self.name)
def setter(self, fset):
return self.__class__(
self.fget, fset, self.fdel, self.__doc__, self.name)
def deleter(self, fdel):
return self.__class__(
self.fget, self.fset, fdel, self.__doc__, self.name)
class xmlelement(property):
"""
"""
def __init__(
self, fget=None, fset=None, fdel=None, fadd=None, fformat=None,
doc=None, name=None, ns=None, plain=False, cls=None, multiple_exc=None
):
super().__init__(fget, fset, fdel, doc)
if name:
self.name = name
elif fget is not None:
self.name = fget.__name__
else:
self.name = None
self.ns = ns
self.plain = plain
self.cls = cls
self.multiple_exc = multiple_exc
self.fadd = fadd
self.fformat = fformat
def __call__(self, fget):
return self.__class__(
fget, name=self.name or fget.__name__, ns=self.ns,
plain=self.plain, cls=self.cls, multiple_exc=self.multiple_exc
)
def __get__(self, obj, owner=None):
if obj is not None:
val = super().__get__(obj, owner)
if self.plain:
return val
elif not isinstance(val, (Element, list)):
element = ContentMixin(_name=self.name, _ns=self.ns)
element.content = val
return element
else:
return val
else:
return super().__get__(obj, owner)
def getter(self, fget):
return self.__class__(
fget, self.fset, self.fdel, self.fadd, self.fformat, self.__doc__,
self.name, self.ns, self.plain, self.cls, self.multiple_exc)
def setter(self, fset):
return self.__class__(
self.fget, fset, self.fdel, self.fadd, self.fformat, self.__doc__,
self.name, self.ns, self.plain, self.cls, self.multiple_exc)
def deleter(self, fdel):
return type(self)(
self.fget, self.fset, fdel, self.fadd, self.fformat, self.__doc__,
self.name, self.ns, self.plain, self.cls, self.multiple_exc)
def adder(self, fadd):
if self.cls:
raise RuntimeError(
'xmlelement cls parameter has no effect when adder is'
' defined')
if self.multiple_exc:
raise RuntimeError(
'xmlelement multiple_exc parameter has no effect when'
' adder is defined')
return self.__class__(
self.fget, self.fset, self.fdel, fadd, self.fformat, self.__doc__,
self.name, self.ns, self.plain, self.cls, self.multiple_exc)
def formatter(self, fformat):
return self.__class__(
self.fget, self.fset, self.fdel, self.fadd, fformat, self.__doc__,
self.name, self.ns, self.plain, self.cls, self.multiple_exc)
def object_attrs(obj):
objtype = type(obj)
attrs = {
getattr(objtype, name).name: value for name, value in getmembers(obj)
if isinstance(getattr(objtype, name, None), xmlattribute)}
return attrs
def object_children(obj):
objtype = type(obj)
try:
for child in obj:
if isinstance(child, Element):
yield (child._Element__name, None, child)
except TypeError:
for name, child in getmembers(obj):
if child is None:
continue
descr = getattr(objtype, name, None)
if isinstance(descr, xmlelement):
element_name = descr.name
if descr.fformat:
fformat = partial(descr.fformat, obj)
else:
fformat = None
yield (element_name, fformat, child)
elif isinstance(child, Element):
yield (child._Element__name, None, child)
def object_mapping(obj):
objtype = type(obj)
for name, val in getmembers(obj):
descr = getattr(objtype, name, None)
if isinstance(descr, xmlelement):
if descr.fadd is None:
if descr.cls is None:
fadd = make_add_simplecontent(
obj, descr.name, name, descr.multiple_exc)
else:
fadd = make_add_complexcontent(
obj, descr.name, name, descr.cls, descr.multiple_exc)
else:
fadd = partial(descr.fadd, obj)
yield descr.name, fadd
def make_add_complexcontent(
self, element_name, attr_name, cls_, exc_class=None):
"""
Factory for generating add functions for elements with complex content.
"""
def add_complexcontent(iterator, tag, data, config, pos):
attr = getattr(self, attr_name)
element = cls_(
config=config, pos=pos, _name=element_name, **data)
if attr and exc_class is not None:
warn_or_raise(
exc_class, args=element_name,
config=config, pos=pos)
if isinstance(getattr(self, attr_name, None), list):
getattr(self, attr_name).append(element)
else:
setattr(self, attr_name, element)
element.parse(iterator, config)
return add_complexcontent
def make_add_simplecontent(
self, element_name, attr_name, exc_class=None, check_func=None,
data_func=None):
"""
Factory for generating add functions for elements with simple content.
This means elements with no child elements.
If exc_class is given, warn or raise if element was already set.
"""
def add_simplecontent(iterator, tag_ignored, data_ignored, config, pos_ignored):
# Ignored parameters are kept in the API signature to be compatible
# with other functions.
for start, tag, data, pos in iterator:
if not start and tag == element_name:
attr = getattr(self, attr_name)
if attr and exc_class:
warn_or_raise(
exc_class, args=self._Element__name,
config=config, pos=pos)
if check_func:
check_func(data, config, pos)
if data_func:
data = data_func(data)
if isinstance(getattr(self, attr_name), list):
getattr(self, attr_name).append(data)
else:
setattr(self, attr_name, data or None)
break
return add_simplecontent
class Element:
"""
A base class for all classes that represent XML elements.
Subclasses and Mixins must initialize their independent attributes after
calling ``super().__init__``.
"""
def __init__(self, config=None, pos=None, _name='', _ns='', **kwargs):
if config is None:
config = {}
self._config = config
self._pos = pos
self.__name = _name
self.__ns = _ns
self._tag_mapping = {}
def _add_unknown_tag(self, iterator, tag, data, config, pos):
if tag != 'xml':
warn_or_raise(
UnknownElementWarning, UnknownElementWarning, tag, config, pos)
def _end_tag(self, tag, data, pos):
pass
def _ignore_add(self, iterator, tag, data, config, pos):
pass
def parse(self, iterator, config):
"""
For internal use. Parse the XML content of the children of the element.
Override this method and do after-parse checks after calling
``super().parse``, if you need to.
Parameters
----------
iterator : xml iterator
An iterator over XML elements as returned by
`~astropy.utils.xml.iterparser.get_xml_iterator`.
config : dict
The configuration dictionary that affects how certain
elements are read.
"""
tag_mapping = dict(object_mapping(self))
for start, tag, data, pos in iterator:
if start:
tag_mapping.get(tag, self._add_unknown_tag)(
iterator, tag, data, config, pos)
else:
if tag == self._Element__name:
self._end_tag(tag, data, pos)
break
return self
def to_xml(self, w, **kwargs):
if self._Element__ns:
name = ':'.join((self._Element__ns, self._Element__name))
else:
name = self._Element__name
with w.tag(name, attrib=object_attrs(self)):
for name, formatter, child in object_children(self):
if isinstance(child, Element):
child.to_xml(w, formatter=formatter)
else:
if formatter:
child = formatter()
if not child:
child = ''
w.element(name, str(child))
class ElementWithXSIType(Element):
"""
An XML element that supports type dispatch through xsi:type.
When a class A is derived from this, it gains a decorator
register_xsi_type, which classes derived from A can use to say
"construct me rather than A when xsi:type has the value I'm putting
in.
At this point we are doing *no* namespace processing in our XML
parsing. Hence, we discard any prefixes both when registering
and when matching.
We probably should do namespaces one day; astropy.utils.xml will
presumably learn them when they add VO-DML support. Let's revisit
this when it's there safely. Meanwhere, use canonical Registry
prefixes (cf. RegTAP 1.1, sect. 5) everywhere in your code; these
will continue to be safe no matter what.
"""
_xsi_type_mapping = {}
@classmethod
def register_xsi_type(cls, typename):
"""Decorator factory for registering subtypes."""
def register(class_):
"""Decorator for registering subtypes"""
cls._xsi_type_mapping[typename.split(":")[-1]] = class_
return class_
return register
def __new__(cls, *args, **kwargs):
xsi_type = None
# Another namespace trouble: people can bind the xsi URI
# to anything, *and* it's not unlikely they have another
# type attribute, too. Wiggle out of it by preferring a
# literal xsi:type and otherwise hope for the best. This
# really needs to be fixed when we switch to namespace-aware
# parsing.
for name, val in kwargs.items():
if name == "xsi:type":
xsi_type = val
break
elif name.split(":")[-1] == "type":
xsi_type = val
if xsi_type is None:
dtype = cls
else:
try:
dtype = cls._xsi_type_mapping[xsi_type.split(":")[-1]]
except KeyError:
warnings.warn(f"Unknown xsi:type {xsi_type} ignored")
dtype = cls
obj = Element.__new__(dtype)
obj.__init__(*args, **kwargs)
return obj
class ContentMixin(Element):
"""
Mixin class for elements with inner content.
"""
def __init__(self, config=None, pos=None, _name=None, _ns=None, **kwargs):
super().__init__(config, pos, _name, _ns, **kwargs)
self._content = None
def __bool__(self):
return bool(self.content)
__nonzero__ = __bool__
def _end_tag(self, tag, data, pos):
self.content = data
def _content_check(self, content):
pass
def _content_parse(self, content):
return content
@property
def content(self):
"""The inner content of the element."""
return self._content
@content.setter
def content(self, content):
self._content_check(content)
self._content = self._content_parse(content)
def to_xml(self, w, **kwargs):
if self._Element__ns:
name = ':'.join((self._Element__ns, self._Element__name))
else:
name = self._Element__name
try:
content = kwargs['formatter']()
except (KeyError, TypeError):
content = self.content
if content is not None:
w.element(name, str(content), attrib=object_attrs(self))
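`xmlattribute` and `xmlelement` above subclass `property` so each descriptor also carries the XML name it maps to. A minimal stdlib sketch of that pattern (`named_property` and `Thing` are illustrative names, not pyvo API):

```python
class named_property(property):
    """A property that also remembers an external (e.g. XML) name."""
    def __init__(self, fget=None, fset=None, fdel=None, doc=None, name=None):
        super().__init__(fget, fset, fdel, doc)
        # Default the external name to the getter's name, as xmlattribute does.
        self.name = name or (fget.__name__ if fget is not None else None)

class Thing:
    def __init__(self):
        self._x = 1
    @named_property
    def x(self):
        return self._x

# The descriptor lives on the class, so its metadata is reachable there:
descr = Thing.__dict__["x"]
```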
|
{
"filename": "telluric.py",
"repo_name": "finagle29/dbsp_drp",
"repo_path": "dbsp_drp_extracted/dbsp_drp-main/dbsp_drp/telluric.py",
"type": "Python"
}
|
"""
Telluric correction for P200 DBSP.
"""
import os
from typing import List
from pypeit.par import pypeitpar
from pypeit.core import telluric
from pypeit.spectrographs.util import load_spectrograph
def telluric_correct(coadd: str, output_path: str, spectrograph: str,
user_config_lines: List[str], debug: bool = False, plot: bool = False):
"""
Telluric correct one coadd file.
Args:
coadd (str): Coadd filename.
output_path (str): reduction output path
spectrograph (str): PypeIt name of spectrograph.
user_config_lines (List[str]): User-provided PypeIt configuration
debug (bool, optional): Show debugging output? Defaults to False.
plot (bool, optional): Show debugging plots? Defaults to False.
"""
coadd_path = os.path.join(output_path, 'Science', coadd)
spectrograph = load_spectrograph(spectrograph)
par = spectrograph.default_pypeit_par()
par['telluric']['objmodel'] = 'poly'
## TODO: Change fit_wv_min_max based on where the red detector defect is.
par['telluric']['fit_wv_min_max'] = [5500, 11000]
# maybe somehow choose between poly and exp??????? look at median
par['telluric']['model'] = 'exp'
par['telluric']['polyorder'] = 8
if par['telluric']['telgridfile'] is None:
if par['sensfunc']['IR']['telgridfile'] is not None:
par['telluric']['telgridfile'] = par['sensfunc']['IR']['telgridfile']
par = pypeitpar.PypeItPar.from_cfg_lines(cfg_lines=par.to_config(),
merge_with=user_config_lines)
# Parse the output filename
outfile = os.path.join(output_path, 'Science', os.path.splitext(coadd)[0] + '_tellcorr.fits')
modelfile = os.path.join(output_path, 'Science', os.path.splitext(coadd)[0] + '_tellmodel.fits')
try:
TelPoly = telluric.poly_telluric(coadd_path, par['telluric']['telgridfile'],
modelfile, outfile,
z_obj=par['telluric']['redshift'],
func=par['telluric']['func'],
model=par['telluric']['model'],
polyorder=par['telluric']['polyorder'],
fit_wv_min_max=par['telluric']['fit_wv_min_max'],
mask_lyman_a=par['telluric']['mask_lyman_a'],
delta_coeff_bounds=par['telluric']['delta_coeff_bounds'],
minmax_coeff_bounds=par['telluric']['minmax_coeff_bounds'],
only_orders=par['telluric']['only_orders'],
debug_init=debug, disp=debug, debug=debug, show=plot)
except ValueError:
print(f"ERROR!! Telluric correction of {coadd} FAILED!")
def picklable_telluric_correct(args):
telluric_correct(*args)
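The output names above are derived from the coadd filename by splitting off the extension and appending a suffix. A small sketch of just that naming step (the example filename and path are made up):

```python
import os

def derive_outputs(coadd, output_path):
    # Mirror the _tellcorr/_tellmodel naming used in telluric_correct.
    stem = os.path.splitext(coadd)[0]
    outfile = os.path.join(output_path, 'Science', stem + '_tellcorr.fits')
    modelfile = os.path.join(output_path, 'Science', stem + '_tellmodel.fits')
    return outfile, modelfile

out, model = derive_outputs('target_coadd.fits', '/tmp/redux')
```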
|
{
"filename": "debuglog.py",
"repo_name": "j0r1/GRALE2",
"repo_path": "GRALE2_extracted/GRALE2-master/pygrale/grale/debuglog.py",
"type": "Python"
}
|
#!/usr/bin/env python
import os
import sys
import time
# Set up a function called 'debugLog', which is enabled or disabled depending on
# the existence of the environment variable DEBUGLOG_VERBOSE
if os.environ.get("DEBUGLOG_VERBOSE") is not None:
def debugLog(s):
lines = s.splitlines()
t = time.time()
intT = int(t)
fracT = t - intT
pref = "%s.%03d" % (time.strftime("%H:%M:%S", time.gmtime(t)), int(fracT*1000.0))
pref += " "
spcs = " " * len(pref)
sys.stderr.write(pref + lines[0] + "\n")
for i in range(1,len(lines)):
sys.stderr.write(spcs + lines[i] + "\n")
sys.stderr.flush()
else:
def debugLog(s):
pass
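The prefix logic in `debugLog` can be factored into a pure function: an `HH:MM:SS.mmm` timestamp on the first line, with continuation lines indented to match. A sketch (`format_log` is an illustrative name, not part of the module):

```python
import time

def format_log(s, t=None):
    """Return the multi-line string debugLog would write: a timestamp
    prefix on the first line, matching indentation on the rest."""
    t = time.time() if t is None else t
    int_t = int(t)
    frac = t - int_t
    pref = "%s.%03d " % (time.strftime("%H:%M:%S", time.gmtime(t)),
                         int(frac * 1000.0))
    lines = s.splitlines()
    out = [pref + lines[0]]
    out += [" " * len(pref) + line for line in lines[1:]]
    return "\n".join(out)

# Half a second past the epoch, so the prefix is deterministic:
msg = format_log("first\nsecond", t=0.5)
```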
|
{
"filename": "evolve.py",
"repo_name": "grackle-project/grackle",
"repo_path": "grackle_extracted/grackle-main/src/python/pygrackle/utilities/evolve.py",
"type": "Python"
}
|
########################################################################
#
# Functions for evolving a fluid container using Grackle
#
#
# Copyright (c) 2016, Grackle Development Team.
#
# Distributed under the terms of the Enzo Public Licence.
#
# The full license is in the file LICENSE, distributed with this
# software.
########################################################################
from collections import defaultdict
import numpy as np
from .physical_constants import \
gravitational_constant_cgs, \
sec_per_year
def evolve_freefall(fc, final_density, safety_factor=0.01,
include_pressure=True):
my_chemistry = fc.chemistry_data
# Set units of gravitational constant
gravitational_constant = (
4.0 * np.pi * gravitational_constant_cgs *
my_chemistry.density_units * my_chemistry.time_units**2)
# some constants for the analytical free-fall solution
freefall_time_constant = np.power(((32. * gravitational_constant) /
(3. * np.pi)), 0.5)
data = defaultdict(list)
current_time = 0.0
while fc["density"][0] * my_chemistry.density_units < final_density:
# calculate timestep based on free-fall solution
dt = safety_factor * \
np.power(((3. * np.pi) /
(32. * gravitational_constant *
fc["density"][0])), 0.5)
add_to_data(fc, data, extra={"time": current_time})
# compute the new density using the modified
# free-fall collapse as per Omukai et al. (2005)
if include_pressure:
force_factor = calculate_collapse_factor(data["pressure"], data["density"])
else:
force_factor = 0.
data["force_factor"].append(force_factor)
# calculate new density from altered free-fall solution
new_density = np.power((np.power(fc["density"][0], -0.5) -
(0.5 * freefall_time_constant * dt *
np.power((1 - force_factor), 0.5))), -2.)
print("Evolve Freefall - t: %e yr, rho: %e g/cm^3, T: %e K." %
((current_time * my_chemistry.time_units / sec_per_year),
(fc["density"][0] * my_chemistry.density_units),
fc["temperature"][0]))
# use this to multiply by elemental densities if you are tracking those
density_ratio = new_density / fc["density"][0]
# update densities
for field in fc.density_fields:
fc[field] *= density_ratio
# now update energy for adiabatic heating from collapse
fc["internal_energy"][0] += (my_chemistry.Gamma - 1.) * \
fc["internal_energy"][0] * freefall_time_constant * \
np.power(fc["density"][0], 0.5) * dt
fc.solve_chemistry(dt)
# update time
current_time += dt
for field in data:
data[field] = np.squeeze(np.array(data[field]))
return fc.finalize_data(data=data)
def calculate_collapse_factor(pressure, density):
# Calculate the effective adiabatic index, dlog(p)/dlog(rho).
if len(pressure) < 3:
return np.array([0.])
# compute dlog(p) / dlog(rho) using last two timesteps
gamma_eff = np.log10(pressure[-1] / pressure[-2]) / \
np.log10(density[-1] / density[-2])
# compute a higher order derivative if more than two points available
if len(pressure) > 2:
gamma_eff += 0.5 * ((np.log10(pressure[-2] / pressure[-3]) /
np.log10(density[-2] / density[-3])) - gamma_eff)
gamma_eff = np.clip(gamma_eff, a_min=0, a_max=4/3)
# Equation 9 of Omukai et al. (2005)
if gamma_eff < 0.83:
force_factor = gamma_eff * 0  # zero, preserving the numpy type of gamma_eff
elif gamma_eff < 1.0:
force_factor = 0.6 + 2.5 * (gamma_eff - 1) - \
6.0 * np.power((gamma_eff - 1.0), 2.)
else:
force_factor = 1.0 + 0.2 * (gamma_eff - (4./3.)) - \
2.9 * np.power((gamma_eff - (4./3.)), 2.)
force_factor = np.clip(force_factor, a_min=0, a_max=0.95)
return force_factor
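The piecewise fit above (Equation 9 of Omukai et al. 2005, together with the two clips applied in the code) is easy to spot-check with a scalar, stdlib-only version:

```python
def force_factor_scalar(gamma_eff):
    # Scalar version of the piecewise fit in calculate_collapse_factor.
    gamma_eff = min(max(gamma_eff, 0.0), 4.0 / 3.0)  # same clip as above
    if gamma_eff < 0.83:
        f = 0.0
    elif gamma_eff < 1.0:
        f = 0.6 + 2.5 * (gamma_eff - 1.0) - 6.0 * (gamma_eff - 1.0) ** 2
    else:
        f = 1.0 + 0.2 * (gamma_eff - 4.0 / 3.0) - 2.9 * (gamma_eff - 4.0 / 3.0) ** 2
    return min(max(f, 0.0), 0.95)  # pressure never fully halts the collapse
```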
def evolve_constant_density(fc, final_temperature=None,
final_time=None, safety_factor=0.01):
my_chemistry = fc.chemistry_data
if final_temperature is None and final_time is None:
raise RuntimeError("Must specify either final_temperature " +
"or final_time.")
data = defaultdict(list)
current_time = 0.0
fc.calculate_cooling_time()
dt = safety_factor * np.abs(fc["cooling_time"][0])
fc.calculate_temperature()
while True:
if final_temperature is not None and fc["temperature"][0] <= final_temperature:
break
if final_time is not None and current_time >= final_time:
break
fc.calculate_temperature()
print("Evolve constant density - t: %e yr, rho: %e g/cm^3, T: %e K." %
(current_time * my_chemistry.time_units / sec_per_year,
fc["density"][0] * my_chemistry.density_units,
fc["temperature"][0]))
fc.solve_chemistry(dt)
add_to_data(fc, data, extra={"time": current_time})
current_time += dt
for field in data:
data[field] = np.squeeze(np.array(data[field]))
return fc.finalize_data(data=data)
def add_to_data(fc, data, extra=None):
"""
Add current fluid container values to the data structure.
"""
for field in fc.all_fields:
if field not in fc.input_fields:
func = getattr(fc, f"calculate_{field}")
if func is None:
raise RuntimeError(f"No function for calculating {field}.")
func()
data[field].append(fc[field].copy())
if extra is not None:
for field in extra:
data[field].append(extra[field])
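The timestep in `evolve_freefall` is `safety_factor` times the local free-fall time, t_ff = sqrt(3π / (32 G ρ)). A stdlib sketch of that timescale in CGS units (the value of G is the standard CGS constant; the densities are illustrative):

```python
import math

G_CGS = 6.674e-8  # gravitational constant, cm^3 g^-1 s^-2

def freefall_time(rho):
    """Free-fall time t_ff = sqrt(3*pi / (32*G*rho)), rho in g/cm^3."""
    return math.sqrt(3.0 * math.pi / (32.0 * G_CGS * rho))

# Lower density collapses more slowly: t_ff scales as rho**-0.5.
t_lo = freefall_time(1e-22)
t_hi = freefall_time(1e-18)
```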
|
{
"filename": "make_picca_deltas_from_transmission.py",
"repo_name": "igmhub/LyaCoLoRe",
"repo_path": "LyaCoLoRe_extracted/LyaCoLoRe-master/scripts/make_picca_deltas_from_transmission.py",
"type": "Python"
}
|
import os
import scipy as sp
import sys
import fitsio
import glob
import healpy
import scipy.interpolate as interpolate
import iminuit
from multiprocessing import Pool
import multiprocessing
import time
import argparse
from picca.data import delta
from lyacolore import utils
lya = utils.lya_rest
################################################################################
parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('--in-dir', type = str, default = None, required=True,
help = 'directory of LyaCoLoRe output files')
parser.add_argument('--out-dir', type = str, default = None, required=True,
help = 'directory of output')
parser.add_argument('--in-files', type = str, default = None, required=False,
help = 'input files', nargs='*')
parser.add_argument('--nproc', type = int, default = 1, required=False,
help = 'number of processes to use')
parser.add_argument('--nside', type = int, default = 16, required=False,
help = 'HEALPix nside for output files (must be 2^n)')
parser.add_argument('--make-qso-zcat', action="store_true", default = False, required=False,
help = 'whether to make QSO catalog or not')
parser.add_argument('--make-dla-zcat', action="store_true", default = False, required=False,
help = 'whether to make DLA catalog or not')
parser.add_argument('--downsampling', type = float, default = 1.0, required=False,
help = 'proportion by which to downsample')
parser.add_argument('--downsampling-seed', type = int, default = 0, required=False,
help = 'seed for the downsampling')
parser.add_argument('--make-randoms-zcats', action="store_true", default = False, required=False,
help = 'whether to make randoms catalogs or not')
parser.add_argument('--randoms-downsampling', type = float, default = None, required=False,
help = 'proportion by which to downsample the randoms')
parser.add_argument('--randoms-downsampling-seed', type = int, default = None, required=False,
help = 'seed for the downsampling of randoms')
parser.add_argument('--make-deltas', action="store_true", default = False, required=False,
help = 'whether to make new deltas or not')
parser.add_argument('--randoms-dir', type = str, default = None, required=False,
help = 'directory of randoms')
parser.add_argument('--min-cat-z', type = float, default = 1.7, required=False,
help = 'minimum z of objects in catalog')
parser.add_argument('--add-Lyb', action="store_true", default = False, required=False,
help = 'whether to add Lyb absorption or not')
parser.add_argument('--add-metals', action="store_true", default = False, required=False,
help = 'whether to add metal absorption or not')
parser.add_argument('--transmission-lambda-min', type = float, default = 3600., required=False,
help = 'minimum wavelength stored in the transmission files')
parser.add_argument('--transmission-lambda-max', type = float, default = 5500., required=False,
help = 'maximum wavelength stored in the transmission files')
parser.add_argument('--transmission-lambda-rest-min', type = float, default = 1040., required=False,
help = 'minimum wavelength in the rest frame stored in the transmission files')
parser.add_argument('--transmission-lambda-rest-max', type = float, default = 1200., required=False,
help = 'maximum wavelength in the rest frame stored in the transmission files')
parser.add_argument('--transmission-delta-lambda', type = float, default = 0.0003, required=False,
help = 'pixel size of transmission files wavelength grid')
parser.add_argument('--single-DLA-per-skw', action="store_true", default = False, required=False,
help = 'whether to allow at most 1 DLA per skewer or not')
parser.add_argument('--DLA-lambda-rest-min', type = float, default = None, required=False,
help = 'minimum wavelength in the rest frame stored for DLAs')
parser.add_argument('--DLA-lambda-rest-max', type = float, default = None, required=False,
help = 'maximum wavelength in the rest frame stored for DLAs')
args = parser.parse_args()
# Wavelength grid for output.
lObs_min = args.transmission_lambda_min
lObs_max = args.transmission_lambda_max
lRF_min = args.transmission_lambda_rest_min
lRF_max = args.transmission_lambda_rest_max
dll = args.transmission_delta_lambda
if not os.path.isdir(args.out_dir):
os.mkdir(args.out_dir)
if args.randoms_dir is None:
args.randoms_dir = args.in_dir
if args.make_randoms_zcats:
if args.randoms_downsampling is None:
args.randoms_downsampling = args.downsampling
if args.randoms_downsampling_seed is None:
args.randoms_downsampling_seed = args.downsampling_seed
################################################################################
"""
Define the multiprocessing tracking functions
"""
#Define a progress-tracking function.
def log_result(retval):
results.append(retval)
N_complete = len(results)
N_tasks = len(tasks)
utils.progress_bar(N_complete,N_tasks,start_time)
#Define an error-tracking function.
def log_error(retval):
print('Error:',retval)
################################################################################
#Make the zcats.
def create_qso_cat(args):
### Make random generator
state = sp.random.RandomState(args.downsampling_seed)
### Data
h = fitsio.FITS(args.in_dir+'/master.fits')
m_data = sp.sort(h[1].read(),order=['MOCKID','Z_QSO_RSD'])
data = {}
for k in ['RA','DEC']:
data[k] = m_data[k][:]
for k in ['THING_ID','PLATE','MJD','FIBERID']:
data[k] = m_data['MOCKID'][:]
data['Z'] = m_data['Z_QSO_RSD'][:]
print(data['Z'].min())
w = data['Z']>args.min_cat_z
for k in data.keys():
data[k] = data[k][w]
h.close()
phi = data['RA']*sp.pi/180.
th = sp.pi/2.-data['DEC']*sp.pi/180.
pix = healpy.ang2pix(args.nside,th,phi)
data['PIX'] = pix
print('INFO: {} QSO in mocks data'.format(data['RA'].size))
### Get reduced data numbers
original_nbData = data['RA'].shape[0]
nbData = round(original_nbData * args.downsampling)
### Save data
assert nbData<=data['RA'].size
w = state.choice(sp.arange(data['RA'].size), size=nbData, replace=False)
w_thid = data['THING_ID'][w]
print(w_thid.shape)
print('INFO: downsampling to {} QSOs in catalog'.format(nbData))
out = fitsio.FITS(args.out_dir+'/zcat_{}.fits'.format(args.downsampling),'rw',clobber=True)
cols = [ v[w] for k,v in data.items() if k not in ['PIX'] ]
names = [ k for k in data.keys() if k not in ['PIX'] ]
out.write(cols,names=names)
out.close()
return
def create_dla_cat(args):
### Make random generator (same seed as the QSO downsampling)
state = sp.random.RandomState(args.downsampling_seed)
### THING_IDs of the downsampled QSO catalog; requires create_qso_cat to have run first
h = fitsio.FITS(args.out_dir+'/zcat_{}.fits'.format(args.downsampling))
w_thid = h[1]['THING_ID'][:]
h.close()
### DLA data
h = fitsio.FITS(args.in_dir+'/master_DLA.fits')
md_data = sp.sort(h[1].read(),order=['MOCKID','Z_QSO_RSD'])
data = {}
for k in ['RA','DEC']:
data[k] = md_data[k][:]
for k in ['THING_ID','PLATE','MJD','FIBERID']:
data[k] = md_data['MOCKID'][:]
data['Z'] = md_data['Z_DLA_RSD'][:]
# Ensure that DLAs are in the rest frame wavelength range if required
data['Z_QSO'] = md_data['Z_QSO_RSD'][:]
w = sp.ones(data['Z_QSO'].shape).astype('bool')
lr_DLA = lya*(1+data['Z'])/(1+data['Z_QSO'])
if args.DLA_lambda_rest_min is not None:
w *= (lr_DLA > args.DLA_lambda_rest_min)
if args.DLA_lambda_rest_max is not None:
w *= (lr_DLA < args.DLA_lambda_rest_max)
w *= data['Z']>args.min_cat_z
for k in data.keys():
data[k] = data[k][w]
h.close()
phi = data['RA']*sp.pi/180.
th = sp.pi/2.-data['DEC']*sp.pi/180.
pix = healpy.ang2pix(args.nside,th,phi)
data['PIX'] = pix
print('INFO: {} DLA in mocks data'.format(data['RA'].size))
### Save DLA data
# Start from DLAs whose skewers are in the downsampled QSO catalog.
w_DLA = sp.isin(data['THING_ID'],w_thid)
if args.single_DLA_per_skw:
    # Keep at most one DLA per skewer, chosen uniformly at random via
    # reservoir sampling over each run of equal THING_IDs.
    reduced_THING_ID = data['THING_ID'][w_DLA]
    n_id = 1
    current_m = reduced_THING_ID[0]
    ind = 0
    inds = []
    for i,m in enumerate(reduced_THING_ID[1:]):
        i += 1
        if m == current_m:
            n_id += 1
            p = state.uniform()
            # Replace the kept DLA with probability 1/n_id.
            if p < 1./n_id:
                ind = i
        else:
            current_m = m
            inds += [ind]
            ind = i
            n_id = 1
    # Include the final group.
    inds += [ind]
    # Map indices in the reduced array back to the full array.
    keep = sp.where(w_DLA)[0][inds]
    w_DLA = sp.isin(sp.arange(data['THING_ID'].size),keep)
N_DLA = sp.sum(w_DLA)
print('INFO: downsampling leaves {} DLAs in catalog'.format(N_DLA))
suffix = ''
if args.single_DLA_per_skw:
suffix += '_single'
if args.DLA_lambda_rest_min is not None:
suffix += '_lrmin{}'.format(args.DLA_lambda_rest_min)
if args.DLA_lambda_rest_max is not None:
suffix += '_lrmax{}'.format(args.DLA_lambda_rest_max)
out = fitsio.FITS(args.out_dir+'/zcat_DLA_{}{}.fits'.format(args.downsampling,suffix),'rw',clobber=True)
cols = [ v[w_DLA] for k,v in data.items() if k not in ['PIX','Z_QSO'] ]
names = [ k for k in data.keys() if k not in ['PIX','Z_QSO'] ]
out.write(cols,names=names)
out.close()
if args.make_randoms_zcats:
r_state = sp.random.RandomState(args.randoms_downsampling_seed)
### Data
h = fitsio.FITS(args.randoms_dir+'/master_randoms.fits')
data = {}
mr_data = sp.sort(h[1].read(),order=['MOCKID','Z'])
for k in ['RA','DEC']:
data[k] = mr_data[k][:]
for k in ['THING_ID','PLATE','MJD','FIBERID']:
data[k] = mr_data['MOCKID'][:]
data['Z'] = mr_data['Z'][:]
w = data['Z']>args.min_cat_z
for k in data.keys():
data[k] = data[k][w]
h.close()
phi = data['RA']*sp.pi/180.
th = sp.pi/2.-data['DEC']*sp.pi/180.
pix = healpy.ang2pix(args.nside,th,phi)
data['PIX'] = pix
print('INFO: {} QSO in randoms'.format(data['RA'].size))
### Get reduced data numbers
original_nbData = data['RA'].shape[0]
nbData = round(original_nbData * args.randoms_downsampling)
### Save data
assert nbData<=data['RA'].size
w = r_state.choice(sp.arange(data['RA'].size), size=nbData, replace=False)
print('INFO: downsampling to {} QSOs in randoms catalog'.format(nbData))
out = fitsio.FITS(args.out_dir+'/zcat_{}_randoms.fits'.format(args.randoms_downsampling),'rw',clobber=True)
cols = [ v[w] for k,v in data.items() if k not in ['PIX'] ]
names = [ k for k in data.keys() if k not in ['PIX'] ]
out.write(cols,names=names)
out.close()
### DLA randoms
h = fitsio.FITS(args.randoms_dir+'/master_DLA_randoms.fits')
mdr_data = sp.sort(h[1].read(),order=['MOCKID','Z_QSO_RSD'])
N_DLA_rand = mdr_data.shape[0]
data = {}
for k in ['RA','DEC']:
data[k] = mdr_data[k][:]
for k in ['THING_ID','PLATE','MJD','FIBERID']:
data[k] = mdr_data['MOCKID'][:]
data['Z'] = mdr_data['Z_DLA'][:]
data['Z_QSO'] = mdr_data['Z_QSO_RSD'][:]
# Ensure that DLAs are in the rest frame wavelength range if required
w = sp.ones(data['Z_QSO'].shape).astype('bool')
lr_DLA = lya*(1+data['Z'])/(1+data['Z_QSO'])
if args.DLA_lambda_rest_min is not None:
w *= (lr_DLA > args.DLA_lambda_rest_min)
if args.DLA_lambda_rest_max is not None:
w *= (lr_DLA < args.DLA_lambda_rest_max)
w *= data['Z']>args.min_cat_z
for k in data.keys():
data[k] = data[k][w]
h.close()
phi = data['RA']*sp.pi/180.
th = sp.pi/2.-data['DEC']*sp.pi/180.
pix = healpy.ang2pix(args.nside,th,phi)
data['PIX'] = pix
print('INFO: {} DLA in randoms'.format(data['RA'].size))
### Save DLA data
# Start from DLAs whose skewers are in the downsampled QSO catalog.
w_DLA = sp.isin(data['THING_ID'],w_thid)
if args.single_DLA_per_skw:
    # Keep at most one DLA per skewer (reservoir sampling over each run
    # of equal THING_IDs).
    reduced_THING_ID = data['THING_ID'][w_DLA]
    n_id = 1
    current_m = reduced_THING_ID[0]
    ind = 0
    inds = []
    for i,m in enumerate(reduced_THING_ID[1:]):
        i += 1
        if m == current_m:
            n_id += 1
            p = r_state.uniform()
            # Replace the kept DLA with probability 1/n_id.
            if p < 1./n_id:
                ind = i
        else:
            current_m = m
            inds += [ind]
            ind = i
            n_id = 1
    # Include the final group.
    inds += [ind]
    # Map indices in the reduced array back to the full array.
    keep = sp.where(w_DLA)[0][inds]
    w_DLA = sp.isin(sp.arange(data['THING_ID'].size),keep)
#Then downsample using a modified ratio to take into account the removal of QSOs.
mod_r_ds = args.randoms_downsampling/args.downsampling
w_DLA *= r_state.choice([0,1],size=data['THING_ID'].shape[0],replace=True,p=[1-mod_r_ds,mod_r_ds]).astype('bool')
print('INFO: downsampling leaves {} DLAs in randoms catalog'.format(sp.sum(w_DLA)))
suffix = ''
if args.single_DLA_per_skw:
suffix += '_single'
if args.DLA_lambda_rest_min is not None:
suffix += '_lrmin{}'.format(args.DLA_lambda_rest_min)
if args.DLA_lambda_rest_max is not None:
suffix += '_lrmax{}'.format(args.DLA_lambda_rest_max)
out = fitsio.FITS(args.out_dir+'/zcat_DLA_{}_randoms{}.fits'.format(args.randoms_downsampling,suffix),'rw',clobber=True)
cols = [ v[w_DLA] for k,v in data.items() if k not in ['PIX','Z_QSO'] ]
names = [ k for k in data.keys() if k not in ['PIX','Z_QSO'] ]
out.write(cols,names=names)
out.close()
return
if args.make_qso_zcat:
create_qso_cat(args)
if args.make_dla_zcat:
create_dla_cat(args)
################################################################################
#Make the delta files.
### Based on the function desi_convert_transmission_to_delta_files in picca.utils
"""Convert desi transmission files to picca delta files
Args:
zcat (str): path to the catalog of object to extract the transmission from
indir (str): path to transmission files directory
outdir (str): path to write delta files directory
lObs_min (float) = 3600.: min observed wavelength in Angstrom
lObs_max (float) = 5500.: max observed wavelength in Angstrom
lRF_min (float) = 1040.: min Rest Frame wavelength in Angstrom
lRF_max (float) = 1200.: max Rest Frame wavelength in Angstrom
dll (float) = 3.e-4: size of the bins in log lambda
nspec (int) = None: number of spectra, if 'None' use all
Returns:
None
"""
zcat = args.out_dir+'/zcat_{}.fits'.format(args.downsampling)
if args.add_Lyb and args.add_metals:
args.out_dir = args.out_dir+'/deltas_{}_Lyb_metals/'.format(args.downsampling)
elif args.add_Lyb:
args.out_dir = args.out_dir+'/deltas_{}_Lyb/'.format(args.downsampling)
elif args.add_metals:
args.out_dir = args.out_dir+'/deltas_{}_metals/'.format(args.downsampling)
else:
args.out_dir = args.out_dir+'/deltas_{}/'.format(args.downsampling)
if not os.path.isdir(args.out_dir):
os.mkdir(args.out_dir)
### Catalog of objects
h = fitsio.FITS(zcat)
key_val = sp.char.strip(sp.array([ h[1].read_header()[k] for k in h[1].read_header().keys()]).astype(str))
if 'TARGETID' in key_val:
zcat_thid = h[1]['TARGETID'][:]
elif 'THING_ID' in key_val:
zcat_thid = h[1]['THING_ID'][:]
w = h[1]['Z'][:]>max(0.,lObs_min/lRF_max -1.)
w &= h[1]['Z'][:]<max(0.,lObs_max/lRF_min -1.)
zcat_ra = h[1]['RA'][:][w].astype('float64')*sp.pi/180.
zcat_dec = h[1]['DEC'][:][w].astype('float64')*sp.pi/180.
zcat_thid = zcat_thid[w]
h.close()
print('INFO: Found {} quasars'.format(zcat_ra.size))
### List of transmission files
if (args.in_dir is None and args.in_files is None):
print("ERROR: No transmisson input files")
sys.exit()
elif args.in_files is None:
fi = glob.glob(args.in_dir+'/*/*/transmission*.fits*')
fi = sp.sort(sp.array(fi))
h = fitsio.FITS(fi[0])
in_nside = h['METADATA'].read_header()['HPXNSIDE']
nest = h['METADATA'].read_header()['HPXNEST']
h.close()
in_pixs = healpy.ang2pix(in_nside, sp.pi/2.-zcat_dec, zcat_ra, nest=nest)
fi = sp.sort(sp.array(['{}/{}/{}/transmission-{}-{}.fits.gz'.format(args.in_dir,int(f//100),f,in_nside,f) for f in sp.unique(in_pixs)]))
else:
fi = sp.sort(sp.array(args.in_files))
print('INFO: Found {} files'.format(fi.size))
### Stack the transmission
lmin = sp.log10(lObs_min)
lmax = sp.log10(lObs_max)
nstack = int((lmax-lmin)/dll)+1
### Read
def get_stack_data(f):
#Set up variables.
deltas = {}
T_stack = sp.zeros(nstack)
n_stack = sp.zeros(nstack)
h = fitsio.FITS(f)
thid = h['METADATA']['MOCKID'][:]
if sp.in1d(thid,zcat_thid).sum()==0:
    # No objects from the catalog in this file: return empty stacks.
    h.close()
    return (n_stack,T_stack,deltas)
ra = h['METADATA']['RA'][:].astype(sp.float64)*sp.pi/180.
dec = h['METADATA']['DEC'][:].astype(sp.float64)*sp.pi/180.
z = h['METADATA']['Z'][:]
ll = sp.log10(h['WAVELENGTH'].read())
trans_names = []
try:
trans = h['F_LYA'].read()
except KeyError:
try:
trans = h['F'].read()
except KeyError:
try:
trans = h['TRANSMISSION'].read()
except KeyError:
raise KeyError('Transmission not found; check file format.')
if args.add_Lyb:
try:
trans_Lyb = h['F_LYB'].read()
trans *= trans_Lyb
except KeyError:
raise KeyError('Lyb transmission not found; only \'final\' format supported currently.')
if args.add_metals:
try:
trans_metals = h['F_METALS'].read()
trans *= trans_metals
except KeyError:
raise KeyError('Metals transmission not found; only \'final\' format supported currently.')
nObj = z.size
pixnum = f.split('-')[-1].split('.')[0]
if trans.shape[0]!=nObj:
trans = trans.transpose()
bins = sp.floor((ll-lmin)/dll+0.5).astype(int)
tll = lmin + bins*dll
lObs = (10**tll)*sp.ones(nObj)[:,None]
lRF = (10**tll)/(1.+z[:,None])
w = sp.zeros_like(trans).astype(int)
w[ (lObs>=lObs_min) & (lObs<lObs_max) & (lRF>lRF_min) & (lRF<lRF_max) ] = 1
nbPixel = sp.sum(w,axis=1)
cut = nbPixel>=50
cut &= sp.in1d(thid,zcat_thid)
if cut.sum()==0:
    # Nothing survives the pixel and catalog cuts: return empty stacks.
    h.close()
    return (n_stack,T_stack,deltas)
ra = ra[cut]
dec = dec[cut]
z = z[cut]
thid = thid[cut]
trans = trans[cut,:]
w = w[cut,:]
nObj = z.size
h.close()
deltas[pixnum] = []
for i in range(nObj):
tll = ll[w[i,:]>0]
ttrans = trans[i,:][w[i,:]>0]
bins = sp.floor((tll-lmin)/dll+0.5).astype(int)
cll = lmin + sp.arange(nstack)*dll
cfl = sp.bincount(bins,weights=ttrans,minlength=nstack)
civ = sp.bincount(bins,minlength=nstack).astype(float)
ww = civ>0.
if ww.sum()<50: continue
T_stack += cfl
n_stack += civ
cll = cll[ww]
cfl = cfl[ww]/civ[ww]
civ = civ[ww]
deltas[pixnum].append(delta(thid[i],ra[i],dec[i],z[i],thid[i],thid[i],thid[i],cll,civ,None,cfl,1,None,None,None,None,None,None))
return (n_stack,T_stack,deltas)
if args.make_deltas:
tasks = [(f,) for f in fi]
#Run the multiprocessing pool
if __name__ == '__main__':
pool = Pool(processes = args.nproc)
results = []
start_time = time.time()
for task in tasks:
pool.apply_async(get_stack_data,task,callback=log_result,error_callback=log_error)
pool.close()
pool.join()
### Get stacked transmission
T_stack = sp.zeros(nstack)
n_stack = sp.zeros(nstack)
deltas = {}
for r in results:
n_stack += r[0]
T_stack += r[1]
deltas = {**deltas, **r[2]}
w = n_stack>0.
T_stack[w] /= n_stack[w]
def normalise_deltas(p):
if len(deltas[p])==0:
    print('No data in {}'.format(p))
    return
out = fitsio.FITS(args.out_dir+'/delta-{}'.format(p)+'.fits.gz','rw',clobber=True)
for d in deltas[p]:
bins = sp.floor((d.ll-lmin)/dll+0.5).astype(int)
d.de = d.de/T_stack[bins] - 1.
d.we *= T_stack[bins]**2
hd = {}
hd['RA'] = d.ra
hd['DEC'] = d.dec
hd['Z'] = d.zqso
hd['PMF'] = '{}-{}-{}'.format(d.plate,d.mjd,d.fid)
hd['THING_ID'] = d.thid
hd['PLATE'] = d.plate
hd['MJD'] = d.mjd
hd['FIBERID'] = d.fid
hd['ORDER'] = d.order
cols = [d.ll,d.de,d.we,sp.ones(d.ll.size)]
names = ['LOGLAM','DELTA','WEIGHT','CONT']
out.write(cols,names=names,header=hd,extname=str(d.thid))
out.close()
return
if args.make_deltas:
tasks = [(p,) for p in deltas.keys()]
#Run the multiprocessing pool
if __name__ == '__main__':
pool = Pool(processes = args.nproc)
results = []
start_time = time.time()
for task in tasks:
pool.apply_async(normalise_deltas,task,callback=log_result,error_callback=log_error)
pool.close()
pool.join()
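For reference, the rebinning step in `get_stack_data` maps each observed wavelength onto the nearest pixel of a uniform grid in log10(lambda). A minimal self-contained sketch of that index computation (the 3600 Å origin and 3e-4 pixel size mirror the argparse defaults above; the function name is illustrative):

```python
import math

def rebin_index(lam, lam_min=3600.0, dll=3.0e-4):
    # Nearest-pixel index on a uniform log10-wavelength grid, mirroring
    # bins = floor((log10(lam) - log10(lam_min))/dll + 0.5) in the script.
    return math.floor((math.log10(lam) - math.log10(lam_min)) / dll + 0.5)

# The grid origin lands in pixel 0; a wavelength exactly 100 pixels up
# the grid lands in pixel 100.
print(rebin_index(3600.0))                          # 0
print(rebin_index(3600.0 * 10 ** (100 * 3.0e-4)))   # 100
```

The `+ 0.5` before the floor rounds to the nearest pixel centre rather than truncating, which is what keeps the stacked-transmission bins aligned with the output wavelength grid.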
{
"filename": "conf.py",
"repo_name": "rpoleski/MulensModel",
"repo_path": "MulensModel_extracted/MulensModel-master/docs/source/conf.py",
"type": "Python"
}
# -*- coding: utf-8 -*-
#
# MulensModel documentation build configuration file, created by
# sphinx-quickstart on Wed Aug 23 15:22:44 2017.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('../../source/MulensModel/'))
from os.path import join, dirname, abspath
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['sphinx.ext.autodoc',
'sphinx.ext.intersphinx',
'sphinx.ext.coverage',
'sphinx.ext.ifconfig',
'sphinx.ext.mathjax']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'MulensModel'
copyright = u'2018, Radek Poleski, Jennifer Yee'
author = u'Radek Poleski, Jennifer Yee'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
#version = u'1.3'
# The full version, including alpha/beta/rc tags.
#release = u'1.3.0'
release = "X.X.X"
dir_ = join(dirname(dirname(dirname(abspath(__file__)))),
'source', 'MulensModel')
with open(join(dir_, 'version.py')) as in_put:
for line_ in in_put.readlines():
if line_.startswith('__version__'):
release = line_.split()[2][1:-1]
version = ".".join(release.split(".")[:-1])
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
#exclude_patterns = []
# RP 10.9.2021 - I'm changing it when we remove fit.py in v2.0
exclude_patterns = ["fit.py"]
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# Order members by type.
autodoc_member_order = 'bysource'
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
#html_static_path = ['_static']
# RP on 10.09.2021 I'm changing it because of
# WARNING: html_static_path entry '_static' does not exist
html_static_path = []
# -- Options for HTMLHelp output ------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'MulensModeldoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'MulensModel.tex', u'MulensModel Documentation',
u'Radek Poleski, Jennifer Yee', 'manual'),
]
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'mulensmodel', u'MulensModel Documentation',
[author], 1)
]
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'MulensModel', u'MulensModel Documentation',
author, 'MulensModel', 'One line description of project.',
'Miscellaneous'),
]
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'https://docs.python.org/': None}
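The release-detection loop above can be exercised in isolation; a small sketch of the same parsing (the sample version string below is hypothetical, not MulensModel's actual version):

```python
def parse_release(lines):
    # Mimic the conf.py loop: find the `__version__ = "X.Y.Z"` line,
    # take the third whitespace-separated token, and strip the quotes.
    for line in lines:
        if line.startswith('__version__'):
            return line.split()[2][1:-1]
    return None

release = parse_release(['# version file', '__version__ = "2.15.0"'])
version = ".".join(release.split(".")[:-1])  # short X.Y version
print(release, version)  # 2.15.0 2.15
```

Note this relies on the `__version__ = "..."` line having exactly three whitespace-separated tokens, which is what the conf.py loop assumes as well.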
{
"filename": "model_prompt.py",
"repo_name": "simonsobs/nextline-rdb",
"repo_path": "nextline-rdb_extracted/nextline-rdb-main/src/nextline_rdb/alembic/models/rev_f3edea6dbde2/model_prompt.py",
"type": "Python"
}
from datetime import datetime
from typing import TYPE_CHECKING
from sqlalchemy import ForeignKey, UniqueConstraint
from sqlalchemy.orm import Mapped, mapped_column, relationship
from .base import Model
if TYPE_CHECKING:
from .model_run import Run
from .model_trace import Trace
from .model_trace_call import TraceCall
class Prompt(Model):
__tablename__ = "prompt"
id: Mapped[int] = mapped_column(primary_key=True, index=True)
prompt_no: Mapped[int] # unique in each run
open: Mapped[bool]
event: Mapped[str]
started_at: Mapped[datetime]
file_name: Mapped[str | None]
line_no: Mapped[int | None]
stdout: Mapped[str | None]
command: Mapped[str | None]
ended_at: Mapped[datetime | None]
run_id: Mapped[int] = mapped_column(ForeignKey('run.id'))
run: Mapped['Run'] = relationship(back_populates='prompts')
trace_id: Mapped[int] = mapped_column(ForeignKey('trace.id'))
trace: Mapped['Trace'] = relationship(back_populates='prompts')
trace_call_id: Mapped[int | None] = mapped_column(ForeignKey('trace_call.id'))
trace_call: Mapped['TraceCall | None'] = relationship(back_populates='prompts')
__table_args__ = (UniqueConstraint("run_id", "prompt_no"),)
{
"filename": "postgres_extensions.md",
"repo_name": "EranOfek/AstroPack",
"repo_path": "AstroPack_extracted/AstroPack-main/matlab/util/+db/doc/postgres_extensions.md",
"type": "Markdown"
}
# Postgres Extensions
### Introduction to PostgreSQL Extensions
https://www.educba.com/postgresql-extensions/
### More
https://www.postgresql.org/download/products/6-postgresql-extensions/
### Top 5 PostgreSQL Extensions
https://www.timescale.com/blog/top-5-postgresql-extensions/
1. TimescaleDB
2. PostGIS
3. pg_stat_statements
4. ZomboDB
5. Postgres_fdw
### UUID-OSSP
https://www.postgresql.org/docs/current/uuid-ossp.html
{
"filename": "prior.py",
"repo_name": "ACCarnall/bagpipes",
"repo_path": "bagpipes_extracted/bagpipes-master/bagpipes/fitting/prior.py",
"type": "Python"
}
from __future__ import print_function, division, absolute_import
import numpy as np
from scipy.special import erf, erfinv
from scipy.stats import beta, t
def dirichlet(r, alpha):
""" This function samples from a Dirichlet distribution based on N-1
independent random variables (r) in the range (0, 1). The method is
that of http://www.arxiv.org/abs/1010.3436 by Michael Betancourt."""
n = r.shape[0]+1
x = np.zeros(n)
z = np.zeros(n-1)
alpha_tilda = np.zeros(n-1)
if isinstance(alpha, (float, int)):
alpha = np.repeat(alpha, n)
for i in range(n-1):
alpha_tilda[i] = np.sum(alpha[i+1:])
z[i] = beta.ppf(r[i], alpha_tilda[i], alpha[i])
for i in range(n-1):
x[i] = np.prod(z[:i])*(1-z[i])
x[-1] = np.prod(z)
return np.cumsum(x)
class prior(object):
""" A class which allows for samples to be drawn from a joint prior
distribution in several parameters and for transformations from the
unit cube to the prior volume.
Parameters
----------
limits : list of tuples
List of tuples containing lower and upper limits for the priors
on each parameter.
pdfs : list
List of prior probability density functions which the parameters
should be drawn from between the above limits.
hyper_params : list of dicts
Dictionaries containing fixed values for any hyper-parameters of
the above prior distributions.
"""
def __init__(self, limits, pdfs, hyper_params):
self.limits = limits
self.pdfs = pdfs
self.hyper_params = hyper_params
self.ndim = len(limits)
def sample(self):
""" Sample from the prior distribution. """
cube = np.random.rand(self.ndim)
return self.transform(cube)
def transform(self, cube, ndim=0, nparam=0):
""" Transform numbers on the unit cube to the prior volume. """
# Call the relevant prior functions to draw random values.
for i in range(self.ndim):
prior_function = getattr(self, self.pdfs[i])
cube[i] = prior_function(cube[i], self.limits[i],
self.hyper_params[i])
return cube
def uniform(self, value, limits, hyper_params):
""" Uniform prior in x where x is the parameter. """
value = limits[0] + (limits[1] - limits[0])*value
return value
def log_10(self, value, limits, hyper_params):
""" Uniform prior in log_10(x) where x is the parameter. """
value = 10**((np.log10(limits[1]/limits[0]))*value
+ np.log10(limits[0]))
return value
def log_e(self, value, limits, hyper_params):
""" Uniform prior in log_e(x) where x is the parameter. """
value = np.exp((np.log(limits[1]/limits[0]))*value + np.log(limits[0]))
return value
def pow_10(self, value, limits, hyper_params):
""" Uniform prior in 10**x where x is the parameter. """
value = np.log10((10**limits[1] - 10**limits[0])*value + 10**limits[0])
return value
def recip(self, value, limits, hyper_params):
""" Uniform prior in 1/x where x is the parameter. """
value = 1./((1./limits[1] - 1./limits[0])*value + 1./limits[0])
return value
def recipsq(self, value, limits, hyper_params):
""" Uniform prior in 1/x**2 where x is the parameter. """
value = 1./np.sqrt((1./limits[1]**2 - 1./limits[0]**2)*value
+ 1./limits[0]**2)
return value
def Gaussian(self, value, limits, hyper_params):
""" Gaussian prior between limits with specified mu and sigma. """
mu = hyper_params["mu"]
sigma = hyper_params["sigma"]
uniform_max = erf((limits[1] - mu)/np.sqrt(2)/sigma)
uniform_min = erf((limits[0] - mu)/np.sqrt(2)/sigma)
value = (uniform_max-uniform_min)*value + uniform_min
value = sigma*np.sqrt(2)*erfinv(value) + mu
return value
def student_t(self, value, limits, hyper_params):
""" Student's t prior between limits with df and scale hyper-parameters. """
if "df" in list(hyper_params):
df = hyper_params["df"]
else:
df = 2.
if "scale" in list(hyper_params):
scale = hyper_params["scale"]
else:
scale = 0.3
uniform_min = t.cdf(limits[0], df=df, scale=scale)
uniform_max = t.cdf(limits[1], df=df, scale=scale)
value = (uniform_max-uniform_min)*value + uniform_min
value = t.ppf(value, df=df, scale=scale)
return value
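The `log_10` method above is a standard inverse-CDF transform: a unit-interval variable is mapped so that log10 of the parameter is uniform between the limits. The same mapping can be sketched standalone (the limits and sample count below are illustrative):

```python
import math
import random

def log10_uniform(u, lo, hi):
    # Map u in [0, 1) to x in [lo, hi) such that log10(x) is uniform,
    # i.e. p(x) proportional to 1/x -- the transform used by prior.log_10.
    return 10 ** (math.log10(lo) + u * math.log10(hi / lo))

random.seed(0)
samples = [log10_uniform(random.random(), 1e-2, 1e2) for _ in range(10000)]
# Every sample lies within the limits, and the median sits near the
# geometric centre of the interval (here ~1), not the arithmetic centre.
```

This is why a log-uniform prior is the usual choice for scale parameters spanning several decades: each decade receives equal prior mass.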
{
"filename": "test_infrastructure.py",
"repo_name": "spacetelescope/jwst",
"repo_path": "jwst_extracted/jwst-main/jwst/tests/test_infrastructure.py",
"type": "Python"
}
from pathlib import Path
import importlib
from pkgutil import iter_modules
import os
import pytest
from ci_watson.artifactory_helpers import get_bigdata_root
from jwst.regtest.regtestdata import (
_data_glob_local,
_data_glob_url
)
from jwst.tests.helpers import word_precision_check
def test_word_precision_check():
"""Test word_precision_check"""
s1 = "a b c"
s2 = "aa bb cc"
s3 = "aa bb cc dd"
s4 = "aazz bbzz cczz"
assert word_precision_check(s1, s1)
assert not word_precision_check(s1, s2)
assert word_precision_check(s1, s2, length=1)
assert not word_precision_check(s2, s3)
assert word_precision_check(s2, s4, length=2)
@pytest.mark.parametrize(
'glob_filter, nfiles',
[
('*', 3),
('*.txt', 3),
('*.fits', 0)
], ids=['all', 'txt', 'fits']
)
def test_data_glob_local(glob_filter, nfiles, tmp_cwd):
"""Test working of local globbing
Parameters
----------
glob_filter: str
The glob filter to use.
nfiles: int
The number of files expected to find.
"""
path = Path('datadir')
path.mkdir()
for idx in range(3):
with open(path / ('afile' + str(idx) + '.txt'), 'w') as fh:
fh.write(f'I am file {idx}')
files = _data_glob_local(path, glob_filter)
assert len(files) == nfiles
@pytest.mark.bigdata
@pytest.mark.parametrize(
'glob_filter, nfiles',
[
('*', 1),
('*.txt', 0),
('*.fits', 1)
]
)
def test_data_glob_url(glob_filter, nfiles, pytestconfig, request):
"""Test globbing over a URL
Parameters
----------
glob_filter: str
The glob filter to use.
nfiles: int
The number of files expected to find.
"""
inputs_root = pytestconfig.getini('inputs_root')[0]
env = request.config.getoption('env')
path = os.path.join(inputs_root, env, 'infrastructure/test_data_glob')
files = _data_glob_url(path, glob_filter, root=get_bigdata_root())
assert len(files) == nfiles
def test_submodules_can_be_imported():
"""Make sure all package submodules can be imported"""
import jwst
submodules = [mod for _, mod, ispkg in iter_modules(jwst.__path__) if ispkg]
for module in submodules:
importlib.import_module("jwst." + module)
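The `word_precision_check` helper exercised above compares two strings word by word, optionally only up to a given number of characters per word. A standalone sketch of that idea (the function below is a simplification for illustration, not the jwst implementation):

```python
def words_match(a, b, length=None):
    # Split both strings on whitespace and compare word-by-word; when
    # `length` is given, only the first `length` characters of each word
    # must agree.
    wa, wb = a.split(), b.split()
    return len(wa) == len(wb) and all(
        x[:length] == y[:length] for x, y in zip(wa, wb)
    )

print(words_match("a b c", "aa bb cc"))            # False
print(words_match("a b c", "aa bb cc", length=1))  # True
```

Passing `length=None` makes `x[:None]` the full word, so the unbounded comparison falls out of the same code path.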
{
"filename": "README.md",
"repo_name": "AmpelAstro/Ampel-contrib-sample",
"repo_path": "Ampel-contrib-sample_extracted/Ampel-contrib-sample-master/README.md",
"type": "Markdown"
}
# AMPEL
Ampel is a modular software framework designed for the analysis of streamed data. AMPEL operates in four different tiers:
- T0 filters alerts from a stream
- T1 looks for new transient data to add from outside the stream
- T2 calculates/derives further properties based on the collected information
- T3 triggers reactions
Users are free to add their own operational *units*, implemented as python modules, to each tier of the live AMPEL system. *Channels* request the use of units. This provides great power and freedom in that (almost) any combination of algorithms can be implemented and used for complete, repeatable scientific studies. However, it carries an initial cost in that units and channels have to be preconfigured.

This repository contains a development version of AMPEL that allows channels and units to be developed and tested on static alert collections. Modules developed using these tools can later be merged into a full AMPEL instance, where they are applied either to live alert streams or archived data. Instructions for how to install the development kit and how to design AMPEL units can be found in the [notebooks directory](notebooks/) of this repository. The rest of this README contains a general introduction to the AMPEL system.
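As a purely illustrative example of the kind of selection logic a T0 unit encapsulates, consider a toy filter over alert dictionaries (the field names `ndet`, `rb`, `magpsf` and the thresholds are hypothetical, not part of the Ampel API):

```python
# Toy T0-style filter over alert dictionaries. Field names and cuts are
# invented for illustration only; a real Ampel T0 unit subclasses an
# abstract filter class and receives structured alert objects.
def toy_t0_filter(alert):
    return (
        alert.get("ndet", 0) >= 2               # at least two detections
        and alert.get("rb", 0.0) > 0.65         # real/bogus score cut
        and alert.get("magpsf", 99.0) < 19.5    # brightness cut
    )

alerts = [
    {"ndet": 3, "rb": 0.90, "magpsf": 18.2},  # passes
    {"ndet": 1, "rb": 0.90, "magpsf": 18.2},  # too few detections
    {"ndet": 4, "rb": 0.40, "magpsf": 18.9},  # low real/bogus score
]
accepted = [a for a in alerts if toy_t0_filter(a)]
print(len(accepted))  # 1
```

A channel bundles such a filter together with the T2 computations and T3 reactions it wants applied to the transients that pass.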
The live AMPEL instance functions as a public broker for use with the ZTF alert stream. High-quality candidate extragalactic supernovae are submitted to the TNS in real time. For further questions regarding how to set up an **AMPEL** channel, contact ampel-info at desy.de.
## How to use this repository
This repository contains a set of tutorial notebooks, together with some sample units. These introduce both the process for constructing AMPEL units and channels and the steps for hosting a full local AMPEL instance.
These install instructions assume a Python 3.8 environment. A convenient way to achieve this is through conda: `conda create -n ampelTutorial python=3.8` followed by `conda activate ampelTutorial`.
The following steps will install the core AMPEL modules, as well as creating a starting AMPEL configuration file. This will be used to form a `context` in the notebooks.
1. `pip install ampel-ztf`
2. `conda install -c conda-forge sncosmo`
3. `conda install -c conda-forge iminuit`
4. `conda install -c conda-forge jupyter`
5. `pip install git+https://github.com/AmpelProject/Ampel-ipython.git`
6. `git clone https://github.com/AmpelProject/Ampel-contrib-sample.git`
7. `pip install --no-deps -e Ampel-contrib-sample`
8. `ampel-config build > ampel_config.yml`
These steps will clone the `Ampel-contrib-sample` repository and create an `ampel_config.yml` context file. If the last step produces errors, it means some step of the installation did not complete. Next, head to the `notebooks` subdirectory of `Ampel-contrib-sample` and open the notebooks by executing `jupyter notebook`.
Tutorials 2-4 use a `mongoDB` to store intermediate results (as in a full install), which needs to be separately installed and started. Edit the `ampel_config.yml` file in case it should connect through a non-standard port.
### Creating your AMPEL repository
The first step for developing an AMPEL science program is to make a repository, using this as a template. Replace `sample` with a suitable group identifier.
### AMPEL unit files
Units to be run through AMPEL are included in the appropriate subfolder of [ampel/contrib/sample/](ampel/contrib/sample/) as Python modules inheriting from a suitable abstract class. This ensures that they can be incorporated into a live AMPEL instance processing real-time or archive data.
### Configuration files
The `conf` directory contains a set of different configuration files: `units.yaml` lists new units added in this repository, the `channel` subdirectory contains configurations for specific channels, and the `process` subdirectory lists distinct processes together with their scheduling criteria. A channel configuration can list processes, while operations that join transients from multiple channels have to be listed as a separate process.
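To illustrate how these pieces fit together, a channel definition can be mirrored as a Python dict. The field names below are hypothetical; consult the `channel` and `process` subdirectories for the actual schema:

```python
# Hypothetical mirror of a channel configuration; field names are illustrative
# and do not follow the real AMPEL YAML schema.
channel = {
    "channel": "BrightNStable",
    "active": True,
    # T0: which filter unit selects alerts for this channel
    "t0_filter": {"unit": "SampleFilter", "config": {"min_ndet": 3}},
    # T2: calculations requested for each accepted transient state
    "t2_compute": [{"unit": "T2PolyFit", "config": {"order": 3}}],
    # T3: scheduled processes operating on the channel's transients
    "t3_processes": [
        {"unit": "RankCandidates", "schedule": "every().day.at('13:15')"},
    ],
}
```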
## Motivation
Both multi-messenger astronomy and new high-throughput wide-field surveys require the development of flexible tools for the selection and analysis of astrophysical transients. The Alert Management, Photometry and Evaluation of Lightcurves (AMPEL) system is a streaming data analysis framework. As such it functions to accept, process and react to streams of transient data. AMPEL contains a broker as the first of four pipeline levels, or 'tiers', where each can incorporate user-contributed analysis units. These tools are embedded into a framework that encourages provenance and keeps track of the varying information states that a transient displays. The latter concept includes information gathered over time, but also tracks varying data access levels and, for example, improved calibration. AMPEL provides a tool that can assist in filtering transients in real time, running realistic alert reaction simulations, reprocessing full datasets, and carrying out the final scientific analysis of transient data.
AMPEL differs from traditional brokers in the focus on the full analysis chain of streamed data. As a consequence, there is no (subselected) collection to be queried after alerts have been received. AMPEL users are pro-active in designing channels which are merged into the live instance and exposed to the full stream. This provides full flexibility in analysis design, provenance and what reactions are possible.
## AMPEL in a nutshell
The core object in AMPEL is a *transient*, a single object identified by a creation date and typically a region of origin in the sky. Each transient is linked to a set of *datapoints* that represent individual measurements. Datapoints can be added, updated, marked as bad, or replaced, but never removed. Each datapoint can be associated with tags indicating e.g. any masking or proprietary restrictions. Transients and datapoints are connected by *states*, where a state references a *compound* of datapoints. A state represents a view of a transient available at some time and for some observer. For an optical photometric survey, a compound can be directly interpreted as a set of flux measurements or a lightcurve.
> Example: A ZTF alert corresponds to a potential transient. Datapoints here are simply the photometric magnitudes reported by ZTF. When first inserted, a transient has a single state with a compound consisting of the datapoints in the initial alert. If this transient is detected again, the new datapoint is added to the collection and a new state created containing both previous and new data. Should the first datapoint be public but the second datapoint be private, only users with proper access will see the updated state.
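The transient, datapoint, and state relationship described above can be sketched with plain Python classes. This is illustrative only; the names and structure do not follow the actual AMPEL API:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DataPoint:
    """A single measurement; immutable once stored."""
    id: int
    mjd: float
    magnitude: float
    tags: frozenset = frozenset()  # e.g. {"ZTF_PUBLIC"} or {"PROPRIETARY"}

@dataclass
class Transient:
    name: str
    datapoints: list = field(default_factory=list)
    # Each state is a compound: a tuple of datapoint ids visible to an observer.
    states: list = field(default_factory=list)

    def add_datapoint(self, dp, access_tags):
        """Append a datapoint and create a new state from all visible datapoints."""
        self.datapoints.append(dp)
        visible = tuple(
            d.id for d in self.datapoints
            if d.tags <= access_tags or not d.tags
        )
        self.states.append(visible)

# Mimic the ZTF example: a public detection followed by a proprietary one.
tns = Transient("ZTF18abcdefg")
public = frozenset({"ZTF_PUBLIC"})
tns.add_datapoint(DataPoint(1, 58300.1, 19.2, public), access_tags=public)
tns.add_datapoint(DataPoint(2, 58303.2, 18.7, frozenset({"PROPRIETARY"})),
                  access_tags=public)
# A user with only public access still sees a state containing just datapoint 1.
```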
Using AMPEL means creating a *channel*, corresponding to a specific science goal, which prescribes behavior at four different stages, or *tiers*. What tasks should be performed at which tier can be determined by answering the questions: *Tier 0: What are the minimal requirements for an alert to be interesting?*, *Tier 1: Can datapoints be changed by events external to the stream?*, *Tier 2: What calculations should be done on each of the candidate states?*, *Tier 3: What operations should be done at timed intervals?*
In Tier 0 (T0), the full alert stream is *filtered* to only include potentially interesting candidates. This tier thus works as a data broker: objects that merit further study are selected from the incoming alert stream. However, unlike most brokers, accepted transients are inserted into a database (DB) of active transients rather than immediately being sent downstream. Users can either provide their own algorithm for filtering, or configure one of the filter classes provided by the community, according to their needs.
> Example T0: The simple AMPEL channel `BrightNStable` looks for variables with at least three well behaved detections (few bad pixels and reasonable subtraction FWHM) that are not coincident with a Gaia DR2 star-like source. This is implemented through a Python class `SampleFilter` that operates on an alert and returns either a list of requests for follow-up (T2) analysis, if the selection criteria are fulfilled, or `False` if they are not. AMPEL tests every ZTF alert using this class, and all alerts that pass the cut are added to the active transient DB. The transient is then associated with the channel `BrightNStable`.
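A minimal filter in the spirit of `SampleFilter` could look like the sketch below. The alert structure and return convention are simplified; a real T0 unit inherits from an AMPEL abstract base class and operates on the full ZTF alert schema:

```python
class BrightNStableFilter:
    """Toy T0 filter: accept alerts with at least three clean detections.

    `alert` is assumed to be a dict with a "candidates" list carrying
    ZTF-like quality fields; this is a simplification of the real schema.
    """
    MIN_NDET = 3
    MAX_FWHM = 4.5       # arcsec; subtraction quality cut
    MAX_BAD_PIXELS = 0

    def apply(self, alert):
        clean = [
            c for c in alert["candidates"]
            if c["nbad"] <= self.MAX_BAD_PIXELS and c["fwhm"] < self.MAX_FWHM
        ]
        if len(clean) < self.MIN_NDET:
            return False  # rejected: not inserted into the transient DB
        # Accepted: request follow-up T2 calculations for this transient.
        return [{"t2_unit": "T2PolyFit", "config": {"order": 3}}]

alert = {"candidates": [
    {"nbad": 0, "fwhm": 2.1},
    {"nbad": 0, "fwhm": 1.8},
    {"nbad": 0, "fwhm": 2.4},
]}
requests = BrightNStableFilter().apply(alert)
```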
Tier 1 (T1) is largely autonomous and exists in parallel to the other tiers. T1 carries out duties related to *updates* of datapoints and states. Example activities include completing transient states with datapoints that were present in new alerts but not individually accepted by the channel filter (e.g., lower-significance detections at late phases), querying an external archive for updated calibration, or adding photometry from additional sources.
Additional transient information is derived or retrieved in Tier 2 (T2), and is always connected to a state and stored as a *ScienceRecord*. T2 units either work with the empty state, relevant e.g. for catalog matching that only depends on position, or depend on the datapoints of a state to calculate new, derived transient properties. In the latter case, the T2 task will be called again as soon as a new state is created, whether due to new observations or, for example, updated calibration of old datapoints. Possible T2 units include lightcurve fitting, photometric redshift estimation, machine learning classification, and catalog matching.
> Example T2: For an optical transient, a state corresponds to a lightcurve and each photometric observation is represented by a datapoint. A new observation of the transient would extend the lightcurve and thus create a new state. `BrightNStable` requests a third order polynomial fit for each state using the `T2PolyFit` class. The outcome, in this case polynomial coefficients, are saved to the database.
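The state-to-ScienceRecord mapping of the `T2PolyFit` example can be sketched with NumPy's polynomial fit. The interface is hypothetical and only illustrates the idea; the real unit follows the AMPEL T2 base-class API:

```python
import numpy as np

def t2_polyfit(state, order=3):
    """Fit a polynomial to the lightcurve of one state.

    `state` is a list of (mjd, magnitude) datapoints; the returned dict
    plays the role of a ScienceRecord body saved to the database.
    """
    mjd, mag = np.array(state).T
    # Degree is capped so short lightcurves do not over-determine the fit.
    coeffs = np.polyfit(mjd - mjd[0], mag, deg=min(order, len(mjd) - 1))
    return {"unit": "T2PolyFit", "coefficients": coeffs.tolist()}

# A state corresponding to a four-epoch rising lightcurve:
state = [(58300.0, 19.5), (58302.0, 19.1), (58304.0, 18.8), (58306.0, 18.4)]
record = t2_polyfit(state)
```

Each time a new state is created (a new observation extends the lightcurve), the unit would be invoked again and a fresh record stored.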
The final AMPEL level, Tier 3 (T3), consists of *schedulable* actions. While T2s are initiated by events (the addition of new states), T3 units are executed at pre-determined times. These can range from yearly data dumps, to daily updates, to effectively real-time execution every few seconds. T3 processes access data through the *TransientView*, which concatenates all information regarding a transient, including both states and ScienceRecords accessible by the channel. T3s iterate through all transients of a channel. This allows for the evaluation of multiple ScienceRecords and comparisons between different objects or, more generally, any kind of population analysis. One typical case is ranking candidates that would be interesting to observe on a given night. T3 units include options to push and pull information from, for example, Slack, TNS, and web servers.
> Example T3: The science goal of `BrightNStable` is to observe transients with a steady rise. At the T3 stage the channel therefore loops through the TransientViews and examines all `T2PolyFit` science records for fit parameters that indicate a lasting linear rise. Any transient fulfilling the final criteria triggers an immediate notification to the user. This test is scheduled to run at 13:15 UT each day.
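The scheduled T3 selection reduces to a loop over transient views. A hedged sketch, where the view structure and field names mimic but do not reproduce the real TransientView interface (for magnitudes, a negative slope means brightening):

```python
def t3_select_risers(transient_views, max_slope=-0.1):
    """Pick transients whose latest T2PolyFit record indicates a steady rise.

    Each view is assumed to be a dict with a "records" list of
    ScienceRecord-like dicts; this interface is illustrative only.
    """
    selected = []
    for view in transient_views:
        fits = [r for r in view["records"] if r["unit"] == "T2PolyFit"]
        if not fits:
            continue
        # np.polyfit-style ordering: highest degree first, so the linear
        # coefficient is second from the end.
        linear_term = fits[-1]["coefficients"][-2]
        if linear_term < max_slope:
            selected.append(view["name"])
    return selected  # e.g. forwarded as a notification at 13:15 UT

views = [
    {"name": "ZTF18aaa",
     "records": [{"unit": "T2PolyFit", "coefficients": [0.0, 0.01, -0.2, 19.5]}]},
    {"name": "ZTF18bbb",
     "records": [{"unit": "T2PolyFit", "coefficients": [0.0, 0.0, 0.05, 18.9]}]},
]
```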
## Life of a transient in AMPEL

*Life of a transient in AMPEL.* Sample behavior at the four tiers of AMPEL as well as the database access are shown as columns, with the left side of the figure indicating when the four alerts belonging to the transient were received.
1. T0: The first alert is rejected, while the following two pass the channel acceptance criteria. The final alert is, again, rejected as the transient is now too faint.
2. T1: First, a measurement provided by a secondary facility is ingested. Second, new observations of channel transients are added even if alerts in isolation would not be accepted. Third, updated calibration of a measurement causes this datapoint to be replaced.
3. T2: Every time a new state is created, a lightcurve fit is performed and the fit results are stored as a ScienceRecord.
4. T3: A unit regularly tests whether the transient warrants a Slack posting (requesting potential further follow-up). The submission criteria are fulfilled the second time the unit is run. In both cases the evaluation is stored in the transient *Journal*, which is later used to prevent a transient from being posted multiple times. Once the transient has not been updated for an extended time, a T3 unit *purges* it to an external database that can be directly queried by channel owners.
5. Database: A transient entry is created in the DB as the first alert is accepted. After this, each new datapoint causes a new state to be created. T2 Science Records are each associated with one state. The T3 units return information that is stored in the Journal.
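The journal-based bookkeeping in step 4 can be sketched as follows. The `view` structure and `"action"` key are hypothetical, standing in for the real Journal interface:

```python
def t3_post_once(view, post):
    """Post a transient notification only once, using its journal.

    `view` is a dict with a "journal" list of past actions; `post` is any
    callable taking the transient name (e.g. a Slack client wrapper).
    """
    already = any(e.get("action") == "slack_post" for e in view["journal"])
    if already:
        return False  # suppressed: the journal shows a previous posting
    post(view["name"])
    view["journal"].append({"action": "slack_post"})
    return True

sent = []
view = {"name": "ZTF18aaa", "journal": []}
t3_post_once(view, sent.append)  # first run: posts and records a journal entry
t3_post_once(view, sent.append)  # second run: suppressed by the journal
```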
A technical outline of AMPEL can be found [here](figures/ZTF_Pipeline_overview_June_18.pdf).
## The AMPEL live instance: parsing the ZTF alert stream and submitting candidates to TNS
An instance of AMPEL hosted at the DESY computer centre (Zeuthen) receives and parses the live ZTF alert stream distributed by the University of Washington. This process is summarized in the following figure:

{
"filename": "profiler.py",
"repo_name": "jax-ml/jax",
"repo_path": "jax_extracted/jax-main/jax/_src/profiler.py",
"type": "Python"
}
# Copyright 2020 The JAX Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
from collections.abc import Callable
from contextlib import contextmanager
from functools import wraps
import gzip
import http.server
import json
import logging
import os
import pathlib
import socketserver
import threading
from typing import Any
from jax._src import traceback_util
traceback_util.register_exclusion(__file__)
from jax._src import xla_bridge
from jax._src.lib import xla_client
_profiler_server: xla_client.profiler.ProfilerServer | None = None
logger = logging.getLogger(__name__)
def start_server(port: int) -> xla_client.profiler.ProfilerServer:
"""Starts the profiler server on port `port`.
Using the "TensorFlow profiler" feature in `TensorBoard
<https://www.tensorflow.org/tensorboard>`_ 2.2 or newer, you can
connect to the profiler server and sample execution traces that show CPU,
GPU, and/or TPU device activity.
"""
global _profiler_server
if _profiler_server is not None:
raise ValueError("Only one profiler server can be active at a time.")
# Make sure backends are initialized before creating a profiler
# session. Otherwise on Cloud TPU, libtpu may not be initialized before
# creating the tracer, which will cause the TPU tracer initialization to
# fail and no TPU operations will be included in the profile.
  # NOTE(skyewm): I'm not sure this is necessary for start_server (it definitely
  # is for start_trace), but I'm putting it here to be safe.
xla_bridge.get_backend()
_profiler_server = xla_client.profiler.start_server(port)
return _profiler_server
def stop_server():
"""Stops the running profiler server."""
global _profiler_server
if _profiler_server is None:
raise ValueError("No active profiler server.")
_profiler_server = None # Should destroy the profiler server
class _ProfileState:
def __init__(self):
self.profile_session = None
self.log_dir = None
self.create_perfetto_link = False
self.create_perfetto_trace = False
self.lock = threading.Lock()
def reset(self):
_profile_state.profile_session = None
_profile_state.create_perfetto_link = False
_profile_state.create_perfetto_trace = False
_profile_state.log_dir = None
_profile_state = _ProfileState()
def start_trace(log_dir: os.PathLike | str, create_perfetto_link: bool = False,
create_perfetto_trace: bool = False) -> None:
"""Starts a profiler trace.
The trace will capture CPU, GPU, and/or TPU activity, including Python
functions and JAX on-device operations. Use :func:`stop_trace` to end the trace
and save the results to ``log_dir``.
The resulting trace can be viewed with TensorBoard. Note that TensorBoard
doesn't need to be running when collecting the trace.
Only one trace may be collected at a time. A RuntimeError will be raised if
:func:`start_trace` is called while another trace is running.
Args:
log_dir: The directory to save the profiler trace to (usually the
TensorBoard log directory).
create_perfetto_link: A boolean which, if true, creates and prints link to
the Perfetto trace viewer UI (https://ui.perfetto.dev). The program will
block until the link is opened and Perfetto loads the trace.
create_perfetto_trace: A boolean which, if true, additionally dumps a
``perfetto_trace.json.gz`` file that is compatible for upload with the
Perfetto trace viewer UI (https://ui.perfetto.dev). The file will also be
generated if ``create_perfetto_link`` is true. This could be useful if you
want to generate a Perfetto-compatible trace without blocking the
process.
"""
with _profile_state.lock:
if _profile_state.profile_session is not None:
raise RuntimeError("Profile has already been started. "
"Only one profile may be run at a time.")
# Make sure backends are initialized before creating a profiler
# session. Otherwise on Cloud TPU, libtpu may not be initialized before
# creating the tracer, which will cause the TPU tracer initialization to
# fail and no TPU operations will be included in the profile.
xla_bridge.get_backend()
_profile_state.profile_session = xla_client.profiler.ProfilerSession()
_profile_state.create_perfetto_link = create_perfetto_link
_profile_state.create_perfetto_trace = (
create_perfetto_trace or create_perfetto_link)
_profile_state.log_dir = str(log_dir)
def _write_perfetto_trace_file(log_dir: os.PathLike | str):
  # Navigate to folder with the latest trace dump to find `trace.json.gz`
trace_folders = (pathlib.Path(log_dir).absolute() / "plugins" / "profile").iterdir()
latest_trace_folder = max(trace_folders, key=os.path.getmtime)
trace_jsons = latest_trace_folder.glob("*.trace.json.gz")
try:
trace_json, = trace_jsons
except ValueError as value_error:
raise ValueError(f"Invalid trace folder: {latest_trace_folder}") from value_error
logger.info("Loading trace.json.gz and removing its metadata...")
# Perfetto doesn't like the `metadata` field in `trace.json` so we remove
# it.
# TODO(sharadmv): speed this up by updating the generated `trace.json`
# to not include metadata if possible.
with gzip.open(trace_json, "rb") as fp:
trace = json.load(fp)
del trace["metadata"]
perfetto_trace = latest_trace_folder / "perfetto_trace.json.gz"
logger.info("Writing perfetto_trace.json.gz...")
with gzip.open(perfetto_trace, "w") as fp:
fp.write(json.dumps(trace).encode("utf-8"))
return perfetto_trace
class _PerfettoServer(http.server.SimpleHTTPRequestHandler):
"""Handles requests from `ui.perfetto.dev` for the `trace.json`"""
def end_headers(self):
self.send_header('Access-Control-Allow-Origin', '*')
return super().end_headers()
def do_GET(self):
self.server.last_request = self.path
return super().do_GET()
def do_POST(self):
self.send_error(404, "File not found")
def _host_perfetto_trace_file(path: os.PathLike | str):
# ui.perfetto.dev looks for files hosted on `127.0.0.1:9001`. We set up a
# TCP server that is hosting the `perfetto_trace.json.gz` file.
port = 9001
orig_directory = pathlib.Path.cwd()
directory, filename = os.path.split(path)
try:
os.chdir(directory)
socketserver.TCPServer.allow_reuse_address = True
with socketserver.TCPServer(('127.0.0.1', port), _PerfettoServer) as httpd:
      url = f"https://ui.perfetto.dev/#!/?url=http://127.0.0.1:{port}/{filename}"
print(f"Open URL in browser: {url}")
# Once ui.perfetto.dev acquires trace.json from this server we can close
# it down.
while httpd.__dict__.get('last_request') != '/' + filename:
httpd.handle_request()
finally:
os.chdir(orig_directory)
def stop_trace():
"""Stops the currently-running profiler trace.
The trace will be saved to the ``log_dir`` passed to the corresponding
:func:`start_trace` call. Raises a RuntimeError if a trace hasn't been started.
"""
with _profile_state.lock:
if _profile_state.profile_session is None:
raise RuntimeError("No profile started")
sess = _profile_state.profile_session
sess.export(sess.stop(), str(_profile_state.log_dir))
if _profile_state.create_perfetto_trace:
abs_filename = _write_perfetto_trace_file(_profile_state.log_dir)
if _profile_state.create_perfetto_link:
_host_perfetto_trace_file(abs_filename)
_profile_state.reset()
def stop_and_get_fdo_profile() -> bytes | str:
"""Stops the currently-running profiler trace and export fdo_profile.
Currently, this is only supported for GPU.
Raises a RuntimeError if a trace hasn't been started.
"""
with _profile_state.lock:
if _profile_state.profile_session is None:
raise RuntimeError("No profile started")
xspace = _profile_state.profile_session.stop()
fdo_profile = xla_client.profiler.get_fdo_profile(xspace)
_profile_state.reset()
return fdo_profile
@contextmanager
def trace(log_dir: os.PathLike | str, create_perfetto_link=False, create_perfetto_trace=False):
"""Context manager to take a profiler trace.
The trace will capture CPU, GPU, and/or TPU activity, including Python
functions and JAX on-device operations.
The resulting trace can be viewed with TensorBoard. Note that TensorBoard
doesn't need to be running when collecting the trace.
Only one trace may be collected at a time. A RuntimeError will be raised if a
trace is started while another trace is running.
Args:
log_dir: The directory to save the profiler trace to (usually the
TensorBoard log directory).
create_perfetto_link: A boolean which, if true, creates and prints link to
the Perfetto trace viewer UI (https://ui.perfetto.dev). The program will
block until the link is opened and Perfetto loads the trace.
create_perfetto_trace: A boolean which, if true, additionally dumps a
``perfetto_trace.json.gz`` file that is compatible for upload with the
Perfetto trace viewer UI (https://ui.perfetto.dev). The file will also be
generated if ``create_perfetto_link`` is true. This could be useful if you
want to generate a Perfetto-compatible trace without blocking the
process.
"""
start_trace(log_dir, create_perfetto_link, create_perfetto_trace)
try:
yield
finally:
stop_trace()
class TraceAnnotation(xla_client.profiler.TraceMe):
"""Context manager that generates a trace event in the profiler.
The trace event spans the duration of the code enclosed by the context.
For example:
>>> x = jnp.ones((1000, 1000))
>>> with jax.profiler.TraceAnnotation("my_label"):
... result = jnp.dot(x, x.T).block_until_ready()
This will cause a "my_label" event to show up on the trace timeline if the
event occurs while the process is being traced.
"""
pass
class StepTraceAnnotation(TraceAnnotation):
"""Context manager that generates a step trace event in the profiler.
The step trace event spans the duration of the code enclosed by the context.
The profiler will provide the performance analysis for each step trace event.
For example, it can be used to mark training steps and enable the profiler to
provide the performance analysis per step:
>>> while global_step < NUM_STEPS: # doctest: +SKIP
... with jax.profiler.StepTraceAnnotation("train", step_num=global_step): # doctest: +SKIP
... train_step() # doctest: +SKIP
... global_step += 1 # doctest: +SKIP
This will cause a "train xx" event to show up on the trace timeline if the
event occurs while the process is being traced by TensorBoard. In addition,
if using accelerators, the device trace timeline will also show a "train xx"
event. Note that "step_num" can be set as a keyword argument to pass the
global step number to the profiler.
"""
def __init__(self, name: str, **kwargs):
super().__init__(name, _r=1, **kwargs)
def annotate_function(func: Callable, name: str | None = None,
**decorator_kwargs):
"""Decorator that generates a trace event for the execution of a function.
For example:
>>> @jax.profiler.annotate_function
... def f(x):
... return jnp.dot(x, x.T).block_until_ready()
>>>
>>> result = f(jnp.ones((1000, 1000)))
This will cause an "f" event to show up on the trace timeline if the
function execution occurs while the process is being traced by TensorBoard.
Arguments can be passed to the decorator via :py:func:`functools.partial`.
>>> from functools import partial
>>> @partial(jax.profiler.annotate_function, name="event_name")
... def f(x):
... return jnp.dot(x, x.T).block_until_ready()
>>> result = f(jnp.ones((1000, 1000)))
"""
name = name or getattr(func, '__qualname__', None)
name = name or func.__name__
@wraps(func)
def wrapper(*args, **kwargs):
with TraceAnnotation(name, **decorator_kwargs):
return func(*args, **kwargs)
  return wrapper
def device_memory_profile(backend: str | None = None) -> bytes:
"""Captures a JAX device memory profile as ``pprof``-format protocol buffer.
A device memory profile is a snapshot of the state of memory, that describes the JAX
:class:`~jax.Array` and executable objects present in memory and their
allocation sites.
For more information how to use the device memory profiler, see
:doc:`/device_memory_profiling`.
The profiling system works by instrumenting JAX on-device allocations,
capturing a Python stack trace for each allocation. The instrumentation is
always enabled; :func:`device_memory_profile` provides an API to capture it.
The output of :func:`device_memory_profile` is a binary protocol buffer that
can be interpreted and visualized by the `pprof tool
<https://github.com/google/pprof>`_.
Args:
backend: optional; the name of the JAX backend for which the device memory
profile should be collected.
Returns:
A byte string containing a binary `pprof`-format protocol buffer.
"""
return xla_client.heap_profile(xla_bridge.get_backend(backend))
def save_device_memory_profile(filename, backend: str | None = None) -> None:
"""Collects a device memory profile and writes it to a file.
:func:`save_device_memory_profile` is a convenience wrapper around :func:`device_memory_profile`
that saves its output to a ``filename``. See the
:func:`device_memory_profile` documentation for more information.
Args:
filename: the filename to which the profile should be written.
backend: optional; the name of the JAX backend for which the device memory
profile should be collected.
"""
profile = device_memory_profile(backend)
with open(filename, "wb") as f:
f.write(profile)
# Allows running the model under the profiler a given number of times. After the
# required number of retries is reached, the client can collect FDO data.
class PGLEProfiler:
def __init__(self, retries: int, percentile: int):
self.retries: int = retries
self.percentile: int = percentile
self.collected_fdo: str | None = None
self.called_times: int = 0
self.fdo_profiles: list[Any] = []
self.current_session: xla_client.profiler.ProfilerSession | None = None
def consume_fdo_profile(self) -> str | None:
if self.collected_fdo is not None:
return self.collected_fdo
if not self.is_enabled() or self.called_times != self.retries:
return None
self.collected_fdo = xla_client.profiler.aggregate_profiled_instructions(
self.fdo_profiles, self.percentile
)
return self.collected_fdo
def is_fdo_consumed(self):
return self.collected_fdo is not None
def disable(self):
self.retries = 0
def is_enabled(self):
return self.retries > 0
def is_running(self):
return self.current_session is not None
@classmethod
@contextmanager
def trace(cls, runner: PGLEProfiler | None):
if (runner is None or runner.is_running()
or not runner.is_enabled() or runner.is_fdo_consumed()):
yield
else:
options = xla_client.profiler.ProfileOptions()
options.enable_hlo_proto = True
runner.current_session = xla_client.profiler.ProfilerSession(options)
try:
yield
finally:
xspace = runner.current_session.stop()
runner.fdo_profiles.append(
xla_client.profiler.get_fdo_profile(xspace)
)
runner.current_session = None
runner.called_times += 1
{
"filename": "config_sliders.py",
"repo_name": "hruedisser/3DCOREweb",
"repo_path": "3DCOREweb_extracted/3DCOREweb-main/src/coreweb/dashcore/assets/config_sliders.py",
"type": "Python"
}
from dash import dcc, register_page, html
import dash_mantine_components as dmc
import dash_bootstrap_components as dbc
from dash_iconify import DashIconify
'''
Configuration for the sliders
'''
############################################################
# Geometrical Models
modelslidervars = [{'var_name': 'Longitude (HEEQ)',
'min': 0.,
'max': 360.,
'step': 0.01,
'def': 0.,
'doubl_def': [0., 360],
'unit': '[deg.]',
'marks': {i: str(i) for i in range(-180, 361, 90)},
'variablename': 'longit',
'variablename_double': 'longit_double',
},
{'var_name':'Latitude (HEEQ)',
'min': -90.,
'max': 90.,
'step': 0.01,
'def': 0.,
'doubl_def': [-90, 90],
'unit': '[deg.]',
'marks': {i: str(i) for i in range(-90, 91, 45)},
'variablename': 'latitu',
'variablename_double': 'latitu_double',
},
{'var_name':'Inclination',
'min': 0.,
'max': 360.,
'step': 1.,
'def': 0.,
'doubl_def': [0., 360],
'unit': '[deg.]',
'marks': {i: str(i) for i in range(0, 361, 90)},
'variablename': 'inc',
'variablename_double': 'inc_double'
},
{'var_name':'Diameter 1 AU',
'min': 0.05,
'max': 0.35,
'step': 0.01,
'def': 0.2,
'doubl_def': [0.05, 0.35],
'unit': '[AU]',
'marks': {0.05: '0.05', 0.15: '0.15',0.25: '0.25',0.35: '0.35'},
'variablename': 'dia',
'variablename_double': 'dia_double'
},
{'var_name':'Aspect Ratio',
'min': 1.,
'max': 6.,
'step': 0.1,
'def':3.,
'doubl_def': [1., 6],
'unit': '',
'marks': {i: str(i) for i in range(0,7, 1)},
'variablename': 'asp',
'variablename_double': 'asp_double'
},
{'var_name':'Launch Radius',
'min' : 5.,
'max': 100. ,
'step': 1.,
'def':20.,
'doubl_def': [15, 25],
'unit': '[R_Sun]',
'marks': {5: '5', 25: '25',50: '50',75: '75',100: '100'},
'variablename': 'l_rad',
'variablename_double': 'l_rad_double'
},
{'var_name':'Launch Velocity',
'min': 400.,
'max': 3000.,
'step': 10.,
'def':800.,
'doubl_def': [400., 1200],
'unit': '[km/s]',
'marks': {i: str(i) for i in [400, 1000, 1500, 2000, 2500, 3000]},
'variablename': 'l_vel',
'variablename_double': 'l_vel_double'
},
{'var_name':'Expansion Rate',
'min': 0.3 ,
'max': 2.,
'step':0.01 ,
'def':1.14,
'doubl_def': [1.14, 1.14],
'unit': '',
'marks': {0.3: '0.3', 1.14: '1.14',2: '2'},
'variablename': 'exp_rat',
'variablename_double': 'exp_rat_double',
},
{'var_name':'Background Drag',
'min': 0.2,
'max': 3.,
'step': 0.01,
'def':1.,
'doubl_def': [0.1, 3.],
'unit': '',
'marks': {0.2: '0.2', 1: '1',2: '2',3: '3'},
'variablename': 'b_drag',
'variablename_double': 'b_drag_double',
},
{'var_name':'Background Velocity',
'min': 100.,
'max': 700.,
'step': 10.,
'def':500.,
'doubl_def': [100., 700],
'unit': '[km/s]',
'marks': {i: str(i) for i in range(100, 701, 100)},
'variablename': 'bg_vel',
'variablename_double': 'bg_vel_double'
},
]
magslidervars = [{'var_name': 'T_Factor',
'min': -250.,
'max': 250.,
'step': 1.,
'def': 100.,
'doubl_def': [-250., 250],
'unit': '',
'marks': {i: str(i) for i in range(-250, 251, 50)},
'variablename': 't_fac',
'variablename_double': 't_fac_double'
},
{'var_name': 'Magnetic Decay Rate',
'min': 1.,
'max': 2.,
'step': 0.01,
'def': 1.64,
'doubl_def': [1.64, 1.64],
'unit': '',
'marks': {1: '1', 1.64: '1.64',2: '2'},
'variablename': 'mag_dec',
'variablename_double': 'mag_dec_double'
},
{'var_name': 'Magnetic Field Strength 1 AU',
'min': 5.,
'max': 150.,
'step': 1.,
'def': 25.,
'doubl_def': [5., 100.],
'unit': '[nT]',
'marks': {i: str(i) for i in [5, 25, 50, 75, 100, 125, 150]},
'variablename': 'mag_strength',
'variablename_double': 'mag_strength_double'
},
]
fittingstate = ["launch-label", "spacecrafttable"] + [item['variablename_double'] for item in modelslidervars + magslidervars] + ["particle-slider", "reference_frame", "fitter-radio", 'n_jobs', "multiprocesscheck", 'n_iter',"ensemble-slider"]
modelstate = [item['variablename'] for item in modelslidervars + magslidervars]
dataarchive =html.Div([
html.Br(),
html.Hr(),
html.Br(),
dbc.Row([
dbc.Col(
[dcc.Link(
dmc.Group(
[
dmc.ThemeIcon(
DashIconify(icon='ph:folder-bold', width=18, style={"color": "black"}),
size=40,
radius=40,
variant="light",
style={"backgroundColor": "#eaeaea", "marginRight": "12px"}
),
dmc.Text('Data Archive', size="l", color="gray", weight=500),
],
style={"display": "flex",
"alignItems": "center",
"justifyContent": "start"
},
),
href="https://doi.org/10.6084/m9.figshare.11973693.v23",
target="_blank",
style={"textDecoration": "none",
},
),
], width = 3),
dbc.Col([
dmc.Text(
"""
Consider downloading the full data archive to avoid the need for automatic data retrieval during analysis of an event. Place the files in 3DCOREweb/src/coreweb/dashcore/data/archive.
"""
, size="l", color="black", weight=345)], width=9),
]),
html.Br(),
])
{
"filename": "_thickness.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/histogram/error_y/_thickness.py",
"type": "Python"
}
import _plotly_utils.basevalidators
class ThicknessValidator(_plotly_utils.basevalidators.NumberValidator):
def __init__(
self, plotly_name="thickness", parent_name="histogram.error_y", **kwargs
):
super(ThicknessValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "style"),
min=kwargs.pop("min", 0),
**kwargs,
)
{
"filename": "perl.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/Pygments/py3/pygments/lexers/perl.py",
"type": "Python"
}
"""
pygments.lexers.perl
~~~~~~~~~~~~~~~~~~~~
Lexers for Perl, Raku and related languages.
:copyright: Copyright 2006-2024 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
from pygments.lexer import RegexLexer, ExtendedRegexLexer, include, bygroups, \
using, this, default, words
from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
Number, Punctuation, Whitespace
from pygments.util import shebang_matches
__all__ = ['PerlLexer', 'Perl6Lexer']
class PerlLexer(RegexLexer):
"""
For Perl source code.
"""
name = 'Perl'
url = 'https://www.perl.org'
aliases = ['perl', 'pl']
filenames = ['*.pl', '*.pm', '*.t', '*.perl']
mimetypes = ['text/x-perl', 'application/x-perl']
version_added = ''
flags = re.DOTALL | re.MULTILINE
# TODO: give this to a perl guy who knows how to parse perl...
tokens = {
'balanced-regex': [
(r'/(\\\\|\\[^\\]|[^\\/])*/[egimosx]*', String.Regex, '#pop'),
(r'!(\\\\|\\[^\\]|[^\\!])*![egimosx]*', String.Regex, '#pop'),
(r'\\(\\\\|[^\\])*\\[egimosx]*', String.Regex, '#pop'),
(r'\{(\\\\|\\[^\\]|[^\\}])*\}[egimosx]*', String.Regex, '#pop'),
(r'<(\\\\|\\[^\\]|[^\\>])*>[egimosx]*', String.Regex, '#pop'),
(r'\[(\\\\|\\[^\\]|[^\\\]])*\][egimosx]*', String.Regex, '#pop'),
(r'\((\\\\|\\[^\\]|[^\\)])*\)[egimosx]*', String.Regex, '#pop'),
(r'@(\\\\|\\[^\\]|[^\\@])*@[egimosx]*', String.Regex, '#pop'),
(r'%(\\\\|\\[^\\]|[^\\%])*%[egimosx]*', String.Regex, '#pop'),
(r'\$(\\\\|\\[^\\]|[^\\$])*\$[egimosx]*', String.Regex, '#pop'),
],
'root': [
(r'\A\#!.+?$', Comment.Hashbang),
(r'\#.*?$', Comment.Single),
(r'^=[a-zA-Z0-9]+\s+.*?\n=cut', Comment.Multiline),
(words((
'case', 'continue', 'do', 'else', 'elsif', 'for', 'foreach',
'if', 'last', 'my', 'next', 'our', 'redo', 'reset', 'then',
'unless', 'until', 'while', 'print', 'new', 'BEGIN',
'CHECK', 'INIT', 'END', 'return'), suffix=r'\b'),
Keyword),
(r'(format)(\s+)(\w+)(\s*)(=)(\s*\n)',
bygroups(Keyword, Whitespace, Name, Whitespace, Punctuation, Whitespace), 'format'),
(r'(eq|lt|gt|le|ge|ne|not|and|or|cmp)\b', Operator.Word),
# common delimiters
(r's/(\\\\|\\[^\\]|[^\\/])*/(\\\\|\\[^\\]|[^\\/])*/[egimosx]*',
String.Regex),
(r's!(\\\\|\\!|[^!])*!(\\\\|\\!|[^!])*![egimosx]*', String.Regex),
(r's\\(\\\\|[^\\])*\\(\\\\|[^\\])*\\[egimosx]*', String.Regex),
(r's@(\\\\|\\[^\\]|[^\\@])*@(\\\\|\\[^\\]|[^\\@])*@[egimosx]*',
String.Regex),
(r's%(\\\\|\\[^\\]|[^\\%])*%(\\\\|\\[^\\]|[^\\%])*%[egimosx]*',
String.Regex),
# balanced delimiters
(r's\{(\\\\|\\[^\\]|[^\\}])*\}\s*', String.Regex, 'balanced-regex'),
(r's<(\\\\|\\[^\\]|[^\\>])*>\s*', String.Regex, 'balanced-regex'),
(r's\[(\\\\|\\[^\\]|[^\\\]])*\]\s*', String.Regex,
'balanced-regex'),
(r's\((\\\\|\\[^\\]|[^\\)])*\)\s*', String.Regex,
'balanced-regex'),
(r'm?/(\\\\|\\[^\\]|[^\\/\n])*/[gcimosx]*', String.Regex),
(r'm(?=[/!\\{<\[(@%$])', String.Regex, 'balanced-regex'),
(r'((?<==~)|(?<=\())\s*/(\\\\|\\[^\\]|[^\\/])*/[gcimosx]*',
String.Regex),
(r'\s+', Whitespace),
(words((
'abs', 'accept', 'alarm', 'atan2', 'bind', 'binmode', 'bless', 'caller', 'chdir',
'chmod', 'chomp', 'chop', 'chown', 'chr', 'chroot', 'close', 'closedir', 'connect',
'continue', 'cos', 'crypt', 'dbmclose', 'dbmopen', 'defined', 'delete', 'die',
'dump', 'each', 'endgrent', 'endhostent', 'endnetent', 'endprotoent',
'endpwent', 'endservent', 'eof', 'eval', 'exec', 'exists', 'exit', 'exp', 'fcntl',
'fileno', 'flock', 'fork', 'format', 'formline', 'getc', 'getgrent', 'getgrgid',
'getgrnam', 'gethostbyaddr', 'gethostbyname', 'gethostent', 'getlogin',
'getnetbyaddr', 'getnetbyname', 'getnetent', 'getpeername', 'getpgrp',
'getppid', 'getpriority', 'getprotobyname', 'getprotobynumber',
'getprotoent', 'getpwent', 'getpwnam', 'getpwuid', 'getservbyname',
'getservbyport', 'getservent', 'getsockname', 'getsockopt', 'glob', 'gmtime',
'goto', 'grep', 'hex', 'import', 'index', 'int', 'ioctl', 'join', 'keys', 'kill', 'last',
'lc', 'lcfirst', 'length', 'link', 'listen', 'local', 'localtime', 'log', 'lstat',
'map', 'mkdir', 'msgctl', 'msgget', 'msgrcv', 'msgsnd', 'my', 'next', 'oct', 'open',
'opendir', 'ord', 'our', 'pack', 'pipe', 'pop', 'pos', 'printf',
'prototype', 'push', 'quotemeta', 'rand', 'read', 'readdir',
'readline', 'readlink', 'readpipe', 'recv', 'redo', 'ref', 'rename',
'reverse', 'rewinddir', 'rindex', 'rmdir', 'scalar', 'seek', 'seekdir',
'select', 'semctl', 'semget', 'semop', 'send', 'setgrent', 'sethostent', 'setnetent',
'setpgrp', 'setpriority', 'setprotoent', 'setpwent', 'setservent',
'setsockopt', 'shift', 'shmctl', 'shmget', 'shmread', 'shmwrite', 'shutdown',
'sin', 'sleep', 'socket', 'socketpair', 'sort', 'splice', 'split', 'sprintf', 'sqrt',
'srand', 'stat', 'study', 'substr', 'symlink', 'syscall', 'sysopen', 'sysread',
'sysseek', 'system', 'syswrite', 'tell', 'telldir', 'tie', 'tied', 'time', 'times', 'tr',
'truncate', 'uc', 'ucfirst', 'umask', 'undef', 'unlink', 'unpack', 'unshift', 'untie',
'utime', 'values', 'vec', 'wait', 'waitpid', 'wantarray', 'warn', 'write'), suffix=r'\b'),
Name.Builtin),
(r'((__(DATA|DIE|WARN)__)|(STD(IN|OUT|ERR)))\b', Name.Builtin.Pseudo),
(r'(<<)([\'"]?)([a-zA-Z_]\w*)(\2;?\n.*?\n)(\3)(\n)',
bygroups(String, String, String.Delimiter, String, String.Delimiter, Whitespace)),
(r'__END__', Comment.Preproc, 'end-part'),
(r'\$\^[ADEFHILMOPSTWX]', Name.Variable.Global),
(r"\$[\\\"\[\]'&`+*.,;=%~?@$!<>(^|/-](?!\w)", Name.Variable.Global),
(r'[$@%#]+', Name.Variable, 'varname'),
(r'0_?[0-7]+(_[0-7]+)*', Number.Oct),
(r'0x[0-9A-Fa-f]+(_[0-9A-Fa-f]+)*', Number.Hex),
(r'0b[01]+(_[01]+)*', Number.Bin),
(r'(?i)(\d*(_\d*)*\.\d+(_\d*)*|\d+(_\d*)*\.\d+(_\d*)*)(e[+-]?\d+)?',
Number.Float),
(r'(?i)\d+(_\d*)*e[+-]?\d+(_\d*)*', Number.Float),
(r'\d+(_\d+)*', Number.Integer),
(r"'(\\\\|\\[^\\]|[^'\\])*'", String),
(r'"(\\\\|\\[^\\]|[^"\\])*"', String),
(r'`(\\\\|\\[^\\]|[^`\\])*`', String.Backtick),
(r'<([^\s>]+)>', String.Regex),
(r'(q|qq|qw|qr|qx)\{', String.Other, 'cb-string'),
(r'(q|qq|qw|qr|qx)\(', String.Other, 'rb-string'),
(r'(q|qq|qw|qr|qx)\[', String.Other, 'sb-string'),
(r'(q|qq|qw|qr|qx)\<', String.Other, 'lt-string'),
(r'(q|qq|qw|qr|qx)([\W_])(.|\n)*?\2', String.Other),
(r'(package)(\s+)([a-zA-Z_]\w*(?:::[a-zA-Z_]\w*)*)',
bygroups(Keyword, Whitespace, Name.Namespace)),
(r'(use|require|no)(\s+)([a-zA-Z_]\w*(?:::[a-zA-Z_]\w*)*)',
bygroups(Keyword, Whitespace, Name.Namespace)),
(r'(sub)(\s+)', bygroups(Keyword, Whitespace), 'funcname'),
(words((
'no', 'package', 'require', 'use'), suffix=r'\b'),
Keyword),
(r'(\[\]|\*\*|::|<<|>>|>=|<=>|<=|={3}|!=|=~|'
r'!~|&&?|\|\||\.{1,3})', Operator),
(r'[-+/*%=<>&^|!\\~]=?', Operator),
(r'[()\[\]:;,<>/?{}]', Punctuation), # yes, there's no shortage
# of punctuation in Perl!
(r'(?=\w)', Name, 'name'),
],
'format': [
(r'\.\n', String.Interpol, '#pop'),
(r'[^\n]*\n', String.Interpol),
],
'varname': [
(r'\s+', Whitespace),
(r'\{', Punctuation, '#pop'), # hash syntax?
(r'\)|,', Punctuation, '#pop'), # argument specifier
(r'\w+::', Name.Namespace),
(r'[\w:]+', Name.Variable, '#pop'),
],
'name': [
(r'[a-zA-Z_]\w*(::[a-zA-Z_]\w*)*(::)?(?=\s*->)', Name.Namespace, '#pop'),
(r'[a-zA-Z_]\w*(::[a-zA-Z_]\w*)*::', Name.Namespace, '#pop'),
(r'[\w:]+', Name, '#pop'),
(r'[A-Z_]+(?=\W)', Name.Constant, '#pop'),
(r'(?=\W)', Text, '#pop'),
],
'funcname': [
(r'[a-zA-Z_]\w*[!?]?', Name.Function),
(r'\s+', Whitespace),
# argument declaration
(r'(\([$@%]*\))(\s*)', bygroups(Punctuation, Whitespace)),
(r';', Punctuation, '#pop'),
(r'.*?\{', Punctuation, '#pop'),
],
'cb-string': [
(r'\\[{}\\]', String.Other),
(r'\\', String.Other),
(r'\{', String.Other, 'cb-string'),
(r'\}', String.Other, '#pop'),
(r'[^{}\\]+', String.Other)
],
'rb-string': [
(r'\\[()\\]', String.Other),
(r'\\', String.Other),
(r'\(', String.Other, 'rb-string'),
(r'\)', String.Other, '#pop'),
(r'[^()]+', String.Other)
],
'sb-string': [
(r'\\[\[\]\\]', String.Other),
(r'\\', String.Other),
(r'\[', String.Other, 'sb-string'),
(r'\]', String.Other, '#pop'),
(r'[^\[\]]+', String.Other)
],
'lt-string': [
(r'\\[<>\\]', String.Other),
(r'\\', String.Other),
(r'\<', String.Other, 'lt-string'),
(r'\>', String.Other, '#pop'),
(r'[^<>]+', String.Other)
],
'end-part': [
(r'.+', Comment.Preproc, '#pop')
]
}
def analyse_text(text):
if shebang_matches(text, r'perl'):
return True
result = 0
if re.search(r'(?:my|our)\s+[$@%(]', text):
result += 0.9
if ':=' in text:
# := is not valid Perl, but it appears in unicon, so we should
# become less confident if we think we found Perl with :=
result /= 2
return result
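The scoring above can be exercised in isolation. This is a standalone sketch of the same heuristic (shebang handling omitted, so it is not a drop-in replacement for `analyse_text`):

```python
import re


def perl_confidence(text):
    # Mirrors the scoring in PerlLexer.analyse_text: a my/our declaration of
    # a sigiled variable is strong evidence of Perl; ':=' (valid in Icon and
    # unicon but not Perl) halves whatever confidence was accumulated.
    result = 0
    if re.search(r'(?:my|our)\s+[$@%(]', text):
        result += 0.9
    if ':=' in text:
        result /= 2
    return result


perl_confidence('my $x = 1;')   # strong Perl signal
perl_confidence('x := y + 1')   # no Perl markers at all
```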
class Perl6Lexer(ExtendedRegexLexer):
"""
For Raku (a.k.a. Perl 6) source code.
"""
name = 'Perl6'
url = 'https://www.raku.org'
aliases = ['perl6', 'pl6', 'raku']
filenames = ['*.pl', '*.pm', '*.nqp', '*.p6', '*.6pl', '*.p6l', '*.pl6',
'*.6pm', '*.p6m', '*.pm6', '*.t', '*.raku', '*.rakumod',
'*.rakutest', '*.rakudoc']
mimetypes = ['text/x-perl6', 'application/x-perl6']
version_added = '2.0'
flags = re.MULTILINE | re.DOTALL
PERL6_IDENTIFIER_RANGE = r"['\w:-]"
PERL6_KEYWORDS = (
#Phasers
'BEGIN','CATCH','CHECK','CLOSE','CONTROL','DOC','END','ENTER','FIRST',
'INIT','KEEP','LAST','LEAVE','NEXT','POST','PRE','QUIT','UNDO',
#Keywords
'anon','augment','but','class','constant','default','does','else',
'elsif','enum','for','gather','given','grammar','has','if','import',
'is','let','loop','made','make','method','module','multi','my','need',
'orwith','our','proceed','proto','repeat','require','return',
'return-rw','returns','role','rule','state','sub','submethod','subset',
'succeed','supersede','token','try','unit','unless','until','use',
'when','while','with','without',
#Traits
'export','native','repr','required','rw','symbol',
)
PERL6_BUILTINS = (
'ACCEPTS','abs','abs2rel','absolute','accept','accessed','acos',
'acosec','acosech','acosh','acotan','acotanh','acquire','act','action',
'actions','add','add_attribute','add_enum_value','add_fallback',
'add_method','add_parent','add_private_method','add_role','add_trustee',
'adverb','after','all','allocate','allof','allowed','alternative-names',
'annotations','antipair','antipairs','any','anyof','app_lifetime',
'append','arch','archname','args','arity','Array','asec','asech','asin',
'asinh','ASSIGN-KEY','ASSIGN-POS','assuming','ast','at','atan','atan2',
'atanh','AT-KEY','atomic-assign','atomic-dec-fetch','atomic-fetch',
'atomic-fetch-add','atomic-fetch-dec','atomic-fetch-inc',
'atomic-fetch-sub','atomic-inc-fetch','AT-POS','attributes','auth',
'await','backtrace','Bag','BagHash','bail-out','base','basename',
'base-repeating','batch','BIND-KEY','BIND-POS','bind-stderr',
'bind-stdin','bind-stdout','bind-udp','bits','bless','block','Bool',
'bool-only','bounds','break','Bridge','broken','BUILD','build-date',
'bytes','cache','callframe','calling-package','CALL-ME','callsame',
'callwith','can','cancel','candidates','cando','can-ok','canonpath',
'caps','caption','Capture','cas','catdir','categorize','categorize-list',
'catfile','catpath','cause','ceiling','cglobal','changed','Channel',
'chars','chdir','child','child-name','child-typename','chmod','chomp',
'chop','chr','chrs','chunks','cis','classify','classify-list','cleanup',
'clone','close','closed','close-stdin','cmp-ok','code','codes','collate',
'column','comb','combinations','command','comment','compiler','Complex',
'compose','compose_type','composer','condition','config',
'configure_destroy','configure_type_checking','conj','connect',
'constraints','construct','contains','contents','copy','cos','cosec',
'cosech','cosh','cotan','cotanh','count','count-only','cpu-cores',
'cpu-usage','CREATE','create_type','cross','cue','curdir','curupdir','d',
'Date','DateTime','day','daycount','day-of-month','day-of-week',
'day-of-year','days-in-month','declaration','decode','decoder','deepmap',
'default','defined','DEFINITE','delayed','DELETE-KEY','DELETE-POS',
'denominator','desc','DESTROY','destroyers','devnull','diag',
'did-you-mean','die','dies-ok','dir','dirname','dir-sep','DISTROnames',
'do','does','does-ok','done','done-testing','duckmap','dynamic','e',
'eager','earlier','elems','emit','enclosing','encode','encoder',
'encoding','end','ends-with','enum_from_value','enum_value_list',
'enum_values','enums','eof','EVAL','eval-dies-ok','EVALFILE',
'eval-lives-ok','exception','excludes-max','excludes-min','EXISTS-KEY',
'EXISTS-POS','exit','exitcode','exp','expected','explicitly-manage',
'expmod','extension','f','fail','fails-like','fc','feature','file',
'filename','find_method','find_method_qualified','finish','first','flat',
'flatmap','flip','floor','flunk','flush','fmt','format','formatter',
'freeze','from','from-list','from-loop','from-posix','full',
'full-barrier','get','get_value','getc','gist','got','grab','grabpairs',
'grep','handle','handled','handles','hardware','has_accessor','Hash',
'head','headers','hh-mm-ss','hidden','hides','hour','how','hyper','id',
'illegal','im','in','indent','index','indices','indir','infinite',
'infix','infix:<+>','infix:<->','install_method_cache','Instant',
'instead','Int','int-bounds','interval','in-timezone','invalid-str',
'invert','invocant','IO','IO::Notification.watch-path','is_trusted',
'is_type','isa','is-absolute','isa-ok','is-approx','is-deeply',
'is-hidden','is-initial-thread','is-int','is-lazy','is-leap-year',
'isNaN','isnt','is-prime','is-relative','is-routine','is-setting',
'is-win','item','iterator','join','keep','kept','KERNELnames','key',
'keyof','keys','kill','kv','kxxv','l','lang','last','lastcall','later',
'lazy','lc','leading','level','like','line','lines','link','List',
'listen','live','lives-ok','local','lock','log','log10','lookup','lsb',
'made','MAIN','make','Map','match','max','maxpairs','merge','message',
'method','method_table','methods','migrate','min','minmax','minpairs',
'minute','misplaced','Mix','MixHash','mkdir','mode','modified','month',
'move','mro','msb','multi','multiness','my','name','named','named_names',
'narrow','nativecast','native-descriptor','nativesizeof','new','new_type',
'new-from-daycount','new-from-pairs','next','nextcallee','next-handle',
'nextsame','nextwith','NFC','NFD','NFKC','NFKD','nl-in','nl-out',
'nodemap','nok','none','norm','not','note','now','nude','Num',
'numerator','Numeric','of','offset','offset-in-hours','offset-in-minutes',
'ok','old','on-close','one','on-switch','open','opened','operation',
'optional','ord','ords','orig','os-error','osname','out-buffer','pack',
'package','package-kind','package-name','packages','pair','pairs',
'pairup','parameter','params','parent','parent-name','parents','parse',
'parse-base','parsefile','parse-names','parts','pass','path','path-sep',
'payload','peer-host','peer-port','periods','perl','permutations','phaser',
'pick','pickpairs','pid','placeholder','plan','plus','polar','poll',
'polymod','pop','pos','positional','posix','postfix','postmatch',
'precomp-ext','precomp-target','pred','prefix','prematch','prepend',
'print','printf','print-nl','print-to','private','private_method_table',
'proc','produce','Promise','prompt','protect','pull-one','push',
'push-all','push-at-least','push-exactly','push-until-lazy','put',
'qualifier-type','quit','r','race','radix','rand','range','Rat','raw',
're','read','readchars','readonly','ready','Real','reallocate','reals',
'reason','rebless','receive','recv','redispatcher','redo','reduce',
'rel2abs','relative','release','rename','repeated','replacement',
'report','reserved','resolve','restore','result','resume','rethrow',
'reverse','right','rindex','rmdir','role','roles_to_compose','rolish',
'roll','rootdir','roots','rotate','rotor','round','roundrobin',
'routine-type','run','rwx','s','samecase','samemark','samewith','say',
'schedule-on','scheduler','scope','sec','sech','second','seek','self',
'send','Set','set_hidden','set_name','set_package','set_rw','set_value',
'SetHash','set-instruments','setup_finalization','shape','share','shell',
'shift','sibling','sigil','sign','signal','signals','signature','sin',
'sinh','sink','sink-all','skip','skip-at-least','skip-at-least-pull-one',
'skip-one','skip-rest','sleep','sleep-timer','sleep-until','Slip','slurp',
'slurp-rest','slurpy','snap','snapper','so','socket-host','socket-port',
'sort','source','source-package','spawn','SPEC','splice','split',
'splitdir','splitpath','sprintf','spurt','sqrt','squish','srand','stable',
'start','started','starts-with','status','stderr','stdout','Str',
'sub_signature','subbuf','subbuf-rw','subname','subparse','subst',
'subst-mutate','substr','substr-eq','substr-rw','subtest','succ','sum',
'Supply','symlink','t','tail','take','take-rw','tan','tanh','tap',
'target','target-name','tc','tclc','tell','then','throttle','throw',
'throws-like','timezone','tmpdir','to','today','todo','toggle','to-posix',
'total','trailing','trans','tree','trim','trim-leading','trim-trailing',
'truncate','truncated-to','trusts','try_acquire','trying','twigil','type',
'type_captures','typename','uc','udp','uncaught_handler','unimatch',
'uniname','uninames','uniparse','uniprop','uniprops','unique','unival',
'univals','unlike','unlink','unlock','unpack','unpolar','unshift',
'unwrap','updir','USAGE','use-ok','utc','val','value','values','VAR',
'variable','verbose-config','version','VMnames','volume','vow','w','wait',
'warn','watch','watch-path','week','weekday-of-month','week-number',
'week-year','WHAT','when','WHERE','WHEREFORE','WHICH','WHO',
'whole-second','WHY','wordcase','words','workaround','wrap','write',
'write-to','x','yada','year','yield','yyyy-mm-dd','z','zip','zip-latest',
)
PERL6_BUILTIN_CLASSES = (
#Booleans
'False','True',
#Classes
'Any','Array','Associative','AST','atomicint','Attribute','Backtrace',
'Backtrace::Frame','Bag','Baggy','BagHash','Blob','Block','Bool','Buf',
'Callable','CallFrame','Cancellation','Capture','CArray','Channel','Code',
'compiler','Complex','ComplexStr','Cool','CurrentThreadScheduler',
'Cursor','Date','Dateish','DateTime','Distro','Duration','Encoding',
'Exception','Failure','FatRat','Grammar','Hash','HyperWhatever','Instant',
'Int','int16','int32','int64','int8','IntStr','IO','IO::ArgFiles',
'IO::CatHandle','IO::Handle','IO::Notification','IO::Path',
'IO::Path::Cygwin','IO::Path::QNX','IO::Path::Unix','IO::Path::Win32',
'IO::Pipe','IO::Socket','IO::Socket::Async','IO::Socket::INET','IO::Spec',
'IO::Spec::Cygwin','IO::Spec::QNX','IO::Spec::Unix','IO::Spec::Win32',
'IO::Special','Iterable','Iterator','Junction','Kernel','Label','List',
'Lock','Lock::Async','long','longlong','Macro','Map','Match',
'Metamodel::AttributeContainer','Metamodel::C3MRO','Metamodel::ClassHOW',
'Metamodel::EnumHOW','Metamodel::Finalization','Metamodel::MethodContainer',
'Metamodel::MROBasedMethodDispatch','Metamodel::MultipleInheritance',
'Metamodel::Naming','Metamodel::Primitives','Metamodel::PrivateMethodContainer',
'Metamodel::RoleContainer','Metamodel::Trusting','Method','Mix','MixHash',
'Mixy','Mu','NFC','NFD','NFKC','NFKD','Nil','Num','num32','num64',
'Numeric','NumStr','ObjAt','Order','Pair','Parameter','Perl','Pod::Block',
'Pod::Block::Code','Pod::Block::Comment','Pod::Block::Declarator',
'Pod::Block::Named','Pod::Block::Para','Pod::Block::Table','Pod::Heading',
'Pod::Item','Pointer','Positional','PositionalBindFailover','Proc',
'Proc::Async','Promise','Proxy','PseudoStash','QuantHash','Range','Rat',
'Rational','RatStr','Real','Regex','Routine','Scalar','Scheduler',
'Semaphore','Seq','Set','SetHash','Setty','Signature','size_t','Slip',
'Stash','Str','StrDistance','Stringy','Sub','Submethod','Supplier',
'Supplier::Preserving','Supply','Systemic','Tap','Telemetry',
'Telemetry::Instrument::Thread','Telemetry::Instrument::Usage',
'Telemetry::Period','Telemetry::Sampler','Thread','ThreadPoolScheduler',
'UInt','uint16','uint32','uint64','uint8','Uni','utf8','Variable',
'Version','VM','Whatever','WhateverCode','WrapHandle'
)
PERL6_OPERATORS = (
'X', 'Z', 'after', 'also', 'and', 'andthen', 'before', 'cmp', 'div',
'eq', 'eqv', 'extra', 'ff', 'fff', 'ge', 'gt', 'le', 'leg', 'lt', 'm',
'mm', 'mod', 'ne', 'or', 'orelse', 'rx', 's', 'tr', 'x', 'xor', 'xx',
'++', '--', '**', '!', '+', '-', '~', '?', '|', '||', '+^', '~^', '?^',
'^', '*', '/', '%', '%%', '+&', '+<', '+>', '~&', '~<', '~>', '?&',
'gcd', 'lcm', '+', '-', '+|', '+^', '~|', '~^', '?|', '?^',
'~', '&', '^', 'but', 'does', '<=>', '..', '..^', '^..', '^..^',
'!=', '==', '<', '<=', '>', '>=', '~~', '===', '!eqv',
'&&', '||', '^^', '//', 'min', 'max', '??', '!!', 'ff', 'fff', 'so',
'not', '<==', '==>', '<<==', '==>>','unicmp',
)
# Perl 6 has a *lot* of possible bracketing characters
# this list was lifted from STD.pm6 (https://github.com/perl6/std)
PERL6_BRACKETS = {
'\u0028': '\u0029', '\u003c': '\u003e', '\u005b': '\u005d',
'\u007b': '\u007d', '\u00ab': '\u00bb', '\u0f3a': '\u0f3b',
'\u0f3c': '\u0f3d', '\u169b': '\u169c', '\u2018': '\u2019',
'\u201a': '\u2019', '\u201b': '\u2019', '\u201c': '\u201d',
'\u201e': '\u201d', '\u201f': '\u201d', '\u2039': '\u203a',
'\u2045': '\u2046', '\u207d': '\u207e', '\u208d': '\u208e',
'\u2208': '\u220b', '\u2209': '\u220c', '\u220a': '\u220d',
'\u2215': '\u29f5', '\u223c': '\u223d', '\u2243': '\u22cd',
'\u2252': '\u2253', '\u2254': '\u2255', '\u2264': '\u2265',
'\u2266': '\u2267', '\u2268': '\u2269', '\u226a': '\u226b',
'\u226e': '\u226f', '\u2270': '\u2271', '\u2272': '\u2273',
'\u2274': '\u2275', '\u2276': '\u2277', '\u2278': '\u2279',
'\u227a': '\u227b', '\u227c': '\u227d', '\u227e': '\u227f',
'\u2280': '\u2281', '\u2282': '\u2283', '\u2284': '\u2285',
'\u2286': '\u2287', '\u2288': '\u2289', '\u228a': '\u228b',
'\u228f': '\u2290', '\u2291': '\u2292', '\u2298': '\u29b8',
'\u22a2': '\u22a3', '\u22a6': '\u2ade', '\u22a8': '\u2ae4',
'\u22a9': '\u2ae3', '\u22ab': '\u2ae5', '\u22b0': '\u22b1',
'\u22b2': '\u22b3', '\u22b4': '\u22b5', '\u22b6': '\u22b7',
'\u22c9': '\u22ca', '\u22cb': '\u22cc', '\u22d0': '\u22d1',
'\u22d6': '\u22d7', '\u22d8': '\u22d9', '\u22da': '\u22db',
'\u22dc': '\u22dd', '\u22de': '\u22df', '\u22e0': '\u22e1',
'\u22e2': '\u22e3', '\u22e4': '\u22e5', '\u22e6': '\u22e7',
'\u22e8': '\u22e9', '\u22ea': '\u22eb', '\u22ec': '\u22ed',
'\u22f0': '\u22f1', '\u22f2': '\u22fa', '\u22f3': '\u22fb',
'\u22f4': '\u22fc', '\u22f6': '\u22fd', '\u22f7': '\u22fe',
'\u2308': '\u2309', '\u230a': '\u230b', '\u2329': '\u232a',
'\u23b4': '\u23b5', '\u2768': '\u2769', '\u276a': '\u276b',
'\u276c': '\u276d', '\u276e': '\u276f', '\u2770': '\u2771',
'\u2772': '\u2773', '\u2774': '\u2775', '\u27c3': '\u27c4',
'\u27c5': '\u27c6', '\u27d5': '\u27d6', '\u27dd': '\u27de',
'\u27e2': '\u27e3', '\u27e4': '\u27e5', '\u27e6': '\u27e7',
'\u27e8': '\u27e9', '\u27ea': '\u27eb', '\u2983': '\u2984',
'\u2985': '\u2986', '\u2987': '\u2988', '\u2989': '\u298a',
'\u298b': '\u298c', '\u298d': '\u298e', '\u298f': '\u2990',
'\u2991': '\u2992', '\u2993': '\u2994', '\u2995': '\u2996',
'\u2997': '\u2998', '\u29c0': '\u29c1', '\u29c4': '\u29c5',
'\u29cf': '\u29d0', '\u29d1': '\u29d2', '\u29d4': '\u29d5',
'\u29d8': '\u29d9', '\u29da': '\u29db', '\u29f8': '\u29f9',
'\u29fc': '\u29fd', '\u2a2b': '\u2a2c', '\u2a2d': '\u2a2e',
'\u2a34': '\u2a35', '\u2a3c': '\u2a3d', '\u2a64': '\u2a65',
'\u2a79': '\u2a7a', '\u2a7d': '\u2a7e', '\u2a7f': '\u2a80',
'\u2a81': '\u2a82', '\u2a83': '\u2a84', '\u2a8b': '\u2a8c',
'\u2a91': '\u2a92', '\u2a93': '\u2a94', '\u2a95': '\u2a96',
'\u2a97': '\u2a98', '\u2a99': '\u2a9a', '\u2a9b': '\u2a9c',
'\u2aa1': '\u2aa2', '\u2aa6': '\u2aa7', '\u2aa8': '\u2aa9',
'\u2aaa': '\u2aab', '\u2aac': '\u2aad', '\u2aaf': '\u2ab0',
'\u2ab3': '\u2ab4', '\u2abb': '\u2abc', '\u2abd': '\u2abe',
'\u2abf': '\u2ac0', '\u2ac1': '\u2ac2', '\u2ac3': '\u2ac4',
'\u2ac5': '\u2ac6', '\u2acd': '\u2ace', '\u2acf': '\u2ad0',
'\u2ad1': '\u2ad2', '\u2ad3': '\u2ad4', '\u2ad5': '\u2ad6',
'\u2aec': '\u2aed', '\u2af7': '\u2af8', '\u2af9': '\u2afa',
'\u2e02': '\u2e03', '\u2e04': '\u2e05', '\u2e09': '\u2e0a',
'\u2e0c': '\u2e0d', '\u2e1c': '\u2e1d', '\u2e20': '\u2e21',
'\u3008': '\u3009', '\u300a': '\u300b', '\u300c': '\u300d',
'\u300e': '\u300f', '\u3010': '\u3011', '\u3014': '\u3015',
'\u3016': '\u3017', '\u3018': '\u3019', '\u301a': '\u301b',
'\u301d': '\u301e', '\ufd3e': '\ufd3f', '\ufe17': '\ufe18',
'\ufe35': '\ufe36', '\ufe37': '\ufe38', '\ufe39': '\ufe3a',
'\ufe3b': '\ufe3c', '\ufe3d': '\ufe3e', '\ufe3f': '\ufe40',
'\ufe41': '\ufe42', '\ufe43': '\ufe44', '\ufe47': '\ufe48',
'\ufe59': '\ufe5a', '\ufe5b': '\ufe5c', '\ufe5d': '\ufe5e',
'\uff08': '\uff09', '\uff1c': '\uff1e', '\uff3b': '\uff3d',
'\uff5b': '\uff5d', '\uff5f': '\uff60', '\uff62': '\uff63',
}
def _build_word_match(words, boundary_regex_fragment=None, prefix='', suffix=''):
if boundary_regex_fragment is None:
return r'\b(' + prefix + r'|'.join(re.escape(x) for x in words) + \
suffix + r')\b'
else:
return r'(?<!' + boundary_regex_fragment + r')' + prefix + r'(' + \
r'|'.join(re.escape(x) for x in words) + r')' + suffix + r'(?!' + \
boundary_regex_fragment + r')'
def brackets_callback(token_class):
def callback(lexer, match, context):
groups = match.groupdict()
opening_chars = groups['delimiter']
n_chars = len(opening_chars)
adverbs = groups.get('adverbs')
closer = Perl6Lexer.PERL6_BRACKETS.get(opening_chars[0])
text = context.text
if closer is None: # it's not a mirrored character, which means we
# just need to look for the next occurrence
end_pos = text.find(opening_chars, match.start('delimiter') + n_chars)
else: # we need to look for the corresponding closing character,
# keep nesting in mind
closing_chars = closer * n_chars
nesting_level = 1
search_pos = match.start('delimiter')
while nesting_level > 0:
next_open_pos = text.find(opening_chars, search_pos + n_chars)
next_close_pos = text.find(closing_chars, search_pos + n_chars)
if next_close_pos == -1:
next_close_pos = len(text)
nesting_level = 0
elif next_open_pos != -1 and next_open_pos < next_close_pos:
nesting_level += 1
search_pos = next_open_pos
else: # next_close_pos < next_open_pos
nesting_level -= 1
search_pos = next_close_pos
end_pos = next_close_pos
if end_pos < 0: # if we didn't find a closer, just highlight the
# rest of the text in this class
end_pos = len(text)
if adverbs is not None and re.search(r':to\b', adverbs):
heredoc_terminator = text[match.start('delimiter') + n_chars:end_pos]
end_heredoc = re.search(r'^\s*' + re.escape(heredoc_terminator) +
r'\s*$', text[end_pos:], re.MULTILINE)
if end_heredoc:
end_pos += end_heredoc.end()
else:
end_pos = len(text)
yield match.start(), token_class, text[match.start():end_pos + n_chars]
context.pos = end_pos + n_chars
return callback
def opening_brace_callback(lexer, match, context):
stack = context.stack
yield match.start(), Text, context.text[match.start():match.end()]
context.pos = match.end()
# if we encounter an opening brace and we're one level
# below a token state, it means we need to increment
# the nesting level for braces so we know later when
# we should return to the token rules.
if len(stack) > 2 and stack[-2] == 'token':
context.perl6_token_nesting_level += 1
def closing_brace_callback(lexer, match, context):
stack = context.stack
yield match.start(), Text, context.text[match.start():match.end()]
context.pos = match.end()
# if we encounter a free closing brace and we're one level
# below a token state, it means we need to check the nesting
# level to see if we need to return to the token state.
if len(stack) > 2 and stack[-2] == 'token':
context.perl6_token_nesting_level -= 1
if context.perl6_token_nesting_level == 0:
stack.pop()
def embedded_perl6_callback(lexer, match, context):
context.perl6_token_nesting_level = 1
yield match.start(), Text, context.text[match.start():match.end()]
context.pos = match.end()
context.stack.append('root')
# If you're modifying these rules, be careful if you need to process '{' or '}'
# characters. We have special logic for processing these characters (due to the fact
# that you can nest Perl 6 code in regex blocks), so if you need to process one of
# them, make sure you also process the corresponding one!
tokens = {
'common': [
(r'#[`|=](?P<delimiter>(?P<first_char>[' + ''.join(PERL6_BRACKETS) + r'])(?P=first_char)*)',
brackets_callback(Comment.Multiline)),
(r'#[^\n]*$', Comment.Single),
(r'^(\s*)=begin\s+(\w+)\b.*?^\1=end\s+\2', Comment.Multiline),
(r'^(\s*)=for.*?\n\s*?\n', Comment.Multiline),
(r'^=.*?\n\s*?\n', Comment.Multiline),
(r'(regex|token|rule)(\s*' + PERL6_IDENTIFIER_RANGE + '+:sym)',
bygroups(Keyword, Name), 'token-sym-brackets'),
(r'(regex|token|rule)(?!' + PERL6_IDENTIFIER_RANGE + r')(\s*' + PERL6_IDENTIFIER_RANGE + '+)?',
bygroups(Keyword, Name), 'pre-token'),
# deal with a special case in the Perl 6 grammar (role q { ... })
(r'(role)(\s+)(q)(\s*)', bygroups(Keyword, Whitespace, Name, Whitespace)),
(_build_word_match(PERL6_KEYWORDS, PERL6_IDENTIFIER_RANGE), Keyword),
(_build_word_match(PERL6_BUILTIN_CLASSES, PERL6_IDENTIFIER_RANGE, suffix='(?::[UD])?'),
Name.Builtin),
(_build_word_match(PERL6_BUILTINS, PERL6_IDENTIFIER_RANGE), Name.Builtin),
# copied from PerlLexer
(r'[$@%&][.^:?=!~]?' + PERL6_IDENTIFIER_RANGE + '+(?:<<.*?>>|<.*?>|«.*?»)*',
Name.Variable),
(r'\$[!/](?:<<.*?>>|<.*?>|«.*?»)*', Name.Variable.Global),
(r'::\?\w+', Name.Variable.Global),
(r'[$@%&]\*' + PERL6_IDENTIFIER_RANGE + '+(?:<<.*?>>|<.*?>|«.*?»)*',
Name.Variable.Global),
(r'\$(?:<.*?>)+', Name.Variable),
(r'(?:q|qq|Q)[a-zA-Z]?\s*(?P<adverbs>:[\w\s:]+)?\s*(?P<delimiter>(?P<first_char>[^0-9a-zA-Z:\s])'
r'(?P=first_char)*)', brackets_callback(String)),
# copied from PerlLexer
(r'0_?[0-7]+(_[0-7]+)*', Number.Oct),
(r'0x[0-9A-Fa-f]+(_[0-9A-Fa-f]+)*', Number.Hex),
(r'0b[01]+(_[01]+)*', Number.Bin),
(r'(?i)(\d*(_\d*)*\.\d+(_\d*)*|\d+(_\d*)*\.\d+(_\d*)*)(e[+-]?\d+)?',
Number.Float),
(r'(?i)\d+(_\d*)*e[+-]?\d+(_\d*)*', Number.Float),
(r'\d+(_\d+)*', Number.Integer),
(r'(?<=~~)\s*/(?:\\\\|\\/|.)*?/', String.Regex),
(r'(?<=[=(,])\s*/(?:\\\\|\\/|.)*?/', String.Regex),
(r'm\w+(?=\()', Name),
(r'(?:m|ms|rx)\s*(?P<adverbs>:[\w\s:]+)?\s*(?P<delimiter>(?P<first_char>[^\w:\s])'
r'(?P=first_char)*)', brackets_callback(String.Regex)),
(r'(?:s|ss|tr)\s*(?::[\w\s:]+)?\s*/(?:\\\\|\\/|.)*?/(?:\\\\|\\/|.)*?/',
String.Regex),
(r'<[^\s=].*?\S>', String),
(_build_word_match(PERL6_OPERATORS), Operator),
(r'\w' + PERL6_IDENTIFIER_RANGE + '*', Name),
(r"'(\\\\|\\[^\\]|[^'\\])*'", String),
(r'"(\\\\|\\[^\\]|[^"\\])*"', String),
],
'root': [
include('common'),
(r'\{', opening_brace_callback),
(r'\}', closing_brace_callback),
(r'.+?', Text),
],
'pre-token': [
include('common'),
(r'\{', Text, ('#pop', 'token')),
(r'.+?', Text),
],
'token-sym-brackets': [
(r'(?P<delimiter>(?P<first_char>[' + ''.join(PERL6_BRACKETS) + '])(?P=first_char)*)',
brackets_callback(Name), ('#pop', 'pre-token')),
default(('#pop', 'pre-token')),
],
'token': [
(r'\}', Text, '#pop'),
(r'(?<=:)(?:my|our|state|constant|temp|let).*?;', using(this)),
# make sure that quotes in character classes aren't treated as strings
(r'<(?:[-!?+.]\s*)?\[.*?\]>', String.Regex),
# make sure that '#' characters in quotes aren't treated as comments
(r"(?<!\\)'(\\\\|\\[^\\]|[^'\\])*'", String.Regex),
(r'(?<!\\)"(\\\\|\\[^\\]|[^"\\])*"', String.Regex),
(r'#.*?$', Comment.Single),
(r'\{', embedded_perl6_callback),
('.+?', String.Regex),
],
}
def analyse_text(text):
def strip_pod(lines):
in_pod = False
stripped_lines = []
for line in lines:
if re.match(r'^=(?:end|cut)', line):
in_pod = False
elif re.match(r'^=\w+', line):
in_pod = True
elif not in_pod:
stripped_lines.append(line)
return stripped_lines
# XXX handle block comments
lines = text.splitlines()
lines = strip_pod(lines)
text = '\n'.join(lines)
if shebang_matches(text, r'perl6|rakudo|niecza|pugs'):
return True
saw_perl_decl = False
rating = False
# check for my/our/has declarations
if re.search(r"(?:my|our|has)\s+(?:" + Perl6Lexer.PERL6_IDENTIFIER_RANGE +
r"+\s+)?[$@%&(]", text):
rating = 0.8
saw_perl_decl = True
for line in lines:
line = re.sub('#.*', '', line)
if re.match(r'^\s*$', line):
continue
# match v6; use v6; use v6.0; use v6.0.0;
if re.match(r'^\s*(?:use\s+)?v6(?:\.\d(?:\.\d)?)?;', line):
return True
# match class, module, role, enum, grammar declarations
class_decl = re.match(r'^\s*(?:(?P<scope>my|our)\s+)?(?:module|class|role|enum|grammar)', line)
if class_decl:
if saw_perl_decl or class_decl.group('scope') is not None:
return True
rating = 0.05
continue
break
if ':=' in text:
# Same logic as above for PerlLexer
rating /= 2
return rating
def __init__(self, **options):
super().__init__(**options)
self.encoding = options.get('encoding', 'utf-8')
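The heart of `brackets_callback` in the Perl6 lexer above is a nesting-aware scan for the mirrored closing delimiter. A simplified, self-contained sketch of that scan (the real callback also handles multi-character delimiters, non-mirrored delimiters, and `:to` heredocs):

```python
def find_balanced_end(text, start, opener, closer):
    # Scan forward from the opening delimiter at `start`, tracking nesting
    # depth; return the index just past the matching closer. Like the lexer,
    # fall back to end-of-text when the input is unbalanced.
    depth = 0
    for i in range(start, len(text)):
        if text[i] == opener:
            depth += 1
        elif text[i] == closer:
            depth -= 1
            if depth == 0:
                return i + 1
    return len(text)


find_balanced_end('q{outer {inner} tail}', 1, '{', '}')  # spans the whole quoted body
find_balanced_end('q{never closed', 1, '{', '}')         # falls back to end of text
```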
|
{
"filename": "scalars.py",
"repo_name": "numpy/numpy",
"repo_path": "numpy_extracted/numpy-main/numpy/typing/tests/data/pass/scalars.py",
"type": "Python"
}
|
import datetime as dt

import pytest

import numpy as np

b = np.bool()
b_ = np.bool_()
u8 = np.uint64()
i8 = np.int64()
f8 = np.float64()
c16 = np.complex128()
U = np.str_()
S = np.bytes_()
# Construction
class D:
def __index__(self) -> int:
return 0
class C:
def __complex__(self) -> complex:
return 3j
class B:
def __int__(self) -> int:
return 4
class A:
def __float__(self) -> float:
return 4.0
np.complex64(3j)
np.complex64(A())
np.complex64(C())
np.complex128(3j)
np.complex128(C())
np.complex128(None)
np.complex64("1.2")
np.complex128(b"2j")
np.int8(4)
np.int16(3.4)
np.int32(4)
np.int64(-1)
np.uint8(B())
np.uint32()
np.int32("1")
np.int64(b"2")
np.float16(A())
np.float32(16)
np.float64(3.0)
np.float64(None)
np.float32("1")
np.float16(b"2.5")
np.uint64(D())
np.float32(D())
np.complex64(D())
np.bytes_(b"hello")
np.bytes_("hello", 'utf-8')
np.bytes_("hello", encoding='utf-8')
np.str_("hello")
np.str_(b"hello", 'utf-8')
np.str_(b"hello", encoding='utf-8')
# Array-ish semantics
np.int8().real
np.int16().imag
np.int32().data
np.int64().flags
np.uint8().itemsize * 2
np.uint16().ndim + 1
np.uint32().strides
np.uint64().shape
# Time structures
np.datetime64()
np.datetime64(0, "D")
np.datetime64(0, b"D")
np.datetime64(0, ('ms', 3))
np.datetime64("2019")
np.datetime64(b"2019")
np.datetime64("2019", "D")
np.datetime64(np.datetime64())
np.datetime64(dt.datetime(2000, 5, 3))
np.datetime64(dt.date(2000, 5, 3))
np.datetime64(None)
np.datetime64(None, "D")
np.timedelta64()
np.timedelta64(0)
np.timedelta64(0, "D")
np.timedelta64(0, ('ms', 3))
np.timedelta64(0, b"D")
np.timedelta64("3")
np.timedelta64(b"5")
np.timedelta64(np.timedelta64(2))
np.timedelta64(dt.timedelta(2))
np.timedelta64(None)
np.timedelta64(None, "D")
np.void(1)
np.void(np.int64(1))
np.void(True)
np.void(np.bool(True))
np.void(b"test")
np.void(np.bytes_("test"))
np.void(object(), [("a", "O"), ("b", "O")])
np.void(object(), dtype=[("a", "O"), ("b", "O")])
# Protocols
i8 = np.int64()
u8 = np.uint64()
f8 = np.float64()
c16 = np.complex128()
b = np.bool()
td = np.timedelta64()
U = np.str_("1")
S = np.bytes_("1")
AR = np.array(1, dtype=np.float64)
int(i8)
int(u8)
int(f8)
int(b)
int(td)
int(U)
int(S)
int(AR)
with pytest.warns(np.exceptions.ComplexWarning):
int(c16)
float(i8)
float(u8)
float(f8)
float(b_)
float(td)
float(U)
float(S)
float(AR)
with pytest.warns(np.exceptions.ComplexWarning):
float(c16)
complex(i8)
complex(u8)
complex(f8)
complex(c16)
complex(b_)
complex(td)
complex(U)
complex(AR)
# Misc
c16.dtype
c16.real
c16.imag
c16.real.real
c16.real.imag
c16.ndim
c16.size
c16.itemsize
c16.shape
c16.strides
c16.squeeze()
c16.byteswap()
c16.transpose()
# Aliases
np.byte()
np.short()
np.intc()
np.intp()
np.int_()
np.longlong()
np.ubyte()
np.ushort()
np.uintc()
np.uintp()
np.uint()
np.ulonglong()
np.half()
np.single()
np.double()
np.longdouble()
np.csingle()
np.cdouble()
np.clongdouble()
b.item()
i8.item()
u8.item()
f8.item()
c16.item()
U.item()
S.item()
b.tolist()
i8.tolist()
u8.tolist()
f8.tolist()
c16.tolist()
U.tolist()
S.tolist()
b.ravel()
i8.ravel()
u8.ravel()
f8.ravel()
c16.ravel()
U.ravel()
S.ravel()
b.flatten()
i8.flatten()
u8.flatten()
f8.flatten()
c16.flatten()
U.flatten()
S.flatten()
b.reshape(1)
i8.reshape(1)
u8.reshape(1)
f8.reshape(1)
c16.reshape(1)
U.reshape(1)
S.reshape(1)
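The `A`/`B`/`C`/`D` helper classes defined under "# Construction" above exercise Python's numeric conversion protocols; the same dunder methods drive the plain built-ins, no NumPy required:

```python
import operator

# Plain-Python versions of the conversion protocols that the NumPy
# scalar constructors above rely on: __float__, __int__, __complex__,
# and __index__ (the latter marks a type as a true integer-like).
class A:
    def __float__(self) -> float:
        return 4.0

class B:
    def __int__(self) -> int:
        return 4

class C:
    def __complex__(self) -> complex:
        return 3j

class D:
    def __index__(self) -> int:
        return 0

print(float(A()))           # -> 4.0
print(int(B()))             # -> 4
print(complex(C()))         # -> 3j
print(operator.index(D()))  # -> 0
```

`operator.index` is the strict variant: it accepts `D` but rejects `A`, which is why `np.uint64(D())` passes the type tests while a float-only class would not.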
|
{
"filename": "evaluate_clusters.py",
"repo_name": "MGJvanGroeningen/gaia_oc_amd",
"repo_path": "gaia_oc_amd_extracted/gaia_oc_amd-main/gaia_oc_amd/evaluate_clusters.py",
"type": "Python"
}
|
import os
import argparse
import matplotlib
import pandas as pd
from gaia_oc_amd.io import load_cluster, load_sets, cluster_list
from gaia_oc_amd.data_preparation.cluster import Cluster
from gaia_oc_amd.candidate_evaluation.membership_probability import calculate_candidate_probs
from gaia_oc_amd.candidate_evaluation.diagnostics import tidal_radius
from gaia_oc_amd.candidate_evaluation.visualization import plot_density_profile, plot_mass_segregation_profile, \
plot_venn_diagram, plot_confusion_matrix, plot_sources, plot_sources_limits
PRETRAINED_MODEL_DIRS = {'v01': os.path.join(os.path.dirname(__file__),
'candidate_evaluation/pretrained_models/DS10_v01'),
'v02': os.path.join(os.path.dirname(__file__),
'candidate_evaluation/pretrained_models/DS10_v02')}
def evaluate_clusters(cluster_names, clusters_dir='./data/clusters', model_dir=PRETRAINED_MODEL_DIRS['v02'],
n_samples=100, size_support_set=10, hard_size_ss=True, new_members_label='this study',
candidate_prob_threshold=0.1, new_members_prob_threshold=0.8, use_tidal_radius=False,
use_comparison_tidal_radius=False, new_members_plot=True, comparison_plot=True,
additional_members_plot=False, missed_members_plot=False, venn_diagram_plot=False,
confusion_matrix_plot=False, density_profile_plot=False, mass_segregation_plot=False,
plot_stellar_field=True, plot_only=False, show_plots=False, save_plots=True, seed=42):
"""Main function for evaluating the membership status of cluster candidates. This function contains the following
steps:
- Load the model and the hyper parameters
- Evaluate the membership status of the candidates of a cluster
- Create a number of plots which give some insight in the new member distribution and compares to either a set
of training or comparison members.
Args:
cluster_names (str, list): 'Names of the open cluster(s) we want to build sets for. Can be a name or a file
with cluster names.'
clusters_dir (str): 'Directory where cluster data and results are saved.'
model_dir (str): 'Directory of the model, which contains its (hyper)parameters, that will be used
to evaluate the candidate sources.'
n_samples (int): 'Number of candidate samples to use for calculating the membership probability.'
size_support_set (int): 'The number of members in the support set.'
hard_size_ss (bool): 'When false, set the support set size to the number of available training members when the
former is larger than the latter.'
new_members_label (str): 'Label to use for indicating the training members in plots.'
candidate_prob_threshold (float): 'Minimum membership probability of candidates plotted in the comparison and
additional members plot.'
new_members_prob_threshold (float): 'Minimum membership probability of candidates plotted in the new members
plot.'
use_tidal_radius (bool): 'Whether to calculate the tidal radius of the cluster with the candidates and exclude
candidates outside the tidal radius from the plots.'
use_comparison_tidal_radius (bool): 'Whether to calculate the tidal radius of the cluster with the comparison
members and exclude comparison members outside the tidal radius from the plots.'
new_members_plot (bool): 'Whether to create a plot showing the candidates above a threshold probability.'
comparison_plot (bool): 'Whether to create a plot showing the candidates and comparison members above a
threshold probability.'
additional_members_plot (bool): 'Whether to create a plot showing the candidates above a threshold probability,
which are not present in the comparison members.'
missed_members_plot (bool): 'Whether to create a plot showing the comparison members, which are not present
among the candidates above a threshold probability.'
venn_diagram_plot (bool): 'Whether to create a plot showing a Venn diagram comparing the candidates and the
comparison members above 10%, 50% and 90% membership probability.'
confusion_matrix_plot (bool): 'Whether to create a plot showing how the candidate probability compares against
the probability given in the comparison. A 'confusion matrix' is created by using probability bins.'
density_profile_plot (bool): 'Whether to create a plot showing the density profile of the candidates above a
threshold probability and the comparison members.'
mass_segregation_plot (bool): 'Whether to create a plot showing the mass segregation of the candidates above a
threshold probability and the comparison members.'
plot_stellar_field (bool): 'Whether to add the stellar field to the sources plots.'
plot_only (bool): 'Whether to only create plots for the given clusters. This skips the (re)calculation of
candidate member probabilities.'
show_plots (bool): 'Whether to show the plots.'
save_plots (bool): 'Whether to save the plots.'
seed (int): 'The seed that determines the sampling of sources when determining membership probabilities.'
"""
cluster_names = cluster_list(cluster_names)
n_clusters = len(cluster_names)
print('Evaluating candidates for:', cluster_names)
print('Number of clusters:', n_clusters)
if not show_plots:
matplotlib.use('Agg')
for idx, cluster_name in enumerate(cluster_names):
print(' ')
print('Cluster:', cluster_name, f' ({idx + 1} / {n_clusters})')
# Define the cluster data/results directory
cluster_dir = os.path.join(clusters_dir, cluster_name)
if not plot_only:
calculate_candidate_probs(cluster_dir, model_dir, n_samples=n_samples, size_support_set=size_support_set,
hard_size_ss=hard_size_ss, seed=seed)
print('Creating plots...', end=' ')
if show_plots or save_plots:
cluster_params = load_cluster(cluster_dir)
cluster = Cluster(cluster_params)
members, candidates, non_members, comparison = load_sets(cluster_dir)
if cluster.comparison_members_label is not None:
cluster_comparison_label = cluster.comparison_members_label
else:
cluster_comparison_label = 'comparison'
member_candidates = candidates.query(f'PMemb >= {candidate_prob_threshold}')
non_member_candidates = candidates.query(f'PMemb < {candidate_prob_threshold}')
field_sources = None
if density_profile_plot:
plot_density_profile(member_candidates, cluster, comparison,
save_file=os.path.join(cluster_dir, 'density_profile.png'),
members_label=new_members_label, comparison_label=cluster_comparison_label,
title=f'{cluster.name}'.replace('_', ' '),
show=show_plots, save=save_plots)
# If we do not train on the sky position feature (f_r), we can use the tidal radius to constrain the members
if use_tidal_radius:
r_t = tidal_radius(member_candidates, cluster)
member_candidates = member_candidates.query(f'f_r <= {r_t}')
non_member_candidates = candidates.query(f'(PMemb < {candidate_prob_threshold}) or (f_r > {r_t})')
if use_comparison_tidal_radius:
comparison = comparison.query(f'f_r <= {tidal_radius(comparison, cluster)}')
if plot_stellar_field:
field_sources = pd.concat((non_members, non_member_candidates))
if mass_segregation_plot:
plot_mass_segregation_profile(member_candidates, cluster, comparison,
save_file=os.path.join(cluster_dir, 'mass_segregation.png'),
members_label=new_members_label,
comparison_label=cluster_comparison_label,
title=f'{cluster.name}'.replace('_', ' '), show=show_plots,
save=save_plots)
if confusion_matrix_plot:
plot_confusion_matrix(candidates, comparison,
save_file=os.path.join(cluster_dir, 'membership_comparison.png'),
title=f'{cluster.name}'.replace('_', ' '), label1=new_members_label,
label2=cluster_comparison_label, show=show_plots, save=save_plots)
if venn_diagram_plot:
plot_venn_diagram(member_candidates, comparison,
save_file=os.path.join(cluster_dir, 'venn_diagram.png'),
title=f'{cluster.name}'.replace('_', ' '), label1=new_members_label,
label2=cluster_comparison_label, show=show_plots, save=save_plots)
limits = plot_sources_limits(pd.concat((field_sources, member_candidates)), cluster.isochrone_colour)
if new_members_plot:
new_members = member_candidates.query(f'PMemb >= {new_members_prob_threshold}')
plot_sources(new_members, save_file=os.path.join(cluster_dir, 'new_members.png'),
colour=cluster.isochrone_colour, field_sources=field_sources,
members_label=new_members_label + f' ($p\\geq${new_members_prob_threshold})',
title=f'{cluster.name}'.replace('_', ' '), limits=limits, show=show_plots, save=save_plots)
if comparison_plot:
plot_sources(member_candidates, save_file=os.path.join(cluster_dir, 'comparison.png'),
colour=cluster.isochrone_colour, field_sources=field_sources, comparison=comparison,
plot_type='comparison', members_label=new_members_label,
comparison_label=cluster_comparison_label, title=f'{cluster.name}'.replace('_', ' '),
limits=limits, show=show_plots, save=save_plots)
if additional_members_plot:
plot_sources(member_candidates, save_file=os.path.join(cluster_dir, 'additional_members.png'),
colour=cluster.isochrone_colour, comparison=comparison, field_sources=field_sources,
plot_type='unique_members', members_label=new_members_label,
title=f'{cluster.name}'.replace('_', ' '), limits=limits, show_isochrone=True,
show_boundaries=True, cluster=cluster, show=show_plots, save=save_plots)
if missed_members_plot:
plot_sources(comparison, save_file=os.path.join(cluster_dir, 'missed_members.png'),
colour=cluster.isochrone_colour, comparison=member_candidates, field_sources=field_sources,
plot_type='unique_members', members_label=cluster_comparison_label,
title=f'{cluster.name}'.replace('_', ' '), limits=limits, show_isochrone=True,
show_boundaries=True, cluster=cluster, show=show_plots, save=save_plots)
print(f'done, saved in {os.path.abspath(cluster_dir)}')
print(' ')
print(100 * '=')
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('cluster_names', nargs='?', type=str,
help='Names of the open cluster(s) we want to build sets for. '
'Can be a name or a file with cluster names.')
parser.add_argument('--clusters_dir', nargs='?', type=str, default='clusters',
help='Directory where cluster data and results are saved.')
parser.add_argument('--model_dir', nargs='?', type=str, default='deep_sets_model',
                        help='Directory of the model, which contains its (hyper)parameters, that will be used '
                             'to evaluate the candidate sources.')
parser.add_argument('--n_samples', nargs='?', type=int, default=100,
help='Number of candidate samples to use for calculating the membership probability.')
parser.add_argument('--size_support_set', nargs='?', type=int, default=10,
help='The number of members in the support set.')
parser.add_argument('--hard_size_ss', nargs='?', type=bool, default=True,
help='When false, set the support set size to the number of available training members when '
'the former is larger than the latter.')
parser.add_argument('--new_members_label', nargs='?', type=str, default='this study',
help='Label to use for indicating the training members in plots.')
parser.add_argument('--candidate_prob_threshold', nargs='?', type=float, default=0.1,
help='Minimum membership probability of candidates plotted in the comparison and '
'additional members plot.')
parser.add_argument('--new_members_prob_threshold', nargs='?', type=float, default=0.8,
help='Minimum membership probability of candidates plotted in the new members plot.')
parser.add_argument('--use_tidal_radius', nargs='?', type=bool, default=False,
                        help='Whether to calculate the tidal radius of the cluster with the candidates '
                             'and exclude candidates outside the tidal radius from the plots.')
parser.add_argument('--use_comparison_tidal_radius', nargs='?', type=bool, default=True,
                        help='Whether to calculate the tidal radius of the cluster with the comparison members '
                             'and exclude comparison members outside the tidal radius from the plots.')
parser.add_argument('--new_members_plot', nargs='?', type=bool, default=True,
help='Whether to create a plot showing the candidates above a threshold probability.')
parser.add_argument('--comparison_plot', nargs='?', type=bool, default=True,
help='Whether to create a plot showing the candidates and comparison members above a '
'threshold probability.')
parser.add_argument('--additional_members_plot', nargs='?', type=bool, default=True,
                        help='Whether to create a plot showing the candidates above a threshold probability, '
                             'which are not present in the comparison members.')
parser.add_argument('--missed_members_plot', nargs='?', type=bool, default=True,
                        help='Whether to create a plot showing the comparison members, which are not present '
                             'among the candidates above a threshold probability.')
parser.add_argument('--venn_diagram_plot', nargs='?', type=bool, default=True,
                        help='Whether to create a plot showing a Venn diagram comparing the candidates and the '
                             'comparison members above 10%%, 50%% and 90%% membership probability.')
parser.add_argument('--confusion_matrix_plot', nargs='?', type=bool, default=True,
                        help="Whether to create a plot showing how the candidate probability compares against "
                             "the probability given in the comparison. A 'confusion matrix' is created by using "
"probability bins.")
parser.add_argument('--density_profile_plot', nargs='?', type=bool, default=True,
                        help='Whether to create a plot showing the density profile of the candidates above a '
                             'threshold probability and the comparison members.')
parser.add_argument('--mass_segregation_plot', nargs='?', type=bool, default=True,
                        help='Whether to create a plot showing the mass segregation of the candidates above a '
                             'threshold probability and the comparison members.')
parser.add_argument('--plot_stellar_field', nargs='?', type=bool, default=True,
help='Whether to add the stellar field to the sources plots.')
parser.add_argument('--plot_only', nargs='?', type=bool, default=False,
                        help='Whether to only create plots for the given clusters. This skips the (re)calculation of '
                             'candidate member probabilities.')
parser.add_argument('--show_plots', nargs='?', type=bool, default=False,
                        help='Whether to show the plots.')
parser.add_argument('--save_plots', nargs='?', type=bool, default=True,
help='Whether to save the plots.')
parser.add_argument('--seed', nargs='?', type=int, default=42,
help='The seed that determines the sampling of sources when determining membership '
'probabilities.')
args_dict = vars(parser.parse_args())
evaluate_clusters(args_dict['cluster_names'],
clusters_dir=args_dict['clusters_dir'],
model_dir=args_dict['model_dir'],
n_samples=args_dict['n_samples'],
size_support_set=args_dict['size_support_set'],
hard_size_ss=args_dict['hard_size_ss'],
new_members_label=args_dict['new_members_label'],
candidate_prob_threshold=args_dict['candidate_prob_threshold'],
                      new_members_prob_threshold=args_dict['new_members_prob_threshold'],
use_tidal_radius=args_dict['use_tidal_radius'],
use_comparison_tidal_radius=args_dict['use_comparison_tidal_radius'],
new_members_plot=args_dict['new_members_plot'],
comparison_plot=args_dict['comparison_plot'],
additional_members_plot=args_dict['additional_members_plot'],
missed_members_plot=args_dict['missed_members_plot'],
venn_diagram_plot=args_dict['venn_diagram_plot'],
confusion_matrix_plot=args_dict['confusion_matrix_plot'],
density_profile_plot=args_dict['density_profile_plot'],
mass_segregation_plot=args_dict['mass_segregation_plot'],
plot_stellar_field=args_dict['plot_stellar_field'],
plot_only=args_dict['plot_only'],
                      show_plots=args_dict['show_plots'],
save_plots=args_dict['save_plots'],
seed=args_dict['seed'])
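A caveat about the `nargs='?', type=bool` pattern used throughout the argument list above: argparse applies `type` to the raw string, and `bool('False')` is `True`, so `--plot_only False` would not behave as intended. A common workaround is a small string-to-bool converter (this `str2bool` helper is illustrative, not part of this script):

```python
import argparse

def str2bool(value):
    # argparse calls type= on the raw string; bool("False") is True,
    # so map the common spellings explicitly instead.
    if isinstance(value, bool):
        return value
    if value.lower() in ('yes', 'true', 't', '1'):
        return True
    if value.lower() in ('no', 'false', 'f', '0'):
        return False
    raise argparse.ArgumentTypeError(f'Boolean value expected, got {value!r}')

parser = argparse.ArgumentParser()
parser.add_argument('--plot_only', nargs='?', type=str2bool, default=False)

print(parser.parse_args(['--plot_only', 'False']).plot_only)  # -> False
print(bool('False'))  # -> True (the pitfall with plain type=bool)
```

With plain `type=bool`, every non-empty string on the command line parses as `True`, which silently flips flags like `--use_tidal_radius False`.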
|
{
"filename": "runtime-tkinter.py",
"repo_name": "gwastro/pycbc",
"repo_path": "pycbc_extracted/pycbc-master/tools/static/runtime-tkinter.py",
"type": "Python"
}
|
import os, sys
d = os.path.join(sys._MEIPASS, 'tcl')
if not os.path.exists(d):
os.makedirs(d)
d = os.path.join(sys._MEIPASS, 'tk')
if not os.path.exists(d):
os.makedirs(d)
|
{
"filename": "m4fit.py",
"repo_name": "webbjj/m2mcluster",
"repo_path": "m2mcluster_extracted/m2mcluster-main/examples/m4/m4fit.py",
"type": "Python"
}
|
"""
The below script is an example of how to use m2mcluster to fit an M2M model to observations of a Galactic
globular cluster's density profile (m4_sig_inner_prof.dat, m4_sig_outer_prof.dat) and
kinematic properties (m4_pm_prof.dat, m4_rv_prof.dat). In this example, the M2M model is fit against
M4's inner and outer surface density profile, proper motion velocity dispersion, and line of sight
velocity dispersion.
The initial model cluster is based on previous fits to M4.
See Webb, Hunt, and Bovy 2023 for details
"""
import numpy as np
import os,sys
import time
#Import m2mcluster
import m2mcluster as m2m
#Import relevant Amuse module
from amuse.lab import *
from amuse.units import nbody_system,units
from amuse.datamodel import Particles
#**********Initial Options***************
#Restart
restart=False
restartsnap=0 #automatically looks for %s.csv % str(restartnap).zfill(5) to restart from
#****************************************
#**********Made to Measure Options***************
#Set Kernel type and M2M parameters
kernel=['loggaussian','gaussian','gaussian','gaussian']
epsilon= 10.0
mu=0.01
alpha=0.001
#Set limiting variables for reinitialization
rmax = 54.43 | units.parsec
mmin=0.1 | units.MSun
mmax=2.0 | units.MSun
#Set number of iterations and output frequency
niterations=10000
snapfreq=100
nextsnap=0
#**********Set observables**********
rhoparam=['Sigma1','Sigma2']
ndim=2
vfit=True
vparam=['v2','vz2']
#**********Nbody Simulations Options***************
#Need softening length and fraction of dynamical time that will be used for time steps
softening=0.01 | units.parsec
tdynrat=0.001
#**********Initial Particle Datasets*****
ofiles=['m4_sig_inner_prof.dat','m4_sig_outer_prof.dat','m4_pm_prof.dat','m4_rv_prof.dat']
#if intitial model star particles are in a file:
omodname='init_mod.dat'
#****************************************
#Initialize an M2M Star cluster
cluster=m2m.starcluster(number_of_iterations=niterations)
#Get Observables
for i,of in enumerate(ofiles):
orlower,orad,orupper,o,eo=np.loadtxt(of,unpack=True)
if '_sig_' in of:
cluster.add_observable(orlower,orad,orupper,o,rhoparam[i],ndim=2,sigma=eo,kernel=kernel[i])
elif '_pm_' in of and vfit:
cluster.add_observable(orlower,orad,orupper,o,'v2',ndim=2,sigma=eo,kernel=kernel[i])
elif '_rv_' in of and vfit:
cluster.add_observable(orlower,orad,orupper,o,'vz2',ndim=2,sigma=eo,kernel=kernel[i])
#Initialize a model star cluster with an initial guess close to the observed cluster's properties
if not restart:
cluster.initialize_star_cluster(filename=omodname, softening=softening)
elif restart:
cluster.restart_star_cluster(restartsnap,1.,'m2moutfile.dat',softening=softening,unit='msunpckms',fmt='dwdt')
nextsnap=restartsnap+snapfreq
#Write out initial conditions if not a restart
if not restart:
cluster.writeout()
cluster.snapout(return_dwdt=True)
nextsnap+=snapfreq
#Calculate initial virial radius
r_v=cluster.stars.virial_radius()
#Remove stars outside of parameter space if a restart
if restart:
cluster.reinitialize_star_cluster(mmin= mmin, mmax=mmax, rmax=rmax,rv=r_v)
#Calculate dynamical time for timestep calculation
tdyn=cluster.stars.dynamical_timescale()
#Plot initial comparisons
#Plot initial positions
cluster.xy_plot(filename='xyplot0.png')
#Compare initial density profiles
cluster.rho_prof(filename='rhoplot0.png')
#Compare initial velocity dispersion profiles
cluster.v2_prof(filename='vplot0.png')
cluster.rhov2_prof(filename='rhovplot0.png')
#Exectute the made to measure algorithm
for i in range(restartsnap,cluster.number_of_iterations):
print(cluster.niteration,restartsnap,cluster.tdyn.value_in(units.Myr),tdynrat)
    #Initialize a new N-body simulation every time step.
    #In this example I use a fraction tdynrat of the cluster's dynamical time for the integration timestep
cluster.gravity_code=None
cluster.initialize_gravity_code('BHTree', dt=tdynrat*cluster.tdyn, theta=0.6)
    #Evolve the model cluster forward for a fraction tdynrat of its dynamical time
tnext=tdynrat*cluster.tdyn
cluster.evolve(tend=tnext)
cluster.gravity_code.stop()
#Run the M2M algorithm, which will adjust all of the stellar masses based on kernel function
cluster.evaluate(epsilon=epsilon,mu=mu,alpha=alpha)
    #Centre the star cluster and determine the Nbody conversion scales for the next integration
cluster.reinitialize_star_cluster(mmin= mmin, mmax=mmax, rmax=rmax,rv=r_v)
#Write profiles and chi^2 to outfile
cluster.writeout()
#Writeout snapshots at a given frequency and update virial radius
if cluster.niteration>=nextsnap:
cluster.snapout(return_dwdt=True)
nextsnap+=snapfreq
r_v=cluster.stars.virial_radius()
sys.stdout.flush()
cluster.outfile.close()
cluster.gravity_code.stop()
#Plot final comparisons
#Plot final positions
cluster.xy_plot(filename='xyplotf.png')
#Compare final density profiles
cluster.rho_prof(filename='rhoplotf.png')
#Compare final velocity dispersion profiles
cluster.v2_prof(filename='vplotf.png')
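The `cluster.evaluate(...)` call inside the iteration loop above adjusts the stellar masses toward the observables. A schematic of the underlying made-to-measure "force of change" (Syer & Tremaine style weight update, illustration only — the exact form used by m2mcluster, including its mu entropy/smoothing and alpha terms, is not reproduced here, and all names below are hypothetical):

```python
def m2m_update(weights, kernel, residuals, epsilon=10.0, dt=1.0):
    # Schematic force of change: dw_i/dt = -epsilon * w_i * sum_j K[i][j] * residuals[j]
    # where residuals[j] is the normalised (model - observed) value of
    # observable j and K[i][j] is particle i's kernel weight on it.
    new_weights = []
    for i, w in enumerate(weights):
        grad = sum(kernel[i][j] * residuals[j] for j in range(len(residuals)))
        new_weights.append(w * (1.0 - epsilon * dt * grad))
    return new_weights

# One observable; two particles that contribute to it equally.
w = m2m_update([1.0, 1.0], kernel=[[0.01], [0.01]], residuals=[0.5])
print(w)  # approximately [0.95, 0.95]: the model over-predicts, so weights shrink
```

When the residual is zero the weights are untouched, which is why the loop above re-centres and reinitializes the cluster rather than rescaling masses directly.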
|
{
"filename": "_font.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/table/hoverlabel/_font.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class FontValidator(_plotly_utils.basevalidators.CompoundValidator):
def __init__(self, plotly_name="font", parent_name="table.hoverlabel", **kwargs):
super(FontValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
data_class_str=kwargs.pop("data_class_str", "Font"),
data_docs=kwargs.pop(
"data_docs",
"""
color
colorsrc
Sets the source reference on Chart Studio Cloud
for `color`.
family
HTML font family - the typeface that will be
applied by the web browser. The web browser
will only be able to apply a font if it is
available on the system which it operates.
Provide multiple font families, separated by
commas, to indicate the preference in which to
apply fonts if they aren't available on the
system. The Chart Studio Cloud (at
https://chart-studio.plotly.com or on-premise)
generates images on a server, where only a
select number of fonts are installed and
supported. These include "Arial", "Balto",
"Courier New", "Droid Sans", "Droid Serif",
"Droid Sans Mono", "Gravitas One", "Old
Standard TT", "Open Sans", "Overpass", "PT Sans
Narrow", "Raleway", "Times New Roman".
familysrc
Sets the source reference on Chart Studio Cloud
for `family`.
lineposition
Sets the kind of decoration line(s) with text,
such as an "under", "over" or "through" as well
as combinations e.g. "under+over", etc.
linepositionsrc
Sets the source reference on Chart Studio Cloud
for `lineposition`.
shadow
Sets the shape and color of the shadow behind
text. "auto" places minimal shadow and applies
contrast text font color. See
https://developer.mozilla.org/en-
US/docs/Web/CSS/text-shadow for additional
options.
shadowsrc
Sets the source reference on Chart Studio Cloud
for `shadow`.
size
sizesrc
Sets the source reference on Chart Studio Cloud
for `size`.
style
Sets whether a font should be styled with a
normal or italic face from its family.
stylesrc
Sets the source reference on Chart Studio Cloud
for `style`.
textcase
Sets capitalization of text. It can be used to
make text appear in all-uppercase or all-
lowercase, or with each word capitalized.
textcasesrc
Sets the source reference on Chart Studio Cloud
for `textcase`.
variant
Sets the variant of the font.
variantsrc
Sets the source reference on Chart Studio Cloud
for `variant`.
weight
Sets the weight (or boldness) of the font.
weightsrc
Sets the source reference on Chart Studio Cloud
for `weight`.
""",
),
**kwargs,
)
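The `kwargs.pop("data_class_str", "Font")` pattern in the constructor above lets a subclass or caller override a default while passing everything else through untouched; a standalone sketch of that pattern (class and attribute names here are illustrative, not plotly's actual base classes):

```python
class Base:
    def __init__(self, name, data_class_str, **kwargs):
        self.name = name
        self.data_class_str = data_class_str
        self.extra = kwargs  # whatever the caller passed through

class FontValidator(Base):
    def __init__(self, name="font", **kwargs):
        super().__init__(
            name=name,
            # pop() removes the key if present (caller override) or falls
            # back to the default, so it is never forwarded twice via **kwargs.
            data_class_str=kwargs.pop("data_class_str", "Font"),
            **kwargs,
        )

v = FontValidator(role="info")
print(v.data_class_str, v.extra)  # -> Font {'role': 'info'}
```

Forwarding the same key both explicitly and inside `**kwargs` would raise a `TypeError` for a duplicate keyword argument, which is exactly what the `pop` avoids.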
|
{
"filename": "ploadparticles.py",
"repo_name": "AndrewSellek/PLUTO_PRIZMO",
"repo_path": "PLUTO_PRIZMO_extracted/PLUTO_PRIZMO-main/Tools/pyPLUTO/pyPLUTO/ploadparticles.py",
"type": "Python"
}
|
#!/usr/bin/python3
from __future__ import division
import os
import sys
import numpy as np
class ploadparticles(object):
def __init__(self, ns, w_dir=None, datatype=None, ptype=None, chnum=None):
"""
Loads the Particle datafile.
**Inputs**:
ns -- Step Number of the data file\n
w_dir -- path to the directory which has the data files\n
datatype -- Datatype (default is set to read .dbl data files)
ptype -- A string denoting the type of particles ('LP', 'CR', 'DUST' etc. Default is 'CR')
chnum -- 2 digit integer denoting chunk number
        (Only used if ptype = 'LP' to read the particles.nnnn_chxx.dbl file, where nnnn is a 4 digit integer denoting ns and xx is a 2 digit integer for the chunk number. Default value is 0)
**Outputs**:
pyPLUTO pload object whose keys are arrays of data values.
"""
self.Nstep = ns
if w_dir is None:
w_dir = os.getcwd() + '/'
self.wdir = w_dir
if datatype is None:
datatype = "dbl"
self.datatype = datatype
if ptype == 'LP' and self.datatype in ['dbl', 'flt']:
if chnum is None: chnum = 0 #by default it reads first file.
self.fname = self.wdir+"particles.%04d_ch%02d.%s"%(ns, chnum, self.datatype)
else:
self.fname = self.wdir+"particles.%04d.%s"%(ns, self.datatype)
if self.datatype == 'vtk':
Part_dictionary = self.ReadVTKParticleFile()
else:
Part_dictionary = self.ReadBinParticleFile()
for keys in Part_dictionary:
object.__setattr__(self, keys, Part_dictionary.get(keys))
def ReadVTKParticleFile(self):
print("Reading particle file : %s"%self.fname)
fp = open(self.fname,'rb')
nfields = 0
while True:
line = fp.readline()
try:
line.split()[0]
except IndexError as error:
pass
else:
if line.split()[0] == b'POINTS':
nparts = int(line.decode().split()[1])
dtype_tup = str(nparts*3)+'>f4'
nb = np.dtype(dtype_tup).itemsize
vtkvar_buf = np.frombuffer(fp.read(nb), dtype=np.dtype(dtype_tup))
coords = vtkvar_buf.reshape(nparts,3)
val_dict = {'Totparticles':nparts, 'x1':coords[:,0],'x2':coords[:,1],'x3':coords[:,2]}
nfields += 3
if line.split()[0] == b'SCALARS':
vars = line.decode().split()[1]
if line.split()[0] == b'LOOKUP_TABLE':
if vars == 'Identity':
dtype_tup = str(nparts)+'>i4'
field_name = 'id'
elif vars == 'tinj':
dtype_tup = str(nparts)+'>f4'
field_name = 'tinj'
elif vars == 'Color':
                        dtype_tup = str(nparts)+'>f4'
field_name = 'color'
nb = np.dtype(dtype_tup).itemsize
vtkvar_buf = np.frombuffer(fp.read(nb), dtype=np.dtype(dtype_tup))
val_dict.update({field_name:vtkvar_buf})
nfields += 1
if line.split()[0] == b'VECTORS':
vars = line.decode().split()[1]
dtype_tup = str(nparts*3)+'>f4'
nb = np.dtype(dtype_tup).itemsize
vtkvar_buf = np.frombuffer(fp.read(nb), dtype=np.dtype(dtype_tup))
vels = vtkvar_buf.reshape(nparts,3)
val_dict.update({'vx1':vels[:,0], 'vx2':vels[:,1], 'vx3':vels[:,2]})
nfields += 3
else:
pass
if line == b'':
break
val_dict.update({'nfields':nfields})
return val_dict
def ReadBinParticleFile(self):
print("Reading particle file : %s"%self.fname)
fp = open(self.fname, "rb")
val_dict = {}
h_lines = 0
#READ HEADER.
with open(self.fname,"rb") as f:
for line in f:
if line.startswith(b'#'):
if line.split()[1] != b'PLUTO':
val_dict.update({line.split()[1].decode('utf8'):[i.decode('utf8') for i in line.split()[2:]]})
h_lines += 1
#SKIP HEADER LINES
cnt = 0
while (cnt < h_lines):
fp.readline()
cnt += 1
#READ DATA
data_ = fp.read()
fp.close()
#SORT DATA INTO DICTIONARY BASED ON DATATYPE.
if self.datatype == 'flt':
dt = np.dtype({'names':val_dict['field_names'], 'formats':['('+i+',)<f' for i in val_dict['field_dim']]})
else:
dt = np.dtype({'names':val_dict['field_names'], 'formats':['('+i+',)<d' for i in val_dict['field_dim']]})
val_ = np.frombuffer(data_,dtype=dt)
for i in range(len(val_dict['field_names'])):
name = val_dict['field_names'][i]
if int(val_dict['field_dim'][i]) == 1:
val_dict.update({name:val_[name].flatten()})
else:
val_dict.update({name:val_[name]})
#OUTPUT IS A DICTIONARY.
return val_dict
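`ReadBinParticleFile` above maps the raw bytes onto a numpy structured dtype built from the header's `field_names`/`field_dim`. The same record-oriented idea can be sketched with only the standard library (the three-double field layout here is hypothetical, not PLUTO's actual particle record):

```python
import struct

# Fixed-layout little-endian records: three doubles per particle,
# analogous to frombuffer() with a '<d' structured dtype above.
record = struct.Struct('<3d')
raw = b''.join(record.pack(float(i), i * 0.5, i * 2.0) for i in range(3))

# Walk the buffer record by record, like frombuffer does in one shot.
particles = [record.unpack_from(raw, n * record.size) for n in range(3)]
print(particles[1])  # -> (1.0, 0.5, 2.0)
```

The advantage of the structured-dtype route in the class above is that numpy does this slicing vectorised and keys each field by name, so `val_[name]` pulls out a whole column at once.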
|
{
"filename": "__init__.py",
"repo_name": "DarkQuestCosmology/dark_emulator_public",
"repo_path": "dark_emulator_public_extracted/dark_emulator_public-main/dark_emulator/darkemu/__init__.py",
"type": "Python"
}
|
from .de_interface import base_class
|
{
"filename": "testFilesFilter.py",
"repo_name": "terryyin/lizard",
"repo_path": "lizard_extracted/lizard-master/test/testFilesFilter.py",
"type": "Python"
}
|
import unittest
import platform
from mock import patch
from lizard import get_all_source_files
import os


def which_system():
    return platform.system()


class TestFilesFilter(unittest.TestCase):

    @patch.object(os, "walk")
    def test_no_matching(self, mock_os_walk):
        mock_os_walk.return_value = []
        files = get_all_source_files(["dir"], [], [])
        self.assertEqual(0, len(list(files)))

    @patch.object(os.path, "isfile")
    def test_explicit_file_names(self, mock_isfile):
        mock_isfile.return_value = True
        files = get_all_source_files(["dir/file.c"], [], [])
        self.assertEqual(["dir/file.c"], list(files))

    @patch.object(os.path, "isfile")
    def test_specific_filenames_should_not_be_excluded(self, mock_isfile):
        mock_isfile.return_value = True
        files = get_all_source_files(["dir/file.log"], [], [])
        self.assertEqual(["dir/file.log"], list(files))

    @patch('lizard.md5_hash_file')
    @patch.object(os, "walk")
    def test_exclude_file_name(self, mock_os_walk, md5):
        mock_os_walk.return_value = (['.',
                                      None,
                                      ['temp.c', 'useful.cpp']],)
        files = get_all_source_files(["dir"], ["*.c"], [])
        if which_system() == "Windows":
            file_names = [".\\useful.cpp"]
        else:
            file_names = ["./useful.cpp"]
        self.assertEqual(file_names, list(files))

    @patch('lizard.md5_hash_file')
    @patch.object(os, "walk")
    def test_assigned_languages(self, mock_os_walk, md5):
        mock_os_walk.return_value = (['.',
                                      None,
                                      ['temp.c', 'useful.cpp', 'x.java', 'x.js']],)
        md5.side_effect = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
        files = list(get_all_source_files(["dir"], [], ['cpp', 'java']))
        if which_system() == "Windows":
            file_names = [".\\temp.c", ".\\useful.cpp", ".\\x.java", ".\\x.js"]
        else:
            file_names = ["./temp.c", "./useful.cpp", "./x.java", "./x.js"]
        self.assertIn(file_names[0], files)
        self.assertIn(file_names[1], files)
        self.assertIn(file_names[2], files)
        self.assertNotIn(file_names[3], files)

    @patch.object(os, "walk")
    def test_exclude_folder(self, mock_os_walk):
        mock_os_walk.return_value = (['ut',
                                      None,
                                      ['useful.cpp']],)
        files = get_all_source_files(["dir"], ["ut/*"], [])
        self.assertEqual([], list(files))

    @patch.object(os, "walk")
    def test_exclude_folder_recursively(self, mock_os_walk):
        mock_os_walk.return_value = (['ut/something',
                                      None,
                                      ['useful.cpp']],)
        files = get_all_source_files(["dir"], ["ut/*"], [])
        self.assertEqual([], list(files))

    @patch.object(os, "walk")
    def test_exclude_none_supported_files(self, mock_os_walk):
        mock_os_walk.return_value = (['.',
                                      None,
                                      ['useful.txt']],)
        files = get_all_source_files(["dir"], ['exclude_me'], [])
        self.assertEqual([], list(files))

    @patch.object(os, "walk")
    @patch("lizard.auto_open", create=True)
    def test_duplicates(self, mock_open, mock_os_walk):
        mock_os_walk.return_value = (['.',
                                      None,
                                      ['f1.cpp', 'f2.cpp']],)
        file_handle = mock_open.return_value.__enter__.return_value
        file_handle.read.return_value = "int foo(){haha();\n}"
        files = get_all_source_files(["dir"], [], [])
        if which_system() == "Windows":
            file_names = [".\\f1.cpp"]
        else:
            file_names = ["./f1.cpp"]
        self.assertEqual(file_names, list(files))

    @patch.object(os, "walk")
    @patch("lizard.auto_open", create=True)
    def test_nonduplicates(self, mock_open, mock_os_walk):
        mock_os_walk.return_value = (['.',
                                      None,
                                      ['f1.cpp', 'f2.cpp']],)
        file_handle = mock_open.return_value.__enter__.return_value
        outs = ["int foo(){{haha({param});\n}}".format(param=i) for i in range(2)]
        file_handle.read.side_effect = lambda: outs.pop()
        files = get_all_source_files(["dir"], [], [])
        if which_system() == "Windows":
            file_names = [".\\f1.cpp", ".\\f2.cpp"]
        else:
            file_names = ["./f1.cpp", "./f2.cpp"]
        self.assertEqual(file_names, list(files))

    @patch.object(os, "walk")
    @patch("lizard.auto_open", create=True)
    def test_fail_to_open_file_should_be_allowed(self, mock_open, mock_os_walk):
        mock_os_walk.return_value = (['.',
                                      None,
                                      ['f1.cpp', 'f2.cpp']],)
        mock_open.side_effect = IOError
        files = get_all_source_files(["dir"], [], [])
        if which_system() == "Windows":
            file_names = [".\\f1.cpp", ".\\f2.cpp"]
        else:
            file_names = ["./f1.cpp", "./f2.cpp"]
        self.assertEqual(file_names, list(files))
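The tests above exercise exclusion patterns such as `"*.c"` and `"ut/*"`. A minimal sketch of how that kind of glob-based filtering can be implemented with the standard library (an illustration only, not lizard's actual implementation; `filter_paths` is a hypothetical helper):

```python
import fnmatch

def filter_paths(paths, exclude_patterns):
    # Keep a path only if it matches none of the exclusion patterns.
    return [p for p in paths
            if not any(fnmatch.fnmatch(p, pat) for pat in exclude_patterns)]

paths = ['./temp.c', './useful.cpp', 'ut/helper.cpp']
print(filter_paths(paths, ['*.c', 'ut/*']))   # ['./useful.cpp']
```

Note that `fnmatch`'s `*` matches across path separators, which is why a pattern like `ut/*` also excludes files in subdirectories of `ut`, matching the behaviour checked by `test_exclude_folder_recursively`.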
{
"filename": "installation.md",
"repo_name": "google/jax",
"repo_path": "jax_extracted/jax-main/docs/installation.md",
"type": "Markdown"
}
|
(installation)=
# Installation
<!--* freshness: { reviewed: '2024-06-18' } *-->
Using JAX requires installing two packages: `jax`, which is pure Python and
cross-platform, and `jaxlib`, which contains compiled binaries and requires
different builds for different operating systems and accelerators.
**Summary:** For most users, a typical JAX installation may look something like this:
* **CPU-only (Linux/macOS/Windows)**
```
pip install -U jax
```
* **GPU (NVIDIA, CUDA 12)**
```
pip install -U "jax[cuda12]"
```
* **TPU (Google Cloud TPU VM)**
```
pip install -U "jax[tpu]" -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
```
(install-supported-platforms)=
## Supported platforms
The table below shows all supported platforms and installation options. Check if your setup is supported; if it says _"yes"_ or _"experimental"_, then click on the corresponding link to learn how to install JAX in greater detail.
| | Linux, x86_64 | Linux, aarch64 | Mac, x86_64 | Mac, aarch64 | Windows, x86_64 | Windows WSL2, x86_64 |
|------------------|---------------------------------------|---------------------------------|---------------------------------------|---------------------------------------|--------------------------|------------------------------------------|
| CPU | {ref}`yes <install-cpu>` | {ref}`yes <install-cpu>` | {ref}`yes <install-cpu>` | {ref}`yes <install-cpu>` | {ref}`yes <install-cpu>` | {ref}`yes <install-cpu>` |
| NVIDIA GPU | {ref}`yes <install-nvidia-gpu>` | {ref}`yes <install-nvidia-gpu>` | no | n/a | no | {ref}`experimental <install-nvidia-gpu>` |
| Google Cloud TPU | {ref}`yes <install-google-tpu>` | n/a | n/a | n/a | n/a | n/a |
| AMD GPU | {ref}`experimental <install-amd-gpu>` | no | {ref}`experimental <install-mac-gpu>` | n/a | no | no |
| Apple GPU | n/a | no | n/a | {ref}`experimental <install-mac-gpu>` | n/a | n/a |
| Intel GPU | {ref}`experimental <install-intel-gpu>`| n/a | n/a | n/a | no | no |
(install-cpu)=
## CPU
### pip installation: CPU
Currently, the JAX team releases `jaxlib` wheels for the following
operating systems and architectures:
- Linux, x86_64
- Linux, aarch64
- macOS, Intel
- macOS, Apple ARM-based
- Windows, x86_64 (*experimental*)
To install a CPU-only version of JAX, which might be useful for doing local
development on a laptop, you can run:
```bash
pip install --upgrade pip
pip install --upgrade jax
```
On Windows, you may also need to install the
[Microsoft Visual Studio 2019 Redistributable](https://learn.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-170#visual-studio-2015-2017-2019-and-2022)
if it is not already installed on your machine.
Other operating systems and architectures require building from source. Trying
to pip install on other operating systems and architectures may lead to `jaxlib`
not being installed alongside `jax`, although `jax` may successfully install
(but fail at runtime).
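After any of the installs above, a quick sanity check is to ask JAX which backend and devices it found. `jax.devices()` and `jax.default_backend()` are standard JAX APIs; the import guard below is only there so the snippet also runs in environments where JAX is not installed:

```python
import importlib.util

if importlib.util.find_spec("jax") is not None:
    import jax
    print(jax.default_backend())   # e.g. 'cpu' for the CPU-only install
    print(jax.devices())
else:
    print("jax is not installed in this environment")
```

A CPU-only install should report `'cpu'` and a single `CpuDevice`; a working GPU or TPU install reports those devices instead.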
(install-nvidia-gpu)=
## NVIDIA GPU
JAX supports NVIDIA GPUs that have SM version 5.2 (Maxwell) or newer.
Note that Kepler-series GPUs are no longer supported by JAX since
NVIDIA has dropped support for Kepler GPUs in its software.
You must first install the NVIDIA driver. We recommend installing the newest
driver available from NVIDIA, but the driver version must be >= 525.60.13 for
CUDA 12 on Linux.
If you need to use a newer CUDA toolkit with an older driver, for example
on a cluster where you cannot update the NVIDIA driver easily, you may be
able to use the
[CUDA forward compatibility packages](https://docs.nvidia.com/deploy/cuda-compatibility/)
that NVIDIA provides for this purpose.
### pip installation: NVIDIA GPU (CUDA, installed via pip, easier)
There are two ways to install JAX with NVIDIA GPU support:
- Using NVIDIA CUDA and cuDNN installed from pip wheels
- Using a self-installed CUDA/cuDNN
The JAX team strongly recommends installing CUDA and cuDNN using the pip wheels,
since it is much easier!
NVIDIA has released CUDA pip packages only for x86_64 and aarch64; on other
platforms you must use a local installation of CUDA.
```bash
pip install --upgrade pip
# NVIDIA CUDA 12 installation
# Note: wheels only available on linux.
pip install --upgrade "jax[cuda12]"
```
If JAX detects the wrong version of the NVIDIA CUDA libraries, there are several things
you need to check:
* Make sure that `LD_LIBRARY_PATH` is not set, since `LD_LIBRARY_PATH` can
override the NVIDIA CUDA libraries.
* Make sure that the NVIDIA CUDA libraries installed are those requested by JAX.
Rerunning the installation command above should work.
### pip installation: NVIDIA GPU (CUDA, installed locally, harder)
If you prefer to use a preinstalled copy of NVIDIA CUDA, you must first
install NVIDIA [CUDA](https://developer.nvidia.com/cuda-downloads) and
[cuDNN](https://developer.nvidia.com/CUDNN).
JAX provides pre-built CUDA-compatible wheels for **Linux x86_64 and Linux aarch64 only**. Other
combinations of operating system and architecture are possible, but require
building from source (refer to {ref}`building-from-source` to learn more).
You should use an NVIDIA driver version that is at least as new as your
[NVIDIA CUDA toolkit's corresponding driver version](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#cuda-major-component-versions__table-cuda-toolkit-driver-versions).
If you need to use a newer CUDA toolkit with an older driver, for example
on a cluster where you cannot update the NVIDIA driver easily, you may be
able to use the
[CUDA forward compatibility packages](https://docs.nvidia.com/deploy/cuda-compatibility/)
that NVIDIA provides for this purpose.
JAX currently ships one CUDA wheel variant:
| Built with | Compatible with |
|------------|--------------------|
| CUDA 12.3 | CUDA >=12.1 |
| CUDNN 9.1 | CUDNN >=9.1, <10.0 |
| NCCL 2.19 | NCCL >=2.18 |
JAX checks the versions of your libraries, and will report an error if they are
not sufficiently new.
Setting the `JAX_SKIP_CUDA_CONSTRAINTS_CHECK` environment variable will disable
the check, but using older versions of CUDA may lead to errors, or incorrect
results.
NCCL is an optional dependency, required only if you are performing multi-GPU
computations.
To install, run:
```bash
pip install --upgrade pip
# Installs the wheel compatible with NVIDIA CUDA 12 and cuDNN 9.1 or newer.
# Note: wheels only available on linux.
pip install --upgrade "jax[cuda12_local]"
```
**These `pip` installations do not work with Windows, and may fail silently; refer to the table
[above](#supported-platforms).**
You can find your CUDA version with the command:
```bash
nvcc --version
```
JAX uses `LD_LIBRARY_PATH` to find CUDA libraries and `PATH` to find binaries
(`ptxas`, `nvlink`). Please make sure that these paths point to the correct CUDA
installation.
JAX requires libdevice10.bc, which typically comes from the cuda-nvvm package.
Make sure that it is present in your CUDA installation.
Please let the JAX team know on [the GitHub issue tracker](https://github.com/jax-ml/jax/issues)
if you run into any errors or problems with the pre-built wheels.
(docker-containers-nvidia-gpu)=
### NVIDIA GPU Docker containers
NVIDIA provides the [JAX
Toolbox](https://github.com/NVIDIA/JAX-Toolbox) containers, which are
bleeding edge containers containing nightly releases of jax and some
models/frameworks.
(install-google-tpu)=
## Google Cloud TPU
### pip installation: Google Cloud TPU
JAX provides pre-built wheels for
[Google Cloud TPU](https://cloud.google.com/tpu/docs/users-guide-tpu-vm).
To install JAX along with appropriate versions of `jaxlib` and `libtpu`, you can run
the following in your cloud TPU VM:
```bash
pip install jax[tpu] -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
```
For users of Colab (https://colab.research.google.com/), be sure you are
using *TPU v2* and not the older, deprecated TPU runtime.
(install-mac-gpu)=
## Mac GPU
### pip installation
Apple provides an experimental Metal plugin. For details,
refer to
[Apple's JAX on Metal documentation](https://developer.apple.com/metal/jax/).
**Note:** There are several caveats with the Metal plugin:
* The Metal plugin is new and experimental and has a number of
[known issues](https://github.com/jax-ml/jax/issues?q=is%3Aissue+is%3Aopen+label%3A%22Apple+GPU+%28Metal%29+plugin%22).
Please report any issues on the JAX issue tracker.
* The Metal plugin currently requires very specific versions of `jax` and
`jaxlib`. This restriction will be relaxed over time as the plugin API
matures.
(install-amd-gpu)=
## AMD GPU (Linux)
JAX has experimental ROCm support. There are two ways to install JAX:
* Use [AMD's Docker container](https://hub.docker.com/r/rocm/jax); or
* Build from source (refer to {ref}`building-from-source`, specifically the section _Additional notes for building a ROCM `jaxlib` for AMD GPUs_).
(install-intel-gpu)=
## Intel GPU
Intel provides an experimental OneAPI plugin, `intel-extension-for-openxla`, for Intel GPU hardware. For more details and installation instructions, refer to one of the following two methods:
1. Pip installation: [JAX acceleration on Intel GPU](https://github.com/intel/intel-extension-for-openxla/blob/main/docs/acc_jax.md).
2. Using [Intel's XLA Docker container](https://hub.docker.com/r/intel/intel-optimized-xla).
Please report any issues related to:
* JAX: [JAX issue tracker](https://github.com/jax-ml/jax/issues).
* Intel's OpenXLA plugin: [Intel-extension-for-openxla issue tracker](https://github.com/intel/intel-extension-for-openxla/issues).
## Conda (community-supported)
### Conda installation
There is a community-supported Conda build of `jax`. To install it using `conda`,
simply run:
```bash
conda install jax -c conda-forge
```
If you run this command on a machine with an NVIDIA GPU, this should install a CUDA-enabled package of `jaxlib`.
To ensure that the jax version you are installing is indeed CUDA-enabled, run:
```bash
conda install "jaxlib=*=*cuda*" jax -c conda-forge
```
If you would like to override which release of CUDA is used by JAX, or to
install the CUDA build on a machine without GPUs, follow the instructions in the
[Tips & tricks](https://conda-forge.org/docs/user/tipsandtricks.html#installing-cuda-enabled-packages-like-tensorflow-and-pytorch)
section of the `conda-forge` website.
Go to the `conda-forge`
[jaxlib](https://github.com/conda-forge/jaxlib-feedstock#installing-jaxlib) and
[jax](https://github.com/conda-forge/jax-feedstock#installing-jax) repositories
for more details.
## JAX nightly installation
Nightly releases reflect the state of the main JAX repository at the time they are
built, and may not pass the full test suite.
Unlike the instructions for installing a JAX release, here we name all of JAX's
packages explicitly on the command line, so `pip` will upgrade them if a newer
version is available.
- CPU only:
```bash
pip install -U --pre jax jaxlib -f https://storage.googleapis.com/jax-releases/jax_nightly_releases.html
```
- Google Cloud TPU:
```bash
pip install -U --pre jax jaxlib libtpu requests -f https://storage.googleapis.com/jax-releases/jax_nightly_releases.html -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
```
- NVIDIA GPU (CUDA 12):
```bash
pip install -U --pre jax jaxlib jax-cuda12-plugin[with_cuda] jax-cuda12-pjrt -f https://storage.googleapis.com/jax-releases/jax_nightly_releases.html
```
- NVIDIA GPU (CUDA 12) legacy:
Use the following for historical nightly releases of monolithic CUDA jaxlibs.
You most likely do not want this; no further monolithic CUDA jaxlibs will be
built and those that exist will expire by Sep 2024. Use the "CUDA 12" option above.
```bash
pip install -U --pre jaxlib -f https://storage.googleapis.com/jax-releases/jaxlib_nightly_cuda12_releases.html
```
(building-jax-from-source)=
## Building JAX from source
Refer to {ref}`building-from-source`.
## Installing older `jaxlib` wheels
Due to storage limitations on the Python package index, the JAX team periodically removes
older `jaxlib` wheels from the releases on http://pypi.org/project/jax. These can
still be installed directly via the URLs here. For example:
```bash
# Install jaxlib on CPU via the wheel archive
pip install jax[cpu]==0.3.25 -f https://storage.googleapis.com/jax-releases/jax_releases.html
# Install the jaxlib 0.3.25 CPU wheel directly
pip install jaxlib==0.3.25 -f https://storage.googleapis.com/jax-releases/jax_releases.html
```
For specific older GPU wheels, be sure to use the `jax_cuda_releases.html` URL; for example
```bash
pip install jaxlib==0.3.25+cuda11.cudnn82 -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
```
{
"filename": "validation.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/scikit-learn/py2/sklearn/utils/validation.py",
"type": "Python"
}
|
"""Utilities for input validation"""
# Authors: Olivier Grisel
# Gael Varoquaux
# Andreas Mueller
# Lars Buitinck
# Alexandre Gramfort
# Nicolas Tresegnie
# License: BSD 3 clause
import warnings
import numbers
import numpy as np
import scipy.sparse as sp
from ..externals import six
from ..utils.fixes import signature
from .deprecation import deprecated
from ..exceptions import DataConversionWarning as _DataConversionWarning
from ..exceptions import NonBLASDotWarning as _NonBLASDotWarning
from ..exceptions import NotFittedError as _NotFittedError
@deprecated("DataConversionWarning has been moved into the sklearn.exceptions"
" module. It will not be available here from version 0.19")
class DataConversionWarning(_DataConversionWarning):
pass
@deprecated("NonBLASDotWarning has been moved into the sklearn.exceptions"
" module. It will not be available here from version 0.19")
class NonBLASDotWarning(_NonBLASDotWarning):
pass
@deprecated("NotFittedError has been moved into the sklearn.exceptions module."
" It will not be available here from version 0.19")
class NotFittedError(_NotFittedError):
pass
FLOAT_DTYPES = (np.float64, np.float32, np.float16)
# Silenced by default to reduce verbosity. Turn on at runtime for
# performance profiling.
warnings.simplefilter('ignore', _NonBLASDotWarning)
def _assert_all_finite(X):
    """Like assert_all_finite, but only for ndarray."""
    X = np.asanyarray(X)
    # First try an O(n) time, O(1) space solution for the common case that
    # everything is finite; fall back to O(n) space np.isfinite to prevent
    # false positives from overflow in sum method.
    if (X.dtype.char in np.typecodes['AllFloat'] and not np.isfinite(X.sum())
            and not np.isfinite(X).all()):
        raise ValueError("Input contains NaN, infinity"
                         " or a value too large for %r." % X.dtype)


def assert_all_finite(X):
    """Throw a ValueError if X contains NaN or infinity.

    Input MUST be an np.ndarray instance or a scipy.sparse matrix."""
    _assert_all_finite(X.data if sp.issparse(X) else X)
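The comment in `_assert_all_finite` describes a two-step check: a cheap `X.sum()` probe first, with the elementwise `np.isfinite(X).all()` as the authoritative fallback, because an array of entirely finite values can still overflow to `inf` in the sum. A small illustration of both cases:

```python
import numpy as np

# Case 1: a NaN poisons the sum, so the cheap probe alone catches it.
x = np.array([1.0, np.nan, 3.0])
print(np.isfinite(x.sum()))     # False -> would raise

# Case 2: every entry is finite, but the sum overflows to inf, so the
# probe gives a false alarm and the elementwise fallback is needed.
y = np.full(3, 1e308)
print(np.isfinite(y.sum()))     # False (overflow in the probe)
print(np.isfinite(y).all())     # True  (the data itself is fine)
```

This is why the condition in `_assert_all_finite` requires *both* checks to fail before raising.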
def as_float_array(X, copy=True, force_all_finite=True):
    """Converts an array-like to an array of floats

    The new dtype will be np.float32 or np.float64, depending on the original
    type. The function can create a copy or modify the argument depending
    on the argument copy.

    Parameters
    ----------
    X : {array-like, sparse matrix}

    copy : bool, optional
        If True, a copy of X will be created. If False, a copy may still be
        returned if X's dtype is not a floating point type.

    force_all_finite : boolean (default=True)
        Whether to raise an error on np.inf and np.nan in X.

    Returns
    -------
    XT : {array, sparse matrix}
        An array of type np.float
    """
    if isinstance(X, np.matrix) or (not isinstance(X, np.ndarray)
                                    and not sp.issparse(X)):
        return check_array(X, ['csr', 'csc', 'coo'], dtype=np.float64,
                           copy=copy, force_all_finite=force_all_finite,
                           ensure_2d=False)
    elif sp.issparse(X) and X.dtype in [np.float32, np.float64]:
        return X.copy() if copy else X
    elif X.dtype in [np.float32, np.float64]:  # is numpy array
        return X.copy('F' if X.flags['F_CONTIGUOUS'] else 'C') if copy else X
    else:
        return X.astype(np.float32 if X.dtype == np.int32 else np.float64)
def _is_arraylike(x):
    """Returns whether the input is array-like"""
    return (hasattr(x, '__len__') or
            hasattr(x, 'shape') or
            hasattr(x, '__array__'))


def _num_samples(x):
    """Return number of samples in array-like x."""
    if hasattr(x, 'fit'):
        # Don't get num_samples from an ensembles length!
        raise TypeError('Expected sequence or array-like, got '
                        'estimator %s' % x)
    if not hasattr(x, '__len__') and not hasattr(x, 'shape'):
        if hasattr(x, '__array__'):
            x = np.asarray(x)
        else:
            raise TypeError("Expected sequence or array-like, got %s" %
                            type(x))
    if hasattr(x, 'shape'):
        if len(x.shape) == 0:
            raise TypeError("Singleton array %r cannot be considered"
                            " a valid collection." % x)
        return x.shape[0]
    else:
        return len(x)
def _shape_repr(shape):
    """Return a platform independent representation of an array shape

    Under Python 2, the `long` type introduces an 'L' suffix when using the
    default %r format for tuples of integers (typically used to store the shape
    of an array).

    Under Windows 64 bit (and Python 2), the `long` type is used by default
    in numpy shapes even when the integer dimensions are well below 32 bit.
    The platform specific type causes string messages or doctests to change
    from one platform to another which is not desirable.

    Under Python 3, there is no more `long` type so the `L` suffix is never
    introduced in string representation.

    >>> _shape_repr((1, 2))
    '(1, 2)'
    >>> one = 2 ** 64 / 2 ** 64  # force an upcast to `long` under Python 2
    >>> _shape_repr((one, 2 * one))
    '(1, 2)'
    >>> _shape_repr((1,))
    '(1,)'
    >>> _shape_repr(())
    '()'
    """
    if len(shape) == 0:
        return "()"
    joined = ", ".join("%d" % e for e in shape)
    if len(shape) == 1:
        # special notation for singleton tuples
        joined += ','
    return "(%s)" % joined
def check_consistent_length(*arrays):
    """Check that all arrays have consistent first dimensions.

    Checks whether all objects in arrays have the same shape or length.

    Parameters
    ----------
    *arrays : list or tuple of input objects.
        Objects that will be checked for consistent length.
    """
    lengths = [_num_samples(X) for X in arrays if X is not None]
    uniques = np.unique(lengths)
    if len(uniques) > 1:
        raise ValueError("Found input variables with inconsistent numbers of"
                         " samples: %r" % [int(l) for l in lengths])


def indexable(*iterables):
    """Make arrays indexable for cross-validation.

    Checks consistent length, passes through None, and ensures that everything
    can be indexed by converting sparse matrices to csr and converting
    non-iterable objects to arrays.

    Parameters
    ----------
    *iterables : lists, dataframes, arrays, sparse matrices
        List of objects to ensure sliceability.
    """
    result = []
    for X in iterables:
        if sp.issparse(X):
            result.append(X.tocsr())
        elif hasattr(X, "__getitem__") or hasattr(X, "iloc"):
            result.append(X)
        elif X is None:
            result.append(X)
        else:
            result.append(np.array(X))
    check_consistent_length(*result)
    return result
def _ensure_sparse_format(spmatrix, accept_sparse, dtype, copy,
                          force_all_finite):
    """Convert a sparse matrix to a given format.

    Checks the sparse format of spmatrix and converts if necessary.

    Parameters
    ----------
    spmatrix : scipy sparse matrix
        Input to validate and convert.

    accept_sparse : string, list of string or None (default=None)
        String[s] representing allowed sparse matrix formats ('csc',
        'csr', 'coo', 'dok', 'bsr', 'lil', 'dia'). None means that sparse
        matrix input will raise an error. If the input is sparse but not in
        the allowed format, it will be converted to the first listed format.

    dtype : string, type or None (default=None)
        Data type of result. If None, the dtype of the input is preserved.

    copy : boolean (default=False)
        Whether a forced copy will be triggered. If copy=False, a copy might
        be triggered by a conversion.

    force_all_finite : boolean (default=True)
        Whether to raise an error on np.inf and np.nan in X.

    Returns
    -------
    spmatrix_converted : scipy sparse matrix.
        Matrix that is ensured to have an allowed type.
    """
    if accept_sparse in [None, False]:
        raise TypeError('A sparse matrix was passed, but dense '
                        'data is required. Use X.toarray() to '
                        'convert to a dense numpy array.')
    if dtype is None:
        dtype = spmatrix.dtype

    changed_format = False
    if (isinstance(accept_sparse, (list, tuple))
            and spmatrix.format not in accept_sparse):
        # create new with correct sparse
        spmatrix = spmatrix.asformat(accept_sparse[0])
        changed_format = True

    if dtype != spmatrix.dtype:
        # convert dtype
        spmatrix = spmatrix.astype(dtype)
    elif copy and not changed_format:
        # force copy
        spmatrix = spmatrix.copy()

    if force_all_finite:
        if not hasattr(spmatrix, "data"):
            warnings.warn("Can't check %s sparse matrix for nan or inf."
                          % spmatrix.format)
        else:
            _assert_all_finite(spmatrix.data)
    return spmatrix
def check_array(array, accept_sparse=None, dtype="numeric", order=None,
copy=False, force_all_finite=True, ensure_2d=True,
allow_nd=False, ensure_min_samples=1, ensure_min_features=1,
warn_on_dtype=False, estimator=None):
"""Input validation on an array, list, sparse matrix or similar.
By default, the input is converted to an at least 2D numpy array.
If the dtype of the array is object, attempt converting to float,
raising on failure.
Parameters
----------
array : object
Input object to check / convert.
accept_sparse : string, list of string or None (default=None)
String[s] representing allowed sparse matrix formats, such as 'csc',
'csr', etc. None means that sparse matrix input will raise an error.
If the input is sparse but not in the allowed format, it will be
converted to the first listed format.
dtype : string, type, list of types or None (default="numeric")
Data type of result. If None, the dtype of the input is preserved.
If "numeric", dtype is preserved unless array.dtype is object.
If dtype is a list of types, conversion on the first type is only
performed if the dtype of the input is not in the list.
order : 'F', 'C' or None (default=None)
Whether an array will be forced to be fortran or c-style.
When order is None (default), then if copy=False, nothing is ensured
about the memory layout of the output array; otherwise (copy=True)
the memory layout of the returned array is kept as close as possible
to the original array.
copy : boolean (default=False)
Whether a forced copy will be triggered. If copy=False, a copy might
be triggered by a conversion.
force_all_finite : boolean (default=True)
Whether to raise an error on np.inf and np.nan in X.
ensure_2d : boolean (default=True)
Whether to make X at least 2d.
allow_nd : boolean (default=False)
Whether to allow X.ndim > 2.
ensure_min_samples : int (default=1)
Make sure that the array has a minimum number of samples in its first
axis (rows for a 2D array). Setting to 0 disables this check.
ensure_min_features : int (default=1)
Make sure that the 2D array has some minimum number of features
(columns). The default value of 1 rejects empty datasets.
This check is only enforced when the input data has effectively 2
dimensions or is originally 1D and ``ensure_2d`` is True. Setting to 0
disables this check.
warn_on_dtype : boolean (default=False)
Raise DataConversionWarning if the dtype of the input data structure
does not match the requested dtype, causing a memory copy.
estimator : str or estimator instance (default=None)
If passed, include the name of the estimator in warning messages.
Returns
-------
X_converted : object
The converted and validated X.
"""
if isinstance(accept_sparse, str):
accept_sparse = [accept_sparse]
# store whether originally we wanted numeric dtype
dtype_numeric = dtype == "numeric"
dtype_orig = getattr(array, "dtype", None)
if not hasattr(dtype_orig, 'kind'):
# not a data type (e.g. a column named dtype in a pandas DataFrame)
dtype_orig = None
if dtype_numeric:
if dtype_orig is not None and dtype_orig.kind == "O":
# if input is object, convert to float.
dtype = np.float64
else:
dtype = None
if isinstance(dtype, (list, tuple)):
if dtype_orig is not None and dtype_orig in dtype:
# no dtype conversion required
dtype = None
else:
# dtype conversion required. Let's select the first element of the
# list of accepted types.
dtype = dtype[0]
if estimator is not None:
if isinstance(estimator, six.string_types):
estimator_name = estimator
else:
estimator_name = estimator.__class__.__name__
else:
estimator_name = "Estimator"
context = " by %s" % estimator_name if estimator is not None else ""
if sp.issparse(array):
array = _ensure_sparse_format(array, accept_sparse, dtype, copy,
force_all_finite)
else:
array = np.array(array, dtype=dtype, order=order, copy=copy)
if ensure_2d:
if array.ndim == 1:
if ensure_min_samples >= 2:
raise ValueError("%s expects at least 2 samples provided "
"in a 2 dimensional array-like input"
% estimator_name)
warnings.warn(
"Passing 1d arrays as data is deprecated in 0.17 and will "
"raise ValueError in 0.19. Reshape your data either using "
"X.reshape(-1, 1) if your data has a single feature or "
"X.reshape(1, -1) if it contains a single sample.",
DeprecationWarning)
array = np.atleast_2d(array)
# To ensure that array flags are maintained
array = np.array(array, dtype=dtype, order=order, copy=copy)
# make sure we actually converted to numeric:
if dtype_numeric and array.dtype.kind == "O":
array = array.astype(np.float64)
if not allow_nd and array.ndim >= 3:
raise ValueError("Found array with dim %d. %s expected <= 2."
% (array.ndim, estimator_name))
if force_all_finite:
_assert_all_finite(array)
shape_repr = _shape_repr(array.shape)
if ensure_min_samples > 0:
n_samples = _num_samples(array)
if n_samples < ensure_min_samples:
raise ValueError("Found array with %d sample(s) (shape=%s) while a"
" minimum of %d is required%s."
% (n_samples, shape_repr, ensure_min_samples,
context))
if ensure_min_features > 0 and array.ndim == 2:
n_features = array.shape[1]
if n_features < ensure_min_features:
raise ValueError("Found array with %d feature(s) (shape=%s) while"
" a minimum of %d is required%s."
% (n_features, shape_repr, ensure_min_features,
context))
if warn_on_dtype and dtype_orig is not None and array.dtype != dtype_orig:
msg = ("Data with input dtype %s was converted to %s%s."
% (dtype_orig, array.dtype, context))
warnings.warn(msg, _DataConversionWarning)
return array
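A small illustration, in pure NumPy rather than scikit-learn, of the two conversions the `check_array` docstring promises: object-dtype input is coerced to `float64` when `dtype="numeric"`, and 1-d input is promoted to 2-d when `ensure_2d` is set. `mini_check_array` is a hypothetical, drastically reduced stand-in, not the real function:

```python
import numpy as np

def mini_check_array(array, dtype="numeric", ensure_2d=True):
    # Hypothetical sketch of two of check_array's behaviours.
    arr = np.asarray(array)
    if dtype == "numeric" and arr.dtype.kind == "O":
        # object dtype -> float64, raising on values that cannot convert
        arr = arr.astype(np.float64)
    if ensure_2d and arr.ndim == 1:
        arr = np.atleast_2d(arr)
    return arr

out = mini_check_array(np.array(["1.5", "2.5"], dtype=object))
print(out.dtype, out.shape)   # float64 (1, 2)
```

The real function layers many more checks on top of this (sparse handling, finiteness, minimum sample and feature counts), but the dtype and dimensionality coercions are its core contract.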
def check_X_y(X, y, accept_sparse=None, dtype="numeric", order=None,
copy=False, force_all_finite=True, ensure_2d=True,
allow_nd=False, multi_output=False, ensure_min_samples=1,
ensure_min_features=1, y_numeric=False,
warn_on_dtype=False, estimator=None):
"""Input validation for standard estimators.
Checks X and y for consistent length, enforces X 2d and y 1d.
Standard input checks are only applied to y, such as checking that y
does not have np.nan or np.inf targets. For multi-label y, set
multi_output=True to allow 2d and sparse y. If the dtype of X is
object, attempt converting to float, raising on failure.
Parameters
----------
X : nd-array, list or sparse matrix
Input data.
y : nd-array, list or sparse matrix
Labels.
accept_sparse : string, list of string or None (default=None)
String[s] representing allowed sparse matrix formats, such as 'csc',
'csr', etc. None means that sparse matrix input will raise an error.
If the input is sparse but not in the allowed format, it will be
converted to the first listed format.
dtype : string, type, list of types or None (default="numeric")
Data type of result. If None, the dtype of the input is preserved.
If "numeric", dtype is preserved unless array.dtype is object.
If dtype is a list of types, conversion on the first type is only
performed if the dtype of the input is not in the list.
order : 'F', 'C' or None (default=None)
Whether an array will be forced to be fortran or c-style.
copy : boolean (default=False)
Whether a forced copy will be triggered. If copy=False, a copy might
be triggered by a conversion.
force_all_finite : boolean (default=True)
Whether to raise an error on np.inf and np.nan in X. This parameter
does not influence whether y can have np.inf or np.nan values.
ensure_2d : boolean (default=True)
Whether to make X at least 2d.
allow_nd : boolean (default=False)
Whether to allow X.ndim > 2.
multi_output : boolean (default=False)
Whether to allow 2-d y (array or sparse matrix). If false, y will be
validated as a vector. y cannot have np.nan or np.inf values if
multi_output=True.
ensure_min_samples : int (default=1)
Make sure that X has a minimum number of samples in its first
axis (rows for a 2D array).
ensure_min_features : int (default=1)
Make sure that the 2D array has some minimum number of features
(columns). The default value of 1 rejects empty datasets.
This check is only enforced when X has effectively 2 dimensions or
is originally 1D and ``ensure_2d`` is True. Setting to 0 disables
this check.
y_numeric : boolean (default=False)
Whether to ensure that y has a numeric type. If dtype of y is object,
it is converted to float64. Should only be used for regression
algorithms.
warn_on_dtype : boolean (default=False)
Raise DataConversionWarning if the dtype of the input data structure
does not match the requested dtype, causing a memory copy.
estimator : str or estimator instance (default=None)
If passed, include the name of the estimator in warning messages.
Returns
-------
X_converted : object
The converted and validated X.
y_converted : object
The converted and validated y.
"""
X = check_array(X, accept_sparse, dtype, order, copy, force_all_finite,
ensure_2d, allow_nd, ensure_min_samples,
ensure_min_features, warn_on_dtype, estimator)
if multi_output:
y = check_array(y, 'csr', force_all_finite=True, ensure_2d=False,
dtype=None)
else:
y = column_or_1d(y, warn=True)
_assert_all_finite(y)
if y_numeric and y.dtype.kind == 'O':
y = y.astype(np.float64)
check_consistent_length(X, y)
return X, y
def column_or_1d(y, warn=False):
""" Ravel column or 1d numpy array, else raises an error
Parameters
----------
y : array-like
warn : boolean, default False
To control display of warnings.
Returns
-------
y : array
"""
shape = np.shape(y)
if len(shape) == 1:
return np.ravel(y)
if len(shape) == 2 and shape[1] == 1:
if warn:
warnings.warn("A column-vector y was passed when a 1d array was"
" expected. Please change the shape of y to "
"(n_samples, ), for example using ravel().",
_DataConversionWarning, stacklevel=2)
return np.ravel(y)
raise ValueError("bad input shape {0}".format(shape))
def check_random_state(seed):
"""Turn seed into a np.random.RandomState instance
If seed is None, return the RandomState singleton used by np.random.
If seed is an int, return a new RandomState instance seeded with seed.
If seed is already a RandomState instance, return it.
Otherwise raise ValueError.
"""
if seed is None or seed is np.random:
return np.random.mtrand._rand
if isinstance(seed, (numbers.Integral, np.integer)):
return np.random.RandomState(seed)
if isinstance(seed, np.random.RandomState):
return seed
raise ValueError('%r cannot be used to seed a numpy.random.RandomState'
' instance' % seed)
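The seed-handling contract of `check_random_state` is easy to mirror with the standard library. The sketch below (`check_rng` is a hypothetical analogue using `random.Random`, written for this illustration and not part of scikit-learn) shows the same three accepted inputs: None, an integer, or an existing generator instance.

```python
import random

def check_rng(seed):
    """Normalise seed into a random.Random instance (analogue of check_random_state)."""
    if seed is None:
        return random.Random()      # fresh, OS-seeded generator
    if isinstance(seed, int):
        return random.Random(seed)  # deterministic generator seeded with the int
    if isinstance(seed, random.Random):
        return seed                 # already a generator: pass it through unchanged
    raise ValueError('%r cannot be used to seed a random.Random instance' % seed)

rng_a = check_rng(42)
rng_b = check_rng(42)
print(rng_a.random() == rng_b.random())  # same seed -> same first draw: True
```

Passing an existing generator through unchanged is what lets callers share one seeded stream across several estimators.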
def has_fit_parameter(estimator, parameter):
"""Checks whether the estimator's fit method supports the given parameter.
Examples
--------
>>> from sklearn.svm import SVC
>>> has_fit_parameter(SVC(), "sample_weight")
True
"""
return parameter in signature(estimator.fit).parameters
def check_symmetric(array, tol=1E-10, raise_warning=True,
raise_exception=False):
"""Make sure that array is 2D, square and symmetric.
If the array is not symmetric, then a symmetrized version is returned.
Optionally, a warning or exception is raised if the matrix is not
symmetric.
Parameters
----------
array : nd-array or sparse matrix
Input object to check / convert. Must be two-dimensional and square,
otherwise a ValueError will be raised.
tol : float
Absolute tolerance for equivalence of arrays. Default = 1E-10.
raise_warning : boolean (default=True)
If True then raise a warning if conversion is required.
raise_exception : boolean (default=False)
If True then raise an exception if array is not symmetric.
Returns
-------
array_sym : ndarray or sparse matrix
Symmetrized version of the input array, i.e. the average of array
and array.transpose(). If sparse, then duplicate entries are first
summed and zeros are eliminated.
"""
if (array.ndim != 2) or (array.shape[0] != array.shape[1]):
raise ValueError("array must be 2-dimensional and square. "
"shape = {0}".format(array.shape))
if sp.issparse(array):
diff = array - array.T
# only csr, csc, and coo have `data` attribute
if diff.format not in ['csr', 'csc', 'coo']:
diff = diff.tocsr()
symmetric = np.all(abs(diff.data) < tol)
else:
symmetric = np.allclose(array, array.T, atol=tol)
if not symmetric:
if raise_exception:
raise ValueError("Array must be symmetric")
if raise_warning:
warnings.warn("Array is not symmetric, and will be converted "
"to symmetric by average with its transpose.")
if sp.issparse(array):
conversion = 'to' + array.format
array = getattr(0.5 * (array + array.T), conversion)()
else:
array = 0.5 * (array + array.T)
return array
def check_is_fitted(estimator, attributes, msg=None, all_or_any=all):
"""Perform is_fitted validation for estimator.
Checks if the estimator is fitted by verifying the presence of
"all_or_any" of the passed attributes and raises a NotFittedError with the
given message.
Parameters
----------
estimator : estimator instance.
estimator instance for which the check is performed.
attributes : attribute name(s) given as string or a list/tuple of strings
Eg. : ["coef_", "estimator_", ...], "coef_"
msg : string
The default error message is, "This %(name)s instance is not fitted
yet. Call 'fit' with appropriate arguments before using this method."
For custom messages if "%(name)s" is present in the message string,
it is substituted for the estimator name.
Eg. : "Estimator, %(name)s, must be fitted before sparsifying".
all_or_any : callable, {all, any}, default all
Specify whether all or any of the given attributes must exist.
"""
if msg is None:
msg = ("This %(name)s instance is not fitted yet. Call 'fit' with "
"appropriate arguments before using this method.")
if not hasattr(estimator, 'fit'):
raise TypeError("%s is not an estimator instance." % (estimator))
if not isinstance(attributes, (list, tuple)):
attributes = [attributes]
if not all_or_any([hasattr(estimator, attr) for attr in attributes]):
# FIXME NotFittedError_ --> NotFittedError in 0.19
raise _NotFittedError(msg % {'name': type(estimator).__name__})
def check_non_negative(X, whom):
"""
Check if there is any negative value in an array.
Parameters
----------
X : array-like or sparse matrix
Input data.
whom : string
Who passed X to this function.
"""
X = X.data if sp.issparse(X) else X
if (X < 0).any():
raise ValueError("Negative values in data passed to %s" % whom)
|
|
{
"filename": "SampleTrajectoryPositions.py",
"repo_name": "wmpg/WesternMeteorPyLib",
"repo_path": "WesternMeteorPyLib_extracted/WesternMeteorPyLib-master/wmpl/Utils/SampleTrajectoryPositions.py",
"type": "Python"
}
|
""" Given the trajectory, beginning and ending height of sampling and the step, this code will sample the
trajectory and produce time, height, geo and ECI coordinates for every sample.
This is used as a step in dark flight modelling.
"""
from __future__ import print_function, division, absolute_import
import os
import argparse
import numpy as np
import matplotlib.pyplot as plt
import scipy.interpolate
import scipy.signal
from wmpl.Utils.TrajConversions import cartesian2Geo, eci2RaDec, raDec2AltAz, altAz2RADec, \
equatorialCoordPrecession, J2000_JD
from wmpl.Utils.Math import lineAndSphereIntersections, vectMag, vectNorm
from wmpl.Utils.Pickling import loadPickle
def _plotSphereAndArrow(centre, radius, origin, direction, intersection_list):
from itertools import product, combinations
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
from wmpl.Utils.Plotting import Arrow3D
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.set_aspect("equal")
# draw sphere
u, v = np.mgrid[0:2*np.pi:20j, 0:np.pi:10j]
x = centre[0] + radius*np.cos(u)*np.sin(v)
y = centre[1] + radius*np.sin(u)*np.sin(v)
z = centre[2] + radius*np.cos(v)
ax.plot_wireframe(x, y, z, color="b")
# draw the origin
ax.scatter(*origin, color="g", s=100)
# draw a vector
xa, ya, za = np.c_[origin, origin + direction]
a = Arrow3D(xa, ya, za, mutation_scale=20,
lw=1, arrowstyle="-|>", color="k")
ax.add_artist(a)
# if intersection:
# for point in intersection_list:
# # draw the intersections
# ax.scatter(*point, color="r", s=100)
plt.show()
class TrajectorySamples(object):
def __init__(self, traj):
self.traj = traj
self.t_est = []
self.ht = []
self.lat = []
self.lon = []
self.ele_geo = []
self.azim_norot = []
self.elev_norot = []
def addSample(self, t_est, ht, lat, lon, ele_geo, azim_norot, elev_norot):
self.t_est.append(t_est)
self.ht.append(ht)
self.lat.append(lat)
self.lon.append(lon)
self.ele_geo.append(ele_geo)
self.azim_norot.append(azim_norot)
self.elev_norot.append(elev_norot)
def sampleTrajectory(traj, beg_ht, end_ht, sample_step, show_plots=False):
""" Given the trajectory, beginning, end and step in km, this function will interpolate the
fireball height vs. distance and return the coordinates of sampled positions and compute the azimuth
and elevation for every point.
Arguments:
Return:
"""
# Set begin and end heights, if not given
if beg_ht < 0:
beg_ht = traj.rbeg_ele
if end_ht < 0:
end_ht = traj.rend_ele
# Generate heights for sampling
height_array = np.flipud(np.arange(end_ht, beg_ht + sample_step, sample_step))
### Fit time vs. height
time_data = []
height_data = []
for obs in traj.observations:
time_data += obs.time_data.tolist()
height_data += obs.model_ht.tolist()
if show_plots:
# Plot the station data
plt.scatter(obs.time_data, obs.model_ht/1000, label=obs.station_id, marker='x', zorder=3)
height_data = np.array(height_data)
time_data = np.array(time_data)
# Sort the arrays by decreasing time
arr_sort_indices = np.argsort(time_data)[::-1]
height_data = height_data[arr_sort_indices]
time_data = time_data[arr_sort_indices]
# Plot the non-smoothed time vs. height
#plt.scatter(time_data, height_data/1000, label='Data')
# Apply Savitzky-Golay to smooth out the height change
height_data = scipy.signal.savgol_filter(height_data, 21, 5)
if show_plots:
plt.scatter(time_data, height_data/1000, label='Savitzky-Golay filtered', marker='+', zorder=3)
# Sort the arrays by increasing heights (needed for interpolation)
arr_sort_indices = np.argsort(height_data)
height_data = height_data[arr_sort_indices]
time_data = time_data[arr_sort_indices]
# Interpolate height vs. time
ht_vs_time_interp = scipy.interpolate.PchipInterpolator(height_data, time_data)
# Plot the interpolation
ht_arr = np.linspace(np.min(height_data), np.max(height_data), 1000)
time_arr = ht_vs_time_interp(ht_arr)
if show_plots:
plt.plot(time_arr, ht_arr/1000, label='Interpolation', zorder=3)
plt.legend()
plt.xlabel('Time (s)')
plt.ylabel('Height (km)')
plt.grid()
plt.show()
###
# Take the ground above the state vector as the reference distance from the surface of the Earth
ref_radius = vectMag(traj.state_vect_mini) - np.max(height_data)
# Compute distance from the centre of the Earth to each height
radius_array = ref_radius + height_array
if show_plots:
print('Beginning coordinates (observed):')
print(' Lat: {:.6f}'.format(np.degrees(traj.rbeg_lat)))
print(' Lon: {:.6f}'.format(np.degrees(traj.rbeg_lon)))
print(' Elev: {:.1f}'.format(traj.rbeg_ele))
print()
print("Ground-fixed azimuth and altitude:")
print(' Time(s), Sample ht (m), Lat (deg), Lon (deg), Height (m), Azim (deg), Elev (deg)')
# Open a trajectory sample container
traj_samples = TrajectorySamples(traj)
# Go through every distance from the Earth centre and compute the geo coordinates at the given distance,
# as well as the point-to-point azimuth and elevation
prev_eci = None
for ht, radius in zip(height_array, radius_array):
# If the height is lower than the end height, use a fixed velocity of 3 km/s
if ht < traj.rend_ele:
t_est = ht_vs_time_interp(traj.rend_ele) + abs(ht - traj.rend_ele)/3000
time_marker = "*"
else:
# Estimate the fireball time at the given height using interpolated values
t_est = ht_vs_time_interp(ht)
time_marker = " "
# Compute the intersection between the trajectory line and the sphere of radius at the given height
intersections = lineAndSphereIntersections(np.array([0, 0, 0]), radius, traj.state_vect_mini,
traj.radiant_eci_mini)
# Choose the intersection that is closer to the state vector
inter_min_dist_indx = np.argmin([vectMag(inter - traj.state_vect_mini) for inter in intersections])
height_eci = intersections[inter_min_dist_indx]
# Compute the Julian date at the given height
jd = traj.jdt_ref + t_est/86400.0
# Compute geographical coordinates
lat, lon, ele_geo = cartesian2Geo(jd, *height_eci)
# Compute azimuth and elevation
if prev_eci is not None:
# Compute the vector pointing from the previous point to the current point
direction_vect = vectNorm(prev_eci - height_eci)
### Compute the ground-fixed alt/az
eci_x, eci_y, eci_z = height_eci
# Calculate the geocentric latitude (latitude which considers the Earth as an ellipsoid) of the reference
# trajectory point
lat_geocentric = np.arctan2(eci_z, np.sqrt(eci_x**2 + eci_y**2))
# Calculate the velocity of the Earth rotation at the position of the reference trajectory point (m/s)
v_e = 2*np.pi*vectMag(height_eci)*np.cos(lat_geocentric)/86164.09053
# Calculate the equatorial coordinates of east from the reference position on the trajectory
azimuth_east = np.pi/2
altitude_east = 0
ra_east, dec_east = altAz2RADec(azimuth_east, altitude_east, jd, lat, lon)
# The reference velocity vector has the average velocity and the given direction
# Note that ideally this would be the instantaneous velocity
v_ref_vect = traj.orbit.v_avg_norot*direction_vect
v_ref_nocorr = np.zeros(3)
# Calculate the derotated reference velocity vector/radiant
v_ref_nocorr[0] = v_ref_vect[0] + v_e*np.cos(ra_east)
v_ref_nocorr[1] = v_ref_vect[1] + v_e*np.sin(ra_east)
v_ref_nocorr[2] = v_ref_vect[2]
# Compute the radiant without Earth's rotation included
ra_norot, dec_norot = eci2RaDec(vectNorm(v_ref_nocorr))
# Precess to the epoch of date
ra_norot, dec_norot = equatorialCoordPrecession(J2000_JD.days, jd, ra_norot, dec_norot)
# Compute apparent alt/az
azim_norot, elev_norot = raDec2AltAz(ra_norot, dec_norot, jd, lat, lon)
###
else:
azim_norot = -np.inf
elev_norot = -np.inf
prev_eci = np.copy(height_eci)
# Add point parameters
traj_samples.addSample(t_est, ht, lat, lon, ele_geo, azim_norot, elev_norot)
if show_plots:
print("{:s}{:7.3f}, {:13.1f}, {:10.6f}, {:11.6f}, {:10.1f}, {:10.6f}, {:10.6f}".format(time_marker, t_est, ht, np.degrees(lat), np.degrees(lon), ele_geo, np.degrees(azim_norot), np.degrees(elev_norot)))
if show_plots:
print('The star * denotes heights extrapolated after the end of the fireball, with the fixed velocity of 3 km/s.')
print("The horizontal coordinates are apparent above a fixed ground, not topocentric in J2000!")
print('End coordinates (observed):')
print(' Lat: {:.6f}'.format(np.degrees(traj.rend_lat)))
print(' Lon: {:.6f}'.format(np.degrees(traj.rend_lon)))
print(' Elev: {:.1f}'.format(traj.rend_ele))
return traj_samples
if __name__ == "__main__":
### COMMAND LINE ARGUMENTS
# Init the command line arguments parser
arg_parser = argparse.ArgumentParser(description=""" Sample the positions on the given trajectory.
The beginning and ending heights should be given, as well as the height step. The function
returns a list of sampled points on the trajectory and their geographical coordinates. """,
formatter_class=argparse.RawTextHelpFormatter)
arg_parser.add_argument('traj_pickle_file', type=str, help='Path to the trajectory .pickle file.')
arg_parser.add_argument('beg_height', type=float, help='Sampling begin height (km). -1 to use real begin height.')
arg_parser.add_argument('end_height', type=float, help='Sampling end height (km). -1 to use the real end height')
arg_parser.add_argument('height_step', type=float, help='Sampling step (km).')
# Parse the command line arguments
cml_args = arg_parser.parse_args()
# Unpack the file name and the directory path from the given arguments
dir_path, file_name = os.path.split(cml_args.traj_pickle_file)
# Convert units to meters
beg_ht = 1000*cml_args.beg_height
end_ht = 1000*cml_args.end_height
sample_step = 1000*cml_args.height_step
############################
# # Directory of the trajectory file
# dir_path = "/home/dvida/Dropbox/UWO Master's/Projects/MILIG files/20180117_010828 Michigan fireball (2 stations) second"
# # Trajectory pickle file
# file_name = "20180117_010828_trajectory.pickle"
# # Beginning height of sampling (m)
# # Use -1 for the beginning height of the fireball
# beg_ht = 50000.0
# # End height of sampling (m)
# # Use -1 for the final height
# end_ht = 10000.0
# # Sampling step (m)
# sample_step = 100
# Load the trajectory file
traj = loadPickle(dir_path, file_name)
# Run trajectory sampling
sampleTrajectory(traj, beg_ht, end_ht, sample_step, show_plots=True)
# # Test the line and sphere intersection
# centre = np.array([1.0, 0.0, 0.0])
# radius = 1.0
# origin = np.array([1.0, 1.0, 1.0])
# direction = np.array([-1.0, -1.0, -1.0])
# intersection = lineAndSphereIntersections(centre, radius, origin, direction)
# print(intersection)
# _plotSphereAndArrow(centre, radius, origin, direction, intersection)
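sampleTrajectory relies on lineAndSphereIntersections to find where the straight trajectory line pierces the sphere of a given radius. The underlying geometry reduces to a quadratic in the line parameter t; the following standalone sketch (written for this illustration, not the wmpl implementation) shows the calculation:

```python
import math

def line_sphere_intersections(centre, radius, origin, direction):
    """Points where the line origin + t*direction meets the sphere |p - centre| = radius.

    Substituting the line into the sphere equation gives a*t**2 + b*t + k = 0;
    the discriminant decides whether the line misses, grazes, or crosses the sphere.
    """
    oc = [o - c for o, c in zip(origin, centre)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    k = sum(e * e for e in oc) - radius**2
    disc = b * b - 4.0 * a * k
    if disc < 0:
        return []  # line misses the sphere entirely
    # A set collapses the double root of the tangent case into one point
    roots = {(-b - math.sqrt(disc)) / (2.0 * a), (-b + math.sqrt(disc)) / (2.0 * a)}
    return [tuple(o + t * d for o, d in zip(origin, direction)) for t in sorted(roots)]

print(line_sphere_intersections((0, 0, 0), 1.0, (2, 0, 0), (-1, 0, 0)))
# [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)]
```

In sampleTrajectory the two intersections correspond to the near and far side of the height sphere, which is why the code then picks the one closest to the state vector.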
|
|
{
"filename": "PhosimCmpt.py",
"repo_name": "lsst-ts/ts_phosim",
"repo_path": "ts_phosim_extracted/ts_phosim-main/python/lsst/ts/phosim/PhosimCmpt.py",
"type": "Python"
}
|
# This file is part of ts_phosim.
#
# Developed for the LSST Telescope and Site Systems.
# This product includes software developed by the LSST Project
# (https://www.lsst.org).
# See the COPYRIGHT file at the top-level directory of this distribution
# for details of code ownership.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
import os
import re
import shutil
import warnings
import numpy as np
from scipy import ndimage
from astropy.io import fits
from lsst.ts.wep.utils import runProgram
from lsst.ts.wep.paramReader import ParamReader
from lsst.ts.phosim.OpdMetrology import OpdMetrology
from lsst.ts.phosim.utils.Utility import getConfigDir, sortOpdFileList
from lsst.ts.phosim.utils.SensorWavefrontError import SensorWavefrontError
class PhosimCmpt(object):
def __init__(self, tele):
"""Initialization of PhoSim component class.
WEP: wavefront estimation pipeline.
Parameters
----------
tele : TeleFacade
Telescope instance.
"""
# Configuration directory
self.configDir = getConfigDir()
# Telescope setting file
settingFilePath = os.path.join(self.configDir, "phosimCmptSetting.yaml")
self._phosimCmptSettingFile = ParamReader(filePath=settingFilePath)
# TeleFacade instance
self.tele = tele
# OPD metrology
self.metr = OpdMetrology()
self.metr.setCamera(self.tele.surveyParam["instName"])
# Output directory of data
self.outputDir = ""
# Output directory of image
self.outputImgDir = ""
# Seed number
self.seedNum = 0
# M1M3 force error
self.m1m3ForceError = 0.05
def setM1M3ForceError(self, m1m3ForceError):
"""Set the M1M3 force error.
Parameters
----------
m1m3ForceError : float
Ratio of actuator force error between 0 and 1.
"""
self.m1m3ForceError = m1m3ForceError
def getM1M3ForceError(self):
"""Get the M1M3 force error.
Returns
-------
float
Ratio of actuator force error.
"""
return self.m1m3ForceError
def getSettingFile(self):
"""Get the setting file.
Returns
-------
lsst.ts.wep.paramReader
Setting file.
"""
return self._phosimCmptSettingFile
def getTele(self):
"""Get the telescope object.
Returns
-------
TeleFacade
Telescope object.
"""
return self.tele
def getNumOfZk(self):
"""Get the number of Zk (annular Zernike polynomial).
Returns
-------
int
Number of Zk.
"""
return int(self._phosimCmptSettingFile.getSetting("numOfZk"))
def getIntraFocalDirName(self):
"""Get the intra-focal directory name.
Returns
-------
str
Intra-focal directory name.
"""
return self._phosimCmptSettingFile.getSetting("intraDirName")
def getExtraFocalDirName(self):
"""Get the extra-focal directory name.
Returns
-------
str
Extra-focal directory name.
"""
return self._phosimCmptSettingFile.getSetting("extraDirName")
def getWfsDirName(self):
"""Get the WFS directory name.
Returns
-------
str
WFS directory name.
"""
return self._phosimCmptSettingFile.getSetting("wfsDirName")
def getOpdMetr(self):
"""Get the OPD metrology object.
OPD: optical path difference.
Returns
-------
OpdMetrology
OPD metrology object.
"""
return self.metr
def setOutputDir(self, outputDir):
"""Set the output directory.
The output directory will be constructed if it does not already exist.
Parameters
----------
outputDir : str
Output directory.
"""
self._makeDir(outputDir)
self.outputDir = outputDir
def _makeDir(self, newDir, exist_ok=True):
"""Make the new directory.
Super-mkdir; create a leaf directory and all intermediate ones. Works
like mkdir, except that any intermediate path segment (not just the
rightmost) will be created if it does not exist.
Parameters
----------
newDir : str
New directory.
exist_ok : bool, optional
If the target directory already exists, raise an OSError if
exist_ok is False. Otherwise no exception is raised. (the default
is True.)
"""
os.makedirs(newDir, exist_ok=exist_ok)
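_makeDir delegates to os.makedirs, whose exist_ok flag is what makes repeated calls safe; a minimal standalone demonstration:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    target = os.path.join(root, "a", "b", "c")
    os.makedirs(target, exist_ok=True)  # creates a/, a/b/ and a/b/c/ in one call
    os.makedirs(target, exist_ok=True)  # second call is a no-op, no OSError raised
    print(os.path.isdir(target))        # True
```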
def getOutputDir(self):
"""Get the output directory.
Returns
-------
str
Output directory.
"""
return self.outputDir
def setOutputImgDir(self, outputImgDir):
"""Set the output image directory.
The output image directory will be constructed if it does not already exist.
Parameters
----------
outputImgDir : str
Output image directory
"""
self._makeDir(outputImgDir)
self.outputImgDir = outputImgDir
def getOutputImgDir(self):
"""Get the output image directory.
Returns
-------
str
Output image directory
"""
return self.outputImgDir
def setSeedNum(self, seedNum):
"""Set the seed number for the M1M3 mirror surface purturbation.
Parameters
----------
seedNum : int
Seed number.
"""
self.seedNum = int(seedNum)
def getSeedNum(self):
"""Get the seed number for the M1M3 random surface purturbation.
Returns
-------
int or None
Seed number. None means there is no random perturbation.
"""
return self.seedNum
def setSurveyParam(
self,
obsId=None,
filterType=None,
boresight=None,
zAngleInDeg=None,
rotAngInDeg=None,
):
"""Set the survey parameters.
Parameters
----------
obsId : int, optional
Observation Id. (the default is None.)
filterType : enum 'FilterType' in lsst.ts.wep.utils, optional
Active filter type. (the default is None.)
boresight : tuple, optional
Telescope boresight in (ra, decl). (the default is None.)
zAngleInDeg : float, optional
Zenith angle in degree. (the default is None.)
rotAngInDeg : float, optional
Camera rotation angle in degree between -90 and 90 degrees. (the
default is None.)
"""
self.tele.setSurveyParam(
obsId=obsId,
filterType=filterType,
boresight=boresight,
zAngleInDeg=zAngleInDeg,
rotAngInDeg=rotAngInDeg,
)
def addOpdFieldXYbyDeg(self, fieldXInDegree, fieldYInDegree):
"""Add the OPD new field X, Y in degree.
OPD: optical path difference.
Parameters
----------
fieldXInDegree : float, list, or numpy.ndarray
New field X in degree.
fieldYInDegree : float, list, or numpy.ndarray
New field Y in degree.
"""
self.metr.addFieldXYbyDeg(fieldXInDegree, fieldYInDegree)
def accDofInUm(self, dofInUm):
"""Accumulate the aggregated degree of freedom (DOF) in um.
idx 0-4: M2 dz, dx, dy, rx, ry
idx 5-9: Cam dz, dx, dy, rx, ry
idx 10-29: M1M3 20 bending modes
idx 30-49: M2 20 bending modes
Parameters
----------
dofInUm : list or numpy.ndarray
DOF in um.
"""
self.tele.accDofInUm(dofInUm)
def setDofInUm(self, dofInUm):
"""Set the accumulated degree of freedom (DOF) in um.
idx 0-4: M2 dz, dx, dy, rx, ry
idx 5-9: Cam dz, dx, dy, rx, ry
idx 10-29: M1M3 20 bending modes
idx 30-49: M2 20 bending modes
Parameters
----------
dofInUm : list or numpy.ndarray
DOF in um.
"""
self.tele.setDofInUm(dofInUm)
def getDofInUm(self):
"""Get the accumulated degree of freedom (DOF) in um.
idx 0-4: M2 dz, dx, dy, rx, ry
idx 5-9: Cam dz, dx, dy, rx, ry
idx 10-29: M1M3 20 bending modes
idx 30-49: M2 20 bending modes
Returns
-------
numpy.ndarray
DOF in um.
"""
return self.tele.getDofInUm()
def saveDofInUmFileForNextIter(
self, dofInUm, dofInUmFileName="dofPertInNextIter.mat"
):
"""Save the DOF in um data to file for the next iteration.
DOF: degree of freedom.
Parameters
----------
dofInUm : list or numpy.ndarray
DOF in um.
dofInUmFileName : str, optional
File name to save the DOF in um. (the default is
"dofPertInNextIter.mat".)
"""
filePath = os.path.join(self.outputDir, dofInUmFileName)
header = "The followings are the DOF in um:"
np.savetxt(filePath, np.transpose(dofInUm), header=header)
def runPhoSim(self, argString):
"""Run the PhoSim program.
Parameters
----------
argString : str
Arguments for PhoSim.
"""
self.tele.runPhoSim(argString)
def getComCamOpdArgsAndFilesForPhoSim(
self,
cmdFileName="opd.cmd",
instFileName="opd.inst",
logFileName="opdPhoSim.log",
cmdSettingFileName="opdDefault.cmd",
instSettingFileName="opdDefault.inst",
):
"""Get the OPD calculation arguments and files of ComCam for the PhoSim
calculation.
OPD: optical path difference.
ComCam: commissioning camera.
Parameters
----------
cmdFileName : str, optional
Physical command file name. (the default is "opd.cmd".)
instFileName : str, optional
OPD instance file name. (the default is "opd.inst".)
logFileName : str, optional
Log file name. (the default is "opdPhoSim.log".)
cmdSettingFileName : str, optional
Physical command setting file name. (the default is
"opdDefault.cmd".)
instSettingFileName : str, optional
Instance setting file name. (the default is "opdDefault.inst".)
Returns
-------
str
Arguments to run the PhoSim.
"""
warnings.warn(
"Use getOpdArgsAndFilesForPhoSim() instead.",
category=DeprecationWarning,
stacklevel=2,
)
argString = self.getOpdArgsAndFilesForPhoSim(
"comcam",
cmdFileName=cmdFileName,
instFileName=instFileName,
logFileName=logFileName,
cmdSettingFileName=cmdSettingFileName,
instSettingFileName=instSettingFileName,
)
return argString
def getOpdArgsAndFilesForPhoSim(
self,
instName,
cmdFileName="opd.cmd",
instFileName="opd.inst",
logFileName="opdPhoSim.log",
cmdSettingFileName="opdDefault.cmd",
instSettingFileName="opdDefault.inst",
):
"""Get the OPD calculation arguments and files for the PhoSim
calculation.
OPD: optical path difference.
Parameters
----------
instName : `str`
Instrument name.
cmdFileName : str, optional
Physical command file name. (the default is "opd.cmd".)
instFileName : str, optional
OPD instance file name. (the default is "opd.inst".)
logFileName : str, optional
Log file name. (the default is "opdPhoSim.log".)
cmdSettingFileName : str, optional
Physical command setting file name. (the default is
"opdDefault.cmd".)
instSettingFileName : str, optional
Instance setting file name. (the default is "opdDefault.inst".)
Returns
-------
str
Arguments to run the PhoSim.
"""
# Set the weighting ratio and field positions of OPD
if instName == "lsst":
self.metr.setDefaultLsstWfsGQ()
else:
self.metr.setWgtAndFieldXyOfGQ(instName)
# Write the command file
cmdFilePath = self._writePertAndCmdFiles(cmdSettingFileName, cmdFileName)
# Write the instance file
instSettingFile = self._getInstSettingFilePath(instSettingFileName)
instFilePath = self.tele.writeOpdInstFile(
self.outputDir,
self.metr,
instSettingFile=instSettingFile,
instFileName=instFileName,
)
# Get the argument to run the PhoSim
argString = self._getPhoSimArgs(logFileName, instFilePath, cmdFilePath)
return argString
def _writePertAndCmdFiles(self, cmdSettingFileName, cmdFileName):
"""Write the physical perturbation and command files.
Parameters
----------
cmdSettingFileName : str
Physical command setting file name.
cmdFileName : str
Physical command file name.
Returns
-------
str
Command file path.
"""
# Write the perturbation file
pertCmdFileName = "pert.cmd"
pertCmdFilePath = os.path.join(self.outputDir, pertCmdFileName)
if not os.path.exists(pertCmdFilePath):
self.tele.writePertBaseOnConfigFile(
self.outputDir,
seedNum=self.seedNum,
m1m3ForceError=self.m1m3ForceError,
saveResMapFig=True,
pertCmdFileName=pertCmdFileName,
)
# Write the physical command file
cmdSettingFile = os.path.join(self.configDir, "cmdFile", cmdSettingFileName)
cmdFilePath = os.path.join(self.outputDir, cmdFileName)
if not os.path.exists(cmdFilePath):
self.tele.writeCmdFile(
self.outputDir,
cmdSettingFile=cmdSettingFile,
pertFilePath=pertCmdFilePath,
cmdFileName=cmdFileName,
)
return cmdFilePath
def _getInstSettingFilePath(self, instSettingFileName):
"""Get the instance setting file path.
Parameters
----------
instSettingFileName : str
Instance setting file name.
Returns
-------
str
Instance setting file path.
"""
instSettingFile = os.path.join(self.configDir, "instFile", instSettingFileName)
return instSettingFile
def _getPhoSimArgs(self, logFileName, instFilePath, cmdFilePath):
"""Get the arguments needed to run the PhoSim.
Parameters
----------
logFileName : str
Log file name.
instFilePath: str
Instance file path.
cmdFilePath : str
Physical command file path.
Returns
-------
str
Arguments to run the PhoSim.
"""
# PhoSim parameters
numPro = int(self._phosimCmptSettingFile.getSetting("numPro"))
e2ADC = int(self._phosimCmptSettingFile.getSetting("e2ADC"))
logFilePath = os.path.join(self.outputImgDir, logFileName)
argString = self.tele.getPhoSimArgs(
instFilePath,
extraCommandFile=cmdFilePath,
numPro=numPro,
outputDir=self.outputImgDir,
e2ADC=e2ADC,
logFilePath=logFilePath,
)
return argString
def getComCamStarArgsAndFilesForPhoSim(
self,
extraObsId,
intraObsId,
skySim,
simSeed=1000,
cmdSettingFileName="starDefault.cmd",
instSettingFileName="starSingleExp.inst",
):
"""Get the star calculation arguments and files of ComCam for the
PhoSim calculation.
Parameters
----------
extraObsId : int
Extra-focal observation Id.
intraObsId : int
Intra-focal observation Id.
skySim : SkySim
Sky simulator
simSeed : int, optional
Random number seed. (the default is 1000.)
cmdSettingFileName : str, optional
Physical command setting file name. (the default is
"starDefault.cmd".)
instSettingFileName : str, optional
Instance setting file name. (the default is "starSingleExp.inst".)
Returns
-------
list[str]
List of arguments to run the PhoSim.
"""
warnings.warn(
"Use getPistonCamStarArgsAndFilesForPhoSim() instead.",
category=DeprecationWarning,
stacklevel=2,
)
return self.getPistonCamStarArgsAndFilesForPhoSim(
extraObsId,
intraObsId,
skySim,
simSeed=simSeed,
cmdSettingFileName=cmdSettingFileName,
instSettingFileName=instSettingFileName,
)
def getPistonCamStarArgsAndFilesForPhoSim(
self,
extraObsId,
intraObsId,
skySim,
simSeed=1000,
cmdSettingFileName="starDefault.cmd",
instSettingFileName="starSingleExp.inst",
):
"""Get the star calculation arguments and files of piston camera (
ComCam or LSST FAM) for the PhoSim calculation.
FAM: Full-array mode.
Parameters
----------
extraObsId : int
Extra-focal observation Id.
intraObsId : int
Intra-focal observation Id.
skySim : SkySim
Sky simulator
simSeed : int, optional
Random number seed. (the default is 1000.)
cmdSettingFileName : str, optional
Physical command setting file name. (the default is
"starDefault.cmd".)
instSettingFileName : str, optional
Instance setting file name. (the default is "starSingleExp.inst".)
Returns
-------
list[str]
List of arguments to run the PhoSim.
"""
# Set the intra- and extra-focal related information
obsIdList = {"-1": extraObsId, "1": intraObsId}
instFileNameList = {"-1": "starExtra.inst", "1": "starIntra.inst"}
logFileNameList = {"-1": "starExtraPhoSim.log", "1": "starIntraPhoSim.log"}
extraFocalDirName = self.getExtraFocalDirName()
intraFocalDirName = self.getIntraFocalDirName()
outImgDirNameList = {"-1": extraFocalDirName, "1": intraFocalDirName}
# Write the instance and command files of defocal conditions
cmdFileName = "star.cmd"
onFocalDofInUm = self.getDofInUm()
onFocalOutputImgDir = self.outputImgDir
argStringList = []
for ii in (-1, 1):
# Set the observation ID
self.setSurveyParam(obsId=obsIdList[str(ii)])
# Camera piston (Change the unit from mm to um)
pistonInUm = np.zeros(len(onFocalDofInUm))
pistonInUm[5] = ii * self.tele.getDefocalDistInMm() * 1e3
# Set the new DOF that considers the piston motion
self.setDofInUm(onFocalDofInUm + pistonInUm)
# Update the output image directory
outputImgDir = os.path.join(onFocalOutputImgDir, outImgDirNameList[str(ii)])
self.setOutputImgDir(outputImgDir)
# Get the argument to run the phosim
argString = self.getStarArgsAndFilesForPhoSim(
skySim,
cmdFileName=cmdFileName,
instFileName=instFileNameList[str(ii)],
logFileName=logFileNameList[str(ii)],
simSeed=simSeed,
cmdSettingFileName=cmdSettingFileName,
instSettingFileName=instSettingFileName,
)
argStringList.append(argString)
# Put the internal state back to the focal plane condition
self.setDofInUm(onFocalDofInUm)
self.setOutputImgDir(onFocalOutputImgDir)
return argStringList
def getWfsStarArgsAndFilesForPhoSim(
self,
obsId,
skySim,
simSeed=1000,
cmdSettingFileName="starDefault.cmd",
instSettingFileName="starSingleExp.inst",
):
"""Get the star calculation arguments and files for the
wavefront sensors for the PhoSim calculation.
Parameters
----------
obsId : int
Observation Id.
skySim : SkySim
Sky simulator
simSeed : int, optional
Random number seed. (the default is 1000.)
cmdSettingFileName : str, optional
Physical command setting file name. (the default is
"starDefault.cmd".)
instSettingFileName : str, optional
Instance setting file name. (the default is "starSingleExp.inst".)
Returns
-------
str
Arguments to run the PhoSim.
"""
instFileName = "starWfs.inst"
logFileName = "starWfsPhosim.log"
wfsDirName = self.getWfsDirName()
# Write the command files of conditions
cmdFileName = "star.cmd"
inFocusDofInUm = self.getDofInUm()
inFocusOutputImgDir = self.outputImgDir
# Set the observation ID
self.setSurveyParam(obsId=obsId)
# Set the DOF
self.setDofInUm(inFocusDofInUm)
# Update the output image directory
outputImgDir = os.path.join(inFocusOutputImgDir, wfsDirName)
self.setOutputImgDir(outputImgDir)
# Get the argument to run the phosim
argString = self.getStarArgsAndFilesForPhoSim(
skySim,
cmdFileName=cmdFileName,
instFileName=instFileName,
logFileName=logFileName,
simSeed=simSeed,
cmdSettingFileName=cmdSettingFileName,
instSettingFileName=instSettingFileName,
)
# Return to original state
self.setOutputImgDir(inFocusOutputImgDir)
return argString
def getStarArgsAndFilesForPhoSim(
self,
skySim,
cmdFileName="star.cmd",
instFileName="star.inst",
logFileName="starPhoSim.log",
simSeed=1000,
cmdSettingFileName="starDefault.cmd",
instSettingFileName="starSingleExp.inst",
):
"""Get the star calculation arguments and files for the PhoSim
calculation.
Parameters
----------
skySim : SkySim
Sky simulator
cmdFileName : str, optional
Physical command file name. (the default is "star.cmd".)
instFileName : str, optional
Star instance file name. (the default is "star.inst".)
logFileName : str, optional
Log file name. (the default is "starPhoSim.log".)
simSeed : int, optional
Random number seed. (the default is 1000)
cmdSettingFileName : str, optional
Physical command setting file name. (the default is
"starDefault.cmd".)
instSettingFileName : str, optional
Instance setting file name. (the default is "starSingleExp.inst".)
Returns
-------
str
Arguments to run the PhoSim.
"""
# Write the command file
cmdFilePath = self._writePertAndCmdFiles(cmdSettingFileName, cmdFileName)
# Write the instance file
instSettingFile = self._getInstSettingFilePath(instSettingFileName)
instFilePath = self.tele.writeStarInstFile(
self.outputDir,
skySim,
simSeed=simSeed,
sedName="sed_flat.txt",
instSettingFile=instSettingFile,
instFileName=instFileName,
)
# Get the argument to run the PhoSim
argString = self._getPhoSimArgs(logFileName, instFilePath, cmdFilePath)
return argString
def analyzeComCamOpdData(
self, zkFileName="opd.zer", rotOpdInDeg=0.0, pssnFileName="PSSN.txt"
):
"""Analyze the ComCam OPD data.
Rotate OPD to simulate the output of a rotated camera. When analyzing the
PSSN, the unrotated OPD is used.
ComCam: Commissioning camera.
OPD: Optical path difference.
PSSN: Normalized point source sensitivity.
Parameters
----------
zkFileName : str, optional
OPD in zk file name. (the default is "opd.zer".)
rotOpdInDeg : float, optional
Rotation angle of the OPD in degrees in the counter-clockwise
direction. (the default is 0.0.)
pssnFileName : str, optional
PSSN file name. (the default is "PSSN.txt".)
"""
warnings.warn(
"Use analyzeOpdData() instead.",
category=DeprecationWarning,
stacklevel=2,
)
self.analyzeOpdData(
"comcam",
zkFileName=zkFileName,
rotOpdInDeg=rotOpdInDeg,
pssnFileName=pssnFileName,
)
def analyzeOpdData(
self, instName, zkFileName="opd.zer", rotOpdInDeg=0.0, pssnFileName="PSSN.txt"
):
"""Analyze the OPD data.
Rotate OPD to simulate the output of a rotated camera. When analyzing the
PSSN, the unrotated OPD is used.
OPD: Optical path difference.
PSSN: Normalized point source sensitivity.
Parameters
----------
instName : `str`
Instrument name.
zkFileName : str, optional
OPD in zk file name. (the default is "opd.zer".)
rotOpdInDeg : float, optional
Rotation angle of the OPD in degrees in the counter-clockwise
direction. (the default is 0.0.)
pssnFileName : str, optional
PSSN file name. (the default is "PSSN.txt".)
"""
self._writeOpdZkFile(zkFileName, rotOpdInDeg)
self._writeOpdPssnFile(instName, pssnFileName)
def _writeOpdZkFile(self, zkFileName, rotOpdInDeg):
"""Write the OPD in zk file.
OPD: optical path difference.
Parameters
----------
zkFileName : str
OPD in zk file name.
rotOpdInDeg : float
Rotation angle of the OPD in degrees in the counter-clockwise direction.
"""
filePath = os.path.join(self.outputImgDir, zkFileName)
opdData = self._mapOpdToZk(rotOpdInDeg)
header = (
"The followings are OPD in rotation angle of %.2f degree in um from z4 to z22:"
% (rotOpdInDeg)
)
np.savetxt(filePath, opdData, header=header)
def _mapOpdToZk(self, rotOpdInDeg):
"""Map the OPD to the basis of annular Zernike polynomial (Zk).
OPD: optical path difference.
Parameters
----------
rotOpdInDeg : float
Rotation angle of the OPD in degrees in the counter-clockwise direction.
Returns
-------
numpy.ndarray
Zk data from OPD. This is a 2D array. The row is the OPD index and
the column is z4 to z22 in um. The order of OPD index is based on
the file name.
"""
# Get the sorted OPD file list
opdFileList = self._getOpdFileInDir(self.outputImgDir)
# Map the OPD to the Zk basis and do the collection
numOfZk = self.getNumOfZk()
opdData = np.zeros((len(opdFileList), numOfZk))
for idx, opdFile in enumerate(opdFileList):
opd = fits.getdata(opdFile)
# Rotate OPD if needed
if rotOpdInDeg != 0:
opdRot = ndimage.rotate(opd, rotOpdInDeg, reshape=False)
opdRot[opd == 0] = 0
else:
opdRot = opd
# z1 to z22 (22 terms)
zk = self.metr.getZkFromOpd(opdMap=opdRot)[0]
# Only need to collect z4 to z22
initIdx = 3
opdData[idx, :] = zk[initIdx : initIdx + numOfZk]
return opdData
def _getOpdFileInDir(self, opdDir):
"""Get the sorted OPD files in the directory.
OPD: Optical path difference.
Parameters
----------
opdDir : str
OPD file directory.
Returns
-------
list
List of sorted OPD files.
"""
# Get the files
opdFileList = []
fileList = self._getFileInDir(opdDir)
for file in fileList:
fileName = os.path.basename(file)
m = re.match(r"\Aopd_\d+_(\d+).fits.gz", fileName)
if m is not None:
opdFileList.append(file)
# Do the sorting of file name
sortedOpdFileList = sortOpdFileList(opdFileList)
return sortedOpdFileList
def _getFileInDir(self, fileDir):
"""Get the files in the directory.
Parameters
----------
fileDir : str
File directory.
Returns
-------
list
List of files.
"""
fileList = []
for name in os.listdir(fileDir):
filePath = os.path.join(fileDir, name)
if os.path.isfile(filePath):
fileList.append(filePath)
return fileList
def _writeOpdPssnFile(self, instName, pssnFileName):
"""Write the OPD PSSN in file.
OPD: Optical path difference.
PSSN: Normalized point source sensitivity.
Parameters
----------
instName : `str`
Instrument name.
pssnFileName : str
PSSN file name.
"""
# Set the weighting ratio and field positions of OPD
if instName == "lsst":
self.metr.setDefaultLsstWfsGQ()
else:
self.metr.setWgtAndFieldXyOfGQ(instName)
# Calculate the PSSN
pssnList, gqEffPssn = self._calcPssnOpd()
# Calculate the FWHM
effFwhmList, gqEffFwhm = self._calcEffFwhmOpd(pssnList)
# Append the list to write the data into file
pssnList.append(gqEffPssn)
effFwhmList.append(gqEffFwhm)
# Stack the data
data = np.vstack((pssnList, effFwhmList))
# Write to file
filePath = os.path.join(self.outputImgDir, pssnFileName)
header = "The followings are PSSN and FWHM (in arcsec) data. The final number is the GQ value."
np.savetxt(filePath, data, header=header)
def _calcPssnOpd(self):
"""Calculate the PSSN of OPD.
OPD: Optical path difference.
PSSN: Normalized point source sensitivity.
GQ: Gaussian quadrature.
Returns
-------
list
PSSN list.
float
GQ effective PSSN.
"""
opdFileList = self._getOpdFileInDir(self.outputImgDir)
wavelengthInUm = self.tele.getRefWaveLength() * 1e-3
pssnList = []
for opdFile in opdFileList:
pssn = self.metr.calcPSSN(wavelengthInUm, opdFitsFile=opdFile)
pssnList.append(pssn)
# Calculate the GQ effective PSSN
gqEffPssn = self.metr.calcGQvalue(pssnList)
return pssnList, gqEffPssn
def _calcEffFwhmOpd(self, pssnList):
"""Calculate the effective FWHM of OPD.
FWHM: Full width at half maximum.
PSSN: Normalized point source sensitivity.
GQ: Gaussian quadrature.
Parameters
----------
pssnList : list
List of PSSN.
Returns
-------
list
Effective FWHM list.
float
GQ effective FWHM.
"""
# Calculate the list of effective FWHM
effFwhmList = []
for pssn in pssnList:
effFwhm = self.metr.calcFWHMeff(pssn)
effFwhmList.append(effFwhm)
# Calculate the GQ effective FWHM
gqEffFwhm = self.metr.calcGQvalue(effFwhmList)
return effFwhmList, gqEffFwhm
def mapOpdDataToListOfWfErr(self, opdZkFileName, sensorIdList, sensorNameList):
"""Map the OPD data to the list of wavefront error.
OPD: Optical path difference.
Parameters
----------
opdZkFileName : str
OPD zk file name.
sensorIdList : list
Reference sensor ID list.
sensorNameList : list
Reference sensor name list.
Returns
-------
list [lsst.ts.wep.ctrlIntf.SensorWavefrontError]
List of SensorWavefrontError object.
"""
opdZk = self._getZkFromFile(opdZkFileName)
listOfWfErr = []
for sensorId, sensorName, zk in zip(sensorIdList, sensorNameList, opdZk):
sensorWavefrontData = SensorWavefrontError(numOfZk=self.getNumOfZk())
sensorWavefrontData.setSensorId(sensorId)
sensorWavefrontData.setSensorName(sensorName)
sensorWavefrontData.setAnnularZernikePoly(zk)
listOfWfErr.append(sensorWavefrontData)
return listOfWfErr
def _getZkFromFile(self, zkFileName):
"""Get the zk (z4-z22) from file.
Parameters
----------
zkFileName : str
Zk file name.
Returns
-------
numpy.ndarray
zk matrix. The column is z4-z22. The row is each data point.
"""
filePath = os.path.join(self.outputImgDir, zkFileName)
zk = np.loadtxt(filePath)
return zk
def getOpdPssnFromFile(self, pssnFileName):
"""Get the OPD PSSN from file.
OPD: Optical path difference.
PSSN: Normalized point source sensitivity.
Parameters
----------
pssnFileName : str
PSSN file name.
Returns
-------
numpy.ndarray
PSSN.
"""
data = self._getDataOfPssnFile(pssnFileName)
pssn = data[0, :-1]
return pssn
def _getDataOfPssnFile(self, pssnFileName):
"""Get the data of the PSSN file.
PSSN: Normalized point source sensitivity.
Parameters
----------
pssnFileName : str
PSSN file name.
Returns
-------
numpy.ndarray
Data of the PSSN file.
"""
filePath = os.path.join(self.outputImgDir, pssnFileName)
data = np.loadtxt(filePath)
return data
def getOpdGqEffFwhmFromFile(self, pssnFileName):
"""Get the OPD GQ effective FWHM from file.
OPD: Optical path difference.
GQ: Gaussian quadrature.
FWHM: Full width at half maximum.
PSSN: Normalized point source sensitivity.
Parameters
----------
pssnFileName : str
PSSN file name.
Returns
-------
float
OPD GQ effective FWHM.
"""
data = self._getDataOfPssnFile(pssnFileName)
gqEffFwhm = data[1, -1]
return gqEffFwhm
def getListOfFwhmSensorData(self, pssnFileName, sensorIdList):
"""Get the list of FWHM sensor data based on the OPD PSSN file.
FWHM: Full width at half maximum.
OPD: Optical path difference.
PSSN: Normalized point source sensitivity.
Parameters
----------
pssnFileName : str
PSSN file name.
sensorIdList : list
Reference sensor id list.
Returns
-------
fwhmCollection : `np.ndarray [object]`
Numpy array with fwhm data. This is a numpy array of arrays. The
data type is `object` because each element may have a different
number of elements.
sensor_id: `np.ndarray`
Numpy array with sensor ids.
"""
# Get the FWHM data from the PSSN file
# The first row is the PSSN and the second one is the FWHM
# The final element in each row is the GQ value
data = self._getDataOfPssnFile(pssnFileName)
fwhmData = data[1, :-1]
sensor_id = np.array(sensorIdList, dtype=int)
fwhmCollection = np.array([], dtype=object)
for fwhm in fwhmData:
fwhmCollection = np.append(fwhmCollection, fwhm)
return fwhmCollection, sensor_id
def repackageWfsCamImgs(self, instName, isEimg=False):
"""Repackage the images from in focus camera for processing.
Parameters
----------
instName : str
Instrument name.
isEimg : bool, optional
Is eimage or not. (the default is False.)
"""
# Make a temporary directory
tmpDirPath = os.path.join(self.outputImgDir, "tmp")
self._makeDir(tmpDirPath)
wfsDirName = self.getWfsDirName()
# Repackage the images to the temporary directory
command = "phosim_repackager.py"
phosimImgDir = os.path.join(self.outputImgDir, wfsDirName)
argstring = "%s --out_dir=%s" % (phosimImgDir, tmpDirPath)
argstring += f" --inst {instName} "
if isEimg:
argstring += " --eimage"
# Wavefront sensors require camera to be in focus (focusz = 0)
argstring += " --focusz 0"
runProgram(command, argstring=argstring)
# Remove the image data in the original directory
argstring = "-rf %s/*.fits*" % phosimImgDir
runProgram("rm", argstring=argstring)
# Put the repackaged data into the image directory
argstring = "%s/*.fits %s" % (tmpDirPath, phosimImgDir)
runProgram("mv", argstring=argstring)
# Remove the temporary directory
shutil.rmtree(tmpDirPath)
def repackagePistonCamImgs(self, instName, isEimg=False):
"""Repackage the images of piston camera (ComCam and LSST FAM) from
PhoSim for processing.
FAM: Full-array mode.
Parameters
----------
instName : `str`
Instrument name.
isEimg : bool, optional
Is eimage or not. (the default is False.)
"""
# Make a temporary directory
tmpDirPath = os.path.join(self.outputImgDir, "tmp")
self._makeDir(tmpDirPath)
intraFocalDirName = self.getIntraFocalDirName()
extraFocalDirName = self.getExtraFocalDirName()
for imgType in (intraFocalDirName, extraFocalDirName):
# Repackage the images to the temporary directory
command = "phosim_repackager.py"
phosimImgDir = os.path.join(self.outputImgDir, imgType)
argstring = "%s --out_dir=%s" % (phosimImgDir, tmpDirPath)
argstring += f" --inst {instName} "
if isEimg:
argstring += " --eimage"
focusz = self.tele.getDefocalDistInMm() * (
-1.0 if imgType == intraFocalDirName else 1.0
)
argstring += f" --focusz {focusz}"
runProgram(command, argstring=argstring)
# Remove the image data in the original directory
argstring = "-rf %s/*.fits*" % phosimImgDir
runProgram("rm", argstring=argstring)
# Put the repackaged data into the image directory
argstring = "%s/*.fits %s" % (tmpDirPath, phosimImgDir)
runProgram("mv", argstring=argstring)
# Remove the temporary directory
shutil.rmtree(tmpDirPath)
def repackageComCamAmpImgFromPhoSim(self):
"""Repackage the ComCam amplifier images from PhoSim to the single 16
extension MEFs for processing.
ComCam: commissioning camera.
MEF: multi-extension frames.
"""
warnings.warn(
"Use repackagePistonCamImgs() instead.",
category=DeprecationWarning,
stacklevel=2,
)
self.repackagePistonCamImgs(isEimg=False, instName="comcam")
def repackageComCamEimgFromPhoSim(self):
"""Repackage the ComCam eimages from PhoSim for processing.
ComCam: commissioning camera.
"""
warnings.warn(
"Use repackagePistonCamImgs() instead.",
category=DeprecationWarning,
stacklevel=2,
)
self.repackagePistonCamImgs(isEimg=True, instName="comcam")
def reorderAndSaveWfErrFile(
self, listOfWfErr, refSensorNameList, lsstCamera, zkFileName="wfs.zer"
):
"""Reorder the wavefront error in the wavefront error list according to
the reference sensor name list and save to a file.
Any missing wavefront error will be a numpy zero array. The unit is
um.
Parameters
----------
listOfWfErr : list [lsst.ts.wep.ctrlIntf.SensorWavefrontData]
List of SensorWavefrontData object.
refSensorNameList : list
Reference sensor name list.
lsstCamera : lsst.afw.cameraGeom.Camera
LSST instrument.
zkFileName : str, optional
Wavefront error file name. (the default is "wfs.zer".)
"""
# Get the sensor name that in the wavefront error map
wfErrMap = self._transListOfWfErrToMap(listOfWfErr, lsstCamera)
nameListInWfErrMap = list(wfErrMap.keys())
# Reorder the wavefront error map based on the reference sensor name
# list.
reorderedWfErrMap = dict()
for sensorName in refSensorNameList:
if sensorName in nameListInWfErrMap:
wfErr = wfErrMap[sensorName]
else:
numOfZk = self.getNumOfZk()
wfErr = np.zeros(numOfZk)
reorderedWfErrMap[sensorName] = wfErr
# Save the file
filePath = os.path.join(self.outputImgDir, zkFileName)
wfsData = self._getWfErrValuesAndStackToMatrix(reorderedWfErrMap)
header = "The followings are ZK in um from z4 to z22:"
np.savetxt(filePath, wfsData, header=header)
def _transListOfWfErrToMap(self, listOfWfErr, lsstCamera):
"""Transform the list of wavefront error to map.
Parameters
----------
listOfWfErr : list [lsst.ts.wep.ctrlIntf.SensorWavefrontData]
List of SensorWavefrontData object.
lsstCamera : lsst.afw.cameraGeom.Camera
LSST instrument.
Returns
-------
dict
Calculated wavefront error. The dictionary key [str] is the
abbreviated sensor name (e.g. R22_S11). The dictionary item
[numpy.ndarray] is the averaged wavefront error (z4-z22) in um.
"""
mapSensorNameAndId = dict(
[(detector.getId(), detector.getName()) for detector in lsstCamera]
)
wfErrMap = dict()
for sensorWavefrontData in listOfWfErr:
sensorId = sensorWavefrontData.getSensorId()
sensorName = mapSensorNameAndId[sensorId]
avgErrInUm = sensorWavefrontData.getAnnularZernikePoly()
wfErrMap[sensorName] = avgErrInUm
return wfErrMap
def _getWfErrValuesAndStackToMatrix(self, wfErrMap):
"""Get the wavefront errors and stack them to be a matrix.
Parameters
----------
wfErrMap : dict
Calculated wavefront error. The dictionary key [str] is the
abbreviated sensor name (e.g. R22_S11). The dictionary item
[numpy.ndarray] is the averaged wavefront error (z4-z22) in um.
Returns
-------
numpy.ndarray
Wavefront errors as a matrix. The column is z4-z22 in um. The row
is the individual sensor. The order is the same as the input of
wfErrMap.
"""
numOfZk = self.getNumOfZk()
valueMatrix = np.empty((0, numOfZk))
for wfErr in wfErrMap.values():
valueMatrix = np.vstack((valueMatrix, wfErr))
return valueMatrix
|
lsst-ts/ts_phosim: python/lsst/ts/phosim/PhosimCmpt.py
|
{
"filename": "SECURITY.md",
"repo_name": "tensorflow/tensorflow",
"repo_path": "tensorflow_extracted/tensorflow-master/SECURITY.md",
"type": "Markdown"
}
|
# Using TensorFlow Securely
This document discusses the TensorFlow security model. It describes the security
risks to consider when using models, checkpoints or input data for training or
serving. We also provide guidelines on what constitutes a vulnerability in
TensorFlow and how to report them.
This document applies to other repositories in the TensorFlow organization,
covering security practices for the entirety of the TensorFlow ecosystem.
## TensorFlow models are programs
TensorFlow
[**models**](https://developers.google.com/machine-learning/glossary/#model) (to
use a term commonly used by machine learning practitioners) are expressed as
programs that TensorFlow executes. TensorFlow programs are encoded as
computation
[**graphs**](https://developers.google.com/machine-learning/glossary/#graph).
Since models are practically programs that TensorFlow executes, using untrusted
models or graphs is equivalent to running untrusted code.
If you need to run untrusted models, execute them inside a
[**sandbox**](https://developers.google.com/code-sandboxing). Memory corruptions
in TensorFlow ops can be recognized as security issues only if they are
reachable and exploitable through production-grade, benign models.
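As a minimal, framework-agnostic sketch of process-level separation (the function and variable names here are hypothetical, not a TensorFlow API; a production deployment should use an OS-level sandbox such as gVisor, nsjail, or seccomp filters, which this sketch does not provide):

```python
import subprocess
import sys

def run_untrusted_model(script: str, timeout: int = 30) -> str:
    """Run untrusted inference code in a separate process.

    This gives process-level separation only: the child cannot corrupt
    the parent's memory, but it still shares the filesystem and network.
    A real sandbox must also confine those (namespaces, seccomp, gVisor).
    """
    result = subprocess.run(
        [sys.executable, "-c", script],
        capture_output=True,
        text=True,
        timeout=timeout,
        env={},  # do not leak the parent's environment variables
    )
    if result.returncode != 0:
        raise RuntimeError(f"untrusted code failed: {result.stderr}")
    return result.stdout

# Stand-in for untrusted model code.
print(run_untrusted_model("print(1 + 1)").strip())
```

Even this weak form of isolation ensures a crash or memory corruption in the untrusted code cannot take down or subvert the serving process itself.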
### Compilation
Compiling models via the recommended entry points described in
[XLA](https://www.tensorflow.org/xla) and
[JAX](https://jax.readthedocs.io/en/latest/jax-101/02-jitting.html)
documentation should be safe, while some of the testing and debugging tools that
come with the compiler are not designed to be used with untrusted data and
should be used with caution when working with untrusted models.
### Saved graphs and checkpoints
When loading untrusted serialized computation graphs (in form of a `GraphDef`,
`SavedModel`, or equivalent on-disk format), the set of computation primitives
available to TensorFlow is powerful enough that you should assume that the
TensorFlow process effectively executes arbitrary code.
The risk of loading untrusted checkpoints depends on the code or graph that you
are working with. When loading untrusted checkpoints, the values of the traced
variables from your model are also going to be untrusted. That means that if
your code interacts with the filesystem, network, etc. and uses checkpointed
variables as part of those interactions (ex: using a string variable to build a
filesystem path), a maliciously created checkpoint might be able to change the
targets of those operations, which could result in arbitrary
read/write/executions.
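To make the path risk concrete, here is a framework-agnostic sketch in plain Python (the directory and function names are hypothetical, and `untrusted_name` stands in for a string variable restored from a checkpoint):

```python
import os

SAFE_DIR = "/srv/model_outputs"

def resolve_output_path(untrusted_name: str) -> str:
    """Join an untrusted string onto a fixed directory, then verify
    the normalized result still lives inside that directory."""
    candidate = os.path.normpath(os.path.join(SAFE_DIR, untrusted_name))
    if not candidate.startswith(SAFE_DIR + os.sep):
        raise ValueError(f"path escapes {SAFE_DIR}: {candidate}")
    return candidate

# A benign checkpointed value stays inside the output directory.
print(resolve_output_path("run1/results.txt"))

# A malicious value would escape without the check:
# os.path.join(SAFE_DIR, "../../etc/passwd") normalizes to "/etc/passwd".
try:
    resolve_output_path("../../etc/passwd")
except ValueError as exc:
    print(exc)
```

The same validation applies to any checkpointed value that reaches the filesystem, the network, or a shell; without it, loading the checkpoint effectively hands the attacker control of those operations.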
### Running a TensorFlow server
TensorFlow is a platform for distributed computing, and as such there is a
TensorFlow server (`tf.train.Server`). The TensorFlow server is intended for
internal communication only. It is not built for use in untrusted environments
or networks.
For performance reasons, the default TensorFlow server does not include any
authorization protocol and sends messages unencrypted. It accepts connections
from anywhere, and executes the graphs it is sent without performing any checks.
Therefore, if you run a `tf.train.Server` in your network, anybody with access
to the network can execute arbitrary code with the privileges of the user
running the `tf.train.Server`.
## Untrusted inputs during training and prediction
TensorFlow supports a wide range of input data formats. For example, it can
process images, audio, videos, and text. There are several modules specialized
in taking those formats, modifying them, and/or converting them to intermediate
formats that can be processed by TensorFlow.
These modifications and conversions are handled by a variety of libraries that
have different security properties and provide different levels of confidence
when dealing with untrusted data. Based on the security history of these
libraries we consider that it is safe to work with untrusted inputs for PNG,
BMP, GIF, WAV, RAW, RAW\_PADDED, CSV and PROTO formats. All other input formats,
including tensorflow-io should be sandboxed if used to process untrusted data.
For example, if an attacker were to upload a malicious video file, they could
potentially exploit a vulnerability in the TensorFlow code that handles videos,
which could allow them to execute arbitrary code on the system running
TensorFlow.
It is important to keep TensorFlow up to date with the latest security patches
and follow the sandboxing guideline above to protect against these types of
vulnerabilities.
## Security properties of execution modes
TensorFlow has several execution modes, with Eager-mode being the default in v2.
Eager mode lets users write imperative-style statements that can be easily
inspected and debugged and it is intended to be used during the development
phase.
As part of the differences that make Eager mode easier to debug, the [shape
inference
functions](https://www.tensorflow.org/guide/create_op#define_the_op_interface)
are skipped, and any checks implemented inside the shape inference code are not
executed.
The security impact of skipping those checks should be low, since the attack
scenario would require a malicious user to be able to control the model which as
stated above is already equivalent to code execution. In any case, the
recommendation is not to serve models using Eager mode since it also has
performance limitations.
## Multi-Tenant environments
It is possible to run multiple TensorFlow models in parallel. For example,
`ModelServer` collates all computation graphs exposed to it (from multiple
`SavedModel`) and executes them in parallel on available executors. Running
TensorFlow in a multitenant design mixes the risks described above with the
inherent ones from multitenant configurations. The primary areas of concern are
tenant isolation, resource allocation, model sharing and hardware attacks.
### Tenant isolation
Since any tenants or users providing models, graphs or checkpoints can execute
code in context of the TensorFlow service, it is important to design isolation
mechanisms that prevent unwanted access to the data from other tenants.
Network isolation between different models is also important not only to prevent
unauthorized access to data or models, but also to prevent malicious users or
tenants sending graphs to execute under another tenant’s identity.
The isolation mechanisms are the responsibility of the users to design and
implement, and therefore security issues deriving from their absence are not
considered a vulnerability in TensorFlow.
### Resource allocation
A denial of service caused by one model could bring down the entire server, but
we don't consider this as a vulnerability, given that models can exhaust
resources in many different ways and solutions exist to prevent this from
happening (e.g., rate limits, ACLs, monitors to restart broken servers).
### Model sharing
If the multitenant design allows sharing models, make sure that tenants and
users are aware of the security risks detailed here and that they are going to
be practically running code provided by other users. Currently there are no good
ways to detect malicious models/graphs/checkpoints, so the recommended way to
mitigate the risk in this scenario is to sandbox the model execution.
### Hardware attacks
Physical GPUs or TPUs can also be the target of attacks. [Published
research](https://scholar.google.com/scholar?q=gpu+side+channel) shows that it
might be possible to use side channel attacks on the GPU to leak data from other
running models or processes in the same system. GPUs can also have
implementation bugs that might allow attackers to leave malicious code running
and leak or tamper with applications from other users. Please report
vulnerabilities to the vendor of the affected hardware accelerator.
## Reporting vulnerabilities
### Vulnerabilities in TensorFlow
This document covers different use cases for TensorFlow together with comments
whether these uses were recommended or considered safe, or where we recommend
some form of isolation when dealing with untrusted data. As a result, this
document also outlines what issues we consider as TensorFlow security
vulnerabilities.
We recognize issues as vulnerabilities only when they occur in scenarios that we
outline as safe; issues that have a security impact only when TensorFlow is used
in a discouraged way (e.g. running untrusted models or checkpoints, data parsing
outside of the safe formats, etc.) are not treated as vulnerabilities.
### Reporting process
Please use [Google Bug Hunters reporting form](https://g.co/vulnz) to report
security vulnerabilities. Please include the following information along with
your report:
- A descriptive title
- Your name and affiliation (if any).
- A description of the technical details of the vulnerabilities.
- A minimal example of the vulnerability. It is very important to let us know
how we can reproduce your findings. For memory corruption triggerable in
TensorFlow models, please demonstrate an exploit against one of Alphabet's
models in <https://tfhub.dev/>
- An explanation of who can exploit this vulnerability, and what they gain
when doing so. Write an attack scenario that demonstrates how your issue
violates the use cases and security assumptions defined in the threat model.
This will help us evaluate your report quickly, especially if the issue is
complex.
- Whether this vulnerability is public or known to third parties. If it is,
please provide details.
We will try to fix the problems as soon as possible. Vulnerabilities will, in
general, be batched to be fixed at the same time as a quarterly release. We
credit reporters for identifying security issues, although we keep your name
confidential if you request it. Please see Google Bug Hunters program website
for more info.
|
{
"filename": "__main__.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/Pillow/py3/PIL/__main__.py",
"type": "Python"
}
|
from __future__ import annotations
from .features import pilinfo
def main():
pilinfo()
if __name__ == '__main__':
main()
|
{
"filename": "interpol.py",
"repo_name": "sibirrer/lenstronomy",
"repo_path": "lenstronomy_extracted/lenstronomy-main/lenstronomy/LensModel/Profiles/interpol.py",
"type": "Python"
}
|
__author__ = "sibirrer"
import scipy.interpolate
import numpy as np
import lenstronomy.Util.util as util
from lenstronomy.LensModel.Profiles.base_profile import LensProfileBase
__all__ = ["Interpol", "InterpolScaled"]
class Interpol(LensProfileBase):
"""Class which uses an interpolation of a lens model and its first and second order
derivatives.
See also the tests in lenstronomy.test.test_LensModel.test_Profiles.test_interpol.py for example use cases
as checks against known analytic models.
The deflection angle is in the same convention as the one in the LensModel module, meaning that:
source position = image position - deflection angle
"""
param_names = [
"grid_interp_x",
"grid_interp_y",
"f_",
"f_x",
"f_y",
"f_xx",
"f_yy",
"f_xy",
]
lower_limit_default = {}
upper_limit_default = {}
def __init__(self, grid=False, min_grid_number=100, kwargs_spline=None):
"""
:param grid: bool, if True, computes the calculation on a grid
:param min_grid_number: minimum number of positions to compute the interpolation on a grid, otherwise in a loop
:param kwargs_spline: keyword arguments for the scipy.interpolate.RectBivariateSpline() interpolation (optional)
if None, a default linear interpolation is chosen.
"""
self._grid = grid
self._min_grid_number = min_grid_number
if kwargs_spline is None:
kwargs_spline = {"kx": 1, "ky": 1, "s": 0}
self._kwargs_spline = kwargs_spline
super(Interpol, self).__init__()
    def function(
        self,
        x,
        y,
        grid_interp_x=None,
        grid_interp_y=None,
        f_=None,
        f_x=None,
        f_y=None,
        f_xx=None,
        f_yy=None,
        f_xy=None,
    ):
        """
        :param x: x-coordinate (angular position), float or numpy array
        :param y: y-coordinate (angular position), float or numpy array
        :param grid_interp_x: numpy array (ascending) to mark the x-direction of the interpolation grid
        :param grid_interp_y: numpy array (ascending) to mark the y-direction of the interpolation grid
        :param f_: 2d numpy array of lensing potential, matching the grids in grid_interp_x and grid_interp_y
        :param f_x: 2d numpy array of deflection in x-direction, matching the grids in grid_interp_x and grid_interp_y
        :param f_y: 2d numpy array of deflection in y-direction, matching the grids in grid_interp_x and grid_interp_y
        :param f_xx: 2d numpy array of df/dxx, matching the grids in grid_interp_x and grid_interp_y
        :param f_yy: 2d numpy array of df/dyy, matching the grids in grid_interp_x and grid_interp_y
        :param f_xy: 2d numpy array of df/dxy, matching the grids in grid_interp_x and grid_interp_y
        :return: potential at interpolated positions (x, y)
        """
        n = len(np.atleast_1d(x))
        if n <= 1 and np.shape(x) == ():
            f_out = self.f_interp(x, y, grid_interp_x, grid_interp_y, f_)
            return f_out
        else:
            if self._grid and n >= self._min_grid_number:
                x_axes, y_axes = util.get_axes(x, y)
                f_out = self.f_interp(
                    x_axes, y_axes, grid_interp_x, grid_interp_y, f_, grid=self._grid
                )
                f_out = util.image2array(f_out)
            else:
                f_out = np.zeros(n)
                for i in range(n):
                    f_out[i] = self.f_interp(
                        x[i], y[i], grid_interp_x, grid_interp_y, f_
                    )
        return f_out
    def derivatives(
        self,
        x,
        y,
        grid_interp_x=None,
        grid_interp_y=None,
        f_=None,
        f_x=None,
        f_y=None,
        f_xx=None,
        f_yy=None,
        f_xy=None,
    ):
        """Returns df/dx and df/dy of the function.

        :param x: x-coordinate (angular position), float or numpy array
        :param y: y-coordinate (angular position), float or numpy array
        :param grid_interp_x: numpy array (ascending) to mark the x-direction of the
            interpolation grid
        :param grid_interp_y: numpy array (ascending) to mark the y-direction of the
            interpolation grid
        :param f_: 2d numpy array of lensing potential, matching the grids in
            grid_interp_x and grid_interp_y
        :param f_x: 2d numpy array of deflection in x-direction, matching the grids in
            grid_interp_x and grid_interp_y
        :param f_y: 2d numpy array of deflection in y-direction, matching the grids in
            grid_interp_x and grid_interp_y
        :param f_xx: 2d numpy array of df/dxx, matching the grids in grid_interp_x and
            grid_interp_y
        :param f_yy: 2d numpy array of df/dyy, matching the grids in grid_interp_x and
            grid_interp_y
        :param f_xy: 2d numpy array of df/dxy, matching the grids in grid_interp_x and
            grid_interp_y
        :return: f_x, f_y at interpolated positions (x, y)
        """
        n = len(np.atleast_1d(x))
        if n <= 1 and np.shape(x) == ():
            f_x_out = self.f_x_interp(x, y, grid_interp_x, grid_interp_y, f_x)
            f_y_out = self.f_y_interp(x, y, grid_interp_x, grid_interp_y, f_y)
            return f_x_out, f_y_out
        else:
            if self._grid and n >= self._min_grid_number:
                x_, y_ = util.get_axes(x, y)
                f_x_out = self.f_x_interp(
                    x_, y_, grid_interp_x, grid_interp_y, f_x, grid=self._grid
                )
                f_y_out = self.f_y_interp(
                    x_, y_, grid_interp_x, grid_interp_y, f_y, grid=self._grid
                )
                f_x_out = util.image2array(f_x_out)
                f_y_out = util.image2array(f_y_out)
            else:
                f_x_out = self.f_x_interp(x, y, grid_interp_x, grid_interp_y, f_x)
                f_y_out = self.f_y_interp(x, y, grid_interp_x, grid_interp_y, f_y)
        return f_x_out, f_y_out
    def hessian(
        self,
        x,
        y,
        grid_interp_x=None,
        grid_interp_y=None,
        f_=None,
        f_x=None,
        f_y=None,
        f_xx=None,
        f_yy=None,
        f_xy=None,
    ):
        """Returns the Hessian matrix of the function: d^2f/dx^2, d^2f/dxdy,
        d^2f/dydx, d^2f/dy^2.

        :param x: x-coordinate (angular position), float or numpy array
        :param y: y-coordinate (angular position), float or numpy array
        :param grid_interp_x: numpy array (ascending) to mark the x-direction of the
            interpolation grid
        :param grid_interp_y: numpy array (ascending) to mark the y-direction of the
            interpolation grid
        :param f_: 2d numpy array of lensing potential, matching the grids in
            grid_interp_x and grid_interp_y
        :param f_x: 2d numpy array of deflection in x-direction, matching the grids in
            grid_interp_x and grid_interp_y
        :param f_y: 2d numpy array of deflection in y-direction, matching the grids in
            grid_interp_x and grid_interp_y
        :param f_xx: 2d numpy array of df/dxx, matching the grids in grid_interp_x and
            grid_interp_y
        :param f_yy: 2d numpy array of df/dyy, matching the grids in grid_interp_x and
            grid_interp_y
        :param f_xy: 2d numpy array of df/dxy, matching the grids in grid_interp_x and
            grid_interp_y
        :return: f_xx, f_xy, f_yx, f_yy at interpolated positions (x, y)
        """
        if not hasattr(self, "_f_xx_interp") and (
            f_xx is None or f_yy is None or f_xy is None
        ):
            # no second derivatives supplied: estimate them with central
            # finite differences of the deflection field
            diff = 0.000001
            alpha_ra_pp, alpha_dec_pp = self.derivatives(
                x + diff / 2,
                y + diff / 2,
                grid_interp_x=grid_interp_x,
                grid_interp_y=grid_interp_y,
                f_=f_,
                f_x=f_x,
                f_y=f_y,
            )
            alpha_ra_pn, alpha_dec_pn = self.derivatives(
                x + diff / 2,
                y - diff / 2,
                grid_interp_x=grid_interp_x,
                grid_interp_y=grid_interp_y,
                f_=f_,
                f_x=f_x,
                f_y=f_y,
            )
            alpha_ra_np, alpha_dec_np = self.derivatives(
                x - diff / 2,
                y + diff / 2,
                grid_interp_x=grid_interp_x,
                grid_interp_y=grid_interp_y,
                f_=f_,
                f_x=f_x,
                f_y=f_y,
            )
            alpha_ra_nn, alpha_dec_nn = self.derivatives(
                x - diff / 2,
                y - diff / 2,
                grid_interp_x=grid_interp_x,
                grid_interp_y=grid_interp_y,
                f_=f_,
                f_x=f_x,
                f_y=f_y,
            )
            f_xx_out = (
                (alpha_ra_pp - alpha_ra_np + alpha_ra_pn - alpha_ra_nn) / diff / 2
            )
            f_xy_out = (
                (alpha_ra_pp - alpha_ra_pn + alpha_ra_np - alpha_ra_nn) / diff / 2
            )
            f_yx_out = (
                (alpha_dec_pp - alpha_dec_np + alpha_dec_pn - alpha_dec_nn) / diff / 2
            )
            f_yy_out = (
                (alpha_dec_pp - alpha_dec_pn + alpha_dec_np - alpha_dec_nn) / diff / 2
            )
            return f_xx_out, f_xy_out, f_yx_out, f_yy_out

        n = len(np.atleast_1d(x))
        if n <= 1 and np.shape(x) == ():
            f_xx_out = self.f_xx_interp(x, y, grid_interp_x, grid_interp_y, f_xx)
            f_yy_out = self.f_yy_interp(x, y, grid_interp_x, grid_interp_y, f_yy)
            f_xy_out = self.f_xy_interp(x, y, grid_interp_x, grid_interp_y, f_xy)
            return f_xx_out, f_xy_out, f_xy_out, f_yy_out
        else:
            if self._grid and n >= self._min_grid_number:
                x_, y_ = util.get_axes(x, y)
                f_xx_out = self.f_xx_interp(
                    x_, y_, grid_interp_x, grid_interp_y, f_xx, grid=self._grid
                )
                f_yy_out = self.f_yy_interp(
                    x_, y_, grid_interp_x, grid_interp_y, f_yy, grid=self._grid
                )
                f_xy_out = self.f_xy_interp(
                    x_, y_, grid_interp_x, grid_interp_y, f_xy, grid=self._grid
                )
                f_xx_out = util.image2array(f_xx_out)
                f_yy_out = util.image2array(f_yy_out)
                f_xy_out = util.image2array(f_xy_out)
            else:
                f_xx_out, f_yy_out, f_xy_out = np.zeros(n), np.zeros(n), np.zeros(n)
                for i in range(n):
                    f_xx_out[i] = self.f_xx_interp(
                        x[i], y[i], grid_interp_x, grid_interp_y, f_xx
                    )
                    f_yy_out[i] = self.f_yy_interp(
                        x[i], y[i], grid_interp_x, grid_interp_y, f_yy
                    )
                    f_xy_out[i] = self.f_xy_interp(
                        x[i], y[i], grid_interp_x, grid_interp_y, f_xy
                    )
        return f_xx_out, f_xy_out, f_xy_out, f_yy_out
    def f_interp(self, x, y, x_grid=None, y_grid=None, f_=None, grid=False):
        if not hasattr(self, "_f_interp"):
            self._f_interp = scipy.interpolate.RectBivariateSpline(
                y_grid, x_grid, f_, **self._kwargs_spline
            )
        return self._f_interp(y, x, grid=grid)

    def f_x_interp(self, x, y, x_grid=None, y_grid=None, f_x=None, grid=False):
        if not hasattr(self, "_f_x_interp"):
            self._f_x_interp = scipy.interpolate.RectBivariateSpline(
                y_grid, x_grid, f_x, **self._kwargs_spline
            )
        return self._f_x_interp(y, x, grid=grid)

    def f_y_interp(self, x, y, x_grid=None, y_grid=None, f_y=None, grid=False):
        if not hasattr(self, "_f_y_interp"):
            self._f_y_interp = scipy.interpolate.RectBivariateSpline(
                y_grid, x_grid, f_y, **self._kwargs_spline
            )
        return self._f_y_interp(y, x, grid=grid)

    def f_xx_interp(self, x, y, x_grid=None, y_grid=None, f_xx=None, grid=False):
        if not hasattr(self, "_f_xx_interp"):
            self._f_xx_interp = scipy.interpolate.RectBivariateSpline(
                y_grid, x_grid, f_xx, **self._kwargs_spline
            )
        return self._f_xx_interp(y, x, grid=grid)

    def f_xy_interp(self, x, y, x_grid=None, y_grid=None, f_xy=None, grid=False):
        if not hasattr(self, "_f_xy_interp"):
            self._f_xy_interp = scipy.interpolate.RectBivariateSpline(
                y_grid, x_grid, f_xy, **self._kwargs_spline
            )
        return self._f_xy_interp(y, x, grid=grid)

    def f_yy_interp(self, x, y, x_grid=None, y_grid=None, f_yy=None, grid=False):
        if not hasattr(self, "_f_yy_interp"):
            self._f_yy_interp = scipy.interpolate.RectBivariateSpline(
                y_grid, x_grid, f_yy, **self._kwargs_spline
            )
        return self._f_yy_interp(y, x, grid=grid)

    def do_interp(self, x_grid, y_grid, f_, f_x, f_y, f_xx=None, f_yy=None, f_xy=None):
        self._f_interp = scipy.interpolate.RectBivariateSpline(
            x_grid, y_grid, f_, **self._kwargs_spline
        )
        self._f_x_interp = scipy.interpolate.RectBivariateSpline(
            x_grid, y_grid, f_x, **self._kwargs_spline
        )
        self._f_y_interp = scipy.interpolate.RectBivariateSpline(
            x_grid, y_grid, f_y, **self._kwargs_spline
        )
        if f_xx is not None:
            self._f_xx_interp = scipy.interpolate.RectBivariateSpline(
                x_grid, y_grid, f_xx, **self._kwargs_spline
            )
        if f_xy is not None:
            self._f_xy_interp = scipy.interpolate.RectBivariateSpline(
                x_grid, y_grid, f_xy, **self._kwargs_spline
            )
        if f_yy is not None:
            self._f_yy_interp = scipy.interpolate.RectBivariateSpline(
                x_grid, y_grid, f_yy, **self._kwargs_spline
            )
class InterpolScaled(LensProfileBase):
    """Class for handling an interpolated lensing map with the freedom to scale
    its lensing effect.

    Applications are e.g. mass-to-light ratio.
    """

    param_names = [
        "scale_factor",
        "grid_interp_x",
        "grid_interp_y",
        "f_",
        "f_x",
        "f_y",
        "f_xx",
        "f_yy",
        "f_xy",
    ]
    lower_limit_default = {"scale_factor": 0}
    upper_limit_default = {"scale_factor": 100}

    def __init__(self, grid=True, min_grid_number=100, kwargs_spline=None):
        """
        :param grid: bool, if True, computes the calculation on a grid
        :param min_grid_number: minimum number of positions to compute the
            interpolation on a grid
        :param kwargs_spline: keyword arguments for the
            scipy.interpolate.RectBivariateSpline() interpolation (optional);
            if None, a default linear interpolation is chosen
        """
        self.interp_func = Interpol(
            grid, min_grid_number=min_grid_number, kwargs_spline=kwargs_spline
        )
        super(InterpolScaled, self).__init__()

    def function(
        self,
        x,
        y,
        scale_factor=1,
        grid_interp_x=None,
        grid_interp_y=None,
        f_=None,
        f_x=None,
        f_y=None,
        f_xx=None,
        f_yy=None,
        f_xy=None,
    ):
        """
        :param x: x-coordinate (angular position), float or numpy array
        :param y: y-coordinate (angular position), float or numpy array
        :param scale_factor: float, overall scaling of the lens model relative to the input interpolation grid
        :param grid_interp_x: numpy array (ascending) to mark the x-direction of the interpolation grid
        :param grid_interp_y: numpy array (ascending) to mark the y-direction of the interpolation grid
        :param f_: 2d numpy array of lensing potential, matching the grids in grid_interp_x and grid_interp_y
        :param f_x: 2d numpy array of deflection in x-direction, matching the grids in grid_interp_x and grid_interp_y
        :param f_y: 2d numpy array of deflection in y-direction, matching the grids in grid_interp_x and grid_interp_y
        :param f_xx: 2d numpy array of df/dxx, matching the grids in grid_interp_x and grid_interp_y
        :param f_yy: 2d numpy array of df/dyy, matching the grids in grid_interp_x and grid_interp_y
        :param f_xy: 2d numpy array of df/dxy, matching the grids in grid_interp_x and grid_interp_y
        :return: potential at interpolated positions (x, y)
        """
        f_out = self.interp_func.function(
            x, y, grid_interp_x, grid_interp_y, f_, f_x, f_y, f_xx, f_yy, f_xy
        )
        f_out *= scale_factor
        return f_out

    def derivatives(
        self,
        x,
        y,
        scale_factor=1,
        grid_interp_x=None,
        grid_interp_y=None,
        f_=None,
        f_x=None,
        f_y=None,
        f_xx=None,
        f_yy=None,
        f_xy=None,
    ):
        """
        :param x: x-coordinate (angular position), float or numpy array
        :param y: y-coordinate (angular position), float or numpy array
        :param scale_factor: float, overall scaling of the lens model relative to the input interpolation grid
        :param grid_interp_x: numpy array (ascending) to mark the x-direction of the interpolation grid
        :param grid_interp_y: numpy array (ascending) to mark the y-direction of the interpolation grid
        :param f_: 2d numpy array of lensing potential, matching the grids in grid_interp_x and grid_interp_y
        :param f_x: 2d numpy array of deflection in x-direction, matching the grids in grid_interp_x and grid_interp_y
        :param f_y: 2d numpy array of deflection in y-direction, matching the grids in grid_interp_x and grid_interp_y
        :param f_xx: 2d numpy array of df/dxx, matching the grids in grid_interp_x and grid_interp_y
        :param f_yy: 2d numpy array of df/dyy, matching the grids in grid_interp_x and grid_interp_y
        :param f_xy: 2d numpy array of df/dxy, matching the grids in grid_interp_x and grid_interp_y
        :return: deflection angles in x- and y-direction at position (x, y)
        """
        f_x_out, f_y_out = self.interp_func.derivatives(
            x, y, grid_interp_x, grid_interp_y, f_, f_x, f_y, f_xx, f_yy, f_xy
        )
        f_x_out *= scale_factor
        f_y_out *= scale_factor
        return f_x_out, f_y_out

    def hessian(
        self,
        x,
        y,
        scale_factor=1,
        grid_interp_x=None,
        grid_interp_y=None,
        f_=None,
        f_x=None,
        f_y=None,
        f_xx=None,
        f_yy=None,
        f_xy=None,
    ):
        """
        :param x: x-coordinate (angular position), float or numpy array
        :param y: y-coordinate (angular position), float or numpy array
        :param scale_factor: float, overall scaling of the lens model relative to the input interpolation grid
        :param grid_interp_x: numpy array (ascending) to mark the x-direction of the interpolation grid
        :param grid_interp_y: numpy array (ascending) to mark the y-direction of the interpolation grid
        :param f_: 2d numpy array of lensing potential, matching the grids in grid_interp_x and grid_interp_y
        :param f_x: 2d numpy array of deflection in x-direction, matching the grids in grid_interp_x and grid_interp_y
        :param f_y: 2d numpy array of deflection in y-direction, matching the grids in grid_interp_x and grid_interp_y
        :param f_xx: 2d numpy array of df/dxx, matching the grids in grid_interp_x and grid_interp_y
        :param f_yy: 2d numpy array of df/dyy, matching the grids in grid_interp_x and grid_interp_y
        :param f_xy: 2d numpy array of df/dxy, matching the grids in grid_interp_x and grid_interp_y
        :return: second derivatives of the lensing potential f_xx, f_xy, f_yx, f_yy at position (x, y)
        """
        f_xx_out, f_xy_out, f_yx_out, f_yy_out = self.interp_func.hessian(
            x, y, grid_interp_x, grid_interp_y, f_, f_x, f_y, f_xx, f_yy, f_xy
        )
        f_xx_out *= scale_factor
        f_yy_out *= scale_factor
        f_xy_out *= scale_factor
        f_yx_out *= scale_factor
        return f_xx_out, f_xy_out, f_yx_out, f_yy_out
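When no second-derivative maps are supplied, `Interpol.hessian` falls back to central finite differences of the deflection field, evaluated at four diagonally offset points. The scheme can be sketched in isolation with plain Python; the quadratic test potential and the names `deriv`/`hessian_fd` below are illustrative stand-ins, not part of lenstronomy:

```python
# Stand-in deflection field: the gradient of f = 0.5*(a*x^2 + 2*b*x*y + c*y^2),
# whose exact Hessian is [[a, b], [b, c]].  Values are illustrative only.
a, b, c = 1.3, 0.4, 0.8


def deriv(x, y):
    """Return (df/dx, df/dy) of the quadratic test potential."""
    return a * x + b * y, b * x + c * y


def hessian_fd(x, y, diff=1e-6):
    """Central finite differences, mirroring the fallback in Interpol.hessian."""
    # deflections at the four corners of a small square around (x, y)
    ra_pp, dec_pp = deriv(x + diff / 2, y + diff / 2)
    ra_pn, dec_pn = deriv(x + diff / 2, y - diff / 2)
    ra_np, dec_np = deriv(x - diff / 2, y + diff / 2)
    ra_nn, dec_nn = deriv(x - diff / 2, y - diff / 2)
    # difference in x (averaged over both y offsets) gives d(alpha)/dx, etc.
    f_xx = (ra_pp - ra_np + ra_pn - ra_nn) / diff / 2
    f_xy = (ra_pp - ra_pn + ra_np - ra_nn) / diff / 2
    f_yx = (dec_pp - dec_np + dec_pn - dec_nn) / diff / 2
    f_yy = (dec_pp - dec_pn + dec_np - dec_nn) / diff / 2
    return f_xx, f_xy, f_yx, f_yy


f_xx, f_xy, f_yx, f_yy = hessian_fd(0.7, -0.2)
```

For this linear deflection field the finite-difference estimate recovers the exact Hessian entries (a, b, b, c) up to floating-point rounding, which is why the test suite can compare against analytic models.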
|
{
"filename": "tests.py",
"repo_name": "ejhigson/perfectns",
"repo_path": "perfectns_extracted/perfectns-master/tests/tests.py",
"type": "Python"
}
|
#!/usr/bin/env python
"""
Test the perfectns package installation.
"""
import os
import shutil
import unittest
import warnings
import numpy as np
import numpy.testing
import matplotlib
import nestcheck.ns_run_utils
import perfectns.settings
import perfectns.estimators as e
import perfectns.cached_gaussian_prior
import perfectns.likelihoods as likelihoods
import perfectns.nested_sampling as ns
import perfectns.results_tables as rt
import perfectns.maths_functions
import perfectns.priors as priors
import perfectns.plots

ESTIMATOR_LIST = [e.LogZ(),
                  e.Z(),
                  e.ParamMean(),
                  e.ParamSquaredMean(),
                  e.ParamCred(0.5),
                  e.ParamCred(0.84),
                  e.RMean(from_theta=True),
                  e.RCred(0.84, from_theta=True)]
TEST_CACHE_DIR = 'cache_tests'
TEST_DIR_EXISTS_MSG = ('Directory ' + TEST_CACHE_DIR + ' exists! Tests use '
                       'this dir to check caching then delete it afterwards, '
                       'so the path should be left empty.')
class TestNestedSampling(unittest.TestCase):

    def setUp(self):
        """Check TEST_CACHE_DIR does not already exist."""
        assert not os.path.exists(TEST_CACHE_DIR), TEST_DIR_EXISTS_MSG

    def tearDown(self):
        """Remove any caches created by the tests."""
        try:
            shutil.rmtree(TEST_CACHE_DIR)
        except FileNotFoundError:
            pass

    def test_nestcheck_run_format(self):
        """
        Check perfectns runs are compatible with the nestcheck run format
        (excepting their additional 'logx' and 'r' keys).
        """
        settings = get_minimal_settings()
        for dynamic_goal in [None, 0, 0.5, 1]:
            settings.dynamic_goal = dynamic_goal
            run = ns.generate_ns_run(settings)
            del run['logx']
            del run['r']
            del run['settings']
            del run['random_seed']
            try:
                nestcheck.ns_run_utils.check_ns_run(run)
            except AttributeError:
                # check_ns_run moved from nestcheck.data_processing to
                # nestcheck.ns_run_utils in v0.1.8, so this is needed to
                # maintain compatibility with earlier versions
                pass

    def test_get_run_data_caching(self):
        settings = get_minimal_settings()
        settings.dynamic_goal = None
        settings.n_samples_max = 100
        ns.get_run_data(settings, 1, save=True, load=True,
                        check_loaded_settings=True, cache_dir=TEST_CACHE_DIR)
        # test loading and checking settings
        ns.get_run_data(settings, 1, save=True, load=True,
                        check_loaded_settings=True, cache_dir=TEST_CACHE_DIR)
        # test loading and checking settings when settings are not the same;
        # this only works for changing a setting which doesn't affect the
        # save name
        settings.dynamic_goal = 0
        ns.get_run_data(settings, 1, save=True, load=True,
                        check_loaded_settings=True, cache_dir=TEST_CACHE_DIR)
        settings.n_samples_max += 1
        with warnings.catch_warnings(record=True) as war:
            warnings.simplefilter("always")
            ns.get_run_data(
                settings, 1, save=True, load=True, check_loaded_settings=True,
                cache_dir=TEST_CACHE_DIR)
        self.assertEqual(len(war), 1)

    def test_get_run_data_unexpected_kwarg(self):
        settings = get_minimal_settings()
        self.assertRaises(TypeError, ns.get_run_data, settings, 1,
                          unexpected=1)

    def test_no_point_thread(self):
        """
        Check generate_single_thread returns None when keep_final_point is
        False and the thread is empty.
        """
        settings = get_minimal_settings()
        self.assertIsNone(ns.generate_single_thread(
            settings, -(10 ** -150), 0, keep_final_point=False))

    def test_min_max_importance(self):
        """
        Check the importance condition when the final point is one of the
        ones with high importance.
        """
        settings = get_minimal_settings()
        samples = np.random.random((2, 3))
        loglmm, logxmm = ns.min_max_importance(np.full(2, 1), samples,
                                               settings)
        self.assertEqual(loglmm[1], samples[-1, 0])
        self.assertEqual(logxmm[1], samples[-1, 2])

    def test_tuned_p_importance(self):
        theta = np.random.random((5, 1))
        w_rel = np.full(5, 1)
        imp = np.abs(theta - np.mean(theta))[:, 0]
        imp /= imp.max()
        self.assertTrue(np.array_equal(
            ns.p_importance(theta, w_rel, tuned_dynamic_p=True), imp))
class TestEstimators(unittest.TestCase):
    """
    Test estimators: largely checking the get_true_estimator_values output,
    as the functions used for analysing nested sampling runs are mostly
    imported from nestcheck, which has its own tests.
    """

    def setUp(self):
        """Get some settings for the get_true_estimator_values tests."""
        self.settings = get_minimal_settings()

    def test_true_logz_value(self):
        self.assertAlmostEqual(
            e.get_true_estimator_values(e.LogZ(), self.settings),
            -6.4529975832506050, places=10)

    def test_true_z_value(self):
        self.assertAlmostEqual(
            e.get_true_estimator_values(e.Z(), self.settings),
            1.5757915157613399e-03, places=10)

    def test_true_param_mean_value(self):
        self.assertEqual(
            e.get_true_estimator_values(e.ParamMean(), self.settings), 0)

    def test_true_param_mean_squared_value(self):
        self.assertAlmostEqual(
            e.get_true_estimator_values(e.ParamSquaredMean(), self.settings),
            9.9009851517647807e-01, places=10)

    def test_true_param_cred_value(self):
        self.assertEqual(
            e.get_true_estimator_values(e.ParamCred(0.5), self.settings), 0)
        self.assertAlmostEqual(
            e.get_true_estimator_values(e.ParamCred(0.84), self.settings),
            9.8952257789120635e-01, places=10)

    def test_true_r_mean_value(self):
        self.assertAlmostEqual(
            e.get_true_estimator_values(e.RMean(), self.settings),
            1.2470645289408879e+00, places=10)

    def test_true_r_cred_value(self):
        self.assertTrue(np.isnan(
            e.get_true_estimator_values(e.RCred(0.84), self.settings)))
        # Test with a list argument as well to cover the list version of
        # get_true_estimator_values
        self.assertTrue(np.isnan(
            e.get_true_estimator_values([e.RCred(0.84)], self.settings)[0]))

    def test_r_not_from_theta(self):
        run_dict_temp = {'theta': np.full((2, 2), 1),
                         'r': np.full((2,), np.sqrt(2)),
                         'logl': np.full((2,), 0.),
                         'nlive_array': np.full((2,), 5.),
                         'settings': {'dims_to_sample': 2, 'n_dim': 2}}
        self.assertAlmostEqual(e.RMean(from_theta=False)(
            run_dict_temp, logw=None), np.sqrt(2), places=10)
        self.assertAlmostEqual(e.RCred(0.84, from_theta=False)(
            run_dict_temp, logw=None), np.sqrt(2), places=10)

    def test_count_samples(self):
        self.assertEqual(e.CountSamples()({'logl': np.zeros(10)}), 10)
class TestMathsFunctions(unittest.TestCase):

    def test_analytic_logx_terminate(self):
        """Check None is returned when the likelihood is not set up."""
        settings = get_minimal_settings()
        settings.likelihood = likelihoods.ExpPower(2)
        self.assertIsNone(
            perfectns.maths_functions.analytic_logx_terminate(settings))

    def test_nsphere_sampling(self):
        # By default only used in high dim so manually test with dim=100
        perfectns.maths_functions.sample_nsphere_shells(
            np.asarray([1]), 100, n_sample=1)
        # Check handling of n_sample=None
        self.assertEqual(
            perfectns.maths_functions.sample_nsphere_shells_normal(
                np.asarray([1]), 2, n_sample=None).shape, (1, 2))
        self.assertEqual(
            perfectns.maths_functions.sample_nsphere_shells_beta(
                np.asarray([1]), 2, n_sample=None).shape, (1, 2))
class TestSettings(unittest.TestCase):

    def test_settings_unexpected_arg(self):
        self.assertRaises(
            TypeError, perfectns.settings.PerfectNSSettings, unexpected=0)

    def test_settings_save_name(self):
        settings = perfectns.settings.PerfectNSSettings()
        settings.dynamic_goal = 1
        settings.nbatch = 2
        settings.nlive_const = None
        settings.tuned_dynamic_p = True
        settings.n_samples_max = 100
        settings.dynamic_fraction = 0.8
        settings.dims_to_sample = 2
        self.assertIsInstance(settings.save_name(), str)

    def test_settings_unexpected_attr(self):
        settings = perfectns.settings.PerfectNSSettings()
        self.assertRaises(TypeError, settings.__setattr__, 'unexpected', 1)
class TestLikelihoods(unittest.TestCase):

    def test_standard_ns_exp_power_likelihood_gaussian_prior(self):
        """Check the exp_power likelihood, as well as some functions in
        analyse_run."""
        settings = get_minimal_settings()
        settings.likelihood = likelihoods.ExpPower(likelihood_scale=1,
                                                   power=2)
        self.assertAlmostEqual(
            settings.logx_given_logl(settings.logl_given_logx(-1.0)),
            -1.0, places=12)
        settings.logz_analytic()
        ns_run = ns.generate_ns_run(settings)
        values = nestcheck.ns_run_utils.run_estimators(ns_run, ESTIMATOR_LIST)
        self.assertFalse(np.any(np.isnan(values)))

    def test_standard_ns_cauchy_likelihood_gaussian_prior(self):
        """Check the Cauchy likelihood."""
        settings = get_minimal_settings()
        settings.likelihood = likelihoods.Cauchy(likelihood_scale=1)
        self.assertAlmostEqual(
            settings.logx_given_logl(settings.logl_given_logx(-1.0)),
            -1.0, places=12)
        settings.logz_analytic()
        ns_run = ns.generate_ns_run(settings)
        values = nestcheck.ns_run_utils.run_estimators(ns_run, ESTIMATOR_LIST)
        self.assertFalse(np.any(np.isnan(values)))
class TestPriors(unittest.TestCase):

    def setUp(self):
        """Check TEST_CACHE_DIR does not already exist."""
        assert not os.path.exists(TEST_CACHE_DIR), TEST_DIR_EXISTS_MSG

    def tearDown(self):
        """Remove any caches created by the tests."""
        try:
            shutil.rmtree(TEST_CACHE_DIR)
        except FileNotFoundError:
            pass

    def test_standard_ns_gaussian_likelihood_uniform_prior(self):
        """Check the uniform prior."""
        settings = get_minimal_settings()
        settings.prior = priors.Uniform(prior_scale=10)
        self.assertAlmostEqual(
            settings.logx_given_logl(settings.logl_given_logx(-1.0)),
            -1.0, places=12)
        settings.logz_analytic()
        ns_run = ns.generate_ns_run(settings)
        values = nestcheck.ns_run_utils.run_estimators(ns_run, ESTIMATOR_LIST)
        self.assertFalse(np.any(np.isnan(values)))

    def test_cached_gaussian_prior(self):
        """Check the cached_gaussian prior."""
        settings = get_minimal_settings()
        self.assertRaises(
            TypeError, priors.GaussianCached,
            prior_scale=10, unexpected=0)
        with warnings.catch_warnings(record=True) as war:
            warnings.simplefilter("always")
            settings.prior = priors.GaussianCached(
                prior_scale=10, save_dict=True, n_dim=settings.n_dim,
                cache_dir=TEST_CACHE_DIR,
                interp_density=10, logx_min=-30)
        self.assertEqual(len(war), 1)
        # Test inside and outside the cached regime (logx < -10).
        # Need a fairly low number of places
        for logx in [-1, -11]:
            self.assertAlmostEqual(
                settings.logx_given_logl(settings.logl_given_logx(logx)),
                logx, places=3)
        # Test the array version of the function too
        logx = np.asarray([-2])
        self.assertAlmostEqual(
            settings.logx_given_logl(settings.logl_given_logx(logx)[0]),
            logx[0], places=12)
        settings.get_settings_dict()
        # Generate an NS run using get_run_data to check it checks the cache
        # before submitting the process to parallel apply
        ns_run = ns.get_run_data(settings, 1, load=False, save=False)[0]
        values = nestcheck.ns_run_utils.run_estimators(ns_run, ESTIMATOR_LIST)
        self.assertFalse(np.any(np.isnan(values)))
        # check the argument options and messages for interp_r_logx_dict
        with warnings.catch_warnings(record=True) as war:
            warnings.simplefilter("always")
            self.assertRaises(
                TypeError, perfectns.cached_gaussian_prior.interp_r_logx_dict,
                2000, 10, logx_min=-100, interp_density=1, unexpected=0)
        self.assertEqual(len(war), 1)
        self.assertRaises(
            TypeError, perfectns.cached_gaussian_prior.interp_r_logx_dict,
            200, 10, logx_min=-100, interp_density=1, unexpected=0)
class TestPlotting(unittest.TestCase):

    def test_plot_dynamic_nlive(self):
        settings = get_minimal_settings()
        fig = perfectns.plots.plot_dynamic_nlive(
            [None, 0, 1, 1], settings, n_run=2,
            tuned_dynamic_ps=[False, False, False, True],
            save=False, load=False)
        self.assertIsInstance(fig, matplotlib.figure.Figure)
        # Test ymax and the fallback for normalising analytic lines when the
        # dynamic goal which is meant to mirror them is not present
        fig = perfectns.plots.plot_dynamic_nlive(
            [None], settings, n_run=2,
            save=False, load=False,
            tuned_dynamic_ps=[True], ymax=1000)

    def test_plot_parameter_logx_diagram(self):
        settings = get_minimal_settings()
        for ftheta in [e.ParamMean(), e.ParamSquaredMean(), e.RMean()]:
            fig = perfectns.plots.plot_parameter_logx_diagram(
                settings, ftheta, x_points=50, y_points=50)
            self.assertIsInstance(fig, matplotlib.figure.Figure)
        # Test warning for estimators without CDF
        with warnings.catch_warnings(record=True) as war:
            warnings.simplefilter("always")
            perfectns.plots.cdf_given_logx(e.LogZ(), np.zeros(1), np.zeros(1),
                                           settings)
        self.assertEqual(len(war), 1)
        # Test unexpected kwargs check
        self.assertRaises(
            TypeError, perfectns.plots.plot_parameter_logx_diagram,
            settings, e.ParamMean(), x_points=50, y_points=50,
            unexpected=0)

    def test_plot_rel_posterior_mass(self):
        fig = perfectns.plots.plot_rel_posterior_mass(
            [perfectns.likelihoods.Gaussian(1),
             perfectns.likelihoods.ExpPower(1, 2)],
            perfectns.priors.Gaussian(1),
            [2], np.linspace(-10, 0, 100))
        self.assertIsInstance(fig, matplotlib.figure.Figure)
        self.assertRaises(
            TypeError, perfectns.plots.plot_rel_posterior_mass,
            [perfectns.likelihoods.Gaussian(1),
             perfectns.likelihoods.ExpPower(1, 2)],
            perfectns.priors.Gaussian(1),
            [2], np.linspace(-10, 0, 100), unexpected=0)
class TestDynamicResultsTables(unittest.TestCase):

    def setUp(self):
        """Check TEST_CACHE_DIR does not already exist."""
        assert not os.path.exists(TEST_CACHE_DIR), TEST_DIR_EXISTS_MSG

    def tearDown(self):
        """Remove any caches created by the tests."""
        try:
            shutil.rmtree(TEST_CACHE_DIR)
        except FileNotFoundError:
            pass

    def test_dynamic_results_table_values(self):
        """
        Test generating a table comparing dynamic and standard nested
        sampling; this covers a lot of the perfectns package's functionality.

        Testing the expected values relies on the default seeding of runs in
        get_run_data using numpy.random.seed - this should be stable over
        different platforms but may be worth checking if errors occur.
        """
        settings = get_minimal_settings()
        n_run = 5
        dynamic_goals = [0, 0.25, 1, 1]
        tuned_dynamic_ps = [False, False, False, True]
        dynamic_table = rt.get_dynamic_results(
            n_run, dynamic_goals, ESTIMATOR_LIST, settings, load=True,
            save=True, cache_dir=TEST_CACHE_DIR,
            parallel=True, tuned_dynamic_ps=tuned_dynamic_ps)
        # Check the merged dynamic results function
        merged_df = rt.merged_dynamic_results(
            [(settings.n_dim, settings.prior.prior_scale)],
            [settings.likelihood], settings, ESTIMATOR_LIST,
            dynamic_goals=dynamic_goals, n_run=n_run,
            cache_dir=TEST_CACHE_DIR, tuned_dynamic_ps=tuned_dynamic_ps,
            load=True, save=False)
        self.assertTrue(np.array_equal(
            merged_df.values, dynamic_table.values))
        # Check numerical values in dynamic_table
        self.assertFalse(np.any(np.isnan(dynamic_table.values)))
        # Check the values for one column (those for RMean)
        expected_rmean_vals = np.asarray(
            [1.05159345, 0.05910616, 1.09315952, 0.08192338, 1.14996638,
             0.11357112, 1.24153945, 0.09478196, 1.24436994, 0.07220817,
             0.13216539, 0.04672752, 0.18318625, 0.06476612, 0.25395275,
             0.08978585, 0.21193890, 0.07493172, 0.16146238, 0.05708557,
             0.52053477, 0.52053477, 0.27085052, 0.27085052, 0.38887867,
             0.38887867, 0.67002776, 0.67002776])
        numpy.testing.assert_allclose(
            dynamic_table[e.RMean(from_theta=True).latex_name].values,
            expected_rmean_vals, rtol=1e-7,
            err_msg=('this relies on numpy.random.seed being consistent - '
                     'this should be true but is perhaps worth checking for '
                     'your platform.'))

    def test_dynamic_results_table_unexpected_kwargs(self):
        settings = get_minimal_settings()
        # Run some of the code in merged_dynamic_results which is missed with
        # different options
        self.assertRaises(TypeError, rt.merged_dynamic_results, [(1000, 10)],
                          [likelihoods.ExpPower()],
                          settings, ESTIMATOR_LIST, unexpected=1)
class TestBootstrapResultsTables(unittest.TestCase):

    def setUp(self):
        """Check TEST_CACHE_DIR does not already exist."""
        assert not os.path.exists(TEST_CACHE_DIR), TEST_DIR_EXISTS_MSG

    def tearDown(self):
        """Remove any caches created by the tests."""
        try:
            shutil.rmtree(TEST_CACHE_DIR)
        except FileNotFoundError:
            pass

    def test_bootstrap_results_table_values(self):
        """
        Generate a table showing sampling error estimates using the bootstrap
        method.

        As the numerical values produced are stochastic we just test that the
        function runs ok and does not produce NaN values - this should be
        sufficient.
        """
        np.random.seed(0)
        with warnings.catch_warnings():
            warnings.simplefilter('ignore', UserWarning)
            bs_df = rt.get_bootstrap_results(
                5, 10, ESTIMATOR_LIST, get_minimal_settings(), n_run_ci=2,
                n_simulate_ci=10, add_sim_method=True, cred_int=0.95, load=True,
                save=True, cache_dir=TEST_CACHE_DIR, ninit_sep=True,
                parallel=False)
        # Check numerical values in dynamic_table:
        # The first row of the table contains analytic calculations of the
        # estimators' values given the likelihood and prior which have already
        # been tested in test_dynamic_results_table.
        # None of the other values in the table should be NaN:
        self.assertFalse(np.any(np.isnan(bs_df.values[1:, :])))
        # Check the values for one column (those for RMean)
        expected_rmean_vals = np.asarray(
            [1.05159345e+00, 5.91061598e-02, 1.32165391e-01, 4.67275222e-02,
             9.74212313e-01, 3.95418293e-01, 4.45773150e+01, 1.57604609e+01,
             1.26559404e+00, 4.81565225e-01, 3.14517691e+01, 1.11198796e+01,
             1.44503362e+00, 1.20619710e-01, 6.00000000e+01, 1.00000000e+02])
        numpy.testing.assert_allclose(
            bs_df[e.RMean(from_theta=True).latex_name].values,
            expected_rmean_vals, rtol=1e-7,
            err_msg=('this relies on numpy.random.seed being consistent - '
                     'this should be true but is perhaps worth checking for '
                     'your platform.'))

    def test_bootstrap_results_table_unexpected_kwargs(self):
        settings = get_minimal_settings()
        self.assertRaises(TypeError, rt.get_bootstrap_results, 3, 10,
                          ESTIMATOR_LIST, settings, unexpected=1)
# Helper functions
# ----------------
def get_minimal_settings():
"""
Get a perfectns settings object with a minimal number of live points so
that tests run quickly.
"""
settings = perfectns.settings.PerfectNSSettings()
settings.dims_to_sample = 2
settings.n_dim = 2
settings.nlive_const = 5
settings.ninit = 2
settings.dynamic_goal = None
return settings
if __name__ == '__main__':
unittest.main()
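The `expected_rmean_vals` assertion above relies on a seeded pseudo-random stream replaying identically. A minimal stdlib illustration of that principle (numpy's legacy `np.random.seed` global RNG behaves analogously, which is what the test's `err_msg` cautions about):

```python
import random

# Re-seeding the generator replays the exact same sequence of draws,
# which is what makes hard-coded expected values in stochastic tests viable.
random.seed(0)
first = [random.random() for _ in range(5)]
random.seed(0)
second = [random.random() for _ in range(5)]
assert first == second  # identical draws after re-seeding
```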
|
|
{
"filename": "json.py",
"repo_name": "gwpy/gwpy",
"repo_path": "gwpy_extracted/gwpy-main/gwpy/segments/io/json.py",
"type": "Python"
}
|
# -*- coding: utf-8 -*-
# Copyright (C) Duncan Macleod (2017-2020)
#
# This file is part of GWpy.
#
# GWpy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# GWpy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with GWpy. If not, see <http://www.gnu.org/licenses/>.
"""Read/write segments and flags from DQSEGDB-format JSON
"""
import json
from ...io import registry
from ...io.utils import (
identify_factory,
with_open,
)
from .. import DataQualityFlag
__author__ = 'Duncan Macleod <duncan.macleod@ligo.org>'
# -- read ---------------------------------------------------------------------
@with_open
def read_json_flag(fobj):
"""Read a `DataQualityFlag` from a segments-web.ligo.org JSON file
"""
data = json.load(fobj)
# format flag
name = '{ifo}:{name}:{version}'.format(**data)
out = DataQualityFlag(name, active=data['active'],
known=data['known'])
# parse 'metadata'
try:
out.description = data['metadata'].get('flag_description', None)
except KeyError: # no metadata available, but that's ok
pass
else:
out.isgood = not data['metadata'].get(
'active_indicates_ifo_badness', False)
return out
# -- write --------------------------------------------------------------------
@with_open(mode="w", pos=1)
def write_json_flag(flag, fobj, **kwargs):
"""Write a `DataQualityFlag` to a JSON file
Parameters
----------
flag : `DataQualityFlag`
data to write
fobj : `str`, `file`
target file (or filename) to write
**kwargs
other keyword arguments to pass to :func:`json.dump`
See also
--------
json.dump
for details on acceptable keyword arguments
"""
# build json packet
data = {}
data['ifo'] = flag.ifo
data['name'] = flag.tag
data['version'] = flag.version
data['active'] = flag.active
data['known'] = flag.known
data['metadata'] = {}
data['metadata']['active_indicates_ifo_badness'] = not flag.isgood
data['metadata']['flag_description'] = flag.description
# write
json.dump(data, fobj, **kwargs)
# -- identify -----------------------------------------------------------------
identify_json = identify_factory('json') # pylint: disable=invalid-name
# -- register -----------------------------------------------------------------
registry.register_reader('json', DataQualityFlag, read_json_flag)
registry.register_writer('json', DataQualityFlag, write_json_flag)
registry.register_identifier('json', DataQualityFlag, identify_json)
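The packet format that `write_json_flag` emits and `read_json_flag` parses can be exercised with the stdlib `json` module alone. The field values below are illustrative only (a hypothetical `X1:TEST-FLAG:1` flag), but the keys mirror the ones the two functions above use:

```python
import io
import json

# Hypothetical flag packet with the fields write_json_flag serialises.
packet = {
    "ifo": "X1",
    "name": "TEST-FLAG",
    "version": 1,
    "active": [[0, 10]],
    "known": [[0, 100]],
    "metadata": {
        "active_indicates_ifo_badness": False,
        "flag_description": "example flag",
    },
}

# Round-trip through a file-like object, as the @with_open wrappers do.
buf = io.StringIO()
json.dump(packet, buf)
buf.seek(0)
data = json.load(buf)

# read_json_flag reconstructs the canonical name from ifo/name/version ...
name = "{ifo}:{name}:{version}".format(**data)
assert name == "X1:TEST-FLAG:1"
# ... and inverts active_indicates_ifo_badness into the isgood attribute.
isgood = not data["metadata"].get("active_indicates_ifo_badness", False)
assert isgood
```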
|
|
{
"filename": "_weight.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/layout/legend/title/font/_weight.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class WeightValidator(_plotly_utils.basevalidators.IntegerValidator):
def __init__(
self, plotly_name="weight", parent_name="layout.legend.title.font", **kwargs
):
super(WeightValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "legend"),
extras=kwargs.pop("extras", ["normal", "bold"]),
max=kwargs.pop("max", 1000),
min=kwargs.pop("min", 1),
**kwargs,
)
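The validator above accepts either an integer in `[min, max]` or one of the `extras` strings. A toy sketch of that acceptance rule (this is not Plotly's actual `basevalidators` implementation, just the behaviour the constructor arguments describe):

```python
# Toy weight validator: integers 1..1000, or the keywords "normal"/"bold".
def validate_weight(value, extras=("normal", "bold"), lo=1, hi=1000):
    if value in extras:
        return value
    if isinstance(value, int) and lo <= value <= hi:
        return value
    raise ValueError(f"invalid font weight: {value!r}")

assert validate_weight(400) == 400
assert validate_weight("bold") == "bold"
```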
|
|
{
"filename": "fit_dist_to_samples.py",
"repo_name": "asmuzsoy/bayesn-VI-paper",
"repo_path": "bayesn-VI-paper_extracted/bayesn-VI-paper-main/fit_dist_to_samples.py",
"type": "Python"
}
|
import numpy as np
import numpyro
from numpyro.infer import MCMC, NUTS, init_to_median, init_to_sample, init_to_value, Predictive
import numpyro.distributions as dist
from numpyro.optim import Adam, ClippedAdam
from numpyro.infer import SVI, Trace_ELBO, TraceGraph_ELBO
from numpyro.infer.autoguide import AutoDelta, AutoMultivariateNormal, AutoDiagonalNormal
from numpyro.distributions.transforms import LowerCholeskyAffine
from numpyro.primitives import plate
import matplotlib.pyplot as plt
import matplotlib.lines as mlines
import jax
from jax import device_put
import jax.numpy as jnp
from jax.random import PRNGKey, split
import spline_utils
from zltn_utils import *
import pickle
from astropy.cosmology import FlatLambdaCDM
fiducial_cosmology={"H0": 73.24, "Om0": 0.28}
cosmo = FlatLambdaCDM(**fiducial_cosmology)
dataset = 'sim_population_12/12'
dataset_number = 12
true_av = np.loadtxt("sim_population_AV_" + str(dataset_number) + ".txt")[12]
true_theta = np.loadtxt("sim_population_theta_" + str(dataset_number) + ".txt")[12]
true_z = np.loadtxt("sim_population_z_" + str(dataset_number) + ".txt")[12]
true_mu = cosmo.distmod(true_z).value
def fit_model(samples):
mean_vector = numpyro.sample("mean_vector", dist.MultivariateNormal(jnp.array([true_av,true_mu,true_theta]), jnp.eye(3)))
# test_cov = jnp.eye(3)
# print(test_cov.shape)
# code from https://num.pyro.ai/en/latest/distributions.html
d = 3
std_vector = numpyro.sample("std_vector", dist.HalfNormal(jnp.ones(d)))
# concentration = jnp.ones(3) # Implies a uniform distribution over correlation matrices
corr_mat = numpyro.sample("corr_mat", dist.LKJ(d))
sigma = jnp.sqrt(std_vector)
cov_mat = jnp.matmul(jnp.matmul(jnp.diag(sigma), corr_mat), jnp.diag(sigma))
# cov_mat = jnp.outer(std_vector, std_vector) * corr_mat
with numpyro.plate("observations", len(samples)):
# obs = np.zeros((200,3))
# for i in range(200):
# print(i)
# print(mean_vector.shape)
# print(cov_mat.shape)
# print(samples.shape)
# obs[i] = numpyro.sample("obs" + str(i), MultiZLTN(mean_vector, covariance_matrix=cov_mat), obs=samples[i])
obs = numpyro.sample("obs", MultiZLTN(mean_vector, covariance_matrix=cov_mat), obs=samples)
with (open("results/" + dataset + "_mcmc/chains.pkl", "rb")) as openfile:
mcmc_objects = pickle.load(openfile)
for i in range(1):
mcmc_results = []
for var in ['AV', 'mu', 'theta']:
mcmc_samples = mcmc_objects[var][:,:,i].reshape((1000,))
mcmc_results.append(mcmc_samples)
mcmc_results = np.array(mcmc_results).T
print(mcmc_results)
optimizer = Adam(0.001)
guide = AutoDelta(fit_model)
svi = SVI(fit_model, guide, optimizer, loss=Trace_ELBO(10))
svi_result = svi.run(PRNGKey(123), 50000, mcmc_results)
params, losses = svi_result.params, svi_result.losses
# predictive = Predictive(guide, params=params, num_samples=1000)
# samples = predictive(PRNGKey(123), data=None)
# print(samples.keys())
vi_median = guide.median(params)['mean_vector']
vi_std_vector = guide.median(params)['std_vector']
vi_corr_mat = guide.median(params)['corr_mat']
vi_sigma = jnp.sqrt(vi_std_vector)
vi_cov_mat = jnp.matmul(jnp.matmul(jnp.diag(vi_sigma), vi_corr_mat), jnp.diag(vi_sigma))
print(vi_median)
print(vi_cov_mat)
range1 = [(-1,0.2), (33.5, 35), (-0,1.5)]
fig = corner.corner(mcmc_results, color = 'r', range = range1)
corner.corner(np.array(MultiZLTN(vi_median, vi_cov_mat).sample(PRNGKey(123),(1000,))), fig=fig, color = 'k', range = range1)
# corner.corner(np.array(samples), fig=fig, color = 'k', range = range1)
corner.overplot_lines(fig, vi_median, linestyle = 'dashed', color='g')
colors = ['r', 'g']
labels = ['MCMC Samples','VI parameters']
plt.legend(
handles=[
mlines.Line2D([], [], color=colors[i], label=labels[i])
for i in range(len(labels))
],
fontsize=14, frameon=False, bbox_to_anchor=(0.8, 3), loc="upper right"
)
plt.show()
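The covariance construction used twice above, `diag(sigma) @ corr @ diag(sigma)`, is the standard diag-sandwich that rescales a correlation matrix into a covariance matrix. A pure-Python sanity check with toy numbers (not the script's fitted values):

```python
# Element-wise, (diag(s) @ C @ diag(s))[i][j] == s[i] * C[i][j] * s[j].
def diag_sandwich(sigma, corr):
    n = len(sigma)
    return [[sigma[i] * corr[i][j] * sigma[j] for j in range(n)]
            for i in range(n)]

corr = [[1.0, 0.5], [0.5, 1.0]]
cov = diag_sandwich([2.0, 3.0], corr)
assert cov == [[4.0, 3.0], [3.0, 9.0]]
# Symmetric, with the variances sigma_i**2 on the diagonal.
```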
|
|
{
"filename": "_textfont.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/validators/scattergl/_textfont.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class TextfontValidator(_plotly_utils.basevalidators.CompoundValidator):
def __init__(self, plotly_name="textfont", parent_name="scattergl", **kwargs):
super(TextfontValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
data_class_str=kwargs.pop("data_class_str", "Textfont"),
data_docs=kwargs.pop(
"data_docs",
"""
color
colorsrc
Sets the source reference on Chart Studio Cloud
for color .
family
HTML font family - the typeface that will be
applied by the web browser. The web browser
will only be able to apply a font if it is
available on the system which it operates.
Provide multiple font families, separated by
commas, to indicate the preference in which to
apply fonts if they aren't available on the
system. The Chart Studio Cloud (at
https://chart-studio.plotly.com or on-premise)
generates images on a server, where only a
select number of fonts are installed and
supported. These include "Arial", "Balto",
"Courier New", "Droid Sans", "Droid Serif",
"Droid Sans Mono", "Gravitas One", "Old
Standard TT", "Open Sans", "Overpass", "PT Sans
Narrow", "Raleway", "Times New Roman".
familysrc
Sets the source reference on Chart Studio Cloud
for family .
size
sizesrc
Sets the source reference on Chart Studio Cloud
for size .
""",
),
**kwargs
)
|
|
{
"filename": "adaptation.py",
"repo_name": "pyro-ppl/pyro",
"repo_path": "pyro_extracted/pyro-master/pyro/infer/mcmc/adaptation.py",
"type": "Python"
}
|
# Copyright (c) 2017-2019 Uber Technologies, Inc.
# SPDX-License-Identifier: Apache-2.0
import math
from collections import namedtuple
import torch
import pyro
from pyro.ops.arrowhead import (
SymmArrowhead,
sqrt,
triu_gram,
triu_inverse,
triu_matvecmul,
)
from pyro.ops.dual_averaging import DualAveraging
from pyro.ops.welford import WelfordArrowheadCovariance, WelfordCovariance
adapt_window = namedtuple("adapt_window", ["start", "end"])
class WarmupAdapter:
r"""
Adapts tunable parameters, namely step size and mass matrix, during the
warmup phase. This class provides lookup properties to read the latest
values of ``step_size`` and ``inverse_mass_matrix``. These values are
periodically updated when adaptation is engaged.
"""
def __init__(
self,
step_size=1,
adapt_step_size=False,
target_accept_prob=0.8,
adapt_mass_matrix=False,
dense_mass=False,
):
self.adapt_step_size = adapt_step_size
self.adapt_mass_matrix = adapt_mass_matrix
self.target_accept_prob = target_accept_prob
self.dense_mass = dense_mass
self.step_size = 1 if step_size is None else step_size
self._init_step_size = self.step_size
self._adaptation_disabled = not (adapt_step_size or adapt_mass_matrix)
if adapt_step_size:
self._step_size_adapt_scheme = DualAveraging()
self._mass_matrix_adapter = BlockMassMatrix()
# We separate warmup_steps into windows:
# start_buffer + window 1 + window 2 + window 3 + ... + end_buffer
# where the length of each window will be doubled for the next window.
# We won't adapt mass matrix during start and end buffers; and mass
# matrix will be updated at the end of each window. This is helpful
# for dealing with the intense computation of sampling momentum from the
# inverse of mass matrix.
self._adapt_start_buffer = 75 # from Stan
self._adapt_end_buffer = 50 # from Stan
self._adapt_initial_window = 25 # from Stan
# configured later on setup
self._warmup_steps = None
self._find_reasonable_step_size = None
self._adaptation_schedule = []
def _build_adaptation_schedule(self):
adaptation_schedule = []
# from Stan, for small warmup_steps < 20
if self._warmup_steps < 20:
adaptation_schedule.append(adapt_window(0, self._warmup_steps - 1))
return adaptation_schedule
start_buffer_size = self._adapt_start_buffer
end_buffer_size = self._adapt_end_buffer
init_window_size = self._adapt_initial_window
if (
self._adapt_start_buffer
+ self._adapt_end_buffer
+ self._adapt_initial_window
> self._warmup_steps
):
start_buffer_size = int(0.15 * self._warmup_steps)
end_buffer_size = int(0.1 * self._warmup_steps)
init_window_size = self._warmup_steps - start_buffer_size - end_buffer_size
adaptation_schedule.append(adapt_window(start=0, end=start_buffer_size - 1))
end_window_start = self._warmup_steps - end_buffer_size
next_window_size = init_window_size
next_window_start = start_buffer_size
while next_window_start < end_window_start:
cur_window_start, cur_window_size = next_window_start, next_window_size
# Ensure that slow adaptation windows are monotonically increasing
if 3 * cur_window_size <= end_window_start - cur_window_start:
next_window_size = 2 * cur_window_size
else:
cur_window_size = end_window_start - cur_window_start
next_window_start = cur_window_start + cur_window_size
adaptation_schedule.append(
adapt_window(cur_window_start, next_window_start - 1)
)
adaptation_schedule.append(
adapt_window(end_window_start, self._warmup_steps - 1)
)
return adaptation_schedule
def reset_step_size_adaptation(self, z):
r"""
Finds a reasonable step size and resets step size adaptation scheme.
"""
if self._find_reasonable_step_size is not None:
with pyro.validation_enabled(False):
self.step_size = self._find_reasonable_step_size(z)
self._step_size_adapt_scheme.prox_center = math.log(10 * self.step_size)
self._step_size_adapt_scheme.reset()
def _update_step_size(self, accept_prob):
# calculate a statistic for Dual Averaging scheme
H = self.target_accept_prob - accept_prob
self._step_size_adapt_scheme.step(H)
log_step_size, _ = self._step_size_adapt_scheme.get_state()
self.step_size = math.exp(log_step_size)
def _end_adaptation(self):
if self.adapt_step_size:
_, log_step_size_avg = self._step_size_adapt_scheme.get_state()
self.step_size = math.exp(log_step_size_avg)
def configure(
self,
warmup_steps,
initial_step_size=None,
mass_matrix_shape=None,
find_reasonable_step_size_fn=None,
options={},
):
r"""
Model specific properties that are specified when the HMC kernel is setup.
:param warmup_steps: Number of warmup steps that the sampler is initialized with.
:param initial_step_size: Step size to use to initialize the Dual Averaging scheme.
:param mass_matrix_shape: Shape of the mass matrix.
:param find_reasonable_step_size_fn: A callable to find reasonable step size when
mass matrix is changed.
:param dict options: A dict which maps `dtype`, `device` to the corresponding default
tensor options. This is used to construct initial mass matrix in `mass_matrix_adapter`.
"""
self._warmup_steps = warmup_steps
self.step_size = (
initial_step_size if initial_step_size is not None else self._init_step_size
)
if find_reasonable_step_size_fn is not None:
self._find_reasonable_step_size = find_reasonable_step_size_fn
if mass_matrix_shape is None or self.step_size is None:
raise ValueError(
"Incomplete configuration - step size and inverse mass matrix "
"need to be initialized."
)
self.mass_matrix_adapter.configure(
mass_matrix_shape, self.adapt_mass_matrix, options=options
)
if not self._adaptation_disabled:
self._adaptation_schedule = self._build_adaptation_schedule()
self._current_window = 0 # starting window index
if self.adapt_step_size:
self._step_size_adapt_scheme.reset()
def step(self, t, z, accept_prob, z_grad=None):
r"""
Called at each step during the warmup phase to learn tunable
parameters.
:param int t: time step, beginning at 0.
:param dict z: latent variables.
:param float accept_prob: acceptance probability of the proposal.
"""
if t >= self._warmup_steps or self._adaptation_disabled:
return
window = self._adaptation_schedule[self._current_window]
num_windows = len(self._adaptation_schedule)
mass_matrix_adaptation_phase = self.adapt_mass_matrix and (
0 < self._current_window < num_windows - 1
)
if self.adapt_step_size:
self._update_step_size(accept_prob.item())
if mass_matrix_adaptation_phase:
self.mass_matrix_adapter.update(z, z_grad)
if t == window.end:
if self._current_window == num_windows - 1:
self._current_window += 1
self._end_adaptation()
return
if self._current_window == 0:
self._current_window += 1
return
if mass_matrix_adaptation_phase:
self.mass_matrix_adapter.end_adaptation()
if self.adapt_step_size:
self.reset_step_size_adaptation(z)
self._current_window += 1
@property
def adaptation_schedule(self):
return self._adaptation_schedule
@property
def mass_matrix_adapter(self):
return self._mass_matrix_adapter
@mass_matrix_adapter.setter
def mass_matrix_adapter(self, value):
self._mass_matrix_adapter = value
# this works for diagonal matrix `x`
def _matvecmul(x, y):
return x.mul(y) if x.dim() == 1 else x.matmul(y)
def _cholesky(x):
return x.sqrt() if x.dim() == 1 else torch.linalg.cholesky(x)
def _transpose(x):
return x if x.dim() == 1 else x.t()
def _triu_inverse(x):
if x.dim() == 1:
return x.reciprocal()
else:
identity = torch.eye(x.size(-1), dtype=x.dtype, device=x.device)
return torch.linalg.solve_triangular(x, identity, upper=True)
class BlockMassMatrix:
"""
EXPERIMENTAL This class is used to adapt (inverse) mass matrix and provide
useful methods to calculate algebraic terms which involves the mass matrix.
The mass matrix will have block structure, which can be specified by
using the method :meth:`configure` with the corresponding structured
`mass_matrix_shape` arg.
:param float init_scale: initial scale to construct the initial mass matrix.
"""
def __init__(self, init_scale=1.0):
# TODO: we might allow users specify the initial mass matrix in the constructor.
self._init_scale = init_scale
self._adapt_scheme = {}
self._inverse_mass_matrix = {}
# NB: those sqrt matrices are upper triangular
self._mass_matrix_sqrt = {}
self._mass_matrix_sqrt_inverse = {}
self._mass_matrix_size = {}
@property
def mass_matrix_size(self):
"""
A dict that maps site names to the size of the corresponding mass matrix.
"""
return self._mass_matrix_size
@property
def inverse_mass_matrix(self):
return self._inverse_mass_matrix
@inverse_mass_matrix.setter
def inverse_mass_matrix(self, value):
for site_names, inverse_mass_matrix in value.items():
if site_names in self._adapt_scheme:
self._adapt_scheme[site_names].reset()
mass_matrix_sqrt_inverse = _transpose(_cholesky(inverse_mass_matrix))
mass_matrix_sqrt = _triu_inverse(mass_matrix_sqrt_inverse)
self._inverse_mass_matrix[site_names] = inverse_mass_matrix
self._mass_matrix_sqrt[site_names] = mass_matrix_sqrt
self._mass_matrix_sqrt_inverse[site_names] = mass_matrix_sqrt_inverse
def configure(self, mass_matrix_shape, adapt_mass_matrix=True, options={}):
"""
Sets up an initial mass matrix.
:param dict mass_matrix_shape: a dict that maps tuples of site names to the shape of
the corresponding mass matrix. Each tuple of site names corresponds to a block.
:param bool adapt_mass_matrix: a flag to decide whether an adaptation scheme will be used.
:param dict options: tensor options to construct the initial mass matrix.
"""
inverse_mass_matrix = {}
for site_names, shape in mass_matrix_shape.items():
self._mass_matrix_size[site_names] = shape[0]
diagonal = len(shape) == 1
inverse_mass_matrix[site_names] = (
torch.full(shape, self._init_scale, **options)
if diagonal
else torch.eye(*shape, **options) * self._init_scale
)
if adapt_mass_matrix:
adapt_scheme = WelfordCovariance(diagonal=diagonal)
self._adapt_scheme[site_names] = adapt_scheme
self.inverse_mass_matrix = inverse_mass_matrix
def update(self, z, z_grad):
"""
Updates the adaptation scheme using the new sample `z` or its grad `z_grad`.
:param dict z: the current value.
:param dict z_grad: grad of the current value.
"""
for site_names, adapt_scheme in self._adapt_scheme.items():
z_flat = torch.cat([z[name].detach().reshape(-1) for name in site_names])
adapt_scheme.update(z_flat)
def end_adaptation(self):
"""
Updates the current mass matrix using the adaptation scheme.
"""
inverse_mass_matrix = {}
for site_names, adapt_scheme in self._adapt_scheme.items():
inverse_mass_matrix[site_names] = adapt_scheme.get_covariance(
regularize=True
)
self.inverse_mass_matrix = inverse_mass_matrix
def kinetic_grad(self, r):
"""
Computes the gradient of kinetic energy w.r.t. the momentum `r`.
It is equivalent to compute velocity given the momentum `r`.
:param dict r: a dictionary maps site names to a tensor momentum.
:returns: a dictionary maps site names to the corresponding gradient
"""
v = {}
for site_names, inverse_mass_matrix in self._inverse_mass_matrix.items():
r_flat = torch.cat([r[site_name].reshape(-1) for site_name in site_names])
v_flat = _matvecmul(inverse_mass_matrix, r_flat)
# unpacking
pos = 0
for site_name in site_names:
next_pos = pos + r[site_name].numel()
v[site_name] = v_flat[pos:next_pos].reshape(r[site_name].shape)
pos = next_pos
return v
def scale(self, r_unscaled, r_prototype):
"""
Computes `M^{1/2} @ r_unscaled`.
Note that `r` is generated from a gaussian with scale `mass_matrix_sqrt`.
This method will scale it.
:param dict r_unscaled: a dictionary maps site names to a tensor momentum.
:param dict r_prototype: a dictionary maps site names to prototype momentum.
Those prototype values are used to get shapes of the scaled version.
:returns: a dictionary maps site names to the corresponding tensor
"""
s = {}
for site_names, mass_matrix_sqrt in self._mass_matrix_sqrt.items():
r_flat = _matvecmul(mass_matrix_sqrt, r_unscaled[site_names])
# unpacking
pos = 0
for site_name in site_names:
next_pos = pos + r_prototype[site_name].numel()
s[site_name] = r_flat[pos:next_pos].reshape(
r_prototype[site_name].shape
)
pos = next_pos
return s
def unscale(self, r):
"""
Computes `inv(M^{1/2}) @ r`.
Note that `r` is generated from a gaussian with scale `mass_matrix_sqrt`.
This method will unscale it.
:param dict r: a dictionary maps site names to a tensor momentum.
:returns: a dictionary maps site names to the corresponding tensor
"""
u = {}
for (
site_names,
mass_matrix_sqrt_inverse,
) in self._mass_matrix_sqrt_inverse.items():
r_flat = torch.cat([r[site_name].reshape(-1) for site_name in site_names])
u[site_names] = _matvecmul(mass_matrix_sqrt_inverse, r_flat)
return u
class ArrowheadMassMatrix:
"""
EXPERIMENTAL This class is used to adapt (inverse) mass matrix and provide useful
methods to calculate algebraic terms which involves the mass matrix.
The mass matrix will have arrowhead structure, with the head including all
dense sites specified in the argument `full_mass` of the HMC/NUTS kernels.
:param float init_scale: initial scale to construct the initial mass matrix.
"""
def __init__(self, init_scale=1.0):
self._init_scale = init_scale
self._adapt_scheme = {}
self._mass_matrix = {}
# NB: like BlockMassMatrix, those sqrt matrices are upper triangular
self._mass_matrix_sqrt = {}
self._mass_matrix_sqrt_inverse = {}
self._mass_matrix_size = {}
@property
def mass_matrix_size(self):
"""
A dict that maps site names to the size of the corresponding mass matrix.
"""
return self._mass_matrix_size
@property
def inverse_mass_matrix(self):
# NB: this computation is O(N^2 x head_size)
# however, HMC/NUTS kernel does not require us computing inverse_mass_matrix;
# so all linear algebra cost in HMC/NUTS is still O(N x head_size^2);
# we still expose this property for testing and for backward compatibility
inverse_mass_matrix = {}
for site_names, sqrt_inverse in self._mass_matrix_sqrt_inverse.items():
inverse_mass_matrix[site_names] = triu_gram(sqrt_inverse)
return inverse_mass_matrix
@property
def mass_matrix(self):
return self._mass_matrix
@mass_matrix.setter
def mass_matrix(self, value):
for site_names, mass_matrix in value.items():
# XXX: consider to add a try/except here:
# if mass_matrix is not positive definite, we won't reset adapt_scheme
self._adapt_scheme[site_names].reset()
mass_matrix_sqrt = sqrt(mass_matrix)
mass_matrix_sqrt_inverse = triu_inverse(mass_matrix_sqrt)
self._mass_matrix[site_names] = mass_matrix
self._mass_matrix_sqrt[site_names] = mass_matrix_sqrt
self._mass_matrix_sqrt_inverse[site_names] = mass_matrix_sqrt_inverse
def configure(self, mass_matrix_shape, adapt_mass_matrix=True, options={}):
"""
Sets up an initial mass matrix.
:param dict mass_matrix_shape: a dict that maps tuples of site names to the shape of
the corresponding mass matrix. Each tuple of site names corresponds to a block.
:param bool adapt_mass_matrix: a flag to decide whether an adaptation scheme will be used.
:param dict options: tensor options to construct the initial mass matrix.
"""
mass_matrix = {}
dense_sites = ()
dense_size = 0
diag_sites = ()
diag_size = 0
for site_names, shape in mass_matrix_shape.items():
if len(shape) == 2:
dense_sites = dense_sites + site_names
dense_size = dense_size + shape[0]
else:
diag_sites = diag_sites + site_names
diag_size = diag_size + shape[0]
size = dense_size + diag_size
head_size = dense_size
all_sites = dense_sites + diag_sites
self._mass_matrix_size[all_sites] = size
top = torch.eye(head_size, size, **options) * self._init_scale
bottom_diag = torch.full((size - head_size,), self._init_scale, **options)
mass_matrix[all_sites] = SymmArrowhead(top, bottom_diag)
if adapt_mass_matrix:
adapt_scheme = WelfordArrowheadCovariance(head_size=head_size)
self._adapt_scheme[all_sites] = adapt_scheme
self.mass_matrix = mass_matrix
def update(self, z, z_grad):
"""
Updates the adaptation scheme using the new sample `z` or its grad `z_grad`.
:param dict z: the current value.
:param dict z_grad: grad of the current value.
"""
for site_names, adapt_scheme in self._adapt_scheme.items():
z_grad_flat = torch.cat([z_grad[name].reshape(-1) for name in site_names])
adapt_scheme.update(z_grad_flat)
def end_adaptation(self):
"""
Updates the current mass matrix using the adaptation scheme.
"""
mass_matrix = {}
for site_names, adapt_scheme in self._adapt_scheme.items():
top, bottom_diag = adapt_scheme.get_covariance(regularize=True)
mass_matrix[site_names] = SymmArrowhead(top, bottom_diag)
self.mass_matrix = mass_matrix
def kinetic_grad(self, r):
"""
Computes the gradient of kinetic energy w.r.t. the momentum `r`.
It is equivalent to compute velocity given the momentum `r`.
:param dict r: a dictionary maps site names to a tensor momentum.
:returns: a dictionary maps site names to the corresponding gradient
"""
v = {}
for (
site_names,
mass_matrix_sqrt_inverse,
) in self._mass_matrix_sqrt_inverse.items():
r_flat = torch.cat([r[site_name].reshape(-1) for site_name in site_names])
# NB: using inverse_mass_matrix as in BlockMassMatrix will cost
# O(N^2 x head_size) operators and O(N^2) memory requirement;
# here, we will leverage mass_matrix_sqrt_inverse to reduce the cost to
# O(N x head_size^2) operators and O(N x head_size) memory requirement.
r_unscaled = triu_matvecmul(mass_matrix_sqrt_inverse, r_flat)
v_flat = triu_matvecmul(
mass_matrix_sqrt_inverse, r_unscaled, transpose=True
)
# unpacking
pos = 0
for site_name in site_names:
next_pos = pos + r[site_name].numel()
v[site_name] = v_flat[pos:next_pos].reshape(r[site_name].shape)
pos = next_pos
return v
def scale(self, r_unscaled, r_prototype):
"""
Computes `M^{1/2} @ r_unscaled`.
Note that `r` is generated from a gaussian with scale `mass_matrix_sqrt`.
This method will scale it.
:param dict r_unscaled: a dictionary maps site names to a tensor momentum.
:param dict r_prototype: a dictionary maps site names to prototype momentum.
Those prototype values are used to get shapes of the scaled version.
:returns: a dictionary maps site names to the corresponding tensor
"""
s = {}
for site_names, mass_matrix_sqrt in self._mass_matrix_sqrt.items():
r_flat = triu_matvecmul(mass_matrix_sqrt, r_unscaled[site_names])
# unpacking
pos = 0
for site_name in site_names:
next_pos = pos + r_prototype[site_name].numel()
s[site_name] = r_flat[pos:next_pos].reshape(
r_prototype[site_name].shape
)
pos = next_pos
return s
def unscale(self, r):
"""
Computes `inv(M^{1/2}) @ r`.
Note that `r` is generated from a gaussian with scale `mass_matrix_sqrt`.
This method will unscale it.
:param dict r: a dictionary maps site names to a tensor momentum.
:returns: a dictionary maps site names to the corresponding tensor
"""
u = {}
for (
site_names,
mass_matrix_sqrt_inverse,
) in self._mass_matrix_sqrt_inverse.items():
r_flat = torch.cat([r[site_name].reshape(-1) for site_name in site_names])
u[site_names] = triu_matvecmul(mass_matrix_sqrt_inverse, r_flat)
return u
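The windowing logic in `WarmupAdapter._build_adaptation_schedule` can be lifted out as a standalone function, which makes the Stan-style schedule (fixed start buffer, doubling middle windows, fixed end buffer) easy to inspect. This is a self-contained transcription of the method above, not a replacement for it:

```python
from collections import namedtuple

AdaptWindow = namedtuple("AdaptWindow", ["start", "end"])

def build_adaptation_schedule(warmup_steps, start_buffer=75,
                              end_buffer=50, init_window=25):
    """Stan-style warmup schedule mirroring _build_adaptation_schedule."""
    schedule = []
    # Very short warmups get a single window.
    if warmup_steps < 20:
        schedule.append(AdaptWindow(0, warmup_steps - 1))
        return schedule
    # Shrink the buffers proportionally if they don't fit.
    if start_buffer + end_buffer + init_window > warmup_steps:
        start_buffer = int(0.15 * warmup_steps)
        end_buffer = int(0.1 * warmup_steps)
        init_window = warmup_steps - start_buffer - end_buffer
    schedule.append(AdaptWindow(0, start_buffer - 1))
    end_window_start = warmup_steps - end_buffer
    next_size, next_start = init_window, start_buffer
    while next_start < end_window_start:
        cur_start, cur_size = next_start, next_size
        # Double the next window, unless that would leave a runt window;
        # in that case the current window absorbs the remainder.
        if 3 * cur_size <= end_window_start - cur_start:
            next_size = 2 * cur_size
        else:
            cur_size = end_window_start - cur_start
        next_start = cur_start + cur_size
        schedule.append(AdaptWindow(cur_start, next_start - 1))
    schedule.append(AdaptWindow(end_window_start, warmup_steps - 1))
    return schedule

# With 1000 warmup steps: buffer of 75, then windows of 25, 50, 100, ...
schedule = build_adaptation_schedule(1000)
assert schedule[0] == (0, 74)
assert schedule[1] == (75, 99) and schedule[2] == (100, 149)
assert schedule[-1] == (950, 999)
```

Mass-matrix updates happen only at the ends of the middle windows, so the doubling keeps later (larger) windows collecting more samples per covariance estimate.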
|
|
{
"filename": "clarifai.py",
"repo_name": "langchain-ai/langchain",
"repo_path": "langchain_extracted/langchain-master/libs/community/langchain_community/vectorstores/clarifai.py",
"type": "Python"
}
|
from __future__ import annotations
import logging
import os
import traceback
import uuid
from concurrent.futures import ThreadPoolExecutor
from typing import Any, Iterable, List, Optional, Tuple
import requests
from langchain_core.documents import Document
from langchain_core.embeddings import Embeddings
from langchain_core.vectorstores import VectorStore
logger = logging.getLogger(__name__)
class Clarifai(VectorStore):
"""`Clarifai AI` vector store.
To use, you should have the ``clarifai`` python SDK package installed.
Example:
.. code-block:: python
from langchain_community.vectorstores import Clarifai
clarifai_vector_db = Clarifai(
user_id=USER_ID,
app_id=APP_ID,
number_of_docs=NUMBER_OF_DOCS,
)
"""
def __init__(
self,
user_id: Optional[str] = None,
app_id: Optional[str] = None,
number_of_docs: Optional[int] = 4,
pat: Optional[str] = None,
token: Optional[str] = None,
api_base: Optional[str] = "https://api.clarifai.com",
) -> None:
"""Initialize with Clarifai client.
Args:
user_id (Optional[str], optional): User ID. Defaults to None.
app_id (Optional[str], optional): App ID. Defaults to None.
pat (Optional[str], optional): Personal access token. Defaults to None.
token (Optional[str], optional): Session token. Defaults to None.
number_of_docs (Optional[int], optional): Number of documents to return
during vector search. Defaults to 4.
api_base (Optional[str], optional): API base URL. Defaults to
"https://api.clarifai.com".
Raises:
ValueError: If user ID, app ID or personal access token is not provided.
"""
_user_id = user_id or os.environ.get("CLARIFAI_USER_ID")
_app_id = app_id or os.environ.get("CLARIFAI_APP_ID")
if _user_id is None or _app_id is None:
raise ValueError(
"Could not find CLARIFAI_USER_ID "
"or CLARIFAI_APP_ID in your environment. "
"Please set those env variables with a valid user ID, app ID"
)
self._number_of_docs = number_of_docs
try:
from clarifai.client.search import Search
except ImportError as e:
raise ImportError(
"Could not import clarifai python package. "
"Please install it with `pip install clarifai`."
) from e
self._auth = Search(
user_id=_user_id,
app_id=_app_id,
top_k=number_of_docs,
pat=pat,
token=token,
base_url=api_base,
).auth_helper
def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Add texts to the Clarifai vectorstore. This will push the text
to a Clarifai application.
The application uses a base workflow that creates and stores an embedding
for each text. Make sure you are using a base workflow that is compatible
with text (such as Language Understanding).
Args:
texts (Iterable[str]): Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional): Optional list of metadatas.
            ids (Optional[List[str]], optional): Optional list of input IDs;
                generated if not provided.
        Returns:
            List[str]: IDs of the uploaded input jobs.
        """
try:
from clarifai.client.input import Inputs
from google.protobuf.struct_pb2 import Struct
except ImportError as e:
raise ImportError(
"Could not import clarifai python package. "
"Please install it with `pip install clarifai`."
) from e
ltexts = list(texts)
length = len(ltexts)
assert length > 0, "No texts provided to add to the vectorstore."
if metadatas is not None:
assert length == len(
metadatas
), "Number of texts and metadatas should be the same."
if ids is not None:
assert len(ltexts) == len(
ids
), "Number of text inputs and input ids should be the same."
input_obj = Inputs.from_auth_helper(auth=self._auth)
batch_size = 32
input_job_ids = []
for idx in range(0, length, batch_size):
try:
batch_texts = ltexts[idx : idx + batch_size]
batch_metadatas = (
metadatas[idx : idx + batch_size] if metadatas else None
)
if ids is None:
batch_ids = [uuid.uuid4().hex for _ in range(len(batch_texts))]
else:
batch_ids = ids[idx : idx + batch_size]
if batch_metadatas is not None:
meta_list = []
for meta in batch_metadatas:
meta_struct = Struct()
meta_struct.update(meta)
meta_list.append(meta_struct)
input_batch = [
input_obj.get_text_input(
input_id=batch_ids[i],
raw_text=text,
metadata=meta_list[i] if batch_metadatas else None,
)
for i, text in enumerate(batch_texts)
]
result_id = input_obj.upload_inputs(inputs=input_batch)
input_job_ids.extend(result_id)
logger.debug("Input posted successfully.")
except Exception as error:
logger.warning(f"Post inputs failed: {error}")
traceback.print_exc()
return input_job_ids
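
    # Hypothetical usage sketch (illustrative identifiers only, not real
    # Clarifai resources; assumes a valid personal access token and a text
    # app whose base workflow supports text):
    #
    #     store = Clarifai(user_id="my-user", app_id="my-text-app", pat="...")
    #     ids = store.add_texts(
    #         ["hello world", "goodbye world"],
    #         metadatas=[{"source": "a"}, {"source": "b"}],
    #     )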
def similarity_search_with_score(
self,
query: str,
k: Optional[int] = None,
filters: Optional[dict] = None,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Run similarity search with score using Clarifai.
Args:
query (str): Query text to search for.
            k (Optional[int]): Number of results to return. If not set,
                it'll take _number_of_docs. Defaults to None.
            filters (Optional[dict]): Filter by metadata. Defaults to None.
        Returns:
            List[Tuple[Document, float]]: Documents most similar to the query
                text, each paired with its similarity score.
"""
try:
from clarifai.client.search import Search
from clarifai_grpc.grpc.api import resources_pb2
from google.protobuf import json_format # type: ignore
except ImportError as e:
raise ImportError(
"Could not import clarifai python package. "
"Please install it with `pip install clarifai`."
) from e
# Get number of docs to return
top_k = k or self._number_of_docs
search_obj = Search.from_auth_helper(auth=self._auth, top_k=top_k)
rank = [{"text_raw": query}]
# Add filter by metadata if provided.
if filters is not None:
search_metadata = {"metadata": filters}
search_response = search_obj.query(ranks=rank, filters=[search_metadata])
else:
search_response = search_obj.query(ranks=rank)
# Retrieve hits
hits = [hit for data in search_response for hit in data.hits]
        def hit_to_document(hit: resources_pb2.Hit) -> Tuple[Document, float]:
            metadata = json_format.MessageToDict(hit.input.data.metadata)
            h = dict(self._auth.metadata)
            request = requests.get(hit.input.data.text.url, headers=h)
            # Override encoding with chardet's educated guess of the real encoding.
            request.encoding = request.apparent_encoding
            requested_text = request.text
            logger.debug(
                f"\tScore {hit.score:.2f} for annotation: {hit.annotation.id} "
                f"off input: {hit.input.id}, text: {requested_text[:125]}"
            )
            return (Document(page_content=requested_text, metadata=metadata), hit.score)

        # Retrieve metadata and text for each hit concurrently; the context
        # manager ensures the worker threads are shut down afterwards.
        with ThreadPoolExecutor(max_workers=10) as executor:
            futures = [executor.submit(hit_to_document, hit) for hit in hits]
            docs_and_scores = [future.result() for future in futures]
return docs_and_scores

def similarity_search(
self,
query: str,
k: Optional[int] = None,
**kwargs: Any,
) -> List[Document]:
"""Run similarity search using Clarifai.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return.
If not set, it'll take _number_of_docs. Defaults to None.
        Returns:
            List of Documents most similar to the query text.
"""
docs_and_scores = self.similarity_search_with_score(query, k=k, **kwargs)
return [doc for doc, _ in docs_and_scores]
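
    # Hypothetical usage sketch (assumes a Clarifai app already populated with
    # text inputs; identifiers are placeholders, not real resources):
    #
    #     store = Clarifai(user_id="my-user", app_id="my-text-app", pat="...")
    #     docs = store.similarity_search("hello", k=2)
    #     docs_with_scores = store.similarity_search_with_score("hello")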
@classmethod
def from_texts(
cls,
texts: List[str],
embedding: Optional[Embeddings] = None,
metadatas: Optional[List[dict]] = None,
user_id: Optional[str] = None,
app_id: Optional[str] = None,
number_of_docs: Optional[int] = None,
pat: Optional[str] = None,
token: Optional[str] = None,
**kwargs: Any,
) -> Clarifai:
"""Create a Clarifai vectorstore from a list of texts.
Args:
            user_id (Optional[str]): User ID. Defaults to None.
            app_id (Optional[str]): App ID. Defaults to None.
texts (List[str]): List of texts to add.
number_of_docs (Optional[int]): Number of documents
to return during vector search. Defaults to None.
pat (Optional[str], optional): Personal access token.
Defaults to None.
token (Optional[str], optional): Session token. Defaults to None.
metadatas (Optional[List[dict]]): Optional list
of metadatas. Defaults to None.
kwargs: Additional keyword arguments to be passed to the Search.
Returns:
Clarifai: Clarifai vectorstore.
"""
clarifai_vector_db = cls(
user_id=user_id,
app_id=app_id,
number_of_docs=number_of_docs,
pat=pat,
token=token,
**kwargs,
)
clarifai_vector_db.add_texts(texts=texts, metadatas=metadatas)
return clarifai_vector_db

@classmethod
def from_documents(
cls,
documents: List[Document],
embedding: Optional[Embeddings] = None,
user_id: Optional[str] = None,
app_id: Optional[str] = None,
number_of_docs: Optional[int] = None,
pat: Optional[str] = None,
token: Optional[str] = None,
**kwargs: Any,
) -> Clarifai:
"""Create a Clarifai vectorstore from a list of documents.
Args:
            user_id (Optional[str]): User ID. Defaults to None.
            app_id (Optional[str]): App ID. Defaults to None.
documents (List[Document]): List of documents to add.
number_of_docs (Optional[int]): Number of documents
to return during vector search. Defaults to None.
pat (Optional[str], optional): Personal access token. Defaults to None.
token (Optional[str], optional): Session token. Defaults to None.
kwargs: Additional keyword arguments to be passed to the Search.
Returns:
Clarifai: Clarifai vectorstore.
"""
texts = [doc.page_content for doc in documents]
metadatas = [doc.metadata for doc in documents]
return cls.from_texts(
user_id=user_id,
app_id=app_id,
texts=texts,
number_of_docs=number_of_docs,
pat=pat,
metadatas=metadatas,
token=token,
**kwargs,
)
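
# Hypothetical standalone sketch (not part of the Clarifai API): it reproduces
# the batch-slicing scheme used in `Clarifai.add_texts` above, so the slicing
# logic can be checked without network access or credentials.
def _sketch_batch_slices(n_items: int, batch_size: int = 32) -> List[Tuple[int, int]]:
    """Return the (start, end) bounds that add_texts would upload per batch."""
    return [
        (idx, min(idx + batch_size, n_items))
        for idx in range(0, n_items, batch_size)
    ]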