{
"filename": "bench.md",
"repo_name": "teuben/DataComb",
"repo_path": "DataComb_extracted/DataComb-main/md/bench.md",
"type": "Markdown"
}
# DC2019 benchmark
There is a bench.md in QAC, but here we want a much simpler one. The goal here
is to have a benchmark that also tests whether the results of the
computation are the same as, or close enough to, what we consider the correct answer.

In QAC we use qac_stats(), which prints some simple mean/rms/min/max/flux/sratio
statistics of a given map. The precision can be chosen. E.g.

```
                                      mean     rms      min       max      flux        sratio
QAC_STATS: export/sky_tweak_box1.fits 0.855257 1.335149 -0.431050 7.741113 6489.699216 0.961874
```
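A regression check against such a QAC_STATS line could be scripted as follows. This is a minimal sketch: `check_stats` and the relative-tolerance scheme are assumptions, not QAC's actual code; only the sample line and its values come from the example above.

```python
# Sketch of a regression check on a QAC_STATS line.
# check_stats() and its tolerance scheme are assumptions; only the
# sample line and reference values come from the example above.

def check_stats(line, reference, rtol=1e-3):
    """Compare the numeric fields of a QAC_STATS line (mean, rms,
    min, max, flux, sratio) to reference values within rtol."""
    fields = line.split()
    # fields[0] is the "QAC_STATS:" tag, fields[1] the map name
    values = [float(v) for v in fields[2:]]
    ok = len(values) == len(reference) and all(
        abs(v - r) <= rtol * max(abs(r), 1e-12)
        for v, r in zip(values, reference))
    return ok, values

line = ("QAC_STATS: export/sky_tweak_box1.fits "
        "0.855257 1.335149 -0.431050 7.741113 6489.699216 0.961874")
reference = [0.855257, 1.335149, -0.431050, 7.741113, 6489.699216, 0.961874]
ok, values = check_stats(line, reference)
```

With a scheme like this, "close enough" becomes an explicit tolerance rather than a visual comparison of printed digits.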
In QAC we have

```
OMP_NUM_THREADS=1 make sky4z sky4z-bench2
```

to test a full script, as well as a single longish combination tclean. Note that the
number of cores used for the bench can be important: we could run both a single-core
and a full-core experiment, keeping in mind that using the maximum number of cores
often means hitting all hyper-threads per core, which can be past the sweet spot, as
we have seen in many experiments.
## Parallel CASA
There are two ways to exploit some parallelism in CASA: MPI and OpenMP. Let's look at the sky4z-bench2
benchmark, which runs tclean with niter=100,000. The timings below were taken on an i7-4930K CPU with 6 cores
and 2 hyper-threads per core (the hyper-threads should not be used):
```
# single core
OMP_NUM_THREADS=1 make sky4z-bench2
385.93user 20.92system 7:38.47elapsed 88%CPU
QAC_STATS: 5.8807340538948054 7.705096874227821 -20.720354080200195 27.191093444824219 69635.755146266558 0.930926

# all cores
OMP_NUM_THREADS=6 make sky4z-bench2

# all cores and hyper-threads (12 virtual cores)
make sky4z-bench2
943.39user 32.12system 9:06.77elapsed 178%CPU

# one core for MPI, and one for computing
make sky4z-bench2mpi ONT=2
395.34user 22.64system 9:51.74elapsed 70%CPU

# one core for MPI, and 4 for computing
make sky4z-bench2mpi ONT=5
400.32user 24.26system 9:48.00elapsed 72%CPU
```
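The wall-clock numbers above can be reduced to a speedup figure. A small sketch that parses the GNU time summary lines quoted above; `parse_time` is a hypothetical helper, and the M:SS.ss elapsed format is assumed from the output shown:

```python
import re

def parse_time(line):
    """Parse a GNU time summary such as
    '385.93user 20.92system 7:38.47elapsed 88%CPU'
    into (user, system, elapsed) seconds plus the integer %CPU."""
    m = re.match(r"([\d.]+)user\s+([\d.]+)system\s+"
                 r"(\d+):([\d.]+)elapsed\s+(\d+)%CPU", line)
    user, system = float(m.group(1)), float(m.group(2))
    # elapsed is given as minutes:seconds
    elapsed = 60 * int(m.group(3)) + float(m.group(4))
    return user, system, elapsed, int(m.group(5))

# single core vs. all cores plus hyper-threads (12 virtual cores)
_, _, e1, _ = parse_time("385.93user 20.92system 7:38.47elapsed 88%CPU")
_, _, e12, _ = parse_time("943.39user 32.12system 9:06.77elapsed 178%CPU")
speedup = e1 / e12   # below 1: the hyper-threaded run is slower in wall clock
```

With these numbers the 12-virtual-core run takes about 547 s of wall clock against 458 s single core, so the "speedup" is below 1, consistent with the sweet-spot remark above.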
{
"filename": "test_astro_data2.py",
"repo_name": "sherpa/sherpa",
"repo_path": "sherpa_extracted/sherpa-main/sherpa/astro/tests/test_astro_data2.py",
"type": "Python"
}
#
# Copyright (C) 2020 - 2024
# Smithsonian Astrophysical Observatory
#
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
"""Continued testing of sherpa.astro.data."""
import logging
import pickle
import re
import warnings
import numpy as np
import pytest
from sherpa.astro.data import DataARF, DataIMG, DataIMGInt, DataPHA, DataRMF
from sherpa.astro.instrument import create_delta_rmf, matrix_to_rmf
from sherpa.astro import io
from sherpa.astro.io.wcs import WCS
from sherpa.data import Data2D, Data2DInt
from sherpa.models import Const1D, Delta2D, Polynom1D, Polynom2D
from sherpa.stats._statfcts import calc_chi2datavar_errors
from sherpa.utils import dataspace2d
from sherpa.utils.err import DataErr
from sherpa.utils.logging import SherpaVerbosity
from sherpa.utils.testing import requires_data, requires_fits, \
requires_group, requires_region, requires_wcs
def test_can_not_group_ungrouped():
"""Does setting the grouping setting fail with no data?"""
pha = DataPHA('name', [1, 2, 3], [1, 1, 1])
assert not pha.grouped
with pytest.raises(DataErr,
match="data set 'name' does not specify grouping flags"):
pha.grouped = True
def test_pha_get_indep_when_all_filtered():
"""Regression test."""
pha = DataPHA("pha", [1, 2, 3], [9, 7, 8])
pha.ignore()
# This is with 'pha.mask = False'
#
indep = pha.get_indep()
assert len(indep) == 1
assert indep[0] == pytest.approx([])
def test_pha_get_indep_when_all_filtered_mask():
"""Regression test."""
pha = DataPHA("pha", [1, 2, 3], [9, 7, 8])
pha.mask = [False] * 3
indep = pha.get_indep()
assert len(indep) == 1
assert indep[0] == pytest.approx([])
def test_pha_get_indep_when_partial_filtered():
"""Regression test."""
pha = DataPHA("pha", [1, 2, 3], [9, 7, 8])
pha.mask = [True, False, True]
indep = pha.get_indep()
assert len(indep) == 1
assert indep[0] == pytest.approx([1, 3])
def test_get_mask_is_none():
"""This is a regression test.
The test name is no longer valid since the return value is now, as
of 4.17.0, [True, True, True] and not None.
"""
pha = DataPHA('name', [1, 2, 3], [1, 1, 1])
assert pha.mask is True
assert pha.get_mask() == pytest.approx(np.asarray([True] * 3))
def test_get_mask_is_none_when_all_filtered():
"""This is a regression test."""
pha = DataPHA('name', [1, 2, 3], [1, 1, 1])
pha.ignore()
assert pha.mask is False
assert pha.get_mask() is None
def test_get_noticed_channels_no_filter():
"""Regression test."""
pha = DataPHA('name', [1, 2, 3], [1, 1, 1])
assert pha.get_filter() == "1:3"
assert pha.get_noticed_channels() == pytest.approx([1, 2, 3])
assert pha.mask is True
def test_get_noticed_channels_no_filter_manual():
"""Regression test."""
pha = DataPHA('name', [1, 2, 3], [1, 1, 1])
pha.mask = [True, True, True]
assert pha.mask == pytest.approx([1, 1, 1])
assert pha.get_filter() == "1:3"
assert pha.get_noticed_channels() == pytest.approx([1, 2, 3])
def test_get_noticed_channels_partial_filter():
"""Regression test."""
pha = DataPHA('name', [1, 2, 3], [1, 1, 1])
pha.ignore(2, 2)
assert pha.mask == pytest.approx([1, 0, 1])
assert pha.get_filter() == "1,3"
assert pha.get_noticed_channels() == pytest.approx([1, 3])
def test_get_noticed_channels_removed_filter():
"""Regression test."""
pha = DataPHA('name', [1, 2, 3], [1, 1, 1])
pha.ignore()
assert pha.mask is False
assert pha.get_filter() == ""
assert pha.get_noticed_channels() == pytest.approx([])
def test_get_noticed_channels_removed_filter2():
"""Regression test."""
pha = DataPHA('name', [1, 2, 3], [1, 1, 1])
pha.mask = [False, False, False]
assert pha.mask == pytest.approx([0, 0, 0])
assert pha.get_filter() == ""
assert pha.get_noticed_channels() == pytest.approx([])
def test_get_noticed_expr_no_filter():
"""Regression test."""
pha = DataPHA('name', [1, 2, 3], [1, 1, 1])
assert pha.get_noticed_expr() == "1-3"
def test_get_noticed_expr_no_filter_manual():
"""Regression test."""
pha = DataPHA('name', [1, 2, 3], [1, 1, 1])
pha.mask = [True, True, True]
assert pha.get_noticed_expr() == "1-3"
def test_get_noticed_expr_partial_filter():
"""Regression test."""
pha = DataPHA('name', [1, 2, 3], [1, 1, 1])
pha.ignore(2, 2)
assert pha.get_noticed_expr() == "1,3"
def test_get_noticed_expr_removed_filter():
"""Regression test."""
pha = DataPHA('name', [1, 2, 3], [1, 1, 1])
pha.ignore()
assert pha.get_noticed_expr() == "No noticed channels"
def test_get_noticed_expr_removed_filter2():
"""Regression test."""
pha = DataPHA('name', [1, 2, 3], [1, 1, 1])
pha.mask = [False, False, False]
assert pha.get_noticed_expr() == "No noticed channels"
# Historically we have needed to use ndarray and not lists, so check.
@pytest.mark.parametrize("chans",
[np.asarray([1, 2, 3]),
[1, 2, 3]
])
def test_get_filter_expr_channel(chans):
"""Check get_filter_expr is called"""
pha = DataPHA('name', chans, [1, 1, 1])
assert pha.get_filter_expr() == '1-3 Channel'
pha.ignore(None, 1)
assert pha.get_filter_expr() == '2-3 Channel'
# Historically we have needed to use ndarray and not lists, so check.
@pytest.mark.parametrize("chans",
[np.asarray([1, 2, 3]),
[1, 2, 3]
])
def test_get_filter_is_empty(chans):
pha = DataPHA('name', chans, [1, 1, 1])
assert pha.get_filter() == '1:3'
assert pha.mask is True
pha.ignore()
assert pha.get_filter() == ''
assert pha.mask is False
pha.mask = [False, False, False]
assert pha.get_filter() == ''
def test_pha_get_noticed_channels_when_empty():
"""A regression test."""
empty = DataPHA("empty", None, None)
with pytest.raises(DataErr,
match="^The size of 'empty' has not been set$"):
empty.get_noticed_channels()
def test_pha_filter_when_empty(caplog):
"""A regression test."""
empty = DataPHA("empty", None, None)
assert len(caplog.record_tuples) == 0
with SherpaVerbosity("INFO"):
empty.ignore(hi=3)
assert len(caplog.records) == 1
emsg = "Skipping dataset empty: The size of 'empty' has not been set"
r = caplog.record_tuples[0]
assert r[0] == "sherpa.astro.data"
assert r[1] == logging.INFO
assert r[2] == emsg
def test_pha_get_mask_when_empty(caplog):
"""A regression test."""
empty = DataPHA("empty", None, None)
with pytest.raises(DataErr,
match="^The size of 'empty' has not been set$"):
empty.get_mask()
def test_pha_channel_empty_remains_empty():
"""A regression test."""
empty = DataPHA("empty", None, None)
assert empty.channel is None
empty.channel = None
assert empty.channel is None
@pytest.mark.parametrize("chans", [None, [1, 2, 3]])
def test_pha_group_when_empty(chans):
"""A regression test."""
empty = DataPHA("empty", chans, None)
with pytest.raises(DataErr,
match="^data set 'empty' can not be grouped as channel or counts is not set"):
empty.group_counts(5)
@pytest.mark.parametrize("chtype,expected,args",
[("channel", '1:10', []),
("channel", '', [(False, 1, 10)]),
("channel", '2:9', [(True, 2, 9)]),
("channel", '2:3,7:9', [(True, 2, 9), (False, 4, 6)]),
("channel", '1:4,7:9', [(True, 2, 9), (False, 4, 6), (True, 0, 4)]),
("channel", '2:3,5:10', [(True, 2, 9), (False, 4, 6), (True, 5, 13)]),
("channel", '', [(True, 2, 9), (False, 4, 6), (True, 5, 13), (False, 0, 13)]),
("channel", '1:10', [(True, 2, 9), (False, 4, 6), (True, 0, 13)]),
# None checks
("channel", '1:3', [(True, None, 3)]),
("channel", '4:10', [(False, None, 3)]),
("channel", '5:10', [(True, 5, None)]),
("channel", '1:4', [(False, 5, None)]),
("channel", '1:3,5:10', [(True, 5, None), (True, None, 3)]),
("channel", '4', [(False, 5, None), (False, None, 3)]),
# a few checks of non-integer channel limits (we don't explicitly
# say what this means so just check we know what it does)
# These are no longer valid
# ("channel", '3:7', [(True, 2.8, 7.9)]),
# ("channel", '3:7', [(True, 2.1, 7.2)]),
# ("channel", '1:2,8:10', [(False, 2.8, 7.9)]),
# energy
("energy", '0.2:2.2', []),
("energy", '', [(False, 0.3, 2.1)]),
("energy", '', [(False, 0, 3)]),
("energy", '0.4:2.0', [(True, 0.51, 1.98)]),
("energy", '0.4:1.2,1.6:2.0', [(True, 0.51, 1.98), (False, 1.24, 1.51)]),
("energy", '0.2:1.4,1.6:2.0', [(True, 0.51, 1.98), (False, 1.24, 1.51), (True, 0.001, 1.32)]),
("energy", '0.4:1.2,1.4:2.2', [(True, 0.51, 1.98), (False, 1.24, 1.51), (True, 1.46, 12.2)]),
("energy", '', [(True, 0.51, 1.98), (False, 1.24, 1.51), (True, 1.46, 12.2), (False, 0.01, 13)]),
("energy", '0.2:2.2', [(True, 0.51, 1.98), (False, 1.24, 1.51), (True, 0.01, 13)]),
# None checks
("energy", '0.2:0.8', [(True, None, 0.65)]),
("energy", '0.8:2.2', [(False, None, 0.65)]),
("energy", '0.8:2.2', [(True, 0.95, None)]),
("energy", '0.2:0.8', [(False, 0.95, None)]),
("energy", '0.2:0.8,1.0:2.2', [(True, 1.05, None), (True, None, 0.65)]),
("energy", '0.2:2.2', [(True, 0.95, None), (True, None, 0.65)]),
("energy", '0.8:1.0', [(False, 1.05, None), (False, None, 0.65)]),
("energy", '', [(False, 0.95, None), (False, None, 0.65)]),
# wavelength
("wave", '5.6:62.0', []),
("wave", '', [(False, 1, 70)]),
("wave", '6.2:31.0', [(True, 6.5, 25)]),
("wave", '6.2:8.9,12.4:31.0', [(True, 6.5, 25), (False, 9.1, 12)]),
("wave", '5.6:10.3,12.4:31.0', [(True, 6.5, 25), (False, 9.1, 12), (True, 1, 10)]),
("wave", '6.2:8.9,10.3:62.0', [(True, 6.5, 25), (False, 9.1, 12), (True, 12, 70)]),
("wave", '5.6:62.0', [(True, 6.5, 25), (False, 9.1, 12), (True, 1, 70)]),
# None checks
("wave", '5.6:10.3', [(True, None, 9.1)]),
("wave", '10.3:62.0', [(False, None, 9.1)]),
("wave", '10.3:62.0', [(True, 12.0, None)]),
("wave", '5.6:10.3', [(False, 12.0, None)]),
("wave", '5.6:10.3,12.4:62.0', [(True, 12.5, None), (True, None, 9.1)]),
("wave", '5.6:62.0', [(True, 12.0, None), (True, None, 9.1)]),
("wave", '10.3:12.4', [(False, 12.5, None), (False, None, 9.1)]),
("wave", '', [(False, 12.0, None), (False, None, 9.1)]),
])
def test_pha_get_filter_checks_ungrouped(chtype, expected, args):
"""Check we get the filter we expect
chtype is channel, energy, or wavelength
expected is the expected response
args is a list of 3-tuples of (flag, loval, hival) where
flag is True for notice and False for ignore; they define
the filter to apply
"""
chans = np.arange(1, 11, dtype=int)
counts = np.ones(10, dtype=int)
pha = DataPHA('data', chans, counts)
# Use an ARF to create a channel to energy mapping
# The 0.2-2.2 keV range maps to 5.636-61.992 Angstrom
#
egrid = 0.2 * np.arange(1, 12)
arf = DataARF('arf', egrid[:-1], egrid[1:], np.ones(10))
pha.set_arf(arf)
pha.units = chtype
for (flag, lo, hi) in args:
if flag:
pha.notice(lo, hi)
else:
pha.ignore(lo, hi)
assert pha.get_filter(format='%.1f') == expected
def test_pha_get_xerr_all_bad_channel_no_group():
"""get_xerr handles all bad values [channel]
It's not obvious what it is meant to be doing here.
"""
pha = DataPHA('name', [1, 2, 3], [1, 1, 1],
quality=[2, 2, 2])
assert pha.get_xerr() == pytest.approx([0.5, 0.5, 0.5])
pha.ignore_bad()
assert pha.get_filter() == ''
assert pha.get_xerr() == pytest.approx([0.5, 0.5, 0.5])
def test_pha_get_xerr_all_bad_channel_group():
"""get_xerr handles all bad values [channel]
The behavior with grouping is different, presumably because
we assume we have grouping when we have a quality array.
"""
pha = DataPHA('name', [1, 2, 3], [1, 1, 1],
grouping=[1, 1, 1],
quality=[2, 2, 2])
assert pha.get_xerr() == pytest.approx([0.5, 0.5, 0.5])
assert pha.grouped
pha.ignore_bad()
assert pha.get_filter() == ''
assert pha.get_xerr() == pytest.approx([])
def test_pha_get_xerr_all_bad_energy_no_group():
"""get_xerr handles all bad values [energy]
It's not obvious what it is meant to be doing here.
"""
pha = DataPHA('name', [1, 2, 3], [1, 1, 1],
quality=[2, 2, 2])
ebins = np.asarray([3.0, 5., 8.0, 12.0])
rlo = ebins[:-1]
rhi = ebins[1:]
rmf = create_delta_rmf(rlo, rhi, e_min=rlo, e_max=rhi)
pha.set_rmf(rmf)
pha.units = 'energy'
assert pha.get_xerr() == pytest.approx([1.0, 1.5, 2.0])
pha.ignore_bad()
assert pha.get_filter() == ''
assert pha.get_xerr() == pytest.approx([1.0, 1.5, 2.0])
def test_pha_get_xerr_all_bad_energy_group():
"""get_xerr handles all bad values [energy]
The behavior with grouping is different, presumably because
we assume we have grouping when we have a quality array.
"""
pha = DataPHA('name', [1, 2, 3], [1, 1, 1],
grouping=[1, 1, 1],
quality=[2, 2, 2])
ebins = np.asarray([3.0, 5., 8.0, 12.0])
rlo = ebins[:-1]
rhi = ebins[1:]
rmf = create_delta_rmf(rlo, rhi, e_min=rlo, e_max=rhi)
pha.set_rmf(rmf)
pha.units = 'energy'
assert pha.get_xerr() == pytest.approx([1.0, 1.5, 2.0])
assert pha.grouped
pha.ignore_bad()
# Should this error out or not?
assert pha.get_filter() == ''
# with pytest.raises(DataErr,
# match="mask excludes all data"):
# pha.get_filter()
assert pha.get_xerr() == pytest.approx([])
@pytest.mark.parametrize("ignore", [False, True])
@pytest.mark.parametrize("lbl,lo,hi", [('lo', 1.5, 2.5),
('lo', 1.5, 2),
('hi', 1, 2.5)])
def test_pha_channel_limits_are_integers(ignore, lbl, lo, hi):
"""Ensure channels are integers."""
pha = DataPHA('name', [1, 2, 3], [1, 1, 1],
grouping=[1, -1, 1])
func = pha.ignore if ignore else pha.notice
with pytest.raises(DataErr,
match=f"unknown {lbl} argument: 'must be an integer channel value'"):
func(lo, hi)
def test_288_a():
"""The issue from #288 which was working"""
channels = np.arange(1, 6)
counts = np.asarray([5, 5, 10, 10, 2])
grouping = np.asarray([1, -1, 1, -1, 1], dtype=np.int16)
pha = DataPHA('x', channels, counts, grouping=grouping)
assert pha.mask is True
pha.ignore(3, 4)
# I use approx because it gives a nice answer, even though
# I want equality not approximation in this test. Fortunately
# with bools the use of approx is okay (it can tell the
# difference between 0 and 1, aka False and True).
#
assert pha.mask == pytest.approx(np.asarray([True, False, True]))
def test_288_a_energy():
"""The issue from #288 which was working
test_288_a but with a response so we test energy filters
"""
channels = np.arange(1, 6)
counts = np.asarray([5, 5, 10, 10, 2])
grouping = np.asarray([1, -1, 1, -1, 1], dtype=np.int16)
pha = DataPHA('x', channels, counts, grouping=grouping)
rlo = channels
rhi = channels + 1
rmf = create_delta_rmf(rlo, rhi, e_min=rlo, e_max=rhi)
pha.set_arf(rmf)
pha.set_analysis('energy')
assert pha.mask is True
pha.ignore(3, 4)
# I use approx because it gives a nice answer, even though
# I want equality not approximation in this test. Fortunately
# with bools the use of approx is okay (it can tell the
# difference between 0 and 1, aka False and True).
#
assert pha.mask == pytest.approx(np.asarray([True, False, True]))
def test_288_b():
"""The issue from #288 which was failing
We now error out with a non-integer channel
"""
channels = np.arange(1, 6)
counts = np.asarray([5, 5, 10, 10, 2])
grouping = np.asarray([1, -1, 1, -1, 1], dtype=np.int16)
pha = DataPHA('x', channels, counts, grouping=grouping)
assert pha.mask is True
with pytest.raises(DataErr,
match="unknown lo argument: 'must be an integer channel value'"):
pha.ignore(3.1, 4)
def test_288_b_energy():
"""The issue from #288 which was failing
test_288_b but with a response so we test energy filters
"""
channels = np.arange(1, 6)
counts = np.asarray([5, 5, 10, 10, 2])
grouping = np.asarray([1, -1, 1, -1, 1], dtype=np.int16)
pha = DataPHA('x', channels, counts, grouping=grouping)
rlo = channels
rhi = channels + 1
rmf = create_delta_rmf(rlo, rhi, e_min=rlo, e_max=rhi)
pha.set_arf(rmf)
pha.set_analysis('energy')
assert pha.mask is True
pha.ignore(3.1, 4)
assert pha.mask == pytest.approx(np.asarray([True, False, True]))
@requires_group
def test_grouping_non_numpy():
"""Historically the group* calls would fail oddly if y is not numpy
TypeError: grpNumCounts() Could not parse input arguments, please check input for correct type(s)
This has now been addressed but the test has been left in.
"""
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [0, 0, 0, 2, 1, 1, 0, 0, 0, 0]
pha = DataPHA('416', x, y)
pha.group_counts(3)
grouping = [1, -1, -1, -1, -1, 1, -1, -1, -1, -1.]
assert pha.grouping == pytest.approx(grouping)
quality = [0, 0, 0, 0, 0, 2, 2, 2, 2, 2]
assert pha.quality == pytest.approx(quality)
@requires_group
def test_416_a():
"""The first test case from issue #416
This used to use channels but it has been changed to add an RMF so
we can filter in energy space, as it is not clear what non-integer
channels should mean.
"""
# if y is not a numpy array then group_counts errors out
# with a strange error. Another reason why DataPHA needs
# to validate input
#
x = np.asarray([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
y = np.asarray([0, 0, 0, 2, 1, 1, 0, 0, 0, 0])
pha = DataPHA('416', x, y)
rmf = create_delta_rmf(x, x + 1, e_min=x, e_max=x + 1,
name='416')
pha.set_arf(rmf)
pha.set_analysis('energy')
pha.notice(4.5, 6.5)
# There are two ways to get the mask:
# - pha.mask returns the grouped mask (if the data is
# grouped)
# - pha.get_mask() always returns the ungrouped mask
#
mask_ungrouped = np.asarray([False] * 3 + [True] * 3 + [False] * 4)
mask_grouped = np.asarray([False] * 3 + [True] * 2 + [False] * 4)
assert pha.mask == pytest.approx(mask_ungrouped)
assert pha.get_mask() == pytest.approx(mask_ungrouped)
assert pha.grouping is None
assert pha.quality is None
# The grouping is done only for the noticed data range.
pha.group_counts(3)
assert pha.mask == pytest.approx(mask_grouped)
assert pha.get_mask() == pytest.approx(mask_ungrouped)
# Check we get the expected grouping: the first 3 and last 4
# channels are excluded by the notice call above, so they are 0,
# which means we only have 3 channels with data.
#
grouping = [0] * 3 + [1, -1, 1] + [0] * 4
assert pha.grouping == pytest.approx(grouping)
# As with grouping, the first 3 and last 4 channels are not
# changed, so have a quality of 0. For the remaining three
# channels we know the first group is okay (this corresponds to
# [1, -1] from the grouping array above, corresponding to counts
# [2, 1]) but the second group (the second [1] in grouping,
# corresponding to counts of [1]) does not meet the grouping
# criteria (it sums to 1!) and so has a quality of 2.
#
quality = [0] * 3 + [0, 0, 2] + [0] * 4
assert pha.quality == pytest.approx(quality)
dep = pha.get_dep(filter=True)
assert dep == pytest.approx([3, 1])
@requires_group
def test_416_c():
"""The third test case from issue #416
This used to use channels but it has been changed to add an RMF so
we can filter in energy space, as it is not clear what non-integer
channels should mean.
"""
x = np.asarray([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
y = np.asarray([0, 0, 0, 2, 1, 1, 0, 0, 0, 0])
pha = DataPHA('416', x, y)
rmf = create_delta_rmf(x, x + 1, e_min=x, e_max=x + 1,
name='416')
pha.set_arf(rmf)
pha.set_analysis('energy')
# When using channels this used notice(3.5, 6.5)
# but using energy space we need to use a different
# range to match the ones the original channel filter
# used.
#
pha.notice(4.5, 6.5)
# this should be ~pha.mask
tabstops = np.asarray([True] * 3 + [False] * 3 + [True] * 4)
assert ~pha.mask == pytest.approx(tabstops)
assert pha.grouping is None
assert pha.quality is None
pha.group_counts(3, tabStops=~pha.mask)
grouping = [0] * 3 + [1, -1, 1] + [0] * 4
assert pha.grouping == pytest.approx(grouping)
# the second grouped bin has a quality of 2 as
# it only contains 1 count
quality = np.zeros(10, dtype=int)
quality[5] = 2
assert pha.quality == pytest.approx(quality)
pha.ignore_bad()
assert pha.grouping == pytest.approx(grouping)
assert pha.quality == pytest.approx(quality)
dep = pha.get_dep(filter=False)
assert dep == pytest.approx(y)
# It is not at all obvious why we get 8 bins returned
# here. The ignore_bad has removed any existing
# filters, but why do we get 8, not 10, values?
# Well, one bin has been removed (quality=2)
# and two bins have merged into 1. Hence the 8.
#
dep = pha.get_dep(filter=True)
exp = np.zeros(8)
exp[3] = 3
assert dep == pytest.approx(exp)
@pytest.fixture
def make_test_image():
"""A simple image
Note that normally you'd have logical axes of 1:31,
1:21 here and then a WCS, but I've decided to number
the axes differently (in physical units) as there is
no requirement that the logical units are 1:nx/ny.
"""
x1, x0 = np.mgrid[3830:3850, 4245:4275]
# What is the ordering of shape? At the moment going for
# NumPy style (ny, nx), but there is no credible documentation
# (any documentation was added to describe the behavior we
# have now).
#
shape = x0.shape
x0 = x0.flatten()
x1 = x1.flatten()
y = np.ones(x0.size)
return DataIMG('d', x0, x1, y, shape=shape)
@pytest.fixture
def make_test_image_sky():
"""A simple image with just sky WCS
"""
crval = [2000.5, -5000.5]
cdelt = [2.0, 4.0]
crpix = [-2.0, 3.0]
sky = WCS("physical", "LINEAR", crval=crval, crpix=crpix, cdelt=cdelt)
# logical: x=1, 2, 3
# y=1, 2
#
x1, x0 = np.mgrid[1:3, 1:4]
shape = x0.shape
x0 = x0.flatten()
x1 = x1.flatten()
y = np.ones(x0.size)
return DataIMG('sky-ey', x0, x1, y, shape=shape, sky=sky)
# This is a regression test - the values were calculated by
# Sherpa/WCS and not from first principles.
#
WORLD_X0 = np.asarray([30.10151131, 30., 29.89848869, 30.10154255, 30., 29.89845745])
WORLD_X1 = np.asarray([ 9.89998487, 9.9000001, 9.89998487, 9.99998461, 10., 9.99998461])
@pytest.fixture
def make_test_image_world():
"""A simple image with just world WCS
"""
crval = [30, 10]
cdelt = [-0.1, 0.1]
crpix = [2.0, 2.0]
eqpos = WCS("world", "WCS", crval=crval, crpix=crpix, cdelt=cdelt)
# logical: x=1, 2, 3
# y=1, 2
#
x1, x0 = np.mgrid[1:3, 1:4]
shape = x0.shape
x0 = x0.flatten()
x1 = x1.flatten()
y = np.ones(x0.size)
return DataIMG('world-ey', x0, x1, y, shape=shape, eqpos=eqpos)
@pytest.fixture
def make_test_pha():
"""A simple PHA"""
chans = np.asarray([1, 2, 3, 4], dtype=np.int16)
counts = np.asarray([1, 2, 0, 3], dtype=np.int16)
return DataPHA('p', chans, counts)
@pytest.fixture
def make_grouped_pha():
"""A simple PHA with grouping
Note that this does have a quality=2 bin that is
ignored.
"""
chans = np.asarray([1, 2, 3, 4, 5], dtype=np.int16)
counts = np.asarray([1, 2, 0, 3, 12], dtype=np.int16)
grp = np.asarray([1, -1, -1, 1, 1], dtype=np.int16)
qual = np.asarray([0, 0, 0, 0, 2], dtype=np.int16)
pha = DataPHA('grp', chans, counts,
grouping=grp, quality=qual)
pha.ignore_bad()
return pha
@pytest.fixture
def make_quality_pha():
"""A simple PHA with grouping/quality data
This is different from make_grouped_pha as
- it is more focussed on quality issues so has more
channels, and more "bad" ones
- does not call ignore_bad.
"""
chans = np.arange(1, 10, dtype=np.int16)
counts = np.asarray([1, 2, 0, 3, 12, 2, 9, 8, 7], dtype=np.int16)
# The first group extends across the first set of "5" bad
# channels, as does the last group (but that also includes a "5"
# channel).
grp = np.asarray([1, -1, -1, -1, 1, 1, 1, -1, -1], dtype=np.int16)
qual = np.asarray([0, 5, 5, 0, 0, 0, 2, 2, 5], dtype=np.int16)
return DataPHA('grp', chans, counts, grouping=grp, quality=qual)
def test_img_get_img(make_test_image):
img = make_test_image
ival = img.get_img()
assert ival.shape == (20, 30)
assert ival == pytest.approx(np.ones(20 * 30).reshape((20, 30)))
def test_img_get_img_filter_none1(make_test_image):
"""get_img when all the data has been filtered: mask is False
It is not obvious what is meant to happen here (the docs suggest
the filter is ignored but there is some handling of filters), so
this should be treated as a regression test. See issue #1447
"""
img = make_test_image
img.notice2d(ignore=True)
# safety check to ensure all the data has been ignored
assert img.mask is False
shape = (20, 30)
expected = np.ones(shape)
ival = img.get_img()
assert ival.shape == shape
assert ival == pytest.approx(expected)
@requires_region
def test_img_get_img_filter_none2(make_test_image):
"""get_img when all the data has been filtered: mask is array of False"""
img = make_test_image
img.notice2d("rect(0,0,10000,10000)", ignore=True)
# safety check to ensure all the data has been ignored
assert np.iterable(img.mask)
assert not np.any(img.mask)
shape = (20, 30)
ival = img.get_img()
assert ival.shape == shape
assert not np.any(np.isfinite(ival))
@requires_region
def test_img_get_img_filter_some(make_test_image):
"""get_img when some of the data has been filtered.
Unlike filtering out all the data, this does filter the response.
"""
img = make_test_image
# use a shape that's easy to filter
img.notice2d("rect(4250, 3840,4256,3842)")
# safety check to ensure that a subset of the data has been masked out
assert np.iterable(img.mask)
assert np.any(img.mask)
assert not np.all(img.mask)
# It looks like RECT is inclusive for low and high edges.
shape = (20, 30)
expected = np.zeros(20 * 30) * np.nan
idx = np.hstack((np.arange(305, 312), np.arange(335, 342), np.arange(365, 372)))
expected[idx] = 1
expected.resize(shape)
ival = img.get_img()
assert ival.shape == shape
# pytest.approx follows IEEE so nan != nan, hence we
# have to filter out the values we expect.
#
good = np.isfinite(expected)
assert np.isfinite(ival) == pytest.approx(good)
assert ival[good] == pytest.approx(expected[good])
def image_callable(x0, x1):
"""Check that we call the routine correctly (DataIMG/get_img)"""
assert len(x0) == 20 * 30
assert len(x1) == 20 * 30
assert x0[0] == pytest.approx(4245)
assert x1[0] == pytest.approx(3830)
assert x0[-1] == pytest.approx(4274)
assert x1[-1] == pytest.approx(3849)
return np.ones(x0.size) + 2
def image_callable_filtered(x0, x1):
"""Check that we call the routine correctly (DataIMG/get_img)"""
assert len(x0) == 21
assert len(x1) == 21
assert x0[0] == pytest.approx(4250)
assert x1[0] == pytest.approx(3840)
assert x0[-1] == pytest.approx(4256)
assert x1[-1] == pytest.approx(3842)
return np.ones(x0.size) + 2
def image_callable_filtered2(x0, x1):
"""Check that we call the routine correctly (DataIMG/get_img)"""
assert len(x0) == 11
assert len(x1) == 11
assert x0[0] == pytest.approx(4247)
assert x1[0] == pytest.approx(3831)
assert x0[-1] == pytest.approx(4248)
assert x1[-1] == pytest.approx(3834)
return np.ones(x0.size) + 2
def image_callable_none(x0, x1):
"""Check that we call the routine correctly (DataIMG/get_img)"""
assert len(x0) == 0
assert len(x1) == 0
return np.asarray([])
def test_img_get_img_model(make_test_image):
"""What happens when we give a callable function to get_img?
The idea is that it will be a model, but all we need is
a callable.
"""
img = make_test_image
ival, mval = img.get_img(image_callable)
shape = (20, 30)
expected1 = np.ones(shape)
expected2 = np.ones(shape) * 3
# The data
assert ival.shape == shape
assert ival == pytest.approx(expected1)
# The callable
assert mval.shape == shape
assert mval == pytest.approx(expected2)
def test_img_get_img_model_filter_none1(make_test_image):
"""See test_img_get_img_filter_none1. Issue #1447"""
img = make_test_image
img.notice2d(ignore=True)
with pytest.raises(DataErr,
match="mask excludes all data"):
img.get_img(image_callable)
@requires_region
def test_img_get_img_model_filter_none2(make_test_image):
"""See test_img_get_img_filter_none2. Issue #1447"""
img = make_test_image
img.notice2d("rect(2000,3000,7000,5000)", ignore=True)
ival, mval = img.get_img(image_callable_none)
shape = (20, 30)
assert ival.shape == shape
assert mval.shape == shape
assert not np.any(np.isfinite(ival))
assert not np.any(np.isfinite(mval))
@requires_region
def test_img_get_img_model_filter_some(make_test_image):
"""get_img with a callable and having a filter"""
img = make_test_image
# use a shape that's easy to filter
img.notice2d("rect(4250, 3840,4256,3842)")
ival, mval = img.get_img(image_callable_filtered)
shape = (20, 30)
idx = np.hstack((np.arange(305, 312), np.arange(335, 342), np.arange(365, 372)))
expected1 = np.zeros(20 * 30) * np.nan
expected1[idx] = 1
expected1.resize(shape)
expected2 = np.zeros(20 * 30) * np.nan
expected2[idx] = 3
expected2.resize(shape)
assert ival.shape == shape
assert mval.shape == shape
# pytest.approx follows IEEE so nan != nan, hence we
# have to filter out the values we expect.
#
good = np.isfinite(expected1)
assert np.isfinite(ival) == pytest.approx(good)
assert np.isfinite(mval) == pytest.approx(good)
assert ival[good] == pytest.approx(expected1[good])
assert mval[good] == pytest.approx(expected2[good])
@requires_region
def test_img_get_img_model_filter_some2(make_test_image):
"""test_img_get_img_model_filter_some but with a non-rectangular filter
We have been using a filter that is rectangular, so matches the
grid. Let's see what happens if the filter like a circle so that
the bounding box does not match the filter.
"""
img = make_test_image
img.notice2d("circle(4247.8, 3832.1, 2)")
# check
assert img.mask.sum() == 11
ival, mval = img.get_img(image_callable_filtered2)
shape = (20, 30)
idx = np.asarray([32, 33, 34, 61, 62, 63, 64, 92, 93, 94, 123])
expected1 = np.zeros(20 * 30) * np.nan
expected1[idx] = 1
expected1.resize(shape)
expected2 = np.zeros(20 * 30) * np.nan
expected2[idx] = 3
expected2.resize(shape)
assert ival.shape == shape
assert mval.shape == shape
# pytest.approx follows IEEE so nan != nan, hence we
# have to filter out the values we expect.
#
good = np.isfinite(expected1)
assert np.isfinite(ival) == pytest.approx(good)
assert np.isfinite(mval) == pytest.approx(good)
assert ival[good] == pytest.approx(expected1[good])
assert mval[good] == pytest.approx(expected2[good])
def test_img_can_not_set_coord(make_test_image):
"""The coord attribute is not writeable.
It used to be, but now we require the user to change
it with the set_coord method.
"""
d = make_test_image
# This dataset does not have a physical system, but we do not get
# a DataErr but an AttributeError. The text message depends on
# Python version (e.g. 3.9, 3.10, and 3.11 all have different
# messages) so we do not check the message, just the class.
#
with pytest.raises(AttributeError):
d.coord = "physical"
def test_img_set_coord_invalid(make_test_image):
"""An invalid coord setting"""
d = make_test_image
assert d.coord == 'logical'
emsg = "unknown coordinates: 'bob'\n"
emsg += "Valid options: logical, image, physical, world, wcs"
with pytest.raises(DataErr,
match=emsg):
d.set_coord('bob')
assert d.coord == 'logical'
@pytest.mark.parametrize('coord,expected',
[('physical', 'physical'),
('world', 'world'),
('wcs', 'world')])
def test_img_set_coord_notset(coord, expected, make_test_image):
"""A valid coord setting but we don't have the data"""
d = make_test_image
with pytest.raises(DataErr,
match=f"data set 'd' does not contain a {expected} coordinate system"):
d.set_coord(coord)
assert d.coord == 'logical'
@pytest.mark.parametrize('coord,expected',
[('physical', 'physical'),
('world', 'world'),
('wcs', 'world')])
def test_img_get_coord_notset(coord, expected, make_test_image):
"""Check get_physical/world fail when there's no WCS"""
d = make_test_image
meth = getattr(d, f"get_{coord}")
with pytest.raises(DataErr,
match=f"data set 'd' does not contain a {expected} coordinate system"):
meth()
def test_img_set_coord_image(make_test_image):
"""Can set to image though"""
d = make_test_image
assert d.coord == 'logical'
d.set_coord('image')
assert d.coord == 'logical'
def test_img_get_coord_image(make_test_image):
"""Can call get_image though"""
d = make_test_image
cs = d.get_image()
x1, x0 = np.mgrid[3830:3850, 4245:4275]
x0 = x0.flatten()
x1 = x1.flatten()
assert cs[0] == pytest.approx(x0)
assert cs[1] == pytest.approx(x1)
assert len(cs) == 2
@pytest.fixture
def read_test_image(make_data_path):
from sherpa.astro.io import read_image
filename = 'acisf07999_000N001_r0035_regevt3_srcimg.fits'
d = read_image(make_data_path(filename))
d.name = 'test.img'
return d
@requires_fits
@requires_data
@requires_wcs
@pytest.mark.parametrize('coord,expected',
[('logical', 'logical'),
('image', 'logical'),
('physical', 'physical'),
('world', 'world'),
('wcs', 'world')])
def test_img_file_set_coord(coord, expected, read_test_image):
"""call set_coord with an image with WCS"""
d = read_test_image
assert d.coord == 'logical'
d.set_coord(coord)
assert d.coord == expected
@requires_fits
@requires_data
@requires_wcs
@pytest.mark.parametrize('coord', ['logical', 'image', 'physical', 'world', 'wcs'])
def test_img_file_get_logical(coord, read_test_image):
"""get_logical when coord is set"""
d = read_test_image
d.set_coord(coord)
yexp, xexp = np.mgrid[1:378, 1:170]
xexp = xexp.flatten()
yexp = yexp.flatten()
x, y = d.get_logical()
assert x == pytest.approx(xexp)
assert y == pytest.approx(yexp)
@requires_fits
@requires_data
@requires_wcs
@pytest.mark.parametrize('coord', ['logical', 'image', 'physical', 'world', 'wcs'])
def test_img_file_get_physical(coord, read_test_image):
"""get_physical when coord is set"""
d = read_test_image
d.set_coord(coord)
yexp, xexp = np.mgrid[4333.1298828125:4710:1, 3062.3100585938:3231:1]
xexp = xexp.flatten()
yexp = yexp.flatten()
x, y = d.get_physical()
assert x == pytest.approx(xexp)
assert y == pytest.approx(yexp)
@requires_fits
@requires_data
@requires_wcs
@pytest.mark.parametrize('coord', ['logical', 'image', 'physical', 'world', 'wcs'])
def test_img_file_get_world(coord, read_test_image):
"""get_world when coord is set"""
d = read_test_image
d.set_coord(coord)
# Since the pixel size isn't guaranteed to be constant
# just check the corners. Note that this is not a
# check from first principles: it just checks that we
# get the same answer as previous calls to this routine.
#
x, y = d.get_world()
# BL
assert x[0] == pytest.approx(150.02683651815326)
assert y[0] == pytest.approx(2.6402818651328728)
# TR
assert x[-1] == pytest.approx(150.00385708212673)
assert y[-1] == pytest.approx(2.6916707654223244)
# BR
assert x[168] == pytest.approx(150.00385224075313)
assert y[168] == pytest.approx(2.640284264823834)
# TL
assert x[169 * 377 - 169] == pytest.approx(150.0268422985145)
assert y[169 * 377 - 169] == pytest.approx(2.691668318963721)
def test_img_get_axes_logical(make_test_image):
"""Does get_axes work?"""
d = make_test_image
x, y = d.get_axes()
assert x == pytest.approx(np.arange(1, 31, 1))
assert y == pytest.approx(np.arange(1, 21, 1))
@requires_fits
@requires_data
@pytest.mark.parametrize('coord', ['logical', 'image'])
def test_img_file_get_axes_logical(coord, read_test_image):
"""get_axes when coord is set: logical"""
d = read_test_image
d.set_coord(coord)
x, y = d.get_axes()
assert x == pytest.approx(np.arange(1, 170, 1))
assert y == pytest.approx(np.arange(1, 378, 1))
@requires_fits
@requires_data
@requires_wcs
@pytest.mark.parametrize('coord', ['physical'])
def test_img_file_get_axes_physical(coord, read_test_image):
"""get_axes when coord is set: physical"""
d = read_test_image
d.set_coord(coord)
x, y = d.get_axes()
assert x == pytest.approx(np.arange(3062.3100585938, 3231, 1))
assert y == pytest.approx(np.arange(4333.1298828125, 4710, 1))
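The physical axis here is a linear transform of the logical one. A minimal sketch of the standard FITS linear WCS mapping, with reference values (crpix/crval/cdelt) inferred from this test's data rather than read from the file:

```python
def logical_to_physical(logical, crpix, crval, cdelt):
    """Standard FITS linear mapping: crval + (logical - crpix) * cdelt."""
    return crval + (logical - crpix) * cdelt

# Logical pixel 1 maps to the start of the physical x axis used above.
print(logical_to_physical(1, 1.0, 3062.3100585938, 1.0))
```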
@requires_fits
@requires_data
@requires_wcs
@pytest.mark.parametrize('coord', ['world', 'wcs'])
def test_img_file_get_axes_world(coord, read_test_image):
"""get_axes when coord is set: world"""
d = read_test_image
d.set_coord(coord)
x, y = d.get_axes()
assert x.size == 169
assert y.size == 377
# This is an interesting combination of the corners from
# test_img_file_get_world
assert x[0] == pytest.approx(150.02683651815326)
assert y[0] == pytest.approx(2.6402818651328728)
assert x[-1] == pytest.approx(150.00385224075313)
assert y[-1] == pytest.approx(2.691668318963721)
@requires_fits
@requires_data
@requires_wcs
@pytest.mark.parametrize('coord,expected',
[('logical', 'x0'),
('image', 'x0'),
('physical', 'x0 (pixels)'),
('world', 'RA (deg)'),
('wcs', 'RA (deg)')])
def test_img_file_get_x0label(coord, expected, read_test_image):
"""get_x0label"""
d = read_test_image
d.set_coord(coord)
assert d.get_x0label() == expected
@requires_fits
@requires_data
@requires_wcs
@pytest.mark.parametrize('coord', ['logical', 'physical', 'world'])
@pytest.mark.parametrize('label', ['', 'not a label', 'x0'])
def test_img_file_set_x0label(coord, label, read_test_image):
"""set_x0label"""
d = read_test_image
d.set_coord(coord)
d.set_x0label(label)
assert d.get_x0label() == label
@requires_fits
@requires_data
@requires_wcs
@pytest.mark.parametrize('coord,expected',
[('logical', 'x1'),
('image', 'x1'),
('physical', 'x1 (pixels)'),
('world', 'DEC (deg)'),
('wcs', 'DEC (deg)')])
def test_img_file_get_x1label(coord, expected, read_test_image):
"""get_x1label"""
d = read_test_image
d.set_coord(coord)
assert d.get_x1label() == expected
@requires_fits
@requires_data
@requires_wcs
@pytest.mark.parametrize('coord', ['logical', 'physical', 'world'])
@pytest.mark.parametrize('label', ['', 'not a label', 'x0'])
def test_img_file_set_x1label(coord, label, read_test_image):
"""set_x1label"""
d = read_test_image
d.set_coord(coord)
d.set_x1label(label)
assert d.get_x1label() == label
@requires_fits
@requires_data
@requires_wcs
@pytest.mark.parametrize('coord', ['logical', 'physical', 'world'])
def test_img_file_get_ylabel(coord, read_test_image):
"""get_ylabel"""
d = read_test_image
d.set_coord(coord)
assert d.get_ylabel() == 'y'
@requires_fits
@requires_data
@requires_wcs
@pytest.mark.parametrize('coord', ['logical', 'physical', 'world'])
@pytest.mark.parametrize('label', ['', 'not a label', 'y'])
def test_img_file_set_ylabel(coord, label, read_test_image):
"""set_ylabel"""
d = read_test_image
d.set_coord(coord)
d.set_ylabel(label)
assert d.get_ylabel() == label
def test_img_get_bounding_mask_nofilter(make_test_image):
"""get_bounding_mask with no filter"""
d = make_test_image
ans = d.get_bounding_mask()
assert len(ans) == 2
assert ans[0]
assert ans[1] is None
def test_img_get_bounding_mask_nodata(make_test_image):
"""get_bounding_mask with all data masked"""
d = make_test_image
d.notice2d(ignore=True)
ans = d.get_bounding_mask()
assert len(ans) == 2
assert not ans[0]
assert ans[1] is None
@requires_region
def test_img_get_bounding_mask_filtered(make_test_image):
"""get_bounding_mask with data partially filtered"""
d = make_test_image
d.notice2d('ellipse(4260,3840,3,2,0)')
ans = d.get_bounding_mask()
mask = np.zeros(5 * 7, dtype=bool)
for i in [3, 8, 9, 10, 11, 12, 14, 15, 16, 17, 18, 19, 20, 22, 23, 24,
25, 26, 31]:
mask[i] = True
assert len(ans) == 2
assert ans[0] == pytest.approx(mask)
assert ans[1] == (5, 7)
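get_bounding_mask returns the mask restricted to the bounding box of the filtered region, together with that box's shape. A rough sketch of the bounding-box idea (a hypothetical helper, not part of Sherpa):

```python
def bounding_shape(points):
    """Shape (ny, nx) of the smallest box enclosing the selected points."""
    ys = [y for y, x in points]
    xs = [x for y, x in points]
    return (max(ys) - min(ys) + 1, max(xs) - min(xs) + 1)

# An ellipse with semi-axes 3 and 2 centred at (4260, 3840) reaches
# from (3838, 4257) to (3842, 4263), giving the (5, 7) shape asserted
# above.
print(bounding_shape([(3838, 4257), (3842, 4263)]))  # (5, 7)
```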
@requires_region
def test_img_get_filter(make_test_image):
"""Simple get_filter check on an image."""
d = make_test_image
assert d.get_filter() == ''
shape = 'ellipse(4260,3840,3,2,0)'
d.notice2d(shape)
assert d.get_filter() == shape.capitalize()
@requires_region
def test_img_get_filter_exclude(make_test_image):
"""Simple get_filter check on an image."""
d = make_test_image
assert d.get_filter() == ''
shape = 'ellipse(4260,3840,3,2,0)'
d.notice2d(shape, ignore=True)
expected = 'Field()&!' + shape.capitalize()
assert d.get_filter() == expected
@requires_region
def test_img_get_filter_none(make_test_image):
"""Simple get_filter check on an image: no data"""
d = make_test_image
shape = 'ellipse(4260,3840,3,2,0)'
d.notice2d(shape)
d.notice2d(ignore=True)
# It's not clear what the filter should be here
assert d.get_filter() == ''
@requires_region
def test_img_get_filter_combined(make_test_image):
"""Simple get_filter check on an image."""
d = make_test_image
assert d.get_filter() == ''
shape1 = 'ellipse(4260,3840,3,2,0)'
d.notice2d(shape1)
shape2 = 'rect(4258,3830,4264,3841)'
d.notice2d(shape2)
shape2 = shape2.replace('rect', 'rectangle')
shape = shape1.capitalize() + '|' + shape2.capitalize()
assert d.get_filter() == shape
@requires_region
def test_img_get_filter_excluded(make_test_image):
"""Simple get_filter check on an image."""
d = make_test_image
assert d.get_filter() == ''
shape1 = 'ellipse(4260,3840,3,2,0)'
d.notice2d(shape1)
shape2 = 'rect(4258,3830,4264,3841)'
d.notice2d(shape2, ignore=True)
shape2 = shape2.replace('rect', 'rectangle')
shape = shape1.capitalize() + '&!' + shape2.capitalize()
assert d.get_filter() == shape
def check_ignore_ignore(d):
"""Check removing the shapes works as expected.
Tests must use requires_region
"""
from sherpa.astro.utils._region import Region
shape1 = 'ellipse(4260,3840,3,2,0)'
d.notice2d(shape1, ignore=True)
mask1 = ~Region(shape1).mask(d.x0, d.x1).astype(bool)
assert d.mask == pytest.approx(mask1)
expected = 'Field()&!' + shape1.capitalize()
assert d.get_filter() == expected
shape2 = 'rect(4258,3830,4264,3841)'
d.notice2d(shape2, ignore=True)
mask2 = ~Region(shape2).mask(d.x0, d.x1).astype(bool)
assert d.mask == pytest.approx(mask1 & mask2)
shape2 = shape2.replace('rect', 'rectangle')
expected = 'Field()&!' + shape1.capitalize() + '&!' + shape2.capitalize()
assert d.get_filter() == expected
@requires_region
def test_img_get_filter_included_excluded(make_test_image):
"""Simple get_filter check on an image.
Just to match test_img_get_filter_excluded_excluded.
"""
d = make_test_image
check_ignore_ignore(d)
@requires_region
def test_img_get_filter_excluded_excluded(make_test_image):
"""Simple get_filter check on an image.
Here we want to check the behavior when d.mask is False. I am not
sure this makes sense, but this is done to show the current
behavior. Note that d.notice2d(ignore=True) is meant to ignore all
points but it (currently) doesn't add a region since we know from
the mask all the points are ignored and so there's no need to add
a "no region" filter: if you have !field() and then union s we
would get
!field()|s
but this is the same as
s
Instead if we do !field().subtract(s) then it's the same as
!field(). There probably is something we could improve here.
"""
d = make_test_image
assert d.mask
d.notice2d(ignore=True)
assert not d.mask
# It is not at all obvious to me that we should get the
# same results as test_img_get_filter_included_excluded,
# as we start with ignoring all points.
#
# However, this is just to check the existing behavior,
# which was not changed in #968.
#
check_ignore_ignore(d)
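The set identities described in the docstring above can be sketched with plain Python sets standing in for pixel masks:

```python
universe = set(range(12))        # field() selects every pixel
not_field = universe - universe  # !field() selects nothing
s = {2, 3, 4}

# Unioning a shape with !field() gives back the shape ...
print((not_field | s) == s)          # True
# ... while subtracting a shape from !field() changes nothing.
print((not_field - s) == not_field)  # True
```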
@requires_region
def test_img_get_filter_compare_filtering(make_test_image):
"""Check calling notice2d(ignore=True) with 2 shapes is same as once.
"""
d = make_test_image
shape1 = 'ellipse(4260,3840,3,2,0)'
shape2 = 'rect(4258,3830,4264,3841)'
d.notice2d(shape1, ignore=True)
d.notice2d(shape2, ignore=True)
assert d._region is not None
maska = d.mask.copy()
d.notice2d()
assert d._region is None
assert d.mask is True
exc = f"field()-{shape1}-{shape2}"
d.notice2d(exc)
maskb = d.mask.copy()
assert maskb == pytest.approx(maska)
# just check we have some True and False values
assert maska.min() == 0
assert maska.max() == 1
def test_pha_size_grouped(make_grouped_pha):
"""Regression test: what is the size of this?
Is it always DETCHANS or does it change? Test what we say.
"""
pha = make_grouped_pha
assert pha.quality_filter is not None
assert pha.size == 5
assert pha.quality_filter.sum() == 4
def test_pha_size_quality(make_quality_pha):
"""Regression test: what is the size of this?
Is it always DETCHANS or does it change? Test what we say.
"""
pha = make_quality_pha
pha.ignore_bad()
assert pha.quality_filter is not None
assert pha.size == 9
assert pha.quality_filter.sum() == 4
def test_grouped_quality_filter_expr(make_grouped_pha):
"""What is the filter expression?
This is a regression test.
"""
pha = make_grouped_pha
assert pha.get_filter() == "1:5"  # does not exclude bad channel
def test_quality_quality_filter_expr(make_quality_pha):
"""What is the filter expression?
This is a regression test.
"""
pha = make_quality_pha
pha.ignore_bad()
# Note that the first group covers channels 1-4, but channels 2 and 3
# are excluded, so this could be written as "1,4-..." but then
# this loses the fact that the first group is 1 and 4 (so it
# can be thought of as being correct).
#
assert pha.get_filter() == "1:9"  # does not exclude bad channels at end
def test_pha_quality_all_bad_basic_checks():
"""Regression test
Note this only sets the quality field, not the grouping field.
"""
all4 = np.ones(4, dtype=bool)
none4 = np.zeros(4, dtype=bool)
pha = DataPHA("q", [1, 2, 3, 4], [9, 0, 1, 64])
fvals = [12, 2, 7, 8]
assert pha.mask is True
assert pha.get_mask() == pytest.approx(all4)
assert pha.get_filter() == "1:4"
assert pha.get_x() == pytest.approx([1, 2, 3, 4])
assert pha.apply_filter(fvals) == pytest.approx(fvals)
assert pha.apply_grouping(fvals) == pytest.approx(fvals)
pha.quality = [2, 2, 2, 5]
assert pha.mask is True
assert pha.get_mask() == pytest.approx(all4)
assert pha.get_filter() == "1:4"
assert pha.get_x() == pytest.approx([1, 2, 3, 4])
assert pha.apply_filter(fvals) == pytest.approx(fvals)
assert pha.apply_grouping(fvals) == pytest.approx(fvals)
pha.ignore_bad()
assert pha.mask == pytest.approx(none4)
assert pha.get_mask() == pytest.approx(none4)
assert pha.get_filter() == ""
assert pha.get_x() == pytest.approx([1, 2, 3, 4])
assert pha.apply_filter(fvals) == pytest.approx([])
assert pha.apply_grouping(fvals) == pytest.approx(fvals)
@pytest.mark.parametrize("qual,fexpr,mask,counts",
[([2, 0, 0, 0], "1:4", [0, 1, 1, 1], [1, 64]),
([0, 0, 0, 2], "1:4", [1, 1, 1, 0], [9, 1]),
([0, 2, 2, 0], "1:4", [1, 0, 0, 1], [9, 64])
])
def test_pha_quality_bad_range_checks(qual, fexpr, mask, counts):
"""Regression test when a group is all bad quality.
We want to test start, end, and middle of the channels.
"""
pha = DataPHA("q", [1, 2, 3, 4], [9, 0, 1, 64], quality=qual,
grouping=[1, 1, -1, 1])
pha.ignore_bad()
assert pha.get_filter() == fexpr
assert pha.mask is True
assert pha.get_mask() == pytest.approx(mask)
assert pha.get_dep(filter=True) == pytest.approx(counts)
def test_pha_quality_change_mask(make_quality_pha):
"""A regression test."""
pha = make_quality_pha
pha.ignore_bad()
assert pha.mask is True
pha.mask = [1, 1, 0]
assert pha.mask == pytest.approx(np.asarray([True, True, False]))
def test_pha_quality_change_mask_ungrouped(make_quality_pha):
"""A regression test."""
pha = make_quality_pha
pha.ignore_bad()
pha.ungroup()
assert pha.mask is True
pha.mask = [1, 1, 0, 1, 1, 0, 0, 1, 1]
mask = np.asarray([True, True, False, True, True, False, False, True, True])
assert pha.mask == pytest.approx(mask)
def test_pha_quality_change_mask_fullsize(make_quality_pha):
"""A regression test.
Can we give the mask the "full" size (i.e. detchans)?
No.
"""
pha = make_quality_pha
pha.ignore_bad()
with pytest.raises(DataErr,
match="^size mismatch between grouped data and mask: 3 vs 9$"):
pha.mask = [1, 1, 0, 1, 1, 0, 0, 1, 1]
def test_pha_quality_change_mask_wrong_size(make_quality_pha):
"""A regression test."""
pha = make_quality_pha
pha.ignore_bad()
with pytest.raises(DataErr,
match="^size mismatch between grouped data and mask: 3 vs 5$"):
pha.mask = [1, 1, 0, 1, 1]
def test_pha_quality_change_mask_ungrouped_wrong_size(make_quality_pha):
"""A regression test."""
pha = make_quality_pha
pha.ignore_bad()
pha.ungroup()
with pytest.raises(DataErr,
match="^size mismatch between independent axis and mask: 9 vs 5$"):
pha.mask = [1, 1, 0, 1, 1]
def test_pha_change_channels(make_test_pha):
"""What happens if we change the channel/count values?
We have several ways of getting the independent and dependent
axes.
"""
pha = make_test_pha
channels = [1, 2, 3, 4]
counts = [1, 2, 0, 3]
assert np.all(pha.channel == channels)
assert np.all(pha.counts == counts)
assert len(pha.get_indep()) == 1
assert np.all(pha.get_indep()[0] == channels)
assert np.all(pha.get_dep() == counts)
assert len(pha.indep) == 1
assert np.all(pha.indep[0] == channels)
assert np.all(pha.dep == counts)
channels2 = [2, 3, 4, 5]
counts2 = [20, 30, 20, 10]
pha.channel = channels2
pha.counts = counts2
assert np.all(pha.channel == channels2)
assert np.all(pha.counts == counts2)
assert len(pha.get_indep()) == 1
assert np.all(pha.get_indep()[0] == channels2)
assert np.all(pha.get_dep() == counts2)
assert len(pha.indep) == 1
assert np.all(pha.indep[0] == channels2)
assert np.all(pha.dep == counts2)
def test_pha_add_channels(make_test_pha):
"""What happens if we increase the number of channels/counts?
Extends test_pha_change_channels
"""
pha = make_test_pha
channels2 = np.arange(1, 6, dtype=int)
with pytest.raises(DataErr,
match="independent axis can not change size: 4 to 5"):
pha.channel = channels2
def test_pha_remove_channels(make_test_pha):
"""What happens if we decrease the number of channels/counts?
Extends test_pha_change_channels
"""
pha = make_test_pha
channels2 = np.arange(1, 4, dtype=int)
with pytest.raises(DataErr,
match="independent axis can not change size: 4 to 3"):
pha.channel = channels2
@pytest.mark.parametrize("requested,expected",
[("bin", "channel"), ("Bin", "channel"),
("channel", "channel"), ("ChannelS", "channel"),
("chan", "channel"),
("energy", "energy"), ("ENERGY", "energy"),
("Energies", "energy"),
("WAVE", "wavelength"), ("wavelength", "wavelength"),
("Wavelengths", "wavelength"),
("chan This Is Wrong", "channel"), # should this be an error?
("WAVEY GRAVY", "wavelength") # should this be an error?
])
def test_pha_valid_units(requested, expected, make_test_pha):
"""Check we can set the units field of a PHA object"""
pha = make_test_pha
pha.units = requested
assert pha.units == expected
@pytest.mark.parametrize("invalid", ["Bins", "BINNING", "wavy", "kev", "angstrom"])
def test_pha_invalid_units(invalid, make_test_pha):
"""Check we can not set units to an invalid value"""
pha = make_test_pha
with pytest.raises(DataErr,
match=f"unknown quantity: '{invalid}'"):
pha.units = invalid
@pytest.mark.parametrize("invalid", ["RATE", "COUNTS", "rates", "count", "count-rate"])
def test_pha_analysis_type_invalid(invalid, make_test_pha):
pha = make_test_pha
with pytest.raises(DataErr,
match=f"unknown plot type '{invalid}', choose 'rate' or 'counts'"):
pha.set_analysis("channel", type=invalid)
def test_pha_analysis_plot_fac_valid(make_test_pha):
"""Historically we've allowed 2.0 as an argument, so check it still works"""
pha = make_test_pha
assert pha.plot_fac == 0
pha.plot_fac = 2.0
assert pha.plot_fac == 2
@pytest.mark.parametrize("invalid", ["1", 2.01, 0.5, complex(1)])
def test_pha_analysis_plot_fac_invalid(invalid, make_test_pha):
pha = make_test_pha
# Need to protect the '(1+0j)' brackets.
#
emsg = re.escape(f"unknown plot_fac setting: '{invalid}'")
with pytest.raises(DataErr,
match=emsg):
pha.plot_fac = invalid
@pytest.mark.parametrize("invalid", ["1", 2.01, 0.5, complex(1)])
def test_pha_analysis_factor_invalid(invalid, make_test_pha):
pha = make_test_pha
# Need to protect the '(1+0j)' brackets.
#
emsg = re.escape(f"unknown factor setting: '{invalid}'")
with pytest.raises(DataErr,
match=emsg):
pha.set_analysis("channel", factor=invalid)
def test_pha_get_specresp_no_response(make_test_pha):
pha = make_test_pha
assert pha.get_specresp() is None
def test_pha_ignore_bad_no_quality(make_test_pha):
pha = make_test_pha
assert pha.quality is None
with pytest.raises(DataErr,
match="data set 'p' does not specify quality flags"):
pha.ignore_bad()
def test_pha_quality_noticed_channels_no_filter(make_quality_pha):
"""Regression test."""
pha = make_quality_pha
chans0 = pha.get_noticed_channels()
assert chans0 == pytest.approx(np.arange(1, 10))
pha.ignore_bad()
chans1 = pha.get_noticed_channels()
assert chans1 == pytest.approx([1, 4, 5, 6])
def test_pha_quality_noticed_channels_with_filter(make_quality_pha):
"""Regression test."""
pha = make_quality_pha
assert pha.grouped
pha.ignore_bad()
# The bins are 1-4 (although 2 and 3 are excluded), 5, 6, 7-9 (all
# excluded). Ignoring at hi=2 makes this interesting, as the first
# group should then be removed.
#
pha.ignore(hi=2)
chans1 = pha.get_noticed_channels()
assert chans1 == pytest.approx([5, 6])
pha.notice(lo=2)
chans2 = pha.get_noticed_channels()
assert chans2 == pytest.approx([1, 4, 5, 6])
def test_pha_quality_ignore_bad_clear_filter(make_quality_pha):
"""Regression test."""
pha = make_quality_pha
mask0 = np.ones(9, dtype=bool)
assert pha.get_filter() == "1:9"
assert pha.mask is True
assert pha.get_mask() == pytest.approx(mask0)
assert pha.quality_filter is None
# channels 2,3 and 7-9 are "bad"
pha.ignore(hi=3)
mask = np.asarray([False] + [True] * 3)
mask_full = np.asarray([False] * 4 + [True] * 5)
assert pha.get_filter() == "5:9"
assert pha.mask == pytest.approx(mask)
assert pha.get_mask() == pytest.approx(mask_full)
assert pha.quality_filter is None
# This resets the previous filters
pha.ignore_bad()
qflags = np.asarray([True] * 1 + [False] * 2 + [True] * 3 + [False] * 3)
assert pha.get_filter() == "1:9"
assert pha.mask is True
assert pha.get_mask() == pytest.approx(qflags)
assert pha.quality_filter == pytest.approx(qflags)
pha.ignore(hi=3)
mask2 = np.asarray([False] + [True] * 2)
mask2_full = np.asarray([False] * 2 + [True] * 2)
assert pha.get_filter() == "5:6"
assert pha.mask == pytest.approx(mask2)
assert pha.get_mask() == pytest.approx(mask2_full)
assert pha.quality_filter == pytest.approx(qflags)
pha.ignore(lo=2, hi=4)
assert pha.get_filter() == "5:6"
assert pha.mask == pytest.approx(mask2)
assert pha.get_mask() == pytest.approx(mask2_full)
assert pha.quality_filter == pytest.approx(qflags)
# This removes the quality filter!
pha.notice()
assert pha.get_filter() == "1:9"
assert pha.mask is True
assert pha.get_mask() == pytest.approx(mask0)
assert pha.quality_filter is None
def test_pha_grouping_changed_no_filter_1160(make_test_pha):
"""What happens when the grouping is changed?
See also test_pha_grouping_changed_filter_1160
"""
pha = make_test_pha
d1 = pha.get_dep(filter=True)
assert d1 == pytest.approx([1, 2, 0, 3])
# grouping set but not grouped
pha.grouping = [1, 1, 1, 1]
d2 = pha.get_dep(filter=True)
assert d2 == pytest.approx([1, 2, 0, 3])
# now grouped
pha.grouped = True
d3 = pha.get_dep(filter=True)
assert d3 == pytest.approx([1, 2, 0, 3])
pha.grouping = [1, 1, -1, 1]
d4 = pha.get_dep(filter=True)
assert d4 == pytest.approx([1, 2, 3])
def test_pha_grouping_changed_filter_1160(make_test_pha):
"""What happens when the grouping is changed?
See also test_pha_grouping_changed_no_filter_1160
"""
pha = make_test_pha
pha.notice(2, 5)
d1 = pha.get_dep(filter=True)
assert d1 == pytest.approx([2, 0, 3])
# grouping set but not grouped
pha.grouping = [1, 1, 1, 1]
d2 = pha.get_dep(filter=True)
assert d2 == pytest.approx([2, 0, 3])
# now grouped
pha.grouped = True
d3 = pha.get_dep(filter=True)
assert d3 == pytest.approx([2, 0, 3])
pha.grouping = [1, 1, -1, 1]
d4 = pha.get_dep(filter=True)
assert d4 == pytest.approx([2, 3])
def test_pha_grouping_changed_1160_grped_no_filter(make_grouped_pha):
"""Test based on work on #1160
This is probably no different to
test_pha_grouping_changed_no_filter_1160 and
test_pha_grouping_changed_filter_1160 but separated out
as more tests here are probably useful.
"""
# Do we care about adding a response?
pha = make_grouped_pha
# why does this not understand the "bad quality" filter?
ofilter = "1:5"
assert pha.get_filter() == ofilter
# Change the grouping
pha.grouping = [1] * 5
# Although no grouping, we still have the bad filter in place
assert pha.get_dep(filter=False) == pytest.approx([1, 2, 0, 3, 12])
assert pha.get_dep(filter=True) == pytest.approx([1, 2, 0, 3])
assert pha.get_filter() == ofilter
def test_pha_grouping_changed_1160_grped_with_filter(make_grouped_pha):
"""Test based on work on #1160
See test_pha_grouping_changed_1160_grped_no_filter
"""
pha = make_grouped_pha
# Can not say
# pha.notice(2, 4)
# as we have already done a filter, so this would not
# act to ignore the first channel.
#
# The ignore(lo=5) line is not needed as it is already excluded by
# a bad-quality channel, but users can say this, so check the
# response.
#
pha.ignore(hi=1)
pha.ignore(lo=5)
# Dropping channel 1 means the first group gets dropped, so we
# only have channel 4 left.
#
assert pha.get_filter() == "4"
assert pha.get_dep(filter=False) == pytest.approx([1, 2, 0, 3, 12])
assert pha.get_dep(filter=True) == pytest.approx([3])
# Change the grouping; it would be nice if it could have
# recognized the requested range was > 1 and <= 5 but the current
# code does not support this.
#
pha.grouping = [1] * 5
assert pha.get_dep(filter=False) == pytest.approx([1, 2, 0, 3, 12])
assert pha.get_dep(filter=True) == pytest.approx([3])
assert pha.get_filter() == "4"
def test_pha_grouping_changed_1160_ungrped_with_filter(make_grouped_pha):
"""Test based on work on #1160
A version of
test_pha_grouping_changed_1160_grped_with_filter
but the data is not grouped, even though the grouping
is set/changed.
"""
pha = make_grouped_pha
# Apply the filter whilst still grouped
pha.ignore(hi=1)
pha.ignore(lo=5)
pha.ungroup()
# The filtering does not change because of the ungroup call,
# although we might like it to.
assert pha.get_filter() == "4"
assert pha.get_dep(filter=False) == pytest.approx([1, 2, 0, 3, 12])
assert pha.get_dep(filter=True) == pytest.approx([3])
# Change the grouping
pha.grouping = [1] * 5
assert pha.get_dep(filter=False) == pytest.approx([1, 2, 0, 3, 12])
assert pha.get_dep(filter=True) == pytest.approx([3])
assert pha.get_filter() == "4"
@requires_fits
@requires_data
def test_1160(make_data_path):
"""Use the dataset we reported this with just as an extra check
It is slightly different to the other #1160 tests above because
a) we do not have a non-zero quality bin
b) we have an instrument response (this should not really
matter here)
"""
from sherpa.astro.io import read_pha
pha = read_pha(make_data_path("3c273.pi"))
fexpr = "0.47:6.57"
pha.notice(0.5, 6)
assert pha.get_dep(filter=False).shape == (1024, )
assert pha.get_dep(filter=True).shape == (41, )
assert pha.mask.shape == (46, )
assert pha.mask.sum() == 41
assert pha.get_filter(format="%.2f") == fexpr
pha.grouping = [1] * 1024
assert pha.get_dep(filter=False).shape == (1024, )
assert pha.get_dep(filter=True).shape == (418, )
assert pha.mask.shape == (1024, )
assert pha.mask.sum() == 418
assert pha.get_filter(format="%.2f") == fexpr
def test_pha_remove_grouping(make_test_pha):
"""Check we can remove the grouping array.
See issue #1659
"""
pha = make_test_pha
assert pha.grouping is None
assert not pha.grouped
no_data = [1, 2, 0, 3]
d2 = pha.get_dep(filter=True)
assert d2 == pytest.approx(no_data)
pha.grouping = [1, -1, 1, -1]
assert not pha.grouped
pha.grouped = True
d1 = pha.get_dep(filter=True)
assert d1 == pytest.approx([3, 3])
# Can we remove the grouping column?
pha.grouping = None
# Check the get_dep behavior before the grouped setting, as this
# currently causes a TypeError rather than just leaving the boolean
# flag unchanged.
#
d2 = pha.get_dep(filter=True)
assert d2 == pytest.approx(no_data)
# This thinks that pha.grouped is still set
assert not pha.grouped
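The OGIP grouping convention used throughout these tests has 1 start a new group and -1 continue the previous one, with the counts summed within each group. An illustrative sketch (not Sherpa's apply_grouping, which also supports other flag values):

```python
def sum_groups(counts, grouping):
    """Sum counts into groups following the 1/-1 flag convention."""
    out = []
    for cval, flag in zip(counts, grouping):
        if flag == 1 or not out:
            out.append(cval)   # start a new group
        else:
            out[-1] += cval    # extend the current group
    return out

# Counts [1, 2, 0, 3] with grouping [1, -1, 1, -1] collapse to [3, 3],
# matching the grouped get_dep result asserted above.
print(sum_groups([1, 2, 0, 3], [1, -1, 1, -1]))  # [3, 3]
```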
@pytest.mark.xfail
@pytest.mark.parametrize("grouping", [True, [1, 1], np.ones(10)])
def test_pha_grouping_size(grouping, make_test_pha):
"""Check we error out if grouping has the wrong size"""
pha = make_test_pha
with pytest.raises(DataErr,
match="size mismatch between channel and grouping"):
pha.grouping = grouping
def test_pha_remove_quality(make_test_pha):
"""Check we can remove the quality array."""
pha = make_test_pha
assert pha.quality is None
no_data = [1, 2, 0, 3]
d1 = pha.get_dep(filter=True)
assert d1 == pytest.approx(no_data)
pha.quality = [0, 0, 0, 2]
d2 = pha.get_dep(filter=True)
assert d2 == pytest.approx(no_data)
pha.quality = None
d3 = pha.get_dep(filter=True)
assert d3 == pytest.approx(no_data)
@pytest.mark.xfail
def test_pha_remove_quality_bad(make_test_pha):
"""Check we can remove the quality array after calling ignore_bad
Here we ensure we have a "bad" value that will be
marked bad by ignore_bad.
What is the expected behavior after removing the
quality array? See #1427
"""
pha = make_test_pha
assert pha.quality is None
no_data = [1, 2, 0, 3]
pha.quality = [0, 0, 0, 2]
pha.ignore_bad()
d1 = pha.get_dep(filter=True)
assert d1 == pytest.approx([1, 2, 0])
# At the moment d2 == [1, 2, 0] so the quality filter remains
pha.quality = None
d2 = pha.get_dep(filter=True)
assert d2 == pytest.approx(no_data)
def test_pha_quality_bad_filter(make_test_pha, caplog):
"""What is the filter expression when ignore bad + filter
Also check the screen output (there is none for the PHA case,
unlike the UI version).
"""
pha = make_test_pha
assert pha.get_filter() == "1:4"
assert len(caplog.record_tuples) == 0
with SherpaVerbosity("INFO"):
pha.ignore(hi=1)
assert pha.get_filter() == "2:4"
assert len(caplog.record_tuples) == 0
d1 = pha.get_dep(filter=True)
assert d1 == pytest.approx([2, 0, 3])
pha.quality = [0, 0, 0, 2]
with SherpaVerbosity("INFO"):
pha.ignore_bad()
d2 = pha.get_dep(filter=True)
assert d2 == pytest.approx([2, 0])
assert pha.get_filter() == "2:3"
assert len(caplog.record_tuples) == 0
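Filter expressions such as "2:3" are run-length encodings of the channel mask. A simplified sketch of the conversion (Sherpa's get_filter also handles energy and wavelength units, which this ignores):

```python
def mask_to_filter(mask):
    """Collapse a boolean channel mask into a '<lo>:<hi>' expression."""
    ranges = []
    start = None
    for chan, keep in enumerate(mask, start=1):
        if keep and start is None:
            start = chan               # open a new noticed range
        elif not keep and start is not None:
            ranges.append((start, chan - 1))
            start = None               # close the current range
    if start is not None:
        ranges.append((start, len(mask)))
    return ",".join(f"{lo}:{hi}" if lo != hi else f"{lo}"
                    for lo, hi in ranges)

# Channels 2-3 noticed out of 1-4, as in the test above.
print(mask_to_filter([False, True, True, False]))  # 2:3
```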
def test_pha_quality_bad_filter2(make_quality_pha, caplog):
"""A different set of bad channels and groups to test_pha_quality_bad_filter
This also does not include a filter, since this messes everything up at
this time.
"""
pha = make_quality_pha
assert pha.get_filter() == "1:9"
assert pha.grouped
d1 = pha.get_dep(filter=True)
assert d1 == pytest.approx([6, 12, 2, 24])
with SherpaVerbosity("INFO"):
pha.ignore_bad()
assert len(caplog.record_tuples) == 0
d2 = pha.get_dep(filter=True)
assert d2 == pytest.approx([4, 12, 2])
assert pha.get_filter() == "1:9"
assert len(caplog.record_tuples) == 0
@pytest.mark.xfail
def test_pha_quality_bad_filter_remove(make_test_pha):
"""test_pha_quality_bad_filter then remove the quality array
What is the expected behavior after removing the
quality array? See #1427
"""
pha = make_test_pha
pha.ignore(hi=1)
pha.quality = [0, 0, 0, 2]
pha.ignore_bad()
# At the moment the filter still includes the quality filter
pha.quality = None
assert pha.get_filter() == "2:4"
@pytest.mark.parametrize("field,expected",
[("channel", [1, 2, 3, 4, 5, 6, 7, 8, 9]),
("counts", [1, 2, 0, 3, 12, 2, 9, 8, 7]),
("grouping", [1, -1, -1, -1, 1, 1, 1, -1, -1]),
("quality", [0, 5, 5, 0, 0, 0, 2, 2, 5])
])
def test_pha_quality_bad_field(field, expected, make_quality_pha):
"""After ignore_bad what does the field return?"""
pha = make_quality_pha
assert getattr(pha, field) == pytest.approx(expected)
pha.ignore_bad()
assert getattr(pha, field) == pytest.approx(expected)
def test_pha_quality_bad_mask(make_quality_pha):
"""What does the mask look like?"""
pha = make_quality_pha
assert pha.mask is True
pha.ignore_bad()
assert pha.mask is True
def test_pha_quality_bad_mask_grouped(make_quality_pha):
"""What does the mask look like?"""
pha = make_quality_pha
pha.ignore_bad()
pha.group()
assert pha.mask is True
def test_pha_quality_bad_get_mask(make_quality_pha):
"""What does the get_mask() look like?"""
pha = make_quality_pha
assert pha.get_mask() == pytest.approx([1] * 9)
pha.ignore_bad()
assert pha.get_mask() == pytest.approx([1, 0, 0, 1, 1, 1, 0, 0, 0])
def test_pha_no_quality_ignore_bad(make_test_pha):
"""What happens if we call ignore_bad when no quality data is set?"""
pha = make_test_pha
with pytest.raises(DataErr,
match="^data set 'p' does not specify quality flags$"):
pha.ignore_bad()
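The interaction these tests probe — a notice/ignore filter combined with quality flags — can be sketched with plain numpy. This is only an illustration of the masking logic (the array values are hypothetical, chosen to mirror the 4-channel fixtures above), not Sherpa's implementation:

```python
import numpy as np

# Hypothetical 4-channel dataset mirroring the fixtures above: a
# notice/ignore filter plus quality flags (0 = good, non-zero = bad).
filter_mask = np.array([False, True, True, True])  # e.g. after ignore(hi=1)
quality = np.array([0, 0, 0, 2])

good = quality == 0              # the channels ignore_bad keeps
effective = filter_mask & good   # channels seen by get_dep(filter=True)
print(effective.tolist())        # [False, True, True, False]
```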
@requires_group
def test_pha_change_quality_values(caplog):
"""What happens if we change the quality column?
This is a regression test as it is likely we should change the filter,
but we have not thought through the consequences. See also #1427
"""
pha = DataPHA('ex', [1, 2, 3, 4, 5, 6, 7], [1, 2, 1, 0, 2, 2, 1])
pha.group_counts(5)
assert pha.quality == pytest.approx([0, 0, 0, 0, 0, 2, 2])
assert pha.get_dep(filter=True) == pytest.approx([6, 3])
assert pha.get_filter() == '1:7'
assert pha.quality_filter is None
assert len(caplog.records) == 0
pha.ignore_bad()
assert len(caplog.records) == 0
qfilt = np.asarray([True] * 5 + [False] * 2)
assert pha.quality_filter == pytest.approx(qfilt)
assert pha.get_dep(filter=True) == pytest.approx([6])
assert pha.get_filter() == '1:7'
# With no tabStops set it uses ~pha.get_mask() which in this case
# is [False] * 5 + [True] * 2,
#
pha.group_counts(4)
assert len(caplog.records) == 0
assert pha.quality == pytest.approx([0, 0, 0, 2, 2, 0, 0])
# Should quality filter be reset?
assert pha.quality_filter == pytest.approx(qfilt)
assert pha.get_dep(filter=True) == pytest.approx([4, 2])
assert pha.get_filter() == '1:7'
def test_pha_group_adapt_check():
"""Regression test.
This was found when investigating ignore_bad issues and felt to
be worth a test to check how this code behaves.
"""
counts = [4, 2, 3, 1, 5, 6, 7]
pha = DataPHA("ex", [1, 2, 3, 4, 5, 6, 7], counts)
pha.group_adapt(6)
# The grouping may change if the adaptive scheme changes.
assert pha.grouping == pytest.approx([1, -1, 1, 1, -1, 1, 1])
# The group library behaves oddly (the last element being 2).
assert pha.quality == pytest.approx([0, 0, 2, 0, 0, 0, 2])
assert pha.get_dep(filter=False) == pytest.approx(counts)
assert pha.get_dep(filter=True) == pytest.approx([6, 3, 6, 6, 7])
def test_pha_group_ignore_bad_then_filter(caplog):
"""Regression test."""
counts = [4, 2, 3, 1, 5, 6, 7]
pha = DataPHA("ex", [1, 2, 3, 4, 5, 6, 7], counts)
# The equivalent of pha.group_adapt(6) with CIAO 4.17 but this
# may change, so set the data manually. Compare to
# test_pha_group_adapt_check
#
# pha.group_adapt(6)
pha.grouping = [1, -1, 1, 1, -1, 1, 1]
pha.quality = [0, 0, 2, 0, 0, 0, 2]
pha.group()
assert len(caplog.records) == 0
assert pha.mask is True
assert pha.get_mask() == pytest.approx(np.ones(7, dtype=bool))
assert pha.get_filter() == '1:7'
assert pha.quality_filter is None
pha.ignore_bad()
assert len(caplog.records) == 0
qual_mask = np.asarray([True] * 2 + [False] + [True] * 3 + [False])
assert pha.mask is True
assert pha.get_mask() == pytest.approx(qual_mask)
assert pha.get_filter() == '1:7'
assert pha.quality_filter == pytest.approx(qual_mask)
assert pha.quality == pytest.approx([0, 0, 2, 0, 0, 0, 2])
assert pha.get_dep(filter=False) == pytest.approx(counts)
assert pha.get_dep(filter=True) == pytest.approx([6, 6, 6])
pha.ignore(4, 5)
assert len(caplog.records) == 0
mask = np.asarray([True, False, True])
mask_full = np.asarray([True] * 2 + [False] * 2 + [True])
assert pha.mask == pytest.approx(mask)
assert pha.get_mask() == pytest.approx(mask_full)
assert pha.get_filter() == '1:2,6'
assert pha.quality_filter == pytest.approx(qual_mask)
assert pha.quality == pytest.approx([0, 0, 2, 0, 0, 0, 2])
assert pha.get_dep(filter=False) == pytest.approx(counts)
assert pha.get_dep(filter=True) == pytest.approx([6, 6])
def test_pha_group_ignore_bad_then_group(caplog):
"""Regression test."""
counts = [4, 2, 3, 1, 5, 6, 7]
pha = DataPHA("ex", [1, 2, 3, 4, 5, 6, 7], counts)
pha.group_adapt(6)
pha.ignore_bad()
assert len(caplog.records) == 0
qual_mask = np.asarray([True] * 2 + [False] + [True] * 3 + [False])
assert pha.mask is True
assert pha.get_mask() == pytest.approx(qual_mask)
assert pha.quality_filter == pytest.approx(qual_mask)
# Change the grouping. What happens with the existing "bad
# quality" data?
#
pha.group_counts(4)
assert len(caplog.records) == 0
assert pha.mask is True
assert pha.get_mask() == pytest.approx(qual_mask)
assert pha.get_filter() == '1:7'
assert pha.quality_filter == pytest.approx(qual_mask)
assert pha.quality == pytest.approx([0, 2, 0, 0, 0, 0, 0])
assert pha.get_dep(filter=False) == pytest.approx(counts)
assert pha.get_dep(filter=True) == pytest.approx([4, 2, 6, 6])
# Shouldn't this be a no-op. It isn't because the group call
# didn't change the quality_filter array, so it now changes what
# are the good/bad channels.
#
pha.ignore_bad()
assert len(caplog.records) == 0
qual_mask = np.asarray([True] + [False] + [True] * 5)
assert pha.mask is True
assert pha.get_mask() == pytest.approx(qual_mask)
assert pha.get_filter() == '1:7'
assert pha.quality_filter == pytest.approx(qual_mask)
assert pha.quality == pytest.approx([0, 2, 0, 0, 0, 0, 0])
assert pha.get_dep(filter=False) == pytest.approx(counts)
assert pha.get_dep(filter=True) == pytest.approx([4, 3, 6, 6, 7])
def test_pha_filter_ignore_bad_filter(caplog):
"""A regression test.
Mix filtering, ignore_bad, and more filtering.
"""
counts = np.asarray([4, 2, 3, 1, 5, 6, 7])
pha = DataPHA("ex", [1, 2, 3, 4, 5, 6, 7], counts)
pha.ignore(lo=4, hi=4)
assert len(caplog.records) == 0
data_mask = np.asarray([True] * 3 + [False] + [True] * 3)
assert pha.mask == pytest.approx(data_mask)
assert pha.get_mask() == pytest.approx(data_mask)
assert pha.get_filter() == '1:3,5:7'
assert pha.quality_filter is None
assert pha.quality is None
assert pha.get_dep(filter=False) == pytest.approx(counts)
assert pha.get_dep(filter=True) == pytest.approx(counts[[0, 1, 2, 4, 5, 6]])
pha.group_counts(5)
assert len(caplog.records) == 0
data_mask2 = np.asarray([True] * 2 + [False] + [True] * 3)
assert pha.mask == pytest.approx(data_mask2)
assert pha.get_mask() == pytest.approx(data_mask)
assert pha.get_filter() == '1:3,5:7'
assert pha.quality_filter is None
assert pha.quality == pytest.approx([0, 0, 2, 0, 0, 0, 0])
assert pha.get_dep(filter=False) == pytest.approx(counts)
assert pha.get_dep(filter=True) == pytest.approx([6, 3, 5, 6, 7])
pha.ignore_bad()
assert len(caplog.records) == 1
r = caplog.records[-1]
assert r.name == "sherpa.astro.data"
assert r.levelname == "WARNING"
assert r.getMessage() == "filtering grouped data with quality flags, previous filters deleted"
new_mask = np.asarray([True] * 2 + [False] + [True] * 4)
assert pha.mask is True
assert pha.get_mask() == pytest.approx(new_mask)
assert pha.get_filter() == '1:7'
assert pha.quality_filter == pytest.approx(new_mask)
assert pha.quality == pytest.approx([0, 0, 2, 0, 0, 0, 0])
assert pha.get_dep(filter=False) == pytest.approx(counts)
assert pha.get_dep(filter=True) == pytest.approx([6, 1, 5, 6, 7])
pha.ignore(lo=2, hi=2)
assert len(caplog.records) == 1
mask3 = np.asarray([False] + [True] * 4)
mask3_full = np.asarray([False] * 2 + [True] * 4)
assert pha.mask == pytest.approx(mask3)
assert pha.get_mask() == pytest.approx(mask3_full)
assert pha.get_filter() == '4:7'
assert pha.quality_filter == pytest.approx(new_mask)
assert pha.quality == pytest.approx([0, 0, 2, 0, 0, 0, 0])
assert pha.get_dep(filter=False) == pytest.approx(counts)
assert pha.get_dep(filter=True) == pytest.approx([1, 5, 6, 7])
@pytest.mark.parametrize("field", ["grouping", "quality"])
def test_pha_change_xxx_non_integer_value(field, make_test_pha):
"""What happens if we send grouping/quality values that cannot be converted to an array?"""
pha = make_test_pha
invalid = [None, "x", {}, set()]
with pytest.raises(DataErr,
match="Array must be a sequence of integers or None"):
setattr(pha, field, invalid)
def test_pha_change_grouping_type(make_test_pha):
"""Check the grouping column is converted to int"""
pha = make_test_pha
grp = np.asarray([1.0, -1.0, -1.0, 1.0])
pha.grouping = grp
# Since the values are integers we can do an exact check
assert (pha.grouping == np.asarray([1, -1, -1, 1])).all()
assert pha.grouping.dtype == np.int16
def test_pha_change_quality_type(make_test_pha):
"""Check the quality column is converted to int"""
pha = make_test_pha
# technically negative numbers are allowed
qual = np.asarray([0.0, 2.0, 5.0, -1.0])
pha.quality = qual
# Since the values are integers we can do an exact check
assert (pha.quality == np.asarray([0, 2, 5, -1])).all()
assert pha.quality.dtype == np.int16
@pytest.mark.parametrize("label", ["grouping", "quality"])
def test_pha_change_grouping_rounding(label, make_test_pha):
"""What happens with non-integer values?
Unlike test_pha_change_grouping/quality_type we can more-easily
use the same input array, which makes it easier to test both
columns with the same routine. It is actually unclear what
we should do with input like this: error out, silently
truncate, or warn the user? For the moment the test assumes
silent truncation.
"""
pha = make_test_pha
vals = np.asarray([0.5, 1.5, -0.5, 0.9])
setattr(pha, label, vals)
got = getattr(pha, label)
assert (got == np.asarray([0, 1, 0, 0])).all()
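The silent truncation assumed by the test above comes from numpy's float-to-integer cast, which truncates toward zero rather than rounding; a quick sketch:

```python
import numpy as np

# Casting floats to an integer dtype truncates toward zero (no rounding),
# which is the "silent truncation" behavior the test above locks in.
vals = np.asarray([0.5, 1.5, -0.5, 0.9])
truncated = vals.astype(np.int16)
print(truncated.tolist())  # [0, 1, 0, 0]
```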
@requires_group
def test_pha_ignore_bad_group_quality(caplog):
"""Check handling of ignore_bad when quality and grouping set.
This used to be called test_416_b but has been expanded to
check a few more things. See also
test_pha_ignore_bad_quality which is meant to
be the same but with an ungrouped dataset (so the
results won't quite match).
"""
# The energy range matches the channel values to make
# things easier.
#
x = np.asarray([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
y = np.asarray([0, 0, 0, 2, 1, 1, 0, 0, 1, 0])
pha = DataPHA('416', x, y)
rmf = create_delta_rmf(x, x + 1, e_min=x, e_max=x + 1,
name='416')
pha.set_arf(rmf)
pha.set_analysis('energy')
assert pha.get_filter(format="%.1f") == "1.0:11.0"
assert pha.get_noticed_channels() == pytest.approx(np.arange(1, 11))
# No grouping or filtering yet
assert pha.get_dep(filter=False) == pytest.approx(y)
assert pha.get_dep(filter=True) == pytest.approx(y)
assert not pha.grouped
# After this we have
# - two groups, channels 1-5 and 6-10
# - the first group has quality=0, the second quality=2
# - the noticed range is channels 3-7 before grouping
# which becomes 1-11 after grouping (i.e. all points)
#
pha.notice(3.5, 6.5)
assert pha.get_filter(format="%.1f") == "3.0:7.0"
assert pha.get_noticed_channels() == pytest.approx(np.arange(3, 7))
omask = np.asarray([False] * 2 + [True] * 4 + [False] * 4)
assert pha.mask == pytest.approx(omask)
assert pha.get_mask() == pytest.approx(omask)
# Only filtering
assert pha.get_dep(filter=False) == pytest.approx(y)
assert pha.get_dep(filter=True) == pytest.approx(y[2:6])
# tabStops is not set, so uses the current mask
pha.group_counts(3)
assert pha.get_filter(format="%.1f") == "3.0:7.0"
assert pha.get_noticed_channels() == pytest.approx(np.arange(3, 7))
# Grouped and filtered
assert pha.get_dep(filter=False) == pytest.approx(y)
assert pha.get_dep(filter=True) == pytest.approx([3, 1])
mask = np.asarray([False] * 2 + [True] * 2 + [False] * 4)
assert pha.mask == pytest.approx(mask)
assert pha.get_mask() == pytest.approx(omask)
grouping = [0, 0, 1, -1, -1, 1, 0, 0, 0, 0]
assert pha.grouping == pytest.approx(grouping)
quality = [0, 0, 0, 0, 0, 2, 0, 0, 0, 0]
assert pha.quality == pytest.approx(quality)
assert pha.quality_filter is None
assert pha.grouped
# By calling ignore_bad we have
# - removed the channels with quality=2, which is
# channels 6-10
# - removed the noticed range
#
assert len(caplog.record_tuples) == 0
with caplog.at_level(logging.INFO, logger='sherpa'):
pha.ignore_bad()
# check captured log
#
emsg = 'filtering grouped data with quality flags, previous filters deleted'
assert caplog.record_tuples == [
('sherpa.astro.data', logging.WARNING, emsg)
]
assert pha.grouped
# We have reverted the energy filter, so the mask attribute
# is back to a boolean.
#
assert type(pha.mask) is bool
assert pha.mask is True
# However, get_mask reflects the quality filter, so is all True
# except for the 6th element.
#
single_bad = np.asarray([True] * 5 + [False] + [True] * 4)
assert pha.get_mask() == pytest.approx(single_bad)
# What about the quality fields?
#
assert pha.quality == pytest.approx(quality)
assert pha.quality_filter == pytest.approx(single_bad)
# Saying all that though, the filter expression does not
# know we are ignoring channels 6-10.
#
# TODO: This is likely a bug.
#
assert pha.get_filter(format="%.1f") == "1.0:11.0"
assert pha.get_noticed_channels() == pytest.approx([1, 2, 3, 4, 5, 7, 8, 9, 10])
# Grouped and quality-filtered (even though get_filter
# returns 1:11 here).
#
assert pha.get_dep(filter=False) == pytest.approx(y)
assert pha.get_dep(filter=True) == pytest.approx([0, 0, 3, 0, 0, 1, 0])
# check there have been no more messages.
#
assert len(caplog.record_tuples) == 1
@pytest.mark.parametrize("groupit", [False, True])
def test_pha_ignore_bad_quality(groupit, caplog):
"""Check handling of ignore_bad when quality set but no grouping.
See test_pha_ignore_bad_group_quality. The case when
the quality array is not set is handled earlier by
test_pha_ignore_bad_no_quality
The groupit flag is used to ensure the results are
the same if the data has no grouping data at all
(False) or has grouping but is not used (True).
"""
x = np.asarray([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
y = np.asarray([0, 0, 0, 2, 1, 1, 0, 0, 1, 0])
pha = DataPHA('416', x, y)
rmf = create_delta_rmf(x, x + 1, e_min=x, e_max=x + 1,
name='416')
pha.set_arf(rmf)
pha.set_analysis('energy')
grps = np.asarray([1, -1, -1, -1, -1] * 2)
if groupit:
pha.grouping = grps
assert not pha.grouped
assert pha.get_filter(format="%.1f") == "1.0:11.0"
assert pha.get_noticed_channels() == pytest.approx(np.arange(1, 11))
assert pha.get_dep(filter=False) == pytest.approx(y)
assert pha.get_dep(filter=True) == pytest.approx(y)
# After this we have
# - the noticed range is channels 3-7
#
pha.notice(3.5, 6.5)
assert pha.get_filter(format="%.1f") == "3.0:7.0"
assert pha.get_noticed_channels() == pytest.approx(np.arange(3, 7))
# Only filtering
assert pha.get_dep(filter=False) == pytest.approx(y)
assert pha.get_dep(filter=True) == pytest.approx(y[2:6])
mask = np.asarray([False] * 2 + [True] * 4 + [False] * 4)
assert pha.mask == pytest.approx(mask)
assert pha.get_mask() == pytest.approx(mask)
if groupit:
assert pha.grouping == pytest.approx(grps)
else:
assert pha.grouping is None
assert not pha.grouped
assert pha.quality is None
assert pha.quality_filter is None
# Now apply quality filtering without grouping. We choose
# the same quality range as test_pha_grouped_filtered_quality_warns
#
quality = [0] * 5 + [2] * 5
pha.quality = quality
assert pha.quality == pytest.approx(quality)
assert pha.quality_filter is None
# By calling ignore_bad we have
# - removed the channels with quality=2, which is
# channels 6-10
#
assert len(caplog.record_tuples) == 0
with caplog.at_level(logging.INFO, logger='sherpa'):
pha.ignore_bad()
assert not pha.grouped
# check captured log; at the moment this DOES NOT warn the
# user about the filter being removed.
#
assert len(caplog.record_tuples) == 0
# The mask changed (the channel=6 value is now filtered out).
#
mask2 = np.asarray([False] * 2 + [True] * 3 + [False] * 5)
assert pha.mask == pytest.approx(mask2)
assert pha.get_mask() == pytest.approx(mask2)
# What about the quality fields?
#
assert pha.quality == pytest.approx(quality)
assert pha.quality_filter is None
# The filter expression has changed to reflect the quality filter;
# this is unlike the grouped version above.
#
assert pha.get_filter(format="%.1f") == "3.0:6.0"
assert pha.get_noticed_channels() == pytest.approx(np.arange(3, 6))
# noticed and quality-filtered.
#
assert pha.get_dep(filter=False) == pytest.approx(y)
assert pha.get_dep(filter=True) == pytest.approx(y[2:5])
# check there have been no more messages.
#
assert len(caplog.record_tuples) == 0
@requires_group
def test_361():
"""Check issue #361
This is also tested in test_filter_bad_notice_361 in
sherpa/astro/ui/tests/test_filtering.py using the UI
interface.
"""
# energy ranges are
# 0.1-0.2, 0.2-0.3, ..., 1.0-1.1
# and when grouped we get
# 0.1-0.3, 0.3-0.5, 0.5-0.7, 0.7-0.9, 0.9-1.1
# with counts
# 12, 6, 11, 8, 3
# and then the quality array knocks out the
# 0.5-0.7 group (11 counts).
#
x = np.asarray([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
y = np.asarray([5, 7, 2, 4, 6, 5, 8, 0, 1, 2])
grp = np.asarray([1, -1] * 5)
qual = np.zeros(10)
qual[4:6] = 2
pha = DataPHA('361', x, y,
grouping=grp, quality=qual)
elo = x * 0.1
ehi = (x + 1) * 0.1
rmf = create_delta_rmf(elo, ehi, e_min=elo, e_max=ehi,
name='4361')
pha.set_arf(rmf)
pha.set_analysis('energy')
assert pha.grouped
assert pha.get_dep() == pytest.approx(y)
assert pha.get_dep(filter=True) == pytest.approx([12, 6, 11, 8, 3])
assert pha.get_noticed_channels() == pytest.approx(np.arange(1, 11))
pha.ignore_bad()
assert pha.get_dep() == pytest.approx(y)
assert pha.get_dep(filter=True) == pytest.approx([12, 6, 8, 3])
assert pha.get_noticed_channels() == pytest.approx([1, 2, 3, 4, 7, 8, 9, 10])
pha.notice(0.35, 0.8)
assert pha.get_dep() == pytest.approx(y)
assert pha.get_dep(filter=True) == pytest.approx([6, 8])
# The issue in #361 seems to come from evaluating an array
# of the expected length as created by the model. We can
# be more-direct here and check the problematic call.
#
assert pha.get_noticed_channels() == pytest.approx([3, 4, 7, 8])
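The grouping/quality bookkeeping exercised by test_361 can be mimicked with numpy alone: sum pairs of channels into groups, then drop the group flagged bad. This is a sketch of the arithmetic, not Sherpa's actual code path:

```python
import numpy as np

# Mirror the test_361 setup: pairs of channels form groups and the third
# group (channels 5-6) is flagged bad via quality=2.
counts = np.array([5, 7, 2, 4, 6, 5, 8, 0, 1, 2])
grouping = np.array([1, -1] * 5)
quality = np.zeros(10, dtype=int)
quality[4:6] = 2

starts = np.flatnonzero(grouping == 1)        # group start indices
sums = np.add.reduceat(counts, starts)        # [12, 6, 11, 8, 3]
good = np.add.reduceat(quality, starts) == 0  # groups with no bad channel
print(sums[good].tolist())                    # [12, 6, 8, 3]
```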
def test_grouped_pha_get_dep(make_grouped_pha):
"""Quality filtering and grouping is applied: get_dep
As noted in issue #1438 it's not obvious what get_y is meant to
return. It is not the same as get_dep as there's post-processing.
So just test the current behavior.
"""
pha = make_grouped_pha
# grouped counts are [3, 3, 12]
# channel widths are [3, 1, 1]
# which gives [1, 3, 12]
# but the last group is marked bad by quality,
# so we expect [1, 3]
#
assert pha.get_dep() == pytest.approx([1, 3])
def test_grouped_pha_get_y(make_grouped_pha):
"""Quality filtering and grouping is applied: get_y
As noted in issue #1438 it's not obvious what get_y is meant to
return. It is not the same as get_dep as there's post-processing.
So just test the current behavior.
"""
pha = make_grouped_pha
# grouped counts are [3, 3, 12]
# channel widths are [3, 1, 1]
# which gives [1, 3, 12]
# but the last group is marked bad by quality,
# so we expect [1, 3]
#
assert pha.get_y() == pytest.approx([1, 3])
def test_quality_pha_get_dep(make_quality_pha):
"""Regression test for quality + filtering."""
pha = make_quality_pha
# ungrouped counts no quality are [1, 2, 0, 3, 12, 2, 9, 8, 7]
# ungrouped counts with quality are [1, 3, 12, 2]
#
# grouped counts no quality are [6, 12, 2, 24]
# grouped counts with quality are [4, 12, 2]
#
all_counts = [1, 2, 0, 3, 12, 2, 9, 8, 7]
assert pha.get_dep(filter=False) == pytest.approx(all_counts)
assert pha.get_dep(filter=True) == pytest.approx([6, 12, 2, 24])
pha.ignore_bad()
assert pha.get_dep(filter=False) == pytest.approx(all_counts)
assert pha.get_dep(filter=True) == pytest.approx([4, 12, 2])
def test_quality_pha_get_y(make_quality_pha):
"""Regression test for quality + filtering."""
pha = make_quality_pha
# bin widths are
# no quality [4, 1, 1, 3]
# quality [2, 1, 1]
#
# grouped counts no quality are [6, 12, 2, 24]
# grouped counts with quality are [4, 12, 2]
#
expected1 = np.asarray([1.5, 12, 2, 8])
assert pha.get_y(filter=False) == pytest.approx(expected1)
assert pha.get_y(filter=True) == pytest.approx(expected1)
pha.ignore_bad()
# expected2 = np.asarray([2, 12, 2])
expected2 = np.asarray([1, 12, 2]) # TODO: why do we get this?
assert pha.get_y(filter=False) == pytest.approx(expected2)
assert pha.get_y(filter=True) == pytest.approx(expected2)
def test_quality_pha_get_indep(make_quality_pha):
"""Regression test for quality + filtering.
This ignores the channel settings.
"""
pha = make_quality_pha
assert pha.units == "channel"
chans = np.arange(1, 10)
for filt in [False, True]:
indep = pha.get_indep(filter=filt)
assert len(indep) == 1
assert indep[0] == pytest.approx(chans)
pha.ignore_bad()
indep = pha.get_indep(filter=False)
assert len(indep) == 1
assert indep[0] == pytest.approx(chans)
indep = pha.get_indep(filter=True)
assert len(indep) == 1
assert indep[0] == pytest.approx([1, 4, 5, 6])
def test_grouped_pha_mask(make_grouped_pha):
"""What is the default mask setting?"""
pha = make_grouped_pha
assert np.isscalar(pha.mask)
assert pha.mask is True
def test_grouped_pha_get_mask(make_grouped_pha):
"""What is the default get_mask value?"""
pha = make_grouped_pha
assert pha.get_mask() == pytest.approx(np.asarray([True] * 4 + [False]))
def test_quality_pha_mask(make_quality_pha):
"""What is the default mask setting?"""
pha = make_quality_pha
assert np.isscalar(pha.mask)
assert pha.mask is True
pha.ignore_bad()
assert np.isscalar(pha.mask)
assert pha.mask is True
def test_quality_pha_get_mask(make_quality_pha):
"""What is the default get_mask value?"""
pha = make_quality_pha
m1 = np.ones(9, dtype=bool)
assert pha.get_mask() == pytest.approx(m1)
pha.ignore_bad()
# This is a regression test
m2 = np.asarray([True] + [False] * 2 + [True] * 3 + [False] * 3)
assert pha.get_mask() == pytest.approx(m2)
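In this fixture the regression value checked above happens to coincide with a plain "quality == 0" mask over the ungrouped channels; a numpy-only sketch (not a statement about Sherpa's general behavior):

```python
import numpy as np

# In this fixture the regression value checked above happens to coincide
# with a plain "quality == 0" mask over the ungrouped channels.
quality = np.array([0, 5, 5, 0, 0, 0, 2, 2, 5])
mask = quality == 0
print(mask.tolist())
# [True, False, False, True, True, True, False, False, False]
```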
@pytest.mark.parametrize("field,expected",
[("channel", np.arange(1, 10)),
("counts", [1, 2, 0, 3, 12, 2, 9, 8, 7]),
("grouping", [1, -1, -1, -1, 1, 1, 1, -1, -1]),
("quality", [0, 5, 5, 0, 0, 0, 2, 2, 5]),
("backscal", [0.2, 99, 98, 0.4, 0.5, 0.6, 2, 3, 4]),
("areascal", 0.9)
])
def test_quality_pha_fields(field, expected, make_quality_pha):
"""Regression test: do we get the expected fields?"""
pha = make_quality_pha
# fake in a backscal array but scalar areascal
pha.areascal = 0.9
pha.backscal = [0.2, 99, 98, 0.4, 0.5, 0.6, 2, 3, 4]
pha.ignore_bad()
assert getattr(pha, field) == pytest.approx(expected)
# Should this really return "1:4" as the fifth channel has been
# excluded? At the moment check the current behavior.
#
def test_grouped_pha_get_filter(make_grouped_pha):
"""What is the default get_filter value?"""
pha = make_grouped_pha
assert pha.get_filter() == "1:5"
def test_grouped_pha_set_filter(make_grouped_pha):
"""What happens with a simple filter?"""
pha = make_grouped_pha
pha.ignore(hi=2)
assert pha.get_filter() == "4"
# What should get_dep(filter=False) return here? Should it
# include the quality=2 filtered bin (12) or not? At the
# moment it does, so we test against this behavior, but it
# might be something we want to change.
#
@pytest.mark.parametrize("filter,expected",
[(False, [1, 2, 0, 3, 12]),
(True, [3, 3])])
def test_grouped_pha_get_dep_filter_flag(filter, expected, make_grouped_pha):
"""Quality filtering and grouping is applied: get_dep (both filter settings)"""
pha = make_grouped_pha
assert pha.get_dep(filter=filter) == pytest.approx(expected)
@pytest.mark.parametrize("filter,expected",
[(False, [1, 2, 0, 3, 12]),
(True, [3])])
def test_grouped_pha_filter_get_dep(filter, expected, make_grouped_pha):
"""What happens after a simple filter?
We do this because the behavior of test_grouped_pha_get_filter
and test_grouped_pha_get_dep has been unclear.
"""
pha = make_grouped_pha
pha.ignore(hi=2)
assert pha.get_dep(filter=filter) == pytest.approx(expected)
def setup_pha_quality_example():
"""A PHA dataset for some tests."""
chans = np.arange(1, 11, dtype=np.int16)
pha = DataPHA("x", chans, chans)
pha.grouping = [1, -1, -1, 1, 1, 1, -1, 1, 1, 1]
pha.quality = [0, 5, 0, 0, 0, 2, 2, 0, 0, 5]
pha.exposure = 5.0
elo = chans * 0.1
ehi = elo + 0.1
rmf = create_delta_rmf(elo, ehi, e_min=elo, e_max=ehi)
pha.set_rmf(rmf)
# The groups are irrelevant to the model evaluation. We do
# care about the energy bins
#
# channel 1 2 3 4 5 6 7 8 9 10
# elo 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
# ehi 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1
# bad? * ? ? *
#
# where
# * indicates a "very bad" bin (qual=1 or 5)
# ? indicates a "slightly bad" bin (qual=2)
#
# The groups are complicated for the first group, since it
# contains a bad channel.
#
# group 1 2 3 4 5 6 7
# channels 1,2*,3 4 5 6?-7? 8 9 10
#
# As of 4.17.0 there's no difference in handling the
# "severity" of the quality setting.
#
pha.set_analysis("energy")
return pha, elo, ehi
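The helper's delta response means channel i covers the energy bin [0.1 * i, 0.1 * (i + 1)] keV, so a notice/ignore range maps to channels via a simple overlap test. A numpy-only sketch (illustration only; Sherpa's filtering is more involved):

```python
import numpy as np

# Channel i covers [0.1 * i, 0.1 * (i + 1)] keV in the helper above, so an
# energy range touches exactly the channels whose bin overlaps it.
chans = np.arange(1, 11)
elo = chans * 0.1
ehi = elo + 0.1

lo, hi = 0.6, 0.8
touched = chans[(ehi > lo) & (elo < hi)]
print(touched.tolist())  # [6, 7]
```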
def test_pha_get_indep_grouped_quality():
"""A regression test.
This is to check the behavior with ignore_bad.
See also test_pha_eval_model_to_fit_grouped_quality
"""
pha, _, _ = setup_pha_quality_example()
expected = np.arange(1, 11, dtype=np.int16)
# Checks: no filtering
#
i1, = pha.get_indep(filter=False)
i2, = pha.get_indep(filter=True)
assert i1 == pytest.approx(expected)
assert i2 == pytest.approx(expected)
# Checks: what does the bad filter do?
#
pha.ignore_bad()
expected1 = expected[[0, 2, 3, 4, 7, 8]]
i1, = pha.get_indep(filter=False)
i2, = pha.get_indep(filter=True)
assert i1 == pytest.approx(expected)
assert i2 == pytest.approx(expected1)
# This range is in the "bad quality" region so it should not
# change the result.
#
pha.ignore(0.6, 0.8)
i1, = pha.get_indep(filter=False)
i2, = pha.get_indep(filter=True)
assert i1 == pytest.approx(expected)
assert i2 == pytest.approx(expected1)
# Subset the data
#
pha.ignore(hi=0.25) # this is a bad-quality bin
pha.ignore(lo=0.95)
expected2 = expected[[2, 3, 4, 7]]
i1, = pha.get_indep(filter=False)
i2, = pha.get_indep(filter=True)
assert i1 == pytest.approx(expected)
assert i2 == pytest.approx(expected2)
# Re-allow the low-energy range, but it should
# still drop the 0.2-0.3 keV bin. It does not.
#
pha.notice(hi=0.4)
expected3 = expected[[0, 1, 2, 3, 4, 7]]
i1, = pha.get_indep(filter=False)
i2, = pha.get_indep(filter=True)
assert i1 == pytest.approx(expected)
assert i2 == pytest.approx(expected3)
# Drop all filters. It should not drop the quality filter, but it
# currently does.
#
pha.notice()
i1, = pha.get_indep(filter=False)
i2, = pha.get_indep(filter=True)
assert i1 == pytest.approx(expected)
assert i2 == pytest.approx(expected)
def test_pha_eval_model_to_fit_grouped_quality():
"""A regression test.
This is to check the behavior with ignore_bad.
See also test_pha_get_indep_grouped_quality
"""
pha, elo, ehi = setup_pha_quality_example()
mdl = Polynom1D()
mdl.c0 = 0.1
mdl.c1 = 1.2
# Need to scale by the exposure time for the expected values.
#
expected = 5 * mdl(elo, ehi)
assert (expected > 0).all() # just check
resp = pha.get_full_response()
full_mdl = resp(mdl)
# Checks: no filtering
#
m1 = pha.eval_model(full_mdl)
m2 = pha.eval_model_to_fit(full_mdl)
assert m1 == pytest.approx(expected)
assert m2 == pytest.approx(expected)
# Checks: what does the bad filter do?
#
pha.ignore_bad()
expected1 = expected[[0, 2, 3, 4, 7, 8]]
m1 = pha.eval_model(full_mdl)
m2 = pha.eval_model_to_fit(full_mdl)
assert m1 == pytest.approx(expected)
assert m2 == pytest.approx(expected1)
# This range is in the "bad quality" region so it should not
# change the result.
#
pha.ignore(0.6, 0.8)
m1 = pha.eval_model(full_mdl)
m2 = pha.eval_model_to_fit(full_mdl)
assert m1 == pytest.approx(expected)
assert m2 == pytest.approx(expected1)
# Subset the data
#
pha.ignore(hi=0.25) # this is a bad-quality bin
pha.ignore(lo=0.95)
expected2 = expected[[2, 3, 4, 7]]
m1 = pha.eval_model(full_mdl)
m2 = pha.eval_model_to_fit(full_mdl)
assert m1 == pytest.approx(expected)
assert m2 == pytest.approx(expected2)
# Re-allow the low-energy range, but it should
# still drop the 0.2-0.3 keV bin. It does not.
#
pha.notice(hi=0.4)
expected3 = expected[[0, 1, 2, 3, 4, 7]]
m1 = pha.eval_model(full_mdl)
m2 = pha.eval_model_to_fit(full_mdl)
assert m1 == pytest.approx(expected)
assert m2 == pytest.approx(expected3)
# Drop all filters. It should not drop the quality filter, but it
# currently does.
#
pha.notice()
m1 = pha.eval_model(full_mdl)
m2 = pha.eval_model_to_fit(full_mdl)
assert m1 == pytest.approx(expected)
assert m2 == pytest.approx(expected)
def test_grouped_pha_set_y_invalid_size(make_grouped_pha):
"""What happens if change a grouped PHA counts/y setting?
See also test_grouped_pha_set_related_invalid_size which
is essentially the same but for other fields.
"""
pha = make_grouped_pha
# Pick an array that matches the grouped/filtered data size.
# This is "not actionable" as we can't work out how to change
# the counts channels, so it should error.
#
with pytest.raises(DataErr,
match="size mismatch between independent axis and y: 5 vs 2"):
pha.set_dep([2, 3])
@pytest.mark.parametrize("related", ["staterror", "syserror",
"y", "counts",
"backscal", "areascal",
"grouping", "quality"])
def test_grouped_pha_set_related_invalid_size(related, make_grouped_pha):
"""Can we set the value to a 2-element array?"""
pha = make_grouped_pha
# Pick an array that matches the grouped/filtered data size.
# This is "not actionable" as we can't work out how to change
# the counts channels, so it should error.
#
# Handle y/counts alias here
related = "y" if related == "counts" else related
emsg = f"size mismatch between independent axis and {related}: 5 vs 2"
with pytest.raises(DataErr,
match=emsg):
setattr(pha, related, [2, 3])
@pytest.mark.parametrize("column", ["staterror", "syserror",
"y", "counts",
"backscal", "areascal",
"grouping", "quality"])
def test_pha_check_related_fields_correct_size(column):
"""Setting a related field first fixes the size of the independent axis."""
d = DataPHA('example', None, None)
setattr(d, column, np.asarray([2, 10, 3]))
with pytest.raises(DataErr,
match="independent axis can not change size: 3 to 4"):
d.indep = (np.asarray([2, 3, 4, 5]), )
@pytest.mark.parametrize("label", ["filter", "grouping"])
def test_pha_no_group_apply_xxx_invalid_size(label, make_test_pha):
"""Check apply_filter/grouping tests the data length: no quality/group
Issue #1439 points out that quality handling creates different results.
"""
pha = make_test_pha
func = getattr(pha, f"apply_{label}")
elabel = "filtered data" if label == "filter" else "data"
msg = f"size mismatch between {elabel} and array: 4 vs 2"
with pytest.raises(DataErr,
match=f"^{msg}$"):
func([1, 2])
@pytest.mark.parametrize("vals", [[1], [1, 2, 3, 4, 5, 6, 7, 8]])
def test_pha_no_group_filtered_apply_filter_invalid_size(vals, make_test_pha):
"""Check apply_filter tests the data length: no quality/group, filtered
This behaves differently to the apply_grouping case
"""
pha = make_test_pha
pha.ignore(hi=2)
# safety check to make sure we've excluded points
assert not np.all(pha.mask)
assert np.any(pha.mask)
with pytest.raises(DataErr,
match="^size mismatch between filtered data and array: 2 vs [18]$"):
pha.apply_filter(vals)
@pytest.mark.parametrize("vals", [[1], [1, 2, 3, 4, 5, 6, 7, 8]])
def test_pha_no_group_filtered_apply_grouping_invalid_size(vals, make_test_pha):
"""Check apply_grouping tests the data length: no quality/group, filtered
This behaves differently to the apply_filter case
"""
pha = make_test_pha
pha.ignore(hi=2)
with pytest.raises(DataErr,
match="^size mismatch between data and array: 4 vs [18]$"):
pha.apply_grouping(vals)
@pytest.mark.parametrize("label", ["filter", "grouping"])
@pytest.mark.parametrize("vals", [[1], (2, 3), [1, 2, 3, 4, 5, 6, 7, 8]])
def test_pha_zero_quality_apply_xxx_invalid_size(label, vals, make_test_pha):
"""Check apply_filter/grouping tests the data length: quality set to 0
    We cannot use make_grouped_pha and then set the quality array to
    0's, as that does not remove the quality setting (see issue #1427),
    so we replicate most of make_grouped_pha with make_test_pha.
"""
pha = make_test_pha
pha.grouping = [1, -1, -1, 1]
pha.quality = [0] * 4
pha.group()
func = getattr(pha, f"apply_{label}")
elabel = "filtered data" if label == "filter" else "data"
msg = f"size mismatch between {elabel} and array: 4 vs [128]"
with pytest.raises(DataErr,
match=f"^{msg}$"):
func(vals)
@pytest.mark.parametrize("vals", [(2, 3), [1, 2, 3, 4, 5, 6, 7, 8]])
def test_pha_zero_quality_filtered_apply_filter_invalid_size(vals, make_test_pha):
"""Check apply_filter tests the data length: quality set to 0, filtered"""
pha = make_test_pha
pha.grouping = [1, -1, -1, 1]
pha.quality = [0] * 4
pha.group()
pha.ignore(hi=2)
# safety check to make sure we've excluded points
assert not np.all(pha.mask)
assert np.any(pha.mask)
with pytest.raises(DataErr,
match="^size mismatch between filtered data and array: 1 vs [28]$"):
pha.apply_filter(vals)
@pytest.mark.parametrize("vals", [[1], (2, 3), [1, 2, 3, 4, 5, 6, 7, 8]])
def test_pha_zero_quality_filtered_apply_grouping_invalid_size(vals, make_test_pha):
"""Check apply_grouping tests the data length: quality set to 0, filtered"""
pha = make_test_pha
pha.grouping = [1, -1, -1, 1]
pha.quality = [0] * 4
pha.group()
pha.ignore(hi=2)
with pytest.raises(DataErr,
match="^size mismatch between data and array: 4 vs [128]$"):
pha.apply_grouping(vals)
@pytest.mark.parametrize("vals", [(2, 3), [1, 2, 3, 4, 5, 6, 7, 8]])
def test_pha_quality_apply_filter_invalid_size(vals, make_grouped_pha):
"""Check apply_filter tests the data length: with quality set"""
pha = make_grouped_pha
with pytest.raises(DataErr,
match="^size mismatch between filtered data and array: 4 vs [28]$"):
pha.apply_filter(vals)
@pytest.mark.parametrize("vals", [(2, 3), [1, 2, 3, 4, 5, 6, 7, 8]])
def test_pha_quality_filtered_apply_filter_invalid_size(vals, make_grouped_pha):
"""Check apply_filter tests the data length: with quality set, filtered"""
pha = make_grouped_pha
pha.ignore(hi=1)
# safety check to make sure we've excluded points
m1 = np.asarray([False, True])
m2 = np.asarray([False, False, False, True])
assert pha.mask == pytest.approx(m1)
assert pha.get_mask() == pytest.approx(m2)
with pytest.raises(DataErr,
match="^size mismatch between filtered data and array: 1 vs [28]$"):
pha.apply_filter(vals)
@pytest.mark.parametrize("vals", [pytest.param([42], marks=pytest.mark.xfail), [10, 20, 35, 42, 55]])
def test_pha_quality_filtered_apply_filter_match_filter(vals, make_grouped_pha):
"""What happens if the array has the correct size?"""
pha = make_grouped_pha
pha.ignore(hi=1)
# XFAIL: when the array matches the filtered data there's a problem
# matching to the ungrouped data.
got = pha.apply_filter(vals)
assert got == pytest.approx([42])
@pytest.mark.parametrize("vals", [[1], (2, 3), [1, 2, 3, 4, 5, 6, 7, 8]])
def test_pha_quality_apply_grouping_invalid_size(vals, make_grouped_pha):
"""Check apply_grouping tests the data length: with quality set"""
pha = make_grouped_pha
with pytest.raises(DataErr,
match="^size mismatch between data and array: 5 vs [128]$"):
pha.apply_grouping(vals)
@pytest.mark.parametrize("vals", [[1], (2, 3), [1, 2, 3, 4, 5, 6, 7, 8]])
def test_pha_quality_filtered_apply_grouping_invalid_size(vals, make_grouped_pha):
"""Check apply_grouping tests the data length: with quality set, filtered"""
pha = make_grouped_pha
pha.ignore(hi=1)
with pytest.raises(DataErr,
match="^size mismatch between data and array: 5 vs [128]+$"):
pha.apply_grouping(vals)
def test_pha_quality_apply_grouping_size_matches_detchans(make_grouped_pha):
"""Regression test: send in DETCHANS values to group.
Is this an error or not? There's arguments for both, so test what we do.
"""
pha = make_grouped_pha
pha.ignore(hi=1)
got = pha.apply_grouping([10, 12, 2, 4, 84])
assert got == pytest.approx([24, 4])
def test_pha_quality_apply_grouping_size_matches_quality(make_grouped_pha):
"""Regression test: send in"good" values to group.
Is this an error or not? There's arguments for both, so test what we do.
"""
pha = make_grouped_pha
pha.ignore(hi=1)
with pytest.raises(DataErr,
match="^size mismatch between data and array: 5 vs 4$"):
pha.apply_grouping([10, 12, 2, 4])
def test_pha_apply_filter_check():
"""Check that apply_filter works as expected.
We go through a number of stages - e.g.
- no filter or group
- only group
- group and filter
"""
chans = np.arange(1, 21)
counts = np.ones(20)
data = DataPHA("ex", chans, counts)
all_vals = np.arange(1, 21)
filt_vals = np.arange(5, 17)
expected = np.arange(1, 21)
got = data.apply_filter(all_vals, groupfunc=np.sum)
assert got == pytest.approx(expected)
grouping = np.asarray([1, -1] * 10)
data.grouping = grouping
data.quality = [0] * 20
assert not data.grouped
data.group()
assert data.grouped
expected = np.asarray([3, 7, 11, 15, 19, 23, 27, 31, 35, 39])
got = data.apply_filter(all_vals, groupfunc=np.sum)
assert got == pytest.approx(expected)
# This ignores the first two groups, channels 1-2 and 3-4,
# and the last two groups, channels 17-18 and 19-20.
# Note that channel 17 is ignored even though not explicitly
# asked because of the use of ignore.
#
data.ignore(hi=4)
data.ignore(lo=18)
expected = np.asarray([11, 15, 19, 23, 27, 31])
got = data.apply_filter(all_vals, groupfunc=np.sum)
assert got == pytest.approx(expected)
# Now the data has been filtered we can check what happens when
# the input argument has less channels in.
#
got = data.apply_filter(filt_vals, groupfunc=np.sum)
assert got == pytest.approx(expected)
# Remove the grouping
#
data.ungroup()
assert not data.grouped
# Note that we still send in vals=arange(5, 17)
#
expected = filt_vals.copy()
got = data.apply_filter(filt_vals, groupfunc=np.sum)
assert got == pytest.approx(expected)
# We can send in the full array too
got = data.apply_filter(all_vals, groupfunc=np.sum)
assert got == pytest.approx(expected)
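A minimal sketch (not part of the original suite) of what the grouped sums in the test above compute: with grouping flags `[1, -1] * 10`, `apply_filter(..., groupfunc=np.sum)` reduces to summing each pair of channels, which plain NumPy reproduces with `np.add.reduceat` over the group-start indices.

```python
import numpy as np

# Grouping flags: 1 starts a group, -1 continues it (here, 10 pairs).
grouping = np.asarray([1, -1] * 10)
vals = np.arange(1, 21)

# Each group sums the channels from one start index up to the next.
starts = np.where(grouping == 1)[0]
summed = np.add.reduceat(vals, starts)
assert (summed == np.asarray([3, 7, 11, 15, 19, 23, 27, 31, 35, 39])).all()
```

The resulting array matches the `expected` values asserted in the grouped (unfiltered) branch of the test above.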
@pytest.mark.parametrize("backscal", [0, -2.1e-10, -12])
def test_datapha_negative_scale_check(backscal):
"""Check of < 0 condition in DataPHA._check_scale.
The current test relies on us not restricting the
backscal value to > 0. Perhaps we should do that and
then we may be able to remove the code being tested
in _check_scale.
"""
# Create a PHA with no ARF or RMF
channels = np.arange(1, 5, dtype=np.int16)
counts = np.asarray([10, 5, 12, 7], dtype=np.int16)
pha = DataPHA('test-pha', channel=channels, counts=counts,
exposure=10.0, backscal=0.2)
assert pha.get_backscal() == pytest.approx(0.2)
pha.backscal = backscal
assert pha.get_backscal() == pytest.approx(1.0)
def test_datapha_apply_grouping_quality_filter_length_check():
"""Check we get an error for this case."""
channels = np.arange(1, 5, dtype=np.int16)
counts = np.asarray([10, 5, 12, 7], dtype=np.int16)
grouping = np.asarray([1, -1, 1, 1])
pha = DataPHA('test-pha', channel=channels, counts=counts,
grouping=grouping)
assert pha.grouped
with pytest.raises(DataErr,
match="size mismatch between independent axis and quality_filter: 4 vs 5"):
pha.quality_filter = np.asarray([1, 1, 1, 1, 1], dtype=bool)
def test_datapha_apply_grouping_quality_filter_scalar():
"""A regression test.
Check we get an error for this case.
"""
channels = np.arange(1, 5, dtype=np.int16)
counts = np.asarray([10, 5, 12, 7], dtype=np.int16)
grouping = np.asarray([1, -1, 1, 1])
pha = DataPHA('test-pha', channel=channels, counts=counts,
grouping=grouping)
assert pha.grouped
# TODO: The error message here is not ideal.
#
with pytest.raises(DataErr,
match="^Array must be a sequence or None$"):
pha.quality_filter = True
@requires_fits
@requires_data
def test_xmmrgs_notice(make_data_path):
"""Test that notice and ignore works on XMMRGS dataset, which is
ordered in increasing wavelength, not energy"""
from sherpa.astro.io import read_pha, read_rmf
dat = read_pha(make_data_path('xmmrgs/P0112880201R1S004SRSPEC1003.FTZ'))
rmf = read_rmf(make_data_path('xmmrgs/P0112880201R1S004RSPMAT1003.FTZ'))
dat.set_rmf(rmf)
dat.units = 'wave'
dat.notice(18.8, 19.2)
assert len(dat.get_dep(filter=True)) == 41
assert dat.get_filter(format='%.2f') == '18.80:19.21'
dat.ignore(10, 19.)
assert len(dat.get_dep(filter=True)) == 20
assert dat.get_filter(format='%.2f') == '19.01:19.21'
def test_pickle_image_filter_none(make_test_image):
"""Check we can pickle/unpickle without a region filter.
This test assumes we have region support, but we do not
currently have any test builds without it so do not
bother skipping.
"""
d = make_test_image
assert d._region is None
d2 = pickle.loads(pickle.dumps(d))
assert d2._region is None
@requires_region
@pytest.mark.parametrize("ignore,region,expected",
[(False, 'circle(4255, 3840, 20)', 'Circle(4255,3840,20)'),
(True, 'circle(4255, 3840, 20)', 'Field()&!Circle(4255,3840,20)'),
(False, 'circle(4255, 3840, 20) - field()', 'Circle(4255,3840,20)&!Field()'),
(True, 'circle(4255, 3840, 20) - field()', 'Field()&!Circle(4255,3840,20)|Field()'),
])
def test_pickle_image_filter(ignore, region, expected, make_test_image):
"""Check we can pickle/unpickle with a region filter."""
from sherpa.astro.utils._region import Region
d = make_test_image
d.notice2d(region, ignore=ignore)
assert isinstance(d._region, Region)
assert str(d._region) == expected
d2 = pickle.loads(pickle.dumps(d))
assert isinstance(d2._region, Region)
assert str(d2._region) == expected
def test_img_sky_create(make_test_image_sky):
d = make_test_image_sky
assert d.sky is not None
assert d.eqpos is None
def test_img_world_create(make_test_image_world):
d = make_test_image_world
assert d.sky is None
assert d.eqpos is not None
def test_img_sky_show(make_test_image_sky):
d = make_test_image_sky
out = str(d).split("\n")
assert out[0] == "name = sky-ey"
assert out[1] == "x0 = Int64[6]"
assert out[2] == "x1 = Int64[6]"
assert out[3] == "y = Float64[6]"
assert out[4] == "shape = (2, 3)"
assert out[5] == "staterror = None"
assert out[6] == "syserror = None"
assert out[7] == "sky = physical"
assert out[8] == " crval = [ 2000.5,-5000.5]"
assert out[9] == " crpix = [-2., 3.]"
assert out[10] == " cdelt = [2.,4.]"
assert out[11] == "eqpos = None"
assert out[12] == "coord = logical"
assert len(out) == 13
def test_img_world_show(make_test_image_world):
d = make_test_image_world
out = str(d).split("\n")
assert out[0] == "name = world-ey"
assert out[1] == "x0 = Int64[6]"
assert out[2] == "x1 = Int64[6]"
assert out[3] == "y = Float64[6]"
assert out[4] == "shape = (2, 3)"
assert out[5] == "staterror = None"
assert out[6] == "syserror = None"
assert out[7] == "sky = None"
assert out[8] == "eqpos = world"
assert out[9] == " crval = [30.,10.]"
assert out[10] == " crpix = [2.,2.]"
assert out[11] == " cdelt = [-0.1, 0.1]"
assert out[12] == " crota = 0"
assert out[13] == " epoch = 2000"
assert out[14] == " equinox = 2000"
assert out[15] == "coord = logical"
assert len(out) == 16
@requires_wcs
def test_img_sky_pickle(make_test_image_sky):
"""Very basic test of pickling"""
d = make_test_image_sky
d.set_coord("physical")
d2 = pickle.loads(pickle.dumps(d))
assert d2.coord == "physical"
assert d2.eqpos is None
# We don't have an easy way to check for WCS equivalence
# so just rely on string representation.
#
assert str(d2.sky) == str(d.sky)
# check the independent axes are converted
assert (d2.x0 == d.x0).all()
assert (d2.x1 == d.x1).all()
@requires_wcs
def test_img_world_pickle(make_test_image_world):
"""Very basic test of pickling"""
d = make_test_image_world
d.set_coord("wcs")
d2 = pickle.loads(pickle.dumps(d))
assert d2.coord == "world"
assert d2.sky is None
# We don't have an easy way to check for WCS equivalence
# so just rely on string representation.
#
    assert str(d2.eqpos) == str(d.eqpos)
# check the independent axes are converted
assert (d2.x0 == d.x0).all()
assert (d2.x1 == d.x1).all()
@requires_wcs # not for all, but easiest this way
@pytest.mark.parametrize("path", [[],
["logical"],
["physical", "logical", "physical", "logical", "physical", "logical"]])
def test_img_sky_logical(path, make_test_image_sky):
"""The logical axes are as expected. Inspired by issue 1380."""
d = make_test_image_sky
for coord in path:
d.set_coord(coord)
x1, x0 = np.mgrid[1:3, 1:4]
assert (d.x0 == x0.flatten()).all()
assert (d.x1 == x1.flatten()).all()
@requires_wcs # not for all, but easiest this way
@pytest.mark.parametrize("path", [[],
["logical"],
["world", "logical", "world", "logical", "world", "logical"]])
def test_img_world_logical(path, make_test_image_world):
"""The logical axes are as expected. Inspired by issue 1380."""
d = make_test_image_world
for coord in path:
d.set_coord(coord)
x1, x0 = np.mgrid[1:3, 1:4]
assert (d.x0 == x0.flatten()).all()
assert (d.x1 == x1.flatten()).all()
@requires_wcs
@pytest.mark.parametrize("path", [[],
["physical"],
["logical", "physical", "logical", "physical", "logical"]])
def test_img_sky_physical(path, make_test_image_sky):
"""The physical axes are as expected. Inspired by issue 1380."""
d = make_test_image_sky
for coord in path:
d.set_coord(coord)
d.set_coord("physical")
x1, x0 = np.mgrid[1:3, 1:4]
x0 = (x0 + 2.0) * 2.0 + 2000.5
x1 = (x1 - 3.0) * 4.0 - 5000.5
assert (d.x0 == x0.flatten()).all()
assert (d.x1 == x1.flatten()).all()
def test_img_world_physical(make_test_image_world):
"""The physical axes are not defined."""
d = make_test_image_world
with pytest.raises(DataErr,
match="data set 'world-ey' does not contain a physical coordinate system"):
d.set_coord("physical")
def test_img_sky_world(make_test_image_sky):
"""The world axes are not defined."""
d = make_test_image_sky
with pytest.raises(DataErr,
match="data set 'sky-ey' does not contain a world coordinate system"):
d.set_coord("world")
@requires_wcs
@pytest.mark.parametrize("path", [[],
["logical"],
["world", "logical", "world", "logical", "world", "logical"]])
def test_img_world_world(path, make_test_image_world):
"""The world axes are as expected. Inspired by issue 1380."""
d = make_test_image_world
for coord in path:
d.set_coord(coord)
d.set_coord("world")
assert d.x0 == pytest.approx(WORLD_X0)
assert d.x1 == pytest.approx(WORLD_X1)
@requires_wcs
@pytest.mark.parametrize("path", [[],
["physical"],
["logical", "physical", "logical", "physical"]])
def test_img_sky_get_logical(path, make_test_image_sky):
"""Check get_logical works"""
d = make_test_image_sky
for coord in path:
d.set_coord(coord)
x1, x0 = np.mgrid[1:3, 1:4]
a, b = d.get_logical()
assert (a == x0.flatten()).all()
assert (b == x1.flatten()).all()
@requires_wcs
@pytest.mark.parametrize("path", [[],
["world"],
["logical", "world", "logical", "world"]])
def test_img_world_get_logical(path, make_test_image_world):
"""Check get_logical works"""
d = make_test_image_world
for coord in path:
d.set_coord(coord)
x1, x0 = np.mgrid[1:3, 1:4]
a, b = d.get_logical()
assert (a == x0.flatten()).all()
assert (b == x1.flatten()).all()
@requires_wcs
@pytest.mark.parametrize("path", [[],
["logical"],
["physical", "logical", "physical", "logical", "physical", "logical"]])
def test_img_sky_get_physical(path, make_test_image_sky):
"""Check get_physical works"""
d = make_test_image_sky
for coord in path:
d.set_coord(coord)
x1, x0 = np.mgrid[1:3, 1:4]
x0 = (x0 + 2.0) * 2.0 + 2000.5
x1 = (x1 - 3.0) * 4.0 - 5000.5
a, b = d.get_physical()
assert (a == x0.flatten()).all()
assert (b == x1.flatten()).all()
def test_img_world_get_physical(make_test_image_world):
"""Check get_physical errors out"""
d = make_test_image_world
with pytest.raises(DataErr,
match="data set 'world-ey' does not contain a physical coordinate system"):
d.get_physical()
def test_img_sky_get_world(make_test_image_sky):
"""Check get_world errors out"""
d = make_test_image_sky
with pytest.raises(DataErr,
match="data set 'sky-ey' does not contain a world coordinate system"):
d.get_world()
@requires_wcs
@pytest.mark.parametrize("path", [[],
["logical"],
["world", "logical", "world", "logical"]])
def test_img_world_get_world(path, make_test_image_world):
"""Check get_world works"""
d = make_test_image_world
for coord in path:
d.set_coord(coord)
a, b = d.get_world()
assert a == pytest.approx(WORLD_X0)
assert b == pytest.approx(WORLD_X1)
@requires_wcs
@requires_region
def test_img_sky_can_filter(make_test_image_sky):
"""Check we can filter the image using physical coordinates"""
data = make_test_image_sky
assert data.coord == "logical"
assert data.mask
assert data.get_filter() == ""
data.set_coord("physical")
assert data.coord == "physical"
assert data.mask
assert data.get_filter() == ""
data.notice2d("rect(2009,-5006,2011,-5000)", ignore=True)
assert data.mask == pytest.approx([1, 1, 1, 1, 1, 0])
assert data.get_filter() == "Field()&!Rectangle(2009,-5006,2011,-5000)"
@requires_wcs
@requires_region
def test_img_sky_can_filter_change_coords(make_test_image_sky, caplog):
"""What happens to a filter after changing coordinates?
This is a regression test.
"""
data = make_test_image_sky
data.set_coord("physical")
data.notice2d("rect(2009,-5006,2011,-5000)", ignore=True)
assert len(caplog.records) == 0
with caplog.at_level(logging.INFO, logger='sherpa'):
data.set_coord("image")
assert len(caplog.records) == 1
assert data.coord == "logical"
assert data.mask
assert data.get_filter() == ""
r = caplog.record_tuples[0]
assert r[0] == "sherpa.astro.data"
assert r[1] == logging.WARN
assert r[2] == "Region filter has been removed from 'sky-ey'"
def test_arf_checks_energy_length():
"""Just check we error out"""
elo = np.arange(1, 5)
ehi = np.arange(2, 9)
dummy = []
with pytest.raises(ValueError,
match="The energy arrays must have the same size, not 4 and 7"):
DataARF("dummy", elo, ehi, dummy)
def test_rmf_checks_energy_length():
"""Just check we error out"""
elo = np.arange(1, 5)
ehi = np.arange(2, 9)
dummy = []
with pytest.raises(ValueError,
match="The energy arrays must have the same size, not 4 and 7"):
DataRMF("dummy", 1024, elo, ehi, dummy, dummy, dummy, dummy)
@pytest.mark.parametrize("startchan,na,nb,nc",
[(0, 0, 2, 17), # channel 0 is not defined, what should it do?
(1, 0, 3, 16),
(2, 0, 4, 15),
(3, 1, 4, 14),
(4, 2, 4, 13),
(9, 7, 4, 8),
(12, 10, 4, 5),
(16, 14, 4, 1),
(17, 15, 4, 0),
(18, 16, 3, 0), # channel 20 is not defined, what should it do?
(19, 17, 2, 0), # channel 20,21 is not defined, what should it do?
(20, 18, 1, 0), # channel 20,21,22 is not defined, what should it do?
(21, 19, 0, 0), # channel 21,22,23 is not defined, what should it do?
])
def test_rmf_simple_filter_check(startchan, na, nb, nc):
"""Check we behave as expected for a very simple filter.
nc is not needed since it's 19 - na - nb, but specify it
This assumes offset=1.
It is not at all clear why the filter selects 4 channels once away
from the edges. Is it a small bug in the code?
"""
egrid = np.arange(0.1, 2.1, 0.1)
elo = egrid[:-1]
ehi = egrid[1:]
rmf = create_delta_rmf(elo, ehi, e_min=elo, e_max=ehi)
# This is a "perfect" response.
#
mvals = np.linspace(1, 19, 19)
assert rmf.apply_rmf(mvals) == pytest.approx(mvals)
selected = rmf.notice([startchan, startchan + 1, startchan + 2])
expected = np.asarray([False] * na + [True] * nb + [False] * nc)
assert selected == pytest.approx(expected)
# Drop everything but the selected values.
#
expected = mvals.copy()
expected[~selected] = 0
assert rmf.apply_rmf(mvals[selected]) == pytest.approx(expected)
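As an illustrative aside (plain NumPy, not from the original suite), the "drop everything but the selected values" step used above is simple boolean masking:

```python
import numpy as np

# A hypothetical 5-element model with only channels 2-4 selected.
mvals = np.linspace(1, 5, 5)
selected = np.asarray([False, True, True, True, False])

# Zero out the unselected entries, as the expected array is built above.
expected = mvals.copy()
expected[~selected] = 0
assert (expected == np.asarray([0, 2, 3, 4, 0])).all()
```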
def test_rmf_complains_about_filter_mismatch():
"""Check we error out if, after filtering, we are given the wrong size."""
egrid = np.arange(0.1, 2.1, 0.1)
elo = egrid[:-1]
ehi = egrid[1:]
rmf = create_delta_rmf(elo, ehi, e_min=elo, e_max=ehi)
rmf.notice([9, 10, 11])
msg = "Mismatched filter between ARF and RMF or PHA and RMF"
with pytest.raises(TypeError, match=f"^{msg}$"):
rmf.apply_rmf([1] * 19)
def test_rmf_invalid_offset():
"""Just check we error out"""
elo = np.arange(1, 5)
ehi = elo + 1
dummy = []
with pytest.raises(ValueError,
match="offset must be >=0, not -1"):
DataRMF("dummy", 1024, elo, ehi, dummy, dummy, dummy, dummy, offset=-1)
def test_rmf_invalid_offset_not_integer():
"""Just check we error out"""
elo = np.arange(1, 5)
ehi = elo + 1
dummy = []
with pytest.raises(ValueError,
match="^offset must be an integer, not 0.5$"):
DataRMF("dummy", 1024, elo, ehi, dummy, dummy, dummy, dummy, offset=0.5)
@pytest.mark.parametrize("offset", [0, 1, 2, 5])
def test_rmf_offset_check_square(offset, caplog):
"""What happens if offset is set?
This uses a blurry / made-up RMF but with a 1 to 1 mapping of
channel and energy grids.
"""
# The channel range is offset, offset + 1, offset + 2, offset + 3.
# The response is not "perfect".
#
elo = np.asarray([1.1, 1.2, 1.4, 1.5])
ehi = np.asarray([1.2, 1.4, 1.5, 2.0])
rmf = DataRMF("dummy", 4, elo, ehi,
n_grp=np.asarray([1, 1, 1, 1]),
f_chan=np.asarray([offset, offset, offset + 1, offset + 2]),
n_chan=np.asarray([1, 2, 2, 2]),
matrix=np.asarray([1.0, 0.6, 0.4, 0.5, 0.5, 0.2, 0.8]),
e_min=elo, e_max=ehi, offset=offset)
assert len(caplog.record_tuples) == 0
# The model, evaluated on the bins, returns
# mdl.c0 * bin-width
# So, for this situation
# [0.2, 0.4, 0.2, 1.0]
#
mdl = Const1D()
mdl.c0 = 2
mvals = mdl(elo, ehi)
assert mvals == pytest.approx([0.2, 0.4, 0.2, 1.0])
    # The RMF is slightly blurry:
# - [1.0, 0.0, 0.0, 0.0]
# - [0.6, 0.4, 0.0, 0.0]
# - [0.0, 0.5, 0.5, 0.0]
# - [0.0, 0.0, 0.2, 0.8]
#
expected = [1.0 * 0.2 + 0.6 * 0.4,
0.4 * 0.4 + 0.5 * 0.2,
0.5 * 0.2 + 0.5 * 0.4,
0.8 * 1.0]
got = rmf.apply_rmf(mvals)
assert got == pytest.approx(expected)
# Now filter the response and try again. Select just the
# third bin, which should select the last three "energy"
# bins.
#
nchans = [offset + 2]
selected = rmf.notice(nchans)
assert selected == pytest.approx(np.asarray([False, True, True, True]))
expected2 = [0.0 * 0.2 + 0.6 * 0.4,
0.4 * 0.4 + 0.5 * 0.2,
0.5 * 0.2 + 0.5 * 0.4,
0.8 * 1.0]
got2 = rmf.apply_rmf(mvals[selected])
assert got2 == pytest.approx(expected2)
@pytest.mark.parametrize("offset", [0, 1, 2, 5])
def test_rmf_offset_check_rectangular(offset):
"""What happens if offset is set?
Here the RMF grid has more bins than channels, and it's more
constrained than the one_to_one case which might cover different
code paths in the notice call.
"""
# 10 channels and 20 energy bins.
#
ematrix = np.linspace(0.1, 21 * 0.1, 21)
ebounds = np.linspace(0.05, 2.2, 11)
elo = ematrix[:-1]
ehi = ematrix[1:]
e_min = ebounds[:-1]
e_max = ebounds[1:]
# Create the full matrix.
#
full_matrix = np.zeros((20, 10))
for idx in range(2, 18):
n = idx // 2
full_matrix[idx][n:n + 2] = [0.6, 0.4]
full_matrix[1][1:3] = [0.8, 0.2]
full_matrix[18][8:10] = [0.2, 0.8]
full_matrix[19][-1] = 1.0
# Special case the multi-group rows.
#
full_matrix[0] = [0.0, 0.1, 0.1, 0.0, 0.4, 0.4, 0.0, 0.0, 0.0, 0.0]
full_matrix[2] = [0.0, 0.1, 0.1, 0.0, 0.0, 0.0, 0.4, 0.4, 0.0, 0.0]
full_matrix[12] = [0.0, 0.0, 0.0, 0.0, 0.1, 0.1, 0.0, 0.0, 0.4, 0.4]
full_matrix[18] = [0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.1, 0.0, 0.4, 0.4]
    # Compress the matrix into the form that DataRMF needs. This step
# assumes that offset=1.
#
n_grp, f_chan, n_chan, matrix = matrix_to_rmf(full_matrix)
# Correct for the offset.
#
f_chan += (offset - 1)
rmf = DataRMF("dummy", 10, elo, ehi, n_grp=n_grp, f_chan=f_chan,
n_chan=n_chan, matrix=matrix, e_min=e_min,
e_max=e_max, offset=offset)
# Have different values for each bin: 0.5, 0.6, ..., 2.4
#
mvals = np.linspace(0.5, 2.4, 20)
# Since we have the full matrix we can calculate it rather than
# work it out manually.
#
expected = mvals @ full_matrix
got = rmf.apply_rmf(mvals)
assert got == pytest.approx(expected)
# Now filter the response and try again:
#
# We check the RMF convolution still works after each notice call,
# even if nothing has been ignored, just to try and check all the
# corner cases.
#
# - drop the first channel
nchans1 = offset + np.arange(1, 10)
selected1 = rmf.notice(nchans1)
assert selected1.all()
expected1 = expected
got1 = rmf.apply_rmf(mvals[selected1])
assert got1 == pytest.approx(expected1)
# - drop the last channel
nchans2 = offset + np.arange(0, 9)
mask = np.asarray([True] * 19 + [False])
selected2 = rmf.notice(nchans2)
assert selected2 == pytest.approx(mask)
selected2 = rmf.notice(offset + np.arange(0, 9))
assert selected2 == pytest.approx(mask)
expected2 = mvals[selected2] @ full_matrix[selected2, :]
got2 = rmf.apply_rmf(mvals[selected2])
assert got2 == pytest.approx(expected2)
# - drop a range of channels (start and end)
#
# This is a regression test as there's no documentation on
# filter_resp to indicate what it should return in this case.
# DJB thinks it should have returned
#
# T F T F F F T T T T T T T T F F F F T F
#
# rather than
#
# T F T F T T T T T T T T T T F F F F T F
#
    # [or it could not filter out those ranges within the
# start/end points].
#
nchans3 = offset + np.asarray([4, 5, 6])
mask3 = np.asarray([True, False] * 2 + [True] * 10 +
[False] * 4 + [True, False])
selected3 = rmf.notice(nchans3)
assert selected3 == pytest.approx(mask3)
# It is not clear what the RMF application does here.
#
# What did I naively expect?
#
# expected3 = mvals[selected3] @ full_matrix[selected3, :]
# assert expected3 == pytest.approx([0, 0.12, 1.26, 2.14, 2.91, 3.54, 2.83, 1, 1.6, 1.6])
#
# How is this calculated?
expected3 = [0, 0, 1.14, 2.14, 2.91, 3.54, 2.83, 1, 0, 0]
got3 = rmf.apply_rmf(mvals[selected3])
assert got3 == pytest.approx(expected3)
# Check we get back the original values.
#
assert rmf.notice(None) is None
got_last = rmf.apply_rmf(mvals)
assert got_last == pytest.approx(expected)
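The `mvals @ full_matrix` expectations used throughout these RMF tests can be sanity-checked with a tiny hypothetical 3-channel matrix (illustrative only, not from the original suite): row i of the dense response redistributes the flux of energy bin i across the channels, so applying the response is a plain vector-matrix product.

```python
import numpy as np

# Hypothetical dense response: each row sums to 1 and spreads the
# flux of one energy bin over at most two channels.
full_matrix = np.asarray([[1.0, 0.0, 0.0],
                          [0.3, 0.7, 0.0],
                          [0.0, 0.2, 0.8]])
mvals = np.asarray([2.0, 4.0, 10.0])

# Applying the RMF is the vector-matrix product: channel j receives
# sum_i mvals[i] * full_matrix[i, j].
convolved = mvals @ full_matrix
assert np.allclose(convolved, [3.2, 4.8, 8.0])
```

Filtering rows of `full_matrix` with a boolean mask, as `rmf.notice` does internally, keeps this identity for the noticed channels.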
@pytest.mark.parametrize("offset", [0, 1, 2, 5])
def test_rmf_offset_check_basics(offset):
"""Check out one of the tests from
sherpa/astro/utils/tests/test_astro_utils.py::test_filter_resp_basics
"""
fm = np.asarray([[0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, 0.0, 0.0],
[0.0, 1.2, 1.8, 0.0, 0.0],
[0.0, 0.0, 2.0, 0.0, 0.0],
[0.0, 0.0, 3.3, 3.7, 0.0],
[0.0, 0.0, 0.0, 4.0, 0.0]])
n_grp, f_chan, n_chan, matrix = matrix_to_rmf(fm, startchan=offset)
# Correct for the offset.
#
delta = offset - 1
# Make up energy ranges
#
ematrix = np.asarray([0.1, 0.2, 0.25, 0.35, 0.4, 0.5, 0.6])
elo = ematrix[:-1]
ehi = ematrix[1:]
ebounds = np.asarray([0.05, 0.15, 0.25, 0.35, 0.45, 0.55])
emin = ebounds[:-1]
emax = ebounds[1:]
rmf = DataRMF("x", 5, n_grp=n_grp, f_chan=f_chan, n_chan=n_chan,
matrix=matrix, offset=offset, energ_lo=elo,
energ_hi=ehi, e_min=emin, e_max=emax)
mvals = np.asarray([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
# No filter:
expected = mvals @ fm
got = rmf.apply_rmf(mvals)
assert got == pytest.approx(expected)
# Now apply channel selections; need to correct for the offset.
#
# First channel:
nchans = np.asarray([1]) + delta
selected = rmf.notice(nchans)
expected = mvals[selected] @ fm[selected, :]
# Check that we get "no model" out
assert expected == pytest.approx(np.zeros(5))
got = rmf.apply_rmf(mvals[selected])
assert got == pytest.approx(expected)
# Second channel:
nchans = np.asarray([2]) + delta
selected = rmf.notice(nchans)
expected = mvals[selected] @ fm[selected, :]
got = rmf.apply_rmf(mvals[selected])
assert got == pytest.approx(expected)
# Add an explicit check here, rather than one that just checks
# it is internally consistent.
#
assert got == pytest.approx([0, 0.56, 0.54, 0, 0])
# Third channel:
nchans = np.asarray([3]) + delta
selected = rmf.notice(nchans)
expected = mvals[selected] @ fm[selected, :]
got = rmf.apply_rmf(mvals[selected])
assert got == pytest.approx(expected)
# Fourth channel:
nchans = np.asarray([4]) + delta
selected = rmf.notice(nchans)
expected = mvals[selected] @ fm[selected, :]
got = rmf.apply_rmf(mvals[selected])
assert got == pytest.approx(expected)
# Fifth channel:
nchans = np.asarray([5]) + delta
selected = rmf.notice(nchans)
expected = mvals[selected] @ fm[selected, :]
got = rmf.apply_rmf(mvals[selected])
assert got == pytest.approx(expected)
# [2,3] and [1,2,3] should give the same results
# as channel 1 doesn't match anything
#
nchans1 = np.asarray([2,3]) + delta
nchans2 = np.asarray([1,2,3]) + delta
selected1 = rmf.notice(nchans1)
got1 = rmf.apply_rmf(mvals[selected1])
selected2 = rmf.notice(nchans2)
got2 = rmf.apply_rmf(mvals[selected2])
assert selected2 == pytest.approx(selected1)
assert got2 == pytest.approx(got1)
# Add an explicit check here, rather than one that just checks
# it is internally consistent.
#
assert got1 == pytest.approx([0, 0.56, 2.99, 1.85, 0])
@pytest.mark.parametrize("subtract", [True, False])
def test_pha_no_bkg(subtract):
"""Just check we error out
Given the way the code works, it errors out both ways.
"""
chans = np.arange(1, 4)
counts = np.ones_like(chans)
pha = DataPHA("dummy", chans, counts)
with pytest.raises(DataErr,
match="data set 'dummy' does not have any associated backgrounds"):
pha.subtracted = subtract
@pytest.mark.parametrize("attr", ["response", "background"])
def test_pha_xxx_ids_invalid_not_an_iterable(attr):
"""Just check we error out"""
chans = np.arange(1, 4)
counts = np.ones_like(chans)
pha = DataPHA("dummy", chans, counts)
with pytest.raises(DataErr,
match=f"{attr} ids 'None' does not appear to be an array"):
setattr(pha, f"{attr}_ids", None)
@pytest.mark.parametrize("attr", ["response", "background"])
def test_pha_xxx_ids_invalid_not_known(attr):
"""Just check we error out"""
chans = np.arange(1, 4)
counts = np.ones_like(chans)
pha = DataPHA("dummy", chans, counts)
with pytest.raises(DataErr,
match=re.escape(f"3 is not a valid {attr} id in []")):
setattr(pha, f"{attr}_ids", [3])
def test_pha_set_analysis_rate_invalid():
"""Just check we error out"""
chans = np.arange(1, 4)
counts = np.ones_like(chans)
pha = DataPHA("dummy", chans, counts)
with pytest.raises(DataErr,
match="unknown plot type 'None', choose 'rate' or 'counts'"):
pha.set_analysis("channel", type=None)
def test_pha_get_ylabel_yfac0():
"""This does not depend on the backend"""
chans = np.arange(1, 4)
counts = np.ones_like(chans)
pha = DataPHA("dummy", chans, counts)
assert pha.plot_fac == 0
assert pha.get_ylabel() == 'Counts/channel'
def test_pha_get_ylabel_yfac1(all_plot_backends):
"""Basic check
The label depends on the backend, so we just want the dummy
backend used here.
"""
chans = np.arange(1, 4)
counts = np.ones_like(chans)
pha = DataPHA("dummy", chans, counts)
pha.plot_fac = 1
ylabel = pha.get_ylabel()
assert ylabel.startswith('Counts/channel X Channel')
assert "1" in ylabel
@requires_fits
@requires_data
def test_1209_rsp(make_data_path):
"""Do we pick up the header keywords from a RSP matrix.
This is related to issue #1209
"""
# We could set up channels and counts, but let's not.
#
d = DataPHA("dummy", None, None)
assert d.header["TELESCOP"] == "none"
assert d.header["INSTRUME"] == "none"
assert d.header["FILTER"] == "none"
infile = make_data_path("xmmrgs/P0112880201R1S004RSPMAT1003.FTZ")
rsp = io.read_rmf(infile)
d.set_rmf(rsp)
assert d.header["TELESCOP"] == "XMM"
assert d.header["INSTRUME"] == "RGS1"
assert d.header["FILTER"] == "NONE"
@requires_fits
@requires_data
@pytest.mark.parametrize("mode,fexpr",
[(["arf", "rmf"], ""),
(["arf"], ""),
(["rmf"], "Medium")])
def test_1209_response(mode, fexpr, make_data_path):
"""Do we pick up the header keywords from ARF and/or RMF
This is related to issue #1209
We use a non-Chandra dataset for the responses just to
ensure we understand other missions. Note that SWIFT and ROSAT
are tested in test_astro_data_xxx_unit.py so we want something
other than those two.
"""
# We could set up channels and counts, but let's not.
#
d = DataPHA("dummy", None, None)
assert d.header["TELESCOP"] == "none"
assert d.header["INSTRUME"] == "none"
assert d.header["FILTER"] == "none"
# We hide the warnings about ENERG_LO being 0 in the input files
# as we are not testing this here.
#
with warnings.catch_warnings(record=True):
if "arf" in mode:
infile = make_data_path("MNLup_2138_0670580101_EMOS1_S001_spec.arf")
arf = io.read_arf(infile)
d.set_arf(arf)
if "rmf" in mode:
infile = make_data_path("MNLup_2138_0670580101_EMOS1_S001_spec.rmf")
rmf = io.read_rmf(infile)
d.set_rmf(rmf)
assert d.header["TELESCOP"] == "XMM"
assert d.header["INSTRUME"] == "EMOS1"
# The FILTER setting:
# is "" in the ARF
# "Medium" in the RMF
# so the output depends on the selected response.
#
# We could work it out, but specify it as an input to the test.
# It turns out to be a good test that we see different behavior
# depending on the loaded data!
#
assert d.header["FILTER"] == fexpr
@requires_fits
@requires_data
def test_1209_background(make_data_path):
"""Do we pick up the header keywords from the background?
This is related to issue #1209
We use a non-Chandra dataset.
"""
# We need to set up the channels array to match the background.
#
d = DataPHA("dummy", np.arange(0, 800, dtype=np.int16), None)
assert d.header["TELESCOP"] == "none"
assert d.header["INSTRUME"] == "none"
assert d.header["FILTER"] == "none"
infile = make_data_path("MNLup_2138_0670580101_EMOS1_S001_specbg.fits")
bkg = io.read_pha(infile)
d.set_background(bkg)
assert d.header["TELESCOP"] == "XMM"
assert d.header["INSTRUME"] == "EMOS1"
assert d.header["FILTER"] == "Medium"
@pytest.fixture
def make_dataimgint():
"""Create a simple IMG Int data set."""
# a 1 by 2 grid.
#
x1, x0 = np.mgrid[10:12, -5:-4]
shape = x0.shape
x0 = x0.flatten()
x1 = x1.flatten()
y = np.asarray([10, 5])
return DataIMGInt("ival", x0, x1, x0 + 1, x1 + 1,
y, shape=shape)
def test_dataimgint_create(make_dataimgint):
"""Check we can create a basic integrated image data set.
See issue #1379
"""
x0 = np.asarray([-5, -5])
x1 = np.asarray([10, 11])
img = make_dataimgint
assert (img.dep == [10, 5]).all()
assert len(img.indep) == 4
assert (img.indep[0] == x0).all()
assert (img.indep[1] == x1).all()
assert (img.indep[2] == (x0 + 1)).all()
assert (img.indep[3] == (x1 + 1)).all()
assert img.header == {}
def test_dataimgint_show(make_dataimgint):
"""Check we can show a basic integrated image data set.
See issue #1379
"""
img = make_dataimgint
    # This fails because there are problems getting the x0 and x0lo
    # attributes.
#
out = str(img).split("\n")
# Do we expect the x0/x1 output or x0lo/../x1hi
# output? For the moment just test what we do return.
#
assert out[0] == "name = ival"
assert out[1] == "x0 = Float64[2]"
assert out[2] == "x1 = Float64[2]"
assert out[3] == "y = Int64[2]"
assert out[4] == "shape = (2, 1)"
assert out[5] == "staterror = None"
assert out[6] == "syserror = None"
assert out[7] == "sky = None"
assert out[8] == "eqpos = None"
assert out[9] == "coord = logical"
assert len(out) == 10
def test_dataimgint_x0lo(make_dataimgint):
assert make_dataimgint.x0lo == pytest.approx([-5, -5])
def test_dataimgint_x1lo(make_dataimgint):
assert make_dataimgint.x1lo == pytest.approx([10, 11])
def test_dataimgint_x0hi(make_dataimgint):
assert make_dataimgint.x0hi == pytest.approx([-4, -4])
def test_dataimgint_x1hi(make_dataimgint):
assert make_dataimgint.x1hi == pytest.approx([11, 12])
def test_dataimgint_get_x0(make_dataimgint):
x0 = np.asarray([-5, -5])
x = (x0 + x0 + 1) / 2
assert (make_dataimgint.get_x0() == x).all()
def test_dataimgint_x0(make_dataimgint):
x0 = np.asarray([-5, -5])
x = (x0 + x0 + 1) / 2
assert (make_dataimgint.x0 == x).all()
def test_dataimgint_get_x1(make_dataimgint):
x1 = np.asarray([10, 11])
x = (x1 + x1 + 1) / 2
assert (make_dataimgint.get_x1() == x).all()
def test_dataimgint_x1(make_dataimgint):
x1 = np.asarray([10, 11])
x = (x1 + x1 + 1) / 2
assert (make_dataimgint.x1 == x).all()
def test_dataimgint_get_y(make_dataimgint):
assert (make_dataimgint.get_y() == [10, 5]).all()
def test_dataimgint_y(make_dataimgint):
assert (make_dataimgint.y == [10, 5]).all()
def test_dataimgint_get_dep(make_dataimgint):
assert (make_dataimgint.get_dep() == [10, 5]).all()
def test_dataimgint_get_x0label(make_dataimgint):
assert make_dataimgint.get_x0label() == "x0"
def test_dataimgint_get_x1label(make_dataimgint):
assert make_dataimgint.get_x1label() == "x1"
def test_dataimgint_get_ylabel(make_dataimgint):
assert make_dataimgint.get_ylabel() == "y"
def test_dataimgint_get_axes(make_dataimgint):
"""This copies the Data2DInt case but is different"""
axes = make_dataimgint.get_axes()
assert len(axes) == 4
# What are these values? They are not the input values
# to DataIMGInt.
#
assert (axes[0] == [-0.5]).all()
assert (axes[1] == [-0.5, 0.5]).all()
assert (axes[2] == [0.5]).all()
assert (axes[3] == [0.5, 1.5]).all()
@pytest.mark.xfail
def test_dataimgint_notice(make_dataimgint):
"""basic notice call
It is not entirely clear whether we expect the
notice call to work here when notice2d is present.
"""
img = make_dataimgint
# The mask attribute can be True, False, or a ndarray. Fortunately
# using an ndarray as a truthy value throws a ValueError.
#
assert img.mask
# Data is defined on x0=-5, x1=10,11
# so this excludes the second point.
#
img.notice(x1lo=10, x1hi=11)
assert (img.mask == np.asarray([True, False])).all()
@pytest.mark.xfail
def test_dataimgint_ignore(make_dataimgint):
"""basic ignore call"""
img = make_dataimgint
assert img.mask
img.notice(x1lo=10, x1hi=11, ignore=True)
assert (img.mask == np.asarray([False, True])).all()
def test_dataimgint_ignore_get_filter(make_dataimgint):
"""What exactly does get_filter return here?
The current behavior does not look sensible.
"""
img = make_dataimgint
assert img.mask
img.notice(x1lo=10, x1hi=11, ignore=True)
assert img.get_filter() == ''
def test_dataimgint_ignore_get_filter_expr(make_dataimgint):
"""What exactly does get_filter_expr return here?
The current behavior does not look sensible.
"""
img = make_dataimgint
assert img.mask
img.notice(x1lo=10, x1hi=11, ignore=True)
assert img.get_filter_expr() == ''
# given how notice test above fails, how is this working?
def test_dataimgint_notice_get_x0(make_dataimgint):
"""basic notice call + get_x0"""
img = make_dataimgint
img.notice(x1lo=10, x1hi=11)
assert (img.get_x0() == np.asarray([-4.5, -4.5])).all()
assert (img.get_x0(True) == np.asarray([-4.5])).all()
@pytest.mark.xfail
def test_dataimgint_notice_get_x1(make_dataimgint):
"""basic notice call + get_x1"""
img = make_dataimgint
img.notice(x1lo=10, x1hi=11)
assert (img.get_x1() == np.asarray([10.5, 11.5])).all()
assert (img.get_x1(True) == np.asarray([10.5])).all()
@pytest.mark.xfail
def test_dataimgint_notice_get_y(make_dataimgint):
"""basic notice call + get_y"""
img = make_dataimgint
img.notice(x1lo=10, x1hi=11)
assert (img.get_y() == np.asarray([10, 5])).all()
assert (img.get_y(True) == np.asarray([10])).all()
@requires_region
def test_dataimgint_notice2d(make_dataimgint):
"""basic notice2d call.
Given that we only have two items the testing is not
going to be extensive.
"""
img = make_dataimgint
img.notice2d("rect(-100, 10, 100, 11)")
assert (img.mask == np.asarray([True, False])).all()
@requires_region
def test_dataimgint_ignore2d(make_dataimgint):
"""basic ignore2d call.
Given that we only have two items the testing is not
going to be extensive.
"""
img = make_dataimgint
img.notice2d("rect(-100, 10, 100, 11)", ignore=True)
assert (img.mask == np.asarray([False, True])).all()
@requires_region
def test_dataimgint_notice2d_get_filter(make_dataimgint):
img = make_dataimgint
img.notice2d("rect(-100, 10, 100, 11)")
assert img.get_filter() == 'Rectangle(-100,10,100,11)'
@requires_region
def test_dataimgint_notice2d_get_filter_expr(make_dataimgint):
img = make_dataimgint
img.notice2d("rect(-100, 10, 100, 11)")
assert img.get_filter_expr() == 'Rectangle(-100,10,100,11)'
@requires_region
def test_dataimgint_notice2d_get_x0(make_dataimgint):
"""basic notice2d call + get_x0"""
img = make_dataimgint
img.notice2d("rect(-100, 10, 100, 11)")
assert (img.get_x0() == np.asarray([-4.5, -4.5])).all()
assert (img.get_x0(True) == np.asarray([-4.5])).all()
@requires_region
def test_dataimgint_notice2d_get_x1(make_dataimgint):
"""basic notice2d call + get_x1"""
img = make_dataimgint
img.notice2d("rect(-100, 10, 100, 11)")
assert (img.get_x1() == np.asarray([10.5, 11.5])).all()
assert (img.get_x1(True) == np.asarray([10.5])).all()
@requires_region
def test_dataimgint_notice2d_get_y(make_dataimgint):
"""basic notice2d call + get_y"""
img = make_dataimgint
img.notice2d("rect(-100, 10, 100, 11)")
assert (img.get_y() == np.asarray([10, 5])).all()
assert (img.get_y(True) == np.asarray([10])).all()
def test_dataimgint_get_dims(make_dataimgint):
assert make_dataimgint.get_dims() == (1, 2)
def test_dataimgint_get_img(make_dataimgint):
img = make_dataimgint
ival = img.get_img()
assert ival.shape == (2, 1)
assert (ival == np.asarray([[10], [5]])).all()
def test_dataimgint_get_img_model_no_filter(make_dataimgint):
"""Check we can evaluate a model
The Data2DInt case also adds a filter to check that the routine
ignores this filter, but as we currently don't understand the
filtering we skip this step.
"""
img = make_dataimgint
# This model evaluates
# mdl.c + mdl.cx1 * x0 + mdl.cy1 * x1
#
# which becomes, because we use the middle of the bin
#
# 10 + 1 * (-4.5) + 10 * (10.5, 11.5)
# = (110.5, 120.5)
#
mdl = Polynom2D()
mdl.c = 10
mdl.cy1 = 10
mdl.cx1 = 1
ivals = img.get_img(mdl)
assert len(ivals) == 2
assert ivals[0].shape == (2, 1)
assert ivals[1].shape == (2, 1)
assert (ivals[0] == np.asarray([[10], [5]])).all()
assert (ivals[1] == np.asarray([[110.5], [120.5]])).all()
def test_dataimgint_get_max_pos(make_dataimgint):
assert make_dataimgint.get_max_pos() == (-4.5, 10.5)
def test_dataimgint_get_bounding_mask(make_dataimgint):
assert make_dataimgint.get_bounding_mask() == (True, None)
@pytest.mark.parametrize("method",
["get_error",
"get_imgerr",
"get_staterror",
"get_syserror",
"get_yerr"
])
def test_dataimgint_method_is_none(method, make_dataimgint):
"""Check those methods that return None"""
func = getattr(make_dataimgint, method)
assert func() is None
@pytest.mark.parametrize("attribute",
["eqpos",
"sky",
"staterror",
"syserror"
])
def test_dataimgint_attribute_is_none(attribute, make_dataimgint):
"""Check those attributes that return None"""
attr = getattr(make_dataimgint, attribute)
assert attr is None
def test_dataimgint_no_sky(make_dataimgint):
"""Basic check (rely on base class to check all the combinations)."""
with pytest.raises(DataErr,
match="data set 'ival' does not contain a physical coordinate system"):
make_dataimgint.get_physical()
@requires_wcs
def test_dataimgint_sky(make_dataimgint):
"""We can convert coordinates.
We assume the base class tests are good here, so this is a
minimal check.
"""
img = make_dataimgint
    img.sky = WCS("sky", "LINEAR",
crval=[100.5, 110.5],
crpix=[1.5, 2.5],
cdelt=[2, 2])
# The "logical" coordinates are
# lo = [-5, 10], [-5, 11]
# hi = [-4, 11], [-4, 12]
#
# so these get converted to
#
# new = (orig - crpix) * cdelt + crval
#
# which is
# lo = [87.5, 125.5], [87.5, 127.5]
# hi = [89.5, 127.5], [89.5, 129.5]
#
x0 = np.asarray([87.5, 87.5])
x1 = np.asarray([125.5, 127.5])
sky = img.get_physical()
assert len(sky) == 4
assert sky[0] == pytest.approx(x0)
assert sky[1] == pytest.approx(x1)
assert sky[2] == pytest.approx(x0 + 2)
assert sky[3] == pytest.approx(x1 + 2)
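# The WCS conversion exercised above follows the linear transform
# new = (orig - crpix) * cdelt + crval. The hypothetical helper below
# (not part of the Sherpa test suite) recomputes the fixture's expected
# values directly with NumPy as a minimal sketch of that arithmetic.

```python
import numpy as np

def test_dataimgint_sky_formula_sketch():
    """Recompute the expected physical coordinates by hand."""
    crval = np.asarray([100.5, 110.5])
    crpix = np.asarray([1.5, 2.5])
    cdelt = np.asarray([2.0, 2.0])
    # Logical lo coordinates of the two pixels: (-5, 10) and (-5, 11).
    orig = np.asarray([[-5.0, 10.0], [-5.0, 11.0]])
    new = (orig - crpix) * cdelt + crval
    assert np.allclose(new, [[87.5, 125.5], [87.5, 127.5]])
```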
def test_dataimgint_sky_coords_unchanged(make_dataimgint):
"""Just because sky is set we don't change axis data."""
img = make_dataimgint
    img.sky = WCS("sky", "LINEAR",
crval=[100.5, 110.5],
crpix=[1.5, 2.5],
cdelt=[2, 2])
x1 = np.asarray([10, 11])
x = (x1 + x1 + 1) / 2
assert img.get_x1() == pytest.approx(x)
@requires_wcs
def test_dataimgint_set_sky(make_dataimgint):
"""We can change to the SKY coordinate system"""
img = make_dataimgint
    img.sky = WCS("sky", "LINEAR",
crval=[100.5, 110.5],
crpix=[1.5, 2.5],
cdelt=[2, 2])
assert img.coord == "logical"
img.set_coord("physical")
assert img.coord == "physical"
@requires_wcs
def test_dataimgint_set_sky_x0hi(make_dataimgint):
"""x0hi is changed
We don't check all attributes.
"""
img = make_dataimgint
    img.sky = WCS("sky", "LINEAR",
crval=[100.5, 110.5],
crpix=[1.5, 2.5],
cdelt=[2, 2])
img.set_coord("physical")
x0 = np.asarray([87.5, 87.5])
x = x0 + 2
assert img.x0hi == pytest.approx(x)
@requires_wcs
def test_dataimgint_set_sky_get_x1(make_dataimgint):
"""get_x1 is changed
We don't check all accessors.
"""
img = make_dataimgint
    img.sky = WCS("sky", "LINEAR",
crval=[100.5, 110.5],
crpix=[1.5, 2.5],
cdelt=[2, 2])
img.set_coord("physical")
x1 = np.asarray([125.5, 127.5])
x = (x1 + x1 + 2) / 2
assert img.get_x1() == pytest.approx(x)
@requires_wcs
def test_dataimgint_sky_coords_reset(make_dataimgint):
"""We can get back to the logical units
We only check one of the values.
"""
img = make_dataimgint
    img.sky = WCS("sky", "LINEAR",
crval=[100.5, 110.5],
crpix=[1.5, 2.5],
cdelt=[2, 2])
img.set_coord("physical")
img.set_coord("logical")
x1 = np.asarray([10, 11])
x = (x1 + x1 + 1) / 2
assert img.get_x1() == pytest.approx(x)
@pytest.mark.parametrize("dclass", [Data2D, DataIMG])
def test_1379_evaluation_unintegrated(dclass):
"""Check that delta2d does not evaluate (i.e. only 0's).
This is based on the code that lead to showing #1379.
"""
    x0, x1, y, shape = dataspace2d([10, 15])
data = dclass("temp", x0, x1, y, shape=shape)
# It is important that xpos/ypos is not set to either an integer
# value or to half-pixel (as this is used for the bin edges in the
# integrated case).
#
mdl = Delta2D("mdl")
mdl.xpos = 4.3
mdl.ypos = 8.9
mdl.ampl = 100
    assert mdl.integrate  # ensure the model integrates
out = data.eval_model(mdl)
assert len(out) == len(y)
assert set(out) == {0.0}
@pytest.mark.parametrize("dclass", [Data2DInt, DataIMGInt])
def test_1379_evaluation_integrated(dclass):
"""Check that delta2d does get evaluate at some point.
This is based on the code that lead to showing #1379.
"""
    x0, x1, y, shape = dataspace2d([10, 15])
data = dclass("temp", x0 - 0.5, x1 - 0.5,
x0 + 0.5, x1 + 0.5, y,
shape=shape)
mdl = Delta2D("mdl")
mdl.xpos = 4.3
mdl.ypos = 8.9
mdl.ampl = 100
    assert mdl.integrate  # ensure the model integrates
out = data.eval_model(mdl)
assert len(out) == len(y)
assert set(out) == {0.0, 100.0}
assert out.sum() == 100.0
assert out.argmax() == 83
# An internal check that we are actually selecting the correct
# pixel.
#
assert x0[83] == 4.0
assert x1[83] == 9.0
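# The argmax check above relies on how dataspace2d flattens the grid:
# x0 (size 10) is the fast axis, so pixel (x0=4, x1=9) lands at
# row-major index (x1 - 1) * 10 + (x0 - 1) = 83. The hypothetical
# helper below (not part of the Sherpa test suite) sketches that
# index arithmetic.

```python
def test_dataspace2d_index_sketch():
    """Recompute the flattened index of the pixel at (x0=4, x1=9)."""
    nx = 10  # size of the fast (x0) axis in dataspace2d([10, 15])
    x0_pix, x1_pix = 4, 9
    idx = (x1_pix - 1) * nx + (x0_pix - 1)
    assert idx == 83
```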
@pytest.mark.parametrize("dclass", [Data2DInt, DataIMGInt])
def test_1379_evaluation_model_not_integrated(dclass):
"""If the delta2D model is not integrated all bets are off.
This is based on the code that lead to showing #1379.
"""
    x0, x1, y, shape = dataspace2d([10, 15])
data = dclass("temp", x0 - 0.5, x1 - 0.5,
x0 + 0.5, x1 + 0.5, y,
shape=shape)
mdl = Delta2D("mdl")
mdl.xpos = 4.3
mdl.ypos = 8.9
mdl.ampl = 100
mdl.integrate = False
# As the integrate flag is False it behaves like the Data2D/DataIMG
# case (since the xpos/ypos value is chosen not to fall on a
# bin edge).
#
out = data.eval_model(mdl)
assert len(out) == len(y)
assert set(out) == {0.0}
@requires_fits
@requires_data
@requires_wcs
@pytest.mark.parametrize("coord", ["logical", "image", "physical", "world", "wcs"])
def test_1380_data(coord, make_data_path):
"""The contour data should ideally remain the same.
See also sherpa/astro/ui/tests/test_astro_ui_plot.py::test_1380_plot
This is the origin of the problem.
"""
infile = make_data_path("image2.fits")
img = io.read_image(infile)
assert isinstance(img, DataIMG)
assert img.coord == "logical"
(x0_1, x1_1, y_1, xl_1, yl_1) = img.to_contour()
# We do not check the output of this call. It is important
# to call set_coord rather than change the coord attribute.
#
img.set_coord(coord)
img.to_contour()
# We do check that we get back the same data as we
# originally did.
#
img.set_coord("logical")
(x0_3, x1_3, y_3, xl_3, yl_3) = img.to_contour()
assert xl_3 == xl_1
assert yl_3 == yl_1
assert (y_3 == y_1).all()
assert (x0_3 == x0_1).all()
assert (x1_3 == x1_1).all()
@requires_fits
@requires_data
@requires_wcs
def test_1380_pickle(make_data_path):
"""Can we pickle and restore an image?
The fix for 1380 added new data that is pickled, so just
check it works. Technically this should work but the
state handling has had to be tweaked to allow old state
files to be read in, so this just checks that new data
is not affected by this. We don't have any "old" state
files lying around that we can use here.
There are a number of existing image pickle tests but
they don't check the coordinate settings used here.
"""
infile = make_data_path("image2.fits")
img = io.read_image(infile)
img.set_coord("physical")
x0_1, x1_1 = img.indep
img2 = pickle.loads(pickle.dumps(img))
assert img2.coord == "physical"
x0_2, x1_2 = img2.indep
# this test should not need pytest.approx
assert (x0_2 == x0_1).all()
assert (x1_2 == x1_1).all()
img2.set_coord("logical")
assert img.coord == "physical"
assert img2.coord == "logical"
img.set_coord("logical")
x0_3, x1_3 = img.indep
x0_4, x1_4 = img2.indep
assert (x0_4 == x0_3).all()
assert (x1_4 == x1_3).all()
assert (x0_3 != x0_1).all()
# This is an internal check and may get changed if the
# implementation changes.
#
assert img._orig_indep_axis[0] == "logical"
assert img2._orig_indep_axis[0] == "logical"
def test_image_apply_filter_invalid_size(make_test_image):
"""Does an image error out if the filter is sent an invalid size?
Test related to issue #1439 which is an issue with the DataPHA class.
"""
data = make_test_image
with pytest.raises(DataErr,
match="^size mismatch between data and array: 600 vs 2$"):
data.apply_filter([1, 2])
def test_image_filtered_apply_filter_invalid_size(make_test_image):
"""Does an image error out if the filter is sent an invalid size after a filter?"""
data = make_test_image
# "Fake" a filter (this is a perfectly-valid way to set up a filter,
# at least at the time this code was written).
#
data.mask = np.ones(data.y.size, dtype=bool)
data.mask[0] = False
with pytest.raises(DataErr,
match="^size mismatch between data and array: 600 vs 2$"):
data.apply_filter([1, 2])
def test_pha_subtract_bkg_no_staterror():
"""Check what happens with no staterror function for the background.
The idea is that the data has a statistical error column but the
background does not. In this case we can't calculate an error. The
code currently returns None rather than raising an error.
"""
chans = np.arange(1, 5)
counts = np.asarray([10, 9, 3, 7])
errs = np.asarray([3, 2, 1, 2])
data = DataPHA("ex", chans, counts, errs)
bcounts = np.asarray([2, 1, 2, 4])
bkg = DataPHA("bkg", chans, bcounts)
data.set_background(bkg)
data.subtract()
assert bkg.staterror is None
assert data.staterror is not None
# The data.get_staterror call is the code being tested here, but
# the other asserts are being made to check that nothing has
# changed.
#
assert bkg.get_staterror() is None
assert data.get_staterror() is None
def test_pha_subtract_bkg_filter_false():
"""Check what happens with background and filter=False
Looks like this has not been tested, so add an explicit check.
"""
# Note that the background has a different set of groups to the
# data, but it is over-ridden when computing the
# background-subtracted values.
#
chans = np.arange(1, 5)
counts = np.asarray([10, 9, 3, 7])
grps = np.asarray([1, 1, -1, 1])
data = DataPHA("ex", chans, counts, grouping=grps)
bcounts = np.asarray([2, 0, 2, 4])
bgrps = np.asarray([1, -1, -1, 1])
bkg = DataPHA("bkg", chans, bcounts, grouping=bgrps)
data.set_background(bkg)
data.subtract()
bgot = bkg.get_staterror(filter=False, staterrfunc=np.sqrt)
assert bgot == pytest.approx([2, 2])
expected = np.sqrt(np.asarray([10, 12, 7]) + np.asarray([2, 2, 4]))
got = data.get_staterror(filter=False, staterrfunc=np.sqrt)
assert got == pytest.approx(expected)
def test_pha_subtract_bkg_filter_chi2datavar():
"""Check what happens with background and chi2datavar
Looks like this has not been tested, so add an explicit check.
Follows test_pha_subtract_bkg_filter_false but uses a different
error function.
"""
chans = np.arange(1, 5)
counts = np.asarray([10, 9, 3, 7])
data = DataPHA("ex", chans, counts)
bcounts = np.asarray([2, 0, 2, 4])
bkg = DataPHA("bkg", chans, bcounts)
data.set_background(bkg)
data.subtract()
bgot = bkg.get_staterror(staterrfunc=calc_chi2datavar_errors)
assert bgot == pytest.approx([np.sqrt(2), 0, np.sqrt(2), np.sqrt(4)])
expected = np.sqrt(np.asarray([10, 9, 3, 7]) + np.asarray([2, 0, 2, 4]))
got = data.get_staterror(staterrfunc=calc_chi2datavar_errors)
assert got == pytest.approx(expected)
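# The expected values above use the usual propagation for Poisson-like
# errors on background-subtracted data: the variances add, so the
# combined error is sqrt(src_counts + bkg_counts). The hypothetical
# helper below (not part of the Sherpa test suite) is a minimal
# standalone sketch of that rule.

```python
import numpy as np

def test_subtract_error_propagation_sketch():
    """sqrt(src + bkg) per channel, using the counts from the test above."""
    src = np.asarray([10.0, 9.0, 3.0, 7.0])
    bkg = np.asarray([2.0, 0.0, 2.0, 4.0])
    combined = np.sqrt(src + bkg)
    assert np.allclose(combined, np.sqrt([12.0, 9.0, 5.0, 11.0]))
```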
@requires_group
@pytest.mark.parametrize("opt,arg",
[("bins", 2), ("width", 2),
("counts", 3), ("adapt", 3),
pytest.param("snr", 3, marks=pytest.mark.xfail),
("adapt_snr", 3)])
def test_1643_group_options(opt, arg):
"""Check the behavior noted in #1646 and if it's been fixed
yet or not (it comes from the group module which is not
part of Sherpa).
"""
pha = DataPHA("grp", np.arange(1, 12),
[1, 1, 1, 1, 8, 2, 6, 4, 1, 1, 1])
pha.ignore(hi=4)
pha.ignore(lo=9)
meth = getattr(pha, f"group_{opt}")
    assert pha.get_filter() == "5:8"
meth(arg, tabStops=~pha.mask)
    assert pha.get_filter() == "5:8"
# excluded channels are 0
assert pha.grouping[0:4] == pytest.approx([0] * 4)
assert pha.quality[0:4] == pytest.approx([0] * 4)
assert pha.grouping[8:] == pytest.approx([0] * 3)
assert pha.quality[8:] == pytest.approx([0] * 3)
def test_pha_group_background(caplog):
"""Do we group the background?"""
src = DataPHA("src", [1, 2, 3], [1, 2, 1])
bkg = DataPHA("bkg", [1, 2, 3], [0, 1, 1])
src.grouping = [1, 1, 1]
src.quality = [0, 0, 0]
bkg.grouping = [1, 1, -1]
src.set_background(bkg)
assert not src.grouped
assert not bkg.grouped
src.group()
assert src.grouped
assert bkg.grouped
assert len(caplog.record_tuples) == 0
def test_pha_group_background_not_set(caplog):
"""Do we group the background, but grouping not set?"""
src = DataPHA("src", [1, 2, 3], [1, 2, 1])
bkg = DataPHA("bkg", [1, 2, 3], [0, 1, 1])
src.grouping = [1, 1, 1]
src.quality = [0, 0, 0]
bkg.grouping = None
src.set_background(bkg)
assert not src.grouped
assert bkg.grouped # TODO: why is this set?
src.group()
assert src.grouped
assert bkg.grouped
assert len(caplog.record_tuples) == 0
def test_pha_ungroup_background(caplog):
"""Do we ungroup the background?"""
src = DataPHA("src", [1, 2, 3], [1, 2, 1])
bkg = DataPHA("bkg", [1, 2, 3], [0, 1, 1])
src.grouping = [1, 1, 1]
src.quality = [0, 0, 0]
bkg.grouping = [1, 1, -1]
src.grouped = True
bkg.grouped = True
src.set_background(bkg)
assert src.grouped
assert bkg.grouped
src.ungroup()
assert not src.grouped
assert not bkg.grouped
assert len(caplog.record_tuples) == 0
def test_pha_ungroup_background_not_set(caplog):
"""Do we ungroup the background, but grouping not set?"""
src = DataPHA("src", [1, 2, 3], [1, 2, 1])
bkg = DataPHA("bkg", [1, 2, 3], [0, 1, 1])
src.grouping = [1, 1, 1]
src.quality = [0, 0, 0]
bkg.grouping = None
src.grouped = True
with pytest.raises(DataErr,
match="data set 'bkg' does not specify grouping flags"):
bkg.grouped = True
src.set_background(bkg)
assert src.grouped
assert bkg.grouped
src.ungroup()
assert not src.grouped
assert not bkg.grouped
assert len(caplog.record_tuples) == 0
def test_pha_ungroup_background_after(caplog):
"""Set the background before setting the grouping
The set_background method does some extra work, so let's
see what happens if we don't do that.
"""
src = DataPHA("src", [1, 2, 3], [1, 2, 1])
bkg = DataPHA("bkg", [1, 2, 3], [0, 1, 1])
src.set_background(bkg)
src.grouping = [1, 1, 1]
src.quality = [0, 0, 0]
bkg.grouping = [1, 1, -1]
src.grouped = True
bkg.grouped = True
assert src.grouped
assert bkg.grouped
src.ungroup()
assert not src.grouped
assert not bkg.grouped
assert len(caplog.record_tuples) == 0
def test_pha_ungroup_background_after_not_set(caplog):
"""Set the background before setting the grouping
The set_background method does some extra work, so let's
see what happens if we don't do that.
"""
src = DataPHA("src", [1, 2, 3], [1, 2, 1])
bkg = DataPHA("bkg", [1, 2, 3], [0, 1, 1])
src.set_background(bkg)
src.grouping = [1, 1, 1]
src.quality = [0, 0, 0]
bkg.grouping = None
src.grouped = True
with pytest.raises(DataErr,
match="data set 'bkg' does not specify grouping flags"):
bkg.grouped = True
assert src.grouped
assert not bkg.grouped
src.ungroup()
assert not src.grouped
assert not bkg.grouped
assert len(caplog.record_tuples) == 0
def test_pha_ungroup_background_remove(caplog):
"""Can we remove the grouping after grouping?
This is to try and check a corner case.
"""
src = DataPHA("src", [1, 2, 3], [1, 2, 1])
bkg = DataPHA("bkg", [1, 2, 3], [0, 1, 1])
src.set_background(bkg)
src.grouping = [1, 1, 1]
src.quality = [0, 0, 0]
bkg.grouping = [1, 1, -1]
bkg.quality = [0, 0, 0]
src.grouped = True
bkg.grouped = True
# TODO: should this remove the grouping flag?
bkg.grouping = None
src.ungroup()
assert not src.grouped
assert not bkg.grouped
assert len(caplog.record_tuples) == 0
def test_pha_check_background_ids_basic():
"""Check background_ids can be used.
This is a basic run through to check the behavior.
"""
pha = DataPHA("src", [1, 2, 3], [1, 1, 1])
b1 = DataPHA("b1", [1, 2, 3], [1, 1, 1])
b2 = DataPHA("b2", [1, 2, 3], [1, 1, 1])
assert len(pha.background_ids) == 0
pha.set_background(b1)
assert pha.background_ids == pytest.approx([1])
pha.set_background(b2, id="up")
assert pha.background_ids == pytest.approx([1, "up"])
pha.delete_background(1)
assert pha.background_ids == pytest.approx(["up"])
# Check we can delete a background by setting background_ids
#
pha.background_ids = []
assert pha.background_ids == []
# Does the order matter?
#
pha.set_background(b2, id="up")
pha.set_background(b1)
assert pha.background_ids == pytest.approx(["up", 1])
# Remove one element.
#
pha.background_ids = [1]
assert pha.background_ids == pytest.approx([1])
# We can do the following, which is technically a no-op but may
# change some internal state. This also tests using an
# iterable-that-is-not-a-list.
#
pha.background_ids = {1}
assert pha.background_ids == pytest.approx([1])
# We can change to a currently-unused background as long as we've
# used it before.
#
pha.background_ids = ["up"]
assert pha.background_ids == pytest.approx(["up"])
pha.background_ids = [1, "up"]
assert pha.background_ids == pytest.approx([1, "up"])
# We can not set an unknown background.
#
with pytest.raises(DataErr,
match=r"^foo is not a valid background id in \['up', 1\]$"):
pha.background_ids = ["foo"]
# And it hasn't changed.
#
assert pha.background_ids == pytest.approx([1, "up"])
@pytest.mark.parametrize("bkg_id", [None, 1, "foo"])
def test_pha_delete_unknown_background(bkg_id):
"""What happens if we delete an unknown background?"""
pha = DataPHA("src", [1, 2, 3], [1, 1, 1])
b1 = DataPHA("b1", [1, 2, 3], [1, 1, 1])
b2 = DataPHA("b2", [1, 2, 3], [1, 1, 1])
pha.set_background(b1, id="up")
pha.set_background(b1, id="down")
# This is treated as a no-op
pha.delete_background(bkg_id)
assert pha.background_ids == pytest.approx(["up", "down"])
def test_pha_check_response_ids_basic():
"""Check response_ids can be used.
This is a basic run through to check the behavior.
"""
pha = DataPHA("src", [1, 2, 3], [1, 1, 1])
elo = np.arange(2, 5)
ehi = elo + 1
rmf1 = create_delta_rmf(elo, ehi, name="rmf1")
rmf2 = create_delta_rmf(elo, ehi, name="rmf2")
pha.set_response(rmf=rmf1)
assert pha.response_ids == pytest.approx([1])
pha.set_response(rmf=rmf2, id="up")
assert pha.response_ids == pytest.approx([1, "up"])
pha.delete_response(1)
assert pha.response_ids == pytest.approx(["up"])
# Check we can delete a response by setting response_ids
#
pha.response_ids = []
assert pha.response_ids == []
# Does the order matter?
#
pha.set_response(rmf=rmf2, id="up")
pha.set_response(rmf=rmf1)
assert pha.response_ids == pytest.approx(["up", 1])
# Remove one element.
#
pha.response_ids = [1]
assert pha.response_ids == pytest.approx([1])
# We can do the following, which is technically a no-op but may
# change some internal state. This also tests using an
# iterable-that-is-not-a-list.
#
pha.response_ids = {1}
assert pha.response_ids == pytest.approx([1])
# We can change to a currently-unused response as long as we've
# used it before.
#
pha.response_ids = ["up"]
assert pha.response_ids == pytest.approx(["up"])
pha.response_ids = [1, "up"]
assert pha.response_ids == pytest.approx([1, "up"])
# We can not set an unknown response.
#
with pytest.raises(DataErr,
match=r"^foo is not a valid response id in \['up', 1\]$"):
pha.response_ids = ["foo"]
# And it hasn't changed.
#
assert pha.response_ids == pytest.approx([1, "up"])
def test_pha_delete_missing_background_is_a_noop():
"""Check this call returns.
"""
pha = DataPHA("src", [1, 2, 3], [1, 1, 1])
bkg = DataPHA("bkg", [1, 2, 3], [1, 1, 1])
pha.set_background(bkg)
pha.subtract()
assert pha.subtracted
assert pha.background_ids == pytest.approx([1])
assert not bkg.subtracted
assert bkg.background_ids == []
# deleting a non-existent background is a no-op:
# - dataset with no backgrounds
    # - dataset with a different background id than the one requested
#
bkg.delete_background()
pha.delete_background(2)
# Minimal check that "nothing has happened".
#
assert pha.subtracted
assert pha.background_ids == pytest.approx([1])
assert not bkg.subtracted
assert bkg.background_ids == []
def test_img_checks_coord_nonsense():
"""What happens when coord is set to a nonsense value?"""
with pytest.raises(DataErr,
match="^unknown coordinates: 'nonsense'"):
DataIMG("ex", [1, 2, 1, 2], [1, 1, 2, 2], [1, 2, 3, 4],
coord="nonsense")
@pytest.mark.parametrize("coord", ["physical", "world", "wcs"])
def test_img_checks_coord_no_transform(coord):
"""What happens when coord is set to xxx but no transform?"""
with pytest.raises(DataErr,
match="^data set 'ex' does not contain a .* coordinate system$"):
DataIMG("ex", [1, 2, 1, 2], [1, 1, 2, 2], [1, 2, 3, 4],
coord=coord)
@requires_group
@pytest.mark.parametrize("asarray", [True, False])
def test_group_xxx_tabstops_not_ndarray(asarray):
"""What happens if tabStops is not a ndarray?"""
pha = DataPHA("test", [1, 2, 3, 4, 5], [2, 3, 4, 5, 6])
tabstops = [1, 1, 0, 0, 1]
if asarray:
tabstops = np.asarray(tabstops)
# This should only group channels 3 and 4.
pha.group_width(2, tabStops=tabstops)
assert pha.get_y() == pytest.approx([2, 3, 4.5, 6])
assert pha.mask is True
assert pha.get_mask() == pytest.approx(np.ones(5, dtype=bool))
@requires_group
@pytest.mark.parametrize("asarray", [True, False])
@pytest.mark.parametrize("nelem", [4, 6])
def test_group_xxx_tabstops_wrong_size(asarray, nelem):
    """What happens if tabStops is the wrong size?"""
pha = DataPHA("test", [1, 2, 3, 4, 5], [2, 3, 4, 5, 6])
tabstops = [0] * nelem
if asarray:
tabstops = np.asarray(tabstops)
emsg = r"^grpBinWidth\(\) The number of tab stops and number of channels specified in the argument list have different sizes$"
with pytest.raises(ValueError, match=emsg):
pha.group_width(2, tabStops=tabstops)
@requires_group
def test_group_xxx_tabstops_already_grouped():
"""Check what happens if tabStops is sent ~pha.mask when already grouped."""
pha = DataPHA("grp", [1, 2, 3, 4, 5, 6], [12, 2, 9, 2, 4, 5])
pha.mask = [1, 0, 1, 1, 1, 0]
assert pha.get_y(filter=True) == pytest.approx([12, 9, 2, 4])
pha.grouping = [1, 1, 1, -1, 1, 1]
pha.grouped = True
assert pha.get_y(filter=True) == pytest.approx([12, 5.5, 4])
tstops = ~pha.mask
mask = np.asarray([False, True, False, False, True])
assert tstops == pytest.approx(mask)
# Apply the mask as the tabStops (after inversion) where
# len(tstops) < nchannel but does match the number of groups.
#
pha.group_counts(8, tabStops=tstops)
assert pha.grouping == pytest.approx([1, 0, 1, 1, -1, 0])
assert pha.quality == pytest.approx([0, 0, 0, 2, 2, 0])
assert pha.get_y(filter=True) == pytest.approx([12, 9, 3])
@requires_wcs
@requires_region
def test_dataimg_axis_ordering():
"""What are the x0/x1 axes meant to be?
See also test_dataimg_axis_ordering_1880
"""
# nx=3, ny=2
#
x1, x0 = np.mgrid[1:3, 1:4]
x0 = x0.flatten()
x1 = x1.flatten()
y = np.arange(6) * 10 + 10
sky = WCS("physical", "LINEAR", [100, 200], [1, 1], [10, 10])
eqpos = WCS("world", "WCS", [30, 50], [100, 200], [-0.1, 0.1])
orig = DataIMG("faked", x0, x1, y, shape=(2, 3), sky=sky,
eqpos=eqpos)
orig.set_coord("physical")
orig.notice2d("circle(110, 210, 6)", True)
assert orig.get_filter() == "Field()&!Circle(110,210,6)"
assert orig.get_dep(filter=True) == pytest.approx([10, 20, 30, 40, 60])
a0, a1 = orig.get_indep()
assert a0 == pytest.approx([100, 110, 120] * 2)
assert a1 == pytest.approx([200, 200, 200, 210, 210, 210])
@requires_wcs
@requires_region
def test_dataimg_axis_ordering_1880():
"""What are the x0/x1 axes meant to be? See issues #1789 #1880
See also test_dataimg_axis_ordering. This is a regression test to
catch if we ever decide to update the DataIMG code.
"""
# nx=2, ny=3
#
x1, x0 = np.mgrid[1:4, 1:3]
x0 = x0.flatten()
x1 = x1.flatten()
y = np.arange(6) * 10 + 10
sky = WCS("physical", "LINEAR", [100, 200], [1, 1], [10, 10])
eqpos = WCS("world", "WCS", [30, 50], [100, 200], [-0.1, 0.1])
orig = DataIMG("faked", x0, x1, y, shape=(2, 3), sky=sky,
eqpos=eqpos)
orig.set_coord("physical")
# This should remove the pixel with value 50 but it actually
# removes 40. This is an issue with how DataIMG requires the x0/x1
# arrays (I think) but for this test we just test the existing
# behavior. See issues #1880 and #1789.
#
orig.notice2d("circle(110, 210, 6)", True)
assert orig.get_filter() == "Field()&!Circle(110,210,6)"
assert orig.get_dep(filter=True) == pytest.approx([10, 20, 30, 50, 60])
a0, a1 = orig.get_indep()
assert a0 == pytest.approx([100, 110] * 3)
assert a1 == pytest.approx([200, 200, 210, 210, 220, 220])
def test_rmf_get_indep():
"""Check this routine."""
ebins = np.asarray([3.0, 5., 8.0, 12.0])
rlo = ebins[:-1]
rhi = ebins[1:]
rmf = create_delta_rmf(rlo, rhi, e_min=rlo, e_max=rhi)
xlo, xhi = rmf.get_indep()
assert xlo == pytest.approx(rlo)
assert xhi == pytest.approx(rhi)
def test_rmf_get_dep_simple():
"""Check this routine."""
ebins = np.asarray([3.0, 5., 8.0, 12.0])
rlo = ebins[:-1]
rhi = ebins[1:]
rmf = create_delta_rmf(rlo, rhi, e_min=rlo, e_max=rhi)
# This is an "ideal" response.
#
y = rmf.get_dep()
assert y == pytest.approx([1, 1, 1])
def test_rmf_get_dep_complex():
"""Check this routine."""
ebins = np.asarray([1.1, 1.2, 1.5, 2.0, 3.0, 6.0])
rlo = ebins[:-1]
rhi = ebins[1:]
rmf = DataRMF("x", 5, rlo, rhi,
[1, 1, 1, 1, 1], # n_grp
[2, 3, 4, 4, 3], # f_chan
[2, 1, 1, 1, 2], # n_chan
[0.4, 0.6, 1.0, 1.0, 1.0, 0.2, 0.8], # matrix
e_min=rlo, e_max=rhi)
# This RMF only fills in channels 2 to 4.
#
y = rmf.get_dep()
assert y == pytest.approx([0, 0.4, 1.8, 2.8, 0])
---
title: 'MonoTools -- A Python package for planets of uncertain period'
tags:
- Python
- astronomy
- exoplanets
- transit
authors:
- name: Hugh P. Osborn
orcid: 0000-0002-4047-4724
affiliation: 1, 2
affiliations:
- name: NCCR/Planet S, Centre for Space and Habitability, University of Bern, Switzerland
index: 1
- name: Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA, USA
index: 2
date: 1 June 2021
bibliography: paper.bib
---
# Summary
The transit method has proved the most productive technique for detecting extrasolar planets, especially since the era of space-based photometric survey missions began with *CoRoT* [@auvergne2009corot] and *Kepler* [@borucki2010kepler] in the late 2000s.
This continued with *K2* [@howell2014k2] and *TESS* [@ricker2014transiting], and will extend into the 2030s with *PLATO* [@rauer2014plato].
Typically, the planets detected by these surveys show multiple consecutive transits.
This means planet candidates are most often detected through algorithms which search the frequency domain [e.g.; @kovacs2002box; @hippke2019optimized], vetted using metrics that require multiple detected transits [e.g.; @thompson2018planetary; @shallue2018identifying], and modelled (and sometimes statistically validated) using the assumption that the orbital period is well-constrained and approximated by a Gaussian distribution [e.g.; @eastman2013exofast; @morton2012efficient].
However, planet candidates continue to be found that do not show multiple consecutive transits - the single (or "Mono-") transits [e.g.; @wang2015planet; @osborn2016single; @gill2020ngts].
For these transit candidates - where orbital period is not a priori known from the detection - a special approach to exoplanet detection, vetting and modelling must be taken.
In this work, we detail ``MonoTools``, a python package capable of performing detection, vetting and modelling of Mono (and Duo) transit candidates.
First we will describe briefly what Mono (and Duo-) transits are, and the challenges associated with them.
Then in the following three sections we will outline the basis of the three parts of the code.
Following that, we will validate the code using limited examples of planets with known orbital periods.
# Mono- & Duo-transits
Mono-transits, which have also variously been called "single transits" or "orphan transits", are the transits of long-period planet candidates which occur only once during photometric observations.
In these cases, the orbital period is not directly evident as we do not have subsequent transits.
However, the planet's orbit can be constrained using the transit event, as we will explore later in this section.
Another special case is worth noting - that of two non-consecutive transits where intermediate transit events were not observed, therefore the orbital period is not directly constrained by the transit events.
Here we class these cases as "Duotransits", in contrast to "Monotransits" and "Multitransits", the latter being our short-hand for planet candidates which show multiple (and consecutive) transit events.
In these cases, the resulting planet candidate may have both a highly uncertain period ($20{\rm d}< P <750{\rm d}$ in the case of two transits separated by a 2-year gap) and yet a well-constrained array of possible periods to search ($P \in (t_{{\rm tr},2}-t_{{\rm tr},1})/\{1,2,3, \cdots, N_{\rm max}\}$).
``MonoTools`` is explicitly designed to deal with both the monotransit and duotransit cases.
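The duotransit alias set above is cheap to enumerate. A minimal sketch (the function name and the `p_min` cut-off are illustrative, not MonoTools' API):

```python
import numpy as np

def duo_period_aliases(t0, t1, p_min=20.0, n_max=None):
    """Candidate periods P = (t1 - t0) / n for n = 1, 2, ... down to p_min.

    Hypothetical helper illustrating the alias set described above;
    MonoTools' own `compute_duo_period_aliases` additionally vets each
    alias against the photometry.
    """
    p_max = t1 - t0
    if n_max is None:
        n_max = int(np.floor(p_max / p_min))
    ns = np.arange(1, max(n_max, 1) + 1)
    return p_max / ns

# Two transits separated by a two-year gap:
aliases = duo_period_aliases(1325.0, 2055.0, p_min=20.0)
```

The longest alias is always the transit separation itself; shorter aliases follow as integer divisors.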
Transit shape is universally governed by the same simple geometry [e.g. @mandel2002analytic, @seager2003unique].
As they must be strongly detected in a single transit, the typical per-transit signal-to-noise of monotransits is often higher than for multitransits, allowing their shape to be well-constrained.
This shape is important for detection, vetting and modelling of such planet candidates.
Transit duration is weakly dependent on planetary period ($t_D \propto P^{1/3}$), therefore long-period planets typically have longer-duration transits.
Indeed the longest-duration transit yet found belonged to a monotransit detected in K2 [@giles2018transiting] at 54 hours.
# Input Information
## Detrended Lightcurve
We built a custom `MonoTools.lightcurve` module to manipulate photometric lightcurves. This includes the ability to download all available Kepler, K2, CoRoT and TESS lightcurves for any target.
### Kepler
For stars in or near the Kepler field, we use `astroquery` to query the Kepler input catalogue (KIC) to assess if the star was observed.
The Kepler lightcurves (either 1 or 30-min cadence, depending on availability) were accessed on MAST and the `PDCSAP` flux (pre-search data conditioning simple aperture photometry) was used as the default flux. We masked points where the `quality` bits [1,2,3,4,6,7,8,9,13,15,16,17] were flagged.
### K2
As for Kepler, for any star near the ecliptic we use `astroquery` to check whether it has an EPIC (Ecliptic Plane Input Catalogue, @Huber2014) ID.
Unlike Kepler, K2 had a diversity of pipelines used to produce photometry.
`MonoTools.lightcurve` has the capability to access data from Everest [@luger2016], K2SFF [@Vanderburg2015], and PDC [].
Unless specified `MonoTools.lightcurve` will search in this order, which follows typical lightcurve precision, until data is found for a given EPIC.
### CoRoT
The CoRoT object search API available at the NASA Exoplanet Archive is used to both search for and then download CoRoT data.
Although three band photometry is available, we are typically most interested in the more precise monochrome lightcurve, so this is by default the flux parameter.
As CoRoT observed continuously from its low-earth orbit, the South Atlantic Anomaly produced clear flux bumps in the data due to excess cosmic rays, which needs to be removed from the data.
To do this, each flux point is compared with its 24 neighbours, and any point significantly (at $3.2\sigma$) higher than the neighbouring median flux in 80% of possible bins is added to a mask.
This is iterated (typically twice) to ensure an accurate standard deviation, free of the strongest anomalies, so that lower but still-significant outliers can also be removed.
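The neighbour-comparison mask might be sketched as follows; the exact windowing and binning in `MonoTools` may differ, and the function name is illustrative:

```python
import numpy as np

def saa_outlier_mask(flux, n_neigh=24, nsigma=3.2, frac=0.8, n_iter=2):
    """Flag points significantly above their neighbours' flux.

    Illustrative sketch of the SAA-bump rejection described above:
    compare each point to its n_neigh neighbours, flag it if it sits
    more than nsigma standard deviations above a large fraction of
    them, and iterate so the standard deviation excludes flagged points.
    """
    flux = np.asarray(flux, float)
    mask = np.zeros(flux.size, dtype=bool)
    half = n_neigh // 2
    for _ in range(n_iter):
        sigma = np.std(flux[~mask])
        for i in range(flux.size):
            lo, hi = max(0, i - half), min(flux.size, i + half + 1)
            neigh = np.delete(flux[lo:hi], i - lo)
            good = ~np.delete(mask[lo:hi], i - lo)
            if good.sum() == 0:
                continue
            # fraction of neighbour comparisons exceeding the threshold
            above = flux[i] - neigh[good] > nsigma * sigma
            if above.mean() > frac:
                mask[i] = True
    return mask
```

Applied to white noise with one injected cosmic-ray spike, only the spike should be flagged.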
### TESS
Given an RA/Dec, we search the TIC (TESS Input Catalogue; @Stassun2018) to find the TESS ID for a given target.
As for K2, there is not one unique pipeline for TESS data, especially for those targets not observed in 2-minute TPFs but only in the FFIs.
In this case, `MonoTools.lightcurve` will search MAST for a PDC (20s or 120s) lightcurve [@Jenkins], a SPOC-TESS (10 or 30min) lightcurve, a QLP (Quick-Look Pipeline) lightcurve [@Huang], and finally an Eleanor lightcurve [@Feinstein2019].
### Lightcurve functions
- `change_jd_base` - Change time base between modified julian dates (e.g. 2454833 to 2457000)
- `change_flx_system` - Change from e.g. photons to ppt or ppm.
- `bin` - Bin the flux timeseries
- `flatten` - Flatten either with splines or with out-of-box polynomials
- `make_cadmask` - Make a mask of cadences to exclude
- `make_fluxmask` - Make a flux mask by searching for and covering anomalies
- `save` - Save lightcurve as numpy csv and/or pickled python file.
- `plot` - Plot lightcurve timeseries in multi-row figure.
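As an illustration of the weighted binning performed by `bin`, a minimal sketch (the signature, default bin size, and return values are assumed for illustration, not the package's actual API):

```python
import numpy as np

def bin_lc(time, flux, flux_err, bin_size=30.0 / 1440.0):
    """Inverse-variance-weighted binning of a flux timeseries.

    Sketch in the spirit of `lightcurve.bin`; bins are fixed-width in
    time and empty bins are simply skipped.
    """
    edges = np.arange(time.min(), time.max() + bin_size, bin_size)
    idx = np.digitize(time, edges) - 1
    t_b, f_b, e_b = [], [], []
    for i in np.unique(idx):
        sel = idx == i
        w = 1.0 / flux_err[sel] ** 2
        t_b.append(np.average(time[sel], weights=w))
        f_b.append(np.average(flux[sel], weights=w))
        # standard error of the weighted mean
        e_b.append(np.sqrt(1.0 / w.sum()))
    return np.array(t_b), np.array(f_b), np.array(e_b)
```

For constant errors this reduces to a plain mean with errors shrinking as $1/\sqrt{n}$ per bin.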
## Stellar parameters
# Search
## ``MonoTools.search.MonoTransitSearch``
This function iteratively fits both a transit model and a polynomial to the lightcurve to detect monotransits in space telescope photometry, which we detail here.
We first create a series of reference transit models (default 5) to iterate across the lightcurve using ``exoplanet`` [@foreman2021exoplanet].
The derived stellar parameters are used, along with a default planet-to-star radius ratio of 0.1.
As input periods, log-spaced values between 0.4 and 2.5 times the duration of continuous observations are used (in the case of lightcurves with gaps longer than 5 days, the longest individual region is used).
The impact parameters are chosen such that the maximum-duration transit (with $P=2.5P_{\rm mono}$) is given $b=0.0$, while values linearly spaced up to $b=0.85$ produce successively shorter-duration transits.
500 in-transit steps are generated for each model with exposure times fixed to that of the lightcurve, and then interpolated.
This interpolated transit function forms the model which is minimized at each step in the lightcurve.
Each of the models (with differing transit durations) is then iterated over the lightcurve, with transit centres shifted by some small fraction of the transit duration at each iteration (default 5\%).
At each position, a 7-transit-duration-long window around the transit time is fitted to three different models which are minimised using ``scipy.optimize``. These models are:
- The interpolated transit model with varying depth (reparameterised to $\log{\rm depth}$ to avoid negative depths) plus a 1D gradient in the out-of-transit flux.
- A 3-degree polynomial.
- A "wavelet" model with the following equation, designed to fit dips due to stellar variability where $t_D$ is the transit duration (set, in our case, from the interpolated transit models), and $a$ is the depth. As with the transit, a gradient was also included to account for any non-linear out-of-eclipse flux trend.
$$
t' = 2\pi x / (2 t_D);
F = {a}(\exp{((-t'^2) / (2\pi^2))}\sin{(t'-\pi/2)})
$$
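In code, this wavelet model might look like the following sketch (the parameter names `t_cen` and `grad` are illustrative):

```python
import numpy as np

def wavelet_dip(t, t_cen, t_dur, depth, grad=0.0):
    """Variability 'wavelet' false-positive model from the equation above.

    The dip reaches -depth at t = t_cen; `grad` adds the out-of-transit
    flux gradient mentioned in the text.
    """
    tp = 2.0 * np.pi * (t - t_cen) / (2.0 * t_dur)
    dip = depth * np.exp(-tp**2 / (2.0 * np.pi**2)) * np.sin(tp - np.pi / 2.0)
    return dip + grad * (t - t_cen)
```

The Gaussian envelope suppresses the oscillation away from the centre, so the model mimics a single smooth dip rather than a periodic signal.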
<!---The shift of 0.1 is included to depress the median flux below 0.0, which is the case for dips which mimic a transit.
For each model, a likelihood is calculated and the function minimised. Bayesian Information Criterion and likelihood ratios are then calculated between each false positive model and the transit model. A transit SNR is also calculated assuming white noise.-->
For each of these three models, the minimised log likelihood is used to compute a Bayesian Information Criterion.
Significant detections are therefore found by choosing all transit model fits which have a log likelihood ratio with respect to non-transit models greater than some threshold (default: 4.5) as well as an SNR (calculated from the depth, transit duration, and out-of-transit RMS) greater than some SNR threshold (default: 6.0).
Multiple iterations (either in transit time or duration) may find the same significant dip.
In this case the minimum $\Delta$BIC between the transit \& polynomial models is used to choose the representative detection, and all nearby detections within $0.66 t_D$ of this candidate are removed to avoid double counting.
<!---TThis it iterated until no detections are classed as significant, or 8 high-SNR transit have been found.-->
## ``MonoTools.search.PeriodicPlanetSearch``
Many multitransiting planets produce high-SNR individual transits that would be detected using ``MonoTransitSearch``, therefore we also require a method of detecting periodic planets, as well as selecting between the monotransit and multitransit cases.
To search for periodic transits, we first flatten long-timescale variation from the lightcurve.
This is performed by fitting polynomials to sections of the lightcurve while also iteratively removing anomalies, as was adapted from [@armstrong2014abundance].
For each small step along the lightcurve, a wide window around (but not including) each step is used to fit a polynomial.
Points in this window that had already been identified as either outliers (i.e. from detrending) or within detected monotransits (from the Monotransit search), can be excluded from the polynomial fitting.
A log likelihood is computed on each of ten iterated polynomial fits, and each time a new pseudo-random mask is generated by excluding points whose scaled residual to the model is greater than a randomly-generated absolute normal distribution with unit standard deviation (thereby, on average, excluding points with offset residuals).
This best-fit polynomial is then subtracted from the small central masked region.
For Periodic Planet searches, a window with duration 11 times the likely maximum duration and a stepsize of 0.1 days are typically used to ensure transits do not influence the polynomial fit.
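A simplified sketch of this iterated fit-and-mask step (window sizes are condensed, pre-flagged points are not handled, and all names are illustrative):

```python
import numpy as np

def local_polyfit(t, f, t_step, window=5.0, degree=3, n_iter=10, seed=0):
    """Fit a polynomial to a window around `t_step`, excluding the
    central region, while pseudo-randomly rejecting outliers.

    Each iteration keeps points whose scaled residual is below a random
    |N(0,1)| draw, and a simple log-likelihood selects the best fit.
    """
    rng = np.random.default_rng(seed)
    sel = (np.abs(t - t_step) < window / 2.0) & (np.abs(t - t_step) > 0.25)
    tw, fw = t[sel], f[sel]
    mask = np.ones(tw.size, dtype=bool)
    best, best_ll = None, -np.inf
    for _ in range(n_iter):
        coeffs = np.polyfit(tw[mask], fw[mask], degree)
        resid = fw - np.polyval(coeffs, tw)
        sigma = np.std(resid[mask])
        ll = -0.5 * np.sum((resid[mask] / sigma) ** 2) - mask.sum() * np.log(sigma)
        if ll > best_ll:
            best, best_ll = coeffs, ll
        # exclude points whose scaled residual exceeds a random |N(0,1)| draw
        mask = np.abs(resid / sigma) < np.abs(rng.normal(0.0, 1.0, tw.size))
    return best

```

Evaluating the returned polynomial at `t_step` gives the local trend to subtract from the masked central region.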
``transit least squares`` [TLS; @hippke2019optimized] is used to perform the periodic planet search.
We iteratively run this TLS search and mask the detected transits until no more candidates are found above the SNR threshold (default: 6).
During the TLS search, we require a minimum of three transits.
This is preferred over a limit of two for a handful of reasons:
- The implementation of period-epoch values in ``transit least squares`` means that allowing two transits also lets monotransits be detected, thereby duplicating our effort with the above search technique.
- Multi-transit search is not strict about assigning only similar dips together and may connect either two monotransits, or the wrong two transits from a multi-transiting planet. Requiring three dips ensures the correct periodicity.
- Individual transits of the majority of good duo-transiting planets are likely to be detectable in their own right, as the individual transits have SNRs only $1/\sqrt{2}$ (30\%) lower than the combination of both events.
To make sure that at least 3 transits were detected, we exclude any candidates where one or two individual transits dominate the combined SNR (defined by computing an expected SNR from the sum of the individual transit SNRs and ensuring solid detections have ${\rm SNR}_i > 0.5 {\rm SNR}_{\rm expected}$).
If the highest periodogram peak in the TLS corresponds to a multi-transiting planet with an SNR higher than our threshold (default: 6), it is stored as a candidate.
In either case, if a signal with SNR higher than the threshold is found, we mask the detected transits by replacing all points associated with the transit with flux values randomly taken from the rest of the lightcurve.
The lightcurve is then re-scanned with `TLS` until no high-SNR candidates remain.
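The masking step can be sketched as follows (argument names are assumed; the in-transit window is padded slightly so transit shoulders are also replaced):

```python
import numpy as np

def mask_transits(t, f, t0, period, duration, seed=0):
    """Replace in-transit points with flux values drawn at random from
    the rest of the lightcurve, as described above, so that the
    lightcurve can be re-searched. Illustrative sketch.
    """
    rng = np.random.default_rng(seed)
    phase = (t - t0 + 0.5 * period) % period - 0.5 * period
    in_tr = np.abs(phase) < 0.55 * duration  # pad the mask slightly
    out = f.copy()
    out[in_tr] = rng.choice(f[~in_tr], size=in_tr.sum(), replace=True)
    return out

# Toy lightcurve with box transits at t0 = 1.0, P = 10.0:
t = np.arange(0.0, 27.0, 0.01)
f = np.ones_like(t)
f[np.abs((t - 1.0 + 5.0) % 10.0 - 5.0) < 0.1] -= 0.01
f_masked = mask_transits(t, f, t0=1.0, period=10.0, duration=0.2)
```

After masking, the dips are gone and the lightcurve statistics match the out-of-transit data.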
# Vetting
# Fitting
## Typical Monotransit fitting approaches
We have the following information available from a monotransit:
- Epoch of transit, $t_0$
- Transit duration, $t_D$
- Ingress & Egress duration $\tau$
- Transit depth, $\delta$
- In-transit shape
- Stellar parameters (e.g. stellar radius and density)
- Orbital period information from the lack of additional transits in the lightcurve. At the least we have a minimum possible period below which obvious transits would be observed, and at the most we may have a complex sequence of period islands.
- Additional planets
- Complementary observations (e.g. radial velocities)
From these observables, there are then second order parameters. These can either be derived from the observables or, more commonly, can be used directly in fitting as reparameterisations of the observed parameters:
- **Limb-darkening parameters** - These parameters due to the change in optical depth as a function of position on the stellar surface correspond to the in-transit shape and are also constrainable from the stellar parameters (as theoretical limb-darkening parameters can be calculated for a given star).
- **Radius ratio, $R_p/R_s$** - This is most directly linked to transit depth $\delta$ ($R_p/R_s \sim \sqrt{\delta}$), although limb-darkening and dilution can play effects here (as well as impact parameter in the case of a grazing transit/eclipse).
- **Impact parameter, $b$** - Impact parameter refers to the location of the transit chord between the centre and edge of the stellar disc. In the case of multitransiting planets, impact parameter constraints come from both the transit shape and the known orbital distance compared with the transit duration. With monotransits we do not have this luxury; only the transit shape (i.e. the radius ratio, ingress duration, and transit duration) constrains $b$.
These parameters can then in turn be linked to orbital parameters.
Typical transit modelling includes parameters for both transit shape (e.g. impact parameter, radius ratio, \& limb-darkening parameters), semi-major axis (typically parameterised as $a/R_s$), and orbital period.
Splitting orbital parameters into both $a/R_s$ & $P$ is superfluous for planets with uncertain periods.
Instead, the typical approach is to use only the transit shape parameters to constrain as few orbital parameters as possible.
For example, if the impact parameter can be constrained from the shape alone, then in combination with the transit duration we can estimate the velocity of a planet across the star.
In the case of a purely circular orbit, this velocity then directly produces a period.
Including samples from some eccentricity and $\omega$ (argument of periastron) distributions will then modify the resulting period.
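For the circular case, this velocity-to-period mapping can be written in closed form by combining $v t_D = 2R_s\sqrt{1-b^2}$, $v = 2\pi a/P$ and Kepler's third law. A sketch (the planet-radius term in the chord length is ignored, and the function name and arguments are illustrative):

```python
import numpy as np

G = 6.674e-11                    # m^3 kg^-1 s^-2
M_SUN, R_SUN = 1.989e30, 6.957e8  # kg, m

def circular_period(t_dur_days, b, m_star=1.0, r_star=1.0):
    """Orbital period (days) implied by a transit duration and impact
    parameter under the circular-orbit assumption described above.
    """
    t_d = t_dur_days * 86400.0
    half_chord = r_star * R_SUN * np.sqrt(1.0 - b**2)
    # v t_D = 2 * half_chord, v = 2 pi a / P, a^3 = G M P^2 / (4 pi^2)
    # => P = (pi t_D / half_chord)^3 * G M / (4 pi^2)
    p_sec = (np.pi * t_d / half_chord) ** 3 * G * m_star * M_SUN / (4.0 * np.pi**2)
    return p_sec / 86400.0
```

For an Earth-like central transit of the Sun ($t_D \approx 13$ h, $b=0$) this recovers roughly a one-year period.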
There have been numerous past efforts and theoretical works exploring fitting such transits:
- @yee2008characterizing provided a theoretical perspective on modelling such transits even before Kepler began finding them.
- @wang2015planet adapted a transit model which included both circular period and semi-major axis ($a/R_s$) without specific priors on these quantities.
- @foreman2016population included eccentricity and reparameterised the orbital semi-major axis \& inclination into two parameters ($\sqrt{a}\sin{i}$ & $\sqrt{a}\cos{i}$), with an effective prior on the period ($P^{-2/3}$).
- @osborn2016single fitted impact parameter and a scaled velocity parameter (which encapsulates a prior equating to $P^{-5/3}$) to predict planetary periods, with the same approach being used in @giles2018transiting.
- @kipping2018orbital provided a purely theoretical view of the correct prior to place on such analyses, combining the geometric transit probability, a window-effect prior, and the intrinsic period prior to produce a value of $P^{-8/3}$.
- @sandford2019estimation created the ``single`` python package which used Gaia parallaxes as a source of stellar density and allowed eccentricity to vary (with a period prior of $P^{-5/3}$)
- @becker2018discrete modelled the duotransit system HIP41378 using discrete period aliases and a $P^{-1}$ prior.
As can be seen from this list, the approach and prior vary widely between studies.
Some directly model orbital period while others reparameterise in terms of parameters closer to the observed transit information.
Some use eccentricity but most assume circular orbits.
Some use information from interior multitransiting planets (e.g. @becker2018discrete) but most treat only the outer planet individually.
## ``MonoTools.fit`` approach
The ``monoModel`` class of ``MonoTools.fit`` uses the ``exoplanet`` package [@foreman2021exoplanet] and ``PyMC3`` [@exoplanet:pymc3] to build a flexible transit model which can be easily and efficiently sampled using ``PyMC3``'s Hamiltonian Monte Carlo approach.
The key development of ``MonoTools`` over past monotransit and duotransit tools is that it natively supports Bayesian marginalisation over discontinuous period space.
In the case of duotransits, this means the multiple period aliases, while in the case of monotransits, this means the multiple period gaps that can occur due to non-continuous photometric coverage.
### Calculating Period Aliases & Gaps
For Duotransits, period is not a modelled quantity in `MonoTools.fit`; it is instead derived from modelling two transit centres $t_0$ and $t_1$, with the period being part of the set $P \in (t_{{\rm tr},2}-t_{{\rm tr},1})/\{1,2, \cdots, N\}$.
Potential aliases therefore lie between $P_{\rm max}=t_{{\rm tr},2}-t_{{\rm tr},1}$ and a minimum period $P_{\rm min}$, and are calculated by `compute_duo_period_aliases`.
To calculate $P_{\rm min}$, this function iterates over all potential aliases between $P_{\rm max}$ and 10d.
For each period, the data is phase-folded and the known transits masked.
Only period aliases for which there are no significant in-transit observations found elsewhere (defined as 15% of the central 90% of the transit duration) are kept in the model.
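This coverage check can be sketched as follows (the function name and thresholds mirror the description above but are illustrative, not MonoTools' API):

```python
import numpy as np

def alias_allowed(t, t0, t1, period, duration, frac=0.15):
    """True if a trial alias survives vetting: every predicted transit
    window (other than the two observed transits) must be covered by
    less than `frac` of the central 90% of the transit duration.
    """
    cadence = np.median(np.diff(t))
    n_lo = int(np.floor((t.min() - t0) / period))
    n_hi = int(np.ceil((t.max() - t0) / period))
    for n in range(n_lo, n_hi + 1):
        tc = t0 + n * period
        if abs(tc - t0) < 0.5 * duration or abs(tc - t1) < 0.5 * duration:
            continue  # skip the two observed transits
        in_win = np.abs(t - tc) < 0.45 * duration  # central 90%
        if in_win.sum() * cadence / (0.9 * duration) > frac:
            return False
    return True

# Two 10 d sectors separated by a gap; transits observed at t = 2 and t = 26:
t = np.concatenate([np.arange(0.0, 10.0, 0.01), np.arange(20.0, 30.0, 0.01)])
```

An alias whose intermediate transit lands in the data gap survives; one that would place a transit on observed data is rejected.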
For monotransits, a similar process is applied to find regions of the period parameter space that are rejected by photometry using `compute_period_gaps`.
First, an RMS timeseries of the light curve is computed.
This iterates through the flattened lightcurve in steps that are typically $1/7 t_D$ wide, computing a weighted average \& standard deviation of the photometry in a window $1 t_D$ wide.
The resulting timeseries can be converted into a theoretical transit SNR given the depth of the known transit.
This timeseries can be converted to a function of period space (i.e. by phase-folding around the known transit), with regions without photometric data being given SNR values of 0.0.
Period gaps can then be defined as regions in period space where the computed SNR is below some threshold value (default: $4\sigma$).
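A condensed sketch of this period-gap computation, using a single white-noise rms rather than the rolling rms timeseries (names are illustrative):

```python
import numpy as np

def monotransit_period_gaps(t, t0, duration, depth, rms, p_grid, snr_thresh=4.0):
    """For each trial period, estimate the SNR a second transit would
    have given the photometric coverage; trial periods where this SNR
    falls below `snr_thresh` are allowed period 'gaps'.
    """
    allowed = np.zeros(p_grid.size, dtype=bool)
    for i, p in enumerate(p_grid):
        phase = (t - t0 + 0.5 * p) % p - 0.5 * p
        # in-transit points, conservatively excluding a full duration
        # around the observed transit itself
        in_tr = (np.abs(phase) < 0.5 * duration) & (np.abs(t - t0) > duration)
        snr = depth / rms * np.sqrt(in_tr.sum())
        allowed[i] = snr < snr_thresh
    return allowed

# Two 10 d sectors with a single observed transit at t0 = 2:
t = np.concatenate([np.arange(0.0, 10.0, 0.01), np.arange(20.0, 30.0, 0.01)])
allowed = monotransit_period_gaps(t, 2.0, 0.3, 0.01, 0.001,
                                  np.array([9.0, 15.0, 40.0]))
```

A 9 d period would place further transits on observed data and is excluded; 15 d hides its next transit in the inter-sector gap, and 40 d pushes it beyond the baseline, so both remain allowed.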
### Marginalisation
Here we have some number of discrete regions in one parameter space that we want to sample.
Typically, samplers such as MCMC fail with multiple local minima, especially in the case where the gaps between regions are far wider than the regions themselves.
One way to avoid this problem is to treat each region of this discontinuous parameter space as separate and therefore sample each one individually.
We can then perform marginalisation over $N$ submodels, one for each of the $N$ period gaps.
By computing the log likelihood with respect to the data and the log prior of the parameters used, their sum gives us the probability of each submodel for a given step.
$p(\theta \mid y) = \sum_{i=1}^{N} p(\theta \mid y, M_{P=i}) \; p(M_{P=i} \mid y)$
The normalised probability for each period gap or alias are then the marginalised probabilities, and the marginalised parameters are simply the average of the submodel parameters weighted by this probability.
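Numerically, this weighted combination is a log-sum-exp; a minimal sketch:

```python
import numpy as np

def marginalise(logl, logp, params):
    """Combine per-submodel log-likelihoods and log-priors into
    normalised model probabilities, and average a per-submodel
    parameter with those weights. Sketch of the step described above.
    """
    logw = np.asarray(logl, float) + np.asarray(logp, float)
    logw -= logw.max()          # log-sum-exp shift for numerical stability
    w = np.exp(logw)
    w /= w.sum()
    return w, float(np.sum(w * np.asarray(params, float)))

# Three period aliases: two fit equally well, one is strongly disfavoured.
probs, p_marg = marginalise([-10.0, -10.0, -30.0],
                            np.log([0.5, 0.25, 0.25]),
                            [100.0, 50.0, 33.3])
```

The disfavoured alias receives negligible weight, so the marginal period is the prior-weighted average of the first two.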
However, if all of the parameters in the model are marginalised, this can effectively require a huge number of parameters - $N_{params} \times N_{models}$.
Therefore, to improve efficiency, we must choose which parameters to marginalise and which to fit only once.
In the case of a transit where we want to marginalise over multiple sub-models at different orbital periods, we only need marginalise over parameters that substantially vary as a function of orbital period.
Other parameters, such as transit time, limb darkening and radius ratio, can be fitted as global parameters.
In the simplest case, ``MonoTools`` allows some degree of flexibility in what parameters to marginalise using the `fit_params` and `marginal_params` lists as inputs to the `monoModel` class.
Period is always marginalised, but $t_D$ or $b$ can be as well.
However, this implementation of marginalisation can still be slow, and suffers from drawbacks.
Of $t_D$ and $b$, one must always be globally fitted and the other marginalised.
But their connection to the orbital period means that, across this marginalisation, there will always be many aliases which do not represent the data well.
For example, if a 15d planet with $b=0.2$ fits the transit well, a 150d planet with the same transit chord is sure to produce far too long a transit duration, and therefore a very low likelihood.
And, despite the fact that a 150d planet might well explain the data at higher impact parameters, the strong prior on period means this part of the parameter space is not explored and our 150d alias may be given an artificially low marginal probability.
### Marginalising with derived in-transit velocity
The solution to this problem is to not marginalise duration or impact parameter, which are both intimately connected to the observed transit shape.
By keeping all the parameters required to fit transit shape global, we can remove the need to perform likelihood calculations for each of the different period parameters, greatly improving speed and sampling efficiency.
Instead, we use the duration and impact parameter to derive an instantaneous velocity across the star, as was performed in @osborn2016single.
For each of the period aliases and the sampled stellar parameters, we can calculate a circular velocity.
The derived transit velocity as a ratio of circular velocity ($v/v_{\rm circ}$) for each period alias/gap then becomes the important quantity to marginalise.
Of course this is incompatible with the assumption of a circular orbit - we require an eccentricity distribution for this method to work.
As we are directly modelling the transit shape, the likelihood for each alias is identical (or at least negligibly different); all that is important is deriving a prior for each.
The key part of this prior comes from the assumed eccentricity distribution.
Observations of exoplanets show that low eccentricities are typically preferred over high ones.
Two distributions are typically used to quantify this - the beta distribution of @kipping2013parametrizing for typically single-planet RV systems, and the Rayleigh distribution of @van2015eccentricity for multi-planet transiting systems.
Another observational constraint on eccentricity comes from the distribution of perihelion distances - exoplanet orbits do not typically pass within $2R_s$, as within this radius tidal circularisation occurs.
In terms of semi-major axis, we include a sharp sigmoid prior at the threshold of $e = 1 - 2R_s/a$ which corresponds to this perihelion limit.
We can also include another upper limit on eccentricity here - stable exoplanetary systems require that a planet's orbit does not cross the orbit of interior candidates.
So in the case of transiting multi-planet systems we can use $e < 1 - R_s/a_{\rm inner}$.
For each given $v/v_{\rm circ}$ we must calculate the possible eccentricity and argument of periastron.
From @barnes2007effects (Eq 12) we know that a planet's azimuthal velocity can be defined as:
$\frac{v_f}{v_{circ}} = \frac{1+e\cos{f}}{\sqrt{1-e^2}}$ where $f_{\rm tr}=(\omega-\pi/2)$.
Rearranging for eccentricity gives two roots, although the second root is only applicable for cases where $v/v_{\rm circ} < 1.0$:
$e_1 = \left(-v^2 \sqrt{\frac{v^2 (\sin^2{\omega}+v^2-1)}{(\sin^2{\omega}+v^2)^2}} - \sin^2{\omega} \sqrt{\frac{v^2 (\sin^2{\omega}+v^2-1)}{(\sin^2{\omega}+v^2)^2}} - \sin{\omega}\right) \Big/ \left(\sin^2{\omega} + v^2\right)$
$e_2 = \left(v^2 \sqrt{\frac{v^2 (\sin^2{\omega}+v^2-1)}{(\sin^2{\omega}+v^2)^2}} + \sin^2{\omega} \sqrt{\frac{v^2 (\sin^2{\omega}+v^2-1)}{(\sin^2{\omega}+v^2)^2}} - \sin{\omega}\right) \Big/ \left(\sin^2{\omega} + v^2\right)$
These two roots make it impractical to solve for the probability of $v$ analytically, so we instead compute this numerically.
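The roots can nevertheless be checked numerically against the forward velocity equation; a sketch (using an algebraically simplified form of the expressions above):

```python
import numpy as np

def v_over_vcirc(e, omega):
    # azimuthal velocity at transit relative to circular; cos(f_tr) = sin(omega)
    return (1.0 + e * np.sin(omega)) / np.sqrt(1.0 - e**2)

def ecc_roots(v, omega):
    """Physical eccentricity roots for a given v/v_circ and omega.

    Simplified from the quadratic e^2(s^2+v^2) + 2se + (1-v^2) = 0
    implied by the velocity equation, with s = sin(omega).
    """
    s = np.sin(omega)
    disc = s**2 + v**2 - 1.0
    if disc < 0.0:
        return ()
    root = v * np.sqrt(disc)
    e1 = (-root - s) / (s**2 + v**2)
    e2 = (root - s) / (s**2 + v**2)
    return tuple(e for e in (e1, e2) if 0.0 <= e < 1.0)
```

Feeding a forward-computed velocity back through `ecc_roots` recovers the input eccentricity for both the fast ($v>1$) and slow ($v<1$) cases.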
Ultimately, we must derive the probability of each velocity ($v/v_{\rm circ}$, or $v$ hereafter) given that a transit occurs by marginalising over all compatible eccentricities and arguments of periastron:
$$ p(v \mid {\rm Tr}, e_{\rm max}) = \frac{\int_0^{2\pi} \int_{0}^{e_{\rm max}} p(e, \omega \mid v) p({\rm Tr} \mid e, \omega, v) de d\omega}{\int_{v_{\rm min}}^{v_{\rm max}} \int_0^{2\pi} \int_{0}^{e_{\rm max}} p(e, \omega \mid v) p({\rm Tr} \mid e, \omega, v) de d\omega dv} $$
Using the equations for $e$, we can feasibly generate eccentricity for each $v/v_{\rm circ}$ & $\omega$ sample.
As the geometric probability of transit is a function of the planet-star separation at transit, and eccentricity \& argument of periastron directly affect this quantity, we also calculate a geometric correction (i.e. the semi-major axis compared to the distance at transit):
$$ \frac{a}{d_{\rm Tr}} = \frac{1 + e \sin{\omega}}{1 - e^2}$$
Therefore the probability of each grid position is then determined by the probability derived from the selected prior distribution (i.e. `'kipping'`,`'vaneylen'` or `'uniform'`) multiplied by the geometric correction.
In the case that the derived eccentricity is above $e_{\rm max}$, a log prior of -500 is added.
As all velocities here are normalised to circular velocities and the joint argument of periastron -- eccentricity distributions remain constant with period, these calculations should remain constant for any period across all model samples.
However, the maximum permitted eccentricity ($e_{\rm max}$) can vary for each sample due to e.g. the sampled stellar radius and parameters for the orbits of interior planets.
Therefore, we need a way to compute on-the-fly a prior probability for a particular velocity and $e_{\rm max}$, as well as a marginal eccentricity and argument of periastron.
We choose to generate a 2D interpolation function for each eccentricity prior distribution.
Effectively the equation required to produce the marginalised probability distribution for $v$ (given some maximum eccentricity and the fact that a transit occurs) is:
$$ p(v \mid {\rm Tr}, e_{\rm max}) = \int_0^{2\pi} \int_{0}^{e_{\rm max}} p(e, \omega \mid v, e_{\rm max}) p({\rm Tr} \mid e, \omega, v) de d\omega$$
Where, for example in the case of the @kipping2013parametrizing $\beta$ distribution where $\alpha=0.867$ and $\beta=3.03$, the probability on $p(e \mid v, e_{\rm max})$ (and, therefore, $p(e,\omega \mid v, e_{\rm max})$ as $\omega$ is uniform) is:
$$p(e \mid v, e_{\rm max}) = \begin{cases}
0 & \text{if } e > e_{\rm max} \\
\frac{e^{\alpha - 1}(1-e)^{\beta-1}}{{\rm B}(\alpha,\beta)} & \text{otherwise.}
\end{cases}$$
By generating a grid of $v/v_{\rm circ}$ (16000 steps flat in $\log_{10}$ between 0.07 and 14), $\omega$ (8000 steps flat between 0 \& $2\pi$) and $e_{\rm max}$ (96 steps sampled using $e_{\rm max} \in 1 - 10^{\{-3,...,-0.05\}}$), we can derive eccentricities for each point in the $\omega$ - $v$ plane and therefore compute marginalised probabilities for each point on the $v$ - $e_{\rm max}$ plane.
For each of the $e_{\rm max}$ steps, the probabilities for $v$ must sum to 1.0; we therefore renormalise the above equation using the integral over all possible velocities as a normalisation factor:
$$ \int_{v_{\rm min}}^{v_{\rm max}} \int_0^{2\pi} \int_{1\times10^-4}^{e_{\rm max}} p(e,\omega \mid v) p({\rm Tr} \mid e, \omega, \log{v}) de d\omega dv $$
The resulting $v$ - $e_{\rm max}$ distributions can be seen in Figure XX.
### Choice of transit shape parameters
Transit duration and impact parameter can both
### Treatment of limb darkening parameters
Limb-darkening parameters define the relative variation in surface brightness of a star from centre to limb, and therefore govern the shape of the transit between ingress and egress.
For high-SNR transits where other parameters including orbital period are well-constrained, it is typically preferred to fit for limb-darkening parameters directly from the transit.
However, for analyses where we wish to use the transit shape to constrain other parameters, it instead makes sense to constrain the limb-darkening using priors derived from theoretical predictions.
For this reason, in the default case, `MonoTools.fit` constrains the limb darkening parameters using the derived stellar parameters (although it is also possible to fit them freely, without informative priors).
Tables of theoretical limb darkening parameters typically produce grids of each parameter with respect to stellar effective temperature, $\log{\rm g}$, metallicity and micro-turbulence.
We select the nearest unique metallicity value (default \[Fe/H\] = 0.0) and micro-turbulence (default 1 km/s), and perform a 2D interpolation over the Teff-logg values for each of the two quadratic limb darkening parameters.
We then generate a sample of stellar Teff & logg values from the stellar parameters, using minimum uncertainties of 100 K and 0.1 dex to allow for potential systematic errors in the stellar parameters.
The interpolation functions then produce samples for each of the two quadratic parameters, from which we construct a normally-distributed prior.
In both cases, the Kipping reparameterisation of limb darkening [@kipping2013] is used to allow for efficient & physically-motivated sampling.
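As a rough sketch of the interpolation-and-sampling step described above (the table values below are made up for illustration, not a real limb-darkening grid):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical table of the first quadratic LD coefficient u1 on a Teff-logg grid
teff_nodes = np.array([4000.0, 5000.0, 6000.0, 7000.0])
logg_nodes = np.array([4.0, 4.5, 5.0])
u1_table = np.array([
    [0.60, 0.58, 0.56],
    [0.50, 0.48, 0.46],
    [0.40, 0.38, 0.36],
    [0.32, 0.30, 0.28],
])
interp_u1 = RegularGridInterpolator((teff_nodes, logg_nodes), u1_table)

# Sample stellar parameters with floor uncertainties of 100 K and 0.1 dex,
# clipped to the grid bounds so the interpolator stays in range
rng = np.random.default_rng(42)
teff_samples = np.clip(rng.normal(5500.0, max(80.0, 100.0), 2000), 4000.0, 7000.0)
logg_samples = np.clip(rng.normal(4.4, max(0.05, 0.1), 2000), 4.0, 5.0)

# Propagate through the interpolator and summarise as a normal prior
u1_samples = interp_u1(np.column_stack([teff_samples, logg_samples]))
u1_mean, u1_sd = u1_samples.mean(), u1_samples.std()
```

The resulting mean and standard deviation define the normally-distributed prior placed on each limb-darkening coefficient.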
### Treatment of period gaps & aliases
### Eccentricity distribution marginalisation
### Other photometric fitting parameters
Gaussian processes
Jitter
Dilution may be important, especially when unresolved stellar companions are detected through high-resolution imaging.
To allow for this, we include the ability to include either constrained or unconstrained dilution from a stellar companion.
In the case of unconstrained dilution, we allow the magnitude difference of the companion to vary from -10 to 10, while in the constrained option the user can define mean and standard deviations for each observed band.
This is converted from $\Delta {\rm mag}$ to a correction factor relating the undiluted (target) flux to the total flux, $F_{\rm targ}/F_{\rm total} = 2.511^{-\Delta {\rm mag}}$.
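As a hedged sketch of a magnitude-to-flux conversion (assuming the usual two-star convention where $\Delta{\rm mag}$ is the companion-minus-target magnitude difference, so the companion contributes a flux ratio of $10^{-0.4\Delta{\rm mag}} \approx 2.512^{-\Delta{\rm mag}}$; `target_flux_fraction` is a hypothetical helper, not the MonoTools API):

```python
def target_flux_fraction(delta_mag):
    """Fraction of the total flux contributed by the target star.

    A companion fainter by delta_mag contributes a flux ratio of
    10**(-0.4 * delta_mag) relative to the target.
    """
    companion_ratio = 10.0 ** (-0.4 * delta_mag)
    return 1.0 / (1.0 + companion_ratio)


# An equally bright companion dilutes the target to half the total flux
print(target_flux_fraction(0.0))  # 0.5
print(target_flux_fraction(5.0))
```

For faint companions the dilution correction becomes negligible, approaching 1 as $\Delta{\rm mag}$ grows.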
## Radial Velocity modelling
###
### Derived semi-amplitude
###
# Validation
# Installation
# Acknowledgements
# References
|
{
"filename": "run_compiled_diffusion_model_hotswap.py",
"repo_name": "huggingface/peft",
"repo_path": "peft_extracted/peft-main/tests/run_compiled_diffusion_model_hotswap.py",
"type": "Python"
}
|
# Copyright 2024-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This is a standalone script that checks that we can hotswap a LoRA adapter on a compiled model
By itself, this script is not super interesting but when we collect the compile logs, we can check that hotswapping
does not trigger recompilation. This is done in the TestLoraHotSwapping class in test_pipelines.py.
Running this script with `check_hotswap(False)` will load the LoRA adapter without hotswapping, which will result in
recompilation.
There is an equivalent test in diffusers, see https://github.com/huggingface/diffusers/pull/9453.
"""
import os
import sys
import tempfile
import torch
from diffusers import StableDiffusionPipeline, UNet2DConditionModel
from diffusers.utils.testing_utils import floats_tensor
from peft import LoraConfig, get_peft_model_state_dict
from peft.tuners.tuners_utils import BaseTunerLayer
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
def get_small_unet():
# from diffusers UNet2DConditionModelTests
# TODO: This appears not to work yet in full pipeline context, see:
# https://github.com/huggingface/diffusers/pull/9453#issuecomment-2418508871
torch.manual_seed(0)
init_dict = {
"block_out_channels": (4, 8),
"norm_num_groups": 4,
"down_block_types": ("CrossAttnDownBlock2D", "DownBlock2D"),
"up_block_types": ("UpBlock2D", "CrossAttnUpBlock2D"),
"cross_attention_dim": 8,
"attention_head_dim": 2,
"out_channels": 4,
"in_channels": 4,
"layers_per_block": 1,
"sample_size": 16,
}
model = UNet2DConditionModel(**init_dict)
return model.to(torch_device)
def get_unet_lora_config():
# from diffusers test_models_unet_2d_condition.py
rank = 4
unet_lora_config = LoraConfig(
r=rank,
lora_alpha=rank,
target_modules=["to_q", "to_k", "to_v", "to_out.0"],
init_lora_weights=False,
use_dora=False,
)
return unet_lora_config
def get_dummy_input():
# from UNet2DConditionModelTests
batch_size = 4
num_channels = 4
sizes = (16, 16)
noise = floats_tensor((batch_size, num_channels) + sizes).to(torch_device)
time_step = torch.tensor([10]).to(torch_device)
encoder_hidden_states = floats_tensor((batch_size, 4, 8)).to(torch_device)
return {"sample": noise, "timestep": time_step, "encoder_hidden_states": encoder_hidden_states}
def get_lora_state_dicts(modules_to_save):
state_dicts = {}
for module_name, module in modules_to_save.items():
if module is not None:
state_dicts[f"{module_name}_lora_layers"] = get_peft_model_state_dict(module)
return state_dicts
def set_lora_device(model, adapter_names, device):
# copied from diffusers LoraBaseMixin.set_lora_device
for module in model.modules():
if isinstance(module, BaseTunerLayer):
for adapter_name in adapter_names:
module.lora_A[adapter_name].to(device)
module.lora_B[adapter_name].to(device)
# this is a param, not a module, so device placement is not in-place -> re-assign
if hasattr(module, "lora_magnitude_vector") and module.lora_magnitude_vector is not None:
if adapter_name in module.lora_magnitude_vector:
module.lora_magnitude_vector[adapter_name] = module.lora_magnitude_vector[adapter_name].to(
device
)
def check_hotswap(do_hotswap):
dummy_input = get_dummy_input()
unet = get_small_unet()
lora_config = get_unet_lora_config()
unet.add_adapter(lora_config)
with tempfile.TemporaryDirectory() as tmp_dirname:
lora_state_dicts = get_lora_state_dicts({"unet": unet})
StableDiffusionPipeline.save_lora_weights(
save_directory=tmp_dirname, safe_serialization=True, **lora_state_dicts
)
del unet
unet = get_small_unet()
file_name = os.path.join(tmp_dirname, "pytorch_lora_weights.safetensors")
unet.load_attn_procs(file_name)
unet = torch.compile(unet, mode="reduce-overhead")
unet(**dummy_input)["sample"]
if do_hotswap:
unet.load_attn_procs(file_name, adapter_name="default_0", hotswap=True)
else:
# offloading the old and loading the new adapter will result in recompilation
set_lora_device(unet, adapter_names=["default_0"], device="cpu")
unet.load_attn_procs(file_name, adapter_name="other_name", hotswap=False)
# we need to call forward to potentially trigger recompilation
unet(**dummy_input)["sample"]
if __name__ == "__main__":
# check_hotswap(False) will trigger recompilation
check_hotswap(do_hotswap=sys.argv[1] == "1")
|
{
"filename": "test_blocks.py",
"repo_name": "lgrcia/prose",
"repo_path": "prose_extracted/prose-main/tests/test_blocks.py",
"type": "Python"
}
|
import inspect
import sys
import numpy as np
import pytest
from prose import Block, Sequence, blocks, example_image
from prose.blocks.centroids import _PhotutilsCentroid
from prose.blocks.detection import _SourceDetection
from prose.blocks.psf import _PSFModelBase
image = blocks.PointSourceDetection()(example_image())
image_psf = image.copy()
Sequence([blocks.Cutouts(), blocks.MedianEPSF()]).run(image_psf)
def classes(module, subclasses):
    class_members = inspect.getmembers(sys.modules[module], inspect.isclass)
    def mask(n, c):
        return issubclass(c, subclasses) and n[0] != "_"
    return [c for n, c in class_members if mask(n, c)]
@pytest.mark.parametrize("block", classes("prose.blocks.detection", _SourceDetection))
def test_detection_blocks(block):
block().run(image)
@pytest.mark.parametrize("block", classes("prose.blocks.centroids", _PhotutilsCentroid))
def test_centroids_blocks(block):
block().run(image)
def test_centroid_ballet():
tf = pytest.importorskip("tensorflow")
from prose.blocks.centroids import CentroidBallet
CentroidBallet().run(image_psf)
@pytest.mark.parametrize("block", classes("prose.blocks.psf", _PSFModelBase))
def test_psf_blocks(block):
if "JAX" in block.__name__:
pytest.importorskip("jax")
block().run(image_psf)
@pytest.mark.parametrize("d", [10, 50, 80, 100])
def test_sourcedetection_min_separation(d):
from prose.blocks.detection import PointSourceDetection
PointSourceDetection(min_separation=d).run(image)
distances = np.linalg.norm(
image.sources.coords - image.sources.coords[:, None], axis=-1
)
distances = np.where(np.eye(distances.shape[0]).astype(bool), np.nan, distances)
distances = np.nanmin(distances, 0)
np.testing.assert_allclose(distances > d, True)
def test_Trim():
blocks.Trim(30).run(image.copy())
def test_Cutouts():
im = blocks.Cutouts()(image)
assert len(im._sources) == len(im.cutouts)
def test_ComputeTransform():
from prose.blocks.geometry import ComputeTransform
im = ComputeTransform(image.copy())(image.copy())
assert np.allclose(im.transform, np.eye(3))
def test_MedianPSF():
im = image.copy()
blocks.Cutouts().run(im)
blocks.MedianEPSF().run(im)
def test_AlignReferenceSources():
im = image.copy()
blocks.ComputeTransformTwirl(image.copy()).run(im)
blocks.AlignReferenceSources(image.copy())(im)
def test_Get():
image = example_image()
image.a = 3
image.b = 6
image.header = {"C": 42}
g = blocks.Get("a", "b", "keyword:C", arrays=False)
g(image)
assert g.values == {"a": [3], "b": [6], "c": [42]}
def test_peaks():
im = image.copy()
blocks.Peaks().run(im)
def test_LimitSources():
from prose.core.source import PointSource, Sources
im = image.copy()
im.sources = Sources([PointSource(0, 0) for _ in range(2)])
blocks.LimitSources().run(im)
assert im.discard == True
def test_Del():
im = image.copy()
im.a = 3
blocks.Del("a", "data").run(im)
assert not "a" in im.computed
assert im.data is None
def test_Apply():
im = image.copy()
im.a = 3
def f(im):
im.a += 1
blocks.Apply(f).run(im)
assert im.a == 4
def test_Calibration_with_arrays():
from prose.blocks import Calibration
im = image.copy()
bias = np.ones_like(im.data) * 1
dark = np.ones_like(im.data)
flat = np.ones_like(im.data) * 0.5
flat /= np.mean(flat)
observed_flat = flat + bias + dark
observed_dark = dark + bias
# None
expected = im.data
Calibration().run(im)
np.testing.assert_allclose(im.data, expected)
# bias only
im = image.copy()
im.data = im.data + bias
expected = im.data - bias
Calibration(bias=bias).run(im)
np.testing.assert_allclose(im.data, expected)
# dark and bias only
im = image.copy()
im.data = im.data + bias + dark
expected = im.data - bias - dark
Calibration(darks=observed_dark, bias=bias).run(im)
np.testing.assert_allclose(im.data, expected)
# flat only
im = image.copy()
im.data = im.data * flat
expected = im.data / flat
Calibration(flats=flat).run(im)
np.testing.assert_allclose(im.data, expected)
# flat and bias only
im = image.copy()
im.data = (im.data * flat) + bias
expected = (im.data - bias) / flat
Calibration(bias=bias, flats=observed_flat).run(im)
np.testing.assert_allclose(im.data, expected)
# flat, dark and bias
im = image.copy()
im.data = (im.data * flat) + bias + dark
expected = (im.data - bias - dark) / flat
Calibration(bias=bias, flats=observed_flat, darks=observed_dark).run(im)
np.testing.assert_allclose(im.data, expected)
# empty lists and ndarray
# this reproduce an observed bug
im = image.copy()
im.data = im.data + dark
expected = im.data - dark
Calibration(bias=np.array([], dtype=object), flats=[], darks=observed_dark).run(im)
def test_Calibration_with_files(tmp_path):
from prose.blocks import Calibration
im = image.copy()
calib = image.copy()
calib_path = tmp_path / "calib.fits"
calib.writeto(calib_path)
Calibration(bias=calib_path).run(im)
Calibration(bias=[calib_path]).run(im)
Calibration(bias=np.array([calib_path])).run(im)
def test_SortSources():
im = image_psf.copy()
blocks.SortSources().run(im)
peaks = [s.peak for s in im.sources]
assert np.all(peaks[:-1] >= peaks[1:])
def test_require():
im = image.copy()
im.a = 0
class Testa(Block):
def __init__(self, name=None):
super().__init__(name=name, read=["a"])
def run(self, image):
pass
class Testab(Block):
def __init__(self, name=None):
super().__init__(name=name, read=["a", "b"])
def run(self, image):
pass
Sequence([Testa()]).run(im)
with pytest.raises(AttributeError, match="attribute 'b'"):
Sequence([Testab()]).run(im)
def test_require_sources():
im = image.copy()
im.sources = None
with pytest.raises(AttributeError, match="sources"):
Sequence([blocks.Cutouts()]).run(im)
def test_Video(tmp_path):
from prose.blocks import Video
im = image.copy()
im.sources = None
Sequence([Video(tmp_path / "video.gif", fps=3)]).run([im, im, im])
def test_VideoPlot(tmp_path):
from prose.blocks import VideoPlot
def plot(image):
image.show()
im = image.copy()
Sequence([VideoPlot(plot, tmp_path / "video.gif", fps=3)]).run([im, im, im])
|
{
"filename": "HI2Astro.py",
"repo_name": "PabloVD/21cmDeepLearning",
"repo_path": "21cmDeepLearning_extracted/21cmDeepLearning-master/HI2Astro.py",
"type": "Python"
}
|
#----------------------------------------------------------------
# CNN to predict the astrophysical parameters from a 21 cm field
# It can employ the encoder of the pre-trained U-Net
# Author: Pablo Villanueva Domingo
# Last update: 25/6/20
#----------------------------------------------------------------
import time, datetime, psutil
from Source.functions import *
from Source.nets import Encoder, AstroNet
from Source.plot_routines import loss_trend, param_plot
#from torchsummary import summary
#--- MAIN ---#
epochs_astro = 10
time_ini = time.time()
# Set to 1 to load the weights of the encoder of the U-Net if it has already been pre-trained with HI2DM.py
# It allows exploring how much astrophysical information the contracting part of the U-Net carries
# These layers are frozen and are not trained again
pretrained_encoder = 0
# Make some directories if they don't exist yet
if not os.path.exists(path+"Plots"):
os.mkdir(path+"Plots")
if not os.path.exists(path+"Models"):
os.mkdir(path+"Models")
if not os.path.exists(path_outputs):
os.mkdir(path_outputs)
if not os.path.exists(path_outputs+"Outputs"+sufix):
os.mkdir(path_outputs+"Outputs"+sufix)
# Load fields and convert to tensors
print("Loading dataset...")
inputs = load_field("dTb")
inputs = normalize_field(inputs)
tensor_x = torch.from_numpy(inputs)
# Load astrophysical parameters, already normalized
params = np.load(path_fields+"params_sims_"+str(n_sims)+"_z_"+redshifts[0]+"_data_aug_"+str(data_aug)+".npy")
tensor_par = torch.from_numpy(params)
print("Shape data: ",tensor_x.shape,tensor_par.shape)
totaldata = utils.TensorDataset(tensor_x.float(),tensor_par.float())
# Split training and validation sets
train_loader, valid_loader, test_loader = split_datasets(totaldata)
astromodel = AstroNet()
#summary(model,(1,DIM,DIM))
lossfunc = nn.MSELoss()
if pretrained_encoder:
print("Loading pretrained encoder weights from the U-Net")
best_Unet_model = bestmodel
if train_on_gpu:
my_dict = torch.load(best_Unet_model,map_location=torch.device('cuda'))
else:
my_dict = torch.load(best_Unet_model,map_location=torch.device('cpu'))
astro_state = astromodel.state_dict()
    # Copy the weights of the pretrained encoder into the corresponding layers of AstroNet
for (name1, param), (name2, param2) in zip(my_dict.items(), astro_state.items()):
if name1 not in name2:
continue
param = param.data
astro_state[name2].copy_(param)
encoder = Encoder()
for name, child in encoder.named_children():
layer = getattr(astromodel.encoder, name)
for param in layer.parameters():
param.requires_grad = False
if train_on_gpu:
astromodel.cuda()
network_total_params = sum(p.numel() for p in astromodel.parameters())
print('Total number of parameters in the model = %d'%network_total_params)
print("Data loaded. Time elapsed:",datetime.timedelta(seconds=time.time()-time_ini))
# Print the memory (in GB) being used now:
process = psutil.Process()
print("Memory being used (GB):",process.memory_info().rss/1.e9)
# Train the net
if training:
print("Learning...")
train_losses, valid_losses = learning_loop(astromodel,train_loader,valid_loader,lossfunc,n_epochs=epochs_astro,name_model=bestmodel_astro)
# Plot the validation/training trend
loss_trend(train_losses,valid_losses,astro=True)
# Test the net
print("Testing...")
true_targets, predicted_targets, test_loss = testing_loop(astromodel,test_loader,lossfunc,name_model=bestmodel_astro,export_map=0)
np.savetxt(path_outputs+"AstroParams"+sufix+".dat",np.transpose([true_targets[:,0],true_targets[:,1],true_targets[:,2],predicted_targets[:,0],predicted_targets[:,1],predicted_targets[:,2]]))
# Plot true vs predicted params
param_plot(true_targets,predicted_targets,test_loss)
print("Finished. Time elapsed:",datetime.timedelta(seconds=time.time()-time_ini))
|
{
"filename": "_text.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/validators/scatterternary/_text.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class TextValidator(_plotly_utils.basevalidators.StringValidator):
def __init__(self, plotly_name="text", parent_name="scatterternary", **kwargs):
super(TextValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
array_ok=kwargs.pop("array_ok", True),
edit_type=kwargs.pop("edit_type", "calc"),
role=kwargs.pop("role", "info"),
**kwargs
)
|
{
"filename": "selfcal.py",
"repo_name": "mpi-astronomy/snowblind",
"repo_path": "snowblind_extracted/snowblind-main/src/snowblind/selfcal.py",
"type": "Python"
}
|
from os.path import commonprefix
import warnings
from astropy.stats import sigma_clipped_stats
import numpy as np
from jwst import datamodels
from jwst.stpipe import Step
OPEN = datamodels.dqflags.pixel["OPEN"]
ADJ_OPEN = datamodels.dqflags.pixel["ADJ_OPEN"]
DO_NOT_USE = datamodels.dqflags.pixel["DO_NOT_USE"]
class OpenPixelStep(Step):
"""Flags cross-shaped and hot pixel defects caused by open pixels in NIR detectors
Input is an assocation (or glob pattern of files) of all images in visit or program ID
on which ones wishes to do a self-cal. These are split into separate detector stacks,
and then each image in the stack is operated on by the median of the images in that
stack. So input is N images, and output is N images, much like the first steps of
stage 3 pipelines such as tweakreg, skymatch and outlier detection.
Like outlier_detection, the input and output science images are the same, and only the
data quality (DQ) array has new pixels flagged as DO_NOT_USE and ADJ_OPEN.
This should be run after flatfielding is finished in image2 pipeline. It is fine to
insert it anywhere in the level3 pipeline before resample.
"""
spec = """
threshold = float(default=3.0) # threshold in sigma to flag hot pixels above local background
save_mask = boolean(default=False) # write out per-detector bad-pixel mask and median
output_use_model = boolean(default=True)
output_use_index = boolean(default=False)
flag_low_signal_pix = boolean(default=False)
"""
class_alias = "open_pixel"
def process(self, input_data):
with datamodels.open(input_data) as images:
# Sort into a dict of lists, grouped by detector
images_grouped_by_detector = {}
detector_names = set([image.meta.instrument.detector for image in images])
for detector in detector_names:
det_list = [i for i in images if i.meta.instrument.detector == detector]
images_grouped_by_detector.update({detector: det_list})
results = images.copy()
# For each detector represented in the association, compute a hot pixel mask and
# np.bitwise_or() it with each input image for that detector
for detector, models in images_grouped_by_detector.items():
image_stack = self.get_selfcal_stack(models)
self.log.info(f"Creating mask for detector {detector}")
mask, median = self.create_hotpixel_mask(image_stack)
self.log.info(f"Flagged {mask.sum()} pixels with {self.threshold} sigma")
if self.save_mask:
filename_prefix = f"{commonprefix([f.meta.filename for f in models])}_{detector.lower()}_{self.class_alias}"
mask_model = datamodels.MaskModel(data=mask.astype(np.uint8))
mask_model.meta.filename = f"{filename_prefix}.fits"
self.save_model(mask_model, suffix="mask", force=True)
median_model = datamodels.ImageModel(data=median)
median_model.meta.filename = f"{filename_prefix}.fits"
self.save_model(median_model, suffix="median", force=True)
for result in results:
if result.meta.instrument.detector == detector:
result.dq |= (mask * (DO_NOT_USE | ADJ_OPEN)).astype(np.uint32)
return results
def get_selfcal_stack(self, images):
"""
Get a stack of exposures taken with same detector as the data
"""
stack = []
for model in images:
stack.append(model.data)
return np.array(stack)
def create_hotpixel_mask(self, image_stack):
# Median collapse the stack of images
with warnings.catch_warnings():
warnings.filterwarnings(action="ignore", message="All-NaN slice encountered")
median2d = np.nanmedian(image_stack, axis=0)
# Clip to threshold
with warnings.catch_warnings():
warnings.filterwarnings(action="ignore",
message="Input data contains invalid values")
_, med, std = sigma_clipped_stats(median2d, mask_value=np.nan)
mask = median2d > med + self.threshold * std
if self.flag_low_signal_pix:
self.log.info(f"Flagging pixels {self.threshold}-sigma below median as well as those above.")
mask |= median2d < med - self.threshold * std
return mask, median2d
|
{
"filename": "spherical_rht.py",
"repo_name": "georgehalal/sphericalrht",
"repo_path": "sphericalrht_extracted/sphericalrht-main/src/sphericalrht/spherical_rht.py",
"type": "Python"
}
|
# -*- coding: utf-8 -*-
"""
Spherical Rolling Hough Transform
A fast, efficient implementation of the Rolling Hough
Transform using spherical harmonic convolutions to
perform the algorithm directly on the sphere.
Classes:
StokesQU: Handling Stokes linear polarization maps (only use
if necessary).
CubeAndStokes: Define an instance of this class with input
parameters, then use the build_and_save method to run
        the algorithm.
Author: George Halal
Email: halalgeorge@gmail.com
Date: 04/27/2023
Version: 2.0.1
"""
__author__ = "George Halal"
__email__ = "halalgeorge@gmail.com"
__version__ = "2.0.1"
__all__ = ["StokesQU", "CubeAndStokes"]
import os
import time
from collections import deque
import logging
from typing import Union, Tuple, Optional
from dataclasses import dataclass
import psutil
import healpy as hp
import numpy as np
import ducc0
import h5py
from .utils import set_logger
@dataclass(order=True)
class StokesQU:
"""Handle Stokes Q and U linear polarization maps.
Methods:
update: sum the contribution from different orientations
normalize_and_weight: normalize and weight by the intensity
"""
def __init__(self, npix: int) -> None:
"""Initialize linear Stokes maps.
Parameters:
npix (int): Number of map pixels
"""
assert isinstance(npix, int), (
            "Number of pixels should be an integer")
self.stokes_q = np.zeros((npix))
self.stokes_u = np.zeros((npix))
return None
def update(self, spherical_rht_cube: np.ndarray,
orient_angs: np.ndarray) -> None:
"""Sum the contribution from different orientations.
Apply the Q/U formula from Clark & Hensley 2019.
Parameters:
spherical_rht_cube (np.ndarray((Norientations, Npix))):
convolution result over different orientations
orient_angs (np.ndarray((Norientations,))): orientation
angles corresponding to the cube
"""
assert spherical_rht_cube.shape[0] == orient_angs.shape[0], (
"One of the inputs has the wrong dimensions")
self.stokes_q += spherical_rht_cube.T.dot(np.cos(2. * orient_angs))
self.stokes_u += spherical_rht_cube.T.dot(np.sin(2. * orient_angs))
return None
def normalize_and_weight(self, norm: np.ndarray,
weighting: np.ndarray) -> None:
"""Normalize and weight by the intensity.
The maps are normalized such that the integral over angles = 1.
They are then weighted by the intensity map.
Parameters:
norm (np.ndarray((Npix,))): normalization
weighting (np.ndarray((Npix,))): weighting map
"""
assert norm.shape[0] == weighting.shape[0], (
"One of the inputs has the wrong dimensions")
self.stokes_q[norm == 0] = 0.
self.stokes_u[norm == 0] = 0.
norm[norm == 0] = 1.
self.stokes_q = -weighting * np.divide(self.stokes_q, norm)
self.stokes_u = weighting * np.divide(self.stokes_u, norm)
return None
@dataclass
class CubeAndStokes:
"""The main class of the sphericalrht algorithm.
Methods:
unsharp_mask: high-pass filter the input map and make it binary.
prep_intensity: process the input map and optionally calculate
its alms.
make_ker: select the pixels defining the convolution kernel.
get_ker_alm: define the convolution kernel as a stick and
calculate its alms.
get_ptg: prepare pointing tensor.
save_cube_and_stokes: run the convolution and save the resulting
cube and maps.
build_and_save: main function to use for running the algorithm
and saving the resulting cube and maps.
"""
def __init__(self, in_map: Union[str, Tuple[np.ndarray, str]], nside: int,
out_dir: str, wlen: int = 75, fwhm: float = 30.,
thresh: float = 0.7, norients: int = 25,
weighting: Optional[Union[str, np.ndarray]] = None,
mask: Optional[Union[str, np.ndarray]] = None,
overwrite: bool = False,
split_factor: Optional[int] = None) -> None:
"""Define necessary variables based on the input arguments and
make necessary directories.
Parameters:
in_map (str or tuple(np.ndarray((Npix,)), str)): either
a path to the input intensity map or a tuple of the
intensity map as an array along with its name as a str,
which is used for saving log file, alms, spherical RHT
cube, and Stokes Q/U maps. The rest of the input options
will be appended to this name when saving. The input map
ordering is assumed to be RING.
nside (int): output NSIDE for intensity and Stokes Q/U maps.
out_dir (str): directory to save log file, alms, spherical
RHT cube, and Stokes Q/U maps in COSMO convention.
wlen (int): convolution kernel window diameter [arcmins]
(the scale at which to measure the orientation).
fwhm (float): scale [arcmins] for the unsharp mask applied
to pick out filamentary structure.
thresh (float): threshold fraction of the window diameter
between 0-1 applied to the result of the convolution.
Higher thresholds focus on the main orientations only,
while lower thresholds take more orientations into
account, weighted by their intensity.
norients (int): angular resolution given by the number of
orientations to consider.
mask (str or np.ndarray((Npix,))): either a path to the map
or an array of the map pixels. This defines the mask for
maps that are not defined over the entire sky.
weighting (str or np.ndarray((Npix,))): either a path to the
map or an array of the map pixels. This is used as the
weight for the output Stokes Q/U maps. The map ordering
is assumed to be RING.
overwrite (bool): whether to overwrite outputs of same name
if they already exist.
split_factor (int): number of convolution splits to save on
memory usage. Default value is based on the requested
NSIDE and norients.
"""
assert isinstance(in_map, str) or (
isinstance(in_map, tuple) and
list(map(type, in_map)) == [np.ndarray, str]), (
"Input map must be a path or a tuple(np.ndarray, name)")
if isinstance(in_map, str):
assert os.path.exists(in_map), (
"Input map does not exist. Check path and try again.")
self.in_map = hp.read_map(in_map, field=(0))
self.name = in_map.split("/")[-1].split(".fits")[0]
else:
self.in_map = in_map[0]
self.name = in_map[1]
assert np.sqrt(self.in_map.shape[0]/12) % 1 == 0, (
"Input map has the wrong shape or number of pixels.")
if mask is not None:
assert isinstance(mask, str) or isinstance(mask, np.ndarray), (
"Mask must be a path or a np.ndarray")
if isinstance(mask, str):
assert os.path.exists(mask), (
"Mask does not exist. Check path and try again.")
self.mask = hp.read_map(mask, field=(0))
else:
self.mask = mask
assert self.mask.shape[0] == self.in_map.shape[0], (
"Mask has the wrong number of pixels.")
else:
self.mask = np.ones_like(self.in_map)
assert nside % 1 == 0 and nside > 0, "NSIDE must be a positive integer"
self.nside = int(nside)
assert isinstance(out_dir, str), "Output directory must be a str"
if not os.path.exists(out_dir):
os.makedirs(out_dir)
self.out_dir = out_dir
assert wlen % 1 == 0 and wlen > 0, "wlen must be a positive integer"
self.wlen = int(wlen)
assert isinstance(fwhm, (float, int)), "FWHM type is invalid"
assert fwhm > 0, "FWHM must be positive"
self.fwhm = fwhm
if fwhm % 1 == 0:
self.fwhm = int(fwhm)
assert isinstance(thresh, (float, int)), "Threshold type is invalid"
assert thresh >= 0 and thresh < 1, "Threshold must be between 0-1"
self.thresh = thresh
assert norients % 1 == 0 and norients > 0, (
"Number of orientations must be a positive integer")
self.norients = int(norients)
if weighting is not None:
# needed since self.weighting is not declared otherwise and
# needs to be replaced with self.in_map AFTER self.in_map
            # has been processed.
self.weighting_flag = True
assert isinstance(weighting, str) or (
isinstance(weighting, np.ndarray)), (
"Weighting map must be a path or an np.ndarray")
if isinstance(weighting, str):
assert os.path.exists(weighting), (
"Weighting map does not exist. Check path and try again.")
if isinstance(in_map, str) and weighting == in_map:
self.weighting = self.in_map
else:
self.weighting = hp.read_map(weighting, field=(0))
else:
assert np.sqrt(weighting.shape[0]/12) % 1 == 0, (
"Weighting map has the wrong shape or number of pixels.")
self.weighting = weighting
if hp.get_nside(self.weighting) != self.nside:
self.weighting = hp.ud_grade(self.weighting, self.nside)
else:
self.weighting_flag = False
self.overwrite = overwrite
if split_factor is not None:
assert split_factor % 1 == 0 and split_factor > 0, (
"Split factor must be a positive integer")
self.split_factor = int(split_factor)
else:
self.split_factor = np.ceil(
self.nside**2/2048**2 * self.norients/25)
self.out_name = (f"{self.name}_nside{self.nside}_wlen{self.wlen}"
f"_fwhm{fwhm}_thresh{thresh}_norients{self.norients}")
set_logger(os.path.join(out_dir, self.out_name + ".log"))
logging.info(f"* Output directory: {out_dir}")
self.kernel_alms_dir = os.path.join(
os.path.expanduser("~"), ".cache/sphericalrht/kernel_alms")
if not os.path.exists(self.kernel_alms_dir):
os.makedirs(self.kernel_alms_dir)
self.ker_nside = min(self.nside * 4, 4096)
self.lmax = min(int(2.5*self.ker_nside - 1), int(1.25*4096 - 1))
self.mmax = min(50, self.lmax)
return None
def unsharp_mask(self, original_map: np.ndarray) -> np.ndarray:
"""High-pass filter the input map and make it binary.
Parameters:
original_map (np.ndarray((Npix,))): map to
high-pass filter
Returns:
high-pass filtered map of 1s and 0s (np.ndarray)
"""
# NaN != NaN, so this zeroes out any NaN pixels
original_map[original_map != original_map] = 0
smoothed_map = hp.smoothing(
original_map * self.mask, fwhm=np.radians(self.fwhm / 60.))
subtracted_map = original_map*self.mask - smoothed_map
return (subtracted_map > 0.).astype(int)
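The unsharp-mask recipe above (smooth, subtract, binarize) is easy to check on a 1-D signal; the uniform running-mean kernel below is a stand-in assumption for `hp.smoothing`'s Gaussian beam:

```python
import numpy as np

def unsharp_mask_1d(signal, width):
    """High-pass filter by subtracting a running mean, then binarize:
    1 where the signal sits above its local background, else 0."""
    kernel = np.ones(width) / width
    smoothed = np.convolve(signal, kernel, mode="same")
    return (signal - smoothed > 0.).astype(int)

sig = np.array([0., 0., 5., 0., 0., 0., 5., 0., 0.])
mask = unsharp_mask_1d(sig, width=3)  # flags the two narrow peaks
```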
def prep_intensity(self, return_alm: bool = False) -> Union[
np.ndarray, None]:
"""Process the intensity map for saving with the Stokes Q/U
maps and optionally calculate alms for the convolution.
Parameters:
return_alm (bool): whether to calculate and return alms
Returns:
(optional) intensity_alm (np.ndarray((1, Nalms))):
processed alms used for convolution
"""
intensity_nside = hp.get_nside(self.in_map)
if self.nside != intensity_nside:
self.in_map = hp.ud_grade(self.in_map, self.nside)
if return_alm:
if self.ker_nside == self.nside:
intensity_in = self.in_map
else:
intensity_in = hp.ud_grade(self.in_map, self.ker_nside)
self.mask = hp.ud_grade(self.mask, self.ker_nside)
intensity_in = self.unsharp_mask(intensity_in)
intensity_alm = hp.map2alm(
intensity_in, self.lmax).reshape((1, -1))
return intensity_alm
return None
def make_ker(self):
"""Select line of pixels defining the kernel at the North Pole.
Returns:
line (collections.deque): double-ended queue of pixel
indices defining the kernel
"""
niters = int(30 * self.wlen/75 * self.ker_nside/1024)
line = deque([0, 1])
for i in range(niters):
line.append(hp.get_all_neighbours(self.ker_nside, line[-1])[-2])
line.appendleft(hp.get_all_neighbours(self.ker_nside, line[0])[0])
line.append(2)
line.appendleft(3)
for i in range(niters):
line.append(hp.get_all_neighbours(self.ker_nside, line[-1])[0])
line.appendleft(hp.get_all_neighbours(self.ker_nside, line[0])[-2])
return line
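make_ker grows the pixel line symmetrically from both ends of a double-ended queue; the neighbour lookup is healpy-specific, but the growth pattern itself can be shown with plain integers (the +1/-1 step is a stand-in for `hp.get_all_neighbours`):

```python
from collections import deque

line = deque([0, 1])              # seed pixels, as in make_ker
for _ in range(3):                # growth iterations (niters)
    line.append(line[-1] + 1)     # extend one end of the line
    line.appendleft(line[0] - 1)  # extend the opposite end
```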
def get_ker_alm(self) -> np.ndarray:
"""Define the convolution kernel as a stick at the North Pole
and calculate its alms.
Returns:
ker_alm (np.ndarray((1, Nalms))): complex-valued kernel
alms to use in the convolution
"""
npix = 12 * self.ker_nside**2
# create a mask at the North Pole with the size of the kernel
vec = hp.ang2vec(0, 90, lonlat=True)
disc = hp.query_disc(self.ker_nside, vec,
radius=np.radians(self.wlen / 2. / 60.),
inclusive=True)
window = np.zeros((npix))
window[disc] = 1
ker = np.zeros((npix))
line = self.make_ker()
ker[line] = 1
ker *= window
ker_alm = hp.map2alm(ker)
# smooth to prevent ringing and apply lmax and mmax cuts
ker_alm = hp.smoothalm(ker_alm, fwhm=hp.nside2resol(self.nside))
l, m = hp.Alm.getlm(3*self.ker_nside - 1, np.arange(ker_alm.shape[0]))
ker_alm = ker_alm[np.logical_and(l < self.lmax+1, m < self.mmax+1)]
# normalize
ker_alm /= 2. * np.sqrt(np.pi) * ker_alm[0]
return ker_alm.reshape((1, -1))
def get_ptg(self, npix, orients) -> np.ndarray:
"""Prepare pointing tensor of co-latitudes, longitudes, and
kernel orientations.
Parameters:
npix (int): Number of pixels to calculate the convolution on
orients (np.ndarray((Norientations,))): Angles by which
to rotate the kernel
Returns:
pointing tensor (np.ndarray((N, 3))): tensor of
co-latitudes, longitudes, and kernel orientations
"""
thetas, phis = hp.pix2ang(self.nside, np.arange(npix))
psis = np.repeat(orients, phis.shape[0])
phis = np.tile(phis, orients.shape[0])
thetas = np.tile(thetas, orients.shape[0])
return np.vstack((thetas, phis, psis)).T
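get_ptg pairs every pixel with every kernel orientation via `np.repeat` and `np.tile`; a minimal check of the resulting (npix * norients, 3) layout, with stand-in values for the `hp.pix2ang` output:

```python
import numpy as np

npix, norients = 4, 3
thetas = np.arange(npix, dtype=float)        # stand-in co-latitudes
phis = 10. * np.arange(npix, dtype=float)    # stand-in longitudes
orients = np.linspace(0.0, np.pi, norients)

psis = np.repeat(orients, phis.shape[0])     # each angle repeated npix times
phis_t = np.tile(phis, orients.shape[0])     # pixel angles cycled per angle
thetas_t = np.tile(thetas, orients.shape[0])
ptg = np.vstack((thetas_t, phis_t, psis)).T  # rows: (theta, phi, psi)
```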
def save_cube_and_stokes(
self,
interpolator: ducc0.totalconvolve.Interpolator) -> None:
"""Run the convolution and save the resulting cube and maps.
Parameters:
interpolator (ducc0.totalconvolve.Interpolator): object
encapsulating the convolution functionality
"""
npix = 12 * self.nside**2
stokes = StokesQU(npix)
norm = np.zeros((npix))
orients = np.linspace(0.0, np.pi, self.norients)
# Angle values corresponding to the kernel rotation angles
orient_angs = np.flip(orients)
orients = np.array_split(orients, self.split_factor)
orient_angs = np.array_split(orient_angs, self.split_factor)
# Find the size of each split
bnd = 0
split_bounds = [0]
for i in range(len(orients)):
bnd += orients[i].shape[0]
split_bounds.append(bnd)
# Save convolution result as a data cube in hdf5 format
out_fn = os.path.join(self.out_dir, self.out_name + ".h5")
if os.path.exists(out_fn):
os.remove(out_fn)
f = h5py.File(out_fn, "w")
spherical_rht_cube = f.create_dataset(name="spherical_rht_cube",
shape=(self.norients, npix),
dtype="f", compression="gzip")
# Perform convolutions
for i, (orients_split, orient_angs_split) in enumerate(
zip(orients, orient_angs)):
ptg = self.get_ptg(npix, orients_split)
spherical_rht_temp = interpolator.interpol(ptg).reshape(
orients_split.shape[0], npix)
spherical_rht_cube[split_bounds[i]:split_bounds[i + 1], :] = (
spherical_rht_temp)
# Apply threshold
spherical_rht_temp -= self.thresh
spherical_rht_temp[spherical_rht_temp < 0] = 0.
stokes.update(spherical_rht_temp, orient_angs_split)
norm += spherical_rht_temp.sum(axis=0)
process = psutil.Process(os.getpid())
logging.info(f"* Using {process.memory_info().rss / 1e9}GB of"
" memory to make spherical RHT cube.")
f.close()
if self.weighting_flag:
stokes.normalize_and_weight(norm, self.weighting)
else:
stokes.normalize_and_weight(norm, self.in_map)
hp.write_map(os.path.join(self.out_dir, "IQU_" + self.out_name
+ ".fits"), (self.in_map, stokes.stokes_q,
stokes.stokes_u), dtype=["float32", "float32", "float32"],
coord="G", column_names=["I", "Q", "U"], overwrite=True)
return None
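The per-chunk writes into spherical_rht_cube depend on split_bounds matching `np.array_split`'s uneven chunk sizes; the boundary bookkeeping in isolation:

```python
import numpy as np

norients, split_factor = 10, 3
chunks = np.array_split(np.linspace(0.0, np.pi, norients), split_factor)

bnd, split_bounds = 0, [0]
for chunk in chunks:
    bnd += chunk.shape[0]    # cumulative row count per chunk
    split_bounds.append(bnd)
# consecutive bounds delimit each chunk's rows in the output cube
```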
def build_and_save(self) -> None:
"""Run the algorithm and save the resulting orientation cube and
Stokes maps.
"""
out_maps_name = os.path.join(
self.out_dir, "IQU_" + self.out_name + ".fits")
out_cube_name = os.path.join(self.out_dir, self.out_name + ".h5")
if not self.overwrite and (os.path.exists(out_maps_name)
and os.path.exists(out_cube_name)):
logging.info("* Outputs already exist. "
"Change overwrite to True to overwrite them.")
return None
# more accurate than time.time()
start_time = time.perf_counter()
intensity_alm_file = os.path.join(
self.out_dir, self.name
+ f"_alms_nside{self.ker_nside}_fwhm{self.fwhm}.npy")
if os.path.exists(intensity_alm_file) and not self.overwrite:
logging.info("* Input map alms exist.")
intensity_alm = np.load(intensity_alm_file)
self.prep_intensity()
else:
intensity_alm = self.prep_intensity(return_alm=True)
np.save(intensity_alm_file, intensity_alm)
logging.info("* Created input map alms.")
ker_alm_file = os.path.join(
self.kernel_alms_dir, f"alms_nside{self.ker_nside}_"
f"wlen{self.wlen}.npy")
if os.path.exists(ker_alm_file):
logging.info("* Kernel alms exist.")
ker_alm = np.load(ker_alm_file)
else:
ker_alm = self.get_ker_alm()
np.save(ker_alm_file, ker_alm)
logging.info("* Created kernel alms.")
# Use as many threads as available
interpolator = ducc0.totalconvolve.Interpolator(intensity_alm, ker_alm,
separate=True,
lmax=self.lmax,
kmax=self.mmax,
epsilon=1e-4,
ofactor=1.5,
nthreads=0)
logging.info("* Interpolator configured.")
del intensity_alm, ker_alm
self.save_cube_and_stokes(interpolator)
logging.info("* Saved cube and maps in output directory.")
logging.info(
f"* Total run time = {(time.perf_counter()-start_time) / 60.}"
" mins.")
return None
|
georgehalalREPO_NAMEsphericalrhtPATH_START.@sphericalrht_extracted@sphericalrht-main@src@sphericalrht@spherical_rht.py@.PATH_END.py
|
{
"filename": "_familysrc.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/densitymapbox/hoverlabel/font/_familysrc.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class FamilysrcValidator(_plotly_utils.basevalidators.SrcValidator):
def __init__(
self,
plotly_name="familysrc",
parent_name="densitymapbox.hoverlabel.font",
**kwargs,
):
super(FamilysrcValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "none"),
**kwargs,
)
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@densitymapbox@hoverlabel@font@_familysrc.py@.PATH_END.py
|
{
"filename": "evidence.py",
"repo_name": "florpi/sunbird",
"repo_path": "sunbird_extracted/sunbird-main/paper_figures/boss/evidence.py",
"type": "Python"
}
|
"""
Figure 7: Bayesian evidence
"""
import numpy as np
import matplotlib.pyplot as plt
from pathlib import Path
from sunbird.data.data_readers import NseriesCutsky, CMASS
from sunbird.covariance import CovarianceMatrix
from sunbird.summaries import Bundle
from getdist import MCSamples
plt.style.use(['stylelib/science.mplstyle'])
def read_dynesty_chain(filename):
data = np.genfromtxt(filename, skip_header=1, delimiter=",")
chain = data[:, 4:]
weights = np.exp(data[:, 1] - data[-1, 2])
return chain, weights
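read_dynesty_chain turns per-sample log-likelihoods (column 1) and the final log-evidence (last row of column 2) into importance weights via exp(logl - logZ); a toy numpy version, with the column layout assumed from the function above:

```python
import numpy as np

# assumed columns: [it, logl, logz, dlogz, param0, ...]
data = np.array([[0, -3.0, -5.0, 0.0, 0.1],
                 [1, -1.0, -2.0, 0.0, 0.2],
                 [2, -0.5, -1.0, 0.0, 0.3]])
chain = data[:, 4:]                          # parameter samples
weights = np.exp(data[:, 1] - data[-1, 2])   # logl minus final logZ
```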
def get_names_labels(param_space):
names = ['omega_b', 'omega_cdm', 'sigma8_m', 'n_s',
'nrun', 'N_ur', 'w0_fld', 'wa_fld',
'logM1', 'logM_cut', 'alpha', 'alpha_s',
'alpha_c', 'logsigma', 'kappa', 'B_cen', 'B_sat']
labels_dict = {
"omega_b": r'\omega_{\rm b}', "omega_cdm": r'\omega_{\rm cdm}',
"sigma8_m": r'\sigma_8', "n_s": r'n_s', "nrun": r'\alpha_s',
"N_ur": r'N_{\rm ur}', "w0_fld": r'w_0', "wa_fld": r'w_a',
"logM1": r'\log M_1', "logM_cut": r'\log M_{\rm cut}',
"alpha": r'\alpha', "alpha_s": r'\alpha_{\rm vel, s}',
"alpha_c": r'\alpha_{\rm vel, c}', "logsigma": r'\log \sigma',
"kappa": r'\kappa', "B_cen": r'B_{\rm cen}', "B_sat": r'B_{\rm sat}',
}
if not 'w0wa' in param_space:
names.remove('w0_fld')
names.remove('wa_fld')
if not 'nrun' in param_space:
names.remove('nrun')
if not 'Nur' in param_space:
names.remove('N_ur')
if 'noAB' in param_space:
names.remove('B_cen')
names.remove('B_sat')
labels = [labels_dict[name] for name in names]
return names, labels
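get_names_labels prunes the parameter list according to flags embedded in the chain-handle string; the same logic in miniature (the shortened names list and param_space value are illustrative):

```python
names = ['omega_b', 'w0_fld', 'wa_fld', 'nrun', 'B_cen', 'B_sat']
param_space = 'base_bbn'   # no w0wa/nrun extension, assembly bias kept

if 'w0wa' not in param_space:
    names.remove('w0_fld')
    names.remove('wa_fld')
if 'nrun' not in param_space:
    names.remove('nrun')
if 'noAB' in param_space:
    names.remove('B_cen')
    names.remove('B_sat')
```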
fig, ax = plt.subplots(figsize=(6, 5))
colors = ['lightseagreen', 'lightpink', 'darkorchid']
loglikes_smin = []
evidence_smin = []
for i, smin in enumerate([0.7, 50.0]):
statistic = ['density_split_cross', 'density_split_auto', 'tpcf']
s = np.load('/pscratch/sd/e/epaillas/sunbird/data/s.npy')
smax = 150
s = s[(s > smin) & (s < smax)]
quantiles = [0, 1, 3, 4]
slice_filters = {'s': [smin, smax],}
select_filters = {'quintiles': quantiles, 'multipoles': [0, 2],}
loglikes_phases = []
evidence_phases = []
for phase in range(1, 84):
print(phase)
root_dir = Path('/pscratch/sd/e/epaillas/sunbird/chains/boss_paper')
chain_handle = f'nseries_cutsky_ph{phase}_density_split_cross_density_split_auto_tpcf_mae_patchycov_vol1_smin{smin:.2f}_smax150.00_m02_q0134_base_bbn'
names, labels = get_names_labels(chain_handle)
chain_fn = root_dir / chain_handle / 'results.csv'
data = np.genfromtxt(chain_fn, skip_header=1, delimiter=",")
chain = data[:, 4:]
loglikes = data[:, 1]
evidence = data[:, 2]
weights = np.exp(data[:, 1] - data[-1, 2])
samples = MCSamples(samples=chain, weights=weights, labels=labels, names=names)
# this reads the ML point from the chain
parameters = {names[i]: samples[names[i]][-1] for i in range(len(names))}
parameters['nrun'] = 0.0
parameters['N_ur'] = 2.0328
parameters['w0_fld'] = -1.0
parameters['wa_fld'] = 0.0
datavector = NseriesCutsky(
statistics=statistic,
select_filters=select_filters,
slice_filters=slice_filters,
).get_observation(phase=phase)
cov = CovarianceMatrix(
covariance_data_class='Patchy',
statistics=statistic,
select_filters=select_filters,
slice_filters=slice_filters,
path_to_models='/global/homes/e/epaillas/pscratch/sunbird/trained_models/enrique/best/'
)
emulator = Bundle(
summaries=statistic,
path_to_models='/global/homes/e/epaillas/pscratch/sunbird/trained_models/enrique/best/',
)
model, error_model = emulator(
param_dict=parameters,
select_filters=select_filters,
slice_filters=slice_filters,
)
cov_data = cov.get_covariance_data(volume_scaling=1)
cov_emu = cov.get_covariance_emulator()
cov_sim = cov.get_covariance_simulation()
cov_tot = cov_data + cov_emu + cov_sim
error_data = np.sqrt(np.diag(cov_data))
error_emu = np.sqrt(np.diag(cov_emu))
error_sim = np.sqrt(np.diag(cov_sim))
error_model = np.sqrt(error_sim**2 + error_emu**2)
error_tot = np.sqrt(error_data**2 + error_emu**2 + error_sim**2)
dof = (len(datavector) - 13)
chi2 = np.dot(datavector - model, np.linalg.inv(cov_tot)).dot(datavector - model)
chi2_red = chi2 / dof
# print(f'{statistic} reduced chi2 = {chi2_red}')
# chi2_list.append(chi2_red)
loglikes_phases.append(loglikes[-1])
evidence_phases.append(evidence[-1])
root_dir = Path('/pscratch/sd/e/epaillas/sunbird/chains/boss_paper')
chain_handle = f'cmass_density_split_cross_density_split_auto_tpcf_mae_patchycov_smin{smin:.2f}_smax150.00_m02_q0134_base_bbn'
names, labels = get_names_labels(chain_handle)
chain_fn = root_dir / chain_handle / 'results.csv'
data = np.genfromtxt(chain_fn, skip_header=1, delimiter=",")
chain = data[:, 4:]
loglikes = data[:, 1]
evidence = data[:, 2]
weights = np.exp(data[:, 1] - data[-1, 2])
samples = MCSamples(samples=chain, weights=weights, labels=labels, names=names)
# this reads the ML point from the chain
parameters = {names[i]: samples[names[i]][-1] for i in range(len(names))}
parameters['nrun'] = 0.0
parameters['N_ur'] = 2.0328
parameters['w0_fld'] = -1.0
parameters['wa_fld'] = 0.0
datavector = CMASS(
statistics=statistic,
select_filters=select_filters,
slice_filters=slice_filters,
region='NGC'
).get_observation()
cov = CovarianceMatrix(
covariance_data_class='Patchy',
statistics=statistic,
select_filters=select_filters,
slice_filters=slice_filters,
path_to_models='/global/homes/e/epaillas/pscratch/sunbird/trained_models/enrique/best/'
)
emulator = Bundle(
summaries=statistic,
path_to_models='/global/homes/e/epaillas/pscratch/sunbird/trained_models/enrique/best/',
)
model, error_model = emulator(
param_dict=parameters,
select_filters=select_filters,
slice_filters=slice_filters,
)
cov_data = cov.get_covariance_data()
cov_emu = cov.get_covariance_emulator()
cov_sim = cov.get_covariance_simulation()
cov_tot = cov_data + cov_emu + cov_sim
dof = (len(datavector) - 13)
chi2 = np.dot(datavector - model, np.linalg.inv(cov_tot)).dot(datavector - model)
chi2_red = chi2 / dof
ax.hist(evidence_phases, bins=20, alpha=0.5, color=colors[i],)
ylim = ax.get_ylim()
ax.vlines(evidence[-1], 0, 10, color=colors[i], linestyle='--',)
ax.plot(np.nan, np.nan, ls='-', lw=7.0, color='k',
label='Nseries',)
ax.plot(np.nan, np.nan, ls='--', lw=2.0, color='k',
label='CMASS',)
ax.annotate(text=r'$s_{\rm min} = 1\,{h^{-1}{\rm Mpc}}$', xy=(0.08, 0.87),
xycoords='axes fraction', fontsize=14, color=colors[0])
ax.annotate(text=r'$s_{\rm min} = 50\,{h^{-1}{\rm Mpc}}$', xy=(0.6, 0.87),
xycoords='axes fraction', fontsize=14, color=colors[1])
leg = ax.legend(bbox_to_anchor=(0.5, 1.2), loc='upper center',
frameon=False, fontsize=15, ncols=2, columnspacing=0.7,)
for line, text in zip(leg.get_lines(), leg.get_texts()):
text.set_color(line.get_color())
ax.set_ylim(0, 13.0)
ax.set_xlabel('log-evidence 'r'$\mathcal{Z}$', fontsize=15)
ax.set_ylabel(r'counts', fontsize=15)
ax.tick_params(axis='x', labelsize=14)
ax.tick_params(axis='y', labelsize=14)
plt.tight_layout()
plt.savefig('fig/pdf/evidence.pdf', bbox_inches='tight')
|
florpiREPO_NAMEsunbirdPATH_START.@sunbird_extracted@sunbird-main@paper_figures@boss@evidence.py@.PATH_END.py
|
{
"filename": "_ticklabelposition.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/densitymap/colorbar/_ticklabelposition.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class TicklabelpositionValidator(_plotly_utils.basevalidators.EnumeratedValidator):
def __init__(
self,
plotly_name="ticklabelposition",
parent_name="densitymap.colorbar",
**kwargs,
):
super(TicklabelpositionValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "colorbars"),
values=kwargs.pop(
"values",
[
"outside",
"inside",
"outside top",
"inside top",
"outside left",
"inside left",
"outside right",
"inside right",
"outside bottom",
"inside bottom",
],
),
**kwargs,
)
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@densitymap@colorbar@_ticklabelposition.py@.PATH_END.py
|
{
"filename": "densenet.py",
"repo_name": "pytorch/vision",
"repo_path": "vision_extracted/vision-main/torchvision/models/densenet.py",
"type": "Python"
}
|
import re
from collections import OrderedDict
from functools import partial
from typing import Any, List, Optional, Tuple
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.checkpoint as cp
from torch import Tensor
from ..transforms._presets import ImageClassification
from ..utils import _log_api_usage_once
from ._api import register_model, Weights, WeightsEnum
from ._meta import _IMAGENET_CATEGORIES
from ._utils import _ovewrite_named_param, handle_legacy_interface
__all__ = [
"DenseNet",
"DenseNet121_Weights",
"DenseNet161_Weights",
"DenseNet169_Weights",
"DenseNet201_Weights",
"densenet121",
"densenet161",
"densenet169",
"densenet201",
]
class _DenseLayer(nn.Module):
def __init__(
self, num_input_features: int, growth_rate: int, bn_size: int, drop_rate: float, memory_efficient: bool = False
) -> None:
super().__init__()
self.norm1 = nn.BatchNorm2d(num_input_features)
self.relu1 = nn.ReLU(inplace=True)
self.conv1 = nn.Conv2d(num_input_features, bn_size * growth_rate, kernel_size=1, stride=1, bias=False)
self.norm2 = nn.BatchNorm2d(bn_size * growth_rate)
self.relu2 = nn.ReLU(inplace=True)
self.conv2 = nn.Conv2d(bn_size * growth_rate, growth_rate, kernel_size=3, stride=1, padding=1, bias=False)
self.drop_rate = float(drop_rate)
self.memory_efficient = memory_efficient
def bn_function(self, inputs: List[Tensor]) -> Tensor:
concated_features = torch.cat(inputs, 1)
bottleneck_output = self.conv1(self.relu1(self.norm1(concated_features))) # noqa: T484
return bottleneck_output
# todo: rewrite when torchscript supports any
def any_requires_grad(self, input: List[Tensor]) -> bool:
for tensor in input:
if tensor.requires_grad:
return True
return False
@torch.jit.unused # noqa: T484
def call_checkpoint_bottleneck(self, input: List[Tensor]) -> Tensor:
def closure(*inputs):
return self.bn_function(inputs)
return cp.checkpoint(closure, *input, use_reentrant=False)
@torch.jit._overload_method # noqa: F811
def forward(self, input: List[Tensor]) -> Tensor: # noqa: F811
pass
@torch.jit._overload_method # noqa: F811
def forward(self, input: Tensor) -> Tensor: # noqa: F811
pass
# torchscript does not yet support *args, so we overload method
# allowing it to take either a List[Tensor] or single Tensor
def forward(self, input: Tensor) -> Tensor: # noqa: F811
if isinstance(input, Tensor):
prev_features = [input]
else:
prev_features = input
if self.memory_efficient and self.any_requires_grad(prev_features):
if torch.jit.is_scripting():
raise Exception("Memory Efficient not supported in JIT")
bottleneck_output = self.call_checkpoint_bottleneck(prev_features)
else:
bottleneck_output = self.bn_function(prev_features)
new_features = self.conv2(self.relu2(self.norm2(bottleneck_output)))
if self.drop_rate > 0:
new_features = F.dropout(new_features, p=self.drop_rate, training=self.training)
return new_features
class _DenseBlock(nn.ModuleDict):
_version = 2
def __init__(
self,
num_layers: int,
num_input_features: int,
bn_size: int,
growth_rate: int,
drop_rate: float,
memory_efficient: bool = False,
) -> None:
super().__init__()
for i in range(num_layers):
layer = _DenseLayer(
num_input_features + i * growth_rate,
growth_rate=growth_rate,
bn_size=bn_size,
drop_rate=drop_rate,
memory_efficient=memory_efficient,
)
self.add_module("denselayer%d" % (i + 1), layer)
def forward(self, init_features: Tensor) -> Tensor:
features = [init_features]
for name, layer in self.items():
new_features = layer(features)
features.append(new_features)
return torch.cat(features, 1)
class _Transition(nn.Sequential):
def __init__(self, num_input_features: int, num_output_features: int) -> None:
super().__init__()
self.norm = nn.BatchNorm2d(num_input_features)
self.relu = nn.ReLU(inplace=True)
self.conv = nn.Conv2d(num_input_features, num_output_features, kernel_size=1, stride=1, bias=False)
self.pool = nn.AvgPool2d(kernel_size=2, stride=2)
class DenseNet(nn.Module):
r"""Densenet-BC model class, based on
`"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`_.
Args:
growth_rate (int) - how many filters to add each layer (`k` in paper)
block_config (list of 4 ints) - how many layers in each pooling block
num_init_features (int) - the number of filters to learn in the first convolution layer
bn_size (int) - multiplicative factor for number of bottle neck layers
(i.e. bn_size * k features in the bottleneck layer)
drop_rate (float) - dropout rate after each dense layer
num_classes (int) - number of classification classes
memory_efficient (bool) - If True, uses checkpointing. Much more memory efficient,
but slower. Default: *False*. See `"paper" <https://arxiv.org/pdf/1707.06990.pdf>`_.
"""
def __init__(
self,
growth_rate: int = 32,
block_config: Tuple[int, int, int, int] = (6, 12, 24, 16),
num_init_features: int = 64,
bn_size: int = 4,
drop_rate: float = 0,
num_classes: int = 1000,
memory_efficient: bool = False,
) -> None:
super().__init__()
_log_api_usage_once(self)
# First convolution
self.features = nn.Sequential(
OrderedDict(
[
("conv0", nn.Conv2d(3, num_init_features, kernel_size=7, stride=2, padding=3, bias=False)),
("norm0", nn.BatchNorm2d(num_init_features)),
("relu0", nn.ReLU(inplace=True)),
("pool0", nn.MaxPool2d(kernel_size=3, stride=2, padding=1)),
]
)
)
# Each denseblock
num_features = num_init_features
for i, num_layers in enumerate(block_config):
block = _DenseBlock(
num_layers=num_layers,
num_input_features=num_features,
bn_size=bn_size,
growth_rate=growth_rate,
drop_rate=drop_rate,
memory_efficient=memory_efficient,
)
self.features.add_module("denseblock%d" % (i + 1), block)
num_features = num_features + num_layers * growth_rate
if i != len(block_config) - 1:
trans = _Transition(num_input_features=num_features, num_output_features=num_features // 2)
self.features.add_module("transition%d" % (i + 1), trans)
num_features = num_features // 2
# Final batch norm
self.features.add_module("norm5", nn.BatchNorm2d(num_features))
# Linear layer
self.classifier = nn.Linear(num_features, num_classes)
# Official init from torch repo.
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight)
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.Linear):
nn.init.constant_(m.bias, 0)
def forward(self, x: Tensor) -> Tensor:
features = self.features(x)
out = F.relu(features, inplace=True)
out = F.adaptive_avg_pool2d(out, (1, 1))
out = torch.flatten(out, 1)
out = self.classifier(out)
return out
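The channel bookkeeping in DenseNet.__init__ (each dense block adds num_layers * growth_rate channels; each transition halves them) can be replayed by hand; final_features is a hypothetical helper, not part of torchvision:

```python
def final_features(num_init, growth, block_config):
    """Replay the channel count evolved in DenseNet.__init__."""
    n = num_init
    for i, layers in enumerate(block_config):
        n += layers * growth              # dense block concatenations
        if i != len(block_config) - 1:
            n //= 2                       # transition halves channels
    return n
```

For the DenseNet-121 configuration (64, 32, (6, 12, 24, 16)) this lands on the familiar 1024-feature classifier input; the DenseNet-161 configuration gives 2208.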
def _load_state_dict(model: nn.Module, weights: WeightsEnum, progress: bool) -> None:
# '.'s are no longer allowed in module names, but previous _DenseLayer
# has keys 'norm.1', 'relu.1', 'conv.1', 'norm.2', 'relu.2', 'conv.2'.
# They are also in the checkpoints in model_urls. This pattern is used
# to find such keys.
pattern = re.compile(
r"^(.*denselayer\d+\.(?:norm|relu|conv))\.((?:[12])\.(?:weight|bias|running_mean|running_var))$"
)
state_dict = weights.get_state_dict(progress=progress, check_hash=True)
for key in list(state_dict.keys()):
res = pattern.match(key)
if res:
new_key = res.group(1) + res.group(2)
state_dict[new_key] = state_dict[key]
del state_dict[key]
model.load_state_dict(state_dict)
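The legacy-checkpoint renaming above collapses dotted keys like norm.1 into norm1; the regex can be exercised standalone on a representative key (the key string is illustrative):

```python
import re

pattern = re.compile(
    r"^(.*denselayer\d+\.(?:norm|relu|conv))\.((?:[12])\.(?:weight|bias|running_mean|running_var))$"
)
old_key = "features.denseblock1.denselayer2.norm.1.running_mean"
res = pattern.match(old_key)
new_key = res.group(1) + res.group(2)  # drops the dot before the index
```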
def _densenet(
growth_rate: int,
block_config: Tuple[int, int, int, int],
num_init_features: int,
weights: Optional[WeightsEnum],
progress: bool,
**kwargs: Any,
) -> DenseNet:
if weights is not None:
_ovewrite_named_param(kwargs, "num_classes", len(weights.meta["categories"]))
model = DenseNet(growth_rate, block_config, num_init_features, **kwargs)
if weights is not None:
_load_state_dict(model=model, weights=weights, progress=progress)
return model
_COMMON_META = {
"min_size": (29, 29),
"categories": _IMAGENET_CATEGORIES,
"recipe": "https://github.com/pytorch/vision/pull/116",
"_docs": """These weights are ported from LuaTorch.""",
}
class DenseNet121_Weights(WeightsEnum):
IMAGENET1K_V1 = Weights(
url="https://download.pytorch.org/models/densenet121-a639ec97.pth",
transforms=partial(ImageClassification, crop_size=224),
meta={
**_COMMON_META,
"num_params": 7978856,
"_metrics": {
"ImageNet-1K": {
"acc@1": 74.434,
"acc@5": 91.972,
}
},
"_ops": 2.834,
"_file_size": 30.845,
},
)
DEFAULT = IMAGENET1K_V1
class DenseNet161_Weights(WeightsEnum):
IMAGENET1K_V1 = Weights(
url="https://download.pytorch.org/models/densenet161-8d451a50.pth",
transforms=partial(ImageClassification, crop_size=224),
meta={
**_COMMON_META,
"num_params": 28681000,
"_metrics": {
"ImageNet-1K": {
"acc@1": 77.138,
"acc@5": 93.560,
}
},
"_ops": 7.728,
"_file_size": 110.369,
},
)
DEFAULT = IMAGENET1K_V1
class DenseNet169_Weights(WeightsEnum):
IMAGENET1K_V1 = Weights(
url="https://download.pytorch.org/models/densenet169-b2777c0a.pth",
transforms=partial(ImageClassification, crop_size=224),
meta={
**_COMMON_META,
"num_params": 14149480,
"_metrics": {
"ImageNet-1K": {
"acc@1": 75.600,
"acc@5": 92.806,
}
},
"_ops": 3.36,
"_file_size": 54.708,
},
)
DEFAULT = IMAGENET1K_V1
class DenseNet201_Weights(WeightsEnum):
IMAGENET1K_V1 = Weights(
url="https://download.pytorch.org/models/densenet201-c1103571.pth",
transforms=partial(ImageClassification, crop_size=224),
meta={
**_COMMON_META,
"num_params": 20013928,
"_metrics": {
"ImageNet-1K": {
"acc@1": 76.896,
"acc@5": 93.370,
}
},
"_ops": 4.291,
"_file_size": 77.373,
},
)
DEFAULT = IMAGENET1K_V1
@register_model()
@handle_legacy_interface(weights=("pretrained", DenseNet121_Weights.IMAGENET1K_V1))
def densenet121(*, weights: Optional[DenseNet121_Weights] = None, progress: bool = True, **kwargs: Any) -> DenseNet:
r"""Densenet-121 model from
`Densely Connected Convolutional Networks <https://arxiv.org/abs/1608.06993>`_.
Args:
weights (:class:`~torchvision.models.DenseNet121_Weights`, optional): The
pretrained weights to use. See
:class:`~torchvision.models.DenseNet121_Weights` below for
more details, and possible values. By default, no pre-trained
weights are used.
progress (bool, optional): If True, displays a progress bar of the download to stderr. Default is True.
**kwargs: parameters passed to the ``torchvision.models.densenet.DenseNet``
base class. Please refer to the `source code
<https://github.com/pytorch/vision/blob/main/torchvision/models/densenet.py>`_
for more details about this class.
.. autoclass:: torchvision.models.DenseNet121_Weights
:members:
"""
weights = DenseNet121_Weights.verify(weights)
return _densenet(32, (6, 12, 24, 16), 64, weights, progress, **kwargs)
@register_model()
@handle_legacy_interface(weights=("pretrained", DenseNet161_Weights.IMAGENET1K_V1))
def densenet161(*, weights: Optional[DenseNet161_Weights] = None, progress: bool = True, **kwargs: Any) -> DenseNet:
r"""Densenet-161 model from
`Densely Connected Convolutional Networks <https://arxiv.org/abs/1608.06993>`_.
Args:
weights (:class:`~torchvision.models.DenseNet161_Weights`, optional): The
pretrained weights to use. See
:class:`~torchvision.models.DenseNet161_Weights` below for
more details, and possible values. By default, no pre-trained
weights are used.
progress (bool, optional): If True, displays a progress bar of the download to stderr. Default is True.
**kwargs: parameters passed to the ``torchvision.models.densenet.DenseNet``
base class. Please refer to the `source code
<https://github.com/pytorch/vision/blob/main/torchvision/models/densenet.py>`_
for more details about this class.
.. autoclass:: torchvision.models.DenseNet161_Weights
:members:
"""
weights = DenseNet161_Weights.verify(weights)
return _densenet(48, (6, 12, 36, 24), 96, weights, progress, **kwargs)
@register_model()
@handle_legacy_interface(weights=("pretrained", DenseNet169_Weights.IMAGENET1K_V1))
def densenet169(*, weights: Optional[DenseNet169_Weights] = None, progress: bool = True, **kwargs: Any) -> DenseNet:
r"""Densenet-169 model from
`Densely Connected Convolutional Networks <https://arxiv.org/abs/1608.06993>`_.
Args:
weights (:class:`~torchvision.models.DenseNet169_Weights`, optional): The
pretrained weights to use. See
:class:`~torchvision.models.DenseNet169_Weights` below for
more details, and possible values. By default, no pre-trained
weights are used.
progress (bool, optional): If True, displays a progress bar of the download to stderr. Default is True.
**kwargs: parameters passed to the ``torchvision.models.densenet.DenseNet``
base class. Please refer to the `source code
<https://github.com/pytorch/vision/blob/main/torchvision/models/densenet.py>`_
for more details about this class.
.. autoclass:: torchvision.models.DenseNet169_Weights
:members:
"""
weights = DenseNet169_Weights.verify(weights)
return _densenet(32, (6, 12, 32, 32), 64, weights, progress, **kwargs)
@register_model()
@handle_legacy_interface(weights=("pretrained", DenseNet201_Weights.IMAGENET1K_V1))
def densenet201(*, weights: Optional[DenseNet201_Weights] = None, progress: bool = True, **kwargs: Any) -> DenseNet:
r"""Densenet-201 model from
`Densely Connected Convolutional Networks <https://arxiv.org/abs/1608.06993>`_.
Args:
weights (:class:`~torchvision.models.DenseNet201_Weights`, optional): The
pretrained weights to use. See
:class:`~torchvision.models.DenseNet201_Weights` below for
more details, and possible values. By default, no pre-trained
weights are used.
progress (bool, optional): If True, displays a progress bar of the download to stderr. Default is True.
**kwargs: parameters passed to the ``torchvision.models.densenet.DenseNet``
base class. Please refer to the `source code
<https://github.com/pytorch/vision/blob/main/torchvision/models/densenet.py>`_
for more details about this class.
.. autoclass:: torchvision.models.DenseNet201_Weights
:members:
"""
weights = DenseNet201_Weights.verify(weights)
return _densenet(32, (6, 12, 48, 32), 64, weights, progress, **kwargs)
|
pytorchREPO_NAMEvisionPATH_START.@vision_extracted@vision-main@torchvision@models@densenet.py@.PATH_END.py
|
{
"filename": "prior.py",
"repo_name": "ojhall94/michael",
"repo_path": "michael_extracted/michael-main/michael/prior.py",
"type": "Python"
}
|
"""
Class to estimate prior expectations of target rotation period based on
select input data.
"""
import pandas as pd
import numpy as np
import emcee
import statsmodels.api as sm
from statsmodels.nonparametric.bandwidths import select_bandwidth
from .utils import _random_seed
class priorclass():
""" Class managing the prior expectations on rotation period.
Examples
--------
Parameters
----------
Attributes
----------
"""
def __init__(self, obs, verbose = False):
self.obs = obs
self.verbose = verbose
def load_prior_data(self):
"""
This will randomly shuffle the data following the set random seed.
"""
df = pd.read_csv('../michael/data/prior_data.csv', index_col = None)
self.train = df.sample(len(df), ignore_index=True).reset_index(drop=True)
def build_kde(self):
"""
Build a KDE based on the prior data from the Santos et al. (2019, 2020)
catalogues. The KDE is based on a subselection of the full data set
based on the uncertainties on the input observables to save on
computation time. If there are fewer than 2000 stars in a 2 sigma
region in temperature, it is extended to 3 sigma.
It is always recommended to check the number of stars included in the
KDE when using this function. You can do this by setting `verbose=True`
when initialising the `janet` class.
"""
self.sel_train = self.train.loc[np.abs(self.train.logT - self.obs['logT'][0])
< 2*self.obs['logT'][1]]
if len(self.sel_train) < 2000:
self.sel_train = self.train.loc[np.abs(self.train.logT - self.obs['logT'][0])
< 3*self.obs['logT'][1]]
self.bw = select_bandwidth(self.sel_train.values,
bw = 'scott', kernel=None)
self.kde = sm.nonparametric.KDEMultivariate(data = self.sel_train.values,
var_type = 'c'*len(self.sel_train.columns),
bw = self.bw)
if self.verbose:
print(f'KDE built on {len(self.sel_train)} values.')
def ln_normal(self, x, mu, sigma):
"""
The log of a normal distribution (up to an additive constant).
Negative, so that deviations from mu are penalised in the log-likelihood.
"""
return -0.5 * (x - mu)**2 / sigma**2
def prior_pdf(self, p):
"""
Returns the prior pdf for a given data input.
"""
return self.kde.pdf(p)
def likelihood(self, p):
"""
Returns likelihood function as a sum of normal distributions
of the form Normal(parameter - observed, uncertainty), and the
prior probability resulting from the trained KDE.
"""
like = np.log(1e-30 + self.prior_pdf(p))
like += self.ln_normal(p[0], *self.obs['logT'])
like += self.ln_normal(p[1], *self.obs['logg'])
like += self.ln_normal(p[3], *self.obs['MG'])
like += self.ln_normal(p[4], *self.obs['logbp_rp'])
return like
def sample(self, nwalkers = 32, nsteps = 1000):
"""
Draw samples from the KDE distribution given the observations in
logT, logg, MG and log(bp_rp).
"""
ndim = 5
sampler = emcee.EnsembleSampler(nwalkers, ndim, self.likelihood)
start = [self.obs['logT'][0], self.obs['logg'][0], 1.3, self.obs['MG'][0], self.obs['logbp_rp'][0]]
p0 = [start + np.random.rand(ndim) * [0.005, 0.001, 0.2, 0.2, 0.001] for n in range(nwalkers)]
sampler.run_mcmc(p0, nsteps, progress=self.verbose)
frac_acc = np.mean(sampler.acceptance_fraction)
if frac_acc < 0.2:
warnings.warn(f'Sampler acceptance fraction is low: {frac_acc}')
self.samples = sampler.get_chain(flat=True)
self.prot_prior = np.nanpercentile(10**self.samples[:,2], [16, 50, 84])
if self.verbose:
print('Done sampling prior!')
def test_prior(self):
"""
Check whether the prior returns finite results for the input data.
I.e., does the prior KDE cover the input's parameter space?
The starting guess for log(P) is set to 1.3 (== 20 days).
"""
p0 = [self.obs['logT'][0], self.obs['logg'][0], 1.3, self.obs['MG'][0], self.obs['logbp_rp'][0]]
prior = self.prior_pdf(p0)
if prior < 1e-3:
print('!! Input data are outside range of KDE. Prior not available. !!')
return False
else:
return True
def __call__(self):
self.load_prior_data()
self.build_kde()
if self.test_prior():
self.sample()
return self.samples, self.prot_prior
# Prior KDE does not cover the input; no samples available.
return None, None
|
ojhall94REPO_NAMEmichaelPATH_START.@michael_extracted@michael-main@michael@prior.py@.PATH_END.py
|
{
"filename": "_x.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/splom/marker/colorbar/_x.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class XValidator(_plotly_utils.basevalidators.NumberValidator):
def __init__(self, plotly_name="x", parent_name="splom.marker.colorbar", **kwargs):
super(XValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "colorbars"),
**kwargs,
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@splom@marker@colorbar@_x.py@.PATH_END.py
|
{
"filename": "_name.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/surface/_name.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class NameValidator(_plotly_utils.basevalidators.StringValidator):
def __init__(self, plotly_name="name", parent_name="surface", **kwargs):
super(NameValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "style"),
**kwargs,
)
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@surface@_name.py@.PATH_END.py
|
{
"filename": "vincent_renderer.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/matplotlylib/mplexporter/renderers/vincent_renderer.py",
"type": "Python"
}
|
import warnings
from .base import Renderer
from ..exporter import Exporter
class VincentRenderer(Renderer):
def open_figure(self, fig, props):
self.chart = None
self.figwidth = int(props["figwidth"] * props["dpi"])
self.figheight = int(props["figheight"] * props["dpi"])
def draw_line(self, data, coordinates, style, label, mplobj=None):
import vincent # only import if VincentRenderer is used
if coordinates != "data":
warnings.warn("Only data coordinates supported. Skipping this")
linedata = {"x": data[:, 0], "y": data[:, 1]}
line = vincent.Line(
linedata, iter_idx="x", width=self.figwidth, height=self.figheight
)
# TODO: respect the other style settings
line.scales["color"].range = [style["color"]]
if self.chart is None:
self.chart = line
else:
warnings.warn("Multiple plot elements not yet supported")
def draw_markers(self, data, coordinates, style, label, mplobj=None):
import vincent # only import if VincentRenderer is used
if coordinates != "data":
warnings.warn("Only data coordinates supported. Skipping this")
markerdata = {"x": data[:, 0], "y": data[:, 1]}
markers = vincent.Scatter(
markerdata, iter_idx="x", width=self.figwidth, height=self.figheight
)
# TODO: respect the other style settings
markers.scales["color"].range = [style["facecolor"]]
if self.chart is None:
self.chart = markers
else:
warnings.warn("Multiple plot elements not yet supported")
def fig_to_vincent(fig):
"""Convert a matplotlib figure to a vincent object"""
renderer = VincentRenderer()
exporter = Exporter(renderer)
exporter.run(fig)
return renderer.chart
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@matplotlylib@mplexporter@renderers@vincent_renderer.py@.PATH_END.py
|
{
"filename": "notes.md",
"repo_name": "Keck-DataReductionPipelines/KPF-Pipeline",
"repo_path": "KPF-Pipeline_extracted/KPF-Pipeline-master/kpfpipe/models/notes.md",
"type": "Markdown"
}
|
# Data Model Notes
## Level 0 data
- Contains a single 2D image and variance array (2 extensions).
- Image and variance can be empty.
- contains a "receipt" extension as an ASCII table (exists in memory as a pandas table)
- supports adding/removing auxiliary HDUs
## Level 1 data
- Data identified by fibers
- Each fiber has flux, wavelength, variance
- contains a "receipt" extension
- contains a "segment" extension that specifies all
- inherits all header keywords from level 0
## Level 2 data
- data stored in table. Each row is identified by a segment.
## Demo
### Astropy and FITS
- import astropy, read fits, fits info
### Core
- create empty level 0: info
- receipt: info, access, add, remove
- auxiliary extensions: add, remove
### level 0
- read level 0 NEID
- write level 0 KPF
- read level 0 KPF
### level 1
- read level 1 NEID
- write level 1 KPF
- segment: info, append, delete.
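The demo steps above ("import astropy, read fits, fits info") can be sketched with `astropy.io.fits`. This is a minimal in-memory example; the KPF data-model classes themselves are not shown, and the `VARIANCE` extension name is only illustrative:

```python
from astropy.io import fits

# Build a minimal in-memory "level 0"-style file: a primary HDU plus
# one auxiliary extension, mirroring the add/remove demo above.
primary = fits.PrimaryHDU()
aux = fits.ImageHDU(name='VARIANCE')
hdul = fits.HDUList([primary, aux])

print([hdu.name for hdu in hdul])   # extension names
hdul.info()                         # summary table, as in the "fits info" step
```

Removing an auxiliary extension is then just `hdul.pop('VARIANCE')` or list-style deletion.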
|
Keck-DataReductionPipelinesREPO_NAMEKPF-PipelinePATH_START.@KPF-Pipeline_extracted@KPF-Pipeline-master@kpfpipe@models@notes.md@.PATH_END.py
|
{
"filename": "_textcase.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/layout/ternary/aaxis/title/font/_textcase.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class TextcaseValidator(_plotly_utils.basevalidators.EnumeratedValidator):
def __init__(
self,
plotly_name="textcase",
parent_name="layout.ternary.aaxis.title.font",
**kwargs,
):
super(TextcaseValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "plot"),
values=kwargs.pop("values", ["normal", "word caps", "upper", "lower"]),
**kwargs,
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@layout@ternary@aaxis@title@font@_textcase.py@.PATH_END.py
|
{
"filename": "acsData.py",
"repo_name": "spacetelescope/drizzlepac",
"repo_path": "drizzlepac_extracted/drizzlepac-main/drizzlepac/acsData.py",
"type": "Python"
}
|
"""
Class used to model ACS specific instrument data.
:Authors: Christopher Hanley, Warren Hack, Ivo Busko, David Grumm
:License: :doc:`/LICENSE`
"""
from stsci.tools import fileutil
import numpy as np
from .imageObject import imageObject
class ACSInputImage(imageObject):
SEPARATOR = '_'
def __init__(self,filename=None, output=None, group=None):
super().__init__(filename, output=output, group=group)
# define the cosmic ray bits value to use in the dq array
self.cr_bits_value = 4096
self._instrument=self._image["PRIMARY"].header["INSTRUME"]
self._effGain=1.
self.flatkey = 'PFLTFILE'
for chip in range(1,self._numchips+1,1):
if self._image[self.scienceExt,chip].group_member:
self._image[self.scienceExt,chip].darkcurrent=self.getdarkcurrent(chip)
def doUnitConversions(self):
# Effective gain to be used in the driz_cr step. Since the
# ACS images have already been converted to electrons,
# the effective gain is 1.
for chip in self.returnAllChips(extname=self.scienceExt):
chip._effGain = 1.0  # chip._effGain is what driz_cr uses
chip._conversionFactor = 1.0
self._effGain=1.0
def _assignSignature(self, chip):
"""assign a unique signature for the image based
on the instrument, detector, chip, and size
this will be used to uniquely identify the appropriate
static mask for the image
this also records the filename for the static mask to the outputNames dictionary
"""
sci_chip = self._image[self.scienceExt,chip]
ny=sci_chip._naxis1
nx=sci_chip._naxis2
detnum = sci_chip.detnum
sig = (self.outroot, (nx, ny), int(chip)) # signature is a tuple
sci_chip.signature=sig # signature is a tuple
def getdarkcurrent(self,extver):
"""
Return the dark current for the ACS detector. This value
will be contained within an instrument specific keyword.
The value in the image header will be converted to units
of electrons.
Returns
-------
darkcurrent: float
Dark current value for the ACS detector in **units of electrons**.
"""
darkcurrent=0.
try:
darkcurrent = self._image[self.scienceExt,extver].header['MEANDARK']
except KeyError:
msg = "#############################################\n"
msg += "# #\n"
msg += "# Error: #\n"
msg += "# Cannot find the value for 'MEANDARK' #\n"
msg += "# in the image header. ACS input images #\n"
msg += "# are expected to have this header #\n"
msg += "# keyword. #\n"
msg += "# #\n"
msg += "# Error occurred in the ACSInputImage class #\n"
msg += "# #\n"
msg += "#############################################\n"
raise ValueError(msg)
return darkcurrent
class WFCInputImage(ACSInputImage):
def __init__(self, filename=None, output=None, group=None):
super().__init__(filename, output=output, group=group)
self.full_shape = (4096, 2048)
self._detector=self._image["PRIMARY"].header["DETECTOR"]
# get cte direction, which depends on which chip but is independent of amp
for chip in range(1, self._numchips + 1, 1):
self._assignSignature(chip) #this is used in the static mask
if chip == 1:
self._image[self.scienceExt,chip].cte_dir = -1
elif chip == 2:
self._image[self.scienceExt,chip].cte_dir = 1
def setInstrumentParameters(self,instrpars):
""" This method overrides the superclass to set default values into
the parameter dictionary, in case empty entries are provided.
This method gets called from processInput.
"""
pri_header = self._image[0].header
if len(instrpars) == 0:
instrpars['proc_unit']='native'
instrpars['gain']=''
instrpars['rdnoise']=''
instrpars['exptime']=''
instrpars['gnkeyword']=''
instrpars['rnkeyword']=''
instrpars['expkeyword']=''
self.proc_unit = instrpars['proc_unit']
if self._isNotValid (instrpars['gain'], instrpars['gnkeyword']):
instrpars['gnkeyword'] = 'ATODGNA,ATODGNB,ATODGNC,ATODGND'
if self._isNotValid (instrpars['rdnoise'], instrpars['rnkeyword']):
instrpars['rnkeyword'] = 'READNSEA,READNSEB,READNSEC,READNSED'
if self._isNotValid (instrpars['exptime'], instrpars['expkeyword']):
instrpars['expkeyword'] = 'EXPTIME'
for chip in self.returnAllChips(extname=self.scienceExt):
chip._gain = self.getInstrParameter(instrpars['gain'], pri_header,
instrpars['gnkeyword'])
chip._rdnoise = self.getInstrParameter(instrpars['rdnoise'], pri_header,
instrpars['rnkeyword'])
chip._exptime = self.getInstrParameter(instrpars['exptime'], pri_header,
instrpars['expkeyword'])
chip._effGain = 1.
if (chip._gain is None or chip._rdnoise is None or
chip._exptime is None):
print('ERROR: invalid instrument task parameter')
raise ValueError
# Convert the science data to electrons if specified by the user.
self.doUnitConversions()
class HRCInputImage (ACSInputImage):
def __init__(self, filename=None, output=None, group=None):
super().__init__(filename, output=output, group=group)
self._detector = self._image['PRIMARY'].header["DETECTOR"]
self.full_shape = (1024, 1024)
amp = self._image['PRIMARY'].header["CCDAMP"]
for chip in range(1,self._numchips+1,1):
self._assignSignature(chip) #this is used in the static mask
# cte direction depends on amp (but is independent of chip):
if amp in ['A', 'B']:
self._image[self.scienceExt, chip].cte_dir = 1
elif amp in ['C', 'D']:
self._image[self.scienceExt, chip].cte_dir = -1
def setInstrumentParameters(self,instrpars):
""" This method overrides the superclass to set default values into
the parameter dictionary, in case empty entries are provided.
This method gets called from processInput.
"""
pri_header = self._image[0].header
if len(instrpars) == 0:
instrpars['proc_unit']='native'
instrpars['gain']=''
instrpars['rdnoise']=''
instrpars['exptime']=''
instrpars['gnkeyword']=''
instrpars['rnkeyword']=''
instrpars['expkeyword']=''
self.proc_unit = instrpars['proc_unit']
if self._isNotValid (instrpars['gain'], instrpars['gnkeyword']):
instrpars['gnkeyword'] = 'ATODGNA,ATODGNB,ATODGNC,ATODGND'
if self._isNotValid (instrpars['rdnoise'], instrpars['rnkeyword']):
instrpars['rnkeyword'] = 'READNSEA,READNSEB,READNSEC,READNSED'
if self._isNotValid (instrpars['exptime'], instrpars['expkeyword']):
instrpars['expkeyword'] = 'EXPTIME'
for chip in self.returnAllChips(extname=self.scienceExt):
chip._gain = self.getInstrParameter(instrpars['gain'], pri_header,
instrpars['gnkeyword'])
chip._rdnoise = self.getInstrParameter(instrpars['rdnoise'], pri_header,
instrpars['rnkeyword'])
chip._exptime = self.getInstrParameter(instrpars['exptime'], pri_header,
instrpars['expkeyword'])
chip._effGain = chip._gain
if (chip._gain is None or chip._rdnoise is None or
chip._exptime is None):
print('ERROR: invalid instrument task parameter')
raise ValueError
# Convert the science data to electrons if specified by the user.
self.doUnitConversions()
class SBCInputImage (ACSInputImage):
def __init__(self, filename=None, output=None, group=None):
super().__init__(filename, output=output, group=group)
self.full_shape = (1024, 1024)
self._detector = self._image['PRIMARY'].header["DETECTOR"]
# no cte correction for SBC so set cte_dir=0.
print('WARNING: No cte correction will be made for this SBC data.')
for chip in range(1,self._numchips+1,1):
self._assignSignature(chip) #this is used in the static mask
self._image[self.scienceExt,chip].cte_dir = 0
def _setSBCchippars(self):
self._setDefaultSBCGain()
self._setDefaultSBCReadnoise()
def _setDefaultSBCGain(self):
self._gain = 1
def _setDefaultSBCReadnoise(self):
self._rdnoise = 0
def setInstrumentParameters(self,instrpars):
""" Sets the instrument parameters.
"""
pri_header = self._image[0].header
if self._isNotValid (instrpars['gain'], instrpars['gnkeyword']):
instrpars['gnkeyword'] = None
if self._isNotValid (instrpars['rdnoise'], instrpars['rnkeyword']):
instrpars['rnkeyword'] = None
if self._isNotValid (instrpars['exptime'], instrpars['expkeyword']):
instrpars['expkeyword'] = 'EXPTIME'
# We need to treat Read Noise and Gain as a special case since it is
# not populated in the SBC primary header for the MAMA
for chip in self.returnAllChips(extname=self.scienceExt):
chip._gain = 1.0 #self.getInstrParameter("", pri_header,
# instrpars['gnkeyword'])
chip._rdnoise = 0.0 #self.getInstrParameter("", pri_header,
# instrpars['rnkeyword'])
chip._exptime = self.getInstrParameter(instrpars['exptime'], pri_header,
instrpars['expkeyword'])
if chip._exptime is None:
print('ERROR: invalid instrument task parameter')
raise ValueError
# We need to determine if the user has used the default readnoise/gain value
# since if not, they will need to supply a gain/readnoise value as well
usingDefaultGain = instrpars['gnkeyword'] is None
usingDefaultReadnoise = instrpars['rnkeyword'] is None
# Set the default readnoise or gain values based upon the amount of user input given.
# Case 1: User supplied no gain or readnoise information
if usingDefaultReadnoise and usingDefaultGain:
# Set the default gain and readnoise values
self._setSBCchippars()
# Case 2: The user has supplied a value for gain
elif usingDefaultReadnoise and not usingDefaultGain:
# Set the default readnoise value
self._setDefaultSBCReadnoise()
# Case 3: The user has supplied a value for readnoise
elif not usingDefaultReadnoise and usingDefaultGain:
# Set the default gain value
self._setDefaultSBCGain()
else:
# In this case, the user has specified both a gain and readnoise values. Just use them as is.
pass
|
spacetelescopeREPO_NAMEdrizzlepacPATH_START.@drizzlepac_extracted@drizzlepac-main@drizzlepac@acsData.py@.PATH_END.py
|
{
"filename": "test_reduce.py",
"repo_name": "ledatelescope/bifrost",
"repo_path": "bifrost_extracted/bifrost-master/test/test_reduce.py",
"type": "Python"
}
|
# Copyright (c) 2016-2023, The Bifrost Authors. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of The Bifrost Authors nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import unittest
import numpy as np
import bifrost as bf
from bifrost.libbifrost_generated import BF_CUDA_ENABLED
#import time
def stderr(data, axis):
return np.sum(data, axis=axis) / np.sqrt(data.shape[axis])
NP_OPS = {
'sum': np.sum,
'mean': np.mean,
'min': np.min,
'max': np.max,
'stderr': stderr
}
def scrunch(data, factor=2, axis=0, func=np.sum):
if factor is None:
factor = data.shape[axis]
s = data.shape
if s[axis] % factor != 0:
raise ValueError("Scrunch factor does not divide axis size")
s = s[:axis] + (s[axis]//factor, factor) + s[axis:][1:]
axis = axis + 1 if axis >= 0 else axis
return func(data.reshape(s), axis=axis)
def pwrscrunch(data, factor=2, axis=0, func=np.sum):
if factor is None:
factor = data.shape[axis]
s = data.shape
if s[axis] % factor != 0:
raise ValueError("Scrunch factor does not divide axis size")
s = s[:axis] + (s[axis]//factor, factor) + s[axis:][1:]
axis = axis + 1 if axis >= 0 else axis
return func(np.abs(data.reshape(s))**2, axis=axis)
@unittest.skipUnless(BF_CUDA_ENABLED, "requires GPU support")
class ReduceTest(unittest.TestCase):
def setUp(self):
np.random.seed(1234)
def run_reduce_test(self, shape, axis, n, op='sum', dtype=np.float32):
a = ((np.random.random(size=shape)*2-1)*127).astype(np.int8).astype(dtype)
if op[:3] == 'pwr':
b_gold = pwrscrunch(a.astype(np.float32), n, axis, NP_OPS[op[3:]])
else:
b_gold = scrunch(a.astype(np.float32), n, axis, NP_OPS[op])
a = bf.asarray(a, space='cuda')
b = bf.empty_like(b_gold, space='cuda')
bf.reduce(a, b, op)
b = b.copy('system')
np.testing.assert_allclose(b, b_gold)
def run_reduce_slice_test(self, shape, axis, n, op='sum', dtype=np.float32):
if n is None:
return None
a = ((np.random.random(size=shape)*2-1)*127).astype(np.int8).astype(dtype)
if axis == 0:
a_slice = a[1:((a.shape[0]-1)//n-1)*n+1,...]
elif axis == 1:
a_slice = a[:,1:((a.shape[1]-1)//n-1)*n+1,:]
else:
a_slice = a[...,1:((a.shape[-1]-1)//n-1)*n+1]
if a_slice.shape[0] == 0 or a_slice.shape[1] == 0 or a_slice.shape[-1] == 0:
return None
if op[:3] == 'pwr':
b_gold = pwrscrunch(a_slice.astype(np.float32), n, axis, NP_OPS[op[3:]])
else:
b_gold = scrunch(a_slice.astype(np.float32), n, axis, NP_OPS[op])
a = bf.ndarray(a, space='cuda')
if axis == 0:
a_slice = a[1:((a.shape[0]-1)//n-1)*n+1,...]
elif axis == 1:
a_slice = a[:,1:((a.shape[1]-1)//n-1)*n+1,:]
else:
a_slice = a[...,1:((a.shape[-1]-1)//n-1)*n+1]
b = bf.empty_like(b_gold, space='cuda')
bf.reduce(a_slice, b, op)
b = b.copy('system')
np.testing.assert_allclose(b, b_gold)
def test_reduce(self):
self.run_reduce_test((3,6,5), axis=1, n=2, op='sum', dtype=np.float32)
for shape in [(20,20,40), (20,40,60), (40,100,200)]:
for axis in range(3):
for n in [2, 4, 5, 10, None]:
for op in ['sum', 'mean', 'pwrsum', 'pwrmean']:#, 'min', 'max', 'stderr']:
for dtype in [np.float32, np.int16, np.int8]:
#print shape, axis, n, op, dtype
self.run_reduce_test(shape, axis, n, op, dtype)
self.run_reduce_slice_test(shape, axis, n, op, dtype)
def test_reduce_pow2(self):
for shape in [(16,32,64), (16,64,256), (256,64,16)]:#, (256, 256, 512)]:
for axis in range(3):
for n in [2, 4, 8, 16, None]:
for op in ['sum', 'mean', 'pwrsum', 'pwrmean']:#, 'min', 'max', 'stderr']:
for dtype in [np.float32, np.int16, np.int8]:
#print shape, axis, n, op, dtype
self.run_reduce_test(shape, axis, n, op, dtype)
self.run_reduce_slice_test(shape, axis, n, op, dtype)
def run_complex_reduce_test(self, shape, axis, n, op='sum', dtype=np.complex64):
a = ((np.random.random(size=shape)*2-1)*127).astype(np.int8).astype(dtype) \
+ 1j*((np.random.random(size=shape)*2-1)*127).astype(np.int8).astype(dtype)
if op[:3] == 'pwr':
b_gold = pwrscrunch(a.astype(np.complex64), n, axis, NP_OPS[op[3:]]).astype(np.float32)
else:
b_gold = scrunch(a.astype(np.complex64), n, axis, NP_OPS[op])
a = bf.asarray(a, space='cuda')
b = bf.empty_like(b_gold, space='cuda')
bf.reduce(a, b, op)
b = b.copy('system')
np.testing.assert_allclose(b, b_gold, rtol=1e-3 if op[:3] == 'pwr' else 1e-7)
def run_complex_reduce_slice_test(self, shape, axis, n, op='sum', dtype=np.float32):
if n is None:
return None
a = ((np.random.random(size=shape)*2-1)*127).astype(np.int8).astype(dtype) \
+ 1j*((np.random.random(size=shape)*2-1)*127).astype(np.int8).astype(dtype)
if axis == 0:
a_slice = a[1:((a.shape[0]-1)//n-1)*n+1,...]
elif axis == 1:
a_slice = a[:,1:((a.shape[1]-1)//n-1)*n+1,:]
else:
a_slice = a[...,1:((a.shape[-1]-1)//n-1)*n+1]
if a_slice.shape[0] == 0 or a_slice.shape[1] == 0 or a_slice.shape[-1] == 0:
return None
if op[:3] == 'pwr':
b_gold = pwrscrunch(a_slice.astype(np.complex64), n, axis, NP_OPS[op[3:]])
else:
b_gold = scrunch(a_slice.astype(np.complex64), n, axis, NP_OPS[op])
a = bf.ndarray(a, space='cuda')
if axis == 0:
a_slice = a[1:((a.shape[0]-1)//n-1)*n+1,...]
elif axis == 1:
a_slice = a[:,1:((a.shape[1]-1)//n-1)*n+1,:]
else:
a_slice = a[...,1:((a.shape[-1]-1)//n-1)*n+1]
b = bf.empty_like(b_gold, space='cuda')
bf.reduce(a_slice, b, op)
b = b.copy('system')
np.testing.assert_allclose(b, b_gold, rtol=1e-3 if op[:3] == 'pwr' else 1e-7)
def test_complex_reduce(self):
self.run_complex_reduce_test((3,6,5), axis=1, n=2, op='pwrsum', dtype=np.complex64)
for shape in [(20,20,40), (20,40,60), (40,100,200)]:
for axis in range(3):
for n in [2, 4, 5, 10, None]:
for op in ['sum', 'mean', 'pwrsum', 'pwrmean']:#, 'min', 'max', 'stderr']:
for dtype in [np.complex64,]:
#print shape, axis, n, op, dtype
self.run_complex_reduce_test(shape, axis, n, op, dtype)
self.run_complex_reduce_slice_test(shape, axis, n, op, dtype)
def test_complex_reduce_pow2(self):
for shape in [(16,32,64), (16,64,256), (256,64,16)]:#, (256, 256, 512)]:
for axis in range(3):
for n in [2, 4, 8, 16, None]:
for op in ['sum', 'mean', 'pwrsum', 'pwrmean']:#, 'min', 'max', 'stderr']:
for dtype in [np.complex64,]:
#print shape, axis, n, op, dtype
self.run_complex_reduce_test(shape, axis, n, op, dtype)
self.run_complex_reduce_slice_test(shape, axis, n, op, dtype)
|
ledatelescopeREPO_NAMEbifrostPATH_START.@bifrost_extracted@bifrost-master@test@test_reduce.py@.PATH_END.py
|
{
"filename": "PN_dE_GW_dt_and_dM_dt.py",
"repo_name": "zachetienne/nrpytutorial",
"repo_path": "nrpytutorial_extracted/nrpytutorial-master/NRPyPN/PN_dE_GW_dt_and_dM_dt.py",
"type": "Python"
}
|
# As documented in the NRPyPN notebook
# PN-dE_GW_dt.ipynb, this Python script
# generates dE_GW/dt at highest known
# post-Newtonian order (as of 2015, at
# least).
# Core functions:
# dE_GW_dt_OBKPSS2015_consts(m1,m2, n12U, S1U,S2U):
# Define constants used in the dE_GW/dt expression.
# f_dE_GW_dt(mOmega, m1,m2, n12U, S1U,S2U):
# Compute dE_GW_dt and store to global variable of the same name.
# Author: Zach Etienne
# zachetie **at** gmail **dot** com
# Step 0: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import sys # Standard Python modules for multiplatform OS-level functions
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import indexedexpNRPyPN as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
from NRPyPN_shortcuts import div,dot,gamma_EulerMascheroni # NRPyPN: shortcuts for e.g., vector operations
#################################
#################################
# Constants given in Eqs A1-13 of https://arxiv.org/abs/1502.01747
def dE_GW_dt_OBKPSS2015_consts(m1,m2, _n12U, S1U,S2U): # _n12U unused.
# define scalars:
m = (m1+m2)
nu = m1*m2/m**2
delta = (m1-m2)/m
# define vectors:
Stot = ixp.zerorank1()
Sigma= ixp.zerorank1()
l = ixp.zerorank1()
l[2] = sp.sympify(1)
chi1U = ixp.zerorank1()
chi2U = ixp.zerorank1()
chi_s = ixp.zerorank1()
chi_a = ixp.zerorank1()
for i in range(3):
Stot[i] = S1U[i] + S2U[i]
Sigma[i] = (m1+m2)/m2*S2U[i] - (m1+m2)/m1*S1U[i]
chi1U[i] = S1U[i]/m1**2
chi2U[i] = S2U[i]/m2**2
chi_s[i] = div(1,2) * (chi1U[i] + chi2U[i])
chi_a[i] = div(1,2) * (chi1U[i] - chi2U[i])
# define scalars that depend on vectors
s_l = dot(Stot,l) /m**2
# s_n = dot(Stot,n12U)/m**2
sigma_l = dot(Sigma,l)/m**2
# sigma_n = dot(Sigma,n12U)/m**2
return nu,delta, l,chi_a,chi_s, s_l,sigma_l
#################################
#################################
# Based on Eqs A22-28 of https://arxiv.org/abs/1502.01747, with
# Eq A.14 of https://arxiv.org/abs/0709.0093 for Mdot
# and correction on b[7] term by comparison with
# https://link.springer.com/content/pdf/10.12942/lrr-2014-2.pdf
def f_dE_GW_dt_and_dM_dt(mOmega, m1,m2, n12U, S1U,S2U):
def f_compute_quantities(mOmega, m1,m2, n12U, S1U,S2U, which_quantity):
if not which_quantity in ('dM_dt', 'dE_GW_dt', 'dE_GW_dt_plus_dM_dt'):
print("which_quantity == "+str(which_quantity)+" not supported!")
sys.exit(1)
nu,delta, l,chi_a,chi_s, s_l,sigma_l = dE_GW_dt_OBKPSS2015_consts(m1,m2, n12U, S1U,S2U)
x = (mOmega)**div(2,3)
# Compute b_5_Mdot:
b_5_Mdot = (-div(1,4)*(+(1-3*nu)*dot(chi_s,l)*(1+3*dot(chi_s,l)**2+9*dot(chi_a,l)**2)
+(1- nu)*delta*dot(chi_a,l)*(1+3*dot(chi_a,l)**2+9*dot(chi_s,l)**2)))
if which_quantity == "dM_dt":
return div(32,5)*nu**2*x**5*b_5_Mdot*x**div(5,2)
b = ixp.zerorank1(DIM=10)
b[2] = -div(1247,336) - div(35,12)*nu
b[3] = +4*sp.pi - 4*s_l - div(5,4)*delta*sigma_l
b[4] =(-div(44711,9072) + div(9271,504)*nu + div(65,18)*nu**2
+(+div(287,96) + div( 1,24)*nu)*dot(chi_s,l)**2
-(+div( 89,96) + div( 7,24)*nu)*dot(chi_s,chi_s)
+(+div(287,96) - 12*nu)*dot(chi_a,l)**2
+(-div( 89,96) + 4*nu)*dot(chi_a,chi_a)
+div(287,48)*delta*dot(chi_s,l)*dot(chi_a,l) - div(89,48)*delta*dot(chi_s,chi_a))
b[5] =(-div(8191,672)*sp.pi - div(9,2)*s_l - div(13,16)*delta*sigma_l
+nu*(-div(583,24)*sp.pi + div(272,9)*s_l + div(43,4)*delta*sigma_l))
if which_quantity == "dE_GW_dt_plus_dM_dt":
b[5]+= b_5_Mdot
b[6] =(+div(6643739519,69854400) + div(16,3)*sp.pi**2 - div(1712,105)*gamma_EulerMascheroni
-div(856,105)*sp.log(16*x) + (-div(134543,7776) + div(41,48)*sp.pi**2)*nu
-div(94403,3024)*nu**2 - div(775,324)*nu**3 - 16*sp.pi*s_l - div(31,6)*sp.pi*delta*sigma_l)
b[7] =(+(+div(476645,6804) + div(6172,189)*nu - div(2810,27)*nu**2)*s_l
+(+div(9535,336) + div(1849,126)*nu - div(1501,36)*nu**2)*delta*sigma_l
+(-div(16285,504) + div(214745,1728)*nu + div(193385,3024)*nu**2)*sp.pi)
b[8] =(+(-div(3485,96)*sp.pi + div(13879,72)*sp.pi*nu)*s_l
+(-div(7163,672)*sp.pi + div(130583,2016)*sp.pi*nu)*delta*sigma_l)
b_sum = sp.sympify(1)
for k in range(9):
b_sum += b[k]*x**div(k,2)
return div(32,5)*nu**2*x**5*b_sum
global dE_GW_dt_plus_dM_dt, dE_GW_dt, dM_dt
dE_GW_dt_plus_dM_dt = \
f_compute_quantities(mOmega, m1,m2, n12U, S1U,S2U, "dE_GW_dt_plus_dM_dt")
dE_GW_dt = f_compute_quantities(mOmega, m1,m2, n12U, S1U,S2U, "dE_GW_dt")
dM_dt = f_compute_quantities(mOmega, m1,m2, n12U, S1U,S2U, "dM_dt")
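In equation form, the series assembled above is the PN-expanded gravitational-wave luminosity, whose leading prefactor matches `div(32,5)*nu**2*x**5` in the code:

```latex
\frac{dE_{\rm GW}}{dt} = \frac{32}{5}\,\nu^{2} x^{5}
  \left[\,1 + \sum_{k=2}^{8} b_k\, x^{k/2}\right],
\qquad x \equiv (m\Omega)^{2/3},
```

with the coefficients $b_k$ as defined term by term in `f_compute_quantities` ($b_0 = b_1 = 0$, and $b_5$ picking up the extra `b_5_Mdot` piece when the horizon-flux term is included).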
|
zachetienneREPO_NAMEnrpytutorialPATH_START.@nrpytutorial_extracted@nrpytutorial-master@NRPyPN@PN_dE_GW_dt_and_dM_dt.py@.PATH_END.py
|
{
"filename": "config.py",
"repo_name": "spedas/pyspedas",
"repo_path": "pyspedas_extracted/pyspedas-master/pyspedas/projects/omni/config.py",
"type": "Python"
}
|
import os
CONFIG = {'local_data_dir': 'omni_data/',
'remote_data_dir': 'https://spdf.gsfc.nasa.gov/pub/data/omni/omni_cdaweb/'}
# override local data directory with environment variables
if os.environ.get('SPEDAS_DATA_DIR'):
CONFIG['local_data_dir'] = os.sep.join([os.environ['SPEDAS_DATA_DIR'], 'omni'])
if os.environ.get('OMNI_DATA_DIR'):
CONFIG['local_data_dir'] = os.environ['OMNI_DATA_DIR']
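The override precedence can be seen in a small standalone sketch (the helper name `resolve_local_data_dir` is hypothetical, not part of pyspedas):

```python
import os

def resolve_local_data_dir(env, default='omni_data/'):
    # Same precedence as the module above: OMNI_DATA_DIR beats
    # SPEDAS_DATA_DIR, which beats the hard-coded default.
    path = default
    if env.get('SPEDAS_DATA_DIR'):
        path = os.sep.join([env['SPEDAS_DATA_DIR'], 'omni'])
    if env.get('OMNI_DATA_DIR'):
        path = env['OMNI_DATA_DIR']
    return path

print(resolve_local_data_dir({}))                                  # 'omni_data/'
print(resolve_local_data_dir({'SPEDAS_DATA_DIR': 'data'}))         # 'data' + os.sep + 'omni'
print(resolve_local_data_dir({'SPEDAS_DATA_DIR': 'data',
                              'OMNI_DATA_DIR': 'omni_override'}))  # 'omni_override'
```

Because both `if` checks run unconditionally at import time, the mission-specific variable always wins when both are set.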
|
spedasREPO_NAMEpyspedasPATH_START.@pyspedas_extracted@pyspedas-master@pyspedas@projects@omni@config.py@.PATH_END.py
|
{
"filename": "config_s4cmb.py",
"repo_name": "JulienPeloton/s4cmb",
"repo_path": "s4cmb_extracted/s4cmb-master/s4cmb/config_s4cmb.py",
"type": "Python"
}
|
#!/usr/bin/python
# Copyright (c) 2016-2021 Julien Peloton, Giulio Fabbian.
#
# This file is part of s4cmb
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
Module to normalise ini files containing parameter
values for running s4cmb in software mode.
Not required for the API.
Author: Julien Peloton, peloton@lal.in2p3.fr
"""
import os
import sys
import importlib
import numpy as np
def compare_version_number(version, threshold):
"""
Compare two version numbers.
Parameters
----------
version: string
Version of you package x.y.z
threshold: string
Threshold version x.y.z
Returns
----------
result: boolean
True if your version is higher or equal than the threshold.
False otherwise.
Examples
----------
>>> version = '1.10.0'
>>> threshold = '1.9.1'
>>> compare_version_number(version, threshold)
True
"""
# If the two versions are equal
if version == threshold:
return True
version_numbers = version.split(".")
threshold_numbers = threshold.split(".")
for v, t in zip(version_numbers, threshold_numbers):
v = int(v)
t = int(t)
if v == t:
continue
if v > t:
return True
if v < t:
return False
return True
def import_string_as_module(fn_full):
"""
Import module from its name given as a string.
Parameters
----------
fn_full: string
Python filename containing parameters that you
want to import.
Returns
----------
params: module
Module containing your parameters.
Examples
----------
>>> fn_full = 'examples/inifiles/simple_parameters.py'
>>> params = import_string_as_module(fn_full)
>>> 'do_pol' in dir(params)
True
"""
# Import parameters from the user parameter file
fn_short = os.path.basename(fn_full).split(".py")[0]
sys.path.insert(
0, os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(fn_full)))
)
params = importlib.import_module(fn_short)
return params
if __name__ == "__main__":
import doctest
    if compare_version_number(np.__version__, "1.14.0"):
np.set_printoptions(legacy="1.13")
doctest.testmod()
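The doctests above exercise `compare_version_number`; a standalone sketch (re-implementing the same idea, not importing s4cmb) shows why splitting into numeric components matters — plain string comparison gets multi-digit components wrong:

```python
def version_at_least(version, threshold):
    """Return True when version >= threshold, comparing numeric components."""
    v_parts = [int(p) for p in version.split(".")]
    t_parts = [int(p) for p in threshold.split(".")]
    for v, t in zip(v_parts, t_parts):
        if v != t:
            return v > t
    return True  # equal over the compared components

# Lexicographic string comparison fails on multi-digit components...
print("1.10.0" >= "1.9.1")                  # False: '1' < '9' as characters
# ...while the numeric comparison handles them correctly.
print(version_at_least("1.10.0", "1.9.1"))  # True
```

This is exactly the pitfall that comparing `np.__version__` directly as a string would reintroduce.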
|
JulienPelotonREPO_NAMEs4cmbPATH_START.@s4cmb_extracted@s4cmb-master@s4cmb@config_s4cmb.py@.PATH_END.py
|
{
"filename": "_familysrc.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/validators/scatterpolargl/hoverlabel/font/_familysrc.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class FamilysrcValidator(_plotly_utils.basevalidators.SrcValidator):
def __init__(
self,
plotly_name="familysrc",
parent_name="scatterpolargl.hoverlabel.font",
**kwargs
):
super(FamilysrcValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "none"),
role=kwargs.pop("role", "info"),
**kwargs
)
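The validator above only wires metadata into its base class. The underlying pattern — a subclass supplying defaults via `kwargs.pop` while still letting callers override them — can be sketched with a stand-in base class (not plotly's actual `SrcValidator`):

```python
class BaseValidator:
    """Stand-in for plotly's validator base: records where a property lives."""
    def __init__(self, plotly_name, parent_name, edit_type, **kwargs):
        self.plotly_name = plotly_name
        self.parent_name = parent_name
        self.edit_type = edit_type

class FamilysrcValidator(BaseValidator):
    def __init__(self, plotly_name="familysrc", parent_name="demo.font", **kwargs):
        # kwargs.pop(...) supplies a default but lets a caller override it,
        # and removes the key so it is not passed upward twice.
        super().__init__(
            plotly_name=plotly_name,
            parent_name=parent_name,
            edit_type=kwargs.pop("edit_type", "none"),
            **kwargs,
        )

v = FamilysrcValidator()
print(v.parent_name, v.edit_type)  # demo.font none
```

Popping the key before forwarding `**kwargs` is what prevents a `TypeError` for a duplicate keyword argument.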
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py2@plotly@validators@scatterpolargl@hoverlabel@font@_familysrc.py@.PATH_END.py
|
{
"filename": "index.md",
"repo_name": "nasa/Kamodo",
"repo_path": "Kamodo_extracted/Kamodo-master/kamodo_ccmc/readers/kamodo-tsyganenko/docs/index.md",
"type": "Markdown"
}
|
{! ../README.md !}
|
nasaREPO_NAMEKamodoPATH_START.@Kamodo_extracted@Kamodo-master@kamodo_ccmc@readers@kamodo-tsyganenko@docs@index.md@.PATH_END.py
|
{
"filename": "test_signal.py",
"repo_name": "ExObsSim/ExoRad2-public",
"repo_path": "ExoRad2-public_extracted/ExoRad2-public-master/tests/test_signal.py",
"type": "Python"
}
|
import logging
import unittest
import astropy.units as u
import numpy as np
from exorad.log import setLogLevel
from exorad.models.noise import Noise
from exorad.models.signal import CountsPerSeconds
from exorad.models.signal import Sed
from exorad.models.signal import Signal
setLogLevel(logging.DEBUG)
class SignalTest(unittest.TestCase):
def test_quantity_check(self):
try:
wl = np.linspace(0.1, 1, 10) * u.um
data = np.random.random_sample((10, 10))
time_grid = np.linspace(1, 5, 10) * u.hr
Signal(wl_grid=wl, data=data, time_grid=time_grid)
CountsPerSeconds(wl_grid=wl, data=data * u.Unit('ct/s'),
time_grid=time_grid)
Noise(wl_grid=wl, data=data * u.hr ** 0.5, time_grid=time_grid)
Sed(wl_grid=wl, data=data * u.W / u.m ** 2 / u.um,
time_grid=time_grid)
wl = np.linspace(0.1, 1, 10) * u.m
time_grid = np.linspace(1, 5, 10) * u.s
Signal(wl_grid=wl, data=data, time_grid=time_grid)
wl = np.linspace(0.1, 1, 10)
data = np.random.random_sample(10)
Signal(wl_grid=wl, data=data)
CountsPerSeconds(wl_grid=wl, data=data)
Noise(wl_grid=wl, data=data)
Sed(wl_grid=wl, data=data)
except u.UnitConversionError:
self.fail("Signal raised Exception unexpectedly!")
with self.assertRaises(u.UnitConversionError):
wl = np.linspace(0.1, 1, 10) * u.s
data = np.random.random_sample(10)
Signal(wl_grid=wl, data=data)
def test_time_dependency(self):
wl = np.linspace(0.1, 1, 10)
data = np.random.random_sample((10, 10))
time_grid = np.linspace(1, 5, 10)
sig = Signal(wl_grid=wl, data=data, time_grid=time_grid)
with self.assertRaises(NotImplementedError):
sig.time_dependent
fl = CountsPerSeconds(wl_grid=wl, data=data, time_grid=time_grid)
with self.assertRaises(NotImplementedError):
fl.time_dependent
data = np.random.random_sample(10)
sig = Signal(wl_grid=wl, data=data)
self.assertFalse(sig.time_dependent)
fl = CountsPerSeconds(wl_grid=wl, data=data)
self.assertFalse(fl.time_dependent)
    def test_dimension_check(self):
        # Each bad shape gets its own context manager: a single block would
        # stop at the first raise and never exercise the second case.
        with self.assertRaises(ValueError):
            wl = np.linspace(0.1, 1, 10) * u.um
            data = np.random.random_sample((10, 2))
            Signal(wl_grid=wl, data=data)
        with self.assertRaises(ValueError):
            wl = np.linspace(0.1, 1, 10) * u.um
            data = np.random.random_sample(10)
            time_grid = np.linspace(1, 5, 10) * u.hr
            Signal(wl_grid=wl, data=data, time_grid=time_grid)
def test_rebins(self):
pass
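`test_rebins` is still a stub; a minimal rebinning check could interpolate a signal onto a new wavelength grid. The sketch below uses `numpy.interp` and is illustrative only, not ExoRad's actual rebin implementation:

```python
import numpy as np

def rebin(wl_new, wl_old, data_old):
    """Linearly interpolate a 1D signal onto a new wavelength grid."""
    return np.interp(wl_new, wl_old, data_old)

wl_old = np.linspace(0.1, 1.0, 10)
data_old = wl_old ** 2            # a smooth test signal
wl_new = np.linspace(0.1, 1.0, 37)

data_new = rebin(wl_new, wl_old, data_old)
# Endpoints of the grid are preserved exactly by linear interpolation.
print(np.allclose(data_new[[0, -1]], data_old[[0, -1]]))  # True
```

A real test would then assert the rebinned data matches known values on the new grid.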
|
ExObsSimREPO_NAMEExoRad2-publicPATH_START.@ExoRad2-public_extracted@ExoRad2-public-master@tests@test_signal.py@.PATH_END.py
|
{
"filename": "_tickvalssrc.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/layout/scene/xaxis/_tickvalssrc.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class TickvalssrcValidator(_plotly_utils.basevalidators.SrcValidator):
def __init__(
self, plotly_name="tickvalssrc", parent_name="layout.scene.xaxis", **kwargs
):
super(TickvalssrcValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "none"),
**kwargs,
)
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@layout@scene@xaxis@_tickvalssrc.py@.PATH_END.py
|
{
"filename": "test_color_maps.py",
"repo_name": "rennehan/yt-swift",
"repo_path": "yt-swift_extracted/yt-swift-main/yt/visualization/tests/test_color_maps.py",
"type": "Python"
}
|
import os
import shutil
import tempfile
import unittest
import matplotlib.pyplot as plt
import numpy as np
from nose.tools import assert_raises
from numpy.testing import assert_almost_equal, assert_equal
from yt import make_colormap, show_colormaps
from yt.testing import requires_backend
class TestColorMaps(unittest.TestCase):
def setUp(self):
self.tmpdir = tempfile.mkdtemp()
self.curdir = os.getcwd()
os.chdir(self.tmpdir)
def tearDown(self):
os.chdir(self.curdir)
shutil.rmtree(self.tmpdir)
@requires_backend("Agg")
def test_show_colormaps(self):
show_colormaps()
show_colormaps(subset=["jet", "cool"])
show_colormaps(subset="yt_native", filename="yt_color_maps.png")
# Test for non-existent color map
with assert_raises(AttributeError) as ex:
show_colormaps(subset="unknown", filename="yt_color_maps.png")
desired = (
"show_colormaps requires subset attribute to be 'all', "
"'yt_native', or a list of valid colormap names."
)
assert_equal(str(ex.exception), desired)
@requires_backend("Agg")
def test_make_colormap(self):
make_colormap(
[([0, 0, 1], 10), ([1, 1, 1], 10), ([1, 0, 0], 10)],
name="french_flag",
interpolate=False,
)
show_colormaps("french_flag")
cmap = make_colormap(
[("dred", 5), ("blue", 2.0), ("orange", 0)], name="my_cmap"
)
assert_almost_equal(
cmap["red"][1], np.array([0.00392157, 0.62400345, 0.62400345])
)
assert_almost_equal(
cmap["blue"][2], np.array([0.00784314, 0.01098901, 0.01098901])
)
assert_almost_equal(cmap["green"][3], np.array([0.01176471, 0.0, 0.0]))
def test_cmyt_integration():
for name in ["algae", "bds_highcontrast", "kelp", "arbre", "octarine", "kamae"]:
cmap = plt.get_cmap(name)
assert cmap.name == name
name_r = name + "_r"
cmap_r = plt.get_cmap(name_r)
assert cmap_r.name == name_r
for name in ["algae", "kelp", "arbre", "octarine", "pastel"]:
cmap = plt.get_cmap("cmyt." + name)
assert cmap.name == "cmyt." + name
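`make_colormap` interpolates between named color anchors; the core idea can be sketched without yt by linearly blending RGB anchors into a lookup table (illustrative code, not yt's implementation):

```python
import numpy as np

def blend_colormap(anchors, n=256):
    """Build an (n, 3) RGB lookup table by linear interpolation between anchors."""
    anchors = np.asarray(anchors, dtype=float)
    positions = np.linspace(0.0, 1.0, len(anchors))
    x = np.linspace(0.0, 1.0, n)
    return np.stack(
        [np.interp(x, positions, anchors[:, c]) for c in range(3)], axis=1
    )

# Blue -> white -> red, like the 'french_flag' anchors in the test above
# (this version always blends smoothly; it has no interpolate=False mode).
lut = blend_colormap([[0, 0, 1], [1, 1, 1], [1, 0, 0]])
print(lut[0], lut[-1])  # pure blue at one end, pure red at the other
```

Each channel is interpolated independently, which is also how segmented colormap data is evaluated in matplotlib.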
|
rennehanREPO_NAMEyt-swiftPATH_START.@yt-swift_extracted@yt-swift-main@yt@visualization@tests@test_color_maps.py@.PATH_END.py
|
{
"filename": "_bgcolor.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/validators/barpolar/marker/colorbar/_bgcolor.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class BgcolorValidator(_plotly_utils.basevalidators.ColorValidator):
def __init__(
self, plotly_name="bgcolor", parent_name="barpolar.marker.colorbar", **kwargs
):
super(BgcolorValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "colorbars"),
role=kwargs.pop("role", "style"),
**kwargs
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py2@plotly@validators@barpolar@marker@colorbar@_bgcolor.py@.PATH_END.py
|
{
"filename": "test_value_counts.py",
"repo_name": "pandas-dev/pandas",
"repo_path": "pandas_extracted/pandas-main/pandas/tests/frame/methods/test_value_counts.py",
"type": "Python"
}
|
import numpy as np
import pytest
from pandas._config import using_string_dtype
from pandas.compat import HAS_PYARROW
import pandas as pd
import pandas._testing as tm
def test_data_frame_value_counts_unsorted():
df = pd.DataFrame(
{"num_legs": [2, 4, 4, 6], "num_wings": [2, 0, 0, 0]},
index=["falcon", "dog", "cat", "ant"],
)
result = df.value_counts(sort=False)
expected = pd.Series(
data=[1, 2, 1],
index=pd.MultiIndex.from_arrays(
[(2, 4, 6), (2, 0, 0)], names=["num_legs", "num_wings"]
),
name="count",
)
tm.assert_series_equal(result, expected)
def test_data_frame_value_counts_ascending():
df = pd.DataFrame(
{"num_legs": [2, 4, 4, 6], "num_wings": [2, 0, 0, 0]},
index=["falcon", "dog", "cat", "ant"],
)
result = df.value_counts(ascending=True)
expected = pd.Series(
data=[1, 1, 2],
index=pd.MultiIndex.from_arrays(
[(2, 6, 4), (2, 0, 0)], names=["num_legs", "num_wings"]
),
name="count",
)
tm.assert_series_equal(result, expected)
def test_data_frame_value_counts_default():
df = pd.DataFrame(
{"num_legs": [2, 4, 4, 6], "num_wings": [2, 0, 0, 0]},
index=["falcon", "dog", "cat", "ant"],
)
result = df.value_counts()
expected = pd.Series(
data=[2, 1, 1],
index=pd.MultiIndex.from_arrays(
[(4, 2, 6), (0, 2, 0)], names=["num_legs", "num_wings"]
),
name="count",
)
tm.assert_series_equal(result, expected)
def test_data_frame_value_counts_normalize():
df = pd.DataFrame(
{"num_legs": [2, 4, 4, 6], "num_wings": [2, 0, 0, 0]},
index=["falcon", "dog", "cat", "ant"],
)
result = df.value_counts(normalize=True)
expected = pd.Series(
data=[0.5, 0.25, 0.25],
index=pd.MultiIndex.from_arrays(
[(4, 2, 6), (0, 2, 0)], names=["num_legs", "num_wings"]
),
name="proportion",
)
tm.assert_series_equal(result, expected)
def test_data_frame_value_counts_single_col_default():
df = pd.DataFrame({"num_legs": [2, 4, 4, 6]})
result = df.value_counts()
expected = pd.Series(
data=[2, 1, 1],
index=pd.MultiIndex.from_arrays([[4, 2, 6]], names=["num_legs"]),
name="count",
)
tm.assert_series_equal(result, expected)
def test_data_frame_value_counts_empty():
df_no_cols = pd.DataFrame()
result = df_no_cols.value_counts()
expected = pd.Series(
[], dtype=np.int64, name="count", index=np.array([], dtype=np.intp)
)
tm.assert_series_equal(result, expected)
def test_data_frame_value_counts_empty_normalize():
df_no_cols = pd.DataFrame()
result = df_no_cols.value_counts(normalize=True)
expected = pd.Series(
[], dtype=np.float64, name="proportion", index=np.array([], dtype=np.intp)
)
tm.assert_series_equal(result, expected)
def test_data_frame_value_counts_dropna_true(nulls_fixture):
# GH 41334
df = pd.DataFrame(
{
"first_name": ["John", "Anne", "John", "Beth"],
"middle_name": ["Smith", nulls_fixture, nulls_fixture, "Louise"],
},
)
result = df.value_counts()
expected = pd.Series(
data=[1, 1],
index=pd.MultiIndex.from_arrays(
[("John", "Beth"), ("Smith", "Louise")], names=["first_name", "middle_name"]
),
name="count",
)
tm.assert_series_equal(result, expected)
@pytest.mark.xfail(
using_string_dtype() and not HAS_PYARROW, reason="TODO(infer_string)", strict=False
)
def test_data_frame_value_counts_dropna_false(nulls_fixture):
# GH 41334
df = pd.DataFrame(
{
"first_name": ["John", "Anne", "John", "Beth"],
"middle_name": ["Smith", nulls_fixture, nulls_fixture, "Louise"],
},
)
result = df.value_counts(dropna=False)
expected = pd.Series(
data=[1, 1, 1, 1],
index=pd.MultiIndex(
levels=[
pd.Index(["Anne", "Beth", "John"]),
pd.Index(["Louise", "Smith", np.nan]),
],
codes=[[2, 0, 2, 1], [1, 2, 2, 0]],
names=["first_name", "middle_name"],
),
name="count",
)
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize("columns", (["first_name", "middle_name"], [0, 1]))
def test_data_frame_value_counts_subset(nulls_fixture, columns):
# GH 50829
df = pd.DataFrame(
{
columns[0]: ["John", "Anne", "John", "Beth"],
columns[1]: ["Smith", nulls_fixture, nulls_fixture, "Louise"],
},
)
result = df.value_counts(columns[0])
expected = pd.Series(
data=[2, 1, 1],
index=pd.Index(["John", "Anne", "Beth"], name=columns[0]),
name="count",
)
tm.assert_series_equal(result, expected)
def test_value_counts_categorical_future_warning():
# GH#54775
df = pd.DataFrame({"a": [1, 2, 3]}, dtype="category")
result = df.value_counts()
expected = pd.Series(
1,
index=pd.MultiIndex.from_arrays(
[pd.Index([1, 2, 3], name="a", dtype="category")]
),
name="count",
)
tm.assert_series_equal(result, expected)
def test_value_counts_with_missing_category():
# GH-54836
df = pd.DataFrame({"a": pd.Categorical([1, 2, 4], categories=[1, 2, 3, 4])})
result = df.value_counts()
expected = pd.Series(
[1, 1, 1, 0],
index=pd.MultiIndex.from_arrays(
[pd.CategoricalIndex([1, 2, 4, 3], categories=[1, 2, 3, 4], name="a")]
),
name="count",
)
tm.assert_series_equal(result, expected)
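The semantics these tests pin down — count distinct rows, sort descending by count, optionally normalize — can be sketched with the standard library alone (illustrative, not pandas' implementation):

```python
from collections import Counter

rows = [(2, 2), (4, 0), (4, 0), (6, 0)]  # (num_legs, num_wings) per animal

counts = Counter(rows)

# Descending by count, like DataFrame.value_counts() with sort=True (default).
by_count = counts.most_common()
print(by_count)  # [((4, 0), 2), ((2, 2), 1), ((6, 0), 1)]

# normalize=True divides each count by the total number of rows.
total = sum(counts.values())
proportions = {row: c / total for row, c in counts.items()}
print(proportions[(4, 0)])  # 0.5
```

`Counter.most_common` breaks ties by insertion order, which mirrors the stable ordering the unsorted-ties tests above rely on.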
|
pandas-devREPO_NAMEpandasPATH_START.@pandas_extracted@pandas-main@pandas@tests@frame@methods@test_value_counts.py@.PATH_END.py
|
{
"filename": "test_argument_inputs.py",
"repo_name": "JLBLine/WODEN",
"repo_path": "WODEN_extracted/WODEN-master/cmake_testing/wodenpy/wodenpy_setup/test_argument_inputs.py",
"type": "Python"
}
|
from sys import path
import os
import unittest
import numpy as np
import numpy.testing as npt
code_dir = os.path.realpath(__file__)
code_dir = ('/').join(code_dir.split('/')[:-1])
# ##Code we are testing
from wodenpy.wodenpy_setup import run_setup
##some expected values
east = [-5.55600e+01, 1.77467e+02, -2.17100e+01, 9.18090e+01, 1.53700e+02,
-6.75820e+01, -1.18037e+02, -8.75160e+01, -1.95980e+02, -1.98194e+02,
-2.84457e+02, -3.88152e+02, -4.52686e+02, -4.68817e+02, -3.64524e+02,
-3.69608e+02, -5.18046e+02, -4.89943e+02, -5.75557e+02, -5.85675e+02,
-6.53467e+02, -6.85027e+02, -1.07960e+03, -1.02607e+03, -3.33758e+02,
-2.27151e+02, -1.73717e+02, -2.56952e+02, -1.95018e+02, -2.49647e+02,
-3.00696e+02, -8.89471e+02, 7.16393e+02, 2.88066e+02, 2.38310e+02,
2.27878e+02, 6.14430e+01, -5.88740e+01, 3.52240e+01, 1.01500e+00,
8.39943e+02, 1.17729e+03, 1.31361e+03, 5.56122e+02, 5.71541e+02,
4.88857e+02, 5.28598e+02, 3.81069e+02, 1.10094e+03, 1.20113e+03,
4.98629e+02, 3.01193e+02, 3.70648e+02, 5.44324e+02, 5.06793e+02,
5.23727e+02, 6.53774e+02, 4.42990e+02, -5.88010e+01, 3.70650e+01,
2.32842e+02, 2.57984e+02, 3.25688e+02, 3.26101e+02, -4.12457e+02,
-1.41898e+02, -4.80270e+02, -5.75533e+02, -6.74021e+02, -6.31403e+02,
-6.33237e+02, -5.68540e+02, -1.99981e+03, -1.77014e+03, -1.60021e+03,
-1.47317e+03, -1.36727e+03, -1.17292e+03, -1.10097e+03, -8.89290e+02,
-8.29330e+02, -3.30610e+02, -4.55360e+02, -1.75090e+02, -5.25500e+01,
7.76550e+02, 5.20820e+02, 3.27910e+02, 3.63950e+02, 8.75570e+02,
8.79780e+02, 1.36097e+03, 1.68351e+03, 1.37137e+03, 1.63127e+03,
1.22470e+03, 1.90759e+03, 2.05493e+03, 2.40004e+03, 2.44155e+03,
2.34629e+03, 2.77082e+03, 3.10564e+03, 2.87228e+03, 1.65437e+03,
1.99447e+03, 2.32742e+03, 2.23817e+03, 2.69858e+03, 2.67033e+03,
2.40567e+03, 3.02946e+03, 8.03010e+02, 1.47178e+03, 1.17792e+03,
1.67727e+03, 1.55938e+03, 1.88127e+03, 2.28548e+03, 1.99131e+03,
1.65760e+02, 5.11310e+02, 3.38490e+02, 8.05810e+02, 1.10572e+03,
9.31990e+02, 1.33634e+03, 1.66429e+03]
north = [ 124.801, -43.377, 77.005, -80.565, -259.929, -22.361, -39.873,
125.197, 189.795, 95.15, -196.181, -44.022, -15.933, 185.187,
183.098, 266.484, 632.503, 604.568, 415.008, -101.53, 142.255,
212.168, 87.018, 574.527, 1660.73, 898.129, 781.638, 892.005,
762.893, 731.709, 704.326, 1088.14, 1405.26, 618.265, 768.413,
819.724, 815.255, 810.53, 950.242, 1412.36, 990.227, 858.632,
553.372, 244.485, 359.21, 424.98, 539.194, 652.864, 151.76,
-443.25, -264.594, -48.952, -38.415, -91.152, -3.216, 75.494,
-1037.75, -835.306, -505.332, -405.05, -357.891, -366.453, -318.508,
-306.307, -409.06, -420.261, -817.324, -863.782, -638.871, -245.107,
-169.362, -257.66, 206.85, 53.59, -64.26, -194.32, -453.65,
-569.74, -698.66, -779.73, 2563.46, 2829.62, 2039.45, 2644.3,
2142.93, 2535.7, 2352.17, 1962.28, 1536.78, 2181.3, 1687.73,
2458.85, 2153.35, 1863.8, 1497.8, 1387.35, 1247.36, 1775.43,
2214.24, 1785.7, 1367.78, 1905.8, 1640.33, 1455.11, 845.05,
629.74, 1026.38, 408.81, 931.59, 572.92, 23.61, 1095.29,
-812.85, 179.53, -691.74, -478.33, -932.27, -106.66, -251.75,
-940.61, -991.57, -1486.98, -2065.83, -1408.62, -1139.39, -1975.87,
-1867.29, -1355.78]
height = [376.803, 375.005, 376.351, 375.76, 376.476,
376.175, 376.054, 377.031, 377.161,
377.024, 374.901, 375.372, 375.111, 374.752, 375.739, 375.24, 372.604, 372.907,
373.374, 375.212, 374.462, 374.236, 377.009, 374.039, 371.068, 373.694, 374.217,
373.492, 374.093, 373.797, 373.514, 370.381, 374.602, 375.134, 375.805, 375.748,
376.001, 375.17, 376.275, 373.682, 372.696, 372.122, 370.354, 372.284, 372.509,
372.967, 372.789, 374.251, 369.065, 368.328, 373.22, 374.618, 374.34, 372.812,
372.803, 372.613, 371.071, 372.251, 372.913, 374.034, 375.262, 375.221, 375.029,
375.128, 373.969, 373.503, 375.72, 375.522, 375.34, 375.544, 375.616, 375.246,
372.92, 374.44, 375.54, 376.22, 374.91, 374.77, 374.12, 374.41, 369.52,
373.37, 370.95, 373.96, 373.19, 377.7, 376.35, 374.38, 374.63, 377.05,
376.53, 380.61, 380.99, 378.97, 376.74, 376.46, 375.33, 378.58, 380.84,
377.35, 375.66, 378.43, 376.8, 376.45, 372.22, 373.7, 374.52, 371.5,
373.23, 371.79, 369.35, 373.69, 371.04, 369.24, 368.31, 366.65, 367.41,
368.26, 367.95, 365.15, 370.3, 368.75, 365.99, 367.88, 370.54, 365.39,
366.17, 365.88]
names = ['Tile051', 'Tile052', 'Tile053', 'Tile054',
'Tile055', 'Tile056', 'Tile057',
'Tile058', 'Tile071', 'Tile072', 'Tile073', 'Tile074', 'Tile075', 'Tile076',
'Tile077', 'Tile078', 'Tile101', 'Tile102', 'Tile103', 'Tile104', 'Tile105',
'Tile106', 'Tile107', 'Tile108', 'Tile111', 'Tile112', 'Tile113', 'Tile114',
'Tile115', 'Tile116', 'Tile117', 'Tile118', 'Tile121', 'Tile122', 'Tile123',
'Tile124', 'Tile125', 'Tile126', 'Tile127', 'Tile128', 'Tile131', 'Tile132',
'Tile133', 'Tile134', 'Tile135', 'Tile136', 'Tile137', 'Tile138', 'Tile141',
'Tile142', 'Tile143', 'Tile144', 'Tile145', 'Tile146', 'Tile147', 'Tile148',
'Tile151', 'Tile152', 'Tile153', 'Tile154', 'Tile155', 'Tile156', 'Tile157',
'Tile158', 'Tile161', 'Tile162', 'Tile163', 'Tile164', 'Tile165', 'Tile166',
'Tile167', 'Tile168', 'LBA1', 'LBA2', 'LBA3', 'LBA4', 'LBA5', 'LBA6', 'LBA7',
'LBA8', 'LBB1', 'LBB2', 'LBB3', 'LBB4', 'LBB5', 'LBB6', 'LBB7', 'LBB8', 'LBC1',
'LBC2', 'LBC3', 'LBC4', 'LBC5', 'LBC6', 'LBC7', 'LBC8', 'LBD1', 'LBD2', 'LBD3',
'LBD4', 'LBD5', 'LBD6', 'LBD7', 'LBD8', 'LBE1', 'LBE2', 'LBE3', 'LBE4', 'LBE5',
'LBE6', 'LBE7', 'LBE8', 'LBF1', 'LBF2', 'LBF3', 'LBF4', 'LBF5', 'LBF6', 'LBF7',
'LBF8', 'LBG1', 'LBG2', 'LBG3', 'LBG4', 'LBG5', 'LBG6', 'LBG7', 'LBG8']
dipflags = np.array([10, 43, 102, 110, 113, 120, 123, 198, 202, 204, 205, 213, 214, 221,
223, 431, 534, 616, 1028, 1048, 1059, 1074, 1092, 1099, 1101, 1122, 1148, 1163,
1179, 1215, 1217, 1268, 1350, 1386, 1429, 1572, 1599, 1613, 1700, 1740, 1744, 1811,
2156, 2311, 2323, 2398, 2418, 2496, 2569, 2629, 2652, 2657, 2667, 2703, 2719, 2745,
2816, 2842, 2864, 2997, 3102, 3247, 3260, 3265, 3308, 3343, 3384, 3420, 3436, 3480,
3561, 3618, 3633, 3663, 3700, 3772, 3791, 3823, 3880, 3897, 3969, 4003, 4089])
dipamp_indexes = np.array([ 10, 43, 102, 100, 345, 768, 2780, 3678, 4000])
dipamps = np.array([0.90376163, 0.95701027, 0.44699562, 0.95701027,
1., 0.97801661, 0.95163655, 1., 1.])
dipamps_flagged = np.array([0.0, 0.0, 0.0, 0.95701027,
1., 0.97801661, 0.95163655, 1., 1.])
##Vehicle for running tests
class Test(unittest.TestCase):
"""Test whether the args collected by the argument parser are read in
correctly, and that sanity checks on certain combinations work such that
we don't feed WODEN arguments that won't work"""
def run_parser_on_inputs(self):
"""Call `run_setup.get_parser` and run the returned parser using the inputs
in `self.inputs`. Return the recovered arguments"""
parser = run_setup.get_parser()
args = parser.parse_args(self.inputs)
return args
def assert_parser_errors(self):
"""Assert that the parser returned by `run_setup.get_parser` errors when
run with the current set of self.inputs"""
with self.assertRaises(SystemExit) as cm:
            ##call the argparser with the given inputs
args = self.run_parser_on_inputs()
def assert_check_args_errors(self):
"""Assert that `run_setup.check_args` errors when run on the parser returned
by `run_setup.get_parser` is run with the current arguments in `self.inputs`"""
##call the argparser with the given inputs
args = self.run_parser_on_inputs()
##Assert the code raisers a sys exit
with self.assertRaises(SystemExit) as cm:
run_setup.check_args(args)
return args
def run_parser_and_check_args(self):
"""Runs the parser on the current set of args in self.inputs, then runs
the output through `run_setup.check_args`"""
args = self.run_parser_on_inputs()
args = run_setup.check_args(args)
return args
def make_required_args(self):
"""These are arguments that if missing, the parser itself should fail"""
self.inputs = ['--ra0=0.0', '--dec0=-26.7', '--cat_filename=srclist.txt']
def make_minimum_required_args_without_metafits(self):
"""When not passing a metafits file, these are the minimum set of
arguments needed to pass onto WODEN. Other arguments either have
defaults or are optional"""
self.make_required_args()
self.inputs.append('--num_time_steps=16')
self.inputs.append('--time_res=8.0')
self.inputs.append('--freq_res=40e+3')
self.inputs.append('--array_layout={:s}/example_array_layout.txt'.format(code_dir))
self.inputs.append('--lowest_channel_freq=160e+6')
self.inputs.append('--date="2019-06-12T13:04:12"')
def test_parser_fails(self):
"""Tests the `run_setup.get_parser` function, which creates an
`argparse.ArgumentParser` object to collect the arguments from the
command. There are three arguments which are always required -
check things fail if they are missing"""
self.inputs = []
self.assert_parser_errors()
##Add one of the required args
self.inputs.append('--ra0=0.0')
self.assert_parser_errors()
##Add second required args
self.inputs.append('--dec0=-26.7')
self.assert_parser_errors()
##Add final required args
self.inputs.append('--cat_filename=srclist.txt')
##This should run now
args = self.run_parser_on_inputs()
def test_missing_args_without_metafits_fails(self):
"""When not providing a metafits file, a number of arguments must
be given to complete the observational settings. Can't make them
required by the argparser as only needed if metafits not given,
so we use `run_setup.check_args` to make sure everything that is needed
is present.
        Here, we make a list of every necessary input argument in a loop,
deleting a different one each time to make sure if one is missing
we get an error every time"""
##First 3 arguments in self.inputs generated by
##self.make_minimum_required_args_without_metafits are required by
        ##the parser, so the loop below skips over them
self.make_minimum_required_args_without_metafits()
num_req_args = len(self.inputs)
##Loop through the needed args, delete them from the list of inputs
##and make sure it errors for each one
for arg_ind in range(4, num_req_args):
##regenerate input args to replace anything that was deleted
self.make_minimum_required_args_without_metafits()
del[self.inputs[arg_ind]]
##Check that it errors
self.assert_check_args_errors()
def test_metafits_read_fails(self):
"""If the path to the metafits file is not real, `run_setup.check_args`
should fail"""
self.make_required_args()
##Add a bad path to the metafits option
self.inputs.append("--metafits_filename=this_is_not_a_path")
##Check it fails with the bad path
self.assert_check_args_errors()
def test_read_metafits_succeeds(self):
self.make_required_args()
##Add a path to a metafits to the options
self.inputs.append("--metafits_filename={:s}/1202815152_metafits_ppds.fits".format(code_dir))
##Run the parser and get the args
args = self.run_parser_on_inputs()
##Check the args make sense and read in options from the metafits
args = run_setup.check_args(args)
##Check the options set by the metafits are correct
self.assertEqual(169595000.0, args.lowest_channel_freq)
self.assertEqual(240, args.num_time_steps)
self.assertEqual(10000.0, args.freq_res)
self.assertEqual(0.5, args.time_res)
self.assertEqual("2018-02-16T11:18:54", args.date)
self.assertEqual("from_the_metafits", args.array_layout)
self.assertEqual(128, args.num_freq_channels)
self.assertEqual(range(1, 25), args.band_nums)
self.assertEqual(-26.703319405555554 ,args.latitude)
self.assertEqual(1280000.0 ,args.coarse_band_width)
self.assertEqual(54.75681779724241, args.gauss_ra_point)
self.assertEqual(-39.48422590285089, args.gauss_dec_point)
##Check east, north, height coords read correctly
self.assertTrue(np.allclose(np.array(east), args.east, atol=1e-5))
self.assertTrue(np.allclose(np.array(north), args.north, atol=1e-5))
self.assertTrue(np.allclose(np.array(height), args.height, atol=1e-5))
##Check the tile (antenna) names are read correctly
self.assertTrue(np.char.equal(np.array(names), args.ant_names).all())
def test_EDA2_args_work(self):
"""Check that with a minimal set of arguments, the EDA2 primary
beam is selected by `run_setup.check_args` correctly"""
self.make_minimum_required_args_without_metafits()
self.inputs.append('--primary_beam=EDA2')
args = self.run_parser_and_check_args()
self.assertEqual('EDA2', args.primary_beam)
def test_GaussBeam_args_work(self):
"""Check that the Gaussian primary beam is handled `ra.check_args`
correctly, and that optional arguments are added correctly"""
self.make_minimum_required_args_without_metafits()
self.inputs.append('--primary_beam=Gaussian')
args = self.run_parser_and_check_args()
self.assertEqual('Gaussian', args.primary_beam)
##Should be pointed at phase centre with default 20.0 FWHM and ref freq 150e+6
##as none of these have been specified
self.assertEqual(args.ra0, args.gauss_ra_point)
self.assertEqual(args.dec0, args.gauss_dec_point)
self.assertEqual(20.0, args.gauss_beam_FWHM)
self.assertEqual(150e+6, args.gauss_beam_ref_freq)
##Keep adding in optional arguments and checking they end up manifesting
self.inputs.append('--gauss_ra_point=234.0')
args = self.run_parser_and_check_args()
self.assertEqual(234.0, args.gauss_ra_point)
self.assertEqual(args.dec0, args.gauss_dec_point)
self.assertEqual(20.0, args.gauss_beam_FWHM)
self.assertEqual(150e+6, args.gauss_beam_ref_freq)
self.inputs.append('--gauss_dec_point=19.0')
args = self.run_parser_and_check_args()
self.assertEqual(234.0, args.gauss_ra_point)
self.assertEqual(19.0, args.gauss_dec_point)
self.assertEqual(20.0, args.gauss_beam_FWHM)
self.assertEqual(150e+6, args.gauss_beam_ref_freq)
self.inputs.append('--gauss_beam_FWHM=64.0')
args = self.run_parser_and_check_args()
self.assertEqual(234.0, args.gauss_ra_point)
self.assertEqual(19.0, args.gauss_dec_point)
self.assertEqual(64.0, args.gauss_beam_FWHM)
self.assertEqual(150e+6, args.gauss_beam_ref_freq)
self.inputs.append('--gauss_beam_ref_freq=291e+6')
args = self.run_parser_and_check_args()
self.assertEqual(234.0, args.gauss_ra_point)
self.assertEqual(19.0, args.gauss_dec_point)
self.assertEqual(64.0, args.gauss_beam_FWHM)
self.assertEqual(291e+6, args.gauss_beam_ref_freq)
def _check_MWA_FEE_generic(self, beam_name, beam_env_var):
"""Run the tests common to `test_MWAFEEBeam_args_work` and
`test_MWAFEEBeamInterp_args_work`. The function should error out
if certain paths to the hdf5 file that holds the spherical harmonic
information is missing, and if the delays have been specified
incorrectly"""
##We want to test that the argument fails if the environment key
##MWA_FEE_HDF5 is not set. Here we check if it is set and delete it
##if so
try:
del os.environ[beam_env_var]
except KeyError:
pass
##Trying to run an MWA_FEE simulation with no metafits, no 'MWA_FEE_HDF5'
##or --hdf5_beam_path should fail
self.make_minimum_required_args_without_metafits()
self.inputs.append(f'--primary_beam={beam_name}')
##Check the primary_beam has been selected and that `run_setup.check_args` fails
args = self.assert_check_args_errors()
self.assertEqual(beam_name, args.primary_beam)
##Set the path to the hdf5 file. Just point it at a text file.
##TODO - have `check_args` actually test reading the hdf5 file. At the
##mo just tests that the file exists
##This should still fail because we have no delays set
self.inputs.append('--hdf5_beam_path={:s}/example_array_layout.txt'.format(code_dir))
self.assert_check_args_errors()
##now set the delays to something that should produce a failure
self.inputs.append('--MWA_FEE_delays=oijasdoiasjd')
self.assert_check_args_errors()
##Set the delays to something that should work
self.inputs.append('--MWA_FEE_delays=[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]')
self.run_parser_and_check_args()
        ##Reset the arguments and try to run using a metafits file, but still
##have no path to the hdf5 file. Should fail
self.make_minimum_required_args_without_metafits()
self.inputs.append(f'--primary_beam={beam_name}')
self.inputs.append("--metafits_filename={:s}/1202815152_metafits_ppds.fits".format(code_dir))
args = self.assert_check_args_errors()
##This time, set the environment variable to the hdf5 file. Should
##pass (just use a file we know exists somewhere)
os.environ[beam_env_var] = '{:s}/example_array_layout.txt'.format(code_dir)
args = self.run_parser_and_check_args()
##Assert the delays were read in correctly
self.assertEqual("[6, 4, 2, 0, 8, 6, 4, 2, 10, 8, 6, 4, 12, 10, 8, 6]", args.MWA_FEE_delays)
##Setting the delays manually should override the delays in the
##metafits
self.inputs.append('--MWA_FEE_delays=[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]')
args = self.run_parser_and_check_args()
##Check the command line delays are there instead of metafits
self.assertEqual("[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]", args.MWA_FEE_delays)
def test_MWAFEEBeam_args_work(self):
"""Check that the MWA FEE primary beam is handled `ra.check_args`
correctly. The function should error out if certain paths to the
hdf5 file that holds the spherical harmonic information is missing,
and if the delays have been specified incorrectly"""
self._check_MWA_FEE_generic('MWA_FEE', 'MWA_FEE_HDF5')
def test_MWAFEEBeamInterp_args_work(self):
"""Check that the interpolated MWA FEE primary beam is handled `ra.check_args`
correctly. The function should error out if certain paths to the
hdf5 file that holds the spherical harmonic information is missing,
and if the delays have been specified incorrectly"""
self._check_MWA_FEE_generic('MWA_FEE_interp', 'MWA_FEE_HDF5_INTERP')
def test_MWAAnalyBeam_args_work(self):
"""Check `ra.check_args` works correctly for the analytic MWA beam.
Should fail if the delays have been specified incorrectly"""
##Trying to run an MWA_FEE simulation with no metafits, no 'MWA_FEE_HDF5'
##or --hdf5_beam_path should fail
self.make_minimum_required_args_without_metafits()
self.inputs.append(f'--primary_beam=MWA_analy')
##Check the primary_beam has been selected and that `run_setup.check_args` fails
args = self.assert_check_args_errors()
self.assertEqual('MWA_analy', args.primary_beam)
##now set the delays to something that should produce a failure
self.inputs.append('--MWA_FEE_delays=oijasdoiasjd')
self.assert_check_args_errors()
##Set the delays to something that should work
self.inputs.append('--MWA_FEE_delays=[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]')
self.run_parser_and_check_args()
        ##Reset the arguments and try to run using delays from a metafits file.
##Should pass
self.make_minimum_required_args_without_metafits()
self.inputs.append(f'--primary_beam=MWA_analy')
self.inputs.append("--metafits_filename={:s}/1202815152_metafits_ppds.fits".format(code_dir))
self.run_parser_and_check_args()
##Setting the delays manually should override the delays in the
##metafits
self.inputs.append('--MWA_FEE_delays=[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]')
args = self.run_parser_and_check_args()
##Check the command line delays are there instead of metafits
self.assertEqual("[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]", args.MWA_FEE_delays)
def test_do_autos_work(self):
"""Check that with a minimal set of arguments, the parser defaults to
not requesting autocorrelations, and check that this changes when
requested"""
##Make minimum arguments, run parser, check do_autos == False
self.make_minimum_required_args_without_metafits()
args = self.run_parser_and_check_args()
self.assertEqual(False, args.do_autos)
##Append the --do_autos flag, and check do_autos == True
self.inputs.append('--do_autos')
args = self.run_parser_and_check_args()
self.assertEqual(True, args.do_autos)
def test_sky_cropping_works(self):
"""Check that with a minimal set of arguments, the parser defaults to
cropping by sky component, and check that this changes when
requested via --sky_crop_sources"""
##Make minimum arguments, run parser, check sky_crop_components == True
self.make_minimum_required_args_without_metafits()
args = self.run_parser_and_check_args()
self.assertEqual(True, args.sky_crop_components)
##Append the --sky_crop_sources flag, and check sky_crop_components == False
self.inputs.append('--sky_crop_sources')
args = self.run_parser_and_check_args()
self.assertEqual(False, args.sky_crop_components)
def test_use_MWA_dipamps_works(self):
"""Check `ra.check_args` works correctly for the `--use_MWA_dipamps` flag.
Should fail for a number of cases"""
##Make minimum arguments
self.make_minimum_required_args_without_metafits()
##Set flag we want to test
self.inputs.append('--use_MWA_dipamps')
##Try running without selecting a FEE beam - should fail
self.assert_check_args_errors()
##Put in a FEE beam, but not a metafits file - should fail
self.inputs.append('--primary_beam=MWA_FEE')
self.assert_check_args_errors()
##now try adding a metafits file that does not have the `Dipamps` key
##this should also fail
self.inputs.append("--metafits_filename={:s}/1202815152_metafits_ppds.fits".format(code_dir))
self.assert_check_args_errors()
##Add a metafits file that has the `Dipamps` key. However input
##args have a bespoke array layout which has 8 antennas, whereas this
##metafits has 128 antennas. Should fail
self.inputs.append("--metafits_filename={:s}/1088285600_DipAmps.metafits".format(code_dir))
self.assert_check_args_errors()
##reset to remove incorrect --array_layout flags
self.make_required_args()
self.inputs.append('--use_MWA_dipamps')
self.inputs.append('--primary_beam=MWA_FEE')
self.inputs.append("--metafits_filename={:s}/1088285600_DipAmps.metafits".format(code_dir))
args = self.run_parser_and_check_args()
##check outputs are good
self.assertEqual(args.use_MWA_dipamps, True)
##Using atol=1e-8 as only stored to 8 decimal places
npt.assert_allclose(args.dipamps[dipamp_indexes], dipamps, atol=1e-8)
def test_use_MWA_dipflags_works(self):
"""Check `ra.check_args` works correctly for the `--use_MWA_dipamps` flag.
Should fail for a number of cases"""
##Make minimum arguments
self.make_minimum_required_args_without_metafits()
##Set flag we want to test
self.inputs.append('--use_MWA_dipflags')
##Try running without selecting a FEE beam - should fail
self.assert_check_args_errors()
##Put in a FEE beam, but not a metafits file - should fail
self.inputs.append('--primary_beam=MWA_FEE')
self.assert_check_args_errors()
##Do give it a metafits, but combined with the `--array_layout` flag
##this should crash as we have wrong number of tiles
self.inputs.append("--metafits_filename={:s}/1088285600_DipAmps.metafits".format(code_dir))
self.assert_check_args_errors()
#reset to remove incorrect --array_layout flags
self.make_required_args()
self.inputs.append('--use_MWA_dipflags')
self.inputs.append('--primary_beam=MWA_FEE')
##Try using a metafits file that has no dipole flagging at all
##As we haven't asked for dipole amplitudes here, we should stick
##in a warning saying we won't flag any dipoles and will run with
##perfect FEE beams (which will be way faster)
self.inputs.append("--metafits_filename={:s}/1088285600_DipAmps.metafits".format(code_dir))
args = self.run_parser_and_check_args()
self.assertEqual(args.use_MWA_dipflags, False)
##Finally run with a metafits that does have flags and read them in
self.inputs.append("--metafits_filename={:s}/1088285600_DipAmps_withflags.metafits".format(code_dir))
args = self.run_parser_and_check_args()
##Righto, so our final result should live in args.dipamps and we
##should have switched --use_MWA_dipamps to True
self.assertEqual(args.use_MWA_dipflags, True)
self.assertEqual(args.use_MWA_dipamps, True)
##Check we have a zero where we expect
npt.assert_array_equal(dipflags, np.where(args.dipamps == 0)[0])
def test_use_both_MWA_dipflags_dipamps_works(self):
"""Check `ra.check_args` works correctly for the `--use_MWA_dipflags` and
`--use_MWA_dipamps` flags. Make sure it combines the arrays"""
self.make_required_args()
self.inputs.append('--use_MWA_dipflags')
self.inputs.append('--use_MWA_dipamps')
self.inputs.append('--primary_beam=MWA_FEE')
self.inputs.append("--metafits_filename={:s}/1088285600_DipAmps_withflags.metafits".format(code_dir))
args = self.run_parser_and_check_args()
##Righto, so our final result should live in args.dipamps and we
##should have switched --use_MWA_dipamps to True
self.assertEqual(args.use_MWA_dipflags, True)
self.assertEqual(args.use_MWA_dipamps, True)
##Check we have a zero where we expect
npt.assert_array_equal(dipflags, np.where(args.dipamps == 0)[0])
npt.assert_allclose(args.dipamps[dipamp_indexes], dipamps_flagged, atol=1e-8)
def test_use_off_cardinal_dipoles(self):
"""Check `ra.check_args` works correctly for the `--use_MWA_dipflags` and
`--use_MWA_dipamps` flags. Make sure it combines the arrays"""
##First, run as normal, and assert that off cardinal dipoles are not used
self.make_minimum_required_args_without_metafits()
self.inputs.append('--primary_beam=None')
args = self.run_parser_and_check_args()
self.assertEqual(args.off_cardinal_dipoles,False)
##Next, add --off_cardinal_dipoles and check it is set to True
self.make_minimum_required_args_without_metafits()
self.inputs.append('--primary_beam=None')
self.inputs.append('--off_cardinal_dipoles')
args = self.run_parser_and_check_args()
self.assertEqual(args.off_cardinal_dipoles,True)
##Run the test
if __name__ == '__main__':
unittest.main()
|
JLBLineREPO_NAMEWODENPATH_START.@WODEN_extracted@WODEN-master@cmake_testing@wodenpy@wodenpy_setup@test_argument_inputs.py@.PATH_END.py
|
{
"filename": "_xref.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/scatter/marker/colorbar/_xref.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class XrefValidator(_plotly_utils.basevalidators.EnumeratedValidator):
def __init__(
self, plotly_name="xref", parent_name="scatter.marker.colorbar", **kwargs
):
super(XrefValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "colorbars"),
values=kwargs.pop("values", ["container", "paper"]),
**kwargs,
)
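The validator above restricts `xref` to an enumerated set of values. A minimal stand-alone sketch of that pattern (the class and names here are illustrative, not plotly's actual base class):

```python
class EnumValidator:
    """Accept a value only if it is in an allowed set (illustrative sketch)."""

    def __init__(self, name, values):
        self.name = name
        self.values = list(values)

    def validate(self, value):
        # Reject anything outside the enumerated set with a descriptive error
        if value not in self.values:
            raise ValueError(
                f"Invalid {self.name}: {value!r}; expected one of {self.values}"
            )
        return value


xref = EnumValidator("xref", ["container", "paper"])
print(xref.validate("paper"))  # a valid value passes through unchanged
```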
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@scatter@marker@colorbar@_xref.py@.PATH_END.py
|
{
"filename": "zoomtool.py",
"repo_name": "healpy/healpy",
"repo_path": "healpy_extracted/healpy-main/lib/healpy/zoomtool.py",
"type": "Python"
}
|
#
# This file is part of Healpy.
#
# Healpy is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# Healpy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Healpy; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
# For more information about Healpy, see http://code.google.com/p/healpy
#
import logging
log = logging.getLogger("healpy")
import matplotlib
from . import projaxes as PA
from . import rotator as R
import numpy as np
from ._healpy_pixel_lib import UNSEEN
from . import pixelfunc
pi = np.pi
dtor = pi / 180.0
def mollzoom(
map=None,
fig=None,
rot=None,
coord=None,
unit="",
xsize=800,
title="Mollweide view",
nest=False,
min=None,
max=None,
flip="astro",
remove_dip=False,
remove_mono=False,
gal_cut=0,
format="%g",
cmap=None,
norm=None,
hold=False,
margins=None,
sub=None,
):
"""Interactive mollweide plot with zoomed gnomview.
Parameters
----------
map : float, array-like shape (Npix,)
An array containing the map,
supports masked maps, see the `ma` function.
if None, use map with inf value (white map), useful for
overplotting
fig : a figure number.
Default: create a new figure
rot : scalar or sequence, optional
Describe the rotation to apply.
In the form (lon, lat, psi) (unit: degrees) : the point at
longitude *lon* and latitude *lat* will be at the center. An additional rotation
of angle *psi* around this direction is applied.
coord : sequence of character, optional
Either one of 'G', 'E' or 'C' (where 'E' stands for the Ecliptic, 'G' for
the Galactic, and 'C' for the Celestial or equatorial) to describe the coordinate
system of the map, or a sequence of 2 of these to rotate the map from the first
to the second coordinate system.
unit : str, optional
A text describing the unit of the data. Default: ''
xsize : int, optional
The size of the image. Default: 800
title : str, optional
The title of the plot. Default: 'Mollweide view'
nest : bool, optional
If True, ordering scheme is NESTED. Default: False (RING)
min : float, optional
The minimum range value
max : float, optional
The maximum range value
flip : {'astro', 'geo'}, optional
Defines the convention of projection: 'astro' (default, east towards left, west towards right)
or 'geo' (east towards right, west towards left)
remove_dip : bool, optional
If :const:`True`, remove the dipole+monopole
remove_mono : bool, optional
If :const:`True`, remove the monopole
gal_cut : float, scalar, optional
Symmetric galactic cut for the dipole/monopole fit.
Removes points in latitude range [-gal_cut, +gal_cut]
format : str, optional
The format of the scale label. Default: '%g'
"""
import pylab
# Ensure that the nside is valid
nside = pixelfunc.get_nside(map)
pixelfunc.check_nside(nside, nest=nest)
# create the figure (if interactive, it will open the window now)
f = pylab.figure(fig, figsize=(10.5, 5.4))
extent = (0.02, 0.25, 0.56, 0.72)
# Starting to draw : turn interactive off
wasinteractive = pylab.isinteractive()
pylab.ioff()
try:
if map is None:
map = np.zeros(12) + np.inf
map = pixelfunc.ma_to_array(map)
ax = PA.HpxMollweideAxes(
f, extent, coord=coord, rot=rot, format=format, flipconv=flip
)
f.add_axes(ax)
if remove_dip:
map = pixelfunc.remove_dipole(
map, gal_cut=gal_cut, nest=nest, copy=True
)
elif remove_mono:
map = pixelfunc.remove_monopole(
map, gal_cut=gal_cut, nest=nest, copy=True
)
ax.projmap(
map,
nest=nest,
xsize=xsize,
coord=coord,
vmin=min,
vmax=max,
cmap=cmap,
norm=norm,
)
im = ax.get_images()[0]
b = im.norm.inverse(np.linspace(0, 1, im.cmap.N + 1))
v = np.linspace(im.norm.vmin, im.norm.vmax, im.cmap.N)
if matplotlib.__version__ >= "0.91.0":
cb = f.colorbar(
ax.get_images()[0],
ax=ax,
orientation="horizontal",
shrink=0.5,
aspect=25,
ticks=PA.BoundaryLocator(),
pad=0.05,
fraction=0.1,
boundaries=b,
values=v,
)
else:
# for older matplotlib versions, no ax kwarg
cb = f.colorbar(
ax.get_images()[0],
orientation="horizontal",
shrink=0.5,
aspect=25,
ticks=PA.BoundaryLocator(),
pad=0.05,
fraction=0.1,
boundaries=b,
values=v,
)
ax.set_title(title)
ax.text(
0.86,
0.05,
ax.proj.coordsysstr,
fontsize=14,
fontweight="bold",
transform=ax.transAxes,
)
cb.ax.text(
1.05,
0.30,
unit,
fontsize=14,
fontweight="bold",
transform=cb.ax.transAxes,
ha="left",
va="center",
)
f.sca(ax)
## Gnomonic axes
# extent = (0.02,0.25,0.56,0.72)
g_xsize = 600
g_reso = 1.0
extent = (0.60, 0.04, 0.38, 0.94)
g_ax = PA.HpxGnomonicAxes(
f, extent, coord=coord, rot=rot, format=format, flipconv=flip
)
f.add_axes(g_ax)
if remove_dip:
map = pixelfunc.remove_dipole(map, gal_cut=gal_cut, nest=nest, copy=True)
elif remove_mono:
map = pixelfunc.remove_monopole(map, gal_cut=gal_cut, nest=nest, copy=True)
g_ax.projmap(
map,
nest=nest,
coord=coord,
vmin=min,
vmax=max,
xsize=g_xsize,
ysize=g_xsize,
reso=g_reso,
cmap=cmap,
norm=norm,
)
im = g_ax.get_images()[0]
b = im.norm.inverse(np.linspace(0, 1, im.cmap.N + 1))
v = np.linspace(im.norm.vmin, im.norm.vmax, im.cmap.N)
if matplotlib.__version__ >= "0.91.0":
cb = f.colorbar(
g_ax.get_images()[0],
ax=g_ax,
orientation="horizontal",
shrink=0.5,
aspect=25,
ticks=PA.BoundaryLocator(),
pad=0.08,
fraction=0.1,
boundaries=b,
values=v,
)
else:
cb = f.colorbar(
g_ax.get_images()[0],
orientation="horizontal",
shrink=0.5,
aspect=25,
ticks=PA.BoundaryLocator(),
pad=0.08,
fraction=0.1,
boundaries=b,
values=v,
)
g_ax.set_title(title)
g_ax.text(
-0.07,
0.02,
"%g '/pix, %dx%d pix"
% (
g_ax.proj.arrayinfo["reso"],
g_ax.proj.arrayinfo["xsize"],
g_ax.proj.arrayinfo["ysize"],
),
fontsize=12,
verticalalignment="bottom",
transform=g_ax.transAxes,
rotation=90,
)
g_ax.text(
-0.07,
0.8,
g_ax.proj.coordsysstr,
fontsize=14,
fontweight="bold",
rotation=90,
transform=g_ax.transAxes,
)
lon, lat = np.around(g_ax.proj.get_center(lonlat=True), g_ax._coordprec)
g_ax.text(
0.5,
-0.03,
"on (%g,%g)" % (lon, lat),
verticalalignment="center",
horizontalalignment="center",
transform=g_ax.transAxes,
)
cb.ax.text(
1.05,
0.30,
unit,
fontsize=14,
fontweight="bold",
transform=cb.ax.transAxes,
ha="left",
va="center",
)
# Add graticule info axes
grat_ax = pylab.axes([0.25, 0.02, 0.22, 0.25])
grat_ax.axis("off")
# Add help text
help_ax = pylab.axes([0.02, 0.02, 0.22, 0.25])
help_ax.axis("off")
t = help_ax.transAxes
help_ax.text(0.1, 0.8, "r/t .... zoom out/in", transform=t, va="baseline")
help_ax.text(0.1, 0.65, "p/v .... print coord/val", transform=t, va="baseline")
help_ax.text(0.1, 0.5, "c ...... go to center", transform=t, va="baseline")
help_ax.text(0.1, 0.35, "f ...... next color scale", transform=t, va="baseline")
help_ax.text(
0.1, 0.2, "k ...... save current scale", transform=t, va="baseline"
)
help_ax.text(0.1, 0.05, "g ...... toggle graticule", transform=t, va="baseline")
f.sca(g_ax)
# Set up the zoom capability
zt = ZoomTool(map, fig=f.number, nest=nest, cmap=cmap, norm=norm, coord=coord)
finally:
pylab.draw()
if wasinteractive:
pylab.ion()
def set_g_clim(vmin, vmax):
"""Set min/max value of the gnomview part of a mollzoom."""
import pylab
f = pylab.gcf()
if not hasattr(f, "zoomtool"):
raise TypeError("The current figure has no zoomtool")
f.zoomtool.save_min = vmin
f.zoomtool.save_max = vmax
f.zoomtool._range_status = 2
f.zoomtool.draw_gnom()
class ZoomTool(object):
"""A class providing zoom capability to a figure containing a Mollweide
and a Gnomonic axis.
"""
def __init__(self, m, fig=None, nest=False, cmap=None, norm=None, coord=None):
"""m: the map to be zoomed (already plotted in Mollweide view)
fig: the figure to instrument (None->gcf())
"""
import pylab
self.reso_list = [
0.05,
0.1,
0.2,
0.3,
0.5,
0.75,
1.0,
1.5,
3.0,
5.0,
10.0,
15.0,
30.0,
45.0,
60.0,
]
self._map = m
self._nest = nest
self._cmap = cmap
self._norm = norm
self._coord = coord
self._range_status = 0 # 0:normal, 1:global map min,max, 2: saved
self.save_min = self.save_max = None
self._graton = False
# find min, max of map
if isinstance(m, dict):
if len(m) == 0:
self._mapmin, self._mapmax = -1.0, 1.0
else:
self._mapmin, self._mapmax = min(m.values()), max(m.values())
else:
mgood = m[m != UNSEEN]
if mgood.size == 0:
self._mapmin, self._mapmax = -1.0, 1.0
else:
self._mapmin, self._mapmax = mgood.min(), mgood.max()
del mgood
if fig is None:
f = pylab.gcf()
else:
f = pylab.figure(fig)
self.f = f
f.zoomtool = self
(
self._moll_ax,
self._moll_cb_ax,
self._gnom_ax,
self._gnom_cb_ax,
) = f.get_axes()[:4]
self._grat_ax = f.get_axes()[4]
self._text_reso, self._text_coord, self._text_loc = self._gnom_ax.texts
self._xsize = self._gnom_ax.proj.arrayinfo["xsize"]
self._ysize = self._gnom_ax.proj.arrayinfo["ysize"]
try:
self._reso_idx = self.reso_list.index(self._gnom_ax.proj._arrayinfo["reso"])
except ValueError as e:
raise ValueError("Resolution not in %s" % self.reso_list)
(self.zoomcenter,) = self._moll_ax.plot([0], [0], "ok", mew=1, ms=15, alpha=0.1)
(self.zoomcenter2,) = self._moll_ax.plot(
[0], [0], "xr", ms=15, alpha=0.5, mew=3
)
self._text_range = self._gnom_ax.text(
-0.4,
-0.2,
"scale mode: loc",
transform=self._gnom_ax.transAxes,
va="baseline",
ha="left",
)
self.draw_gnom(0, 0)
self._connected = False
self.connect_callbacks()
def _zoom_on_click(self, ev):
import pylab
try:
ax = ev.inaxes
lon, lat = ax.get_lonlat(ev.xdata, ev.ydata)
if np.isnan(lon) or np.isnan(lat):
raise ValueError("invalid position")
val = ax.get_value(ev.xdata, ev.ydata)
self.lastval = val
self._move_zoom_center(lon, lat)
self.draw_gnom(lon, lat)
except Exception as s:
self._move_zoom_center(0, 0, False)
pylab.draw_if_interactive()
# print s
return
def _reso_on_key(self, ev):
if ev.key == "r":
self._decrease_reso()
elif ev.key == "t":
self._increase_reso()
elif ev.key == "p":
log.info("lon,lat = %.17g,%.17g", self.lon, self.lat)
elif ev.key == "c":
self._move_zoom_center(0, 0)
self.draw_gnom(0, 0)
elif ev.key == "v":
log.info("val = %.17g", self.lastval)
elif ev.key == "f":
self._range_status += 1
self._range_status %= 3
self.draw_gnom()
elif ev.key == "k":
self.save_min = self._gnom_ax.images[0].norm.vmin
self.save_max = self._gnom_ax.images[0].norm.vmax
elif ev.key == "g":
if getattr(self, "_graton", False):
self._gnom_ax.delgraticules()
self._moll_ax.delgraticules()
self._graton = False
else:
(self._g_dpar, self._g_dmer) = self._gnom_ax.graticule(
local=False
)
(self._m_dpar, self._m_dmer) = self._moll_ax.graticule()
self._graton = True
self.draw_gnom()
def _update_grat_info(self):
self._grat_ax.cla()
self._grat_ax.axis("off")
if self._graton:
a = self._grat_ax
t = a.transAxes
a.text(0.1, 0.8, "moll. grat.:", transform=t, weight="bold")
vdeg = np.floor(np.around(self._m_dpar / dtor, 10))
varcmin = (self._m_dpar / dtor - vdeg) * 60.0
a.text(0.1, 0.65, " -par: %d d %.2f '" % (vdeg, varcmin), transform=t)
vdeg = np.floor(np.around(self._m_dmer / dtor, 10))
varcmin = (self._m_dmer / dtor - vdeg) * 60.0
a.text(0.1, 0.5, " -mer: %d d %.2f '" % (vdeg, varcmin), transform=t)
a.text(0.1, 0.35, "gnom. grat.:", transform=t, weight="bold")
vdeg = np.floor(np.around(self._g_dpar / dtor, 10))
varcmin = (self._g_dpar / dtor - vdeg) * 60.0
a.text(0.1, 0.2, " -par: %d d %.2f '" % (vdeg, varcmin), transform=t)
vdeg = np.floor(np.around(self._g_dmer / dtor, 10))
varcmin = (self._g_dmer / dtor - vdeg) * 60.0
a.text(0.1, 0.05, " -mer: %d d %.2f '" % (vdeg, varcmin), transform=t)
def _increase_reso(self):
if self._reso_idx > 0:
self._reso_idx -= 1
self.draw_gnom(self.lon, self.lat)
def _decrease_reso(self):
if self._reso_idx < len(self.reso_list) - 1:
self._reso_idx += 1
self.draw_gnom(self.lon, self.lat)
def get_reso(self):
return self.reso_list[self._reso_idx]
def connect_callbacks(self):
if not self._connected:
self._callbacks_id = []
cid = self.f.canvas.mpl_connect("button_press_event", self._zoom_on_click)
self._callbacks_id.append(cid)
cid = self.f.canvas.mpl_connect("key_press_event", self._reso_on_key)
self._callbacks_id.append(cid)
self._connected = True
def disconnect_callbacks(self):
if self._connected:
for cid in self._callbacks_id:
self.f.canvas.mpl_disconnect(cid)
self._connected = False
def _move_zoom_center(self, lon, lat, visible=True):
# Move the zoom center marker.
if self.zoomcenter:
x, y = self._moll_ax.proj.ang2xy(lon, lat, lonlat=True)
self.zoomcenter.set_xdata([x])
self.zoomcenter.set_ydata([y])
self.zoomcenter.set_visible(visible)
if self.zoomcenter2:
x, y = self._moll_ax.proj.ang2xy(lon, lat, lonlat=True)
self.zoomcenter2.set_xdata([x])
self.zoomcenter2.set_ydata([y])
self.zoomcenter2.set_visible(visible)
def draw_gnom(self, lon=None, lat=None):
import pylab
wasinteractive = pylab.isinteractive()
pylab.ioff()
try:
# modify rot of the gnom_ax
if lon is None:
lon = self._lon
else:
self._lon = lon
if lat is None:
lat = self._lat
else:
self._lat = lat
self._gnom_ax.proj.rotator._rots.pop()
self._gnom_ax.proj.rotator._rots.append(
R.normalise_rot((lon, lat), deg=True)
)
self._gnom_ax.proj.rotator._update_matrix()
if self._range_status == 0:
vmin = vmax = None
elif self._range_status == 1:
vmin, vmax = self._mapmin, self._mapmax
elif self._range_status == 2:
vmin, vmax = self.save_min, self.save_max
self._gnom_ax.images.pop()
self._gnom_ax.projmap(
self._map,
nest=self._nest,
coord=self._coord,
vmin=vmin,
vmax=vmax,
xsize=self._xsize,
ysize=self._ysize,
reso=self.get_reso(),
cmap=self._cmap,
norm=self._norm,
)
if hasattr(self._gnom_ax, "_scatter_data"):
l = [x for x in self._gnom_ax._scatter_data]
# print l
for sd in l:
s, input_data = sd
# print input_data
self._gnom_ax.collections.remove(s)
self._gnom_ax._scatter_data.remove(sd)
theta, phi, args, kwds = input_data
self._gnom_ax.projscatter(theta, phi=phi, *args, **kwds)
del l
if self._graton:
self._gnom_ax.delgraticules()
(self._g_dpar, self._g_dmer) = self._gnom_ax.graticule(
local=False
)
self._gnom_cb_ax.cla()
im = self._gnom_ax.images[0]
if matplotlib.__version__ >= "0.91.0":
cb = self.f.colorbar(
im,
ax=self._gnom_ax,
cax=self._gnom_cb_ax,
orientation="horizontal",
ticks=PA.BoundaryLocator(),
)
else:
cb = self.f.colorbar(
im,
cax=self._gnom_cb_ax,
orientation="horizontal",
ticks=PA.BoundaryLocator(),
)
lon, lat = np.around(
self._gnom_ax.proj.get_center(lonlat=True), self._gnom_ax._coordprec
)
self._text_loc.set_text("on (%g,%g)" % (lon, lat))
reso = self._gnom_ax.proj.arrayinfo["reso"]
xsize = self._gnom_ax.proj.arrayinfo["xsize"]
ysize = self._gnom_ax.proj.arrayinfo["ysize"]
self._text_reso.set_text("%g '/pix, %dx%d pix" % (reso, xsize, ysize))
mode = ["loc", "map", "sav"][self._range_status]
self._text_range.set_text("scale mode: %s" % mode)
self.lon, self.lat = lon, lat
self._update_grat_info()
except Exception as e:
pass # print e
finally:
if wasinteractive:
pylab.ion()
pylab.draw()
pylab.show()
|
healpyREPO_NAMEhealpyPATH_START.@healpy_extracted@healpy-main@lib@healpy@zoomtool.py@.PATH_END.py
|
{
"filename": "_weightsrc.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/densitymapbox/hoverlabel/font/_weightsrc.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class WeightsrcValidator(_plotly_utils.basevalidators.SrcValidator):
def __init__(
self,
plotly_name="weightsrc",
parent_name="densitymapbox.hoverlabel.font",
**kwargs,
):
super(WeightsrcValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "none"),
**kwargs,
)
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@densitymapbox@hoverlabel@font@_weightsrc.py@.PATH_END.py
|
{
"filename": "model_trace.py",
"repo_name": "simonsobs/nextline-rdb",
"repo_path": "nextline-rdb_extracted/nextline-rdb-main/src/nextline_rdb/alembic/models/rev_6e3cf7d9b6bf/model_trace.py",
"type": "Python"
}
|
from datetime import datetime
from typing import TYPE_CHECKING
from sqlalchemy import ForeignKey, UniqueConstraint
from sqlalchemy.orm import Mapped, mapped_column, relationship
from .base import Model
if TYPE_CHECKING:
from .model_prompt import Prompt
from .model_run import Run
from .model_stdout import Stdout
class Trace(Model):
__tablename__ = "trace"
id: Mapped[int] = mapped_column(primary_key=True, index=True)
run_no: Mapped[int]
trace_no: Mapped[int]
state: Mapped[str]
thread_no: Mapped[int]
task_no: Mapped[int | None]
started_at: Mapped[datetime]
ended_at: Mapped[datetime | None]
run_id: Mapped[int] = mapped_column(ForeignKey('run.id'))
run: Mapped['Run'] = relationship(back_populates='traces')
prompts: Mapped[list["Prompt"]] = relationship(back_populates="trace")
stdouts: Mapped[list["Stdout"]] = relationship(back_populates="trace")
__table_args__ = (UniqueConstraint("run_no", "trace_no"),)
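The `UniqueConstraint("run_no", "trace_no")` above compiles to a SQL `UNIQUE` clause, so a second row with the same `(run_no, trace_no)` pair is rejected at the database level. A stdlib `sqlite3` sketch of the same guarantee (schema simplified to the two constrained columns):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE trace ("
    " id INTEGER PRIMARY KEY,"
    " run_no INTEGER NOT NULL,"
    " trace_no INTEGER NOT NULL,"
    " UNIQUE (run_no, trace_no))"
)
con.execute("INSERT INTO trace (run_no, trace_no) VALUES (1, 1)")
con.execute("INSERT INTO trace (run_no, trace_no) VALUES (1, 2)")  # ok: new pair

duplicate_rejected = False
try:
    # Same (run_no, trace_no) pair again: violates the UNIQUE constraint
    con.execute("INSERT INTO trace (run_no, trace_no) VALUES (1, 1)")
except sqlite3.IntegrityError:
    duplicate_rejected = True
print(duplicate_rejected)  # → True
```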
|
simonsobsREPO_NAMEnextline-rdbPATH_START.@nextline-rdb_extracted@nextline-rdb-main@src@nextline_rdb@alembic@models@rev_6e3cf7d9b6bf@model_trace.py@.PATH_END.py
|
{
"filename": "__init__.py",
"repo_name": "NuSpaceSim/nuSpaceSim",
"repo_path": "nuSpaceSim_extracted/nuSpaceSim-main/src/nuspacesim/simulation/taus/__init__.py",
"type": "Python"
}
|
# The Clear BSD License
#
# Copyright (c) 2021 Alexander Reustle and the NuSpaceSim Team
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted (subject to the limitations in the disclaimer
# below) provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# * Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from this
# software without specific prior written permission.
#
# NO EXPRESS OR IMPLIED LICENSES TO ANY PARTY'S PATENT RIGHTS ARE GRANTED BY
# THIS LICENSE. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
# CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
# PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
# BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
# IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
r"""NuSpaceSim taus class and routines.
.. _taus:
****
Taus
****
.. autosummary::
:toctree:
:recursive:
Taus
"""
__all__ = ["Taus", "show_plot", "local_plots"]
from . import local_plots
from .taus import Taus, show_plot
|
NuSpaceSimREPO_NAMEnuSpaceSimPATH_START.@nuSpaceSim_extracted@nuSpaceSim-main@src@nuspacesim@simulation@taus@__init__.py@.PATH_END.py
|
{
"filename": "test_fitting_smah_helpers.py",
"repo_name": "ArgonneCPAC/diffstar",
"repo_path": "diffstar_extracted/diffstar-main/diffstar/fitting_helpers/tests/test_fitting_smah_helpers.py",
"type": "Python"
}
|
"""
"""
import numpy as np
from ...defaults import DEFAULT_MS_PDICT, DEFAULT_Q_PDICT
from ...utils import _jax_get_dt_array
from ..fit_smah_helpers import get_header, get_loss_data_fixed_hi
DIFFMAH_K = 3.5
def test_get_header_colnames_agree_with_model_param_names():
header = get_header()
assert header[0] == "#"
colnames = header[1:].strip().split()
assert colnames[0] == "halo_id"
u_ms_colnames_from_header = colnames[1:6]
ms_colnames_from_header = [s[2:] for s in u_ms_colnames_from_header]
assert ms_colnames_from_header == list(DEFAULT_MS_PDICT.keys())
u_q_colnames_from_header = colnames[6:10]
q_colnames_from_header = [s[2:] for s in u_q_colnames_from_header]
assert q_colnames_from_header == list(DEFAULT_Q_PDICT.keys())
assert colnames[10:] == ["loss", "success"]
def test_get_loss_data_fixed_hi():
t_sim = np.linspace(0.1, 13.8, 100)
dt_sim = _jax_get_dt_array(t_sim)
sfrh = np.random.uniform(0, 10, t_sim.size)
smh = np.cumsum(dt_sim * sfrh) * 1e9
log_smah_sim = np.log10(smh)
logmp = 12.0
logtc, early, late = 0.1, 2.0, 1.0
mah_params = logtc, DIFFMAH_K, early, late
p_init, loss_data = get_loss_data_fixed_hi(
t_sim, dt_sim, sfrh, log_smah_sim, logmp, mah_params
)
|
ArgonneCPACREPO_NAMEdiffstarPATH_START.@diffstar_extracted@diffstar-main@diffstar@fitting_helpers@tests@test_fitting_smah_helpers.py@.PATH_END.py
|
{
"filename": "plot_disk.py",
"repo_name": "gammapy/gammapy",
"repo_path": "gammapy_extracted/gammapy-main/examples/models/spatial/plot_disk.py",
"type": "Python"
}
|
r"""
.. _disk-spatial-model:
Disk spatial model
==================
This is a spatial model parametrising a disk.
By default, the model is symmetric, i.e. a disk:
.. math::
\phi(lon, lat) = \frac{1}{2 \pi (1 - \cos{r_0}) } \cdot
\begin{cases}
1 & \text{for } \theta \leq r_0 \\
0 & \text{for } \theta > r_0
\end{cases}
where :math:`\theta` is the sky separation. To improve fit convergence of the
model, the sharp edge is smoothed using `~scipy.special.erf`.
In case an eccentricity (`e`) and rotation angle (:math:`\phi`) are passed,
then the model is an elongated disk (i.e. an ellipse), with a major semiaxis of length :math:`r_0`
and position angle :math:`\phi` (increasing counter-clockwise from the North direction).
The model is defined on the celestial sphere, with a normalization defined by:
.. math::
\int_{4\pi}\phi(\text{lon}, \text{lat}) \,d\Omega = 1\,.
"""
# %%
# Example plot
# ------------
# Here is an example plot of the model:
import numpy as np
from astropy.coordinates import Angle
from gammapy.modeling.models import (
DiskSpatialModel,
Models,
PowerLawSpectralModel,
SkyModel,
)
phi = Angle("30 deg")
model = DiskSpatialModel(
lon_0="2 deg",
lat_0="2 deg",
r_0="1 deg",
e=0.8,
phi=phi,
edge_width=0.1,
frame="galactic",
)
ax = model.plot(add_cbar=True)
# illustrate size parameter
region = model.to_region().to_pixel(ax.wcs)
artist = region.as_artist(facecolor="none", edgecolor="red")
ax.add_artist(artist)
transform = ax.get_transform("galactic")
ax.scatter(2, 2, transform=transform, s=20, edgecolor="red", facecolor="red")
ax.text(1.7, 1.85, r"$(l_0, b_0)$", transform=transform, ha="center")
ax.plot([2, 2 + np.sin(phi)], [2, 2 + np.cos(phi)], color="r", transform=transform)
ax.vlines(x=2, color="r", linestyle="--", transform=transform, ymin=0, ymax=5)
ax.text(2.15, 2.3, r"$\phi$", transform=transform)
# %%
# This plot illustrates the definition of the edge parameter:
import numpy as np
from astropy import units as u
from astropy.visualization import quantity_support
import matplotlib.pyplot as plt
from gammapy.modeling.models import DiskSpatialModel
lons = np.linspace(0, 0.3, 500) * u.deg
r_0, edge_width = 0.2 * u.deg, 0.5
disk = DiskSpatialModel(lon_0="0 deg", lat_0="0 deg", r_0=r_0, edge_width=edge_width)
profile = disk(lons, 0 * u.deg)
plt.plot(lons, profile / profile.max(), alpha=0.5)
plt.xlabel("Radius (deg)")
plt.ylabel("Profile (A.U.)")
edge_min, edge_max = r_0 * (1 - edge_width / 2.0), r_0 * (1 + edge_width / 2.0)
with quantity_support():
plt.vlines([edge_min, edge_max], 0, 1, linestyles=["--"], color="k")
plt.annotate(
"",
xy=(edge_min, 0.5),
xytext=(edge_min + r_0 * edge_width, 0.5),
arrowprops=dict(arrowstyle="<->", lw=2),
)
plt.text(0.2, 0.53, "Edge width", ha="center", size=12)
margin = 0.02 * u.deg
plt.hlines(
[0.95], edge_min - margin, edge_min + margin, linestyles=["-"], color="k"
)
plt.text(edge_min + margin, 0.95, "95%", size=12, va="center")
plt.hlines(
[0.05], edge_max - margin, edge_max + margin, linestyles=["-"], color="k"
)
plt.text(edge_max - margin, 0.05, "5%", size=12, va="center", ha="right")
plt.show()
# %%
# YAML representation
# -------------------
# Here is an example YAML file using the model:
pwl = PowerLawSpectralModel()
disk = DiskSpatialModel()
model = SkyModel(spectral_model=pwl, spatial_model=disk, name="pwl-disk-model")
models = Models([model])
print(models.to_yaml())
|
gammapyREPO_NAMEgammapyPATH_START.@gammapy_extracted@gammapy-main@examples@models@spatial@plot_disk.py@.PATH_END.py
|
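The 95%/5% construction in the plot above can be illustrated without gammapy: the sketch below is a toy tanh-smoothed disk profile (an illustrative assumption, not gammapy's actual `DiskSpatialModel` internals) that is ~1 well inside `r_0`, ~0 well outside, and rolls off over a region of fractional width `edge_width`.

```python
import numpy as np

def disk_profile(r, r_0=0.2, edge_width=0.5):
    """Toy disk profile: ~1 well inside r_0, ~0 well outside,
    with a smooth roll-off of fractional width edge_width."""
    # Half-width of the transition region, in the same units as r.
    w = r_0 * edge_width / 2.0
    return 0.5 * (1.0 - np.tanh((r - r_0) / (w / 2.0)))

r = np.linspace(0.0, 0.4, 500)
profile = disk_profile(r)  # monotonically decreasing, 0.5 exactly at r_0
```

Plotting `profile` against `r` reproduces the qualitative shape of the figure above: a plateau, a smooth edge bracketing `r_0`, and a tail toward zero.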
{
"filename": "CODE_OF_CONDUCT.md",
"repo_name": "scikit-image/scikit-image",
"repo_path": "scikit-image_extracted/scikit-image-main/CODE_OF_CONDUCT.md",
"type": "Markdown"
}
|
[scikit-image Code of Conduct](doc/source/about/code_of_conduct.md)
|
scikit-imageREPO_NAMEscikit-imagePATH_START.@scikit-image_extracted@scikit-image-main@CODE_OF_CONDUCT.md@.PATH_END.py
|
{
"filename": "test_point_source_rendering.py",
"repo_name": "lenstronomy/lenstronomy",
"repo_path": "lenstronomy_extracted/lenstronomy-main/test/test_ImSim/test_Numerics/test_point_source_rendering.py",
"type": "Python"
}
|
from lenstronomy.ImSim.Numerics.point_source_rendering import PointSourceRendering
from lenstronomy.Data.pixel_grid import PixelGrid
from lenstronomy.Data.psf import PSF
import numpy as np
import numpy.testing as npt
import pytest
import unittest
class TestPointSourceRendering(object):
def setup_method(self):
Mpix2coord = np.array([[1, 0], [0, 1]])
kwargs_grid = {
"ra_at_xy_0": 0,
"dec_at_xy_0": 0,
"transform_pix2angle": Mpix2coord,
"nx": 10,
"ny": 10,
}
pixel_grid = PixelGrid(**kwargs_grid)
kernel = np.zeros((5, 5))
kernel[2, 2] = 1
kwargs_psf = {
"kernel_point_source": kernel,
"psf_type": "PIXEL",
"psf_error_map": np.ones_like(kernel) * kernel**2,
}
psf_class = PSF(**kwargs_psf)
self._ps_rendering = PointSourceRendering(
pixel_grid, supersampling_factor=1, psf=psf_class
)
def test_psf_error_map(self):
ra_pos, dec_pos = [5], [5]
data = np.zeros((10, 10))
image = self._ps_rendering.psf_error_map(
ra_pos, dec_pos, amp=1, data=data, fix_psf_error_map=False
)
npt.assert_almost_equal(np.sum(image), 0, decimal=10)
image = self._ps_rendering.psf_error_map(
ra_pos, dec_pos, amp=1, data=data, fix_psf_error_map=True
)
npt.assert_almost_equal(np.sum(image), 1, decimal=10)
ra_pos, dec_pos = [50], [50]
data = np.zeros((10, 10))
image = self._ps_rendering.psf_error_map(
ra_pos, dec_pos, amp=1, data=data, fix_psf_error_map=False
)
npt.assert_almost_equal(np.sum(image), 0, decimal=10)
def test_point_source_rendering(self):
amp = [1, 1]
ra_pos, dec_pos = [0, 1], [1, 0]
model = self._ps_rendering.point_source_rendering(ra_pos, dec_pos, amp)
npt.assert_almost_equal(np.sum(model), 2, decimal=8)
class TestRaise(unittest.TestCase):
def test_raise(self):
Mpix2coord = np.array([[1, 0], [0, 1]])
kwargs_grid = {
"ra_at_xy_0": 0,
"dec_at_xy_0": 0,
"transform_pix2angle": Mpix2coord,
"nx": 10,
"ny": 10,
}
pixel_grid = PixelGrid(**kwargs_grid)
kernel = np.zeros((5, 5))
kernel[2, 2] = 1
kwargs_psf = {
"kernel_point_source": kernel,
"psf_type": "PIXEL",
"psf_error_map": np.ones_like(kernel),
}
psf_class = PSF(**kwargs_psf)
self._ps_rendering = PointSourceRendering(
pixel_grid, supersampling_factor=1, psf=psf_class
)
with self.assertRaises(ValueError):
self._ps_rendering.point_source_rendering(
ra_pos=[1, 1], dec_pos=[0, 1], amp=[1]
)
if __name__ == "__main__":
pytest.main()
|
lenstronomyREPO_NAMElenstronomyPATH_START.@lenstronomy_extracted@lenstronomy-main@test@test_ImSim@test_Numerics@test_point_source_rendering.py@.PATH_END.py
|
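The delta-function PSF test above boils down to stamping a kernel at integer pixel positions and checking that flux is conserved on the image (and lost off it). The sketch below is a self-contained numpy version of that idea; `render_point_sources` is a hypothetical helper for illustration, not lenstronomy's API.

```python
import numpy as np

def render_point_sources(nx, ny, kernel, positions, amps):
    """Stamp an odd-sized `kernel`, centered on each integer (x, y)
    pixel position and scaled by the matching amplitude, into an image."""
    image = np.zeros((ny, nx))
    k = kernel.shape[0] // 2
    for (x, y), amp in zip(positions, amps):
        for dy in range(-k, k + 1):
            for dx in range(-k, k + 1):
                xx, yy = x + dx, y + dy
                # Flux falling outside the image is simply dropped.
                if 0 <= xx < nx and 0 <= yy < ny:
                    image[yy, xx] += amp * kernel[dy + k, dx + k]
    return image

kernel = np.zeros((5, 5))
kernel[2, 2] = 1.0  # delta-function PSF, as in the test above
img = render_point_sources(10, 10, kernel, [(0, 1), (1, 0)], [1.0, 1.0])
```

With a delta-function kernel, the total image flux equals the sum of the amplitudes whenever all sources land on the grid, mirroring the `npt.assert_almost_equal(np.sum(model), 2)` check in the test.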
{
"filename": "_layout.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/validators/_layout.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class LayoutValidator(_plotly_utils.basevalidators.CompoundValidator):
def __init__(self, plotly_name="layout", parent_name="", **kwargs):
super(LayoutValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
data_class_str=kwargs.pop("data_class_str", "Layout"),
data_docs=kwargs.pop(
"data_docs",
"""
activeshape
:class:`plotly.graph_objects.layout.Activeshape
` instance or dict with compatible properties
angularaxis
:class:`plotly.graph_objects.layout.AngularAxis
` instance or dict with compatible properties
annotations
A tuple of
:class:`plotly.graph_objects.layout.Annotation`
instances or dicts with compatible properties
annotationdefaults
When used in a template (as
layout.template.layout.annotationdefaults),
sets the default property values to use for
elements of layout.annotations
autosize
Determines whether or not a layout width or
height that has been left undefined by the user
is initialized on each relayout. Note that,
regardless of this attribute, an undefined
layout width or height is always initialized on
the first call to plot.
autotypenumbers
Using "strict" a numeric string in trace data
is not converted to a number. Using *convert
types* a numeric string in trace data may be
treated as a number during automatic axis
`type` detection. This is the default value;
however it could be overridden for individual
axes.
bargap
Sets the gap (in plot fraction) between bars of
adjacent location coordinates.
bargroupgap
Sets the gap (in plot fraction) between bars of
the same location coordinate.
barmode
Determines how bars at the same location
coordinate are displayed on the graph. With
"stack", the bars are stacked on top of one
another With "relative", the bars are stacked
on top of one another, with negative values
below the axis, positive values above With
"group", the bars are plotted next to one
another centered around the shared location.
With "overlay", the bars are plotted over one
                another; you might need to set "opacity" to see
multiple bars.
barnorm
Sets the normalization for bar traces on the
graph. With "fraction", the value of each bar
is divided by the sum of all values at that
location coordinate. "percent" is the same but
multiplied by 100 to show percentages.
boxgap
Sets the gap (in plot fraction) between boxes
of adjacent location coordinates. Has no effect
on traces that have "width" set.
boxgroupgap
Sets the gap (in plot fraction) between boxes
of the same location coordinate. Has no effect
on traces that have "width" set.
boxmode
Determines how boxes at the same location
coordinate are displayed on the graph. If
"group", the boxes are plotted next to one
another centered around the shared location. If
"overlay", the boxes are plotted over one
another, you might need to set "opacity" to see
                multiple boxes. Has no effect on traces
that have "width" set.
calendar
Sets the default calendar system to use for
interpreting and displaying dates throughout
the plot.
clickmode
Determines the mode of single click
interactions. "event" is the default value and
emits the `plotly_click` event. In addition
this mode emits the `plotly_selected` event in
drag modes "lasso" and "select", but with no
event data attached (kept for compatibility
reasons). The "select" flag enables selecting
single data points via click. This mode also
supports persistent selections, meaning that
pressing Shift while clicking, adds to /
subtracts from an existing selection. "select"
with `hovermode`: "x" can be confusing,
consider explicitly setting `hovermode`:
"closest" when using this feature. Selection
events are sent accordingly as long as "event"
flag is set as well. When the "event" flag is
missing, `plotly_click` and `plotly_selected`
events are not fired.
coloraxis
:class:`plotly.graph_objects.layout.Coloraxis`
instance or dict with compatible properties
colorscale
:class:`plotly.graph_objects.layout.Colorscale`
instance or dict with compatible properties
colorway
Sets the default trace colors.
computed
Placeholder for exporting automargin-impacting
values namely `margin.t`, `margin.b`,
`margin.l` and `margin.r` in "full-json" mode.
datarevision
If provided, a changed value tells
`Plotly.react` that one or more data arrays has
changed. This way you can modify arrays in-
place rather than making a complete new copy
for an incremental change. If NOT provided,
`Plotly.react` assumes that data arrays are
being treated as immutable, thus any data array
with a different identity from its predecessor
contains new data.
direction
Legacy polar charts are deprecated! Please
switch to "polar" subplots. Sets the direction
corresponding to positive angles in legacy
polar charts.
dragmode
Determines the mode of drag interactions.
"select" and "lasso" apply only to scatter
traces with markers or text. "orbit" and
"turntable" apply only to 3D scenes.
editrevision
Controls persistence of user-driven changes in
`editable: true` configuration, other than
trace names and axis titles. Defaults to
`layout.uirevision`.
extendfunnelareacolors
If `true`, the funnelarea slice colors (whether
given by `funnelareacolorway` or inherited from
`colorway`) will be extended to three times its
original length by first repeating every color
20% lighter then each color 20% darker. This is
intended to reduce the likelihood of reusing
the same color when you have many slices, but
you can set `false` to disable. Colors provided
in the trace, using `marker.colors`, are never
extended.
extendpiecolors
If `true`, the pie slice colors (whether given
by `piecolorway` or inherited from `colorway`)
will be extended to three times its original
length by first repeating every color 20%
lighter then each color 20% darker. This is
intended to reduce the likelihood of reusing
the same color when you have many slices, but
you can set `false` to disable. Colors provided
in the trace, using `marker.colors`, are never
extended.
extendsunburstcolors
If `true`, the sunburst slice colors (whether
given by `sunburstcolorway` or inherited from
`colorway`) will be extended to three times its
original length by first repeating every color
20% lighter then each color 20% darker. This is
intended to reduce the likelihood of reusing
the same color when you have many slices, but
you can set `false` to disable. Colors provided
in the trace, using `marker.colors`, are never
extended.
extendtreemapcolors
If `true`, the treemap slice colors (whether
given by `treemapcolorway` or inherited from
`colorway`) will be extended to three times its
original length by first repeating every color
20% lighter then each color 20% darker. This is
intended to reduce the likelihood of reusing
the same color when you have many slices, but
you can set `false` to disable. Colors provided
in the trace, using `marker.colors`, are never
extended.
font
Sets the global font. Note that fonts used in
traces and other layout components inherit from
the global font.
funnelareacolorway
Sets the default funnelarea slice colors.
Defaults to the main `colorway` used for trace
colors. If you specify a new list here it can
still be extended with lighter and darker
colors, see `extendfunnelareacolors`.
funnelgap
Sets the gap (in plot fraction) between bars of
adjacent location coordinates.
funnelgroupgap
Sets the gap (in plot fraction) between bars of
the same location coordinate.
funnelmode
Determines how bars at the same location
coordinate are displayed on the graph. With
"stack", the bars are stacked on top of one
another With "group", the bars are plotted next
to one another centered around the shared
location. With "overlay", the bars are plotted
                over one another; you might need to set
"opacity" to see multiple bars.
geo
:class:`plotly.graph_objects.layout.Geo`
instance or dict with compatible properties
grid
:class:`plotly.graph_objects.layout.Grid`
instance or dict with compatible properties
height
Sets the plot's height (in px).
hiddenlabels
hiddenlabels is the funnelarea & pie chart
analog of visible:'legendonly' but it can
contain many labels, and can simultaneously
hide slices from several pies/funnelarea charts
hiddenlabelssrc
Sets the source reference on Chart Studio Cloud
for hiddenlabels .
hidesources
Determines whether or not a text link citing
the data source is placed at the bottom-right
                corner of the figure. This has an effect only on
graphs that have been generated via forked
graphs from the Chart Studio Cloud (at
https://chart-studio.plotly.com or on-premise).
hoverdistance
Sets the default distance (in pixels) to look
for data to add hover labels (-1 means no
cutoff, 0 means no looking for data). This is
only a real distance for hovering on point-like
objects, like scatter points. For area-like
objects (bars, scatter fills, etc) hovering is
on inside the area and off outside, but these
objects will not supersede hover on point-like
objects in case of conflict.
hoverlabel
:class:`plotly.graph_objects.layout.Hoverlabel`
instance or dict with compatible properties
hovermode
Determines the mode of hover interactions. If
"closest", a single hoverlabel will appear for
the "closest" point within the `hoverdistance`.
If "x" (or "y"), multiple hoverlabels will
appear for multiple points at the "closest" x-
(or y-) coordinate within the `hoverdistance`,
with the caveat that no more than one
hoverlabel will appear per trace. If *x
unified* (or *y unified*), a single hoverlabel
                will appear for multiple points at the closest x-
(or y-) coordinate within the `hoverdistance`
with the caveat that no more than one
hoverlabel will appear per trace. In this mode,
spikelines are enabled by default perpendicular
to the specified axis. If false, hover
interactions are disabled. If `clickmode`
includes the "select" flag, `hovermode`
defaults to "closest". If `clickmode` lacks the
"select" flag, it defaults to "x" or "y"
(depending on the trace's `orientation` value)
for plots based on cartesian coordinates. For
anything else the default value is "closest".
images
A tuple of
:class:`plotly.graph_objects.layout.Image`
instances or dicts with compatible properties
imagedefaults
When used in a template (as
layout.template.layout.imagedefaults), sets the
default property values to use for elements of
layout.images
legend
:class:`plotly.graph_objects.layout.Legend`
instance or dict with compatible properties
mapbox
:class:`plotly.graph_objects.layout.Mapbox`
instance or dict with compatible properties
margin
:class:`plotly.graph_objects.layout.Margin`
instance or dict with compatible properties
meta
Assigns extra meta information that can be used
in various `text` attributes. Attributes such
as the graph, axis and colorbar `title.text`,
annotation `text` `trace.name` in legend items,
`rangeselector`, `updatemenus` and `sliders`
`label` text all support `meta`. One can access
`meta` fields using template strings:
`%{meta[i]}` where `i` is the index of the
`meta` item in question. `meta` can also be an
                object, for example `{key: value}`, which can be
                accessed as %{meta[key]}.
metasrc
Sets the source reference on Chart Studio Cloud
for meta .
modebar
:class:`plotly.graph_objects.layout.Modebar`
instance or dict with compatible properties
newshape
:class:`plotly.graph_objects.layout.Newshape`
instance or dict with compatible properties
orientation
Legacy polar charts are deprecated! Please
switch to "polar" subplots. Rotates the entire
polar by the given angle in legacy polar
charts.
paper_bgcolor
Sets the background color of the paper where
the graph is drawn.
piecolorway
Sets the default pie slice colors. Defaults to
the main `colorway` used for trace colors. If
you specify a new list here it can still be
extended with lighter and darker colors, see
`extendpiecolors`.
plot_bgcolor
Sets the background color of the plotting area
in-between x and y axes.
polar
:class:`plotly.graph_objects.layout.Polar`
instance or dict with compatible properties
radialaxis
:class:`plotly.graph_objects.layout.RadialAxis`
instance or dict with compatible properties
scene
:class:`plotly.graph_objects.layout.Scene`
instance or dict with compatible properties
selectdirection
When `dragmode` is set to "select", this limits
the selection of the drag to horizontal,
vertical or diagonal. "h" only allows
horizontal selection, "v" only vertical, "d"
only diagonal and "any" sets no limit.
selectionrevision
Controls persistence of user-driven changes in
selected points from all traces.
separators
Sets the decimal and thousand separators. For
example, *. * puts a '.' before decimals and a
space between thousands. In English locales,
dflt is ".," but other locales may alter this
default.
shapes
A tuple of
:class:`plotly.graph_objects.layout.Shape`
instances or dicts with compatible properties
shapedefaults
When used in a template (as
layout.template.layout.shapedefaults), sets the
default property values to use for elements of
layout.shapes
showlegend
Determines whether or not a legend is drawn.
Default is `true` if there is a trace to show
and any of these: a) Two or more traces would
by default be shown in the legend. b) One pie
trace is shown in the legend. c) One trace is
explicitly given with `showlegend: true`.
sliders
A tuple of
:class:`plotly.graph_objects.layout.Slider`
instances or dicts with compatible properties
sliderdefaults
When used in a template (as
layout.template.layout.sliderdefaults), sets
the default property values to use for elements
of layout.sliders
spikedistance
Sets the default distance (in pixels) to look
for data to draw spikelines to (-1 means no
cutoff, 0 means no looking for data). As with
hoverdistance, distance does not apply to area-
like objects. In addition, some objects can be
hovered on but will not generate spikelines,
such as scatter fills.
sunburstcolorway
Sets the default sunburst slice colors.
Defaults to the main `colorway` used for trace
colors. If you specify a new list here it can
still be extended with lighter and darker
colors, see `extendsunburstcolors`.
template
Default attributes to be applied to the plot.
This should be a dict with format: `{'layout':
layoutTemplate, 'data': {trace_type:
[traceTemplate, ...], ...}}` where
`layoutTemplate` is a dict matching the
structure of `figure.layout` and
`traceTemplate` is a dict matching the
structure of the trace with type `trace_type`
(e.g. 'scatter'). Alternatively, this may be
specified as an instance of
plotly.graph_objs.layout.Template. Trace
templates are applied cyclically to traces of
each type. Container arrays (eg `annotations`)
have special handling: An object ending in
`defaults` (eg `annotationdefaults`) is applied
to each array item. But if an item has a
`templateitemname` key we look in the template
array for an item with matching `name` and
apply that instead. If no matching `name` is
found we mark the item invisible. Any named
template item not referenced is appended to the
end of the array, so this can be used to add a
watermark annotation or a logo image, for
example. To omit one of these items on the
plot, make an item with matching
`templateitemname` and `visible: false`.
ternary
:class:`plotly.graph_objects.layout.Ternary`
instance or dict with compatible properties
title
:class:`plotly.graph_objects.layout.Title`
instance or dict with compatible properties
titlefont
Deprecated: Please use layout.title.font
instead. Sets the title font. Note that the
title's font used to be customized by the now
deprecated `titlefont` attribute.
transition
Sets transition options used during
Plotly.react updates.
treemapcolorway
Sets the default treemap slice colors. Defaults
to the main `colorway` used for trace colors.
If you specify a new list here it can still be
extended with lighter and darker colors, see
`extendtreemapcolors`.
uirevision
Used to allow user interactions with the plot
to persist after `Plotly.react` calls that are
unaware of these interactions. If `uirevision`
is omitted, or if it is given and it changed
from the previous `Plotly.react` call, the
exact new figure is used. If `uirevision` is
truthy and did NOT change, any attribute that
has been affected by user interactions and did
not receive a different value in the new figure
will keep the interaction value.
`layout.uirevision` attribute serves as the
default for `uirevision` attributes in various
sub-containers. For finer control you can set
these sub-attributes directly. For example, if
your app separately controls the data on the x
and y axes you might set
`xaxis.uirevision=*time*` and
`yaxis.uirevision=*cost*`. Then if only the y
data is changed, you can update
`yaxis.uirevision=*quantity*` and the y axis
range will reset but the x axis range will
retain any user-driven zoom.
uniformtext
:class:`plotly.graph_objects.layout.Uniformtext
` instance or dict with compatible properties
updatemenus
A tuple of
:class:`plotly.graph_objects.layout.Updatemenu`
instances or dicts with compatible properties
updatemenudefaults
When used in a template (as
layout.template.layout.updatemenudefaults),
sets the default property values to use for
elements of layout.updatemenus
violingap
Sets the gap (in plot fraction) between violins
of adjacent location coordinates. Has no effect
on traces that have "width" set.
violingroupgap
Sets the gap (in plot fraction) between violins
of the same location coordinate. Has no effect
on traces that have "width" set.
violinmode
Determines how violins at the same location
coordinate are displayed on the graph. If
"group", the violins are plotted next to one
another centered around the shared location. If
"overlay", the violins are plotted over one
another, you might need to set "opacity" to see
                multiple violins. Has no effect on traces
that have "width" set.
waterfallgap
Sets the gap (in plot fraction) between bars of
adjacent location coordinates.
waterfallgroupgap
Sets the gap (in plot fraction) between bars of
the same location coordinate.
waterfallmode
Determines how bars at the same location
coordinate are displayed on the graph. With
"group", the bars are plotted next to one
another centered around the shared location.
With "overlay", the bars are plotted over one
                another; you might need to set "opacity" to see
multiple bars.
width
Sets the plot's width (in px).
xaxis
:class:`plotly.graph_objects.layout.XAxis`
instance or dict with compatible properties
yaxis
:class:`plotly.graph_objects.layout.YAxis`
instance or dict with compatible properties
""",
),
**kwargs
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py2@plotly@validators@_layout.py@.PATH_END.py
|
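The generated class above only wires a name and a docstring into plotly's `CompoundValidator` base class. As a rough illustration of what a compound validator does — accept an instance or a plain dict and always hand back an instance — here is a hand-rolled sketch; the `Layout` stand-in and the coercion logic are simplified assumptions, not plotly's actual implementation.

```python
class Layout:
    """Stand-in data class for the sketch."""
    def __init__(self, **props):
        self.props = props

class CompoundValidator:
    """Minimal sketch: accept an instance of `data_class` or a
    plain dict of properties, and always return an instance."""
    def __init__(self, plotly_name, data_class):
        self.plotly_name = plotly_name
        self.data_class = data_class

    def validate_coerce(self, value):
        if isinstance(value, self.data_class):
            return value
        if isinstance(value, dict):
            # Coerce a property dict into the compound data class.
            return self.data_class(**value)
        raise ValueError(
            f"Invalid value for {self.plotly_name!r}: {value!r}"
        )

v = CompoundValidator("layout", Layout)
coerced = v.validate_coerce({"width": 600, "height": 400})
```

This is why plotly users can pass either `go.Layout(...)` or a bare dict for `layout`: the validator normalizes both to the same object before the figure is built.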
{
"filename": "classifier_metrics.py",
"repo_name": "daniel-muthukrishna/astrorapid",
"repo_path": "astrorapid_extracted/astrorapid-master/astrorapid/classifier_metrics.py",
"type": "Python"
}
|
"""
Plot overall classification performance metrics.
"""
import os
import sys
import numpy as np
import itertools
from shutil import which as find_executable  # distutils.spawn is deprecated and removed in Python 3.12
from sklearn.metrics import roc_curve, auc
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import f1_score
from sklearn.metrics import average_precision_score
interp = np.interp  # scipy removed its top-level `interp` alias; np.interp is equivalent here
try:
import matplotlib
import matplotlib.pyplot as plt
# Check if latex is installed
if find_executable('latex'):
plt.rcParams['text.usetex'] = True
plt.rcParams['font.serif'] = ['Computer Modern Roman'] + plt.rcParams['font.serif']
font = {'family': 'normal',
'size': 34}
matplotlib.rc('font', **font)
except ImportError:
print("Warning: You will need to install matplotlib if you want to plot any metric")
COLORS = ['tab:green', 'tab:orange', 'tab:blue', 'tab:red', 'tab:purple', 'tab:brown', '#aaffc3', 'tab:olive',
'tab:cyan', '#FF1493', 'navy', 'tab:pink', 'lightcoral', '#228B22', '#aa6e28', '#FFA07A']
def plasticc_log_loss(y_true, y_pred, relative_class_weights=None):
"""
Implementation of weighted log loss used for the Kaggle challenge
"""
if np.nonzero(y_true[:, 0])[0].size == 0:
start_index = 1
else:
start_index = 0
print(start_index)
predictions = y_pred.copy()
# sanitize predictions
epsilon = sys.float_info.epsilon # this is machine dependent but essentially prevents log(0)
predictions = np.clip(predictions, epsilon, 1.0 - epsilon)
predictions = predictions / np.sum(predictions, axis=1)[:, np.newaxis]
predictions = np.log(predictions)
# multiplying the arrays is equivalent to a truth mask as y_true only contains zeros and ones
class_logloss = []
for i in range(start_index, predictions.shape[1]):
# average column wise log loss with truth mask applied
result = np.average(predictions[:, i][y_true[:, i] == 1])
class_logloss.append(result)
    weights = None if relative_class_weights is None else relative_class_weights[start_index:]
    return -1 * np.average(class_logloss, weights=weights)
def compute_precision_recall(classes, y_test, y_pred_prob, name='', fig_dir='.', title=None):
"""
Plot Precision-Recall curves.
"""
if np.nonzero(y_test[:, 0])[0].size == 0:
start_index = 1
else:
start_index = 0
nclasses = len(classes)
# For each class
precision = dict()
recall = dict()
save_auc = dict()
average_precision = dict()
for i in range(start_index, nclasses):
precision[i], recall[i], _ = precision_recall_curve(y_test[:, i], y_pred_prob[:, i])
average_precision[i] = average_precision_score(y_test[:, i], y_pred_prob[:, i])
save_auc[classes[i]] = average_precision[i]
# A "micro-average": quantifying score on all classes jointly
precision["micro"], recall["micro"], _ = precision_recall_curve(y_test.ravel(), y_pred_prob.ravel())
average_precision["micro"] = average_precision_score(y_test, y_pred_prob,
average="micro")
    save_auc['micro'] = average_precision["micro"]
print('Average precision score, micro-averaged over all classes: {0:0.2f}'
.format(average_precision["micro"]))
plt.figure(figsize=(12, 16))
f_scores = np.linspace(0.2, 0.8, num=4)
lines = []
labels = []
for f_score in f_scores:
x = np.linspace(0.01, 1)
y = f_score * x / (2 * x - f_score)
# l, = plt.plot(x[y >= 0], y[y >= 0], color='gray', alpha=0.2)
# plt.annotate('f1={0:0.1f}'.format(f_score), xy=(0.9, y[45] + 0.02))
# lines.append(l)
# labels.append('iso-f1 curves')
l, = plt.plot(recall["micro"], precision["micro"], color='navy', linestyle=':', lw=2)
lines.append(l)
labels.append('micro-average ({0:0.2f})'
''.format(average_precision["micro"]))
for i in range(start_index, nclasses):
l, = plt.plot(recall[i], precision[i], color=COLORS[i], lw=2)
lines.append(l)
labels.append('{0} ({1:0.2f})'
''.format(classes[i], average_precision[i]))
fig = plt.gcf()
fig.subplots_adjust(bottom=0.25)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('Recall')
plt.ylabel('Precision')
if title is not None:
plt.title(title, fontsize=34)
plt.legend(lines, labels, loc=(0.1, -.6), fontsize=24, frameon=True, ncol=2)
plt.tight_layout()
figname = os.path.join(fig_dir, 'precision_%s.pdf' % name)
plt.savefig(figname)
figname = os.path.join(fig_dir, 'precision_%s.png' % name)
plt.savefig(figname)
plt.close()
return figname, save_auc
def compute_multiclass_roc_auc(classes, y_test, y_pred_prob, name='', fig_dir='.', title=None, logyscale=False):
"""
Plot multiclass Receiver Operating Characteristic curves.
"""
if np.nonzero(y_test[:, 0])[0].size == 0:
start_index = 1
else:
start_index = 0
nclasses = len(classes)
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
save_auc = dict()
for i in range(start_index, nclasses):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_pred_prob[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
save_auc[classes[i]] = roc_auc[i]
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_pred_prob.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
save_auc['micro'] = roc_auc['micro']
# Compute macro-average ROC curve and ROC area
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(start_index, nclasses)]))
    # Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(start_index, nclasses):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= len(classes[start_index:])
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
save_auc['macro'] = roc_auc['macro']
# Plot all ROC curves
fig = plt.figure(figsize=(13, 12))
plt.plot(fpr["micro"], tpr["micro"],
label='micro-average ({0:0.2f})'
''.format(roc_auc["micro"]),
color='navy', linestyle=':', linewidth=6)
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ({0:0.2f})'
''.format(roc_auc["macro"]),
color='deeppink', linestyle=':', linewidth=6)
lw = 2
# colors = cycle(['aqua', 'darkorange', 'cornflowerblue'])
for i in range(start_index, nclasses):
plt.plot(fpr[i], tpr[i], lw=lw, color=COLORS[i],
label='{0} ({1:0.2f})'
''.format(classes[i], roc_auc[i]))
# # plt.plot([0, 1], [0, 1], 'k--', lw=lw)
# plt.xlim([0.0, 1.0])
if logyscale:
plt.yscale("log")
else:
pass # plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
if title is not None:
plt.title(title, fontsize=34) # plt.title(title, fontsize=70, fontweight="bold", y=1.02) # was size 34
plt.legend(loc="lower right", frameon=True, fontsize=26)
plt.tight_layout()
figname = os.path.join(fig_dir, 'roc_%s.pdf' % name)
plt.savefig(figname)
figname = os.path.join(fig_dir, 'roc_%s.png' % name)
plt.savefig(figname)
return figname, save_auc
def plot_confusion_matrix(cm, classes, normalize=False, title=None, cmap=plt.cm.RdBu, fig_dir='.', name='',
combine_kfolds=False, show_uncertainties=False):
"""
Plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if cm.shape[0] == 2:
classes = [cls_ for cls_ in classes if cls_ != 'Pre-explosion']
if combine_kfolds:
uncertainties = np.std(cm, axis=0)
cm = np.sum(cm, axis=0)
if normalize:
if combine_kfolds:
uncertainties = uncertainties.astype('float') / cm.sum(axis=1)[:, np.newaxis]
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
# Multiply off diagonal by -1
if cm.shape[0] > 2:
off_diag = ~np.eye(cm.shape[0], dtype=bool)
cm[off_diag] *= -1
np.savetxt(os.path.join(fig_dir, 'confusion_matrix_%s.csv' % name), cm)
print(cm)
cms = [cm]
deleterows = [False]
if np.all(np.isnan(cm[0])):
cmDelete = np.delete(cm, 0, 0)
cms.append(cmDelete)
deleterows.append(True)
for cm, deleterow in zip(cms, deleterows):
fig = plt.figure(figsize=(15, 12))
plt.imshow(cm, interpolation='nearest', cmap=cmap, vmin=-1, vmax=1)
# plt.title(title)
# cb = plt.colorbar()
# cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=27)
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90, fontsize=27)
if deleterow:
plt.yticks(tick_marks[:-1], classes[1:], fontsize=27)
else:
plt.yticks(tick_marks, classes, fontsize=27)
fmt = '.2f' if normalize else 'd'
thresh = 0.5 # cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
value = format(abs(cm[i, j]), fmt)
if combine_kfolds and show_uncertainties:
unc = format(uncertainties[i, j], fmt)
cell_text = r"{} $\pm$ {}".format(value, unc)
else:
cell_text = value
if cell_text == 'nan':
cell_text = '-'
plt.text(j, i, cell_text, horizontalalignment="center",
color="white" if abs(cm[i, j]) > thresh else "black", fontsize=26)
if title is not None:
plt.title(title, fontsize=34) # plt.title(title, fontsize=70, fontweight="bold", y=1.02) # was size 33
plt.ylabel('True label')
plt.xlabel('Predicted label')
figname_pdf = os.path.join(fig_dir, 'confusion_matrix_%s.pdf' % name)
plt.savefig(figname_pdf, bbox_inches="tight")
if not deleterow:
figname_png = os.path.join(fig_dir, 'confusion_matrix_%s.png' % name)
plt.savefig(figname_png, bbox_inches="tight")
plt.close()
return figname_png
|
daniel-muthukrishnaREPO_NAMEastrorapidPATH_START.@astrorapid_extracted@astrorapid-master@astrorapid@classifier_metrics.py@.PATH_END.py
|
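`plasticc_log_loss` above clips, renormalizes, averages the log-probability per class over that class's true members, then weight-averages across classes. The same metric can be sketched in a few lines of numpy (assuming one-hot `y_true`, no empty leading class, and uniform class weights by default):

```python
import numpy as np

def weighted_log_loss(y_true, y_pred, class_weights=None):
    """Per-class mean log loss, averaged with `class_weights`
    (one-hot y_true; mirrors the PLAsTiCC-style metric above)."""
    eps = np.finfo(float).eps
    # Sanitize predictions: avoid log(0), then renormalize rows.
    p = np.clip(y_pred, eps, 1.0 - eps)
    p = p / p.sum(axis=1, keepdims=True)
    logp = np.log(p)
    nclasses = y_true.shape[1]
    # Mean log-probability over the true members of each class.
    per_class = [logp[:, i][y_true[:, i] == 1].mean() for i in range(nclasses)]
    return -np.average(per_class, weights=class_weights)

y_true = np.array([[1, 0], [0, 1], [1, 0]])
y_pred = np.array([[0.9, 0.1], [0.2, 0.8], [0.8, 0.2]])
loss = weighted_log_loss(y_true, y_pred)  # ≈ 0.1937
```

Confident correct predictions drive the loss toward zero, while the per-class averaging keeps rare classes from being drowned out by common ones.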
{
"filename": "zep.py",
"repo_name": "langchain-ai/langchain",
"repo_path": "langchain_extracted/langchain-master/libs/community/langchain_community/vectorstores/zep.py",
"type": "Python"
}
|
from __future__ import annotations
import logging
import warnings
from dataclasses import asdict, dataclass
from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Tuple
from langchain_core.documents import Document
from langchain_core.embeddings import Embeddings
from langchain_core.vectorstores import VectorStore
if TYPE_CHECKING:
from zep_python.document import Document as ZepDocument
from zep_python.document import DocumentCollection
logger = logging.getLogger()
@dataclass
class CollectionConfig:
"""Configuration for a `Zep Collection`.
If the collection does not exist, it will be created.
Attributes:
name (str): The name of the collection.
description (Optional[str]): An optional description of the collection.
metadata (Optional[Dict[str, Any]]): Optional metadata for the collection.
embedding_dimensions (int): The number of dimensions for the embeddings in
the collection. This should match the Zep server configuration
if auto-embed is true.
is_auto_embedded (bool): A flag indicating whether the collection is
automatically embedded by Zep.
"""
name: str
description: Optional[str]
metadata: Optional[Dict[str, Any]]
embedding_dimensions: int
is_auto_embedded: bool
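`_create_collection` below expands this config with `asdict(...)` into keyword arguments for `client.document.add_collection`. A self-contained sketch of that expansion — the dataclass is re-declared locally so the snippet runs without zep-python or langchain installed, and the field values are made up:

```python
from dataclasses import asdict, dataclass
from typing import Any, Dict, Optional

# Minimal local re-declaration of CollectionConfig for a runnable sketch;
# the real class lives in langchain_community.vectorstores.zep.
@dataclass
class CollectionConfig:
    name: str
    description: Optional[str]
    metadata: Optional[Dict[str, Any]]
    embedding_dimensions: int
    is_auto_embedded: bool

config = CollectionConfig(
    name="docs",
    description="demo collection",
    metadata=None,
    embedding_dimensions=768,  # must match the Zep server if auto-embed is on
    is_auto_embedded=True,
)
# _create_collection effectively does: add_collection(**asdict(config))
print(asdict(config)["embedding_dimensions"])  # → 768
```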
class ZepVectorStore(VectorStore):
"""`Zep` vector store.
It provides methods for adding texts or documents to the store,
searching for similar documents, and deleting documents.
Search scores are calculated using cosine similarity normalized to [0, 1].
Args:
api_url (str): The URL of the Zep API.
collection_name (str): The name of the collection in the Zep store.
api_key (Optional[str]): The API key for the Zep API.
config (Optional[CollectionConfig]): The configuration for the collection.
Required if the collection does not already exist.
embedding (Optional[Embeddings]): Optional embedding function to use to
embed the texts. Required if the collection is not auto-embedded.
"""
def __init__(
self,
collection_name: str,
api_url: str,
*,
api_key: Optional[str] = None,
config: Optional[CollectionConfig] = None,
embedding: Optional[Embeddings] = None,
) -> None:
super().__init__()
if not collection_name:
raise ValueError(
"collection_name must be specified when using ZepVectorStore."
)
try:
from zep_python import ZepClient
except ImportError:
raise ImportError(
"Could not import zep-python python package. "
"Please install it with `pip install zep-python`."
)
self._client = ZepClient(api_url, api_key=api_key)
self.collection_name = collection_name
# If for some reason the collection name is not the same as the one in the
# config, update it.
if config and config.name != self.collection_name:
config.name = self.collection_name
self._collection_config = config
self._collection = self._load_collection()
self._embedding = embedding
# self.add_texts(texts, metadatas=metadatas, **kwargs)
@property
def embeddings(self) -> Optional[Embeddings]:
"""Access the query embedding object if available."""
return self._embedding
def _load_collection(self) -> DocumentCollection:
"""
Load the collection from the Zep backend.
"""
from zep_python import NotFoundError
try:
collection = self._client.document.get_collection(self.collection_name)
except NotFoundError:
logger.info(
f"Collection {self.collection_name} not found. Creating new collection."
)
collection = self._create_collection()
return collection
def _create_collection(self) -> DocumentCollection:
"""
Create a new collection in the Zep backend.
"""
if not self._collection_config:
raise ValueError(
"Collection config must be specified when creating a new collection."
)
collection = self._client.document.add_collection(
**asdict(self._collection_config)
)
return collection
def _generate_documents_to_add(
self,
texts: Iterable[str],
metadatas: Optional[List[Dict[Any, Any]]] = None,
document_ids: Optional[List[str]] = None,
) -> List[ZepDocument]:
from zep_python.document import Document as ZepDocument
embeddings = None
if self._collection and self._collection.is_auto_embedded:
if self._embedding is not None:
warnings.warn(
"""The collection is set to auto-embed and an embedding
function is present. Ignoring the embedding function.""",
stacklevel=2,
)
elif self._embedding is not None:
embeddings = self._embedding.embed_documents(list(texts))
if self._collection and self._collection.embedding_dimensions != len(
embeddings[0]
):
raise ValueError(
"The embedding dimensions of the collection and the embedding"
" function do not match. Collection dimensions:"
f" {self._collection.embedding_dimensions}, Embedding dimensions:"
f" {len(embeddings[0])}"
)
documents: List[ZepDocument] = []
for i, d in enumerate(texts):
documents.append(
ZepDocument(
content=d,
metadata=metadatas[i] if metadatas else None,
document_id=document_ids[i] if document_ids else None,
embedding=embeddings[i] if embeddings else None,
)
)
return documents
def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[Dict[str, Any]]] = None,
document_ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
document_ids: Optional list of document ids associated with the texts.
kwargs: vectorstore specific parameters
Returns:
List of ids from adding the texts into the vectorstore.
"""
if not self._collection:
raise ValueError(
"collection should be an instance of a Zep DocumentCollection"
)
documents = self._generate_documents_to_add(texts, metadatas, document_ids)
uuids = self._collection.add_documents(documents)
return uuids
async def aadd_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[Dict[str, Any]]] = None,
document_ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore."""
if not self._collection:
raise ValueError(
"collection should be an instance of a Zep DocumentCollection"
)
documents = self._generate_documents_to_add(texts, metadatas, document_ids)
uuids = await self._collection.aadd_documents(documents)
return uuids
def search(
self,
query: str,
search_type: str,
metadata: Optional[Dict[str, Any]] = None,
k: int = 3,
**kwargs: Any,
) -> List[Document]:
"""Return docs most similar to query using specified search type."""
if search_type == "similarity":
return self.similarity_search(query, k=k, metadata=metadata, **kwargs)
elif search_type == "mmr":
return self.max_marginal_relevance_search(
query, k=k, metadata=metadata, **kwargs
)
else:
raise ValueError(
f"search_type of {search_type} not allowed. Expected "
"search_type to be 'similarity' or 'mmr'."
)
async def asearch(
self,
query: str,
search_type: str,
metadata: Optional[Dict[str, Any]] = None,
k: int = 3,
**kwargs: Any,
) -> List[Document]:
"""Return docs most similar to query using specified search type."""
if search_type == "similarity":
return await self.asimilarity_search(
query, k=k, metadata=metadata, **kwargs
)
elif search_type == "mmr":
return await self.amax_marginal_relevance_search(
query, k=k, metadata=metadata, **kwargs
)
else:
raise ValueError(
f"search_type of {search_type} not allowed. Expected "
"search_type to be 'similarity' or 'mmr'."
)
def similarity_search(
self,
query: str,
k: int = 4,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs most similar to query."""
results = self._similarity_search_with_relevance_scores(
query, k=k, metadata=metadata, **kwargs
)
return [doc for doc, _ in results]
def similarity_search_with_score(
self,
query: str,
k: int = 4,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Run similarity search with distance."""
return self._similarity_search_with_relevance_scores(
query, k=k, metadata=metadata, **kwargs
)
def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""
Default similarity search with relevance scores. Modify if necessary
in subclass.
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Args:
query: input text
k: Number of Documents to return. Defaults to 4.
metadata: Optional, metadata filter
**kwargs: kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 to 1 and
filter the resulting set of retrieved docs
Returns:
List of Tuples of (doc, similarity_score)
"""
if not self._collection:
raise ValueError(
"collection should be an instance of a Zep DocumentCollection"
)
if not self._collection.is_auto_embedded and self._embedding:
query_vector = self._embedding.embed_query(query)
results = self._collection.search(
embedding=query_vector, limit=k, metadata=metadata, **kwargs
)
else:
results = self._collection.search(
query, limit=k, metadata=metadata, **kwargs
)
return [
(
Document(
page_content=doc.content,
metadata=doc.metadata,
),
doc.score or 0.0,
)
for doc in results
]
async def asimilarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query."""
if not self._collection:
raise ValueError(
"collection should be an instance of a Zep DocumentCollection"
)
if not self._collection.is_auto_embedded and self._embedding:
query_vector = self._embedding.embed_query(query)
results = await self._collection.asearch(
embedding=query_vector, limit=k, metadata=metadata, **kwargs
)
else:
results = await self._collection.asearch(
query, limit=k, metadata=metadata, **kwargs
)
return [
(
Document(
page_content=doc.content,
metadata=doc.metadata,
),
doc.score or 0.0,
)
for doc in results
]
async def asimilarity_search(
self,
query: str,
k: int = 4,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs most similar to query."""
results = await self.asimilarity_search_with_relevance_scores(
query, k, metadata=metadata, **kwargs
)
return [doc for doc, _ in results]
def similarity_search_by_vector(
self,
embedding: List[float],
k: int = 4,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs most similar to embedding vector.
Args:
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
metadata: Optional, metadata filter
Returns:
List of Documents most similar to the query vector.
"""
if not self._collection:
raise ValueError(
"collection should be an instance of a Zep DocumentCollection"
)
results = self._collection.search(
embedding=embedding, limit=k, metadata=metadata, **kwargs
)
return [
Document(
page_content=doc.content,
metadata=doc.metadata,
)
for doc in results
]
async def asimilarity_search_by_vector(
self,
embedding: List[float],
k: int = 4,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs most similar to embedding vector."""
if not self._collection:
raise ValueError(
"collection should be an instance of a Zep DocumentCollection"
)
results = self._collection.search(
embedding=embedding, limit=k, metadata=metadata, **kwargs
)
return [
Document(
page_content=doc.content,
metadata=doc.metadata,
)
for doc in results
]
def max_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
Zep determines this automatically and this parameter is
ignored.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
metadata: Optional, metadata to filter the resulting set of retrieved docs
Returns:
List of Documents selected by maximal marginal relevance.
"""
if not self._collection:
raise ValueError(
"collection should be an instance of a Zep DocumentCollection"
)
if not self._collection.is_auto_embedded and self._embedding:
query_vector = self._embedding.embed_query(query)
results = self._collection.search(
embedding=query_vector,
limit=k,
metadata=metadata,
search_type="mmr",
mmr_lambda=lambda_mult,
**kwargs,
)
else:
results, query_vector = self._collection.search_return_query_vector(
query,
limit=k,
metadata=metadata,
search_type="mmr",
mmr_lambda=lambda_mult,
**kwargs,
)
return [Document(page_content=d.content, metadata=d.metadata) for d in results]
async def amax_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance."""
if not self._collection:
raise ValueError(
"collection should be an instance of a Zep DocumentCollection"
)
if not self._collection.is_auto_embedded and self._embedding:
query_vector = self._embedding.embed_query(query)
results = await self._collection.asearch(
embedding=query_vector,
limit=k,
metadata=metadata,
search_type="mmr",
mmr_lambda=lambda_mult,
**kwargs,
)
else:
results, query_vector = await self._collection.asearch_return_query_vector(
query,
limit=k,
metadata=metadata,
search_type="mmr",
mmr_lambda=lambda_mult,
**kwargs,
)
return [Document(page_content=d.content, metadata=d.metadata) for d in results]
def max_marginal_relevance_search_by_vector(
self,
embedding: List[float],
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
Zep determines this automatically and this parameter is
ignored.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
metadata: Optional, metadata to filter the resulting set of retrieved docs
Returns:
List of Documents selected by maximal marginal relevance.
"""
if not self._collection:
raise ValueError(
"collection should be an instance of a Zep DocumentCollection"
)
results = self._collection.search(
embedding=embedding,
limit=k,
metadata=metadata,
search_type="mmr",
mmr_lambda=lambda_mult,
**kwargs,
)
return [Document(page_content=d.content, metadata=d.metadata) for d in results]
async def amax_marginal_relevance_search_by_vector(
self,
embedding: List[float],
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance."""
if not self._collection:
raise ValueError(
"collection should be an instance of a Zep DocumentCollection"
)
results = await self._collection.asearch(
embedding=embedding,
limit=k,
metadata=metadata,
search_type="mmr",
mmr_lambda=lambda_mult,
**kwargs,
)
return [Document(page_content=d.content, metadata=d.metadata) for d in results]
@classmethod
def from_texts(
cls,
texts: List[str],
embedding: Optional[Embeddings] = None,
metadatas: Optional[List[dict]] = None,
collection_name: str = "",
api_url: str = "",
api_key: Optional[str] = None,
config: Optional[CollectionConfig] = None,
**kwargs: Any,
) -> ZepVectorStore:
"""
Class method that returns a ZepVectorStore instance initialized from texts.
If the collection does not exist, it will be created.
Args:
texts (List[str]): The list of texts to add to the vectorstore.
embedding (Optional[Embeddings]): Optional embedding function to use to
embed the texts.
metadatas (Optional[List[Dict[str, Any]]]): Optional list of metadata
associated with the texts.
collection_name (str): The name of the collection in the Zep store.
api_url (str): The URL of the Zep API.
api_key (Optional[str]): The API key for the Zep API.
config (Optional[CollectionConfig]): The configuration for the collection.
kwargs: Additional parameters specific to the vectorstore.
Returns:
ZepVectorStore: An instance of ZepVectorStore.
"""
vecstore = cls(
collection_name,
api_url,
api_key=api_key,
config=config,
embedding=embedding,
)
vecstore.add_texts(texts, metadatas)
return vecstore
def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> None:
"""Delete by Zep vector UUIDs.
Parameters
----------
ids : Optional[List[str]]
The UUIDs of the vectors to delete.
Raises
------
ValueError
If no UUIDs are provided.
"""
if ids is None or len(ids) == 0:
raise ValueError("No uuids provided to delete.")
if self._collection is None:
raise ValueError("No collection name provided.")
for u in ids:
self._collection.delete_document(u)
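The class docstring above states that search scores are cosine similarities normalized to [0, 1]. A sketch of one such mapping, `(1 + cos θ) / 2` — the actual normalization happens server-side in Zep, so this exact formula is an assumption, not the library's documented implementation:

```python
import numpy as np

def normalized_cosine_score(query_vec, doc_vec):
    # Cosine similarity lies in [-1, 1]; shifting and halving maps it to
    # [0, 1], the score range the ZepVectorStore docstring promises.
    # (Assumed formula; Zep computes the real score server-side.)
    q = np.asarray(query_vec, dtype=float)
    d = np.asarray(doc_vec, dtype=float)
    cos = q.dot(d) / (np.linalg.norm(q) * np.linalg.norm(d))
    return (1.0 + cos) / 2.0

print(normalized_cosine_score([1, 0], [1, 0]))   # → 1.0
print(normalized_cosine_score([1, 0], [-1, 0]))  # → 0.0
```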
|
langchain-aiREPO_NAMElangchainPATH_START.@langchain_extracted@langchain-master@libs@community@langchain_community@vectorstores@zep.py@.PATH_END.py
|
{
"filename": "__init__.py",
"repo_name": "scipy/scipy",
"repo_path": "scipy_extracted/scipy-main/scipy/sparse/linalg/_eigen/lobpcg/tests/__init__.py",
"type": "Python"
}
|
scipyREPO_NAMEscipyPATH_START.@scipy_extracted@scipy-main@scipy@sparse@linalg@_eigen@lobpcg@tests@__init__.py@.PATH_END.py
|
|
{
"filename": "filter_design.py",
"repo_name": "scipy/scipy",
"repo_path": "scipy_extracted/scipy-main/scipy/signal/filter_design.py",
"type": "Python"
}
|
# This file is not meant for public use and will be removed in SciPy v2.0.0.
# Use the `scipy.signal` namespace for importing the functions
# included below.
from scipy._lib.deprecation import _sub_module_deprecation
__all__ = [ # noqa: F822
'findfreqs', 'freqs', 'freqz', 'tf2zpk', 'zpk2tf', 'normalize',
'lp2lp', 'lp2hp', 'lp2bp', 'lp2bs', 'bilinear', 'iirdesign',
'iirfilter', 'butter', 'cheby1', 'cheby2', 'ellip', 'bessel',
'band_stop_obj', 'buttord', 'cheb1ord', 'cheb2ord', 'ellipord',
'buttap', 'cheb1ap', 'cheb2ap', 'ellipap', 'besselap',
'BadCoefficients', 'freqs_zpk', 'freqz_zpk',
'tf2sos', 'sos2tf', 'zpk2sos', 'sos2zpk', 'group_delay',
'sosfreqz', 'freqz_sos', 'iirnotch', 'iirpeak', 'bilinear_zpk',
'lp2lp_zpk', 'lp2hp_zpk', 'lp2bp_zpk', 'lp2bs_zpk',
'gammatone', 'iircomb',
]
def __dir__():
return __all__
def __getattr__(name):
return _sub_module_deprecation(sub_package="signal", module="filter_design",
private_modules=["_filter_design"], all=__all__,
attribute=name)
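This shim relies on the PEP 562 module-level `__getattr__` hook (wrapped here by SciPy's private `_sub_module_deprecation` helper). A self-contained sketch of the underlying pattern, using a throwaway module and made-up names:

```python
import types
import warnings

# Forward old attribute names to a private implementation, emitting a
# DeprecationWarning on access -- the same PEP 562 pattern the shim uses.
_private = {"freqz": lambda: "freqz result"}

legacy = types.ModuleType("legacy_signal")

def _deprecated_getattr(name):
    if name in _private:
        warnings.warn(
            f"legacy_signal.{name} is deprecated; import it from the "
            "public namespace instead",
            DeprecationWarning,
            stacklevel=2,
        )
        return _private[name]
    raise AttributeError(name)

# Storing the hook in the module dict activates PEP 562 lookup.
legacy.__getattr__ = _deprecated_getattr

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = legacy.freqz()
print(result, len(caught))  # → freqz result 1
```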
|
scipyREPO_NAMEscipyPATH_START.@scipy_extracted@scipy-main@scipy@signal@filter_design.py@.PATH_END.py
|
{
"filename": "sis_truncate.py",
"repo_name": "sibirrer/lenstronomy",
"repo_path": "lenstronomy_extracted/lenstronomy-main/lenstronomy/LensModel/Profiles/sis_truncate.py",
"type": "Python"
}
|
__author__ = "sibirrer"
import numpy as np
from lenstronomy.LensModel.Profiles.base_profile import LensProfileBase
__all__ = ["SIS_truncate"]
class SIS_truncate(LensProfileBase):
"""This class contains the function and the derivatives of the Singular Isothermal
Sphere."""
param_names = ["theta_E", "r_trunc", "center_x", "center_y"]
lower_limit_default = {
"theta_E": 0,
"r_trunc": 0,
"center_x": -100,
"center_y": -100,
}
upper_limit_default = {
"theta_E": 100,
"r_trunc": 100,
"center_x": 100,
"center_y": 100,
}
def function(self, x, y, theta_E, r_trunc, center_x=0, center_y=0):
x_shift = x - center_x
y_shift = y - center_y
r = np.sqrt(x_shift * x_shift + y_shift * y_shift)
if isinstance(r, int) or isinstance(r, float):
if r < r_trunc:
f_ = theta_E * r
elif r < 2 * r_trunc:
f_ = theta_E * r_trunc + 1.0 / 2 * theta_E * (3 - r / r_trunc) * (
r - r_trunc
)
else:
f_ = 3.0 / 2 * theta_E * r_trunc
else:
f_ = np.zeros_like(r)
f_[r < r_trunc] = theta_E * r[r < r_trunc]
r_ = r[(r < 2 * r_trunc) & (r > r_trunc)]
f_[(r < 2 * r_trunc) & (r > r_trunc)] = (
theta_E * r_trunc
+ 1.0 / 2 * theta_E * (3 - r_ / r_trunc) * (r_ - r_trunc)
)
f_[r > 2 * r_trunc] = 3.0 / 2 * theta_E * r_trunc
return f_
def derivatives(self, x, y, theta_E, r_trunc, center_x=0, center_y=0):
"""Returns df/dx and df/dy of the function."""
x_shift = x - center_x
y_shift = y - center_y
dphi_dr = self._dphi_dr(x_shift, y_shift, theta_E, r_trunc)
dr_dx, dr_dy = self._dr_dx(x_shift, y_shift)
f_x = dphi_dr * dr_dx
f_y = dphi_dr * dr_dy
return f_x, f_y
def hessian(self, x, y, theta_E, r_trunc, center_x=0, center_y=0):
"""Returns Hessian matrix of function d^2f/dx^2, d^2/dxdy, d^2/dydx,
d^f/dy^2."""
x_shift = x - center_x
y_shift = y - center_y
dphi_dr = self._dphi_dr(x_shift, y_shift, theta_E, r_trunc)
d2phi_dr2 = self._d2phi_dr2(x_shift, y_shift, theta_E, r_trunc)
        dr_dx, dr_dy = self._dr_dx(x_shift, y_shift)
d2r_dx2, d2r_dy2, d2r_dxy = self._d2r_dx2(x_shift, y_shift)
f_xx = d2r_dx2 * dphi_dr + dr_dx**2 * d2phi_dr2
f_yy = d2r_dy2 * dphi_dr + dr_dy**2 * d2phi_dr2
f_xy = d2r_dxy * dphi_dr + dr_dx * dr_dy * d2phi_dr2
return f_xx, f_xy, f_xy, f_yy
def _dphi_dr(self, x, y, theta_E, r_trunc):
"""
:param x:
:param y:
:param r_trunc:
:return:
"""
r = np.sqrt(x * x + y * y)
if isinstance(r, int) or isinstance(r, float):
if r == 0:
a = 0
elif r < r_trunc:
a = theta_E
elif r < 2 * r_trunc:
a = theta_E * (2 - r / r_trunc)
else:
a = 0
else:
a = np.zeros_like(r)
a[(r < r_trunc) & (r > 0)] = theta_E
r_ = r[(r < 2 * r_trunc) & (r >= r_trunc)]
a[(r < 2 * r_trunc) & (r >= r_trunc)] = theta_E * (2 - r_ / r_trunc)
a[r >= 2 * r_trunc] = 0
return a
def _d2phi_dr2(self, x, y, theta_E, r_trunc):
"""Second derivative of the potential in radial direction :param x:
:param y:
:param theta_E:
:param r_trunc:
:return:
"""
r = np.sqrt(x * x + y * y)
if isinstance(r, int) or isinstance(r, float):
if r < r_trunc:
a = 0
elif r < 2 * r_trunc:
a = -theta_E / r_trunc
else:
a = 0
else:
a = np.zeros_like(r)
a[r < r_trunc] = 0
a[(r < 2 * r_trunc) & (r > r_trunc)] = -theta_E / r_trunc
a[r > 2 * r_trunc] = 0
return a
def _dr_dx(self, x, y):
"""Derivative of dr/dx, dr/dy :param x:
:param y:
:return:
"""
r = np.sqrt(x**2 + y**2)
if isinstance(r, int) or isinstance(r, float):
if r == 0:
r = 1
else:
r[r == 0] = 1
return x / r, y / r
@staticmethod
def _d2r_dx2(x, y):
"""Second derivative :param x:
:param y:
:return:
"""
r = np.sqrt(x**2 + y**2)
if isinstance(r, int) or isinstance(r, float):
if r == 0:
r = 1
else:
r[r == 0] = 1
return y**2 / r**3, x**2 / r**3, -x * y / r**3
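The profile above is piecewise: the radial deflection is constant at `theta_E` inside `r_trunc`, tapers linearly to zero between `r_trunc` and `2*r_trunc`, and vanishes beyond. A standalone vectorized sketch of that taper (re-implemented here rather than imported from lenstronomy), including a spot check that the pieces join continuously at `r_trunc`:

```python
import numpy as np

def dphi_dr(r, theta_E, r_trunc):
    # Radial deflection of the truncated SIS, mirroring _dphi_dr above:
    # constant theta_E inside r_trunc, linearly tapered to zero by
    # 2 * r_trunc, and exactly zero outside.
    r = np.asarray(r, dtype=float)
    a = np.zeros_like(r)
    inner = (r > 0) & (r < r_trunc)
    taper = (r >= r_trunc) & (r < 2 * r_trunc)
    a[inner] = theta_E
    a[taper] = theta_E * (2 - r[taper] / r_trunc)
    return a

theta_E, r_t = 1.0, 1.0
# values: 1.0 (inside), 1.0 (continuous at r_trunc), 0.5 (taper), 0.0 (beyond)
print(dphi_dr([0.5, 1.0, 1.5, 2.5], theta_E, r_t))
```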
|
sibirrerREPO_NAMElenstronomyPATH_START.@lenstronomy_extracted@lenstronomy-main@lenstronomy@LensModel@Profiles@sis_truncate.py@.PATH_END.py
|
{
"filename": "test_tgasSelect.py",
"repo_name": "jobovy/gaia_tools",
"repo_path": "gaia_tools_extracted/gaia_tools-main/nose/test_tgasSelect.py",
"type": "Python"
}
|
# Tests of gaia_tools.select.tgasSelect
import numpy
import gaia_tools.select
def test_effvol_complete():
# Test that the effective volume == volume when the completeness == 1
tsf= gaia_tools.select.tgasSelectUniform(comp=1.)
tesf= gaia_tools.select.tgasEffectiveSelect(tsf)
dxy, dz, zmin= 0.2, 0.1, 0.15
v= tesf.volume(\
lambda x,y,z: cyl_vol_func(x,y,z,xymax=dxy,zmin=zmin,zmax=zmin+dz),
xyz=True)
v_exp= numpy.pi*dxy**2.*dz
assert(numpy.fabs(v/v_exp-1.) < 10.**-3.), 'Effective volume for unit completeness is not equal to the volume'
# Another one
dxy, dz, zmin= 0.2, 0.2, -0.15
v= tesf.volume(\
lambda x,y,z: cyl_vol_func(x,y,z,xymax=dxy,zmin=zmin,zmax=zmin+dz),
xyz=True,ndists=251)
v_exp= numpy.pi*dxy**2.*dz
assert(numpy.fabs(v/v_exp-1.) < 10.**-2.), 'Effective volume for unit completeness is not equal to the volume'
return None
def test_effvol_uniform_complete():
# Test that the effective volume == A x volume when the completeness == A
comp= 0.33
tsf= gaia_tools.select.tgasSelectUniform(comp=comp)
tesf= gaia_tools.select.tgasEffectiveSelect(tsf)
dxy, dz, zmin= 0.2, 0.1, 0.15
v= tesf.volume(\
lambda x,y,z: cyl_vol_func(x,y,z,xymax=dxy,zmin=zmin,zmax=zmin+dz),
xyz=True)
v_exp= numpy.pi*dxy**2.*dz*comp
    assert(numpy.fabs(v/v_exp-1.) < 10.**-3.), 'Effective volume for constant completeness is not equal to completeness times the volume'
# Another one
dxy, dz, zmin= 0.2, 0.2, -0.15
v= tesf.volume(\
lambda x,y,z: cyl_vol_func(x,y,z,xymax=dxy,zmin=zmin,zmax=zmin+dz),
xyz=True,ndists=251)
v_exp= numpy.pi*dxy**2.*dz*comp
    assert(numpy.fabs(v/v_exp-1.) < 10.**-2.), 'Effective volume for constant completeness is not equal to completeness times the volume'
return None
def test_effvol_uniform_complete_partialsky():
# Test that the effective volume == A x volume x sky-fraction when the completeness == A over a fraction of the sky for a spherical volume
comp= 0.33
ramin, ramax= 30., 120.
tsf= gaia_tools.select.tgasSelectUniform(comp=comp,ramin=ramin,ramax=ramax)
tesf= gaia_tools.select.tgasEffectiveSelect(tsf)
dr, rmin= 0.1, 0.
v= tesf.volume(\
lambda x,y,z: spher_vol_func(x,y,z,rmin=rmin,rmax=rmin+dr),
xyz=True,ndists=251)
v_exp= 4.*numpy.pi*dr**3./3.*comp*(ramax-ramin)/360.
    assert(numpy.fabs(v/v_exp-1.) < 10.**-2.), 'Effective volume for constant completeness over part of the sky is not equal to completeness times sky fraction times the volume'
# Another one
dr, rmin= 0.2, 0.
v= tesf.volume(\
lambda x,y,z: spher_vol_func(x,y,z,rmin=rmin,rmax=rmin+dr),
xyz=True,ndists=501)
v_exp= 4.*numpy.pi*dr**3./3.*comp*(ramax-ramin)/360.
    assert(numpy.fabs(v/v_exp-1.) < 10.**-1.9), 'Effective volume for constant completeness over part of the sky is not equal to completeness times sky fraction times the volume'
return None
def test_effvol_uniform_complete_gaiagoodsky():
    # Test that the effective volume == A x volume x (unexcluded sky fraction) when the completeness == A over the non-excluded Gaia sky, for a spherical volume
comp= 0.33
tsf= gaia_tools.select.tgasSelectUniform(comp=comp,keepexclude=True)
tesf= gaia_tools.select.tgasEffectiveSelect(tsf)
dr, rmin= 0.1, 0.
v= tesf.volume(\
lambda x,y,z: spher_vol_func(x,y,z,rmin=rmin,rmax=rmin+dr),
xyz=True,ndists=251)
    v_exp= 4.*numpy.pi*dr**3./3.*comp\
        *float(numpy.sum(~tsf._exclude_mask_skyonly))\
        /len(tsf._exclude_mask_skyonly)
    assert(numpy.fabs(v/v_exp-1.) < 10.**-2.), 'Effective volume for constant completeness over the good Gaia sky is not equal to completeness times sky fraction times the volume'
# Another one
dr, rmin= 0.2, 0.
v= tesf.volume(\
lambda x,y,z: spher_vol_func(x,y,z,rmin=rmin,rmax=rmin+dr),
xyz=True,ndists=501)
    v_exp= 4.*numpy.pi*dr**3./3.*comp\
        *float(numpy.sum(~tsf._exclude_mask_skyonly))\
        /len(tsf._exclude_mask_skyonly)
    assert(numpy.fabs(v/v_exp-1.) < 10.**-1.9), 'Effective volume for constant completeness over the good Gaia sky is not equal to completeness times sky fraction times the volume'
return None
def cyl_vol_func(X,Y,Z,xymin=0.,xymax=0.15,zmin=0.05,zmax=0.15):
"""A function that bins in cylindrical annuli around the Sun"""
xy= numpy.sqrt(X**2.+Y**2.)
out= numpy.zeros_like(X)
out[(xy >= xymin)*(xy < xymax)*(Z >= zmin)*(Z < zmax)]= 1.
return out
def spher_vol_func(X,Y,Z,rmin=0.,rmax=0.15):
"""A function that bins in spherical annuli around the Sun"""
r= numpy.sqrt(X**2.+Y**2.+Z**2.)
out= numpy.zeros_like(X)
out[(r >= rmin)*(r < rmax)]= 1.
return out
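These indicator functions turn `tesf.volume` into a plain volume integral. A quick standalone Monte-Carlo sanity check (the helper is re-declared locally so the snippet runs without gaia_tools): averaging the spherical indicator over a bounding cube should recover the sphere-to-cube volume ratio pi/6:

```python
import numpy as np

def spher_vol_func(X, Y, Z, rmin=0.0, rmax=0.15):
    # Same indicator as above: 1 inside the spherical shell, 0 outside.
    r = np.sqrt(X**2 + Y**2 + Z**2)
    out = np.zeros_like(X)
    out[(r >= rmin) & (r < rmax)] = 1.0
    return out

# Monte-Carlo check: the mean of the indicator over the cube [-rmax, rmax]^3
# should approach (4/3)*pi*rmax^3 / (2*rmax)^3 = pi/6 for rmin = 0.
rng = np.random.default_rng(0)
rmax = 0.15
pts = rng.uniform(-rmax, rmax, size=(3, 200_000))
frac = spher_vol_func(*pts, rmin=0.0, rmax=rmax).mean()
print(abs(frac - np.pi / 6) < 0.01)  # → True
```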
|
jobovyREPO_NAMEgaia_toolsPATH_START.@gaia_tools_extracted@gaia_tools-main@nose@test_tgasSelect.py@.PATH_END.py
|
{
"filename": "__init__kpd.py",
"repo_name": "tgrassi/prizmo",
"repo_path": "prizmo_extracted/prizmo-main/src_py/ChiantiPy/__init__kpd.py",
"type": "Python"
}
|
"""
ChiantiPy - CHIANTI Python package Calculates various aspects of emission lines
and continua from the CHIANTI atomic database for astrophysical spectroscopy.
"""
# This is not yet an Astropy affiliated package, but it makes use of the Astropy
# package template
# this indicates whether or not we are in the package's setup.py
try:
_ASTROPY_SETUP_
except NameError:
from sys import version_info
if version_info[0] >= 3:
import builtins
else:
import __builtin__ as builtins
builtins._ASTROPY_SETUP_ = False
try:
from .version import version as __version__
except ImportError:
__version__ = ''
try:
from .version import githash as __githash__
except ImportError:
__githash__ = ''
# Import astropy test runner if we can and dummy it if we can't
import os
try:
from astropy.tests.helper import TestRunner
test = TestRunner.make_test_runner_in(os.path.dirname(__file__))
except ImportError:
def test(*args, **kwargs):
raise ImportError("astropy is needed to run the tests")
# Actual package imports here:
# Note this if statement is only here to allow chiantipy to be imported before
# it's compiled.
if not _ASTROPY_SETUP_:
    ## For ChiantiPy
    from . import version
__version__ = version.__version__
__version_info__ = version.__version_info__
|
tgrassiREPO_NAMEprizmoPATH_START.@prizmo_extracted@prizmo-main@src_py@ChiantiPy@__init__kpd.py@.PATH_END.py
|
{
"filename": "laguerre.py",
"repo_name": "numpy/numpy",
"repo_path": "numpy_extracted/numpy-main/numpy/polynomial/laguerre.py",
"type": "Python"
}
|
"""
==================================================
Laguerre Series (:mod:`numpy.polynomial.laguerre`)
==================================================
This module provides a number of objects (mostly functions) useful for
dealing with Laguerre series, including a `Laguerre` class that
encapsulates the usual arithmetic operations. (General information
on how this module represents and works with such polynomials is in the
docstring for its "parent" sub-package, `numpy.polynomial`).
Classes
-------
.. autosummary::
:toctree: generated/
Laguerre
Constants
---------
.. autosummary::
:toctree: generated/
lagdomain
lagzero
lagone
lagx
Arithmetic
----------
.. autosummary::
:toctree: generated/
lagadd
lagsub
lagmulx
lagmul
lagdiv
lagpow
lagval
lagval2d
lagval3d
laggrid2d
laggrid3d
Calculus
--------
.. autosummary::
:toctree: generated/
lagder
lagint
Misc Functions
--------------
.. autosummary::
:toctree: generated/
lagfromroots
lagroots
lagvander
lagvander2d
lagvander3d
laggauss
lagweight
lagcompanion
lagfit
lagtrim
lagline
lag2poly
poly2lag
See also
--------
`numpy.polynomial`
"""
import numpy as np
import numpy.linalg as la
from numpy.lib.array_utils import normalize_axis_index
from . import polyutils as pu
from ._polybase import ABCPolyBase
__all__ = [
'lagzero', 'lagone', 'lagx', 'lagdomain', 'lagline', 'lagadd',
'lagsub', 'lagmulx', 'lagmul', 'lagdiv', 'lagpow', 'lagval', 'lagder',
'lagint', 'lag2poly', 'poly2lag', 'lagfromroots', 'lagvander',
'lagfit', 'lagtrim', 'lagroots', 'Laguerre', 'lagval2d', 'lagval3d',
'laggrid2d', 'laggrid3d', 'lagvander2d', 'lagvander3d', 'lagcompanion',
'laggauss', 'lagweight']
lagtrim = pu.trimcoef
def poly2lag(pol):
"""
poly2lag(pol)
Convert a polynomial to a Laguerre series.
Convert an array representing the coefficients of a polynomial (relative
to the "standard" basis) ordered from lowest degree to highest, to an
array of the coefficients of the equivalent Laguerre series, ordered
from lowest to highest degree.
Parameters
----------
pol : array_like
1-D array containing the polynomial coefficients
Returns
-------
c : ndarray
1-D array containing the coefficients of the equivalent Laguerre
series.
See Also
--------
lag2poly
Notes
-----
The easy way to do conversions between polynomial basis sets
is to use the convert method of a class instance.
Examples
--------
>>> import numpy as np
>>> from numpy.polynomial.laguerre import poly2lag
>>> poly2lag(np.arange(4))
array([ 23., -63., 58., -18.])
"""
[pol] = pu.as_series([pol])
res = 0
for p in pol[::-1]:
res = lagadd(lagmulx(res), p)
return res
def lag2poly(c):
"""
Convert a Laguerre series to a polynomial.
Convert an array representing the coefficients of a Laguerre series,
ordered from lowest degree to highest, to an array of the coefficients
of the equivalent polynomial (relative to the "standard" basis) ordered
from lowest to highest degree.
Parameters
----------
c : array_like
1-D array containing the Laguerre series coefficients, ordered
from lowest order term to highest.
Returns
-------
pol : ndarray
1-D array containing the coefficients of the equivalent polynomial
(relative to the "standard" basis) ordered from lowest order term
to highest.
See Also
--------
poly2lag
Notes
-----
The easy way to do conversions between polynomial basis sets
is to use the convert method of a class instance.
Examples
--------
>>> from numpy.polynomial.laguerre import lag2poly
>>> lag2poly([ 23., -63., 58., -18.])
array([0., 1., 2., 3.])
"""
from .polynomial import polyadd, polysub, polymulx
[c] = pu.as_series([c])
n = len(c)
if n == 1:
return c
else:
c0 = c[-2]
c1 = c[-1]
# i is the current degree of c1
for i in range(n - 1, 1, -1):
tmp = c0
c0 = polysub(c[i - 2], (c1*(i - 1))/i)
c1 = polyadd(tmp, polysub((2*i - 1)*c1, polymulx(c1))/i)
return polyadd(c0, polysub(c1, polymulx(c1)))
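`poly2lag` and `lag2poly` are inverse changes of basis, so a round trip should reproduce the input coefficients exactly (up to floating point), as a quick check with numpy itself confirms:

```python
import numpy as np
from numpy.polynomial.laguerre import lag2poly, poly2lag

# Converting a power-basis polynomial to Laguerre coefficients and back
# recovers the original coefficient array.
pol = np.array([0.0, 1.0, 2.0, 3.0])
roundtrip = lag2poly(poly2lag(pol))
print(np.allclose(roundtrip, pol))  # → True
```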
#
# These constant arrays are of integer type so as to be compatible
# with the widest range of other types, such as Decimal.
#
# Laguerre
lagdomain = np.array([0., 1.])
# Laguerre coefficients representing zero.
lagzero = np.array([0])
# Laguerre coefficients representing one.
lagone = np.array([1])
# Laguerre coefficients representing the identity x.
lagx = np.array([1, -1])
def lagline(off, scl):
"""
Laguerre series whose graph is a straight line.
Parameters
----------
off, scl : scalars
The specified line is given by ``off + scl*x``.
Returns
-------
y : ndarray
This module's representation of the Laguerre series for
``off + scl*x``.
See Also
--------
numpy.polynomial.polynomial.polyline
numpy.polynomial.chebyshev.chebline
numpy.polynomial.legendre.legline
numpy.polynomial.hermite.hermline
numpy.polynomial.hermite_e.hermeline
Examples
--------
>>> from numpy.polynomial.laguerre import lagline, lagval
>>> lagval(0,lagline(3, 2))
3.0
>>> lagval(1,lagline(3, 2))
5.0
"""
if scl != 0:
return np.array([off + scl, -scl])
else:
return np.array([off])
def lagfromroots(roots):
"""
Generate a Laguerre series with given roots.
The function returns the coefficients of the polynomial
.. math:: p(x) = (x - r_0) * (x - r_1) * ... * (x - r_n),
in Laguerre form, where the :math:`r_n` are the roots specified in `roots`.
If a zero has multiplicity n, then it must appear in `roots` n times.
For instance, if 2 is a root of multiplicity three and 3 is a root of
multiplicity 2, then `roots` looks something like [2, 2, 2, 3, 3]. The
roots can appear in any order.
If the returned coefficients are `c`, then
.. math:: p(x) = c_0 + c_1 * L_1(x) + ... + c_n * L_n(x)
The coefficient of the last term is not generally 1 for monic
polynomials in Laguerre form.
Parameters
----------
roots : array_like
Sequence containing the roots.
Returns
-------
out : ndarray
1-D array of coefficients. If all roots are real then `out` is a
real array, if some of the roots are complex, then `out` is complex
even if all the coefficients in the result are real (see Examples
below).
See Also
--------
numpy.polynomial.polynomial.polyfromroots
numpy.polynomial.legendre.legfromroots
numpy.polynomial.chebyshev.chebfromroots
numpy.polynomial.hermite.hermfromroots
numpy.polynomial.hermite_e.hermefromroots
Examples
--------
>>> from numpy.polynomial.laguerre import lagfromroots, lagval
>>> coef = lagfromroots((-1, 0, 1))
>>> lagval((-1, 0, 1), coef)
array([0., 0., 0.])
>>> coef = lagfromroots((-1j, 1j))
>>> lagval((-1j, 1j), coef)
array([0.+0.j, 0.+0.j])
"""
return pu._fromroots(lagline, lagmul, roots)
def lagadd(c1, c2):
"""
Add one Laguerre series to another.
Returns the sum of two Laguerre series `c1` + `c2`. The arguments
are sequences of coefficients ordered from lowest order term to
highest, i.e., [1,2,3] represents the series ``P_0 + 2*P_1 + 3*P_2``.
Parameters
----------
c1, c2 : array_like
1-D arrays of Laguerre series coefficients ordered from low to
high.
Returns
-------
out : ndarray
Array representing the Laguerre series of their sum.
See Also
--------
lagsub, lagmulx, lagmul, lagdiv, lagpow
Notes
-----
Unlike multiplication, division, etc., the sum of two Laguerre series
is a Laguerre series (without having to "reproject" the result onto
the basis set) so addition, just like that of "standard" polynomials,
is simply "component-wise."
Examples
--------
>>> from numpy.polynomial.laguerre import lagadd
>>> lagadd([1, 2, 3], [1, 2, 3, 4])
array([2., 4., 6., 4.])
"""
return pu._add(c1, c2)
def lagsub(c1, c2):
"""
Subtract one Laguerre series from another.
Returns the difference of two Laguerre series `c1` - `c2`. The
sequences of coefficients are from lowest order term to highest, i.e.,
[1,2,3] represents the series ``P_0 + 2*P_1 + 3*P_2``.
Parameters
----------
c1, c2 : array_like
1-D arrays of Laguerre series coefficients ordered from low to
high.
Returns
-------
out : ndarray
Of Laguerre series coefficients representing their difference.
See Also
--------
lagadd, lagmulx, lagmul, lagdiv, lagpow
Notes
-----
Unlike multiplication, division, etc., the difference of two Laguerre
series is a Laguerre series (without having to "reproject" the result
onto the basis set) so subtraction, just like that of "standard"
polynomials, is simply "component-wise."
Examples
--------
>>> from numpy.polynomial.laguerre import lagsub
>>> lagsub([1, 2, 3, 4], [1, 2, 3])
array([0., 0., 0., 4.])
"""
return pu._sub(c1, c2)
def lagmulx(c):
"""Multiply a Laguerre series by x.
Multiply the Laguerre series `c` by x, where x is the independent
variable.
Parameters
----------
c : array_like
1-D array of Laguerre series coefficients ordered from low to
high.
Returns
-------
out : ndarray
Array representing the result of the multiplication.
See Also
--------
lagadd, lagsub, lagmul, lagdiv, lagpow
Notes
-----
The multiplication uses the recursion relationship for Laguerre
polynomials in the form
.. math::
xP_i(x) = (-(i + 1)*P_{i + 1}(x) + (2i + 1)P_{i}(x) - iP_{i - 1}(x))
Examples
--------
>>> from numpy.polynomial.laguerre import lagmulx
>>> lagmulx([1, 2, 3])
array([-1., -1., 11., -9.])
"""
# c is a trimmed copy
[c] = pu.as_series([c])
# The zero series needs special treatment
if len(c) == 1 and c[0] == 0:
return c
prd = np.empty(len(c) + 1, dtype=c.dtype)
prd[0] = c[0]
prd[1] = -c[0]
for i in range(1, len(c)):
prd[i + 1] = -c[i]*(i + 1)
prd[i] += c[i]*(2*i + 1)
prd[i - 1] -= c[i]*i
return prd
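The recursion used by `lagmulx` can be cross-checked (an illustrative sketch, not library code) by converting both sides to the power basis, where multiplication by x is straightforward.

```python
import numpy as np
from numpy.polynomial.laguerre import lagmulx, lag2poly
from numpy.polynomial import polynomial as P

c = np.array([1., 2., 3.])
# Multiply by x in the Laguerre basis, then convert to the power basis ...
lhs = lag2poly(lagmulx(c))
# ... and compare with multiplying by x directly in the power basis.
rhs = P.polymulx(lag2poly(c))
assert np.allclose(lhs, rhs)
```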
def lagmul(c1, c2):
"""
Multiply one Laguerre series by another.
Returns the product of two Laguerre series `c1` * `c2`. The arguments
are sequences of coefficients, from lowest order "term" to highest,
e.g., [1,2,3] represents the series ``P_0 + 2*P_1 + 3*P_2``.
Parameters
----------
c1, c2 : array_like
1-D arrays of Laguerre series coefficients ordered from low to
high.
Returns
-------
out : ndarray
Of Laguerre series coefficients representing their product.
See Also
--------
lagadd, lagsub, lagmulx, lagdiv, lagpow
Notes
-----
In general, the (polynomial) product of two C-series results in terms
that are not in the Laguerre polynomial basis set. Thus, to express
the product as a Laguerre series, it is necessary to "reproject" the
product onto said basis set, which may produce "unintuitive" (but
correct) results; see Examples section below.
Examples
--------
>>> from numpy.polynomial.laguerre import lagmul
>>> lagmul([1, 2, 3], [0, 1, 2])
array([ 8., -13., 38., -51., 36.])
"""
# s1, s2 are trimmed copies
[c1, c2] = pu.as_series([c1, c2])
if len(c1) > len(c2):
c = c2
xs = c1
else:
c = c1
xs = c2
if len(c) == 1:
c0 = c[0]*xs
c1 = 0
elif len(c) == 2:
c0 = c[0]*xs
c1 = c[1]*xs
else:
nd = len(c)
c0 = c[-2]*xs
c1 = c[-1]*xs
for i in range(3, len(c) + 1):
tmp = c0
nd = nd - 1
c0 = lagsub(c[-i]*xs, (c1*(nd - 1))/nd)
c1 = lagadd(tmp, lagsub((2*nd - 1)*c1, lagmulx(c1))/nd)
return lagadd(c0, lagsub(c1, lagmulx(c1)))
def lagdiv(c1, c2):
"""
Divide one Laguerre series by another.
Returns the quotient-with-remainder of two Laguerre series
`c1` / `c2`. The arguments are sequences of coefficients from lowest
order "term" to highest, e.g., [1,2,3] represents the series
``P_0 + 2*P_1 + 3*P_2``.
Parameters
----------
c1, c2 : array_like
1-D arrays of Laguerre series coefficients ordered from low to
high.
Returns
-------
[quo, rem] : ndarrays
Of Laguerre series coefficients representing the quotient and
remainder.
See Also
--------
lagadd, lagsub, lagmulx, lagmul, lagpow
Notes
-----
In general, the (polynomial) division of one Laguerre series by another
results in quotient and remainder terms that are not in the Laguerre
polynomial basis set. Thus, to express these results as a Laguerre
series, it is necessary to "reproject" the results onto the Laguerre
basis set, which may produce "unintuitive" (but correct) results; see
Examples section below.
Examples
--------
>>> from numpy.polynomial.laguerre import lagdiv
>>> lagdiv([ 8., -13., 38., -51., 36.], [0, 1, 2])
(array([1., 2., 3.]), array([0.]))
>>> lagdiv([ 9., -12., 38., -51., 36.], [0, 1, 2])
(array([1., 2., 3.]), array([1., 1.]))
"""
return pu._div(lagmul, c1, c2)
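The quotient and remainder satisfy the usual division identity ``c1 == quo*c2 + rem``, which can be verified in the Laguerre basis (a sketch, not library code):

```python
import numpy as np
from numpy.polynomial.laguerre import lagdiv, lagmul, lagadd

c1 = np.array([9., -12., 38., -51., 36.])
c2 = np.array([0., 1., 2.])
quo, rem = lagdiv(c1, c2)
# Division identity in the Laguerre basis: c1 == quo*c2 + rem.
assert np.allclose(lagadd(lagmul(quo, c2), rem), c1)
```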
def lagpow(c, pow, maxpower=16):
"""Raise a Laguerre series to a power.
Returns the Laguerre series `c` raised to the power `pow`. The
argument `c` is a sequence of coefficients ordered from low to high.
i.e., [1,2,3] is the series ``P_0 + 2*P_1 + 3*P_2.``
Parameters
----------
c : array_like
1-D array of Laguerre series coefficients ordered from low to
high.
pow : integer
Power to which the series will be raised
maxpower : integer, optional
Maximum power allowed. This is mainly to limit growth of the series
to unmanageable size. Default is 16
Returns
-------
coef : ndarray
Laguerre series of power.
See Also
--------
lagadd, lagsub, lagmulx, lagmul, lagdiv
Examples
--------
>>> from numpy.polynomial.laguerre import lagpow
>>> lagpow([1, 2, 3], 2)
array([ 14., -16., 56., -72., 54.])
"""
return pu._pow(lagmul, c, pow, maxpower)
def lagder(c, m=1, scl=1, axis=0):
"""
Differentiate a Laguerre series.
Returns the Laguerre series coefficients `c` differentiated `m` times
along `axis`. At each iteration the result is multiplied by `scl` (the
scaling factor is for use in a linear change of variable). The argument
`c` is an array of coefficients from low to high degree along each
axis, e.g., [1,2,3] represents the series ``1*L_0 + 2*L_1 + 3*L_2``
while [[1,2],[1,2]] represents ``1*L_0(x)*L_0(y) + 1*L_1(x)*L_0(y) +
2*L_0(x)*L_1(y) + 2*L_1(x)*L_1(y)`` if axis=0 is ``x`` and axis=1 is
``y``.
Parameters
----------
c : array_like
Array of Laguerre series coefficients. If `c` is multidimensional
the different axis correspond to different variables with the
degree in each axis given by the corresponding index.
m : int, optional
Number of derivatives taken, must be non-negative. (Default: 1)
scl : scalar, optional
Each differentiation is multiplied by `scl`. The end result is
multiplication by ``scl**m``. This is for use in a linear change of
variable. (Default: 1)
axis : int, optional
Axis over which the derivative is taken. (Default: 0).
Returns
-------
der : ndarray
Laguerre series of the derivative.
See Also
--------
lagint
Notes
-----
In general, the result of differentiating a Laguerre series does not
resemble the same operation on a power series. Thus the result of this
function may be "unintuitive," albeit correct; see Examples section
below.
Examples
--------
>>> from numpy.polynomial.laguerre import lagder
>>> lagder([ 1., 1., 1., -3.])
array([1., 2., 3.])
>>> lagder([ 1., 0., 0., -4., 3.], m=2)
array([1., 2., 3.])
"""
c = np.array(c, ndmin=1, copy=True)
if c.dtype.char in '?bBhHiIlLqQpP':
c = c.astype(np.double)
cnt = pu._as_int(m, "the order of derivation")
iaxis = pu._as_int(axis, "the axis")
if cnt < 0:
raise ValueError("The order of derivation must be non-negative")
iaxis = normalize_axis_index(iaxis, c.ndim)
if cnt == 0:
return c
c = np.moveaxis(c, iaxis, 0)
n = len(c)
if cnt >= n:
c = c[:1]*0
else:
for i in range(cnt):
n = n - 1
c *= scl
der = np.empty((n,) + c.shape[1:], dtype=c.dtype)
for j in range(n, 1, -1):
der[j - 1] = -c[j]
c[j - 1] += c[j]
der[0] = -c[1]
c = der
c = np.moveaxis(c, 0, iaxis)
return c
def lagint(c, m=1, k=[], lbnd=0, scl=1, axis=0):
"""
Integrate a Laguerre series.
Returns the Laguerre series coefficients `c` integrated `m` times from
`lbnd` along `axis`. At each iteration the resulting series is
**multiplied** by `scl` and an integration constant, `k`, is added.
The scaling factor is for use in a linear change of variable. ("Buyer
beware": note that, depending on what one is doing, one may want `scl`
to be the reciprocal of what one might expect; for more information,
see the Notes section below.) The argument `c` is an array of
coefficients from low to high degree along each axis, e.g., [1,2,3]
represents the series ``L_0 + 2*L_1 + 3*L_2`` while [[1,2],[1,2]]
represents ``1*L_0(x)*L_0(y) + 1*L_1(x)*L_0(y) + 2*L_0(x)*L_1(y) +
2*L_1(x)*L_1(y)`` if axis=0 is ``x`` and axis=1 is ``y``.
Parameters
----------
c : array_like
Array of Laguerre series coefficients. If `c` is multidimensional
the different axis correspond to different variables with the
degree in each axis given by the corresponding index.
m : int, optional
Order of integration, must be non-negative. (Default: 1)
k : {[], list, scalar}, optional
Integration constant(s). The value of the first integral at
``lbnd`` is the first value in the list, the value of the second
integral at ``lbnd`` is the second value, etc. If ``k == []`` (the
default), all constants are set to zero. If ``m == 1``, a single
scalar can be given instead of a list.
lbnd : scalar, optional
The lower bound of the integral. (Default: 0)
scl : scalar, optional
Following each integration the result is *multiplied* by `scl`
before the integration constant is added. (Default: 1)
axis : int, optional
Axis over which the integral is taken. (Default: 0).
Returns
-------
S : ndarray
Laguerre series coefficients of the integral.
Raises
------
ValueError
If ``m < 0``, ``len(k) > m``, ``np.ndim(lbnd) != 0``, or
``np.ndim(scl) != 0``.
See Also
--------
lagder
Notes
-----
Note that the result of each integration is *multiplied* by `scl`.
Why is this important to note? Say one is making a linear change of
variable :math:`u = ax + b` in an integral relative to `x`. Then
:math:`dx = du/a`, so one will need to set `scl` equal to
:math:`1/a` - perhaps not what one would have first thought.
Also note that, in general, the result of integrating a C-series needs
to be "reprojected" onto the C-series basis set. Thus, typically,
the result of this function is "unintuitive," albeit correct; see
Examples section below.
Examples
--------
>>> from numpy.polynomial.laguerre import lagint
>>> lagint([1,2,3])
array([ 1., 1., 1., -3.])
>>> lagint([1,2,3], m=2)
array([ 1., 0., 0., -4., 3.])
>>> lagint([1,2,3], k=1)
array([ 2., 1., 1., -3.])
>>> lagint([1,2,3], lbnd=-1)
array([11.5, 1. , 1. , -3. ])
>>> lagint([1,2], m=2, k=[1,2], lbnd=-1)
array([ 11.16666667, -5. , -3. , 2. ]) # may vary
"""
c = np.array(c, ndmin=1, copy=True)
if c.dtype.char in '?bBhHiIlLqQpP':
c = c.astype(np.double)
if not np.iterable(k):
k = [k]
cnt = pu._as_int(m, "the order of integration")
iaxis = pu._as_int(axis, "the axis")
if cnt < 0:
raise ValueError("The order of integration must be non-negative")
if len(k) > cnt:
raise ValueError("Too many integration constants")
if np.ndim(lbnd) != 0:
raise ValueError("lbnd must be a scalar.")
if np.ndim(scl) != 0:
raise ValueError("scl must be a scalar.")
iaxis = normalize_axis_index(iaxis, c.ndim)
if cnt == 0:
return c
c = np.moveaxis(c, iaxis, 0)
k = list(k) + [0]*(cnt - len(k))
for i in range(cnt):
n = len(c)
c *= scl
if n == 1 and np.all(c[0] == 0):
c[0] += k[i]
else:
tmp = np.empty((n + 1,) + c.shape[1:], dtype=c.dtype)
tmp[0] = c[0]
tmp[1] = -c[0]
for j in range(1, n):
tmp[j] += c[j]
tmp[j + 1] = -c[j]
tmp[0] += k[i] - lagval(lbnd, tmp)
c = tmp
c = np.moveaxis(c, 0, iaxis)
return c
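Since `lagder` and `lagint` are inverse operations (up to the integration constant, which only shifts the constant term), differentiating the integral recovers the original coefficients. A quick check, not part of the library:

```python
import numpy as np
from numpy.polynomial.laguerre import lagint, lagder

c = np.array([1., 2., 3.])
# Differentiating the integral recovers the original coefficients;
# the integration constant k only affects the constant term.
assert np.allclose(lagder(lagint(c, m=1, k=5)), c)
```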
def lagval(x, c, tensor=True):
"""
Evaluate a Laguerre series at points x.
If `c` is of length ``n + 1``, this function returns the value:
.. math:: p(x) = c_0 * L_0(x) + c_1 * L_1(x) + ... + c_n * L_n(x)
The parameter `x` is converted to an array only if it is a tuple or a
list, otherwise it is treated as a scalar. In either case, either `x`
or its elements must support multiplication and addition both with
themselves and with the elements of `c`.
If `c` is a 1-D array, then ``p(x)`` will have the same shape as `x`. If
`c` is multidimensional, then the shape of the result depends on the
value of `tensor`. If `tensor` is true the shape will be c.shape[1:] +
x.shape. If `tensor` is false the shape will be c.shape[1:]. Note that
scalars have shape ().
Trailing zeros in the coefficients will be used in the evaluation, so
they should be avoided if efficiency is a concern.
Parameters
----------
x : array_like, compatible object
If `x` is a list or tuple, it is converted to an ndarray, otherwise
it is left unchanged and treated as a scalar. In either case, `x`
or its elements must support addition and multiplication with
themselves and with the elements of `c`.
c : array_like
Array of coefficients ordered so that the coefficients for terms of
degree n are contained in c[n]. If `c` is multidimensional the
remaining indices enumerate multiple polynomials. In the two
dimensional case the coefficients may be thought of as stored in
the columns of `c`.
tensor : boolean, optional
If True, the shape of the coefficient array is extended with ones
on the right, one for each dimension of `x`. Scalars have dimension 0
for this action. The result is that every column of coefficients in
`c` is evaluated for every element of `x`. If False, `x` is broadcast
over the columns of `c` for the evaluation. This keyword is useful
when `c` is multidimensional. The default value is True.
Returns
-------
values : ndarray, algebra_like
The shape of the return value is described above.
See Also
--------
lagval2d, laggrid2d, lagval3d, laggrid3d
Notes
-----
The evaluation uses Clenshaw recursion, aka synthetic division.
Examples
--------
>>> from numpy.polynomial.laguerre import lagval
>>> coef = [1, 2, 3]
>>> lagval(1, coef)
-0.5
>>> lagval([[1, 2],[3, 4]], coef)
array([[-0.5, -4. ],
[-4.5, -2. ]])
"""
c = np.array(c, ndmin=1, copy=None)
if c.dtype.char in '?bBhHiIlLqQpP':
c = c.astype(np.double)
if isinstance(x, (tuple, list)):
x = np.asarray(x)
if isinstance(x, np.ndarray) and tensor:
c = c.reshape(c.shape + (1,)*x.ndim)
if len(c) == 1:
c0 = c[0]
c1 = 0
elif len(c) == 2:
c0 = c[0]
c1 = c[1]
else:
nd = len(c)
c0 = c[-2]
c1 = c[-1]
for i in range(3, len(c) + 1):
tmp = c0
nd = nd - 1
c0 = c[-i] - (c1*(nd - 1))/nd
c1 = tmp + (c1*((2*nd - 1) - x))/nd
return c0 + c1*(1 - x)
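The Clenshaw recursion can be cross-checked (a sketch, not library code) against Horner evaluation of the equivalent power series obtained via `lag2poly`:

```python
import numpy as np
from numpy.polynomial.laguerre import lagval, lag2poly
from numpy.polynomial.polynomial import polyval

coef = [1., 2., 3.]
x = np.linspace(0., 5., 7)
# Clenshaw recursion in the Laguerre basis ...
y_clenshaw = lagval(x, coef)
# ... versus Horner evaluation of the equivalent power series.
y_power = polyval(x, lag2poly(coef))
assert np.allclose(y_clenshaw, y_power)
```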
def lagval2d(x, y, c):
"""
Evaluate a 2-D Laguerre series at points (x, y).
This function returns the values:
.. math:: p(x,y) = \\sum_{i,j} c_{i,j} * L_i(x) * L_j(y)
The parameters `x` and `y` are converted to arrays only if they are
tuples or lists, otherwise they are treated as scalars and they
must have the same shape after conversion. In either case, either `x`
and `y` or their elements must support multiplication and addition both
with themselves and with the elements of `c`.
If `c` is a 1-D array a one is implicitly appended to its shape to make
it 2-D. The shape of the result will be c.shape[2:] + x.shape.
Parameters
----------
x, y : array_like, compatible objects
The two dimensional series is evaluated at the points ``(x, y)``,
where `x` and `y` must have the same shape. If `x` or `y` is a list
or tuple, it is first converted to an ndarray, otherwise it is left
unchanged and if it isn't an ndarray it is treated as a scalar.
c : array_like
Array of coefficients ordered so that the coefficient of the term
of multi-degree i,j is contained in ``c[i,j]``. If `c` has
dimension greater than two the remaining indices enumerate multiple
sets of coefficients.
Returns
-------
values : ndarray, compatible object
The values of the two dimensional polynomial at points formed with
pairs of corresponding values from `x` and `y`.
See Also
--------
lagval, laggrid2d, lagval3d, laggrid3d
Examples
--------
>>> from numpy.polynomial.laguerre import lagval2d
>>> c = [[1, 2],[3, 4]]
>>> lagval2d(1, 1, c)
1.0
"""
return pu._valnd(lagval, c, x, y)
def laggrid2d(x, y, c):
"""
Evaluate a 2-D Laguerre series on the Cartesian product of x and y.
This function returns the values:
.. math:: p(a,b) = \\sum_{i,j} c_{i,j} * L_i(a) * L_j(b)
where the points ``(a, b)`` consist of all pairs formed by taking
`a` from `x` and `b` from `y`. The resulting points form a grid with
`x` in the first dimension and `y` in the second.
The parameters `x` and `y` are converted to arrays only if they are
tuples or lists, otherwise they are treated as scalars. In either
case, either `x` and `y` or their elements must support multiplication
and addition both with themselves and with the elements of `c`.
If `c` has fewer than two dimensions, ones are implicitly appended to
its shape to make it 2-D. The shape of the result will be c.shape[2:] +
x.shape + y.shape.
Parameters
----------
x, y : array_like, compatible objects
The two dimensional series is evaluated at the points in the
Cartesian product of `x` and `y`. If `x` or `y` is a list or
tuple, it is first converted to an ndarray, otherwise it is left
unchanged and, if it isn't an ndarray, it is treated as a scalar.
c : array_like
Array of coefficients ordered so that the coefficient of the term of
multi-degree i,j is contained in ``c[i,j]``. If `c` has dimension
greater than two the remaining indices enumerate multiple sets of
coefficients.
Returns
-------
values : ndarray, compatible object
The values of the two dimensional Laguerre series at points in the
Cartesian product of `x` and `y`.
See Also
--------
lagval, lagval2d, lagval3d, laggrid3d
Examples
--------
>>> from numpy.polynomial.laguerre import laggrid2d
>>> c = [[1, 2], [3, 4]]
>>> laggrid2d([0, 1], [0, 1], c)
array([[10., 4.],
[ 3., 1.]])
"""
return pu._gridnd(lagval, c, x, y)
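The grid evaluation is equivalent to calling `lagval2d` on the full Cartesian product of the points, which can be checked with a meshgrid (illustrative sketch, not library code):

```python
import numpy as np
from numpy.polynomial.laguerre import laggrid2d, lagval2d

c = [[1., 2.], [3., 4.]]
x = np.array([0., 1.])
y = np.array([0., 1.])
# laggrid2d evaluates on the full Cartesian product of x and y ...
grid = laggrid2d(x, y, c)
# ... which matches lagval2d on the corresponding meshgrid points.
xx, yy = np.meshgrid(x, y, indexing='ij')
assert np.allclose(grid, lagval2d(xx, yy, c))
```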
def lagval3d(x, y, z, c):
"""
Evaluate a 3-D Laguerre series at points (x, y, z).
This function returns the values:
.. math:: p(x,y,z) = \\sum_{i,j,k} c_{i,j,k} * L_i(x) * L_j(y) * L_k(z)
The parameters `x`, `y`, and `z` are converted to arrays only if
they are tuples or lists, otherwise they are treated as scalars and
they must have the same shape after conversion. In either case, either
`x`, `y`, and `z` or their elements must support multiplication and
addition both with themselves and with the elements of `c`.
If `c` has fewer than 3 dimensions, ones are implicitly appended to its
shape to make it 3-D. The shape of the result will be c.shape[3:] +
x.shape.
Parameters
----------
x, y, z : array_like, compatible object
The three dimensional series is evaluated at the points
``(x, y, z)``, where `x`, `y`, and `z` must have the same shape. If
any of `x`, `y`, or `z` is a list or tuple, it is first converted
to an ndarray, otherwise it is left unchanged and if it isn't an
ndarray it is treated as a scalar.
c : array_like
Array of coefficients ordered so that the coefficient of the term of
multi-degree i,j,k is contained in ``c[i,j,k]``. If `c` has dimension
greater than 3 the remaining indices enumerate multiple sets of
coefficients.
Returns
-------
values : ndarray, compatible object
The values of the multidimensional polynomial on points formed with
triples of corresponding values from `x`, `y`, and `z`.
See Also
--------
lagval, lagval2d, laggrid2d, laggrid3d
Examples
--------
>>> from numpy.polynomial.laguerre import lagval3d
>>> c = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
>>> lagval3d(1, 1, 2, c)
-1.0
"""
return pu._valnd(lagval, c, x, y, z)
def laggrid3d(x, y, z, c):
"""
Evaluate a 3-D Laguerre series on the Cartesian product of x, y, and z.
This function returns the values:
.. math:: p(a,b,c) = \\sum_{i,j,k} c_{i,j,k} * L_i(a) * L_j(b) * L_k(c)
where the points ``(a, b, c)`` consist of all triples formed by taking
`a` from `x`, `b` from `y`, and `c` from `z`. The resulting points form
a grid with `x` in the first dimension, `y` in the second, and `z` in
the third.
The parameters `x`, `y`, and `z` are converted to arrays only if they
are tuples or lists, otherwise they are treated as scalars. In
either case, either `x`, `y`, and `z` or their elements must support
multiplication and addition both with themselves and with the elements
of `c`.
If `c` has fewer than three dimensions, ones are implicitly appended to
its shape to make it 3-D. The shape of the result will be c.shape[3:] +
x.shape + y.shape + z.shape.
Parameters
----------
x, y, z : array_like, compatible objects
The three dimensional series is evaluated at the points in the
Cartesian product of `x`, `y`, and `z`. If `x`, `y`, or `z` is a
list or tuple, it is first converted to an ndarray, otherwise it is
left unchanged and, if it isn't an ndarray, it is treated as a
scalar.
c : array_like
Array of coefficients ordered so that the coefficient of the term of
multi-degree i,j,k is contained in ``c[i,j,k]``. If `c` has dimension
greater than three the remaining indices enumerate multiple sets of
coefficients.
Returns
-------
values : ndarray, compatible object
The values of the three dimensional polynomial at points in the
Cartesian product of `x`, `y`, and `z`.
See Also
--------
lagval, lagval2d, laggrid2d, lagval3d
Examples
--------
>>> from numpy.polynomial.laguerre import laggrid3d
>>> c = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
>>> laggrid3d([0, 1], [0, 1], [2, 4], c)
array([[[ -4., -44.],
[ -2., -18.]],
[[ -2., -14.],
[ -1., -5.]]])
"""
return pu._gridnd(lagval, c, x, y, z)
def lagvander(x, deg):
"""Pseudo-Vandermonde matrix of given degree.
Returns the pseudo-Vandermonde matrix of degree `deg` and sample points
`x`. The pseudo-Vandermonde matrix is defined by
.. math:: V[..., i] = L_i(x)
where ``0 <= i <= deg``. The leading indices of `V` index the elements of
`x` and the last index is the degree of the Laguerre polynomial.
If `c` is a 1-D array of coefficients of length ``n + 1`` and `V` is the
array ``V = lagvander(x, n)``, then ``np.dot(V, c)`` and
``lagval(x, c)`` are the same up to roundoff. This equivalence is
useful both for least squares fitting and for the evaluation of a large
number of Laguerre series of the same degree and sample points.
Parameters
----------
x : array_like
Array of points. The dtype is converted to float64 or complex128
depending on whether any of the elements are complex. If `x` is
scalar it is converted to a 1-D array.
deg : int
Degree of the resulting matrix.
Returns
-------
vander : ndarray
The pseudo-Vandermonde matrix. The shape of the returned matrix is
``x.shape + (deg + 1,)``, where the last index is the degree of the
corresponding Laguerre polynomial. The dtype will be the same as
the converted `x`.
Examples
--------
>>> import numpy as np
>>> from numpy.polynomial.laguerre import lagvander
>>> x = np.array([0, 1, 2])
>>> lagvander(x, 3)
array([[ 1. , 1. , 1. , 1. ],
[ 1. , 0. , -0.5 , -0.66666667],
[ 1. , -1. , -1. , -0.33333333]])
"""
ideg = pu._as_int(deg, "deg")
if ideg < 0:
raise ValueError("deg must be non-negative")
x = np.array(x, copy=None, ndmin=1) + 0.0
dims = (ideg + 1,) + x.shape
dtyp = x.dtype
v = np.empty(dims, dtype=dtyp)
v[0] = x*0 + 1
if ideg > 0:
v[1] = 1 - x
for i in range(2, ideg + 1):
v[i] = (v[i-1]*(2*i - 1 - x) - v[i-2]*(i - 1))/i
return np.moveaxis(v, 0, -1)
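The equivalence stated in the docstring, ``np.dot(V, c)`` matching ``lagval(x, c)``, can be verified directly (a sketch, not library code):

```python
import numpy as np
from numpy.polynomial.laguerre import lagvander, lagval

x = np.array([0., 1., 2.])
c = np.array([1., 2., 3., 4.])
V = lagvander(x, 3)
# The matrix-vector product with the pseudo-Vandermonde matrix
# reproduces direct evaluation of the Laguerre series.
assert np.allclose(np.dot(V, c), lagval(x, c))
```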
def lagvander2d(x, y, deg):
"""Pseudo-Vandermonde matrix of given degrees.
Returns the pseudo-Vandermonde matrix of degrees `deg` and sample
points ``(x, y)``. The pseudo-Vandermonde matrix is defined by
.. math:: V[..., (deg[1] + 1)*i + j] = L_i(x) * L_j(y),
where ``0 <= i <= deg[0]`` and ``0 <= j <= deg[1]``. The leading indices of
`V` index the points ``(x, y)`` and the last index encodes the degrees of
the Laguerre polynomials.
If ``V = lagvander2d(x, y, [xdeg, ydeg])``, then the columns of `V`
correspond to the elements of a 2-D coefficient array `c` of shape
(xdeg + 1, ydeg + 1) in the order
.. math:: c_{00}, c_{01}, c_{02} ... , c_{10}, c_{11}, c_{12} ...
and ``np.dot(V, c.flat)`` and ``lagval2d(x, y, c)`` will be the same
up to roundoff. This equivalence is useful both for least squares
fitting and for the evaluation of a large number of 2-D Laguerre
series of the same degrees and sample points.
Parameters
----------
x, y : array_like
Arrays of point coordinates, all of the same shape. The dtypes
will be converted to either float64 or complex128 depending on
whether any of the elements are complex. Scalars are converted to
1-D arrays.
deg : list of ints
List of maximum degrees of the form [x_deg, y_deg].
Returns
-------
vander2d : ndarray
The shape of the returned matrix is ``x.shape + (order,)``, where
:math:`order = (deg[0]+1)*(deg[1]+1)`. The dtype will be the same
as the converted `x` and `y`.
See Also
--------
lagvander, lagvander3d, lagval2d, lagval3d
Examples
--------
>>> import numpy as np
>>> from numpy.polynomial.laguerre import lagvander2d
>>> x = np.array([0])
>>> y = np.array([2])
>>> lagvander2d(x, y, [2, 1])
array([[ 1., -1., 1., -1., 1., -1.]])
"""
return pu._vander_nd_flat((lagvander, lagvander), (x, y), deg)
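The column ordering of the 2-D pseudo-Vandermonde matrix matches ``c.flat``, so ``V @ c.ravel()`` reproduces `lagval2d` (illustrative sketch, not library code):

```python
import numpy as np
from numpy.polynomial.laguerre import lagvander2d, lagval2d

x = np.array([0., 1., 2.])
y = np.array([0.5, 1.5, 2.5])
c = np.arange(6.).reshape(2, 3)   # coefficient array for degrees [1, 2]
V = lagvander2d(x, y, [1, 2])
# Columns are ordered as c00, c01, c02, c10, c11, c12 -- the same
# order as c.flat -- so the matrix product matches direct evaluation.
assert np.allclose(V @ c.ravel(), lagval2d(x, y, c))
```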
def lagvander3d(x, y, z, deg):
"""Pseudo-Vandermonde matrix of given degrees.
Returns the pseudo-Vandermonde matrix of degrees `deg` and sample
points ``(x, y, z)``. If `l`, `m`, `n` are the given degrees in `x`, `y`, `z`,
then the pseudo-Vandermonde matrix is defined by
.. math:: V[..., (m+1)(n+1)i + (n+1)j + k] = L_i(x)*L_j(y)*L_k(z),
where ``0 <= i <= l``, ``0 <= j <= m``, and ``0 <= k <= n``. The leading
indices of `V` index the points ``(x, y, z)`` and the last index encodes
the degrees of the Laguerre polynomials.
If ``V = lagvander3d(x, y, z, [xdeg, ydeg, zdeg])``, then the columns
of `V` correspond to the elements of a 3-D coefficient array `c` of
shape (xdeg + 1, ydeg + 1, zdeg + 1) in the order
.. math:: c_{000}, c_{001}, c_{002},... , c_{010}, c_{011}, c_{012},...
and ``np.dot(V, c.flat)`` and ``lagval3d(x, y, z, c)`` will be the
same up to roundoff. This equivalence is useful both for least squares
fitting and for the evaluation of a large number of 3-D Laguerre
series of the same degrees and sample points.
Parameters
----------
x, y, z : array_like
Arrays of point coordinates, all of the same shape. The dtypes will
be converted to either float64 or complex128 depending on whether
any of the elements are complex. Scalars are converted to 1-D
arrays.
deg : list of ints
List of maximum degrees of the form [x_deg, y_deg, z_deg].
Returns
-------
vander3d : ndarray
The shape of the returned matrix is ``x.shape + (order,)``, where
:math:`order = (deg[0]+1)*(deg[1]+1)*(deg[2]+1)`. The dtype will
be the same as the converted `x`, `y`, and `z`.
See Also
--------
lagvander, lagvander2d, lagval2d, lagval3d
Examples
--------
>>> import numpy as np
>>> from numpy.polynomial.laguerre import lagvander3d
>>> x = np.array([0])
>>> y = np.array([2])
>>> z = np.array([0])
>>> lagvander3d(x, y, z, [2, 1, 3])
array([[ 1., 1., 1., 1., -1., -1., -1., -1., 1., 1., 1., 1., -1.,
-1., -1., -1., 1., 1., 1., 1., -1., -1., -1., -1.]])
"""
return pu._vander_nd_flat((lagvander, lagvander, lagvander), (x, y, z), deg)
def lagfit(x, y, deg, rcond=None, full=False, w=None):
"""
Least squares fit of Laguerre series to data.
Return the coefficients of a Laguerre series of degree `deg` that is the
least squares fit to the data values `y` given at points `x`. If `y` is
1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple
fits are done, one for each column of `y`, and the resulting
coefficients are stored in the corresponding columns of a 2-D return.
The fitted polynomial(s) are in the form
.. math:: p(x) = c_0 + c_1 * L_1(x) + ... + c_n * L_n(x),
where ``n`` is `deg`.
Parameters
----------
x : array_like, shape (M,)
x-coordinates of the M sample points ``(x[i], y[i])``.
y : array_like, shape (M,) or (M, K)
y-coordinates of the sample points. Several data sets of sample
points sharing the same x-coordinates can be fitted at once by
passing in a 2D-array that contains one dataset per column.
deg : int or 1-D array_like
Degree(s) of the fitting polynomials. If `deg` is a single integer
all terms up to and including the `deg`'th term are included in the
fit. For NumPy versions >= 1.11.0 a list of integers specifying the
degrees of the terms to include may be used instead.
rcond : float, optional
Relative condition number of the fit. Singular values smaller than
this relative to the largest singular value will be ignored. The
default value is len(x)*eps, where eps is the relative precision of
the float type, about 2e-16 in most cases.
full : bool, optional
Switch determining nature of return value. When it is False (the
default) just the coefficients are returned, when True diagnostic
information from the singular value decomposition is also returned.
w : array_like, shape (`M`,), optional
Weights. If not None, the weight ``w[i]`` applies to the unsquared
residual ``y[i] - y_hat[i]`` at ``x[i]``. Ideally the weights are
chosen so that the errors of the products ``w[i]*y[i]`` all have the
same variance. When using inverse-variance weighting, use
``w[i] = 1/sigma(y[i])``. The default value is None.
Returns
-------
coef : ndarray, shape (deg + 1,) or (deg + 1, K)
Laguerre coefficients ordered from low to high. If `y` was 2-D,
the coefficients for the data in column *k* of `y` are in column
*k*.
[residuals, rank, singular_values, rcond] : list
These values are only returned if ``full == True``
- residuals -- sum of squared residuals of the least squares fit
- rank -- the numerical rank of the scaled Vandermonde matrix
- singular_values -- singular values of the scaled Vandermonde matrix
- rcond -- value of `rcond`.
For more details, see `numpy.linalg.lstsq`.
Warns
-----
RankWarning
The rank of the coefficient matrix in the least-squares fit is
deficient. The warning is only raised if ``full == False``. The
warnings can be turned off by
>>> import warnings
>>> warnings.simplefilter('ignore', np.exceptions.RankWarning)
See Also
--------
numpy.polynomial.polynomial.polyfit
numpy.polynomial.legendre.legfit
numpy.polynomial.chebyshev.chebfit
numpy.polynomial.hermite.hermfit
numpy.polynomial.hermite_e.hermefit
lagval : Evaluates a Laguerre series.
lagvander : pseudo Vandermonde matrix of Laguerre series.
lagweight : Laguerre weight function.
numpy.linalg.lstsq : Computes a least-squares fit from the matrix.
scipy.interpolate.UnivariateSpline : Computes spline fits.
Notes
-----
The solution is the coefficients of the Laguerre series ``p`` that
minimizes the sum of the weighted squared errors
.. math:: E = \\sum_j w_j^2 * |y_j - p(x_j)|^2,
where the :math:`w_j` are the weights. This problem is solved by
setting up as the (typically) overdetermined matrix equation
.. math:: V(x) * c = w * y,
where ``V`` is the weighted pseudo Vandermonde matrix of `x`, ``c`` are the
coefficients to be solved for, `w` are the weights, and `y` are the
observed values. This equation is then solved using the singular value
decomposition of ``V``.
If some of the singular values of `V` are so small that they are
neglected, then a `~exceptions.RankWarning` will be issued. This means that
the coefficient values may be poorly determined. Using a lower order fit
will usually get rid of the warning. The `rcond` parameter can also be
set to a value smaller than its default, but the resulting fit may be
spurious and have large contributions from roundoff error.
Fits using Laguerre series are probably most useful when the data can
be approximated by ``sqrt(w(x)) * p(x)``, where ``w(x)`` is the Laguerre
weight. In that case the weight ``sqrt(w(x[i]))`` should be used
together with data values ``y[i]/sqrt(w(x[i]))``. The weight function is
available as `lagweight`.
References
----------
.. [1] Wikipedia, "Curve fitting",
https://en.wikipedia.org/wiki/Curve_fitting
Examples
--------
>>> import numpy as np
>>> from numpy.polynomial.laguerre import lagfit, lagval
>>> x = np.linspace(0, 10)
>>> rng = np.random.default_rng()
>>> err = rng.normal(scale=1./10, size=len(x))
>>> y = lagval(x, [1, 2, 3]) + err
>>> lagfit(x, y, 2)
array([1.00578369, 1.99417356, 2.99827656]) # may vary
"""
return pu._fit(lagvander, x, y, deg, rcond, full, w)
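As a quick illustration of the fit described above, sampling an exact degree-2 Laguerre series (noise-free data) lets `lagfit` recover the coefficients to machine precision:

```python
import numpy as np
from numpy.polynomial import laguerre as L

# Sample an exact degree-2 Laguerre series on [0, 8] and fit it back
x = np.linspace(0, 8, 50)
y = L.lagval(x, [2.0, -1.0, 0.5])
coef = L.lagfit(x, y, 2)
# coef is approximately [2.0, -1.0, 0.5]
```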
def lagcompanion(c):
"""
Return the companion matrix of c.
The usual companion matrix of the Laguerre polynomials is already
symmetric when `c` is a basis Laguerre polynomial, so no scaling is
applied.
Parameters
----------
c : array_like
1-D array of Laguerre series coefficients ordered from low to high
degree.
Returns
-------
mat : ndarray
Companion matrix of dimensions (deg, deg).
Examples
--------
>>> from numpy.polynomial.laguerre import lagcompanion
>>> lagcompanion([1, 2, 3])
array([[ 1. , -0.33333333],
[-1. , 4.33333333]])
"""
# c is a trimmed copy
[c] = pu.as_series([c])
if len(c) < 2:
raise ValueError('Series must have maximum degree of at least 1.')
if len(c) == 2:
return np.array([[1 + c[0]/c[1]]])
n = len(c) - 1
mat = np.zeros((n, n), dtype=c.dtype)
top = mat.reshape(-1)[1::n+1]
mid = mat.reshape(-1)[0::n+1]
bot = mat.reshape(-1)[n::n+1]
top[...] = -np.arange(1, n)
mid[...] = 2.*np.arange(n) + 1.
bot[...] = top
mat[:, -1] += (c[:-1]/c[-1])*n
return mat
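A small sanity check of the companion construction: for a series built from known roots, the eigenvalues of the companion matrix recover those roots.

```python
import numpy as np
from numpy.polynomial import laguerre as L

c = L.lagfromroots([1.0, 2.0, 3.0])    # series whose roots are 1, 2, 3
m = L.lagcompanion(c)
eig = np.sort(np.linalg.eigvals(m).real)
# eig is approximately [1., 2., 3.]
```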
def lagroots(c):
"""
Compute the roots of a Laguerre series.
Return the roots (a.k.a. "zeros") of the polynomial
.. math:: p(x) = \\sum_i c[i] * L_i(x).
Parameters
----------
c : 1-D array_like
1-D array of coefficients.
Returns
-------
out : ndarray
Array of the roots of the series. If all the roots are real,
then `out` is also real, otherwise it is complex.
See Also
--------
numpy.polynomial.polynomial.polyroots
numpy.polynomial.legendre.legroots
numpy.polynomial.chebyshev.chebroots
numpy.polynomial.hermite.hermroots
numpy.polynomial.hermite_e.hermeroots
Notes
-----
The root estimates are obtained as the eigenvalues of the companion
matrix. Roots far from the origin of the complex plane may have large
errors due to the numerical instability of the series for such
values. Roots with multiplicity greater than 1 will also show larger
errors as the value of the series near such points is relatively
insensitive to errors in the roots. Isolated roots near the origin can
be improved by a few iterations of Newton's method.
The Laguerre series basis polynomials aren't powers of `x` so the
results of this function may seem unintuitive.
Examples
--------
>>> from numpy.polynomial.laguerre import lagroots, lagfromroots
>>> coef = lagfromroots([0, 1, 2])
>>> coef
array([ 2., -8., 12., -6.])
>>> lagroots(coef)
array([-4.4408921e-16, 1.0000000e+00, 2.0000000e+00])
"""
# c is a trimmed copy
[c] = pu.as_series([c])
if len(c) <= 1:
return np.array([], dtype=c.dtype)
if len(c) == 2:
return np.array([1 + c[0]/c[1]])
# rotated companion matrix reduces error
m = lagcompanion(c)[::-1,::-1]
r = la.eigvals(m)
r.sort()
return r
def laggauss(deg):
"""
Gauss-Laguerre quadrature.
Computes the sample points and weights for Gauss-Laguerre quadrature.
These sample points and weights will correctly integrate polynomials of
degree :math:`2*deg - 1` or less over the interval :math:`[0, \\infty]`
with the weight function :math:`f(x) = \\exp(-x)`.
Parameters
----------
deg : int
Number of sample points and weights. It must be >= 1.
Returns
-------
x : ndarray
1-D ndarray containing the sample points.
y : ndarray
1-D ndarray containing the weights.
Notes
-----
The results have only been tested up to degree 100; higher degrees may
be problematic. The weights are determined by using the fact that
.. math:: w_k = c / (L'_n(x_k) * L_{n-1}(x_k))
where :math:`c` is a constant independent of :math:`k` and :math:`x_k`
is the k'th root of :math:`L_n`, and then scaling the results to get
the right value when integrating 1.
Examples
--------
>>> from numpy.polynomial.laguerre import laggauss
>>> laggauss(2)
(array([0.58578644, 3.41421356]), array([0.85355339, 0.14644661]))
"""
ideg = pu._as_int(deg, "deg")
if ideg <= 0:
raise ValueError("deg must be a positive integer")
# first approximation of roots. We use the fact that the companion
# matrix is symmetric in this case in order to obtain better zeros.
c = np.array([0]*deg + [1])
m = lagcompanion(c)
x = la.eigvalsh(m)
# improve roots by one application of Newton
dy = lagval(x, c)
df = lagval(x, lagder(c))
x -= dy/df
# compute the weights. We scale the factor to avoid possible numerical
# overflow.
fm = lagval(x, c[1:])
fm /= np.abs(fm).max()
df /= np.abs(df).max()
w = 1/(fm * df)
# scale w to get the right value, 1 in this case
w /= w.sum()
return x, w
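Since a deg-point rule is exact for polynomials of degree 2*deg - 1 or less, a 3-point rule reproduces low moments of the exp(-x) weight exactly: the integrals of x**2 and x**3 over [0, inf) are Gamma(3) = 2 and Gamma(4) = 6.

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

x, w = laggauss(3)
m2 = np.sum(w * x**2)   # integral of x**2 * exp(-x), exactly 2
m3 = np.sum(w * x**3)   # integral of x**3 * exp(-x), exactly 6
```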
def lagweight(x):
"""Weight function of the Laguerre polynomials.
The weight function is :math:`\\exp(-x)` and the interval of integration
is :math:`[0, \\infty]`. The Laguerre polynomials are orthogonal, but not
normalized, with respect to this weight function.
Parameters
----------
x : array_like
Values at which the weight function will be computed.
Returns
-------
w : ndarray
The weight function at `x`.
Examples
--------
>>> from numpy.polynomial.laguerre import lagweight
>>> x = np.array([0, 1, 2])
>>> lagweight(x)
array([1. , 0.36787944, 0.13533528])
"""
w = np.exp(-x)
return w
#
# Laguerre series class
#
class Laguerre(ABCPolyBase):
"""A Laguerre series class.
The Laguerre class provides the standard Python numerical methods
'+', '-', '*', '//', '%', 'divmod', '**', and '()' as well as the
attributes and methods listed below.
Parameters
----------
coef : array_like
Laguerre coefficients in order of increasing degree, i.e,
``(1, 2, 3)`` gives ``1*L_0(x) + 2*L_1(x) + 3*L_2(x)``.
domain : (2,) array_like, optional
Domain to use. The interval ``[domain[0], domain[1]]`` is mapped
to the interval ``[window[0], window[1]]`` by shifting and scaling.
The default value is [0., 1.].
window : (2,) array_like, optional
Window, see `domain` for its use. The default value is [0., 1.].
symbol : str, optional
Symbol used to represent the independent variable in string
representations of the polynomial expression, e.g. for printing.
The symbol must be a valid Python identifier. Default value is 'x'.
.. versionadded:: 1.24
"""
# Virtual Functions
_add = staticmethod(lagadd)
_sub = staticmethod(lagsub)
_mul = staticmethod(lagmul)
_div = staticmethod(lagdiv)
_pow = staticmethod(lagpow)
_val = staticmethod(lagval)
_int = staticmethod(lagint)
_der = staticmethod(lagder)
_fit = staticmethod(lagfit)
_line = staticmethod(lagline)
_roots = staticmethod(lagroots)
_fromroots = staticmethod(lagfromroots)
# Virtual properties
domain = np.array(lagdomain)
window = np.array(lagdomain)
basis_name = 'L'
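A brief tour of the class interface defined above. Since L_n(0) = 1 for every n, the series value at 0 is just the sum of the coefficients:

```python
import numpy as np
from numpy.polynomial import Laguerre

p = Laguerre([1, 2, 3])    # 1*L_0 + 2*L_1 + 3*L_2
s = (p + p).coef           # arithmetic acts termwise on coefficients
q = p.deriv()              # derivative is again a Laguerre series
v = p(0.0)                 # L_n(0) = 1, so this is 1 + 2 + 3 = 6
```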
|
numpyREPO_NAMEnumpyPATH_START.@numpy_extracted@numpy-main@numpy@polynomial@laguerre.py@.PATH_END.py
|
{
"filename": "example_reduction.ipynb",
"repo_name": "grzeimann/LRS2Multi",
"repo_path": "LRS2Multi_extracted/LRS2Multi-main/notebooks/example_reduction.ipynb",
"type": "Jupyter Notebook"
}
|
# LRS2 advanced reductions (LRS2Multi)
This notebook is an introduction to using LRS2Multi, which operates on Panacea multi*.fits products to perform advanced sky subtraction, object detection, object extraction, cube creation, or stacking multiple observations. The details of the code can be found in https://github.com/grzeimann/LRS2Multi, and this notebook serves as a use-case demonstration.
The first cell is long: it constructs functions to find data from one or more programs and loads the necessary modules. To begin, simply execute the first cell and move on to the next step. If you run into issues, please contact Greg Zeimann (gregz@astro.as.utexas.edu).
```python
import sys
sys.path.append("..")
from lrs2multi import LRS2Multi
from lrs2object import LRS2Object
import glob
import os.path as op
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from astropy.io import fits
from astropy.table import Table
from datetime import datetime, timedelta
%matplotlib inline
def get_scifiles_from_folder(folders, object_name=None, exclude_folder=[], date=None, ndays=None,
collect_standards=False):
filenames = []
if date is not None:
def_name = 'multi*%s*orange.fits' % date
else:
def_name = 'multi*orange.fits'
if ndays is not None:
def_name = 'multi*%s*orange.fits'
for folder in folders:
if ndays is not None:
date_ = datetime(int(date[:4]), int(date[4:6]), int(date[6:]))
datel = date_ - timedelta(days=int(ndays/2))
for i in np.arange(ndays):
ndate = datel + timedelta(days=i)
daten = '%04d%02d%02d' % (ndate.year, ndate.month, ndate.day)
all_names = sorted(glob.glob(op.join(folder, def_name % daten)))
for filename in all_names:
filenames.append(filename)
else:
all_names = sorted(glob.glob(op.join(folder, def_name)))
for filename in all_names:
filenames.append(filename)
smaller_list = []
names = []
for filename in filenames:
f = fits.open(filename)
name = f[0].header['OBJECT']
names.append(name)
try:
slot = name.split('_')[-2]
except:
continue
if '_'.join(op.basename(filename).split('_')[:4]) in exclude_folder:
continue
if collect_standards:
if name.lower() in standard_names:
if slot == '056':
smaller_list.append(filename.replace('orange', 'uv'))
smaller_list.append(filename.replace('orange', 'orange'))
if slot == '066':
smaller_list.append(filename.replace('orange', 'red'))
smaller_list.append(filename.replace('orange', 'farred'))
continue
if (object_name is None):
if slot == '056':
smaller_list.append(filename.replace('orange', 'uv'))
smaller_list.append(filename.replace('orange', 'orange'))
if slot == '066':
smaller_list.append(filename.replace('orange', 'red'))
smaller_list.append(filename.replace('orange', 'farred'))
else:
if object_name.lower() in name.lower():
if slot == '056':
smaller_list.append(filename.replace('orange', 'uv'))
smaller_list.append(filename.replace('orange', 'orange'))
if slot == '066':
smaller_list.append(filename.replace('orange', 'red'))
smaller_list.append(filename.replace('orange', 'farred'))
return smaller_list, names
def get_scifiles_from_folder_from_pos(folders, object_name=None, exposure_min=None):
filenames = []
for folder in folders:
all_names = sorted(glob.glob(op.join(folder, 'multi*.fits')))
for filename in all_names:
filenames.append(filename)
smaller_list = []
for filename in filenames:
f = fits.open(filename)
if ('uv' in filename) or ('orange' in filename):
side = '056'
else:
side = '066'
name = f[0].header['OBJECT']
try:
slot = name.split('_')[-2]
except:
continue
name = 'J%s%s' % (''.join(f[0].header['QRA'].split(':')), ''.join(f[0].header['QDEC'].split(':')))
if side != slot:
continue
if exposure_min is not None:
if f[0].header['EXPTIME'] < exposure_min:
continue
if object_name is None:
smaller_list.append(filename)
else:
if object_name.lower() in name.lower():
smaller_list.append(filename)
return smaller_list
sns.set_context('talk')
sns.set_style('ticks')
plt.rcParams["font.family"] = "Times New Roman"
colors = plt.rcParams['axes.prop_cycle'].by_key()['color'] * 10
def_wave = np.arange(3650., 10500, 0.7)
```
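As a quick sanity check, the `ndays` windowing logic inside `get_scifiles_from_folder` can be exercised on its own. The helper below (the name `date_window` is ours, not part of LRS2Multi) mirrors that date arithmetic:

```python
from datetime import datetime, timedelta

def date_window(date, ndays):
    # Mirrors the ndays branch of get_scifiles_from_folder:
    # build the list of YYYYMMDD strings centered on `date`.
    d0 = datetime(int(date[:4]), int(date[4:6]), int(date[6:]))
    start = d0 - timedelta(days=int(ndays / 2))
    dates = []
    for i in range(ndays):
        d = start + timedelta(days=i)
        dates.append('%04d%02d%02d' % (d.year, d.month, d.day))
    return dates

print(date_window('20210325', 3))
```

Note that for even `ndays` the window is off-center by one day, exactly as in the original loop.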
# Example Reduction
Now that we have loaded the necessary modules, we can begin our own advanced reduction. There are comments throughout the long cell below to show an example reduction.
```python
###########################################################################
# List the programs that contain the target(s) you would like to reduce
# If there is more than one program, simply list multiple ensuring that
# the absolute path is correct. The path should be the one listed in your
# email notices of new data.
###########################################################################
folders = ['/work/03946/hetdex/maverick/LRS2/ENG21-1-000/']
###########################################################################
# This section includes several key parameters for you to set to execute
# your reductions. This includes the object name (as submitted), lists
# of observations that you would like to ignore (perhaps due to bad weather
# or coordinates), and the wavelength at which to run object detection.
# For emission line galaxies, you can include the redshift, which is used
# to mask physical lines to avoid self-subtraction in the sky subtraction
# step. If redshift does not make sense, or is unknown for your target,
# you should be safe setting it to zero. The wave window is the full extent
# in Angstroms of the detection window (detwave +/- wave_window/2.). The
# function that collapses that wavelength window for detection is "func"
# The extraction_radius is in arcseconds and is an aperture extraction for
# the target spectrum. The sky_radius defines the sky fibers (>sky_radius
# from target center). The object_radius is used in the "local" sky and is
# the masking radius for smoothing the sky background at each wavelength.
# There are two detection channels for LRS2B and R, and the detwave needs
# to be in the supported channel. When observing with both B+R and you would
# like to combine the channels, choose a wavelength between 6450-6950A.
###########################################################################
exclude_folder = ['']
objectname = 'srlrs2'
detwave = 6650.
redshift = 0. # detwave / 6562.8
wave_window = 20.
extraction_radius = 1.5
sky_radius = 3.5
object_radius = 2.0
red_detect_channel = 'red'
blue_detect_channel = 'orange'
func = np.nanmean
###########################################################################
# Here we define physical lines typically associated with
# emission line galaxies.
###########################################################################
lines = [2796., 2803., 3726.1, 3729.1, 3889., 4101.76, 4340.5,
4363.2, 4861.3, 4958.9, 5006.8, 6562.8, 6716.5, 6730.8,
6548., 6583.4]
###########################################################################
# We grab the filenames related to the target and folders provided.
# If none are found, the program exits without further ado.
###########################################################################
filenames, names = get_scifiles_from_folder(folders,
object_name=objectname,
exclude_folder=exclude_folder)
if len(filenames) == 0:
print('No files found for %s' % objectname)
sys.exit('Trying to exit gracefully')
###########################################################################
# We load the telluric models for the 16th, 50th, and 84th percentile
# empirically measured telluric absorption. These were constructed
# from HR standard stars and you can select 0, 1, or 2 in ".data[X]"
###########################################################################
telcor_fits = fits.open('lrs2_avg_telluric.fits')
telcor = telcor_fits[0].data[2]
###########################################################################
# A newly derived response correction to the standard 2019 response curves
# was constructed over a year of standard stars (May 2021-2022). I suggest
# using the response correction as listed here.
###########################################################################
new_response = fits.open('response_correction.fits')
response = new_response[0].data[1]
wsel = ((def_wave>9000) * (def_wave<10050)) + (def_wave<3800) + (def_wave>10200)
response = np.interp(def_wave, def_wave[~wsel], response[~wsel])
###########################################################################
# Here the reduction begins. We first initiate an LRS2Object class for
# our filenames and parametters.
###########################################################################
LRS2 = LRS2Object(filenames, detwave=detwave, wave_window=wave_window,
red_detect_channel=red_detect_channel,
blue_detect_channel=blue_detect_channel,
ignore_mask=False)
###########################################################################
# We then perform a simple sky subtraction. The most common edits are
# setting local=True, and pca=False. Other parameters can be inspected with
# help(LRS2.subtract_sky)
###########################################################################
LRS2.subtract_sky(func=np.nansum, local=False, pca=False, correct_ftf_from_skylines=False,
sky_radius=sky_radius, obj_radius=object_radius,
ncomp=25, bins=25, peakthresh=2., pca_iter=3)
# Set astrometry
LRS2.get_astrometry()
# Extract spectrum for each observation
LRS2.extract_spectrum(radius=extraction_radius)
# Smooth the LRS2-R resolution to match the orange and red channels together
LRS2.smooth_resolution(redkernel=1.8, bluekernel=0.1)
# Rectify all spectra to a common wavelength
LRS2.rectify(def_wave)
# Normalize the spectra using the detwave and wave_window.
# This step can be skipped if the S/N is quite low and the native
# calibration is a better option
LRS2.normalize(detwave=detwave, wave_window=wave_window, func=func)
# Calculate the S/N at the detwave to estimate the weights for a weighted
# combined spectrum
LRS2.calculate_sn()
# Combine the multiple spectra into a single spectrum using the S/N as weights
LRS2.combine_spectra()
# Correct the combined spectrum for the new response
LRS2.spec1D = LRS2.spec1D * response
# Write the combined spectrum including the telluric correction as a column
LRS2.write_combined_spectrum(telcor=telcor)
# Inspect the reductions with 2D collapsed plots
LRS2.setup_plotting()
for key in LRS2.sides.keys():
for L in LRS2.sides[key]:
if (L.channel == blue_detect_channel) or (L.channel == red_detect_channel):
L.plot_image(radius=extraction_radius, func=func, attr='skysub', quick_skysub=False,
sky_radius=sky_radius, wave_window=wave_window)
LRS2.fig.savefig('%s_image_plot.png' % objectname, dpi=150)
# Make a single combine cube. Use help(LRS2.make_cube) for more info
#LRS2.make_cube(def_wave, ran=[-7., 7., -7., 7.])
#LRS2.combine_cubes()
#LRS2.spec3D = LRS2.spec3D * response
#LRS2.spec3D = LRS2.spec3D / telcor
#LRS2.write_cube(outname=(objectname + '_combined_cube.fits'))
# 1D extracted spectrum plotting
# Adjust the five wavelength windows in "wran" to your desire
wran = [[3650, 10450], [3650, 3800], [4325, 4380], [4800, 5100], [6520, 6780]]
fig, ax = plt.subplots(5, 1, figsize=(20, 10))
ax[0].set_position([0.1, 0.55, 0.86, 0.42])
ax[1].set_position([0.1, 0.08, 0.19, 0.42])
ax[2].set_position([0.325, 0.08, 0.19, 0.42])
ax[3].set_position([0.55, 0.08, 0.19, 0.42])
ax[4].set_position([0.77, 0.08, 0.19, 0.42])
for i, wr in enumerate(wran):
wave = LRS2.spec1D.spectral_axis.value
flux = LRS2.spec1D.flux.value
error = LRS2.spec1D.uncertainty.array
wsel = (wave > wr[0]) * (wave < wr[1])
ax[i].step(wave[wsel], flux[wsel]/1e-17/telcor[wsel], color='darkorange', lw=0.5, alpha=1.0, zorder=2)
ax[i].plot(wave[wsel], wave[wsel]*0., 'k--', lw=1, zorder=2)
for key in LRS2.sides.keys():
for L in LRS2.sides[key]:
wsel = (L.spec1D.spectral_axis.value > wr[0]) * (L.spec1D.spectral_axis.value < wr[1])
ax[i].plot(L.spec1D.spectral_axis.value[wsel], L.spec1D.flux.value[wsel]/1e-17, color='grey', lw=0.5, zorder=1)
if (detwave > wr[0]) * (detwave < wr[1]):
li = np.nanpercentile(L.spec1D.flux.value[wsel]/1e-17, 0.1)
hi = np.nanpercentile(L.spec1D.flux.value[wsel]/1e-17, 99.9)
plt.plot([detwave, detwave], [li, hi], 'r-', lw=1)
plt.plot([detwave+wave_window/2., detwave+wave_window/2.], [li, hi], 'r--', lw=0.5)
plt.plot([detwave-wave_window/2., detwave-wave_window/2.], [li, hi], 'r--', lw=0.5)
f_ax10 = ax[i]
f_ax10.tick_params(axis='both', which='both', direction='in')
f_ax10.tick_params(axis='y', which='both', left=True, right=True)
f_ax10.tick_params(axis='x', which='both', bottom=True, top=True)
f_ax10.tick_params(axis='both', which='major', length=8, width=2)
f_ax10.tick_params(axis='both', which='minor', length=5, width=1)
f_ax10.minorticks_on()
ax[0].set_ylabel(r'F$_{\lambda}$ (10$^{-17}$ erg / s / cm$^2$ / $\AA$)')
ax[0].yaxis.set_label_coords(-0.05, -0.2)
ax[0].set_xlabel(r'Wavelength ($\AA$)')
ax[0].xaxis.set_label_coords(0.5, -1.2)
plt.savefig('%s_spectrum_plot.png' % objectname, dpi=150)
```
[INFO - 2022-10-17 15:39:39,331] multi_20210325_0000002_exp01_uv: srlrs2_tst_056_W with 607.25s, 40.47cm2, 0.80
[INFO - 2022-10-17 15:39:39,671] multi_20210325_0000002_exp01_orange: srlrs2_tst_056_W with 607.25s, 40.47cm2, 0.80
[INFO - 2022-10-17 15:39:39,683] multi_20210325_0000002_exp01_orange.fits Centroid: -1.28 -2.15
[INFO - 2022-10-17 15:39:39,747] No fplane file given.
[INFO - 2022-10-17 15:39:39,747] Some functions will be unavailable until an fplane is given.
[INFO - 2022-10-17 15:39:39,750] No fplane file given.
[INFO - 2022-10-17 15:39:39,751] Some functions will be unavailable until an fplane is given.
[INFO - 2022-10-17 15:39:39,763] multi_20210325_0000002_exp01_orange.fits Centroid: -1.25 -2.15
[INFO - 2022-10-17 15:39:42,171] multi_20210325_0000002_exp01_orange.fits: 1.00
[INFO - 2022-10-17 15:39:42,192] SN for multi_20210325_0000002_exp01_orange.fits: 380.77
[INFO - 2022-10-17 15:39:42,360] multi_20210325_0000002_exp01_orange.fits Centroid: -1.31 -2.14


|
grzeimannREPO_NAMELRS2MultiPATH_START.@LRS2Multi_extracted@LRS2Multi-main@notebooks@example_reduction.ipynb@.PATH_END.py
|
{
"filename": "test_basinhopping.py",
"repo_name": "lmfit/lmfit-py",
"repo_path": "lmfit-py_extracted/lmfit-py-master/tests/test_basinhopping.py",
"type": "Python"
}
|
"""Tests for the basinhopping minimization algorithm."""
import numpy as np
from numpy.testing import assert_allclose
import pytest
from scipy.optimize import basinhopping
import lmfit
def test_basinhopping_lmfit_vs_scipy():
"""Test basinhopping in lmfit versus scipy."""
# SciPy
def func(x):
return np.cos(14.5*x - 0.3) + (x+0.2) * x
minimizer_kwargs = {'method': 'L-BFGS-B'}
x0 = [1.]
ret = basinhopping(func, x0, minimizer_kwargs=minimizer_kwargs, seed=7)
# lmfit
def residual(params):
x = params['x'].value
return np.cos(14.5*x - 0.3) + (x+0.2) * x
pars = lmfit.Parameters()
pars.add_many(('x', 1.))
kws = {'minimizer_kwargs': {'method': 'L-BFGS-B'}, 'seed': 7}
mini = lmfit.Minimizer(residual, pars)
out = mini.minimize(method='basinhopping', **kws)
assert_allclose(out.residual, ret.fun)
assert_allclose(out.params['x'].value, ret.x, rtol=1e-5)
def test_basinhopping_2d_lmfit_vs_scipy():
"""Test basinhopping in lmfit versus scipy."""
# SciPy
def func2d(x):
return np.cos(14.5*x[0] - 0.3) + (x[1]+0.2) * x[1] + (x[0]+0.2) * x[0]
minimizer_kwargs = {'method': 'L-BFGS-B'}
x0 = [1.0, 1.0]
ret = basinhopping(func2d, x0, minimizer_kwargs=minimizer_kwargs, seed=7)
# lmfit
def residual_2d(params):
x0 = params['x0'].value
x1 = params['x1'].value
return np.cos(14.5*x0 - 0.3) + (x1+0.2) * x1 + (x0+0.2) * x0
pars = lmfit.Parameters()
pars.add_many(('x0', 1.), ('x1', 1.))
mini = lmfit.Minimizer(residual_2d, pars)
kws = {'minimizer_kwargs': {'method': 'L-BFGS-B'}, 'seed': 7}
out = mini.minimize(method='basinhopping', **kws)
assert_allclose(out.residual, ret.fun)
assert_allclose(out.params['x0'].value, ret.x[0], rtol=1e-5)
assert_allclose(out.params['x1'].value, ret.x[1], rtol=1e-5)
def test_basinhopping_Alpine02(minimizer_Alpine02):
"""Test basinhopping on Alpine02 function."""
global_optimum = [7.91705268, 4.81584232]
fglob = -6.12950
kws = {'minimizer_kwargs': {'method': 'L-BFGS-B'}, 'seed': 7}
out = minimizer_Alpine02.minimize(method='basinhopping', **kws)
out_x = np.array([out.params['x0'].value, out.params['x1'].value])
assert_allclose(out.residual, fglob, rtol=1e-5)
assert_allclose(min(out_x), min(global_optimum), rtol=1e-3)
assert_allclose(max(out_x), max(global_optimum), rtol=1e-3)
assert out.method == 'basinhopping'
def test_basinhopping_bounds(minimizer_Alpine02):
"""Test basinhopping algorithm with bounds."""
# change boundaries of parameters
pars_bounds = lmfit.Parameters()
pars_bounds.add_many(('x0', 1., True, 5.0, 15.0),
('x1', 1., True, 2.5, 7.5))
kws = {'minimizer_kwargs': {'method': 'L-BFGS-B'}, 'seed': 7}
out = minimizer_Alpine02.minimize(params=pars_bounds,
method='basinhopping', **kws)
assert 5.0 <= out.params['x0'].value <= 15.0
assert 2.5 <= out.params['x1'].value <= 7.5
def test_basinhopping_solver_options(minimizer_Alpine02):
"""Test basinhopping algorithm, pass incorrect options to solver."""
# use minimizer_kwargs to pass an invalid method for local solver to
# scipy.basinhopping
kws = {'minimizer_kwargs': {'method': 'unknown'}}
with pytest.raises(ValueError, match=r'Unknown solver'):
minimizer_Alpine02.minimize(method='basinhopping', **kws)
# pass an incorrect value for niter to scipy.basinhopping
kws = {'niter': 'string'}
with pytest.raises(TypeError):
minimizer_Alpine02.minimize(method='basinhopping', **kws)
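For readers without lmfit installed, the basin-hopping idea these tests exercise can be sketched with the standard library alone: random hops plus a crude local pattern search on the same 1-D objective. This is a toy illustration of the algorithm, not scipy's or lmfit's implementation.

```python
import math
import random

def func(x):
    # same 1-D objective as test_basinhopping_lmfit_vs_scipy
    return math.cos(14.5 * x - 0.3) + (x + 0.2) * x

def local_descent(x, step=0.25, tol=1e-8, max_iter=10000):
    # crude pattern search: try x +/- step, shrink step when stuck
    fx = func(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        cand = min((x - step, x + step), key=func)
        fc = func(cand)
        if fc < fx:
            x, fx = cand, fc
        else:
            step *= 0.5
    return x, fx

def basinhop(x0, niter=100, hop=0.5, seed=7):
    # random perturbation + local minimization, keep the best minimum seen
    rng = random.Random(seed)
    x, fx = local_descent(x0)
    best_x, best_f = x, fx
    for _ in range(niter):
        xt, ft = local_descent(x + rng.uniform(-hop, hop))
        if ft < fx:            # greedy accept (scipy uses a Metropolis test)
            x, fx = xt, ft
        if ft < best_f:
            best_x, best_f = xt, ft
    return best_x, best_f

x, fval = basinhop(1.0)
```

Starting from x0 = 1.0, the hops let the search escape the shallow local minima of the cosine term and settle near the global minimum around x = -0.195.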
|
lmfitREPO_NAMElmfit-pyPATH_START.@lmfit-py_extracted@lmfit-py-master@tests@test_basinhopping.py@.PATH_END.py
|
{
"filename": "style.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/Pygments/py2/pygments/style.py",
"type": "Python"
}
|
# -*- coding: utf-8 -*-
"""
pygments.style
~~~~~~~~~~~~~~
Basic style object.
:copyright: Copyright 2006-2019 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from pygments.token import Token, STANDARD_TYPES
from pygments.util import add_metaclass
# Default mapping of ansixxx to RGB colors.
_ansimap = {
# dark
'ansiblack': '000000',
'ansired': '7f0000',
'ansigreen': '007f00',
'ansiyellow': '7f7fe0',
'ansiblue': '00007f',
'ansimagenta': '7f007f',
'ansicyan': '007f7f',
'ansigray': 'e5e5e5',
# normal
'ansibrightblack': '555555',
'ansibrightred': 'ff0000',
'ansibrightgreen': '00ff00',
'ansibrightyellow': 'ffff00',
'ansibrightblue': '0000ff',
'ansibrightmagenta': 'ff00ff',
'ansibrightcyan': '00ffff',
'ansiwhite': 'ffffff',
}
# mapping of deprecated #ansixxx colors to new color names
_deprecated_ansicolors = {
# dark
'#ansiblack': 'ansiblack',
'#ansidarkred': 'ansired',
'#ansidarkgreen': 'ansigreen',
'#ansibrown': 'ansiyellow',
'#ansidarkblue': 'ansiblue',
'#ansipurple': 'ansimagenta',
'#ansiteal': 'ansicyan',
'#ansilightgray': 'ansigray',
# normal
'#ansidarkgray': 'ansibrightblack',
'#ansired': 'ansibrightred',
'#ansigreen': 'ansibrightgreen',
'#ansiyellow': 'ansibrightyellow',
'#ansiblue': 'ansibrightblue',
'#ansifuchsia': 'ansibrightmagenta',
'#ansiturquoise': 'ansibrightcyan',
'#ansiwhite': 'ansiwhite',
}
ansicolors = set(_ansimap)
class StyleMeta(type):
def __new__(mcs, name, bases, dct):
obj = type.__new__(mcs, name, bases, dct)
for token in STANDARD_TYPES:
if token not in obj.styles:
obj.styles[token] = ''
def colorformat(text):
if text in ansicolors:
return text
if text[0:1] == '#':
col = text[1:]
if len(col) == 6:
return col
elif len(col) == 3:
return col[0] * 2 + col[1] * 2 + col[2] * 2
elif text == '':
return ''
elif text.startswith('var') or text.startswith('calc'):
return text
assert False, "wrong color format %r" % text
_styles = obj._styles = {}
for ttype in obj.styles:
for token in ttype.split():
if token in _styles:
continue
ndef = _styles.get(token.parent, None)
styledefs = obj.styles.get(token, '').split()
if not ndef or token is None:
ndef = ['', 0, 0, 0, '', '', 0, 0, 0]
elif 'noinherit' in styledefs and token is not Token:
ndef = _styles[Token][:]
else:
ndef = ndef[:]
_styles[token] = ndef
for styledef in obj.styles.get(token, '').split():
if styledef == 'noinherit':
pass
elif styledef == 'bold':
ndef[1] = 1
elif styledef == 'nobold':
ndef[1] = 0
elif styledef == 'italic':
ndef[2] = 1
elif styledef == 'noitalic':
ndef[2] = 0
elif styledef == 'underline':
ndef[3] = 1
elif styledef == 'nounderline':
ndef[3] = 0
elif styledef[:3] == 'bg:':
ndef[4] = colorformat(styledef[3:])
elif styledef[:7] == 'border:':
ndef[5] = colorformat(styledef[7:])
elif styledef == 'roman':
ndef[6] = 1
elif styledef == 'sans':
ndef[7] = 1
elif styledef == 'mono':
ndef[8] = 1
else:
ndef[0] = colorformat(styledef)
return obj
def style_for_token(cls, token):
t = cls._styles[token]
ansicolor = bgansicolor = None
color = t[0]
if color in _deprecated_ansicolors:
color = _deprecated_ansicolors[color]
if color in ansicolors:
ansicolor = color
color = _ansimap[color]
bgcolor = t[4]
if bgcolor in _deprecated_ansicolors:
bgcolor = _deprecated_ansicolors[bgcolor]
if bgcolor in ansicolors:
bgansicolor = bgcolor
bgcolor = _ansimap[bgcolor]
return {
'color': color or None,
'bold': bool(t[1]),
'italic': bool(t[2]),
'underline': bool(t[3]),
'bgcolor': bgcolor or None,
'border': t[5] or None,
'roman': bool(t[6]) or None,
'sans': bool(t[7]) or None,
'mono': bool(t[8]) or None,
'ansicolor': ansicolor,
'bgansicolor': bgansicolor,
}
def list_styles(cls):
return list(cls)
def styles_token(cls, ttype):
return ttype in cls._styles
def __iter__(cls):
for token in cls._styles:
yield token, cls.style_for_token(token)
def __len__(cls):
return len(cls._styles)
@add_metaclass(StyleMeta)
class Style(object):
#: overall background color (``None`` means transparent)
background_color = '#ffffff'
#: highlight background color
highlight_color = '#ffffcc'
#: Style definitions for individual token types.
styles = {}
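The `'#rgb'` shorthand expansion inside `colorformat` is easy to check in isolation. The standalone helper below (our own sketch, not part of Pygments) reproduces just that hex branch:

```python
def expand_hex(text):
    # Mirrors the hex branch of StyleMeta's colorformat:
    # '#rrggbb' passes through, '#rgb' doubles each digit.
    if text[0:1] == '#':
        col = text[1:]
        if len(col) == 6:
            return col
        if len(col) == 3:
            return col[0] * 2 + col[1] * 2 + col[2] * 2
    raise ValueError('wrong color format %r' % text)
```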
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@Pygments@py2@pygments@style.py@.PATH_END.py
|
{
"filename": "singleParticleBoundaryCollision-3d.py",
"repo_name": "LLNL/spheral",
"repo_path": "spheral_extracted/spheral-main/tests/functional/DEM/LinearSpringDEM/SolidBoundaryCondition/singleParticleBoundaryCollision-3d.py",
"type": "Python"
}
|
#ATS:DEM3dSPBC1 = test( SELF, "--clearDirectories True --boolCheckRestitutionCoefficient True --normalRestitutionCoefficient 1.0 --g0 0.0 --steps 100", label="DEM perfectly elastic collision with solid boundary -- 3-D (serial)")
#ATS:DEM3dSPBC2 = test( SELF, "--clearDirectories True --boolCheckRestitutionCoefficient True --normalRestitutionCoefficient 0.5 --g0 0.0 --steps 100", label="DEM inelastic collision with solid boundary -- 3-D (serial)")
#ATS:DEM3dSPBC3 = test( SELF, "--clearDirectories True --boolCheckSlidingFrictionX True --normalRestitutionCoefficient 0.5 --g0 0.0 --steps 100", label="DEM sliding check x with solid boundary -- 3-D (serial)")
#ATS:DEM3dSPBC4 = test( SELF, "--clearDirectories True --boolCheckSlidingFrictionY True --normalRestitutionCoefficient 0.5 --g0 0.0 --steps 100", label="DEM sliding check y with solid boundary -- 3-D (serial)")
#ATS:DEM3dSPBC5 = test( SELF, "--clearDirectories True --boolCheckTorsionalFriction True --normalRestitutionCoefficient 0.5 --g0 0.0 --steps 100", label="DEM torsion check with solid boundary -- 3-D (serial)")
import os, sys, shutil, mpi
from math import *
from Spheral3d import *
from SpheralTestUtilities import *
from findLastRestart import *
from GenerateNodeDistribution3d import *
from GenerateDEMfromSPHGenerator import GenerateDEMfromSPHGenerator3d
sys.path.insert(0, '..')
from DEMConservationTracker import TrackConservation3d as TrackConservation
if mpi.procs > 1:
from PeanoHilbertDistributeNodes import distributeNodes3d
else:
from DistributeNodes import distributeNodes3d
title("DEM Boundary Restitution Coefficient Test")
#-------------------------------------------------------------------------------
# Generic problem parameters
#-------------------------------------------------------------------------------
commandLine(vImpact = 1.0, # impact velocity
omega0 = 0.1, # initial angular velocity if we're doing that
g0 = 0.0, # grav acceleration
h0 = 1.00, # initial height above the solid bc plane
radius = 0.95, # particle radius
normalSpringConstant=10000.0, # spring constant for LDS model
normalRestitutionCoefficient=1.00, # restitution coefficient to get damping const
tangentialSpringConstant=2857.0, # spring constant for LDS model
tangentialRestitutionCoefficient=0.55, # restitution coefficient to get damping const
dynamicFriction = 1.0, # dynamic friction coefficient for sliding
staticFriction = 1.0, # static friction coefficient for sliding
rollingFriction = 1.05, # static friction coefficient for rolling
torsionalFriction = 1.3, # static friction coefficient for torsion
cohesiveTensileStrength = 0.0, # units of pressure
shapeFactor = 0.5, # in [0,1] shape factor from Zhang 2018, 0 - no torsion or rolling
neighborSearchBuffer = 0.1, # multiplicative buffer to radius for neighbor search algo
# integration
IntegratorConstructor = VerletIntegrator, # Verlet integrator to guarantee conservation
stepsPerCollision = 50, # replaces CFL for DEM
goalTime = 3.0,
dt = 1e-8,
dtMin = 1.0e-8,
dtMax = 0.1,
dtGrowth = 2.0,
steps = None,
maxSteps = None,
statsStep = 10,
domainIndependent = False,
rigorousBoundaries = False,
dtverbose = False,
# output control
vizCycle = None,
vizTime = 0.1,
clearDirectories = False,
restoreCycle = None,
restartStep = 1000,
redistributeStep = 500,
dataDir = "dumps-DEM-particle-boundary-3d",
# ats parameters
boolCheckRestitutionCoefficient=False, # turn on error checking for restitution coefficient
boolCheckSlidingFrictionX=False, # checks sliding friction reduces relative rotation (x)
boolCheckSlidingFrictionY=False, # checks sliding friction reduces relative rotation (y)
boolCheckTorsionalFriction=False, # checks torsional friction reduces relative rotation
restitutionErrorThreshold = 0.02, # relative error actual restitution vs nominal
omegaThreshold = 1e-14, # threshold for perpendicular components that should stay zero
)
#-------------------------------------------------------------------------------
# check for bad inputs
#-------------------------------------------------------------------------------
assert mpi.procs == 1
assert g0 <= 0.0
assert h0 > radius
assert shapeFactor <= 1.0 and shapeFactor >= 0.0
assert dynamicFriction >= 0.0
assert staticFriction >= 0.0
assert torsionalFriction >= 0.0
assert rollingFriction >= 0.0
assert cohesiveTensileStrength >= 0.0
assert sum([boolCheckRestitutionCoefficient,
boolCheckSlidingFrictionX,
boolCheckSlidingFrictionY,
boolCheckTorsionalFriction]) <= 1
if boolCheckSlidingFrictionX or boolCheckSlidingFrictionY:
shapeFactor = 0.0
#-------------------------------------------------------------------------------
# file things
#-------------------------------------------------------------------------------
testName = "DEM-SingleParticleBoundaryCollision-3d"
dataDir = os.path.join(dataDir,
"restitutionCoefficient=%s" % normalRestitutionCoefficient,
"boolCheckRestitutionCoefficient=%s" % boolCheckRestitutionCoefficient,
"boolCheckSlidingFrictionX=%s" % boolCheckSlidingFrictionX,
"boolCheckSlidingFrictionY=%s" % boolCheckSlidingFrictionY,
"boolCheckTorsionalFriction=%s" % boolCheckTorsionalFriction)
restartDir = os.path.join(dataDir, "restarts")
vizDir = os.path.join(dataDir, "visit")
restartBaseName = os.path.join(restartDir, testName)
vizBaseName = testName
if vizCycle is None and vizTime is None:
vizBaseName=None
#-------------------------------------------------------------------------------
# Check if the necessary output directories exist. If not, create them.
#-------------------------------------------------------------------------------
if mpi.rank == 0:
if clearDirectories and os.path.exists(dataDir):
shutil.rmtree(dataDir)
if not os.path.exists(restartDir):
os.makedirs(restartDir)
if not os.path.exists(vizDir):
os.makedirs(vizDir)
mpi.barrier()
#-------------------------------------------------------------------------------
# If we're restarting, find the set of most recent restart files.
#-------------------------------------------------------------------------------
if restoreCycle is None:
restoreCycle = findLastRestart(restartBaseName)
#-------------------------------------------------------------------------------
# Kernel: only a filler for the neighbor algorithm, so the choice doesn't really matter
#-------------------------------------------------------------------------------
WT = TableKernel(WendlandC2Kernel(), 1000)
#-------------------------------------------------------------------------------
# Make the NodeList.
#-------------------------------------------------------------------------------
units = CGuS()
nodes1 = makeDEMNodeList("nodeList1",
hmin = 1.0e-30,
hmax = 1.0e30,
hminratio = 100.0,
neighborSearchBuffer = neighborSearchBuffer,
kernelExtent = WT.kernelExtent)
nodeSet = [nodes1]
for nodes in nodeSet:
output("nodes.name")
output("nodes.hmin")
output("nodes.hmax")
output("nodes.hminratio")
output("nodes.nodesPerSmoothingScale")
#-------------------------------------------------------------------------------
# Set the node properties. (gen 2 particles visit doesn't like just one)
#-------------------------------------------------------------------------------
generator0 = GenerateNodeDistribution3d(1, 1, 1,
rho = 1.0,
distributionType = "lattice",
xmin = (-1.0, -1.0, -1+h0),
xmax = (1.0, 1.0, 1+h0))
generator1 = GenerateDEMfromSPHGenerator3d(WT,
generator0)
distributeNodes3d((nodes1, generator1))
#-------------------------------------------------------------------------------
# Construct a DataBase to hold our node list
#-------------------------------------------------------------------------------
db = DataBase()
output("db")
for nodes in nodeSet:
db.appendNodeList(nodes)
output("db.numNodeLists")
output("db.numDEMNodeLists")
output("db.numFluidNodeLists")
#-------------------------------------------------------------------------------
# DEM
#-------------------------------------------------------------------------------
dem = DEM(db,
normalSpringConstant = normalSpringConstant,
normalRestitutionCoefficient = normalRestitutionCoefficient,
tangentialSpringConstant = tangentialSpringConstant,
tangentialRestitutionCoefficient = tangentialRestitutionCoefficient,
dynamicFrictionCoefficient = dynamicFriction,
staticFrictionCoefficient = staticFriction,
rollingFrictionCoefficient = rollingFriction,
torsionalFrictionCoefficient = torsionalFriction,
cohesiveTensileStrength =cohesiveTensileStrength,
shapeFactor = shapeFactor,
stepsPerCollision = stepsPerCollision)
packages = [dem]
solidWall = InfinitePlaneSolidBoundary(Vector(0.0, 0.0, 0.0), Vector( 0.0, 0.0, 1.0))
dem.appendSolidBoundary(solidWall)
#-------------------------------------------------------------------------------
# PhysicsPackage : gravity
#-------------------------------------------------------------------------------
gravity = ConstantAcceleration(a0 = Vector(0.0,0.0,g0),
nodeList = nodes1)
packages += [gravity]
#-------------------------------------------------------------------------------
# initial conditions
#-------------------------------------------------------------------------------
velocity = nodes1.velocity()
particleRadius = nodes1.particleRadius()
omega = dem.omega
velocity[0] = Vector(0.0,0.0,-vImpact)
particleRadius[0] = radius
if boolCheckSlidingFrictionX:
omega[0][0] = Vector(0.0,omega0,0.0)
elif boolCheckSlidingFrictionY:
omega[0][0] = Vector(omega0,0.0,0.0)
elif boolCheckTorsionalFriction:
omega[0][0] = Vector(0.0,0.0,omega0)
#-------------------------------------------------------------------------------
# Construct a time integrator, and add the physics packages.
#-------------------------------------------------------------------------------
integrator = IntegratorConstructor(db)
for p in packages:
integrator.appendPhysicsPackage(p)
integrator.lastDt = dt
integrator.dtMin = dtMin
integrator.dtMax = dtMax
integrator.dtGrowth = dtGrowth
integrator.domainDecompositionIndependent = domainIndependent
integrator.verbose = dtverbose
integrator.rigorousBoundaries = rigorousBoundaries
integrator.cullGhostNodes = False
output("integrator")
output("integrator.havePhysicsPackage(dem)")
output("integrator.lastDt")
output("integrator.dtMin")
output("integrator.dtMax")
output("integrator.dtGrowth")
output("integrator.domainDecompositionIndependent")
output("integrator.rigorousBoundaries")
output("integrator.verbose")
#-------------------------------------------------------------------------------
# Make the problem controller.
#-------------------------------------------------------------------------------
from SpheralPointmeshSiloDump import dumpPhysicsState
control = SpheralController(integrator, WT,
iterateInitialH = False,
initializeDerivatives = True,
statsStep = statsStep,
restartStep = restartStep,
restartBaseName = restartBaseName,
restoreCycle = restoreCycle,
vizBaseName = vizBaseName,
vizMethod=dumpPhysicsState,
vizDir = vizDir,
vizStep = vizCycle,
vizTime = vizTime)
output("control")
#-------------------------------------------------------------------------------
# Advance to the end time.
#-------------------------------------------------------------------------------
if steps is not None:
control.step(steps)
else:
control.advance(goalTime, maxSteps)
#-------------------------------------------------------------------------------
# Great success?
#-------------------------------------------------------------------------------
if boolCheckRestitutionCoefficient:
# check our restitution coefficient is correct
#-------------------------------------------------------------
vijPostImpact = -velocity[0].z
vijPreImpact = vImpact
restitutionEff = vijPostImpact/vijPreImpact
restitutionError = abs(restitutionEff + normalRestitutionCoefficient)/normalRestitutionCoefficient
if restitutionError > restitutionErrorThreshold:
print(" final velocity = {0}".format(vijPostImpact))
print(" initial velocity = {0}".format(vijPreImpact))
raise ValueError(" relative restitution coefficient error, %g, exceeds bounds" % restitutionError)
# check for non-physical behavior
#-------------------------------------------------------------
if boolCheckSlidingFrictionX:
if omega[0][0].magnitude() > omega0:
raise ValueError("particles are rotating faster post-collision")
if abs(omega[0][0].x) > omegaThreshold or abs(omega[0][0].z) > omegaThreshold:
raise ValueError("erroneous spin-up in perpendicular direction")
if boolCheckSlidingFrictionY:
if omega[0][0].magnitude() > omega0:
raise ValueError("particles are rotating faster post-collision")
if abs(omega[0][0].y) > omegaThreshold or abs(omega[0][0].z) > omegaThreshold:
raise ValueError("erroneous spin-up in perpendicular direction")
if boolCheckTorsionalFriction:
if omega[0][0].magnitude() > omega0:
raise ValueError("particles are rotating faster post-collision")
if abs(omega[0][0].x) > omegaThreshold or abs(omega[0][0].y) > omegaThreshold:
raise ValueError("erroneous spin-up in perpendicular direction")
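For the linear spring-dashpot (LDS) model used above, the nominal restitution coefficient fixes the dashpot damping through the standard relation e = exp(-pi*zeta/sqrt(1 - zeta^2)), where zeta = c/(2*sqrt(m*k)) is the damping ratio. A standalone sketch of that relation and its closed-form inverse (illustrative only, not Spheral code — Spheral derives its damping constant internally):

```python
import math

# Linear spring-dashpot: restitution coefficient e and damping ratio zeta
# are linked by e = exp(-pi * zeta / sqrt(1 - zeta**2)).
def restitution_from_damping(zeta):
    return math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta**2))

def damping_from_restitution(e):
    # Closed-form inverse of the relation above (valid for 0 < e <= 1).
    ln_e = math.log(e)
    return -ln_e / math.sqrt(math.pi**2 + ln_e**2)
```

Note that the default normalRestitutionCoefficient = 1.0 corresponds to zero damping (a perfectly elastic contact).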
|
LLNLREPO_NAMEspheralPATH_START.@spheral_extracted@spheral-main@tests@functional@DEM@LinearSpringDEM@SolidBoundaryCondition@singleParticleBoundaryCollision-3d.py@.PATH_END.py
|
{
"filename": "_pattern.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/barpolar/marker/_pattern.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class PatternValidator(_plotly_utils.basevalidators.CompoundValidator):
def __init__(self, plotly_name="pattern", parent_name="barpolar.marker", **kwargs):
super(PatternValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
data_class_str=kwargs.pop("data_class_str", "Pattern"),
data_docs=kwargs.pop(
"data_docs",
"""
bgcolor
When there is no colorscale sets the color of
background pattern fill. Defaults to a
`marker.color` background when `fillmode` is
"overlay". Otherwise, defaults to a transparent
background.
bgcolorsrc
Sets the source reference on Chart Studio Cloud
for `bgcolor`.
fgcolor
When there is no colorscale sets the color of
foreground pattern fill. Defaults to a
`marker.color` background when `fillmode` is
"replace". Otherwise, defaults to dark grey or
white to increase contrast with the `bgcolor`.
fgcolorsrc
Sets the source reference on Chart Studio Cloud
for `fgcolor`.
fgopacity
Sets the opacity of the foreground pattern
fill. Defaults to a 0.5 when `fillmode` is
"overlay". Otherwise, defaults to 1.
fillmode
Determines whether `marker.color` should be
used as a default to `bgcolor` or a `fgcolor`.
shape
Sets the shape of the pattern fill. By default,
no pattern is used for filling the area.
shapesrc
Sets the source reference on Chart Studio Cloud
for `shape`.
size
Sets the size of unit squares of the pattern
fill in pixels, which corresponds to the
interval of repetition of the pattern.
sizesrc
Sets the source reference on Chart Studio Cloud
for `size`.
solidity
Sets the solidity of the pattern fill. Solidity
is roughly the fraction of the area filled by
the pattern. Solidity of 0 shows only the
background color without pattern and solidity of
1 shows only the foreground color without
pattern.
soliditysrc
Sets the source reference on Chart Studio Cloud
for `solidity`.
""",
),
**kwargs,
)
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@barpolar@marker@_pattern.py@.PATH_END.py
|
{
"filename": "_stream.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/graph_objs/densitymapbox/_stream.py",
"type": "Python"
}
|
from plotly.basedatatypes import BaseTraceHierarchyType as _BaseTraceHierarchyType
import copy as _copy
class Stream(_BaseTraceHierarchyType):
# class properties
# --------------------
_parent_path_str = "densitymapbox"
_path_str = "densitymapbox.stream"
_valid_props = {"maxpoints", "token"}
# maxpoints
# ---------
@property
def maxpoints(self):
"""
Sets the maximum number of points to keep on the plots from an
incoming stream. If `maxpoints` is set to 50, only the newest
50 points will be displayed on the plot.
The 'maxpoints' property is a number and may be specified as:
- An int or float in the interval [0, 10000]
Returns
-------
int|float
"""
return self["maxpoints"]
@maxpoints.setter
def maxpoints(self, val):
self["maxpoints"] = val
# token
# -----
@property
def token(self):
"""
The stream id number links a data trace on a plot with a
stream. See https://chart-studio.plotly.com/settings for more
details.
The 'token' property is a string and must be specified as:
- A non-empty string
Returns
-------
str
"""
return self["token"]
@token.setter
def token(self, val):
self["token"] = val
# Self properties description
# ---------------------------
@property
def _prop_descriptions(self):
return """\
maxpoints
Sets the maximum number of points to keep on the plots
from an incoming stream. If `maxpoints` is set to 50,
only the newest 50 points will be displayed on the
plot.
token
The stream id number links a data trace on a plot with
a stream. See https://chart-studio.plotly.com/settings
for more details.
"""
def __init__(self, arg=None, maxpoints=None, token=None, **kwargs):
"""
Construct a new Stream object
Parameters
----------
arg
dict of properties compatible with this constructor or
an instance of
:class:`plotly.graph_objs.densitymapbox.Stream`
maxpoints
Sets the maximum number of points to keep on the plots
from an incoming stream. If `maxpoints` is set to 50,
only the newest 50 points will be displayed on the
plot.
token
The stream id number links a data trace on a plot with
a stream. See https://chart-studio.plotly.com/settings
for more details.
Returns
-------
Stream
"""
super(Stream, self).__init__("stream")
if "_parent" in kwargs:
self._parent = kwargs["_parent"]
return
# Validate arg
# ------------
if arg is None:
arg = {}
elif isinstance(arg, self.__class__):
arg = arg.to_plotly_json()
elif isinstance(arg, dict):
arg = _copy.copy(arg)
else:
raise ValueError(
"""\
The first argument to the plotly.graph_objs.densitymapbox.Stream
constructor must be a dict or
an instance of :class:`plotly.graph_objs.densitymapbox.Stream`"""
)
# Handle skip_invalid
# -------------------
self._skip_invalid = kwargs.pop("skip_invalid", False)
self._validate = kwargs.pop("_validate", True)
# Populate data dict with properties
# ----------------------------------
_v = arg.pop("maxpoints", None)
_v = maxpoints if maxpoints is not None else _v
if _v is not None:
self["maxpoints"] = _v
_v = arg.pop("token", None)
_v = token if token is not None else _v
if _v is not None:
self["token"] = _v
# Process unknown kwargs
# ----------------------
self._process_kwargs(**dict(arg, **kwargs))
# Reset skip_invalid
# ------------------
self._skip_invalid = False
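Conceptually, `maxpoints` turns the streamed trace into a rolling window: once the stream holds that many points, each new point evicts the oldest. In plain Python the same behavior is a bounded deque (illustrative only, not plotly internals):

```python
from collections import deque

# A stream limited to the newest 50 points, mirroring maxpoints=50.
window = deque(maxlen=50)
for point in range(60):
    window.append(point)

# Only the newest 50 points remain: 10 .. 59.
assert len(window) == 50 and window[0] == 10 and window[-1] == 59
```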
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py2@plotly@graph_objs@densitymapbox@_stream.py@.PATH_END.py
|
{
"filename": "llm_checker.ipynb",
"repo_name": "langchain-ai/langchain",
"repo_path": "langchain_extracted/langchain-master/cookbook/llm_checker.ipynb",
"type": "Jupyter Notebook"
}
|
# Self-checking chain
This notebook showcases how to use LLMCheckerChain.
```python
from langchain.chains import LLMCheckerChain
from langchain_openai import OpenAI
llm = OpenAI(temperature=0.7)
text = "What type of mammal lays the biggest eggs?"
checker_chain = LLMCheckerChain.from_llm(llm, verbose=True)
checker_chain.invoke(text)
```
> Entering new LLMCheckerChain chain...
> Entering new SequentialChain chain...
> Finished chain.
> Finished chain.
' No mammal lays the biggest eggs. The Elephant Bird, which was a species of giant bird, laid the largest eggs of any bird.'
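Under the hood the chain drafts an answer, lists the assumptions behind it, checks each assumption, and then revises. Stripped of LangChain, the control flow is roughly the following (with a hypothetical stub callable in place of a real LLM):

```python
# Minimal sketch of the check-then-revise pattern behind LLMCheckerChain.
# `model` is a hypothetical callable standing in for a real LLM.
def self_check(question, model):
    draft = model("Answer: " + question)
    assumptions = model("List the assumptions behind: " + draft)
    verdicts = model("Check each assumption: " + assumptions)
    return model("Given " + verdicts + ", revise the answer to: " + question)
```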
|
langchain-aiREPO_NAMElangchainPATH_START.@langchain_extracted@langchain-master@cookbook@llm_checker.ipynb@.PATH_END.py
|
{
"filename": "MetadataDisplayer.md",
"repo_name": "tensorflow/tensorflow",
"repo_path": "tensorflow_extracted/tensorflow-master/tensorflow/lite/g3doc/api_docs/python/tflite_support/metadata/MetadataDisplayer.md",
"type": "Markdown"
}
|
page_type: reference
description: Displays metadata and associated file info in human-readable format.
<link rel="stylesheet" href="/site-assets/css/style.css">
<!-- DO NOT EDIT! Automatically generated file. -->
<div itemscope itemtype="http://developers.google.com/ReferenceObject">
<meta itemprop="name" content="tflite_support.metadata.MetadataDisplayer" />
<meta itemprop="path" content="Stable" />
<meta itemprop="property" content="__init__"/>
<meta itemprop="property" content="get_associated_file_buffer"/>
<meta itemprop="property" content="get_metadata_buffer"/>
<meta itemprop="property" content="get_metadata_json"/>
<meta itemprop="property" content="get_packed_associated_file_list"/>
<meta itemprop="property" content="with_model_buffer"/>
<meta itemprop="property" content="with_model_file"/>
</div>
# tflite_support.metadata.MetadataDisplayer
<!-- Insert buttons and diff -->
<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
<td>
<a target="_blank" href="https://github.com/tensorflow/tflite-support/blob/v0.4.4/tensorflow_lite_support/metadata/python/metadata.py#L686-L789">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub
</a>
</td>
</table>
Displays metadata and associated file info in human-readable format.
<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>tflite_support.metadata.MetadataDisplayer(
model_buffer, metadata_buffer, associated_file_list
)
</code></pre>
<!-- Placeholder for "Used in" -->
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Args</h2></th></tr>
<tr>
<td>
`model_buffer`<a id="model_buffer"></a>
</td>
<td>
valid buffer of the model file.
</td>
</tr><tr>
<td>
`metadata_buffer`<a id="metadata_buffer"></a>
</td>
<td>
valid buffer of the metadata file.
</td>
</tr><tr>
<td>
`associated_file_list`<a id="associated_file_list"></a>
</td>
<td>
list of associated files in the model file.
</td>
</tr>
</table>
## Methods
<h3 id="get_associated_file_buffer"><code>get_associated_file_buffer</code></h3>
<a target="_blank" class="external" href="https://github.com/tensorflow/tflite-support/blob/v0.4.4/tensorflow_lite_support/metadata/python/metadata.py#L739-L756">View source</a>
<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>get_associated_file_buffer(
filename
)
</code></pre>
Get the specified associated file content in bytearray.
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Args</th></tr>
<tr>
<td>
`filename`
</td>
<td>
name of the file to be extracted.
</td>
</tr>
</table>
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Returns</th></tr>
<tr class="alt">
<td colspan="2">
The file content in bytearray.
</td>
</tr>
</table>
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Raises</th></tr>
<tr>
<td>
`ValueError`
</td>
<td>
if the file does not exist in the model.
</td>
</tr>
</table>
<h3 id="get_metadata_buffer"><code>get_metadata_buffer</code></h3>
<a target="_blank" class="external" href="https://github.com/tensorflow/tflite-support/blob/v0.4.4/tensorflow_lite_support/metadata/python/metadata.py#L758-L760">View source</a>
<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>get_metadata_buffer()
</code></pre>
Get the metadata buffer in bytearray out from the model.
<h3 id="get_metadata_json"><code>get_metadata_json</code></h3>
<a target="_blank" class="external" href="https://github.com/tensorflow/tflite-support/blob/v0.4.4/tensorflow_lite_support/metadata/python/metadata.py#L762-L764">View source</a>
<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>get_metadata_json()
</code></pre>
Converts the metadata into a json string.
<h3 id="get_packed_associated_file_list"><code>get_packed_associated_file_list</code></h3>
<a target="_blank" class="external" href="https://github.com/tensorflow/tflite-support/blob/v0.4.4/tensorflow_lite_support/metadata/python/metadata.py#L766-L772">View source</a>
<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>get_packed_associated_file_list()
</code></pre>
Returns a list of associated files that are packed in the model.
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Returns</th></tr>
<tr class="alt">
<td colspan="2">
A name list of associated files.
</td>
</tr>
</table>
<h3 id="with_model_buffer"><code>with_model_buffer</code></h3>
<a target="_blank" class="external" href="https://github.com/tensorflow/tflite-support/blob/v0.4.4/tensorflow_lite_support/metadata/python/metadata.py#L721-L737">View source</a>
<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>@classmethod</code>
<code>with_model_buffer(
model_buffer
)
</code></pre>
Creates a MetadataDisplayer object for a file buffer.
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Args</th></tr>
<tr>
<td>
`model_buffer`
</td>
<td>
TensorFlow Lite model buffer in bytearray.
</td>
</tr>
</table>
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Returns</th></tr>
<tr class="alt">
<td colspan="2">
MetadataDisplayer object.
</td>
</tr>
</table>
<h3 id="with_model_file"><code>with_model_file</code></h3>
<a target="_blank" class="external" href="https://github.com/tensorflow/tflite-support/blob/v0.4.4/tensorflow_lite_support/metadata/python/metadata.py#L703-L719">View source</a>
<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>@classmethod</code>
<code>with_model_file(
model_file
)
</code></pre>
Creates a MetadataDisplayer object for the model file.
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Args</th></tr>
<tr>
<td>
`model_file`
</td>
<td>
valid path to a TensorFlow Lite model file.
</td>
</tr>
</table>
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Returns</th></tr>
<tr class="alt">
<td colspan="2">
MetadataDisplayer object.
</td>
</tr>
</table>
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Raises</th></tr>
<tr>
<td>
`IOError`
</td>
<td>
File not found.
</td>
</tr><tr>
<td>
`ValueError`
</td>
<td>
The model does not have metadata.
</td>
</tr>
</table>
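Associated files are packed into the model as a trailing zip archive, which is why they can also be listed with standard zip tooling. A sketch of that idea with Python's `zipfile` (illustrative; the real class additionally parses the flatbuffer metadata):

```python
import io
import zipfile

# Build a toy "model": some flatbuffer-like bytes with a zip of
# associated files appended, then list the files back.
payload = io.BytesIO()
with zipfile.ZipFile(payload, "w") as z:
    z.writestr("labels.txt", "cat\ndog\n")
model_buffer = b"\x00fake-flatbuffer\x00" + payload.getvalue()

# zipfile locates the archive from the end of the buffer, so the
# leading model bytes are ignored.
with zipfile.ZipFile(io.BytesIO(model_buffer)) as z:
    names = z.namelist()
# names == ["labels.txt"]
```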
|
tensorflowREPO_NAMEtensorflowPATH_START.@tensorflow_extracted@tensorflow-master@tensorflow@lite@g3doc@api_docs@python@tflite_support@metadata@MetadataDisplayer.md@.PATH_END.py
|
{
"filename": "mpl_axes.py",
"repo_name": "waynebhayes/SpArcFiRe",
"repo_path": "SpArcFiRe_extracted/SpArcFiRe-master/scripts/SpArcFiRe-pyvenv/lib/python2.7/site-packages/mpl_toolkits/axes_grid1/mpl_axes.py",
"type": "Python"
}
|
from __future__ import (absolute_import, division, print_function,
unicode_literals)
import six
import warnings
import matplotlib.axes as maxes
from matplotlib.artist import Artist
from matplotlib.axis import XAxis, YAxis
class SimpleChainedObjects(object):
def __init__(self, objects):
self._objects = objects
def __getattr__(self, k):
_a = SimpleChainedObjects([getattr(a, k) for a in self._objects])
return _a
def __call__(self, *kl, **kwargs):
for m in self._objects:
m(*kl, **kwargs)
class Axes(maxes.Axes):
class AxisDict(dict):
def __init__(self, axes):
self.axes = axes
super(Axes.AxisDict, self).__init__()
def __getitem__(self, k):
if isinstance(k, tuple):
r = SimpleChainedObjects(
[super(Axes.AxisDict, self).__getitem__(k1) for k1 in k])
return r
elif isinstance(k, slice):
if k.start is None and k.stop is None and k.step is None:
r = SimpleChainedObjects(list(six.itervalues(self)))
return r
else:
raise ValueError("Unsupported slice")
else:
return dict.__getitem__(self, k)
def __call__(self, *v, **kwargs):
return maxes.Axes.axis(self.axes, *v, **kwargs)
def __init__(self, *kl, **kw):
super(Axes, self).__init__(*kl, **kw)
def _init_axis_artists(self, axes=None):
if axes is None:
axes = self
self._axislines = self.AxisDict(self)
self._axislines["bottom"] = SimpleAxisArtist(self.xaxis, 1, self.spines["bottom"])
self._axislines["top"] = SimpleAxisArtist(self.xaxis, 2, self.spines["top"])
self._axislines["left"] = SimpleAxisArtist(self.yaxis, 1, self.spines["left"])
self._axislines["right"] = SimpleAxisArtist(self.yaxis, 2, self.spines["right"])
def _get_axislines(self):
return self._axislines
axis = property(_get_axislines)
def cla(self):
super(Axes, self).cla()
self._init_axis_artists()
class SimpleAxisArtist(Artist):
def __init__(self, axis, axisnum, spine):
self._axis = axis
self._axisnum = axisnum
self.line = spine
if isinstance(axis, XAxis):
self._axis_direction = ["bottom", "top"][axisnum-1]
elif isinstance(axis, YAxis):
self._axis_direction = ["left", "right"][axisnum-1]
else:
raise ValueError("axis must be instance of XAxis or YAxis : %s is provided" % (axis,))
Artist.__init__(self)
def _get_major_ticks(self):
tickline = "tick%dline" % self._axisnum
return SimpleChainedObjects([getattr(tick, tickline) for tick \
in self._axis.get_major_ticks()])
def _get_major_ticklabels(self):
label = "label%d" % self._axisnum
return SimpleChainedObjects([getattr(tick, label) for tick \
in self._axis.get_major_ticks()])
def _get_label(self):
return self._axis.label
major_ticks = property(_get_major_ticks)
major_ticklabels = property(_get_major_ticklabels)
label = property(_get_label)
def set_visible(self, b):
self.toggle(all=b)
self.line.set_visible(b)
self._axis.set_visible(True)
Artist.set_visible(self, b)
def set_label(self, txt):
self._axis.set_label_text(txt)
def toggle(self, all=None, ticks=None, ticklabels=None, label=None):
if all:
_ticks, _ticklabels, _label = True, True, True
elif all is not None:
_ticks, _ticklabels, _label = False, False, False
else:
_ticks, _ticklabels, _label = None, None, None
if ticks is not None:
_ticks = ticks
if ticklabels is not None:
_ticklabels = ticklabels
if label is not None:
_label = label
tickOn = "tick%dOn" % self._axisnum
labelOn = "label%dOn" % self._axisnum
if _ticks is not None:
tickparam = {tickOn: _ticks}
self._axis.set_tick_params(**tickparam)
if _ticklabels is not None:
tickparam = {labelOn: _ticklabels}
self._axis.set_tick_params(**tickparam)
if _label is not None:
pos = self._axis.get_label_position()
if (pos == self._axis_direction) and not _label:
self._axis.label.set_visible(False)
elif _label:
self._axis.label.set_visible(True)
self._axis.set_label_position(self._axis_direction)
if __name__ == '__main__':
import matplotlib.pyplot as plt
fig = plt.figure()
ax = Axes(fig, [0.1, 0.1, 0.8, 0.8])
fig.add_axes(ax)
ax.cla()
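SimpleChainedObjects above is a broadcast proxy: attribute access fans out to every wrapped object and returns another proxy, and calling the proxy calls every underlying method. The pattern in isolation (hypothetical Tick class for demonstration):

```python
class Chained:
    """Broadcast attribute access and calls across a list of objects."""
    def __init__(self, objects):
        self._objects = objects
    def __getattr__(self, name):
        return Chained([getattr(o, name) for o in self._objects])
    def __call__(self, *args, **kwargs):
        for f in self._objects:
            f(*args, **kwargs)

class Tick:
    def __init__(self):
        self.visible = True
    def set_visible(self, flag):
        self.visible = flag

ticks = [Tick(), Tick(), Tick()]
Chained(ticks).set_visible(False)  # one call hides every tick
```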
|
waynebhayesREPO_NAMESpArcFiRePATH_START.@SpArcFiRe_extracted@SpArcFiRe-master@scripts@SpArcFiRe-pyvenv@lib@python2.7@site-packages@mpl_toolkits@axes_grid1@mpl_axes.py@.PATH_END.py
|
{
"filename": "send_cor.py",
"repo_name": "igmhub/picca",
"repo_path": "picca_extracted/picca-master/tutorials/eboss_dr16/Scripts/send_cor.py",
"type": "Python"
}
|
import scipy as sp
import scipy.linalg
import argparse
import subprocess
import fitsio
import os
import h5py
import glob
import time
import matplotlib.pyplot as plt
from picca.constants import ABSORBER_IGM
path_here = os.environ['DR16_BASE']
path_drq = os.environ['QSO_CAT']
path_deltas = os.environ['DR16_BASE']
metList = {}
metList['LYA'] = ['CIV(eff)','SiII(1260)','SiIII(1207)','SiII(1193)','SiII(1190)']
metList['LYB'] = ['CIV(eff)','SiII(1260)','SiIII(1207)','SiII(1193)','SiII(1190)']
def send_xcf(zmin,zmax,do_corr,do_dist,do_met,f='LYA',l='LYA'):
if (zmin==0.) and (zmax==10.):
zmin = int(zmin)
zmax = int(zmax)
in_dir = path_deltas+'/Delta_{}/Delta/'.format(f)
else:
if False:
in_dir = path_deltas+'/Delta_{}_z_{}_{}/Delta/'.format(f,zmin,zmax)
else:
            print('\nNot using /Delta_{}_z_{}_{}/Delta/ \n'.format(f,zmin,zmax))
            in_dir = path_deltas+'/Delta_{}/Delta/'.format(f)
strl = l.replace('(','').replace(')','')
cmd = 'picca_xcf.py'
cmd += ' --in-dir '+in_dir
cmd += ' --drq '+path_drq
cmd += ' --out {}/Correlations/xcf_z_{}_{}.fits.gz'.format(path_here,zmin,zmax)
cmd += ' --z-evol-obj 1.44 '
cmd += ' --fid-Om 0.314569514863487 --fid-Or 7.97505418919554e-5'
cmd += ' --nside 16'
if l!='LYA':
cmd += ' --lambda-abs '+l.replace('(','\(').replace(')','\)')
if (f!='LYA') or (l!='LYA'):
cmd = cmd.replace('xcf_','xcf_{}_in_{}_'.format(strl,f))
print('')
print(cmd)
if do_corr:
start = time.time()
subprocess.call(cmd, shell=True)
done = time.time()
print('\n\nTime spent in picca_xcf = {} minutes\n\n'.format((done-start)/60))
cmd = 'picca_xdmat.py'
cmd += ' --in-dir '+in_dir
cmd += ' --drq '+path_drq
cmd += ' --out {}/Correlations/xdmat_z_{}_{}.fits.gz'.format(path_here, zmin, zmax)
cmd += ' --z-evol-obj 1.44 '
cmd += ' --fid-Om 0.314569514863487 --fid-Or 7.97505418919554e-5'
cmd += ' --nside 16'
cmd += ' --rej 0.99'
if l!='LYA':
cmd += ' --lambda-abs '+l.replace('(','\(').replace(')','\)')
if (f!='LYA') or (l!='LYA'):
cmd = cmd.replace('xdmat_','xdmat_{}_in_{}_'.format(strl,f))
print('')
print(cmd)
if do_dist:
start = time.time()
subprocess.call(cmd, shell=True)
done = time.time()
print('\n\nTime spent in picca_xdmat = {} minutes\n\n'.format((done-start)/60))
cmd = 'picca_export.py'
cmd += ' --data {}/Correlations/xcf_z_{}_{}.fits.gz'.format(path_here, zmin, zmax)
cmd += ' --dmat {}/Correlations/xdmat_z_{}_{}.fits.gz'.format(path_here, zmin, zmax)
cmd += ' --out {}/Correlations/xcf_z_{}_{}-exp.fits.gz'.format(path_here, zmin, zmax)
if (f!='LYA') or (l!='LYA'):
cmd = cmd.replace('xcf_','xcf_{}_in_{}_'.format(strl,f))
cmd = cmd.replace('xdmat_','xdmat_{}_in_{}_'.format(strl,f))
print('')
print(cmd)
if do_dist: subprocess.call(cmd, shell=True)
cmd = 'picca_metal_xdmat.py'
cmd += ' --in-dir '+in_dir
cmd += ' --drq '+path_drq
cmd += ' --out {}/Correlations/metal_xdmat_z_{}_{}.fits.gz'.format(path_here, zmin, zmax)
cmd += ' --z-evol-obj 1.44 '
cmd += ' --fid-Om 0.314569514863487 --fid-Or 7.97505418919554e-5'
cmd += ' --nside 16'
cmd += ' --rej 0.999'
cmd += ' --abs-igm '
for m in metList[f]:
cmd += m+' '
if l!='LYA':
cmd += ' --lambda-abs '+l#.replace('(','\(').replace(')','\)')
if (f!='LYA') or (l!='LYA'):
cmd = cmd.replace('metal_xdmat_','metal_xdmat_{}_in_{}_'.format(strl,f))
cmd = cmd.replace('(','\(').replace(')','\)')
print('')
print(cmd)
if do_met:
start = time.time()
subprocess.call(cmd, shell=True)
done = time.time()
print('\n\nTime spent in picca_metal_xdmat = {} minutes\n\n'.format((done-start)/60))
return
def send_cf(zmin,zmax,do_corr,do_dist,do_met,f='LYA',l='LYA'):
if (zmin==0.) and (zmax==10.):
zmin = int(zmin)
zmax = int(zmax)
strl = l.replace('(','').replace(')','')
###
cmd = 'picca_cf.py'
cmd += ' --in-dir {}/Delta_{}/Delta/'.format(path_deltas,f)
cmd += ' --out {}/Correlations/cf_z_{}_{}.fits.gz'.format(path_here,zmin,zmax)
cmd += ' --z-cut-min {} --z-cut-max {}'.format(zmin, zmax)
#cmd += ' --remove-same-half-plate-close-pairs'
cmd += ' --fid-Om 0.314569514863487 --fid-Or 7.97505418919554e-5'
cmd += ' --nside 16'
if l!='LYA':
cmd += ' --lambda-abs '+l.replace('(','\(').replace(')','\)')
if (f!='LYA') or (l!='LYA'):
cmd = cmd.replace('cf_','cf_{}_in_{}_'.format(strl,f))
print('')
print(cmd)
if do_corr:
start = time.time()
subprocess.call(cmd, shell=True)
done = time.time()
print('\n\nTime spent in picca_cf = {} minutes\n\n'.format((done-start)/60))
###
cmd = 'picca_dmat.py'
cmd += ' --in-dir {}/Delta_{}/Delta/'.format(path_deltas,f)
cmd += ' --out {}/Correlations/dmat_z_{}_{}.fits.gz'.format(path_here,zmin,zmax)
#cmd += ' --remove-same-half-plate-close-pairs'
cmd += ' --z-cut-min {} --z-cut-max {}'.format(zmin, zmax)
cmd += ' --fid-Om 0.314569514863487 --fid-Or 7.97505418919554e-5'
cmd += ' --nside 16'
cmd += ' --rej 0.99'
if l!='LYA':
cmd += ' --lambda-abs '+l.replace('(','\(').replace(')','\)')
if (f!='LYA') or (l!='LYA'):
cmd = cmd.replace('dmat_','dmat_{}_in_{}_'.format(strl,f))
print('')
print(cmd)
if do_dist:
start = time.time()
subprocess.call(cmd, shell=True)
done = time.time()
print('\n\nTime spent in picca_dmat = {} minutes\n\n'.format((done-start)/60))
###
cmd = 'picca_export.py'
cmd += ' --data {}/Correlations/cf_z_{}_{}.fits.gz'.format(path_here,zmin,zmax)
cmd += ' --dmat {}/Correlations/dmat_z_{}_{}.fits.gz'.format(path_here,zmin,zmax)
cmd += ' --out {}/Correlations/cf_z_{}_{}-exp.fits.gz'.format(path_here,zmin,zmax)
if (f!='LYA') or (l!='LYA'):
cmd = cmd.replace('cf_','cf_{}_in_{}_'.format(strl,f))
cmd = cmd.replace('dmat_','dmat_{}_in_{}_'.format(strl,f))
print('')
print(cmd)
if do_dist: subprocess.call(cmd, shell=True)
###
cmd = 'picca_metal_dmat.py'
cmd += ' --in-dir {}/Delta_{}/Delta/'.format(path_deltas,f)
cmd += ' --out {}/Correlations/metal_dmat_z_{}_{}.fits.gz'.format(path_here,zmin,zmax)
cmd += ' --z-cut-min {} --z-cut-max {}'.format(zmin, zmax)
#cmd += ' --remove-same-half-plate-close-pairs'
cmd += ' --fid-Om 0.314569514863487 --fid-Or 7.97505418919554e-5'
cmd += ' --nside 16'
cmd += ' --rej 0.999'
cmd += ' --abs-igm '
for m in metList[f]:
cmd += m+' '
if l!='LYA':
        cmd += ' --lambda-abs '+l  # parentheses are escaped below for the whole cmd
if (f!='LYA') or (l!='LYA'):
cmd = cmd.replace('metal_dmat_','metal_dmat_{}_in_{}_'.format(strl,f))
cmd = cmd.replace('(','\(').replace(')','\)')
print('')
print(cmd)
if do_met:
start = time.time()
subprocess.call(cmd, shell=True)
done = time.time()
print('\n\nTime spent in picca_metal_dmat = {} minutes\n\n'.format((done-start)/60))
return
def send_cf_cross(zmin,zmax,do_corr,do_dist,do_met,f1='LYA',l1='LYA',f2='LYB',l2='LYA'):
strl1 = l1.replace('(','').replace(')','')
strl2 = l2.replace('(','').replace(')','')
if (zmin==0.) and (zmax==10.):
zmin = int(zmin)
zmax = int(zmax)
cmd = 'picca_cf.py'
cmd += ' --in-dir {}/Delta_{}/Delta/'.format(path_deltas,f1)
cmd += ' --in-dir2 {}/Delta_{}/Delta/'.format(path_deltas,f2)
cmd += ' --out {}/Correlations/cf_{}_in_{}_{}_in_{}_z_{}_{}.fits.gz'.format(path_here,strl1,f1,strl2,f2,zmin,zmax)
cmd += ' --z-cut-min {} --z-cut-max {}'.format(zmin, zmax)
cmd += ' --fid-Om 0.314569514863487 --fid-Or 7.97505418919554e-5'
cmd += ' --nside 16'
print('')
print(cmd)
if do_corr:
start = time.time()
subprocess.call(cmd, shell=True)
done = time.time()
print('\n\nTime spent in picca_cf (cross) = {} minutes\n\n'.format((done-start)/60))
cmd = 'picca_dmat.py'
cmd += ' --in-dir {}/Delta_{}/Delta/'.format(path_deltas,f1)
cmd += ' --in-dir2 {}/Delta_{}/Delta/'.format(path_deltas,f2)
cmd += ' --out {}/Correlations/dmat_{}_in_{}_{}_in_{}_z_{}_{}.fits.gz'.format(path_here,strl1,f1,strl2,f2,zmin,zmax)
cmd += ' --z-cut-min {} --z-cut-max {}'.format(zmin, zmax)
cmd += ' --fid-Om 0.314569514863487 --fid-Or 7.97505418919554e-5'
cmd += ' --nside 16'
cmd += ' --rej 0.99'
print('')
print(cmd)
if do_dist:
start = time.time()
subprocess.call(cmd, shell=True)
done = time.time()
print('\n\nTime spent in picca_dmat (cross) = {} minutes\n\n'.format((done-start)/60))
cmd = 'picca_export.py'
cmd += ' --data {}/Correlations/cf_{}_in_{}_{}_in_{}_z_{}_{}.fits.gz'.format(path_here,strl1,f1,strl2,f2,zmin,zmax)
cmd += ' --dmat {}/Correlations/dmat_{}_in_{}_{}_in_{}_z_{}_{}.fits.gz'.format(path_here,strl1,f1,strl2,f2,zmin,zmax)
cmd += ' --out {}/Correlations/cf_{}_in_{}_{}_in_{}_z_{}_{}-exp.fits.gz'.format(path_here,strl1,f1,strl2,f2,zmin,zmax)
print('')
print(cmd)
if do_dist: subprocess.call(cmd, shell=True)
###
cmd = 'picca_metal_dmat.py'
cmd += ' --in-dir {}/Delta_{}/Delta/'.format(path_deltas,f1)
cmd += ' --in-dir2 {}/Delta_{}/Delta/'.format(path_deltas,f2)
cmd += ' --out {}/Correlations/metal_dmat_{}_in_{}_{}_in_{}_z_{}_{}.fits.gz'.format(path_here,strl1,f1,strl2,f2,zmin,zmax)
cmd += ' --z-cut-min {} --z-cut-max {}'.format(zmin, zmax)
cmd += ' --fid-Om 0.314569514863487 --fid-Or 7.97505418919554e-5'
cmd += ' --nside 16'
cmd += ' --rej 0.999'
cmd += ' --abs-igm '
for m in metList[f1]:
cmd += m+' '
cmd += ' --abs-igm2 '
for m in metList[f2]:
cmd += m+' '
cmd = cmd.replace('(','\(').replace(')','\)')
print('')
print(cmd)
if do_met:
start = time.time()
subprocess.call(cmd, shell=True)
done = time.time()
print('\n\nTime spent in picca_metal_dmat (cross) = {} minutes\n\n'.format((done-start)/60))
def parse():
parser=argparse.ArgumentParser(
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
description="Measure a particular correlation")
parser.add_argument('--corr_type', type=str, required=True,
help="Correlation type (LyaLya, LyaQSO, LyaLyb or LybQSO)")
parser.add_argument('--zmin', type=float, default=0.0, help="minimum redshift")
parser.add_argument('--zmax', type=float, default=10.0, help="maximum redshift")
parser.add_argument('--do_corr', action = "store_true",
help="compute correlation (auto or cross)")
parser.add_argument('--do_dist', action = "store_true",
help="compute distortion matrix (assumes correlation is done)")
parser.add_argument('--do_met', action = "store_true",
help="compute metal distortion matrix")
return parser.parse_args()
print('start job')
args = parse()
corr_type=args.corr_type
zmin=args.zmin
zmax=args.zmax
do_corr=args.do_corr
do_dist=args.do_dist
do_met=args.do_met
if corr_type == 'LyaQSO':
print('compute LyaQSO')
send_xcf(zmin,zmax,do_corr=do_corr,do_dist=do_dist,do_met=do_met)
print('\n\n\n\n')
elif corr_type == 'LyaLya':
print('compute LyaLya')
send_cf(zmin,zmax,do_corr=do_corr,do_dist=do_dist,do_met=do_met)
print('\n\n\n\n')
elif corr_type == 'LybQSO':
print('compute LybQSO')
send_xcf(zmin,zmax,do_corr=do_corr,do_dist=do_dist,do_met=do_met,f='LYB')
print('\n\n\n\n')
elif corr_type == 'LyaLyb':
print('compute LyaLyb')
send_cf_cross(zmin,zmax,do_corr=do_corr,do_dist=do_dist,do_met=do_met)
print('\n\n\n\n')
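The metal-distortion commands above backslash-escape parentheses in absorber names such as `SiII(1260)` before handing the command line to `subprocess.call(..., shell=True)`, because `(` and `)` are special characters to the shell. A minimal sketch of that escaping step (the `shell_escape_parens` helper is hypothetical, not part of picca):

```python
def shell_escape_parens(name):
    # Backslash-escape '(' and ')' so the shell treats them literally.
    return name.replace('(', '\\(').replace(')', '\\)')

metals = ['CIV(eff)', 'SiII(1260)', 'SiIII(1207)']
escaped = ' '.join(shell_escape_parens(m) for m in metals)
print(escaped)  # CIV\(eff\) SiII\(1260\) SiIII\(1207\)
```

In new code, `shlex.quote` (or passing an argument list with `shell=False`) would be the more robust way to get such names through a shell safely.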
{
"filename": "sd_fit.py",
"repo_name": "spedas/pyspedas",
"repo_path": "pyspedas_extracted/pyspedas-master/pyspedas/projects/erg/ground/radar/superdarn/sd_fit.py",
"type": "Python"
}
import cdflib
import fnmatch
import numpy as np
from copy import deepcopy
from pytplot import get_data, store_data, options, clip, ylim, zlim
from pytplot import tnames
from ....satellite.erg.load import load
from ....satellite.erg.get_gatt_ror import get_gatt_ror
from .get_sphcntr import get_sphcntr
"""
;Internal routine to get the table of the pixel
;centers from the table of the pixel corners.
"""
def get_pixel_cntr(tbl_array):
dim_tuple = tbl_array.shape
rgmax = dim_tuple[0] - 1
azmax = dim_tuple[1] - 1
cnttbl = np.zeros(shape=(rgmax, azmax, 2))
for i in range(rgmax):
for j in range(azmax):
axis_0_indices_array = np.repeat(
np.array([[i, i + 1, i + 1, i]]).T, 4, 1
).T.reshape(16)
axis_1_indices_array = np.array([j] * 8 + [j + 1] * 8)
lonarr = tbl_array[
tuple([axis_0_indices_array, axis_1_indices_array, 0])
].reshape(4, 4)
latarr = tbl_array[
tuple([axis_0_indices_array, axis_1_indices_array, 1])
].reshape(4, 4)
pos_array = get_sphcntr(latarr, lonarr)
cnttbl[i, j, 1] = pos_array[0]
cnttbl[i, j, 0] = pos_array[1]
return cnttbl
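`get_pixel_cntr` walks every cell of an `(nr+1, na+1, 2)` corner grid and asks `get_sphcntr` for the spherical centroid of its four corners. As a rough standalone illustration of the same corner-to-center reduction, here is a vectorized planar sketch that substitutes a flat four-corner average for the spherical centroid (so its numbers would differ from the real routine away from small, low-latitude cells):

```python
import numpy as np

def get_pixel_cntr_flat(tbl):
    # tbl: (nr+1, na+1, 2) lon/lat corner grid -> (nr, na, 2) cell centers,
    # each center being the planar mean of the cell's four corners.
    return 0.25 * (tbl[:-1, :-1] + tbl[1:, :-1] + tbl[:-1, 1:] + tbl[1:, 1:])

# A toy 4x3 corner grid whose coordinates are just the grid indices.
corners = np.stack(np.meshgrid(np.arange(4.0), np.arange(3.0), indexing="ij"),
                   axis=-1)
print(get_pixel_cntr_flat(corners)[0, 0])  # [0.5 0.5]
```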
from typing import List, Optional, Union
def sd_fit(
trange: List[str] = ["2018-10-18/00:00:00", "2018-10-18/02:00:00"],
suffix: str = "",
site: Union[str, List[str]] = "all",
get_support_data: bool = False,
varformat: Optional[str] = None,
varnames: List[str] = [],
downloadonly: bool = False,
notplot: bool = False,
no_update: bool = False,
uname: Optional[str] = None,
passwd: Optional[str] = None,
time_clip: bool = False,
ror: bool = True,
compact: bool = False,
force_download: bool = False,
) -> List[str]:
"""
Load SuperDARN data from ERG Science Center
Parameters
----------
trange: list of str
time range of interest [starttime, endtime] with the format
        ['YYYY-MM-DD','YYYY-MM-DD'] or to specify more or less than a day
        ['YYYY-MM-DD/hh:mm:ss','YYYY-MM-DD/hh:mm:ss']
        Default: ['2018-10-18/00:00:00', '2018-10-18/02:00:00']
suffix: str
The tplot variable names will be given this suffix. Default: ''
site: str or list of str
The site or list of sites to load. Valid values: 'ade', 'adw', 'bks', 'bpk',
'cly', 'cve', 'cvw', 'dce', 'fhe', 'fhw', 'fir', 'gbr', 'hal', 'han', 'hok',
'hkw', 'inv', 'kap', 'ker', 'kod', 'ksr', 'mcm', 'pgr', 'pyk', 'rkn', 'san',
        'sas', 'sps', 'sto', 'sye', 'sys', 'tig', 'unw', 'wal', 'zho', 'lyr', 'all'
Default: ['all']
get_support_data: bool
If true, data with an attribute "VAR_TYPE" with a value of "support_data"
or 'data' will be loaded into tplot. Default: False
varformat: str
The CDF file variable formats to load into tplot. Wildcard character
"*" is accepted. Default: None (all variables will be loaded).
varnames: list of str
List of variable names to load. Default: [] (all variables will be loaded)
downloadonly: bool
Set this flag to download the CDF files, but not load them into
tplot variables. Default: False
notplot: bool
Return the data in hash tables instead of creating tplot variables. Default: False
no_update: bool
If set, only load data from your local cache. Default: False
uname: str
User name. Default: None
passwd: str
Password. Default: None
time_clip: bool
Time clip the variables to exactly the range specified in the trange keyword. Default: False
ror: bool
If set, print PI info and rules of the road. Default: True
compact: bool
If True, leave only minimal set of the variables. Default: False
force_download: bool
Download file even if local version is more recent than server version
Default: False
    Returns
    -------
    list of str
        List of tplot variables created (or a dict of data tables if notplot=True)

    Examples
    --------
>>> import pyspedas
>>> sd_vars=pyspedas.projects.erg.sd_fit(trange=['2018-10-14/00:00:00','2018-10-14/02:00:00'],site='ade')
>>> print(sd_vars)
"""
valid_sites = [
"ade",
"adw",
"bks",
"bpk",
"cly",
"cve",
"cvw",
"dce",
"fhe",
"fhw",
"fir",
"gbr",
"hal",
"han",
"hok",
"hkw",
"inv",
"kap",
"ker",
"kod",
"ksr",
"mcm",
"pgr",
"pyk",
"rkn",
"san",
"sas",
"sps",
"sto",
"sye",
"sys",
"tig",
"unw",
"wal",
"zho",
"lyr",
]
if isinstance(site, str):
site_code = site.lower()
site_code = site_code.split(" ")
elif isinstance(site, list):
site_code = []
for i in range(len(site)):
site_code.append(site[i].lower())
if "all" in site_code:
site_code = valid_sites
site_code = list(set(site_code).intersection(valid_sites))
    new_cdflib = cdflib.__version__ > "0.4.9"
if notplot:
loaded_data = {}
else:
loaded_data = []
for site_input in site_code:
prefix = "sd_" + site_input + "_"
file_res = 3600.0 * 24.0
pathformat = (
"ground/radar/sd/fitacf/"
+ site_input
+ "/%Y/sd_fitacf_l2_"
+ site_input
+ "_%Y%m%d*.cdf"
)
loaded_data_temp = load(
pathformat=pathformat,
file_res=file_res,
trange=trange,
prefix=prefix,
suffix=suffix,
get_support_data=get_support_data,
varformat=varformat,
downloadonly=downloadonly,
notplot=notplot,
time_clip=time_clip,
no_update=no_update,
uname=uname,
passwd=passwd,
force_download=force_download,
)
if notplot:
loaded_data.update(loaded_data_temp)
else:
loaded_data += loaded_data_temp
if (len(loaded_data_temp) > 0) and ror:
try:
gatt = get_gatt_ror(downloadonly, loaded_data)
print("############## RULES OF THE ROAD ################")
print(gatt["Rules_of_use"])
print("############## RULES OF THE ROAD ################")
            except Exception:
                print('Printing PI info and rules of the road failed')
if (not downloadonly) and (not notplot) and (len(loaded_data_temp) > 0):
t_plot_name_list = tnames(
[prefix + "pwr*", prefix + "spec*", prefix + "vlos*"]
)
t_plot_name_list = list(set(t_plot_name_list).intersection(loaded_data))
for t_plot_name in t_plot_name_list:
clip(t_plot_name, -9000, 9000)
t_plot_name_list = tnames([prefix + "elev*"])
t_plot_name_list = list(set(t_plot_name_list).intersection(loaded_data))
if len(t_plot_name_list) > 5:
for t_plot_name in t_plot_name_list:
clip(t_plot_name, -9000, 9000)
azim_no_name_list = list(
set(tnames(prefix + "*azim_no_?" + suffix)).intersection(loaded_data)
)
number_string_list = []
for azim_no_name in azim_no_name_list:
number_string_list.append(azim_no_name.split("_")[4][0])
for number_string in number_string_list:
site_input_upper = site_input.upper()
# ;Set labels for some tplot variables
options(
prefix + "pwr_" + number_string + suffix,
"ysubtitle",
"[range gate]",
)
options(
prefix + "pwr_" + number_string + suffix,
"ztitle",
"Backscatter power [dB]",
)
options(
prefix + "pwr_" + number_string + suffix,
"ytitle",
site_input_upper + "\nall beams",
)
options(
prefix + "pwr_err_" + number_string + suffix,
"ytitle",
site_input_upper + "\nall beams",
)
options(
prefix + "pwr_err_" + number_string + suffix,
"ysubtitle",
"[range gate]",
)
options(
prefix + "pwr_err_" + number_string + suffix,
"ztitle",
"power err [dB]",
)
options(
prefix + "spec_width_" + number_string + suffix,
"ytitle",
site_input_upper + "\nall beams",
)
options(
prefix + "spec_width_" + number_string + suffix,
"ysubtitle",
"[range gate]",
)
options(
prefix + "spec_width_" + number_string + suffix,
"ztitle",
"Spec. width [m/s]",
)
options(
prefix + "spec_width_err_" + number_string + suffix,
"ytitle",
site_input_upper + "\nall beams",
)
options(
prefix + "spec_width_err_" + number_string + suffix,
"ysubtitle",
"[range gate]",
)
options(
prefix + "spec_width_err_" + number_string + suffix,
"ztitle",
"Spec. width err [m/s]",
)
                if prefix + "vlos_" + number_string + suffix not in loaded_data:
vlos_notplot_dictionary = load(
pathformat=pathformat,
file_res=file_res,
trange=trange,
prefix=prefix,
suffix=suffix,
get_support_data=get_support_data,
varformat="vlos_" + number_string,
downloadonly=downloadonly,
notplot=True,
time_clip=time_clip,
no_update=no_update,
uname=uname,
passwd=passwd,
)
vlos_tplot_name = prefix + "vlos_" + number_string + suffix
if len(vlos_notplot_dictionary) > 0:
store_data(
vlos_tplot_name,
data={
"x": vlos_notplot_dictionary[vlos_tplot_name]["x"],
"y": vlos_notplot_dictionary[vlos_tplot_name]["y"],
"v1": vlos_notplot_dictionary[vlos_tplot_name]["v"],
"v2": np.arange(
vlos_notplot_dictionary[vlos_tplot_name]["y"].shape[
2
]
),
},
attr_dict={
"CDF": vlos_notplot_dictionary[vlos_tplot_name]["CDF"]
},
)
clip(vlos_tplot_name, -9000, 9000)
options(vlos_tplot_name, "spec", 1)
loaded_data.append(vlos_tplot_name)
options(
prefix + "vlos_" + number_string + suffix,
"ytitle",
site_input_upper + "\nall beams",
)
options(
prefix + "vlos_" + number_string + suffix,
"ysubtitle",
"[range gate]",
)
options(
prefix + "vlos_" + number_string + suffix,
"ztitle",
"Doppler velocity [m/s]",
)
options(
prefix + "vlos_err_" + number_string + suffix,
"ytitle",
site_input_upper + "\nall beams",
)
options(
prefix + "vlos_err_" + number_string + suffix,
"ysubtitle",
"[range gate]",
)
options(
prefix + "vlos_err_" + number_string + suffix,
"ztitle",
"Vlos err [m/s]",
)
if (
prefix + "elev_angle_" + number_string + suffix in loaded_data
): # need to get_support_data=True
options(
prefix + "elev_angle_" + number_string + suffix,
"ytitle",
site_input_upper + "\nall beams",
)
options(
prefix + "elev_angle_" + number_string + suffix,
"ysubtitle",
"[range gate]",
)
options(
prefix + "elev_angle_" + number_string + suffix,
"ztitle",
"Elev. angle [deg]",
)
options(
prefix + "echo_flag_" + number_string + suffix,
"ytitle",
site_input_upper + "\nall beams",
)
options(
prefix + "echo_flag_" + number_string + suffix,
"ysubtitle",
"[range gate]",
)
options(
prefix + "echo_flag_" + number_string + suffix,
"ztitle",
"1: iono. echo",
)
options(
prefix + "quality_" + number_string + suffix,
"ytitle",
site_input_upper + "\nall beams",
)
options(
prefix + "quality_" + number_string + suffix,
"ysubtitle",
"[range gate]",
)
options(
prefix + "quality_" + number_string + suffix, "ztitle", "quality"
)
options(
prefix + "quality_flag_" + number_string + suffix,
"ytitle",
site_input_upper + "\nall beams",
)
options(
prefix + "quality_flag_" + number_string + suffix,
"ysubtitle",
"[range gate]",
)
options(
prefix + "quality_flag_" + number_string + suffix,
"ztitle",
"quality flg",
)
# ;Split vlos_? tplot variable into 3 components
get_data_vlos = get_data(prefix + "vlos_" + number_string + suffix)
if get_data_vlos is not None:
if get_data_vlos[1].ndim >= 3:
get_metadata_vlos = get_data(
prefix + "vlos_" + number_string + suffix, metadata=True
)
store_data(
prefix + "vnorth_" + number_string + suffix,
data={
"x": get_data_vlos[0],
"y": get_data_vlos[1][:, :, 0],
"v": get_data_vlos[2],
},
attr_dict=get_metadata_vlos,
)
options(
prefix + "vnorth_" + number_string + suffix,
"ztitle",
"LOS V Northward [m/s]",
)
loaded_data.append(prefix + "vnorth_" + number_string + suffix)
                        if get_data_vlos[1].shape[2] >= 2:  # need a 2nd component for veast
store_data(
prefix + "veast_" + number_string + suffix,
data={
"x": get_data_vlos[0],
"y": get_data_vlos[1][:, :, 1],
"v": get_data_vlos[2],
},
attr_dict=get_metadata_vlos,
)
options(
prefix + "veast_" + number_string + suffix,
"ztitle",
"LOS V Eastward [m/s]",
)
loaded_data.append(
prefix + "veast_" + number_string + suffix
)
                            if get_data_vlos[1].shape[2] >= 3:  # need a 3rd component for vlos
store_data(
prefix + "vlos_" + number_string + suffix,
data={
"x": get_data_vlos[0],
"y": get_data_vlos[1][:, :, 2],
"v": get_data_vlos[2],
},
attr_dict=get_metadata_vlos,
)
options(
prefix + "vlos_" + number_string + suffix,
"ztitle",
"LOS Doppler vel. [m/s]",
)
# ;Combine iono. echo and ground echo for vlos
v_var_names = ["vlos_", "vnorth_", "veast_"]
flag_data = get_data(prefix + "echo_flag_" + number_string + suffix)
for v_var in v_var_names:
v_var_data = get_data(prefix + v_var + number_string + suffix)
if v_var_data is not None:
v_var_metadata = get_data(
prefix + v_var + number_string + suffix, metadata=True
)
g_data_y = np.where(
flag_data[1] == 1.0, np.nan, v_var_data[1]
)
v_var_data_y = np.where(
flag_data[1] != 1.0, np.nan, v_var_data[1]
)
max_rg = np.nanmax(v_var_data[2]) + 1
store_data(
prefix + v_var + "iscat_" + number_string + suffix,
data={
"x": v_var_data[0],
"y": v_var_data_y,
"v": v_var_data[2],
},
attr_dict=v_var_metadata,
)
options(
prefix + v_var + "iscat_" + number_string + suffix,
"ytitle",
" ",
)
options(
prefix + v_var + "iscat_" + number_string + suffix,
"ysubtitle",
" ",
)
options(
prefix + v_var + "iscat_" + number_string + suffix,
"ztitle",
" ",
)
options(
prefix + v_var + "iscat_" + number_string + suffix,
"spec",
1,
)
loaded_data.append(
prefix + v_var + "iscat_" + number_string + suffix
)
metadata_for_gscat = deepcopy(v_var_metadata)
                        metadata_for_gscat["plot_options"]["extras"][
                            "fill_color"
                        ] = 5  # IDL-style options such as 'fill_color:5' have not been implemented
store_data(
prefix + v_var + "gscat_" + number_string + suffix,
data={
"x": v_var_data[0],
"y": g_data_y,
"v": v_var_data[2],
},
attr_dict=metadata_for_gscat,
)
options(
prefix + v_var + "gscat_" + number_string + suffix,
"ytitle",
" ",
)
options(
prefix + v_var + "gscat_" + number_string + suffix,
"ysubtitle",
" ",
)
options(
prefix + v_var + "gscat_" + number_string + suffix,
"ztitle",
" ",
)
options(
prefix + v_var + "gscat_" + number_string + suffix,
"spec",
1,
)
loaded_data.append(
prefix + v_var + "gscat_" + number_string + suffix
)
store_data(
prefix + v_var + "bothscat_" + number_string + suffix,
data=[
prefix + v_var + "iscat_" + number_string + suffix,
prefix + v_var + "gscat_" + number_string + suffix,
],
)
options(
prefix + v_var + "bothscat_" + number_string + suffix,
"yrange",
[0, max_rg],
)
loaded_data.append(
prefix + v_var + "bothscat_" + number_string + suffix
)
"""
Currently, '*iscat_*' and '*bothscat_*' are almost same plot outputs.
Because, options like, 'fill_color:5' of IDL for '*gscat_*' have not implemented.
"""
# ;Set the z range explicitly for some tplot variables
zlim(prefix + "pwr_" + number_string + suffix, 0.0, 30.0)
zlim(prefix + "pwr_err_" + number_string + suffix, 0.0, 30.0)
zlim(prefix + "spec_width_" + number_string + suffix, 0.0, 200.0)
zlim(prefix + "spec_width_err_" + number_string + suffix, 0.0, 300.0)
# zlim for '*vlos_*scat_*'
t_names_raw = tnames(prefix + "vlos_*scat_" + number_string + suffix)
t_names_remove_space = [t_name.split(" ")[0] for t_name in t_names_raw]
t_plot_name_list = list(
set(t_names_remove_space).intersection(loaded_data)
)
for t_plot_name in t_plot_name_list:
zlim(t_plot_name, -400.0, 400.0)
# zlim for '*vnorth_*scat_*'
t_names_raw = tnames(prefix + "vnorth_*scat_" + number_string + suffix)
t_names_remove_space = [t_name.split(" ")[0] for t_name in t_names_raw]
t_plot_name_list = list(
set(t_names_remove_space).intersection(loaded_data)
)
for t_plot_name in t_plot_name_list:
zlim(t_plot_name, -400.0, 400.0)
# zlim for '*veast_*scat_*'
t_names_raw = tnames(prefix + "veast_*scat_" + number_string + suffix)
t_names_remove_space = [t_name.split(" ")[0] for t_name in t_names_raw]
t_plot_name_list = list(
set(t_names_remove_space).intersection(loaded_data)
)
for t_plot_name in t_plot_name_list:
zlim(t_plot_name, -400.0, 400.0)
zlim(prefix + "vlos_err_" + number_string + suffix, 0.0, 300.0)
# ;Fill values --> NaN
get_data_vars_pwr = get_data(prefix + "pwr_" + number_string + suffix)
if get_data_vars_pwr is not None:
pwr_y = deepcopy(get_data_vars_pwr[1])
indices_array_tuple = np.where(np.isfinite(pwr_y) == False)
var_name_list = ["echo_flag_", "quality_", "quality_flag_"]
for var_name in var_name_list:
t_plot_name = prefix + var_name + number_string + suffix
get_data_vars = get_data(t_plot_name)
get_metadata_vars = get_data(t_plot_name, metadata=True)
if get_data_vars is not None:
val_array = deepcopy(get_data_vars[1].astype(np.float64))
val_array[indices_array_tuple] = np.nan
store_data(
t_plot_name,
data={
"x": get_data_vars[0],
"y": val_array,
"v": get_data_vars[2],
},
attr_dict=get_metadata_vars,
)
# ;Reassign scan numbers for the combined data
if (
prefix + "scanstartflag_" + number_string + suffix in loaded_data
) and (
prefix + "scanno_" + number_string + suffix in loaded_data
): # need to get_support_data=True
t_plot_name = prefix + "scanstartflag_" + number_string + suffix
scanstartflag_data = get_data(t_plot_name)
if scanstartflag_data is not None:
scflg = abs(scanstartflag_data[1])
try:
scno = np.full(
shape=scflg.shape, fill_value=-1, dtype=np.int64
)
except:
scno = np.full(
shape=scflg.shape, fill_value=-1, dtype=np.int32
)
scno_t = 0
scno[0] = scno_t
gt_1_indices_array = np.where(scflg > 0)[0]
for i in range(gt_1_indices_array.size - 1):
scno[gt_1_indices_array[i] : gt_1_indices_array[i + 1]] = i
scno[gt_1_indices_array[i + 1] :] = i + 1
t_plot_name = prefix + "scanno_" + number_string + suffix
get_data_var_scanno = get_data(t_plot_name)
if get_data_var_scanno is not None:
get_metadata_var_scanno = get_data(
t_plot_name, metadata=True
)
store_data(
t_plot_name,
data={"x": get_data_var_scanno[0], "y": scno},
attr_dict=get_metadata_var_scanno,
)
"""
;Load the position table(s) ;;;;;;;;;;;;;;;;;;
;Currently supports SD fitacf CDFs containing up to 4 pos. tables.
"""
tbllist = [
"tbl_0",
"tbl_1",
"tbl_2",
"tbl_3",
"tbl_4",
"tbl_5",
"tbl_6",
"tbl_7",
"tbl_8",
"tbl_9",
]
timelist = [
"time_0",
"time_1",
"time_2",
"time_3",
"time_4",
"time_5",
"time_6",
"time_7",
"time_8",
"time_9",
]
get_metadata_vars = get_data(loaded_data[-1], metadata=True)
if get_metadata_vars is not None:
datfiles = deepcopy(get_metadata_vars["CDF"]["FILENAME"])
if type(datfiles) is str:
datfiles = [datfiles]
position_tbl_dictionary = {}
for i in range(10):
position_tbl_dictionary[str(i)] = {
"time_input": [],
"tbl_input": [],
"cnttbl_input": [],
}
if len(datfiles) > 0:
for file_name in datfiles:
cdf_file = cdflib.CDF(file_name)
cdf_info = cdf_file.cdf_info()
if new_cdflib:
all_cdf_variables = cdf_info.rVariables + cdf_info.zVariables
else:
all_cdf_variables = cdf_info["rVariables"] + cdf_info["zVariables"]
timevn = fnmatch.filter(all_cdf_variables, "Epoch_?")
ptblvn = fnmatch.filter(all_cdf_variables, "position_tbl_?")
timevn.sort()
ptblvn.sort()
for j in range(len(ptblvn)):
tv_name = timevn[j]
stblno = tv_name.split("_")[-1]
pv_name = ptblvn[j]
time_array = cdf_file.varget(tv_name)
tbl_array = cdf_file.varget(pv_name)
cnttbl = get_pixel_cntr(tbl_array)
position_tbl_dictionary[stblno]["time_input"] += [
time_array[0],
time_array[-1],
]
dim_tuple = tbl_array.shape
tbl2_array = tbl_array.reshape(
1, dim_tuple[0], dim_tuple[1], dim_tuple[2]
)
cnttbl2_array = cnttbl.reshape(
1, dim_tuple[0] - 1, dim_tuple[1] - 1, dim_tuple[2]
)
if len(position_tbl_dictionary[stblno]["tbl_input"]) == 0:
position_tbl_dictionary[stblno][
"tbl_input"
] = np.concatenate([tbl2_array, tbl2_array], axis=0)
else:
position_tbl_dictionary[stblno][
"tbl_input"
] = np.concatenate(
[
position_tbl_dictionary[stblno]["tbl_input"],
tbl2_array,
tbl2_array,
],
axis=0,
)
if (
len(position_tbl_dictionary[stblno]["cnttbl_input"])
== 0
):
position_tbl_dictionary[stblno][
"cnttbl_input"
] = np.concatenate(
[cnttbl2_array, cnttbl2_array], axis=0
)
else:
position_tbl_dictionary[stblno][
"cnttbl_input"
] = np.concatenate(
[
position_tbl_dictionary[stblno]["cnttbl_input"],
cnttbl2_array,
cnttbl2_array,
],
axis=0,
)
for t_plot_suffix_number in position_tbl_dictionary.keys():
if (
len(
position_tbl_dictionary[t_plot_suffix_number][
"time_input"
]
)
>= 2
):
input_tplot_time_array = (
np.array(
position_tbl_dictionary[t_plot_suffix_number][
"time_input"
]
)
/ 1000.0
- 719528.0 * 24.0 * 3600.0
)
t_plot_name = (
prefix + "position_tbl_" + t_plot_suffix_number + suffix
)
store_data(
t_plot_name,
data={
"x": input_tplot_time_array,
"y": position_tbl_dictionary[t_plot_suffix_number][
"tbl_input"
],
},
)
loaded_data.append(t_plot_name)
t_plot_name = (
prefix
+ "positioncnt_tbl_"
+ t_plot_suffix_number
+ suffix
)
store_data(
t_plot_name,
data={
"x": input_tplot_time_array,
"y": position_tbl_dictionary[t_plot_suffix_number][
"cnttbl_input"
],
},
)
loaded_data.append(t_plot_name)
if compact: # ;Leave only minimal set of the variables if compact=True.
search_var_list = [
"*cpid*",
"*channel*",
"*int_time*",
"*azim_no*",
"*pwr_err*",
"*spec_width_err*",
"*vlos_err*",
"*elev_angle*",
"*elev_angle_err*",
"*phi0*",
"*phi0_err*",
"*echo_flag*",
"*quality*",
"*quality_flag*",
"*scanno*",
"*scanstartflag*",
"*lagfr*",
"*smsep*",
"*nrang_max*",
"*tfreq*",
"*noise*",
"*num_ave*",
"*txpl*",
"*vnorth*",
"*veast*",
"*vlos_bothscat*",
"*vlos_iscat*",
"*vlos_gscat*",
"*vnorth_iscat*",
"*vnorth_gscat*",
"*vnorth_bothscat*",
"*veast_iscat*",
"*veast_gscat*",
"*veast_bothscat*",
"*position_tbl*",
"*positioncnt_tbl*",
]
delete_tplot_name_list = list(
set(tnames(search_var_list)).intersection(loaded_data)
)
if len(delete_tplot_name_list) > 0:
store_data(delete_tplot_name_list, delete=True)
loaded_data = list(set(loaded_data).difference(delete_tplot_name_list))
return loaded_data
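The position-table timestamps above are converted from CDF_EPOCH (milliseconds since 0000-01-01) to Unix seconds by dividing by 1000 and subtracting 719528 days. That magic number can be checked against Python's proleptic Gregorian calendar, remembering that year 0 is a 366-day leap year:

```python
from datetime import date

# Days from 0001-01-01 up to 1970-01-01 in the proleptic Gregorian calendar.
days_from_year_1 = date(1970, 1, 1).toordinal() - date(1, 1, 1).toordinal()

# CDF_EPOCH counts from 0000-01-01; year 0 adds 366 more days.
days_from_year_0 = days_from_year_1 + 366
print(days_from_year_0)  # 719528

def cdf_epoch_to_unix(epoch_ms):
    # Mirrors the conversion applied to the position-table times above.
    return epoch_ms / 1000.0 - days_from_year_0 * 24.0 * 3600.0
```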
{
"filename": "mArchiveDownload.py",
"repo_name": "Caltech-IPAC/Montage",
"repo_path": "Montage_extracted/Montage-main/python/MontagePy/mArchiveDownload.py",
"type": "Python"
}
#!/bin/env python
import os
import sys
import ssl
import json
import bz2
import urllib.parse
from urllib.request import urlopen
def mArchiveDownload(survey, location, size, path):
"""
.
mArchiveDownload populates a directory with raw images from
one of several astronomical missions (2MASS, SDSS, WISE,
DSS, IRAC, MIPS). These images are all in FITS format
and suitable for reprojection, moaicking, etc.
Parameters
----------
survey: str
The survey and band information for the mission
(e.g. "2MASS J" or "SDSS g"). See
http://montage.ipac.caltech.edu/applications/ArchiveList
for a complete list.
location: str
Coordinates or name of an astronomical object
(e.g. "4h23m11s -12d14m32.3s", "Messier 017").
size: float
Region size in degrees.
path: str
Directory for output files.
"""
debug = False
# Build the URL to get image metadata
url = "http://montage.ipac.caltech.edu/cgi-bin/ArchiveList/nph-archivelist?survey=" \
+ urllib.parse.quote_plus(survey) \
+ "&location=" \
+ urllib.parse.quote_plus(location) \
+ "&size=" \
+ str(size) + "&units=deg&mode=JSON"
if debug:
print('DEBUG> url = "' + url + '"')
# Retrieve the image metadata and convert
# the JSON to a Python dictionary
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
fjson = urlopen(url, context=ctx)
data = json.load(fjson)
if debug:
print("DEBUG> data: ")
print(data)
nimages = len(data)
if debug:
print("DEBUG> nimages = " + str(nimages))
# We need to check the given directory,
# whether it exists, whether it is writeable,
# etc. We'll do it by trying to create it,
# then trying to write the image data to it.
try:
if not os.path.exists(path):
os.makedirs(path)
except:
return("{'status': '1', 'msg': 'Cannot create output directory.'}")
# Retrieve all the images into the data directory
chunk_size = 4096
try:
for index in range(0,nimages):
datafile = path + "/" + data[index]['file']
url = data[index]['url']
bzfile = False
if len(datafile) > 4 and datafile[-4:] == '.bz2':
datafile = datafile[:-4]
bzfile = True
##### r = requests.get(url, stream=True, verify=False)
r = urlopen(url, context=ctx)
decompressor = bz2.BZ2Decompressor()
with open(datafile, 'wb') as fd:
while True:
chunk = r.read(chunk_size)
if not chunk:
break
if bzfile:
decompressed = decompressor.decompress(chunk)
if decompressed:
fd.write(decompressed)
else:
fd.write(chunk)
fd.close()
if debug:
print(datafile)
except:
return("{'status': '1', 'msg': 'Error writing data'}")
# Success
return("{'status': '0', 'count': " + str(nimages) + "}")
|
Caltech-IPACREPO_NAMEMontagePATH_START.@Montage_extracted@Montage-main@python@MontagePy@mArchiveDownload.py@.PATH_END.py
|
{
"filename": "maps.py",
"repo_name": "dhanson/quicklens",
"repo_path": "quicklens_extracted/quicklens-master/quicklens/maps.py",
"type": "Python"
}
|
# quicklens/maps.py
# --
# this module contains classes and subroutines
# for working with flat-sky temperature and polarization maps,
# as well as their 2D fourier transforms.
# overview of classes:
# * pix = descriptor class for a map pixelization with rectangular pixels.
# * rmap = real-valued map class.
# * tqumap = container for three real-valued maps (corresponding to temperature T, Q and U polarization).
# * tqumap_wt = container for a 3x3 weight (or covariance matrix) object for T, Q, U.
# * rfft = fourier transform of an rmap.
# * cfft = fourier transform of a complex-valued map.
# * tebfft = fourier transform of a tqumap, divided into temperature T and E/B-mode polarization.
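The rmap/rfft pair below uses a symmetric Fourier normalization: tfac = sqrt(dx*dy/(nx*ny)) on the forward transform and its reciprocal on the inverse, so the round trip is exact. A minimal standalone numpy sketch of that convention (quicklens itself targets Python 2; this snippet is illustrative only):

```python
import numpy as np

# Grid: nx x ny pixels of side dx x dy radians.
nx, ny = 32, 16
dx = dy = np.pi / 180. / 60.  # 1 arcmin pixels

rng = np.random.default_rng(0)
m = rng.standard_normal((ny, nx))

# Forward: map -> Fourier, with the quicklens normalization.
tfac = np.sqrt((dx * dy) / (nx * ny))
fft = np.fft.rfft2(m) * tfac

# Inverse: Fourier -> map, with the reciprocal factor.
m_back = np.fft.irfft2(fft) * np.sqrt((nx * ny) / (dx * dy))

assert np.allclose(m, m_back)
```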
import hashlib
import numpy as np
import spec
import util
class pix(object):
def __init__(self, nx, dx, ny=None, dy=None):
if ny is None:
ny = nx
if dy is None:
dy = dx
self.nx = nx; self.ny = ny; self.dx = dx; self.dy = dy
def hashdict(self):
return { 'nx' : self.nx,
'dx' : self.dx,
'ny' : self.ny,
'dy' : self.dy }
def __eq__(self, other):
return self.compatible(other)
def compatible(self, other):
return ( (self.nx == other.nx) and
(self.ny == other.ny) and
(self.dx == other.dx) and
(self.dy == other.dy) )
def is_rmap(obj):
""" ducktyping check of whether an object is an rmap. """
return ( hasattr(obj, 'nx') and hasattr(obj, 'dx') and
hasattr(obj, 'ny') and hasattr(obj, 'dy') and
hasattr(obj, 'map') )
class rmap(pix):
def __init__(self, nx, dx, map=None, ny=None, dy=None):
""" class which contains a real-valued map """
super( rmap, self ).__init__(nx, dx, ny=ny, dy=dy)
if map is None:
self.map = np.zeros( (self.ny, self.nx) )
else:
self.map = map
assert( (self.ny, self.nx) == self.map.shape )
def hashdict(self):
""" returns a dictionary which should uniquely characterize the contents of this object """
return { 'pix' : super(rmap, self).hashdict(),
'map' : hashlib.sha1(self.map.view(np.uint8)).hexdigest() }
def copy(self):
return rmap( self.nx, self.dx,
self.map.copy(),
ny = self.ny, dy = self.dy )
def pad(self, nxp, nyp):
""" make a new map with dimensions nxp (>nx), nyp (>ny) with this map at its center. """
assert( nxp > self.nx )
assert( nyp > self.ny )
assert( np.mod( nxp - self.nx, 2 ) == 0 )
assert( np.mod( nyp - self.ny, 2 ) == 0 )
ret = rmap( nx=nxp, dx=self.dx, ny=nyp, dy=self.dy )
ret.map[ (nyp-self.ny)/2:(nyp+self.ny)/2, (nxp-self.nx)/2:(nxp+self.nx)/2 ] = self.map
return ret
def compatible(self, other):
""" check whether this map can be added, subtracted, etc. to the map 'other'. """
return ( hasattr(other, 'map') and
super(rmap, self).compatible(other) )
def get_rfft(self):
""" return an rfft object containing the real fourier transform of this map. """
ret = rfft( self.nx, self.dx,
ny = self.ny, dy = self.dy )
tfac = np.sqrt((self.dx * self.dy) / (self.nx * self.ny))
ret.fft[:] = np.fft.rfft2(self.map) * tfac
return ret
def get_cfft(self):
""" return a cfft object containing the full fourier transform of this map. """
return self.get_rfft().get_cfft()
def degrade(self, fac, intensive=False):
""" degrade the size/resolution of this map by fac in each dimension. """
assert( np.mod(self.nx, fac) == 0 )
assert( np.mod(self.ny, fac) == 0 )
ret = rmap( self.nx/fac, self.dx*fac, ny=self.ny/fac, dy=self.dy*fac )
for i in xrange(0,fac):
for j in xrange(0, fac):
ret.map += self.map[i::fac,j::fac]
if intensive == True:
ret.map *= (1./fac**2)
return ret
def prograde(self, fac):
""" increase the size/resolution of this map by fac in each dimension. """
ret = rmap( self.nx*fac, self.dx/fac, ny=self.ny*fac, dy=self.dy/fac )
for i in xrange(0,fac):
for j in xrange(0, fac):
ret.map[i::fac,j::fac] = self.map
return ret
def __mul__(self, other):
if False:
pass
elif np.isscalar(other):
ret = self.copy()
ret.map *= other
return ret
elif is_rmap(other):
assert( self.compatible(other) )
ret = self.copy()
ret.map *= other.map
return ret
elif (getattr(other, 'shape', ()) == (self.ny, self.nx)):
ret = self.copy()
ret.map *= other
return ret
else:
assert(0)
def __imul__(self, other):
if False:
pass
elif is_rmap(other):
assert( self.compatible(other) )
self.map *= other.map
return self
elif (getattr(other, 'shape', ()) == (self.ny, self.nx)):
self.map *= other
return self
else:
assert(0)
def __add__(self, other):
assert( self.compatible(other) )
return rmap( self.nx, self.dx, self.map+other.map, ny = self.ny, dy = self.dy )
def __sub__(self, other):
assert( self.compatible(other) )
return rmap( self.nx, self.dx, self.map-other.map, ny = self.ny, dy = self.dy )
def __iadd__(self, other):
assert( self.compatible(other) )
self.map += other.map
return self
def __isub__(self, other):
assert( self.compatible(other) )
self.map -= other.map
return self
def make_apod(self, fwhm_wght=10., wdth_bord=30., fwhm_apod=15., maxthresh=None, avgthresh=None):
""" construct an apodization mask, taking this map as a set of weights. the process is
(1) smooth the map, with a full-width-at-half-maximum (fwhm) given by fwhm_weight (in arcmin).
(2) threshold the smoothed weights, as a percentage of the smoothed maximum (maxthresh) and/or of the smoothed average (avgthresh).
(3) remove all pixels within a distance of wdth_bord (in arcmin) of any pixels which have been thresholded to zero.
(4) apply a gaussian apodization, with fwhm given by fwhm_apod. """
import scipy.ndimage
assert( self.dx == self.dy )
reso_arcmin = (self.dx * 180.*60./np.pi)
smoothwt = scipy.ndimage.gaussian_filter( self.map, fwhm_wght / reso_arcmin / (2.*np.sqrt(2.*np.log(2.))) )
threshwt = np.ones( smoothwt.shape )
if maxthresh != None:
threshwt[ np.where(smoothwt / smoothwt.flatten().max() < maxthresh) ] = 0.0
if avgthresh != None:
threshwt[ np.where(smoothwt / smoothwt.flatten()[np.where(smoothwt.flatten() != 0)].mean() < avgthresh) ] = 0.0
npix_bord = 2*int(wdth_bord/reso_arcmin)
xs, ys = np.meshgrid( np.linspace(-1., 1., npix_bord), np.linspace(-1., 1., npix_bord) )
kern_bord = np.ones( (npix_bord, npix_bord) )
kern_bord[ np.where( (xs**2 + ys**2) >= 1. ) ] = 0.0
bordwt = scipy.ndimage.minimum_filter( threshwt, footprint=kern_bord )
return rmap( self.nx, self.dx, ny=self.ny, dy=self.dy,
map=scipy.ndimage.gaussian_filter( bordwt, fwhm_apod / reso_arcmin / (2.*np.sqrt(2.*np.log(2.))) ) )
def is_tqumap(obj):
return ( hasattr(obj, 'nx') and hasattr(obj, 'dx') and
hasattr(obj, 'ny') and hasattr(obj, 'dy') and
hasattr(obj, 'tmap') and hasattr(obj, 'qmap') and hasattr(obj, 'umap') )
class tqumap(pix):
def __init__(self, nx, dx, maps=None, ny=None, dy=None):
""" class which contains temperature (T) and polarization (Q, U) maps. """
super( tqumap, self ).__init__(nx, dx, ny=ny, dy=dy)
if maps is None:
self.tmap = np.zeros( (self.ny, self.nx) )
self.qmap = np.zeros( (self.ny, self.nx) )
self.umap = np.zeros( (self.ny, self.nx) )
else:
[self.tmap, self.qmap, self.umap] = maps
assert( (self.ny, self.nx) == self.tmap.shape )
assert( (self.ny, self.nx) == self.qmap.shape )
assert( (self.ny, self.nx) == self.umap.shape )
def copy(self):
return tqumap( self.nx, self.dx,
[self.tmap.copy(), self.qmap.copy(), self.umap.copy()],
ny = self.ny, dy = self.dy )
def pad(self, nxp, nyp):
""" make a new map with dimensions nxp (>nx), nyp (>ny) with this map at its center. """
assert( nxp > self.nx )
assert( nyp > self.ny )
assert( np.mod( nxp - self.nx, 2 ) == 0 )
assert( np.mod( nyp - self.ny, 2 ) == 0 )
ret = tqumap( nx=nxp, dx=self.dx, ny=nyp, dy=self.dy )
for this, that in [ [self.tmap, ret.tmap], [self.qmap, ret.qmap], [self.umap, ret.umap] ]:
that[ (nyp-self.ny)/2:(nyp+self.ny)/2, (nxp-self.nx)/2:(nxp+self.nx)/2 ] = this
return ret
def threshold(self, vmin, vmax=None, vcut=0.):
""" returns a new, thresholded version of the current map.
threshold(v) -> set all pixels which don't satisfy (-|v| < val < |v|) equal to vcut.
threshold(min,max) -> set all pixels which don't satisfy (vmin < val < vmax) equal to vcut.
"""
if vmax is None:
vmin = -np.abs(vmin)
vmax = +np.abs(vmin)
assert( vmin < vmax )
ret = self.copy()
for m in [ret.tmap, ret.qmap, ret.umap]:
m[np.where(m < vmin)] = vcut
m[np.where(m > vmax)] = vcut
return ret
def compatible(self, other):
""" check whether this map can be added, subtracted, etc. to the map 'other'. """
return ( hasattr(other, 'tmap') and
hasattr(other, 'qmap') and
hasattr(other, 'umap') and
super(tqumap, self).compatible(other) )
def get_teb(self):
""" return a tebfft object containing the fourier transform of the T,Q,U maps. """
ret = tebfft( self.nx, self.dx, ny = self.ny, dy = self.dy )
lx, ly = ret.get_lxly()
tpi = 2.*np.arctan2(lx, -ly)
tfac = np.sqrt((self.dx * self.dy) / (self.nx * self.ny))
qfft = np.fft.rfft2(self.qmap) * tfac
ufft = np.fft.rfft2(self.umap) * tfac
ret.tfft[:] = np.fft.rfft2(self.tmap) * tfac
ret.efft[:] = (+np.cos(tpi) * qfft + np.sin(tpi) * ufft)
ret.bfft[:] = (-np.sin(tpi) * qfft + np.cos(tpi) * ufft)
return ret
def degrade(self, fac, intensive=False):
""" degrade the size/resolution of this map by fac in each dimension. """
assert( np.mod(self.nx, fac) == 0 )
assert( np.mod(self.ny, fac) == 0 )
ret = tqumap( self.nx/fac, self.dx*fac, ny=self.ny/fac, dy=self.dy*fac )
for i in xrange(0,fac):
for j in xrange(0, fac):
ret.tmap += self.tmap[i::fac,j::fac]
ret.qmap += self.qmap[i::fac,j::fac]
ret.umap += self.umap[i::fac,j::fac]
if intensive == True:
ret *= (1./fac**2)
return ret
def get_chi(self, pixel_radius=2, field='B'):
""" estimate the \chi_E or \chi_B fields from the Q and U maps using finite differences,
following Smith and Zaldarriaga (2006) http://arxiv.org/abs/astro-ph/0610059 """
assert( self.dx == self.dy )
if pixel_radius==1:
w = [1., 1./2, 0, 0, 0, 0]
elif pixel_radius==2:
w = [4./3, 2./3, -1./12, -1./24, 0, 0]
else:
assert(0)
w = np.asarray(w)
w /= self.dx * self.dy
def roll(array, shift):
out = array
if shift[0]:
out = np.roll( out, shift[0], axis=1 )
if shift[1]:
out = np.roll( out, shift[1], axis=0 )
return out
if field=='B':
q, u = self.qmap, -self.umap
elif field=='E':
u, q = self.qmap, -self.umap
else:
assert(0)
chi= ( w[0]*( roll(u,[+1,0]) - roll(u,[0,-1]) + roll(u,[-1,0]) - roll(u,[0,+1]) )
-w[1]*( roll(q,[-1,+1]) + roll(q,[+1,-1]) - roll(q,[+1,+1]) - roll(q,[-1,-1]) ) )
if w[2]:
chi += w[2]*( roll(u,[+2,0]) - roll(u,[0,-2]) + roll(u,[-2,0]) - roll(u,[0,+2]) )
if w[3]:
chi -= w[3]*( roll(q,[-2,+2]) + roll(q,[+2,-2]) - roll(q,[+2,+2]) - roll(q,[-2,-2]) )
return rmap( self.nx, self.dx, chi, ny=self.ny, dy=self.dy )
def get_t(self):
return rmap( self.nx, self.dx, map=self.tmap, ny=self.ny, dy=self.dy )
def get_q(self):
return rmap( self.nx, self.dx, map=self.qmap, ny=self.ny, dy=self.dy )
def get_u(self):
return rmap( self.nx, self.dx, map=self.umap, ny=self.ny, dy=self.dy )
def __mul__(self, other):
if False:
pass
elif is_tqumap(other):
assert( self.compatible(other) )
ret = self.copy()
ret.tmap *= other.tmap
ret.qmap *= other.qmap
ret.umap *= other.umap
return ret
elif is_tqumap_wt(other):
return other * self
elif (getattr(other, 'shape', ()) == (self.ny, self.nx)):
ret = self.copy()
ret.tmap *= other
ret.qmap *= other
ret.umap *= other
return ret
else:
assert(0)
def __imul__(self, other):
if False:
pass
elif is_tqumap(other):
assert( self.compatible(other) )
self.tmap *= other.tmap
self.qmap *= other.qmap
self.umap *= other.umap
return self
elif (getattr(other, 'shape', ()) == (self.ny, self.nx)):
self.tmap *= other
self.qmap *= other
self.umap *= other
return self
else:
assert(0)
def __add__(self, other):
assert( self.compatible(other) )
return tqumap( self.nx, self.dx,
[self.tmap + other.tmap, self.qmap + other.qmap, self.umap + other.umap],
ny = self.ny, dy = self.dy )
def __sub__(self, other):
assert( self.compatible(other) )
return tqumap( self.nx, self.dx,
[self.tmap - other.tmap, self.qmap - other.qmap, self.umap - other.umap],
ny = self.ny, dy = self.dy )
def __iadd__(self, other):
assert( self.compatible(other) )
self.tmap += other.tmap; self.qmap += other.qmap; self.umap += other.umap
return self
def __isub__(self, other):
assert( self.compatible(other) )
self.tmap -= other.tmap; self.qmap -= other.qmap; self.umap -= other.umap
return self
def is_tqumap_wt(obj):
""" ducktyping check of whether an object is an tqumap_wt. """
return ( hasattr(obj, 'nx') and hasattr(obj, 'dx') and
hasattr(obj, 'ny') and hasattr(obj, 'dy') and
hasattr(obj, 'weight') )
class tqumap_wt(pix):
def __init__(self, nx, dx, weight=None, ny=None, dy=None):
""" class which contains a 3x3 weight or covariance matrix for each pixel of a tqumap."""
super( tqumap_wt, self ).__init__(nx, dx, ny=ny, dy=dy)
if weight is None:
self.weight = np.zeros( (self.ny, self.nx, 3, 3) )
else:
self.weight = weight
assert( (self.ny, self.nx, 3, 3) == self.weight.shape )
def hashdict(self):
return { 'pix' : super(tqumap_wt, self).hashdict(),
'weight' : hashlib.sha1(self.weight.view(np.uint8)).hexdigest() }
def __mul__(self, other):
if False:
pass
elif is_tqumap(other):
assert( self.compatible(other) )
tqu = other
weight = self.weight
reti = tqu.tmap*weight[:,:,0,0] + tqu.qmap*weight[:,:,0,1] + tqu.umap*weight[:,:,0,2]
retq = tqu.tmap*weight[:,:,1,0] + tqu.qmap*weight[:,:,1,1] + tqu.umap*weight[:,:,1,2]
retu = tqu.tmap*weight[:,:,2,0] + tqu.qmap*weight[:,:,2,1] + tqu.umap*weight[:,:,2,2]
return tqumap( tqu.nx, tqu.dx, ny=tqu.ny, dy=tqu.dy, maps=[reti, retq, retu] )
else:
assert(0)
def make_tqumap_wt( pix, ninv=None, mask=None, ninv_dcut=None, nlev_tp=None, maskt=None, maskq=None, masku=None ):
""" helper function to generate a tqumap_wt which describes an inverse-noise covariance matrix.
* pix = pixelization for the tqumap.
* (optional) ninv = tqumap_wt object. pixels for which this matrix weight function has determinant < ninv_dcut will be masked.
* (optional) mask = global mask map to apply (effectively taking noise level to infinity for pixels where mask is zero).
* (optional) ninv_dcut = used only in conjunction with ninv.
* (optional) nlev_tp = a tuple (nT, nP) giving pixel temperature/polarization white noise levels to use for the noise covariance in uK.arcmin.
* (optional) maskt, maskq, masku = individual T, Q and U masks to apply.
"""
ret = tqumap_wt( pix.nx, pix.dx, ny=pix.ny, dy=pix.dy )
if ninv != None:
assert( ret.compatible(ninv) )
ret.weight[:,:,:,:] = ninv.weight[:,:,:,:]
if ninv_dcut != None:
dets = np.abs(util.det_3x3(ninv.weight))
else:
assert(ninv_dcut is None)
if nlev_tp != None:
ret.weight = np.zeros( ret.weight.shape )
ret.weight[:,:,0,0] = (180.*60./np.pi)**2 * ret.dx * ret.dy / nlev_tp[0]**2
ret.weight[:,:,1,1] = (180.*60./np.pi)**2 * ret.dx * ret.dy / nlev_tp[1]**2
ret.weight[:,:,2,2] = (180.*60./np.pi)**2 * ret.dx * ret.dy / nlev_tp[1]**2
for i in xrange(0,3):
for j in xrange(0,3):
if (ninv_dcut != None):
print "cutting ", len( np.where(dets < ninv_dcut)[0] ), " pixels for det"
ret.weight[:,:,i,j][np.where(dets < ninv_dcut)] = 0.0
if mask != None:
for i in xrange(0,3):
for j in xrange(0,3):
ret.weight[:,:,i,j] *= mask
if maskt != None:
for i in xrange(0,3):
ret.weight[:,:,i,0] *= maskt
ret.weight[:,:,0,i] *= maskt
if maskq != None:
for i in xrange(0,3):
ret.weight[:,:,i,1] *= maskq
ret.weight[:,:,1,i] *= maskq
if masku != None:
for i in xrange(0,3):
ret.weight[:,:,i,2] *= masku
ret.weight[:,:,2,i] *= masku
return ret
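In the nlev_tp branch above, the diagonal weight per pixel is (180*60/pi)**2 * dx * dy / nlev**2, i.e. the inverse pixel noise variance for a white noise level quoted in uK.arcmin. A standalone check of that conversion (the function name is illustrative, not part of quicklens):

```python
import numpy as np

def white_noise_weight(dx, dy, nlev_uk_arcmin):
    # Inverse noise variance per pixel for white noise at a level of
    # nlev_uk_arcmin uK.arcmin, given pixel sides dx, dy in radians.
    return (180. * 60. / np.pi) ** 2 * dx * dy / nlev_uk_arcmin ** 2

# For 1 arcmin pixels the pixel area in arcmin^2 is exactly 1,
# so the weight reduces to 1 / nlev**2.
arcmin = np.pi / (180. * 60.)
w = white_noise_weight(arcmin, arcmin, 10.)
```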
def is_tebfft(obj):
""" ducktyping check of whether an object is a tebfft. """
return ( hasattr(obj, 'nx') and hasattr(obj, 'dx') and
hasattr(obj, 'ny') and hasattr(obj, 'dy') and
hasattr(obj, 'tfft') and hasattr(obj, 'efft') and hasattr(obj, 'bfft') )
class tebfft(pix):
def __init__(self, nx, dx, ffts=None, ny=None, dy=None):
""" class which contains the FFT of a tqumap. temperature (T), E- and B-mode polarization. """
super( tebfft, self ).__init__(nx, dx, ny=ny, dy=dy)
if ffts is None:
self.tfft = np.zeros( (self.ny, self.nx/2+1), dtype=np.complex )
self.efft = np.zeros( (self.ny, self.nx/2+1), dtype=np.complex )
self.bfft = np.zeros( (self.ny, self.nx/2+1), dtype=np.complex )
else:
[self.tfft, self.efft, self.bfft] = ffts
assert( (self.ny, self.nx/2+1) == self.tfft.shape )
assert( (self.ny, self.nx/2+1) == self.efft.shape )
assert( (self.ny, self.nx/2+1) == self.bfft.shape )
def hashdict(self):
return { 'pix' : super(tebfft, self).hashdict(),
'tfft' : hashlib.sha1(self.tfft.view(np.uint8)).hexdigest(),
'efft' : hashlib.sha1(self.efft.view(np.uint8)).hexdigest(),
'bfft' : hashlib.sha1(self.bfft.view(np.uint8)).hexdigest() }
def get_ml( self, lbins, t=None, psimin=0., psimax=np.inf, psispin=1 ):
"""" returns a Cl object containing average over rings of the FFT.
* lbins = list of bin edges.
* t = function t(l) which scales the FFT before averaging. defaults to unity.
* psimin, psimax, psispin = parameters used to set wedges for the averaging.
psi = mod(psispin * arctan2(lx, -ly), 2pi) in the range [psimin, psimax].
"""
dopsi = ( (psimin, psimax, psispin) != (0., np.inf, 1) )
l = self.get_ell().flatten()
if dopsi:
lx, ly = self.get_lxly()
psi = np.mod( psispin*np.arctan2(lx, -ly), 2.*np.pi ).flatten()
lb = 0.5*(lbins[:-1] + lbins[1:])
if t is None:
t = np.ones(l.shape)
else:
t = t(l)
cldict = {}
for field in ['t', 'e', 'b']:
c = getattr(self, field + 'fft').flatten()
m = np.ones(c.shape)
m[ np.isnan(c) ] = 0.0
c[ np.isnan(c) ] = 0.0
if dopsi:
m[ np.where( psi < psimin ) ] = 0.0
m[ np.where( psi >= psimax ) ] = 0.0
norm, bins = np.histogram(l, bins=lbins, weights=m) # get number of modes in each l-bin.
clrr, bins = np.histogram(l, bins=lbins, weights=m*t*c) # bin the spectrum.
# normalize the spectrum.
clrr[np.nonzero(norm)] /= norm[np.nonzero(norm)]
cldict['cl' + field*2] = clrr
return spec.bcl(lbins, cldict )
def __imul__(self, other):
if ( np.isscalar(other) or ( (type(other) == np.ndarray) and
(getattr(other, 'shape', None) == self.tfft.shape) ) ):
self.tfft *= other
self.efft *= other
self.bfft *= other
return self
elif is_rfft(other) and pix.compatible(self, other):
self.tfft *= other.fft
self.efft *= other.fft
self.bfft *= other.fft
return self
elif (len(getattr(other, 'shape', [])) == 1):
tfac = np.interp( self.get_ell().flatten(), np.arange(0, len(other)), other, right=0 ).reshape((self.ny,self.nx/2+1))
self.tfft *= tfac; self.efft *= tfac; self.bfft *= tfac
return self
else:
assert(0)
def __mul__(self, other):
if ( np.isscalar(other) or ( (type(other) == np.ndarray) and
(getattr(other, 'shape', None) == self.tfft.shape) ) ):
return tebfft( self.nx, self.dx,
ffts=[self.tfft * other,
self.efft * other,
self.bfft * other],
ny=self.ny, dy=self.dy )
elif (type(other) == np.ndarray) and (len(getattr(other, 'shape', [])) == 1):
tfac = np.interp( self.get_ell().flatten(), np.arange(0, len(other)), other, right=0 ).reshape((self.ny,self.nx/2+1))
return tebfft( self.nx, self.dx,
ffts=[self.tfft * tfac,
self.efft * tfac,
self.bfft * tfac],
ny=self.ny, dy=self.dy )
elif is_rfft(other) and pix.compatible(self, other):
return tebfft( self.nx, self.dx,
ffts=[self.tfft * other.fft,
self.efft * other.fft,
self.bfft * other.fft],
ny=self.ny, dy=self.dy )
elif is_tebfft(other) and self.compatible(other):
return tebfft( self.nx, self.dx,
ffts=[self.tfft * other.tfft,
self.efft * other.efft,
self.bfft * other.bfft],
ny=self.ny, dy=self.dy )
elif spec.is_clmat_teb(other):
return other * self
elif spec.is_camb_clfile(other):
return spec.clmat_teb(other) * self
else:
assert(0)
def __div__(self, other):
if ( np.isscalar(other) or ( (type(other) == np.ndarray) and
(getattr(other, 'shape', None) == self.tfft.shape) ) ):
return tebfft( self.nx, self.dx,
ffts=[self.tfft / other,
self.efft / other,
self.bfft / other],
ny=self.ny, dy=self.dy )
elif is_rfft(other) and pix.compatible(self, other):
return tebfft( self.nx, self.dx,
ffts=[np.nan_to_num(self.tfft / other.fft),
np.nan_to_num(self.efft / other.fft),
np.nan_to_num(self.bfft / other.fft)],
ny=self.ny, dy=self.dy )
elif is_tebfft(other) and self.compatible(other):
return tebfft( self.nx, self.dx,
ffts=[np.nan_to_num(self.tfft / other.tfft),
np.nan_to_num(self.efft / other.efft),
np.nan_to_num(self.bfft / other.bfft)],
ny=self.ny, dy=self.dy )
else:
assert(0)
def __rdiv__(self, other):
if np.isscalar(other):
return tebfft( self.nx, self.dx,
ffts=[other/self.tfft,
other/self.efft,
other/self.bfft],
ny=self.ny, dy=self.dy )
else:
assert(0)
def compatible(self, other):
return ( hasattr(other, 'tfft') and
hasattr(other, 'efft') and
hasattr(other, 'bfft') and
super(tebfft, self).compatible(other) )
def copy(self):
return tebfft( self.nx, self.dx,
[self.tfft.copy(), self.efft.copy(), self.bfft.copy()],
ny = self.ny, dy = self.dy )
def inverse(self):
""" return a new tebfft for which all elements have been set to their inverses, with exception of zeros which are untouched. """
tfft_inv = np.zeros(self.tfft.shape, dtype=np.complex); tfft_inv[self.tfft != 0] = 1./self.tfft[self.tfft != 0]
efft_inv = np.zeros(self.efft.shape, dtype=np.complex); efft_inv[self.efft != 0] = 1./self.efft[self.efft != 0]
bfft_inv = np.zeros(self.bfft.shape, dtype=np.complex); bfft_inv[self.bfft != 0] = 1./self.bfft[self.bfft != 0]
ret = tebfft( self.nx, self.dx,
[tfft_inv, efft_inv, bfft_inv],
ny = self.ny, dy = self.dy )
return ret
def degrade(self, fac):
""" reduce the resolution of this map by a factor fac. """
assert( np.mod(self.nx, fac) == 0 )
assert( np.mod(self.ny, fac) == 0 )
assert( np.mod(self.nx/fac, 2) == 0 )
return tebfft( nx=self.nx/fac, dx=self.dx*fac,
ffts = [ self.tfft[0:self.ny/fac,0:self.nx/fac/2+1],
self.efft[0:self.ny/fac,0:self.nx/fac/2+1],
self.bfft[0:self.ny/fac,0:self.nx/fac/2+1] ],
ny=self.ny/fac, dy=self.dy*fac )
def get_pix_transf(self):
""" return the FFT describing the map-level transfer function for the pixelization of this object. """
return rfft( self.nx, self.dx, ny=self.ny, dy=self.dy ).get_pix_transf()
def get_cl( self, lbins, t=None, psimin=0., psimax=np.inf, psispin=1 ):
""" returns a Cl object containing the auto-spectra of T,E,B in this map. """
return spec.tebfft2cl( lbins, self, t=t, psimin=psimin, psimax=psimax, psispin=psispin )
def get_tqu(self):
""" returns the tqumap given by the inverse Fourier transform of this object. """
lx, ly = self.get_lxly()
tpi = 2.*np.arctan2(lx, -ly)
tfac = np.sqrt((self.nx * self.ny) / (self.dx * self.dy))
tmap = np.fft.irfft2(self.tfft) * tfac
qmap = np.fft.irfft2(np.cos(tpi)*self.efft - np.sin(tpi)*self.bfft) * tfac
umap = np.fft.irfft2(np.sin(tpi)*self.efft + np.cos(tpi)*self.bfft) * tfac
return tqumap( self.nx, self.dx, [tmap, qmap, umap], ny = self.ny, dy = self.dy )
def get_ffts(self):
""" returns a list of the individual (real) ffts for T, E, B. """
return [ rfft( self.nx, self.dx, fft=self.tfft, ny=self.ny, dy=self.dy ),
rfft( self.nx, self.dx, fft=self.efft, ny=self.ny, dy=self.dy ),
rfft( self.nx, self.dx, fft=self.bfft, ny=self.ny, dy=self.dy ) ]
def get_cffts(self):
""" returns a list of the individual (complex) ffts for T, E, B. """
return [ rfft( self.nx, self.dx, fft=self.tfft, ny=self.ny, dy=self.dy ).get_cfft(),
rfft( self.nx, self.dx, fft=self.efft, ny=self.ny, dy=self.dy ).get_cfft(),
rfft( self.nx, self.dx, fft=self.bfft, ny=self.ny, dy=self.dy ).get_cfft() ]
def get_lxly(self):
""" returns the (lx, ly) pair associated with each Fourier mode in T, E, B. """
return np.meshgrid( np.fft.fftfreq( self.nx, self.dx )[0:self.nx/2+1]*2.*np.pi,
np.fft.fftfreq( self.ny, self.dy )*2.*np.pi )
def get_ell(self):
""" returns the wavenumber l = \sqrt(lx**2 + ly**2) for each Fourier mode in T, E, B. """
lx, ly = self.get_lxly()
return np.sqrt(lx**2 + ly**2)
def __add__(self, other):
assert( self.compatible(other) )
return tebfft( self.nx, self.dx,
[self.tfft + other.tfft, self.efft + other.efft, self.bfft + other.bfft],
ny = self.ny, dy = self.dy )
def __sub__(self, other):
assert( self.compatible(other) )
return tebfft( self.nx, self.dx,
[self.tfft - other.tfft, self.efft - other.efft, self.bfft - other.bfft],
ny = self.ny, dy = self.dy )
def __iadd__(self, other):
assert( self.compatible(other) )
self.tfft += other.tfft; self.efft += other.efft; self.bfft += other.bfft
return self
def __isub__(self, other):
assert( self.compatible(other) )
self.tfft -= other.tfft; self.efft -= other.efft; self.bfft -= other.bfft
return self
def get_l_masked( self, lmin=None, lmax=None, lxmin=None, lxmax=None, lymin=None, lymax=None ):
""" returns a copy of this object which has been masked to zero in a customizable range of Fourier space. """
lx, ly = self.get_lxly()
ell = np.sqrt(lx**2 + ly**2)
mask = np.ones( self.tfft.shape )
if lmin != None: mask[ np.where(ell < lmin) ] = 0.0
if lmax != None: mask[ np.where(ell >=lmax) ] = 0.0
if lxmin != None: mask[ np.where(np.abs(lx) < lxmin) ] = 0.0
if lymin != None: mask[ np.where(np.abs(ly) < lymin) ] = 0.0
if lxmax != None: mask[ np.where(np.abs(lx) >=lxmax) ] = 0.0
if lymax != None: mask[ np.where(np.abs(ly) >=lymax) ] = 0.0
return tebfft( self.nx, self.dx,
[self.tfft * mask, self.efft * mask, self.bfft * mask],
ny = self.ny, dy = self.dy )
def get_l_mask( self, lmin=None, lmax=None, lxmin=None, lxmax=None, lymin=None, lymax=None ):
""" return a Fourier mask for the pixelization associated with this object which is zero over customizable ranges of L. """
lx, ly = self.get_lxly()
ell = np.sqrt(lx**2 + ly**2)
mask = np.ones( self.tfft.shape )
if lmin != None: mask[ np.where(ell < lmin) ] = 0.0
if lmax != None: mask[ np.where(ell >=lmax) ] = 0.0
if lxmin != None: mask[ np.where(np.abs(lx) < lxmin) ] = 0.0
if lymin != None: mask[ np.where(np.abs(ly) < lymin) ] = 0.0
if lxmax != None: mask[ np.where(np.abs(lx) >=lxmax) ] = 0.0
if lymax != None: mask[ np.where(np.abs(ly) >=lymax) ] = 0.0
return tebfft( self.nx, self.dx,
[mask, mask, mask],
ny = self.ny, dy = self.dy )
def is_rfft(obj):
""" ducktyping check of whether an object is an rfft. """
if not ( hasattr(obj, 'nx') and hasattr(obj, 'dx') and
hasattr(obj, 'ny') and hasattr(obj, 'dy') and
hasattr(obj, 'fft') ): return False
return obj.fft.shape == (obj.ny, obj.nx/2+1)
class rfft(pix):
def __init__(self, nx, dx, fft=None, ny=None, dy=None):
""" class which contains the FFT of an rmap. """
super( rfft, self ).__init__(nx, dx, ny=ny, dy=dy)
if fft is None:
fft = np.zeros( (self.ny, self.nx/2+1), dtype=np.complex )
self.fft = fft
assert( (self.ny, self.nx/2+1) == self.fft.shape )
def __iadd__(self, other):
if False:
pass
elif is_rfft(other):
assert( self.compatible(other) )
self.fft[:,:] += other.fft[:,:]
return self
else:
assert(0)
def __add__(self, other):
if False:
pass
elif is_rfft(other):
assert( self.compatible(other) )
ret = self.copy()
ret.fft[:,:] += other.fft[:,:]
return ret
else:
assert(0)
def __sub__(self, other):
if False:
pass
elif is_rfft(other):
assert( self.compatible(other) )
ret = self.copy()
ret.fft[:,:] -= other.fft[:,:]
return ret
else:
assert(0)
def __div__(self, other):
if False:
pass
elif is_rfft(other):
assert( self.compatible(other) )
ret = self.copy()
ret.fft[:,:] /= other.fft[:,:]
return ret
else:
assert(0)
def __rdiv__(self, other):
print "rfft rdiv, other = ", other
if False:
pass
elif np.isscalar(other):
ret = self.copy()
ret.fft[:,:] = other / self.fft[:,:]
return ret
else:
assert(0)
def __mul__(self, other):
if False:
pass
elif is_rfft(other):
assert( self.compatible(other) )
ret = self.copy()
ret.fft[:,:] *= other.fft[:,:]
return ret
elif np.isscalar(other):
ret = self.copy()
ret.fft *= other
return ret
elif ( ( getattr(other, 'size', 0) > 1 ) and ( len( getattr(other, 'shape', ()) ) == 1 ) ):
ell = self.get_ell()
ret = self.copy()
ret.fft *= np.interp( ell.flatten(), np.arange(0, len(other)), other, right=0 ).reshape(self.fft.shape)
return ret
else:
assert(0)
def __rmul__(self, other):
return self.__mul__(other)
def get_pix_transf(self):
""" return the FFT describing the map-level transfer function for the pixelization of this object. """
lx, ly = self.get_lxly()
fft = np.zeros( self.fft.shape )
fft[ 0, 0] = 1.0
fft[ 0,1:] = np.sin(self.dx*lx[ 0,1:]/2.) / (self.dx * lx[0,1:] / 2.)
fft[1:, 0] = np.sin(self.dy*ly[1:, 0]/2.) / (self.dy * ly[1:,0] / 2.)
fft[1:,1:] = np.sin(self.dx*lx[1:,1:]/2.) * np.sin(self.dy*ly[1:,1:]/2.) / (self.dx * self.dy * lx[1:,1:] * ly[1:,1:] / 4.)
return rfft( self.nx, self.dx, ny=self.ny, dy=self.dy, fft=fft )
def compatible(self, other):
return ( hasattr(other, 'fft') and
getattr(other, 'fft', np.array([])).shape == self.fft.shape and
super(rfft, self).compatible(other) )
def copy(self):
return rfft( self.nx, self.dx, self.fft.copy(), ny = self.ny, dy = self.dy )
def get_cl( self, lbins, t=None ):
return spec.rcfft2cl( lbins, self, t=t )
def get_rmap( self ):
""" return the rmap given by this FFT. """
tfac = np.sqrt((self.nx * self.ny) / (self.dx * self.dy))
return rmap( self.nx, self.dx, map=np.fft.irfft2(self.fft)*tfac, ny=self.ny, dy=self.dy )
def get_cfft( self ):
""" return the complex FFT. """
fft = np.zeros( (self.ny, self.nx), dtype=np.complex )
fft[:,0:(self.nx/2+1)] = self.fft[:,:]
fft[0,(self.nx/2+1):] = np.conj(self.fft[0,1:self.nx/2][::-1])
fft[1:,(self.nx/2+1):] = np.conj(self.fft[1:,1:self.nx/2][::-1,::-1])
return cfft( self.nx, self.dx, fft=fft, ny=self.ny, dy=self.dy )
def get_lxly(self):
""" returns the (lx, ly) pair associated with each Fourier mode in T, E, B. """
return np.meshgrid( np.fft.fftfreq( self.nx, self.dx )[0:self.nx/2+1]*2.*np.pi,
np.fft.fftfreq( self.ny, self.dy )*2.*np.pi )
def get_ell(self):
""" returns the wavenumber l = \sqrt(lx**2 + ly**2) for each Fourier mode in T, E, B. """
lx, ly = self.get_lxly()
return np.sqrt(lx**2 + ly**2)
def is_cfft(obj):
""" ducktyping check of whether an object is a cfft. """
return (hasattr(obj, 'nx') and hasattr(obj, 'ny') and hasattr(obj, 'dx') and hasattr(obj, 'dy') and hasattr(obj, 'fft'))
class cfft(pix):
"""
Complex FFT object.
fft, numpy complex ndarray, containing the fft
nx, number of pixels in the x direction
dx, size of pixels in the x direction [units of radians]
"""
def __init__(self, nx, dx, fft=None, ny=None, dy=None):
super( cfft, self ).__init__(nx, dx, ny=ny, dy=dy)
if fft is None:
fft = np.zeros( (self.ny, self.nx), dtype=np.complex )
self.fft = fft
assert( (self.ny, self.nx) == self.fft.shape )
def hashdict(self):
""" returns a dictionary which should uniquely characterize the contents of this object """
return { 'pix' : super(cfft, self).hashdict(),
'fft' : hashlib.sha1(self.fft.view(np.uint8)).hexdigest() }
def __mul__(self, other):
if False:
pass
elif is_cfft(other):
assert( self.compatible(other) )
ret = self.copy()
ret.fft[:,:] = self.fft[:,:] * other.fft[:,:]
return ret
elif np.isscalar(other):
ret = self.copy()
ret.fft *= other
return ret
elif ( ( getattr(other, 'size', 0) > 1 ) and ( len( getattr(other, 'shape', ()) ) == 1 ) ):
ell = self.get_ell()
ret = self.copy()
ret.fft *= np.interp( ell.flatten(), np.arange(0, len(other)), other, right=0 ).reshape(self.fft.shape)
return ret
else:
assert(0)
def __rmul__(self, other):
return self.__mul__(other)
def __div__(self, other):
if False:
pass
elif is_cfft(other):
assert( self.compatible(other) )
ret = self.copy()
ret.fft[:,:] = self.fft[:,:] / other.fft[:,:]
return ret
elif np.isscalar(other):
ret = self.copy()
ret.fft /= other
return ret
elif ( ( getattr(other, 'size', 0) > 1 ) and ( len( getattr(other, 'shape', ()) ) == 1 ) ):
ell = self.get_ell()
ret = self.copy()
ret.fft /= np.interp( ell.flatten(), np.arange(0, len(other)), other, right=0 ).reshape(self.fft.shape)
return ret
else:
assert(0)
def __rdiv__(self, other):
if False:
pass
elif np.isscalar(other):
ret = self.copy()
ret.fft = other/ret.fft
return ret
else:
assert(0)
def __add__(self, other):
if False:
pass
elif is_cfft(other):
assert( self.compatible(other) )
ret = self.copy()
ret.fft += other.fft
return ret
elif ( ( getattr(other, 'size', 0) > 1 ) and ( len( getattr(other, 'shape', ()) ) == 1 ) ):
ell = self.get_ell()
ret = self.copy()
ret.fft += np.interp( ell.flatten(), np.arange(0, len(other)), other, right=0 ).reshape(self.fft.shape)
return ret
else:
assert(0)
def __sub__(self, other):
if False:
pass
elif is_cfft(other):
assert( self.compatible(other) )
return cfft( nx=self.nx, dx=self.dx, ny=self.ny, dy=self.dy, fft = (self.fft - other.fft) )
else:
assert(0)
def __pow__(self, p2):
ret = self.copy()
ret.fft = self.fft**p2
return ret
def compatible(self, other):
""" check whether this map can be added, subtracted, etc. to the map 'other'. """
return ( hasattr(other, 'fft') and
getattr(other, 'fft', np.array([])).shape == self.fft.shape and
super(cfft, self).compatible(other) )
def copy(self):
""" return a clone of this cfft. """
return cfft( self.nx, self.dx, self.fft.copy(), ny = self.ny, dy = self.dy )
def conj(self):
""" return a new cfft which is the complex conjugate of this one. """
ret = self.copy()
ret.fft = np.conj(ret.fft)
return ret
def get_cl( self, lbins, t=None ):
""" returns a Cl object containing the auto-spectra of this map. """
return spec.rcfft2cl( lbins, self, t=t )
def get_ml( self, lbins, t=None, psimin=0., psimax=np.inf, psispin=1 ):
"""" returns a Cl object containing average over rings of the FFT.
* lbins = list of bin edges.
* t = function t(l) which weights the FFT before averaging. defaults to unity.
* psimin, psimax, psispin = parameters used to set wedges for the averaging.
psi = mod(psispin * arctan2(lx, -ly), 2pi) in the range [psimin, psimax].
"""
dopsi = ( (psimin, psimax, psispin) != (0., np.inf, 1) )
l = self.get_ell().flatten()
if dopsi:
lx, ly = self.get_lxly()
psi = np.mod( psispin*np.arctan2(lx, -ly), 2.*np.pi ).flatten()
lb = 0.5*(lbins[:-1] + lbins[1:])
if t is None:
t = np.ones(l.shape)
else:
t = t(l)
c = self.fft.flatten()
m = np.ones(c.shape)
m[ np.isnan(c) ] = 0.0
c[ np.isnan(c) ] = 0.0
if dopsi:
m[ np.where( psi < psimin ) ] = 0.0
m[ np.where( psi >= psimax ) ] = 0.0
norm, bins = np.histogram(l, bins=lbins, weights=m) # get number of modes in each l-bin.
clrr, bins = np.histogram(l, bins=lbins, weights=m*t*c) # bin the spectrum.
# normalize the spectrum.
clrr[np.nonzero(norm)] /= norm[np.nonzero(norm)]
return spec.bcl(lbins, { 'cl' : clrr } )
def get_lxly(self):
""" returns the (lx, ly) pair associated with each Fourier mode. """
return np.meshgrid( np.fft.fftfreq( self.nx, self.dx )*2.*np.pi,
np.fft.fftfreq( self.ny, self.dy )*2.*np.pi )
def get_ell(self):
""" returns the wavenumber l = \sqrt(lx**2 + ly**2) for each Fourier mode """
lx, ly = self.get_lxly()
return np.sqrt(lx**2 + ly**2)
def get_pix_transf(self):
""" return the FFT describing the map-level transfer function for the pixelization of this object. """
lx, ly = self.get_lxly()
fft = np.zeros( self.fft.shape )
fft[0, 0] = 1.0
fft[0, 1:] = np.sin(self.dx*lx[ 0,1:]/2.) / (self.dx * lx[0,1:] / 2.)
fft[1:, 0] = np.sin(self.dy*ly[1:, 0]/2.) / (self.dy * ly[1:,0] / 2.)
fft[1:,1:] = np.sin(self.dx*lx[1:,1:]/2.) * np.sin(self.dy*ly[1:,1:]/2.) / (self.dx * self.dy * lx[1:,1:] * ly[1:,1:] / 4.)
return cfft( self.nx, self.dx, ny=self.ny, dy=self.dy, fft=fft )
def get_rffts( self ):
""" return the real-valued FFT objects corresponding to the real and imaginary parts of the map associated with this fft. """
cmap = np.fft.ifft2( self.fft )
return ( rfft( self.nx, self.dx, fft=np.fft.rfft2(cmap.real), ny=self.ny, dy=self.dy ),
rfft( self.nx, self.dx, fft=np.fft.rfft2(cmap.imag), ny=self.ny, dy=self.dy ) )
def get_l_masked( self, lmin=None, lmax=None, lxmin=None, lxmax=None, lymin=None, lymax=None ):
""" returns a copy of this object which has been masked to zero in a customizable range of Fourier space. """
ret = self.copy()
lx, ly = ret.get_lxly()
ell = np.sqrt(lx**2 + ly**2)
if lmin is not None: ret.fft[ np.where(ell < lmin) ] = 0.0
if lmax is not None: ret.fft[ np.where(ell >= lmax) ] = 0.0
if lxmin is not None: ret.fft[ np.where(np.abs(lx) < lxmin) ] = 0.0
if lymin is not None: ret.fft[ np.where(np.abs(ly) < lymin) ] = 0.0
if lxmax is not None: ret.fft[ np.where(np.abs(lx) >= lxmax) ] = 0.0
if lymax is not None: ret.fft[ np.where(np.abs(ly) >= lymax) ] = 0.0
return ret
def get_l_mask( self, lmin=None, lmax=None, lxmin=None, lxmax=None, lymin=None, lymax=None ):
""" return a Fourier mask for the pixelization associated with this object which is zero over customizable ranges of L. """
ret = self.copy()
ret.fft[:,:] = 1.0
lx, ly = ret.get_lxly()
ell = np.sqrt(lx**2 + ly**2)
if lmin is not None: ret.fft[ np.where(ell < lmin) ] = 0.0
if lmax is not None: ret.fft[ np.where(ell >= lmax) ] = 0.0
if lxmin is not None: ret.fft[ np.where(np.abs(lx) < lxmin) ] = 0.0
if lymin is not None: ret.fft[ np.where(np.abs(ly) < lymin) ] = 0.0
if lxmax is not None: ret.fft[ np.where(np.abs(lx) >= lxmax) ] = 0.0
if lymax is not None: ret.fft[ np.where(np.abs(ly) >= lymax) ] = 0.0
return ret
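For reference, a minimal standalone sketch (plain numpy only; the name `lxly` is illustrative, not part of quicklens) of the wavenumber grid that `cfft.get_lxly()` and `get_ell()` construct:

```python
import numpy as np

# Each 2D FFT mode on an (ny, nx) grid with pixel sizes (dy, dx) [radians]
# carries an angular wavenumber (lx, ly): np.fft.fftfreq gives cycles per
# unit length, and the 2*pi factor converts that to an angular wavenumber.
def lxly(nx, dx, ny, dy):
    return np.meshgrid(np.fft.fftfreq(nx, dx) * 2. * np.pi,
                       np.fft.fftfreq(ny, dy) * 2. * np.pi)

lx, ly = lxly(8, 0.01, 4, 0.01)
ell = np.sqrt(lx**2 + ly**2)  # |l| per mode, as in get_ell()
```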
|
dhansonREPO_NAMEquicklensPATH_START.@quicklens_extracted@quicklens-master@quicklens@maps.py@.PATH_END.py
|
{
"filename": "piernik_problem.py",
"repo_name": "piernik-dev/piernik",
"repo_path": "piernik_extracted/piernik-master/problems/sedov/piernik_problem.py",
"type": "Python"
}
|
from yt.mods import \
load, SlicePlot, parallel_objects
FIELDS = ['denn', 'enen']
def visualize(files):
output = []
for fn in parallel_objects(files, njobs=-1):
pf = load(fn)
for field in FIELDS:
slc = SlicePlot(pf, 'z', field)
output.append(slc.save(fn.replace('.h5', '_%s.png' % field))[0])
return output
if __name__ == "__main__":
import sys
print visualize(sys.argv[1:])
|
piernik-devREPO_NAMEpiernikPATH_START.@piernik_extracted@piernik-master@problems@sedov@piernik_problem.py@.PATH_END.py
|
{
"filename": "models.py",
"repo_name": "ArtificialStellarPopulations/ArtPop",
"repo_path": "ArtPop_extracted/ArtPop-main/src/artpop/space/models.py",
"type": "Python"
}
|
# Third-party
import numpy as np
from astropy.modeling import Fittable2DModel, Parameter
__all__ = ['Plummer2D', 'Constant2D']
class Plummer2D(Fittable2DModel):
"""
Two-dimensional Plummer surface brightness profile.
Parameters
----------
amplitude : float
Central surface brightness.
scale_radius : float
Characteristic scale radius of the mass distribution.
x_0 : float, optional
x position of the center.
y_0 : float, optional
y position of the center.
"""
amplitude = Parameter(default=1)
scale_radius = Parameter(default=1)
x_0 = Parameter(default=0)
y_0 = Parameter(default=0)
@classmethod
def evaluate(cls, x, y, amplitude, scale_radius, x_0, y_0):
"""Two-dimensional Plummer profile evaluation function."""
r = np.sqrt((x - x_0)**2 + (y - y_0)**2)
return amplitude / (1 + (r / scale_radius)**2)**2
class Constant2D(Fittable2DModel):
"""
The simplest model ever.
Parameters
----------
amplitude : float
Constant surface brightness.
"""
amplitude = Parameter(default=1)
@classmethod
def evaluate(cls, x, y, amplitude):
"""Constant surface brightness evaluation function."""
if hasattr(x, 'shape') and hasattr(y, 'shape'):
assert x.shape == y.shape, 'Shapes of x and y must match'
z = np.ones(x.shape)
elif hasattr(x, 'shape'):
z = np.ones(x.shape)
elif hasattr(y, 'shape'):
z = np.ones(y.shape)
else:
z = 1.0
return amplitude * z
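A quick standalone check of the Plummer2D surface-brightness formula used in `evaluate()` above, reimplemented in plain numpy (no astropy needed; `plummer2d` is an illustrative name, not part of ArtPop):

```python
import numpy as np

def plummer2d(x, y, amplitude=1.0, scale_radius=1.0, x_0=0.0, y_0=0.0):
    # Same expression as Plummer2D.evaluate: I(r) = A / (1 + (r/a)**2)**2
    r = np.sqrt((x - x_0)**2 + (y - y_0)**2)
    return amplitude / (1 + (r / scale_radius)**2)**2

# At the center the profile equals the amplitude; at r == scale_radius it
# falls to one quarter of the central value.
center = plummer2d(0.0, 0.0, amplitude=2.0)
at_a = plummer2d(1.0, 0.0, amplitude=2.0, scale_radius=1.0)
```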
|
ArtificialStellarPopulationsREPO_NAMEArtPopPATH_START.@ArtPop_extracted@ArtPop-main@src@artpop@space@models.py@.PATH_END.py
|
{
"filename": "acspyTestError.py",
"repo_name": "ACS-Community/ACS",
"repo_path": "ACS_extracted/ACS-master/LGPL/CommonSoftware/acspycommon/test/acspyTestError.py",
"type": "Python"
}
|
#!/usr/bin/env python
#*******************************************************************************
# ALMA - Atacama Large Millimiter Array
# (c) Associated Universities Inc., 2002
# (c) European Southern Observatory, 2002
# Copyright by ESO (in the framework of the ALMA collaboration)
# and Cosylab 2002, All rights reserved
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston,
# MA 02111-1307 USA
#
# @(#) $Id: acspyTestError.py,v 1.1.1.1 2012/03/07 17:40:45 acaproni Exp $
###############################################################################
"""
Tests the Python Error system.
"""
###############################################################################
import ACSErrTypePythonNative
import ACSErrTypePythonNativeImpl
import ACSErr
import ACSLog
from Acspy.Common.Err import pyExceptionToCORBA
from Acspy.Common.Log import getLogger
###############################################################################
def fakeCORBAMethod():
'''
Simulates the testing of a fake CORBA method
This function:
- creates a new native local exception and throws it
- catches it as native exception
- converts it to an ACS exception using pyExceptionToCORBA() function
- throws it again
- catches it as a CORBA exception
- converts it back into the helper class WITHOUT adding new error info
- rethrows the new local exception
- catches it as a CORBA exception
- converts it back into the helper class AND adds new error info
- throws the exception again which should have two error traces
'''
print "--fakeCORBAMethod1-------------------------------------------------"
try:
try:
raise Exception("fake python exception")
except Exception, ex:
print "Raising the local exception..."
raise pyExceptionToCORBA(ex)
#make sure we can catch the real CORBA exception
except ACSErrTypePythonNative.PythonEx, e:
#convert it back into the helper class w/o adding error information
helperException = ACSErrTypePythonNativeImpl.PythonExImpl(exception=e, create=0)
#print to stdout...only one error trace should be seen
helperException.Print()
print "--fakeCORBAMethod2-------------------------------------------------"
try:
#reraise a local ACS exception
raise helperException
except ACSErrTypePythonNative.PythonEx, e:
#make sure we can catch the real CORBA exception
helperException = ACSErrTypePythonNativeImpl.PythonExImpl(exception=e)
#Printing to stdout AGAIN...should see TWO errorTraces this time around
helperException.Print()
#finally raise the exception out of the pseudo CORBA method
raise helperException
###############################################################################
def fakeClientFunction():
'''
Invokes a fake CORBA method which raises a fake exception.
This function:
- invokes a fake CORBA method which raises a fake CORBA exception
- catches the CORBA exception
- converts it back into the helper class AND adds new error info
- throws the exception again which should have three error traces
'''
print "--fakeClientFunction1-------------------------------------------------"
try:
fakeCORBAMethod()
except ACSErrTypePythonNative.PythonEx, e:
print "--fakeClientFunction2-------------------------------------------------"
helperException = ACSErrTypePythonNativeImpl.PythonExImpl(exception=e)
#Printing to stdout...should see three errorTraces
helperException.Print()
raise helperException
###############################################################################
def fakeCORBAMethodNew():
'''
Simulates the testing of a fake CORBA method
This function:
- creates a new native local exception and throws it
- catches it as native exception
- creates a new ACS exception out of it, retaining all its information
- throws it again
- catches it as a CORBA exception
- converts it back into the helper class WITHOUT adding new error info
- rethrows the new local exception
- catches it as a CORBA exception
- converts it back into the helper class AND adds new error info
- throws the exception again which should have two error traces
'''
print "--fakeCORBAMethodNew1-------------------------------------------------"
try:
try:
raise Exception("fake python exception")
except Exception, ex:
print "Raising the local exception..."
raise ACSErrTypePythonNativeImpl.PythonExImpl(exception=ex)
#make sure we can catch the real CORBA exception
except ACSErrTypePythonNative.PythonEx, e:
#convert it back into the helper class w/o adding error information
helperException = ACSErrTypePythonNativeImpl.PythonExImpl(exception=e, create=0)
#print to stdout...only one error trace should be seen
helperException.Print()
print "--fakeCORBAMethodNew2-------------------------------------------------"
try:
#reraise a local ACS exception
raise helperException
except ACSErrTypePythonNative.PythonEx, e:
#make sure we can catch the real CORBA exception
helperException = ACSErrTypePythonNativeImpl.PythonExImpl(exception=e)
#Printing to stdout AGAIN...should see TWO errorTraces this time around
helperException.Print()
#finally raise the exception out of the pseudo CORBA method
raise helperException
###############################################################################
if __name__ == "__main__":
logger = getLogger('Error Test')
print "--main1-------------------------------------------------"
try:
fakeClientFunction()
except ACSErrTypePythonNative.PythonEx, e:
print "--main2-------------------------------------------------"
helperException = ACSErrTypePythonNativeImpl.PythonExImpl(exception=e)
#should be four error traces at this point
helperException.Print()
print "--main2-------------------------------------------------"
print "Testing all public methods"
print ""
print "Grep me out", helperException.getErrorTrace()
print "Grep me out", helperException.getNext()
helperException.log(logger)
helperException.log(logger, ACSLog.ACS_LOG_DEBUG)
print "Grep me out", helperException.isOK()
helperException.addData("name", "value")
print "getData('no data set'):", helperException.getData("no data set")
print "getData('name'):", helperException.getData("name")
print "Grep me out", helperException.getDescription()
print "Grep me out", helperException.getFileName()
print "Grep me out", helperException.getLineNumber()
print "Grep me out", helperException.getRoutine()
print "Grep me out", helperException.getHostName()
print "Grep me out", helperException.getProcess()
print "Grep me out", helperException.getThread()
print "Grep me out", helperException.getTimeStamp()
print "Grep me out", helperException.getErrorCode()
print "Grep me out", helperException.getErrorType()
print "Grep me out", helperException.getSeverity()
helperException.setTimeStamp(23L)
helperException.setFileName("noFileName")
helperException.setLineNumber(1L)
helperException.setError(0, 0)
helperException.setSeverity(ACSErr.Error)
print "--mainNew1-------------------------------------------------"
try:
fakeCORBAMethodNew()
except ACSErrTypePythonNative.PythonEx, e:
print "--mainNew2-------------------------------------------------"
helperException = ACSErrTypePythonNativeImpl.PythonExImpl(exception=e)
#should be four error traces at this point
helperException.Print()
print "The end __oOo__"
|
ACS-CommunityREPO_NAMEACSPATH_START.@ACS_extracted@ACS-master@LGPL@CommonSoftware@acspycommon@test@acspyTestError.py@.PATH_END.py
|
{
"filename": "redivide_segments.py",
"repo_name": "ideasrule/sparta",
"repo_path": "sparta_extracted/sparta-master/gj1214/redivide_segments.py",
"type": "Python"
}
|
import astropy.io.fits
import matplotlib.pyplot as plt
import numpy as np
import copy
#Integration numbers, counting from beginning
transit_begin = 10824
transit_end = 11302
seg5_begin = 10528
seg6_begin = 11000
def write_pre_transit_segment(input_filename="old_uncalibrated/jw01803001001_04103_00003-seg005_mirimage_uncal.fits", output_filename="jw01803001001_04103_00003-seg005a_mirimage_uncal.fits"):
seg1 = astropy.io.fits.open(input_filename)
seg1_header = dict(seg1[0].header)
times = np.linspace(seg1[0].header["BSTRTIME"], seg1[0].header["BENDTIME"], seg1[1].data.shape[0])
cutoff = transit_begin - seg5_begin
transit_intstart = seg1[0].header["INTSTART"] + cutoff
transit_data = np.copy(seg1[1].data[cutoff:])
header_hdu = astropy.io.fits.PrimaryHDU()
header_hdu.header["NGROUPS"] = seg1[1].data.shape[1]
header_hdu.header["NINTS"] = seg1[0].header["NINTS"]
header_hdu.header["NFRAMES"] = 1
header_hdu.header["GROUPGAP"] = 0
header_hdu.header["BSTRTIME"] = seg1[0].header["BSTRTIME"]
header_hdu.header["BENDTIME"] = seg1[0].header["BENDTIME"]
header_hdu.header["INTSTART"] = seg1[0].header["INTSTART"]
header_hdu.header["INTEND"] = transit_intstart - 1
#print(header_hdu.header["INTEND"] - header_hdu.header["INTSTART"], seg1[1].data[:cutoff].shape[0])
assert(header_hdu.header["INTEND"] - header_hdu.header["INTSTART"] + 1 == seg1[1].data[:cutoff].shape[0])
output_hdul1 = astropy.io.fits.HDUList([
header_hdu,
astropy.io.fits.ImageHDU(seg1[1].data[:cutoff], name="SCI")])
output_hdul1.writeto(output_filename, overwrite=True)
output_hdul1.close()
seg1.close()
return transit_data, transit_intstart, seg1_header
def write_post_transit_segment(input_filename="old_uncalibrated/jw01803001001_04103_00003-seg006_mirimage_uncal.fits", output_filename="jw01803001001_04103_00003-seg005c_mirimage_uncal.fits"):
seg3 = astropy.io.fits.open(input_filename)
times = np.linspace(seg3[0].header["BSTRTIME"], seg3[0].header["BENDTIME"], seg3[1].data.shape[0])
cutoff = transit_end - seg6_begin
transit_intend = seg3[0].header["INTSTART"] + cutoff
transit_data2 = seg3[1].data[0:cutoff]
header_hdu = astropy.io.fits.PrimaryHDU()
header_hdu.header["NGROUPS"] = seg3[1].data.shape[1]
header_hdu.header["NINTS"] = seg3[0].header["NINTS"]
header_hdu.header["NFRAMES"] = 1
header_hdu.header["GROUPGAP"] = 0
header_hdu.header["BSTRTIME"] = seg3[0].header["BSTRTIME"]
header_hdu.header["BENDTIME"] = seg3[0].header["BENDTIME"]
header_hdu.header["INTSTART"] = transit_intend
header_hdu.header["INTEND"] = seg3[0].header["INTEND"]
assert(header_hdu.header["INTEND"] - header_hdu.header["INTSTART"] + 1 == seg3[1].data[cutoff:].shape[0])
output_hdul3 = astropy.io.fits.HDUList([
header_hdu,
astropy.io.fits.ImageHDU(seg3[1].data[cutoff:], name="SCI")])
output_hdul3.writeto(output_filename, overwrite=True)
output_hdul3.close()
seg3.close()
return transit_data2, transit_intend
def write_transit_segment(transit_data, transit_intstart, transit_intend, seg1_header, output_filename="jw01803001001_04103_00003-seg005b_mirimage_uncal.fits"):
header_hdu = astropy.io.fits.PrimaryHDU()
header_hdu.header["NGROUPS"] = transit_data.shape[1]
header_hdu.header["NINTS"] = seg1_header["NINTS"]
header_hdu.header["NFRAMES"] = 1
header_hdu.header["GROUPGAP"] = 0
header_hdu.header["BSTRTIME"] = seg1_header["BSTRTIME"]
header_hdu.header["BENDTIME"] = seg1_header["BENDTIME"]
header_hdu.header["INTSTART"] = transit_intstart
header_hdu.header["INTEND"] = transit_intend - 1
assert(header_hdu.header["INTEND"] - header_hdu.header["INTSTART"] + 1 == transit_data.shape[0])
output_hdul2 = astropy.io.fits.HDUList([
header_hdu,
astropy.io.fits.ImageHDU(transit_data, name="SCI")])
output_hdul2.writeto(output_filename, overwrite=True)
output_hdul2.close()
transit_data, transit_intstart, seg1_header = write_pre_transit_segment()
transit_data2, transit_intend = write_post_transit_segment()
transit_data = np.append(transit_data, transit_data2, axis=0)
write_transit_segment(transit_data, transit_intstart, transit_intend, seg1_header)
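The integration bookkeeping above can be sanity-checked with plain arithmetic (values copied from the module constants at the top of the script; no FITS files needed):

```python
# Integration numbers, counting from the beginning of the observation.
transit_begin, transit_end = 10824, 11302
seg5_begin, seg6_begin = 10528, 11000

# Pre-transit integrations kept in seg005a, and in-transit integrations
# taken from the start of seg006.
cutoff_pre = transit_begin - seg5_begin
cutoff_post = transit_end - seg6_begin

# In-transit total: the tail of seg5 plus the head of seg6 must cover
# exactly the transit_begin..transit_end window.
n_transit = (seg6_begin - transit_begin) + cutoff_post
```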
|
ideasruleREPO_NAMEspartaPATH_START.@sparta_extracted@sparta-master@gj1214@redivide_segments.py@.PATH_END.py
|
{
"filename": "neg.py",
"repo_name": "tensorflow/tensorflow",
"repo_path": "tensorflow_extracted/tensorflow-master/tensorflow/lite/testing/op_tests/neg.py",
"type": "Python"
}
|
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Test configs for neg."""
import tensorflow as tf
from tensorflow.lite.testing.zip_test_utils import create_tensor_data
from tensorflow.lite.testing.zip_test_utils import make_zip_of_tests
from tensorflow.lite.testing.zip_test_utils import register_make_test_function
@register_make_test_function()
def make_neg_tests(options):
"""Make a set of tests to do neg."""
test_parameters = [{
"input_dtype": [tf.float32, tf.int32],
"input_shape": [[1, 3, 4, 3], [5], []],
}]
def build_graph(parameters):
"""Build the neg op testing graph."""
input_tensor = tf.compat.v1.placeholder(
dtype=parameters["input_dtype"],
name="input",
shape=parameters["input_shape"])
out = tf.negative(input_tensor)
return [input_tensor], [out]
def build_inputs(parameters, sess, inputs, outputs):
values = create_tensor_data(parameters["input_dtype"],
parameters["input_shape"])
return [values], sess.run(outputs, feed_dict=dict(zip(inputs, [values])))
make_zip_of_tests(options, test_parameters, build_graph, build_inputs)
|
tensorflowREPO_NAMEtensorflowPATH_START.@tensorflow_extracted@tensorflow-master@tensorflow@lite@testing@op_tests@neg.py@.PATH_END.py
|
{
"filename": "_anchor.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/layout/yaxis/_anchor.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class AnchorValidator(_plotly_utils.basevalidators.EnumeratedValidator):
def __init__(self, plotly_name="anchor", parent_name="layout.yaxis", **kwargs):
super(AnchorValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "plot"),
values=kwargs.pop(
"values",
[
"free",
"/^x([2-9]|[1-9][0-9]+)?( domain)?$/",
"/^y([2-9]|[1-9][0-9]+)?( domain)?$/",
],
),
**kwargs,
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@layout@yaxis@_anchor.py@.PATH_END.py
|
{
"filename": "__init__.py",
"repo_name": "philbull/FastBox",
"repo_path": "FastBox_extracted/FastBox-main/fastbox/__init__.py",
"type": "Python"
}
|
from .box import CosmoBox
from . import analysis, beams, box, filters, forecast, foregrounds, halos, inpaint, noise, plot, tracers, utils, voids
|
philbullREPO_NAMEFastBoxPATH_START.@FastBox_extracted@FastBox-main@fastbox@__init__.py@.PATH_END.py
|
{
"filename": "_line.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/validators/sunburst/marker/_line.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class LineValidator(_plotly_utils.basevalidators.CompoundValidator):
def __init__(self, plotly_name="line", parent_name="sunburst.marker", **kwargs):
super(LineValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
data_class_str=kwargs.pop("data_class_str", "Line"),
data_docs=kwargs.pop(
"data_docs",
"""
color
Sets the color of the line enclosing each
sector. Defaults to the `paper_bgcolor` value.
colorsrc
Sets the source reference on Chart Studio Cloud
for color .
width
Sets the width (in px) of the line enclosing
each sector.
widthsrc
Sets the source reference on Chart Studio Cloud
for width .
""",
),
**kwargs
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py2@plotly@validators@sunburst@marker@_line.py@.PATH_END.py
|
{
"filename": "util_test.py",
"repo_name": "google/jax",
"repo_path": "jax_extracted/jax-main/tests/util_test.py",
"type": "Python"
}
|
# Copyright 2020 The JAX Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import operator
from absl.testing import absltest
import jax
from jax._src import linear_util as lu
from jax._src import test_util as jtu
from jax._src import util
from jax._src.util import weakref_lru_cache
jax.config.parse_flags_with_absl()
try:
from jax._src.lib import utils as jaxlib_utils
except:
jaxlib_utils = None
class UtilTest(jtu.JaxTestCase):
def test_wrapped_fun_transforms(self):
"""Test a combination of transforms."""
def f(*args, **kwargs):
"""The function to be transformed.
Scales the positional arguments by a factor.
Takes only one keyword argument, the factor to scale by."""
factor = kwargs.pop('factor', 2) # For PY2
assert not kwargs
return tuple(a * factor for a in args)
@lu.transformation_with_aux2
def kw_to_positional(f, store, factor, *args, **kwargs):
"""A transformation with auxiliary output.
Turns all keyword parameters into positional ones.
On entry, append the values of the keyword arguments to the positional
arguments. On exit, take a list of results and recreate a dictionary
from the tail of the results. The auxiliary output is the list of
keyword keys.
"""
kwargs_keys = kwargs.keys()
new_args = tuple(kwargs[k] for k in kwargs_keys)
new_kwargs = dict(factor=factor)
results = f(*(args + new_args), **new_kwargs) # Yield transformed (args, kwargs)
# Assume results correspond 1:1 to the args + new_args
assert len(results) == len(args) + len(new_args)
aux_output = len(new_args)
store.store(aux_output)
return (results[0:len(args)], dict(zip(kwargs_keys, results[len(args):])))
wf = lu.wrap_init(f) # Wraps `f` as a `WrappedFun`.
wf, out_thunk = kw_to_positional(wf, 2)
# Call the transformed function.
scaled_positional, scaled_kwargs = wf.call_wrapped(1, 2, three=3, four=4)
self.assertEqual((2, 4), scaled_positional)
self.assertEqual(dict(three=6, four=8), scaled_kwargs)
self.assertEqual(2, out_thunk())
def test_weakref_lru_cache(self):
@weakref_lru_cache
def example_cached_fn(key):
return object()
class Key:
def __init__(self):
# Make a GC loop.
self.ref_loop = [self]
stable_keys = [Key() for _ in range(2049)]
for i in range(10000):
example_cached_fn(stable_keys[i % len(stable_keys)])
example_cached_fn(Key())
def test_weakref_lru_cache_asan_problem(self):
@weakref_lru_cache
def reference_loop_generator(x):
return x
for _ in range(4097):
reference_loop_generator(lambda x: x)
class SafeMapTest(jtu.JaxTestCase):
def test_safe_map(self):
def unreachable(*args, **kwargs):
raise RuntimeError("unreachable")
self.assertEqual([], util.safe_map(unreachable, []))
self.assertEqual([], util.safe_map(unreachable, (), []))
self.assertEqual([], util.safe_map(unreachable, [], [], []))
self.assertEqual([], util.safe_map(unreachable, [], iter([]), [], []))
def double(x):
return x * 2
self.assertEqual([14], util.safe_map(double, (7,)))
self.assertEqual([0, 2, 4, 6], util.safe_map(double, range(4)))
def make_tuple(*args):
return args
self.assertEqual(
[(0, 4), (1, 5), (2, 6), (3, 7)],
util.safe_map(make_tuple, range(4), range(4, 8)),
)
def test_safe_map_errors(self):
with self.assertRaisesRegex(
TypeError, "safe_map requires at least 2 arguments"
):
util.safe_map()
with self.assertRaisesRegex(
TypeError, "safe_map requires at least 2 arguments"
):
util.safe_map(lambda x: x)
with self.assertRaisesRegex(TypeError, "'int' object is not callable"):
util.safe_map(7, range(6))
def error(*args, **kwargs):
raise RuntimeError("hello")
with self.assertRaisesRegex(RuntimeError, "hello"):
util.safe_map(error, range(6))
with self.assertRaisesRegex(
ValueError, r"safe_map\(\) argument 2 is longer than argument 1"
):
util.safe_map(operator.add, range(3), range(4))
with self.assertRaisesRegex(
ValueError, r"safe_map\(\) argument 2 is shorter than argument 1"
):
util.safe_map(operator.add, range(7), range(2))
with self.assertRaisesRegex(
ValueError, r"safe_map\(\) argument 2 is longer than argument 1"
):
util.safe_map(operator.add, (), range(3))
class SafeZipTest(jtu.JaxTestCase):
def test_safe_zip(self):
self.assertEqual([], util.safe_zip([]))
self.assertEqual([], util.safe_zip((), []))
self.assertEqual([], util.safe_zip([], [], []))
self.assertEqual([], util.safe_zip([], iter([]), [], []))
self.assertEqual([(7,)], util.safe_zip((7,)))
self.assertEqual([(0,), (1,), (2,), (3,)], util.safe_zip(range(4)))
self.assertEqual(
[(0, 4), (1, 5), (2, 6), (3, 7)],
util.safe_zip(range(4), range(4, 8)),
)
def test_safe_zip_errors(self):
with self.assertRaisesRegex(
TypeError, "safe_zip requires at least 1 argument"
):
util.safe_zip()
with self.assertRaisesRegex(
TypeError, "'function' object is not iterable"
):
util.safe_zip(lambda x: x)
with self.assertRaisesRegex(
ValueError, r"safe_zip\(\) argument 2 is longer than argument 1"
):
util.safe_zip(range(3), range(4))
with self.assertRaisesRegex(
ValueError, r"safe_zip\(\) argument 2 is shorter than argument 1"
):
util.safe_zip(range(7), range(2))
with self.assertRaisesRegex(
ValueError, r"safe_zip\(\) argument 2 is longer than argument 1"
):
util.safe_zip((), range(3))
if __name__ == "__main__":
absltest.main(testLoader=jtu.JaxTestLoader())
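A simplified stand-in (not the real jax implementation; name and error wording are illustrative) showing the contract these tests exercise: `safe_zip` behaves like `zip()` but raises instead of silently truncating when argument lengths differ:

```python
def safe_zip_sketch(*args):
    # Materialize iterators so lengths can be compared.
    args = [list(a) for a in args]
    n = len(args[0])
    for i, a in enumerate(args[1:], start=2):
        if len(a) != n:
            raise ValueError("safe_zip() argument %d has length %d, "
                             "expected %d" % (i, len(a), n))
    return list(zip(*args))

pairs = safe_zip_sketch(range(3), range(3, 6))
```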
|
googleREPO_NAMEjaxPATH_START.@jax_extracted@jax-main@tests@util_test.py@.PATH_END.py
|
{
"filename": "test_kernel.py",
"repo_name": "clemson-cal/sailfish",
"repo_path": "sailfish_extracted/sailfish-master/scripts/test_kernel.py",
"type": "Python"
}
|
import sys
import logging
sys.path.insert(1, ".")
code = """
PUBLIC void my_1d_kernel(
int ni,
double *data) // :: $.shape == (ni,)
{
FOR_EACH_1D(ni)
{
data[i] = i;
}
}
PUBLIC void my_2d_kernel(
int ni,
int nj,
double *data) // :: $.shape == (ni, nj)
{
FOR_EACH_2D(ni, nj)
{
data[i * nj + j] = i + j;
}
}
PUBLIC void my_3d_kernel(
int ni,
int nj,
int nk,
double *data) // :: $.shape == (ni, nj, nk)
{
FOR_EACH_3D(ni, nj, nk)
{
data[i * nj * nk + j * nk + k] = i + j + k;
}
}
"""
def main():
import argparse
import numpy as np
from sailfish.kernel.library import Library
from sailfish.kernel.system import configure_build
configure_build(enable_openmp=True)
logging.basicConfig(level=logging.INFO)
parser = argparse.ArgumentParser()
parser.add_argument("--mode", default="cpu", choices=["cpu", "omp", "gpu"])
args = parser.parse_args()
if args.mode == "gpu":
import cupy as xp
else:
import numpy as xp
library = Library(code, mode=args.mode)
data_1d = xp.zeros([10])
data_2d = xp.zeros([10, 20])
data_3d = xp.zeros([10, 20, 30])
library.my_1d_kernel[data_1d.shape](data_1d)
library.my_2d_kernel[data_2d.shape](data_2d)
library.my_3d_kernel[data_3d.shape](data_3d)
if args.mode == "gpu":
data_1d = data_1d.get()
data_2d = data_2d.get()
data_3d = data_3d.get()
for (i,), x in np.ndenumerate(data_1d):
assert i == x
for (i, j), x in np.ndenumerate(data_2d):
assert i + j == x
for (i, j, k), x in np.ndenumerate(data_3d):
assert i + j + k == x
if __name__ == "__main__":
main()
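For reference, the index sums those kernels write can be reproduced with NumPy broadcasting alone; this sketch mirrors the assertions in main() without compiling any C:

```python
import numpy as np

# NumPy mirror of the three kernels: every element equals the sum of its
# indices, exactly what the assertions in main() verify.
ni, nj, nk = 10, 20, 30
i = np.arange(ni, dtype=float)
j = np.arange(nj, dtype=float)
k = np.arange(nk, dtype=float)

data_1d = i                                         # data[i] = i
data_2d = i[:, None] + j[None, :]                   # data[i, j] = i + j
data_3d = i[:, None, None] + j[None, :, None] + k   # data[i, j, k] = i + j + k

assert data_2d[3, 5] == 8.0
assert data_3d[1, 2, 4] == 7.0
```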
|
clemson-calREPO_NAMEsailfishPATH_START.@sailfish_extracted@sailfish-master@scripts@test_kernel.py@.PATH_END.py
|
{
"filename": "torchvision_schema.py",
"repo_name": "alibaba/TinyNeuralNetwork",
"repo_path": "TinyNeuralNetwork_extracted/TinyNeuralNetwork-main/tinynn/converter/schemas/torch/torchvision_schema.py",
"type": "Python"
}
|
from abc import abstractmethod
from ...operators.torch.base import OperatorConverter
class TorchVisionPsRoiAlignSchema(OperatorConverter):
@abstractmethod
def parse(self, node, attrs, args, graph_converter):
'''torchvision::ps_roi_align(Tensor input, Tensor rois, float spatial_scale, int pooled_height, int pooled_width, int sampling_ratio) -> (Tensor, Tensor)'''
pass
class TorchVisionRoiAlignSchema(OperatorConverter):
@abstractmethod
def parse(self, node, attrs, args, graph_converter):
'''torchvision::roi_align(Tensor input, Tensor rois, float spatial_scale, int pooled_height, int pooled_width, int sampling_ratio, bool aligned) -> (Tensor)'''
pass
class TorchVisionPsRoiPoolSchema(OperatorConverter):
@abstractmethod
def parse(self, node, attrs, args, graph_converter):
'''torchvision::ps_roi_pool(Tensor input, Tensor rois, float spatial_scale, int pooled_height, int pooled_width) -> (Tensor, Tensor)'''
pass
class TorchVisionDeformConv2dSchema(OperatorConverter):
@abstractmethod
def parse(self, node, attrs, args, graph_converter):
'''torchvision::deform_conv2d(Tensor input, Tensor weight, Tensor offset, Tensor mask, Tensor bias, int stride_h, int stride_w, int pad_h, int pad_w, int dilation_h, int dilation_w, int groups, int offset_groups, bool use_mask) -> (Tensor)'''
pass
class TorchVisionInterpolateBilinear2dAaSchema(OperatorConverter):
@abstractmethod
def parse(self, node, attrs, args, graph_converter):
'''torchvision::_interpolate_bilinear2d_aa(Tensor input, int[] output_size, bool align_corners) -> (Tensor)'''
pass
class TorchVisionInterpolateBicubic2dAaSchema(OperatorConverter):
@abstractmethod
def parse(self, node, attrs, args, graph_converter):
'''torchvision::_interpolate_bicubic2d_aa(Tensor input, int[] output_size, bool align_corners) -> (Tensor)'''
pass
class TorchVisionNmsSchema(OperatorConverter):
@abstractmethod
def parse(self, node, attrs, args, graph_converter):
'''torchvision::nms(Tensor dets, Tensor scores, float iou_threshold) -> (Tensor)'''
pass
class TorchVisionRoiPoolSchema(OperatorConverter):
@abstractmethod
def parse(self, node, attrs, args, graph_converter):
'''torchvision::roi_pool(Tensor input, Tensor rois, float spatial_scale, int pooled_height, int pooled_width) -> (Tensor, Tensor)'''
pass
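Each converter class above carries its TorchScript schema as the docstring of parse(). As an illustration (a hypothetical helper, not part of TinyNeuralNetwork), the qualified operator name can be recovered from such a docstring:

```python
import re

def schema_op_name(parse_fn):
    # Pull 'namespace::op' out of a schema docstring like the ones above.
    m = re.match(r"(\w+::\w+)\(", parse_fn.__doc__ or "")
    return m.group(1) if m else None

def example_parse(self, node, attrs, args, graph_converter):
    '''torchvision::nms(Tensor dets, Tensor scores, float iou_threshold) -> (Tensor)'''

assert schema_op_name(example_parse) == "torchvision::nms"
```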
|
alibabaREPO_NAMETinyNeuralNetworkPATH_START.@TinyNeuralNetwork_extracted@TinyNeuralNetwork-main@tinynn@converter@schemas@torch@torchvision_schema.py@.PATH_END.py
|
{
"filename": "_maxpoints.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/validators/indicator/stream/_maxpoints.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class MaxpointsValidator(_plotly_utils.basevalidators.NumberValidator):
def __init__(
self, plotly_name="maxpoints", parent_name="indicator.stream", **kwargs
):
super(MaxpointsValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "calc"),
max=kwargs.pop("max", 10000),
min=kwargs.pop("min", 0),
role=kwargs.pop("role", "info"),
**kwargs
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py2@plotly@validators@indicator@stream@_maxpoints.py@.PATH_END.py
|
{
"filename": "maketable.py",
"repo_name": "alexbinks/tessilator",
"repo_path": "tessilator_extracted/tessilator-main/tessilator/maketable.py",
"type": "Python"
}
|
'''
Alexander Binks & Moritz Guenther, 2024
Licence: MIT 2024
Make tabular data for tessilator
This module contains the functions required to convert the input data
into correct formatted astropy tables to be used for further analysis
in the tessilator.
'''
###############################################################################
####################################IMPORTS####################################
###############################################################################
#Internal
import warnings
import sys
import inspect
#Third party
from astropy.table import Table
from astroquery.simbad import Simbad
from astroquery.gaia import Gaia
from astropy.coordinates import SkyCoord, ICRS
import astropy.units as u
import numpy as np
# Local application
from .file_io import logger_tessilator
###############################################################################
###############################################################################
###############################################################################
# initialize the logger object
logger = logger_tessilator(__name__)
def table_from_simbad(input_names):
'''Generate the formatted astropy table from a list of target names.
All characters can be parsed except commas, since the table is in
comma-separated value (.csv) format.
parameters
----------
input_names : `astropy.table.Table`
An input list of target names.
returns
-------
gaia_table : `astropy.table.Table`
The output table ready for further analysis.
'''
# Part 1: Use the SIMBAD database to retrieve the Gaia source identifier
# from the target names.
# set the column header = "ID"
input_names.rename_column(input_names.colnames[0], 'ID')
input_names["ID"] = input_names["ID"].astype(str)
# create arrays to store naming variables
name_arr, is_Gaia = [], [0 for i in input_names]
for i, input_name in enumerate(input_names["ID"]):
# if the target name is the numeric part of the Gaia DR3 source identifier
# prefix the name with "Gaia DR3 "
if input_name.isnumeric() and len(input_name) > 10:
input_name = "Gaia DR3 " + input_name
is_Gaia[i] = 1
name_arr.append(input_name)
# Get a list of object identifiers from Simbad
# suppress the Simbad.query_objectids warnings if there are no matches for
# the input name
# with warnings.catch_warnings():
# warnings.simplefilter(action='ignore', category=UserWarning)
try:
result_table = [Simbad.query_objectids(name) for name in name_arr]
except:
result_table = [None for name in name_arr]
NameList = []
GaiaList = []
for r, res in enumerate(result_table):
input_name = input_names["ID"][r]
if res is None: # no targets resolved by SIMBAD
logger.warning(f"Simbad did not resolve {input_name} - checking "
f"Gaia")
if is_Gaia[r] == 1:
NameList.append("Gaia DR3 " + input_name)
GaiaList.append(input_name)
else:
logger.error(f"Could not find any match for '{input_name}'")
else: # Simbad returns at least one identifier
r_list = [z for z in res["id"]]
m = [s for s in r_list if "Gaia DR3" in s]
if len(m) == 0: # if Gaia identifier is not in the Simbad list
if is_Gaia[r] == 1:
logger.warning("Simbad didn't resolve Gaia DR3 "
f"identifiers for {input_name}, "
f"but we'll check anyway!")
NameList.append("Gaia DR3 " + input_name)
GaiaList.append(input_name)
else:
logger.error(f"Could not find any match for "
f"'{input_name}'")
else:
NameList.append(input_name)
GaiaList.append(m[0].split(' ')[2])
if len(NameList) == 0:
logger.error(
"No targets have been resolved, either by Simbad or "
"Gaia DR3. Please check the target names are "
"resolvable."
)
sys.exit()
# Part 2: Query Gaia database using Gaia identifiers retrieved in part 1.
ID_string = ','.join(GaiaList)
qry = "SELECT source_id,ra,dec,parallax,"\
"phot_g_mean_mag,phot_bp_mean_mag,phot_rp_mean_mag "\
"FROM gaiadr3.gaia_source "\
f"WHERE source_id in ({ID_string});"
job = Gaia.launch_job_async( qry )
gaia_table = job.get_results() # Astropy table
logger.info('query completed!')
# convert source_id column to str (astroquery returns type np.int64)
gaia_table["source_id"] = gaia_table["source_id"].astype(str)
list_ind = []
# astroquery returns the table sorted numerically by the source identifier
# the rows are rearranged to match with the input list.
for row in GaiaList:
list_ind.append(np.where(np.array(gaia_table["source_id"] == \
str(row)))[0][0])
gaia_table = gaia_table[list_ind]
gaia_table["source_id"] = [f"Gaia DR3 {i}" for i in gaia_table["source_id"]]
gaia_table['name'] = NameList
gaia_table.rename_column('phot_g_mean_mag', 'Gmag')
gaia_table.rename_column('phot_bp_mean_mag', 'BPmag')
gaia_table.rename_column('phot_rp_mean_mag', 'RPmag')
new_order = ['name', 'source_id', 'ra', 'dec', 'parallax',
'Gmag', 'BPmag', 'RPmag']
gaia_table = gaia_table[new_order]
return gaia_table
def get_twomass_like_name(coords):
'''If the Gaia DR3 system is not chosen, this function returns a string
which has the same format as the 2MASS identifiers.
parameters
----------
coords : `astropy.coordinates.SkyCoord`
The SkyCoord tuple of right ascension and declination values.
returns
-------
radec_fin : `list`
A list of 2MASS-like identifiers.
'''
ra_hms = coords.ra.to_string(u.h, sep="", precision=2, alwayssign=False,
pad=True)
ra_hms_fin = [ra.replace(".","") for ra in ra_hms]
dec_hms = coords.dec.to_string(sep="", precision=1, alwayssign=True,
pad=True)
dec_hms_fin = [dec.replace(".","") for dec in dec_hms]
radec_fin = []
for r,d in zip(ra_hms_fin, dec_hms_fin):
radec_fin.append(f'{r}{d}')
return radec_fin
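The sexagesimal formatting above can be cross-checked without astropy. The helper below is a plain-Python sketch of the same HHMMSSss±DDMMSSs convention for a single target (an assumption-laden illustration: it ignores carry when the seconds field rounds up to 60):

```python
def twomass_like(ra_deg, dec_deg):
    # Plain-Python sketch of get_twomass_like_name() for one position,
    # assuming the same zero-padded, decimal-point-stripped convention.
    ra_h = ra_deg / 15.0                       # degrees -> hours
    h = int(ra_h)
    m = int((ra_h - h) * 60)
    s = (ra_h - h - m / 60.0) * 3600.0
    ra_str = f"{h:02d}{m:02d}{s:05.2f}".replace(".", "")
    sign = "+" if dec_deg >= 0 else "-"
    d = abs(dec_deg)
    dd = int(d)
    dm = int((d - dd) * 60)
    ds = (d - dd - dm / 60.0) * 3600.0
    dec_str = f"{sign}{dd:02d}{dm:02d}{ds:04.1f}".replace(".", "")
    return ra_str + dec_str

# e.g. RA = 150.0 deg, Dec = +20.5 deg
assert twomass_like(150.0, 20.5) == "10000000+2030000"
```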
def table_from_coords(coord_table, ang_max=10.0, type_coord='icrs',
gaia_sys=True):
"""Generate the formatted astropy table from a list of coordinates.
Each entry needs to be in comma-separated value (.csv) format.
parameters
----------
coord_table : `astropy.table.Table`
A table with two columns named 'col1' and 'col2'. If the coordinates
are in the 'icrs' system, the columns should contain the right
ascension and declination values in degrees. If the coordinates are in
the 'galactic' or 'ecliptic' system, the columns contain the longitude and
latitude in degrees.
ang_max : `float`, optional, default=10.0
the maximum angular distance in arcseconds from the input coordinates
provided in the table.
type_coord : `str`, optional, default='icrs'
The coordinate system of the input positions. These can be 'icrs'
(default), 'galactic' or 'ecliptic'.
gaia_sys : `bool`, optional, default=True
Choose to format the data based on Gaia DR3. Note that no contamination
can be calculated if this is False.
returns
-------
gaia_table : `astropy.table.Table`
The output table ready for further analysis.
"""
gaia_table = Table(names=('source_id', 'ra', 'dec', 'parallax',
'Gmag', 'BPmag', 'RPmag'), \
dtype=(int,float,float,float,float,float,float))
if type_coord == 'galactic':
gal = SkyCoord(l=coord_table['col1'],\
b=coord_table['col2'],\
unit=u.deg, frame='galactic')
c = gal.transform_to(ICRS)
coord_table['col1'], coord_table['col2'] = c.ra.deg, c.dec.deg
elif type_coord == 'ecliptic':
ecl = SkyCoord(lon=coord_table['col1'],\
lat=coord_table['col2'],\
unit=u.deg, frame='barycentricmeanecliptic')
c = ecl.transform_to(ICRS)
coord_table['col1'], coord_table['col2'] = c.ra.deg, c.dec.deg
elif type_coord == 'icrs':
c = SkyCoord(ra=coord_table['col1'], dec=coord_table['col2'],
unit=u.deg, frame='icrs')
coord_table['col1'], coord_table['col2'] = c.ra.deg, c.dec.deg
coord_table.rename_column(coord_table.colnames[0], 'ra')
coord_table.rename_column(coord_table.colnames[1], 'dec')
if gaia_sys:
for i in range(len(coord_table)):
# Generate an SQL query for each target, where the nearest source is
# returned within a maximum radius set by ang_max.
qry = f"SELECT source_id,ra,dec,parallax,phot_g_mean_mag,\
phot_bp_mean_mag,phot_rp_mean_mag, \
DISTANCE(\
POINT({coord_table['ra'][i]}, {coord_table['dec'][i]}),\
POINT(ra, dec)) AS ang_sep\
FROM gaiadr3.gaia_source \
WHERE 1 = CONTAINS(\
POINT({coord_table['ra'][i]}, {coord_table['dec'][i]}),\
CIRCLE(ra, dec, {ang_max}/3600.)) \
ORDER BY ang_sep ASC"
job = Gaia.launch_job_async( qry )
x = job.get_results() # Astropy table
print(f"astroquery completed for target {i+1} of "
f"{len(coord_table)}")
# Fill the empty table with results from astroquery
if len(x) == 0:
continue
else:
y = x[0]['source_id', 'ra', 'dec', 'parallax',
'phot_g_mean_mag', 'phot_bp_mean_mag',
'phot_rp_mean_mag']
gaia_table.add_row((y))
# For each source, query the identifiers resolved by SIMBAD and return
# the target with the shortest number of characters (which is more
# likely to be the most common reference name for the target).
GDR3_Names = ["Gaia DR3 " + i for i in \
gaia_table['source_id'].astype(str)]
try:
result_table = [Simbad.query_objectids(i) for i in GDR3_Names]
except:
result_table = [None for i in GDR3_Names]
NameList = []
for i, r in enumerate(result_table):
if r is None:
NameList.append(gaia_table['source_id'][i].astype(str))
else:
NameList.append(sorted(r["ID"], key=len)[0])
gaia_table["name"] = NameList
gaia_table["source_id"] = GDR3_Names
else:
twomass_name = get_twomass_like_name(c)
for i in range(len(twomass_name)):
source_id = f'{i+1:0{len(str(len(twomass_name)))}d}'
row = [source_id,c[i].ra.deg,c[i].dec.deg,-999,-999,-999,-999]
gaia_table.add_row(row)
gaia_table['name'] = twomass_name
new_order = ['name', 'source_id', 'ra', 'dec', 'parallax',
'Gmag', 'BPmag', 'RPmag']
gaia_table = gaia_table[new_order]
return gaia_table
def table_from_table(input_table, name_is_source_id=False):
'''Generate the formatted astropy table from a pre-formatted astropy
table.
Each entry needs to be in comma-separated value (.csv) format. This
is the quickest way to produce the table ready for analysis, but it is
important the input data is properly formatted.
parameters
----------
input_table : `astropy.table.Table`
The columns of table must be in the following order:
* source_id (data type: `str`)
* ra (data type: `float`)
* dec (data type: `float`)
* parallax (data type: `float`)
* Gmag (data type: `float`)
* BPmag (data type: `float`)
* RPmag (data type: `float`)
The column headers must not be included!
name_is_source_id : `bool`, optional, default=False
Choose if the name is to be the same as the Gaia DR3 source identifier.
returns
-------
gaia_table : `astropy.table.Table`
The output table ready for further analysis.
'''
gaia_table = Table(data=input_table, dtype=(str, float, float, float,
float, float, float),
names=('source_id', 'ra', 'dec', 'parallax', 'Gmag',
'BPmag', 'RPmag'))
if name_is_source_id:
gaia_table['name'] = gaia_table['source_id'].data
else:
GDR3_Names = [i for i in\
gaia_table['source_id']]
NameList = []
try:
result_table = [Simbad.query_objectids(i) for i in GDR3_Names]
except:
result_table = [None for i in GDR3_Names]
for i, r in enumerate(result_table):
if r is None:
NameList.append(str(gaia_table['source_id'][i]))
else:
NameList.append(sorted(r["ID"], key=len)[0])
gaia_table["name"] = NameList
new_order = ['name', 'source_id', 'ra', 'dec', 'parallax', 'Gmag',
'BPmag', 'RPmag']
gaia_table = gaia_table[new_order]
return gaia_table
def get_gaia_data(gaia_table, name_is_source_id=False, type_coord='icrs',
gaia_sys=True):
"""Reads the input table and returns a table in the correct format for
TESSilator.
The table must be in comma-separated value format, in one of these
three ways:
1. A table with a single column containing the source identifier
Note that this is the preferred method since the target identified
in the Gaia query is unambiguously the same as the input value.
Also, the name match runs faster than the coordinate match using
astroquery.
2. A table with sky-coordinates in either the 'icrs' (default),
'galactic', or 'ecliptic' system.
* note this is slower because of the time required to run the Gaia
archive query.
3. A table with all 7 columns already made.
Parameters
----------
gaia_table : `astropy.table.Table`
The input table
name_is_source_id : `bool`, optional, default=False
If the input table has 7 columns, this provides the choice to set the
name column equal to "source_id" (True), or to find a common target
identifier (False)
type_coord : `str`, optional, default='icrs'
The coordinate system of the input data. Choose from 'icrs', 'galactic'
or 'barycentricmeanecliptic', where the latter is the conventional
coordinate system used by TESS.
gaia_sys : `bool`, optional, default=True
Choose to format the data based on Gaia DR3. Note that no contamination
can be calculated if this is False.
Returns
-------
tbl : `astropy.table.Table`
The table ready for TESSilator analysis, with the columns:
* name: the preferred choice of source identifier
* source_id: the Gaia DR3 source identifier
* ra: right ascension (icrs) or longitude (galactic,
barycentricmeanecliptic)
* dec: declination (icrs) or latitude (galactic,
barycentricmeanecliptic)
* parallax: parallax from Gaia DR3 (in mas)
* Gmag: the apparent G-band magnitude from Gaia DR3
* BPmag: the apparent BP-band magnitude from Gaia DR3
* RPmag: the apparent RP-band magnitude from Gaia DR3
"""
warnings.filterwarnings('ignore', category=UserWarning, append=True)
if len(gaia_table.colnames) == 1:
tbl = table_from_simbad(gaia_table)
elif len(gaia_table.colnames) == 2:
tbl = table_from_coords(gaia_table, type_coord=type_coord,
gaia_sys=gaia_sys)
elif len(gaia_table.colnames) == 7:
tbl = table_from_table(gaia_table, name_is_source_id=name_is_source_id)
else:
raise Exception(
"Input table has invalid format. Please use one of "
"the following formats: \n [1] source_id \n [2] ra "
"and dec\n [3] source_id, ra, dec, parallax, Gmag, "
"BPmag and RPmag"
)
return tbl
__all__ = [item[0] for item in inspect.getmembers(sys.modules[__name__], \
predicate = lambda f: inspect.isfunction(f) and \
f.__module__ == __name__)]
|
alexbinksREPO_NAMEtessilatorPATH_START.@tessilator_extracted@tessilator-main@tessilator@maketable.py@.PATH_END.py
|
{
"filename": "_cache.py",
"repo_name": "xpsi-group/xpsi",
"repo_path": "xpsi_extracted/xpsi-main/xpsi/PostProcessing/_cache.py",
"type": "Python"
}
|
from .. import __version__
from ._global_imports import *
try:
import h5py
except ImportError:
print('Install h5py to enable signal caching.')
raise
class _Cache(object):
""" Cache numerical model objects computed during likelihood evaluation.
:param str filename:
Filename of cache.
:param str cache_dir:
Directory to write cache to.
:param bool read_only:
Do not write to cache file?
:param bool archive:
If not read-only, then archive an existing cache file found at the
same path?
"""
def __init__(self, filename, cache_dir='./',
read_only=False, archive=True):
if isinstance(filename, _six.string_types):
if filename[-3:] != '.h5':
self._filename = filename + '.h5'
else:
self._filename = filename
self._cache_dir = cache_dir
self._path = _os.path.join(self._cache_dir, self._filename)
self._read_only = read_only
self._archive_if_incompatible = archive
def __enter__(self):
return self
def __exit__(self, exc, exc_value, traceback):
if exc:
print('Encountered problem whilst caching:')
def _open(self, mode='r'):
""" Get the :mod:`h5py` context manager. """
if self._read_only and mode != 'r':
raise RuntimeError('The cache is in read-only mode.')
return h5py.File(self._path, mode)
def cache(self, data):
""" Cache the computational data. """
with self._open('r+') as f:
g = f['data']
for key, value in data.items():
if isinstance(value, tuple) or isinstance(value, list):
if key not in list(g.keys()):
shape = [f.attrs['n'], len(value)]
shape += [s for s in value[0].shape]
g.create_dataset(key, shape=shape, dtype='float64')
for j, v in enumerate(value):
g[key][self.i,j,...] = v
else:
if key not in list(g.keys()):
shape = [f.attrs['n']] + [s for s in value.shape]
g.create_dataset(key, shape=shape, dtype='float64')
g[key][self.i,...] = value
self.i += 1
def reset_iterator(self):
""" Reset the counter for the cache iterator. """
self.i = 0
def __iter__(self):
self.reset_iterator()
return self
def __next__(self):
""" Read from the cache. """
cached = {}
with self._open('r') as f:
g = f['data']
for key in g.keys():
cached[key] = g[key][self.i,...]
self.i += 1
return cached
@make_verbose('Checking whether an existing cache can be read:',
'Cache state determined')
def do_caching(self, samples, force=False):
""" Check whether a new cache is required or whether an exising
cache can be read without additional computation.
:return: Boolean indicating whether to read (``False``) or write.
"""
if force:
self._new(samples)
return True
try: # try reading file and checking keys
with self._open('r') as f:
if 'thetas' not in list(f.keys()):
self._new(samples)
return True
except IOError: # create new cache file
self._new(samples)
return True
else: # can be read, so check if samples array are matching
if self._changed(samples):
self._new(samples)
return True
else:
return False
@make_verbose('Creating new cache file', 'Cache file created')
def _new(self, samples):
""" Prepare a new cache file. """
if not _os.path.isdir(self._cache_dir):
_os.mkdir(self._cache_dir)
if self._archive_if_incompatible:
try:
with self._open('r'):
pass
except IOError:
self._initialise(samples)
else:
self._archive()
self._initialise(samples)
else:
self._initialise(samples)
@make_verbose('Initialising cache file', 'Cache file initialised')
def _initialise(self, samples):
""" Initialise the cache. """
with self._open('w') as f:
f.attrs['version'] = __version__
f.attrs['n'] = samples.shape[0]
f.create_dataset('thetas', data=samples)
f.create_group('/data')
self.reset_iterator()
def _changed(self, samples):
""" Check whether software version or sample set has changed. """
with self._open('r') as f:
if f.attrs['version'] != __version__:
return True
if not _np.array_equal(f['thetas'], samples):
return True
return False
@make_verbose('Attempting to archive existing cache file in '
'a subdirectory')
def _archive(self):
""" Archive an existing cache file. """
# to archive the existing cache file
archive_dir = _os.path.join(self._cache_dir, 'archive')
try:
if not _os.path.isdir(archive_dir):
_os.mkdir(archive_dir)
except OSError:
yield ('Archiving failed... cache file %s will be '
'overwritten.' % self._filename)
yield
else:
yield 'Targeting subdirectory: %s.' % archive_dir
try:
from datetime import datetime
except ImportError:
yield ('Archiving failed... cache file %s will be '
'overwritten.' % self._filename)
yield
else:
name_archived = self._filename[:-3] + '__archive__'
name_archived += 'xpsi_version_%s__' % __version__
obj = datetime.now()
name_archived += 'datetime__%i.%i.%i__%i.%i.%i' % (obj.day,
obj.month,
obj.year,
obj.hour,
obj.minute,
obj.second)
try:
_os.rename(self._filename,
_os.path.join(archive_dir, name_archived + '.h5'))
except OSError:
yield ('Archiving failed... cache file %s will be '
'overwritten.' % self._filename)
else:
yield ('Existing cache file archived in '
'subdirectory %s.' % archive_dir)
yield None
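The archive filename built in _archive() can be reproduced in isolation. This is a stdlib-only sketch of that naming scheme (the version string here is a placeholder, not a real X-PSI release):

```python
from datetime import datetime

def archive_name(filename, version, when=None):
    # Mirror of _Cache._archive()'s naming:
    # '<stem>__archive__xpsi_version_<v>__datetime__D.M.Y__H.M.S.h5'
    obj = when if when is not None else datetime.now()
    name = filename[:-3] + '__archive__'
    name += 'xpsi_version_%s__' % version
    name += 'datetime__%i.%i.%i__%i.%i.%i' % (obj.day, obj.month, obj.year,
                                              obj.hour, obj.minute, obj.second)
    return name + '.h5'

assert (archive_name('cache.h5', '2.0', datetime(2024, 3, 1, 12, 5, 9))
        == 'cache__archive__xpsi_version_2.0__datetime__1.3.2024__12.5.9.h5')
```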
|
xpsi-groupREPO_NAMExpsiPATH_START.@xpsi_extracted@xpsi-main@xpsi@PostProcessing@_cache.py@.PATH_END.py
|
{
"filename": "velocity_moments_kexpanded_fftw.py",
"repo_name": "sfschen/velocileptors",
"repo_path": "velocileptors_extracted/velocileptors-master/velocileptors/EPT/velocity_moments_kexpanded_fftw.py",
"type": "Python"
}
|
import numpy as np
import time
from scipy.interpolate import interp1d
from velocileptors.Utils.spherical_bessel_transform_fftw import SphericalBesselTransform
from velocileptors.EPT.cleft_kexpanded_fftw import KECLEFT
class KEVelocityMoments(KECLEFT):
'''
Class based on cleft_kexpanded_fftw to compute pairwise velocity moments, in expanded LPT.
Structured in the same way as the inherited class but with functions for velocity moments.
'''
def __init__(self, *args, beyond_gauss = True, **kw):
'''
Same keywords as the cleft_kexpanded_fftw class. Go look there!
'''
# Set up the configuration space quantities
KECLEFT.__init__(self, *args, **kw)
self.setup_onedot()
self.setup_twodots()
self.beyond_gauss = beyond_gauss
# v12 and sigma12 only have a subset of the bias contributions so we don't need to have as many FFTs
if self.third_order:
self.num_vel_components = 8; self.vii = np.array([0,1,2,3,4,6,7,10]) + 1
self.num_spar_components = 5; self.sparii = np.array([0,1,2,3,6]) + 1
self.num_strace_components = 5; self.straceii = np.array([0,1,2,3,6]) + 1
elif self.shear:
self.num_vel_components = 7; self.vii = np.array([0,1,2,3,4,6,7]) + 1
self.num_spar_components = 5; self.sparii = np.array([0,1,2,3,6]) + 1
self.num_strace_components = 5; self.straceii = np.array([0,1,2,3,6]) + 1
else:
self.num_vel_components = 5; self.vii = np.array([0,1,2,3,4]) + 1
self.num_spar_components = 4; self.sparii = np.array([0,1,2,3]) + 1
self.num_strace_components = 4; self.straceii = np.array([0,1,2,3]) + 1
# Need one extra component to do the matter za
self.sph_v = SphericalBesselTransform(self.qint, L=self.jn, ncol=(self.num_vel_components), threads=self.threads, import_wisdom= self.import_wisdom, wisdom_file = self.wisdom_file)
self.sph_spar = SphericalBesselTransform(self.qint, L=self.jn, ncol=(self.num_spar_components), threads=self.threads, import_wisdom= self.import_wisdom, wisdom_file = self.wisdom_file)
self.sph_strace = SphericalBesselTransform(self.qint, L=self.jn, ncol=(self.num_strace_components), threads=self.threads, import_wisdom= self.import_wisdom, wisdom_file = self.wisdom_file)
if self.beyond_gauss:
# Beyond the first two moments
self.num_gamma_components = 2; self.gii = np.array([0,1]) + 1 # gamma has matter (all loop, so lump into 0) and b1
self.sph_gamma1 = SphericalBesselTransform(self.qint, L=self.jn, ncol=(self.num_gamma_components), threads=self.threads, import_wisdom= self.import_wisdom, wisdom_file = self.wisdom_file)
self.sph_gamma2 = SphericalBesselTransform(self.qint, L=self.jn, ncol=(self.num_gamma_components), threads=self.threads, import_wisdom= self.import_wisdom, wisdom_file = self.wisdom_file)
# fourth moment
self.num_kappa_components = 3; self.kii = np.array([0,1,2]) + 1 # note that these are not the bias comps
self.sph_kappa = SphericalBesselTransform(self.qint, L=self.jn, ncol=(self.num_kappa_components), threads=self.threads, import_wisdom= self.import_wisdom, wisdom_file = self.wisdom_file)
self.compute_oneloop_spectra()
def make_tables(self, kmin = 1e-3, kmax = 3, nk = 100, linear_theory=False):
self.kv = np.logspace(np.log10(kmin), np.log10(kmax), nk)
self.make_ptable(kmin=kmin, kmax=kmax, nk=nk)
self.make_vtable(kmin=kmin, kmax=kmax, nk=nk)
self.make_spartable(kmin=kmin, kmax=kmax, nk=nk)
self.make_stracetable(kmin=kmin, kmax=kmax, nk=nk)
self.convert_sigma_bases()
if self.beyond_gauss: # make these even if not required since they're fast
self.make_gamma1table(kmin=kmin,kmax=kmax,nk=nk)
self.make_gamma2table(kmin=kmin,kmax=kmax,nk=nk)
self.convert_gamma_bases()
self.make_kappatable(kmin=kmin,kmax=kmax,nk=nk)
self.convert_kappa_bases()
def compute_oneloop_spectra(self):
'''
Compute all velocity spectra nonzero at one loop.
'''
self.compute_p_linear()
self.compute_p_connected()
self.compute_p_k0()
self.compute_p_k1()
self.compute_p_k2()
self.compute_p_k3()
self.compute_p_k4()
self.compute_v_linear()
self.compute_v_connected()
self.compute_v_k0()
self.compute_v_k1()
self.compute_v_k2()
self.compute_v_k3()
self.compute_spar_linear()
self.compute_spar_connected()
self.compute_spar_k0()
self.compute_spar_k1()
self.compute_spar_k2()
self.compute_strace_linear()
self.compute_strace_connected()
self.compute_strace_k0()
self.compute_strace_k1()
self.compute_strace_k2()
if self.beyond_gauss:
self.compute_gamma1_connected()
self.compute_gamma1_k0()
self.compute_gamma1_k1()
self.compute_gamma2_connected()
self.compute_gamma2_k0()
self.compute_gamma2_k1()
self.compute_kappa_k0()
def update_power_spectrum(self,k,p):
'''
Same as the one in cleft_fftw but also do the velocities.
'''
super(KEVelocityMoments,self).update_power_spectrum(k,p)
self.setup_onedot()
self.setup_twodots()
self.setup_threedots()
def setup_onedot(self):
'''
Create quantities linear in f. All quantities are with f = 1, since converting back is trivial.
'''
self.Xdot = self.Xlin; self.sigmadot = self.Xdot[-1]
self.Ydot = self.Ylin
self.Vdot = 4./3 * self.Vloop #these are only the symmetrized version since all we need...
self.Tdot = 4./3 * self.Tloop # is k_i k_j k_k W_{ijk}
self.Udot = self.Ulin
self.Uloopdot = 3 * self.U3
self.U11dot = 2 * self.U11
self.U20dot = 2 * self.U20
# some one loop terms have to be explicitly set to zero
if self.one_loop:
self.Xloopdot = (4 * self.qf.Xloop13 + 2 * self.qf.Xloop22) * self.one_loop; self.sigmaloopdot = self.Xloopdot[-1]
self.Yloopdot = (4 * self.qf.Yloop13 + 2 * self.qf.Yloop22) * self.one_loop
self.X10dot = 1.5 * self.X10; self.sigma10dot = self.X10dot[-1]
self.Y10dot = 1.5 * self.Y10
else:
self.Xloopdot = 0; self.sigmaloopdot = 0
self.Yloopdot = 0
self.X10dot = 0; self.sigma10dot = 0
self.Y10dot = 0
if self.shear or self.third_order:
self.Us2dot = 2 * self.Us2
self.V12dot = self.V
self.Xs2dot = self.Xs2; self.sigmas2dot = self.Xs2dot[-1]
self.Ys2dot = self.Ys2
if self.third_order:
self.Ub3dot = self.Ub3
def setup_twodots(self):
'''
Same as onedot but now for those quadratic in f.
'''
self.Xddot = self.Xlin; self.sigmaddot = self.Xddot[-1]
self.Yddot = self.Ylin
# Here we will need two forms, one symmetrized:
self.Vddot = 5./3 * self.Vloop #these are only the symmetrized version since all we need...
self.Tddot = 5./3 * self.Tloop # is k_i k_j k_k W_{ijk}
# Explicitly set certain terms to zero if not one loop
if self.one_loop:
self.Xloopddot = (4 * self.qf.Xloop22 + 6 * self.qf.Xloop13) * self.one_loop; self.sigmaloopddot = self.Xloopddot[-1]
self.Yloopddot = (4 * self.qf.Yloop22 + 6 * self.qf.Yloop13) * self.one_loop
self.X10ddot = 2 * self.X10; self.sigma10ddot = self.X10ddot[-1]
self.Y10ddot = 2 * self.Y10
# and the other from k_i \delta_{jk} \ddot{W}_{ijk}
self.kdelta_Wddot = (18 * self.qf.V1loop112 + 7 * self.qf.V3loop112 + 5 * self.qf.Tloop112) * self.one_loop
else:
self.Xloopddot = 0; self.sigmaloopddot = 0
self.Yloopddot = 0
self.X10ddot = 0; self.sigma10ddot = 0
self.Y10ddot = 0
self.kdelta_Wddot = 0
if self.shear or self.third_order:
self.Xs2ddot = self.Xs2; self.sigmas2ddot = self.Xs2ddot[-1]
self.Ys2ddot = self.Ys2
def setup_threedots(self):
self.Vdddot = 2 * self.Vloop
self.Tdddot = 2 * self.Tloop
def compute_v_linear(self):
self.v_linear = np.zeros( (self.num_vel_components, self.N) )
self.v_linear[0,:] = (- 2 * self.pint )/self.kint
self.v_linear[1,:] = (- 2 * self.pint )/self.kint
def compute_v_connected(self):
self.v_connected = np.zeros( (self.num_vel_components, self.N) )
self.v_connected[0,:] = (- 2 * (2*9./98*self.qf.Q1 + 4*5./21*self.qf.R1) - 12./7*(self.qf.Q2 + 2*self.qf.R2) )/self.kint
self.v_connected[1,:] = (- 3 * (12*self.qf.R2 + 6*self.qf.Q5 + 6*self.qf.R1)/7 - 3*(10./21*self.qf.R1))/self.kint
self.v_connected[2,:] = - 12./7 * (self.qf.R1 + self.qf.R2) / self.kint
self.v_connected[3,:] = - 6./7 * self.qf.Q8 / self.kint
if self.shear or self.third_order:
self.v_connected[5,:] = - 2 * 2 * 1./7 * self.qf.Qs2 / self.kint
if self.third_order:
self.v_connected[7,:] = -2 * self.qf.Rb3 * self.pint / self.kint
def compute_v_k0(self):
self.v_k0 = np.zeros( (self.num_vel_components, self.N) )
ret = np.zeros(self.num_vel_components)
bias_integrands = np.zeros( (self.num_vel_components,self.N) )
for l in range(2):
mu1fac = (l == 1)
bias_integrands[4,:] = mu1fac * (2*self.corlin*self.Udot)
if self.shear or self.third_order:
bias_integrands[6,:] = mu1fac * (2*self.V12dot)
if l >= 0:
bias_integrands -= bias_integrands[:,-1][:,None]
# do FFTLog
ktemps, bias_ffts = self.sph_v.sph(l, bias_integrands)
self.v_k0 += 4 * np.pi * interp1d(ktemps, bias_ffts, bounds_error=False)(self.kint)
def compute_v_k1(self):
self.v_k1 = np.zeros( (self.num_vel_components, self.N) )
ret = np.zeros(self.num_vel_components)
bias_integrands = np.zeros( (self.num_vel_components,self.N) )
for l in [0,2]:
mu0fac = (l == 0)
mu1fac = (l == 1)
mu2fac = 1./3 * (l==0) - 2./3*(l==2)
bias_integrands[2,:] = mu0fac * ( self.corlin*self.Xdot ) + \
mu2fac * (2*self.Ulin*self.Udot + self.corlin*self.Ydot)
bias_integrands[3,:] = mu2fac * (2*self.Ulin*self.Udot)
if self.shear or self.third_order:
bias_integrands[5,:] = mu0fac * (2*self.Xs2dot) + mu2fac * (2*self.Ys2dot)
#bias_integrands[9,:] = mu0fac * self.zeta
if l >= 0:
bias_integrands -= bias_integrands[:,-1][:,None]
# do FFTLog
ktemps, bias_ffts = self.sph_v.sph(l, bias_integrands)
self.v_k1 += 4 * np.pi * interp1d(ktemps, bias_ffts, bounds_error=False)(self.kint)
def compute_v_k2(self):
self.v_k2 = np.zeros( (self.num_vel_components, self.N) )
ret = np.zeros(self.num_vel_components)
bias_integrands = np.zeros( (self.num_vel_components,self.N) )
for l in range(4):
mu0fac = (l == 0)
mu1fac = (l == 1)
mu2fac = 1./3 * (l==0) - 2./3*(l==2)
mu3fac = 0.6 * (l==1) - 0.4 * (l==3)
bias_integrands[1,:] = mu1fac * ( -2*self.Ulin*self.Xdot - self.Udot*self.Xlin ) + \
mu3fac * ( -2*self.Ulin*self.Ydot - self.Udot*self.Ylin )
#if self.shear or self.third_order:
#bias_integrands[8,:] = mu0fac * self.chi
#bias_integrands[9,:] = mu0fac * self.zeta
if l >= 0:
bias_integrands -= bias_integrands[:,-1][:,None]
# do FFTLog
ktemps, bias_ffts = self.sph_v.sph(l, bias_integrands)
self.v_k2 += 4 * np.pi * interp1d(ktemps, bias_ffts, bounds_error=False)(self.kint)
def compute_v_k3(self):
self.v_k3 = np.zeros( (self.num_vel_components, self.N) )
ret = np.zeros(self.num_vel_components)
bias_integrands = np.zeros( (self.num_vel_components,self.N) )
for l in range(self.jn):
mu0fac = (l == 0)
mu1fac = (l == 1)
mu2fac = 1./3 * (l==0) - 2./3*(l==2)
mu3fac = 0.6 * (l==1) - 0.4 * (l==3)
mu4fac = 0.2 * (l==0) - 4./7*(l==2) + 8./35*(l==4)
bias_integrands[0,:] = mu0fac * ( - 0.5*self.Xdot*self.Xlin ) + \
mu2fac * ( - 0.5*(self.Xdot*self.Ylin+self.Ydot*self.Xlin) ) + \
mu4fac * ( - 0.5*self.Ylin*self.Ydot )
#if self.shear or self.third_order:
#bias_integrands[8,:] = mu0fac * self.chi
#bias_integrands[9,:] = mu0fac * self.zeta
if l >= 0:
bias_integrands -= bias_integrands[:,-1][:,None]
# do FFTLog
ktemps, bias_ffts = self.sph_v.sph(l, bias_integrands)
self.v_k3 += 4 * np.pi * interp1d(ktemps, bias_ffts, bounds_error=False)(self.kint)
def compute_spar_linear(self):
self.spar_linear= np.zeros( (self.num_spar_components, self.N) )
self.spar_linear[0,:] = (-2 * self.pint)/self.kint**2
def compute_spar_connected(self):
self.spar_connected = np.zeros( (self.num_spar_components, self.N) )
self.spar_connected[0,:] = (-2*(4*9./98*self.qf.Q1 + 6*5./21*self.qf.R1) - 6*(5./3*3./7*(self.qf.Q2+2*self.qf.R2)))/self.kint**2
self.spar_connected[1,:] = (-2 * 2 * (12./7*self.qf.R2 + 6./7*self.qf.Q5 + 6./7*self.qf.R1))/self.kint**2
def compute_spar_k0(self):
self.spar_k0 = np.zeros( (self.num_spar_components, self.N) )
bias_integrands = np.zeros( (self.num_spar_components,self.N) )
for l in range(3):
mu0fac = (l == 0)
mu1fac = (l == 1)
mu2fac = 1./3 * (l==0) - 2./3*(l==2)
bias_integrands[2,:] = mu0fac * (self.corlin*self.Xddot) + mu2fac * (self.corlin*self.Yddot + 2*self.Udot**2)
bias_integrands[3,:] = mu2fac * (2*self.Udot**2)
if self.shear or self.third_order:
bias_integrands[4,:] = mu0fac * (2*self.Xs2ddot) + mu2fac * (2*self.Ys2ddot)
if l >= 0:
bias_integrands -= bias_integrands[:,-1][:,None]
# do FFTLog
ktemps, bias_ffts = self.sph_spar.sph(l, bias_integrands)
self.spar_k0 += 4 * np.pi * interp1d(ktemps, bias_ffts, bounds_error=False)(self.kint)
def compute_spar_k1(self):
self.spar_k1 = np.zeros( (self.num_spar_components, self.N) )
bias_integrands = np.zeros( (self.num_spar_components,self.N) )
for l in range(4):
mu0fac = (l == 0)
mu1fac = (l == 1)
mu2fac = 1./3 * (l==0) - 2./3*(l==2)
mu3fac = 0.6 * (l==1) - 0.4 * (l==3)
mu4fac = 0.2 * (l==0) - 4./7*(l==2) + 8./35*(l==4)
bias_integrands[1,:] = mu1fac * (-2*(self.Ulin*self.Xddot + 2*self.Udot*self.Xdot) ) + \
mu3fac * (-2*(self.Ulin*self.Yddot + 2*self.Udot*self.Ydot) )
#if self.shear or self.third_order:
#bias_integrands[8,:] = mu0fac * self.chi
#bias_integrands[9,:] = mu0fac * self.zeta
if l >= 0:
bias_integrands -= bias_integrands[:,-1][:,None]
# do FFTLog
ktemps, bias_ffts = self.sph_spar.sph(l, bias_integrands)
self.spar_k1 += 4 * np.pi * interp1d(ktemps, bias_ffts, bounds_error=False)(self.kint)
def compute_spar_k2(self):
self.spar_k2 = np.zeros( (self.num_spar_components, self.N) )
bias_integrands = np.zeros( (self.num_spar_components,self.N) )
for l in range(self.jn):
mu0fac = (l == 0)
mu1fac = (l == 1)
mu2fac = 1./3 * (l==0) - 2./3*(l==2)
mu3fac = 0.6 * (l==1) - 0.4 * (l==3)
mu4fac = 0.2 * (l==0) - 4./7*(l==2) + 8./35*(l==4)
bias_integrands[0,:] = mu0fac * (-self.Xdot**2 - 0.5*self.Xddot*self.Xlin) + \
mu2fac * (- 2*self.Xdot*self.Ydot - 0.5*(self.Xddot*self.Ylin + self.Yddot*self.Xlin)) + \
mu4fac * (-self.Ydot**2 - 0.5*self.Yddot*self.Ylin)
#if self.shear or self.third_order:
#bias_integrands[8,:] = mu0fac * self.chi
#bias_integrands[9,:] = mu0fac * self.zeta
if l >= 0:
bias_integrands -= bias_integrands[:,-1][:,None]
# do FFTLog
ktemps, bias_ffts = self.sph_spar.sph(l, bias_integrands)
self.spar_k2 += 4 * np.pi * interp1d(ktemps, bias_ffts, bounds_error=False)(self.kint)
def compute_strace_linear(self):
self.strace_linear = np.zeros( (self.num_spar_components, self.N) )
self.strace_linear[0,:] = (-2 * self.pint )/self.kint**2
def compute_strace_connected(self):
self.strace_connected = np.zeros( (self.num_spar_components, self.N) )
self.strace_connected[0,:] = (- 2*(4*9./98*self.qf.Q1 + 6*5./21*self.qf.R1) \
+ 6./7*(self.qf.Q1-5*self.qf.Q2+4*self.qf.R1-10*self.qf.R2))/self.kint**2
self.strace_connected[1,:] = (-4/self.kint**2* 3./7* (4*self.qf.R2+2*self.qf.Q5) )
def compute_strace_k0(self):
self.strace_k0 = np.zeros( (self.num_strace_components, self.N) )
bias_integrands = np.zeros( (self.num_strace_components,self.N) )
for l in range(self.jn):
mu0fac = (l == 0)
bias_integrands[2,:] = mu0fac * (self.corlin*(3*self.Xddot + self.Yddot) + 2*self.Udot**2)
bias_integrands[3,:] = mu0fac * (2 * self.Udot**2)
if self.shear or self.third_order:
bias_integrands[4,:] = mu0fac * ( 2*(3*self.Xs2ddot + self.Ys2ddot) )
if l >= 0:
bias_integrands -= bias_integrands[:,-1][:,None]
# do FFTLog
ktemps, bias_ffts = self.sph_strace.sph(l, bias_integrands)
self.strace_k0 += 4 * np.pi * interp1d(ktemps, bias_ffts, bounds_error=False)(self.kint)
def compute_strace_k1(self):
self.strace_k1 = np.zeros( (self.num_strace_components, self.N) )
bias_integrands = np.zeros( (self.num_strace_components,self.N) )
for l in range(self.jn):
mu0fac = (l == 0)
mu1fac = (l == 1)
bias_integrands[1,:] = mu1fac * (-2*self.Ulin*(3*self.Xddot+self.Yddot) - 4*self.Udot*(self.Xdot+self.Ydot) )
if l >= 0:
bias_integrands -= bias_integrands[:,-1][:,None]
# do FFTLog
ktemps, bias_ffts = self.sph_strace.sph(l, bias_integrands)
self.strace_k1 += 4 * np.pi * interp1d(ktemps, bias_ffts, bounds_error=False)(self.kint)
def compute_strace_k2(self):
self.strace_k2 = np.zeros( (self.num_strace_components, self.N) )
bias_integrands = np.zeros( (self.num_strace_components,self.N) )
for l in range(self.jn):
mu0fac = (l == 0)
mu1fac = (l == 1)
mu2fac = 1./3 * (l==0) - 2./3*(l==2)
bias_integrands[0,:] = mu0fac * ( (3*self.Xddot + self.Yddot)*(- 0.5*self.Xlin) - self.Xdot**2) + \
mu2fac * ((3*self.Xddot + self.Yddot)*(- 0.5*self.Ylin) - (self.Ydot**2+2*self.Xdot*self.Ydot))
if l >= 0:
bias_integrands -= bias_integrands[:,-1][:,None]
# do FFTLog
ktemps, bias_ffts = self.sph_strace.sph(l, bias_integrands)
self.strace_k2 += 4 * np.pi * interp1d(ktemps, bias_ffts, bounds_error=False)(self.kint)
def compute_gamma1_connected(self):
self.gamma1_connected = np.zeros( (self.num_gamma_components, self.N) )
self.gamma1_connected[0,:] = 36./7 * (self.qf.Q2 + 2*self.qf.R2) / self.kint**3
def compute_gamma1_k0(self):
self.gamma1_k0 = np.zeros( (self.num_gamma_components, self.N) )
bias_integrands = np.zeros( (self.num_gamma_components,self.N) )
for l in range(self.jn):
mu1fac = (l == 1)
mu3fac = 0.6 * (l==1) - 0.4 * (l==3)
bias_integrands[1,:] = mu1fac * (6*self.Udot*self.Xddot) + mu3fac * (6*self.Udot*self.Yddot)
if l >= 0:
bias_integrands -= bias_integrands[:,-1][:,None]
# do FFTLog
ktemps, bias_ffts = self.sph_gamma1.sph(l, bias_integrands)
self.gamma1_k0 += 4 * np.pi * interp1d(ktemps, bias_ffts, bounds_error=False)(self.kint)
def compute_gamma1_k1(self):
self.gamma1_k1 = np.zeros( (self.num_gamma_components, self.N) )
bias_integrands = np.zeros( (self.num_gamma_components,self.N) )
for l in range(self.jn):
mu0fac = (l == 0)
mu2fac = 1./3 * (l==0) - 2./3*(l==2)
mu4fac = 0.2 * (l==0) - 4./7*(l==2) + 8./35*(l==4)
bias_integrands[0,:] = mu0fac * (3*self.Xdot*self.Xddot) + \
mu2fac * (3*(self.Xdot*self.Yddot+self.Ydot*self.Xddot)) + \
mu4fac * (3*self.Ydot*self.Yddot)
if l >= 0:
bias_integrands -= bias_integrands[:,-1][:,None]
# do FFTLog
ktemps, bias_ffts = self.sph_gamma1.sph(l, bias_integrands)
self.gamma1_k1 += 4 * np.pi * interp1d(ktemps, bias_ffts, bounds_error=False)(self.kint)
def compute_gamma2_connected(self):
self.gamma2_connected = np.zeros( (self.num_gamma_components, self.N) )
self.gamma2_connected[0,:] = - 12./7 * (2*self.qf.R1 - 6*self.qf.R2 + self.qf.Q1 - 3*self.qf.Q2)/self.kint**3
def compute_gamma2_k0(self):
self.gamma2_k0 = np.zeros( (self.num_gamma_components, self.N) )
bias_integrands = np.zeros( (self.num_gamma_components,self.N) )
for l in range(self.jn):
mu1fac = (l == 1)
mu3fac = 0.6 * (l==1) - 0.4 * (l==3)
bias_integrands[1,:] = mu1fac * ( 2*self.Udot*(5*self.Xddot + 3*self.Yddot) )
if l >= 0:
bias_integrands -= bias_integrands[:,-1][:,None]
# do FFTLog
ktemps, bias_ffts = self.sph_gamma2.sph(l, bias_integrands)
self.gamma2_k0 += 4 * np.pi * interp1d(ktemps, bias_ffts, bounds_error=False)(self.kint)
def compute_gamma2_k1(self):
self.gamma2_k1 = np.zeros( (self.num_gamma_components, self.N) )
bias_integrands = np.zeros( (self.num_gamma_components,self.N) )
for l in range(self.jn):
mu0fac = (l == 0)
mu2fac = 1./3 * (l==0) - 2./3*(l==2)
mu4fac = 0.2 * (l==0) - 4./7*(l==2) + 8./35*(l==4)
bias_integrands[0,:] = mu0fac * ( (5*self.Xdot*self.Xddot+self.Xdot*self.Yddot) ) + \
mu2fac * ( (2*self.Xdot*self.Yddot+self.Ydot*(5*self.Xddot+3*self.Yddot)) )
if l >= 0:
bias_integrands -= bias_integrands[:,-1][:,None]
# do FFTLog
ktemps, bias_ffts = self.sph_gamma2.sph(l, bias_integrands)
self.gamma2_k1 += 4 * np.pi * interp1d(ktemps, bias_ffts, bounds_error=False)(self.kint)
def compute_kappa_k0(self):
self.kappa_k0 = np.zeros( (self.num_kappa_components, self.N) )
bias_integrands = np.zeros( (self.num_kappa_components,self.N) )
for l in range(self.jn):
mu0fac = (l == 0)
mu2fac = 1./3 * (l==0) - 2./3*(l==2)
mu4fac = 0.2 * (l==0) - 4./7*(l==2) + 8./35*(l==4)
bias_integrands[0,:] = mu0fac * (15 * self.Xddot**2 + 10 * self.Xddot*self.Yddot + 3 * self.Yddot**2)
bias_integrands[1,:] = mu0fac * (5 * self.Xddot**2 + self.Xddot*self.Yddot) + \
mu2fac * (7*self.Xddot*self.Yddot + 3*self.Yddot**2)
bias_integrands[2,:] = mu0fac * (3 * self.Xddot**2) + mu2fac * (6*self.Xddot*self.Yddot) + mu4fac * (3*self.Yddot**2)
if l >= 0:
bias_integrands -= bias_integrands[:,-1][:,None]
# do FFTLog
ktemps, bias_ffts = self.sph_kappa.sph(l, bias_integrands)
self.kappa_k0 += 4 * np.pi * interp1d(ktemps, bias_ffts, bounds_error=False)(self.kint)
def make_table(self, kmin = 1e-3, kmax = 3, nk = 100, func_name = 'power', linear_theory=False):
'''
Make a table of the different terms of P(k), v(k), sigma(k) between a given
'kmin' and 'kmax', for 'nk' values equally spaced in log10 of k.
This is the most time-consuming part of the code.
'''
pktable = np.zeros([nk, self.num_power_components+1]) # one column for k; the last column of the power table now holds the counterterm
kv = np.logspace(np.log10(kmin), np.log10(kmax), nk)
pktable[:, 0] = kv[:]
if not linear_theory:
if func_name == 'power':
components = [ (1, self.p_linear+self.p_connected + self.p_k0), (self.kint, self.p_k1),\
(self.kint**2, self.p_k2), (self.kint**3, self.p_k3), (self.kint**4, self.p_k4) ]
iis = np.arange(1,1+self.num_power_components)
elif func_name == 'velocity':
components = [ (1, self.v_linear+self.v_connected+self.v_k0), (self.kint, self.v_k1), (self.kint**2, self.v_k2), (self.kint**3, self.v_k3) ]
iis = self.vii
elif func_name == 'spar':
components = [ (1, self.spar_linear+self.spar_connected+self.spar_k0),(self.kint, self.spar_k1),(self.kint**2, self.spar_k2) ]
iis = self.sparii
elif func_name == 'strace':
components = [ (1, self.strace_linear+self.strace_connected+self.strace_k0),(self.kint, self.strace_k1),(self.kint**2, self.strace_k2) ]
iis = self.straceii
elif func_name == 'gamma1':
components = [ (1, self.gamma1_connected+self.gamma1_k0),(self.kint, self.gamma1_k1) ]
iis = self.gii
elif func_name == 'gamma2':
components = [ (1, self.gamma2_connected+self.gamma2_k0),(self.kint, self.gamma2_k1) ]
iis = self.gii
elif func_name == 'kappa':
components = [ (1, self.kappa_k0) ]
iis = self.kii
else:
if func_name == 'power':
components = [ (1, self.p_linear) ]
iis = np.arange(1,1+self.num_power_components)
elif func_name == 'velocity':
components = [ (1, self.v_linear) ]
iis = self.vii
elif func_name == 'spar':
components = [ (1, self.spar_linear) ]
iis = self.sparii
elif func_name == 'strace':
components = [ (1, self.strace_linear) ]
iis = self.straceii
elif func_name == 'gamma1':
return pktable
elif func_name == 'gamma2':
return pktable
elif func_name == 'kappa':
return pktable
# sum the components:
ptable = 0
for (kpow, comp) in components:
ptable += kpow * comp
# interpolate onto range of interest
for jj in range(len(iis)):
pktable[:,iis[jj]] = interp1d(self.kint, ptable[jj,:])(kv)
return pktable
def make_ptable(self, kmin = 1e-3, kmax = 3, nk = 100):
self.pktable_linear = self.make_table(kmin=kmin,kmax=kmax,nk=nk,func_name='power',linear_theory=True)
self.pktable = self.make_table(kmin=kmin,kmax=kmax,nk=nk,func_name='power')
def make_vtable(self, kmin = 1e-3, kmax = 3, nk = 100):
self.vktable_linear = self.make_table(kmin=kmin,kmax=kmax,nk=nk,func_name='velocity',linear_theory=True)
self.vktable = self.make_table(kmin=kmin,kmax=kmax,nk=nk,func_name='velocity')
def make_spartable(self, kmin = 1e-3, kmax = 3, nk = 100):
self.sparktable_linear = self.make_table(kmin=kmin,kmax=kmax,nk=nk,func_name='spar',linear_theory=True)
self.sparktable = self.make_table(kmin=kmin,kmax=kmax,nk=nk,func_name='spar')
def make_stracetable(self, kmin = 1e-3, kmax = 3, nk = 100):
self.stracektable_linear = self.make_table(kmin=kmin,kmax=kmax,nk=nk,func_name='strace',linear_theory=True)
self.stracektable = self.make_table(kmin=kmin,kmax=kmax,nk=nk,func_name='strace')
def make_gamma1table(self, kmin = 1e-3, kmax = 3, nk = 100):
self.gamma1ktable_linear = self.make_table(kmin=kmin,kmax=kmax,nk=nk,func_name='gamma1',linear_theory=True)
self.gamma1ktable = self.make_table(kmin=kmin,kmax=kmax,nk=nk,func_name='gamma1')
def make_gamma2table(self, kmin = 1e-3, kmax = 3, nk = 100, linear_theory=True):
self.gamma2ktable_linear = self.make_table(kmin=kmin,kmax=kmax,nk=nk,func_name='gamma2', linear_theory=True)
self.gamma2ktable = self.make_table(kmin=kmin,kmax=kmax,nk=nk,func_name='gamma2')
def make_kappatable(self, kmin = 1e-3, kmax = 3, nk = 100):
self.kappaktable_linear = self.make_table(kmin=kmin,kmax=kmax,nk=nk,func_name='kappa',linear_theory=True)
self.kappaktable = self.make_table(kmin=kmin,kmax=kmax,nk=nk,func_name='kappa')
def convert_sigma_bases(self, basis='Legendre'):
'''
Function to convert Tr\sigma and \sigma_\par to the desired basis.
These are:
- Legendre
sigma = sigma_0 delta_ij + sigma_2 (3 k_i k_j - delta_ij)/2
- Polynomial
sigma = sigma_0 delta_ij + sigma_2 k_i k_j
- los (line of sight, note that sigma_0 = kpar and sigma_2 = kperp in this case)
sigma = sigma_0 k_i k_j + sigma_2 (delta_ij - k_i k_j)/2
'''
if self.sparktable is None or self.stracektable is None:
print("Error: Need to compute sigma before changing bases!")
return 0
kv = self.sparktable[:,0]
if basis == 'Legendre':
self.s0_linear = self.stracektable_linear / 3.
self.s2_linear = self.sparktable_linear - self.s0_linear
self.s0_linear[:,0] = kv; self.s2_linear[:,0] = kv
self.s0 = self.stracektable / 3.
self.s2 = self.sparktable - self.s0
self.s0[:,0] = kv; self.s2[:,0] = kv
if basis == 'Polynomial':
self.s0_linear = 0.5 * (self.stracektable_linear - self.sparktable_linear)
self.s2_linear = 0.5 * (3 * self.sparktable_linear - self.stracektable_linear)
self.s0_linear[:,0] = kv; self.s2_linear[:,0] = kv
self.s0 = 0.5 * (self.stracektable - self.sparktable)
self.s2 = 0.5 * (3 * self.sparktable - self.stracektable)
self.s0[:,0] = kv; self.s2[:,0] = kv
if basis == 'los':
self.s0_linear = self.sparktable_linear
self.s2_linear = self.stracektable_linear - self.sparktable_linear
self.s0_linear[:,0] = kv; self.s2_linear[:,0] = kv
self.s0 = self.sparktable
self.s2 = self.stracektable - self.sparktable
self.s0[:,0] = kv; self.s2[:,0] = kv
def convert_gamma_bases(self, basis='Polynomial'):
'''
Translates the contraction of gamma into the polynomial/legendre basis
given by Im[gamma] = g3 \hk_i \hk_j \hk_k + g1 (\hk_i \delta_{jk} + cyc. perms) / 3
'''
if self.gamma1ktable is None or self.gamma2ktable is None:
print("Error: Need to compute gamma before changing bases!")
return 0
kv = self.gamma1ktable[:,0]
# Polynomial basis
if basis == 'Polynomial':
self.g1 = 1.5 * self.gamma2ktable - 1.5 * self.gamma1ktable
self.g3 = 2.5 * self.gamma1ktable - 1.5 * self.gamma2ktable
if basis == 'Legendre':
self.g1 = 0.6 * self.gamma2ktable
self.g3 = 2.5 * self.gamma1ktable - 1.5 * self.gamma2ktable
self.g1[:,0] = kv; self.g3[:,0] = kv
def convert_kappa_bases(self, basis='Polynomial'):
'''
Translates the contraction of kappa into the polynomial basis
given by
kappa = kappa0 / 3 * (delta_ij delta_kl + perms)
+ kappa2 / 6 * (k_i k_j delta_kl + perms)
+ kappa4 * k_i k_j k_k k_l
'''
if self.kappaktable is None:
print("Error: Need to compute kappa before changing bases!")
return 0
self.k0 = 3./8 * (self.kappaktable[:,1] - 2*self.kappaktable[:,2] + self.kappaktable[:,3])
self.k2 = 3./4 * (-self.kappaktable[:,1] + 6*self.kappaktable[:,2] - 5*self.kappaktable[:,3])
self.k4 = 1./8 * (3*self.kappaktable[:,1] - 30*self.kappaktable[:,2] + 35*self.kappaktable[:,3])
|
sfschenREPO_NAMEvelocileptorsPATH_START.@velocileptors_extracted@velocileptors-master@velocileptors@EPT@velocity_moments_kexpanded_fftw.py@.PATH_END.py
|
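The `convert_sigma_bases` method above decomposes the second velocity moment into a Legendre basis via `s0 = Tr(sigma)/3` and `s2 = sigma_par - s0`. A minimal numpy sketch (the two input spectra here are made-up placeholders, not velocileptors output) checking that contracting the basis back recovers the inputs:

```python
import numpy as np

# Hypothetical stand-ins for Tr(sigma)(k) and sigma_par(k)
k = np.logspace(-3, 0, 8)
strace = 3.0 / (1.0 + k**2)
spar = 1.2 / (1.0 + k**2)

# Legendre basis: sigma_ij = s0 delta_ij + s2 (3 khat_i khat_j - delta_ij)/2
s0 = strace / 3.0
s2 = spar - s0

# Consistency checks: contracting with delta_ij and khat_i khat_j
# must return the trace and line-of-sight components we started from.
assert np.allclose(3 * s0, strace)
assert np.allclose(s0 + s2, spar)
```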
{
"filename": "scheduler_virtual.py",
"repo_name": "cosmo-ethz/hide",
"repo_path": "hide_extracted/hide-master/hide/strategy/scheduler_virtual.py",
"type": "Python"
}
|
# HIDE is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# HIDE is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with HIDE. If not, see <http://www.gnu.org/licenses/>.
'''
Created on May 4, 2016
author: jakeret
'''
from __future__ import print_function, division, absolute_import, unicode_literals
import importlib
from datetime import timedelta
from hide.strategy import scheduler
from hide.astro import gsm_point_src
from hide.utils import sphere
def replace_calibrations(schedule, obs):
for entry in schedule:
if not entry.is_survey():
src_name = "Virtual_%s"%entry.src
try:
source = gsm_point_src.SOURCES[src_name]
obs_time = 2 * 60 *60
date = entry.date + timedelta(seconds=obs_time / 2)
alt, az = sphere.radec_to_altaz(date, source.ra, source.dec, obs)
entry.az = az
entry.el = alt
except KeyError:
pass
def load_strategy(ctx):
"""
Creates a scanning strategy from a scheduler file.
:param ctx: The ctx instance with the path to the scheduler file
:returns strategy: A list of CoordSpec with the scanning strategy
"""
if ctx.params.scheduler_file == "default":
mod = importlib.import_module(ctx.params.instrument)
path = mod.get_schedule()
else:
path = ctx.params.scheduler_file
obs = sphere.get_observer(ctx)
schedule_entries = scheduler.parse_schedule(path, ctx.strategy_start)
replace_calibrations(schedule_entries, obs)
strategy, calibration_days = scheduler.process_schedule(schedule_entries,
ctx.params.strategy_step_size,
ctx.strategy_start,
ctx.strategy_end,
obs)
ctx.calibration = calibration_days
return strategy
|
cosmo-ethzREPO_NAMEhidePATH_START.@hide_extracted@hide-master@hide@strategy@scheduler_virtual.py@.PATH_END.py
|
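`replace_calibrations` above repoints non-survey schedule entries at a `Virtual_<src>` catalogue source, centring the pointing computation on the midpoint of a two-hour calibration block. A self-contained sketch of that lookup logic (`Entry` and `SOURCES` are hypothetical stand-ins for the real schedule-entry and `gsm_point_src` catalogue types):

```python
from datetime import datetime, timedelta
from collections import namedtuple

# Hypothetical schedule entry and virtual-source catalogue
Entry = namedtuple("Entry", ["src", "date"])
SOURCES = {"Virtual_CasA": ("23h23m", "+58d48m")}  # name -> (ra, dec)

def find_virtual_source(entry, obs_time=2 * 60 * 60):
    """Mimic the lookup in replace_calibrations: prefix the source name
    and centre the observation time on the calibration block."""
    src_name = "Virtual_%s" % entry.src
    try:
        coords = SOURCES[src_name]
    except KeyError:
        return None  # sources without a virtual counterpart are left untouched
    midpoint = entry.date + timedelta(seconds=obs_time / 2)
    return src_name, coords, midpoint

name, coords, mid = find_virtual_source(Entry("CasA", datetime(2016, 5, 4, 12, 0)))
assert name == "Virtual_CasA"
assert mid == datetime(2016, 5, 4, 13, 0)
```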
{
"filename": "_ambient.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/mesh3d/lighting/_ambient.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class AmbientValidator(_plotly_utils.basevalidators.NumberValidator):
def __init__(self, plotly_name="ambient", parent_name="mesh3d.lighting", **kwargs):
super(AmbientValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "calc"),
max=kwargs.pop("max", 1),
min=kwargs.pop("min", 0),
**kwargs,
)
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@mesh3d@lighting@_ambient.py@.PATH_END.py
|
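`AmbientValidator` is a thin configuration of plotly's `NumberValidator` with `min=0` and `max=1`. A rough, dependency-free sketch of the range check such a validator performs (illustrative only; this is not the real `_plotly_utils.basevalidators` implementation, which also handles coercion of arrays and None):

```python
# Hypothetical minimal bounded-number validator
class BoundedNumber:
    def __init__(self, name, min_val=0.0, max_val=1.0):
        self.name, self.min, self.max = name, min_val, max_val

    def validate_coerce(self, v):
        # Coerce to float, then enforce the [min, max] range
        v = float(v)
        if not (self.min <= v <= self.max):
            raise ValueError(
                f"{self.name} must be in [{self.min}, {self.max}], got {v}"
            )
        return v

amb = BoundedNumber("ambient")  # mirrors min=0, max=1 above
assert amb.validate_coerce(0.8) == 0.8
```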
{
"filename": "__init__.py",
"repo_name": "langchain-ai/langchain",
"repo_path": "langchain_extracted/langchain-master/libs/community/tests/unit_tests/evaluation/__init__.py",
"type": "Python"
}
|
langchain-aiREPO_NAMElangchainPATH_START.@langchain_extracted@langchain-master@libs@community@tests@unit_tests@evaluation@__init__.py@.PATH_END.py
|
|
{
"filename": "test_los_distribution.py",
"repo_name": "sibirrer/hierArc",
"repo_path": "hierArc_extracted/hierArc-main/test/test_Likelihood/test_los_distribution.py",
"type": "Python"
}
|
from hierarc.Sampling.Distributions.los_distributions import LOSDistribution
from scipy.stats import genextreme
import numpy as np
import numpy.testing as npt
import unittest
class TestLOSDistribution(object):
def setup_method(self):
pass
def test_gev(self):
xi = -0.1
mean_gev = 0.02
sigma_gev = np.exp(-5.46)
mean_gauss = 0.1
sigma_gauss = 0.2
kappa_ext_draw = genextreme.rvs(c=xi, loc=mean_gev, scale=sigma_gev, size=10000)
npt.assert_almost_equal(np.mean(kappa_ext_draw), mean_gev, decimal=2)
npt.assert_almost_equal(np.std(kappa_ext_draw), sigma_gev, decimal=2)
kappa_pdf, kappa_bin_edges = np.histogram(kappa_ext_draw, bins=100)
kappa_pdf = np.array(kappa_pdf, dtype=float) / np.sum(kappa_pdf)
los_distribution = ["GAUSSIAN", "GEV"]
kwargs_los = [
{"mean": mean_gauss, "sigma": sigma_gauss},
{"mean": mean_gev, "sigma": sigma_gev, "xi": xi},
]
# here we draw from the scipy function
dist_gev = LOSDistribution(
global_los_distribution=1,
los_distributions=los_distribution,
individual_distribution="PDF",
kwargs_individual={"pdf_array": kappa_pdf, "bin_edges": kappa_bin_edges},
)
kappa_dist_drawn = dist_gev.draw_los(kwargs_los, size=10000)
npt.assert_almost_equal(np.mean(kappa_dist_drawn), mean_gev, decimal=2)
npt.assert_almost_equal(np.std(kappa_dist_drawn), sigma_gev, decimal=2)
# here we draw from the distribution
dist_gev = LOSDistribution(
global_los_distribution=False,
los_distributions=los_distribution,
individual_distribution="PDF",
kwargs_individual={"pdf_array": kappa_pdf, "bin_edges": kappa_bin_edges},
)
kappa_dist_drawn = dist_gev.draw_los(kwargs_los, size=10000)
npt.assert_almost_equal(np.mean(kappa_dist_drawn), mean_gev, decimal=2)
npt.assert_almost_equal(np.std(kappa_dist_drawn), sigma_gev, decimal=2)
# draw from Gaussian
dist_gev = LOSDistribution(
global_los_distribution=0,
los_distributions=los_distribution,
individual_distribution="PDF",
kwargs_individual={"pdf_array": kappa_pdf, "bin_edges": kappa_bin_edges},
)
kappa_dist_drawn = dist_gev.draw_los(kwargs_los, size=10000)
npt.assert_almost_equal(np.mean(kappa_dist_drawn), mean_gauss, decimal=2)
npt.assert_almost_equal(np.std(kappa_dist_drawn), sigma_gauss, decimal=2)
def test_draw_bool(self):
xi = -0.1
mean_gev = 0.02
sigma_gev = np.exp(-5.46)
mean_gauss = 0.1
sigma_gauss = 0
kappa_ext_draw = genextreme.rvs(c=xi, loc=mean_gev, scale=sigma_gev, size=10000)
npt.assert_almost_equal(np.mean(kappa_ext_draw), mean_gev, decimal=2)
npt.assert_almost_equal(np.std(kappa_ext_draw), sigma_gev, decimal=2)
kappa_pdf, kappa_bin_edges = np.histogram(kappa_ext_draw, bins=100)
kappa_pdf = np.array(kappa_pdf, dtype=float) / np.sum(kappa_pdf)
los_distribution = ["GAUSSIAN", "GEV"]
kwargs_los = [
{"mean": mean_gauss, "sigma": sigma_gauss},
{"mean": mean_gev, "sigma": sigma_gev, "xi": xi},
]
dist = LOSDistribution(
global_los_distribution=1,
los_distributions=los_distribution,
individual_distribution="PDF",
kwargs_individual={"pdf_array": kappa_pdf, "bin_edges": kappa_bin_edges},
)
bool_draw = dist.draw_bool(kwargs_los)
assert bool_draw is True
dist = LOSDistribution(
global_los_distribution=0,
los_distributions=los_distribution,
individual_distribution="PDF",
kwargs_individual={"pdf_array": kappa_pdf, "bin_edges": kappa_bin_edges},
)
bool_draw = dist.draw_bool(kwargs_los)
assert bool_draw is False
dist = LOSDistribution(
global_los_distribution=False,
los_distributions=los_distribution,
individual_distribution="PDF",
kwargs_individual={"pdf_array": kappa_pdf, "bin_edges": kappa_bin_edges},
)
bool_draw = dist.draw_bool(kwargs_los)
assert bool_draw is True
class TestRaise(unittest.TestCase):
def test_raise(self):
with self.assertRaises(ValueError):
los = LOSDistribution(
individual_distribution=None,
kwargs_individual=None,
global_los_distribution=0,
los_distributions=["BAD"],
)
los.draw_los(kwargs_los=[{}])
with self.assertRaises(ValueError):
los = LOSDistribution(
individual_distribution="BAD",
kwargs_individual=None,
global_los_distribution=False,
los_distributions=None,
)
|
sibirrerREPO_NAMEhierArcPATH_START.@hierArc_extracted@hierArc-main@test@test_Likelihood@test_los_distribution.py@.PATH_END.py
|
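The tests above build a binned kappa PDF (`pdf_array`, `bin_edges`) via `np.histogram` and hand it to `LOSDistribution`. A standalone sketch, independent of hierArc, of how one can draw samples from such a binned PDF using only numpy: pick bins with the binned probabilities, then place each draw uniformly inside its bin.

```python
import numpy as np

rng = np.random.default_rng(42)

# Build a binned PDF from mock samples (mean 0.02, scatter 0.01)
samples = rng.normal(0.02, 0.01, size=100000)
pdf, edges = np.histogram(samples, bins=100)
pdf = pdf / pdf.sum()

# Draw bin indices weighted by the binned probabilities,
# then sample uniformly within each chosen bin.
idx = rng.choice(len(pdf), size=50000, p=pdf)
draws = rng.uniform(edges[idx], edges[idx + 1])

# The draws should reproduce the moments of the input distribution
assert abs(draws.mean() - 0.02) < 0.005
```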
{
"filename": "outputs.py",
"repo_name": "jordanflitter/21cmFirstCLASS",
"repo_path": "21cmFirstCLASS_extracted/21cmFirstCLASS-main/src/py21cmfast/outputs.py",
"type": "Python"
}
|
"""
Output class objects.
The classes provided by this module exist to simplify access to large datasets created within C.
Fundamentally, ownership of the data belongs to these classes, and the C functions merely accesses
this and fills it. The various boxes and lightcones associated with each output are available as
instance attributes. Along with the output data, each output object contains the various input
parameter objects necessary to define it.
.. warning:: These should not be instantiated or filled by the user, but always handled
as output objects from the various functions contained here. Only the data
within the objects should be accessed.
"""
import h5py
import numpy as np
import os
import warnings
from astropy import units
from astropy.cosmology import z_at_value
from cached_property import cached_property
from hashlib import md5
from pathlib import Path
from typing import Dict, List, Optional, Sequence, Tuple, Union
from . import __version__
from . import _utils as _ut
from ._cfg import config
from ._utils import OutputStruct as _BaseOutputStruct
from ._utils import _check_compatible_inputs
from .c_21cmfast import ffi, lib
from .inputs import AstroParams, CosmoParams, FlagOptions, UserParams, global_params
class _OutputStruct(_BaseOutputStruct):
_global_params = global_params
def __init__(self, *, user_params=None, cosmo_params=None, **kwargs):
self.cosmo_params = cosmo_params or CosmoParams()
self.user_params = user_params or UserParams()
super().__init__(**kwargs)
_ffi = ffi
class _OutputStructZ(_OutputStruct):
_inputs = _OutputStruct._inputs + ("redshift",)
class InitialConditions(_OutputStruct):
"""A class containing all initial conditions boxes."""
_c_compute_function = lib.ComputeInitialConditions
# The filter params indicates parameters to overlook when deciding if a cached box
# matches current parameters.
# It is useful for ignoring certain global parameters which may not apply to this
# step or its dependents.
_meta = False
_filter_params = _OutputStruct._filter_params + [
"ALPHA_UVB", # ionization
"EVOLVE_DENSITY_LINEARLY", # perturb
"SMOOTH_EVOLVED_DENSITY_FIELD", # perturb
"R_smooth_density", # perturb
"HII_ROUND_ERR", # ionization
"FIND_BUBBLE_ALGORITHM", # ib
"N_POISSON", # ib
"T_USE_VELOCITIES", # bt
"MAX_DVDR", # bt
"DELTA_R_HII_FACTOR", # ib
"HII_FILTER", # ib
"INITIAL_REDSHIFT", # pf
"HEAT_FILTER", # st
"CLUMPING_FACTOR", # st
"Z_HEAT_MAX", # st
"R_XLy_MAX", # st
"NUM_FILTER_STEPS_FOR_Ts", # ts
"ZPRIME_STEP_FACTOR", # ts
"TK_at_Z_HEAT_MAX", # ts
"XION_at_Z_HEAT_MAX", # ts
"Pop", # ib
"Pop2_ion", # ib
"Pop3_ion", # ib
"NU_X_BAND_MAX", # st
"NU_X_MAX", # ib
]
def prepare_for_perturb(self, flag_options: FlagOptions, force: bool = False):
"""Ensure the ICs have all the boxes loaded for perturb, but no extra."""
keep = ["hires_density"]
if not self.user_params.PERTURB_ON_HIGH_RES:
keep.append("lowres_density")
keep.append("lowres_vx")
keep.append("lowres_vy")
keep.append("lowres_vz")
if self.user_params.USE_2LPT:
keep.append("lowres_vx_2LPT")
keep.append("lowres_vy_2LPT")
keep.append("lowres_vz_2LPT")
if flag_options.USE_HALO_FIELD:
keep.append("hires_density")
else:
keep.append("hires_vx")
keep.append("hires_vy")
keep.append("hires_vz")
if self.user_params.USE_2LPT:
keep.append("hires_vx_2LPT")
keep.append("hires_vy_2LPT")
keep.append("hires_vz_2LPT")
if self.user_params.USE_RELATIVE_VELOCITIES:
keep.append("lowres_vcb")
self.prepare(keep=keep, force=force)
def prepare_for_spin_temp(self, flag_options: FlagOptions, force: bool = False):
"""Ensure ICs have all boxes required for spin_temp, and no more."""
keep = []
if self.user_params.USE_RELATIVE_VELOCITIES:
keep.append("lowres_vcb")
self.prepare(keep=keep, force=force)
def _get_box_structures(self) -> Dict[str, Union[Dict, Tuple[int]]]:
shape = (self.user_params.HII_DIM,) * 3
hires_shape = (self.user_params.DIM,) * 3
out = {
"lowres_density": shape,
"lowres_vx": shape,
"lowres_vy": shape,
"lowres_vz": shape,
"hires_density": hires_shape,
"hires_vx": hires_shape,
"hires_vy": hires_shape,
"hires_vz": hires_shape,
}
if self.user_params.USE_2LPT:
out.update(
{
"lowres_vx_2LPT": shape,
"lowres_vy_2LPT": shape,
"lowres_vz_2LPT": shape,
"hires_vx_2LPT": hires_shape,
"hires_vy_2LPT": hires_shape,
"hires_vz_2LPT": hires_shape,
}
)
if self.user_params.USE_RELATIVE_VELOCITIES:
out.update({"lowres_vcb": shape})
# JordanFlitter: added new SDM boxes to the InitialConditions structure
if (self.user_params.SCATTERING_DM and self.user_params.USE_SDM_FLUCTS):
out.update({"lowres_xe_zhigh": shape})
out.update({"lowres_Tk_zhigh": shape})
out.update({"lowres_Tchi_zhigh": shape})
out.update({"lowres_V_chi_b_zhigh": shape}) # V_chi_b at Z_HIGH_MAX
return out
def get_required_input_arrays(self, input_box: _BaseOutputStruct) -> List[str]:
"""Return all input arrays required to compute this object."""
return []
def compute(self, hooks: dict):
"""Compute the function."""
return self._compute(
self.random_seed,
self.user_params,
self.cosmo_params,
hooks=hooks,
)
class PerturbedField(_OutputStructZ):
"""A class containing all perturbed field boxes."""
_c_compute_function = lib.ComputePerturbField
_meta = False
_filter_params = _OutputStruct._filter_params + [
"ALPHA_UVB", # ionization
"HII_ROUND_ERR", # ionization
"FIND_BUBBLE_ALGORITHM", # ib
"N_POISSON", # ib
"T_USE_VELOCITIES", # bt
"MAX_DVDR", # bt
"DELTA_R_HII_FACTOR", # ib
"HII_FILTER", # ib
"HEAT_FILTER", # st
"CLUMPING_FACTOR", # st
"Z_HEAT_MAX", # st
"R_XLy_MAX", # st
"NUM_FILTER_STEPS_FOR_Ts", # ts
"ZPRIME_STEP_FACTOR", # ts
"TK_at_Z_HEAT_MAX", # ts
"XION_at_Z_HEAT_MAX", # ts
"Pop", # ib
"Pop2_ion", # ib
"Pop3_ion", # ib
"NU_X_BAND_MAX", # st
"NU_X_MAX", # ib
]
def _get_box_structures(self) -> Dict[str, Union[Dict, Tuple[int]]]:
shape = (self.user_params.HII_DIM,) * 3
        out = {
            "density": shape,
            "velocity": shape,
        }
# JordanFlitter: added new baryons (and SDM) density box to the PerturbedField structure
if (self.user_params.EVOLVE_BARYONS):
out.update({"baryons_density": shape})
if (self.user_params.SCATTERING_DM):
out.update({"SDM_density": shape})
return out
def get_required_input_arrays(self, input_box: _BaseOutputStruct) -> List[str]:
"""Return all input arrays required to compute this object."""
required = []
if not isinstance(input_box, InitialConditions):
raise ValueError(
f"{type(input_box)} is not an input required for PerturbedField!"
)
# Always require hires_density
required += ["hires_density"]
if self.user_params.PERTURB_ON_HIGH_RES:
required += ["hires_vx", "hires_vy", "hires_vz"]
if self.user_params.USE_2LPT:
required += ["hires_vx_2LPT", "hires_vy_2LPT", "hires_vz_2LPT"]
else:
required += ["lowres_density", "lowres_vx", "lowres_vy", "lowres_vz"]
if self.user_params.USE_2LPT:
required += [
"lowres_vx_2LPT",
"lowres_vy_2LPT",
"lowres_vz_2LPT",
]
if self.user_params.USE_RELATIVE_VELOCITIES:
required.append("lowres_vcb")
return required
def compute(self, *, ics: InitialConditions, hooks: dict):
"""Compute the function."""
return self._compute(
self.redshift,
self.user_params,
self.cosmo_params,
ics,
hooks=hooks,
)
class _AllParamsBox(_OutputStructZ):
_meta = True
_inputs = _OutputStructZ._inputs + ("flag_options", "astro_params")
_filter_params = _OutputStruct._filter_params + [
"T_USE_VELOCITIES", # bt
"MAX_DVDR", # bt
]
def __init__(
self,
*,
astro_params: Optional[AstroParams] = None,
flag_options: Optional[FlagOptions] = None,
first_box: bool = False,
**kwargs,
):
self.flag_options = flag_options or FlagOptions()
self.astro_params = astro_params or AstroParams(
INHOMO_RECO=self.flag_options.INHOMO_RECO
)
self.log10_Mturnover_ave = 0.0
self.log10_Mturnover_MINI_ave = 0.0
self.first_box = first_box
if first_box:
self.mean_f_coll = 0.0
self.mean_f_coll_MINI = 0.0
super().__init__(**kwargs)
class HaloField(_AllParamsBox):
"""A class containing all fields related to halos."""
_c_based_pointers = (
"halo_masses",
"halo_coords",
"mass_bins",
"fgtrm",
"sqrt_dfgtrm",
"dndlm",
"sqrtdn_dlm",
)
_c_compute_function = lib.ComputeHaloField
def _get_box_structures(self) -> Dict[str, Union[Dict, Tuple[int]]]:
return {}
def _c_shape(self, cstruct):
return {
"halo_masses": (cstruct.n_halos,),
"halo_coords": (cstruct.n_halos, 3),
"mass_bins": (cstruct.n_mass_bins,),
"fgtrm": (cstruct.n_mass_bins,),
"sqrt_dfgtrm": (cstruct.n_mass_bins,),
"dndlm": (cstruct.n_mass_bins,),
"sqrtdn_dlm": (cstruct.n_mass_bins,),
}
def get_required_input_arrays(self, input_box: _BaseOutputStruct) -> List[str]:
"""Return all input arrays required to compute this object."""
if isinstance(input_box, InitialConditions):
return ["hires_density"]
else:
raise ValueError(
f"{type(input_box)} is not an input required for HaloField!"
)
def compute(self, *, ics: InitialConditions, hooks: dict):
"""Compute the function."""
return self._compute(
self.redshift,
self.user_params,
self.cosmo_params,
self.astro_params,
self.flag_options,
ics,
hooks=hooks,
)
class PerturbHaloField(_AllParamsBox):
"""A class containing all fields related to halos."""
_c_compute_function = lib.ComputePerturbHaloField
_c_based_pointers = ("halo_masses", "halo_coords")
def _get_box_structures(self) -> Dict[str, Union[Dict, Tuple[int]]]:
return {}
def _c_shape(self, cstruct):
return {
"halo_masses": (cstruct.n_halos,),
"halo_coords": (cstruct.n_halos, 3),
}
def get_required_input_arrays(self, input_box: _BaseOutputStruct) -> List[str]:
"""Return all input arrays required to compute this object."""
required = []
if isinstance(input_box, InitialConditions):
if self.user_params.PERTURB_ON_HIGH_RES:
required += ["hires_vx", "hires_vy", "hires_vz"]
else:
required += ["lowres_vx", "lowres_vy", "lowres_vz"]
if self.user_params.USE_2LPT:
required += [k + "_2LPT" for k in required]
elif isinstance(input_box, HaloField):
required += ["halo_coords", "halo_masses"]
else:
raise ValueError(
f"{type(input_box)} is not an input required for PerturbHaloField!"
)
return required
def compute(self, *, ics: InitialConditions, halo_field: HaloField, hooks: dict):
"""Compute the function."""
return self._compute(
self.redshift,
self.user_params,
self.cosmo_params,
self.astro_params,
self.flag_options,
ics,
halo_field,
hooks=hooks,
)
class TsBox(_AllParamsBox):
"""A class containing all spin temperature boxes."""
    # JordanFlitter: added next_redshift_input and next_redshift_output to TsBox.
    # This helps dramatically reduce the runtime during the dark ages while still producing output from that epoch.
_c_compute_function = lib.ComputeTsBox
_meta = False
_inputs = _AllParamsBox._inputs + ("prev_spin_redshift", "perturbed_field_redshift", "next_redshift_input") # JordanFlitter: added next_redshift_input
def __init__(
self,
*,
prev_spin_redshift: Optional[float] = None,
perturbed_field_redshift: Optional[float] = None,
next_redshift_input: Optional[float] = None, # JordanFlitter: added next_redshift_input
**kwargs,
):
self.prev_spin_redshift = prev_spin_redshift
self.perturbed_field_redshift = perturbed_field_redshift
self.next_redshift_input = next_redshift_input # JordanFlitter: added next_redshift_input
        self.next_redshift_output = next_redshift_input if next_redshift_input is not None else 0.  # JordanFlitter: added next_redshift_output
super().__init__(**kwargs)
def _get_box_structures(self) -> Dict[str, Union[Dict, Tuple[int]]]:
shape = (self.user_params.HII_DIM,) * 3
out = {
"Ts_box": shape,
"x_e_box": shape,
"Tk_box": shape,
"J_21_LW_box": shape,
"J_Lya_box": shape, # JordanFlitter: added Lya flux box to the TsBox structure (because why not)
}
# JordanFlitter: added new SDM boxes to the TsBox structure
if (self.user_params.SCATTERING_DM):
out.update({"T_chi_box": shape})
out.update({"V_chi_b_box": shape})
return out
@cached_property
def global_Ts(self):
"""Global (mean) spin temperature."""
if "Ts_box" not in self._computed_arrays:
raise AttributeError(
                "global_Ts is not defined until the spin temperature calculation has been performed"
)
else:
return np.mean(self.Ts_box)
@cached_property
def global_Tk(self):
"""Global (mean) Tk."""
if "Tk_box" not in self._computed_arrays:
raise AttributeError(
                "global_Tk is not defined until the spin temperature calculation has been performed"
)
else:
return np.mean(self.Tk_box)
@cached_property
def global_x_e(self):
"""Global (mean) x_e."""
if "x_e_box" not in self._computed_arrays:
raise AttributeError(
                "global_x_e is not defined until the spin temperature calculation has been performed"
)
else:
return np.mean(self.x_e_box)
def get_required_input_arrays(self, input_box: _BaseOutputStruct) -> List[str]:
"""Return all input arrays required to compute this object."""
required = []
if isinstance(input_box, InitialConditions):
if (
self.user_params.USE_RELATIVE_VELOCITIES
and self.flag_options.USE_MINI_HALOS
):
required += ["lowres_vcb"]
elif isinstance(input_box, PerturbedField):
required += ["density"]
elif isinstance(input_box, TsBox):
required += [
"Tk_box",
"x_e_box",
]
if self.flag_options.USE_MINI_HALOS:
required += ["J_21_LW_box"]
else:
raise ValueError(
f"{type(input_box)} is not an input required for PerturbHaloField!"
)
return required
def compute(
self,
*,
cleanup: bool,
perturbed_field: PerturbedField,
prev_spin_temp,
ics: InitialConditions,
hooks: dict,
):
"""Compute the function."""
return self._compute(
self.redshift,
self.prev_spin_redshift,
self.user_params,
self.cosmo_params,
self.astro_params,
self.flag_options,
self.perturbed_field_redshift,
self.next_redshift_input, # JordanFlitter: added next_redshift_input
cleanup,
perturbed_field,
prev_spin_temp,
ics,
hooks=hooks,
)
class IonizedBox(_AllParamsBox):
"""A class containing all ionized boxes."""
_meta = False
_c_compute_function = lib.ComputeIonizedBox
_inputs = _AllParamsBox._inputs + ("prev_ionize_redshift",)
def __init__(self, *, prev_ionize_redshift: Optional[float] = None, **kwargs):
self.prev_ionize_redshift = prev_ionize_redshift
super().__init__(**kwargs)
def _get_box_structures(self) -> Dict[str, Union[Dict, Tuple[int]]]:
if self.flag_options.USE_MINI_HALOS:
n_filtering = (
int(
np.log(
min(
self.astro_params.R_BUBBLE_MAX,
0.620350491 * self.user_params.BOX_LEN,
)
/ max(
global_params.R_BUBBLE_MIN,
0.620350491
* self.user_params.BOX_LEN
/ self.user_params.HII_DIM,
)
)
/ np.log(global_params.DELTA_R_HII_FACTOR)
)
+ 1
)
else:
n_filtering = 1
shape = (self.user_params.HII_DIM,) * 3
filter_shape = (n_filtering,) + shape
out = {
"xH_box": {"init": np.ones, "shape": shape},
"Gamma12_box": shape,
"MFP_box": shape,
"z_re_box": shape,
"dNrec_box": shape,
"temp_kinetic_all_gas": shape,
"Fcoll": filter_shape,
}
if self.flag_options.USE_MINI_HALOS:
out["Fcoll_MINI"] = filter_shape
return out
@cached_property
def global_xH(self):
"""Global (mean) neutral fraction."""
if not self.filled:
raise AttributeError(
"global_xH is not defined until the ionization calculation has been performed"
)
else:
return np.mean(self.xH_box)
def get_required_input_arrays(self, input_box: _BaseOutputStruct) -> List[str]:
"""Return all input arrays required to compute this object."""
required = []
if isinstance(input_box, InitialConditions):
if (
self.user_params.USE_RELATIVE_VELOCITIES
and self.flag_options.USE_MASS_DEPENDENT_ZETA
):
required += ["lowres_vcb"]
elif isinstance(input_box, PerturbedField):
required += ["density"]
elif isinstance(input_box, TsBox):
required += ["J_21_LW_box", "x_e_box", "Tk_box"]
elif isinstance(input_box, IonizedBox):
required += ["z_re_box", "Gamma12_box"]
if self.flag_options.INHOMO_RECO:
required += [
"dNrec_box",
]
if (
self.flag_options.USE_MASS_DEPENDENT_ZETA
and self.flag_options.USE_MINI_HALOS
):
required += ["Fcoll", "Fcoll_MINI"]
elif isinstance(input_box, PerturbHaloField):
required += ["halo_coords", "halo_masses"]
else:
raise ValueError(
f"{type(input_box)} is not an input required for IonizedBox!"
)
return required
def compute(
self,
*,
perturbed_field: PerturbedField,
prev_perturbed_field: PerturbedField,
prev_ionize_box,
spin_temp: TsBox,
pt_halos: PerturbHaloField,
ics: InitialConditions,
hooks: dict,
):
"""Compute the function."""
return self._compute(
self.redshift,
self.prev_ionize_redshift,
self.user_params,
self.cosmo_params,
self.astro_params,
self.flag_options,
perturbed_field,
prev_perturbed_field,
prev_ionize_box,
spin_temp,
pt_halos,
ics,
hooks=hooks,
)
class BrightnessTemp(_AllParamsBox):
"""A class containing the brightness temperature box."""
_c_compute_function = lib.ComputeBrightnessTemp
_meta = False
_filter_params = _OutputStructZ._filter_params
def _get_box_structures(self) -> Dict[str, Union[Dict, Tuple[int]]]:
return {"brightness_temp": (self.user_params.HII_DIM,) * 3}
@cached_property
def global_Tb(self):
"""Global (mean) brightness temperature."""
if not self.is_computed:
raise AttributeError(
                "global_Tb is not defined until the brightness temperature calculation has been performed"
)
else:
return np.mean(self.brightness_temp)
def get_required_input_arrays(self, input_box: _BaseOutputStruct) -> List[str]:
"""Return all input arrays required to compute this object."""
required = []
if isinstance(input_box, PerturbedField):
required += ["velocity"]
elif isinstance(input_box, TsBox):
required += ["Ts_box"]
elif isinstance(input_box, IonizedBox):
required += ["xH_box"]
else:
raise ValueError(
f"{type(input_box)} is not an input required for BrightnessTemp!"
)
return required
def compute(
self,
*,
spin_temp: TsBox,
ionized_box: IonizedBox,
perturbed_field: PerturbedField,
hooks: dict,
):
"""Compute the function."""
return self._compute(
self.redshift,
self.user_params,
self.cosmo_params,
self.astro_params,
self.flag_options,
spin_temp,
ionized_box,
perturbed_field,
hooks=hooks,
)
class _HighLevelOutput:
def get_cached_data(
self, kind: str, redshift: float, load_data: bool = False
) -> _OutputStruct:
"""
Return an OutputStruct object which was cached in creating this Coeval box.
Parameters
----------
kind
The kind of object: "init", "perturb", "spin_temp", "ionize" or "brightness"
redshift
The (approximate) redshift of the object to return.
load_data
Whether to actually read the field data of the object in (call ``obj.read()``
after this function to do this manually)
Returns
-------
output
The output struct object.
"""
if self.cache_files is None:
raise AttributeError(
"No cache files were associated with this Coeval object."
)
# TODO: also check this file, because it may have been "gather"d.
if kind not in self.cache_files:
raise ValueError(
f"{kind} is not a valid kind for the cache. Valid options: "
f"{self.cache_files.keys()}"
)
files = self.cache_files.get(kind, {})
# files is a list of tuples of (redshift, filename)
redshifts = np.array([f[0] for f in files])
indx = np.argmin(np.abs(redshifts - redshift))
fname = files[indx][1]
if not os.path.exists(fname):
raise OSError(
"The cached file you requested does not exist (maybe it was removed?)."
)
kinds = {
"init": InitialConditions,
"perturb_field": PerturbedField,
"ionized_box": IonizedBox,
"spin_temp": TsBox,
"brightness_temp": BrightnessTemp,
}
cls = kinds[kind]
return cls.from_file(fname, load_data=load_data)
def gather(
self,
fname: Union[str, None, Path] = None,
kinds: Union[Sequence, None] = None,
clean: Union[bool, dict] = False,
direc: Union[str, Path, None] = None,
) -> Path:
"""Gather the cached data associated with this object into its file."""
kinds = kinds or [
"init",
"perturb_field",
"ionized_box",
"spin_temp",
"brightness_temp",
]
clean = kinds if clean and not hasattr(clean, "__len__") else clean or []
if any(c not in kinds for c in clean):
raise ValueError(
"You are trying to clean cached items that you will not be gathering."
)
direc = Path(direc or config["direc"]).expanduser().absolute()
fname = Path(fname or self.get_unique_filename()).expanduser()
if not fname.exists():
fname = direc / fname
for kind in kinds:
redshifts = (f[0] for f in self.cache_files[kind])
for i, z in enumerate(redshifts):
cache_fname = self.cache_files[kind][i][1]
obj = self.get_cached_data(kind, redshift=z, load_data=True)
with h5py.File(fname, "a") as fl:
cache = (
fl.create_group("cache") if "cache" not in fl else fl["cache"]
)
kind_group = (
cache.create_group(kind) if kind not in cache else cache[kind]
)
zstr = f"z{z:.2f}"
if zstr not in kind_group:
z_group = kind_group.create_group(zstr)
else:
z_group = kind_group[zstr]
obj.write_data_to_hdf5_group(z_group)
if kind in clean:
os.remove(cache_fname)
return fname
def _get_prefix(self):
return "{name}_z{redshift:.4}_{{hash}}_r{seed}.h5".format(
name=self.__class__.__name__,
redshift=float(self.redshift),
seed=self.random_seed,
)
def _input_rep(self):
rep = ""
for inp in [
"user_params",
"cosmo_params",
"astro_params",
"flag_options",
"global_params",
]:
rep += repr(getattr(self, inp))
return rep
def get_unique_filename(self):
"""Generate a unique hash filename for this instance."""
return self._get_prefix().format(
hash=md5((self._input_rep() + self._particular_rep()).encode()).hexdigest()
)
def _write(self, direc=None, fname=None, clobber=False):
"""
Write the lightcone to file in standard HDF5 format.
This method is primarily meant for the automatic caching. Its default
filename is a hash generated based on the input data, and the directory is
the configured caching directory.
Parameters
----------
direc : str, optional
The directory into which to write the file. Default is the configuration
directory.
fname : str, optional
The filename to write, default a unique name produced by the inputs.
clobber : bool, optional
Whether to overwrite existing file.
Returns
-------
fname : str
The absolute path to which the file was written.
"""
direc = os.path.expanduser(direc or config["direc"])
if fname is None:
fname = self.get_unique_filename()
if not os.path.isabs(fname):
fname = os.path.abspath(os.path.join(direc, fname))
if not clobber and os.path.exists(fname):
raise FileExistsError(
"The file {} already exists. If you want to overwrite, set clobber=True.".format(
fname
)
)
with h5py.File(fname, "w") as f:
# Save input parameters as attributes
for k in [
"user_params",
"cosmo_params",
"flag_options",
"astro_params",
"global_params",
]:
q = getattr(self, k)
kfile = "_globals" if k == "global_params" else k
grp = f.create_group(kfile)
try:
dct = q.self
except AttributeError:
dct = q
for kk, v in dct.items():
if v is None:
continue
try:
grp.attrs[kk] = v
except TypeError:
# external_table_path is a cdata object and can't be written.
pass
if self.photon_nonconservation_data is not None:
photon_data = f.create_group("photon_nonconservation_data")
for k, val in self.photon_nonconservation_data.items():
photon_data[k] = val
f.attrs["redshift"] = self.redshift
f.attrs["random_seed"] = self.random_seed
f.attrs["version"] = __version__
self._write_particulars(fname)
return fname
def _write_particulars(self, fname):
pass
def save(self, fname=None, direc="."):
"""Save to disk.
This function has defaults that make it easy to save a unique box to
the current directory.
Parameters
----------
fname : str, optional
The filename to write, default a unique name produced by the inputs.
direc : str, optional
The directory into which to write the file. Default is the current directory.
Returns
-------
str :
The filename to which the box was written.
"""
return self._write(direc=direc, fname=fname)
@classmethod
def _read_inputs(cls, fname):
kwargs = {}
with h5py.File(fname, "r") as fl:
glbls = dict(fl["_globals"].attrs)
kwargs["redshift"] = fl.attrs["redshift"]
if "photon_nonconservation_data" in fl.keys():
d = fl["photon_nonconservation_data"]
kwargs["photon_nonconservation_data"] = {k: d[k][...] for k in d.keys()}
return kwargs, glbls
@classmethod
def read(cls, fname, direc="."):
"""Read a lightcone file from disk, creating a LightCone object.
Parameters
----------
fname : str
The filename path. Can be absolute or relative.
direc : str
            If fname is relative, the directory in which to find the file. By
            default, both the current directory and the default cache will be
            searched, in that order.
Returns
-------
LightCone :
A :class:`LightCone` instance created from the file's data.
"""
if not os.path.isabs(fname):
fname = os.path.abspath(os.path.join(direc, fname))
if not os.path.exists(fname):
            raise FileNotFoundError(f"The file {fname} does not exist!")
park, glbls = cls._read_inputs(fname)
boxk = cls._read_particular(fname)
with global_params.use(**glbls):
out = cls(**park, **boxk)
return out
class Coeval(_HighLevelOutput):
"""A full coeval box with all associated data."""
def __init__(
self,
redshift: float,
initial_conditions: InitialConditions,
perturbed_field: PerturbedField,
ionized_box: IonizedBox,
brightness_temp: BrightnessTemp,
ts_box: Union[TsBox, None] = None,
cache_files: Union[dict, None] = None,
photon_nonconservation_data=None,
_globals=None,
):
_check_compatible_inputs(
initial_conditions,
perturbed_field,
ionized_box,
brightness_temp,
ts_box,
ignore=[],
)
self.redshift = redshift
self.init_struct = initial_conditions
self.perturb_struct = perturbed_field
self.ionization_struct = ionized_box
self.brightness_temp_struct = brightness_temp
self.spin_temp_struct = ts_box
self.cache_files = cache_files
self.photon_nonconservation_data = photon_nonconservation_data
# A *copy* of the current global parameters.
self.global_params = _globals or dict(global_params.items())
# Expose all the fields of the structs to the surface of the Coeval object
for box in [
initial_conditions,
perturbed_field,
ionized_box,
brightness_temp,
ts_box,
]:
if box is None:
continue
for field in box._get_box_structures():
setattr(self, field, getattr(box, field))
@classmethod
def get_fields(cls, spin_temp: bool = True) -> List[str]:
"""Obtain a list of name of simulation boxes saved in the Coeval object."""
pointer_fields = []
for cls in [InitialConditions, PerturbedField, IonizedBox, BrightnessTemp]:
pointer_fields += cls.get_pointer_fields()
if spin_temp:
pointer_fields += TsBox.get_pointer_fields()
return pointer_fields
@property
def user_params(self):
"""User params shared by all datasets."""
return self.brightness_temp_struct.user_params
@property
def cosmo_params(self):
"""Cosmo params shared by all datasets."""
return self.brightness_temp_struct.cosmo_params
@property
def flag_options(self):
"""Flag Options shared by all datasets."""
return self.brightness_temp_struct.flag_options
@property
def astro_params(self):
"""Astro params shared by all datasets."""
return self.brightness_temp_struct.astro_params
@property
def random_seed(self):
"""Random seed shared by all datasets."""
return self.brightness_temp_struct.random_seed
def _particular_rep(self):
return ""
def _write_particulars(self, fname):
for name in ["init", "perturb", "ionization", "brightness_temp", "spin_temp"]:
struct = getattr(self, name + "_struct")
if struct is not None:
struct.write(fname=fname, write_inputs=False)
# Also write any other inputs to any of the constituent boxes
# to the overarching attrs.
with h5py.File(fname, "a") as fl:
for inp in struct._inputs:
if inp not in fl.attrs and inp not in [
"user_params",
"cosmo_params",
"flag_options",
"astro_params",
"global_params",
]:
fl.attrs[inp] = getattr(struct, inp)
@classmethod
def _read_particular(cls, fname):
kwargs = {}
with h5py.File(fname, "r") as fl:
for output_class in _ut.OutputStruct._implementations():
if output_class.__name__ in fl:
kwargs[
_ut.camel_to_snake(output_class.__name__)
] = output_class.from_file(fname)
return kwargs
def __eq__(self, other):
"""Determine if this is equal to another object."""
return (
isinstance(other, self.__class__)
and other.redshift == self.redshift
and self.user_params == other.user_params
and self.cosmo_params == other.cosmo_params
and self.flag_options == other.flag_options
and self.astro_params == other.astro_params
)
class LightCone(_HighLevelOutput):
"""A full Lightcone with all associated evolved data."""
def __init__(
self,
redshift,
user_params,
cosmo_params,
astro_params,
flag_options,
random_seed,
lightcones,
node_redshifts=None,
global_quantities=None,
photon_nonconservation_data=None,
cache_files: Union[dict, None] = None,
_globals=None,
Cl_data=None, # JordanFlitter: added Cl_data to lightcone structure
c_T_median=None, # JordanFlitter: added c_T to lightcone structure
c_x_e_median=None, # JordanFlitter: added c_x_e to lightcone structure
c_T_s_median=None, # JordanFlitter: added c_T_s to lightcone structure
c_21_median=None, # JordanFlitter: added c_21 to lightcone structure
coeval_boxes=None, # JordanFlitter: added coeval_boxes to lightcone structure
):
self.redshift = redshift
self.random_seed = random_seed
self.user_params = user_params
self.cosmo_params = cosmo_params
self.astro_params = astro_params
self.flag_options = flag_options
self.node_redshifts = node_redshifts
self.cache_files = cache_files
self.Cl_data = Cl_data # JordanFlitter: added Cl_data to lightcone structure
self.c_T_median = c_T_median # JordanFlitter: added c_T to lightcone structure
self.c_x_e_median = c_x_e_median # JordanFlitter: added c_x_e to lightcone structure
self.c_T_s_median = c_T_s_median # JordanFlitter: added c_T_s to lightcone structure
self.c_21_median = c_21_median # JordanFlitter: added c_21 to lightcone structure
self.coeval_boxes = coeval_boxes # JordanFlitter: added coeval_boxes to lightcone structure
# A *copy* of the current global parameters.
self.global_params = _globals or dict(global_params.items())
if global_quantities:
for name, data in global_quantities.items():
if name.endswith("_box"):
# Remove the _box because it looks dumb.
setattr(self, "global_" + name[:-4], data)
else:
setattr(self, "global_" + name, data)
self.photon_nonconservation_data = photon_nonconservation_data
for name, data in lightcones.items():
setattr(self, name, data)
# Hold a reference to the global/lightcones in a dict form for easy reference.
self.global_quantities = global_quantities
self.lightcones = lightcones
@property
def global_xHI(self):
"""Global neutral fraction function."""
warnings.warn(
"global_xHI is deprecated. From now on, use global_xH. Will be removed in v3.1"
)
return self.global_xH
@property
def cell_size(self):
"""Cell size [Mpc] of the lightcone voxels."""
return self.user_params.BOX_LEN / self.user_params.HII_DIM
@property
def lightcone_dimensions(self):
"""Lightcone size over each dimension -- tuple of (x,y,z) in Mpc."""
return (
self.user_params.BOX_LEN,
self.user_params.BOX_LEN,
self.n_slices * self.cell_size,
)
@property
def shape(self):
"""Shape of the lightcone as a 3-tuple."""
return self.brightness_temp.shape
@property
def n_slices(self):
"""Number of redshift slices in the lightcone."""
return self.shape[-1]
@property
def lightcone_coords(self):
"""Co-ordinates [Mpc] of each cell along the redshift axis."""
return np.linspace(0, self.lightcone_dimensions[-1], self.n_slices)
@property
def lightcone_distances(self):
"""Comoving distance to each cell along the redshift axis, from z=0."""
return (
self.cosmo_params.cosmo.comoving_distance(self.redshift).value
+ self.lightcone_coords
)
@property
def lightcone_redshifts(self):
"""Redshift of each cell along the redshift axis."""
return np.array(
[
z_at_value(self.cosmo_params.cosmo.comoving_distance, d * units.Mpc, zmax=1e6) # JordanFlitter: added zmax parameter to support having output at the dark ages
for d in self.lightcone_distances # Note: zmax is required if one wishes to convert angular distance to redshift. We don't care about this parameter since comoving distance is a one-to-one function of redshift
]
)
def _particular_rep(self):
return (
str(np.round(self.node_redshifts, 3))
+ str(self.global_quantities.keys())
+ str(self.lightcones.keys())
)
def _write_particulars(self, fname):
with h5py.File(fname, "a") as f:
# Save the boxes to the file
boxes = f.create_group("lightcones")
# Go through all fields in this struct, and save
for k, val in self.lightcones.items():
boxes[k] = val
global_q = f.create_group("global_quantities")
for k, v in self.global_quantities.items():
global_q[k] = v
f["node_redshifts"] = self.node_redshifts
@classmethod
def _read_inputs(cls, fname):
kwargs = {}
with h5py.File(fname, "r") as fl:
for (k, kls) in [
("user_params", UserParams),
("cosmo_params", CosmoParams),
("flag_options", FlagOptions),
("astro_params", AstroParams),
]:
grp = fl[k]
kwargs[k] = kls(dict(grp.attrs))
kwargs["random_seed"] = fl.attrs["random_seed"]
# Get the standard inputs.
kw, glbls = _HighLevelOutput._read_inputs(fname)
return {**kw, **kwargs}, glbls
@classmethod
def _read_particular(cls, fname):
kwargs = {}
with h5py.File(fname, "r") as fl:
boxes = fl["lightcones"]
kwargs["lightcones"] = {k: boxes[k][...] for k in boxes.keys()}
glb = fl["global_quantities"]
kwargs["global_quantities"] = {k: glb[k][...] for k in glb.keys()}
kwargs["node_redshifts"] = fl["node_redshifts"][...]
return kwargs
def __eq__(self, other):
"""Determine if this is equal to another object."""
return (
isinstance(other, self.__class__)
and other.redshift == self.redshift
and np.all(np.isclose(other.node_redshifts, self.node_redshifts, atol=1e-3))
and self.user_params == other.user_params
and self.cosmo_params == other.cosmo_params
and self.flag_options == other.flag_options
and self.astro_params == other.astro_params
and self.global_quantities.keys() == other.global_quantities.keys()
and self.lightcones.keys() == other.lightcones.keys()
)
|
jordanflitterREPO_NAME21cmFirstCLASSPATH_START.@21cmFirstCLASS_extracted@21cmFirstCLASS-main@src@py21cmfast@outputs.py@.PATH_END.py
|
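The `get_cached_data` method above picks the cache file whose redshift is nearest the requested one via `np.argmin(np.abs(redshifts - redshift))`. A minimal stand-alone sketch of that selection step, using plain Python and hypothetical placeholder file names (not real cache entries), looks like this:

```python
# Nearest-redshift lookup, mirroring the np.argmin(np.abs(...)) step
# in _HighLevelOutput.get_cached_data. File names are hypothetical.
def nearest_cache_entry(files, redshift):
    """Return the (z, fname) tuple whose z is closest to `redshift`."""
    return min(files, key=lambda f: abs(f[0] - redshift))

files = [
    (6.0, "brightness_z6.h5"),
    (8.0, "brightness_z8.h5"),
    (10.0, "brightness_z10.h5"),
]

print(nearest_cache_entry(files, 8.4))  # -> (8.0, 'brightness_z8.h5')
```

The real method additionally checks that the selected file still exists on disk before constructing the output struct from it.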
{
"filename": "__init__.py",
"repo_name": "OpenAccess-AI-Collective/axolotl",
"repo_path": "axolotl_extracted/axolotl-main/src/axolotl/core/__init__.py",
"type": "Python"
}
|
OpenAccess-AI-CollectiveREPO_NAMEaxolotlPATH_START.@axolotl_extracted@axolotl-main@src@axolotl@core@__init__.py@.PATH_END.py
|
|
{
"filename": "template.py",
"repo_name": "3fon3fonov/exostriker",
"repo_path": "exostriker_extracted/exostriker-main/exostriker/lib/pyqtgraph/examples/template.py",
"type": "Python"
}
|
"""
Description of example
"""
import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore, QtGui, QtWidgets, mkQApp
app = mkQApp()
# win.setWindowTitle('pyqtgraph example: ____')
if __name__ == '__main__':
pg.exec()
|
3fon3fonovREPO_NAMEexostrikerPATH_START.@exostriker_extracted@exostriker-main@exostriker@lib@pyqtgraph@examples@template.py@.PATH_END.py
|
{
"filename": "svg_histogram_sgskip.py",
"repo_name": "matplotlib/matplotlib",
"repo_path": "matplotlib_extracted/matplotlib-main/galleries/examples/user_interfaces/svg_histogram_sgskip.py",
"type": "Python"
}
|
"""
=============
SVG Histogram
=============
Demonstrate how to create an interactive histogram, in which bars
are hidden or shown by clicking on legend markers.
The interactivity is encoded in ecmascript (javascript) and inserted in
the SVG code in a post-processing step. To render the image, open it in
a web browser. SVG is supported in most web browsers used by Linux and
macOS users. Windows IE9 supports SVG, but earlier versions do not.
Notes
-----
The matplotlib backend lets us assign ids to each object. This is the
mechanism used here to relate matplotlib objects created in python and
the corresponding SVG constructs that are parsed in the second step.
While flexible, ids are cumbersome to use for large collection of
objects. Two mechanisms could be used to simplify things:
* systematic grouping of objects into SVG <g> tags,
* assigning classes to each SVG object according to its origin.
For example, instead of modifying the properties of each individual bar,
the bars from the `~.pyplot.hist` function could either be grouped in
a PatchCollection, or be assigned a class="hist_##" attribute.
CSS could also be used more extensively to replace repetitive markup
throughout the generated SVG.
Author: david.huard@gmail.com
"""
from io import BytesIO
import json
import xml.etree.ElementTree as ET
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams['svg.fonttype'] = 'none'
# Apparently, this `register_namespace` method is necessary to avoid garbling
# the XML namespace with ns0.
ET.register_namespace("", "http://www.w3.org/2000/svg")
# Fixing random state for reproducibility
np.random.seed(19680801)
# --- Create histogram, legend and title ---
plt.figure()
r = np.random.randn(100)
r1 = r + 1
labels = ['Rabbits', 'Frogs']
H = plt.hist([r, r1], label=labels)
containers = H[-1]
leg = plt.legend(frameon=False)
plt.title("From a web browser, click on the legend\n"
"marker to toggle the corresponding histogram.")
# --- Add ids to the svg objects we'll modify
hist_patches = {}
for ic, c in enumerate(containers):
hist_patches[f'hist_{ic}'] = []
for il, element in enumerate(c):
element.set_gid(f'hist_{ic}_patch_{il}')
hist_patches[f'hist_{ic}'].append(f'hist_{ic}_patch_{il}')
# Set ids for the legend patches
for i, t in enumerate(leg.get_patches()):
t.set_gid(f'leg_patch_{i}')
# Set ids for the text patches
for i, t in enumerate(leg.get_texts()):
t.set_gid(f'leg_text_{i}')
# Save SVG in a fake file object.
f = BytesIO()
plt.savefig(f, format="svg")
# Create XML tree from the SVG file.
tree, xmlid = ET.XMLID(f.getvalue())
# --- Add interactivity ---
# Add attributes to the patch objects.
for i, t in enumerate(leg.get_patches()):
el = xmlid[f'leg_patch_{i}']
el.set('cursor', 'pointer')
el.set('onclick', "toggle_hist(this)")
# Add attributes to the text objects.
for i, t in enumerate(leg.get_texts()):
el = xmlid[f'leg_text_{i}']
el.set('cursor', 'pointer')
el.set('onclick', "toggle_hist(this)")
# Create script defining the function `toggle_hist`.
# We create a global variable `container` that stores the patch ids
# belonging to each histogram. Then a function `toggle_hist` sets the
# visibility attribute of all patches of each histogram and the opacity
# of the marker itself.
script = """
<script type="text/ecmascript">
<![CDATA[
var container = %s
function toggle(oid, attribute, values) {
/* Toggle the style attribute of an object between two values.
Parameters
----------
oid : str
Object identifier.
attribute : str
Name of style attribute.
values : [on state, off state]
The two values that are switched between.
*/
var obj = document.getElementById(oid);
var a = obj.style[attribute];
a = (a == values[0] || a == "") ? values[1] : values[0];
obj.style[attribute] = a;
}
function toggle_hist(obj) {
var num = obj.id.slice(-1);
toggle('leg_patch_' + num, 'opacity', [1, 0.3]);
toggle('leg_text_' + num, 'opacity', [1, 0.5]);
var names = container['hist_'+num]
for (var i=0; i < names.length; i++) {
toggle(names[i], 'opacity', [1, 0])
};
}
]]>
</script>
""" % json.dumps(hist_patches)
# Add a transition effect
css = tree.find('.//{http://www.w3.org/2000/svg}style')
css.text = css.text + "g {-webkit-transition:opacity 0.4s ease-out;" + \
"-moz-transition:opacity 0.4s ease-out;}"
# Insert the script and save to file.
tree.insert(0, ET.XML(script))
ET.ElementTree(tree).write("svg_histogram.svg")
|
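The pattern in the example above (tag artists with `set_gid`, save the figure as SVG, then post-process the file with ElementTree) can be sketched with the stdlib alone. The element id and the `toggle_hist` handler below mirror the example but are illustrative stand-ins for whatever ids a real figure would generate:

```python
# Sketch of the SVG post-processing step: locate elements by the gid that
# was written into the file and attach interactivity attributes to them.
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)  # avoid the ns0: prefix on output

svg_src = (
    f'<svg xmlns="{SVG_NS}">'
    '<g id="leg_patch_0"><rect width="10" height="10"/></g>'
    '</svg>'
)

tree, xmlid = ET.XMLID(svg_src)  # xmlid maps id -> element
el = xmlid["leg_patch_0"]
el.set("cursor", "pointer")
el.set("onclick", "toggle_hist(this)")  # hypothetical JS handler name

out = ET.tostring(tree, encoding="unicode")
```

The same `xmlid` lookup works unchanged on the bytes that `plt.savefig(f, format="svg")` writes.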
{
"filename": "pdfratio.py",
"repo_name": "icecube/skyllh",
"repo_path": "skyllh_extracted/skyllh-master/skyllh/plotting/i3/pdfratio.py",
"type": "Python"
}
|
# -*- coding: utf-8 -*-
"""Plotting module to plot IceCube specific PDF ratio objects.
"""
import numpy as np
import itertools
from matplotlib.axes import Axes
from matplotlib.colors import LogNorm
from skyllh.core.py import classname
from skyllh.core.source_hypo_grouping import (
SourceHypoGroupManager,
)
from skyllh.core.storage import DataFieldRecordArray
from skyllh.core.trialdata import TrialDataManager
from skyllh.i3.pdfratio import SplinedI3EnergySigSetOverBkgPDFRatio
class SplinedI3EnergySigSetOverBkgPDFRatioPlotter(object):
"""Plotter class to plot an I3EnergySigSetOverBkgPDFRatioSpline object.
"""
def __init__(self, tdm, pdfratio):
"""Creates a new plotter object for plotting an
I3EnergySigSetOverBkgPDFRatioSpline object.
Parameters
----------
tdm : instance of TrialDataManager
The instance of TrialDataManager that provides the data for the
PDF ratio evaluation.
pdfratio : I3EnergySigSetOverBkgPDFRatioSpline
The PDF ratio object to plot.
"""
self.tdm = tdm
self.pdfratio = pdfratio
@property
def pdfratio(self):
"""The PDF ratio object to plot.
"""
return self._pdfratio
@pdfratio.setter
def pdfratio(self, pdfratio):
if not isinstance(pdfratio, SplinedI3EnergySigSetOverBkgPDFRatio):
raise TypeError(
'The pdfratio property must be an instance of '
'SplinedI3EnergySigSetOverBkgPDFRatio!')
self._pdfratio = pdfratio
@property
def tdm(self):
"""The TrialDataManager that provides the data for the PDF evaluation.
"""
return self._tdm
@tdm.setter
def tdm(self, obj):
if not isinstance(obj, TrialDataManager):
raise TypeError(
'The tdm property must be an instance of TrialDataManager!')
self._tdm = obj
def plot(self, src_hypo_group_manager, axes, fitparams, **kwargs):
"""Plots the PDF ratio for the given set of fit paramater values.
Parameters
----------
src_hypo_group_manager : instance of SourceHypoGroupManager
The instance of SourceHypoGroupManager that defines the source
hypotheses.
axes : mpl.axes.Axes
The matplotlib Axes object on which the PDF ratio gets drawn.
fitparams : dict
The dictionary with the set of fit parameter values.
Additional Keyword Arguments
----------------------------
Any additional keyword arguments will be passed to the
`matplotlib.axes.Axes.imshow` function.
Returns
-------
img : instance of mpl.AxesImage
The AxesImage instance showing the PDF ratio image.
"""
if not isinstance(src_hypo_group_manager, SourceHypoGroupManager):
raise TypeError(
'The src_hypo_group_manager argument must be an '
'instance of SourceHypoGroupManager!')
if not isinstance(axes, Axes):
raise TypeError(
'The axes argument must be an instance of '
'matplotlib.axes.Axes!')
if not isinstance(fitparams, dict):
raise TypeError(
'The fitparams argument must be an instance of dict!')
# Get the binning for the axes from the background PDF. By construction,
# all PDFs use the same binning, and we know that the PDFs are
# 2-dimensional.
(xbinning, ybinning) = self._pdfratio.backgroundpdf.binnings
# Create a 2D array with the ratio values. We put one event into each
# bin.
ratios = np.zeros((xbinning.nbins, ybinning.nbins), dtype=np.float64)
events = DataFieldRecordArray(np.zeros(
(ratios.size,),
dtype=[('ix', np.int64), (xbinning.name, np.float64),
('iy', np.int64), (ybinning.name, np.float64)]))
for (i, ((ix, x), (iy, y))) in enumerate(itertools.product(
enumerate(xbinning.bincenters),
enumerate(ybinning.bincenters))):
events['ix'][i] = ix
events[xbinning.name][i] = x
events['iy'][i] = iy
events[ybinning.name][i] = y
self._tdm.initialize_for_new_trial(src_hypo_group_manager, events)
event_ratios = self.pdfratio.get_ratio(self._tdm, fitparams)
for i in range(len(events)):
ratios[events['ix'][i], events['iy'][i]] = event_ratios[i]
(left, right, bottom, top) = (xbinning.lower_edge, xbinning.upper_edge,
ybinning.lower_edge, ybinning.upper_edge)
img = axes.imshow(
ratios.T,
extent=(left, right, bottom, top),
origin='lower',
norm=LogNorm(),
interpolation='none',
**kwargs)
axes.set_xlabel(xbinning.name)
axes.set_ylabel(ybinning.name)
axes.set_title(classname(self._pdfratio))
return img
|
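The `plot` method above evaluates the PDF ratio once per 2-D bin center and scatters the flat results back into a grid. A minimal pure-Python sketch of that loop, with a stand-in `ratio` function in place of the spline evaluation:

```python
# One "event" per bin center, evaluated in a flat loop and scattered back
# into an (nx, ny) grid. Bin centers and the ratio function are illustrative.
import itertools

xcenters = [0.5, 1.5, 2.5]
ycenters = [10.0, 20.0]

def ratio(x, y):  # hypothetical stand-in for the spline evaluation
    return x + 0.1 * y

ratios = [[0.0] * len(ycenters) for _ in xcenters]
for (ix, x), (iy, y) in itertools.product(
        enumerate(xcenters), enumerate(ycenters)):
    ratios[ix][iy] = ratio(x, y)
```

The real method does the same scatter with a `DataFieldRecordArray` holding the `ix`/`iy` indices alongside the bin-center coordinates.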
{
"filename": "_startline.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/validators/carpet/baxis/_startline.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class StartlineValidator(_plotly_utils.basevalidators.BooleanValidator):
def __init__(self, plotly_name="startline", parent_name="carpet.baxis", **kwargs):
super(StartlineValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "calc"),
role=kwargs.pop("role", "style"),
**kwargs
)
|
{
"filename": "demo_FEA_loads_dynamic.py",
"repo_name": "projectchrono/chrono",
"repo_path": "chrono_extracted/chrono-main/src/demos/python/fea/demo_FEA_loads_dynamic.py",
"type": "Python"
}
|
# =============================================================================
# PROJECT CHRONO - http://projectchrono.org
#
# Copyright (c) 2014 projectchrono.org
# All rights reserved.
#
# Use of this source code is governed by a BSD-style license that can be found
# in the LICENSE file at the top level of the distribution and at
# http://projectchrono.org/license-chrono.txt.
#
# =============================================================================
import pychrono as chrono
import pychrono.fea as fea
import pychrono.irrlicht as chronoirr
import errno
import os
import copy
out_dir = chrono.GetChronoOutputPath() + "FEA_LOADS" # Output directory
print("Copyright (c) 2017 projectchrono.org ")
# Create (if needed) output directory
try:
os.mkdir(out_dir)
except OSError as exc:
if exc.errno != errno.EEXIST:
print("Error creating output directory " )
# Create the physical system
sys = chrono.ChSystemSMC()
# Create a mesh
mesh = fea.ChMesh()
sys.Add(mesh)
# Create some nodes (with default mass 0)
nodeA = fea.ChNodeFEAxyzrot(chrono.ChFramed(chrono.ChVector3d(0, 0, 0)))
nodeB = fea.ChNodeFEAxyzrot(chrono.ChFramed(chrono.ChVector3d(2, 0, 0)))
nodeA.SetMass(0.0)
nodeB.SetMass(0.0)
mesh.AddNode(nodeA)
mesh.AddNode(nodeB)
# Create beam section & material
beam_section = fea.ChBeamSectionEulerAdvanced()
beam_wy = 0.1
beam_wz = 0.2
beam_section.SetAsRectangularSection(beam_wy, beam_wz)
beam_section.SetYoungModulus(0.01e9)
beam_section.SetShearModulus(0.01e9 * 0.3)
beam_section.SetRayleighDamping(0.200)
beam_section.SetDensity(1500)
# Create an Euler-Bernoulli beam with a single element
elementA = fea.ChElementBeamEuler()
elementA.SetNodes(nodeA, nodeB)
elementA.SetSection(beam_section)
mesh.AddElement(elementA)
# Create the ground body
ground = chrono.ChBody()
ground.SetFixed(True)
sys.Add(ground)
# Create a constraint at the end of the beam
constrA = chrono.ChLinkMateGeneric()
constrA.Initialize(nodeA, ground, False, nodeA.Frame(), nodeA.Frame())
sys.Add(constrA)
constrA.SetConstrainedCoords(True, True, True, # x, y, z
True, True, True) # Rx, Ry, Rz
# Create the load container
load_container = chrono.ChLoadContainer()
sys.Add(load_container)
# Create a custom load with stiff force, acting on a single node, but
# this time we inherit directly from ChLoadCustom, i.e. a load that does not require ChLoader features.
# This is mostly used in case one does not need the automatic surface/volume quadrature of ChLoader.
# As a stiff load, this will automatically generate a jacobian (tangent stiffness matrix K)
# that will be used in statics, implicit integrators, etc.
print(" Custom load with stiff force, acting on a single node.")
nodeD = fea.ChNodeFEAxyz(chrono.ChVector3d(2, 10, 3))
mesh.AddNode(nodeD)
class MyLoadCustom(chrono.ChLoadCustom):
def __init__(self, loadable):
chrono.ChLoadCustom.__init__(self, loadable)
# "Virtual" copy constructor (covariant return type).
def Clone(self):
newinst = copy.deepcopy(self)
return newinst
# Compute Q=Q(x,v)
# This is the function that you have to implement. It should return the generalized Q load
# (i.e. the force in generalized Lagrangian coordinates).
# For ChNodeFEAxyz, Q loads are expected as 3-row vectors, containing the absolute force x,y,z.
# As this is a stiff force field, the dependency on state_x and state_w must be considered.
def ComputeQ(self, #
state_x, # state position to evaluate Q
state_w): # state speed to evaluate Q
if state_x is not None and state_w is not None:
node_pos = chrono.ChVector3d(state_x.GetItem(0), state_x.GetItem(1), state_x.GetItem(2))
node_vel = chrono.ChVector3d(state_w.GetItem(0), state_w.GetItem(1), state_w.GetItem(2))
else:
node = fea.CastToChNodeFEAxyz( fea.CastToChNodeFEAbase( chrono.CastToChNodeBase(self.loadable) ))
node_pos = node.GetPos()
node_vel = node.GetPosDt()
# Just implement a simple force+spring+damper in xy plane,
# for spring & damper connected to absolute reference
Kx = 100
Ky = 400
Dx = 0.6
Dy = 0.9
x_offset = 2
y_offset = 5
x_force = 50
y_force = 0
# Store the computed generalized forces in this.load_Q, same x,y,z order as in state_w
self.load_Q.SetItem(0, x_force - Kx * (node_pos.x - x_offset) - Dx * node_vel.x)
self.load_Q.SetItem(1, y_force - Ky * (node_pos.y - y_offset) - Dy * node_vel.y)
self.load_Q.SetItem(2, 0.0)
# Set this as stiff, to enable the Jacobians
def IsStiff(self) :
return True
# Instance load object, applying to a node, and add to container
custom_load = MyLoadCustom(nodeD)
load_container.Add(custom_load)
# -----------------------------------------------------------------
# Set visualization of the FEM mesh.
beam_visA = chrono.ChVisualShapeFEA(mesh)
beam_visA.SetFEMdataType(chrono.ChVisualShapeFEA.DataType_ELEM_BEAM_MZ)
beam_visA.SetColorscaleMinMax(-400, 200)
beam_visA.SetSmoothFaces(True)
beam_visA.SetWireframe(False)
mesh.AddVisualShapeFEA(beam_visA)
beam_visB = chrono.ChVisualShapeFEA(mesh)
beam_visB.SetFEMglyphType(chrono.ChVisualShapeFEA.GlyphType_NODE_CSYS)
beam_visB.SetFEMdataType(chrono.ChVisualShapeFEA.DataType_NONE)
beam_visB.SetSymbolsThickness(0.006)
beam_visB.SetSymbolsScale(0.01)
beam_visB.SetZbufferHide(False)
mesh.AddVisualShapeFEA(beam_visB)
# Create the Irrlicht visualization
vis = chronoirr.ChVisualSystemIrrlicht()
vis.AttachSystem(sys)
vis.SetWindowSize(1024,768)
vis.SetWindowTitle('Loads on beams')
vis.Initialize()
vis.AddLogo(chrono.GetChronoDataFile('logo_pychrono_alpha.png'))
vis.AddSkyBox()
vis.AddCamera(chrono.ChVector3d(0.5, 0.0, -3.0), chrono.ChVector3d(0.5, 0.0, 0.0))
vis.AddTypicalLights()
# -----------------------------------------------------------------
# Setup a MINRES solver. For FEA one cannot use the default PSOR type solver.
solver = chrono.ChSolverMINRES()
sys.SetSolver(solver)
solver.SetMaxIterations(200)
solver.SetTolerance(1e-15)
solver.EnableDiagonalPreconditioner(True)
solver.SetVerbose(False)
sys.GetSolver().AsIterative().SetTolerance(1e-13)
# Set integrator
ts = chrono.ChTimestepperEulerImplicitLinearized(sys)
sys.SetTimestepper(ts)
# Simulation loop
while vis.Run():
vis.BeginScene()
vis.Render()
vis.EndScene()
sys.DoStepDynamics(0.001)
|
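The `ComputeQ` override above applies, per axis, a constant force plus a linear spring and damper toward an offset. A plain-Python sketch of that force law, exercised with the same x/y constants the demo hard-codes:

```python
# Generalized force for one axis: F - k*(x - x0) - d*v, as in ComputeQ above.
def spring_damper_q(pos, vel, k, d, offset, force):
    """Constant force plus linear spring/damper toward `offset`."""
    return force - k * (pos - offset) - d * vel

# x axis at its rest offset: only the constant force remains
qx = spring_damper_q(pos=2.0, vel=0.0, k=100.0, d=0.6, offset=2.0, force=50.0)
# y axis displaced by 1 and moving: spring and damper both pull back
qy = spring_damper_q(pos=6.0, vel=1.0, k=400.0, d=0.9, offset=5.0, force=0.0)
```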
{
"filename": "gammapy_plugin.py",
"repo_name": "andreatramacere/jetset",
"repo_path": "jetset_extracted/jetset-master/jetset/gammapy_plugin.py",
"type": "Python"
}
|
__author__ = "Andrea Tramacere"
import os
try:
from gammapy.modeling.models import (
SpectralModel,
)
from gammapy.modeling.parameter import Parameter,Parameters
from gammapy.estimators import FluxPoints
from gammapy.datasets import FluxPointsDataset
from gammapy.modeling import Fit
gammapy_installed = True
except ImportError:
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
if on_rtd is True:
SpectralModel=object
pass
else:
raise ImportError('to use gammapy plugin you need to install gammapy: https://docs.gammapy.org/0.19/getting-started/install.html')
import astropy.units as u
import numpy as np
__all__=['GammapyJetsetModel','GammapyJetsetModelFactory']
class GammapyJetsetModel(SpectralModel):
def __init__(self,jetset_model,clone=True):
if clone is True:
_jetset_model = jetset_model.clone()
else:
_jetset_model= jetset_model
self._jetset_model=_jetset_model
self._jetset_model.add_user_par(name='fake_norm',units='',val=1,val_min=0,val_max=None)
self._jetset_model.parameters.fake_norm.frozen=True
parameters = []
for ID,p in enumerate(self._jetset_model.parameters.par_array):
#print(p.name)
if p.name=='fake_norm':
is_norm=True
else:
is_norm=False
parameter = Parameter(p.name, p.val, is_norm=is_norm,frozen=p.frozen)
if _jetset_model.parameters.par_array[ID].units is not None:
try:
parameter.unit = p.units
except:
parameter.unit = ''
else:
parameter.unit = ''
if p.val_min is not None:
parameter.min=p.val_min
if p.fit_range_min is not None:
parameter.min=p.fit_range_min
if p.val_max is not None:
parameter.max=p.val_max
if p.fit_range_max is not None:
parameter.max=p.fit_range_max
parameters.append(parameter)
self.default_parameters = Parameters(parameters)
self.tag=_jetset_model.name
super(GammapyJetsetModel, self).__init__()
def evaluate(self,energy=None,**kwargs):
if energy is None:
el1=np.log10( self._jetset_model.nu_min)
el2=np.log10( self._jetset_model.nu_max)
energy=(np.logspace(el1,el2,self._jetset_model.nu_size)*u.Hz).to('eV',equivalencies=u.spectral())
nu = energy.to("Hz", equivalencies=u.spectral())
for p in self.parameters:
if p.name not in kwargs.keys():
self._jetset_model.set_par(p.name ,val=p.value)
for k,v in kwargs.items():
self._jetset_model.set_par(k,val=v.value)
self._jetset_model.eval(nu=nu.value)
_spec= self._jetset_model.spectral_components.Sum.SED.nuFnu.to('eV cm-2 s-1')/(energy.to('eV')**2)
return _spec.to("1 / (cm2 eV s)")
@property
def jetset_model(self):
return self._jetset_model
def GammapyJetsetModelFactory(jetset_model,clone=True):
return GammapyJetsetModel(jetset_model,clone=clone)
|
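The constructor above maps jetset parameter bounds onto gammapy `Parameter.min`/`max`, letting a fit range, when present, override the plain value bounds. A hypothetical helper sketching just that resolution order:

```python
# Sketch of the bound-resolution logic: fit_range_* wins over val_min/val_max
# when it is set. The helper name and call values are illustrative.
def resolve_bounds(val_min, val_max, fit_range_min=None, fit_range_max=None):
    lo = fit_range_min if fit_range_min is not None else val_min
    hi = fit_range_max if fit_range_max is not None else val_max
    return lo, hi

bounds = resolve_bounds(val_min=0.0, val_max=10.0, fit_range_min=1.0)
```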
{
"filename": "plot_fig4.py",
"repo_name": "mkenworthy/exorings",
"repo_path": "exorings_extracted/exorings-master/plot_fig4.py",
"type": "Python"
}
|
import sys, getopt
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from matplotlib.patches import Rectangle
from astropy.io import ascii
from scipy.interpolate import interp1d
import exorings3 as exorings
# set sensible imshow defaults
mpl.rc('image', interpolation='nearest', origin='lower', cmap='gray')
# no scientific notation for numbers on plots
mpl.rc('axes.formatter', limits=(-7, 7))
# use latex for labelling
mpl.rc('text', usetex=True)
mpl.rc('font', family='serif')
# load in J1407 binned photometry curve
tin = ascii.read("j1407_bincc.dat")
time = tin['time']
flux = tin['flux']
flux_err = tin['flux_rms']
# 54160 to 54300
goodp = (time > 54160) * (time < 54300)
flux_err = flux_err[goodp]
flux = flux[goodp]
time = time[goodp]
print ('number of photometric points: %d' % time.size)
vstar = -1.
try:
opts, args = getopt.getopt(sys.argv[1:], "hr:o:s:", ["rfile=", "ofile=", "vstar="])
except getopt.GetoptError:
print ('%s -s <velocity> -r <inputfile> -o <outputfile>' % sys.argv[0])
sys.exit(2)
for opt, arg in opts:
if opt == '-h':
print('%s -s <velocity> -r <inputfile> -o <outputfile>' % sys.argv[0])
sys.exit()
elif opt in ("-r", "--rfile"):
fitsin = arg
elif opt in ("-o", "--ofile"):
plotout = arg
elif opt in ("-s", "--vstar"):
vstar = np.array(float(arg))
print ('ring file in is %s' % fitsin)
print ('plot file out is %s' % plotout)
(res, taun_rings, rad_rings, dstar) = exorings.read_ring_fits(fitsin)
exorings.print_ring_tx(rad_rings, exorings.y_to_tx(taun_rings))
# set up stellar disk
kern = exorings.make_star_limbd(21, 0.8)
# produce fine grained gradient and ring values
samp_t = np.arange(-100, 100, 0.001) + 54222.
(samp_r, samp_g) = exorings.ring_grad_line(samp_t, res[0], res[1], res[2], res[3])
hjd_minr = samp_t[np.argmin(samp_g)]
hjd_to_ring = interp1d(samp_t, samp_r, kind='linear')
sst = exorings.print_disk_parameters(res, hjd_minr, samp_r)
## Calculate the best model fit given the rings and disk parameters
strip, convo, g = exorings.ellipse_strip(rad_rings, exorings.y_to_tx(taun_rings), \
res[0], res[1], res[2], res[3], kern, dstar)
fit_time = g[0]
fit_flux = g[1]
### BEGIN THE PLOT ##################################################
datacolor = 'red'
modelcolor = 'green'
eb = dict(fmt='.', color=datacolor, ecolor=datacolor, capsize=0.0, \
marker='o', mfc=datacolor, mec=datacolor, ms=3, mew=0.001, \
elinewidth=0.5)
smalleb = dict(fmt='o', color='white', ecolor=datacolor, capsize=0.0, \
marker='o', mfc='white', mec=datacolor, ms=4, mew=1, elinewidth=2.0)
mdict = dict(color=modelcolor, zorder=-5)
ty = dict(color='black', fontsize=10, fontweight='bold', va='top', ha='right')
# set up plot area
fig = plt.figure(figsize=(10, 12))
# split into two panels - the top with the light curve and model
# fit and the bottom with the zoomed in plots
#gs = gridspec.GridSpec(2, 1, height_ratios=[1, 4], wspace=0.0, hspace=0.05)
gs = fig.add_gridspec(2, 1, height_ratios=[1, 4], wspace=0.0, hspace=0.05)
ax1 = plt.subplot(gs[0, :])
# the J1407 photometry
ax1.errorbar(time, flux, flux_err, zorder=-4, **eb)
# the ring model
ax1.plot(fit_time, fit_flux, **mdict)
ax1.axis((54180., 54260, 0., 1.19))
ax1.ticklabel_format(style='plain', useOffset=False, axis='x', scilimits=(-5, 10))
ax1.set_xlabel("Time [days]")
ax1.xaxis.set_label_position('top')
ax1.xaxis.tick_top()
# the vertical line marking t_tangential
ax1.vlines(hjd_minr, -1., 2., colors='k', linestyle='dashed')
# array of days that we want a zoom into
dt = np.array((-23, -22, -17, -16, -15, -14, -11, -10, -9, -8, -7, -6, \
+3, +5, +9, +10, +11, +24))
dt += 1
# disk parameters as a latex table
ax1.text(0.17, 0.60, sst, transform=ax1.transAxes, **ty)
# xdet and ydet are the sizes of the zoomed boxes
ep_zoom = 0.5
y_zoom = 0.4
fiddle_time = 0.3
og = gs[1].subgridspec(3,6, wspace=0.0, hspace=0.0)
for i in np.arange(dt.size):
print ("image %d " % i)
print(i.dtype)
ep_center = hjd_minr + dt[i] + fiddle_time
ax = fig.add_subplot(og[i])
ax.errorbar(time,flux, flux_err, zorder=-4, **eb)
# first select all the pixels in that day range
# then centroid on that subset of pixels with the zoomed box
ep_day = (time < (ep_center+0.5)) * (time > (ep_center-0.5))
time_day = time[ep_day]
flux_day = flux[ep_day]
flux_err_day = flux_err[ep_day]
ax.errorbar(time_day, flux_day, flux_err_day, zorder=-3, **smalleb)
# the ring model
ax.plot(fit_time, fit_flux, linewidth=3, **mdict)
# get the center of the box from the median values of the selected
# day
day_center = np.median(time_day)
y_center = (np.max(flux_day) + np.min(flux_day))/2.
# label the top plot with a marker
ax1.scatter(day_center, 1.05, marker='v', color='k')
# corners of the zoomed box
ep_low = day_center - (ep_zoom/2.)
ep_hig = day_center + (ep_zoom/2.)
flux_low = y_center - (y_zoom/2.)
flux_hig = y_center + (y_zoom/2.)
#ax1.add_patch(Rectangle((ep_low, flux_low), ep_zoom, y_zoom, facecolor="grey",zorder=-10,linewidth=0))
if i == 0:
ax1.add_patch(Rectangle((ep_low, flux_low), ep_zoom, y_zoom, facecolor="none", zorder=-10, linewidth=1))
ax.text(0.1, 0.1, r'$\rm{width}=%4.2f$\ \rm{d}' % ep_zoom, transform=ax.transAxes)
ax.text(0.1, 0.22, r'$\rm{height}=%4.2f$\ \rm{T}' % y_zoom, transform=ax.transAxes)
ax.axis((ep_low, ep_hig, flux_low, flux_hig))
# label the delta day
ax.text(0.95, 0.95, dt[i], transform=ax.transAxes, **ty)
ax.set_xticks([])
ax.set_yticks([])
fig.savefig(plotout)
|
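The zoom panels above are cut from boxes of fixed width and height centered on each day's data. The corner arithmetic can be sketched as a small helper (the sample center values below are illustrative):

```python
# Corners of a zoom box of given width/height centered on (x_center, y_center),
# in the (left, right, bottom, top) order that Axes.axis expects.
def zoom_box(x_center, y_center, width, height):
    return (x_center - width / 2.0, x_center + width / 2.0,
            y_center - height / 2.0, y_center + height / 2.0)

box = zoom_box(54200.0, 0.8, 0.5, 0.4)
```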
{
"filename": "configuration.py",
"repo_name": "AWehrhahn/PyReduce",
"repo_path": "PyReduce_extracted/PyReduce-master/pyreduce/configuration.py",
"type": "Python"
}
|
# -*- coding: utf-8 -*-
"""Loads configuration files
This module loads json configuration files from disk,
and combines them with the default settings,
to create one dict that contains all parameters.
It also checks that all parameters exists, and that
no new parameters have been added by accident.
"""
import json
import logging
from os.path import dirname, join
import jsonschema
logger = logging.getLogger(__name__)
if int(jsonschema.__version__[0]) < 3: # pragma: no cover
logger.warning(
"Jsonschema %s found, but at least 3.0.0 is required to check configuration. Skipping the check.",
jsonschema.__version__,
)
hasJsonSchema = False
else:
hasJsonSchema = True
def get_configuration_for_instrument(instrument, **kwargs):
local = dirname(__file__)
instrument = str(instrument)
if instrument in ["pyreduce", None]:
fname = join(local, "settings", "settings_pyreduce.json")
else:
fname = join(local, "settings", f"settings_{instrument.upper()}.json")
config = load_config(fname, instrument)
for kwarg_key, kwarg_value in kwargs.items():
for key, value in config.items():
if isinstance(config[key], dict) and kwarg_key in config[key].keys():
config[key][kwarg_key] = kwarg_value
return config
def load_config(configuration, instrument, j=0):
if configuration is None:
logger.info(
"No configuration specified, using default values for this instrument"
)
config = get_configuration_for_instrument(instrument, plot=False)
elif isinstance(configuration, dict):
if instrument in configuration.keys():
config = configuration[str(instrument)]
elif (
"__instrument__" in configuration.keys()
and configuration["__instrument__"] == str(instrument).upper()
):
config = configuration
else:
raise KeyError("This configuration is for a different instrument")
elif isinstance(configuration, list):
config = configuration[j]
elif isinstance(configuration, str):
config = configuration
if isinstance(config, str):
logger.info("Loading configuration from %s", config)
try:
with open(config) as f:
config = json.load(f)
except FileNotFoundError:
fname = dirname(__file__)
fname = join(fname, "settings", config)
with open(fname) as f:
config = json.load(f)
# Combine instrument specific settings, with default values
settings = read_config()
settings = update(settings, config)
# If it doesn't raise an Exception everything is as expected
validate_config(settings)
logger.debug("Configuration succesfully validated")
return settings
def update(dict1, dict2, check=True, name="dict1"):
"""
Update entries in dict1 with entries of dict2 recursively,
i.e. if the dict contains a dict value, values inside the dict will
also be updated
Parameters
----------
dict1 : dict
dict that will be updated
dict2 : dict
dict that contains the values to update
check : bool
If True, will check that the keys from dict2 exist in dict1 already.
Except for those contained in field "instrument"
Returns
-------
dict1 : dict
the updated dict
Raises
------
KeyError
If dict2 contains a key that is not in dict1
"""
# Instrument is a 'special' section as it may include any number of values
# In that case we don't want to raise an error for new keys
exclude = ["instrument"]
for key, value in dict2.items():
if check and key not in dict1.keys():
logger.warning(f"{key} is not contained in {name}")
if isinstance(value, dict):
dict1[key] = update(dict1[key], value, check=key not in exclude, name=key)
else:
dict1[key] = value
return dict1
def read_config(fname="settings_pyreduce.json"):
"""Read the configuration file from disk
If no filename is given it will load the default configuration.
The configuration file must be a json file.
Parameters
----------
fname : str, optional
Filename of the configuration. By default "settings_pyreduce.json",
i.e. the default configuration
Returns
-------
config : dict
The read configuration file
"""
this_dir = dirname(__file__)
fname = join(this_dir, "settings", fname)
with open(fname) as file:
settings = json.load(file)
return settings
def validate_config(config):
"""Test that the input configuration complies with the expected schema
Since it requires features from jsonschema 3+, it will only run if that is installed.
Otherwise show a warning but continue. This is in case some other module needs an
earlier jsonschema (looking at you jwst).
If the function runs through without raising an exception, the check was successful or skipped.
Parameters
----------
config : dict
Configurations to check
Raises
------
ValueError
If there is a problem with the configuration.
Usually that means a setting has an unallowed value.
"""
if not hasJsonSchema: # pragma: no cover
# Can't check with old version
return
fname = "settings_schema.json"
this_dir = dirname(__file__)
fname = join(this_dir, "settings", fname)
with open(fname) as f:
schema = json.load(f)
try:
jsonschema.validate(schema=schema, instance=config)
except jsonschema.ValidationError as ve:
logger.error("Configuration failed validation check.\n%s", ve.message)
raise ValueError(ve.message)
|
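The `update` function above merges nested dicts recursively rather than replacing them wholesale. A minimal sketch of that idea, without the key-existence check or the `instrument` exclusion:

```python
# Recursive dict merge: nested dicts are updated key-by-key, everything else
# is overwritten. `deep_update` is an illustrative name, not PyReduce API.
def deep_update(base, new):
    for key, value in new.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            deep_update(base[key], value)
        else:
            base[key] = value
    return base

merged = deep_update({"a": 1, "b": {"x": 1, "y": 2}}, {"b": {"y": 3}})
```

Note that `merged["b"]["x"]` survives the update, which a plain `dict.update` would have discarded.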
{
"filename": "__init__.py",
"repo_name": "ML4GW/hermes",
"repo_path": "hermes_extracted/hermes-main/hermes/aeriel/serve/__init__.py",
"type": "Python"
}
|
from .serve import serve
|
{
"filename": "__init__.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/redis/py3/redis/commands/__init__.py",
"type": "Python"
}
|
from .cluster import READ_COMMANDS, AsyncRedisClusterCommands, RedisClusterCommands
from .core import AsyncCoreCommands, CoreCommands
from .helpers import list_or_args
from .redismodules import AsyncRedisModuleCommands, RedisModuleCommands
from .sentinel import AsyncSentinelCommands, SentinelCommands
__all__ = [
"AsyncCoreCommands",
"AsyncRedisClusterCommands",
"AsyncRedisModuleCommands",
"AsyncSentinelCommands",
"CoreCommands",
"READ_COMMANDS",
"RedisClusterCommands",
"RedisModuleCommands",
"SentinelCommands",
"list_or_args",
]
|
{
"filename": "reference_pixels.py",
"repo_name": "spacetelescope/jwst",
"repo_path": "jwst_extracted/jwst-main/jwst/refpix/reference_pixels.py",
"type": "Python"
}
|
# Module for handling Reference Pixels
# Final CCWG Recommendation of 6/2013:
#
# The reference pixel correction for the NIR detectors should be done
# immediately following the zero frame subtraction. We recommend that
# the following steps be taken in order for each frame of each exposure,
# with the option to turn each one off *on a per-SCA basis*. The
# parameters should eventually be optimized for each detector and instrument.
#
# (1) For each amplifier, the sigma-clipped mean values for all odd and all
# even columns of the horizontal (both top and bottom) reference pixels
# should be calculated. These values should then be subtracted from
# every pixel in the corresponding odd/even columns. There should be
# an option to turn this step off, and replace with a single
# sigma-clipped mean value for all horizontal reference pixels in
# each amplifier.
#
# (2) The vertical (both left and right) reference pixels should be smoothed
# with an N-pixel wide boxcar convolution, where N may depend on detector
# and instrument (adopt N=10 as a default). The median value of the 8
# smoothed reference pixels in each row should then be multiplied by a
# gain factor of some value between 0 (which effectively turns off the
# correction) and 1 (for full subtraction, should be the default), with
# the exact value to be tuned for each detector and instrument. Finally,
# these smoothed and scaled values should be subtracted from every pixel
# in the corresponding row.
#
# Subarray processing added 7/2018
#
# For NIR exposures, if the value of the meta.exposure.noutputs attribute is 1,
# calculate the clipped means of odd and even columns
# in detector coordinates. Subtract the odd mean from the odd columns, and
# the even mean from the even columns. If there are no reference pixels in the
# subarray, omit the refpix step.
#
# If the value of meta.exposure.noutputs is 4, calculate odd and even reference
# values for each amplifier separately, if available, and subtract those values
# from their corresponding data sections. Also use side reference pixels if
# available.
#
# For MIRI subarray exposures, omit the refpix step.
import logging
from copy import deepcopy
import numpy as np
from scipy import stats
from stdatamodels.jwst.datamodels import dqflags
from ..lib import pipe_utils, reffile_utils
from .irs2_subtract_reference import make_irs2_mask
log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
#
# NIR Reference section dictionaries are zero indexed and specify the values
# to be used in the following slice:
# (rowstart: rowstop, colstart:colstop)
# The 'stop' values are one more than the actual final row or column, in
# accordance with how Python slices work
NIR_reference_sections = {'A': {'top': (2044, 2048, 0, 512),
'bottom': (0, 4, 0, 512),
'side': (0, 2048, 0, 4),
'data': (0, 2048, 0, 512)},
'B': {'top': (2044, 2048, 512, 1024),
'bottom': (0, 4, 512, 1024),
'data': (0, 2048, 512, 1024)},
'C': {'top': (2044, 2048, 1024, 1536),
'bottom': (0, 4, 1024, 1536),
'data': (0, 2048, 1024, 1536)},
'D': {'top': (2044, 2048, 1536, 2048),
'bottom': (0, 4, 1536, 2048),
'side': (0, 2048, 2044, 2048),
'data': (0, 2048, 1536, 2048)}
}
# IRS2 sections for NIRSpec have a different size due to the
# interleaved reference pixels and the reference sector.
IRS2_reference_sections = {'0': {'top': (2044, 2048, 0, 640),
'bottom': (0, 4, 0, 640),
'data': (0, 2048, 0, 640)},
'A': {'top': (2044, 2048, 640, 1280),
'bottom': (0, 4, 640, 1280),
'data': (0, 2048, 640, 1280)},
'B': {'top': (2044, 2048, 1280, 1920),
'bottom': (0, 4, 1280, 1920),
'data': (0, 2048, 1280, 1920)},
'C': {'top': (2044, 2048, 1920, 2560),
'bottom': (0, 4, 1920, 2560),
'data': (0, 2048, 1920, 2560)},
'D': {'top': (2044, 2048, 2560, 3200),
'bottom': (0, 4, 2560, 3200),
'data': (0, 2048, 2560, 3200)}
}
# Special behavior is requested for NIRSpec subarrays that do not reach
# detector edges; for these input models, we will assign the top and bottom
# four rows as reference pixels to better treat pedestal noise issues.
NRS_edgeless_subarrays = ['SUB512', 'SUB512S', 'SUB32']
#
# MIR Reference section dictionaries are zero indexed and specify the values
# to be used in the following slice:
# name ('left' or 'right'): (rowstart, rowstop, column)
# except the 'data' entry:
# 'data': (rowstart, rowstop, colstart, colstop, stride)
MIR_reference_sections = {'A': {'left': (0, 1024, 0),
'right': (0, 1024, 1028),
'data': (0, 1024, 0, 1032, 4)},
'B': {'left': (0, 1024, 1),
'right': (0, 1024, 1029),
'data': (0, 1024, 1, 1032, 4)},
'C': {'left': (0, 1024, 2),
'right': (0, 1024, 1030),
'data': (0, 1024, 2, 1032, 4)},
'D': {'left': (0, 1024, 3),
'right': (0, 1024, 1031),
'data': (0, 1024, 3, 1032, 4)}
}
#
# Status returns
REFPIX_OK = 0
BAD_REFERENCE_PIXELS = 1
SUBARRAY_DOESNTFIT = 2
SUBARRAY_SKIPPED = 3
class Dataset():
"""Base Class to handle passing stuff from routine to routine
Parameters:
-----------
input_model: data model object
Science data model to be corrected
is_subarray: boolean
flag that shows whether the dataset was created from subarray
data
odd_even_columns: boolean
flag that controls whether odd and even-numbered columns are
processed separately (NIR only)
use_side_ref_pixels: boolean
flag the controls whether the side reference pixels are used in
the correction (NIR only)
side_smoothing_length: integer
smoothing length the use in calculating the running median of
the side reference pixels (NIR only)
side_gain: float
gain to use in applying the side reference pixel correction
(NIR only)
odd_even_rows: boolean
flag that controls whether odd and even-numbered rows are handled
separately (MIR only)
"""
def __init__(self, input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain,
odd_even_rows):
if (input_model.meta.subarray.xstart is None or
input_model.meta.subarray.ystart is None or
input_model.meta.subarray.xsize is None or
input_model.meta.subarray.ysize is None):
raise ValueError('subarray metadata not found')
self.input_model = input_model
is_subarray = False
self.subarray = None
if reffile_utils.is_subarray(input_model):
is_subarray = True
self.subarray = input_model.meta.subarray.name
self.is_subarray = is_subarray
self.zeroframe_proc = False
(nints, ngroups, nrows, ncols) = input_model.data.shape
self.nints = nints
self.ngroups = ngroups
self.nrows = nrows
self.ncols = ncols
self.full_shape = (nrows, ncols)
self.detector = input_model.meta.instrument.detector
self.noutputs = input_model.meta.exposure.noutputs
self.xstart = input_model.meta.subarray.xstart
self.ystart = input_model.meta.subarray.ystart
self.xsize = input_model.meta.subarray.xsize
self.ysize = input_model.meta.subarray.ysize
self.colstart = self.xstart - 1
self.colstop = self.colstart + self.xsize
self.rowstart = self.ystart - 1
self.rowstop = self.rowstart + self.ysize
self.odd_even_columns = odd_even_columns
self.use_side_ref_pixels = use_side_ref_pixels
self.side_smoothing_length = side_smoothing_length
self.side_gain = side_gain
self.odd_even_rows = odd_even_rows
self.bad_reference_pixels = False
self.reference_sections = None
self.amplifiers = 'ABCD'
# Define temp array for processing every group
self.pixeldq = self.get_pixeldq()
self.group = None
def sigma_clip(self, data, dq, low=3.0, high=3.0):
"""Wrap the scipy.stats.sigmaclip so that data with zero variance
is handled cleanly
Parameters:
-----------
data: NDArray
Array of pixels to be sigma-clipped
dq: NDArray
DQ array for data
low: float
lower clipping boundary, in standard deviations from the mean (default=3.0)
high: float
upper clipping boundary, in standard deviations from the mean (default=3.0)
Returns:
--------
mean: float
clipped mean of data array
"""
#
# Only calculate the clipped mean for pixels that don't have the DO_NOT_USE
# DQ bit set
goodpixels = np.where(np.bitwise_and(dq, dqflags.pixel['DO_NOT_USE']) == 0)
#
# If there are no good pixels, return None
if len(goodpixels[0]) == 0:
return None
#
# scipy routine fails if the pixels all have exactly the same value
if np.std(data[goodpixels], dtype=np.float64) != 0.0:
clipped_ref, lowlim, uplim = stats.sigmaclip(data[goodpixels],
low, high)
mean = clipped_ref.mean()
else:
mean = data[goodpixels].mean(dtype=np.float64)
return mean
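The method above computes a DQ-aware sigma-clipped mean, guarding the zero-variance case that `scipy.stats.sigmaclip` cannot handle. A minimal standalone sketch of the same logic follows; the `DO_NOT_USE = 1` bit value is an assumption mirroring the JWST pixel DQ convention, not an import from `dqflags`:

```python
import numpy as np
from scipy import stats

DO_NOT_USE = 1  # assumed value of the DO_NOT_USE bit (JWST pixel DQ convention)

def clipped_mean(data, dq, low=3.0, high=3.0):
    """DQ-aware sigma-clipped mean, mirroring Dataset.sigma_clip."""
    good = np.bitwise_and(dq, DO_NOT_USE) == 0
    if not good.any():
        return None
    values = data[good]
    # scipy.stats.sigmaclip misbehaves when all values are identical,
    # so fall back to a plain mean for zero-variance data
    if np.std(values, dtype=np.float64) != 0.0:
        clipped, _, _ = stats.sigmaclip(values, low, high)
        return clipped.mean()
    return values.mean(dtype=np.float64)

# A column of reference pixels with one cosmic-ray hit and one flagged pixel:
data = np.concatenate([np.arange(20) * 0.1, [100.0, 50.0]])
dq = np.zeros(22, dtype=np.uint32)
dq[21] = DO_NOT_USE             # the 50.0 pixel is excluded up front
mean = clipped_mean(data, dq)   # the 100.0 outlier is rejected by the clipping
```

The clipped mean lands near 0.95, the mean of the unflagged, unclipped pixels.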
def get_pixeldq(self):
"""Get the properly sized version of the pixeldq array from the
input model.
Parameters
----------
None
Returns
-------
pixeldq : NDArray
numpy array for the pixeldq data with the full shape of the detector
"""
if self.is_subarray:
# deal with subarrays by embedding the pixeldq array in a full-sized
# array with DO_NOT_USE and REFERENCE_PIXEL dqflags bit set where the
# reference pixels live, except where the data are embedded
if self.detector[:3] == 'MIR':
fullrows = 1024
fullcols = 1032
else:
fullrows = 2048
fullcols = 2048
self.full_shape = (fullrows, fullcols)
pixeldq = np.zeros(self.full_shape, dtype=self.input_model.pixeldq.dtype)
refpixdq_dontuse = dqflags.pixel['DO_NOT_USE'] | dqflags.pixel['REFERENCE_PIXEL']
pixeldq[0:4, :] = refpixdq_dontuse
pixeldq[fullrows - 4:fullrows, :] = refpixdq_dontuse
pixeldq[4:fullrows - 4, 0:4] = refpixdq_dontuse
pixeldq[4:fullrows - 4, fullcols - 4:fullcols] = refpixdq_dontuse
pixeldq[self.rowstart:self.rowstop, self.colstart:self.colstop] = self.input_model.pixeldq.copy()
if self.subarray in NRS_edgeless_subarrays:
# Log assignment as rows (in DMS plane) despite assigning columns (in detector plane)
log.info(f"Subarray {self.subarray} has no reference pixels: "
f"assigning top and bottom four rows as reference pixels.")
pixeldq[self.rowstart:self.rowstop, self.colstart:self.colstart+4] = \
pixeldq[self.rowstart:self.rowstop, self.colstart:self.colstart+4] | dqflags.pixel['REFERENCE_PIXEL']
pixeldq[self.rowstart:self.rowstop, self.colstop-4:self.colstop] = \
pixeldq[self.rowstart:self.rowstop, self.colstop-4:self.colstop] | dqflags.pixel['REFERENCE_PIXEL']
else:
pixeldq = self.input_model.pixeldq.copy()
return pixeldq
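For subarray data, `get_pixeldq` embeds the subarray DQ plane in a full-sized frame whose 4-pixel border is flagged as reference pixels. A sketch of just the border construction, using assumed flag values (`DO_NOT_USE` and `REFERENCE_PIXEL` here are illustrative constants, not the real `dqflags` bits):

```python
import numpy as np

# Assumed flag values for illustration; the real values come from
# stdatamodels.jwst.datamodels.dqflags.
DO_NOT_USE = 1
REFERENCE_PIXEL = 2

full = np.zeros((2048, 2048), dtype=np.uint32)
border = DO_NOT_USE | REFERENCE_PIXEL
full[0:4, :] = border        # bottom 4 rows
full[-4:, :] = border        # top 4 rows
full[4:-4, 0:4] = border     # left 4 columns (corners already set)
full[4:-4, -4:] = border     # right 4 columns

n_ref = int(np.count_nonzero(full & REFERENCE_PIXEL))
# 2048**2 - 2040**2 = 32704 reference pixels in the 4-pixel border
```

Slicing the row blocks over all columns but the column blocks only over rows 4:-4 avoids double-flagging the corners.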
def get_group(self, integration, group):
"""Get a properly sized copy of the array for each group
Parameters
----------
integration : int
Index of the integration from the input model from which to extract
the group array
group : int
Index of the group, within the integration, from which to extract
the group array
"""
if self.group is None:
self.group = np.zeros(self.full_shape, dtype=self.input_model.data.dtype)
if self.is_subarray:
self.group[self.rowstart:self.rowstop, self.colstart:self.colstop] = self.input_model.data[integration, group].copy()
else:
self.group[:, :] = self.input_model.data[integration, group].copy()
def restore_group(self, integration, group):
"""Replace input model data with processed group array
Parameters
----------
integration : int
Index of the integration from the input model which needs to be
updated with the newly processed group array
group : int
Index of the group, within the integration, which needs to be
updated with the newly processed group array
"""
if self.is_subarray:
self.input_model.data[integration, group] = self.group[self.rowstart:self.rowstop, self.colstart:self.colstop]
else:
self.input_model.data[integration, group] = self.group.copy()
def log_parameters(self):
"""Print out the parameters that are valid for this type of data, and
those that aren't
Parameters
----------
input_model : JWST datamodel
Datamodel being processed
Returns
-------
None
"""
is_NIR = isinstance(self, NIRDataset)
if is_NIR:
if not self.is_subarray:
log.info('NIR full frame data')
log.info('The following parameters are valid for this mode:')
log.info(f'use_side_ref_pixels = {self.use_side_ref_pixels}')
log.info(f'odd_even_columns = {self.odd_even_columns}')
log.info(f'side_smoothing_length = {self.side_smoothing_length}')
log.info(f'side_gain = {self.side_gain}')
log.info('The following parameter is not applicable and is ignored:')
log.info(f'odd_even_rows = {self.odd_even_rows}')
else:
log.info('NIR subarray data')
# Transform the pixeldq array from DMS to detector coords
self.DMS_to_detector_dq()
ngoodside = self.count_good_side_refpixels()
ngoodtopbottom = self.count_good_top_bottom_refpixels()
# Re-assign the pixeldq array since we transformed it to detector space
# and we don't want to do it again
self.pixeldq = self.get_pixeldq()
is_4amp = False
if self.noutputs == 4:
is_4amp = True
if is_4amp:
log.info('4 readout amplifiers used')
if (ngoodside + ngoodtopbottom) == 0:
log.info('No valid reference pixels. This step will have no effect')
else:
log.info('The following parameters are valid for this mode:')
if ngoodtopbottom > 0:
log.info(f'odd_even_columns = {self.odd_even_columns}')
if ngoodside > 0:
log.info(f'use_side_ref_pixels = {self.use_side_ref_pixels}')
log.info(f'side_smoothing_length = {self.side_smoothing_length}')
log.info(f'side_gain = {self.side_gain}')
log.info('The following parameters are not applicable and are ignored')
if ngoodtopbottom == 0:
log.info(f'odd_even_columns = {self.odd_even_columns}')
if ngoodside == 0:
log.info(f'use_side_ref_pixels = {self.use_side_ref_pixels}')
log.info(f'side_smoothing_length = {self.side_smoothing_length}')
log.info(f'side_gain = {self.side_gain}')
log.info(f'odd_even_rows = {self.odd_even_rows}')
else:
log.info('Single readout amplifier used')
if ngoodtopbottom == 0:
log.info('No valid reference pixels. This step will have no effect.')
else:
log.info('The following parameter is valid for this mode:')
log.info(f'odd_even_columns = {self.odd_even_columns}')
log.info('The following parameters are not applicable and are ignored:')
log.info(f'use_side_ref_pixels = {self.use_side_ref_pixels}')
log.info(f'side_smoothing_length = {self.side_smoothing_length}')
log.info(f'side_gain = {self.side_gain}')
log.info(f'odd_even_rows = {self.odd_even_rows}')
else:
if not self.is_subarray:
log.info('MIRI full frame data')
log.info('The following parameter is valid for this mode:')
log.info(f'odd_even_rows = {self.odd_even_rows}')
log.info('The following parameters are not applicable and are ignored:')
log.info(f'use_side_ref_pixels = {self.use_side_ref_pixels}')
log.info(f'odd_even_columns = {self.odd_even_columns}')
log.info(f'side_smoothing_length = {self.side_smoothing_length}')
log.info(f'side_gain = {self.side_gain}')
else:
log.info('MIRI subarray data')
log.info('refpix processing skipped for this mode')
def count_good_side_refpixels(self):
donotuse = dqflags.pixel['DO_NOT_USE']
ngood = 0
for amplifier in 'AD':
rowstart, rowstop, colstart, colstop = self.reference_sections[amplifier]['side']
good = np.where(np.bitwise_and(self.pixeldq[rowstart:rowstop, colstart:colstop], donotuse) != donotuse)
ngood += len(good[0])
return ngood
def count_good_top_bottom_refpixels(self):
donotuse = dqflags.pixel['DO_NOT_USE']
refdq = dqflags.pixel['REFERENCE_PIXEL']
ngood = 0
if self.subarray in NRS_edgeless_subarrays:
ngood = len(np.where((self.pixeldq & refdq == refdq) & (self.pixeldq & donotuse != donotuse))[0])
log.debug(f"Edgeless subarray {self.subarray} has {ngood} reference pixels.")
else:
for edge in ['top', 'bottom']:
for amplifier in self.amplifiers:
rowstart, rowstop, colstart, colstop = self.reference_sections[amplifier][edge]
log.debug(f"Ref sections for {edge} & {amplifier}: {rowstart, rowstop, colstart, colstop}")
good = np.where(np.bitwise_and(self.pixeldq[rowstart:rowstop, colstart:colstop], donotuse) != donotuse)
ngood += len(good[0])
log.debug(f"For {edge} & {amplifier}: {len(good[0])}")
return ngood
class NIRDataset(Dataset):
"""Generic NIR detector Class.
Parameters
----------
input_model: data model object
Science data model to be corrected
is_subarray: boolean
flag that shows whether the dataset was created from subarray
data
odd_even_columns: boolean
flag that controls whether odd and even-numbered columns are
processed separately
use_side_ref_pixels: boolean
flag the controls whether the side reference pixels are used in
the correction
side_smoothing_length: integer
smoothing length the use in calculating the running median of
the side reference pixels
side_gain: float
gain to use in applying the side reference pixel correction
"""
def __init__(self, input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain):
super(NIRDataset, self).__init__(input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain,
odd_even_rows=False)
# Set appropriate NIR sections
self.is_irs2 = pipe_utils.is_irs2(input_model)
if self.is_irs2:
self.reference_sections = deepcopy(IRS2_reference_sections)
self.amplifiers = '0ABCD'
self.irs2_odd_mask = self.make_irs2_odd_mask(input_model)
else:
self.reference_sections = NIR_reference_sections
self.irs2_odd_mask = None
def make_irs2_odd_mask(self, input_model, scipix_n_default=16, refpix_r_default=4):
"""
Make an odd pixel mask for IRS2 mode.
The even pixel mask can be generated by inverting the odd mask.
Parameters
----------
input_model : DataModel
Input model containing data to mask.
scipix_n_default : int, optional
Number of regular samples before stepping out to collect
reference samples.
refpix_r_default : int, optional
Number of reference samples before stepping back in to collect
regular samples.
Returns
-------
odd_mask : NDArray
Boolean array matching data column size in detector orientation.
True identifies all odd pixels (science and reference).
"""
# Get data information from input model.
# (y and x here refer to detector orientation, although input
# data has not yet been rotated)
ny = input_model.data.shape[-1] # 2048
nx = input_model.data.shape[-2] # 3200
# Default n=16, r=4
scipix_n = input_model.meta.exposure.nrs_normal
if scipix_n is None:
log.warning("Keyword NRS_NORM not found; using default value %d" %
scipix_n_default)
scipix_n = scipix_n_default
refpix_r = input_model.meta.exposure.nrs_reference
if refpix_r is None:
log.warning("Keyword NRS_REF not found; using default value %d" %
refpix_r_default)
refpix_r = refpix_r_default
# If these are not set to the standard values, the
# reference section values must be changed to match.
n_sector = nx // 5
areas = ['top', 'bottom', 'data'] # assuming no 'side'
if nx != 3200:
for i, amplifier in enumerate('0ABCD'):
x_start = n_sector * i
x_stop = n_sector * (i + 1)
for area in areas:
sec = self.reference_sections[amplifier][area]
self.reference_sections[amplifier][area] = (
sec[0], sec[1], x_start, x_stop)
# Make a column mask that identifies the reference sector and
# all interleaved pixels as False, all science pixels
# and standard reference pixels as True
x_mask = make_irs2_mask(nx, ny, scipix_n, refpix_r)
# Switch True/False to identify reference pixels instead of
# science pixels
x_mask = ~x_mask
# Treat the reference sector like the other sectors
x_mask[:n_sector] = x_mask[n_sector: 2 * n_sector]
# Find even and odd interleaved pixels:
# reference pixels come in two pairs, the first set odd
# and the second set even, so pixels
# SSSSSSSSrrrrSSSSSSSSSSSSSSSSrrrrSSSS...
# have parity:
# --------0011----------------0011----...
# for the interleaved pixels, where the traditional pixels
# are:
# 01010101----0101010101010101----0101...
# first two pixels are odd
even_interleaved = x_mask.copy()
even_interleaved[0::4] = False
even_interleaved[1::4] = False
# second two pixels are even
odd_interleaved = x_mask.copy()
odd_interleaved[2::4] = False
odd_interleaved[3::4] = False
# Make an odd mask for the image columns in detector orientation.
# This will be used both for identifying the correct
# reference pixels and for applying the correction later.
odd_mask = np.full(nx, False)
odd_mask[0::2] = True
odd_mask[even_interleaved] = False
odd_mask[odd_interleaved] = True
return odd_mask
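The stride-4 assignments above rely on each quadruple of interleaved reference pixels starting at a column index divisible by 4 (true for the standard n=16, r=4 layout). A toy sketch with an invented 12-pixel column shows how the first pair of each quadruple ends up odd and the second pair even:

```python
import numpy as np

# Toy column layout (an illustration, not real IRS2 geometry): 4 science
# pixels, 4 interleaved reference pixels starting at an index divisible
# by 4, then 4 more science pixels.
x_mask = np.array([False] * 4 + [True] * 4 + [False] * 4)  # True = interleaved ref

even_interleaved = x_mask.copy()
even_interleaved[0::4] = False
even_interleaved[1::4] = False   # keeps the second pair of each quadruple

odd_interleaved = x_mask.copy()
odd_interleaved[2::4] = False
odd_interleaved[3::4] = False    # keeps the first pair of each quadruple

odd_mask = np.full(12, False)
odd_mask[0::2] = True             # normal odd/even alternation...
odd_mask[even_interleaved] = False
odd_mask[odd_interleaved] = True  # ...overridden for interleaved pixels
```

The resulting mask marks science pixels 0, 2, 8, 10 and reference pixels 4, 5 as odd, while reference pixels 6, 7 are even.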
# Even though the recommendation specifies calculating the mean of the
# combined top and bottom reference sections, there's a good chance we
# might want to calculate them separately
def collect_odd_refpixels(self, group, amplifier, top_or_bottom):
"""Collect odd reference pixels.
For traditional readouts, odd pixels correspond to odd-numbered
rows (first, third, fifth, etc.), which are even array indices.
For IRS2 mode, science and traditional reference pixels have
the same parity, but interleaved pixels come in two pairs. The
first two are odd and the second two are even.
Parameters
----------
group : NDArray
The group that is being processed
amplifier: string ['A'|'B'|'C'|'D']
String corresponding to the amplifier being processed
top_or_bottom: string ['top'|'bottom']
String corresponding to whether top or bottom reference pixels
are being processed
Returns
-------
oddref : NDArray
Array containing all the odd reference pixels
odddq: NDArray
Array containing all the odd dq values for those reference pixels
"""
rowstart, rowstop, colstart, colstop = \
self.reference_sections[amplifier][top_or_bottom]
# handle interleaved pixels if needed
if self.is_irs2:
odd_mask = self.irs2_odd_mask[colstart:colstop]
oddref = group[rowstart:rowstop, colstart:colstop][:, odd_mask]
odddq = self.pixeldq[rowstart:rowstop, colstart:colstop][:, odd_mask]
else:
oddref = group[rowstart:rowstop, colstart:colstop:2]
odddq = self.pixeldq[rowstart:rowstop, colstart:colstop:2]
return oddref, odddq
def collect_even_refpixels(self, group, amplifier, top_or_bottom):
"""Collect even reference pixels.
For traditional readouts, even pixels correspond to even-numbered
rows (second, fourth, sixth, etc.), which are odd array indices.
For IRS2 mode, science and traditional reference pixels have
the same parity, but interleaved pixels come in two pairs. The
first two are odd and the second two are even.
Parameters
----------
group : NDArray
The group that is being processed
amplifier: string ['A'|'B'|'C'|'D']
String corresponding to the amplifier being processed
top_or_bottom: string ['top'|'bottom']
String corresponding to whether top or bottom reference pixels
are being processed
Returns
-------
evenref : NDArray
Array containing all the even reference pixels
evendq : NDArray
Array containing all the even dq values for those reference pixels
"""
rowstart, rowstop, colstart, colstop = \
self.reference_sections[amplifier][top_or_bottom]
# handle interleaved pixels if needed
if self.is_irs2:
even_mask = ~self.irs2_odd_mask[colstart:colstop]
evenref = group[rowstart:rowstop, colstart:colstop][:, even_mask]
evendq = self.pixeldq[rowstart:rowstop, colstart:colstop][:, even_mask]
else:
# Even columns start on the second column
colstart = colstart + 1
evenref = group[rowstart:rowstop, colstart:colstop:2]
evendq = self.pixeldq[rowstart:rowstop, colstart:colstop:2]
return evenref, evendq
def get_odd_refvalue(self, group, amplifier, top_or_bottom):
"""Calculate the clipped mean of the counts in the reference pixels
in odd-numbered columns
Parameters:
-----------
group: NDArray
Group that is being processed
amplifier: string (['A'|'B'|'C'|'D'])
Amplifier that is being processed
top_or_bottom: string (['top'|'bottom'])
Processing top or bottom reference pixels?
Returns:
--------
odd: float
Value of the clipped mean of the reference pixels in odd-numbered
columns
"""
ref, dq = self.collect_odd_refpixels(group, amplifier, top_or_bottom)
odd = self.sigma_clip(ref, dq)
return odd
def get_even_refvalue(self, group, amplifier, top_or_bottom):
"""Calculate the clipped mean of the counts in the reference pixels
in even-numbered columns
Parameters:
-----------
group: NDArray
Group that is being processed
amplifier: string (['A'|'B'|'C'|'D'])
Amplifier that is being processed
top_or_bottom: string (['top'|'bottom'])
Processing top or bottom reference pixels?
Returns:
--------
even: float
Value of the clipped mean of the reference pixels in even-numbered
columns
"""
ref, dq = self.collect_even_refpixels(group, amplifier, top_or_bottom)
even = self.sigma_clip(ref, dq)
return even
def get_amplifier_refvalue(self, group, amplifier, top_or_bottom):
"""Calculate the reference pixel mean for a given amplifier
Parameters:
-----------
group: NDArray
Group that is being processed
amplifier: string (['A'|'B'|'C'|'D'])
Amplifier that is being processed
top_or_bottom: string (['top'|'bottom'])
Processing top or bottom reference pixels?
Returns:
--------
Either:
odd: float
Value of the clipped mean of the reference pixels in odd-numbered
columns
even: float
Value of the clipped mean of the reference pixels in even-numbered
columns
Or:
mean: float
Value of the clipped mean of the reference pixels in both odd-numbered
and even-numbered columns
"""
if self.odd_even_columns:
odd = self.get_odd_refvalue(group, amplifier, top_or_bottom)
even = self.get_even_refvalue(group, amplifier, top_or_bottom)
if odd is None or even is None:
self.bad_reference_pixels = True
return odd, even
else:
rowstart, rowstop, colstart, colstop = \
self.reference_sections[amplifier][top_or_bottom]
ref = group[rowstart:rowstop, colstart:colstop]
dq = self.pixeldq[rowstart:rowstop, colstart:colstop]
mean = self.sigma_clip(ref, dq)
if mean is None:
self.bad_reference_pixels = True
return mean
def get_refvalues(self, group):
"""Get the reference pixel values for each amplifier, odd and even columns
and top and bottom reference pixels
Parameters:
-----------
group: NDArray
Group that is being processed
Returns:
--------
refpix: dictionary
Dictionary containing the clipped mean of the reference pixels for
each amplifier, odd and even columns (if selected, otherwise all columns)
and top and bottom.
"""
refpix = {}
for amplifier in self.amplifiers:
refpix[amplifier] = {}
refpix[amplifier]['odd'] = {}
refpix[amplifier]['even'] = {}
for top_bottom in ('top', 'bottom'):
refvalues = self.get_amplifier_refvalue(group, amplifier,
top_bottom)
if self.odd_even_columns:
refpix[amplifier]['odd'][top_bottom] = refvalues[0]
refpix[amplifier]['even'][top_bottom] = refvalues[1]
else:
refpix[amplifier][top_bottom] = refvalues
return refpix
def do_top_bottom_correction(self, group, refvalues):
"""Do the top/bottom correction
Parameters:
----------
group: NDArray
Group that is being processed
refvalues: dictionary
Dictionary of reference pixel clipped means
Returns:
--------
None
Side Effect:
------------
The parameter _group_ is corrected for the bias drift using the
top and bottom reference pixels
"""
for amplifier in self.amplifiers:
datarowstart, datarowstop, datacolstart, datacolstop = \
self.reference_sections[amplifier]['data']
if self.odd_even_columns:
oddreftop = refvalues[amplifier]['odd']['top']
oddrefbottom = refvalues[amplifier]['odd']['bottom']
evenreftop = refvalues[amplifier]['even']['top']
evenrefbottom = refvalues[amplifier]['even']['bottom']
#
# For now, just average the top and bottom corrections
oddrefsignal = self.average_with_None(oddreftop, oddrefbottom)
evenrefsignal = self.average_with_None(evenreftop, evenrefbottom)
if oddrefsignal is not None and evenrefsignal is not None:
if not self.is_irs2:
oddslice = (slice(datarowstart, datarowstop, 1),
slice(datacolstart, datacolstop, 2))
evenslice = (slice(datarowstart, datarowstop, 1),
slice(datacolstart + 1, datacolstop, 2))
group[oddslice] = group[oddslice] - oddrefsignal
group[evenslice] = group[evenslice] - evenrefsignal
else:
dataslice = (slice(datarowstart, datarowstop, 1),
slice(datacolstart, datacolstop, 1))
odd_mask = self.irs2_odd_mask[datacolstart:datacolstop]
group[dataslice][:, odd_mask] -= oddrefsignal
group[dataslice][:, ~odd_mask] -= evenrefsignal
else:
pass
else:
reftop = refvalues[amplifier]['top']
refbottom = refvalues[amplifier]['bottom']
refsignal = self.average_with_None(reftop, refbottom)
if refsignal is not None:
dataslice = (slice(datarowstart, datarowstop, 1),
slice(datacolstart, datacolstop, 1))
group[dataslice] = group[dataslice] - refsignal
else:
pass
return
def average_with_None(self, a, b):
"""Average two numbers. If one is None, return the
other. If both are None, return None
Parameters:
-----------
a, b: Numbers or None
Returns:
--------
result: Number or None
"""
if a is None and b is None:
return None
if a is None:
return b
elif b is None:
return a
else:
return 0.5 * (a + b)
def create_reflected(self, data, smoothing_length):
"""Make an array bigger by extending it at the top and bottom by
an amount equal to .5(smoothing length-1)
(as the smoothing length will be odd)
The extension is a reflection of the ends of the input array
Parameters:
-----------
data: NDArray
input data array
smoothing_length: integer (should be odd, will be converted if not)
smoothing length. Amount by which the input array is extended is
smoothing_length // 2 at the bottom and smoothing_length // 2 at
the top
Returns:
--------
reflected: NDArray
array that has been extended at the top and bottom by reflecting the
first and last few rows
"""
nrows, ncols = data.shape
if smoothing_length % 2 == 0:
log.info("Smoothing length must be odd, adding 1")
smoothing_length = smoothing_length + 1
newheight = nrows + smoothing_length - 1
reflected = np.zeros((newheight, ncols), dtype=data.dtype)
bufsize = smoothing_length // 2
reflected[bufsize:bufsize + nrows] = data[:]
reflected[:bufsize] = data[bufsize:0:-1]
reflected[-(bufsize):] = data[-2:-(bufsize + 2):-1]
return reflected
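The hand-rolled slicing above mirrors the edge rows without repeating them, which is exactly `np.pad` in `'reflect'` mode. A sketch of an equivalent helper (`reflect_pad` is an illustrative name, not part of this module):

```python
import numpy as np

def reflect_pad(data, smoothing_length):
    """Sketch of create_reflected using np.pad; 'reflect' mode mirrors
    the array without repeating the edge row, matching the slicing in
    create_reflected."""
    if smoothing_length % 2 == 0:
        smoothing_length += 1
    buf = smoothing_length // 2
    return np.pad(data, ((buf, buf), (0, 0)), mode='reflect')

data = np.arange(10.0).reshape(5, 2)
out = reflect_pad(data, 5)
# top rows are data[2], data[1]; bottom rows are data[3], data[2]
```

For rows [r0..r4] and a buffer of 2, both approaches prepend [r2, r1] and append [r3, r2].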
def median_filter(self, data, dq, smoothing_length):
"""Simple median filter. Run a box of the same width as the data and
height = smoothing_length. Reflect the data at the top and bottom
Parameters:
-----------
data: NDArray
input 2-d science array
dq: NDArray
input 2-d dq array
smoothing_length: integer (should be odd)
height of box within which the median value is calculated
Returns:
--------
result: NDArray
1-d array that is a median filtered version of the input data
"""
augmented_data = self.create_reflected(data, smoothing_length)
augmented_dq = self.create_reflected(dq, smoothing_length)
nrows, ncols = data.shape
result = np.zeros(nrows)
for i in range(nrows):
rowstart = i
rowstop = rowstart + smoothing_length
goodpixels = np.where(np.bitwise_and(augmented_dq[rowstart:rowstop],
dqflags.pixel['DO_NOT_USE']) == 0)
if len(goodpixels[0]) == 0:
result[i] = np.nan
else:
window = augmented_data[rowstart:rowstop][goodpixels]
result[i] = np.median(window)
return result
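Ignoring the DQ masking, the loop above is a running median over a window of rows. A minimal sketch for a single column of good pixels (`running_median` is an illustrative helper, not part of this module):

```python
import numpy as np

def running_median(column, smoothing_length):
    """No-bad-pixel sketch of median_filter for one column of side
    reference pixels: reflect-pad, then take the median of each window."""
    buf = smoothing_length // 2
    padded = np.pad(column, buf, mode='reflect')
    return np.array([np.median(padded[i:i + smoothing_length])
                     for i in range(len(column))])

result = running_median(np.arange(5.0), 3)  # [1.0, 1.0, 2.0, 3.0, 3.0]
```

The reflect padding makes the endpoint medians lean toward the interior values rather than extrapolating past the array.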
def calculate_side_ref_signal(self, group, colstart, colstop):
"""Calculate the reference pixel signal from the side reference pixels
by running a box up the side reference pixels and calculating the running
median
Parameters:
-----------
group: NDArray
Group that is being processed
colstart: integer
Starting column
colstop: integer
Ending column
Returns:
--------
NDArray
Median filtered version of the side reference pixels
"""
smoothing_length = self.side_smoothing_length
data = group[:, colstart:colstop + 1]
dq = self.pixeldq[:, colstart:colstop + 1]
return self.median_filter(data, dq, smoothing_length)
def combine_ref_signals(self, left, right):
"""Combine the left and right reference signals by averaging
on a row-by-row basis
Parameters:
-----------
left: NDArray
1-d array of median-filtered reference pixel values from the left side
right: NDArray
1-d array of median-filtered reference pixel values from the right side
Returns:
--------
sidegroup: NDArray
2-d array of average reference pixel vector replicated horizontally
"""
combined = self.combine_with_NaNs(left, right)
sidegroup = np.zeros((2048, 2048))
for column in range(2048):
sidegroup[:, column] = combined
return sidegroup
def combine_with_NaNs(self, a, b):
"""Combine 2 1-d arrays that have NaNs.
Wherever both arrays are NaN, output is 0.0.
Wherever a is NaN and b is not, return b.
Wherever b is NaN and a is not, return a.
Wherever neither a nor b is NaN, return the average of
a and b
Parameters:
-----------
a, b: numpy 1-d arrays of numbers
Returns:
result: numpy 1-d array of numbers
"""
result = np.zeros(len(a), dtype=a.dtype)
bothnan = np.where(np.isnan(a) & np.isnan(b))
result[bothnan] = 0.0
a_nan = np.where(np.isnan(a) & ~np.isnan(b))
result[a_nan] = b[a_nan]
b_nan = np.where(~np.isnan(a) & np.isnan(b))
result[b_nan] = a[b_nan]
no_nan = np.where(~np.isnan(a) & ~np.isnan(b))
result[no_nan] = 0.5 * (a[no_nan] + b[no_nan])
return result
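The four `np.where` index lookups above can also be written as nested elementwise selections. A compact sketch of the same truth table (`combine_nan_aware` is an illustrative name):

```python
import numpy as np

def combine_nan_aware(a, b):
    """Sketch of combine_with_NaNs: elementwise average that falls back
    to the finite value when one input is NaN, and to 0.0 when both are."""
    return np.where(np.isnan(a),
                    np.where(np.isnan(b), 0.0, b),
                    np.where(np.isnan(b), a, 0.5 * (a + b)))

nan = np.nan
left = np.array([1.0, nan, 3.0, nan])
right = np.array([5.0, 2.0, nan, nan])
combined = combine_nan_aware(left, right)  # [3.0, 2.0, 3.0, 0.0]
```

`np.where` evaluates all branches, so the NaN produced by `0.5 * (a + b)` in the both-NaN lanes is computed but discarded.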
def apply_side_correction(self, group, sidegroup):
"""Apply reference pixel correction from the side reference pixels
Parameters:
-----------
group: NDArray
Group being processed
sidegroup: NDArray
Side reference pixel signal replicated horizontally
Returns:
--------
corrected_group: NDArray
The group corrected for the side reference pixel signal
"""
corrected_group = group - self.side_gain * sidegroup
return corrected_group
def do_side_correction(self, group):
"""Do all the steps of the side reference pixel correction
Parameters:
-----------
group: NDArray
Group being processed
Returns:
--------
corrected_group: NDArray
Corrected group
"""
left = self.calculate_side_ref_signal(group, 0, 3)
right = self.calculate_side_ref_signal(group, 2044, 2047)
sidegroup = self.combine_ref_signals(left, right)
corrected_group = self.apply_side_correction(group, sidegroup)
return corrected_group
def do_corrections(self):
if self.is_subarray:
if self.noutputs == 4:
self.do_fullframe_corrections()
else:
self.do_subarray_corrections()
else:
self.do_fullframe_corrections()
def do_fullframe_corrections(self):
"""Do Reference Pixels Corrections for all amplifiers, NIR detectors
First read of each integration is NOT subtracted, as the signal is removed
in the superbias subtraction step"""
#
# First transform pixeldq array to detector coordinates
self.DMS_to_detector_dq()
for integration in range(self.nints):
for group in range(self.ngroups):
#
# Get the reference values from the top and bottom reference
# pixels
#
self.DMS_to_detector(integration, group)
thisgroup = self.group
refvalues = self.get_refvalues(thisgroup)
self.do_top_bottom_correction(thisgroup, refvalues)
if self.use_side_ref_pixels:
corrected_group = self.do_side_correction(thisgroup)
self.group = corrected_group
else:
self.group = thisgroup
#
# Now transform back from detector to DMS coordinates.
self.detector_to_DMS(integration, group)
log.setLevel(logging.INFO)
return
def do_subarray_corrections(self):
"""Do corrections for subarray. Reference pixel value calculated
separately for odd and even columns if odd_even_columns is True,
otherwise a single number calculated from all reference pixels"""
#
# First transform to detector coordinates
#
refdq = dqflags.pixel['REFERENCE_PIXEL']
donotuse = dqflags.pixel['DO_NOT_USE']
#
# This transforms the pixeldq array from DMS to detector coordinates,
# only needs to be done once
self.DMS_to_detector_dq()
# Determine the reference pixel indices to use for each group
refpixindices = np.where((self.pixeldq & refdq == refdq) & (self.pixeldq & donotuse != donotuse))
nrefpixels = len(refpixindices[0])
if nrefpixels == 0:
self.bad_reference_pixels = True
return
if self.odd_even_columns:
oddrefpixindices_row = []
oddrefpixindices_col = []
evenrefpixindices_row = []
evenrefpixindices_col = []
for i in range(nrefpixels):
if (refpixindices[1][i] % 2) == 0:
evenrefpixindices_row.append(refpixindices[0][i])
evenrefpixindices_col.append(refpixindices[1][i])
else:
oddrefpixindices_row.append(refpixindices[0][i])
oddrefpixindices_col.append(refpixindices[1][i])
evenrefpixindices = (np.array(evenrefpixindices_row),
np.array(evenrefpixindices_col))
oddrefpixindices = (np.array(oddrefpixindices_row),
np.array(oddrefpixindices_col))
for integration in range(self.nints):
for group in range(self.ngroups):
#
# Get the reference values from the top and bottom reference
# pixels
#
self.DMS_to_detector(integration, group)
thisgroup = self.group
if self.odd_even_columns:
evenrefpixvalue = self.sigma_clip(thisgroup[evenrefpixindices],
self.pixeldq[evenrefpixindices])
oddrefpixvalue = self.sigma_clip(thisgroup[oddrefpixindices],
self.pixeldq[oddrefpixindices])
thisgroup[:, 0::2] -= evenrefpixvalue
thisgroup[:, 1::2] -= oddrefpixvalue
else:
refpixvalue = self.sigma_clip(thisgroup[refpixindices],
self.pixeldq[refpixindices])
thisgroup -= refpixvalue
#
# Now transform back from detector to DMS coordinates.
self.detector_to_DMS(integration, group)
log.setLevel(logging.INFO)
return
class NRS1Dataset(NIRDataset):
"""For NRS1 data"""
def DMS_to_detector(self, integration, group):
#
# NRS1 is just flipped over the line X=Y
self.get_group(integration, group)
self.group = np.swapaxes(self.group, 0, 1)
def detector_to_DMS(self, integration, group):
#
# Just flip back
self.group = np.swapaxes(self.group, 0, 1)
self.restore_group(integration, group)
def DMS_to_detector_dq(self):
# pixeldq only has to be done once
self.pixeldq = np.swapaxes(self.pixeldq, 0, 1)
class NRS2Dataset(NIRDataset):
"""NRS2 Data"""
def DMS_to_detector(self, integration, group):
#
# NRS2 is flipped over the line Y=X, then rotated 180 degrees
self.get_group(integration, group)
self.group = np.swapaxes(self.group, 0, 1)[::-1, ::-1]
def DMS_to_detector_dq(self):
# pixeldq only has to be done once
self.pixeldq = np.swapaxes(self.pixeldq, 0, 1)[::-1, ::-1]
def detector_to_DMS(self, integration, group):
#
# The inverse is to rotate 180 degrees, then flip over the line Y=X
self.group = np.swapaxes(self.group[::-1, ::-1], 0, 1)
self.restore_group(integration, group)
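A minimal numpy sketch, separate from the pipeline source, showing why the NRS2 pair of transforms above is self-inverse: `DMS_to_detector` swaps the axes and rotates 180 degrees, and `detector_to_DMS` applies the exact inverse, so a round trip returns the original array.

```python
import numpy as np

# Round-trip check for the NRS2 orientation transforms (illustrative
# array values; not taken from real detector data).
dms = np.arange(12, dtype=float).reshape(3, 4)
detector = np.swapaxes(dms, 0, 1)[::-1, ::-1]         # DMS_to_detector
round_trip = np.swapaxes(detector[::-1, ::-1], 0, 1)  # detector_to_DMS
assert detector.shape == (4, 3)
assert np.array_equal(round_trip, dms)
```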
class NRCA1Dataset(NIRDataset):
"""For NRCA1 data"""
def DMS_to_detector(self, integration, group):
#
# NRCA1 is just flipped in X
self.get_group(integration, group)
self.group = self.group[:, ::-1]
def DMS_to_detector_dq(self):
# pixeldq only has to be done once
self.pixeldq = self.pixeldq[:, ::-1]
def detector_to_DMS(self, integration, group):
#
# Just flip back
self.group = self.group[:, ::-1]
self.restore_group(integration, group)
class NRCA2Dataset(NIRDataset):
"""For NRCA2 data"""
def DMS_to_detector(self, integration, group):
#
# NRCA2 is just flipped in Y
self.get_group(integration, group)
self.group = self.group[::-1]
def DMS_to_detector_dq(self):
# pixeldq only has to be done once
self.pixeldq = self.pixeldq[::-1]
def detector_to_DMS(self, integration, group):
#
# Just flip back
self.group = self.group[::-1]
self.restore_group(integration, group)
class NRCA3Dataset(NIRDataset):
"""For NRCA3 data"""
def DMS_to_detector(self, integration, group):
#
# NRCA3 is just flipped in X
self.get_group(integration, group)
self.group = self.group[:, ::-1]
def DMS_to_detector_dq(self):
# pixeldq only has to be done once
self.pixeldq = self.pixeldq[:, ::-1]
def detector_to_DMS(self, integration, group):
#
# Just flip back
self.group = self.group[:, ::-1]
self.restore_group(integration, group)
class NRCA4Dataset(NIRDataset):
"""For NRCA4 data"""
def DMS_to_detector(self, integration, group):
#
# NRCA4 is just flipped in Y
self.get_group(integration, group)
self.group = self.group[::-1]
def DMS_to_detector_dq(self):
# pixeldq only has to be done once
self.pixeldq = self.pixeldq[::-1]
def detector_to_DMS(self, integration, group):
#
# Just flip back
self.group = self.group[::-1]
self.restore_group(integration, group)
class NRCALONGDataset(NIRDataset):
"""For NRCALONG data"""
def DMS_to_detector(self, integration, group):
#
# NRCALONG is just flipped in X
self.get_group(integration, group)
self.group = self.group[:, ::-1]
def DMS_to_detector_dq(self):
# pixeldq only has to be done once
self.pixeldq = self.pixeldq[:, ::-1]
def detector_to_DMS(self, integration, group):
#
# Just flip back
self.group = self.group[:, ::-1]
self.restore_group(integration, group)
class NRCB1Dataset(NIRDataset):
"""For NRCB1 data"""
def DMS_to_detector(self, integration, group):
#
# NRCB1 is just flipped in Y
self.get_group(integration, group)
self.group = self.group[::-1]
def DMS_to_detector_dq(self):
# pixeldq only has to be done once
self.pixeldq = self.pixeldq[::-1]
def detector_to_DMS(self, integration, group):
#
# Just flip back
self.group = self.group[::-1]
self.restore_group(integration, group)
class NRCB2Dataset(NIRDataset):
"""For NRCB2 data"""
def DMS_to_detector(self, integration, group):
#
# NRCB2 is just flipped in X
self.get_group(integration, group)
self.group = self.group[:, ::-1]
def DMS_to_detector_dq(self):
# pixeldq only has to be done once
self.pixeldq = self.pixeldq[:, ::-1]
def detector_to_DMS(self, integration, group):
#
# Just flip back
self.group = self.group[:, ::-1]
self.restore_group(integration, group)
class NRCB3Dataset(NIRDataset):
"""For NRCB3 data"""
def DMS_to_detector(self, integration, group):
#
# NRCB3 is just flipped in Y
self.get_group(integration, group)
self.group = self.group[::-1]
def DMS_to_detector_dq(self):
# pixeldq only has to be done once
self.pixeldq = self.pixeldq[::-1]
def detector_to_DMS(self, integration, group):
#
# Just flip back
self.group = self.group[::-1]
self.restore_group(integration, group)
class NRCB4Dataset(NIRDataset):
"""For NRCB4 data"""
def DMS_to_detector(self, integration, group):
#
# NRCB4 is just flipped in X
self.get_group(integration, group)
self.group = self.group[:, ::-1]
def DMS_to_detector_dq(self):
# pixeldq only has to be done once
self.pixeldq = self.pixeldq[:, ::-1]
def detector_to_DMS(self, integration, group):
#
# Just flip back
self.group = self.group[:, ::-1]
self.restore_group(integration, group)
class NRCBLONGDataset(NIRDataset):
"""For NRCBLONG data"""
def DMS_to_detector(self, integration, group):
#
# NRCBLONG is just flipped in Y
self.get_group(integration, group)
self.group = self.group[::-1]
def DMS_to_detector_dq(self):
# pixeldq only has to be done once
self.pixeldq = self.pixeldq[::-1]
def detector_to_DMS(self, integration, group):
#
# Just flip back
self.group = self.group[::-1]
self.restore_group(integration, group)
class NIRISSDataset(NIRDataset):
"""For NIRISS data"""
def DMS_to_detector(self, integration, group):
#
# NIRISS has a 180 degree rotation followed by a flip across the line
# X=Y
self.get_group(integration, group)
self.group = np.swapaxes(self.group[::-1, ::-1], 0, 1)
def DMS_to_detector_dq(self):
# pixeldq only has to be done once
self.pixeldq = np.swapaxes(self.pixeldq[::-1, ::-1], 0, 1)
def detector_to_DMS(self, integration, group):
#
# Just flip and rotate back
self.group = np.swapaxes(self.group, 0, 1)[::-1, ::-1]
self.restore_group(integration, group)
class GUIDER1Dataset(NIRDataset):
"""For GUIDER1 data"""
def DMS_to_detector(self, integration, group):
#
# GUIDER1 is flipped in X and Y
self.get_group(integration, group)
self.group = self.group[::-1, ::-1]
def DMS_to_detector_dq(self):
# pixeldq only has to be done once
self.pixeldq = self.pixeldq[::-1, ::-1]
def detector_to_DMS(self, integration, group):
#
# Just flip back
self.group = self.group[::-1, ::-1]
self.restore_group(integration, group)
class GUIDER2Dataset(NIRDataset):
"""For GUIDER2 data"""
def DMS_to_detector(self, integration, group):
#
# GUIDER2 is just flipped in X
self.get_group(integration, group)
self.group = self.group[:, ::-1]
def DMS_to_detector_dq(self):
# pixeldq only has to be done once
self.pixeldq = self.pixeldq[:, ::-1]
def detector_to_DMS(self, integration, group):
#
# Just flip back
self.group = self.group[:, ::-1]
self.restore_group(integration, group)
class MIRIDataset(Dataset):
"""For MIRI data
Parameters:
-----------
input_model: data model object
Science data model to be corrected
is_subarray: boolean
flag that shows whether the dataset was created from subarray
data
odd_even_rows: boolean
Flag that controls whether odd and even-numbered rows are
handled separately
"""
def __init__(self, input_model,
odd_even_rows):
super(MIRIDataset, self).__init__(input_model,
odd_even_columns=False,
use_side_ref_pixels=False,
side_smoothing_length=False,
side_gain=False,
odd_even_rows=odd_even_rows)
self.reference_sections = MIR_reference_sections
def DMS_to_detector(self, integration, group):
#
# MIRI data doesn't need transforming
pass
def detector_to_DMS(self, integration, group):
#
# Do the opposite of above
pass
def collect_odd_refpixels(self, group, amplifier, left_or_right):
"""Collect reference pixels from odd-numbered rows
Parameters:
-----------
group: NDArray
Group being processed
amplifier: string
Amplifier being processed (['A'|'B'|'C'|'D'])
left_or_right: string
Process left or right side reference pixels (['left'|'right'])
Returns:
--------
oddref: NDArray
Reference pixels from odd-numbered rows
odddq: NDArray
DQ values for reference pixels from odd-numbered rows
"""
rowstart, rowstop, column = self.reference_sections[amplifier][left_or_right]
oddref = group[rowstart:rowstop:2, column]
odddq = self.pixeldq[rowstart:rowstop:2, column]
return oddref, odddq
def collect_even_refpixels(self, group, amplifier, left_or_right):
"""Collect reference pixels from even-numbered rows
Parameters:
-----------
group: NDArray
Group being processed
amplifier: string
Amplifier being processed (['A'|'B'|'C'|'D'])
left_or_right: string
Process left or right side reference pixels (['left'|'right'])
Returns:
--------
evenref: NDArray
Reference pixels from even-numbered rows
evendq: NDArray
DQ values for reference pixels from even-numbered rows
"""
rowstart, rowstop, column = self.reference_sections[amplifier][left_or_right]
#
# Even reference pixels start on the second row
rowstart = rowstart + 1
evenref = group[rowstart:rowstop:2, column]
evendq = self.pixeldq[rowstart:rowstop:2, column]
return evenref, evendq
def get_odd_refvalue(self, group, amplifier, left_or_right):
"""Calculate the clipped mean of the counts in the reference pixels
in odd-numbered rows
Parameters:
-----------
group: NDArray
Group that is being processed
amplifier: string (['A'|'B'|'C'|'D'])
Amplifier that is being processed
left_or_right: string (['left'|'right'])
Processing left or right reference pixels?
Returns:
--------
odd: float
Value of the clipped mean of the reference pixels in odd-numbered
rows
"""
ref, dq = self.collect_odd_refpixels(group, amplifier, left_or_right)
odd = self.sigma_clip(ref, dq)
return odd
def get_even_refvalue(self, group, amplifier, left_or_right):
"""Calculate the clipped mean of the counts in the reference pixels
in even-numbered rows
Parameters:
-----------
group: NDArray
Group that is being processed
amplifier: string (['A'|'B'|'C'|'D'])
Amplifier that is being processed
left_or_right: string (['left'|'right'])
Processing left or right reference pixels?
Returns:
--------
even: float
Value of the clipped mean of the reference pixels in even-numbered
rows
"""
ref, dq = self.collect_even_refpixels(group, amplifier, left_or_right)
even = self.sigma_clip(ref, dq)
return even
def get_amplifier_refvalue(self, group, amplifier, left_or_right):
"""Calculate the reference pixel mean for a given amplifier
Parameters:
-----------
group: NDArray
Group that is being processed
amplifier: string (['A'|'B'|'C'|'D'])
Amplifier that is being processed
left_or_right: string (['left'|'right'])
Processing left or right side reference pixels?
Returns:
--------
Either:
odd: float
Value of the clipped mean of the reference pixels in odd-numbered
rows
even: float
Value of the clipped mean of the reference pixels in even-numbered
rows
Or:
mean: float
Value of the clipped mean of the reference pixels in both odd-numbered
and even-numbered rows
"""
if self.odd_even_rows:
odd = self.get_odd_refvalue(group, amplifier, left_or_right)
even = self.get_even_refvalue(group, amplifier, left_or_right)
if odd is None:
log.warning("Odd rows for amplifier {} have no good reference pixels".format(amplifier))
self.bad_reference_pixels = True
elif even is None:
log.warning("Even rows for amplifier {} have no good reference pixels".format(amplifier))
self.bad_reference_pixels = True
return odd, even
else:
rowstart, rowstop, column = self.reference_sections[amplifier][left_or_right]
ref = group[rowstart:rowstop, column]
dq = self.pixeldq[rowstart:rowstop, column]
mean = self.sigma_clip(ref, dq)
if mean is None:
self.bad_reference_pixels = True
return mean
def get_refvalues(self, group):
"""Get the reference pixel values for each amplifier, odd and even rows
and left and right side reference pixels
Parameters:
-----------
group: NDArray
Group that is being processed
Returns:
--------
refpix: dictionary
Dictionary containing the clipped mean of the reference pixels for
each amplifier, odd and even rows (if selected, otherwise all rows)
and left and right.
"""
refpix = {}
for amplifier in self.amplifiers:
refpix[amplifier] = {}
refpix[amplifier]['odd'] = {}
refpix[amplifier]['even'] = {}
for left_right in ('left', 'right'):
refvalues = self.get_amplifier_refvalue(group, amplifier,
left_right)
if self.odd_even_rows:
refpix[amplifier]['odd'][left_right] = refvalues[0]
refpix[amplifier]['even'][left_right] = refvalues[1]
else:
refpix[amplifier][left_right] = refvalues
return refpix
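An illustrative sketch (the amplifier means here are made-up numbers, not real reference pixel values) of the nested dictionary shape `get_refvalues` returns when `odd_even_rows` is True, and how `do_left_right_correction` averages the left and right means per row parity:

```python
# Shape of the refpix dictionary for one amplifier with odd_even_rows=True.
refpix = {
    "A": {
        "odd": {"left": 1.2, "right": 1.0},
        "even": {"left": 1.4, "right": 1.0},
    },
}
# The correction simply averages the left and right clipped means:
odd_signal = 0.5 * (refpix["A"]["odd"]["left"] + refpix["A"]["odd"]["right"])
even_signal = 0.5 * (refpix["A"]["even"]["left"] + refpix["A"]["even"]["right"])
assert abs(odd_signal - 1.1) < 1e-9
assert abs(even_signal - 1.2) < 1e-9
```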
def do_left_right_correction(self, group, refvalues):
"""Do the reference pixel correction
Parameters:
----------
group: NDArray
Group that is being processed
refvalues: dictionary
Dictionary of reference pixel clipped means
Returns:
--------
None
Side Effect:
------------
The parameter _group_ is corrected for the bias drift using the
left and right side reference pixels
"""
for amplifier in self.amplifiers:
datarowstart, datarowstop, datacolstart, datacolstop, stride = \
self.reference_sections[amplifier]['data']
if self.odd_even_rows:
oddrefleft = refvalues[amplifier]['odd']['left']
oddrefright = refvalues[amplifier]['odd']['right']
evenrefleft = refvalues[amplifier]['even']['left']
evenrefright = refvalues[amplifier]['even']['right']
#
# For now, just average the left and right corrections
oddrefsignal = 0.5 * (oddrefleft + oddrefright)
evenrefsignal = 0.5 * (evenrefleft + evenrefright)
oddslice = (slice(datarowstart, datarowstop, 2),
slice(datacolstart, datacolstop, 4))
evenslice = (slice(datarowstart + 1, datarowstop, 2),
slice(datacolstart, datacolstop, 4))
group[oddslice] = group[oddslice] - oddrefsignal
group[evenslice] = group[evenslice] - evenrefsignal
else:
refleft = refvalues[amplifier]['left']
refright = refvalues[amplifier]['right']
refsignal = 0.5 * (refleft + refright)
dataslice = (slice(datarowstart, datarowstop, 1),
slice(datacolstart, datacolstop, 4))
group[dataslice] = group[dataslice] - refsignal
return
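A standalone sketch (array values invented) of the strided-slice subtraction used in `do_left_right_correction`: odd and even rows of one amplifier's data section, sampled at every 4th column, each get their own reference level removed.

```python
import numpy as np

# Toy group: every pixel starts at 10 counts.
group = np.full((8, 8), 10.0)
rowstart, rowstop, colstart, colstop = 0, 8, 0, 8
oddslice = (slice(rowstart, rowstop, 2), slice(colstart, colstop, 4))
evenslice = (slice(rowstart + 1, rowstop, 2), slice(colstart, colstop, 4))
group[oddslice] -= 2.0   # stand-in for the odd-row reference signal
group[evenslice] -= 3.0  # stand-in for the even-row reference signal
# Columns outside the stride-4 slice are untouched.
assert group[0, 0] == 8.0 and group[1, 0] == 7.0 and group[0, 1] == 10.0
```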
def do_corrections(self):
if self.is_subarray:
self.do_subarray_corrections()
else:
self.do_fullframe_corrections()
def do_subarray_corrections(self):
log.warning("Refpix correction skipped for MIRI subarray")
return
def do_fullframe_corrections(self):
"""Do Reference Pixels Corrections for all amplifiers, MIRI detectors"""
#
# First we need to subtract the first read of each integration
first_read = np.zeros((self.nints, self.nrows, self.ncols))
log.info('Subtracting initial read from each integration')
for i in range(self.nints):
first_read[i] = self.input_model.data[i, 0].copy()
self.input_model.data[i] = self.input_model.data[i] - first_read[i]
#
# First transform to detector coordinates
#
for integration in range(self.nints):
#
# Don't process the first group as it's all zeros and the clipped
# mean will return NaN
#
for group in range(1, self.ngroups):
#
# Get the reference values from the top and bottom reference
# pixels
#
self.get_group(integration, group)
thisgroup = self.group
refvalues = self.get_refvalues(thisgroup)
if self.bad_reference_pixels:
log.warning("Group {} has no reference pixels".format(group))
break
self.do_left_right_correction(thisgroup, refvalues)
#
# Now transform back from detector to DMS coordinates and transfer results to output
self.restore_group(integration, group)
log.setLevel(logging.INFO)
#
# All done, now add the first read back in
log.info('Adding initial read back in')
for i in range(self.nints):
self.input_model.data[i] += first_read[i]
del first_read
return
def create_dataset(input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain,
odd_even_rows):
"""Create a dataset object from an input model.
Parameters:
-----------
input_model: data model object
Science data model to be corrected
odd_even_columns: boolean
flag that controls whether odd and even-numbered columns are
processed separately (NIR only)
use_side_ref_pixels: boolean
flag that controls whether the side reference pixels are used in
the correction (NIR only)
side_smoothing_length: integer
smoothing length to use in calculating the running median of
the side reference pixels (NIR only)
side_gain: float
gain to use in applying the side reference pixel correction
(NIR only)
odd_even_rows: boolean
flag that controls whether odd and even-numbered rows are handled
separately (MIR only)
"""
detector = input_model.meta.instrument.detector
if reffile_utils.is_subarray(input_model):
colstart = input_model.meta.subarray.xstart - 1
colstop = colstart + input_model.meta.subarray.xsize
rowstart = input_model.meta.subarray.ystart - 1
rowstop = rowstart + input_model.meta.subarray.ysize
if rowstart < 0 or colstart < 0 \
or rowstop > 2048 or colstop > 2048:
return None
if detector[:3] == 'MIR':
return MIRIDataset(input_model,
odd_even_rows)
elif detector == 'NRS1':
return NRS1Dataset(input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain)
elif detector == 'NRS2':
return NRS2Dataset(input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain)
elif detector == 'NRCA1':
return NRCA1Dataset(input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain)
elif detector == 'NRCA2':
return NRCA2Dataset(input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain)
elif detector == 'NRCA3':
return NRCA3Dataset(input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain)
elif detector == 'NRCA4':
return NRCA4Dataset(input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain)
elif detector == 'NRCALONG':
return NRCALONGDataset(input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain)
elif detector == 'NRCB1':
return NRCB1Dataset(input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain)
elif detector == 'NRCB2':
return NRCB2Dataset(input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain)
elif detector == 'NRCB3':
return NRCB3Dataset(input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain)
elif detector == 'NRCB4':
return NRCB4Dataset(input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain)
elif detector == 'NRCBLONG':
return NRCBLONGDataset(input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain)
elif detector == 'NIS':
return NIRISSDataset(input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain)
elif detector == 'GUIDER1':
return GUIDER1Dataset(input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain)
elif detector == 'GUIDER2':
return GUIDER2Dataset(input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain)
else:
log.error('Unrecognized detector')
return NIRDataset(input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain)
def correct_model(input_model, odd_even_columns,
use_side_ref_pixels,
side_smoothing_length, side_gain,
odd_even_rows):
"""Wrapper to do Reference Pixel Correction on a JWST Model.
Performs the correction on the datamodel
Parameters:
-----------
input_model: jwst.datamodels.model
Model to be corrected
odd_even_columns: boolean
flag that controls whether odd and even-numbered columns are
processed separately (NIR only)
use_side_ref_pixels: boolean
flag that controls whether the side reference pixels are used in
the correction (NIR only)
side_smoothing_length: integer
smoothing length to use in calculating the running median of
the side reference pixels (NIR only)
side_gain: float
gain to use in applying the side reference pixel correction
(NIR only)
odd_even_rows: boolean
flag that controls whether odd and even-numbered rows are handled
separately (MIR only)
"""
if input_model.meta.instrument.name == 'MIRI':
if reffile_utils.is_subarray(input_model):
log.warning("Refpix correction skipped for MIRI subarrays")
return SUBARRAY_SKIPPED
input_dataset = create_dataset(input_model,
odd_even_columns,
use_side_ref_pixels,
side_smoothing_length,
side_gain,
odd_even_rows)
if input_dataset is None:
status = SUBARRAY_DOESNTFIT
return status
input_dataset.log_parameters()
reference_pixel_correction(input_dataset)
return REFPIX_OK
def reference_pixel_correction(input_dataset):
"""
Do the Reference Pixel Correction.
Parameters:
-----------
input_dataset: Dataset
Dataset to be corrected
Returns:
--------
input_dataset: Dataset
Corrected dataset
"""
input_dataset.do_corrections()
if input_dataset.input_model.meta.exposure.zero_frame:
process_zeroframe_correction(input_dataset)
return
def process_zeroframe_correction(input_dataset):
"""
Do the Reference Pixel Correction for the ZEROFRAME array.
Parameters
----------
input_dataset : Dataset
Dataset to be corrected
Returns
-------
input_dataset : Dataset
Corrected dataset
"""
# Setup input model for ZEROFRAME
saved_values = save_science_values(input_dataset)
setup_dataset_for_zeroframe(input_dataset, saved_values)
# Run refpix correction on ZEROFRAME
input_dataset.do_corrections()
restore_input_model(input_dataset, saved_values)
def restore_input_model(input_dataset, saved_values):
"""
Restore the input model with saved values and move
the computed ZEROFRAME value to the correct class
variable.
Parameters
----------
input_dataset : Dataset
Dataset to be corrected
saved_values : tuple
A tuple of saved values to be used to setup the final
corrected RampModel.
"""
data, gdq, pdq, wh_zero = saved_values
nints, ngroups, nrows, ncols = data.shape
zdims = (nints, nrows, ncols)
# Get ZEROFRAME data
zframe = input_dataset.input_model.data
zdq = input_dataset.input_model.groupdq
# Restore SCI data
input_dataset.input_model.data = data
input_dataset.input_model.groupdq = gdq
input_dataset.input_model.pixeldq = pdq
# Save computed ZEROFRAME
zframe[zdq != 0] = 0.
input_dataset.input_model.zeroframe = zframe.reshape(zdims)
input_dataset.input_model.zeroframe[wh_zero] = 0.
def setup_dataset_for_zeroframe(input_dataset, saved_values):
"""
Set up the dataset so the reference pixel correction runs on the
ZEROFRAME data instead of the SCI data.
Parameters:
-----------
input_dataset : Dataset
Dataset to be corrected
saved_values : tuple
Saved SCI data, groupdq, pixeldq, and the zeroed-pixel locations
"""
# Setup dimensions
dims = input_dataset.input_model.zeroframe.shape
nints, nrows, ncols = dims
ngroups = 1
new_dims = (nints, ngroups, nrows, ncols)
# Setup ZEROFRAME data
data = input_dataset.input_model.zeroframe
data = data.reshape(new_dims)
# Setup ZEROFRAME dummy groupdq
gdtype = input_dataset.input_model.groupdq.dtype
gdq = np.zeros(dims, dtype=gdtype)
wh_zero = saved_values[-1]
gdq[wh_zero] = dqflags.pixel['DO_NOT_USE']
gdq = gdq.reshape(new_dims)
# Setup dataset with ZEROFRAME data
input_dataset.ngroups = ngroups
input_dataset.pixeldq = input_dataset.get_pixeldq()
input_dataset.input_model.data = data
input_dataset.input_model.groupdq = gdq
input_dataset.zeroframe_proc = True
def save_science_values(input_dataset):
"""
Saves off corrected data for the SCI data.
Parameters:
-----------
input_dataset : Dataset
Dataset to be corrected
Returns
-------
data : ndarray
The corrected SCI data.
gdq : ndarray
The corrected SCI groupdq.
pdq : ndarray
The corrected SCI pixeldq.
wh_zero : ndarray
The location of the zeroed out locations in the ZEROFRAME.
"""
data = input_dataset.input_model.data
gdq = input_dataset.input_model.groupdq
pdq = input_dataset.input_model.pixeldq
wh_zero = np.where(input_dataset.input_model.zeroframe[:, :, :] == 0.)
return data, gdq, pdq, wh_zero
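A small sketch of the zeroframe bookkeeping implemented by `save_science_values` and `restore_input_model` (toy values, not real detector counts): pixels that are exactly zero in the ZEROFRAME are recorded up front and forced back to zero after the correction, so the correction cannot turn dead pixels into signal.

```python
import numpy as np

zeroframe = np.array([[0.0, 5.0], [3.0, 0.0]])
wh_zero = np.where(zeroframe == 0.0)   # remember the zeroed pixels
corrected = zeroframe - 1.0            # stand-in for the refpix correction
corrected[wh_zero] = 0.0               # restore the zero mask afterwards
assert corrected[0, 0] == 0.0 and corrected[0, 1] == 4.0
```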
|
spacetelescopeREPO_NAMEjwstPATH_START.@jwst_extracted@jwst-main@jwst@refpix@reference_pixels.py@.PATH_END.py
|
{
"filename": "__init__.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/violin/stream/__init__.py",
"type": "Python"
}
|
import sys
from typing import TYPE_CHECKING
if sys.version_info < (3, 7) or TYPE_CHECKING:
from ._token import TokenValidator
from ._maxpoints import MaxpointsValidator
else:
from _plotly_utils.importers import relative_import
__all__, __getattr__, __dir__ = relative_import(
__name__, [], ["._token.TokenValidator", "._maxpoints.MaxpointsValidator"]
)
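A minimal sketch of the PEP 562 lazy-import pattern that `_plotly_utils.importers.relative_import` builds on: exported names are mapped to the modules defining them and resolved on first attribute access. The mapping here uses stdlib modules purely for illustration; in a real package the function would be the module-level `__getattr__` and the map would hold relative paths like `"._token"`.

```python
import importlib

_lazy_map = {"sqrt": "math", "Counter": "collections"}

def lazy_getattr(name):
    # Resolve the attribute on first access instead of at import time.
    if name in _lazy_map:
        return getattr(importlib.import_module(_lazy_map[name]), name)
    raise AttributeError(name)

assert lazy_getattr("sqrt")(9.0) == 3.0
```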
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@violin@stream@__init__.py@.PATH_END.py
|
{
"filename": "__init__.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/volume/colorbar/__init__.py",
"type": "Python"
}
|
import sys
from typing import TYPE_CHECKING
if sys.version_info < (3, 7) or TYPE_CHECKING:
from ._yref import YrefValidator
from ._ypad import YpadValidator
from ._yanchor import YanchorValidator
from ._y import YValidator
from ._xref import XrefValidator
from ._xpad import XpadValidator
from ._xanchor import XanchorValidator
from ._x import XValidator
from ._title import TitleValidator
from ._tickwidth import TickwidthValidator
from ._tickvalssrc import TickvalssrcValidator
from ._tickvals import TickvalsValidator
from ._ticktextsrc import TicktextsrcValidator
from ._ticktext import TicktextValidator
from ._ticksuffix import TicksuffixValidator
from ._ticks import TicksValidator
from ._tickprefix import TickprefixValidator
from ._tickmode import TickmodeValidator
from ._ticklen import TicklenValidator
from ._ticklabelstep import TicklabelstepValidator
from ._ticklabelposition import TicklabelpositionValidator
from ._ticklabeloverflow import TicklabeloverflowValidator
from ._tickformatstopdefaults import TickformatstopdefaultsValidator
from ._tickformatstops import TickformatstopsValidator
from ._tickformat import TickformatValidator
from ._tickfont import TickfontValidator
from ._tickcolor import TickcolorValidator
from ._tickangle import TickangleValidator
from ._tick0 import Tick0Validator
from ._thicknessmode import ThicknessmodeValidator
from ._thickness import ThicknessValidator
from ._showticksuffix import ShowticksuffixValidator
from ._showtickprefix import ShowtickprefixValidator
from ._showticklabels import ShowticklabelsValidator
from ._showexponent import ShowexponentValidator
from ._separatethousands import SeparatethousandsValidator
from ._outlinewidth import OutlinewidthValidator
from ._outlinecolor import OutlinecolorValidator
from ._orientation import OrientationValidator
from ._nticks import NticksValidator
from ._minexponent import MinexponentValidator
from ._lenmode import LenmodeValidator
from ._len import LenValidator
from ._labelalias import LabelaliasValidator
from ._exponentformat import ExponentformatValidator
from ._dtick import DtickValidator
from ._borderwidth import BorderwidthValidator
from ._bordercolor import BordercolorValidator
from ._bgcolor import BgcolorValidator
else:
from _plotly_utils.importers import relative_import
__all__, __getattr__, __dir__ = relative_import(
__name__,
[],
[
"._yref.YrefValidator",
"._ypad.YpadValidator",
"._yanchor.YanchorValidator",
"._y.YValidator",
"._xref.XrefValidator",
"._xpad.XpadValidator",
"._xanchor.XanchorValidator",
"._x.XValidator",
"._title.TitleValidator",
"._tickwidth.TickwidthValidator",
"._tickvalssrc.TickvalssrcValidator",
"._tickvals.TickvalsValidator",
"._ticktextsrc.TicktextsrcValidator",
"._ticktext.TicktextValidator",
"._ticksuffix.TicksuffixValidator",
"._ticks.TicksValidator",
"._tickprefix.TickprefixValidator",
"._tickmode.TickmodeValidator",
"._ticklen.TicklenValidator",
"._ticklabelstep.TicklabelstepValidator",
"._ticklabelposition.TicklabelpositionValidator",
"._ticklabeloverflow.TicklabeloverflowValidator",
"._tickformatstopdefaults.TickformatstopdefaultsValidator",
"._tickformatstops.TickformatstopsValidator",
"._tickformat.TickformatValidator",
"._tickfont.TickfontValidator",
"._tickcolor.TickcolorValidator",
"._tickangle.TickangleValidator",
"._tick0.Tick0Validator",
"._thicknessmode.ThicknessmodeValidator",
"._thickness.ThicknessValidator",
"._showticksuffix.ShowticksuffixValidator",
"._showtickprefix.ShowtickprefixValidator",
"._showticklabels.ShowticklabelsValidator",
"._showexponent.ShowexponentValidator",
"._separatethousands.SeparatethousandsValidator",
"._outlinewidth.OutlinewidthValidator",
"._outlinecolor.OutlinecolorValidator",
"._orientation.OrientationValidator",
"._nticks.NticksValidator",
"._minexponent.MinexponentValidator",
"._lenmode.LenmodeValidator",
"._len.LenValidator",
"._labelalias.LabelaliasValidator",
"._exponentformat.ExponentformatValidator",
"._dtick.DtickValidator",
"._borderwidth.BorderwidthValidator",
"._bordercolor.BordercolorValidator",
"._bgcolor.BgcolorValidator",
],
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@volume@colorbar@__init__.py@.PATH_END.py
|
{
"filename": "harps2kpf.py",
"repo_name": "Keck-DataReductionPipelines/KPF-Pipeline",
"repo_path": "KPF-Pipeline_extracted/KPF-Pipeline-master/modules/TemplateFit/tools/harps2kpf.py",
"type": "Python"
}
|
Keck-DataReductionPipelinesREPO_NAMEKPF-PipelinePATH_START.@KPF-Pipeline_extracted@KPF-Pipeline-master@modules@TemplateFit@tools@harps2kpf.py@.PATH_END.py
|
|
{
"filename": "test_io.py",
"repo_name": "cta-observatory/cta-lstchain",
"repo_path": "cta-lstchain_extracted/cta-lstchain-main/lstchain/io/tests/test_io.py",
"type": "Python"
}
|
import tempfile
import json
import math
import numpy as np
import pandas as pd
import pytest
import tables
from astropy.table import Table, QTable
from ctapipe.instrument import SubarrayDescription
from lstchain.io import add_config_metadata
from lstchain.io.io import get_resource_path
from pathlib import PosixPath
from traitlets.config.loader import DeferredConfigString, LazyConfigValue
@pytest.fixture
def merged_h5file(tmp_path, simulated_dl1_file):
"""Produce a merged h5 file from simulated dl1 files."""
from lstchain.io.io import auto_merge_h5files
subarray_before = SubarrayDescription.from_hdf(simulated_dl1_file)
merged_dl1_file = tmp_path / "dl1_merged.h5"
auto_merge_h5files(
[simulated_dl1_file, simulated_dl1_file], output_filename=merged_dl1_file
)
merged_dl1_file_ = tmp_path / "dl1_merged_nocheck.h5"
auto_merge_h5files(
[simulated_dl1_file, simulated_dl1_file],
output_filename=merged_dl1_file_,
run_checks=False,
)
subarray_merged = SubarrayDescription.from_hdf(merged_dl1_file)
# check that subarray name is correctly retained
assert subarray_before.name == subarray_merged.name
return merged_dl1_file
def test_write_dataframe():
from lstchain.io import config, global_metadata
from lstchain.io.io import write_dataframe
df = pd.DataFrame(
{
"x": np.random.normal(size=10),
"N": np.random.poisson(5, size=10),
}
)
config = config.get_standard_config()
with tempfile.NamedTemporaryFile() as f:
meta = global_metadata()
write_dataframe(df, f.name, "data/awesome_table", config=config, meta=meta)
with tables.open_file(f.name) as h5_file:
# make sure nothing else in this group
# (e.g. like pandas writes _i_ tables)
assert h5_file.root.data._v_children.keys() == {"awesome_table"}
table = h5_file.root.data.awesome_table[:]
for col in df.columns:
np.testing.assert_array_equal(table[col], df[col])
# test global metadata and config are properly written
for k in meta.keys():
assert meta[k] == h5_file.root.data.awesome_table.attrs[k]
assert config == h5_file.root.data.awesome_table.attrs["config"]
# test it's also readable by pandas directly
df_read = pd.read_hdf(f.name, "data/awesome_table")
assert df.equals(df_read)
# and with astropy
t = Table.read(f.name, "data/awesome_table")
for col in df.columns:
np.testing.assert_array_equal(t[col], df[col])
def test_write_dataframe_index():
"""Test that also an index can be written."""
from lstchain.io.io import write_dataframe
df = pd.DataFrame(
{
"x": np.random.normal(size=10),
"N": np.random.poisson(5, size=10),
}
)
df.index.name = "event_id"
with tempfile.NamedTemporaryFile() as f:
write_dataframe(df, f.name, "data/awesome_table", index=True)
with tables.open_file(f.name) as file:
table = file.root.data.awesome_table[:]
for col in df.columns:
np.testing.assert_array_equal(table[col], df[col])
np.testing.assert_array_equal(table["event_id"], df.index)
def test_write_dl2_dataframe(tmp_path, simulated_dl2_file):
from lstchain.io.io import dl2_params_lstcam_key
from lstchain.io import write_dl2_dataframe
dl2 = pd.read_hdf(simulated_dl2_file, key=dl2_params_lstcam_key)
write_dl2_dataframe(dl2, tmp_path / "dl2_test.h5")
def test_merging_check(simulated_dl1_file):
from lstchain.io.io import merging_check
# the same file should be mergeable with itself
dl1_file = simulated_dl1_file
assert merging_check([dl1_file, dl1_file]) == [dl1_file, dl1_file]
def test_merge_h5files(merged_h5file):
assert merged_h5file.is_file()
# check source filenames is properly written
with tables.open_file(merged_h5file) as file:
assert len(file.root.source_filenames.filenames) == 2
def test_read_simu_info_hdf5(simulated_dl1_file):
from lstchain.io.io import read_simu_info_hdf5
mcheader = read_simu_info_hdf5(simulated_dl1_file)
    # simtel version of the mc_gamma_testfile defined in test_lstchain
assert mcheader.simtel_version == 1593356843
assert mcheader.n_showers == 10
def test_read_simu_info_merged_hdf5(merged_h5file):
from lstchain.io.io import read_simu_info_merged_hdf5
mcheader = read_simu_info_merged_hdf5(merged_h5file)
    # simtel version of the mc_gamma_testfile defined in test_lstchain
assert mcheader.simtel_version == 1593356843
assert mcheader.n_showers == 20
def test_trigger_type_in_dl1_params(simulated_dl1_file):
from lstchain.io.io import dl1_params_lstcam_key
params = pd.read_hdf(simulated_dl1_file, key=dl1_params_lstcam_key)
assert "trigger_type" in params.columns
def test_extract_simulation_nsb(mc_gamma_testfile):
from lstchain.io.io import extract_simulation_nsb
import astropy.units as u
nsb = extract_simulation_nsb(mc_gamma_testfile)
assert np.isclose(nsb[1].to_value(u.GHz), 0.246, rtol=0.1)
def test_remove_duplicated_events():
from lstchain.io.io import remove_duplicated_events
d = {
"event_id": [1, 2, 3, 1, 2, 4, 1, 2, 3],
"gh_score": [0.1, 0.5, 0.7, 0.5, 0.8, 0.1, 0.9, 0.1, 0.5],
"alpha": range(9),
}
df = pd.DataFrame(data=d)
data1 = QTable.from_pandas(df)
remove_duplicated_events(data1)
d2 = {
"event_id": [3, 2, 4, 1],
"gh_score": [0.7, 0.8, 0.1, 0.9],
"alpha": [2, 4, 5, 6],
}
df2 = pd.DataFrame(data=d2)
data2 = QTable.from_pandas(df2)
assert np.all(data1 == data2)
def test_check_mc_type(simulated_dl1_file):
from lstchain.io.io import check_mc_type
mc_type = check_mc_type(simulated_dl1_file)
assert mc_type == "diffuse"
def test_add_config_metadata():
class Container:
meta = {}
lazy_value = LazyConfigValue()
lazy_value.update({"key": "new_value"})
config = {
"param1": 1,
"param2": "value2",
"param3": [1, 2, 3],
"param4": {"a": 1, "b": 2},
"param5": None,
"param6": lazy_value,
"param7": DeferredConfigString("some_string"),
"param8": PosixPath("/path/to/file"),
"param9": np.inf,
"param10": True,
"param11": False,
"param12": np.array([1, 2, 3]),
}
expected_config = {
"param1": 1,
"param2": "value2",
"param3": [1, 2, 3],
"param4": {"a": 1, "b": 2},
"param5": None,
"param6": {"update": {"key": "new_value"}},
"param7": "some_string",
"param8": "/path/to/file",
"param9": math.inf,
"param10": True,
"param11": False,
"param12": [1, 2, 3],
}
container = Container()
add_config_metadata(container, config)
assert json.loads(container.meta["config"]) == expected_config
# test also with standard config in case of future changes
from lstchain.io.config import get_standard_config
config = get_standard_config()
container = Container()
add_config_metadata(container, config)
assert json.loads(container.meta["config"]) == config
def test_get_resource_path():
filepath = get_resource_path("data/SinglePhE_ResponseInPhE_expo2Gaus.dat")
assert filepath.is_file()
# Source: cta-observatory/cta-lstchain, lstchain/io/tests/test_io.py
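The behaviour exercised by `test_remove_duplicated_events` above (keeping, for each `event_id`, the row with the highest `gh_score`) can be sketched in pure Python. This is a hypothetical illustration, independent of lstchain's actual implementation, which operates on astropy tables in place:

```python
def dedup_by_best_score(rows):
    """Keep, per event_id, the row with the highest gh_score.

    `rows` is a list of dicts with at least 'event_id' and 'gh_score'
    keys. Hypothetical sketch only, not lstchain's implementation.
    """
    best = {}
    for row in rows:
        key = row["event_id"]
        if key not in best or row["gh_score"] > best[key]["gh_score"]:
            best[key] = row
    return list(best.values())

# The same input data as the test above.
rows = [
    {"event_id": e, "gh_score": g, "alpha": a}
    for e, g, a in zip(
        [1, 2, 3, 1, 2, 4, 1, 2, 3],
        [0.1, 0.5, 0.7, 0.5, 0.8, 0.1, 0.9, 0.1, 0.5],
        range(9),
    )
]
deduped = dedup_by_best_score(rows)
# Matches the expected `d2` table in the test: alphas 2, 4, 5, 6.
assert sorted(r["alpha"] for r in deduped) == [2, 4, 5, 6]
```

The surviving `alpha` values correspond exactly to the `d2` table the test compares against.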
{
"filename": "_volume.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/_volume.py",
"type": "Python"
}
import _plotly_utils.basevalidators
class VolumeValidator(_plotly_utils.basevalidators.CompoundValidator):
def __init__(self, plotly_name="volume", parent_name="", **kwargs):
super(VolumeValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
data_class_str=kwargs.pop("data_class_str", "Volume"),
data_docs=kwargs.pop(
"data_docs",
"""
autocolorscale
Determines whether the colorscale is a default
palette (`autocolorscale: true`) or the palette
determined by `colorscale`. In case
`colorscale` is unspecified or `autocolorscale`
is true, the default palette will be chosen
according to whether numbers in the `color`
array are all positive, all negative or mixed.
caps
:class:`plotly.graph_objects.volume.Caps`
instance or dict with compatible properties
cauto
Determines whether or not the color domain is
computed with respect to the input data (here
            `value`) or the bounds set in `cmin` and `cmax`.
Defaults to `false` when `cmin` and `cmax` are
set by the user.
cmax
Sets the upper bound of the color domain. Value
should have the same units as `value` and if
set, `cmin` must be set as well.
cmid
Sets the mid-point of the color domain by
scaling `cmin` and/or `cmax` to be equidistant
to this point. Value should have the same units
as `value`. Has no effect when `cauto` is
`false`.
cmin
Sets the lower bound of the color domain. Value
should have the same units as `value` and if
set, `cmax` must be set as well.
coloraxis
Sets a reference to a shared color axis.
References to these shared color axes are
"coloraxis", "coloraxis2", "coloraxis3", etc.
Settings for these shared color axes are set in
the layout, under `layout.coloraxis`,
`layout.coloraxis2`, etc. Note that multiple
color scales can be linked to the same color
axis.
colorbar
:class:`plotly.graph_objects.volume.ColorBar`
instance or dict with compatible properties
colorscale
Sets the colorscale. The colorscale must be an
array containing arrays mapping a normalized
value to an rgb, rgba, hex, hsl, hsv, or named
color string. At minimum, a mapping for the
            lowest (0) and highest (1) values is required.
For example, `[[0, 'rgb(0,0,255)'], [1,
'rgb(255,0,0)']]`. To control the bounds of the
colorscale in color space, use `cmin` and
            `cmax`. Alternatively, `colorscale` may be a
            palette name string of the following list:
            Blackbody, Bluered, Blues, Cividis, Earth,
            Electric, Greens, Greys, Hot, Jet, Picnic,
            Portland, Rainbow, RdBu, Reds, Viridis,
            YlGnBu, YlOrRd.
contour
:class:`plotly.graph_objects.volume.Contour`
instance or dict with compatible properties
customdata
Assigns extra data each datum. This may be
useful when listening to hover, click and
selection events. Note that, "scatter" traces
also appends customdata items in the markers
DOM elements
customdatasrc
Sets the source reference on Chart Studio Cloud
for `customdata`.
flatshading
Determines whether or not normal smoothing is
applied to the meshes, creating meshes with an
angular, low-poly look via flat reflections.
hoverinfo
Determines which trace information appear on
hover. If `none` or `skip` are set, no
information is displayed upon hovering. But, if
`none` is set, click and hover events are still
fired.
hoverinfosrc
Sets the source reference on Chart Studio Cloud
for `hoverinfo`.
hoverlabel
:class:`plotly.graph_objects.volume.Hoverlabel`
instance or dict with compatible properties
hovertemplate
Template string used for rendering the
            information that appears in the hover box. Note that
this will override `hoverinfo`. Variables are
inserted using %{variable}, for example "y:
%{y}" as well as %{xother}, {%_xother},
{%_xother_}, {%xother_}. When showing info for
several points, "xother" will be added to those
with different x positions from the first
point. An underscore before or after
"(x|y)other" will add a space on that side,
only when this field is shown. Numbers are
formatted using d3-format's syntax
%{variable:d3-format}, for example "Price:
%{y:$.2f}". https://github.com/d3/d3-
format/tree/v1.4.5#d3-format for details on the
formatting syntax. Dates are formatted using
d3-time-format's syntax %{variable|d3-time-
format}, for example "Day: %{2019-01-01|%A}".
https://github.com/d3/d3-time-
format/tree/v2.2.3#locale_format for details on
the date formatting syntax. The variables
available in `hovertemplate` are the ones
emitted as event data described at this link
https://plotly.com/javascript/plotlyjs-
events/#event-data. Additionally, every
attributes that can be specified per-point (the
ones that are `arrayOk: true`) are available.
Anything contained in tag `<extra>` is
displayed in the secondary box, for example
"<extra>{fullData.name}</extra>". To hide the
secondary box completely, use an empty tag
`<extra></extra>`.
hovertemplatesrc
Sets the source reference on Chart Studio Cloud
for `hovertemplate`.
hovertext
Same as `text`.
hovertextsrc
Sets the source reference on Chart Studio Cloud
for `hovertext`.
ids
Assigns id labels to each datum. These ids for
object constancy of data points during
animation. Should be an array of strings, not
numbers or any other type.
idssrc
Sets the source reference on Chart Studio Cloud
for `ids`.
isomax
Sets the maximum boundary for iso-surface plot.
isomin
Sets the minimum boundary for iso-surface plot.
legend
Sets the reference to a legend to show this
trace in. References to these legends are
"legend", "legend2", "legend3", etc. Settings
for these legends are set in the layout, under
`layout.legend`, `layout.legend2`, etc.
legendgroup
Sets the legend group for this trace. Traces
and shapes part of the same legend group
hide/show at the same time when toggling legend
items.
legendgrouptitle
            :class:`plotly.graph_objects.volume.Legendgrouptitle`
            instance or dict with compatible properties
legendrank
Sets the legend rank for this trace. Items and
groups with smaller ranks are presented on
top/left side while with "reversed"
`legend.traceorder` they are on bottom/right
side. The default legendrank is 1000, so that
you can use ranks less than 1000 to place
certain items before all unranked items, and
ranks greater than 1000 to go after all
unranked items. When having unranked or equal
rank items shapes would be displayed after
traces i.e. according to their order in data
and layout.
legendwidth
Sets the width (in px or fraction) of the
legend for this trace.
lighting
:class:`plotly.graph_objects.volume.Lighting`
instance or dict with compatible properties
lightposition
            :class:`plotly.graph_objects.volume.Lightposition`
            instance or dict with compatible properties
meta
Assigns extra meta information associated with
this trace that can be used in various text
attributes. Attributes such as trace `name`,
graph, axis and colorbar `title.text`,
            annotation `text`, `rangeselector`,
            `updatemenus` and `sliders` `label` text all
support `meta`. To access the trace `meta`
values in an attribute in the same trace,
simply use `%{meta[i]}` where `i` is the index
or key of the `meta` item in question. To
access trace `meta` in layout attributes, use
            `%{data[n].meta[i]}` where `i` is the index or
key of the `meta` and `n` is the trace index.
metasrc
Sets the source reference on Chart Studio Cloud
for `meta`.
name
Sets the trace name. The trace name appears as
the legend item and on hover.
opacity
Sets the opacity of the surface. Please note
that in the case of using high `opacity` values
for example a value greater than or equal to
0.5 on two surfaces (and 0.25 with four
surfaces), an overlay of multiple transparent
surfaces may not perfectly be sorted in depth
by the webgl API. This behavior may be improved
in the near future and is subject to change.
opacityscale
Sets the opacityscale. The opacityscale must be
an array containing arrays mapping a normalized
value to an opacity value. At minimum, a
mapping for the lowest (0) and highest (1)
            values is required. For example, `[[0, 1],
[0.5, 0.2], [1, 1]]` means that higher/lower
values would have higher opacity values and
            those in the middle would be more transparent.
Alternatively, `opacityscale` may be a palette
name string of the following list: 'min',
'max', 'extremes' and 'uniform'. The default is
'uniform'.
reversescale
Reverses the color mapping if true. If true,
`cmin` will correspond to the last color in the
array and `cmax` will correspond to the first
color.
scene
Sets a reference between this trace's 3D
coordinate system and a 3D scene. If "scene"
(the default value), the (x,y,z) coordinates
refer to `layout.scene`. If "scene2", the
(x,y,z) coordinates refer to `layout.scene2`,
and so on.
showlegend
Determines whether or not an item corresponding
to this trace is shown in the legend.
showscale
Determines whether or not a colorbar is
displayed for this trace.
slices
:class:`plotly.graph_objects.volume.Slices`
instance or dict with compatible properties
spaceframe
:class:`plotly.graph_objects.volume.Spaceframe`
instance or dict with compatible properties
stream
:class:`plotly.graph_objects.volume.Stream`
instance or dict with compatible properties
surface
:class:`plotly.graph_objects.volume.Surface`
instance or dict with compatible properties
text
Sets the text elements associated with the
vertices. If trace `hoverinfo` contains a
"text" flag and "hovertext" is not set, these
elements will be seen in the hover labels.
textsrc
Sets the source reference on Chart Studio Cloud
for `text`.
uid
            Assign an id to this trace. Use this to provide
object constancy between traces during
animations and transitions.
uirevision
Controls persistence of some user-driven
changes to the trace: `constraintrange` in
`parcoords` traces, as well as some `editable:
true` modifications such as `name` and
`colorbar.title`. Defaults to
`layout.uirevision`. Note that other user-
driven trace attribute changes are controlled
by `layout` attributes: `trace.visible` is
controlled by `layout.legend.uirevision`,
`selectedpoints` is controlled by
`layout.selectionrevision`, and
`colorbar.(x|y)` (accessible with `config:
{editable: true}`) is controlled by
`layout.editrevision`. Trace changes are
tracked by `uid`, which only falls back on
trace index if no `uid` is provided. So if your
app can add/remove traces before the end of the
`data` array, such that the same trace has a
different index, you can still preserve user-
driven changes if you give each trace a `uid`
that stays with it as it moves.
value
Sets the 4th dimension (value) of the vertices.
valuehoverformat
            Sets the hover text formatting rule for `value`
using d3 formatting mini-languages which are
very similar to those in Python. For numbers,
see: https://github.com/d3/d3-
            format/tree/v1.4.5#d3-format. By default the
values are formatted using generic number
format.
valuesrc
Sets the source reference on Chart Studio Cloud
for `value`.
visible
Determines whether or not this trace is
visible. If "legendonly", the trace is not
drawn, but can appear as a legend item
(provided that the legend itself is visible).
x
Sets the X coordinates of the vertices on X
axis.
xhoverformat
            Sets the hover text formatting rule for `x`
using d3 formatting mini-languages which are
very similar to those in Python. For numbers,
see: https://github.com/d3/d3-
format/tree/v1.4.5#d3-format. And for dates
see: https://github.com/d3/d3-time-
format/tree/v2.2.3#locale_format. We add two
items to d3's date formatter: "%h" for half of
the year as a decimal number as well as "%{n}f"
for fractional seconds with n digits. For
example, *2016-10-13 09:15:23.456* with
tickformat "%H~%M~%S.%2f" would display
            *09~15~23.46*. By default the values are
formatted using `xaxis.hoverformat`.
xsrc
Sets the source reference on Chart Studio Cloud
for `x`.
y
Sets the Y coordinates of the vertices on Y
axis.
yhoverformat
            Sets the hover text formatting rule for `y`
using d3 formatting mini-languages which are
very similar to those in Python. For numbers,
see: https://github.com/d3/d3-
format/tree/v1.4.5#d3-format. And for dates
see: https://github.com/d3/d3-time-
format/tree/v2.2.3#locale_format. We add two
items to d3's date formatter: "%h" for half of
the year as a decimal number as well as "%{n}f"
for fractional seconds with n digits. For
example, *2016-10-13 09:15:23.456* with
tickformat "%H~%M~%S.%2f" would display
            *09~15~23.46*. By default the values are
formatted using `yaxis.hoverformat`.
ysrc
Sets the source reference on Chart Studio Cloud
for `y`.
z
Sets the Z coordinates of the vertices on Z
axis.
zhoverformat
            Sets the hover text formatting rule for `z`
using d3 formatting mini-languages which are
very similar to those in Python. For numbers,
see: https://github.com/d3/d3-
format/tree/v1.4.5#d3-format. And for dates
see: https://github.com/d3/d3-time-
format/tree/v2.2.3#locale_format. We add two
items to d3's date formatter: "%h" for half of
the year as a decimal number as well as "%{n}f"
for fractional seconds with n digits. For
example, *2016-10-13 09:15:23.456* with
tickformat "%H~%M~%S.%2f" would display
            *09~15~23.46*. By default the values are
formatted using `zaxis.hoverformat`.
zsrc
Sets the source reference on Chart Studio Cloud
for `z`.
""",
),
**kwargs,
)
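The `hovertemplate` description in the docstring above documents a `%{variable}` placeholder syntax. A toy renderer for that idea can be sketched in pure Python; note this interprets the format suffix with Python's format mini-language rather than d3's, and is illustrative only, since the real substitution happens client-side in plotly.js:

```python
import re

def render_hovertemplate(template, data):
    """Toy renderer for %{variable} placeholders.

    Supports plain %{name} and a numeric format suffix such as
    %{name:.2f}, interpreted with Python's format mini-language
    (not d3's). Illustrative sketch only.
    """
    def substitute(match):
        name, fmt = match.group(1), match.group(2)
        value = data[name]
        # Apply the format spec if one was given, else str().
        return format(value, fmt) if fmt else str(value)

    return re.sub(r"%\{(\w+)(?::([^}]*))?\}", substitute, template)

assert render_hovertemplate("y: %{y}", {"y": 3}) == "y: 3"
assert render_hovertemplate("Price: $%{y:.2f}", {"y": 1.5}) == "Price: $1.50"
```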
# Source: plotly/plotly.py, packages/python/plotly/plotly/validators/_volume.py
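As the `colorscale` description in the docstring above notes, a colorscale is a list of `[normalized_value, color]` stops that must cover at least 0 and 1. The linear interpolation between such stops can be sketched in pure Python; this is an illustrative sketch only, since the actual color mapping is performed by plotly.js:

```python
import re

def interp_colorscale(scale, t):
    """Linearly interpolate an rgb color at normalized position t in [0, 1].

    `scale` is a list of [position, 'rgb(r,g,b)'] stops covering 0 and 1,
    the shape documented for the `colorscale` attribute. Sketch only.
    """
    def parse(color):
        # Extract the integer channels from an 'rgb(r,g,b)' string.
        return [int(v) for v in re.findall(r"\d+", color)]

    for (p0, c0), (p1, c1) in zip(scale, scale[1:]):
        if p0 <= t <= p1:
            f = 0.0 if p1 == p0 else (t - p0) / (p1 - p0)
            channels = [
                round(a + (b - a) * f) for a, b in zip(parse(c0), parse(c1))
            ]
            return "rgb({},{},{})".format(*channels)
    raise ValueError("t outside the colorscale domain")

# The example from the docstring: blue at 0, red at 1.
scale = [[0, "rgb(0,0,255)"], [1, "rgb(255,0,0)"]]
assert interp_colorscale(scale, 0.0) == "rgb(0,0,255)"
assert interp_colorscale(scale, 0.5) == "rgb(128,0,128)"
```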