{
"filename": "chap2.md",
"repo_name": "federicomarulli/CosmoBolognaLib",
"repo_path": "CosmoBolognaLib_extracted/CosmoBolognaLib-master/External/CLASS/doc/input/chap2.md",
"type": "Markdown"
}
|
Where to find information and documentation on CLASS?
======================================================
Author: Julien Lesgourgues
* __For what the code can actually compute__: all possible input parameters, all coded cosmological models, all functionalities, all observables, etc.: read the file `explanatory.ini` in the main `CLASS` directory: it is THE reference file where we keep track of all possible input and the definition of all input parameters. For that reason we recommend leaving it unchanged and working with copies of it, or with short input files written from scratch.
* __For the structure, style, and concrete aspects of the code__: this documentation, especially the `CLASS overview` chapter (the extensive automatically-generated part of this documentation is more for advanced users); plus the slides of our `CLASS` lectures, for instance those from New York 2019 available at
`https://lesgourg.github.io/class-tour-NewYork.html`
An updated overview of available `CLASS` lecture slides is always available at
`http://lesgourg.github.io/courses.html`
in the section `Courses on numerical tools`.
* __For the python wrapper of `CLASS`__: at the moment, the best are the "Usage I" and "Usage II" slides of the New York 2019 course,
`https://lesgourg.github.io/class-tour-NewYork.html`
* __For the physics and equations used in the code__: mainly, the following papers:
- *Cosmological perturbation theory in the synchronous and conformal Newtonian gauges*
C. P. Ma and E. Bertschinger.
http://arxiv.org/abs/astro-ph/9506072
doi:10.1086/176550
Astrophys. J. __455__, 7 (1995)
- *The Cosmic Linear Anisotropy Solving System (CLASS) II: Approximation schemes*
D. Blas, J. Lesgourgues and T. Tram.
http://arxiv.org/abs/1104.2933 [astro-ph.CO]
doi:10.1088/1475-7516/2011/07/034
JCAP __1107__, 034 (2011)
- *The Cosmic Linear Anisotropy Solving System (CLASS) IV: efficient implementation of non-cold relics*
J. Lesgourgues and T. Tram.
http://arxiv.org/abs/1104.2935 [astro-ph.CO]
doi:10.1088/1475-7516/2011/09/032
JCAP __1109__, 032 (2011)
- *Optimal polarisation equations in FLRW universes*
T. Tram and J. Lesgourgues.
http://arxiv.org/abs/1305.3261 [astro-ph.CO]
doi:10.1088/1475-7516/2013/10/002
JCAP __1310__, 002 (2013)
- *Fast and accurate CMB computations in non-flat FLRW universes*
J. Lesgourgues and T. Tram.
http://arxiv.org/abs/1312.2697 [astro-ph.CO]
doi:10.1088/1475-7516/2014/09/032
JCAP __1409__, no. 09, 032 (2014)
- *The CLASSgal code for Relativistic Cosmological Large Scale Structure*
E. Di Dio, F. Montanari, J. Lesgourgues and R. Durrer.
http://arxiv.org/abs/1307.1459 [astro-ph.CO]
doi:10.1088/1475-7516/2013/11/044
JCAP __1311__, 044 (2013)
- *The synergy between CMB spectral distortions and anisotropies*
M. Lucca, N. Schöneberg, D. C. Hooper, J. Lesgourgues, J. Chluba.
http://arxiv.org/abs/1910.04619 [astro-ph.CO]
JCAP 02 (2020) 026
- *Optimal Boltzmann hierarchies with nonvanishing spatial curvature*
C. Pitrou, T. S. Pereira, J. Lesgourgues,
http://arxiv.org/abs/2005.12119 [astro-ph.CO]
Phys.Rev.D 102 (2020) 2, 023511
plus some LaTeX notes on specific sectors:
- *Equations for perturbed recombination*
(can be turned on optionally by the user since v2.1.0)
L. Voruz.
http://lesgourg.github.io/class_public/perturbed_recombination.pdf
- *PPF formalism in Newtonian and synchronous gauge*
(used by default for the fluid perturbations since v2.6.0)
T. Tram.
http://lesgourg.github.io/class_public/PPF_formalism.pdf
|
{
"filename": "_visible.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/scattergl/error_y/_visible.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class VisibleValidator(_plotly_utils.basevalidators.BooleanValidator):
def __init__(
self, plotly_name="visible", parent_name="scattergl.error_y", **kwargs
):
super(VisibleValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "calc"),
**kwargs,
)
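The validator above follows a simple template pattern: the subclass only pins down the property name, its parent path, and a default `edit_type`, while the base class does the actual validation. A minimal self-contained sketch of that pattern (the `BooleanValidator` here is a hypothetical stand-in, not plotly's real base class):

```python
class BooleanValidator:
    """Hypothetical stand-in for _plotly_utils.basevalidators.BooleanValidator."""

    def __init__(self, plotly_name, parent_name, edit_type, **kwargs):
        self.plotly_name = plotly_name
        self.parent_name = parent_name
        self.edit_type = edit_type

    def validate_coerce(self, value):
        # the base class owns the type check; subclasses never repeat it
        if not isinstance(value, bool):
            raise ValueError(
                f"{self.parent_name}.{self.plotly_name} must be a bool"
            )
        return value


class VisibleValidator(BooleanValidator):
    # the subclass only fixes name, parent path, and the default edit_type
    def __init__(
        self, plotly_name="visible", parent_name="scattergl.error_y", **kwargs
    ):
        super().__init__(
            plotly_name,
            parent_name,
            edit_type=kwargs.pop("edit_type", "calc"),
            **kwargs,
        )


v = VisibleValidator()
```

The `kwargs.pop("edit_type", "calc")` idiom lets callers override the default while still forwarding any remaining keyword arguments to the base class unchanged.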
|
{
"filename": "_fill.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/volume/caps/y/_fill.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class FillValidator(_plotly_utils.basevalidators.NumberValidator):
def __init__(self, plotly_name="fill", parent_name="volume.caps.y", **kwargs):
super(FillValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "calc"),
max=kwargs.pop("max", 1),
min=kwargs.pop("min", 0),
**kwargs,
)
|
{
"filename": "__init__.py",
"repo_name": "NannyML/nannyml",
"repo_path": "nannyml_extracted/nannyml-main/nannyml/plots/blueprints/__init__.py",
"type": "Python"
}
|
# Author: Niels Nuyttens <niels@nannyml.com>
#
# License: Apache Software License 2.0
|
{
"filename": "tensor_utils.py",
"repo_name": "freelunchtheorem/Conditional_Density_Estimation",
"repo_path": "Conditional_Density_Estimation_extracted/Conditional_Density_Estimation-master/cde/utils/tf_utils/tensor_utils.py",
"type": "Python"
}
|
import operator
import numpy as np
def flatten_tensors(tensors):
    """Concatenate a list of arrays into a single flat 1-D array."""
    if len(tensors) > 0:
        return np.concatenate([np.reshape(x, [-1]) for x in tensors])
    else:
        return np.asarray([])
def unflatten_tensors(flattened, tensor_shapes):
    """Split a flat array back into a list of arrays with the given shapes."""
    tensor_sizes = list(map(int, map(np.prod, tensor_shapes)))
    indices = np.cumsum(tensor_sizes)[:-1]
    return [np.reshape(pair[0], pair[1]) for pair in zip(np.split(flattened, indices), tensor_shapes)]
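A quick round-trip check of the two helpers, reproduced here so the sketch runs standalone:

```python
import numpy as np

# The two helpers from tensor_utils.py, repeated so this example is self-contained.
def flatten_tensors(tensors):
    if len(tensors) > 0:
        return np.concatenate([np.reshape(x, [-1]) for x in tensors])
    return np.asarray([])

def unflatten_tensors(flattened, tensor_shapes):
    tensor_sizes = list(map(int, map(np.prod, tensor_shapes)))
    indices = np.cumsum(tensor_sizes)[:-1]
    return [np.reshape(part, shape)
            for part, shape in zip(np.split(flattened, indices), tensor_shapes)]

# Round trip: flatten two arrays of different shapes, then recover both exactly.
tensors = [np.arange(6.0).reshape(2, 3), np.ones(4)]
flat = flatten_tensors(tensors)
restored = unflatten_tensors(flat, [t.shape for t in tensors])
```

`np.cumsum(...)[:-1]` gives the interior split points, which is exactly the form `np.split` expects.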
|
{
"filename": "_bgcolorsrc.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/cone/hoverlabel/_bgcolorsrc.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class BgcolorsrcValidator(_plotly_utils.basevalidators.SrcValidator):
def __init__(
self, plotly_name="bgcolorsrc", parent_name="cone.hoverlabel", **kwargs
):
super(BgcolorsrcValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "none"),
**kwargs,
)
|
{
"filename": "load.py",
"repo_name": "spedas/pyspedas",
"repo_path": "pyspedas_extracted/pyspedas-master/pyspedas/projects/fast/load.py",
"type": "Python"
}
|
import logging
from pyspedas.utilities.dailynames import dailynames
from pyspedas.utilities.download import download
from pytplot import time_clip as tclip
from pytplot import cdf_to_tplot
from .config import CONFIG
def load(
trange=["1996-12-01", "1996-12-02"],
instrument="dcf",
datatype="",
level="l2",
prefix="",
suffix="",
get_support_data=False,
varformat=None,
varnames=[],
downloadonly=False,
notplot=False,
no_update=False,
time_clip=False,
force_download=False,
):
"""
Load FAST data into tplot variables.
Parameters
----------
trange : list of str, optional
Time range of interest [starttime, endtime] with the format
['YYYY-MM-DD','YYYY-MM-DD'] or to specify more or less than a day
['YYYY-MM-DD/hh:mm:ss','YYYY-MM-DD/hh:mm:ss'].
Default is ["1996-12-01", "1996-12-02"].
instrument : str or list of str, optional
Type of instrument.
Values can be: 'dcf', 'acf', 'esa', 'teams', 'all'.
If 'all' is specified, all instruments will be loaded.
Default is 'dcf'.
datatype : str, optional
Data type to load. Depends on the instrument.
For 'esa' valid options are: 'eeb', 'ees', 'ieb', 'ies'.
For all other instruments, this keyword is ignored.
Default is ''.
level : str, optional
Data level to load. Depends on the instrument.
For 'dcf' and 'teams' valid options are: 'l2', 'k0'.
For all other instruments, this keyword is ignored.
Default is 'l2'.
prefix : str, optional
The tplot variable names will be given this prefix.
Default is ''.
In all cases an instrument-specific prefix ('fast_<instrument>_') is applied;
any user-supplied prefix is prepended in front of it.
suffix : str, optional
The tplot variable names will be given this suffix.
Default is '' (no suffix is added).
get_support_data : bool, optional
Data with an attribute "VAR_TYPE" with a value of "support_data" will be loaded into tplot.
Default is False; only loads in data with a "VAR_TYPE" attribute of "data".
varformat : str, optional
The file variable formats to load into tplot.
Wildcard character "*" is accepted.
Default is all variables are loaded in.
varnames : list of str, optional
List of variable names to load.
Default is all data variables are loaded.
downloadonly : bool, optional
Set this flag to download the CDF files, but not load them into tplot variables.
Default is False.
notplot : bool, optional
Return the data in hash tables instead of creating tplot variables.
Default is False.
no_update : bool, optional
If set, only load data from your local cache.
Default is False.
time_clip : bool, optional
Time clip the variables to exactly the range specified in the trange keyword.
Default is False.
force_download : bool, optional
Download file even if local version is more recent than server version.
Default is False.
Returns
-------
list of str/dictionary
List of tplot variables created.
If downloadonly is set to True, returns a list of the downloaded files.
If notplot is set to True, returns a dictionary of the data loaded.
Examples
--------
>>> import pyspedas
>>> from pytplot import tplot
>>> dcf_vars = pyspedas.projects.fast.dcf(trange=["1996-12-01", "1996-12-02"])
>>> tplot(['fast_dcf_DeltaB_GEI'])
>>> acf_vars = pyspedas.projects.fast.acf(trange=["1996-12-01", "1996-12-02"])
>>> tplot('fast_acf_HF_E_SPEC')
>>> esa_vars = pyspedas.projects.fast.esa(trange=["1996-12-01", "1996-12-02"])
>>> tplot('fast_esa_eflux')
>>> teams_vars = pyspedas.projects.fast.teams(trange=["2005-08-01", "2005-08-02"])
>>> tplot(['fast_teams_helium_omni_flux'])
"""
out_files = []
out_vars = []
file_resolution = 24 * 3600.0
if (
trange is None
or not isinstance(trange, list)
or len(trange) != 2
or trange[0] > trange[1]
):
logging.error("Invalid trange specified.")
return out_vars
if not isinstance(instrument, list):
instrument = [instrument]
if "all" in instrument:
instrument = ["dcf", "acf", "esa", "teams"]
pathformat = ""
for instr in instrument:
if instr == "dcf":
# levels are l2 (1996-1998) or k0 (1996-2002)
if level == "k0":
pathformat = "dcf/k0/%Y/fa_k0_dcf_%Y%m%d_v??.cdf"
else:
pathformat = "dcf/l2/dcb/%Y/%m/fast_hr_dcb_%Y%m%d%H????_?????_v??.cdf"
file_resolution = 3600.0
elif instr == "acf":
# level k0 only (1996-2002)
pathformat = "acf/k0/%Y/fa_k0_acf_%Y%m%d_v??.cdf"
elif instr == "esa":
# level l2 only
# datatypes are eeb, ees, ieb, ies (1996-2009)
if datatype not in ["eeb", "ees", "ieb", "ies"]:
datatype = "eeb" # default
pathformat = (
"esa/l2/"
+ datatype
+ "/%Y/%m/fa_esa_l2_"
+ datatype
+ "_%Y%m%d??????_*_v??.cdf"
)
elif instr == "teams":
# levels are l2 (1996-2009) or k0 (1996-2009)
if level == "k0":
# no datatype for k0 data
pathformat = "teams/k0/%Y/fa_k0_tms_%Y%m%d_v??.cdf"
else:
# for l2 data, the only available datatype is "pa"
pathformat = "teams/l2/pa/%Y/%m/fast_teams_pa_l2_%Y%m%d_?????_v??.cdf"
else:
logging.error("Invalid instrument type: " + instr)
continue
# If prefix is not empty, add it to the pre variable
pre = "fast_" + instr + "_"
if prefix != "":
pre = prefix + pre
# find the full remote path names using the trange
remote_names = dailynames(
file_format=pathformat, trange=trange, res=file_resolution
)
# download the files
files = download(
remote_file=remote_names,
remote_path=CONFIG["remote_data_dir"],
local_path=CONFIG["local_data_dir"],
no_download=no_update,
force_download=force_download,
)
if files is not None:
if not isinstance(files, list):
files = [files]
out_files.extend(files)
if not downloadonly and len(files) > 0:
# Read the files into tplot variables
vars = cdf_to_tplot(
files,
prefix=pre,
suffix=suffix,
get_support_data=get_support_data,
varformat=varformat,
varnames=varnames,
notplot=notplot,
)
if not isinstance(vars, list):
vars = [vars]
out_vars.extend(vars)
out_files = list(set(out_files))
out_files = sorted(out_files)
if not downloadonly and len(out_files) < 1:
logging.info("No files were downloaded.")
return out_files # return an empty list
if downloadonly:
return out_files
if notplot:
if len(out_vars) < 1:
logging.info("No variables were loaded.")
return {}
else:
return out_vars[0] # return data in hash tables
if time_clip:
tclip(out_vars, trange[0], trange[1], suffix="", overwrite=True)
return out_vars
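The `pathformat` strings above mix strftime tokens (`%Y`, `%m`, `%d`) with shell-style wildcards (`?`, `*`): `dailynames()` expands the strftime part over the requested trange, one name per file interval, leaving the wildcards for the downloader to match against version numbers. A minimal sketch of that expansion (`daily_names` here is a hypothetical simplification; the real helper also supports sub-daily resolutions via its `res` argument):

```python
from datetime import datetime, timedelta

def daily_names(file_format, start, end):
    """Expand a strftime-style file pattern into one name per day in [start, end)."""
    names, t = [], start
    while t < end:
        names.append(t.strftime(file_format))  # fills %Y/%m/%d, keeps ? and * literal
        t += timedelta(days=1)
    return names

# One day of FAST ACF k0 data -> one remote file name with the version wildcarded.
names = daily_names("acf/k0/%Y/fa_k0_acf_%Y%m%d_v??.cdf",
                    datetime(1996, 12, 1), datetime(1996, 12, 2))
# names == ['acf/k0/1996/fa_k0_acf_19961201_v??.cdf']
```

This is also why the dcf l2 branch sets `file_resolution = 3600.0`: its pattern contains `%H`, so one name must be generated per hour rather than per day.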
|
{
"filename": "_showticksuffix.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/mesh3d/colorbar/_showticksuffix.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class ShowticksuffixValidator(_plotly_utils.basevalidators.EnumeratedValidator):
def __init__(
self, plotly_name="showticksuffix", parent_name="mesh3d.colorbar", **kwargs
):
super(ShowticksuffixValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "colorbars"),
values=kwargs.pop("values", ["all", "first", "last", "none"]),
**kwargs,
)
|
{
"filename": "test_ui_url.py",
"repo_name": "PrefectHQ/prefect",
"repo_path": "prefect_extracted/prefect-main/tests/events/jinja_filters/test_ui_url.py",
"type": "Python"
}
|
from datetime import timedelta
from uuid import uuid4
import jinja2
import pytest
from pendulum import DateTime
from prefect.events.schemas.events import ReceivedEvent, Resource
from prefect.server.events.jinja_filters import ui_url
from prefect.server.events.schemas.automations import Automation, EventTrigger, Posture
from prefect.server.schemas.core import (
Deployment,
Flow,
FlowRun,
TaskRun,
WorkPool,
WorkQueue,
)
from prefect.settings import PREFECT_UI_URL, temporary_settings
template_environment = jinja2.Environment()
template_environment.filters["ui_url"] = ui_url
MOCK_PREFECT_UI_URL = "http://localhost:3000"
@pytest.fixture(autouse=True)
def mock_prefect_ui_url():
with temporary_settings({PREFECT_UI_URL: MOCK_PREFECT_UI_URL}):
yield
@pytest.fixture
async def chonk_party() -> Automation:
return Automation(
name="If my lilies get nibbled, tell me about it",
description="Send an email notification whenever the lilies are nibbled",
enabled=True,
trigger=EventTrigger(
expect={"animal.ingested"},
match_related={
"prefect.resource.role": "meal",
"genus": "Hemerocallis",
"species": "fulva",
},
posture=Posture.Reactive,
threshold=0,
within=timedelta(seconds=30),
),
actions=[{"type": "do-nothing"}],
)
@pytest.fixture
def woodchonk_walked(start_of_test: DateTime) -> ReceivedEvent:
return ReceivedEvent(
occurred=start_of_test + timedelta(microseconds=2),
received=start_of_test + timedelta(microseconds=2),
event="animal.walked",
resource={
"kingdom": "Animalia",
"phylum": "Chordata",
"class": "Mammalia",
"order": "Rodentia",
"family": "Sciuridae",
"genus": "Marmota",
"species": "monax",
"prefect.resource.id": "woodchonk",
},
id=uuid4(),
)
def test_automation_url(chonk_party: Automation):
template = template_environment.from_string("{{ automation|ui_url }}")
rendered = template.render({"automation": chonk_party})
assert rendered == (
"http://localhost:3000" f"/automations/automation/{chonk_party.id}"
)
def test_deployment_resource_url(chonk_party: Automation):
deployment_id = uuid4()
template = template_environment.from_string("{{ deployment_resource|ui_url}}")
rendered = template.render(
{
"automation": chonk_party,
"deployment_resource": Resource.model_validate(
{"prefect.resource.id": f"prefect.deployment.{deployment_id}"}
),
}
)
assert rendered == (f"http://localhost:3000/deployments/deployment/{deployment_id}")
def test_flow_resource_url(chonk_party: Automation):
flow_id = uuid4()
template = template_environment.from_string("{{ flow_resource|ui_url }}")
rendered = template.render(
{
"automation": chonk_party,
"flow_resource": Resource.model_validate(
{"prefect.resource.id": f"prefect.flow.{flow_id}"}
),
}
)
assert rendered == ("http://localhost:3000" f"/flows/flow/{flow_id}")
def test_flow_run_resource_url(chonk_party: Automation):
flow_run_id = uuid4()
template = template_environment.from_string("{{ flow_run_resource|ui_url }}")
rendered = template.render(
{
"automation": chonk_party,
"flow_run_resource": Resource.model_validate(
{"prefect.resource.id": f"prefect.flow-run.{flow_run_id}"}
),
}
)
assert rendered == f"http://localhost:3000/runs/flow-run/{flow_run_id}"
def test_task_run_resource_url(chonk_party: Automation):
task_run_id = uuid4()
template = template_environment.from_string("{{ task_run_resource|ui_url }}")
rendered = template.render(
{
"automation": chonk_party,
"task_run_resource": Resource.model_validate(
{"prefect.resource.id": f"prefect.task-run.{task_run_id}"}
),
}
)
assert rendered == f"http://localhost:3000/runs/task-run/{task_run_id}"
def test_work_queue_resource_url(chonk_party: Automation):
work_queue_id = uuid4()
template = template_environment.from_string("{{ work_queue_resource|ui_url }}")
rendered = template.render(
{
"automation": chonk_party,
"work_queue_resource": Resource.model_validate(
{"prefect.resource.id": f"prefect.work-queue.{work_queue_id}"}
),
}
)
assert rendered == f"http://localhost:3000/work-queues/work-queue/{work_queue_id}"
def test_work_pool_resource_url(chonk_party: Automation):
template = template_environment.from_string("{{ work_pool_resource|ui_url }}")
rendered = template.render(
{
"automation": chonk_party,
"work_pool_resource": Resource.model_validate(
{
"prefect.resource.id": f"prefect.work-pool.{uuid4()}",
"prefect.resource.name": "hi-there",
}
),
}
)
assert rendered == "http://localhost:3000/work-pools/work-pool/hi-there"
def test_deployment_model(chonk_party: Automation):
deployment = Deployment(id=uuid4(), name="the-deployment", flow_id=uuid4())
template = template_environment.from_string("{{ deployment|ui_url }}")
rendered = template.render({"automation": chonk_party, "deployment": deployment})
assert rendered == f"http://localhost:3000/deployments/deployment/{deployment.id}"
def test_flow_model(chonk_party: Automation):
flow = Flow(id=uuid4(), name="the-flow")
template = template_environment.from_string("{{ flow|ui_url }}")
rendered = template.render({"automation": chonk_party, "flow": flow})
assert rendered == f"http://localhost:3000/flows/flow/{flow.id}"
def test_flow_run_model(chonk_party: Automation):
flow_run = FlowRun(id=uuid4(), name="the-flow-run", flow_id=uuid4())
template = template_environment.from_string("{{ flow_run|ui_url }}")
rendered = template.render({"automation": chonk_party, "flow_run": flow_run})
assert rendered == f"http://localhost:3000/runs/flow-run/{flow_run.id}"
def test_task_run_model(chonk_party: Automation):
task_run = TaskRun(
id=uuid4(),
flow_run_id=uuid4(),
name="the-task-run",
task_key="key123",
dynamic_key="a",
)
template = template_environment.from_string("{{ task_run|ui_url }}")
rendered = template.render({"automation": chonk_party, "task_run": task_run})
assert rendered == f"http://localhost:3000/runs/task-run/{task_run.id}"
def test_work_queue_model(chonk_party: Automation):
work_queue = WorkQueue(
id=uuid4(), name="the-work-queue", work_pool_id=uuid4(), priority=1
)
template = template_environment.from_string("{{ work_queue|ui_url }}")
rendered = template.render({"automation": chonk_party, "work_queue": work_queue})
assert rendered == f"http://localhost:3000/work-queues/work-queue/{work_queue.id}"
async def test_work_pool_model(chonk_party: Automation):
work_pool = WorkPool(
id=uuid4(), name="the-work-pool", type="chonk", default_queue_id=uuid4()
)
template = template_environment.from_string("{{ work_pool|ui_url }}")
rendered = template.render({"automation": chonk_party, "work_pool": work_pool})
assert rendered == f"http://localhost:3000/work-pools/work-pool/{work_pool.name}"
|
{
"filename": "test_ellipsoid.py",
"repo_name": "rennehan/yt-swift",
"repo_path": "yt-swift_extracted/yt-swift-main/yt/data_objects/tests/test_ellipsoid.py",
"type": "Python"
}
|
import numpy as np
from numpy.testing import assert_array_less
from yt.testing import fake_random_ds
def setup():
from yt.config import ytcfg
ytcfg["yt", "log_level"] = 50
ytcfg["yt", "internals", "within_testing"] = True
def _difference(x1, x2, dw):
rel = x1 - x2
rel[rel > dw / 2.0] -= dw
rel[rel < -dw / 2.0] += dw
return rel
def test_ellipsoid():
# We decompose in different ways
cs = [
np.array([0.5, 0.5, 0.5]),
np.array([0.1, 0.2, 0.3]),
np.array([0.8, 0.8, 0.8]),
]
np.random.seed(0x4D3D3D3)
for nprocs in [1, 2, 4, 8]:
ds = fake_random_ds(64, nprocs=nprocs)
DW = ds.domain_right_edge - ds.domain_left_edge
min_dx = 2.0 / ds.domain_dimensions
ABC = np.random.random((3, 12)) * 0.1
e0s = np.random.random((3, 12))
tilts = np.random.random(12)
ABC[:, 0] = 0.1
for i in range(12):
for c in cs:
A, B, C = sorted(ABC[:, i], reverse=True)
A = max(A, min_dx[0])
B = max(B, min_dx[1])
C = max(C, min_dx[2])
e0 = e0s[:, i]
tilt = tilts[i]
ell = ds.ellipsoid(c, A, B, C, e0, tilt)
assert_array_less(ell[("index", "radius")], A)
p = np.array([ell["index", ax] for ax in "xyz"])
dot_evec = [np.zeros_like(ell[("index", "radius")]) for i in range(3)]
vecs = [ell._e0, ell._e1, ell._e2]
mags = [ell._A, ell._B, ell._C]
my_c = np.array([c] * p.shape[1]).transpose()
dot_evec = [de.to_ndarray() for de in dot_evec]
mags = [m.to_ndarray() for m in mags]
for ax_i in range(3):
dist = _difference(p[ax_i, :], my_c[ax_i, :], DW[ax_i])
for ax_j in range(3):
dot_evec[ax_j] += dist * vecs[ax_j][ax_i]
dist = 0
for ax_i in range(3):
dist += dot_evec[ax_i] ** 2.0 / mags[ax_i] ** 2.0
assert_array_less(dist, 1.0)
|
{
"filename": "plot2d.py",
"repo_name": "treecode/Bonsai",
"repo_path": "Bonsai_extracted/Bonsai-master/tools/density_estimator/plot2d.py",
"type": "Python"
}
|
import sys
import math
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata
def plot2D(x, y, z, zmin=None, zmax=None, xlim=None, ylim=None, nx=200, ny=200):
    if zmin is None:
        zmin = min(z)
    if zmax is None:
        zmax = max(z)
    # clip the values to the requested range
    z = [min(max(v, zmin), zmax) for v in z]
    xi = np.linspace(min(x), max(x), nx)
    yi = np.linspace(min(y), max(y), ny)
    # interpolate the scattered samples onto a regular grid
    # (mlab.griddata was removed from matplotlib; scipy provides the equivalent)
    zi = griddata((np.asarray(x), np.asarray(y)), np.asarray(z),
                  (xi[None, :], yi[:, None]), method='linear')
    plt.contourf(xi, yi, zi, 32, norm=plt.Normalize(zmin, zmax))
    if xlim is None:
        plt.xlim(-6.0, +6.0)
    else:
        plt.xlim(xlim[0], xlim[1])
    if ylim is None:
        plt.ylim(-6.0, +6.0)
    else:
        plt.ylim(ylim[0], ylim[1])
    plt.colorbar()
    plt.show()
x = []
y = []
w = []
data = sys.stdin.readlines()
zcrd_min = -1
zcrd_max = +1
for line in data:
    wrd = line.split()
    xcrd = float(wrd[1])
    ycrd = float(wrd[2])
    zcrd = float(wrd[3])
    wcrd = float(wrd[4])
    # keep only particles inside the z-slab, plot log10 of the density estimate
    if zcrd_min < zcrd < zcrd_max:
        x.append(xcrd)
        y.append(ycrd)
        w.append(math.log10(wcrd))
print(len(w))
plot2D(x, y, w, xlim=[-100, 100], ylim=[-100, 100])
|
{
"filename": "conf.py",
"repo_name": "mikecokina/elisa",
"repo_path": "elisa_extracted/elisa-master/src/elisa/analytics/params/conf.py",
"type": "Python"
}
|
import numpy as np
from astropy.time import Time
from ... import units as u
from ... atm import atm_file_prefix_to_quantity_list
from ... import settings
PARAM_PARSER = '@'
NUISANCE_PARSER = 'nuisance'
TEMPERATURES = atm_file_prefix_to_quantity_list("temperature", settings.ATM_ATLAS)
METALLICITY = atm_file_prefix_to_quantity_list("metallicity", settings.ATM_ATLAS)
COMPOSITE_FLAT_PARAMS = [
'spot',
'pulsation'
]
DEFAULT_NORMALIZATION_SPOT = {
"longitude": (0, 360),
"latitude": (0, 180),
"angular_radius": (0, 90),
"temperature_factor": (0.1, 3),
}
DEFAULT_NORMALIZATION_NUISANCE = {
"ln_f": (-20, -10),
}
DEFAULT_NORMALIZATION_PULSATION = {
"l": (0, 10),
"m": (-10, 10),
"amplitude": (0, 5000),
"frequency": (0.01, 40),
"start_phase": (0, 360),
"mode_axis_theta": (0, 180),
"mode_axis_phi": (0, 360)
}
DEFAULT_NORMALIZATION_STAR = {
"mass": (0.1, 50),
"t_eff": (np.min(TEMPERATURES), np.max(TEMPERATURES)),
"metallicity": (np.min(METALLICITY), np.max(METALLICITY)),
"surface_potential": (2.0, 50.0),
"albedo": (0, 1),
"gravity_darkening": (0, 1),
"synchronicity": (0.01, 10),
}
DEFAULT_NORMALIZATION_SYSTEM = {
"inclination": (0, 180),
"eccentricity": (0, 0.9999),
"argument_of_periastron": (0, 360),
"gamma": (0, 1e6),
"mass_ratio": (1e-6, 2),
"semi_major_axis": (0.01, 100),
"asini": (0.0001, 100),
"period": (0.001, 100),
"additional_light": (0, 1.0),
"phase_shift": (-0.8, 0.8),
"primary_minimum_time": (Time.now().jd - 365.0, Time.now().jd),
}
SPOTS_PARAMETERS = ['longitude', 'latitude', 'angular_radius', 'temperature_factor']
PULSATIONS_PARAMETERS = ['l', 'm', 'amplitude', 'frequency', 'start_phase', 'mode_axis_phi', 'mode_axis_theta']
DEFAULT_FLOAT_ANGULAR_UNIT = u.deg
DEFAULT_FLOAT_MASS_UNIT = u.solMass
DEFAULT_FLOAT_UNITS = {
'inclination': u.deg,
'eccentricity': None,
'argument_of_periastron': u.deg,
'gamma': u.VELOCITY_UNIT,
'mass': u.solMass,
't_eff': u.TEMPERATURE_UNIT,
'metallicity': None,
'surface_potential': None,
'albedo': None,
'gravity_darkening': None,
'synchronicity': None,
'mass_ratio': None,
'semi_major_axis': u.solRad,
'asini': u.solRad,
'period': u.PERIOD_UNIT,
'primary_minimum_time': u.PERIOD_UNIT,
'additional_light': None,
'phase_shift': None,
# SPOTS
'latitude': u.deg,
'longitude': u.deg,
'angular_radius': u.deg,
'temperature_factor': None,
# PULSATIONS
'l': None,
'm': None,
'amplitude': u.VELOCITY_UNIT,
'frequency': u.FREQUENCY_UNIT,
'start_phase': u.deg,
'mode_axis_theta': u.deg,
'mode_axis_phi': u.deg,
# NUISANCE
'ln_f': None
}
PARAMS_KEY_TEX_MAP = {
'system@argument_of_periastron': '$\\omega$',
'system@inclination': '$i$',
'system@eccentricity': '$e$',
'system@gamma': '$\\gamma$',
'system@mass_ratio': '$q$',
'system@semi_major_axis': '$a$',
'primary@mass': '$M_1$',
'primary@t_eff': '$T_1^{eff}$',
'primary@surface_potential': '$\\Omega_1$',
'primary@gravity_darkening': '$\\beta_1$',
'primary@albedo': '$A_1$',
'primary@metallicity': '$M/H_1$',
'primary@synchronicity': '$F_1$',
'secondary@mass': '$M_2$',
'secondary@t_eff': '$T_2^{eff}$',
'secondary@surface_potential': '$\\Omega_2$',
'secondary@gravity_darkening': '$\\beta_2$',
'secondary@albedo': '$A_2$',
'secondary@metallicity': '$M/H_2$',
'secondary@synchronicity': '$F_2$',
'system@asini': '$a\\sin(i)$',
'system@period': '$period$',
'system@primary_minimum_time': '$T_0$',
'system@additional_light': '$l_{add}$',
'system@phase_shift': 'phase shift',
# SPOTS
'longitude': '$\\varphi$',
'latitude': '$\\vartheta$',
'angular_radius': '$R_{spot}$',
'temperature_factor': '$T_{spot}/T_{eff}$',
# PULSATIONS
'l': '$\\ell$',
'm': '$m$',
'amplitude': '$A$',
'frequency': '$f$',
'start_phase': '$\\Phi_0$',
'mode_axis_phi': '$\\phi_{mode}$',
'mode_axis_theta': '$\\theta_{mode}$',
# NUISANCE
'nuisance@ln_f': "$ln(f)$"
}
|
{
"filename": "__init__.py",
"repo_name": "n-claes/legolas",
"repo_path": "legolas_extracted/legolas-master/post_processing/pylbo/utilities/datfiles/__init__.py",
"type": "Python"
}
|
{
"filename": "utils.py",
"repo_name": "DifferentiableUniverseInitiative/sbi_lens",
"repo_path": "sbi_lens_extracted/sbi_lens-main/sbi_lens/simulator/utils.py",
"type": "Python"
}
|
import itertools
from functools import partial
from pathlib import Path
import astropy.units as u
import jax
import jax.numpy as jnp
import jax_cosmo as jc
import numpy as np
import numpyro
import numpyro.distributions as dist
import tensorflow_probability as tfp
from lenstools import ConvergenceMap
from numpyro import sample
from numpyro.handlers import condition, reparam, seed, trace
from numpyro.infer.reparam import LocScaleReparam, TransformReparam
from sbi_lens.simulator.redshift import subdivide
tfp = tfp.substrates.jax
tfd = tfp.distributions
# Restore aliases removed in recent NumPy versions for dependencies
# (e.g. lenstools) that still reference np.complex / np.float
np.complex = complex
np.float = float
SOURCE_FILE = Path(__file__)
SOURCE_DIR = SOURCE_FILE.parent
ROOT_DIR = SOURCE_DIR.parent.resolve()
DATA_DIR = ROOT_DIR / "data"
def get_samples_and_scores(
model,
key,
batch_size=64,
score_type="density",
thetas=None,
with_noise=True,
):
    """Sample from the model and compute the corresponding score.
Parameters
----------
model : numpyro model
key : PRNG Key
batch_size : int, optional
size of the batch to sample, by default 64
score_type : str, optional
'density' for nabla_theta log p(theta | y, z) or
'conditional' for nabla_theta log p(y | z, theta), by default 'density'
thetas : Array (batch_size, 2), optional
thetas used to sample simulations or
'None' sample thetas from the model, by default None
with_noise : bool, optional
add noise in simulations, by default True
note: if no noise the score is only nabla_theta log p(theta, z)
and log_prob log p(theta, z)
Returns
-------
Array
(log_prob, sample), score
"""
params_name = ["omega_c", "omega_b", "sigma_8", "h_0", "n_s", "w_0"]
def log_prob_fn(theta, key):
cond_model = seed(model, key)
cond_model = condition(
cond_model,
{
"omega_c": theta[0],
"omega_b": theta[1],
"sigma_8": theta[2],
"h_0": theta[3],
"n_s": theta[4],
"w_0": theta[5],
},
)
model_trace = trace(cond_model).get_trace()
sample = {
"theta": jnp.stack(
[model_trace[name]["value"] for name in params_name], axis=-1
),
"y": model_trace["y"]["value"],
}
if score_type == "density":
logp = 0
for name in params_name:
logp += model_trace[name]["fn"].log_prob(model_trace[name]["value"])
elif score_type == "conditional":
logp = 0
if with_noise:
logp += (
model_trace["y"]["fn"]
.log_prob(jax.lax.stop_gradient(model_trace["y"]["value"]))
.sum()
)
logp += model_trace["z"]["fn"].log_prob(model_trace["z"]["value"]).sum()
return logp, sample
# Split the key by batch
keys = jax.random.split(key, batch_size)
# Sample theta from the model
if thetas is None:
@jax.vmap
def get_params(key):
model_trace = trace(seed(model, key)).get_trace()
thetas = jnp.stack(
[model_trace[name]["value"] for name in params_name], axis=-1
)
return thetas
thetas = get_params(keys)
return jax.vmap(jax.value_and_grad(log_prob_fn, has_aux=True))(thetas, keys)
def _lensingPS(map_size, sigma_e, a, b, z0, gals_per_arcmin2, ell, nbins):
# Field parameters
f_sky = map_size**2 / 41_253
nz = jc.redshift.smail_nz(a, b, z0, gals_per_arcmin2=gals_per_arcmin2)
nz_bins = subdivide(nz, nbins=nbins, zphot_sigma=0.05)
# Cosmological parameters
omega_c = sample("omega_c", dist.TruncatedNormal(0.2664, 0.2, low=0))
omega_b = sample("omega_b", dist.Normal(0.0492, 0.006))
sigma_8 = sample("sigma_8", dist.Normal(0.831, 0.14))
h_0 = sample("h_0", dist.Normal(0.6727, 0.063))
n_s = sample("n_s", dist.Normal(0.9645, 0.08))
w_0 = sample("w_0", dist.TruncatedNormal(-1.0, 0.9, low=-2.0, high=-0.3))
cosmo = jc.Planck15(
Omega_c=omega_c, Omega_b=omega_b, h=h_0, n_s=n_s, sigma8=sigma_8, w0=w_0
)
tracer = jc.probes.WeakLensing(nz_bins, sigma_e=sigma_e)
# Calculate power spectrum
cl_noise = jc.angular_cl.noise_cl(ell, [tracer]).flatten()
cl, C = jc.angular_cl.gaussian_cl_covariance_and_mean(
cosmo, ell, [tracer], f_sky=f_sky, sparse=True
)
# Compute precision matrix
P = jc.sparse.to_dense(jc.sparse.inv(jax.lax.stop_gradient(C)))
C = jc.sparse.to_dense(C)
cl = sample(
"cl",
dist.MultivariateNormal(cl + cl_noise, precision_matrix=P, covariance_matrix=C),
)
return cl
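The model above draws `cl` from a multivariate normal parameterised through a precision matrix (the inverse of the stop-gradiented covariance). As a rough numpy-only sketch of the density being evaluated (a hypothetical helper, not the numpyro implementation), the precision-matrix form of the Gaussian log-density is:

```python
import numpy as np

def mvn_logpdf_precision(x, mean, precision):
    """Log-density of a multivariate normal specified by its precision
    (inverse covariance) matrix, mirroring what
    dist.MultivariateNormal(..., precision_matrix=P) evaluates."""
    d = x.shape[0]
    diff = x - mean
    # log|P| = -log|C|, so the usual -0.5*log|C| term becomes +0.5*log|P|
    _, logdet = np.linalg.slogdet(precision)
    return 0.5 * (logdet - diff @ precision @ diff - d * np.log(2.0 * np.pi))

# with P = I and x = mean the density reduces to -(d/2) * log(2*pi)
val = mvn_logpdf_precision(np.zeros(4), np.zeros(4), np.eye(4))
```

Using the precision matrix directly avoids inverting the covariance inside the sampler at every likelihood evaluation.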
def compute_power_spectrum_theory(
nbins,
sigma_e,
a,
b,
z0,
gals_per_arcmin2,
cosmo_params,
ell,
with_noise=True,
):
"""Compute the theoretical power spectrum given cosmological
parameters, redshift distribution and multipole bin edges
Parameters
----------
nbins : int
Number of redshift bins
sigma_e : float
Dispersion of the ellipticity distribution
a : float
Parameter defining the redshift distribution
b : float
Parameter defining the redshift distribution
z0 : float
Parameter defining the redshift distribution
gals_per_arcmin2 : int
Number of galaxies per arcmin^2
cosmo_params : Array (6)
cosmological parameters in the following order:
(omega_c, omega_b, sigma_8, h_0, n_s, w_0)
ell : Array
Multipole bin edges
with_noise : bool, optional
True if there is noise in the mass_map, by default True
Returns
-------
Theoretical power spectrum
"""
omega_c, omega_b, sigma_8, h_0, n_s, w_0 = cosmo_params
# power spectrum from theory
cosmo = jc.Planck15(
Omega_c=omega_c,
Omega_b=omega_b,
h=h_0,
n_s=n_s,
sigma8=sigma_8,
w0=w_0,
)
nz = jc.redshift.smail_nz(a, b, z0, gals_per_arcmin2=gals_per_arcmin2)
nz_bins = subdivide(nz, nbins=nbins, zphot_sigma=0.05)
tracer = jc.probes.WeakLensing(nz_bins, sigma_e=sigma_e)
cell_theory = jc.angular_cl.angular_cl(cosmo, ell, [tracer])
cell_noise = jc.angular_cl.noise_cl(ell, [tracer])
if with_noise:
Cl_theo = cell_theory + cell_noise
else:
Cl_theo = cell_theory
return Cl_theo
def compute_power_spectrum_mass_map(nbins, map_size, mass_map):
"""Compute the power spectrum of the convergence map
Parameters
----------
nbins : int
Number of redshift bins
map_size : int
The total angular size area is given by map_size x map_size
mass_map : Array (N,N, nbins)
Lensing convergence maps
Returns
-------
Power spectrum and ell
"""
l_edges_kmap = np.arange(100.0, 5000.0, 50.0)
ell = ConvergenceMap(mass_map[:, :, 0], angle=map_size * u.deg).cross(
ConvergenceMap(mass_map[:, :, 0], angle=map_size * u.deg),
l_edges=l_edges_kmap,
)[0]
# power spectrum of the map
ps = []
for i, j in itertools.combinations_with_replacement(range(nbins), 2):
ps_ij = ConvergenceMap(mass_map[:, :, i], angle=map_size * u.deg).cross(
ConvergenceMap(mass_map[:, :, j], angle=map_size * u.deg),
l_edges=l_edges_kmap,
)[1]
ps.append(ps_ij)
return np.array(ps), ell
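`ConvergenceMap.cross` from lenstools performs the binned flat-sky power spectrum estimate. A simplified numpy-only sketch of the same idea (auto-spectrum only, hypothetical helper name, and a normalisation convention that differs from lenstools') is:

```python
import numpy as np

def binned_power_spectrum(kappa, map_size_deg, l_edges):
    """Rough flat-sky auto power spectrum of a square map, binned in ell.
    A simplified stand-in for ConvergenceMap.cross (illustrative only)."""
    n = kappa.shape[0]
    pix = np.deg2rad(map_size_deg) / n             # pixel size [rad]
    area = np.deg2rad(map_size_deg) ** 2           # map area [sr]
    fk = np.fft.fft2(kappa) * pix ** 2             # continuum FFT convention
    p2d = np.abs(fk) ** 2 / area                   # 2D power of each mode
    freq = np.fft.fftfreq(n, d=pix) * 2.0 * np.pi  # multipole per axis
    ell2d = np.hypot(freq[:, None], freq[None, :])
    ps = np.zeros(len(l_edges) - 1)
    for i in range(len(ps)):
        mask = (ell2d >= l_edges[i]) & (ell2d < l_edges[i + 1])
        if mask.any():
            ps[i] = p2d[mask].mean()               # average modes in the bin
    centers = 0.5 * (l_edges[:-1] + l_edges[1:])
    return centers, ps

centers, ps = binned_power_spectrum(
    np.random.RandomState(0).randn(64, 64), 10.0, np.arange(100.0, 2000.0, 100.0)
)
```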
def gaussian_log_likelihood(
cosmo_params, mass_map, nbins, map_size, sigma_e, a, b, z0, gals_per_arcmin2
):
"""Compute the Gaussian likelihood log probability
Parameters
----------
cosmo_params : Array
cosmological parameters in the following order:
(omega_c, omega_b, sigma_8, h_0, n_s, w_0)
mass_map : Array (N,N, nbins)
Lensing convergence maps
nbins : int
Number of redshift bins
map_size : int
The total angular size area is given by map_size x map_size
sigma_e : float
Dispersion of the ellipticity distribution
a : float
Parameter defining the redshift distribution
b : float
Parameter defining the redshift distribution
z0 : float
Parameter defining the redshift distribution
gals_per_arcmin2 : int
Number of galaxies per arcmin^2
Returns
-------
log p(mass_map | cosmo_params)
"""
pl_array, ell = compute_power_spectrum_mass_map(nbins, map_size, mass_map)
cl_obs = np.stack(pl_array)
model_lensingPS = partial(
_lensingPS,
map_size=map_size,
sigma_e=sigma_e,
a=a,
b=b,
z0=z0,
gals_per_arcmin2=gals_per_arcmin2,
ell=ell,
nbins=nbins,
)
# Now we condition the model on observations
cond_model = condition(
model_lensingPS,
{
"cl": cl_obs.flatten(),
"omega_c": cosmo_params[0],
"omega_b": cosmo_params[1],
"sigma_8": cosmo_params[2],
"h_0": cosmo_params[3],
"n_s": cosmo_params[4],
"w_0": cosmo_params[5],
},
)
model_trace = trace(cond_model).get_trace()
log_prob = model_trace["cl"]["fn"].log_prob(model_trace["cl"]["value"])
return log_prob
def get_reference_sample_posterior_power_spectrum(
run_mcmc=False,
N=256,
map_size=10,
gals_per_arcmin2=27,
sigma_e=0.26,
nbins=5,
a=2,
b=0.68,
z0=0.11,
m_data=None,
num_results=500,
num_warmup=200,
num_chains=1,
chain_method="parallel",
max_tree_depth=6,
step_size=1e-2,
init_strat=numpyro.infer.init_to_value,
key=None,
):
"""Posterior p(theta|x=m_data) from power spectrum analysis.
Note: pre-sampled chains correspond to the following fiducial parameters:
(omega_c, omega_b, sigma_8, h_0, n_s, w_0)
= (0.2664, 0.0492, 0.831, 0.6727, 0.9645, -1.0)
Parameters
----------
run_mcmc : bool, optional
if True the MCMC (No U-Turn Sampler) will be run,
if False pre-sampled chains are returned according to
gals_per_arcmin2, sigma_e, N, map_size,
by default False
N : int, optional
Number of pixels on the map., by default 256
map_size : int, optional
The total angular size area is given by map_size x map_size,
by default 10
gals_per_arcmin2 : int
Number of galaxies per arcmin^2, by default 27
sigma_e : float
Dispersion of the ellipticity distribution, by default 0.26
nbins : int
Number of redshift bins, by default 5
a : float
Parameter defining the redshift distribution, by default 2
b : float
Parameter defining the redshift distribution, by default 0.68
z0 : float
Parameter defining the redshift distribution, by default 0.11
m_data : Array (N,N)
Lensing convergence map, by default None
if run_mcmc=True, m_data cannot be None
num_results : int
Number of samples, by default 500
num_warmup : int
Number of warmup steps, by default 200
num_chains : int
Number of MCMC chains to run, by default 1
chain_method : str
'parallel', 'sequential', 'vectorized', by default 'parallel'
max_tree_depth : int
Max depth of the binary tree created during the doubling scheme
of NUTS sampler, by default 6
step_size : float
Size of a single step, by default 1e-2
init_strat : callable
Sampler initialization strategy.
See https://num.pyro.ai/en/stable/utilities.html#init-strategy
key : PRNG key
Only needed if run_mcmc=True, by default None
Returns
-------
Array (num_results, 6)
MCMC chains corresponding to p(theta|x=m_data)
"""
if run_mcmc:
pl_array, ell = compute_power_spectrum_mass_map(nbins, map_size, m_data)
cl_obs = np.stack(pl_array)
model_lensingPS = partial(
_lensingPS,
map_size=map_size,
sigma_e=sigma_e,
a=a,
b=b,
z0=z0,
gals_per_arcmin2=gals_per_arcmin2,
ell=ell,
nbins=nbins,
)
# Now we condition the model on observations
observed_model = condition(model_lensingPS, {"cl": cl_obs.flatten()})
def config(x):
if type(x["fn"]) is dist.TransformedDistribution:
return TransformReparam()
elif (
type(x["fn"]) is dist.Normal or type(x["fn"]) is dist.TruncatedNormal
) and ("decentered" not in x["name"]):
return LocScaleReparam(centered=0)
else:
return None
observed_model_reparam = reparam(observed_model, config=config)
nuts_kernel = numpyro.infer.NUTS(
model=observed_model_reparam,
init_strategy=init_strat,
max_tree_depth=max_tree_depth,
step_size=step_size,
)
mcmc = numpyro.infer.MCMC(
nuts_kernel,
num_warmup=num_warmup,
num_samples=num_results,
num_chains=num_chains,
chain_method=chain_method,
progress_bar=True,
)
mcmc.run(key)
samples = mcmc.get_samples()
samples = jnp.stack(
[
samples["omega_c"],
samples["omega_b"],
samples["sigma_8"],
samples["h_0"],
samples["n_s"],
samples["w_0"],
],
axis=-1,
)
return samples
else:
SOURCE_FILE = Path(__file__)
SOURCE_DIR = SOURCE_FILE.parent
ROOT_DIR = SOURCE_DIR.parent.resolve()
DATA_DIR = ROOT_DIR / "data"
theta = np.load(
DATA_DIR / "posterior_power_spectrum__"
"{}N_{}ms_{}gpa_{}se.npy".format(N, map_size, gals_per_arcmin2, sigma_e)
)
m_data = np.load(
DATA_DIR / "m_data__"
"{}N_{}ms_{}gpa_{}se.npy".format(N, map_size, gals_per_arcmin2, sigma_e)
)
return theta, m_data
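When `run_mcmc=False`, pre-sampled chains are loaded from the package's `data` directory; the file name encodes `N`, `map_size`, `gals_per_arcmin2` and `sigma_e`. With the default values from the signature above:

```python
# Default configuration values from the function signature above.
N, map_size, gals_per_arcmin2, sigma_e = 256, 10, 27, 0.26
fname = ("posterior_power_spectrum__"
         "{}N_{}ms_{}gpa_{}se.npy".format(N, map_size, gals_per_arcmin2, sigma_e))
# → "posterior_power_spectrum__256N_10ms_27gpa_0.26se.npy"
```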
def get_reference_sample_posterior_full_field(
run_mcmc=False,
N=256,
map_size=10,
gals_per_arcmin2=27,
sigma_e=0.26,
model=None,
m_data=None,
num_results=500,
num_warmup=200,
nb_loop=1,
num_chains=1,
chain_method="parallel",
max_tree_depth=6,
step_size=1e-2,
init_strat=numpyro.infer.init_to_value,
key=None,
):
"""Full field posterior p(theta|x=m_data).
Note: pre-sampled chains correspond to the following fiducial parameters:
(omega_c, omega_b, sigma_8, h_0, n_s, w_0)
= (0.2664, 0.0492, 0.831, 0.6727, 0.9645, -1.0)
Parameters
----------
run_mcmc : bool, optional
if True the MCMC (No U-Turn Sampler) will be run,
if False pre-sampled chains are returned according to
gals_per_arcmin2, sigma_e, N, map_size,
by default False
N : int, optional
Number of pixels on the map., by default 256
map_size : int, optional
The total angular size area is given by map_size x map_size,
by default 10
gals_per_arcmin2 : int
Number of galaxies per arcmin^2, by default 27
sigma_e : float
Dispersion of the ellipticity distribution, by default 0.26
model : numpyro model
only needed if run_mcmc=True, by default None
if run_mcmc=True, model cannot be None
m_data : Array (N,N)
Lensing convergence map, by default None
if run_mcmc=True m_data can not be None
num_results : int
Number of samples, by default 500
num_warmup : int
Number of warmup steps, by default 200
nb_loop : int
Sequentially draw num_results samples
(e.g. with nb_loop=2 and num_results=100 you end up
with 200 samples in total), by default 1
num_chains : int
Number of MCMC chains to run, by default 1
chain_method : str
'parallel', 'sequential', 'vectorized', by default 'parallel'
max_tree_depth : int
Max depth of the binary tree created during the doubling scheme
of NUTS sampler, by default 6
step_size : float
Size of a single step, by default 1e-2
init_strat : callable
Sampler initialization strategy.
See https://num.pyro.ai/en/stable/utilities.html#init-strategy
key : PRNG key
Only needed if run_mcmc=True, by default None
Returns
-------
Array (num_results, 6)
MCMC chains corresponding to p(theta|x=m_data)
"""
if run_mcmc:
def config(x):
if type(x["fn"]) is dist.TransformedDistribution:
return TransformReparam()
elif (
type(x["fn"]) is dist.Normal or type(x["fn"]) is dist.TruncatedNormal
) and ("decentered" not in x["name"]):
return LocScaleReparam(centered=0)
else:
return None
observed_model = condition(model, {"y": m_data})
observed_model_reparam = reparam(observed_model, config=config)
nuts_kernel = numpyro.infer.NUTS(
model=observed_model_reparam,
init_strategy=init_strat,
max_tree_depth=max_tree_depth,
step_size=step_size,
)
mcmc = numpyro.infer.MCMC(
nuts_kernel,
num_warmup=num_warmup,
num_samples=num_results,
num_chains=num_chains,
chain_method=chain_method,
progress_bar=True,
)
samples_ff_store = []
mcmc.run(key)
samples_ = mcmc.get_samples()
mcmc.post_warmup_state = mcmc.last_state
# save only samples of interest
samples_ = jnp.stack(
[
samples_["omega_c"],
samples_["omega_b"],
samples_["sigma_8"],
samples_["h_0"],
samples_["n_s"],
samples_["w_0"],
],
axis=-1,
)
samples_ff_store.append(samples_)
for i in range(1, nb_loop):
mcmc.run(mcmc.post_warmup_state.rng_key)
samples_ = mcmc.get_samples()
mcmc.post_warmup_state = mcmc.last_state
# save only samples of interest
samples_ = jnp.stack(
[
samples_["omega_c"],
samples_["omega_b"],
samples_["sigma_8"],
samples_["h_0"],
samples_["n_s"],
samples_["w_0"],
],
axis=-1,
)
samples_ff_store.append(samples_)
return jnp.array(samples_ff_store).reshape([-1, 6])
else:
SOURCE_FILE = Path(__file__)
SOURCE_DIR = SOURCE_FILE.parent
ROOT_DIR = SOURCE_DIR.parent.resolve()
DATA_DIR = ROOT_DIR / "data"
theta = np.load(
DATA_DIR / "posterior_full_field__"
"{}N_{}ms_{}gpa_{}se.npy".format(N, map_size, gals_per_arcmin2, sigma_e)
)
m_data = np.load(
DATA_DIR / "m_data__"
"{}N_{}ms_{}gpa_{}se.npy".format(N, map_size, gals_per_arcmin2, sigma_e)
)
return theta, m_data
|
|
{
"filename": "_namelengthsrc.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/densitymap/hoverlabel/_namelengthsrc.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class NamelengthsrcValidator(_plotly_utils.basevalidators.SrcValidator):
def __init__(
self, plotly_name="namelengthsrc", parent_name="densitymap.hoverlabel", **kwargs
):
super(NamelengthsrcValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "none"),
**kwargs,
)
|
|
{
"filename": "maximum_mass.py",
"repo_name": "nuclear-multimessenger-astronomy/nmma",
"repo_path": "nmma_extracted/nmma-main/nmma/tests/maximum_mass.py",
"type": "Python"
}
|
from argparse import Namespace
import os
from pathlib import Path
import pytest
from ..joint import maximum_mass_constraint
@pytest.fixture(scope="module")
def args():
workingDir = os.path.dirname(__file__)
dataDir = os.path.join(workingDir, "data")
priorDir = Path(__file__).resolve().parent.parent.parent
priorDir = os.path.join(priorDir, "priors")
args = Namespace(
outdir="outdir",
prior=f"{priorDir}/maximum_mass_resampling.prior",
joint_posterior=f"{dataDir}/GW+KN+GRB_posterior",
eos_path_macro=f"{dataDir}/eos_macro",
eos_path_micro=f"{dataDir}/eos_micro",
nlive=32,
use_M_Kepler=False,
)
return args
def test_maximum_mass_resampling(args):
maximum_mass_constraint.main(args)
|
|
{
"filename": "ArithmeticFieldList_PYB11.py",
"repo_name": "LLNL/spheral",
"repo_path": "spheral_extracted/spheral-main/src/PYB11/FieldList/ArithmeticFieldList_PYB11.py",
"type": "Python"
}
|
"""
Spheral ArithmeticFieldList module.
Provides the ArithmeticFieldList classes.
"""
from PYB11Generator import *
from SpheralCommon import *
from spheralDimensions import *
dims = spheralDimensions()
from ArithmeticFieldList import *
from MinMaxFieldList import *
#-------------------------------------------------------------------------------
# Includes
#-------------------------------------------------------------------------------
PYB11includes += ['"Geometry/Dimension.hh"',
'"Field/FieldBase.hh"',
'"Field/Field.hh"',
'"Field/FieldList.hh"',
'"Field/FieldListSet.hh"',
'"Utilities/FieldDataTypeTraits.hh"',
'"Utilities/DomainNode.hh"',
'"Geometry/CellFaceFlag.hh"',
'<vector>']
#-------------------------------------------------------------------------------
# Namespaces
#-------------------------------------------------------------------------------
PYB11namespaces = ["Spheral"]
#-------------------------------------------------------------------------------
# Do our dimension dependent instantiations.
#-------------------------------------------------------------------------------
for ndim in dims:
#...........................................................................
# arithmetic fields
for (value, label) in (("int", "Int"),
("unsigned", "Unsigned"),
("uint64_t", "ULL"),
("Dim<%i>::Vector" % ndim, "Vector"),
("Dim<%i>::Tensor" % ndim, "Tensor"),
("Dim<%i>::ThirdRankTensor" % ndim, "ThirdRankTensor"),
("Dim<%i>::FourthRankTensor" % ndim, "FourthRankTensor"),
("Dim<%i>::FifthRankTensor" % ndim, "FifthRankTensor")):
exec('''
%(label)sFieldList%(ndim)sd = PYB11TemplateClass(ArithmeticFieldList, template_parameters=("Dim<%(ndim)i>", "%(value)s"))
''' % {"ndim" : ndim,
"value" : value,
"label" : label})
#...........................................................................
# A few fields can apply the min/max with a scalar additionally
for (value, label) in (("double", "Scalar"),
("Dim<%i>::SymTensor" % ndim, "SymTensor")):
exec('''
%(label)sFieldList%(ndim)sd = PYB11TemplateClass(MinMaxFieldList, template_parameters=("Dim<%(ndim)i>", "%(value)s"))
''' % {"ndim" : ndim,
"value" : value,
"label" : label})
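Each pass through the loops above `exec`s a %-formatted template to define one module-level binding class per (dimension, value type) pair. The mechanism in isolation, with a hypothetical template string in place of the PYB11TemplateClass call:

```python
# Hypothetical template mimicking the instantiation pattern above:
# the dict keys fill the %(...)s placeholders, and exec defines the
# resulting name in the given namespace.
template = '%(label)sValue%(ndim)sd = "%(value)s in %(ndim)s dimensions"'
scope = {}
for ndim in (1, 2, 3):
    exec(template % {"ndim": ndim, "value": "double", "label": "Scalar"}, scope)
```

This stamps out `ScalarValue1d`, `ScalarValue2d`, and `ScalarValue3d` without writing each definition by hand.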
|
|
{
"filename": "baseconfig.py",
"repo_name": "SBU-COSMOLIKE/CAMBLateDE",
"repo_path": "CAMBLateDE_extracted/CAMBLateDE-main/build/lib/camb/baseconfig.py",
"type": "Python"
}
|
import os
import os.path as osp
import sys
import platform
import ctypes
from ctypes import Structure, POINTER, byref, c_int, c_double, c_bool, c_float
from numpy.ctypeslib import ndpointer
import numpy as np
def ndpointer_or_null(*args, **kwargs):
# allows passing None to fortran optional arguments
# from https://stackoverflow.com/a/37664693/1022775
base = ndpointer(*args, **kwargs)
def from_param(cls, obj):
if obj is None:
return obj
return base.from_param(obj)
return type(base.__name__, (base,), {'from_param': classmethod(from_param)})
numpy_3d = ndpointer(c_double, flags='C_CONTIGUOUS', ndim=3)
numpy_2d = ndpointer(c_double, flags='C_CONTIGUOUS', ndim=2)
numpy_1d = ndpointer(c_double, flags='C_CONTIGUOUS')
numpy_1d_or_null = ndpointer_or_null(c_double, flags='C_CONTIGUOUS')
numpy_1d_int = ndpointer(c_int, flags='C_CONTIGUOUS')
BASEDIR = osp.abspath(osp.dirname(__file__))
if platform.system() == "Windows":
DLLNAME = 'cambdll.dll'
else:
DLLNAME = 'camblib.so'
CAMBL = osp.join(BASEDIR, DLLNAME)
gfortran = True
class IfortGfortranLoader(ctypes.CDLL):
def __getitem__(self, name_or_ordinal):
if gfortran:
res = super().__getitem__(name_or_ordinal)
else:
res = super().__getitem__(name_or_ordinal.replace('_MOD_', '_mp_').replace('__', '') + '_')
return res
mock_load = os.environ.get('CAMB_MOCK_LOAD', None)
if mock_load:
# noinspection PyCompatibility
from unittest.mock import MagicMock
camblib = MagicMock()
import_property = MagicMock()
else:
if not osp.isfile(CAMBL):
sys.exit(
'Library file %s does not exist.\nMake sure you have installed or built the camb package '
'(e.g. using "python setup.py make"); or remove any old conflicting installation and install again.'
% CAMBL)
camblib = ctypes.LibraryLoader(IfortGfortranLoader).LoadLibrary(CAMBL)
try:
c_int.in_dll(camblib, "handles_mp_set_cls_template_")
gfortran = False
except Exception:
pass
class _dll_value:
__slots__ = ['f']
def __init__(self, f):
self.f = f
def __get__(self, instance, owner):
return self.f.value
def __set__(self, instance, value):
self.f.value = value
def import_property(tp, module, func):
if gfortran:
f = tp.in_dll(camblib, "__%s_MOD_%s" % (module.lower(), func.lower()))
else:
f = tp.in_dll(camblib, "%s_mp_%s_" % (module.lower(), func.lower()))
return _dll_value(f)
def lib_import(module_name, class_name, func_name, restype=None):
if class_name:
class_name += '_'
func = getattr(camblib, '__' + module_name.lower() +
'_MOD_' + (class_name + func_name).lower())
if restype:
func.restype = restype
return func
def set_cl_template_file(cl_template_file=None):
if cl_template_file and not osp.exists(cl_template_file):
raise ValueError('File not found : %s' % cl_template_file)
template = cl_template_file or osp.join(BASEDIR,
"HighLExtrapTemplate_lenspotentialCls.dat")
if not osp.exists(template):
template = osp.abspath(
osp.join(BASEDIR, "..", "fortran",
"HighLExtrapTemplate_lenspotentialCls.dat"))
template = template.encode("latin-1")
func = camblib.__handles_MOD_set_cls_template
func.argtypes = [ctypes.c_char_p, ctypes.c_long]
s = ctypes.create_string_buffer(template)
func(s, ctypes.c_long(len(template)))
def check_fortran_version(version):
if mock_load:
return
func = camblib.__camb_MOD_camb_getversion
func.argtypes = [ctypes.c_char_p, ctypes.c_long]
s = ctypes.create_string_buffer(33)
func(s, ctypes.c_long(32))
fortran_version = s.value.decode('ascii').strip()
if fortran_version != version:
raise CAMBFortranError(
'Version %s of fortran library does not match python version (%s).' %
(fortran_version,
version) + '\nUpdate install or use "setup.py make" to rebuild library.'
+ '\n(also check camb.__file__ is actually at the path you think you are loading)')
set_cl_template_file()
def _get_fortran_sizes():
_get_allocatable_size = camblib.__handles_MOD_getallocatablesize
allocatable = c_int()
allocatable_array = c_int()
allocatable_object_array = c_int()
_get_allocatable_size(byref(allocatable), byref(allocatable_array),
byref(allocatable_object_array))
return allocatable.value, allocatable_array.value, allocatable_object_array.value
_f_allocatable_size, _f_allocatable_array_size, _f_allocatable_object_array_size = _get_fortran_sizes()
assert _f_allocatable_size % ctypes.sizeof(ctypes.c_void_p) == 0 and \
_f_allocatable_array_size % ctypes.sizeof(ctypes.c_void_p) == 0 and \
_f_allocatable_object_array_size % ctypes.sizeof(ctypes.c_void_p) == 0
# make dummy type of right size to hold fortran allocatable; must be ctypes pointer type to keep auto-alignment correct
f_pointer = ctypes.c_void_p * (_f_allocatable_size // ctypes.sizeof(ctypes.c_void_p))
# These are used for general types, so avoid type checking (and making problematic temporary cast objects)
_get_allocatable = lib_import('handles', 'F2003Class', 'GetAllocatable')
_set_allocatable = lib_import('handles', 'F2003Class', 'SetAllocatable')
_new_instance = lib_import('handles', 'F2003Class', 'new')
_free_instance = lib_import('handles', 'F2003Class', 'free')
_get_id = lib_import('handles', 'F2003Class', 'get_id')
class FortranAllocatable(Structure):
pass
_reuse_pointer = ctypes.c_void_p()
_reuse_typed_id = f_pointer()
# member corresponding to class(...), allocatable :: member in fortran
class _AllocatableObject(FortranAllocatable):
_fields_ = [("allocatable", f_pointer)]
def get_allocatable(self):
_get_allocatable(byref(self), byref(_reuse_typed_id), byref(_reuse_pointer))
if _reuse_pointer:
return ctypes.cast(_reuse_pointer, POINTER(
F2003Class._class_pointers[tuple(_reuse_typed_id)])).contents
else:
return None
def set_allocatable(self, instance, name):
if instance and not isinstance(instance, self._baseclass):
raise TypeError(
'%s expects object that is an instance of %s' % (name, self._baseclass.__name__))
_set_allocatable(byref(self), byref(instance.fortran_self) if instance else None)
_class_cache = {}
# noinspection PyPep8Naming
def AllocatableObject(cls=None):
if cls is None:
cls = F2003Class
if not issubclass(cls, F2003Class):
raise ValueError("AllocatableObject type must be descended from F2003Class")
res = _class_cache.get(cls, None)
if res:
return res
else:
res = type("Allocatable" + cls.__name__, (_AllocatableObject,),
{"_baseclass": cls})
_class_cache[cls] = res
return res
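`AllocatableObject` manufactures a wrapper type per wrapped class with `type(...)` and memoises it in `_class_cache`, so repeated calls hand back the identical type object. A stand-alone sketch of that factory pattern (hypothetical names):

```python
_cache = {}

def make_allocatable(base):
    """Sketch of the AllocatableObject factory above: dynamically build
    a wrapper type named after the wrapped class and memoise it."""
    res = _cache.get(base)
    if res is None:
        res = type("Allocatable" + base.__name__, (object,), {"_baseclass": base})
        _cache[base] = res
    return res

class Reionization:
    pass

A = make_allocatable(Reionization)
```

Caching matters here because the generated types are used as ctypes field types, and identity (not just equality) must be stable across calls.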
class _AllocatableArray(FortranAllocatable): # member corresponding to allocatable :: d(:) member in fortran
_fields_ = [("allocatable", ctypes.c_void_p * (
_f_allocatable_array_size // ctypes.sizeof(ctypes.c_void_p)))]
def get_allocatable(self):
size = self._get_allocatable_1D_array(byref(self), byref(_reuse_pointer))
if size:
return ctypes.cast(_reuse_pointer, POINTER(self._ctype * size)).contents
else:
return np.empty(0)
def set_allocatable(self, array, name):
self._set_allocatable_1D_array(byref(self), np.array(array, dtype=self._dtype),
byref(c_int(0 if array is None else len(array))))
class _ArrayOfAllocatable(FortranAllocatable):
def __getitem__(self, item):
value = self.allocatables[item]
if isinstance(value, list):
return [x.get_allocatable() for x in value]
else:
return value.get_allocatable()
def __setitem__(self, key, value):
alloc = self.allocatables[key]
alloc.set_allocatable(value, self.__class__.__name__)
def __len__(self):
return len(self.allocatables)
def __repr__(self):
s = ''
for i in range(len(self.allocatables)):
item = self[i]
content = item._as_string() if isinstance(item, CAMB_Structure) else repr(item)
s += ('%s: <%s>\n ' % (i, item.__class__.__name__) + content.replace('\n', '\n ')).strip(' ')
return s
def _make_array_class(baseclass, size):
res = _class_cache.get((baseclass, size), None)
if res:
return res
class Temp(_ArrayOfAllocatable):
_fields_ = [("allocatables", AllocatableObject(baseclass) * size)]
Temp.__name__ = "%sArray_%s" % (baseclass.__name__, size)
_class_cache[(baseclass, size)] = Temp
return Temp
class _AllocatableObjectArray(FortranAllocatable):
# member corresponding to allocatable :: d(:) array of allocatable classes
_fields_ = [("allocatable", ctypes.c_void_p * (
_f_allocatable_object_array_size // ctypes.sizeof(ctypes.c_void_p)))]
def get_allocatable(self):
size = self._get_allocatable_object_1D_array(byref(self), byref(_reuse_pointer))
if size:
return ctypes.cast(_reuse_pointer,
POINTER(_make_array_class(self._baseclass, size))).contents
else:
return []
def set_allocatable(self, array, name):
if array is None:
array = []
pointers = (f_pointer * len(array))()
for i, instance in enumerate(array):
if not isinstance(instance, self._baseclass):
raise TypeError(
'%s expects object that is an instance of %s' % (name, self._baseclass.__name__))
pointers[i] = instance.fortran_self
self._set_allocatable_object_1D_array(byref(self), byref(pointers), byref(c_int(len(array))))
_AllocatableObjectArray._get_allocatable_object_1D_array = camblib.__handles_MOD_get_allocatable_object_1d_array
_AllocatableObjectArray._get_allocatable_object_1D_array.restype = c_int
_AllocatableObjectArray._set_allocatable_object_1D_array = camblib.__handles_MOD_set_allocatable_object_1d_array
# noinspection PyPep8Naming
def AllocatableObjectArray(cls=None):
if cls is None:
cls = F2003Class
if not issubclass(cls, F2003Class):
raise ValueError("AllocatableObject type must be descended from F2003Class")
return type("AllocatableArray" + cls.__name__, (_AllocatableObjectArray,),
{"_baseclass": cls})
class AllocatableArrayInt(_AllocatableArray):
_dtype = int
_ctype = c_int
AllocatableArrayInt._get_allocatable_1D_array = camblib.__handles_MOD_get_allocatable_1d_array_int
AllocatableArrayInt._get_allocatable_1D_array.restype = c_int
AllocatableArrayInt._set_allocatable_1D_array = camblib.__handles_MOD_set_allocatable_1d_array_int
AllocatableArrayInt._set_allocatable_1D_array.argtypes = [POINTER(AllocatableArrayInt), numpy_1d_int, POINTER(c_int)]
class AllocatableArrayDouble(_AllocatableArray):
_dtype = np.float64
_ctype = c_double
AllocatableArrayDouble._get_allocatable_1D_array = camblib.__handles_MOD_get_allocatable_1d_array
AllocatableArrayDouble._get_allocatable_1D_array.restype = c_int
AllocatableArrayDouble._set_allocatable_1D_array = camblib.__handles_MOD_set_allocatable_1d_array
AllocatableArrayDouble._set_allocatable_1D_array.argtypes = [POINTER(AllocatableArrayDouble), numpy_1d, POINTER(c_int)]
def fortran_array(c_pointer, shape, dtype=np.float64, order='F', own_data=True):
if not hasattr(shape, '__len__'):
shape = np.atleast_1d(shape)
arr_size = np.prod(shape[:]) * np.dtype(dtype).itemsize
buf_from_mem = ctypes.pythonapi.PyMemoryView_FromMemory
buf_from_mem.restype = ctypes.py_object
buf_from_mem.argtypes = (ctypes.c_void_p, ctypes.c_int, ctypes.c_int)
buffer = buf_from_mem(c_pointer, arr_size, 0x100)
arr = np.ndarray(tuple(shape[:]), dtype, buffer, order=order)
if own_data and not arr.flags.owndata:
return arr.copy()
else:
return arr
class CAMBError(Exception):
pass
class CAMBValueError(ValueError):
pass
class CAMBUnknownArgumentError(ValueError):
pass
class CAMBParamRangeError(CAMBError):
pass
class CAMBFortranError(Exception):
pass
def method_import(module_name, class_name, func_name, restype=None, extra_args=(),
nopass=False):
func = lib_import(module_name, class_name, func_name, restype)
if extra_args is not None and len(extra_args):
func.argtypes = ([] if nopass else [POINTER(f_pointer)]) + list(extra_args)
return func
# Handle custom field types inspired by:
# https://stackoverflow.com/questions/45527945/extend-ctypes-to-specify-field-overloading
class FortranManagedField:
__slots__ = ['name', 'real_name', 'type_']
def __init__(self, name, type_):
self.name = name
self.type_ = type_
self.real_name = "_" + name
def __get__(self, instance, owner):
value = getattr(instance, self.real_name)
if issubclass(self.type_, FortranAllocatable):
return value.get_allocatable()
return value
def __set__(self, instance, value):
field_value = getattr(instance, self.real_name)
if issubclass(self.type_, FortranAllocatable):
field_value.set_allocatable(value, self.name)
return
if issubclass(self.type_, F2003Class):
field_value.replace(value)
return
setattr(instance, self.real_name, value)
class NamedIntField:
__slots__ = ['real_name', 'values', 'name_values']
def __init__(self, name, **kwargs):
self.real_name = "_" + name
names = kwargs["names"]
self.values = {}
if isinstance(names, (list, tuple)):
self.name_values = {}
start = kwargs.get("start", 0)
for i, name in enumerate(names):
self.name_values[name] = i + start
self.values[i + start] = name
else:
assert isinstance(names, dict)
self.name_values = names
for name in names:
self.values[names[name]] = name
def __get__(self, instance, owner):
value = getattr(instance, self.real_name)
return self.values[value]
def __set__(self, instance, value):
if isinstance(value, str):
value = self.name_values[value]
elif value not in self.values:
raise ValueError("Value %s not in allowed: %s" % (value, self.name_values))
setattr(instance, self.real_name, value)
class BoolField: # fortran-compatible boolean (actually c_int internally)
__slots__ = ['real_name']
def __init__(self, name):
self.real_name = "_" + name
def __get__(self, instance, owner):
return getattr(instance, self.real_name) != 0
def __set__(self, instance, value):
setattr(instance, self.real_name, (0, 1)[value])
class SizedArrayField: # statically sized array with another field determining size
__slots__ = ['real_name', 'size_name']
def __init__(self, name, size_name):
self.real_name = "_" + name
self.size_name = size_name
def __get__(self, instance, owner):
size = getattr(instance, self.size_name)
value = getattr(instance, self.real_name)
if size == len(value):
return value
return POINTER(value._type_ * size)(value).contents
def __set__(self, instance, value):
field = getattr(instance, self.real_name)
if len(value) > len(field):
raise CAMBParamRangeError(
"%s can be of max length %s" % (self.real_name[1:], len(field)))
field[:len(value)] = value
setattr(instance, self.size_name, len(value))
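`SizedArrayField` exposes only the used prefix of a fixed-capacity ctypes array, with a sibling integer field tracking the logical length. A simplified stand-alone version of that idea (hypothetical class, plain methods instead of a descriptor):

```python
import ctypes

class SizedBuffer:
    """Stand-alone sketch of the SizedArrayField pattern: a fixed-capacity
    ctypes array plus an integer recording how many slots are in use."""
    def __init__(self, capacity=10):
        self._data = (ctypes.c_double * capacity)()
        self._n = 0
    def set(self, values):
        if len(values) > len(self._data):
            raise ValueError("can be of max length %s" % len(self._data))
        self._data[:len(values)] = values   # fill only the leading slots
        self._n = len(values)
    def get(self):
        # expose only the used prefix, not the full fixed-size buffer
        return self._data[:self._n]

buf = SizedBuffer()
buf.set([1.0, 2.0, 3.0])
```

The fixed buffer keeps the structure layout compatible with the Fortran side, while the length field lets Python present it as a variable-length sequence.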
class CAMBStructureMeta(type(Structure)):
# noinspection PyMethodParameters
def __new__(metacls, name, bases, namespace):
_fields = namespace.get("_fields_", "")
ctypes_fields = []
try:
F2003 = F2003Class
except NameError:
class F2003:
pass
tps = {c_bool: "boolean", c_double: "float64", c_int: "integer",
c_float: "float32",
AllocatableArrayDouble: "float64 array",
AllocatableArrayInt: "integer array",
ctypes.c_void_p: "pointer"}
field_doc = ''
for field in _fields:
field_name = field[0]
field_type = field[1]
if field_type == c_bool:
new_field = BoolField(field_name)
ctypes_fields.append(("_" + field_name, c_int))
elif issubclass(field_type, FortranAllocatable) or issubclass(field_type, F2003):
new_field = FortranManagedField(field_name, field_type)
ctypes_fields.append(("_" + field_name, field_type))
elif len(field) > 2 and isinstance(field[2], dict):
dic = field[2]
if "names" in dic:
if field_type != c_int:
raise CAMBFortranError("Named fields only allowed for c_int")
new_field = NamedIntField(field_name, **dic)
ctypes_fields.append(("_" + field_name, c_int))
elif "size" in dic:
if not issubclass(field_type, ctypes.Array):
raise CAMBFortranError(
"sized fields only allowed for ctypes Arrays")
if dic["size"] not in [x[0] for x in _fields]:
raise CAMBFortranError(
"size must be the name of a field in same structure (%s for %s)" % (
dic["size"], field_name))
new_field = SizedArrayField(field_name, dic["size"])
ctypes_fields.append(("_" + field_name, field_type))
else:
raise CAMBFortranError(
"Unknown dictionary content for %s, %s" % (field_name, dic))
else:
new_field = None
if new_field:
namespace[field_name] = new_field
else:
ctypes_fields.append((field_name, field_type))
if field[0][0] != '_': # add :ivar: documentation for each field
field_doc += "\n :ivar %s:" % field[0]
if isinstance(field[-1], dict) and field[-1].get('names', None):
field_doc += " (integer/string, one of: %s) " % (", ".join(field[-1]["names"]))
else:
tp = tps.get(field[1], None)
if tp:
field_doc += " (*%s*)" % tp
elif issubclass(field[1], ctypes.Array):
field_doc += " (*%s array*)" % tps[field[1]._type_]
elif issubclass(field[1], CAMB_Structure):
field_doc += " :class:`%s.%s`" % (field[1].__module__, field[1].__name__)
elif issubclass(field[1], _AllocatableObject):
field_doc += " :class:`%s.%s`" % (
field[1]._baseclass.__module__, field[1]._baseclass.__name__)
elif issubclass(field[1], _AllocatableObjectArray):
field_doc += " array of :class:`%s.%s`" % (
field[1]._baseclass.__module__, field[1]._baseclass.__name__)
if len(field) > 2 and not isinstance(field[-1], dict):
field_doc += " " + field[-1]
namespace["_fields_"] = ctypes_fields
if field_doc:
namespace['__doc__'] = namespace.get('__doc__', "") + "\n" + field_doc
# noinspection PyTypeChecker
cls: CAMB_Structure = super().__new__(metacls, name, bases, namespace)
if name == "F2003Class" or issubclass(bases[0], F2003):
cls._class_imports = {}
if "_fortran_class_name_" not in cls.__dict__:
cls._fortran_class_name_ = name
prefix = getattr(cls, '_method_prefix_', "f_")
methods = cls.__dict__.get("_methods_", "")
def make_method(_func, _name, _nopass, doc):
if _nopass:
def method_func(self, *args):
return _func(*args)
else:
def method_func(self, *args):
return _func(self.fortran_self, *args)
if doc:
method_func.__doc__ = doc
method_func.__name__ = _name
return method_func
for method in methods:
method_name = method[0]
extra_args = method[1]
restype = method[2] if len(method) > 2 else None
opts = method[3] if len(method) > 3 else {}
nopass = opts.get("nopass", False)
try:
func = method_import(cls._fortran_class_module_, cls._fortran_class_name_,
method_name, extra_args=extra_args, nopass=nopass, restype=restype)
except AttributeError:
raise AttributeError('No function %s_%s found in module %s' % (
cls._fortran_class_name_, method_name, cls._fortran_class_module_))
new_method = make_method(func, prefix + method_name, nopass, opts.get("doc", ""))
setattr(cls, prefix + method_name, new_method)
return cls
# noinspection PyPep8Naming
class CAMB_Structure(Structure, metaclass=CAMBStructureMeta):
# noinspection PyUnresolvedReferences
@classmethod
def get_all_fields(cls):
if cls != CAMB_Structure:
fields = cls.__bases__[0].get_all_fields()
else:
fields = []
fields += [(name[1:], value) if name.startswith('_') else (name, value) for name, value in
cls.__dict__.get('_fields_', []) if
not name.startswith('__')]
return fields
@classmethod
def get_valid_field_names(cls):
return set(field[0] for field in cls.get_all_fields())
def _as_string(self):
s = ''
for field_name, field_type in self.get_all_fields():
obj = getattr(self, field_name)
if isinstance(obj, (CAMB_Structure, FortranAllocatable)):
content = obj._as_string() if isinstance(obj, CAMB_Structure) else str(obj)
s += (field_name + ': <%s>\n ' % obj.__class__.__name__ + content.replace('\n', '\n ')).strip(' ')
else:
if isinstance(obj, ctypes.Array):
if len(obj) > 20:
s += field_name + ' = ' + str(obj[:7])[:-1] + ', ...]\n'
else:
s += field_name + ' = ' + str(obj[:len(obj)]) + '\n'
else:
s += field_name + ' = ' + str(obj) + '\n'
return s
def __repr__(self):
return 'class: <%s>\n ' % self.__class__.__name__ + self._as_string().replace('\n', '\n ')
class _FortranSelf:
def __get__(self, instance, owner):
if not instance and owner.__class__ is CAMBStructureMeta:
# prevent error during introspection of classes
return None
pointer = f_pointer()
owner._fortran_selfpointer_function(byref(ctypes.pointer(instance)), byref(pointer))
instance.fortran_self = pointer
return pointer
class F2003Class(CAMB_Structure):
# Wraps a fortran type/class that is allocated in fortran, potentially containing allocatable _fields_
# elements that are instances of classes, allocatable arrays that are wrapped in python, and list
# of class _methods_ from fortran.
#
# Note that assigning to allocatable fields makes a deep copy of the object so the object always owns all memory
# belonging to its fields. Accessing an allocatable field makes a new class pointer object on the fly. It can
# become undefined if the allocatable field is reassigned.
# classes are referenced by their fortran null pointer object. _class_pointers is a dictionary relating these
# f_pointer to python classes. Elements are added each class by the @fortran_class decorator.
_class_pointers = {}
# dictionary mapping class names to classes
_class_names = {}
__slots__ = ()
# pointer to fortran class; generated once per instance using _fortran_selfpointer_function then replaced with
# the actual value
fortran_self = _FortranSelf()
def __new__(cls, *args, **kwargs):
return cls._new_copy()
def __init__(self, **kwargs):
unknowns = set(kwargs) - self.get_valid_field_names()
if unknowns:
raise ValueError('Unknown argument(s): %s' % unknowns)
super().__init__(**kwargs)
@classmethod
def _new_copy(cls, source=None):
if source is None:
_key = POINTER(cls)()
else:
_key = POINTER(cls)(source)
pointer_func = getattr(cls, '_fortran_selfpointer_function', None)
if pointer_func is None:
if getattr(cls, '_optional_compile', False):
raise CAMBFortranError(
'Class %s has not been built into the Fortran binary,'
' edit Makefile_main and rebuild to use.' % cls.__name__)
raise CAMBFortranError(
'Cannot instantiate %s, is base class or needs @fortran_class decorator' % cls.__name__)
_new_instance(byref(_key), byref(pointer_func))
instance = _key.contents
instance._key = _key
return instance
def copy(self):
"""
Make independent copy of this object.
:return: deep copy of self
"""
return self._new_copy(source=self)
__copy__ = copy
@classmethod
def import_method(cls, tag, extra_args=(), restype=None, nopass=False,
allow_inherit=True):
func = cls._class_imports.get(tag, None)
if func is None:
try:
func = method_import(cls._fortran_class_module_, cls._fortran_class_name_,
tag, extra_args=extra_args, nopass=nopass, restype=restype)
except AttributeError:
try:
if not allow_inherit or cls.__bases__[0] == F2003Class:
raise
# noinspection PyUnresolvedReferences
func = cls.__bases__[0].import_method(tag, extra_args, restype, nopass=nopass)
except AttributeError:
raise AttributeError(
'No function %s_%s found ' % (cls._fortran_class_name_, tag))
cls._class_imports[tag] = func
return func
def call_method(self, tag, extra_args=(), args=(), restype=None, nopass=False, allow_inherit=True):
func = self.import_method(tag, extra_args=extra_args, restype=restype,
nopass=nopass, allow_inherit=allow_inherit)
if nopass:
return func(*args)
else:
return func(byref(self.fortran_self), *args)
def __del__(self):
key = getattr(self, '_key', None)
if key:
_free_instance(byref(key), byref(self.__class__._fortran_selfpointer_function))
def replace(self, instance):
"""
Replace the content of this class with another instance, doing a deep copy (in Fortran)
:param instance: instance of the same class to replace this instance with
"""
if type(instance) != type(self):
raise TypeError(
'Cannot assign non-identical types (%s to %s, non-allocatable)' % (type(instance), type(self)))
self.call_method('Replace', extra_args=[POINTER(f_pointer)], allow_inherit=False,
args=[byref(instance.fortran_self)])
@staticmethod
def make_class_named(name, base_class=None):
if not isinstance(name, type):
cls = F2003Class._class_names.get(name, None)
if not cls:
raise CAMBValueError("Class not found: %s" % name)
else:
cls = name
if base_class is None or issubclass(cls, base_class):
return cls()
else:
raise CAMBValueError(
"class %s is not a type of %s" % (cls.__name__, base_class.__name__))
# Decorator to get function to get class pointers to each class type, and build index of classes
# that allocatables could have
def fortran_class(cls, optional=False):
if mock_load:
return cls
class_module = getattr(cls, "_fortran_class_module_", None)
if not class_module:
msg = "F2003Class %s must define _fortran_class_module_" % cls.__name__
print(msg)
raise CAMBFortranError(msg)
try:
cls._fortran_selfpointer_function = lib_import(class_module, cls._fortran_class_name_,
'selfpointer')
except AttributeError as e:
if optional:
cls._optional_compile = True
return cls
else:
print(e)
raise CAMBFortranError("Class %s cannot find fortran %s_SelfPointer method in module %s." %
(cls.__name__, cls._fortran_class_name_, class_module))
_get_id(byref(cls._fortran_selfpointer_function), byref(_reuse_typed_id))
F2003Class._class_pointers[tuple(_reuse_typed_id)] = cls
F2003Class._class_names[cls.__name__] = cls
return cls
def optional_fortran_class(cls):
return fortran_class(cls, optional=True)
|
|
{
"filename": "example_IFS.py",
"repo_name": "SAIL-Labs/AMICAL",
"repo_path": "AMICAL_extracted/AMICAL-main/doc/example_IFS.py",
"type": "Python"
}
|
"""
@author: Anthony Soulain (UGA - IPAG)
-------------------------------------------------------------------------
AMICAL: Aperture Masking Interferometry Calibration and Analysis Library
-------------------------------------------------------------------------
In this example, we provide a way to clean and extract data coming from IFU
instruments like the VLT/SPHERE. The main idea is to deal with each spectral
channels individually and combine them afterward.
--------------------------------------------------------------------
"""
import os
import amical
# Set your own cleaning parameters (appropriate for SPHERE)
clean_param = {"isz": 149, "r1": 70, "dr": 2, "apod": True, "window": 65, "f_kernel": 3}
# Set the AMICAL parameters
params_ami = {
"peakmethod": "fft",
"bs_multi_tri": False,
"maskname": "g7",
"fw_splodge": 0.7,
"filtname": "YJ", # Use the appropriate filter (only YJ and YH implemented)
}
# We deal with each spectral channel individually.
list_index_ifu = [0, 10, 20]  # List of channel indices to use (nwl = 39 for SPHERE)
# You can also use all of them (np.arange(39)).
# You can first check which channel you will use
wave = amical.get_infos_obs.get_ifu_table(
list_index_ifu, filtname=params_ami["filtname"]
)
print("Wave used:", wave, "µm")
datadir = "NRM_DATA/"
file_ifs = os.path.join(datadir, "example_sphere_ifs.fits")
# Then, we can clean and extract each wavelengths
l_cal = []
for i_wl in list_index_ifu:
# Step 1: clean the data (only one wave at a time: i_wl index)
cube_clean = amical.select_clean_data(file_ifs, i_wl=i_wl, **clean_param)
    # Step 2: extract data (i_wl must be specified to use the correct bandwidth and
    # central wavelength; this is automatic in AMICAL).
bs_ifs = amical.extract_bs(
cube_clean,
file_ifs,
i_wl=i_wl,
display=True,
**params_ami,
)
    # We convert the raw variable into the appropriate format for visualisation
cal = amical.oifits.wrap_raw(bs_ifs)
# Or apply the standard calibration procedure using a calibrator
    # cal = amical.calibrate(bs_t, bs_c)  # See doc/example_SPHERE.py
    # We save all wavelengths into the same list
l_cal.append(cal)
# Finally, you can display your observables
amical.show(l_cal)
# You can finally save the file as usual
# amical.save(l_cal, oifits_file="fake_ifs.oifits", pa=bs_t.infos.pa)
|
|
{
"filename": "__init__.py",
"repo_name": "tgrassi/prizmo",
"repo_path": "prizmo_extracted/prizmo-main/src_py/ChiantiPy/core/tests/__init__.py",
"type": "Python"
}
|
|
{
"filename": "_lenmode.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/icicle/marker/colorbar/_lenmode.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class LenmodeValidator(_plotly_utils.basevalidators.EnumeratedValidator):
def __init__(
self, plotly_name="lenmode", parent_name="icicle.marker.colorbar", **kwargs
):
super(LenmodeValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "colorbars"),
values=kwargs.pop("values", ["fraction", "pixels"]),
**kwargs,
)
|
|
{
"filename": "test_dtype.py",
"repo_name": "scikit-image/scikit-image",
"repo_path": "scikit-image_extracted/scikit-image-main/skimage/util/tests/test_dtype.py",
"type": "Python"
}
|
import numpy as np
import itertools
from skimage import (
img_as_float,
img_as_float32,
img_as_float64,
img_as_int,
img_as_uint,
img_as_ubyte,
)
from skimage.util.dtype import _convert
from skimage._shared._warnings import expected_warnings
from skimage._shared import testing
from skimage._shared.testing import assert_equal, parametrize
dtype_range = {
np.uint8: (0, 255),
np.uint16: (0, 65535),
np.int8: (-128, 127),
np.int16: (-32768, 32767),
np.float32: (-1.0, 1.0),
np.float64: (-1.0, 1.0),
}
img_funcs = (img_as_int, img_as_float64, img_as_float32, img_as_uint, img_as_ubyte)
dtypes_for_img_funcs = (np.int16, np.float64, np.float32, np.uint16, np.ubyte)
img_funcs_and_types = zip(img_funcs, dtypes_for_img_funcs)
def _verify_range(msg, x, vmin, vmax, dtype):
assert_equal(x[0], vmin)
assert_equal(x[-1], vmax)
assert x.dtype == dtype
@parametrize("dtype, f_and_dt", itertools.product(dtype_range, img_funcs_and_types))
def test_range(dtype, f_and_dt):
imin, imax = dtype_range[dtype]
x = np.linspace(imin, imax, 10).astype(dtype)
f, dt = f_and_dt
y = f(x)
omin, omax = dtype_range[dt]
if imin == 0 or omin == 0:
omin = 0
imin = 0
_verify_range(
f"From {np.dtype(dtype)} to {np.dtype(dt)}", y, omin, omax, np.dtype(dt)
)
# Add non-standard data types that are allowed by the `_convert` function.
dtype_range_extra = dtype_range.copy()
dtype_range_extra.update(
{np.int32: (-2147483648, 2147483647), np.uint32: (0, 4294967295)}
)
dtype_pairs = [
(np.uint8, np.uint32),
(np.int8, np.uint32),
(np.int8, np.int32),
(np.int32, np.int8),
(np.float64, np.float32),
(np.int32, np.float32),
]
@parametrize("dtype_in, dt", dtype_pairs)
def test_range_extra_dtypes(dtype_in, dt):
"""Test code paths that are not skipped by `test_range`"""
imin, imax = dtype_range_extra[dtype_in]
x = np.linspace(imin, imax, 10).astype(dtype_in)
y = _convert(x, dt)
omin, omax = dtype_range_extra[dt]
_verify_range(
f"From {np.dtype(dtype_in)} to {np.dtype(dt)}", y, omin, omax, np.dtype(dt)
)
def test_downcast():
x = np.arange(10).astype(np.uint64)
with expected_warnings(['Downcasting']):
y = img_as_int(x)
assert np.allclose(y, x.astype(np.int16))
assert y.dtype == np.int16, y.dtype
def test_float_out_of_range():
too_high = np.array([2], dtype=np.float32)
with testing.raises(ValueError):
img_as_int(too_high)
too_low = np.array([-2], dtype=np.float32)
with testing.raises(ValueError):
img_as_int(too_low)
def test_float_float_all_ranges():
arr_in = np.array([[-10.0, 10.0, 1e20]], dtype=np.float32)
np.testing.assert_array_equal(img_as_float(arr_in), arr_in)
def test_copy():
x = np.array([1], dtype=np.float64)
y = img_as_float(x)
z = img_as_float(x, force_copy=True)
assert y is x
assert z is not x
def test_bool():
img_ = np.zeros((10, 10), bool)
img8 = np.zeros((10, 10), np.bool_)
img_[1, 1] = True
img8[1, 1] = True
for func, dt in [
(img_as_int, np.int16),
(img_as_float, np.float64),
(img_as_uint, np.uint16),
(img_as_ubyte, np.ubyte),
]:
converted_ = func(img_)
assert np.sum(converted_) == dtype_range[dt][1]
converted8 = func(img8)
assert np.sum(converted8) == dtype_range[dt][1]
def test_clobber():
# The `img_as_*` functions should never modify input arrays.
for func_input_type in img_funcs:
for func_output_type in img_funcs:
img = np.random.rand(5, 5)
img_in = func_input_type(img)
img_in_before = img_in.copy()
func_output_type(img_in)
assert_equal(img_in, img_in_before)
def test_signed_scaling_float32():
x = np.array([-128, 127], dtype=np.int8)
y = img_as_float32(x)
assert_equal(y.max(), 1)
def test_float32_passthrough():
x = np.array([-1, 1], dtype=np.float32)
y = img_as_float(x)
assert_equal(y.dtype, x.dtype)
float_dtype_list = [
float,
float,
np.float64,
np.single,
np.float32,
np.float64,
'float32',
'float64',
]
def test_float_conversion_dtype():
"""Test any conversion from a float dtype to an other."""
x = np.array([-1, 1])
# Test all combinations of dtypes conversions
dtype_combin = np.array(np.meshgrid(float_dtype_list, float_dtype_list)).T.reshape(
-1, 2
)
for dtype_in, dtype_out in dtype_combin:
x = x.astype(dtype_in)
y = _convert(x, dtype_out)
assert y.dtype == np.dtype(dtype_out)
def test_float_conversion_dtype_warns():
"""Test that convert issues a warning when called"""
from skimage.util.dtype import convert
x = np.array([-1, 1])
# Test all combinations of dtypes conversions
dtype_combin = np.array(np.meshgrid(float_dtype_list, float_dtype_list)).T.reshape(
-1, 2
)
for dtype_in, dtype_out in dtype_combin:
x = x.astype(dtype_in)
with expected_warnings(["The use of this function is discouraged"]):
y = convert(x, dtype_out)
assert y.dtype == np.dtype(dtype_out)
def test_subclass_conversion():
"""Check subclass conversion behavior"""
x = np.array([-1, 1])
for dtype in float_dtype_list:
x = x.astype(dtype)
y = _convert(x, np.floating)
assert y.dtype == x.dtype
def test_int_to_float():
"""Check Normalization when casting img_as_float from int types to float"""
int_list = np.arange(9, dtype=np.int64)
converted = img_as_float(int_list)
assert np.allclose(converted, int_list * 1e-19, atol=0.0, rtol=0.1)
ii32 = np.iinfo(np.int32)
ii_list = np.array([ii32.min, ii32.max], dtype=np.int32)
floats = img_as_float(ii_list)
assert_equal(floats.max(), 1)
assert_equal(floats.min(), -1)
def test_img_as_ubyte_supports_npulonglong():
# Pre NumPy <2.0.0, `data_scaled.dtype.type` is `np.ulonglong` instead of
# np.uint64 as one might expect. This caused issues with `img_as_ubyte` due
# to `np.ulonglong` missing from `skimage.util.dtype._integer_types`.
# This doesn't seem to be an issue for NumPy >=2.0.0.
# https://github.com/scikit-image/scikit-image/issues/7385
data = np.arange(50, dtype=np.uint64)
data_scaled = data * 256 ** (data.dtype.itemsize - 1)
result = img_as_ubyte(data_scaled)
assert result.dtype == np.uint8
|
|
{
"filename": "watchlists.md",
"repo_name": "lsst-uk/lasair-lsst",
"repo_path": "lasair-lsst_extracted/lasair-lsst-main/docs/source/core_functions/watchlists.md",
"type": "Markdown"
}
|
# Watchlists
A watchlist is a set of points in the sky, together with a radius in arcseconds, which
can be the same for all sources, or different for each.
It is assumed to be a list of "interesting" sources, so that any transient that
falls within the radius of one of the sources might indicate activity of that source.
Each user of the Lasair system has their own set of watchlists, and can be
alerted when a transient is coincident with a watchlist source. Here, the word coincident means
within the radius of the source.
An "Active" watchlist is one that is run every day, so that it is up to date with the latest objects.
## Create new watchlist
You can create a watchlist of sources by preparing a text file, where each
comma-separated or |-separated line has RA and Dec in decimal degrees,
an identifier, and an optional radius in arcseconds. One way to do this is
with [Vizier](http://vizier.u-strasbg.fr/viz-bin/VizieR) (see below) or with a spreadsheet
program such as Excel or Numbers.
Here is [an example of the data](BLLac.html). The 42 entries are _BL Lac candidates for TeV observations (Massaro+, 2013)_
Note that you must be logged in to create a watchlist.
Many astronomers are interested in transients that are associated with specific
astronomical objects, perhaps active galaxies or star formation regions.
Once you have an account on Lasair, you can create any number of watchlists, to be
used in the query engine. To be specific, suppose we are interested in the 42 objects in the
catalogue BL Lac candidates for TeV observations (Massaro+, 2013),
that can be found in the Vizier library of catalogues. You can make your
watchlist “public”, so other Lasair users can see it and use it in queries,
and you can make your watchlist “active”, meaning that the crossmatch (see below)
is done automatically every day.
The following is how to make the correct file format from [Vizier](http://vizier.u-strasbg.fr/viz-bin/VizieR).
<img src="../_images/watchlist/vizier.png" width="600px"/>
First you select a catalogue, which may consist of a number of tables. Select JUST ONE TABLE,
so that there is just a single list of attributes. For example,
<a href=https://vizier.unistra.fr/viz-bin/VizieR?-source=J/MNRAS/482/98&-to=3>this link</a>
has two tables, but <a href=https://vizier.unistra.fr/viz-bin/VizieR-3?-source=J/MNRAS/482/98/table1>this link</a> is for a single table.
Once you have selected your table,
1. Deselect all the columns
2. Select a column that can act as the identifier for each source.
These need to be unique and not empty: if not, you must edit the resulting file to make it so.
3. Choose “Decimal” for the coordinates
4. Choose “|-separated” for the format
5. Select "unlimited" or however many you want in your watchlist
6. Click submit to download the file.
Once you have the file, you can paste it into a form, or upload the file directly.
There may be error messages about unparsable lines, which can be eliminated by
editing the file so every non-numerical line begins with the # symbol.
The upload form is shown here:
<img src="../_images/watchlist/create.png" width="400px"/>
Fill in the name and description of the watchlist. Choose a default value of the
radius to use in matching, in arcseconds.
Each line should be RA, Dec, ID, and may have a fourth entry, the radius to use in matching,
in arcseconds, if different from the default. Then click “Create”.
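The line format described above can be parsed with a short Python function. This is a sketch for illustration (the function name and default radius are ours, not part of Lasair):

```python
def parse_watchlist_line(line, default_radius=2.0):
    """Parse one watchlist line: RA, Dec, ID, with an optional fourth
    field giving the match radius in arcseconds.  Both comma- and
    |-separated lines are accepted; blank lines and lines starting
    with '#' are treated as comments and return None."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    sep = "|" if "|" in line else ","
    parts = [p.strip() for p in line.split(sep)]
    ra, dec, source_id = float(parts[0]), float(parts[1]), parts[2]
    radius = float(parts[3]) if len(parts) > 3 else default_radius
    return ra, dec, source_id, radius

# Example lines, with and without an explicit radius:
print(parse_watchlist_line("330.68038|42.27777|BL Lac"))
print(parse_watchlist_line("330.68038, 42.27777, BL Lac, 5.0"))
```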
Here is a successful creation of a watchlist. Some "Bad line" messages may appear
because some lines contain no data; you can ignore these and look for where it
says “Watchlist created successfully”. You can now find it in the list of “My Watchlists”.
## Find outbursts from my watchlist
Once you have made a watchlist, you may be interested in being notified whenever
something unusual – an outburst, for example – happens to one of your sources.
Thus we combine a watchlist with a query on magnitude that detects a fast rise.
For the watchlist, see *Build a Watchlist of your sources*; for the query, we
utilise the moving averages of apparent magnitudes that Lasair provides.
|
|
{
"filename": "builtin_surface.py",
"repo_name": "enthought/mayavi",
"repo_path": "mayavi_extracted/mayavi-master/mayavi/sources/builtin_surface.py",
"type": "Python"
}
|
""" A module that allows a user to create one of several standard VTK
poly data sources.
"""
#Author: Suyog Dutt Jain <suyog.jain@aero.iitb.ac.in>
# Prabhu Ramachandran <prabhu_r@users.sf.net>
# Copyright (c) 2008, Enthought, Inc.
# License: BSD Style.
# Enthought library imports.
from traits.api import Instance, Enum, Dict, Str
from traitsui.api import View, Item, Group
from tvtk.api import tvtk
# Local imports
from mayavi.core.source import Source
from mayavi.core.pipeline_info import PipelineInfo
######################################################################
# `BuiltinSurface` class.
######################################################################
class BuiltinSurface(Source):
# The version of this class. Used for persistence.
__version__ = 0
# Flag to set the poly data type.
source = Enum('arrow', 'cone', 'cube', 'cylinder', 'disk', 'earth',
'line', 'outline', 'plane', 'point', 'polygon', 'sphere',
'superquadric', 'textured sphere', 'glyph2d',
desc='which poly data source to be used')
# Define the trait 'data_source' whose value must be an instance of
# type PolyData
data_source = Instance(tvtk.PolyDataAlgorithm, allow_none=False,
record=True)
# Information about what this object can produce.
output_info = PipelineInfo(datasets=['poly_data'],
attribute_types=['any'],
attributes=['any'])
# Create the UI for the traits.
view = View(Group(Item(name='source'),
Item(name='data_source',
style='custom',
resizable=True),
label='Surface Source',
show_labels=False),
resizable=True)
########################################
# Private traits.
# A dictionary that maps the source names to instances of the
# poly data sources.
_source_dict = Dict(Str,
Instance(tvtk.PolyDataAlgorithm,
allow_none=False))
######################################################################
# `object` interface
######################################################################
def __init__(self, **traits):
# Call parent class' init.
super(BuiltinSurface, self).__init__(**traits)
# Initialize the source to the default mode's instance from
# the dictionary if needed.
if 'source' not in traits:
self._source_changed(self.source)
def __set_pure_state__(self, state):
self.source = state.source
super(BuiltinSurface, self).__set_pure_state__(state)
def has_output_port(self):
""" Return True as the data source has output port."""
return True
def get_output_object(self):
""" Return the data source output port."""
return self.data_source.output_port
######################################################################
# Non-public methods.
######################################################################
def _source_changed(self, value):
"""This method is invoked (automatically) when the `source`
trait is changed.
"""
self.data_source = self._source_dict[self.source]
def _data_source_changed(self, old, new):
        """This method is invoked (automatically) when the
        poly data source is changed."""
self.outputs = [self.data_source]
if old is not None:
old.on_trait_change(self.render, remove=True)
new.on_trait_change(self.render)
def __source_dict_default(self):
"""Default value for source dict."""
sd = {'arrow':tvtk.ArrowSource(),
'cone':tvtk.ConeSource(),
'cube':tvtk.CubeSource(),
'cylinder':tvtk.CylinderSource(),
'disk':tvtk.DiskSource(),
'earth':tvtk.EarthSource(),
'line':tvtk.LineSource(),
'outline':tvtk.OutlineSource(),
'plane':tvtk.PlaneSource(),
'point':tvtk.PointSource(),
'polygon':tvtk.RegularPolygonSource(),
'sphere':tvtk.SphereSource(),
'superquadric':tvtk.SuperquadricSource(),
'textured sphere':tvtk.TexturedSphereSource(),
'glyph2d': tvtk.GlyphSource2D()}
return sd
|
|
{
"filename": "_sizesrc.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/scatter/hoverlabel/font/_sizesrc.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class SizesrcValidator(_plotly_utils.basevalidators.SrcValidator):
def __init__(
self, plotly_name="sizesrc", parent_name="scatter.hoverlabel.font", **kwargs
):
super(SizesrcValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "none"),
**kwargs,
)
|
|
{
"filename": "_borderwidth.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/validators/scattercarpet/marker/colorbar/_borderwidth.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class BorderwidthValidator(_plotly_utils.basevalidators.NumberValidator):
def __init__(
self,
plotly_name="borderwidth",
parent_name="scattercarpet.marker.colorbar",
**kwargs
):
super(BorderwidthValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "colorbars"),
min=kwargs.pop("min", 0),
role=kwargs.pop("role", "style"),
**kwargs
)
|
|
{
"filename": "__init__.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/validators/scatterpolar/marker/gradient/__init__.py",
"type": "Python"
}
|
import sys
if sys.version_info < (3, 7):
from ._typesrc import TypesrcValidator
from ._type import TypeValidator
from ._colorsrc import ColorsrcValidator
from ._color import ColorValidator
else:
from _plotly_utils.importers import relative_import
__all__, __getattr__, __dir__ = relative_import(
__name__,
[],
[
"._typesrc.TypesrcValidator",
"._type.TypeValidator",
"._colorsrc.ColorsrcValidator",
"._color.ColorValidator",
],
)
|
|
{
"filename": "_tickvalssrc.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/layout/ternary/aaxis/_tickvalssrc.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class TickvalssrcValidator(_plotly_utils.basevalidators.SrcValidator):
def __init__(
self, plotly_name="tickvalssrc", parent_name="layout.ternary.aaxis", **kwargs
):
super(TickvalssrcValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "none"),
**kwargs,
)
|
|
{
"filename": "test_analytic_kinematics.py",
"repo_name": "sibirrer/lenstronomy",
"repo_path": "lenstronomy_extracted/lenstronomy-main/test/test_GalKin/test_analytic_kinematics.py",
"type": "Python"
}
|
import pytest
from lenstronomy.GalKin.analytic_kinematics import AnalyticKinematics
import numpy as np
from lenstronomy.GalKin.numeric_kinematics import NumericKinematics
from astropy.cosmology import FlatLambdaCDM
import numpy.testing as npt
class TestAnalyticKinematics(object):
def setup_method(self):
pass
def test_sigma_s2(self):
kwargs_aperture = {
"center_ra": 0,
"width": 1,
"length": 1,
"angle": 0,
"center_dec": 0,
"aperture_type": "slit",
}
kwargs_cosmo = {"d_d": 1000, "d_s": 1500, "d_ds": 800}
kwargs_psf = {"psf_type": "GAUSSIAN", "fwhm": 1}
kin = AnalyticKinematics(
kwargs_cosmo,
interpol_grid_num=2000,
log_integration=True,
max_integrate=100,
min_integrate=5e-6,
)
kwargs_light = {"r_eff": 1}
sigma_s2 = kin.sigma_s2(
r=1,
R=0.1,
kwargs_mass={"theta_E": 1, "gamma": 2},
kwargs_light=kwargs_light,
kwargs_anisotropy={"r_ani": 1},
)
npt.assert_almost_equal(sigma_s2[0], 70885880558.5913, decimal=3)
def test_properties(self):
kwargs_aperture = {
"center_ra": 0,
"width": 1,
"length": 1,
"angle": 0,
"center_dec": 0,
"aperture_type": "slit",
}
kwargs_cosmo = {"d_d": 1000, "d_s": 1500, "d_ds": 800}
kwargs_psf = {"psf_type": "GAUSSIAN", "fwhm": 1}
kin = AnalyticKinematics(
kwargs_cosmo,
interpol_grid_num=2000,
log_integration=True,
max_integrate=150,
min_integrate=5e-6,
)
assert kin.max_integrate == 150
assert kin.min_integrate == 5e-6
def test_draw_light(self):
kin = AnalyticKinematics
assert kin._get_hernquist_scale_radius({"a": 1}) == 1
assert kin._get_hernquist_scale_radius({"Rs": 2}) == 2
assert kin._get_hernquist_scale_radius({"r_eff": 4}) == 4 * 0.551
with pytest.raises(ValueError):
kin._get_hernquist_scale_radius({"not_Rs": 1})
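The factor 0.551 asserted above converts an effective radius into a Hernquist scale radius: the Hernquist profile's projected half-light radius satisfies R_eff ≈ 1.8153 Rs, so Rs ≈ R_eff / 1.8153 ≈ 0.551 R_eff. A minimal sketch of that conversion (the helper name is ours, not part of the lenstronomy API):

```python
def hernquist_rs_from_reff(r_eff):
    """Hernquist scale radius Rs from the projected half-light radius,
    using R_eff ~= 1.8153 * Rs, i.e. Rs ~= 0.551 * R_eff."""
    return 0.551 * r_eff

# 0.551 is the rounded inverse of 1.8153:
assert abs(hernquist_rs_from_reff(1.0) - 1.0 / 1.8153) < 1e-3
```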
def test_I_R_sigma2_and_IR(self):
kwargs_aperture = {
"center_ra": 0,
"width": 1,
"length": 1,
"angle": 0,
"center_dec": 0,
"aperture_type": "slit",
}
kwargs_cosmo = {"d_d": 1000, "d_s": 1500, "d_ds": 800}
kwargs_psf = {"psf_type": "GAUSSIAN", "fwhm": 1}
kin = AnalyticKinematics(
kwargs_cosmo,
interpol_grid_num=10000,
log_integration=False,
max_integrate=100,
min_integrate=1e-4,
)
kwargs_mass = {"theta_E": 1, "gamma": 2}
kwargs_light = {"r_eff": 1}
kwargs_ani = {"r_ani": 1}
IR_sigma2, IR = kin._I_R_sigma2(
R=1,
kwargs_mass=kwargs_mass,
kwargs_light=kwargs_light,
kwargs_anisotropy=kwargs_ani,
)
kin._log_int = True
kin._interp_grid_num = 1000
IR_sigma2_2, IR_2 = kin._I_R_sigma2(
R=1,
kwargs_mass=kwargs_mass,
kwargs_light=kwargs_light,
kwargs_anisotropy=kwargs_ani,
)
        assert abs(IR_sigma2 - IR_sigma2_2) < 10
def test_against_numeric_profile(self):
z_d = 0.295
z_s = 0.657
kwargs_model = {
"mass_profile_list": ["EPL"],
"light_profile_list": ["HERNQUIST"],
"anisotropy_model": "OM",
# 'lens_redshift_list': [z_d],
}
cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
D_d = cosmo.angular_diameter_distance(z_d).value
D_s = cosmo.angular_diameter_distance(z_s).value
D_ds = cosmo.angular_diameter_distance_z1z2(z_d, z_s).value
kwargs_cosmo = {"d_d": D_d, "d_s": D_s, "d_ds": D_ds}
numeric_kin = NumericKinematics(
kwargs_model,
kwargs_cosmo,
interpol_grid_num=1000,
max_integrate=1000,
min_integrate=1e-4,
)
analytic_kin = AnalyticKinematics(
kwargs_cosmo,
interpol_grid_num=2000,
log_integration=True,
max_integrate=100,
min_integrate=1e-4,
)
R = np.logspace(-5, np.log10(6), 100)
r_eff = 1.85
theta_e = 1.63
gamma = 2
a_ani = 1
numeric_s2ir, numeric_ir = numeric_kin.I_R_sigma2_and_IR(
R,
[{"theta_E": theta_e, "gamma": gamma, "center_x": 0.0, "center_y": 0.0}],
[{"Rs": r_eff * 0.551, "amp": 1.0, "center_x": 0.0, "center_y": 0.0}],
{"r_ani": a_ani * r_eff},
)
numeric_vel_dis = np.sqrt(numeric_s2ir / numeric_ir) / 1e3
analytic_s2ir = np.zeros_like(R)
analytic_ir = np.zeros_like(R)
for i, r in enumerate(R):
analytic_s2ir[i], analytic_ir[i] = analytic_kin.I_R_sigma2_and_IR(
r,
{"theta_E": theta_e, "gamma": gamma, "center_x": 0.0, "center_y": 0.0},
{"Rs": r_eff * 0.551, "amp": 1.0, "center_x": 0.0, "center_y": 0.0},
{"r_ani": a_ani * r_eff},
)
analytic_vel_dis = np.sqrt(analytic_s2ir / analytic_ir) / 1e3
# check if matches below 1%
npt.assert_array_less(
(numeric_vel_dis - analytic_vel_dis) / numeric_vel_dis,
0.01 * np.ones_like(numeric_vel_dis),
)
if __name__ == "__main__":
pytest.main()
|
{
"filename": "_thickness.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/mesh3d/colorbar/_thickness.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class ThicknessValidator(_plotly_utils.basevalidators.NumberValidator):
def __init__(
self, plotly_name="thickness", parent_name="mesh3d.colorbar", **kwargs
):
super(ThicknessValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "colorbars"),
min=kwargs.pop("min", 0),
**kwargs,
)
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@mesh3d@colorbar@_thickness.py@.PATH_END.py
|
{
"filename": "test_npairs_3d.py",
"repo_name": "astropy/halotools",
"repo_path": "halotools_extracted/halotools-master/halotools/mock_observables/pair_counters/test_pair_counters/test_npairs_3d.py",
"type": "Python"
}
|
"""
"""
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import pytest
from astropy.utils.misc import NumpyRNGContext
from pathlib import Path
from ..npairs_3d import npairs_3d
from ..pairs import npairs as pure_python_brute_force_npairs_3d
from ...tests.cf_helpers import generate_locus_of_3d_points
from ...tests.cf_helpers import generate_3d_regular_mesh
__all__ = ('test_rectangular_mesh_pairs_tight_locus1', )
fixed_seed = 43
# Determine whether the machine is mine
# This will be used to select tests whose
# returned values depend on the configuration
# of my personal cache directory files
aph_home = '/Users/aphearin'
detected_home = str(Path.home())
if aph_home == detected_home:
APH_MACHINE = True
else:
APH_MACHINE = False
@pytest.mark.installation_test
def test_rectangular_mesh_pairs_tight_locus1():
""" Verify that `halotools.mock_observables.npairs_3d` returns
the correct counts for two tight loci of points.
In this test, PBCs are irrelevant
"""
npts1, npts2 = 100, 100
data1 = generate_locus_of_3d_points(npts1, xc=0.1, yc=0.1, zc=0.1, seed=fixed_seed)
data2 = generate_locus_of_3d_points(npts2, xc=0.1, yc=0.1, zc=0.2, seed=fixed_seed)
rbins = np.array((0.05, 0.15, 0.3))
result = npairs_3d(data1, data2, rbins, period=1)
assert np.all(result == [0, npts1*npts2, npts1*npts2])
def test_rectangular_mesh_pairs_tight_locus2():
""" Verify that `halotools.mock_observables.npairs_3d` returns
the correct counts for two tight loci of points.
In this test, PBCs are important.
"""
npts1, npts2 = 100, 100
data1 = generate_locus_of_3d_points(npts1, xc=0.1, yc=0.1, zc=0.05, seed=fixed_seed)
data2 = generate_locus_of_3d_points(npts2, xc=0.1, yc=0.1, zc=0.95, seed=fixed_seed)
rbins = np.array((0.05, 0.15, 0.3))
result = npairs_3d(data1, data2, rbins, period=1)
assert np.all(result == [0, npts1*npts2, npts1*npts2])
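The test above relies on the minimum-image convention: in a unit periodic box, loci at z = 0.05 and z = 0.95 are separated by 0.1, not 0.9, which is why every cross-pair lands in the (0.05, 0.15] bin. A one-axis sketch (illustrative helper, not the halotools API):

```python
import numpy as np

def min_image_separation(dz, period):
    """Minimum-image distance along one axis of a periodic box."""
    dz = np.abs(dz) % period
    return np.minimum(dz, period - dz)

# 0.9 apart without PBCs, only 0.1 apart once the box wraps:
assert np.isclose(min_image_separation(0.95 - 0.05, 1.0), 0.1)
```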
def test_rectangular_mesh_pairs_tight_locus3():
""" Verify that the pair counters return the correct results
when operating on a tight locus of points.
For this test, PBCs have no impact.
"""
npts1, npts2 = 100, 200
points1 = generate_locus_of_3d_points(npts1, xc=0.1, yc=0.1, zc=0.1, seed=fixed_seed)
points2 = generate_locus_of_3d_points(npts2, xc=0.1, yc=0.1, zc=0.25, seed=fixed_seed)
rbins = np.array([0.1, 0.2, 0.3])
correct_result = np.array([0, npts1*npts2, npts1*npts2])
counts = npairs_3d(points1, points2, rbins, num_threads='max')
assert np.all(counts == correct_result)
def test_rectangular_mesh_pairs_tight_locus4():
""" Verify that the pair counters return the correct results
when operating on a tight locus of points.
For this test, PBCs have no impact.
"""
npts1, npts2 = 100, 200
points1 = generate_locus_of_3d_points(npts1, xc=0.1, yc=0.1, zc=0.1, seed=fixed_seed)
points2 = generate_locus_of_3d_points(npts2, xc=0.1, yc=0.1, zc=0.25, seed=fixed_seed)
rbins = np.array([0.1, 0.2, 0.3])
correct_result = np.array([0, npts1*npts2, npts1*npts2])
counts = npairs_3d(points1, points2, rbins, num_threads=1)
assert np.all(counts == correct_result)
def test_rectangular_mesh_pairs_tight_locus5():
""" Verify that the pair counters return the correct results
when operating on a tight locus of points.
For this test, PBCs have no impact.
"""
npts1, npts2 = 100, 200
points1 = generate_locus_of_3d_points(npts1, xc=0.1, yc=0.1, zc=0.1, seed=fixed_seed)
points2 = generate_locus_of_3d_points(npts2, xc=0.1, yc=0.1, zc=0.25, seed=fixed_seed)
rbins = np.array([0.1, 0.2, 0.3])
correct_result = np.array([0, npts1*npts2, npts1*npts2])
counts = npairs_3d(points1, points2, rbins, period=1.)
assert np.all(counts == correct_result)
def test_rectangular_mesh_pairs_tight_locus6():
""" Verify that the pair counters return the correct results
when operating on a tight locus of points.
For this test, PBCs have no impact.
"""
npts1, npts2 = 100, 200
points1 = generate_locus_of_3d_points(npts1, xc=0.1, yc=0.1, zc=0.1, seed=fixed_seed)
points2 = generate_locus_of_3d_points(npts2, xc=0.1, yc=0.1, zc=0.25, seed=fixed_seed)
rbins = np.array([0.1, 0.2, 0.3])
correct_result = np.array([0, npts1*npts2, npts1*npts2])
counts = npairs_3d(points1, points2, rbins, approx_cell1_size=[0.1, 0.1, 0.1])
assert np.all(counts == correct_result)
def test_rectangular_mesh_pairs_tight_locus7():
""" Verify that the pair counters return the correct results
when operating on a tight locus of points.
For this test, PBCs have no impact.
"""
npts1, npts2 = 100, 200
points1 = generate_locus_of_3d_points(npts1, xc=0.1, yc=0.1, zc=0.1, seed=fixed_seed)
points2 = generate_locus_of_3d_points(npts2, xc=0.1, yc=0.1, zc=0.25, seed=fixed_seed)
rbins = np.array([0.1, 0.2, 0.3])
correct_result = np.array([0, npts1*npts2, npts1*npts2])
counts = npairs_3d(points1, points2, rbins,
approx_cell1_size=[0.1, 0.1, 0.1],
approx_cell2_size=[0.1, 0.1, 0.1])
assert np.all(counts == correct_result)
def test_rectangular_mesh_pairs_tight_locus8():
""" Verify that the pair counters return the correct results
when operating on a tight locus of points.
For this test, PBCs have no impact.
"""
npts1, npts2 = 100, 200
points1 = generate_locus_of_3d_points(npts1, xc=0.1, yc=0.1, zc=0.1, seed=fixed_seed)
points2 = generate_locus_of_3d_points(npts2, xc=0.1, yc=0.1, zc=0.25, seed=fixed_seed)
rbins = np.array([0.1, 0.2, 0.3])
correct_result = np.array([0, npts1*npts2, npts1*npts2])
counts = npairs_3d(points1, points2, rbins,
approx_cell1_size=0.1, approx_cell2_size=0.1, period=1)
assert np.all(counts == correct_result)
def test_rectangular_mesh_pairs_tight_locus9():
""" Verify that the pair counters return the correct results
when operating on a tight locus of points.
For this test, PBCs have no impact.
"""
npts1, npts2 = 100, 200
points1 = generate_locus_of_3d_points(npts1, xc=0.1, yc=0.1, zc=0.1, seed=fixed_seed)
points2 = generate_locus_of_3d_points(npts2, xc=0.1, yc=0.1, zc=0.25, seed=fixed_seed)
rbins = np.array([0.1, 0.2, 0.3])
correct_result = np.array([0, npts1*npts2, npts1*npts2])
counts = npairs_3d(points1, points2, rbins,
approx_cell1_size=[0.2, 0.2, 0.2],
approx_cell2_size=[0.15, 0.15, 0.15], period=1)
assert np.all(counts == correct_result)
def test_rectangular_mesh_pairs():
""" Verify that `halotools.mock_observables.npairs_3d` returns
the correct counts for two regularly spaced grids of points.
"""
npts_per_dim = 10
data1 = generate_3d_regular_mesh(npts_per_dim)
data2 = generate_3d_regular_mesh(npts_per_dim)
grid_spacing = 1./npts_per_dim
Lbox = 1.
r1 = grid_spacing/100.
epsilon = 0.0001
r2 = grid_spacing + epsilon
r3 = grid_spacing*np.sqrt(2) + epsilon
r4 = grid_spacing*np.sqrt(3) + epsilon
rbins = np.array([r1, r2, r3, r4])
result = npairs_3d(data1, data2, rbins, period=Lbox, approx_cell1_size=0.1)
assert np.all(result ==
[npts_per_dim**3, 7*npts_per_dim**3, 19*npts_per_dim**3, 27*npts_per_dim**3])
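The expected totals npts³, 7·npts³, 19·npts³, 27·npts³ follow from counting cubic-lattice neighbors (with PBCs every point is equivalent): within r1 each point pairs only with itself, within grid_spacing + ε it gains 6 face neighbors, within √2 + ε another 12 edge neighbors, and within √3 + ε the 8 corner neighbors. A brute-force sketch of the per-point shell counts:

```python
import numpy as np

# 3x3x3 offset shell around one lattice point, in grid-spacing units:
offsets = np.array([(i, j, k) for i in (-1, 0, 1)
                    for j in (-1, 0, 1)
                    for k in (-1, 0, 1)], dtype=float)
dist = np.linalg.norm(offsets, axis=1)
eps = 1e-4
shell_counts = [int(np.sum(dist <= r))
                for r in (0.01, 1 + eps, np.sqrt(2) + eps, np.sqrt(3) + eps)]
# self -> 1, + faces -> 7, + edges -> 19, + corners -> 27
assert shell_counts == [1, 7, 19, 27]
```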
@pytest.mark.skipif('not APH_MACHINE')
def test_parallel():
""" Verify that `halotools.mock_observables.npairs_3d` returns
identical counts whether it is run in serial or parallel.
"""
npts_per_dim = 10
data1 = generate_3d_regular_mesh(npts_per_dim)
data2 = generate_3d_regular_mesh(npts_per_dim)
grid_spacing = 1./npts_per_dim
Lbox = 1.
r1 = grid_spacing/100.
epsilon = 0.0001
r2 = grid_spacing + epsilon
r3 = grid_spacing*np.sqrt(2) + epsilon
r4 = grid_spacing*np.sqrt(3) + epsilon
rbins = np.array([r1, r2, r3, r4])
serial_result = npairs_3d(data1, data2, rbins, period=Lbox, approx_cell1_size=0.1)
parallel_result2 = npairs_3d(data1, data2, rbins, period=Lbox,
approx_cell1_size=0.1, num_threads=2)
    parallel_result3 = npairs_3d(data1, data2, rbins, period=Lbox,
        approx_cell1_size=0.1, num_threads=3)
    assert np.all(serial_result == parallel_result2)
    assert np.all(serial_result == parallel_result3)
@pytest.mark.installation_test
def test_npairs_brute_force_periodic():
"""
Function tests npairs with periodic boundary conditions.
"""
Npts = 1000
with NumpyRNGContext(fixed_seed):
random_sample = np.random.random((Npts, 3))
period = np.array([1.0, 1.0, 1.0])
rbins = np.array([0.001, 0.1, 0.2, 0.3])
result = npairs_3d(random_sample, random_sample, rbins, period=period)
msg = 'The returned result is an unexpected shape.'
assert np.shape(result) == (len(rbins),), msg
test_result = pure_python_brute_force_npairs_3d(
random_sample, random_sample, rbins, period=period)
msg = "The double tree's result(s) are not equivalent to simple pair counter's."
assert np.all(test_result == result), msg
def test_npairs_brute_force_nonperiodic():
"""
test npairs without periodic boundary conditions.
"""
Npts = 1000
with NumpyRNGContext(fixed_seed):
random_sample = np.random.random((Npts, 3))
rbins = np.array([0.001, 0.1, 0.2, 0.3])
result = npairs_3d(random_sample, random_sample, rbins, period=None)
msg = 'The returned result is an unexpected shape.'
assert np.shape(result) == (len(rbins),), msg
test_result = pure_python_brute_force_npairs_3d(
random_sample, random_sample, rbins, period=None)
msg = "The double tree's result(s) are not equivalent to simple pair counter's."
assert np.all(test_result == result), msg
def test_sensible_num_threads():
npts1, npts2 = 100, 100
data1 = generate_locus_of_3d_points(npts1, xc=0.1, yc=0.1, zc=0.1, seed=fixed_seed)
data2 = generate_locus_of_3d_points(npts2, xc=0.1, yc=0.1, zc=0.2, seed=fixed_seed)
rbins = np.array((0.05, 0.15, 0.3))
with pytest.raises(ValueError) as err:
result = npairs_3d(data1, data2, rbins, period=1,
num_threads="Cuba Gooding Jr.")
substr = "Input ``num_threads`` argument must be an integer or the string 'max'"
assert substr in err.value.args[0]
def test_sensible_rbins():
npts1, npts2 = 100, 100
data1 = generate_locus_of_3d_points(npts1, xc=0.1, yc=0.1, zc=0.1, seed=fixed_seed)
data2 = generate_locus_of_3d_points(npts2, xc=0.1, yc=0.1, zc=0.2, seed=fixed_seed)
rbins = 0.1
with pytest.raises(ValueError) as err:
result = npairs_3d(data1, data2, rbins, period=1)
substr = "Input ``rbins`` must be a monotonically increasing 1D array with at least two entries"
assert substr in err.value.args[0]
def test_sensible_period():
npts1, npts2 = 100, 100
data1 = generate_locus_of_3d_points(npts1, xc=0.1, yc=0.1, zc=0.1, seed=fixed_seed)
data2 = generate_locus_of_3d_points(npts2, xc=0.1, yc=0.1, zc=0.2, seed=fixed_seed)
rbins = np.array((0.05, 0.15, 0.3))
with pytest.raises(ValueError) as err:
result = npairs_3d(data1, data2, rbins, period=np.inf)
substr = "Input ``period`` must be a bounded positive number in all dimensions"
assert substr in err.value.args[0]
def test_pure_python_npairs_3d_argument_handling1():
"""
"""
npts = 10
with NumpyRNGContext(fixed_seed):
sample1 = np.random.random((npts, 3))
sample2 = np.random.random((npts, 2))
rbins = np.linspace(0.01, 0.1, 5)
with pytest.raises(ValueError) as err:
__ = pure_python_brute_force_npairs_3d(sample1, sample2, rbins, period=None)
substr = "sample1 and sample2 inputs do not have the same dimension"
assert substr in err.value.args[0]
def test_pure_python_npairs_3d_argument_handling2():
"""
"""
npts = 10
with NumpyRNGContext(fixed_seed):
sample1 = np.random.random((npts, 3))
sample2 = np.random.random((npts, 3))
rbins = np.linspace(0.01, 0.1, 5)
__ = pure_python_brute_force_npairs_3d(sample1, sample2, rbins, period=1)
def test_pure_python_npairs_3d_argument_handling3():
"""
"""
npts = 10
with NumpyRNGContext(fixed_seed):
sample1 = np.random.random((npts, 2))
sample2 = np.random.random((npts, 2))
rbins = np.linspace(0.01, 0.1, 5)
with pytest.raises(ValueError) as err:
__ = pure_python_brute_force_npairs_3d(sample1, sample2, rbins, period=[1, 1, 1])
substr = "period should have len == dimension of points"
assert substr in err.value.args[0]
|
{
"filename": "_california_housing.py",
"repo_name": "scikit-learn/scikit-learn",
"repo_path": "scikit-learn_extracted/scikit-learn-main/sklearn/datasets/_california_housing.py",
"type": "Python"
}
|
"""California housing dataset.
The original database is available from StatLib
http://lib.stat.cmu.edu/datasets/
The data contains 20,640 observations on 9 variables.
This dataset contains the average house value as target variable
and the following input variables (features): average income,
housing average age, average rooms, average bedrooms, population,
average occupation, latitude, and longitude in that order.
References
----------
Pace, R. Kelley and Ronald Barry, Sparse Spatial Autoregressions,
Statistics and Probability Letters, 33 (1997) 291-297.
"""
# Authors: The scikit-learn developers
# SPDX-License-Identifier: BSD-3-Clause
import logging
import tarfile
from numbers import Integral, Real
from os import PathLike, makedirs, remove
from os.path import exists
import joblib
import numpy as np
from ..utils import Bunch
from ..utils._param_validation import Interval, validate_params
from . import get_data_home
from ._base import (
RemoteFileMetadata,
_convert_data_dataframe,
_fetch_remote,
_pkl_filepath,
load_descr,
)
# The original data can be found at:
# https://www.dcc.fc.up.pt/~ltorgo/Regression/cal_housing.tgz
ARCHIVE = RemoteFileMetadata(
filename="cal_housing.tgz",
url="https://ndownloader.figshare.com/files/5976036",
checksum="aaa5c9a6afe2225cc2aed2723682ae403280c4a3695a2ddda4ffb5d8215ea681",
)
logger = logging.getLogger(__name__)
@validate_params(
{
"data_home": [str, PathLike, None],
"download_if_missing": ["boolean"],
"return_X_y": ["boolean"],
"as_frame": ["boolean"],
"n_retries": [Interval(Integral, 1, None, closed="left")],
"delay": [Interval(Real, 0.0, None, closed="neither")],
},
prefer_skip_nested_validation=True,
)
def fetch_california_housing(
*,
data_home=None,
download_if_missing=True,
return_X_y=False,
as_frame=False,
n_retries=3,
delay=1.0,
):
"""Load the California housing dataset (regression).
============== ==============
Samples total 20640
Dimensionality 8
Features real
Target real 0.15 - 5.
============== ==============
Read more in the :ref:`User Guide <california_housing_dataset>`.
Parameters
----------
data_home : str or path-like, default=None
Specify another download and cache folder for the datasets. By default
all scikit-learn data is stored in '~/scikit_learn_data' subfolders.
download_if_missing : bool, default=True
If False, raise an OSError if the data is not locally available
instead of trying to download the data from the source site.
return_X_y : bool, default=False
If True, returns ``(data.data, data.target)`` instead of a Bunch
object.
.. versionadded:: 0.20
as_frame : bool, default=False
If True, the data is a pandas DataFrame including columns with
appropriate dtypes (numeric, string or categorical). The target is
a pandas DataFrame or Series depending on the number of target_columns.
.. versionadded:: 0.23
n_retries : int, default=3
Number of retries when HTTP errors are encountered.
.. versionadded:: 1.5
delay : float, default=1.0
Number of seconds between retries.
.. versionadded:: 1.5
Returns
-------
dataset : :class:`~sklearn.utils.Bunch`
Dictionary-like object, with the following attributes.
data : ndarray, shape (20640, 8)
Each row corresponding to the 8 feature values in order.
If ``as_frame`` is True, ``data`` is a pandas object.
target : numpy array of shape (20640,)
Each value corresponds to the average
house value in units of 100,000.
If ``as_frame`` is True, ``target`` is a pandas object.
feature_names : list of length 8
Array of ordered feature names used in the dataset.
DESCR : str
Description of the California housing dataset.
frame : pandas DataFrame
Only present when `as_frame=True`. DataFrame with ``data`` and
``target``.
.. versionadded:: 0.23
(data, target) : tuple if ``return_X_y`` is True
A tuple of two ndarray. The first containing a 2D array of
shape (n_samples, n_features) with each row representing one
sample and each column representing the features. The second
ndarray of shape (n_samples,) containing the target samples.
.. versionadded:: 0.20
Notes
-----
This dataset consists of 20,640 samples and 9 features.
Examples
--------
>>> from sklearn.datasets import fetch_california_housing
>>> housing = fetch_california_housing()
>>> print(housing.data.shape, housing.target.shape)
(20640, 8) (20640,)
>>> print(housing.feature_names[0:6])
['MedInc', 'HouseAge', 'AveRooms', 'AveBedrms', 'Population', 'AveOccup']
"""
data_home = get_data_home(data_home=data_home)
if not exists(data_home):
makedirs(data_home)
filepath = _pkl_filepath(data_home, "cal_housing.pkz")
if not exists(filepath):
if not download_if_missing:
raise OSError("Data not found and `download_if_missing` is False")
logger.info(
"Downloading Cal. housing from {} to {}".format(ARCHIVE.url, data_home)
)
archive_path = _fetch_remote(
ARCHIVE,
dirname=data_home,
n_retries=n_retries,
delay=delay,
)
with tarfile.open(mode="r:gz", name=archive_path) as f:
cal_housing = np.loadtxt(
f.extractfile("CaliforniaHousing/cal_housing.data"), delimiter=","
)
# Columns are not in the same order compared to the previous
# URL resource on lib.stat.cmu.edu
columns_index = [8, 7, 2, 3, 4, 5, 6, 1, 0]
cal_housing = cal_housing[:, columns_index]
joblib.dump(cal_housing, filepath, compress=6)
remove(archive_path)
else:
cal_housing = joblib.load(filepath)
feature_names = [
"MedInc",
"HouseAge",
"AveRooms",
"AveBedrms",
"Population",
"AveOccup",
"Latitude",
"Longitude",
]
target, data = cal_housing[:, 0], cal_housing[:, 1:]
# avg rooms = total rooms / households
data[:, 2] /= data[:, 5]
# avg bed rooms = total bed rooms / households
data[:, 3] /= data[:, 5]
# avg occupancy = population / households
data[:, 5] = data[:, 4] / data[:, 5]
# target in units of 100,000
target = target / 100000.0
descr = load_descr("california_housing.rst")
X = data
y = target
frame = None
target_names = [
"MedHouseVal",
]
if as_frame:
frame, X, y = _convert_data_dataframe(
"fetch_california_housing", data, target, feature_names, target_names
)
if return_X_y:
return X, y
return Bunch(
data=X,
target=y,
frame=frame,
target_names=target_names,
feature_names=feature_names,
DESCR=descr,
)
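The in-place column arithmetic in the loader above turns the raw block-group totals into the per-household averages that the dataset exposes. A toy sketch with made-up numbers (values are illustrative only):

```python
# Raw cal_housing columns store block-group totals; the loader divides
# by households to get the per-household features it exposes.
total_rooms, total_bedrooms, population, households = 600.0, 120.0, 300.0, 100.0

ave_rooms = total_rooms / households      # AveRooms
ave_bedrms = total_bedrooms / households  # AveBedrms
ave_occup = population / households       # AveOccup

assert (ave_rooms, ave_bedrms, ave_occup) == (6.0, 1.2, 3.0)
```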
|
{
"filename": "hist_mutual.py",
"repo_name": "rodluger/planetplanet",
"repo_path": "planetplanet_extracted/planetplanet-master/scripts/hist_mutual.py",
"type": "Python"
}
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
'''
hist_mutual.py |github|
-----------------------
Histograms of the mutual transit events in TRAPPIST-1. Shows histograms
of the fractional depth and duration of these events for all pairs of
planets.
TRAPPIST-1b
~~~~~~~~~~~
.. image:: /b_mutual.jpg
:width: 400px
:align: center
TRAPPIST-1c
~~~~~~~~~~~
.. image:: /c_mutual.jpg
:width: 400px
:align: center
TRAPPIST-1d
~~~~~~~~~~~
.. image:: /d_mutual.jpg
:width: 400px
:align: center
TRAPPIST-1e
~~~~~~~~~~~
.. image:: /e_mutual.jpg
:width: 400px
:align: center
TRAPPIST-1f
~~~~~~~~~~~
.. image:: /f_mutual.jpg
:width: 400px
:align: center
TRAPPIST-1g
~~~~~~~~~~~
.. image:: /g_mutual.jpg
:width: 400px
:align: center
TRAPPIST-1h
~~~~~~~~~~~
.. image:: /h_mutual.jpg
:width: 400px
:align: center
.. role:: raw-html(raw)
:format: html
.. |github| replace:: :raw-html:`<a href = "https://github.com/rodluger/planetplanet/blob/master/scripts/hist_mutual.py"><i class="fa fa-github" aria-hidden="true"></i></a>`
'''
from __future__ import division, print_function, absolute_import, \
unicode_literals
import os
import subprocess
import planetplanet
from planetplanet import jwst
from planetplanet import Trappist1
from planetplanet.constants import *
from planetplanet.pool import Pool
import matplotlib
import matplotlib.pyplot as pl
from matplotlib.ticker import FuncFormatter
import numpy as np
import corner
from tqdm import tqdm
from scipy.stats import norm
datapath = os.path.join(os.path.dirname(os.path.dirname(
os.path.abspath(planetplanet.__file__))),
'scripts', 'data')
histpath = os.path.join(os.path.dirname(os.path.dirname(
os.path.abspath(planetplanet.__file__))),
'scripts')
if not os.path.exists(datapath):
os.makedirs(datapath)
def _test():
'''
This routine is too expensive to test on Travis, so I'm
bypassing it for now.
'''
pass
def Submit(queue = None, email = None, walltime = 8, nodes = 5, ppn = 12,
mpn = None, nsamp = 50000, batch_size = 30, nproc = None):
'''
Submits a PBS cluster job to run :py:func:`Compute` in parallel.
:param str queue: The name of the queue to submit to. \
Default :py:obj:`None`
:param str email: The email to send job status notifications to. \
Default :py:obj:`None`
:param int walltime: The number of hours to request. Default `8`
:param int nodes: The number of nodes to request. Default `5`
:param int ppn: The number of processors per node to request. Default `12`
:param int nsamp: The number of prior samples to draw. Default `50,000`
:param int batch_size: Size of each batch used in the parallelization. \
        Default `30`
:param int mpn: Memory per node in gb to request. Default no setting.
:param int nproc: Number of processes to spawn. Default is the number of \
core.
'''
if nproc is None:
nproc = ppn * nodes
str_w = 'walltime=%d:00:00' % walltime
if mpn is not None:
str_n = 'nodes=%d:ppn=%d,feature=%dcore,mem=%dgb' % \
(nodes, ppn, ppn, mpn * nodes)
else:
str_n = 'nodes=%d:ppn=%d,feature=%dcore' % (nodes, ppn, ppn)
str_v = 'NPROC=%d,HISTPATH=%s,NSAMP=%d,BATCHSZ=%d' % \
(nproc, histpath, nsamp, batch_size)
str_name = 'planetplanet'
str_out = 'hist_mutual.log'
qsub_args = ['qsub', 'hist_mutual.pbs',
'-v', str_v,
'-o', str_out,
'-j', 'oe',
'-N', str_name,
'-l', str_n,
'-l', str_w]
if email is not None:
        qsub_args += ['-M', email, '-m', 'ae']
if queue is not None:
qsub_args += ['-q', queue]
print("Submitting the job...")
subprocess.call(qsub_args)
class _FunctionWrapper(object):
'''
A simple function wrapper class. Stores :py:obj:`args` and :py:obj:`kwargs`
and allows an arbitrary function to be called via :py:func:`map`.
Used internally.
'''
def __init__(self, f, *args, **kwargs):
'''
'''
self.f = f
self.args = args
self.kwargs = kwargs
def __call__(self, x):
'''
'''
return self.f(*self.args, **self.kwargs)
def _Parallelize(nsamp, batch_size):
'''
Runs the actual parallelized computations. Used internally.
'''
# Get our function wrapper
m = _FunctionWrapper(Compute, nsamp = batch_size,
progress_bar = False)
# Parallelize. We will run `N` iterations
N = int(np.ceil(nsamp / batch_size))
with Pool() as pool:
pool.map(m, range(N))
def histogram(system, tstart, tend, dt = 0.0001):
'''
    Computes statistical properties of mutual events (PPOs occurring on the face
of the star).
:param system: A system instance.
:type system: :py:obj:`planetplanet.structs.System`
:param float tstart: The integration start time (BJD − 2,450,000)
:param float tend: The integration end time (BJD − 2,450,000)
:param float dt: The time resolution in days. Occultations shorter \
than this will not be registered.
'''
# Compute the orbits
time = np.arange(tstart, tend, dt)
system.compute_orbits(time)
pairs = []
durs = []
depths = []
for bi, body in enumerate(system.bodies[1:]):
# Loop over all times w/ occultations of the star
inds = np.where(system.bodies[0].occultor)[0]
for i in inds:
# Get all bodies currently occulting the star
occultors = []
for occ in range(1, len(system.bodies)):
if (system.bodies[0].occultor[i] & 2 ** occ):
occultors.append(occ)
# Check if any of these occult each other
if len(occultors) > 1:
for occ1 in occultors:
for occ2 in occultors:
if system.bodies[occ1].occultor[i] & 2 ** occ2:
# Sort the planet indices
occ1, occ2 = sorted([occ1, occ2])
# Is this a new occultation?
if (len(pairs)==0) or (pairs[-1] != [occ1, occ2]):
pairs.append([occ1, occ2])
durs.append(0.)
depths.append(0.)
# Update the maximum depth. Based on
# http://mathworld.wolfram.com/
# Circle-CircleIntersection.html
dx2 = (system.bodies[occ1].x[i] -
system.bodies[occ2].x[i]) ** 2
dy2 = (system.bodies[occ1].y[i] -
system.bodies[occ2].y[i]) ** 2
d = np.sqrt(dx2 + dy2)
r1 = system.bodies[occ1]._r
r2 = system.bodies[occ2]._r
A = r1 ** 2 * np.arccos((d ** 2 + r1 ** 2 - r2 ** 2)
/ (2 * d * r1)) \
+ r2 ** 2 * np.arccos((d ** 2 + r2 ** 2 - r1 ** 2)
/ (2 * d * r2)) \
- 0.5 * np.sqrt((-d + r1 + r2) *
(d + r1 - r2) *
(d - r1 + r2) *
(d + r1 + r2))
Astar = np.pi * (system.bodies[0]._r) ** 2
depth = A / Astar
depths[-1] = max(depths[-1], depth)
# Update the duration
durs[-1] += dt
return np.array(pairs, dtype = int), \
np.array(durs, dtype = float), \
np.array(depths, dtype = float)
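The maximum-depth update inside the loop above is the circle-circle intersection ("lens") area from MathWorld, normalized by the stellar disk area. A standalone sketch of that geometry (the function name is ours):

```python
import numpy as np

def lens_area(d, r1, r2):
    """Intersection area of two circles of radii r1, r2 whose centers
    are d apart (valid for |r1 - r2| < d < r1 + r2), following
    http://mathworld.wolfram.com/Circle-CircleIntersection.html"""
    return (r1**2 * np.arccos((d**2 + r1**2 - r2**2) / (2 * d * r1))
            + r2**2 * np.arccos((d**2 + r2**2 - r1**2) / (2 * d * r2))
            - 0.5 * np.sqrt((-d + r1 + r2) * (d + r1 - r2)
                            * (d - r1 + r2) * (d + r1 + r2)))

# Symmetric check: r1 = r2 = 1, d = 1 reduces to 2*pi/3 - sqrt(3)/2.
assert np.isclose(lens_area(1.0, 1.0, 1.0), 2 * np.pi / 3 - np.sqrt(3) / 2)
```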
def Compute(nsamp = 100, nbody = True, progress_bar = True, **kwargs):
'''
Runs the simulations.
    :param int nsamp: The number of prior samples to draw. Default `100`
:param bool nbody: Use the N-Body solver? Default :py:obj:`True`
:param bool progress_bar: Display a progress bar? Default :py:obj:`True`
'''
# Draw samples from the prior
pairs = np.empty([0, 2], dtype = int)
durs = np.array([], dtype = float)
depths = np.array([], dtype = float)
if progress_bar:
wrap = tqdm
else:
wrap = lambda x: x
for n in wrap(range(nsamp)):
# Instantiate the Trappist-1 system
system = Trappist1(sample = True, nbody = nbody,
quiet = True, **kwargs)
system.settings.timestep = 1. / 24.
# Run!
try:
p, t, d = histogram(system, OCTOBER_08_2016, OCTOBER_08_2016 + 365)
        except Exception:
print("ERROR in routine `hist.Compute()`")
continue
if len(p):
pairs = np.vstack([pairs, p])
durs = np.append(durs, t)
depths = np.append(depths, d)
# Save
n = 0
while os.path.exists(os.path.join(datapath, 'hist_mutual%03d.npz' % n)):
n += 1
np.savez(os.path.join(datapath, 'hist_mutual%03d.npz' % n),
pairs = pairs, durs = durs, depths = depths)
def MergeFiles():
'''
Merge all the `npz` savesets into a single one for faster plotting.
'''
# Load
pairs = np.empty([0, 2], dtype = int)
durs = np.array([], dtype = float)
depths = np.array([], dtype = float)
print("Loading...")
for n in tqdm(range(1000)):
if os.path.exists(os.path.join(datapath, 'hist_mutual%03d.npz' % n)):
# Skip corrupt files
try:
data = np.load(os.path.join(datapath, 'hist_mutual%03d.npz' % n))
os.remove(os.path.join(datapath, 'hist_mutual%03d.npz' % n))
data['pairs'][0]
data['durs'][0]
data['depths'][0]
            except Exception:
continue
else:
break
pairs = np.vstack([pairs, data['pairs']])
durs = np.append(durs, data['durs'])
depths = np.append(depths, data['depths'])
# Save as one big file
if n > 0:
print("Saving...")
np.savez(os.path.join(datapath,'hist_mutual000.npz'),
pairs = pairs, durs = durs, depths = depths)
def Plot():
'''
'''
# Load
pairs = np.empty([0, 2], dtype = int)
durs = np.array([], dtype = float)
depths = np.array([], dtype = float)
print("Loading...")
for n in tqdm(range(1000)):
if os.path.exists(os.path.join(datapath, 'hist_mutual%03d.npz' % n)):
# Skip corrupt files
try:
data = np.load(os.path.join(datapath, 'hist_mutual%03d.npz' % n))
data['pairs'][0]
data['durs'][0]
data['depths'][0]
            except Exception:
continue
else:
if n == 0:
raise Exception("Please run `Compute()` first.")
break
pairs = np.vstack([pairs, data['pairs']])
durs = np.append(durs, data['durs'])
depths = np.append(depths, data['depths'])
# Dummy system to get colors
system = Trappist1()
colors = [system.bodies[n].color for n in range(1, 8)]
# For the paper, we ran 30,000 simulations, so the average
# number of mutual transits per year is...
print("Mutual transits per year: %.3f" % (len(pairs) / 30000.))
# Loop over all planets
for k in range(1, 8):
# Indices of events involving this planet
inds = np.where((pairs[:,0] == k) | (pairs[:,1] == k))[0]
# Again, for the 30,000 simulations we ran...
print("%s: %.3f" % (system.bodies[k].name, len(pairs[inds]) / 30000.))
# Duration
dt = durs[inds] / MINUTE
# Depth
d = depths[inds] * 1e2
# Corner plot
samples = np.vstack((dt, d)).T
fig = corner.corner(samples, plot_datapoints = False,
range = [(0, 60), (0, 1)],
labels = ["Duration [min]",
"Depth [%]"],
bins = 30,
hist_kwargs = {'color': 'w'})
# Indices of events involving each of the planets
pinds = [[] for j in range(1, 8)]
for j in range(1, 8):
if j != k:
pinds[j - 1] = np.where((pairs[inds,0] == j) | (pairs[inds,1] == j))[0]
# Duration stacked histogram
n, _, _ = fig.axes[0].hist([dt[p] for p in pinds], bins = 30,
range = (0, 60),
stacked = True,
density = True,
color = colors,
alpha = 0.5)
maxn = np.max(np.array(n)[-1])
fig.axes[0].hist(dt, bins = 30, range = (0, 60), density = True,
color = 'k', alpha = 1, histtype = 'step')
fig.axes[0].set_ylim(0, 1.1 * maxn)
# Depth stacked histogram
n, _, _ = fig.axes[3].hist([d[p] for p in pinds], bins = 30,
range = (0, 1),
stacked = True,
density = True,
color = colors,
alpha = 0.5)
maxn = np.max(np.array(n)[-1])
fig.axes[3].hist(d, bins = 30, range = (0, 1), density = True,
color = 'k', alpha = 1, histtype = 'step')
fig.axes[3].set_ylim(0, 1.1 * maxn)
# Tweak appearance
for i, ax in enumerate(fig.axes):
ax.set_xlabel(ax.get_xlabel(), fontsize = 14, fontweight = 'bold')
ax.set_ylabel(ax.get_ylabel(), fontsize = 14, fontweight = 'bold')
for tick in ax.get_xticklabels() + ax.get_yticklabels():
tick.set_fontsize(12)
# HACK: Legend for planet `b`
if k == 1:
for occultor in [2,3,4,5,7]:
fig.axes[0].axhline(-1, color = system.bodies[occultor].color,
lw = 4, alpha = 0.5,
label = system.bodies[occultor].name)
fig.axes[0].legend(loc = 'upper right',
fontsize = 8, borderpad = 1)
# Save!
fig.savefig('%s_mutual.pdf' % system.bodies[k].name,
bbox_inches = 'tight')
{
"filename": "setup.py",
"repo_name": "mamartinod/grip",
"repo_path": "grip_extracted/grip-main/setup.py",
"type": "Python"
}
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from setuptools import setup, find_packages
setup(
name='grip-nulling',
version='1.5.0',
author='M.-A. Martinod',
description='Self-calibration data reduction tools for nulling interferometry',
long_description='GRIP is a data reduction library providing self-calibration methods for nulling interferometry. Check https://github.com/mamartinod/grip for more information',
packages=find_packages(),
install_requires=[
'numpy>=1.26.2',
'scipy>=1.11.4',
'matplotlib>=3.6.3',
'h5py>=3.8.0',
'emcee>=3.1.4',
'numdifftools>=0.9.41',
'astropy>=5.2.1'
],
)
{
"filename": "object_registration.py",
"repo_name": "fchollet/keras",
"repo_path": "keras_extracted/keras-master/keras/src/saving/object_registration.py",
"type": "Python"
}
import inspect
from keras.src.api_export import keras_export
from keras.src.backend.common import global_state
GLOBAL_CUSTOM_OBJECTS = {}
GLOBAL_CUSTOM_NAMES = {}
@keras_export(
[
"keras.saving.CustomObjectScope",
"keras.saving.custom_object_scope",
"keras.utils.CustomObjectScope",
"keras.utils.custom_object_scope",
]
)
class CustomObjectScope:
"""Exposes custom classes/functions to Keras deserialization internals.
Under a scope `with custom_object_scope(objects_dict)`, Keras methods such
as `keras.models.load_model()` or
`keras.models.model_from_config()` will be able to deserialize any
custom object referenced by a saved config (e.g. a custom layer or metric).
Example:
Consider a custom regularizer `my_regularizer`:
```python
layer = Dense(3, kernel_regularizer=my_regularizer)
# Config contains a reference to `my_regularizer`
config = layer.get_config()
...
# Later:
with custom_object_scope({'my_regularizer': my_regularizer}):
layer = Dense.from_config(config)
```
Args:
custom_objects: Dictionary of `{str: object}` pairs,
where the `str` key is the object name.
"""
def __init__(self, custom_objects):
self.custom_objects = custom_objects or {}
self.backup = None
def __enter__(self):
self.backup = global_state.get_global_attribute(
"custom_objects_scope_dict", {}
).copy()
global_state.set_global_attribute(
"custom_objects_scope_dict", self.custom_objects.copy()
)
return self
def __exit__(self, *args, **kwargs):
global_state.set_global_attribute(
"custom_objects_scope_dict", self.backup.copy()
)
# Alias.
custom_object_scope = CustomObjectScope
@keras_export(
[
"keras.saving.get_custom_objects",
"keras.utils.get_custom_objects",
]
)
def get_custom_objects():
"""Retrieves a live reference to the global dictionary of custom objects.
Custom objects set using `custom_object_scope()` are not added to the
global dictionary of custom objects, and will not appear in the returned
dictionary.
Example:
```python
get_custom_objects().clear()
get_custom_objects()['MyObject'] = MyObject
```
Returns:
Global dictionary mapping registered class names to classes.
"""
return GLOBAL_CUSTOM_OBJECTS
@keras_export(
[
"keras.saving.register_keras_serializable",
"keras.utils.register_keras_serializable",
]
)
def register_keras_serializable(package="Custom", name=None):
"""Registers an object with the Keras serialization framework.
This decorator injects the decorated class or function into the Keras custom
object dictionary, so that it can be serialized and deserialized without
needing an entry in the user-provided custom object dict. It also injects a
function that Keras will call to get the object's serializable string key.
Note that to be serialized and deserialized, classes must implement the
`get_config()` method. Functions do not have this requirement.
The object will be registered under the key `'package>name'`, where `name`
defaults to the object name if not passed.
Example:
```python
# Note that `'my_package'` is used as the `package` argument here, and since
# the `name` argument is not provided, `'MyDense'` is used as the `name`.
@register_keras_serializable('my_package')
class MyDense(keras.layers.Dense):
pass
assert get_registered_object('my_package>MyDense') == MyDense
assert get_registered_name(MyDense) == 'my_package>MyDense'
```
Args:
package: The package that this class belongs to. This is used for the
`key` (which is `"package>name"`) to identify the class. Note that
this is the first argument passed into the decorator.
name: The name to serialize this class under in this package. If not
provided or `None`, the class' name will be used (note that this is
the case when the decorator is used with only one argument, which
becomes the `package`).
Returns:
A decorator that registers the decorated class with the passed names.
"""
def decorator(arg):
"""Registers a class with the Keras serialization framework."""
class_name = name if name is not None else arg.__name__
registered_name = package + ">" + class_name
if inspect.isclass(arg) and not hasattr(arg, "get_config"):
raise ValueError(
"Cannot register a class that does not have a "
"get_config() method."
)
GLOBAL_CUSTOM_OBJECTS[registered_name] = arg
GLOBAL_CUSTOM_NAMES[arg] = registered_name
return arg
return decorator
@keras_export(
[
"keras.saving.get_registered_name",
"keras.utils.get_registered_name",
]
)
def get_registered_name(obj):
"""Returns the name registered to an object within the Keras framework.
This function is part of the Keras serialization and deserialization
framework. It maps objects to the string names associated with those objects
for serialization/deserialization.
Args:
obj: The object to look up.
Returns:
The name associated with the object, or the default Python name if the
object is not registered.
"""
if obj in GLOBAL_CUSTOM_NAMES:
return GLOBAL_CUSTOM_NAMES[obj]
else:
return obj.__name__
@keras_export(
[
"keras.saving.get_registered_object",
"keras.utils.get_registered_object",
]
)
def get_registered_object(name, custom_objects=None, module_objects=None):
"""Returns the class associated with `name` if it is registered with Keras.
This function is part of the Keras serialization and deserialization
framework. It maps strings to the objects associated with them for
serialization/deserialization.
Example:
```python
def from_config(cls, config, custom_objects=None):
if 'my_custom_object_name' in config:
config['hidden_cls'] = tf.keras.saving.get_registered_object(
config['my_custom_object_name'], custom_objects=custom_objects)
```
Args:
name: The name to look up.
custom_objects: A dictionary of custom objects to look the name up in.
Generally, custom_objects is provided by the user.
module_objects: A dictionary of custom objects to look the name up in.
Generally, module_objects is provided by midlevel library
implementers.
Returns:
An instantiable class associated with `name`, or `None` if no such class
exists.
"""
custom_objects_scope_dict = global_state.get_global_attribute(
"custom_objects_scope_dict", {}
)
if name in custom_objects_scope_dict:
return custom_objects_scope_dict[name]
elif name in GLOBAL_CUSTOM_OBJECTS:
return GLOBAL_CUSTOM_OBJECTS[name]
elif custom_objects and name in custom_objects:
return custom_objects[name]
elif module_objects and name in module_objects:
return module_objects[name]
return None
{
"filename": "stis_reduction-wasp-121b-paper.ipynb",
"repo_name": "natalieallen/stis_pipeline",
"repo_path": "stis_pipeline_extracted/stis_pipeline-main/stis_reduction-wasp-121b-paper.ipynb",
"type": "Jupyter Notebook"
}
```python
from STIS_pipeline_functions import *
```
```python
# read in data and calibration files - the test directory I have has
# two different instruments so I just quickly sort those
data_files_430 = sorted(glob.glob("wasp-121/HST/G430L/*/*flt.fits"))
data_files_750 = sorted(glob.glob("wasp-121/HST/G750L/*/*flt.fits"))
```
```python
# get the data from the files - since I use default settings, it reads out: data, header, jitter, dqs, errors
# currently I remove the last few files from the 750 data because they're flat frames rather than science data
# didn't realize those could be included - will update get_data later to exclude
# anything that isn't "science"
data_430 = get_data(data_files_430)
data_750 = get_data(data_files_750[:-1])
```
```python
# check on one of the data frames (first part of data) - 430 first and then 750
%matplotlib inline
plt.figure(figsize = (10,10))
im = plt.imshow(data_430[0][0])
im.set_clim(0,150)
plt.ylabel("Y Pixel")
plt.xlabel("X Pixel")
plt.show()
plt.figure(figsize = (10,10))
im = plt.imshow(data_750[0][0])
im.set_clim(0,150)
plt.ylabel("Y Pixel")
plt.xlabel("X Pixel")
plt.show()
```
```python
# initialize a list for and then create traces for each of the data frames
trace_list_430 = []
for i in data_430[0]:
# trace wants xi, xf, and an initial y guess - these are numbers I just chose from some testing
# it can be seen above this just corresponds to the strongest flux
# this function is fast so you can test for yourself if these values don't work!
trace = trace_spectrum(i, 100,1000,60)
trace_list_430.append(trace)
trace_list_750 = []
for i in data_750[0]:
# trace wants xi, xf, and an initial y guess - these are numbers I just chose from some testing
# it can be seen above this just corresponds to the strongest flux
# this function is fast so you can test for yourself if these values don't work!
trace = trace_spectrum(i, 100,1000,60)
trace_list_750.append(trace)
```
```python
# fit a polynomial to the traces - I just use a second-order chebyshev
# and I fit it over the whole length of the data frame instead of just what was used for the fit
trace_fit_430 = []
plt.figure(figsize = (10,7))
for j in trace_list_430:
coeffs1 = chebyshev.chebfit(j[0], j[1], deg=2)
# plotting just to check everything looks okay
plt.plot(np.arange(len(data_430[0][0][0])),chebyshev.chebval(np.arange(len(data_430[0][0][0])),coeffs1),lw=3, alpha = 0.5)
trace = [np.arange(len(data_430[0][0][0])),chebyshev.chebval(np.arange(len(data_430[0][0][0])),coeffs1)]
trace_fit_430.append(trace)
plt.ylabel("Y Pixel")
plt.xlabel("X Pixel")
plt.show()
# fit a polynomial to the traces - I just use a second-order chebyshev
# and I fit it over the whole length of the data frame instead of just what was used for the fit
trace_fit_750 = []
plt.figure(figsize = (10,7))
for j in trace_list_750:
coeffs1 = chebyshev.chebfit(j[0], j[1], deg=2)
# plotting just to check everything looks okay
plt.plot(np.arange(len(data_750[0][0][0])),chebyshev.chebval(np.arange(len(data_750[0][0][0])),coeffs1),lw=3, alpha = 0.5)
trace = [np.arange(len(data_750[0][0][0])),chebyshev.chebval(np.arange(len(data_750[0][0][0])),coeffs1)]
trace_fit_750.append(trace)
plt.ylabel("Y Pixel")
plt.xlabel("X Pixel")
plt.show()
# there may be some outliers here, but this is just an initial trace to feed into the cleaning function
# so we don't worry about it - they should subside once the cosmic rays are cleaned
```
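The Chebyshev fit used above can be checked in isolation on synthetic data (numpy only; the trace values here are made up to mimic a gently curved spectral trace, not taken from the real frames):

```python
import numpy as np
from numpy.polynomial import chebyshev

# Synthetic trace: a gentle quadratic plus noise, standing in for trace_spectrum output
x = np.arange(100, 1000)
y_true = 60 + 0.002 * (x - 550) + 1e-6 * (x - 550) ** 2
rng = np.random.default_rng(0)
y = y_true + rng.normal(0, 0.05, x.size)

# Fit a second-order Chebyshev, then evaluate over the full detector width
coeffs = chebyshev.chebfit(x, y, deg=2)
full_x = np.arange(1024)
full_trace = chebyshev.chebval(full_x, coeffs)

# Inside the fitted range the recovery should be good to a small fraction of a pixel
assert np.max(np.abs(chebyshev.chebval(x, coeffs) - y_true)) < 0.1
```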
```python
# this is the number of frames in each orbit
data_750[5]
```
```python
# now to clean the data - default is all steps on (dq, difference_correct, hc_correct, spline_correct)
# I'm breaking up the orbits manually here; not sure what the best way to do this automatically is yet
# need to feed in the dq frames (the third output from the data function) and initial traces
cleaned_1_750 = clean_data(data_750[0][:4], dqs = data_750[3][:4], traces = trace_fit_750[:4])
cleaned_2_750 = clean_data(data_750[0][4:8], dqs = data_750[3][4:8], traces = trace_fit_750[4:8])
cleaned_3_750 = clean_data(data_750[0][8:12], dqs = data_750[3][8:12], traces = trace_fit_750[8:12])
cleaned_4_750 = clean_data(data_750[0][12:17], dqs = data_750[3][12:17], traces = trace_fit_750[12:17])
cleaned_5_750 = clean_data(data_750[0][17:22], dqs = data_750[3][17:22], traces = trace_fit_750[17:22])
cleaned_6_750 = clean_data(data_750[0][22:27], dqs = data_750[3][22:27], traces = trace_fit_750[22:27])
cleaned_7_750 = clean_data(data_750[0][27:32], dqs = data_750[3][27:32], traces = trace_fit_750[27:32])
cleaned_8_750 = clean_data(data_750[0][32:37], dqs = data_750[3][32:37], traces = trace_fit_750[32:37])
cleaned_9_750 = clean_data(data_750[0][37:42], dqs = data_750[3][37:42], traces = trace_fit_750[37:42])
cleaned_10_750 = clean_data(data_750[0][42:47], dqs = data_750[3][42:47], traces = trace_fit_750[42:47])
cleaned_11_750 = clean_data(data_750[0][47:52], dqs = data_750[3][47:52], traces = trace_fit_750[47:52])
cleaned_12_750 = clean_data(data_750[0][52:57], dqs = data_750[3][52:57], traces = trace_fit_750[52:57])
cleaned_13_750 = clean_data(data_750[0][57:62], dqs = data_750[3][57:62], traces = trace_fit_750[57:62])
cleaned_14_750 = clean_data(data_750[0][62:67], dqs = data_750[3][62:67], traces = trace_fit_750[62:67])
cleaned_15_750 = clean_data(data_750[0][67:72], dqs = data_750[3][67:72], traces = trace_fit_750[67:72])
```
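The manual orbit splitting above could probably be automated from the per-orbit frame counts (per the earlier cell, `data_750[5]` is assumed to hold the number of frames in each orbit); a minimal sketch with hypothetical counts:

```python
import numpy as np

# Hypothetical per-orbit frame counts - the real ones come from data_750[5]
frames_per_orbit = [4, 4, 4, 5, 5]
total = sum(frames_per_orbit)

# Split the running frame index at the cumulative orbit boundaries
orbit_indices = np.split(np.arange(total), np.cumsum(frames_per_orbit)[:-1])

# Each index chunk could then drive one clean_data() call in a loop, e.g.
# for idx in orbit_indices:
#     clean_data(data_750[0][idx[0]:idx[-1] + 1],
#                dqs=data_750[3][idx[0]:idx[-1] + 1],
#                traces=trace_fit_750[idx[0]:idx[-1] + 1])
```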
```python
# comparison between the original and the cleaned image
# can easily see the lack of cosmic rays/bad pixels and removal of bad columns
plt.figure(figsize = (10,10))
im = plt.imshow(data_750[0][1])
im.set_clim(0,200)
plt.ylabel("Y Pixel")
plt.xlabel("X Pixel")
plt.show()
plt.figure(figsize = (10,10))
cim = plt.imshow(cleaned_1_750[1])
cim.set_clim(0,200)
plt.ylabel("Y Pixel")
plt.xlabel("X Pixel")
plt.show()
```
```python
# same for the 430 data
data_430[5]
```
```python
cleaned_1_430 = clean_data(data_430[0][:4], dqs = data_430[3][:4], traces = trace_fit_430[:4])
cleaned_2_430 = clean_data(data_430[0][4:8], dqs = data_430[3][4:8], traces = trace_fit_430[4:8])
cleaned_3_430 = clean_data(data_430[0][8:11], dqs = data_430[3][8:11], traces = trace_fit_430[8:11])
cleaned_4_430 = clean_data(data_430[0][11:14], dqs = data_430[3][11:14], traces = trace_fit_430[11:14])
cleaned_5_430 = clean_data(data_430[0][14:18], dqs = data_430[3][14:18], traces = trace_fit_430[14:18])
cleaned_6_430 = clean_data(data_430[0][18:21], dqs = data_430[3][18:21], traces = trace_fit_430[18:21])
cleaned_7_430 = clean_data(data_430[0][21:24], dqs = data_430[3][21:24], traces = trace_fit_430[21:24])
cleaned_8_430 = clean_data(data_430[0][24:28], dqs = data_430[3][24:28], traces = trace_fit_430[24:28])
cleaned_9_430 = clean_data(data_430[0][28:31], dqs = data_430[3][28:31], traces = trace_fit_430[28:31])
cleaned_10_430 = clean_data(data_430[0][31:34], dqs = data_430[3][31:34], traces = trace_fit_430[31:34])
cleaned_11_430 = clean_data(data_430[0][34:38], dqs = data_430[3][34:38], traces = trace_fit_430[34:38])
cleaned_12_430 = clean_data(data_430[0][38:41], dqs = data_430[3][38:41], traces = trace_fit_430[38:41])
cleaned_13_430 = clean_data(data_430[0][41:44], dqs = data_430[3][41:44], traces = trace_fit_430[41:44])
cleaned_14_430 = clean_data(data_430[0][44:48], dqs = data_430[3][44:48], traces = trace_fit_430[44:48])
```
```python
# keep this here for now just in case
cleaned_1_4302 = clean_data(data_430[0][:4], dqs = data_430[3][:4], traces = trace_fit_430[:4], spline_correct = True)
cleaned_2_4302 = clean_data(data_430[0][4:8], dqs = data_430[3][4:8], traces = trace_fit_430[4:8], spline_correct = True)
cleaned_3_4302 = clean_data(data_430[0][8:12], dqs = data_430[3][8:12], traces = trace_fit_430[8:12], spline_correct = True)
cleaned_4_4302 = clean_data(data_430[0][12:16], dqs = data_430[3][12:16], traces = trace_fit_430[12:16], spline_correct = True)
cleaned_5_4302 = clean_data(data_430[0][16:20], dqs = data_430[3][16:20], traces = trace_fit_430[16:20], spline_correct = True)
cleaned_6_4302 = clean_data(data_430[0][20:24], dqs = data_430[3][20:24], traces = trace_fit_430[20:24], spline_correct = True)
cleaned_7_4302 = clean_data(data_430[0][24:28], dqs = data_430[3][24:28], traces = trace_fit_430[24:28], spline_correct = True)
cleaned_8_4302 = clean_data(data_430[0][28:32], dqs = data_430[3][28:32], traces = trace_fit_430[28:32], spline_correct = True)
cleaned_9_4302 = clean_data(data_430[0][32:36], dqs = data_430[3][32:36], traces = trace_fit_430[32:36], spline_correct = True)
cleaned_10_4302 = clean_data(data_430[0][36:40], dqs = data_430[3][36:40], traces = trace_fit_430[36:40], spline_correct = True)
cleaned_11_4302 = clean_data(data_430[0][40:44], dqs = data_430[3][40:44], traces = trace_fit_430[40:44], spline_correct = True)
cleaned_12_4302 = clean_data(data_430[0][44:48], dqs = data_430[3][44:48], traces = trace_fit_430[44:48], spline_correct = True)
```
```python
plt.figure(figsize = (10,10))
cim = plt.imshow(data_430[0][1])
cim.set_clim(0,200)
plt.ylabel("Y Pixel")
plt.xlabel("X Pixel")
plt.show()
plt.figure(figsize = (10,10))
cim = plt.imshow(cleaned_1_430[1])
cim.set_clim(0,200)
plt.ylabel("Y Pixel")
plt.xlabel("X Pixel")
plt.show()
```
```python
# same for the second orbit
cleaned_21_430 = clean_data(data_430[0][48:52], dqs = data_430[3][48:52], traces = trace_fit_430[48:52])
cleaned_22_430 = clean_data(data_430[0][52:56], dqs = data_430[3][52:56], traces = trace_fit_430[52:56])
cleaned_23_430 = clean_data(data_430[0][56:59], dqs = data_430[3][56:59], traces = trace_fit_430[56:59])
cleaned_24_430 = clean_data(data_430[0][59:62], dqs = data_430[3][59:62], traces = trace_fit_430[59:62])
cleaned_25_430 = clean_data(data_430[0][62:66], dqs = data_430[3][62:66], traces = trace_fit_430[62:66])
cleaned_26_430 = clean_data(data_430[0][66:69], dqs = data_430[3][66:69], traces = trace_fit_430[66:69])
cleaned_27_430 = clean_data(data_430[0][69:72], dqs = data_430[3][69:72], traces = trace_fit_430[69:72])
cleaned_28_430 = clean_data(data_430[0][72:76], dqs = data_430[3][72:76], traces = trace_fit_430[72:76])
cleaned_29_430 = clean_data(data_430[0][76:79], dqs = data_430[3][76:79], traces = trace_fit_430[76:79])
cleaned_210_430 = clean_data(data_430[0][79:82], dqs = data_430[3][79:82], traces = trace_fit_430[79:82])
cleaned_211_430 = clean_data(data_430[0][82:86], dqs = data_430[3][82:86], traces = trace_fit_430[82:86])
cleaned_212_430 = clean_data(data_430[0][86:89], dqs = data_430[3][86:89], traces = trace_fit_430[86:89])
cleaned_213_430 = clean_data(data_430[0][89:92], dqs = data_430[3][89:92], traces = trace_fit_430[89:92])
cleaned_214_430 = clean_data(data_430[0][92:96], dqs = data_430[3][92:96], traces = trace_fit_430[92:96])
```
```python
plt.figure(figsize = (10,10))
cim = plt.imshow(data_430[0][0+48])
cim.set_clim(0,200)
plt.ylabel("Y Pixel")
plt.xlabel("X Pixel")
plt.show()
plt.figure(figsize = (10,10))
cim = plt.imshow(cleaned_21_430[0])
cim.set_clim(0,200)
plt.ylabel("Y Pixel")
plt.xlabel("X Pixel")
plt.show()
```
```python
# just putting all of the cleaned files into one list
transit_750_cleaned = [*cleaned_1_750, *cleaned_2_750, *cleaned_3_750, *cleaned_4_750, *cleaned_5_750, *cleaned_6_750, *cleaned_7_750, *cleaned_8_750, *cleaned_9_750, *cleaned_10_750, *cleaned_11_750, *cleaned_12_750, *cleaned_13_750, *cleaned_14_750, *cleaned_15_750]
transit_430_cleaned = [*cleaned_1_430, *cleaned_2_430, *cleaned_3_430, *cleaned_4_430, *cleaned_5_430, *cleaned_6_430, *cleaned_7_430, *cleaned_8_430, *cleaned_9_430, *cleaned_10_430, *cleaned_11_430, *cleaned_12_430, *cleaned_13_430, *cleaned_14_430]
transit_430_2_cleaned = [*cleaned_21_430, *cleaned_22_430, *cleaned_23_430, *cleaned_24_430, *cleaned_25_430, *cleaned_26_430, *cleaned_27_430, *cleaned_28_430, *cleaned_29_430, *cleaned_210_430, *cleaned_211_430, *cleaned_212_430, *cleaned_213_430, *cleaned_214_430]
```
```python
# let's look at the 750 data
for i in range(len(transit_750_cleaned)):
print("Frame", i)
plt.figure(figsize = (10,7))
im = plt.imshow(data_750[0][i])
im.set_clim(0,200)
plt.show()
plt.figure(figsize = (10,7))
im = plt.imshow(transit_750_cleaned[i])
im.set_clim(0,200)
plt.show()
plt.figure(figsize = (10,7))
im = plt.imshow(data_750[0][i]-transit_750_cleaned[i])
im.set_clim(0,200)
plt.show()
```
```python
# and the 430 data
transit_430_cleaned_t = transit_430_cleaned + transit_430_2_cleaned
for i in range(len(transit_430_cleaned_t)):
print("Frame", i)
plt.figure(figsize = (10,7))
im = plt.imshow(data_430[0][i])
im.set_clim(0,200)
plt.show()
plt.figure(figsize = (10,7))
im = plt.imshow(transit_430_cleaned_t[i])
im.set_clim(0,200)
plt.show()
plt.figure(figsize = (10,7))
im = plt.imshow(data_430[0][i]-transit_430_cleaned_t[i])
im.set_clim(0,200)
plt.show()
```
```python
# save all of the cleaned images
name = "cleaned_data_wasp121_nov15"
if not os.path.exists(name):
os.makedirs(name)
for i in range(len(transit_750_cleaned)):
filename = "cleaned_750_1_" + str(i).zfill(3) + ".fits"
fits.writeto(name + "/" +filename, transit_750_cleaned[i], data_750[1][i], overwrite=True)
fits.append(name + "/" +filename, data_750[4][i])
for i in range(len(transit_430_cleaned)):
filename = "cleaned_430_1_" + str(i).zfill(3) + ".fits"
fits.writeto(name + "/" +filename, transit_430_cleaned[i], data_430[1][i], overwrite=True)
fits.append(name + "/" +filename, data_430[4][i])
for i in range(len(transit_430_2_cleaned)):
filename = "cleaned_430_2_" + str(i).zfill(3) + ".fits"
fits.writeto(name + "/" + filename, transit_430_2_cleaned[i], data_430[1][48+i], overwrite=True)
fits.append(name + "/" + filename, data_430[4][48+i])
```
```python
# and then just write the jitter dictionary to a pickle file for later use
with open('jitter_750.pkl', 'wb') as handle:
pickle.dump(data_750[2], handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('jitter_430.pkl', 'wb') as handle:
pickle.dump(data_430[2], handle, protocol=pickle.HIGHEST_PROTOCOL)
```
```python
# now that we have the cleaned files, we don't need to run any of the above code a second time
# read those files in to have this as a new starting point
from STIS_pipeline_functions import *
name = "cleaned_data_wasp121_nov17"
cleaned_data_430_1 = sorted(glob.glob(name + "/cleaned_430_1*"))
transit_430_cleaned = []
headers_430 = []
errs_430 = []
for i in cleaned_data_430_1:
c_data, hdr = fits.getdata(i, ext = 0, header = True)
transit_430_cleaned.append(c_data)
headers_430.append(hdr)
e_data = fits.getdata(i, ext = 1)
errs_430.append(e_data)
cleaned_data_430_2 = sorted(glob.glob(name + "/cleaned_430_2*"))
transit_430_2_cleaned = []
headers_430_2 = []
errs_430_2 = []
for i in cleaned_data_430_2:
c_data, hdr = fits.getdata(i, ext = 0, header = True)
transit_430_2_cleaned.append(c_data)
headers_430_2.append(hdr)
e_data = fits.getdata(i, ext = 1)
errs_430_2.append(e_data)
with open('jitter_430.pkl', 'rb') as handle:
jit_430 = pickle.load(handle)
```
```python
cleaned_data_750 = sorted(glob.glob(name + "/cleaned_750*"))
transit_750_cleaned = []
headers_750 = []
errs_750 = []
for i in cleaned_data_750:
c_data, hdr = fits.getdata(i, ext = 0, header = True)
transit_750_cleaned.append(c_data)
headers_750.append(hdr)
e_data = fits.getdata(i, ext = 1)
errs_750.append(e_data)
with open('jitter_750.pkl', 'rb') as handle:
jit_750 = pickle.load(handle)
```
```python
# now to retrace the cleaned data
trace_430_list = []
plt.figure(figsize = (10,7))
for i in transit_430_cleaned:
trace = trace_spectrum(i, 200,1000,60)
trace_430_list.append(trace)
trace_430_coeffs = []
trace_430_fit = []
for j in trace_430_list:
coeffs1 = chebyshev.chebfit(j[0], j[1], deg=2)
plt.plot(np.arange(len(transit_430_cleaned[0][0])),chebyshev.chebval(np.arange(len(transit_430_cleaned[0][0])),coeffs1),lw=3, alpha = 0.5)
trace = [np.arange(len(transit_430_cleaned[0][0])),chebyshev.chebval(np.arange(len(transit_430_cleaned[0][0])),coeffs1)]
trace_430_fit.append(trace)
trace_430_coeffs.append(coeffs1)
plt.ylabel("Y Pixel")
plt.xlabel("X Pixel")
plt.show()
trace_430_2_list = []
plt.figure(figsize = (10,7))
for i in transit_430_2_cleaned:
trace = trace_spectrum(i, 200,1000,60)
trace_430_2_list.append(trace)
trace_430_2_coeffs = []
trace_430_2_fit = []
for j in trace_430_2_list:
coeffs1 = chebyshev.chebfit(j[0], j[1], deg=2)
plt.plot(np.arange(len(transit_430_2_cleaned[0][0])),chebyshev.chebval(np.arange(len(transit_430_2_cleaned[0][0])),coeffs1),lw=3, alpha = 0.5)
trace = [np.arange(len(transit_430_2_cleaned[0][0])),chebyshev.chebval(np.arange(len(transit_430_2_cleaned[0][0])),coeffs1)]
trace_430_2_fit.append(trace)
trace_430_2_coeffs.append(coeffs1)
plt.ylabel("Y Pixel")
plt.xlabel("X Pixel")
plt.show()
trace_750_list = []
plt.figure(figsize = (10,7))
for i in transit_750_cleaned:
trace = trace_spectrum(i, 200,1000,60)
trace_750_list.append(trace)
trace_750_coeffs = []
trace_750_fit = []
for j in trace_750_list:
coeffs1 = chebyshev.chebfit(j[0], j[1], deg=2)
plt.plot(np.arange(len(transit_750_cleaned[0][0])),chebyshev.chebval(np.arange(len(transit_750_cleaned[0][0])),coeffs1),lw=3, alpha = 0.5)
trace = [np.arange(len(transit_750_cleaned[0][0])),chebyshev.chebval(np.arange(len(transit_750_cleaned[0][0])),coeffs1)]
trace_750_fit.append(trace)
trace_750_coeffs.append(coeffs1)
plt.ylabel("Y Pixel")
plt.xlabel("X Pixel")
plt.show()
```
```python
# now for the spectral extraction - I use optimal since I don't do the spline cleaning, and I feed in the ron and
# gain values from the header
spectra_430 = []
for i in range(len(transit_430_cleaned)):
spec = spectral_extraction(transit_430_cleaned[i], trace_430_fit[i], method = "optimal", gain = 4.0159998, ron = 8.2327986, polynomial_order = 3, nsigma = 12)
spectra_430.append(spec)
```
```python
spectra_430_2 = []
for i in range(len(transit_430_2_cleaned)):
spec = spectral_extraction(transit_430_2_cleaned[i], trace_430_2_fit[i], method = "optimal", gain = 4.0159998, ron = 8.2327986, polynomial_order = 3, nsigma = 12)
spectra_430_2.append(spec)
```
```python
spectra_750 = []
for i in range(len(transit_750_cleaned)):
spec = spectral_extraction(transit_750_cleaned[i], trace_750_fit[i], method = "optimal", gain = 4.0159998, ron = 8.2327986, polynomial_order = 3, nsigma = 12)
spectra_750.append(spec)
```
```python
# plot all of the spectra on top of each other - looks good! just like in the wasp-121 paper
plt.figure(figsize = (10,7))
for i in spectra_430:
plt.plot(i[1], alpha = 0.1)
plt.ylabel("Counts")
plt.xlabel("X Pixel")
plt.show()
plt.figure(figsize = (10,7))
for i in spectra_430_2:
plt.plot(i[1], alpha = 0.1)
plt.ylabel("Counts")
plt.xlabel("X Pixel")
plt.show()
plt.figure(figsize = (10,7))
for i in spectra_750:
plt.plot(i[1], alpha = 0.1)
plt.ylabel("Counts")
plt.xlabel("X Pixel")
plt.show()
```
```python
# hm, there's maybe a few outliers, so let's just try to get rid of some of those using a basic median filter function
spectra_430 = spectrum_outliers(spectra_430)
spectra_430_2 = spectrum_outliers(spectra_430_2)
spectra_750 = spectrum_outliers(spectra_750)
plt.figure(figsize = (10,7))
for i in spectra_430:
plt.plot(i[1], alpha = 0.1, color = "tomato")
plt.ylabel("Counts")
plt.xlabel("X Pixel")
plt.title("WASP-121 b G430L (1)")
#plt.savefig("w121b_g430l_extspec1.png", dpi = 300, facecolor = "white")
plt.show()
plt.figure(figsize = (10,7))
for i in spectra_430_2:
plt.plot(i[1], alpha = 0.1, color = "cornflowerblue")
plt.ylabel("Counts")
plt.xlabel("X Pixel")
plt.title("WASP-121 b G430L (2)")
#plt.savefig("w121b_g430l_extspec2.png", dpi = 300, facecolor = "white")
plt.show()
plt.figure(figsize = (10,7))
for i in spectra_750:
plt.plot(i[1], alpha = 0.1, color = "darkseagreen")
plt.ylabel("Counts")
plt.xlabel("X Pixel")
plt.title("WASP-121 b G750L")
#plt.savefig("w121b_g750l_extspec.png", dpi = 300, facecolor = "white")
plt.show()
```
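For reference, the kind of running-median outlier rejection that `spectrum_outliers` presumably performs can be sketched as follows - a toy version with illustrative window and threshold values, not the pipeline's actual implementation:

```python
import numpy as np

def reject_outliers(flux, window=11, nsigma=5.0):
    """Replace points deviating from a running median by more than nsigma."""
    half = window // 2
    padded = np.pad(flux, half, mode='edge')
    # Running median via a sliding window view over the padded spectrum
    windows = np.lib.stride_tricks.sliding_window_view(padded, window)
    med = np.median(windows, axis=-1)
    resid = flux - med
    # Robust scatter estimate from the median absolute deviation
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    cleaned = flux.copy()
    bad = np.abs(resid) > nsigma * sigma
    cleaned[bad] = med[bad]
    return cleaned

rng = np.random.default_rng(1)
spec = 1.0 + rng.normal(0, 0.01, 100)
spec[50] += 5.0  # inject a cosmic-ray-like spike
cleaned = reject_outliers(spec)
assert abs(cleaned[50] - 1.0) < 0.1
```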
```python
# sum up each of the spectra for the white light curve
lc_430 = []
lc_err_430 = []
for i in spectra_430:
point = np.nansum(i[1])
# optimal extraction outputs inverse variance, so assume np.sqrt(np.nansum((1/np.sqrt(i[2]))**2))
# is the error on each point
err = np.sqrt(np.nansum((1/np.sqrt(i[2]))**2))
lc_430.append(point)
lc_err_430.append(err)
lc_430_2 = []
lc_err_430_2 = []
for i in spectra_430_2:
point = np.nansum(i[1])
err = np.sqrt(np.nansum((1/np.sqrt(i[2]))**2))
lc_430_2.append(point)
lc_err_430_2.append(err)
lc_750 = []
lc_err_750 = []
for i in spectra_750:
point = np.nansum(i[1])
err = np.sqrt(np.nansum((1/np.sqrt(i[2]))**2))
lc_750.append(point)
lc_err_750.append(err)
```
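The error propagation used above - per-pixel variance taken as the reciprocal of the inverse variance returned by optimal extraction, summed in quadrature - can be sanity-checked on toy numbers (the inverse-variance values here are made up):

```python
import numpy as np

# Made-up per-pixel inverse variances (sigma = 0.5, 1.0, 2.0) and fluxes
ivar = np.array([4.0, 1.0, 0.25])
flux = np.array([10.0, 12.0, 11.0])

point = np.nansum(flux)
# Independent pixels: variance of the sum is the sum of per-pixel variances (1/ivar)
err = np.sqrt(np.nansum((1.0 / np.sqrt(ivar)) ** 2))

assert point == 33.0
assert np.isclose(err, np.sqrt(0.25 + 1.0 + 4.0))  # sqrt(5.25)
```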
```python
# now to deal with the timestamps. HST header times come in JD UTC, we want BJD-TDB
times_430 = times_to_bjd(headers_430, starname = "WASP-121")
times_start_430 = times_430[0][0]
times_end_430 = times_430[0][1]
times_430_2 = times_to_bjd(headers_430_2, starname = "WASP-121")
times_start_430_2 = times_430_2[0][0]
times_end_430_2 = times_430_2[0][1]
times_750 = times_to_bjd(headers_750, starname = "WASP-121")
times_start_750 = times_750[0][0]
times_end_750 = times_750[0][1]
```
```python
# should now be able to plot our raw white light curve - looks pretty good!
plt.figure(figsize = (10,7))
plt.errorbar(times_start_430, lc_430/np.nanmedian(lc_430), yerr = lc_err_430/np.nanmedian(lc_430), fmt = ".", color = "cornflowerblue", label = "var", markersize = 10)
plt.xlabel("Time (BJD-TDB)")
plt.ylabel("Relative Flux")
#plt.ylim(0.95, 1.01)
plt.show()
```
```python
plt.figure(figsize = (10,7))
plt.errorbar(times_start_430_2, lc_430_2/np.nanmedian(lc_430_2), yerr = lc_err_430_2/np.nanmedian(lc_430_2), fmt = ".", color = "cornflowerblue", label = "var", markersize = 10)
plt.xlabel("Time (BJD-TDB)")
plt.ylabel("Relative Flux")
#plt.ylim(0.95, 1.01)
plt.show()
```
```python
plt.figure(figsize = (10,7))
plt.errorbar(times_start_750, lc_750/np.nanmedian(lc_750), yerr = lc_err_750/np.nanmedian(lc_750), fmt = ".", color = "cornflowerblue", label = "var", markersize = 10)
plt.xlabel("Time (BJD-TDB)")
plt.ylabel("Relative Flux")
#plt.ylim(0.95, 1.01)
plt.show()
```
```python
# calculate trace movement - another detrending vector
template_430 = np.nanmedian([x[1] for x in trace_430_fit], axis = 0)
movement_430 = []
movement_sum_430 = []
for i in trace_430_fit:
movement_430.append(np.nanmedian(template_430-i[1]))
movement_sum_430.append(np.nansum((template_430-i[1])**2))
template_430_2 = np.nanmedian([x[1] for x in trace_430_2_fit], axis = 0)
movement_430_2 = []
movement_sum_430_2 = []
for i in trace_430_2_fit:
movement_430_2.append(np.nanmedian(template_430_2-i[1]))
movement_sum_430_2.append(np.nansum((template_430_2-i[1])**2))
jit_430["Y"] = movement_430 + movement_430_2
jit_430["Y_sum"] = movement_sum_430 + movement_sum_430_2
```
```python
template_750 = np.nanmedian([x[1] for x in trace_750_fit], axis = 0)
movement_750 = []
movement_sum_750 = []
for i in trace_750_fit:
movement_750.append(np.nanmedian(template_750-i[1]))
movement_sum_750.append(np.nansum((template_750-i[1])**2))
jit_750["Y"] = movement_750
jit_750["Y_sum"] = movement_sum_750
```
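The movement vectors computed above reduce each frame's fitted trace to a shift relative to the median template; a toy version with injected offsets shows the shifts being recovered:

```python
import numpy as np

# Three hypothetical fitted traces (y position vs. column), offset by known amounts
x = np.arange(1024)
traces = [60.0 + 0.001 * x + dy for dy in (0.0, 0.3, -0.2)]

# Median template across frames, then per-frame median shift and summed deviation,
# mirroring the movement / movement_sum vectors built above
template = np.nanmedian(traces, axis=0)
movement = [np.nanmedian(template - t) for t in traces]
movement_sum = [np.nansum((template - t) ** 2) for t in traces]

# The template coincides with the zero-offset trace, so the injected offsets come back
assert np.allclose(movement, [0.0, -0.3, 0.2])
```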
```python
# calculate trace polynomial movement - another detrending vector
template_430 = np.nanmedian(trace_430_coeffs, axis = 0)
movement_430 = []
movement_coeff1_430 = []
movement_coeff2_430 = []
for i in trace_430_coeffs:
    movement_430.append((template_430-i)[0])
    movement_coeff1_430.append((template_430-i)[1])
    movement_coeff2_430.append((template_430-i)[2])
# note: the second visit is compared against its own template, template_430_2
template_430_2 = np.nanmedian(trace_430_2_coeffs, axis = 0)
movement_430_2 = []
movement_coeff1_430_2 = []
movement_coeff2_430_2 = []
for i in trace_430_2_coeffs:
    movement_430_2.append((template_430_2-i)[0])
    movement_coeff1_430_2.append((template_430_2-i)[1])
    movement_coeff2_430_2.append((template_430_2-i)[2])
jit_430["T_move"] = movement_430 + movement_430_2
jit_430["T_coeff1"] = movement_coeff1_430 + movement_coeff1_430_2
jit_430["T_coeff2"] = movement_coeff2_430 + movement_coeff2_430_2
```
```python
template_750 = np.nanmedian(trace_750_coeffs, axis = 0)
movement_750 = []
movement_coeff1_750 = []
movement_coeff2_750 = []
for i in trace_750_coeffs:
    movement_750.append((template_750-i)[0])
    movement_coeff1_750.append((template_750-i)[1])
    movement_coeff2_750.append((template_750-i)[2])
jit_750["T_move"] = movement_750
jit_750["T_coeff1"] = movement_coeff1_750
jit_750["T_coeff2"] = movement_coeff2_750
```
```python
# last detrending vector: HST orbital phase
HST_period = 95.42/(60. * 24.) # HST period in days
#visit['orbital phase'] = juliet.utils.get_phases(visit['times'], HST_period, visit['times'][0])
phase_430_1 = juliet.utils.get_phases(times_start_430, HST_period, times_start_430[0])**4
phase_430_2 = juliet.utils.get_phases(times_start_430_2, HST_period, times_start_430_2[0])**4
jit_430["phase^4"] = list(phase_430_1) + list(phase_430_2)
jit_750["phase^4"] = juliet.utils.get_phases(times_start_750, HST_period, times_start_750[0])**4
# looks about right
plt.plot((times_start_430 - times_start_430[0])*24, phase_430_1, "o")
plt.plot((times_start_430_2 - times_start_430_2[0])*24, phase_430_2, "o")
plt.plot((times_start_750 - times_start_750[0])*24, jit_750["phase^4"], "o")
```
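As a rough sketch of what `juliet.utils.get_phases` computes (this assumes the common convention of phases wrapped to [-0.5, 0.5); the actual juliet implementation may differ in edge-case handling):

```python
import numpy as np

def orbital_phase(times, period, t0):
    # fractional position within each orbit relative to t0, wrapped to [-0.5, 0.5)
    phases = ((np.asarray(times, dtype=float) - t0) / period) % 1.0
    phases[phases >= 0.5] -= 1.0
    return phases
```

Raising these phases to the fourth power, as above, gives a detrending vector that is flat near the start of each HST orbit and grows steeply toward its end.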
```python
# before fitting, we just need to normalize each detrending vector to zero mean and unit standard deviation
# these are the detrending vectors to use
jit_use = ["V2_roll", "V3_roll", "Latitude", "Longitude", "RA", "DEC", "T_move", "T_coeff1", "T_coeff2", "phase^4"]
norm_jit_750 = {}
for i in jit_use:
norm = (jit_750[i]-np.nanmean(jit_750[i]))
norm_jit_750[i] = norm/np.std(norm)
```
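The zero-mean, unit-variance scaling used in this cell (and the next) can be factored into a small standalone helper, shown here just to make the convention explicit:

```python
import numpy as np

def normalize_vector(vec):
    # subtract the mean (ignoring NaNs), then scale to unit standard deviation
    vec = np.asarray(vec, dtype=float)
    centered = vec - np.nanmean(vec)
    return centered / np.std(centered)
```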
```python
jit_use = ["V2_roll", "V3_roll", "Latitude", "Longitude", "RA", "DEC", "T_move", "T_coeff1", "T_coeff2", "phase^4"]
norm_jit_430 = {}
for i in jit_use:
norm = (jit_430[i][:48]-np.nanmean(jit_430[i][:48]))
norm_jit_430[i] = norm/np.std(norm)
norm_jit_430_2 = {}
for i in jit_use:
norm = (jit_430[i][48:]-np.nanmean(jit_430[i][48:]))
norm_jit_430_2[i] = norm/np.std(norm)
```
```python
# check them out
plt.figure(figsize = (10,7))
for i in norm_jit_750:
plt.plot(norm_jit_750[i], alpha = 0.4, label = str(i))
plt.xlabel("Timestamp")
plt.ylabel(r"Offset from Mean ($\sigma$)")
plt.legend()
plt.show()
```
```python
plt.figure(figsize = (10,7))
for i in norm_jit_430:
plt.plot(norm_jit_430[i], alpha = 0.4, label = str(i))
plt.xlabel("Timestamp")
plt.ylabel(r"Offset from Mean ($\sigma$)")
plt.legend()
plt.show()
plt.figure(figsize = (10,7))
for i in norm_jit_430_2:
plt.plot(norm_jit_430_2[i], alpha = 0.4, label = str(i))
plt.xlabel("Timestamp")
plt.ylabel(r"Offset from Mean ($\sigma$)")
plt.legend()
plt.show()
```
```python
# now we need to run PCA on these detrending vectors
def normalized_pcs(norm_jit):
    # run PCA on the detrending vectors and normalize each principal component
    # to zero mean and unit standard deviation; keys are "PC0", "PC1", ...
    vectors = [norm_jit[i] for i in list(norm_jit.keys())]
    eigenvectors, eigenvalues, PC = classic_PCA(vectors)
    norm_pc = {}
    for j, pc in enumerate(PC):
        norm = pc - np.nanmean(pc)
        norm_pc["PC" + str(j)] = norm/np.std(norm)
    return norm_pc

norm_pc_430 = normalized_pcs(norm_jit_430)
norm_pc_430_2 = normalized_pcs(norm_jit_430_2)
norm_pc_750 = normalized_pcs(norm_jit_750)
```
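`classic_PCA` is defined earlier in the notebook; as a rough reference only, a minimal eigen-decomposition version returning (eigenvectors, eigenvalues, principal components) could look like this (the actual implementation may differ in conventions such as sign or normalization):

```python
import numpy as np

def classic_PCA(vectors):
    # vectors: sequence of detrending vectors, each of length n_samples
    X = np.array(vectors)
    cov = np.cov(X)                      # covariance matrix between the vectors
    evals, evecs = np.linalg.eigh(cov)   # eigh returns eigenvalues in ascending order
    order = np.argsort(evals)[::-1]      # re-sort so PC0 explains the most variance
    evals, evecs = evals[order], evecs[:, order]
    PC = evecs.T @ X                     # project the data onto the principal directions
    return evecs, evals, PC
```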
```python
# and get all of the individual lists:
# norm_pc_use<n>_<grism> holds the first n principal components, keyed "PC1".."PC<n>"
def first_n_pcs(norm_pc, n):
    names = list(norm_pc.keys())
    return {"PC" + str(i+1): norm_pc[names[i]] for i in range(n)}

for n in range(1, 10):
    globals()["norm_pc_use%d_430" % n] = first_n_pcs(norm_pc_430, n)
    globals()["norm_pc_use%d_430_2" % n] = first_n_pcs(norm_pc_430_2, n)
    globals()["norm_pc_use%d_750" % n] = first_n_pcs(norm_pc_750, n)
```
```python
# next, we get the wavelength solution
# (the loops below leave order_430 and order_750 set to the last spectral order, which is reused in later cells)
wl_sol_750 = fits.getdata("wasp-121/HST/G750L/od9ma2010/od9ma2010_x1d.fits", ext = 1)
for i, order_750 in enumerate(wl_sol_750):
plt.plot(order_750["WAVELENGTH"], order_750["FLUX"])
wl_sol_430 = fits.getdata("wasp-121/HST/G430L/od9m99010/od9m99010_x1d.fits", ext = 1)
for i, order_430 in enumerate(wl_sol_430):
plt.plot(order_430["WAVELENGTH"], order_430["FLUX"])
```
```python
# extracted spectra with wavelengths
plt.figure(figsize = (10,7))
for i in spectra_430:
plt.plot(order_430["WAVELENGTH"]/10000, i[1], alpha = 0.1, color = "tomato")
plt.plot([], [], color='tomato', label='G430L (1)')
plt.ylabel("Counts")
plt.xlabel(r"Wavelength ($\mu$m)")
plt.title("WASP-121 b")
for i in spectra_430_2:
    plt.plot(order_430["WAVELENGTH"]/10000, i[1]+14000, alpha = 0.1, color = "cornflowerblue")
plt.plot([], [], color='cornflowerblue', label='G430L (2) + offset')
for i in spectra_750:
    plt.plot(order_750["WAVELENGTH"]/10000, i[1], alpha = 0.1, color = "darkseagreen")
plt.plot([], [], color='darkseagreen', label='G750L')
plt.xlim(2930/10000, 10200/10000)
plt.legend()
#plt.savefig("w121b_extspec.png", dpi = 300, facecolor = "white")
plt.show()
```
```python
# we're going to fix limb darkening for now
# change the path here to your exotic-ld_data
from exotic_ld import StellarLimbDarkening
sld = StellarLimbDarkening(
M_H=0.13, Teff=6776, logg=4.24, ld_model='1D',
ld_data_path="/Users/natalieallen/Documents/Grad/Sing Research/exotic-ld_data")
c1_430, c2_430 = sld.compute_quadratic_ld_coeffs(
wavelength_range=np.array([order_430["WAVELENGTH"][0], order_430["WAVELENGTH"][-1]]),
mode='HST_STIS_G430L')
c1_750, c2_750 = sld.compute_quadratic_ld_coeffs(
wavelength_range=np.array([order_750["WAVELENGTH"][0], order_750["WAVELENGTH"][-1]]),
mode='HST_STIS_G750L')
```
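For reference, the coefficients returned above parameterize the quadratic limb-darkening law, I(μ)/I(1) = 1 − c1·(1 − μ) − c2·(1 − μ)², where μ is the cosine of the angle between the line of sight and the stellar surface normal. A minimal sketch:

```python
import numpy as np

def quadratic_ld(mu, c1, c2):
    # quadratic limb-darkening law: intensity relative to disk center (mu = 1)
    return 1.0 - c1 * (1.0 - mu) - c2 * (1.0 - mu) ** 2
```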
```python
# to read into the white_light_fit functions, we just need to create a dictionary like this -
# give a one-element list [value] for a parameter that should be fixed, and a two-element
# list [value, range] for one that should be fit
# the "evans" params use the limb darkening from the Evans et al. 2018 WASP-121 paper for testing
params_430 = {"P": [1.274925],"t0": [2457686.2441857136], "p": [0.12, 0.05], "b": [0.1], "a": [3.823], "rho": [612], "ecc": [0], "omega": [90], "c1": [c1_430], "c2": [c2_430]}
params_430_evans = {"P": [1.274925],"t0": [2457686.2441857136], "p": [0.12, 0.05], "b": [0.1], "a": [3.823], "rho": [612], "ecc": [0], "omega": [90], "c1": [0.5], "c2": [0.11]}
```
```python
params_750 = {"P": [1.274925],"t0": [2457686.2441857136], "p": [0.12, 0.05], "b": [0.1], "a": [3.823], "rho": [612], "ecc": [0], "omega": [90], "c1": [c1_750], "c2": [c2_750]}
params_750_evans = {"P": [1.274925],"t0": [2457686.2441857136], "p": [0.12, 0.1], "b": [0.08], "a": [3.88], "rho": [612], "ecc": [0], "omega": [90], "c1": [0.21], "c2": [0.25]}
```
```python
# now for a bunch of light curve fitting
# briefly, there are three main options for the systematics treatment:
# a GP detrending method (using juliet), a linear detrending method (also using juliet),
# and a quick Levenberg-Marquardt linear fit (mostly for testing purposes)
# the juliet fits can in principle use any of the available samplers, although right now only the
# default pymultinest and dynesty are set up
# if you use the gp method, you need to provide a name for an output folder with juliet_name
gp_fit = white_light_fit(params_430, times_start_430, lc_430, lc_err_430, norm_pc_430, sys_method = "gp", juliet_name = "wasp-121_430_1", gp_kernel = "Matern", sampler = "dynamic_dynesty")
# this outputs a juliet "results" object - we'll show some ways to use it below
```
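As a rough illustration of what the linear systematics option does (the actual `white_light_fit` is defined earlier in the notebook and jointly fits the transit model), a linear model represents the flux as a least-squares combination of the detrending vectors plus a constant offset:

```python
import numpy as np

def linear_systematics_model(flux, vectors):
    # design matrix: a constant column plus one column per detrending vector
    A = np.column_stack([np.ones_like(flux)] + [np.asarray(v, dtype=float) for v in vectors])
    # least-squares coefficients, then the best-fit systematics model
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(flux, dtype=float), rcond=None)
    return A @ coeffs
```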
```python
# here are the results for that fit
for i in gp_fit.posteriors['posterior_samples'].keys():
print(i, np.median(gp_fit.posteriors['posterior_samples'][i]))
```
```python
# let's plot the results of this fit using the juliet object
plt.figure(figsize = (10,7))
t_final = np.linspace(times_start_430[0], times_start_430[-1], 1000)
transit_plus_GP_model = gp_fit.lc.evaluate('STIS')
transit_model = gp_fit.lc.evaluate('STIS', evaluate_transit = True)
transit_model_resamp, error68_up, error68_down = gp_fit.lc.evaluate('STIS', evaluate_transit = True, t = t_final, return_err = True)
offset = 0
plt.scatter(times_start_430, (lc_430/np.median(lc_430[:10])), label = "raw data", color = "cornflowerblue", alpha = 0.7)
plt.scatter(times_start_430, (lc_430/np.median(lc_430[:10])) - (transit_plus_GP_model - transit_model) + offset,alpha=1, label = "gp detrending", color = "orchid")
plt.plot(t_final, transit_model_resamp + offset, color='tomato',zorder=10, label = "gp transit model")
plt.fill_between(t_final, error68_up, error68_down, color = "tomato", alpha = 0.2, label = "68% error")
plt.legend()
plt.ylabel("Relative Flux")
plt.xlabel("Time (BJD-TDB)")
plt.show()
```
```python
# second 430 transit
gp_fit_2 = white_light_fit(params_430, times_start_430_2, lc_430_2, lc_err_430_2, norm_pc_430_2, sys_method = "gp", juliet_name = "wasp-121_430_2", gp_kernel = "Matern", sampler = "dynamic_dynesty")
```
```python
for i in gp_fit_2.posteriors['posterior_samples'].keys():
print(i, np.median(gp_fit_2.posteriors['posterior_samples'][i]))
```
```python
plt.figure(figsize = (10,7))
t_final = np.linspace(times_start_430_2[0], times_start_430_2[-1], 1000)
transit_plus_GP_model = gp_fit_2.lc.evaluate('STIS')
transit_model = gp_fit_2.lc.evaluate('STIS', evaluate_transit = True)
transit_model_resamp, error68_up, error68_down = gp_fit_2.lc.evaluate('STIS', evaluate_transit = True, t = t_final, return_err = True)
offset = 0
plt.scatter(times_start_430_2, (lc_430_2/np.median(lc_430_2[:10])), color = "cornflowerblue", alpha = 0.7, label = "raw data")
plt.scatter(times_start_430_2, (lc_430_2/np.median(lc_430_2[:10])) - (transit_plus_GP_model - transit_model) + offset,alpha=1, label = "gp detrending", color = "orchid")
plt.plot(t_final, transit_model_resamp + offset, color='tomato',zorder=10, label = "gp transit model")
plt.fill_between(t_final, error68_up, error68_down, color = "tomato", alpha = 0.2, label = "68% error")
plt.legend()
plt.ylabel("Relative Flux")
plt.xlabel("Time (BJD-TDB)")
plt.show()
```
```python
# 750 transit
gp_fit_750 = white_light_fit(params_750, times_start_750, lc_750, lc_err_750, norm_pc_750, sys_method = "gp", juliet_name = "wasp-121_750", gp_kernel = "Matern", sampler = "dynamic_dynesty")
```
```python
for i in gp_fit_750.posteriors['posterior_samples'].keys():
print(i, np.median(gp_fit_750.posteriors['posterior_samples'][i]))
```
```python
plt.figure(figsize = (10,7))
t_final = np.linspace(times_start_750[0], times_start_750[-1], 1000)
transit_plus_GP_model = gp_fit_750.lc.evaluate('STIS')
transit_model = gp_fit_750.lc.evaluate('STIS', evaluate_transit = True)
transit_model_resamp, error68_up, error68_down = gp_fit_750.lc.evaluate('STIS', evaluate_transit = True, t = t_final, return_err = True)
offset = 0
plt.scatter(times_start_750, (lc_750/np.median(lc_750[:10])), color = "cornflowerblue", alpha = 0.7, label = "raw data")
plt.scatter(times_start_750, (lc_750/np.median(lc_750[:10])) - (transit_plus_GP_model - transit_model) + offset,alpha=1, label = "gp detrending", color = "orchid")
plt.plot(t_final, transit_model_resamp + offset, color='tomato',zorder=10, label = "gp transit model")
plt.fill_between(t_final, error68_up, error68_down, color = "tomato", alpha = 0.2, label = "68% error")
plt.legend()
plt.ylabel("Relative Flux")
plt.xlabel("Time (BJD-TDB)")
plt.show()
```
```python
# now that we know our light curve fits are working correctly, we'll move to our spectroscopic light curve testing
# this is used to determine the optimal detrending parameters
# spectroscopic fit params dictionary -- only difference is how you pass in the limb darkening info
params_430_spec = {"P": [1.274925],"t0": [2457686.2441857136], "p": [0.12, 0.05], "b": [0.1], "a": [3.823], "rho": [612], "ecc": [0], "omega": [90], "c1": [c1_430], "c2": [c2_430]}
params_750_spec = {"P": [1.274925],"t0": [2457686.2441857136], "p": [0.12, 0.05], "b": [0.1], "a": [3.823], "rho": [612], "ecc": [0], "omega": [90], "c1": [c1_750], "c2": [c2_750]}
```
```python
# the only difference for the spectroscopic light curve fitting function is that you pass in the
# wavelengths and the spectra themselves rather than the light curve, plus the binning scheme (a file
# with the beginning and end wavelength of each bin specified, see examples) and "sld", the limb darkening
# object defined above
bins_joint_dd_lin_p, rp_joint_dd_lin_p, rp_e_joint_dd_lin_p, fits_linear_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_jit_430,\
times_start_430_2, spectra_430_2, norm_jit_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_430_spec_d_dynesty_norm_fixed_linear_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
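As a hypothetical sketch of the binning step inside these fits (assuming a simple list of bin start/end wavelengths; the actual bins-file format is described in the examples), each spectroscopic light curve point is the flux summed over one wavelength bin:

```python
import numpy as np

def bin_spectrum(wavelengths, flux, bin_edges):
    # sum the flux falling within each [start, end) wavelength bin
    wavelengths = np.asarray(wavelengths, dtype=float)
    flux = np.asarray(flux, dtype=float)
    binned = []
    for start, end in bin_edges:
        mask = (wavelengths >= start) & (wavelengths < end)
        binned.append(np.nansum(flux[mask]))
    return np.array(binned)
```

Repeating this over every exposure yields one light curve per bin, which is then fit exactly like the white light curve.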
```python
bins_joint_dd_lin_pc_p, rp_joint_dd_lin_pc_p, rp_e_joint_dd_lin_pc_p, fits_lin_pc_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_430,\
times_start_430_2, spectra_430_2, norm_pc_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_430_spec_d_dynesty_norm_fixed_linear_pc_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_joint_dd_lin_pc1_p, rp_joint_dd_lin_pc1_p, rp_e_joint_dd_lin_pc1_p, fits_lin_pc1_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use1_430,\
times_start_430_2, spectra_430_2, norm_pc_use1_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_430_spec_d_dynesty_norm_fixed_linear_pc1_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_joint_dd_lin_pc2_p, rp_joint_dd_lin_pc2_p, rp_e_joint_dd_lin_pc2_p, fits_lin_pc2_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use2_430,\
times_start_430_2, spectra_430_2, norm_pc_use2_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_430_spec_d_dynesty_norm_fixed_linear_pc2_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_joint_dd_lin_pc3_p, rp_joint_dd_lin_pc3_p, rp_e_joint_dd_lin_pc3_p, fits_lin_pc3_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use3_430,\
times_start_430_2, spectra_430_2, norm_pc_use3_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_430_spec_d_dynesty_norm_fixed_linear_pc3_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_joint_dd_lin_pc4_p, rp_joint_dd_lin_pc4_p, rp_e_joint_dd_lin_pc4_p, fits_lin_pc4_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use4_430,\
times_start_430_2, spectra_430_2, norm_pc_use4_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_430_spec_d_dynesty_norm_fixed_linear_pc4_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_joint_dd_lin_pc5_p, rp_joint_dd_lin_pc5_p, rp_e_joint_dd_lin_pc5_p, fits_lin_pc5_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use5_430,\
times_start_430_2, spectra_430_2, norm_pc_use5_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_430_spec_d_dynesty_norm_fixed_linear_pc5_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_joint_dd_lin_pc6_p, rp_joint_dd_lin_pc6_p, rp_e_joint_dd_lin_pc6_p, fits_lin_pc6_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use6_430,\
times_start_430_2, spectra_430_2, norm_pc_use6_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_430_spec_d_dynesty_norm_fixed_linear_pc6_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_joint_dd_lin_pc7_p, rp_joint_dd_lin_pc7_p, rp_e_joint_dd_lin_pc7_p, fits_lin_pc7_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use7_430,\
times_start_430_2, spectra_430_2, norm_pc_use7_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_430_spec_d_dynesty_norm_fixed_linear_pc7_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_joint_dd_lin_pc8_p, rp_joint_dd_lin_pc8_p, rp_e_joint_dd_lin_pc8_p, fits_lin_pc8_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use8_430,\
times_start_430_2, spectra_430_2, norm_pc_use8_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_430_spec_d_dynesty_norm_fixed_linear_pc8_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_joint_dd_lin_pc9_p, rp_joint_dd_lin_pc9_p, rp_e_joint_dd_lin_pc9_p, fits_lin_pc9_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use9_430,\
times_start_430_2, spectra_430_2, norm_pc_use9_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_430_spec_d_dynesty_norm_fixed_linear_pc9_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_lin_dd_p, rp_7_lin_dd_p, rp_e_7_lin_dd_p, fits_7_lin_dd_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_jit_750,\
"wasp-121_g750_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_750_dd_linear_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_lin_pc_p, rp_7_dd_lin_pc_p, rp_e_7_dd_lin_pc_p, fits_7_lin_dd_pc_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_750,\
"wasp-121_g750_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_750_dd_pc_linear_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_lin_pc1_p, rp_7_dd_lin_pc1_p, rp_e_7_dd_lin_pc1_p, fits_7_lin_dd_pc1_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use1_750,\
"wasp-121_g750_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_750_dd_pc1_linear_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_lin_pc2_p, rp_7_dd_lin_pc2_p, rp_e_7_dd_lin_pc2_p, fits_7_lin_dd_pc2_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use2_750,\
"wasp-121_g750_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_750_dd_pc2_linear_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_lin_pc3_p, rp_7_dd_lin_pc3_p, rp_e_7_dd_lin_pc3_p, fits_7_lin_dd_pc3_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use3_750,\
"wasp-121_g750_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_750_dd_pc3_linear_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_lin_pc4_p, rp_7_dd_lin_pc4_p, rp_e_7_dd_lin_pc4_p, fits_7_lin_dd_pc4_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use4_750,\
"wasp-121_g750_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_750_dd_pc4_linear_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_lin_pc5_p, rp_7_dd_lin_pc5_p, rp_e_7_dd_lin_pc5_p, fits_7_lin_dd_pc5_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use5_750,\
"wasp-121_g750_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_750_dd_pc5_linear_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_lin_pc6_p, rp_7_dd_lin_pc6_p, rp_e_7_dd_lin_pc6_p, fits_7_lin_dd_pc6_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use6_750,\
"wasp-121_g750_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_750_dd_pc6_linear_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_lin_pc7_p, rp_7_dd_lin_pc7_p, rp_e_7_dd_lin_pc7_p, fits_7_lin_dd_pc7_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use7_750,\
"wasp-121_g750_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_750_dd_pc7_linear_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_lin_pc8_p, rp_7_dd_lin_pc8_p, rp_e_7_dd_lin_pc8_p, fits_7_lin_dd_pc8_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use8_750,\
"wasp-121_g750_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_750_dd_pc8_linear_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_lin_pc9_p, rp_7_dd_lin_pc9_p, rp_e_7_dd_lin_pc9_p, fits_7_lin_dd_pc9_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use9_750,\
"wasp-121_g750_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_750_dd_pc9_linear_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_joint_dd_p, rp_joint_dd_p, rp_e_joint_dd_p, fits_jitter_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_jit_430,\
times_start_430_2, spectra_430_2, norm_jit_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_430_spec_d_dynesty_norm_fixed_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_joint_dd_pc_p, rp_joint_dd_pc_p, rp_e_joint_dd_pc_p, fits_pc_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_430,\
times_start_430_2, spectra_430_2, norm_pc_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_430_spec_d_dynesty_norm_pc_fixed_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_joint_dd_pc1_p, rp_joint_dd_pc1_p, rp_e_joint_dd_pc1_p, fits_pc1_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use1_430,\
times_start_430_2, spectra_430_2, norm_pc_use1_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_430_spec_d_dynesty_norm_pc1_fixed2_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_joint_dd_pc2_p, rp_joint_dd_pc2_p, rp_e_joint_dd_pc2_p, fits_pc2_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use2_430,\
times_start_430_2, spectra_430_2, norm_pc_use2_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_430_spec_d_dynesty_norm_pc2_fixed2_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_joint_dd_pc3_p, rp_joint_dd_pc3_p, rp_e_joint_dd_pc3_p, fits_pc3_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use3_430,\
times_start_430_2, spectra_430_2, norm_pc_use3_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_430_spec_d_dynesty_norm_pc3_fixed2_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_joint_dd_pc4_p, rp_joint_dd_pc4_p, rp_e_joint_dd_pc4_p, fits_pc4_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use4_430,\
times_start_430_2, spectra_430_2, norm_pc_use4_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_430_spec_d_dynesty_norm_pc4_fixed2_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_joint_dd_pc5_p, rp_joint_dd_pc5_p, rp_e_joint_dd_pc5_p, fits_pc5_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use5_430,\
times_start_430_2, spectra_430_2, norm_pc_use5_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_430_spec_d_dynesty_norm_pc5_fixed2_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_joint_dd_pc6_p, rp_joint_dd_pc6_p, rp_e_joint_dd_pc6_p, fits_pc6_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use6_430,\
times_start_430_2, spectra_430_2, norm_pc_use6_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_430_spec_d_dynesty_norm_pc6_fixed2_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_joint_dd_pc7_p, rp_joint_dd_pc7_p, rp_e_joint_dd_pc7_p, fits_pc7_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use7_430,\
times_start_430_2, spectra_430_2, norm_pc_use7_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_430_spec_d_dynesty_norm_pc7_fixed2_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_joint_dd_pc8_p, rp_joint_dd_pc8_p, rp_e_joint_dd_pc8_p, fits_pc8_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use8_430,\
times_start_430_2, spectra_430_2, norm_pc_use8_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_430_spec_d_dynesty_norm_pc8_fixed2_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_joint_dd_pc9_p, rp_joint_dd_pc9_p, rp_e_joint_dd_pc9_p, fits_pc9_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use9_430,\
times_start_430_2, spectra_430_2, norm_pc_use9_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_430_spec_d_dynesty_norm_pc9_fixed2_phase",\
mode = 'HST_STIS_G430L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_p, rp_7_dd_p, rp_e_7_dd_p, fits_7_dd_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_jit_750,\
"wasp-121_g750_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_750_dd_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_pc_p, rp_7_dd_pc_p, rp_e_7_dd_pc_p, fits_7_dd_pc_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_750,\
"wasp-121_g750_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_750_dd_pc_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_pc1_p, rp_7_dd_pc1_p, rp_e_7_dd_pc1_p, fits_7_dd_pc1_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use1_750,\
"wasp-121_g750_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_750_dd_pc1_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_pc2_p, rp_7_dd_pc2_p, rp_e_7_dd_pc2_p, fits_7_dd_pc2_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use2_750,\
"wasp-121_g750_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_750_dd_pc2_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_pc3_p, rp_7_dd_pc3_p, rp_e_7_dd_pc3_p, fits_7_dd_pc3_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use3_750,\
"wasp-121_g750_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_750_dd_pc3_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_pc4_p, rp_7_dd_pc4_p, rp_e_7_dd_pc4_p, fits_7_dd_pc4_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use4_750,\
"wasp-121_g750_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_750_dd_pc4_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_pc5_p, rp_7_dd_pc5_p, rp_e_7_dd_pc5_p, fits_7_dd_pc5_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use5_750,\
"wasp-121_g750_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_750_dd_pc5_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_pc6_p, rp_7_dd_pc6_p, rp_e_7_dd_pc6_p, fits_7_dd_pc6_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use6_750,\
"wasp-121_g750_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_750_dd_pc6_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_pc7_p, rp_7_dd_pc7_p, rp_e_7_dd_pc7_p, fits_7_dd_pc7_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use7_750,\
"wasp-121_g750_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_750_dd_pc7_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_pc8_p, rp_7_dd_pc8_p, rp_e_7_dd_pc8_p, fits_7_dd_pc8_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use8_750,\
"wasp-121_g750_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_750_dd_pc8_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
```python
bins_7_dd_pc9_p, rp_7_dd_pc9_p, rp_e_7_dd_pc9_p, fits_7_dd_pc9_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use9_750,\
"wasp-121_g750_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_750_dd_pc9_phase",\
mode = 'HST_STIS_G750L', plot = False, sampler = "dynamic_dynesty")
```
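The nine cells above differ only in which set of principal components is passed in. A loop over a dict of detrending vectors would make the sweep shorter and less error-prone. This is a hypothetical sketch: `run_pc_fits` and the `fit_func` callable are not defined elsewhere in this notebook, and `fit_func` stands in for a partially applied `spectroscopic_lightcurve_fit` or `joint_spectroscopic_lightcurve_fit`:

```python
# Hypothetical helper (not part of this notebook): drive the per-PC fits
# from one loop instead of nine copy-pasted cells.
def run_pc_fits(fit_func, vectors, name_template):
    """Run one fit per detrending-vector set and collect the outputs.

    fit_func      -- callable(vec, juliet_name=...) returning (bins, rp, rp_e, fits)
    vectors       -- dict mapping a label (e.g. "pc5") to its detrending matrix
    name_template -- format string for the juliet run name, e.g. "wasp-121_750_dd_{}_phase"
    """
    results = {}
    for label, vec in vectors.items():
        bins, rp, rp_e, fits = fit_func(vec, juliet_name=name_template.format(label))
        results[label] = {"bins": bins, "rp": rp, "rp_e": rp_e, "fits": fits}
    return results
```

With `fit_func` wrapping the fixed G750L arguments (`params_750_spec`, `order_750["WAVELENGTH"]`, `times_start_750`, `spectra_750`, `sld`, etc.) and `vectors = {"pc1": norm_pc_use1_750, ..., "pc9": norm_pc_use9_750}`, the nine cells collapse to a single call.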
```python
plt.figure(figsize = (10,7))
colormap = plt.cm.nipy_spectral
colors = colormap(np.linspace(0, 1, 25))
plt.errorbar(np.array(bins_joint_dd_pc1_p)/10000, rp_joint_dd_pc1_p, yerr = np.array(rp_e_joint_dd_pc1_p).T, fmt = ".", label = "1 PC, GP", color = colors[1])
plt.errorbar(np.array(bins_joint_dd_pc2_p)/10000, rp_joint_dd_pc2_p, yerr = np.array(rp_e_joint_dd_pc2_p).T, fmt = ".", label = "2 PC, GP", color = colors[2])
plt.errorbar(np.array(bins_joint_dd_pc3_p)/10000, rp_joint_dd_pc3_p, yerr = np.array(rp_e_joint_dd_pc3_p).T, fmt = ".", label = "3 PC, GP", color = colors[3])
plt.errorbar(np.array(bins_joint_dd_pc4_p)/10000, rp_joint_dd_pc4_p, yerr = np.array(rp_e_joint_dd_pc4_p).T, fmt = ".", label = "4 PC, GP", color = colors[4])
plt.errorbar(np.array(bins_joint_dd_pc5_p)/10000, rp_joint_dd_pc5_p, yerr = np.array(rp_e_joint_dd_pc5_p).T, fmt = ".", label = "5 PC, GP", color = colors[5])
plt.errorbar(np.array(bins_joint_dd_pc6_p)/10000, rp_joint_dd_pc6_p, yerr = np.array(rp_e_joint_dd_pc6_p).T, fmt = ".", label = "6 PC, GP", color = colors[6])
plt.errorbar(np.array(bins_joint_dd_pc7_p)/10000, rp_joint_dd_pc7_p, yerr = np.array(rp_e_joint_dd_pc7_p).T, fmt = ".", label = "7 PC, GP", color = colors[7])
plt.errorbar(np.array(bins_joint_dd_pc8_p)/10000, rp_joint_dd_pc8_p, yerr = np.array(rp_e_joint_dd_pc8_p).T, fmt = ".", label = "8 PC, GP", color = colors[8])
plt.errorbar(np.array(bins_joint_dd_pc9_p)/10000, rp_joint_dd_pc9_p, yerr = np.array(rp_e_joint_dd_pc9_p).T, fmt = ".", label = "9 PC, GP", color = colors[9])
plt.errorbar(np.array(bins_joint_dd_pc_p)/10000, rp_joint_dd_pc_p, yerr = np.array(rp_e_joint_dd_pc_p).T, fmt = ".", label = "10/All PCs, GP", color = colors[10])
plt.errorbar(np.array(bins_joint_dd_p)/10000, rp_joint_dd_p, yerr = np.array(rp_e_joint_dd_p).T, fmt = ".", label = "Jitter Vectors, GP", color = colors[0])
plt.errorbar(np.array(bins_joint_dd_lin_pc1_p)/10000, rp_joint_dd_lin_pc1_p, yerr = np.array(rp_e_joint_dd_lin_pc1_p).T, fmt = ".", label = "1 PC, Linear", color = colors[13])
plt.errorbar(np.array(bins_joint_dd_lin_pc2_p)/10000, rp_joint_dd_lin_pc2_p, yerr = np.array(rp_e_joint_dd_lin_pc2_p).T, fmt = ".", label = "2 PC, Linear", color = colors[14])
plt.errorbar(np.array(bins_joint_dd_lin_pc3_p)/10000, rp_joint_dd_lin_pc3_p, yerr = np.array(rp_e_joint_dd_lin_pc3_p).T, fmt = ".", label = "3 PC, Linear", color = colors[15])
plt.errorbar(np.array(bins_joint_dd_lin_pc4_p)/10000, rp_joint_dd_lin_pc4_p, yerr = np.array(rp_e_joint_dd_lin_pc4_p).T, fmt = ".", label = "4 PC, Linear", color = colors[16])
plt.errorbar(np.array(bins_joint_dd_lin_pc5_p)/10000, rp_joint_dd_lin_pc5_p, yerr = np.array(rp_e_joint_dd_lin_pc5_p).T, fmt = ".", label = "5 PC, Linear", color = colors[17])
plt.errorbar(np.array(bins_joint_dd_lin_pc6_p)/10000, rp_joint_dd_lin_pc6_p, yerr = np.array(rp_e_joint_dd_lin_pc6_p).T, fmt = ".", label = "6 PC, Linear", color = colors[18])
plt.errorbar(np.array(bins_joint_dd_lin_pc7_p)/10000, rp_joint_dd_lin_pc7_p, yerr = np.array(rp_e_joint_dd_lin_pc7_p).T, fmt = ".", label = "7 PC, Linear", color = colors[19])
plt.errorbar(np.array(bins_joint_dd_lin_pc8_p)/10000, rp_joint_dd_lin_pc8_p, yerr = np.array(rp_e_joint_dd_lin_pc8_p).T, fmt = ".", label = "8 PC, Linear", color = colors[20])
plt.errorbar(np.array(bins_joint_dd_lin_pc9_p)/10000, rp_joint_dd_lin_pc9_p, yerr = np.array(rp_e_joint_dd_lin_pc9_p).T, fmt = ".", label = "9 PC, Linear", color = colors[21])
plt.errorbar(np.array(bins_joint_dd_lin_pc_p)/10000, rp_joint_dd_lin_pc_p, yerr = np.array(rp_e_joint_dd_lin_pc_p).T, fmt = ".", label = "10/All PCs, Linear", color = colors[22])
plt.errorbar(np.array(bins_joint_dd_lin_p)/10000, rp_joint_dd_lin_p, yerr = np.array(rp_e_joint_dd_lin_p).T, fmt = ".", label = "Jitter Vectors, Linear", color = colors[24])
plt.errorbar(np.array(bins_7_dd_pc1_p)/10000, rp_7_dd_pc1_p, yerr = np.array(rp_e_7_dd_pc1_p).T, fmt = ".", color = colors[1])
plt.errorbar(np.array(bins_7_dd_pc2_p)/10000, rp_7_dd_pc2_p, yerr = np.array(rp_e_7_dd_pc2_p).T, fmt = ".", color = colors[2])
plt.errorbar(np.array(bins_7_dd_pc3_p)/10000, rp_7_dd_pc3_p, yerr = np.array(rp_e_7_dd_pc3_p).T, fmt = ".", color = colors[3])
plt.errorbar(np.array(bins_7_dd_pc4_p)/10000, rp_7_dd_pc4_p, yerr = np.array(rp_e_7_dd_pc4_p).T, fmt = ".", color = colors[4])
plt.errorbar(np.array(bins_7_dd_pc5_p)/10000, rp_7_dd_pc5_p, yerr = np.array(rp_e_7_dd_pc5_p).T, fmt = ".", color = colors[5])
plt.errorbar(np.array(bins_7_dd_pc6_p)/10000, rp_7_dd_pc6_p, yerr = np.array(rp_e_7_dd_pc6_p).T, fmt = ".", color = colors[6])
plt.errorbar(np.array(bins_7_dd_pc7_p)/10000, rp_7_dd_pc7_p, yerr = np.array(rp_e_7_dd_pc7_p).T, fmt = ".", color = colors[7])
plt.errorbar(np.array(bins_7_dd_pc8_p)/10000, rp_7_dd_pc8_p, yerr = np.array(rp_e_7_dd_pc8_p).T, fmt = ".", color = colors[8])
plt.errorbar(np.array(bins_7_dd_pc9_p)/10000, rp_7_dd_pc9_p, yerr = np.array(rp_e_7_dd_pc9_p).T, fmt = ".", color = colors[9])
plt.errorbar(np.array(bins_7_dd_pc_p)/10000, rp_7_dd_pc_p, yerr = np.array(rp_e_7_dd_pc_p).T, fmt = ".", color = colors[10])
plt.errorbar(np.array(bins_7_dd_p)/10000, rp_7_dd_p, yerr = np.array(rp_e_7_dd_p).T, fmt = ".", color = colors[0])
plt.errorbar(np.array(bins_7_dd_lin_pc1_p)/10000, rp_7_dd_lin_pc1_p, yerr = np.array(rp_e_7_dd_lin_pc1_p).T, fmt = ".", color = colors[13])
plt.errorbar(np.array(bins_7_dd_lin_pc2_p)/10000, rp_7_dd_lin_pc2_p, yerr = np.array(rp_e_7_dd_lin_pc2_p).T, fmt = ".", color = colors[14])
plt.errorbar(np.array(bins_7_dd_lin_pc3_p)/10000, rp_7_dd_lin_pc3_p, yerr = np.array(rp_e_7_dd_lin_pc3_p).T, fmt = ".", color = colors[15])
plt.errorbar(np.array(bins_7_dd_lin_pc4_p)/10000, rp_7_dd_lin_pc4_p, yerr = np.array(rp_e_7_dd_lin_pc4_p).T, fmt = ".", color = colors[16])
plt.errorbar(np.array(bins_7_dd_lin_pc5_p)/10000, rp_7_dd_lin_pc5_p, yerr = np.array(rp_e_7_dd_lin_pc5_p).T, fmt = ".", color = colors[17])
plt.errorbar(np.array(bins_7_dd_lin_pc6_p)/10000, rp_7_dd_lin_pc6_p, yerr = np.array(rp_e_7_dd_lin_pc6_p).T, fmt = ".", color = colors[18])
plt.errorbar(np.array(bins_7_dd_lin_pc7_p)/10000, rp_7_dd_lin_pc7_p, yerr = np.array(rp_e_7_dd_lin_pc7_p).T, fmt = ".", color = colors[19])
plt.errorbar(np.array(bins_7_dd_lin_pc8_p)/10000, rp_7_dd_lin_pc8_p, yerr = np.array(rp_e_7_dd_lin_pc8_p).T, fmt = ".", color = colors[20])
plt.errorbar(np.array(bins_7_dd_lin_pc9_p)/10000, rp_7_dd_lin_pc9_p, yerr = np.array(rp_e_7_dd_lin_pc9_p).T, fmt = ".", color = colors[21])
plt.errorbar(np.array(bins_7_dd_lin_pc_p)/10000, rp_7_dd_lin_pc_p, yerr = np.array(rp_e_7_dd_lin_pc_p).T, fmt = ".", color = colors[22])
plt.errorbar(np.array(bins_7_lin_dd_p)/10000, rp_7_lin_dd_p, yerr = np.array(rp_e_7_lin_dd_p).T, fmt = ".", color = colors[24])
plt.ylim(0.11, 0.131)
p = plt.legend(loc=(1.05,0.025), prop={'size':12})
plt.ylabel(r"R$_p$/R$_s$")
plt.xlabel(r"Wavelength ($\mu$m)")
plt.title("WASP-121 b", fontsize=20)
#plt.savefig("w121b.png", dpi = 300, facecolor = "white", bbox_extra_artists=(p,), bbox_inches='tight')
```
```python
plt.figure(figsize = (10,7))
colormap = plt.cm.nipy_spectral
colors = colormap(np.linspace(0, 1, 25))
plt.errorbar(np.array(bins_joint_dd_pc1_p)/10000, rp_joint_dd_pc1_p-np.nanmedian(rp_joint_dd_pc1_p), yerr = np.array(rp_e_joint_dd_pc1_p).T, fmt = ".", label = "1 PC, GP", color = colors[1])
plt.errorbar(np.array(bins_joint_dd_pc2_p)/10000, rp_joint_dd_pc2_p-np.nanmedian(rp_joint_dd_pc2_p), yerr = np.array(rp_e_joint_dd_pc2_p).T, fmt = ".", label = "2 PC, GP", color = colors[2])
plt.errorbar(np.array(bins_joint_dd_pc3_p)/10000, rp_joint_dd_pc3_p-np.nanmedian(rp_joint_dd_pc3_p), yerr = np.array(rp_e_joint_dd_pc3_p).T, fmt = ".", label = "3 PC, GP", color = colors[3])
plt.errorbar(np.array(bins_joint_dd_pc4_p)/10000, rp_joint_dd_pc4_p-np.nanmedian(rp_joint_dd_pc4_p), yerr = np.array(rp_e_joint_dd_pc4_p).T, fmt = ".", label = "4 PC, GP", color = colors[4])
plt.errorbar(np.array(bins_joint_dd_pc5_p)/10000, rp_joint_dd_pc5_p-np.nanmedian(rp_joint_dd_pc5_p), yerr = np.array(rp_e_joint_dd_pc5_p).T, fmt = ".", label = "5 PC, GP", color = colors[5])
plt.errorbar(np.array(bins_joint_dd_pc6_p)/10000, rp_joint_dd_pc6_p-np.nanmedian(rp_joint_dd_pc6_p), yerr = np.array(rp_e_joint_dd_pc6_p).T, fmt = ".", label = "6 PC, GP", color = colors[6])
plt.errorbar(np.array(bins_joint_dd_pc7_p)/10000, rp_joint_dd_pc7_p-np.nanmedian(rp_joint_dd_pc7_p), yerr = np.array(rp_e_joint_dd_pc7_p).T, fmt = ".", label = "7 PC, GP", color = colors[7])
plt.errorbar(np.array(bins_joint_dd_pc8_p)/10000, rp_joint_dd_pc8_p-np.nanmedian(rp_joint_dd_pc8_p), yerr = np.array(rp_e_joint_dd_pc8_p).T, fmt = ".", label = "8 PC, GP", color = colors[8])
plt.errorbar(np.array(bins_joint_dd_pc9_p)/10000, rp_joint_dd_pc9_p-np.nanmedian(rp_joint_dd_pc9_p), yerr = np.array(rp_e_joint_dd_pc9_p).T, fmt = ".", label = "9 PC, GP", color = colors[9])
plt.errorbar(np.array(bins_joint_dd_pc_p)/10000, rp_joint_dd_pc_p-np.nanmedian(rp_joint_dd_pc_p), yerr = np.array(rp_e_joint_dd_pc_p).T, fmt = ".", label = "10/All PCs, GP", color = colors[10])
plt.errorbar(np.array(bins_joint_dd_p)/10000, rp_joint_dd_p-np.nanmedian(rp_joint_dd_p), yerr = np.array(rp_e_joint_dd_p).T, fmt = ".", label = "Jitter Vectors, GP", color = colors[0])
plt.errorbar(np.array(bins_joint_dd_lin_pc1_p)/10000, rp_joint_dd_lin_pc1_p-np.nanmedian(rp_joint_dd_lin_pc1_p), yerr = np.array(rp_e_joint_dd_lin_pc1_p).T, fmt = ".", label = "1 PC, Linear", color = colors[13])
plt.errorbar(np.array(bins_joint_dd_lin_pc2_p)/10000, rp_joint_dd_lin_pc2_p-np.nanmedian(rp_joint_dd_lin_pc2_p), yerr = np.array(rp_e_joint_dd_lin_pc2_p).T, fmt = ".", label = "2 PC, Linear", color = colors[14])
plt.errorbar(np.array(bins_joint_dd_lin_pc3_p)/10000, rp_joint_dd_lin_pc3_p-np.nanmedian(rp_joint_dd_lin_pc3_p), yerr = np.array(rp_e_joint_dd_lin_pc3_p).T, fmt = ".", label = "3 PC, Linear", color = colors[15])
plt.errorbar(np.array(bins_joint_dd_lin_pc4_p)/10000, rp_joint_dd_lin_pc4_p-np.nanmedian(rp_joint_dd_lin_pc4_p), yerr = np.array(rp_e_joint_dd_lin_pc4_p).T, fmt = ".", label = "4 PC, Linear", color = colors[16])
plt.errorbar(np.array(bins_joint_dd_lin_pc5_p)/10000, rp_joint_dd_lin_pc5_p-np.nanmedian(rp_joint_dd_lin_pc5_p), yerr = np.array(rp_e_joint_dd_lin_pc5_p).T, fmt = ".", label = "5 PC, Linear", color = colors[17])
plt.errorbar(np.array(bins_joint_dd_lin_pc6_p)/10000, rp_joint_dd_lin_pc6_p-np.nanmedian(rp_joint_dd_lin_pc6_p), yerr = np.array(rp_e_joint_dd_lin_pc6_p).T, fmt = ".", label = "6 PC, Linear", color = colors[18])
plt.errorbar(np.array(bins_joint_dd_lin_pc7_p)/10000, rp_joint_dd_lin_pc7_p-np.nanmedian(rp_joint_dd_lin_pc7_p), yerr = np.array(rp_e_joint_dd_lin_pc7_p).T, fmt = ".", label = "7 PC, Linear", color = colors[19])
plt.errorbar(np.array(bins_joint_dd_lin_pc8_p)/10000, rp_joint_dd_lin_pc8_p-np.nanmedian(rp_joint_dd_lin_pc8_p), yerr = np.array(rp_e_joint_dd_lin_pc8_p).T, fmt = ".", label = "8 PC, Linear", color = colors[20])
plt.errorbar(np.array(bins_joint_dd_lin_pc9_p)/10000, rp_joint_dd_lin_pc9_p-np.nanmedian(rp_joint_dd_lin_pc9_p), yerr = np.array(rp_e_joint_dd_lin_pc9_p).T, fmt = ".", label = "9 PC, Linear", color = colors[21])
plt.errorbar(np.array(bins_joint_dd_lin_pc_p)/10000, rp_joint_dd_lin_pc_p-np.nanmedian(rp_joint_dd_lin_pc_p), yerr = np.array(rp_e_joint_dd_lin_pc_p).T, fmt = ".", label = "10/All PCs, Linear", color = colors[22])
plt.errorbar(np.array(bins_joint_dd_lin_p)/10000, rp_joint_dd_lin_p-np.nanmedian(rp_joint_dd_lin_p), yerr = np.array(rp_e_joint_dd_lin_p).T, fmt = ".", label = "Jitter Vectors, Linear", color = colors[24])
plt.errorbar(np.array(bins_7_dd_pc1_p)/10000, rp_7_dd_pc1_p-np.nanmedian(rp_7_dd_pc1_p), yerr = np.array(rp_e_7_dd_pc1_p).T, fmt = ".", color = colors[1])
plt.errorbar(np.array(bins_7_dd_pc2_p)/10000, rp_7_dd_pc2_p-np.nanmedian(rp_7_dd_pc2_p), yerr = np.array(rp_e_7_dd_pc2_p).T, fmt = ".", color = colors[2])
plt.errorbar(np.array(bins_7_dd_pc3_p)/10000, rp_7_dd_pc3_p-np.nanmedian(rp_7_dd_pc3_p), yerr = np.array(rp_e_7_dd_pc3_p).T, fmt = ".", color = colors[3])
plt.errorbar(np.array(bins_7_dd_pc4_p)/10000, rp_7_dd_pc4_p-np.nanmedian(rp_7_dd_pc4_p), yerr = np.array(rp_e_7_dd_pc4_p).T, fmt = ".", color = colors[4])
plt.errorbar(np.array(bins_7_dd_pc5_p)/10000, rp_7_dd_pc5_p-np.nanmedian(rp_7_dd_pc5_p), yerr = np.array(rp_e_7_dd_pc5_p).T, fmt = ".", color = colors[5])
plt.errorbar(np.array(bins_7_dd_pc6_p)/10000, rp_7_dd_pc6_p-np.nanmedian(rp_7_dd_pc6_p), yerr = np.array(rp_e_7_dd_pc6_p).T, fmt = ".", color = colors[6])
plt.errorbar(np.array(bins_7_dd_pc7_p)/10000, rp_7_dd_pc7_p-np.nanmedian(rp_7_dd_pc7_p), yerr = np.array(rp_e_7_dd_pc7_p).T, fmt = ".", color = colors[7])
plt.errorbar(np.array(bins_7_dd_pc8_p)/10000, rp_7_dd_pc8_p-np.nanmedian(rp_7_dd_pc8_p), yerr = np.array(rp_e_7_dd_pc8_p).T, fmt = ".", color = colors[8])
plt.errorbar(np.array(bins_7_dd_pc9_p)/10000, rp_7_dd_pc9_p-np.nanmedian(rp_7_dd_pc9_p), yerr = np.array(rp_e_7_dd_pc9_p).T, fmt = ".", color = colors[9])
plt.errorbar(np.array(bins_7_dd_pc_p)/10000, rp_7_dd_pc_p-np.nanmedian(rp_7_dd_pc_p), yerr = np.array(rp_e_7_dd_pc_p).T, fmt = ".", color = colors[10])
plt.errorbar(np.array(bins_7_dd_p)/10000, rp_7_dd_p-np.nanmedian(rp_7_dd_p), yerr = np.array(rp_e_7_dd_p).T, fmt = ".", color = colors[0])
plt.errorbar(np.array(bins_7_dd_lin_pc1_p)/10000, rp_7_dd_lin_pc1_p-np.nanmedian(rp_7_dd_lin_pc1_p), yerr = np.array(rp_e_7_dd_lin_pc1_p).T, fmt = ".", color = colors[13])
plt.errorbar(np.array(bins_7_dd_lin_pc2_p)/10000, rp_7_dd_lin_pc2_p-np.nanmedian(rp_7_dd_lin_pc2_p), yerr = np.array(rp_e_7_dd_lin_pc2_p).T, fmt = ".", color = colors[14])
plt.errorbar(np.array(bins_7_dd_lin_pc3_p)/10000, rp_7_dd_lin_pc3_p-np.nanmedian(rp_7_dd_lin_pc3_p), yerr = np.array(rp_e_7_dd_lin_pc3_p).T, fmt = ".", color = colors[15])
plt.errorbar(np.array(bins_7_dd_lin_pc4_p)/10000, rp_7_dd_lin_pc4_p-np.nanmedian(rp_7_dd_lin_pc4_p), yerr = np.array(rp_e_7_dd_lin_pc4_p).T, fmt = ".", color = colors[16])
plt.errorbar(np.array(bins_7_dd_lin_pc5_p)/10000, rp_7_dd_lin_pc5_p-np.nanmedian(rp_7_dd_lin_pc5_p), yerr = np.array(rp_e_7_dd_lin_pc5_p).T, fmt = ".", color = colors[17])
plt.errorbar(np.array(bins_7_dd_lin_pc6_p)/10000, rp_7_dd_lin_pc6_p-np.nanmedian(rp_7_dd_lin_pc6_p), yerr = np.array(rp_e_7_dd_lin_pc6_p).T, fmt = ".", color = colors[18])
plt.errorbar(np.array(bins_7_dd_lin_pc7_p)/10000, rp_7_dd_lin_pc7_p-np.nanmedian(rp_7_dd_lin_pc7_p), yerr = np.array(rp_e_7_dd_lin_pc7_p).T, fmt = ".", color = colors[19])
plt.errorbar(np.array(bins_7_dd_lin_pc8_p)/10000, rp_7_dd_lin_pc8_p-np.nanmedian(rp_7_dd_lin_pc8_p), yerr = np.array(rp_e_7_dd_lin_pc8_p).T, fmt = ".", color = colors[20])
plt.errorbar(np.array(bins_7_dd_lin_pc9_p)/10000, rp_7_dd_lin_pc9_p-np.nanmedian(rp_7_dd_lin_pc9_p), yerr = np.array(rp_e_7_dd_lin_pc9_p).T, fmt = ".", color = colors[21])
plt.errorbar(np.array(bins_7_dd_lin_pc_p)/10000, rp_7_dd_lin_pc_p-np.nanmedian(rp_7_dd_lin_pc_p), yerr = np.array(rp_e_7_dd_lin_pc_p).T, fmt = ".", color = colors[22])
plt.errorbar(np.array(bins_7_lin_dd_p)/10000, rp_7_lin_dd_p-np.nanmedian(rp_7_lin_dd_p), yerr = np.array(rp_e_7_lin_dd_p).T, fmt = ".", color = colors[24])
#plt.ylim(0.14, 0.155)
p = plt.legend(loc=(1.05,0.025), prop={'size':12})
plt.ylabel(r"R$_p$/R$_s$ - Average R$_p$/R$_s$")
plt.xlabel(r"Wavelength ($\mu$m)")
plt.title("WASP-121 b", fontsize=20)
#plt.hlines(0, 0.3, 0.9, linestyle = "--")
#plt.savefig("w121b_subtracted.png", dpi = 300, facecolor = "white", bbox_extra_artists=(p,), bbox_inches='tight')
```
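The comparison plot above repeats the same `rp - np.nanmedian(rp)` baseline subtraction for every dataset. A tiny helper captures that step; `median_subtract` is a hypothetical name, not a function defined elsewhere in this notebook:

```python
import numpy as np

def median_subtract(rp):
    """Subtract the per-dataset median Rp/Rs, so transmission spectra from
    different detrending choices share a common baseline (NaN-safe)."""
    rp = np.asarray(rp, dtype=float)
    return rp - np.nanmedian(rp)
```

Each `plt.errorbar(..., rp_x - np.nanmedian(rp_x), ...)` call above could then read `plt.errorbar(..., median_subtract(rp_x), ...)`.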
```python
plt.figure(figsize = (10,7))
tests_gp = [fits_pc1_p, fits_pc2_p, fits_pc3_p, fits_pc4_p, fits_pc5_p, fits_pc6_p, fits_pc7_p, fits_pc8_p, fits_pc9_p, fits_pc_p, fits_jitter_p]
normalized = []
ranges = []
for i in range(len(tests_gp[0])):
    minimum = min([x[i][0] for x in tests_gp])
    maximum = max([x[i][0] for x in tests_gp])
    min_sub = np.array([x[i][0] for x in tests_gp]) - minimum  # np.array so the scalar subtraction broadcasts
    new_max = max(min_sub)
    norm_inbet = []
    ranges.append(maximum - minimum)
    for j in min_sub:
        norm_inbet.append((j - new_max)/new_max + 1)
    normalized.append(norm_inbet)
labels = ["PC1", "PC2", "PC3", "PC4", "PC5", "PC6", "PC7", "PC8", "PC9", "10/All PC", "All Vectors"]
colormap = plt.cm.nipy_spectral
colors = colormap(np.linspace(0, 1, len(fits_pc_p)))
j = 0
for i in normalized:
    #plt.title("Bin " + str(j+1))
    plt.scatter(np.linspace(1,10,11), i, color = colors[j], label = r"$\Delta$ ln Z = " + "{:.2f}".format(ranges[j]))
    j = j+1
plt.xticks(np.linspace(1,10,11), labels, rotation = 90)
plt.ylabel("Normalized ln Z")
p = plt.legend(loc=(1.05,-0.25), prop={'size':12})
plt.title("WASP-121 b, G430L GP")
#plt.savefig("w121b_likelihood_g430l_gp.png", dpi = 300, facecolor = "white", bbox_extra_artists=(p,), bbox_inches='tight')
```
```python
plt.figure(figsize = (10,7))
tests_lin = [fits_lin_pc1_p, fits_lin_pc2_p, fits_lin_pc3_p, fits_lin_pc4_p, fits_lin_pc5_p, fits_lin_pc6_p, fits_lin_pc7_p, fits_lin_pc8_p, fits_lin_pc9_p, fits_lin_pc_p, fits_linear_p]
normalized = []
ranges = []
for i in range(len(tests_lin[0])):
    minimum = min([x[i][0] for x in tests_lin])
    maximum = max([x[i][0] for x in tests_lin])
    min_sub = np.array([x[i][0] for x in tests_lin]) - minimum  # np.array so the scalar subtraction broadcasts
    new_max = max(min_sub)
    norm_inbet = []
    ranges.append(maximum - minimum)
    for j in min_sub:
        norm_inbet.append((j - new_max)/new_max + 1)
    normalized.append(norm_inbet)
labels = ["PC1", "PC2", "PC3", "PC4", "PC5", "PC6", "PC7", "PC8", "PC9", "10/All PC", "All Vectors"]
colormap = plt.cm.nipy_spectral
colors = colormap(np.linspace(0, 1, len(fits_pc_p)))
j = 0
for i in normalized:
    #plt.title("Bin " + str(j+1))
    plt.scatter(np.linspace(1,10,11), i, color = colors[j], label = r"$\Delta$ ln Z = " + "{:.2f}".format(ranges[j]))
    j = j+1
plt.xticks(np.linspace(1,10,11), labels, rotation = 90)
plt.ylabel("Normalized ln Z")
p = plt.legend(loc=(1.05,-0.25), prop={'size':12})
plt.title("WASP-121 b, G430L Linear")
#plt.savefig("w121b_likelihood_g430l_linear.png", dpi = 300, facecolor = "white", bbox_extra_artists=(p,), bbox_inches='tight')
```
```python
plt.figure(figsize = (10,7))
tests_gp_7 = [fits_7_dd_pc1_p, fits_7_dd_pc2_p, fits_7_dd_pc3_p, fits_7_dd_pc4_p, fits_7_dd_pc5_p, fits_7_dd_pc6_p, fits_7_dd_pc7_p, fits_7_dd_pc8_p, fits_7_dd_pc9_p, fits_7_dd_pc_p, fits_7_dd_p]
normalized = []
ranges = []
for i in range(len(tests_gp_7[0])):
    minimum = min([x[i][0] for x in tests_gp_7])
    maximum = max([x[i][0] for x in tests_gp_7])
    min_sub = np.array([x[i][0] for x in tests_gp_7]) - minimum  # np.array so the scalar subtraction broadcasts
    new_max = max(min_sub)
    norm_inbet = []
    ranges.append(maximum - minimum)
    for j in min_sub:
        norm_inbet.append((j - new_max)/new_max + 1)
    normalized.append(norm_inbet)
labels = ["PC1", "PC2", "PC3", "PC4", "PC5", "PC6", "PC7", "PC8", "PC9", "10/All PC", "All Vectors"]
colormap = plt.cm.nipy_spectral
colors = colormap(np.linspace(0, 1, len(fits_pc_p)))
j = 0
for i in normalized:
    #plt.title("Bin " + str(j+1))
    plt.scatter(np.linspace(1,10,11), i, color = colors[j], label = r"$\Delta$ ln Z = " + "{:.2f}".format(ranges[j]))
    j = j+1
plt.xticks(np.linspace(1,10,11), labels, rotation = 90)
plt.ylabel("Normalized ln Z")
plt.title("WASP-121 b, G750L GP")
p = plt.legend(loc=(1.05,-0.2), prop={'size':12})
#plt.savefig("w121b_likelihood_g750l_gp.png", dpi = 300, facecolor = "white", bbox_extra_artists=(p,), bbox_inches = "tight")
```
```python
plt.figure(figsize = (10,7))
tests_lin_7 = [fits_7_lin_dd_pc1_p, fits_7_lin_dd_pc2_p, fits_7_lin_dd_pc3_p, fits_7_lin_dd_pc4_p, fits_7_lin_dd_pc5_p, fits_7_lin_dd_pc6_p, fits_7_lin_dd_pc7_p, fits_7_lin_dd_pc8_p, fits_7_lin_dd_pc9_p, fits_7_lin_dd_pc_p, fits_7_lin_dd_p]
normalized = []
ranges = []
for i in range(len(tests_lin_7[0])):
    minimum = min([x[i][0] for x in tests_lin_7])
    maximum = max([x[i][0] for x in tests_lin_7])
    #print(minimum)
    min_sub = np.array([x[i][0] for x in tests_lin_7]) - minimum  # np.array so the scalar subtraction broadcasts
    new_max = max(min_sub)
    norm_inbet = []
    ranges.append(maximum - minimum)
    for j in min_sub:
        norm_inbet.append((j - new_max)/new_max + 1)
    normalized.append(norm_inbet)
labels = ["PC1", "PC2", "PC3", "PC4", "PC5", "PC6", "PC7", "PC8", "PC9", "10/All PC", "All Vectors"]
colormap = plt.cm.nipy_spectral
colors = colormap(np.linspace(0, 1, len(fits_pc_p)))
j = 0
for i in normalized:
    #plt.title("Bin " + str(j+1))
    plt.scatter(np.linspace(1,10,11), i, color = colors[j], label = r"$\Delta$ ln Z = " + "{:.2f}".format(ranges[j]))
    j = j+1
plt.xticks(np.linspace(1,10,11), labels, rotation = 90)
plt.ylabel("Normalized ln Z")
p = plt.legend(loc=(1.05,-0.2), prop={'size':12})
plt.title("WASP-121 b, G750L Linear")
#plt.savefig("w121b_likelihood_g750l_linear.png", dpi = 300, facecolor = "white", bbox_extra_artists=(p,), bbox_inches = "tight")
```
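The four model-comparison cells above all apply the same scaling to the per-bin log-evidences: shift so the worst model sits at zero, then divide by the spread (the `(j - new_max)/new_max + 1` expression reduces to `j/new_max`). A small helper makes that explicit; `normalize_log_evidence` is a hypothetical name, not a function used elsewhere in this notebook:

```python
import numpy as np

def normalize_log_evidence(lnZ):
    """Scale a set of ln Z values onto [0, 1] and return the spread.

    Equivalent to the in-notebook (x - max)/max + 1 scaling applied after
    subtracting the minimum: worst model -> 0, best model -> 1.
    """
    lnZ = np.asarray(lnZ, dtype=float)
    shifted = lnZ - lnZ.min()
    spread = shifted.max()  # Delta ln Z between best and worst model
    return shifted / spread, spread
```

Each wavelength bin's `[x[i][0] for x in tests_...]` list could be passed through this helper, with the returned spread feeding the `$\Delta$ ln Z` legend entries.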
```python
# with the final detrending vectors decided, we run our final fits
# first, white light curves
gp_fit_joint_final = joint_white_light_fit(params_430, times_start_430, lc_430, lc_err_430, norm_pc_use5_430, times_start_430_2, lc_430_2, lc_err_430_2, norm_pc_use5_430_2, sys_method = "gp", juliet_name = "wasp-121_430_joint_newt_pc_dynamic_newpriors_rho_uniform_final", gp_kernel = "Matern", limb_darkening = "fixed", sampler = "dynamic_dynesty")
```
```python
linear_fit_joint_final = joint_white_light_fit(params_430, times_start_430, lc_430, lc_err_430, norm_pc_use2_430, times_start_430_2, lc_430_2, lc_err_430_2, norm_pc_use2_430_2, sys_method = "linear", juliet_name = "wasp-121_430_joint_linear_final", limb_darkening = "fixed", sampler = "dynamic_dynesty")
```
```python
gp_fit_750_final = white_light_fit(params_750, times_start_750, lc_750, lc_err_750, norm_pc_use5_750, sys_method = "gp", juliet_name = "wasp-121_750_gp_final", gp_kernel = "Matern", sampler = "dynamic_dynesty")
```
```python
linear_fit_750_final = white_light_fit(params_750, times_start_750, lc_750, lc_err_750, norm_pc_use2_750, sys_method = "linear", juliet_name = "wasp-121_750_linear_final", sampler = "dynamic_dynesty")
```
```python
plt.figure(figsize = (10,7))
t_final = np.linspace(times_start_750[0], times_start_750[-1], 1000)
transit_plus_GP_model = gp_fit_750_final.lc.evaluate('STIS')
transit_model = gp_fit_750_final.lc.evaluate('STIS', evaluate_transit = True)
transit_model_resamp, error68_up, error68_down = gp_fit_750_final.lc.evaluate('STIS', evaluate_transit = True, t = t_final, return_err = True)
offset = np.nanmedian(gp_fit_750_final.posteriors['posterior_samples']['mflux_STIS'])
plt.scatter(times_start_750, (lc_750/np.median(lc_750[:10])), label = "raw data", color = "cornflowerblue", alpha = 0.7)
plt.scatter(times_start_750, (lc_750/np.median(lc_750[:10])) - (transit_plus_GP_model - transit_model) + offset,alpha=1, label = "gp detrending", color = "orchid")
plt.plot(np.linspace(times_start_750[0], times_start_750[-1], 1000), transit_model_resamp + offset, color='tomato',zorder=10, label = "gp + transit model")
plt.fill_between(t_final, error68_up+offset, error68_down+offset, color = "tomato", alpha = 0.2, label = "68% error")
plt.legend()
plt.ylabel("Relative Flux")
plt.xlabel("Time (BJD-TBD)")
plt.ylim(0.98, 1.006)
#plt.savefig("w121b_gp_750.png", dpi = 300, facecolor = "white", bbox_inches="tight")
```
```python
plt.figure(figsize = (10,7))
t_final = np.linspace(times_start_750[0], times_start_750[-1], 1000)
transit_plus_GP_model = linear_fit_750_final.lc.evaluate('STIS')
transit_model = linear_fit_750_final.lc.evaluate('STIS', evaluate_transit = True)
transit_model_resamp, error68_up, error68_down = linear_fit_750_final.lc.evaluate('STIS', evaluate_transit = True, t = t_final, return_err = True)
offset = np.nanmedian(linear_fit_750_final.posteriors['posterior_samples']['mflux_STIS'])
plt.scatter(times_start_750, (lc_750/np.median(lc_750[:10])), label = "raw data", color = "cornflowerblue", alpha = 0.7)
plt.scatter(times_start_750, (lc_750/np.median(lc_750[:10])) - (transit_plus_GP_model - transit_model) + offset,alpha=1, label = "linear detrending", color = "orchid")
plt.plot(np.linspace(times_start_750[0], times_start_750[-1], 1000), transit_model_resamp + offset, color='tomato',zorder=10, label = "linear + transit model")
plt.fill_between(t_final, error68_up+offset, error68_down+offset, color = "tomato", alpha = 0.2, label = "68% error")
plt.legend()
plt.ylabel("Relative Flux")
plt.xlabel("Time (BJD-TDB)")
plt.ylim(0.98, 1.006)
#plt.savefig("w121b_linear_750.png", dpi = 300, facecolor = "white", bbox_inches="tight")
```
```python
plt.figure(figsize = (10,7))
phases = juliet.utils.get_phases(times_start_430, params_430["P"], params_430["t0"])
phases_2 = juliet.utils.get_phases(times_start_430_2, params_430["P"], params_430["t0"])
transit_plus_GP_model1 = gp_fit_joint_final.lc.evaluate('STIS1')
transit_model1 = gp_fit_joint_final.lc.evaluate('STIS1', evaluate_transit = True)
gp_model1 = transit_plus_GP_model1 - transit_model1
transit_plus_GP_model2 = gp_fit_joint_final.lc.evaluate('STIS2')
transit_model2 = gp_fit_joint_final.lc.evaluate('STIS2', evaluate_transit = True)
gp_model2 = transit_plus_GP_model2 - transit_model2
offset1 = np.nanmedian(gp_fit_joint_final.posteriors['posterior_samples']['mflux_STIS1'])
offset2 = np.nanmedian(gp_fit_joint_final.posteriors['posterior_samples']['mflux_STIS2'])
t_final = np.linspace(times_start_430[0], times_start_430[-1], 1000)
transit_model2_resamp, error68_up, error68_down = gp_fit_joint_final.lc.evaluate('STIS2', evaluate_transit = True, t = t_final, return_err = True)
model_phases = juliet.utils.get_phases(t_final, params_430["P"], params_430["t0"])
offset3 = np.nanmedian(gp_fit_joint_final.posteriors['posterior_samples']['mflux_STIS2'])
plt.scatter(phases, (lc_430/np.median(lc_430[:10])),alpha=0.3, label = "raw data (transit 1)", color = "cornflowerblue",)
plt.scatter(phases, (lc_430/np.median(lc_430[:10])) - (transit_plus_GP_model1 - transit_model1) + offset1,alpha=0.6, label = "gp detrending (transit 1)", color = "orchid")
plt.scatter(phases_2, (lc_430_2/np.median(lc_430_2[:10])),alpha=0.3, label = "raw data (transit 2)", color = "darkorange",)
plt.scatter(phases_2, (lc_430_2/np.median(lc_430_2[:10])) - (transit_plus_GP_model2 - transit_model2) + offset2,alpha=0.6, label = "gp detrending (transit 2)", color = "green")
plt.plot(model_phases, transit_model2_resamp + offset3, color = "tomato", label = "gp + transit model", zorder = 0)
plt.fill_between(model_phases, error68_up+offset3, error68_down+offset3, color = "tomato", alpha = 0.2, label = "68% error")
plt.legend()
plt.ylabel("Relative Flux")
plt.xlabel("Phase")
plt.ylim(0.98, 1.006)
#plt.savefig("w121b_gp_430.png", dpi = 300, facecolor = "white", bbox_inches="tight")
```
```python
plt.figure(figsize = (10,7))
phases = juliet.utils.get_phases(times_start_430, params_430["P"], params_430["t0"])
phases_2 = juliet.utils.get_phases(times_start_430_2, params_430["P"], params_430["t0"])
transit_plus_GP_model1 = linear_fit_joint_final.lc.evaluate('STIS1')
transit_model1 = linear_fit_joint_final.lc.evaluate('STIS1', evaluate_transit = True)
gp_model1 = transit_plus_GP_model1 - transit_model1
transit_plus_GP_model2 = linear_fit_joint_final.lc.evaluate('STIS2')
transit_model2 = linear_fit_joint_final.lc.evaluate('STIS2', evaluate_transit = True)
gp_model2 = transit_plus_GP_model2 - transit_model2
offset1 = np.nanmedian(linear_fit_joint_final.posteriors['posterior_samples']['mflux_STIS1'])
offset2 = np.nanmedian(linear_fit_joint_final.posteriors['posterior_samples']['mflux_STIS2'])
t_final = np.linspace(times_start_430[0], times_start_430[-1], 1000)
transit_model2_resamp, error68_up, error68_down = linear_fit_joint_final.lc.evaluate('STIS2', evaluate_transit = True, t = t_final, return_err = True)
model_phases = juliet.utils.get_phases(t_final, params_430["P"], params_430["t0"])
offset3 = np.nanmedian(linear_fit_joint_final.posteriors['posterior_samples']['mflux_STIS2'])
plt.scatter(phases, (lc_430/np.median(lc_430[:10])),alpha=0.3, label = "raw data (transit 1)", color = "cornflowerblue",)
plt.scatter(phases, (lc_430/np.median(lc_430[:10])) - (transit_plus_GP_model1 - transit_model1) + offset1,alpha=0.6, label = "linear detrending (transit 1)", color = "orchid")
plt.scatter(phases_2, (lc_430_2/np.median(lc_430_2[:10])),alpha=0.3, label = "raw data (transit 2)", color = "darkorange",)
plt.scatter(phases_2, (lc_430_2/np.median(lc_430_2[:10])) - (transit_plus_GP_model2 - transit_model2) + offset2,alpha=0.6, label = "linear detrending (transit 2)", color = "green")
plt.plot(model_phases, transit_model2_resamp + offset3, color = "tomato", label = "linear + transit model", zorder = 0)
plt.fill_between(model_phases, error68_up+offset3, error68_down+offset3, color = "tomato", alpha = 0.2, label = "68% error")
plt.legend()
plt.ylabel("Relative Flux")
plt.xlabel("Phase")
plt.ylim(0.98, 1.006)
#plt.savefig("w121b_linear_430.png", dpi = 300, facecolor = "white", bbox_inches="tight")
```
```python
# final best-fit spectroscopic light curves
bins_7_dd_pc5_p, rp_7_dd_pc5_p, rp_e_7_dd_pc5_p, fits_7_dd_pc5_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use5_750,\
"wasp-121_g750_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_750_dd_pc5_phase",\
mode = 'HST_STIS_G750L', sampler = "dynamic_dynesty", plot = True,\
savefig = False, fig_name = "w121b_750_gp_slc.png")
```
```python
bins_7_dd_lin_pc2_p, rp_7_dd_lin_pc2_p, rp_e_7_dd_lin_pc2_p, fits_7_lin_dd_pc2_p = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use2_750,\
"wasp-121_g750_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_750_dd_pc2_linear_phase",\
mode = 'HST_STIS_G750L', sampler = "dynamic_dynesty", plot = True,\
savefig = False, fig_name = "w121b_750_linear_slc.png")
```
```python
bins_joint_dd_lin_pc2_p, rp_joint_dd_lin_pc2_p, rp_e_joint_dd_lin_pc2_p, fits_lin_pc2_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use2_430,\
times_start_430_2, spectra_430_2, norm_pc_use2_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_430_spec_d_dynesty_norm_fixed_linear_pc2_phase",\
mode = 'HST_STIS_G430L', sampler = "dynamic_dynesty", plot = True,\
savefig = False, fig_name = "w121b_430_linear_slc.png")
```
```python
bins_joint_dd_pc5_p, rp_joint_dd_pc5_p, rp_e_joint_dd_pc5_p, fits_pc5_p = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use5_430,\
times_start_430_2, spectra_430_2, norm_pc_use5_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_430_spec_d_dynesty_norm_pc5_fixed2_phase",\
mode = 'HST_STIS_G430L', sampler = "dynamic_dynesty", plot = True,\
savefig = False, fig_name = "w121b_430_gp_slc.png")
```
```python
bins_gp = np.concatenate((np.array(bins_joint_dd_pc5_p)/10000, np.array(bins_7_dd_pc5_p)/10000))
rps_gp = np.concatenate((rp_joint_dd_pc5_p, rp_7_dd_pc5_p))
err_gp = np.concatenate((rp_e_joint_dd_pc5_p, rp_e_7_dd_pc5_p))
bins_lin = np.concatenate((np.array(bins_joint_dd_lin_pc2_p)/10000, np.array(bins_7_dd_lin_pc2_p)/10000))
rps_lin = np.concatenate((rp_joint_dd_lin_pc2_p, rp_7_dd_lin_pc2_p))
err_lin = np.concatenate((rp_e_joint_dd_lin_pc2_p, rp_e_7_dd_lin_pc2_p))
#np.savetxt("w121b_gp_spectrum.txt", list(zip(bins_gp, rps_gp, err_gp[:,0])), fmt='%.18e')
#np.savetxt("w121b_lin_spectrum.txt", list(zip(bins_lin, rps_lin, err_lin[:,0])), fmt='%.18e')
#np.savetxt("w121b_gp_depths_ppm.txt", rps_gp**2*1e6)
#np.savetxt("w121b_gp_errs_1_ppm.txt", 2*err_gp[:,0]*rps_gp*1e6)
#np.savetxt("w121b_gp_errs_2_ppm.txt", 2*err_gp[:,1]*rps_gp*1e6)
#np.savetxt("w121b_lin_depths_ppm.txt", rps_lin**2*1e6)
#np.savetxt("w121b_lin_errs_1_ppm.txt", 2*err_lin[:,0]*rps_lin*1e6)
#np.savetxt("w121b_lin_errs_2_ppm.txt", 2*err_lin[:,1]*rps_lin*1e6)
```
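The commented-out `savetxt` calls above convert the fitted radius ratios to transit depths in ppm. As a minimal, self-contained sketch of that arithmetic (the values here are made up for illustration, not the fitted results):

```python
import numpy as np

# hypothetical Rp/Rs values and 1-sigma uncertainties (illustrative only)
rps = np.array([0.122, 0.124, 0.121])
errs = np.array([0.0009, 0.0011, 0.0008])

# transit depth in ppm: depth = (Rp/Rs)**2
depths_ppm = rps**2 * 1e6

# first-order error propagation: sigma_depth ~ 2 * (Rp/Rs) * sigma_rp
depth_errs_ppm = 2 * rps * errs * 1e6

print(depths_ppm)       # depths of order ~1.5e4 ppm for Rp/Rs ~ 0.12
print(depth_errs_ppm)
```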
```python
# plot the spectra to compare
plt.figure(figsize = (10,7))
plt.errorbar(np.array(bins_joint_dd_pc5_p)/10000, rp_joint_dd_pc5_p, yerr = np.array(rp_e_joint_dd_pc5_p).T, fmt = ".", color = "tomato")
plt.errorbar(np.array(bins_7_dd_pc5_p)/10000, rp_7_dd_pc5_p, yerr = np.array(rp_e_7_dd_pc5_p).T, fmt = ".", color = "tomato", label = "GP Detrending")
plt.errorbar(np.array(bins_joint_dd_lin_pc2_p)/10000, rp_joint_dd_lin_pc2_p, yerr = np.array(rp_e_joint_dd_lin_pc2_p).T, fmt = ".", color = "cornflowerblue")
plt.errorbar(np.array(bins_7_dd_lin_pc2_p)/10000, rp_7_dd_lin_pc2_p, yerr = np.array(rp_e_7_dd_lin_pc2_p).T, fmt = ".", color = "cornflowerblue", label = "Linear Detrending")
plt.legend()
plt.ylabel(r"R$_p$/R$_s$")
plt.xlabel(r"Wavelength ($\mu$m)")
plt.title("WASP-121 b", fontsize=20)
#plt.savefig("w121b_spectrum.png", facecolor = "white", dpi = 300, bbox_inches="tight")
```
```python
# results from evans et al. 2018 for comparison
evans = np.loadtxt("past-results/w121_evans.txt")
```
```python
# compare with past results
# hm! seems like there's kind of an offset at longer wavelengths - let's look into that a little
# maybe limb darkening
plt.figure(figsize = (10,7))
plt.errorbar(np.array(bins_joint_dd_pc5_p)/10000, np.array(rp_joint_dd_pc5_p)**2*1e6, yerr = np.array(rp_e_joint_dd_pc5_p).T*2*np.array(rp_joint_dd_pc5_p)*1e6, fmt = ".", color = "tomato")
plt.errorbar(np.array(bins_7_dd_pc5_p)/10000, np.array(rp_7_dd_pc5_p)**2*1e6, yerr = np.array(rp_e_7_dd_pc5_p).T*2*np.array(rp_7_dd_pc5_p)*1e6, fmt = ".", color = "tomato", label = "GP Detrending")
plt.errorbar(np.array(bins_joint_dd_lin_pc2_p)/10000, np.array(rp_joint_dd_lin_pc2_p)**2*1e6, yerr = np.array(rp_e_joint_dd_lin_pc2_p).T*2*np.array(rp_joint_dd_lin_pc2_p)*1e6, fmt = ".", color = "cornflowerblue")
plt.errorbar(np.array(bins_7_dd_lin_pc2_p)/10000, np.array(rp_7_dd_lin_pc2_p)**2*1e6, yerr = np.array(rp_e_7_dd_lin_pc2_p).T*2*np.array(rp_7_dd_lin_pc2_p)*1e6, fmt = ".", color = "cornflowerblue", label = "Linear Detrending")
plt.errorbar(evans[:,0], evans[:,2]**2*1e6, yerr = np.array(list(zip(evans[:,3], evans[:,4]))).T*2*evans[:,2]*1e6, fmt = ".", color = "darkseagreen", label = "Evans et al. 2018")
plt.legend()
plt.ylabel(r"Transit Depth (ppm)")
plt.xlabel(r"Wavelength ($\mu$m)")
plt.title("WASP-121 b", fontsize=20)
plt.xlim(0.3, 0.95)
#plt.savefig("w121b_spectrum_comparison.png", facecolor = "white", dpi = 300, bbox_inches="tight")
```
```python
# just a quick copy of the usual light curve fitting function, but with the below limb darkening from
# the evans et al 2018 paper
ld1 = [0.36, 0.39, 0.36, 0.31, 0.25, 0.26, 0.26, 0.21, 0.3, 0.3, 0.22, 0.21, 0.28, 0.19, 0.21, 0.16, 0.22, 0.15, 0.22, 0.18, 0.07, 0.16, 0.21, 0.12, 0.16, 0.16, 0.16, 0.08, 0.10, 0.16]
ld2 = [0.3, 0.22, 0.3, 0.31, 0.28, 0.26, 0.28, 0.34, 0.2, 0.23, 0.22, 0.18, 0.21, 0.25, 0.38, 0.25, 0.19, 0.26, 0.23, 0.22, 0.29, 0.16, 0.3, 0.32, 0.26, 0.27, 0.25, 0.32, 0.2, 0.22]
def spectroscopic_lightcurve_fit_evansld(params, wl, times, spectra, detrenders, bins, sld, bin_unit = "nm",\
sys_method = "gp", juliet_name = None, mode = None, plot = False,\
vertical_offset = 0.015, figure_offset = 0.015, savefig = False,\
fig_name = None, method = None, sampler = None):
ordered_binned_spectrum, ordered_binned_errors, bin_centers = binning(wl, spectra, bins, bin_unit = bin_unit)
bin_lst = np.loadtxt(bins, delimiter = "\t")
if bin_unit == "nm":
bin_lst = bin_lst*10
fits = []
depths = []
depth_e = []
fit_value = []
    if not os.path.exists("juliet_fits/" + juliet_name):
        os.mkdir("juliet_fits/" + juliet_name)
colormap = plt.cm.nipy_spectral
colors = colormap(np.linspace(0.1, 0.9, len(ordered_binned_spectrum)))
plt.figure(figsize = (8, 24))
v_off = 0
fig_off = 0
for i in range(len(ordered_binned_spectrum)):
        if mode is None:
            print("For limb darkening, please pass mode = which HST mode you're using :)")
            break
        # c1, c2 from the limb-darkening calculator are computed here but
        # immediately overridden by the fixed Evans et al. (2018) values below
        c1, c2 = sld.compute_quadratic_ld_coeffs(
            wavelength_range=np.array([bin_lst[i][0], bin_lst[i][1]]),
            mode=mode)
        params["c1"] = [ld1[i]]
        params["c2"] = [ld2[i]]
        #print(c1, c2)
if sys_method == "gp":
name = juliet_name + "/" + juliet_name + "_bin" + str(i+1).zfill(3)
wl_lc = white_light_fit(params, times, ordered_binned_spectrum[i], ordered_binned_errors[i], detrenders, sys_method = "gp", juliet_name=name, sampler = sampler)
fits.append(wl_lc)
fit_value.append(np.array([wl_lc.posteriors['lnZ'], wl_lc.posteriors['lnZerr']]))
#if plot == True:
# transit_plus_GP_model = wl_lc.lc.evaluate('STIS')
# transit_model = wl_lc.lc.evaluate('STIS', evaluate_transit = True)
# gp_model = transit_plus_GP_model - transit_model
# plt.figure(figsize = (10,7))
# plt.scatter(times, (ordered_binned_spectrum[i]/ordered_binned_spectrum[i][0]))
# plt.scatter(times, (ordered_binned_spectrum[i]/ordered_binned_spectrum[i][0]) - (transit_plus_GP_model - transit_model),alpha=0.4, label = "gp detrending")
# plt.ylabel("Relative Flux")
# plt.xlabel("Time (BJD)")
# plt.plot(times, transit_model, color='red',zorder=10, label = "gp transit model")
# plt.legend()
# plt.show()
elif sys_method == "linear":
if method == "LM":
wl_lc = white_light_fit(params, times, ordered_binned_spectrum[i], detrenders, sys_method = "linear", method = "LM")
fits.append(wl_lc)
else:
name = juliet_name + "/" + juliet_name + "_bin" + str(i+1).zfill(3)
wl_lc = white_light_fit(params, times, ordered_binned_spectrum[i], ordered_binned_errors[i], detrenders, sys_method = "linear", juliet_name = name, sampler = sampler)
fits.append(wl_lc)
fit_value.append(np.array([wl_lc.posteriors['lnZ'], wl_lc.posteriors['lnZerr']]))
if plot == True:
t_final = np.linspace(times[0], times[-1], 1000)
# then the gp detrending
transit_plus_GP_model = wl_lc.lc.evaluate('STIS')
transit_model = wl_lc.lc.evaluate('STIS', evaluate_transit = True)
transit_model_resamp, error68_up, error68_down = wl_lc.lc.evaluate('STIS', evaluate_transit = True, t = t_final, return_err = True)
# there may be a small vertical offset between the two models because of the different normalizations - account for that
offset = np.nanmedian(wl_lc.posteriors['posterior_samples']['mflux_STIS'])
plt.scatter(times, (ordered_binned_spectrum[i]/ordered_binned_spectrum[i][0])+v_off, label = "raw data", color = colors[i], alpha = 0.3, s = 15)
plt.scatter(times, (ordered_binned_spectrum[i]/ordered_binned_spectrum[i][0]) - (transit_plus_GP_model - transit_model) + offset+v_off,alpha=1, label = "gp detrending", color = colors[i], s = 15)
#plt.plot(t_final, batman_model(t_final), color='black',zorder=10, label = "gp transit model (batman)")
            plt.plot(t_final, transit_model_resamp + offset+v_off, color = colors[i],zorder=10, label = "transit model")
plt.fill_between(t_final, error68_up+offset+v_off, error68_down+offset+v_off, color = colors[i], alpha = 0.2, label = "68% error")
txt = plt.text(times[0], 1.004 + fig_off ,str(bin_lst[i]/10000), size=11, color='black', weight = "bold")
txt.set_path_effects([PathEffects.withStroke(linewidth=5, foreground='w')])
v_off -= vertical_offset
fig_off -= figure_offset
if plot == True:
plt.ylabel("Relative Flux + Vertical Offset")
        plt.xlabel("Time (BJD-TDB)")
plt.ylim(1.004 + fig_off - figure_offset, 1.004 + figure_offset)
if savefig == True:
plt.savefig(fig_name, dpi = 300, facecolor = "white", bbox_inches="tight")
else:
plt.show()
if sys_method == "gp":
for file in sorted(glob.glob("juliet_fits/" + juliet_name + "/*bin*/*posteriors.dat*")):
fit = np.genfromtxt(file, dtype=None)
for param in fit:
if param[0].decode() == "p_p1":
depths.append(param[1])
depth_e.append((param[2], param[3]))
return(bin_centers, depths, depth_e, fit_value)
elif sys_method == "linear":
if method == "LM":
print("not implemented :]")
else:
for file in sorted(glob.glob("juliet_fits/" + juliet_name + "/*bin*/*posteriors.dat*")):
fit = np.genfromtxt(file, dtype=None)
for param in fit:
if param[0].decode() == "p_p1":
depths.append(param[1])
depth_e.append((param[2], param[3]))
return(bin_centers, depths, depth_e, fit_value)
```
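The fixed (c1, c2) pairs above are coefficients of the standard quadratic limb-darkening law, I(mu)/I(1) = 1 - c1\*(1 - mu) - c2\*(1 - mu)^2. A quick self-contained sketch of the profile they describe (the mu grid here is arbitrary, not anything used by the fits):

```python
import numpy as np

def quadratic_ld(mu, c1, c2):
    # standard quadratic limb-darkening law: intensity relative to disk center
    return 1.0 - c1 * (1.0 - mu) - c2 * (1.0 - mu) ** 2

# e.g. the first G430L bin from the lists above: c1 = 0.36, c2 = 0.3
mu = np.linspace(0.0, 1.0, 5)
print(quadratic_ld(mu, 0.36, 0.3))  # rises from 0.34 at the limb to 1.0 at disk center
```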
```python
# testing evans limb darkening
bins_7_dd_pc5_p_eld, rp_7_dd_pc5_p_eld, rp_e_7_dd_pc5_p_eld, fits_7_dd_pc5_p_eld = spectroscopic_lightcurve_fit_evansld(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use5_750,\
"wasp-121_g750_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_750_dd_pc5_phase_eld",\
mode = 'HST_STIS_G750L', sampler = "dynamic_dynesty", plot = True,\
savefig = False, fig_name = "w121b_750_gp_slc_eld.png")
```
```python
bins_7_dd_lin_pc2_p_eld, rp_7_dd_lin_pc2_p_eld, rp_e_7_dd_lin_pc2_p_eld, fits_7_lin_dd_pc2_p_eld = spectroscopic_lightcurve_fit_evansld(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use2_750,\
"wasp-121_g750_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_750_dd_pc2_linear_phase_eld",\
mode = 'HST_STIS_G750L', sampler = "dynamic_dynesty", plot = True,\
savefig = False, fig_name = "w121b_750_linear_slc_eld.png")
```
```python
# helped a little, what about an offset too?
plt.figure(figsize = (10,7))
plt.errorbar(np.array(bins_joint_dd_pc5_p)/10000, rp_joint_dd_pc5_p, yerr = np.array(rp_e_joint_dd_pc5_p).T, fmt = ".", color = "tomato")
plt.errorbar(np.array(bins_7_dd_pc5_p)/10000, rp_7_dd_pc5_p, yerr = np.array(rp_e_7_dd_pc5_p).T, fmt = ".", color = "tomato", label = "GP Detrending")
plt.errorbar(np.array(bins_joint_dd_lin_pc2_p)/10000, rp_joint_dd_lin_pc2_p, yerr = np.array(rp_e_joint_dd_lin_pc2_p).T, fmt = ".", color = "cornflowerblue")
plt.errorbar(np.array(bins_7_dd_lin_pc2_p)/10000, rp_7_dd_lin_pc2_p, yerr = np.array(rp_e_7_dd_lin_pc2_p).T, fmt = ".", color = "cornflowerblue", label = "Linear Detrending")
plt.errorbar(np.array(bins_7_dd_pc5_p_eld)/10000, rp_7_dd_pc5_p_eld, yerr = np.array(rp_e_7_dd_pc5_p_eld).T, fmt = ".", color = "Orchid", label = "GP Detrending (Evans LD)")
plt.errorbar(np.array(bins_7_dd_lin_pc2_p_eld)/10000, rp_7_dd_lin_pc2_p_eld, yerr = np.array(rp_e_7_dd_lin_pc2_p_eld).T, fmt = ".", color = "darkorange", label = "Linear Detrending (Evans LD)")
plt.errorbar(evans[:,0], evans[:,2], yerr = np.array(list(zip(evans[:,3], evans[:,4]))).T, fmt = ".", color = "darkseagreen", label = "Evans et al. 2018")
plt.legend()
plt.ylabel(r"R$_p$/R$_s$")
plt.xlabel(r"Wavelength ($\mu$m)")
plt.title("WASP-121 b", fontsize=20)
#plt.savefig("w121b_spectrum.png", facecolor = "white", dpi = 300, bbox_inches="tight")
```
```python
# there we go, looks pretty similar now
plt.figure(figsize = (10,7))
#plt.errorbar(np.array(bins_7_dd_pc5_p)/10000, rp_7_dd_pc5_p, yerr = np.array(rp_e_7_dd_pc5_p).T, fmt = ".", color = "tomato", label = "GP Detrending")
#plt.errorbar(np.array(bins_7_dd_lin_pc2_p)/10000, rp_7_dd_lin_pc2_p, yerr = np.array(rp_e_7_dd_lin_pc2_p).T, fmt = ".", color = "cornflowerblue", label = "Linear Detrending")
plt.errorbar(np.array(bins_7_dd_pc5_p_eld)/10000, rp_7_dd_pc5_p_eld-np.nanmedian(rp_7_dd_pc5_p_eld), yerr = np.array(rp_e_7_dd_pc5_p_eld).T, fmt = ".", color = "Orchid", label = "GP Detrending (Evans LD)")
plt.errorbar(np.array(bins_7_dd_lin_pc2_p_eld)/10000, rp_7_dd_lin_pc2_p_eld-np.nanmedian(rp_7_dd_lin_pc2_p_eld), yerr = np.array(rp_e_7_dd_lin_pc2_p_eld).T, fmt = ".", color = "darkorange", label = "Linear Detrending (Evans LD)")
plt.errorbar(evans[:,0], evans[:,2] - np.nanmedian(evans[:,2]), yerr = np.array(list(zip(evans[:,3], evans[:,4]))).T, fmt = ".", color = "darkseagreen", label = "Evans et al. 2018")
plt.legend()
plt.ylabel(r"R$_p$/R$_s$ - Average R$_p$/R$_s$")
plt.xlabel(r"Wavelength ($\mu$m)")
plt.xlim(0.52, 0.94)
plt.title("WASP-121 b", fontsize=20)
#plt.savefig("w121b_spectrum_offset.png", facecolor = "white", dpi = 300, bbox_inches="tight")
```
```python
# testing single 430 transit fits to estimate errors
bins_dd_lin_pc2_single_1, rp_dd_lin_pc2_single_1, rp_e_dd_lin_pc2_single_1, fits_lin_pc2_single_1 = spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use2_430,\
"wasp-121_g430_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_430_spec_d_dynesty_norm_fixed_linear_pc2_single_1",\
mode = 'HST_STIS_G430L', sampler = "dynamic_dynesty", plot = True,\
savefig = False, fig_name = "w121b_430_linear_slc_single_1.png")
```
```python
bins_dd_lin_pc2_single_2, rp_dd_lin_pc2_single_2, rp_e_dd_lin_pc2_single_2, fits_lin_pc2_single_2 = spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430_2, spectra_430_2, norm_pc_use2_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_430_spec_d_dynesty_norm_fixed_linear_pc2_single_2",\
mode = 'HST_STIS_G430L', sampler = "dynamic_dynesty", plot = True,\
savefig = False, fig_name = "w121b_430_linear_slc_single_2.png")
```
```python
bins_dd_pc5_single_1, rp_dd_pc5_single_1, rp_e_dd_pc5_single_1, fits_pc5_single_1 = spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use5_430,\
"wasp-121_g430_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_430_spec_d_dynesty_norm_fixed_pc5_single_1",\
mode = 'HST_STIS_G430L', sampler = "dynamic_dynesty", plot = True,\
savefig = False, fig_name = "w121b_430_gp_slc_single_1.png")
```
```python
bins_dd_pc5_single_2, rp_dd_pc5_single_2, rp_e_dd_pc5_single_2, fits_pc5_single_2 = spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430_2, spectra_430_2, norm_pc_use5_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_430_spec_d_dynesty_norm_fixed_pc5_single_2",\
mode = 'HST_STIS_G430L', sampler = "dynamic_dynesty", plot = True,\
savefig = False, fig_name = "w121b_430_gp_slc_single_2.png")
```
```python
plt.figure(figsize = (10,7))
plt.errorbar(bins_dd_pc5_single_1, rp_dd_pc5_single_1, yerr = np.array(rp_e_dd_pc5_single_1).T, fmt = ".", label = "Transit 1 (GP)")
plt.errorbar(bins_dd_lin_pc2_single_1, rp_dd_lin_pc2_single_1, yerr = np.array(rp_e_dd_lin_pc2_single_1).T, fmt = ".", label = "Transit 1 (Linear)")
plt.errorbar(bins_dd_pc5_single_2, rp_dd_pc5_single_2, yerr = np.array(rp_e_dd_pc5_single_2).T, fmt = ".", label = "Transit 2 (GP)")
plt.errorbar(bins_dd_lin_pc2_single_2, rp_dd_lin_pc2_single_2, yerr = np.array(rp_e_dd_lin_pc2_single_2).T, fmt = ".", label = "Transit 2 (Linear)")
plt.legend()
plt.ylabel(r"R$_p$/R$_s$")
plt.xlabel(r"Wavelength ($\mu$m)")
plt.show()
```
```python
# calculate differences
norm_gp_v1 = rp_dd_pc5_single_1 - np.mean(rp_dd_pc5_single_1)
norm_gp_v2 = rp_dd_pc5_single_2 - np.mean(rp_dd_pc5_single_2)
norm_lin_v1 = rp_dd_lin_pc2_single_1 - np.mean(rp_dd_lin_pc2_single_1)
norm_lin_v2 = rp_dd_lin_pc2_single_2 - np.mean(rp_dd_lin_pc2_single_2)
noise_v1 = norm_gp_v1 - norm_gp_v2
noise_v2 = norm_lin_v1 - norm_lin_v2
```
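The differencing above uses the transit-to-transit scatter as an empirical noise estimate. A minimal sketch of the same idea on made-up numbers (if the quoted per-transit errors are realistic, the scatter of the difference should be about sqrt(2) times the single-transit error):

```python
import numpy as np

# hypothetical per-bin Rp/Rs from two independent transits (illustrative only)
rp_t1 = np.array([0.1215, 0.1230, 0.1221, 0.1208, 0.1219])
rp_t2 = np.array([0.1220, 0.1224, 0.1218, 0.1212, 0.1216])

# remove the common (wavelength-independent) level before differencing
diff = (rp_t1 - np.mean(rp_t1)) - (rp_t2 - np.mean(rp_t2))

# implied single-transit scatter, assuming the two transits are independent
empirical_sigma = np.std(diff) / np.sqrt(2)
print(empirical_sigma)
```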
```python
plt.figure(figsize = (10,7))
plt.errorbar(np.array(bins_dd_pc5_single_1)/10000, noise_v1, yerr = np.array(rp_e_joint_dd_pc5_p).T, fmt = ".", label = "Transit 1 - Transit 2 (GP, Joint Fit Error)", color = "tomato", alpha = 0.8)
plt.errorbar(np.array(bins_dd_lin_pc2_single_1)/10000, noise_v2, yerr = np.array(rp_e_joint_dd_lin_pc2_p).T, fmt = ".", label = "Transit 1 - Transit 2 (Linear, Joint Fit Error)", color = "cornflowerblue", alpha = 0.8)
#plt.errorbar(np.array(bins_dd_pc5_single_2)/10000, noise_v1, yerr = np.array(rp_e_dd_pc5_single_2).T, fmt = ".", label = "Transit 1 - Transit 2 (GP, Error Transit 2)", color = "orchid", alpha = 0.8)
#plt.errorbar(np.array(bins_dd_lin_pc2_single_2)/10000, noise_v2, yerr = np.array(rp_e_dd_lin_pc2_single_2).T, fmt = ".", label = "Transit 1 - Transit 2 (Linear, Error Transit 2)", color = "darkseagreen", alpha = 0.8)
plt.hlines(0, 0.31, 0.575, linestyle = "--", color = "darkorange", alpha = 0.8)
plt.xlim(0.31, 0.575)
plt.legend()
plt.ylabel(r"R$_p$/R$_s$")
plt.xlabel(r"Wavelength ($\mu$m)")
#plt.savefig("w121b_errortest.png", dpi = 300, facecolor = "white", bbox_inches="tight")
```
```python
# testing the 6.5 pixel aperture
spectra_430 = []
for i in range(len(transit_430_cleaned)):
spec = spectral_extraction(transit_430_cleaned[i], trace_430_fit[i], method = "optimal", gain = 4.0159998, ron = 8.2327986, polynomial_order = 3, nsigma = 12, aperture_radius = 6.5)
spectra_430.append(spec)
spectra_430_2 = []
for i in range(len(transit_430_2_cleaned)):
spec = spectral_extraction(transit_430_2_cleaned[i], trace_430_2_fit[i], method = "optimal", gain = 4.0159998, ron = 8.2327986, polynomial_order = 3, nsigma = 12, aperture_radius = 6.5)
spectra_430_2.append(spec)
spectra_750 = []
for i in range(len(transit_750_cleaned)):
spec = spectral_extraction(transit_750_cleaned[i], trace_750_fit[i], method = "optimal", gain = 4.0159998, ron = 8.2327986, polynomial_order = 3, nsigma = 12, aperture_radius = 6.5)
spectra_750.append(spec)
```
```python
bins_7_dd_pc5_p_ap, rp_7_dd_pc5_p_ap, rp_e_7_dd_pc5_p_ap, fits_7_dd_pc5_p_ap = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use5_750,\
"wasp-121_g750_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_750_dd_pc5_phase_aper",\
mode = 'HST_STIS_G750L', sampler = "dynamic_dynesty", plot = True,\
savefig = False, fig_name = "w121b_750_gp_slc_aper.png")
```
```python
bins_7_dd_lin_pc2_p_ap, rp_7_dd_lin_pc2_p_ap, rp_e_7_dd_lin_pc2_p_ap, fits_7_lin_dd_pc2_p_ap = spectroscopic_lightcurve_fit(params_750_spec, order_750["WAVELENGTH"],\
times_start_750, spectra_750, norm_pc_use2_750,\
"wasp-121_g750_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_750_dd_pc2_linear_phase_aper",\
mode = 'HST_STIS_G750L', sampler = "dynamic_dynesty", plot = True,\
savefig = False, fig_name = "w121b_750_linear_slc_aper.png")
```
```python
bins_joint_dd_lin_pc2_p_ap, rp_joint_dd_lin_pc2_p_ap, rp_e_joint_dd_lin_pc2_p_ap, fits_lin_pc2_p_ap = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use2_430,\
times_start_430_2, spectra_430_2, norm_pc_use2_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "linear", juliet_name = "wasp-121_430_spec_d_dynesty_norm_fixed_linear_pc2_phase_aper",\
mode = 'HST_STIS_G430L', sampler = "dynamic_dynesty", plot = True,\
savefig = False, fig_name = "w121b_430_linear_slc_aper.png")
```
```python
bins_joint_dd_pc5_p_ap, rp_joint_dd_pc5_p_ap, rp_e_joint_dd_pc5_p_ap, fits_pc5_p_ap = joint_spectroscopic_lightcurve_fit(params_430_spec, order_430["WAVELENGTH"],\
times_start_430, spectra_430, norm_pc_use5_430,\
times_start_430_2, spectra_430_2, norm_pc_use5_430_2,\
"wasp-121_g430_bins", sld,\
sys_method = "gp", juliet_name = "wasp-121_430_spec_d_dynesty_norm_pc5_fixed2_phase_aper",\
mode = 'HST_STIS_G430L', sampler = "dynamic_dynesty", plot = True,\
savefig = False, fig_name = "w121b_430_gp_slc_aper.png")
```
```python
plt.figure(figsize = (10,7))
plt.errorbar(np.array(bins_joint_dd_pc5_p)/10000, rp_joint_dd_pc5_p, yerr = np.array(rp_e_joint_dd_pc5_p).T, fmt = ".", color = "tomato")
plt.errorbar(np.array(bins_7_dd_pc5_p)/10000, rp_7_dd_pc5_p, yerr = np.array(rp_e_7_dd_pc5_p).T, fmt = ".", color = "tomato", label = r"GP Detrending. aper = 15, $\bar{e}$ = 0.0009")
plt.errorbar(np.array(bins_joint_dd_lin_pc2_p)/10000, rp_joint_dd_lin_pc2_p, yerr = np.array(rp_e_joint_dd_lin_pc2_p).T, fmt = ".", color = "cornflowerblue")
plt.errorbar(np.array(bins_7_dd_lin_pc2_p)/10000, rp_7_dd_lin_pc2_p, yerr = np.array(rp_e_7_dd_lin_pc2_p).T, fmt = ".", color = "cornflowerblue", label = r"Linear Detrending, aper = 15, $\bar{e}$ = 0.0009")
plt.errorbar(np.array(bins_joint_dd_pc5_p_ap)/10000, rp_joint_dd_pc5_p_ap, yerr = np.array(rp_e_joint_dd_pc5_p_ap).T, fmt = ".", color = "darkseagreen")
plt.errorbar(np.array(bins_7_dd_pc5_p_ap)/10000, rp_7_dd_pc5_p_ap, yerr = np.array(rp_e_7_dd_pc5_p_ap).T, fmt = ".", color = "darkseagreen", label = r"GP Detrending, aper = 6.5, $\bar{e}$ = 0.0016")
plt.errorbar(np.array(bins_joint_dd_lin_pc2_p_ap)/10000, rp_joint_dd_lin_pc2_p_ap, yerr = np.array(rp_e_joint_dd_lin_pc2_p_ap).T, fmt = ".", color = "orchid")
plt.errorbar(np.array(bins_7_dd_lin_pc2_p_ap)/10000, rp_7_dd_lin_pc2_p_ap, yerr = np.array(rp_e_7_dd_lin_pc2_p_ap).T, fmt = ".", color = "orchid", label = r"Linear Detrending, aper = 6.5, $\bar{e}$ = 0.0018")
plt.legend(fontsize = 12)
plt.ylabel(r"R$_p$/R$_s$")
plt.xlabel(r"Wavelength ($\mu$m)")
plt.title("WASP-121 b", fontsize=20)
#plt.savefig("w121b_spectrum_aper.png", facecolor = "white", dpi = 300, bbox_inches="tight")
```
|
|
{
"filename": "conf.py",
"repo_name": "desihub/redrock",
"repo_path": "redrock_extracted/redrock-main/doc/conf.py",
"type": "Python"
}
|
# -*- coding: utf-8 -*-
#
# redrock documentation build configuration file, created by
# sphinx-quickstart on Tue Dec 9 10:43:33 2014.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
from __future__ import absolute_import, division, print_function, unicode_literals
import sys
import os
import os.path
from importlib import import_module
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('../py'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
try:
import sphinx.ext.napoleon
napoleon_extension = 'sphinx.ext.napoleon'
except ImportError:
try:
import sphinxcontrib.napoleon
napoleon_extension = 'sphinxcontrib.napoleon'
needs_sphinx = '1.2'
except ImportError:
needs_sphinx = '1.3'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.intersphinx',
'sphinx.ext.todo',
'sphinx.ext.mathjax',
'sphinx.ext.viewcode',
'sphinx.ext.napoleon'
]
# Configuration for intersphinx, copied from astropy.
intersphinx_mapping = {
'python': ('http://docs.python.org/', None),
# 'python3': ('http://docs.python.org/3/', path.abspath(path.join(path.dirname(__file__), 'local/python3links.inv'))),
'numpy': ('http://docs.scipy.org/doc/numpy/', None),
'scipy': ('http://docs.scipy.org/doc/scipy/reference/', None),
'matplotlib': ('http://matplotlib.org/', None),
'astropy': ('http://docs.astropy.org/en/stable/', None),
'h5py': ('http://docs.h5py.org/en/latest/', None)
}
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'redrock'
copyright = u'2014-2018, DESI Collaboration'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
__import__(project)
package = sys.modules[project]
# The short X.Y version.
version = package.__version__.split('-', 1)[0]
# The full version, including alpha/beta/rc tags.
release = package.__version__
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
keep_warnings = True
# Include functions that begin with an underscore, e.g. _private().
napoleon_include_private_with_doc = True
# This value contains a list of modules to be mocked up. This is useful when
# some external dependencies are not met at build time and break the
# building process.
autodoc_mock_imports = []
for missing in ('astropy', 'desispec', 'desiutil', 'fitsio', 'numba',
'numpy', 'scipy'):
try:
foo = import_module(missing)
except ImportError:
autodoc_mock_imports.append(missing)
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#html_theme = 'default'
#html_theme = 'haiku'
try:
import sphinx_rtd_theme
html_theme = 'sphinx_rtd_theme'
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
except ImportError:
pass
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
#html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'redrockdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'redrock.tex', u'redrock Documentation',
u'DESI', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'redrock', u'redrock Documentation',
[u'DESI'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'redrock', u'redrock Documentation',
u'DESI', 'redrock', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
|
desihubREPO_NAMEredrockPATH_START.@redrock_extracted@redrock-main@doc@conf.py@.PATH_END.py
|
{
"filename": "copy_script.py",
"repo_name": "ThibeauWouters/TurboPE-BNS",
"repo_path": "TurboPE-BNS_extracted/TurboPE-BNS-main/JS_estimate/GW170817_TaylorF2/varied_runs/outdir/68916/copy_script.py",
"type": "Python"
}
|
import psutil
p = psutil.Process()
p.cpu_affinity([0])
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "3"
os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = "0.10"
from jimgw.jim import Jim
from jimgw.single_event.detector import H1, L1, V1
from jimgw.single_event.likelihood import HeterodynedTransientLikelihoodFD
from jimgw.single_event.waveform import RippleTaylorF2
from jimgw.prior import Uniform, PowerLaw, Composite
import jax.numpy as jnp
import jax
import time
import numpy as np
jax.config.update("jax_enable_x64", True)
import shutil
import matplotlib.pyplot as plt
import optax
import sys
sys.path.append("../")
import utils_plotting as utils
print(jax.devices())
################
### PREAMBLE ###
################
default_corner_kwargs = dict(bins=40,
smooth=1.,
show_titles=False,
label_kwargs=dict(fontsize=16),
title_kwargs=dict(fontsize=16),
color="blue",
# quantiles=[],
# levels=[0.9],
plot_density=True,
plot_datapoints=False,
fill_contours=True,
max_n_ticks=4,
min_n_ticks=3,
save=False)
params = {
"axes.labelsize": 30,
"axes.titlesize": 30,
"text.usetex": True,
"font.family": "serif",
}
plt.rcParams.update(params)
labels = [r'$M_c/M_\odot$', r'$q$', r'$\chi_1$', r'$\chi_2$', r'$\Lambda$', r'$\delta\Lambda$', r'$d_{\rm{L}}/{\rm Mpc}$',
r'$t_c$', r'$\phi_c$', r'$\iota$', r'$\psi$', r'$\alpha$', r'$\delta$']
naming = ['M_c', 'q', 's1_z', 's2_z', 'lambda_1', 'lambda_2', 'd_L', 't_c', 'phase_c', 'cos_iota', 'psi', 'ra', 'sin_dec']
data_path = "/home/thibeau.wouters/gw-datasets/GW170817/" # on CIT
start_runtime = time.time()
############
### BODY ###
############
### Data definitions
total_time_start = time.time()
gps = 1187008882.43
trigger_time = gps
fmin = 20
fmax = 2048
minimum_frequency = fmin
maximum_frequency = fmax
T = 128
duration = T
post_trigger_duration = 2
epoch = duration - post_trigger_duration
f_ref = fmin
tukey_alpha = 2 / (T / 2)
### Getting detector data
# This is our preprocessed data obtained from the TXT files at the GWOSC website (the GWF gave me NaNs?)
H1.frequencies = np.genfromtxt(f'{data_path}H1_freq.txt')
H1_data_re, H1_data_im = np.genfromtxt(f'{data_path}H1_data_re.txt'), np.genfromtxt(f'{data_path}H1_data_im.txt')
H1.data = H1_data_re + 1j * H1_data_im
L1.frequencies = np.genfromtxt(f'{data_path}L1_freq.txt')
L1_data_re, L1_data_im = np.genfromtxt(f'{data_path}L1_data_re.txt'), np.genfromtxt(f'{data_path}L1_data_im.txt')
L1.data = L1_data_re + 1j * L1_data_im
V1.frequencies = np.genfromtxt(f'{data_path}V1_freq.txt')
V1_data_re, V1_data_im = np.genfromtxt(f'{data_path}V1_data_re.txt'), np.genfromtxt(f'{data_path}V1_data_im.txt')
V1.data = V1_data_re + 1j * V1_data_im
# Load the PSD
H1.psd = H1.load_psd(H1.frequencies, psd_file = data_path + "GW170817-IMRD_data0_1187008882-43_generation_data_dump.pickle_H1_psd.txt")
L1.psd = L1.load_psd(L1.frequencies, psd_file = data_path + "GW170817-IMRD_data0_1187008882-43_generation_data_dump.pickle_L1_psd.txt")
V1.psd = V1.load_psd(V1.frequencies, psd_file = data_path + "GW170817-IMRD_data0_1187008882-43_generation_data_dump.pickle_V1_psd.txt")
### Define priors
# Internal parameters
Mc_prior = Uniform(1.18, 1.21, naming=["M_c"])
q_prior = Uniform(
0.125,
1.0,
naming=["q"],
transforms={"q": ("eta", lambda params: params["q"] / (1 + params["q"]) ** 2)},
)
s1z_prior = Uniform(-0.05, 0.05, naming=["s1_z"])
s2z_prior = Uniform(-0.05, 0.05, naming=["s2_z"])
lambda_1_prior = Uniform(0.0, 5000.0, naming=["lambda_1"])
lambda_2_prior = Uniform(0.0, 5000.0, naming=["lambda_2"])
dL_prior = Uniform(1.0, 75.0, naming=["d_L"])
# dL_prior = PowerLaw(1.0, 75.0, 2.0, naming=["d_L"])
t_c_prior = Uniform(-0.1, 0.1, naming=["t_c"])
phase_c_prior = Uniform(0.0, 2 * jnp.pi, naming=["phase_c"])
cos_iota_prior = Uniform(
-1.0,
1.0,
naming=["cos_iota"],
transforms={
"cos_iota": (
"iota",
lambda params: jnp.arccos(
jnp.arcsin(jnp.sin(params["cos_iota"] / 2 * jnp.pi)) * 2 / jnp.pi
),
)
},
)
psi_prior = Uniform(0.0, jnp.pi, naming=["psi"])
ra_prior = Uniform(0.0, 2 * jnp.pi, naming=["ra"])
sin_dec_prior = Uniform(
-1.0,
1.0,
naming=["sin_dec"],
transforms={
"sin_dec": (
"dec",
lambda params: jnp.arcsin(
jnp.arcsin(jnp.sin(params["sin_dec"] / 2 * jnp.pi)) * 2 / jnp.pi
),
)
},
)
prior_list = [
Mc_prior,
q_prior,
s1z_prior,
s2z_prior,
lambda_1_prior,
lambda_2_prior,
dL_prior,
t_c_prior,
phase_c_prior,
cos_iota_prior,
psi_prior,
ra_prior,
sin_dec_prior,
]
prior = Composite(prior_list)
# The following only works if every prior has xmin and xmax property, which is OK for Uniform and Powerlaw
bounds = jnp.array([[p.xmin, p.xmax] for p in prior.priors])
### Create likelihood object
ref_params = {
'M_c': 1.19793583,
'eta': 0.24794374,
's1_z': 0.00220637,
's2_z': 0.05,
'lambda_1': 105.12916663,
'lambda_2': 0.0,
'd_L': 45.41592353,
't_c': 0.00220588,
'phase_c': 5.76822606,
'iota': 2.46158044,
'psi': 2.09118099,
'ra': 5.03335133,
'dec': 0.01679998
}
n_bins = 100
likelihood = HeterodynedTransientLikelihoodFD([H1, L1, V1], prior=prior, bounds=bounds, waveform=RippleTaylorF2(f_ref=f_ref), trigger_time=gps, duration=T, n_bins=n_bins, ref_params=ref_params)
print("Running with n_bins = ", n_bins)
# Local sampler args
eps = 1e-3
n_dim = 13
mass_matrix = jnp.eye(n_dim)
mass_matrix = mass_matrix.at[0,0].set(1e-5)
mass_matrix = mass_matrix.at[1,1].set(1e-4)
mass_matrix = mass_matrix.at[2,2].set(1e-3)
mass_matrix = mass_matrix.at[3,3].set(1e-3)
mass_matrix = mass_matrix.at[7,7].set(1e-5)
mass_matrix = mass_matrix.at[11,11].set(1e-2)
mass_matrix = mass_matrix.at[12,12].set(1e-2)
local_sampler_arg = {"step_size": mass_matrix * eps}
# Build the learning rate scheduler
n_loop_training = 400
n_epochs = 50
total_epochs = n_epochs * n_loop_training
start = int(total_epochs / 10)
start_lr = 1e-3
end_lr = 1e-5
power = 4.0
schedule_fn = optax.polynomial_schedule(
start_lr, end_lr, power, total_epochs-start, transition_begin=start)
scheduler_str = f"polynomial_schedule({start_lr}, {end_lr}, {power}, {total_epochs-start}, {start})"
### Generate a random number here. That will be used to get a unique outdir and also get a unique random seed for sampling
dir_exists = False
while not dir_exists:
random_number = np.random.randint(low = 0, high = 99999)
print(f"Generated random number for sampling: {random_number}")
outdir_name = f"./outdir/{random_number}/"
if not os.path.exists(outdir_name):
print("This is a new directory so we will proceed")
os.makedirs(outdir_name)
dir_exists = True
# Create jim object
jim = Jim(
likelihood,
prior,
n_loop_training=400,
n_loop_production=20,
n_local_steps=10,
n_global_steps=300,
n_chains=1000,
n_epochs=100,
learning_rate=schedule_fn,
max_samples=50000,
momentum=0.9,
batch_size=50000,
use_global=True,
keep_quantile=0.0,
train_thinning=10,
output_thinning=30,
local_sampler_arg=local_sampler_arg,
stopping_criterion_global_acc = 0.20,
outdir_name=outdir_name
)
### Heavy computation begins
jim.sample(jax.random.PRNGKey(random_number))
### Heavy computation ends
# === Show results, save output ===
# Print a summary to screen:
jim.print_summary()
outdir = outdir_name
# Save and plot the results of the run
# - training phase
name = outdir + f'results_training.npz'
print(f"Saving samples to {name}")
state = jim.Sampler.get_sampler_state(training=True)
chains, log_prob, local_accs, global_accs, loss_vals = state["chains"], state[
"log_prob"], state["local_accs"], state["global_accs"], state["loss_vals"]
local_accs = jnp.mean(local_accs, axis=0)
global_accs = jnp.mean(global_accs, axis=0)
np.savez(name, log_prob=log_prob, local_accs=local_accs,
global_accs=global_accs, loss_vals=loss_vals)
utils.plot_accs(local_accs, "Local accs (training)",
"local_accs_training", outdir)
utils.plot_accs(global_accs, "Global accs (training)",
"global_accs_training", outdir)
utils.plot_loss_vals(loss_vals, "Loss", "loss_vals", outdir)
utils.plot_log_prob(log_prob, "Log probability (training)",
"log_prob_training", outdir)
# - production phase
name = outdir + f'results_production.npz'
state = jim.Sampler.get_sampler_state(training=False)
chains, log_prob, local_accs, global_accs = state["chains"], state[
"log_prob"], state["local_accs"], state["global_accs"]
local_accs = jnp.mean(local_accs, axis=0)
global_accs = jnp.mean(global_accs, axis=0)
np.savez(name, chains=chains, log_prob=log_prob,
local_accs=local_accs, global_accs=global_accs)
utils.plot_accs(local_accs, "Local accs (production)",
"local_accs_production", outdir)
utils.plot_accs(global_accs, "Global accs (production)",
"global_accs_production", outdir)
utils.plot_log_prob(log_prob, "Log probability (production)",
"log_prob_production", outdir)
# Plot the chains as corner plots
utils.plot_chains(chains, "chains_production", outdir, truths=None)
# Save the NF and show a plot of samples from the flow
print("Saving the NF")
jim.Sampler.save_flow(outdir + "nf_model")
name = outdir + 'results_NF.npz'
chains = jim.Sampler.sample_flow(5_000)
np.savez(name, chains=chains)
# Final steps
# Finally, copy over this script to the outdir for reproducibility
shutil.copy2(__file__, outdir + "copy_script.py")
print("Saving the jim hyperparameters")
# Change scheduler from function to a string representation
try:
jim.hyperparameters["learning_rate"] = scheduler_str
jim.Sampler.hyperparameters["learning_rate"] = scheduler_str
jim.save_hyperparameters(outdir=outdir)
except Exception as e:
# Sometimes, something breaks, so avoid crashing the whole thing
print(f"Could not save hyperparameters in script: {e}")
print("Finished successfully")
end_runtime = time.time()
runtime = end_runtime - start_runtime
print(f"Time taken: {runtime} seconds ({(runtime)/60} minutes)")
print(f"Saving runtime")
with open(outdir + 'runtime.txt', 'w') as file:
file.write(str(runtime))
|
ThibeauWoutersREPO_NAMETurboPE-BNSPATH_START.@TurboPE-BNS_extracted@TurboPE-BNS-main@JS_estimate@GW170817_TaylorF2@varied_runs@outdir@68916@copy_script.py@.PATH_END.py
|
{
"filename": "_x.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/scattergl/marker/colorbar/_x.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class XValidator(_plotly_utils.basevalidators.NumberValidator):
def __init__(
self, plotly_name="x", parent_name="scattergl.marker.colorbar", **kwargs
):
super(XValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "calc"),
**kwargs,
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@scattergl@marker@colorbar@_x.py@.PATH_END.py
|
{
"filename": "dataset_utils.py",
"repo_name": "pyDANDIA/pyDANDIA",
"repo_path": "pyDANDIA_extracted/pyDANDIA-main/pyDANDIA/dataset_utils.py",
"type": "Python"
}
|
# -*- coding: utf-8 -*-
"""
Created on Fri Jun 7 14:39:29 2019
@author: rstreet
"""
import pipeline_setup
from os import path
class DataCollection:
"""Class describing a collection of datasets for a single field pointing"""
def __init__(self):
self.primary_ref = None
self.red_dir = None
self.data_list = []
def get_datasets_for_reduction(self,datasets_file,log=None):
"""Method to read a list of the datasets to be reduced.
The datasets file should be an ASCII text file with the following structure:
RED_DIR /path/to/top/level/reduction/directory
DATASET Name-of-dataset-sub-directory
DATASET Name-of-dataset-sub-directory
DATASET Name-of-dataset-sub-directory
DATASET Name-of-dataset-sub-directory
PRIMARY_REF Name-of-dataset-sub-directory (to be used as primary reference)
Example file contents:
RED_DIR /Users/rstreet/ROMEREA/2018/full_frame_test
DATASET ROME-FIELD-16_lsc-doma-1m0-05-fl15_gp
DATASET ROME-FIELD-16_lsc-doma-1m0-05-fl15_rp
PRIMARY_REF ROME-FIELD-16_lsc-doma-1m0-05-fl15_ip
"""
if path.isfile(datasets_file):
if log is not None:
log.info('Found a reduce_datasets instruction file')
file_lines = open(datasets_file).readlines()
if log is not None:
log.info('Going to reduce the following datasets:')
for line in file_lines:
if len(line.replace('\n','')) > 0:
entries = line.replace('\n','').split()
if 'RED_DIR' in entries[0]:
self.red_dir = entries[1]
elif 'DATASET' in entries[0]:
self.data_list.append(entries[1])
elif 'PRIMARY' in entries[0]:
self.data_list.append(entries[1])
self.primary_ref = entries[1]
if log is not None:
log.info(self.data_list[-1])
else:
if log is not None:
log.info('No instruction file found, halting.')
def summary(self):
output = 'Reduction location: '+repr(self.red_dir)+'\n'
output += ' Datasets:\n'
for d in self.data_list:
output += repr(d)+'\n'
output += 'Primary reference: '+repr(self.primary_ref)
return output
def build_pipeline_setup(data):
"""Function to configure the pipeline setup object"""
params = { 'base_dir': data.red_dir,
'log_dir': data.red_dir,
'pipeline_config_dir': path.join(data.red_dir,'config'),
}
setup = pipeline_setup.pipeline_setup(params)
return setup
|
pyDANDIAREPO_NAMEpyDANDIAPATH_START.@pyDANDIA_extracted@pyDANDIA-main@pyDANDIA@dataset_utils.py@.PATH_END.py
|
{
"filename": "test_variable.py",
"repo_name": "PrefectHQ/prefect",
"repo_path": "prefect_extracted/prefect-main/tests/cli/test_variable.py",
"type": "Python"
}
|
import sys
import pytest
from sqlalchemy.ext.asyncio import AsyncSession
from typer import Exit
from prefect.server.models.variables import create_variable
from prefect.server.schemas.actions import VariableCreate
from prefect.testing.cli import invoke_and_assert
@pytest.fixture(autouse=True)
def interactive_console(monkeypatch):
monkeypatch.setattr("prefect.cli.variable.is_interactive", lambda: True)
# `readchar` does not like the fake stdin provided by typer isolation so we provide
# a version that does not require a fd to be attached
def readchar():
sys.stdin.flush()
position = sys.stdin.tell()
if not sys.stdin.read():
print("TEST ERROR: CLI is attempting to read input but stdin is empty.")
raise Exit(-2)
else:
sys.stdin.seek(position)
return sys.stdin.read(1)
monkeypatch.setattr("readchar._posix_read.readchar", readchar)
@pytest.fixture
async def variable(
session: AsyncSession,
):
model = await create_variable(
session,
VariableCreate(name="my_variable", value="my-value", tags=["123", "456"]),
)
await session.commit()
return model
@pytest.fixture
async def variables(
session: AsyncSession,
):
variables = [
VariableCreate(name="variable1", value="value1", tags=["tag1"]),
VariableCreate(name="variable12", value="value12", tags=["tag2"]),
VariableCreate(name="variable2", value="value2", tags=["tag1"]),
VariableCreate(name="variable21", value="value21", tags=["tag2"]),
]
models = []
for variable in variables:
model = await create_variable(session, variable)
models.append(model)
await session.commit()
return models
def test_list_variables_none_exist():
invoke_and_assert(
["variable", "ls"],
expected_output_contains="""
┏━━━━━━┳━━━━━━━┳━━━━━━━━━┳━━━━━━━━━┓
┃ Name ┃ Value ┃ Created ┃ Updated ┃
┡━━━━━━╇━━━━━━━╇━━━━━━━━━╇━━━━━━━━━┩
└──────┴───────┴─────────┴─────────┘
""",
expected_code=0,
)
def test_list_variables_with_limit(variables):
# variables are alphabetical by name
name = sorted([variable.name for variable in variables])[0]
invoke_and_assert(
["variable", "ls", "--limit", "1"],
expected_output_contains=name,
expected_code=0,
)
def test_list_variables(variables):
names = (variable.name for variable in variables)
invoke_and_assert(
["variable", "ls"],
expected_output_contains=(names),
expected_code=0,
)
def test_inspect_variable_doesnt_exist():
invoke_and_assert(
["variable", "inspect", "doesnt_exist"],
expected_output_contains="Variable 'doesnt_exist' not found",
expected_code=1,
)
def test_inspect_variable(variable):
invoke_and_assert(
["variable", "inspect", variable.name],
expected_output_contains=(variable.name, variable.value, str(variable.id)),
expected_code=0,
)
def test_delete_variable_doesnt_exist():
invoke_and_assert(
["variable", "delete", "doesnt_exist"],
user_input="y",
expected_output_contains="Variable 'doesnt_exist' not found",
expected_code=1,
)
def test_get_variable(variable):
invoke_and_assert(
["variable", "get", variable.name],
expected_output_contains=variable.value,
expected_code=0,
)
def test_get_variable_doesnt_exist(variable):
invoke_and_assert(
["variable", "get", "doesnt_exist"],
expected_output_contains="Variable 'doesnt_exist' not found",
expected_code=1,
)
def test_set_variable():
invoke_and_assert(
[
"variable",
"set",
"my_variable",
"my-value",
"--tag",
"tag1",
"--tag",
"tag2",
],
expected_output_contains="Set variable 'my_variable'.",
expected_code=0,
)
invoke_and_assert(
["variable", "inspect", "my_variable"],
expected_output_contains=[
"name='my_variable'",
"value='my-value'",
"tags=['tag1', 'tag2']",
],
expected_code=0,
)
def test_set_existing_variable_without_overwrite(variable):
invoke_and_assert(
["variable", "set", variable.name, "new-value"],
expected_output_contains="already exists. Use `--overwrite` to update it.",
expected_code=1,
)
def test_set_overwrite_variable(variable):
invoke_and_assert(
["variable", "set", variable.name, "new-value", "--overwrite"],
expected_output_contains=f"Set variable {variable.name!r}",
expected_code=0,
)
invoke_and_assert(
["variable", "get", variable.name],
expected_output_contains="new-value",
expected_code=0,
)
def test_unset_variable_doesnt_exist():
invoke_and_assert(
["variable", "unset", "doesnt_exist"],
user_input="y",
expected_output_contains="Variable 'doesnt_exist' not found",
expected_code=1,
)
def test_unset_variable(variable):
invoke_and_assert(
["variable", "unset", variable.name],
user_input="y",
expected_output_contains=f"Unset variable {variable.name!r}.",
expected_code=0,
)
def test_unset_variable_without_confirmation_aborts(variable):
invoke_and_assert(
["variable", "unset", variable.name],
user_input="n",
expected_output_contains="Unset aborted.",
expected_code=1,
)
def test_delete_variable(variable):
invoke_and_assert(
["variable", "delete", variable.name],
user_input="y",
expected_output_contains=f"Unset variable {variable.name!r}.",
expected_code=0,
)
|
PrefectHQREPO_NAMEprefectPATH_START.@prefect_extracted@prefect-main@tests@cli@test_variable.py@.PATH_END.py
|
{
"filename": "README.md",
"repo_name": "facebookresearch/faiss",
"repo_path": "faiss_extracted/faiss-main/contrib/README.md",
"type": "Markdown"
}
|
# The contrib modules
The contrib directory contains helper modules for Faiss for various tasks.
## Code structure
The contrib directory gets compiled in the module faiss.contrib.
Note that although some of the modules may depend on additional modules (eg. GPU Faiss, pytorch, hdf5), they are not necessarily compiled in to avoid adding dependencies. It is the user's responsibility to provide them.
In contrib, we are progressively dropping python2 support.
## List of contrib modules
### rpc.py
A very simple Remote Procedure Call library, where function parameters and results are pickled, for use with client_server.py
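The pickled request/response cycle can be sketched in a few lines of pure Python. This is an illustrative toy, not the actual `rpc.py` wire protocol: the `HANDLERS` registry is a hypothetical stand-in for the methods of a remote Faiss index.

```python
import pickle

# Hypothetical handler registry standing in for a remote index's methods.
HANDLERS = {"add": lambda a, b: a + b}

def marshal_call(name, *args, **kwargs):
    # Client side: pickle the method name and its parameters into one payload.
    return pickle.dumps((name, args, kwargs))

def serve_call(payload):
    # Server side: unpickle the request, dispatch it, pickle the result back.
    name, args, kwargs = pickle.loads(payload)
    return pickle.dumps(HANDLERS[name](*args, **kwargs))

result = pickle.loads(serve_call(marshal_call("add", 2, 3)))  # -> 5
```

The real library sends these payloads over a socket; the marshalling idea is the same.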
### client_server.py
The server handles requests to a Faiss index. The client calls the remote index.
This is mainly to shard datasets over several machines, see [Distributed index](https://github.com/facebookresearch/faiss/wiki/Indexes-that-do-not-fit-in-RAM#distributed-index)
### ondisk.py
Encloses the main logic to merge indexes into an on-disk index.
See [On-disk storage](https://github.com/facebookresearch/faiss/wiki/Indexes-that-do-not-fit-in-RAM#on-disk-storage)
### exhaustive_search.py
Computes the ground-truth search results for a dataset that possibly does not fit in RAM. Uses GPU if available.
Tested in `tests/test_contrib.TestComputeGT`
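The core idea — stream the database in chunks and keep only the current top-k per query — can be sketched in pure Python. This is a conceptual illustration only; the real `exhaustive_search.py` works on numpy arrays and can offload distance computations to the GPU.

```python
import heapq

def ground_truth_knn(queries, database_chunks, k):
    """Brute-force k-NN over a database streamed chunk by chunk,
    so the full database never has to fit in memory at once."""
    # One heap of (-squared_distance, id) per query, holding the k best so far.
    heaps = [[] for _ in queries]
    base_id = 0
    for chunk in database_chunks:
        for j, x in enumerate(chunk):
            for qi, q in enumerate(queries):
                d = sum((a - b) ** 2 for a, b in zip(q, x))  # squared L2
                item = (-d, base_id + j)
                if len(heaps[qi]) < k:
                    heapq.heappush(heaps[qi], item)
                else:
                    heapq.heappushpop(heaps[qi], item)  # drop the worst neighbor
        base_id += len(chunk)
    # Return ids sorted by ascending distance for each query.
    return [[i for _, i in sorted(h, reverse=True)] for h in heaps]
```

For example, with query `[0, 0]` and database vectors `[1, 0]`, `[3, 0]`, `[0.5, 0]` split over two chunks, the two nearest ids come back as `[2, 0]`.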
### torch_utils.py
Interoperability functions for pytorch and Faiss: Importing this will allow pytorch Tensors (CPU or GPU) to be used as arguments to Faiss indexes and other functions. Torch GPU tensors can only be used with Faiss GPU indexes. If this is imported with a package that supports Faiss GPU, the necessary stream synchronization with the current pytorch stream will be automatically performed.
Numpy ndarrays can continue to be used in the Faiss python interface after importing this file. All arguments must be uniformly either numpy ndarrays or Torch tensors; no mixing is allowed.
Tested in `tests/test_contrib_torch.py` (CPU) and `gpu/test/test_contrib_torch_gpu.py` (GPU).
### inspect_tools.py
Functions to inspect C++ objects wrapped by SWIG. Most often this just means reading
fields and converting them to the proper python array.
### ivf_tools.py
A few functions to override the coarse quantizer in IVF, providing additional flexibility for assignment.
### datasets.py
(may require h5py)
Definition of how to access data for some standard datasets.
### factory_tools.py
Functions related to factory strings.
### evaluation.py
A few non-trivial evaluation functions for search results
### clustering.py
Contains:
- a Python implementation of kmeans, that can be used for special datatypes (eg. sparse matrices).
- a 2-level clustering routine and a function that can apply it to train an IndexIVF
### big_batch_search.py
Search IVF indexes with one centroid after another. Useful for large
databases that do not fit in RAM *and* a large number of queries.
|
facebookresearchREPO_NAMEfaissPATH_START.@faiss_extracted@faiss-main@contrib@README.md@.PATH_END.py
|
{
"filename": "mv-processing-methods.md",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/catboost/docs/en/_includes/work_src/reusage-missing-values/mv-processing-methods.md",
"type": "Markdown"
}
|
- "Forbidden" — Missing values are not supported, their presence is interpreted as an error.
- "Min" — Missing values are processed as the minimum value (less than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.
- "Max" — Missing values are processed as the maximum value (greater than all other values) for the feature. It is guaranteed that a split that separates missing values from all other values is considered when selecting trees.
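The documented behaviour can be sketched as ordering feature values with NaNs mapped to an extreme value. This is an illustrative sketch of the modes described above, not CatBoost internals:

```python
import math

def ordered_feature_values(values, nan_mode):
    """Sort feature values with NaNs placed according to the processing mode."""
    if nan_mode == "Forbidden":
        if any(math.isnan(v) for v in values):
            raise ValueError("missing values are not supported in Forbidden mode")
        key = lambda v: v
    elif nan_mode == "Min":
        key = lambda v: -math.inf if math.isnan(v) else v   # NaN sorts below everything
    elif nan_mode == "Max":
        key = lambda v: math.inf if math.isnan(v) else v    # NaN sorts above everything
    else:
        raise ValueError(nan_mode)
    return sorted(values, key=key)
```

Placing NaN at one extreme is what guarantees that a split separating missing values from all other values exists.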
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@catboost@docs@en@_includes@work_src@reusage-missing-values@mv-processing-methods.md@.PATH_END.py
|
{
"filename": "cldf.py",
"repo_name": "CosmoLike/cocoa",
"repo_path": "cocoa_extracted/cocoa-main/Cocoa/external_modules/code/planck/code/plc_3.0/plc-3.1/src/python/clik/cldf.py",
"type": "Python"
}
|
import os
import os.path as osp
import shutil as shu
try:
from astropy.io import fits as pf
except ImportError as e:
# try pyfits then
import pyfits as pf
import re
import numpy as nm
def pack256(*li):
rr=""
for l in li:
rr += l+'\0'*(256-len(l))
return rr
def is_cldf(name):
f = open(name)
try:
list(f.keys())
return True
except Exception as e:
return False
_protect_open = open
def open(name,mode="r"):
return File(name,mode)
_metadata = "_mdb"
class File(object):
def __init__(self,name,mode="r"):
self._mode = '+'
if mode=="w":
self._create(name)
return
if mode=="r" or mode=="r-":
self._name = name
self._mode="-"
return
if mode=="r+":
self._name=name
def _parsemetadata(self,path=""):
if not path:
path = self._name
f=_protect_open(osp.join(path,_metadata))
dct = {}
for l in f:
if not l.strip():
continue
id0 = l.find(" ")
key = l[:id0]
id1 = l[id0+1:].find(" ") + id0+1
typ = l[id0+1:id1]
data = l[id1+1:-1]
if typ== "int":
dct[key]=int(data)
continue
if typ == "float":
dct[key] = float(data)
continue
if typ == "str":
dct[key] = data
continue
f.close()
raise TypeError("unknown type '%s' for metadata '%s'"%(typ,key))
f.close()
return dct
def _writemetadata(self,dct,path=""):
if not path:
path = self._name
f=_protect_open(osp.join(path,_metadata),"w")
for k,v in list(dct.items()):
if type(v)==str:
typ="str"
modi = "%s"
elif type(v) in (bool,int,int,nm.int32,nm.int64):
typ = "int"
v = int(v)
modi = "%d"
elif type(v) in (float,nm.float32,nm.float64):
typ="float"
modi = "%.10g"
else:
raise TypeError("bad type %s"%type(v))
f.write(("%s %s "+modi+"\n")%(k,typ,v))
f.close()
def remove(self,name):
if osp.exists(name):
if osp.isdir(name):
shu.rmtree(name)
else:
os.remove(name)
else:
dct = self._parsemetadata(osp.split(name)[0])
if osp.split(name)[1] in list(dct.keys()):
del dct[osp.split(name)[1]]
self._writemetadata(dct,osp.split(name)[0])
def _create(self,name):
if osp.isdir(name):
shu.rmtree(name)
os.mkdir(name)
f=_protect_open(osp.join(name,_metadata),"w")
f.write("")
f.close()
self._name = name
def __contains__(self,key):
try:
self[key]
except Exception:
return False
return True
def __getitem__(self,key):
fkey = osp.join(self._name,key)
if fkey[-1]=='/':
fkey = fkey[:-1]
if osp.exists(fkey):
if osp.isdir(fkey):
return File(fkey,"r"+self._mode)
try:
return pf.open(fkey)[0].data
except Exception:
value = _protect_open(fkey).read()
if key+"__type__" in self and self[key+"__type__"] == "str_array":
rvalue = []
p0 = value.find("\n")
nv = int(value[:p0])
value = value[p0+1:]
for i in range(nv):
p1 = value.find("\n")
nc = int(value[:p1])
rvalue += [value[p1+1:p1+1+nc]]
value=value[p1+1+nc+1:]
return rvalue
return value
dct = self._parsemetadata(osp.split(fkey)[0])
return dct[osp.split(fkey)[1]]
def __setitem__(self,key,value):
assert self._mode=='+'
fkey = osp.join(self._name,key)
if fkey[-1]=='/':
fkey = fkey[:-1]
self.remove(fkey)
if isinstance(value,File):
shu.copytree(value._name,fkey)
return
if type(value) in (list,tuple,nm.ndarray):
if isinstance(value[0],str):
tvalue = "%d\n"%len(value)
for v in value:
tvalue += "%d\n"%len(v)+v+"\n"
f=_protect_open(fkey,"w")
f.write(tvalue)
f.close()
self[key+"__type__"] = "str_array"
return
value = nm.array(value)
if value.dtype==nm.int32:
value = value.astype(nm.int64)
#print key,fkey,value.dtype
pf.PrimaryHDU(value).writeto(fkey)
return
if type(value) == str and ("\n" in value or "\0" in value or len(value)>50):
#print key,len(value)
f=_protect_open(fkey,"w")
f.write(value)
f.close()
return
dct = self._parsemetadata(osp.split(fkey)[0])
dct[osp.split(fkey)[1]] = value
self._writemetadata(dct,osp.split(fkey)[0])
def create_group(self,name):
assert self._mode=='+'
return File(osp.join(self._name,name),"w")
def create_dataset(self,name,data=None):
assert data is not None
self[name] = data
def __delitem__(self,key):
assert self._mode=='+'
fkey = osp.join(self._name,key)
if fkey[-1]=='/':
fkey = fkey[:-1]
if osp.exists(fkey):
self.remove(fkey)
return
dct = self._parsemetadata(osp.split(fkey)[0])
del dct[osp.split(fkey)[1]]
self._writemetadata(dct,osp.split(fkey)[0])
def copy(self,a,b,c=""):
if not c:
self[b] = self[a]
else:
b[c]=a
@property
def attrs(self):
return self
def keys(self):
dct = self._parsemetadata(self._name)
ls = [el for el in os.listdir(self._name) if el[0]!='.' and el!=_metadata]
return ls+list(dct.keys())
def items(self):
ks = list(self.keys())
return [(k,self[k]) for k in ks]
def close(self):
pass #nothing to do
try:
import h5py
def hdf2cldf_grp(hdf,fdf):
# first the metadata
for kk in list(hdf.attrs.keys()):
vl = hdf.attrs[kk]
#print kk,type(vl)
if type(vl) == str:
sz = h5py.h5a.get_info(hdf.id,kk).data_size
rr = vl.ljust(sz,'\0')
fdf[kk] = rr
else:
fdf[kk] = vl
# then the group/data
for kk in list(hdf.keys()):
if kk=="external_data":
dts = hdf[kk][:]
install_path = osp.join(fdf._name,"_external")
os.mkdir(install_path)
f=_protect_open(osp.join(install_path,"data.tar"),"w")
f.write(dts.tostring())
f.close()
assert os.system("cd %s;tar xvf data.tar"%install_path)==0
assert os.system("cd %s;rm -f data.tar"%install_path)==0
fdf["external_dir"]="."
continue
god = hdf[kk]
if isinstance(god,h5py.Group):
if not hasattr(fdf,kk):
fdf.create_group(kk)
hdf2cldf_grp(god,fdf[kk])
else:
r = god[:]
#print r
if len(r)==1:
r=r[0]
fdf[kk] = r
def hdf2cldf(ffin, ffout):
hdf = h5py.File(ffin,"r")
fdf = File(ffout,"w")
hdf2cldf_grp(hdf,fdf)
except ImportError as e:
pass
class forfile:
def __init__(self,fi):
if hasattr(fi, 'read'):
self.fi = fi
else :
self.fi=_protect_open(fi)
self.bf=""
def read(self,fmt=''):
if self.bf=='':
sz = nm.fromstring(self.fi.read(4),dtype=nm.int32)[0]
#print "want %d bytes"%sz
self.bf = self.fi.read(sz)
#print self.bf
sz2 =nm.fromstring(self.fi.read(4),dtype=nm.int32)[0]
#print sz2
assert sz == sz2
if fmt=='':
self.bf=''
return
res = [self.cvrt(ff) for ff in fmt.strip().split()]
if len(res)==1:
return res[0]
return tuple(res)
def cvrt(self,fmt):
cmd = re.findall("([0-9]*)([i|f])([0-9]+)",fmt)[0]
dtype = nm.dtype({"f":"float","i":"int"}[cmd[1]]+cmd[2])
itm = nm.array(1,dtype=dtype).itemsize
nelem=1
if cmd[0]:
nelem = int(cmd[0])
res = nm.fromstring(self.bf[:itm*nelem],dtype=dtype)
self.bf=self.bf[itm*nelem:]
if nelem==1:
return res[0]
return res
def close(self):
self.bf=''
self.fi.close()
|
CosmoLikeREPO_NAMEcocoaPATH_START.@cocoa_extracted@cocoa-main@Cocoa@external_modules@code@planck@code@plc_3.0@plc-3.1@src@python@clik@cldf.py@.PATH_END.py
|
{
"filename": "modspeclist.py",
"repo_name": "spacetelescope/hstaxe",
"repo_path": "hstaxe_extracted/hstaxe-main/hstaxe/axesim/modspeclist.py",
"type": "Python"
}
|
"""
See LICENSE.txt
"""
import os
import sys
import math
from astropy.io import fits
from stwcs.wcsutil import HSTWCS
from hstaxe.axeerror import aXeSIMError
from hstaxe.axesrc.axeiol import InputObjectList
class MagColList(InputObjectList):
"""Subclass for lists with magnitude columns
This class loads ASCII tables and identifies the magnitude colunmns,
columns which have AB magnitudes as their content.
The magnitude columns have column names "MAG_?<number>*", with
"<number>" the associated wavelength of the AB magnitude.
Moreover there are mechanisms to search for column names and store
the column index in a dictionary.
TODO: FINISH UPDATING THIS CLASS FOR AXEIOL BASE
"""
def __init__(self, filename, mag_wavelength=None):
"""Initializer for the class
Parameters
----------
filename: str
name of the magnitude column list
mag_wavelength: float
special wavelength
"""
super(InputObjectList, self).__init__(filename=filename)
# initialize the dictionary
# with indices of required columns
self.reqColIndex = {}
# check whether rows do exist
if self.nrows > 0:
# check for the mandatory columns
# in the table
self.find_magnitude_columns(mag_wavelength)
# do some fundamental checks
self._basic_checks()
def _basic_checks(self):
"""Do some fundamental checks"""
# this check should be removed
# at some time.....
# check that there is at least one row
if self.nrows < 1:
err_msg = ("\nThe table: {0:s} must contain at least one row!"
.format(self.filename))
raise aXeSIMError(err_msg)
# this check should be removed
# at some time.....
# check that there are no header comments
if (len(self.header) > 0):
err_msg = ("\nThe table: {0:s} must not contain header comments!"
.format(self.filename))
raise aXeSIMError(err_msg)
def _find_required_columns(self, columnList):
"""Search and store the index of a list of required columns
The method searches for a list of column names given in the input.
If a column does not exist, an exception is thrown.
The index of existing columns is written into a dictionary with the
column name as key.
@param columnList: list with required columns
@type columnList: []
"""
# go over all columns
for aColumn in columnList:
# find it
colIndex = self.find(aColumn)
# complain if not found
if colIndex < 0:
err_msg = ("\nThe table: {0:s} does not contain column: {1}"
.format(self.filename, aColumn))
raise aXeSIMError(err_msg)
else:
# add the index to the dictionary
self.reqColIndex[aColumn] = colIndex
def _identify_magcol(self, mag_cols, mag_wave):
"""Identify the magnitude column closest to a characteristic value
The method analyses all magnitude columns and finds the one
which is closest to a wavelength given in the input.
The index of the closest wavelength in the input
list is returned.
Parameters
----------
mag_cols: list
list of [index, wavelength] with information on all magnitude cols
mag_wave: float
characteristic wavelength
Returns
-------
min_ind: int
the index of the magnitude column closest to mag_wave
"""
# define an impossibly large initial distance
min_dist = 1.0e+30
# define a non-result
min_ind = -1
# go over all magnitude columns
for index in range(len(mag_cols)):
# check whether a new minimum distance is achieved
if math.fabs(mag_cols[index][1]-mag_wave) < min_dist:
# transport the minimum and the index
min_ind = index
min_dist = math.fabs(mag_cols[index][1]-mag_wave)
# return the index
return min_ind
def _search_mcols(self):
"""Search the magnitude columns
The method collects all magnitude columns
with an encoded wavelength in the column name.
For each such column the column index and the
wavelength is stored in a list, and the list
of all columns is returned.
Returns
-------
mag_cols: list
a list of lists of column index-wavelength pairs
"""
# initialize the list with the result
mag_cols = []
# go over all columns
for index in range(self.ncols):
# get the column name
colname = self[index].colname
# try to decode the wavelength
wave = self._get_wavelength(colname)
# if a wavelength is encoded
if wave:
# compose and append the info
# to the resulting list
mag_cols.append([index, wave])
# return the result
return mag_cols
def _get_wavelength(self, colname):
"""Determine the wavelength from a column name
The method tries to extract the wavelength
encoded into a column name. The encoding
format is "MAG_<C><WAVE>*" with <C> a
single character, <WAVE> an integer number
and anything (*) afterwards.
If no wavelength is encoded,
the value 0 is returned.
Parameters
----------
colname: str
the column name
Returns
-------
wave: float
the wavelength encoded in the column name
"""
# set the value for 'nothing found'
wave = 0
# check for the start string
if colname.find('MAG_') == 0:
# copy the rest to a substring
rest_name = colname.split('MAG_')[1][1:]
# prepare to analyse the whole substring
for index in range(len(rest_name)):
# make a progressively longer
# substring, starting from the beginning
cand_name = rest_name[0:index+1]
# try to convert the substring to
# an integer, set a new wavelength
# if it is possible
try:
wave = int(cand_name)
# as soon as the substring can NOT
# be converted to an int
# return the current, best wavelength
except ValueError:
return wave
# return the best result
return wave
def find_magnitude_columns(self, mag_wavelength=None):
"""Identify all magnitude columns
The method identifies all magnitude columns and can select the
one with the wavelength closest to a given input wavelength.
An exception is thrown in case that no magnitude column is found.
@param mag_wavelength: characteristic wavelength
@type mag_wavelength: float
"""
# search for
# magnitude columns in general
self.mag_cols = self._search_mcols()
# if magnitude columns exist,
# search the one closest to the
# desired wavelength
if len(self.mag_cols) > 0:
if (mag_wavelength is not None):
mag_index = self._identify_magcol(self.mag_cols,
mag_wavelength)
else:
mag_index = 0
# set the column number and the wavelength
self.magwave = float(self.mag_cols[mag_index][1])
self.reqColIndex['MAGNITUDE'] = self.mag_cols[mag_index][0]
else:
# no magnitude column was found
err_msg = ("\nModel spectrum list: {0:s} does not contain any "
"magnitude column!".format(self.filename))
raise aXeSIMError(err_msg)
class ModelSpectrumList(MagColList):
"""Subclass for model spectra list"""
def __init__(self, filename, mag_wavelength=None):
"""Initializes the class
Parameters
----------
filename: str
name of the model spectra list
mag_wavelength: float
characteristic wavelength
"""
super(ModelSpectrumList, self).__init__(filename=filename,
mag_wavelength=mag_wavelength)
# find the columns requested for a
# model spectrum list
self._find_modspeclist_columns()
# re-set all values in the column "MODSPEC"
self._reset_modspec()
def _find_modspeclist_columns(self):
"""Find the requested columns
The method finds the columns which are mandatory for a model
spectra list.
"""
# the required columns
reqColumnsNames = ['NUMBER', 'Z', 'SPECTEMP']
# search for all required columns
self._find_required_columns(reqColumnsNames)
def _reset_modspec(self):
"""Reset the values in the column "MODSPEC"
The method sets all values in the column "MODSPEC" to
the value 0.
"""
# go over all rows
for index in range(self.nrows):
# set the entries to 0
self["MODSPEC"][index] = 0
def find_tempmax(self):
"""Find the largest entry in the spectral template column
The method identifies and returns the largest value found in the
column "SPECTEMP".
Returns
-------
stemp_max: int
The largest entry in "SPECTEMP"
"""
# initialize the value
stemp_max = 0
# go over all rows
for index in range(self.nrows):
# transfer a new maximum number, if necessary
stemp_max = max(self[self.reqColIndex['SPECTEMP']][index],
stemp_max)
# return the maximum number
return stemp_max
class ModelObjectTable(MagColList):
"""Subclass for model object table"""
def __init__(self, filename, model_spectra=None, model_images=None):
"""
Initializes the class
Parameters
----------
filename: str
name of the model object table
model_spectra: str
the model spectra file
model_images: str
the model image file
"""
super(ModelObjectTable, self).__init__(filename=filename)
# find the columns requested for a
# model object table
# the required columns
reqColumnsNames = ['NUMBER', 'X_IMAGE', 'Y_IMAGE',
'A_IMAGE', 'B_IMAGE', 'THETA_IMAGE']
if model_spectra is not None:
reqColumnsNames.extend(['MODSPEC'])
if model_images is not None:
reqColumnsNames.extend(['MODIMAGE'])
# search for all required columns
self._find_required_columns(reqColumnsNames)
def fill_columns(self, WCSimage, WCSext=None):
"""Fill up column information to be ready for aXe
The method completes the model object tables with columns
requested by aXe.
Parameters
----------
WCSimage: str
the reference image with WCS
WCSext: str
the extension to use
"""
# check that the image exists
if not os.path.isfile(WCSimage):
err_msg = "The WCS image: {0:s} does not exist!".format(WCSimage)
raise aXeSIMError(err_msg)
# read the WCS from the image header
if WCSext is not None:
image_wcs = HSTWCS(WCSimage, ext=WCSext)
else:
image_wcs = HSTWCS(WCSimage)
# go over all rows
for index in range(self.nrows):
# just copy some information;
# later it might be better to
# compute proper world values
self['A_WORLD'][index] = self['A_IMAGE'][index]
self['B_WORLD'][index] = self['B_IMAGE'][index]
self['THETA_WORLD'][index] = self['THETA_IMAGE'][index]
# transform x,y to ra and dec
ra, dec = image_wcs.all_pix2world(self['X_IMAGE'][index],
self['Y_IMAGE'][index], 1)
# store ra and dec
self['X_WORLD'][index] = float(ra)
self['Y_WORLD'][index] = float(dec)
# save the changes
self.flush()
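Two pieces of the logic above are easy to check in isolation: the progressive-substring parse in `_get_wavelength` and the linear scan in `_identify_magcol`. A standalone sketch (hypothetical helper names, not part of the class):

```python
import re

def mag_column_wavelength(colname):
    # "MAG_<C><WAVE>*": the prefix "MAG_", one arbitrary character,
    # then an integer wavelength; returns 0 when nothing is encoded,
    # mirroring _get_wavelength above
    match = re.match(r"MAG_.(\d+)", colname)
    return int(match.group(1)) if match else 0

def closest_magcol(mag_cols, mag_wave):
    # mag_cols: list of [index, wavelength] pairs; returns the list
    # position whose wavelength is closest to mag_wave (-1 if empty)
    if not mag_cols:
        return -1
    return min(range(len(mag_cols)),
               key=lambda i: abs(mag_cols[i][1] - mag_wave))
```

For example, `mag_column_wavelength("MAG_F775W")` yields 775, just as the character-by-character loop does.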
|
{
"filename": "everest.py",
"repo_name": "lightkurve/lightkurve",
"repo_path": "lightkurve_extracted/lightkurve-main/src/lightkurve/io/everest.py",
"type": "Python"
}
|
"""Reader for K2 EVEREST light curves."""
from ..lightcurve import KeplerLightCurve
from ..utils import KeplerQualityFlags
from .generic import read_generic_lightcurve
def read_everest_lightcurve(
filename, flux_column="flux", quality_bitmask="default", **kwargs
):
"""Read an EVEREST light curve file.
More information: https://archive.stsci.edu/hlsp/everest
Parameters
----------
filename : str
Local path or remote url of a Kepler light curve FITS file.
flux_column : str
Which column in the FITS file contains the preferred flux data?
Default is "flux".
quality_bitmask : str or int
Bitmask (integer) which identifies the quality flag bitmask that should
be used to mask out bad cadences. If a string is passed, it has the
following meaning:
* "none": no cadences will be ignored (`quality_bitmask=0`).
* "default": cadences with severe quality issues will be ignored
(`quality_bitmask=1130799`).
* "hard": more conservative choice of flags to ignore
(`quality_bitmask=1664431`). This is known to remove good data.
* "hardest": removes all data that has been flagged
(`quality_bitmask=2096639`). This mask is not recommended.
See the :class:`KeplerQualityFlags` class for details on the bitmasks.
Returns
-------
lc : `KeplerLightCurve`
A populated light curve object.
"""
lc = read_generic_lightcurve(
filename,
flux_column=flux_column,
quality_column="quality",
cadenceno_column="cadn",
time_format="bkjd",
)
# Filter out poor-quality data
# NOTE: Unfortunately Astropy Table masking does not yet work for columns
# that are Quantity objects, so for now we remove poor-quality data instead
# of masking. Details: https://github.com/astropy/astropy/issues/10119
quality_mask = KeplerQualityFlags.create_quality_mask(
quality_array=lc["quality"], bitmask=quality_bitmask
)
lc = lc[quality_mask]
lc.meta["AUTHOR"] = "EVEREST"
lc.meta["TARGETID"] = lc.meta.get("KEPLERID")
lc.meta["QUALITY_BITMASK"] = quality_bitmask
lc.meta["QUALITY_MASK"] = quality_mask
return KeplerLightCurve(data=lc, **kwargs)
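The bitmask filtering above keeps only cadences whose quality flags share no bits with the chosen mask. A simplified numpy stand-in for `KeplerQualityFlags.create_quality_mask` (the real method also resolves the string aliases such as "default" and "hard"):

```python
import numpy as np

def create_quality_mask(quality_array, bitmask):
    """Return a boolean mask that is True for cadences whose quality
    flags do not overlap the given integer bitmask."""
    return (np.asarray(quality_array, dtype=np.int64) & bitmask) == 0
```

Indexing the light curve with this mask, as done above with `lc = lc[quality_mask]`, drops the flagged cadences.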
|
{
"filename": "gfall.py",
"repo_name": "tardis-sn/carsus",
"repo_path": "carsus_extracted/carsus-master/carsus/io/kurucz/gfall.py",
"type": "Python"
}
|
import re
import logging
import numpy as np
import pandas as pd
from carsus.util import parse_selected_species
from carsus.io.util import read_from_buffer
CARSUS_DATA_GFALL_URL = "https://github.com/tardis-sn/carsus-data-kurucz/raw/main/linelists/gfall/gfall.dat?raw=true"
GFALL_AIR_THRESHOLD = 200 # [nm], wavelengths above this value are given in air
logger = logging.getLogger(__name__)
class GFALLReader(object):
"""
Class for extracting lines and levels data from kurucz gfall files
Attributes
----------
fname: path to gfall.dat
Methods
--------
gfall_raw:
Return pandas DataFrame representation of gfall
"""
gfall_fortran_format = (
"F11.4,F7.3,F6.2,F12.3,F5.2,1X,A10,F12.3,F5.2,1X,"
"A10,F6.2,F6.2,F6.2,A4,I2,I2,I3,F6.3,I3,F6.3,I5,I5,"
"1X,I1,A1,1X,I1,A1,I1,A3,I5,I5,I6"
)
gfall_columns = [
"wavelength",
"loggf",
"element_code",
"e_first",
"j_first",
"blank1",
"label_first",
"e_second",
"j_second",
"blank2",
"label_second",
"log_gamma_rad",
"log_gamma_stark",
"log_gamma_vderwaals",
"ref",
"nlte_level_no_first",
"nlte_level_no_second",
"isotope",
"log_f_hyperfine",
"isotope2",
"log_iso_abundance",
"hyper_shift_first",
"hyper_shift_second",
"blank3",
"hyperfine_f_first",
"hyperfine_note_first",
"blank4",
"hyperfine_f_second",
"hyperfine_note_second",
"line_strength_class",
"line_code",
"lande_g_first",
"lande_g_second",
"isotopic_shift",
]
default_unique_level_identifier = ["energy", "j"]
def __init__(
self, ions=None, fname=None, unique_level_identifier=None, priority=10
):
"""
Parameters
----------
fname: str
Path to the gfall file (http or local file).
ions: str, optional
Ions to extract, by default None.
unique_level_identifier: list
List of attributes to identify unique levels from. Will always use
atomic_number and ion charge in addition.
priority: int, optional
Priority of the current data source.
"""
if fname is None:
self.fname = CARSUS_DATA_GFALL_URL
else:
self.fname = fname
self.priority = priority
if ions is not None:
self.ions = parse_selected_species(ions)
else:
self.ions = None
self._gfall_raw = None
self._gfall = None
self._levels = None
self._lines = None
if unique_level_identifier is None:
logger.warning(
"A specific combination to identify unique levels from "
"GFALL data has not been given. Defaulting to "
'["energy", "j"].'
)
self.unique_level_identifier = self.default_unique_level_identifier
else:
self.unique_level_identifier = unique_level_identifier
@property
def gfall_raw(self):
if self._gfall_raw is None:
self._gfall_raw, self.version = self.read_gfall_raw()
return self._gfall_raw
@property
def gfall(self):
if self._gfall is None:
self._gfall = self.parse_gfall()
return self._gfall
@property
def levels(self):
if self._levels is None:
self._levels = self.extract_levels()
return self._levels
@property
def lines(self):
if self._lines is None:
self._lines = self.extract_lines()
return self._lines
def read_gfall_raw(self, fname=None):
"""
Reading in a normal gfall.dat
Parameters
----------
fname: ~str
path to gfall.dat
Returns
-------
pandas.DataFrame
pandas DataFrame representation of gfall
str
MD5 checksum
"""
if fname is None:
fname = self.fname
logger.info(f"Parsing GFALL from: {fname}")
# FORMAT(F11.4,F7.3,F6.2,F12.3,F5.2,1X,A10,F12.3,F5.2,1X,A10,
# 3F6.2,A4,2I2,I3,F6.3,I3,F6.3,2I5,1X,A1,A1,1X,A1,A1,i1,A3,2I5,I6)
number_match = re.compile(r"\d+(\.\d+)?")
type_match = re.compile(r"[FIXA]")
type_dict = {"F": np.float64, "I": np.int64, "X": str, "A": str}
field_types = tuple(
[
type_dict[item]
for item in number_match.sub("", self.gfall_fortran_format).split(",")
]
)
field_widths = type_match.sub("", self.gfall_fortran_format)
field_widths = map(int, re.sub(r"\.\d+", "", field_widths).split(","))
field_type_dict = {
col: dtype for col, dtype in zip(self.gfall_columns, field_types)
}
buffer, checksum = read_from_buffer(self.fname)
gfall = pd.read_fwf(
buffer,
widths=field_widths,
skip_blank_lines=True,
names=self.gfall_columns,
dtype=field_type_dict,
)
# remove empty lines
gfall = gfall[~gfall.isnull().all(axis=1)].reset_index(drop=True)
return gfall, checksum
def parse_gfall(self, gfall_raw=None):
"""
Parse raw gfall DataFrame
Parameters
----------
gfall_raw: pandas.DataFrame
Returns
-------
pandas.DataFrame
a level DataFrame
"""
gfall = gfall_raw if gfall_raw is not None else self.gfall_raw.copy()
gfall = gfall.rename(
columns={"e_first": "energy_first", "e_second": "energy_second"}
)
double_columns = [
item.replace("_first", "")
for item in gfall.columns
if item.endswith("first")
]
# due to the fact that energy is stored in 1/cm
order_lower_upper = gfall["energy_first"].abs() < gfall["energy_second"].abs()
for column in double_columns:
data = pd.concat(
[
gfall["{0}_first".format(column)][order_lower_upper],
gfall["{0}_second".format(column)][~order_lower_upper],
]
)
gfall["{0}_lower".format(column)] = data
data = pd.concat(
[
gfall["{0}_first".format(column)][~order_lower_upper],
gfall["{0}_second".format(column)][order_lower_upper],
]
)
gfall["{0}_upper".format(column)] = data
del gfall["{0}_first".format(column)]
del gfall["{0}_second".format(column)]
# Clean labels
gfall["label_lower"] = gfall["label_lower"].str.strip()
gfall["label_upper"] = gfall["label_upper"].str.strip()
gfall["label_lower"] = gfall["label_lower"].str.replace(r"\s+", " ", regex=True)
gfall["label_upper"] = gfall["label_upper"].str.replace(r"\s+", " ", regex=True)
# Ignore lines with the labels "AVERAGE ENERGIES" and "CONTINUUM"
ignored_labels = ["AVERAGE", "ENERGIES", "CONTINUUM"]
gfall = gfall.loc[
~(
(gfall["label_lower"].isin(ignored_labels))
| (gfall["label_upper"].isin(ignored_labels))
)
].copy()
gfall["energy_lower_predicted"] = gfall["energy_lower"] < 0
gfall["energy_lower"] = gfall["energy_lower"].abs()
gfall["energy_upper_predicted"] = gfall["energy_upper"] < 0
gfall["energy_upper"] = gfall["energy_upper"].abs()
gfall["atomic_number"] = gfall.element_code.astype(int)
gfall["ion_charge"] = (
((gfall.element_code.values - gfall.atomic_number.values) * 100)
.round()
.astype(int)
)
del gfall["element_code"]
return gfall
def extract_levels(self, gfall=None, selected_columns=None):
"""
Extract levels from `gfall`. We first generate a concatenated DataFrame
of all lower and upper levels. Then we drop the duplicate levels.
Parameters
----------
gfall: pandas.DataFrame
selected_columns: list
list of which columns to select (optional - default=None which selects
a default set of columns)
Returns
-------
pandas.DataFrame
a level DataFrame
"""
if gfall is None:
gfall = self.gfall
if selected_columns is None:
selected_columns = [
"atomic_number",
"ion_charge",
"energy",
"j",
"label",
"theoretical",
]
column_renames = {
"energy_{0}": "energy",
"j_{0}": "j",
"label_{0}": "label",
"energy_{0}_predicted": "theoretical",
}
e_lower_levels = gfall.rename(
columns=dict(
[(key.format("lower"), value) for key, value in column_renames.items()]
)
)
e_upper_levels = gfall.rename(
columns=dict(
[(key.format("upper"), value) for key, value in column_renames.items()]
)
)
levels = pd.concat(
[e_lower_levels[selected_columns], e_upper_levels[selected_columns]]
)
unique_level_id = ["atomic_number", "ion_charge"] + self.unique_level_identifier
levels.drop_duplicates(unique_level_id, inplace=True)
levels = levels.sort_values(
["atomic_number", "ion_charge", "energy", "j", "label"]
)
levels["method"] = levels["theoretical"].apply(
lambda x: "theor" if x else "meas"
) # Theoretical or measured
levels.drop("theoretical", axis="columns", inplace=True)
levels["level_index"] = (
levels.groupby(["atomic_number", "ion_charge"])["j"]
.transform(lambda x: np.arange(len(x), dtype=np.int64))
.values
)
levels["level_index"] = levels["level_index"].astype(int)
# ToDo: The commented block below does not work with all lines. Find a way to parse it.
# levels[["configuration", "term"]] = levels["label"].str.split(expand=True)
# levels["configuration"] = levels["configuration"].str.strip()
# levels["term"] = levels["term"].s
# TODO: move to a staticmethod
if self.ions is not None:
lvl_list = []
for ion in self.ions:
mask = (levels["atomic_number"] == ion[0]) & (
levels["ion_charge"] == ion[1]
)
lvl = levels[mask]
lvl_list.append(lvl)
levels = pd.concat(lvl_list, sort=True)
levels.set_index(["atomic_number", "ion_charge", "level_index"], inplace=True)
levels["priority"] = self.priority
return levels
def extract_lines(self, gfall=None, levels=None, selected_columns=None):
"""
Extract lines from `gfall`
Parameters
----------
gfall: pandas.DataFrame
selected_columns: list
list of which columns to select (optional - default=None which selects
a default set of columns)
Returns
-------
pandas.DataFrame
a level DataFrame
"""
if gfall is None:
gfall = self.gfall
if levels is None:
levels = self.levels
if selected_columns is None:
selected_columns = ["atomic_number", "ion_charge"]
selected_columns += [
item + "_lower" for item in self.unique_level_identifier
]
selected_columns += [
item + "_upper" for item in self.unique_level_identifier
]
selected_columns += ["wavelength", "loggf"]
logger.info("Extracting line data: {0}.".format(", ".join(selected_columns)))
unique_level_id = ["atomic_number", "ion_charge"] + self.unique_level_identifier
levels_idx = levels.reset_index()
levels_idx = levels_idx.set_index(unique_level_id)
lines = gfall[selected_columns].copy()
lines["gf"] = np.power(10, lines["loggf"])
lines = lines.drop(["loggf"], axis="columns")
# Assigning levels to lines
levels_unique_idxed = self.levels.reset_index().set_index(
["atomic_number", "ion_charge"] + self.unique_level_identifier
)
lines_lower_unique_idx = ["atomic_number", "ion_charge"] + [
item + "_lower" for item in self.unique_level_identifier
]
lines_upper_unique_idx = ["atomic_number", "ion_charge"] + [
item + "_upper" for item in self.unique_level_identifier
]
lines_lower_idx = lines.set_index(lines_lower_unique_idx)
lines_lower_idx["level_index_lower"] = levels_unique_idxed["level_index"]
lines_upper_idx = lines_lower_idx.reset_index().set_index(
lines_upper_unique_idx
)
lines_upper_idx["level_index_upper"] = levels_unique_idxed["level_index"]
lines = lines_upper_idx.reset_index()
# TODO: move to a staticmethod
if self.ions is not None:
lns_list = []
for ion in self.ions:
mask = (lines["atomic_number"] == ion[0]) & (
lines["ion_charge"] == ion[1]
)
lns = lines[mask]
lns_list.append(lns)
lines = pd.concat(lns_list, sort=True)
lines["level_index_lower"] = lines["level_index_lower"].astype("int")
lines["level_index_upper"] = lines["level_index_upper"].astype("int")
lines.set_index(
["atomic_number", "ion_charge", "level_index_lower", "level_index_upper"],
inplace=True,
)
return lines
def to_hdf(self, fname):
"""
Parameters
----------
fname : path
Path to the HDF5 output file
"""
with pd.HDFStore(fname, "w") as f:
f.put("/gfall_raw", self.gfall_raw)
f.put("/gfall", self.gfall)
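Two of the parsing steps above are easy to verify in isolation: deriving the fixed-column widths from the Fortran format string (`read_gfall_raw`) and splitting the Kurucz element code into atomic number and ion charge (`parse_gfall`). A standalone sketch of both (hypothetical helper names):

```python
import re

def fortran_widths(fmt):
    # drop the type letters, then the ".d" decimal parts, leaving
    # only the integer field widths; repeat counts such as "3F6.2"
    # are assumed already expanded, as in gfall_fortran_format
    no_types = re.sub(r"[FIXA]", "", fmt)
    return [int(w) for w in re.sub(r"\.\d+", "", no_types).split(",")]

def decode_element_code(code):
    # Kurucz element code: integer part is the atomic number, the
    # two decimal digits encode the ion charge (26.01 is Fe II)
    atomic_number = int(code)
    ion_charge = int(round((code - atomic_number) * 100))
    return atomic_number, ion_charge
```

The rounding in `decode_element_code` matters: floating-point codes like 26.01 do not split exactly, which is why `parse_gfall` rounds before casting to int.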
|
{
"filename": "test_config.py",
"repo_name": "gammapy/gammapy",
"repo_path": "gammapy_extracted/gammapy-main/gammapy/analysis/tests/test_config.py",
"type": "Python"
}
|
# Licensed under a 3-clause BSD style license - see LICENSE.rst
from pathlib import Path
import pytest
from astropy.coordinates import Angle
from astropy.time import Time
from astropy.units import Quantity
from pydantic import ValidationError
from gammapy.analysis.config import AnalysisConfig, GeneralConfig
from gammapy.utils.testing import assert_allclose
CONFIG_PATH = Path(__file__).resolve().parent / ".." / "config"
DOC_FILE = CONFIG_PATH / "docs.yaml"
def test_config_default_types():
config = AnalysisConfig()
assert config.observations.obs_cone.frame is None
assert config.observations.obs_cone.lon is None
assert config.observations.obs_cone.lat is None
assert config.observations.obs_cone.radius is None
assert config.observations.obs_time.start is None
assert config.observations.obs_time.stop is None
assert config.datasets.geom.wcs.skydir.frame is None
assert config.datasets.geom.wcs.skydir.lon is None
assert config.datasets.geom.wcs.skydir.lat is None
assert isinstance(config.datasets.geom.wcs.binsize, Angle)
assert isinstance(config.datasets.geom.wcs.binsize_irf, Angle)
assert isinstance(config.datasets.geom.axes.energy.min, Quantity)
assert isinstance(config.datasets.geom.axes.energy.max, Quantity)
assert isinstance(config.datasets.geom.axes.energy_true.min, Quantity)
assert isinstance(config.datasets.geom.axes.energy_true.max, Quantity)
assert isinstance(config.datasets.geom.selection.offset_max, Angle)
assert config.fit.fit_range.min is None
assert config.fit.fit_range.max is None
assert isinstance(config.excess_map.correlation_radius, Angle)
assert config.excess_map.energy_edges.min is None
assert config.excess_map.energy_edges.max is None
assert config.excess_map.energy_edges.nbins is None
def test_config_not_default_types():
config = AnalysisConfig()
config.observations.obs_cone = {
"frame": "galactic",
"lon": "83.633 deg",
"lat": "22.014 deg",
"radius": "1 deg",
}
config.fit.fit_range = {"min": "0.1 TeV", "max": "10 TeV"}
assert config.observations.obs_cone.frame == "galactic"
assert isinstance(config.observations.obs_cone.lon, Angle)
assert isinstance(config.observations.obs_cone.lat, Angle)
assert isinstance(config.observations.obs_cone.radius, Angle)
config.observations.obs_time.start = "2019-12-01"
assert isinstance(config.observations.obs_time.start, Time)
with pytest.raises(ValueError):
config.flux_points.energy.min = "1 deg"
assert isinstance(config.fit.fit_range.min, Quantity)
assert isinstance(config.fit.fit_range.max, Quantity)
def test_config_basics():
config = AnalysisConfig()
assert "AnalysisConfig" in str(config)
config = AnalysisConfig.read(DOC_FILE)
assert config.general.outdir == "."
def test_config_create_from_dict():
data = {"general": {"log": {"level": "warning"}}}
config = AnalysisConfig(**data)
assert config.general.log.level == "warning"
def test_config_create_from_yaml():
config = AnalysisConfig.read(DOC_FILE)
assert isinstance(config.general, GeneralConfig)
config_str = Path(DOC_FILE).read_text()
config = AnalysisConfig.from_yaml(config_str)
assert isinstance(config.general, GeneralConfig)
def test_config_to_yaml(tmp_path):
config = AnalysisConfig()
assert "level: info" in config.to_yaml()
config = AnalysisConfig()
fpath = Path(tmp_path) / "temp.yaml"
config.write(fpath)
text = Path(fpath).read_text()
assert "stack" in text
with pytest.raises(IOError):
config.write(fpath)
def test_get_doc_sections():
config = AnalysisConfig()
doc = config._get_doc_sections()
assert "general" in doc.keys()
def test_safe_mask_config_validation():
config = AnalysisConfig()
# Check empty list is accepted
config.datasets.safe_mask.methods = []
with pytest.raises(ValidationError):
config.datasets.safe_mask.methods = ["bad"]
def test_time_range_iso():
cfg = """
observations:
datastore: $GAMMAPY_DATA/hess-dl3-dr1
obs_ids: [23523, 23526]
obs_time: {
start: [2004-12-04 22:04:48.000, 2004-12-04 22:26:24.000, 2004-12-04 22:53:45.600],
stop: [2004-12-04 22:26:24.000, 2004-12-04 22:53:45.600, 2004-12-04 23:31:12.000]
}
"""
config = AnalysisConfig.from_yaml(cfg)
assert_allclose(
config.observations.obs_time.start.mjd, [53343.92, 53343.935, 53343.954]
)
def test_time_range_jyear():
cfg = """
observations:
datastore: $GAMMAPY_DATA/hess-dl3-dr1
obs_ids: [23523, 23526]
obs_time: {
start: [J2004.92654346, J2004.92658453, J2004.92663655],
stop: [J2004.92658453, J2004.92663655, J2004.92670773]
}
"""
config = AnalysisConfig.from_yaml(cfg)
assert_allclose(
config.observations.obs_time.start.mjd, [53343.92, 53343.935, 53343.954]
)
|
{
"filename": "_measure.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/waterfall/_measure.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class MeasureValidator(_plotly_utils.basevalidators.DataArrayValidator):
def __init__(self, plotly_name="measure", parent_name="waterfall", **kwargs):
super(MeasureValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "calc"),
**kwargs,
)
|
{
"filename": "test_compile.py",
"repo_name": "langchain-ai/langchain",
"repo_path": "langchain_extracted/langchain-master/libs/partners/chroma/tests/integration_tests/test_compile.py",
"type": "Python"
}
|
import pytest # type: ignore[import-not-found]
@pytest.mark.compile
def test_placeholder() -> None:
"""Used for compiling integration tests without running any real tests."""
pass
|
{
"filename": "2-1-combining-multiple-quarters.ipynb",
"repo_name": "lightkurve/lightkurve",
"repo_path": "lightkurve_extracted/lightkurve-main/docs/source/tutorials/2-creating-light-curves/2-1-combining-multiple-quarters.ipynb",
"type": "Jupyter Notebook"
}
|
# Combining multiple quarters of *Kepler* data
## Learning Goals
By the end of this tutorial, you will:
- Understand a *Kepler* Quarter.
- Understand how to download multiple quarters of data at once.
- Learn how to normalize *Kepler* data.
- Understand how to combine multiple quarters of data.
## Introduction
The [*Kepler*](https://archive.stsci.edu/kepler), [*K2*](https://archive.stsci.edu/k2), and [*TESS*](https://archive.stsci.edu/tess) telescopes observe stars for long periods of time. These long, time series observations are broken up into separate chunks, called quarters for the *Kepler* mission, campaigns for *K2*, and sectors for *TESS*.
Building light curves with as much data as is available is useful when searching for small signals, such as planetary transits or stellar pulsations. In this tutorial, we will learn how to use Lightkurve's tools to download and stitch together multiple quarters of *Kepler* observations.
It is recommended to first read the tutorial discussing how to use *Kepler* light curve products with Lightkurve. That tutorial will introduce you to some specifics of how *Kepler*, *K2*, and *TESS* make observations, and how these are displayed as light curves. It also introduces some important terms and concepts that are referred to in this tutorial.
This tutorial demonstrates how to access and combine multiple quarters of data from the *Kepler* space telescope, using the Lightkurve package.
When accessing *Kepler* data through MAST, it will be stored in three-month chunks, corresponding to a quarter of observations. By combining and normalizing these separate observations, you can form a single light curve that spans all observed quarters. Utilizing all of the data available is especially important when looking at repeating signals, such as planet transits and stellar oscillations.
We will use the *Kepler* mission as an example, but these tools are extensible to *TESS* and *K2* as well.
## Imports
This tutorial requires the [**Lightkurve**](http://docs.lightkurve.org/) package, which in turn uses `matplotlib` for plotting.
```python
import lightkurve as lk
%matplotlib inline
```
## 1. What is a *Kepler* Quarter?
In order to search for planets around other stars, the *Kepler* space telescope performed near-continuous monitoring of a single field of view, from an Earth-trailing orbit. However, this posed a challenge. If the space telescope is trailing Earth and maintaining steady pointing, its solar panels would slowly receive less and less sunlight.
In order to make sure the solar panels remained oriented towards the Sun, *Kepler* performed quarterly rolls, one every 93 days. The infographic below helps visualize this, and shows the points in the orbit where the rolls took place.
After each roll, *Kepler* retained its fine-pointing at the same field of view. Because the camera rotated by 90 degrees, all of the target stars fell on different parts of the charge-coupled device (CCD) camera. This had an effect on the amount of flux recorded for the same star, because different CCD pixels have different sensitivities. The way in which the flux from the same stars was distributed on the CCD (called the point spread function or PSF) also changed after each roll, due to focus changes and other instrumental effects. As a result, the aperture mask set for a star had to be recomputed after each roll, and may capture slightly different amounts of flux.
The data obtained between rolls is referred to as a quarter. While there are changes to the flux *systematics*, not much else changes quarter to quarter, and the majority of the target list remains identical. This means that, after removing systematic trends (such as was done for the presearch data conditioning simple aperture photometry (PDCSAP) flux), multiple quarters together can form one continuous observation.
<!--  -->
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/84/Kepler_space_telescope_orbit.png/800px-Kepler_space_telescope_orbit.png" width="800">
*Figure*: Infographic showcasing the necessity of *Kepler*'s quarterly rolls and its Earth-trailing orbit. Source: [Kepler Science Center](https://keplergo.arc.nasa.gov/ExtendedMissionOverview.shtml).
**Note**:
Observations by *K2* and *TESS* are also broken down into chunks of a month or more, called campaigns (for *K2*) and sectors (for *TESS*). While not discussed in this tutorial, the tools below work for these data products as well.
## 2. Downloading Multiple `KeplerLightCurve` Objects at Once
To start, we can use Lightkurve's [search_lightcurve()](https://docs.lightkurve.org/reference/api/lightkurve.search_lightcurve.html?highlight=search_lightcurve) function to see what data are available for our target star in the [Mikulski Archive for Space Telescopes](https://archive.stsci.edu/kepler/) (MAST). We will use the star [Kepler-8](http://www.openexoplanetcatalogue.com/planet/Kepler-8%20b/), a star somewhat larger than the Sun, and the host of a [hot Jupiter planet](https://en.wikipedia.org/wiki/Hot_Jupiter).
```python
search_result = lk.search_lightcurve("Kepler-8", author="Kepler", cadence="long")
search_result
```
In this list, each row represents a different observing quarter, for a total of 18 quarters across four years. The **observation** column lists the *Kepler* Quarter. The **target_name** column gives the *Kepler* Input Catalog (KIC) ID of the target, and the **productFilename** column is the name of the FITS file downloaded from MAST. The **distance** column shows the on-sky separation between the searched coordinates and the downloaded objects; it is only relevant when searching for specific coordinates in the sky, not when looking up individual objects by name.
Instead of downloading a single quarter using the [download()](https://docs.lightkurve.org/reference/api/lightkurve.SearchResult.download.html?highlight=download#lightkurve.SearchResult.download) function, we can use the [download_all()](https://docs.lightkurve.org/reference/api/lightkurve.SearchResult.download_all.html?highlight=download_all) function to access all 18 quarters at once (this might take a while).
```python
lc_collection = search_result.download_all()
lc_collection
```
All of the downloaded data are stored in a `LightCurveCollection`. This object acts as a wrapper for 18 separate `KeplerLightCurve` objects, listed above.
We can access the `KeplerLightCurve` objects and interact with them as usual through the `LightCurveCollection`.
```python
lc_Q4 = lc_collection[4]
lc_Q4
```
```python
lc_Q4.plot();
```
**Note**:
The example given above also works for downloading target pixel files (TPFs). This will produce a `TargetPixelFileCollection` object instead.
## 3. Investigating the Data
Let's first have a look at how these observations differ from one another. We can plot the simple aperture photometry (SAP) flux of all of the observations in the [`LightCurveCollection`](https://docs.lightkurve.org/reference/api/lightkurve.collections.LightCurveCollection.html#lightkurve.collections.LightCurveCollection) to see how they compare.
```python
ax = lc_collection[0].plot(column='sap_flux', label=None)
for lc in lc_collection[1:]:
lc.plot(ax=ax, column='sap_flux', label=None)
```
In the figure above, each quarter of data looks strikingly different, with global patterns repeating every four quarters as *Kepler* completed a full rotation.
The change in flux within each quarter is in part driven by changes in the telescope focus, which are caused by changes in the temperature of *Kepler*'s components as the spacecraft orbits the Sun. The changes are also caused by an effect called *differential velocity aberration* (DVA), which causes stars to drift over the course of a quarter, depending on their distance from the center of *Kepler*'s field of view.
While the figure above looks messy, all the systematic effects mentioned above are well understood, and have been detrended in the PDCSAP flux. For a more detailed overview, see the [*Kepler* Data Characteristics Handbook](https://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/Data_Characteristics.pdf), specifically: *Section 5. Ongoing Phenomena*.
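As a toy illustration of why detrending matters, the sketch below uses plain NumPy (it is *not* the PDCSAP pipeline) to remove an independent linear systematic from each of two simulated "quarters" before joining them. The quarter lengths, scales, and drift coefficients are invented for the example.

```python
import numpy as np

time = np.arange(200, dtype=float)
signal = 1.0 + 0.001 * np.sin(2 * np.pi * time / 25)   # shared astrophysical signal

# Two simulated "quarters" with different instrumental scales and linear drifts,
# mimicking the jumps seen after each Kepler roll
q1 = signal[:100] * (5000.0 + 2.0 * time[:100])
q2 = signal[100:] * (7000.0 - 1.5 * (time[100:] - 100.0))

detrended = []
for t, flux in [(time[:100], q1), (time[100:], q2)]:
    trend = np.polyval(np.polyfit(t, flux, 1), t)   # fit and divide out a linear trend
    detrended.append(flux / trend)

combined = np.concatenate(detrended)  # both quarters now vary around 1.0
```

After dividing out each quarter's own trend, the two segments sit on a common scale and the shared sinusoidal signal survives.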
## 4. Normalizing a Light Curve
If we want to see the actual variation of the targeted object over the course of these observations, the plot above isn't very useful to us. It is also not useful to have flux expressed in physical units, because it is affected by the observing conditions such as telescope focus and pointing (see above).
Instead, it is a common practice to normalize light curves by dividing by their median value. This means that the median of the newly normalized light curve will be equal to 1, and that the relative size of signals in the observation (such as transits) will be maintained.
A normalization can be performed using the [normalize()](https://docs.lightkurve.org/reference/api/lightkurve.LightCurve.normalize.html?highlight=normalize#lightkurve.LightCurve.normalize) method of a `KeplerLightCurve`, for example:
```python
lc_collection[4].normalize().plot();
```
In the figure above, we have plotted the normalized PDCSAP flux for Quarter 4. The median normalized flux is at 1, and the transit depths lie around 0.991, indicating a 0.9% dip in brightness due to the planet transiting the star.
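The arithmetic behind these numbers can be checked with plain NumPy. The flux values below are invented for illustration, but they are chosen so that the in-transit point sits 0.9% below the median, matching the depth quoted above.

```python
import numpy as np

flux = np.array([5000.0, 5010.0, 4995.0, 4955.0, 5005.0])  # one in-transit point
normalized = flux / np.median(flux)

print(np.median(normalized))   # 1.0
print(1.0 - normalized.min())  # fractional transit depth: ~0.009, i.e. a 0.9% dip
```

Because normalization is a simple division, relative dip depths are preserved exactly.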
The `LightCurveCollection` also has a [plot()](https://docs.lightkurve.org/reference/api/lightkurve.FoldedLightCurve.plot.html?highlight=plot#lightkurve.FoldedLightCurve.plot) method. We can use it to plot the PDCSAP flux. The method automatically normalizes the flux in the same way we did for a single quarter above.
```python
lc_collection.plot();
```
As you can see above, because we have normalized the data, all of the observations form a single consistent light curve.
## 5. Combining Multiple Observations into a Single Light Curve
Finally, we can combine these different light curves into a single `KeplerLightCurve` object. This is done using the `stitch()` method. This method concatenates all quarters in our `LightCurveCollection` together, and normalizes them at the same time, in the manner we saw above.
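Conceptually, this is equivalent to normalizing each quarter by its own median and concatenating the results. The NumPy sketch below illustrates the idea with invented fluxes; it is not the actual Lightkurve implementation.

```python
import numpy as np

quarters = [
    np.array([5000.0, 5010.0, 4990.0]),   # Quarter 1, one detector scale
    np.array([7200.0, 7190.0, 7210.0]),   # Quarter 2, a different scale after a roll
]

# Normalize each quarter by its own median, then concatenate into one series
stitched = np.concatenate([q / np.median(q) for q in quarters])
```

Even though the raw quarters differ by thousands of counts, the stitched values all cluster around 1.0.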
```python
lc_stitched = lc_collection.stitch()
lc_stitched
```
This returns a single `KeplerLightCurve`! It is identical in every way to a `KeplerLightCurve` of a single quarter, just longer. We can plot it the usual way.
```python
lc_stitched.plot();
```
In this final normalized light curve, the interesting observational features of the star are more clear. Specifically: repeating transits that can be used to [characterize planets](https://docs.lightkurve.org/tutorials/02-recover-a-planet.html) and a noisy stellar flux that can be used to study brightness variability through [asteroseismology](http://docs.lightkurve.org/tutorials/02-asteroseismology.html).
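To see the repeating transits more clearly, a common next step is to phase-fold the stitched light curve; Lightkurve provides a `fold()` method for this. The core arithmetic is just a modulo operation, sketched below with an assumed orbital period (roughly that of Kepler-8 b, used here purely for illustration).

```python
import numpy as np

period = 3.52                          # assumed orbital period in days (illustrative)
time = np.linspace(0.0, 35.2, 1000)    # ten orbital periods of observations
phase = (time % period) / period       # each point mapped to orbital phase in [0, 1)
```

Plotting flux against `phase` stacks all transits on top of one another, boosting the effective signal-to-noise of the transit shape.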
Normalizing individual *Kepler* Quarters before combining them to form a single light curve isn't the only way to make sure different quarters are consistent with one another. For a breakdown of other available methods and their benefits, see *Section 6. Stitching Kepler Quarters Together* in [Kinemuchi et al. 2012](https://arxiv.org/pdf/1207.3093.pdf).
## About this Notebook
**Authors:** Oliver Hall (oliver.hall@esa.int), Geert Barentsen
**Updated On**: 2020-09-15
## Citing Lightkurve and Astropy
If you use `lightkurve` or `astropy` for published research, please cite the authors. Click the buttons below to copy BibTeX entries to your clipboard.
```python
lk.show_citation_instructions()
```
<img style="float: right;" src="https://raw.githubusercontent.com/spacetelescope/notebooks/master/assets/stsci_pri_combo_mark_horizonal_white_bkgd.png" alt="Space Telescope Logo" width="200px"/>
*Notebook source: lightkurve, `docs/source/tutorials/2-creating-light-curves/2-1-combining-multiple-quarters.ipynb`*

# File: catboost/catboost - contrib/python/plotly/py3/plotly/graph_objs/histogram/__init__.py
import sys
from typing import TYPE_CHECKING
if sys.version_info < (3, 7) or TYPE_CHECKING:
from ._cumulative import Cumulative
from ._error_x import ErrorX
from ._error_y import ErrorY
from ._hoverlabel import Hoverlabel
from ._insidetextfont import Insidetextfont
from ._legendgrouptitle import Legendgrouptitle
from ._marker import Marker
from ._outsidetextfont import Outsidetextfont
from ._selected import Selected
from ._stream import Stream
from ._textfont import Textfont
from ._unselected import Unselected
from ._xbins import XBins
from ._ybins import YBins
from . import hoverlabel
from . import legendgrouptitle
from . import marker
from . import selected
from . import unselected
else:
from _plotly_utils.importers import relative_import
__all__, __getattr__, __dir__ = relative_import(
__name__,
[".hoverlabel", ".legendgrouptitle", ".marker", ".selected", ".unselected"],
[
"._cumulative.Cumulative",
"._error_x.ErrorX",
"._error_y.ErrorY",
"._hoverlabel.Hoverlabel",
"._insidetextfont.Insidetextfont",
"._legendgrouptitle.Legendgrouptitle",
"._marker.Marker",
"._outsidetextfont.Outsidetextfont",
"._selected.Selected",
"._stream.Stream",
"._textfont.Textfont",
"._unselected.Unselected",
"._xbins.XBins",
"._ybins.YBins",
],
)
# File: sibirrer/lenstronomy - lenstronomy/Cosmo/_cosmo_interp_astropy_v5.py
import astropy
from scipy.integrate import quad
import warnings

if int(astropy.__version__.split(".")[0]) < 5:
    # Compare the full major version (not just the first character) and
    # actually emit the warning instead of constructing an unused Warning object
    warnings.warn(
        "These routines are only supported for astropy version >=5. "
        "Current version is %s." % astropy.__version__
    )
else:
from astropy.cosmology.utils import vectorize_redshift_method
class CosmoInterp(object):
"""Class which interpolates the comoving transfer distance and then computes angular
diameter distances from it This class is modifying the astropy.cosmology
routines."""
def __init__(self, cosmo):
"""
:param cosmo: astropy.cosmology instance (version 4.0 as private functions need to be supported)
"""
self._cosmo = cosmo
def _integral_comoving_distance_z1z2(self, z1, z2):
"""Comoving line-of-sight distance in Mpc between objects at redshifts ``z1``
and ``z2``. The comoving distance along the line-of-sight between two objects
remains constant with time for objects in the Hubble flow.
Parameters
----------
z1, z2 : Quantity-like ['redshift'] or array-like
Input redshifts.
Returns
-------
d : `~astropy.units.Quantity` ['length']
Comoving distance in Mpc between each input redshift.
"""
return (
self._cosmo._hubble_distance
* self._integral_comoving_distance_z1z2_scalar(z1, z2)
)
@vectorize_redshift_method(nin=2)
def _integral_comoving_distance_z1z2_scalar(self, z1, z2):
"""Comoving line-of-sight distance between objects at redshifts ``z1`` and
``z2``. Value in Mpc.
The comoving distance along the line-of-sight between two objects
remains constant with time for objects in the Hubble flow.
Parameters
----------
z1, z2 : Quantity-like ['redshift'], array-like, or `~numbers.Number`
Input redshifts.
Returns
-------
d : float or ndarray
Comoving distance in Mpc between each input redshift.
Returns `float` if input scalar, `~numpy.ndarray` otherwise.
"""
return quad(
self._cosmo._inv_efunc_scalar,
z1,
z2,
args=self._cosmo._inv_efunc_scalar_args,
)[0]
# File: enthought/mayavi - examples/mayavi/data_interaction/pick_on_surface.py
""" Example showing how to pick data on a surface, going all the way back
to the index in the numpy arrays.
In this example, two views of the same data are shown. One with the data
on a sphere, the other with the data flat.
We use the 'on_mouse_pick' method of the scene to register a callback on
clicking on the sphere. The callback is called with a picker object as
and an argument. We use the point_id of the point that has been picked,
and go back to the 2D index on the data matrix to find its position.
"""
################################################################################
# Create some data
import numpy as np
pi = np.pi
cos = np.cos
sin = np.sin
phi, theta = np.mgrid[0:pi:180j,0:2*pi:180j]
m0 = 4; m1 = 3; m2 = 2; m3 = 3; m4 = 1; m5 = 2; m6 = 2; m7 = 4;
s = sin(m0*phi)**m1 + cos(m2*phi)**m3 + sin(m4*theta)**m5 + cos(m6*theta)**m7
x = sin(phi)*cos(theta)
y = cos(phi)
z = sin(phi)*sin(theta)
################################################################################
# Plot the data
from mayavi import mlab
# A first plot in 3D
fig = mlab.figure(1)
mlab.clf()
mesh = mlab.mesh(x, y, z, scalars=s)
cursor3d = mlab.points3d(0., 0., 0., mode='axes',
color=(0, 0, 0),
scale_factor=0.5)
mlab.title('Click on the ball')
# A second plot, flat
fig2d = mlab.figure(2)
mlab.clf()
im = mlab.imshow(s)
cursor = mlab.points3d(0, 0, 0, mode='2dthick_cross',
color=(0, 0, 0),
scale_factor=10)
mlab.view(90, 0)
################################################################################
# Some logic to select 'mesh' and the data index when picking.
def picker_callback(picker_obj):
picked = picker_obj.actors
if mesh.actor.actor._vtk_obj in [o._vtk_obj for o in picked]:
# m.mlab_source.points is the points array underlying the vtk
# dataset. GetPointId return the index in this array.
        x_, y_ = np.unravel_index(picker_obj.point_id, s.shape)
print("Data indices: %i, %i" % (x_, y_))
n_x, n_y = s.shape
cursor.mlab_source.reset(x=x_ - n_x/2.,
y=y_ - n_y/2.)
cursor3d.mlab_source.reset(x=x[x_, y_],
y=y[x_, y_],
z=z[x_, y_])
fig.on_mouse_pick(picker_callback)
mlab.show()
# File: astropy/astropy - astropy/io/fits/hdu/groups.py
# Licensed under a 3-clause BSD style license - see PYFITS.rst
import sys
import numpy as np
from astropy.io.fits.column import FITS2NUMPY, ColDefs, Column
from astropy.io.fits.fitsrec import FITS_rec, FITS_record
from astropy.io.fits.util import _is_int, _is_pseudo_integer, _pseudo_zero
from astropy.utils import lazyproperty
from .base import DELAYED, DTYPE2BITPIX
from .image import PrimaryHDU
from .table import _TableLikeHDU
class Group(FITS_record):
"""
One group of the random group data.
"""
def __init__(self, input, row=0, start=None, end=None, step=None, base=None):
super().__init__(input, row, start, end, step, base)
@property
def parnames(self):
return self.array.parnames
@property
def data(self):
# The last column in the coldefs is the data portion of the group
return self.field(self.array._coldefs.names[-1])
@lazyproperty
def _unique(self):
return _par_indices(self.parnames)
def par(self, parname):
"""
Get the group parameter value.
"""
if _is_int(parname):
result = self.array[self.row][parname]
else:
indx = self._unique[parname.upper()]
if len(indx) == 1:
result = self.array[self.row][indx[0]]
# if more than one group parameter have the same name
else:
result = self.array[self.row][indx[0]].astype("f8")
for i in indx[1:]:
result += self.array[self.row][i]
return result
def setpar(self, parname, value):
"""
Set the group parameter value.
"""
# TODO: It would be nice if, instead of requiring a multi-part value to
# be an array, there were an *option* to automatically split the value
# into multiple columns if it doesn't already fit in the array data
# type.
if _is_int(parname):
self.array[self.row][parname] = value
else:
indx = self._unique[parname.upper()]
if len(indx) == 1:
self.array[self.row][indx[0]] = value
# if more than one group parameter have the same name, the
# value must be a list (or tuple) containing arrays
else:
if isinstance(value, (list, tuple)) and len(indx) == len(value):
for i in range(len(indx)):
self.array[self.row][indx[i]] = value[i]
else:
raise ValueError(
"Parameter value must be a sequence with "
f"{len(indx)} arrays/numbers."
)
class GroupData(FITS_rec):
"""
Random groups data object.
Allows structured access to FITS Group data in a manner analogous
to tables.
"""
_record_type = Group
def __new__(
cls,
input=None,
bitpix=None,
pardata=None,
parnames=[],
bscale=None,
bzero=None,
parbscales=None,
parbzeros=None,
):
"""
Parameters
----------
input : array or FITS_rec instance
input data, either the group data itself (a
`numpy.ndarray`) or a record array (`FITS_rec`) which will
contain both group parameter info and the data. The rest
of the arguments are used only for the first case.
bitpix : int
data type as expressed in FITS ``BITPIX`` value (8, 16, 32,
64, -32, or -64)
pardata : sequence of array
parameter data, as a list of (numeric) arrays.
parnames : sequence of str
list of parameter names.
bscale : int
``BSCALE`` of the data
bzero : int
``BZERO`` of the data
parbscales : sequence of int
list of bscales for the parameters
parbzeros : sequence of int
list of bzeros for the parameters
"""
if not isinstance(input, FITS_rec):
if pardata is None:
npars = 0
else:
npars = len(pardata)
if parbscales is None:
parbscales = [None] * npars
if parbzeros is None:
parbzeros = [None] * npars
if parnames is None:
parnames = [f"PAR{idx + 1}" for idx in range(npars)]
if len(parnames) != npars:
raise ValueError(
"The number of parameter data arrays does "
"not match the number of parameters."
)
unique_parnames = _unique_parnames(parnames + ["DATA"])
if bitpix is None:
bitpix = DTYPE2BITPIX[input.dtype.name]
fits_fmt = GroupsHDU._bitpix2tform[bitpix] # -32 -> 'E'
format = FITS2NUMPY[fits_fmt] # 'E' -> 'f4'
data_fmt = f"{input.shape[1:]}{format}"
formats = ",".join(([format] * npars) + [data_fmt])
gcount = input.shape[0]
cols = [
Column(
name=unique_parnames[idx],
format=fits_fmt,
bscale=parbscales[idx],
bzero=parbzeros[idx],
)
for idx in range(npars)
]
cols.append(
Column(
name=unique_parnames[-1],
format=fits_fmt,
bscale=bscale,
bzero=bzero,
)
)
coldefs = ColDefs(cols)
self = FITS_rec.__new__(
cls,
np.rec.array(None, formats=formats, names=coldefs.names, shape=gcount),
)
# By default the data field will just be 'DATA', but it may be
# uniquified if 'DATA' is already used by one of the group names
self._data_field = unique_parnames[-1]
self._coldefs = coldefs
self.parnames = parnames
for idx, name in enumerate(unique_parnames[:-1]):
column = coldefs[idx]
# Note: _get_scale_factors is used here and in other cases
# below to determine whether the column has non-default
# scale/zero factors.
# TODO: Find a better way to do this than using this interface
scale, zero = self._get_scale_factors(column)[3:5]
if scale or zero:
self._cache_field(name, pardata[idx])
else:
np.rec.recarray.field(self, idx)[:] = pardata[idx]
column = coldefs[self._data_field]
scale, zero = self._get_scale_factors(column)[3:5]
if scale or zero:
self._cache_field(self._data_field, input)
else:
np.rec.recarray.field(self, npars)[:] = input
else:
self = FITS_rec.__new__(cls, input)
self.parnames = None
return self
def __array_finalize__(self, obj):
super().__array_finalize__(obj)
if isinstance(obj, GroupData):
self.parnames = obj.parnames
elif isinstance(obj, FITS_rec):
self.parnames = obj._coldefs.names
def __getitem__(self, key):
out = super().__getitem__(key)
if isinstance(out, GroupData):
out.parnames = self.parnames
return out
@property
def data(self):
"""
The raw group data represented as a multi-dimensional `numpy.ndarray`
array.
"""
# The last column in the coldefs is the data portion of the group
return self.field(self._coldefs.names[-1])
@lazyproperty
def _unique(self):
return _par_indices(self.parnames)
def par(self, parname):
"""
Get the group parameter values.
"""
if _is_int(parname):
result = self.field(parname)
else:
indx = self._unique[parname.upper()]
if len(indx) == 1:
result = self.field(indx[0])
# if more than one group parameter have the same name
else:
result = self.field(indx[0]).astype("f8")
for i in indx[1:]:
result += self.field(i)
return result
class GroupsHDU(PrimaryHDU, _TableLikeHDU):
"""
FITS Random Groups HDU class.
See the :ref:`astropy:random-groups` section in the Astropy documentation
for more details on working with this type of HDU.
"""
_bitpix2tform = {8: "B", 16: "I", 32: "J", 64: "K", -32: "E", -64: "D"}
_data_type = GroupData
_data_field = "DATA"
"""
The name of the table record array field that will contain the group data
for each group; 'DATA' by default, but may be preceded by any number of
underscores if 'DATA' is already a parameter name
"""
def __init__(self, data=None, header=None):
super().__init__(data=data, header=header)
if data is not DELAYED:
self.update_header()
# Update the axes; GROUPS HDUs should always have at least one axis
if len(self._axes) <= 0:
self._axes = [0]
self._header["NAXIS"] = 1
self._header.set("NAXIS1", 0, after="NAXIS")
@classmethod
def match_header(cls, header):
keyword = header.cards[0].keyword
return keyword == "SIMPLE" and "GROUPS" in header and header["GROUPS"] is True
@lazyproperty
def data(self):
"""
The data of a random group FITS file will be like a binary table's
data.
"""
if self._axes == [0]:
return
data = self._get_tbdata()
data._coldefs = self.columns
data.parnames = self.parnames
del self.columns
return data
@lazyproperty
def parnames(self):
"""The names of the group parameters as described by the header."""
pcount = self._header["PCOUNT"]
# The FITS standard doesn't really say what to do if a parname is
# missing, so for now just assume that won't happen
return [self._header["PTYPE" + str(idx + 1)] for idx in range(pcount)]
@lazyproperty
def columns(self):
if self._has_data and hasattr(self.data, "_coldefs"):
return self.data._coldefs
format = self._bitpix2tform[self._header["BITPIX"]]
pcount = self._header["PCOUNT"]
parnames = []
bscales = []
bzeros = []
for idx in range(pcount):
bscales.append(self._header.get("PSCAL" + str(idx + 1), None))
bzeros.append(self._header.get("PZERO" + str(idx + 1), None))
parnames.append(self._header["PTYPE" + str(idx + 1)])
formats = [format] * len(parnames)
dim = [None] * len(parnames)
# Now create columns from collected parameters, but first add the DATA
# column too, to contain the group data.
parnames.append("DATA")
bscales.append(self._header.get("BSCALE"))
        bzeros.append(self._header.get("BZERO"))
data_shape = self.shape[:-1]
formats.append(str(int(np.prod(data_shape))) + format)
dim.append(data_shape)
parnames = _unique_parnames(parnames)
self._data_field = parnames[-1]
cols = [
Column(name=name, format=fmt, bscale=bscale, bzero=bzero, dim=dim)
for name, fmt, bscale, bzero, dim in zip(
parnames, formats, bscales, bzeros, dim
)
]
coldefs = ColDefs(cols)
return coldefs
@property
def _nrows(self):
if not self._data_loaded:
# The number of 'groups' equates to the number of rows in the table
# representation of the data
return self._header.get("GCOUNT", 0)
else:
return len(self.data)
@lazyproperty
def _theap(self):
# Only really a lazyproperty for symmetry with _TableBaseHDU
return 0
@property
def is_image(self):
return False
@property
def size(self):
"""
Returns the size (in bytes) of the HDU's data part.
"""
size = 0
naxis = self._header.get("NAXIS", 0)
# for random group image, NAXIS1 should be 0, so we skip NAXIS1.
if naxis > 1:
size = 1
for idx in range(1, naxis):
size = size * self._header["NAXIS" + str(idx + 1)]
bitpix = self._header["BITPIX"]
gcount = self._header.get("GCOUNT", 1)
pcount = self._header.get("PCOUNT", 0)
size = abs(bitpix) * gcount * (pcount + size) // 8
return size
def update_header(self):
old_naxis = self._header.get("NAXIS", 0)
if self._data_loaded:
if isinstance(self.data, GroupData):
self._axes = list(self.data.data.shape)[1:]
self._axes.reverse()
self._axes = [0] + self._axes
field0 = self.data.dtype.names[0]
field0_code = self.data.dtype.fields[field0][0].name
elif self.data is None:
self._axes = [0]
field0_code = "uint8" # For lack of a better default
else:
raise ValueError("incorrect array type")
self._header["BITPIX"] = DTYPE2BITPIX[field0_code]
self._header["NAXIS"] = len(self._axes)
# add NAXISi if it does not exist
for idx, axis in enumerate(self._axes):
if idx == 0:
after = "NAXIS"
else:
after = "NAXIS" + str(idx)
self._header.set("NAXIS" + str(idx + 1), axis, after=after)
# delete extra NAXISi's
for idx in range(len(self._axes) + 1, old_naxis + 1):
self._header.remove(f"NAXIS{idx}", ignore_missing=True)
if self._has_data and isinstance(self.data, GroupData):
self._header.set("GROUPS", True, after="NAXIS" + str(len(self._axes)))
self._header.set("PCOUNT", len(self.data.parnames), after="GROUPS")
self._header.set("GCOUNT", len(self.data), after="PCOUNT")
column = self.data._coldefs[self._data_field]
scale, zero = self.data._get_scale_factors(column)[3:5]
if scale:
self._header.set("BSCALE", column.bscale)
if zero:
self._header.set("BZERO", column.bzero)
for idx, name in enumerate(self.data.parnames):
self._header.set("PTYPE" + str(idx + 1), name)
column = self.data._coldefs[idx]
scale, zero = self.data._get_scale_factors(column)[3:5]
if scale:
self._header.set("PSCAL" + str(idx + 1), column.bscale)
if zero:
self._header.set("PZERO" + str(idx + 1), column.bzero)
# Update the position of the EXTEND keyword if it already exists
if "EXTEND" in self._header:
if len(self._axes):
after = "NAXIS" + str(len(self._axes))
else:
after = "NAXIS"
self._header.set("EXTEND", after=after)
def _writedata_internal(self, fileobj):
"""
Basically copy/pasted from `_ImageBaseHDU._writedata_internal()`, but
we have to get the data's byte order a different way...
TODO: Might be nice to store some indication of the data's byte order
as an attribute or function so that we don't have to do this.
"""
size = 0
if self.data is not None:
self.data._scale_back()
# Based on the system type, determine the byteorders that
# would need to be swapped to get to big-endian output
if sys.byteorder == "little":
swap_types = ("<", "=")
else:
swap_types = ("<",)
# deal with unsigned integer 16, 32 and 64 data
if _is_pseudo_integer(self.data.dtype):
# Convert the unsigned array to signed
output = np.array(
self.data - _pseudo_zero(self.data.dtype),
dtype=f">i{self.data.dtype.itemsize}",
)
should_swap = False
else:
output = self.data
fname = self.data.dtype.names[0]
byteorder = self.data.dtype.fields[fname][0].str[0]
should_swap = byteorder in swap_types
if should_swap:
if output.flags.writeable:
output.byteswap(True)
try:
fileobj.writearray(output)
finally:
output.byteswap(True)
else:
# For read-only arrays, there is no way around making
# a byteswapped copy of the data.
fileobj.writearray(output.byteswap(False))
else:
fileobj.writearray(output)
size += output.size * output.itemsize
return size
def _verify(self, option="warn"):
errs = super()._verify(option=option)
# Verify locations and values of mandatory keywords.
self.req_cards(
"NAXIS", 2, lambda v: (_is_int(v) and 1 <= v <= 999), 1, option, errs
)
self.req_cards("NAXIS1", 3, lambda v: (_is_int(v) and v == 0), 0, option, errs)
after = self._header["NAXIS"] + 3
pos = lambda x: x >= after
self.req_cards("GCOUNT", pos, _is_int, 1, option, errs)
self.req_cards("PCOUNT", pos, _is_int, 0, option, errs)
self.req_cards("GROUPS", pos, lambda v: (v is True), True, option, errs)
return errs
def _calculate_datasum(self):
"""
Calculate the value for the ``DATASUM`` card in the HDU.
"""
if self._has_data:
# We have the data to be used.
# Check the byte order of the data. If it is little endian we
# must swap it before calculating the datasum.
# TODO: Maybe check this on a per-field basis instead of assuming
# that all fields have the same byte order?
byteorder = self.data.dtype.fields[self.data.dtype.names[0]][0].str[0]
if byteorder != ">":
if self.data.flags.writeable:
byteswapped = True
d = self.data.byteswap(True)
d.dtype = d.dtype.newbyteorder(">")
else:
# If the data is not writeable, we just make a byteswapped
# copy and don't bother changing it back after
d = self.data.byteswap(False)
d.dtype = d.dtype.newbyteorder(">")
byteswapped = False
else:
byteswapped = False
d = self.data
byte_data = d.view(type=np.ndarray, dtype=np.ubyte)
cs = self._compute_checksum(byte_data)
# If the data was byteswapped in this method then return it to
# its original little-endian order.
if byteswapped:
d.byteswap(True)
d.dtype = d.dtype.newbyteorder("<")
return cs
else:
# This is the case where the data has not been read from the file
# yet. We can handle that in a generic manner so we do it in the
# base class. The other possibility is that there is no data at
# all. This can also be handled in a generic manner.
return super()._calculate_datasum()
def _summary(self):
summary = super()._summary()
name, ver, classname, length, shape, format, gcount = summary
# Drop the first axis from the shape
if shape:
shape = shape[1:]
if shape and all(shape):
# Update the format
format = self.columns[0].dtype.name
# Update the GCOUNT report
gcount = f"{self._gcount} Groups {self._pcount} Parameters"
return (name, ver, classname, length, shape, format, gcount)
def _par_indices(names):
"""
Given a list of objects, returns a mapping of objects in that list to the
index or indices at which that object was found in the list.
"""
unique = {}
for idx, name in enumerate(names):
# Case insensitive
name = name.upper()
if name in unique:
unique[name].append(idx)
else:
unique[name] = [idx]
return unique
def _unique_parnames(names):
"""
Given a list of parnames, including possible duplicates, returns a new list
of parnames with duplicates prepended by one or more underscores to make
them unique. This is also case insensitive.
"""
upper_names = set()
unique_names = []
for name in names:
name_upper = name.upper()
while name_upper in upper_names:
name = "_" + name
name_upper = "_" + name_upper
unique_names.append(name)
upper_names.add(name_upper)
return unique_names
# File: sdss/marvin - tests/tools/test_queryutils.py
# !usr/bin/env python2
# -*- coding: utf-8 -*-
#
# Licensed under a 3-clause BSD license.
#
# @Author: Brian Cherinka
# @Date: 2017-05-24 18:27:50
# @Last modified by: Brian Cherinka
# @Last Modified time: 2018-11-16 14:35:44
from __future__ import absolute_import, division, print_function
import itertools
import pytest
from imp import reload
import marvin
from marvin import config
from marvin.utils.datamodel.query.base import query_params
@pytest.fixture(scope='function', autouse=True)
def allow_dap(monkeypatch):
monkeypatch.setattr(config, '_allow_DAP_queries', True)
global query_params
reload(marvin.utils.datamodel.query.base)
from marvin.utils.datamodel.query.base import query_params
@pytest.fixture(scope='session')
def data():
groups = ['Metadata', 'Spaxel Metadata', 'Emission Lines', 'Kinematics', 'Spectral Indices', 'NSA Catalog']
spaxelparams = ['spaxelprop.x', 'spaxelprop.y', 'spaxelprop.spx_snr']
specindparams = ['spaxelprop.specindex_d4000']
nsaparams = ['nsa.iauname', 'nsa.ra', 'nsa.dec', 'nsa.z', 'nsa.elpetro_ba', 'nsa.elpetro_mag_g_r',
'nsa.elpetro_absmag_g_r', 'nsa.elpetro_logmass', 'nsa.elpetro_th50_r', 'nsa.sersic_logmass',
'nsa.sersic_ba']
data = {'groups': groups, 'spaxelmeta': spaxelparams, 'nsa': nsaparams, 'spectral': specindparams}
return data
class TestGroupList(object):
def test_list_groups(self, data):
groups = query_params.list_groups()
assert data['groups'] == groups, 'groups should be the same'
@pytest.mark.parametrize('name, result',
[('metadata', 'Metadata'),
('spaxelmeta', 'Spaxel Metadata'),
('emission', 'Emission Lines'),
('kin', 'Kinematics'),
('nsacat', 'NSA Catalog')])
def test_get_group(self, name, result):
group = query_params[name]
assert result == group.name
@pytest.mark.parametrize('name',
[('nsa catalog'),
('NSA catalog'),
('catalognsa'),
('nsa--!catalgue'),
('nsacat')])
def test_different_keys(self, name):
group = query_params[name]
assert group.name == 'NSA Catalog'
@pytest.mark.parametrize('groups', [(None), (['nsa']), (['spectral', 'nsa']),
(['spaxelmeta', 'spectral'])])
def test_list_params(self, data, groups):
params = query_params.list_params(groups=groups)
if not groups:
myparams = data['spaxelmeta'] + data['nsa'] + data['spectral']
else:
paramlist = [data[g] for g in groups]
myparams = list(itertools.chain.from_iterable(paramlist))
assert set(myparams).issubset(set(params))
def test_raises_keyerror(self):
errmsg = "meta is too ambiguous."
with pytest.raises(KeyError) as cm:
group = query_params['meta']
assert cm.type == KeyError
assert errmsg in str(cm.value)
@pytest.mark.parametrize('name, result',
[('coord', None)])
def test_raises_valueerror(self, name, result):
errmsg = "Could not find a match for coord."
with pytest.raises(ValueError) as cm:
group = query_params[name]
assert cm.type == ValueError
assert errmsg in str(cm.value)
class TestParamList(object):
@pytest.mark.parametrize('group, name, count', [('spectral', 'Spectral Indices', 1),
('kin', 'Kinematics', 6)])
def test_get_paramgroup(self, group, name, count):
assert group in query_params
mygroup = query_params[group]
othergroup = group == query_params
assert type(mygroup) == type(othergroup)
assert mygroup.name == othergroup.name
assert mygroup.name == name
assert len(mygroup) == count
@pytest.mark.parametrize('group, param, name', [('spectral', 'd4000', 'specindex_d4000')])
def test_get_param(self, group, param, name):
assert group in query_params
assert param in query_params[group]
myparam = param == query_params[group]
assert myparam.name == name
@pytest.mark.parametrize('group, param, ltype, expname',
[('kin', 'havel', None, 'spaxelprop.emline_gvel_ha_6564'),
('kin', 'havel', 'full', 'spaxelprop.emline_gvel_ha_6564'),
('kin', 'havel', 'short', 'havel'),
('kin', 'havel', 'display', 'Halpha Velocity')])
def test_list_params(self, group, param, ltype, expname):
kwargs = {'name_type': ltype} if ltype else {}
params = query_params[group].list_params(**kwargs)
if not ltype:
assert params[0].full == expname
else:
assert params[0] == expname
@pytest.mark.parametrize('group, params, expset',
[('metadata', ['ra', 'dec'], ['cube.ra', 'cube.dec'])])
def test_list_subset(self, group, params, expset):
subset = query_params[group].list_params('full', subset=params)
assert set(expset) == set(subset)
@pytest.mark.parametrize('groups, params1, params2, expset',
[(('metadata', 'nsa'), ['ra', 'dec'], ['z', 'absmag_g_r'],
['cube.ra', 'cube.dec', 'nsa.z', 'nsa.elpetro_absmag_g_r'])])
def test_join_two_list(self, groups, params1, params2, expset):
group1, group2 = groups
g1 = query_params[group1]
g2 = query_params[group2]
mylist = g1.list_params('full', subset=params1) + g2.list_params('full', subset=params2)
assert set(expset) == set(mylist)
def test_raises_keyerror(self):
errmsg = "emline_gflux is too ambiguous."
with pytest.raises(KeyError) as cm:
param = query_params['emission']['emline_gflux']
assert cm.type == KeyError
assert errmsg in str(cm.value)
class TestQueryParams(object):
@pytest.mark.parametrize('group, param, full, name, short, display',
[('nsa', 'z', 'nsa.z', 'z', 'z', 'Redshift'),
('metadata', 'ra', 'cube.ra', 'ra', 'ra', 'RA'),
('emission', 'gflux_ha', 'spaxelprop.emline_gflux_ha_6564', 'emline_gflux_ha_6564', 'haflux', 'Halpha Flux'),
('spaxelmeta', 'spaxelprop.x', 'spaxelprop.x', 'x', 'x', 'Spaxel X')])
def test_query_param(self, group, param, full, name, short, display):
par = query_params[group][param]
assert par.full == full
assert par.name == name
assert par.short == short
assert par.display == display
def get_qp(name, grp):
qp = grp[name]
return qp
@pytest.mark.parametrize('full',
[('nsa.iauname'), ('nsa.ra'), ('nsa.dec'), ('nsa.z'), ('nsa.elpetro_ba'),
('nsa.elpetro_mag_g_r'), ('nsa.elpetro_absmag_g_r'), ('nsa.elpetro_logmass'),
('nsa.elpetro_th50_r'), ('nsa.sersic_logmass'), ('nsa.sersic_ba')])
def test_nsa_names(self, full):
nsa = query_params['nsa']
assert full in nsa
assert full == nsa[full].full
short = full.split('.')[1]
assert full == nsa[short].full
nospace = short.replace(' ', '')
assert full == nsa[nospace].full
@pytest.mark.parametrize('grp, name', [('emission', 'flux_ha')])
def test_datamodel(self, grp, name):
emgrp = query_params[grp]
param = emgrp[name]
assert param.property is None
@pytest.mark.parametrize('group, name, full',
[('metadata', 'cube.plate', 'cube.plate'),
#('metadata', 'cube.plateifu', 'cube.plateifu'),
#('metadata', 'plate', 'cube.plate'),
('metadata', 'plateifu', 'cube.plateifu'),
#('spaxelmeta', 'x', 'spaxelprop.x'),
#('spaxelmeta', 'spaxelx', 'spaxelprop.x'),
('spaxelmeta', 'spaxelprop.x', 'spaxelprop.x'),
('spaxelmeta', 'spaxelpropx', 'spaxelprop.x'),
('nsa', 'nsa.elpetro_ba', 'nsa.elpetro_ba'),
('nsa', 'nsa.elpetroba', 'nsa.elpetro_ba'),
('nsa', 'elpetro_ba', 'nsa.elpetro_ba')])
def test_problem_names(self, group, name, full):
grp = query_params[group]
qp = grp[name]
assert qp.full == full
|
sdssREPO_NAMEmarvinPATH_START.@marvin_extracted@marvin-main@tests@tools@test_queryutils.py@.PATH_END.py
|
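The tests above rely on `query_params` accepting fuzzy keys ('nsacat', 'NSA catalog', even 'nsa--!catalgue'), raising `KeyError` for ambiguous keys like 'meta', and `ValueError` when nothing matches (e.g. 'coord'). A hypothetical, much-simplified sketch of that lookup behaviour — marvin's real `ParameterGroupList` is more involved; `FuzzyGroups` and its matching rules here are my own:

```python
import difflib

class FuzzyGroups:
    """Toy stand-in for marvin's fuzzy group lookup (assumption, not marvin code)."""

    def __init__(self, names):
        self._names = names

    @staticmethod
    def _norm(text):
        # normalize: lowercase and strip non-alphanumerics
        return "".join(c for c in text.lower() if c.isalnum())

    def __getitem__(self, key):
        norm = self._norm(key)
        matches = [n for n in self._names if norm in self._norm(n)]
        if not matches:
            # fall back to approximate matching for misspellings
            matches = difflib.get_close_matches(key, self._names, n=2, cutoff=0.5)
        if len(matches) > 1:
            raise KeyError("{0} is too ambiguous.".format(key))
        if not matches:
            raise ValueError("Could not find a match for {0}.".format(key))
        return matches[0]

groups = FuzzyGroups(['Metadata', 'Spaxel Metadata', 'NSA Catalog'])
print(groups['nsacat'])  # NSA Catalog
```

With this sketch, 'meta' matches both 'Metadata' and 'Spaxel Metadata' and so raises the ambiguity `KeyError`, mirroring `test_raises_keyerror` above.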
{
"filename": "__init__.py",
"repo_name": "simonsobs/TeleView",
"repo_path": "TeleView_extracted/TeleView-main/tvapi/tvapi/__init__.py",
"type": "Python"
}
|
simonsobsREPO_NAMETeleViewPATH_START.@TeleView_extracted@TeleView-main@tvapi@tvapi@__init__.py@.PATH_END.py
|
|
{
"filename": "_text.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/validators/contour/colorbar/title/_text.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class TextValidator(_plotly_utils.basevalidators.StringValidator):
def __init__(
self, plotly_name="text", parent_name="contour.colorbar.title", **kwargs
):
super(TextValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "colorbars"),
role=kwargs.pop("role", "info"),
**kwargs
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py2@plotly@validators@contour@colorbar@title@_text.py@.PATH_END.py
|
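The plotly validator above uses `kwargs.pop(key, default)` so the subclass can supply defaults ("colorbars", "info") that a caller may still override without the keyword reaching the base class twice. A minimal standalone sketch of that pattern — the `StringValidator` base here is a stand-in, not plotly's actual class:

```python
class StringValidator:
    """Stand-in base class (assumption): just records its arguments."""

    def __init__(self, plotly_name, parent_name, edit_type=None, **kwargs):
        self.plotly_name = plotly_name
        self.parent_name = parent_name
        self.edit_type = edit_type

class TextValidator(StringValidator):
    def __init__(self, plotly_name="text", parent_name="demo.title", **kwargs):
        super().__init__(
            plotly_name=plotly_name,
            parent_name=parent_name,
            # pop() supplies the subclass default while letting callers
            # override it, and removes the key from kwargs so it is not
            # passed to the base class a second time via **kwargs
            edit_type=kwargs.pop("edit_type", "colorbars"),
            **kwargs,
        )

print(TextValidator().edit_type)                  # colorbars
print(TextValidator(edit_type="calc").edit_type)  # calc
```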
{
"filename": "model.py",
"repo_name": "GijsMulders/epos",
"repo_path": "epos_extracted/epos-master/EPOS/plot/model.py",
"type": "Python"
}
|
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats
from matplotlib import gridspec
from . import helpers
from EPOS.population import periodradius
from EPOS.fitfunctions import brokenpowerlaw1D
# Kepler obs
# Kepler pdf mass
# kepler pdf radius
# Kepler pdf inner (M/R ??)
fmt_symbol= {'ls':'', 'marker':'o', 'mew':1, 'mec':'k', 'ms':4,'alpha':0.3}
def panels_mass(epos, Population=False, color='C1'):
f, (ax, axM, axP)= helpers.make_panels(plt)
pfm=epos.pfm
eta= epos.modelpars.get('eta',Init=True)
''' Bins '''
dw= 0.5 # bin width in ln space
xbins= np.exp(np.arange(np.log(pfm['P limits'][0]),np.log(pfm['P limits'][-1])+dw,dw))
ybins= np.exp(np.arange(np.log(pfm['M limits'][0]),np.log(pfm['M limits'][-1])+dw,dw))
''' Posterior '''
if Population:
assert hasattr(epos, 'func')
fname='.pop'
# side panels marginalized over M and P limits
pps, pdf, pdf_X, pdf_Y= periodradius(epos, Init=True)
_, _, pdf_X, _= periodradius(epos, Init=True, ybin=ybins)
_, _, _, pdf_Y= periodradius(epos, Init=True, xbin=xbins)
pps, _ , _, _ = periodradius(epos, Init=True, xbin=xbins, ybin=ybins)
#pdf/= np.max(pdf)
#pdflog= np.log10(pdf) # in %
levels= np.linspace(0,np.max(pdf))
lines= np.array([0.1, 0.5]) * np.max(pdf)
ax.contourf(epos.X_in, epos.Y_in, pdf, cmap='Purples', levels=levels)
#ax.contour(epos.X_in, epos.Y_in, pdf, levels=lines)
# Side panels
#print 'pps model= {}'.format(eta)
scale=dw
axP.plot(epos.MC_xvar, pdf_X*scale, marker='',ls='-',color='purple')
axM.plot(pdf_Y*scale, epos.in_yvar, marker='',ls='-',color='purple')
else:
fname=''
''' plot main panel'''
ax.set_title(epos.name)
#helpers.set_axes(ax, epos, Trim=True)
ax.set_xscale('log')
ax.set_yscale('log')
#ax.set_xlim(epos.mod_xlim)
#ax.set_ylim(epos.mod_ylim)
ax.plot(pfm['P'],pfm['M'], color=color, **fmt_symbol)
xlim= ax.get_xlim()
ylim= ax.get_ylim()
''' Period side panel '''
#axP.yaxis.tick_right()
#axP.yaxis.set_ticks_position('both')
#axP.tick_params(axis='y', which='minor',left='off',right='off')
axP.set_xscale('log')
axP.set_xlim(xlim)
#ax.set_xlabel('Semi-Major Axis [au]')
axP.set_xlabel('Orbital Period [days]')
axP.hist(pfm['P'], color=color, bins=xbins, weights=np.full(pfm['np'], eta/pfm['np']) )
''' Mass side panel'''
#helpers.set_axis_size(axR, epos, Trim=True) #, In= epos.MassRadius)
axM.set_ylabel(r'Planet Mass [M$_\bigoplus$]')
axM.set_yscale('log')
axM.set_ylim(ylim)
axM.hist(pfm['M'], bins=ybins, orientation='horizontal', weights=np.full(pfm['np'], eta/pfm['np']), color=color )
helpers.save(plt, '{}model/input.mass{}'.format(epos.plotdir, fname))
def panels_radius(epos, Population=False, Occurrence=False, Observation=False,
Tag=False, color='C0', clr_obs='C3', Shade=True, Fancy=True, Zoom=False):
f, (ax, axR, axP)= helpers.make_panels(plt, Fancy=Fancy)
pfm=epos.pfm
eta= epos.modelpars.get('eta',Init=True)
title=''
if not 'R' in pfm:
pfm['R'], _= epos.MR(pfm['M'])
if Tag:
# function that return a simulation subset based on the tag
subset={'Fe/H<=0': lambda tag: tag<=0,
'Fe/H>0': lambda tag: tag>0}
''' Bins '''
dwR=0.2 # bin width in ln space
dwP=0.3
if Zoom:
xbins= np.exp(np.arange(np.log(epos.xzoom[0]),np.log(epos.xzoom[-1])+dwP,dwP))
ybins= np.exp(np.arange(np.log(epos.yzoom[0]),np.log(epos.yzoom[-1])+dwR,dwR))
else:
xbins= np.exp(np.arange(np.log(epos.xtrim[0]),np.log(epos.xtrim[-1])+dwP,dwP))
ybins= np.exp(np.arange(np.log(epos.ytrim[0]),np.log(epos.ytrim[-1])+dwR,dwR))
''' Plot model occurrence or observed counts'''
if Observation:
# plot model planets * completeness
weights= eta *epos.occurrence['model']['completeness'] / pfm['ns']
else:
weights=np.full(pfm['np'], eta/pfm['ns'])
if 'draw prob' in pfm and not Tag:
prob= pfm['draw prob'][pfm['ID']]
weights*= prob*pfm['ns'] # system weights sum up to 1
#nonzero= np.where(prob>0, 1., 0.)
#weights*= nonzero*(pfm['np']/nonzero.sum())
# histograms
if Tag:
for key, f in subset.items():  # dict.iteritems() does not exist in Python 3
toplot= f(pfm['tag'])
#weights= eta*epos.occurrence['model']['completeness'] \
# *np.where(toplot,1.,0.)/f(pfm['system tag']).sum()
weights= np.where(toplot,eta,0.)/f(pfm['system tag']).sum()
axP.hist(pfm['P'], bins=xbins, weights=weights, histtype='step', label=key)
axR.hist(pfm['R'], bins=ybins, orientation='horizontal',
weights=weights, histtype='step')
else:
# color have to be 1-element lists ??
axP.hist(pfm['P'], bins=xbins, weights=weights, color=[color])
axR.hist(pfm['R'], bins=ybins, orientation='horizontal', weights=weights,
color=[color])
''' Overplot observations? '''
if Population:
assert hasattr(epos, 'func')
fname='.pop' +('.zoom' if Zoom else '')
title= epos.title
pps, pdf, pdf_X, pdf_Y= periodradius(epos, Init=True)
_, _, pdf_X, _= periodradius(epos, Init=True, ybin=ybins)
_, _, _, pdf_Y= periodradius(epos, Init=True, xbin=xbins)
pps, _ , _, _ = periodradius(epos, Init=True, xbin=xbins, ybin=ybins)
#pdf/= np.max(pdf)
#pdflog= np.log10(pdf) # in %
levels= np.linspace(0,np.max(pdf))
lines= np.array([0.1, 0.5]) * np.max(pdf)
if Shade:
ax.contourf(epos.X_in, epos.Y_in, pdf, cmap='Purples', levels=levels)
#ax.contour(epos.X_in, epos.Y_in, pdf, levels=lines)
# Side panels
#print 'pps model= {}'.format(eta)
axP.plot(epos.MC_xvar, pdf_X*dwP, marker='',ls='-',color='purple')
axR.plot(pdf_Y*dwR, epos.in_yvar, marker='',ls='-',color='purple')
else:
# renormalize
xnorm= axP.get_ylim()[1]/max(pdf_X)
ynorm= axR.get_xlim()[1]/max(pdf_Y)
axP.plot(epos.MC_xvar, pdf_X*xnorm, marker='',ls='-',color=clr_obs)
axR.plot(pdf_Y*ynorm, epos.in_yvar, marker='',ls='-',color=clr_obs)
elif Observation:
fname='.obs'+('.zoom' if Zoom else '')
title= epos.title+': Counts'
ax.plot(epos.obs_xvar, epos.obs_yvar, ls='', marker='.', ms=5.0, color='0.5')
weights= np.full(epos.obs_xvar.size, 1./epos.nstars)
axP.hist(epos.obs_xvar,bins=xbins,weights= weights, histtype='step', color='0.5')
axR.hist(epos.obs_yvar,bins=ybins,weights= weights,
orientation='horizontal', histtype='step', color='0.5')
elif Occurrence:
fname='.occ'+('.zoom' if Zoom else '')
title= epos.title+r': Occurrence, $\eta={:.2g}$'.format(eta)
ax.plot(epos.obs_xvar, epos.obs_yvar, ls='', marker='.', ms=5.0, color='0.5')
cut= epos.obs_yvar > 0.45
weights= 1. / (epos.occurrence['planet']['completeness'][cut]*epos.nstars)
axP.hist(epos.obs_xvar[cut],bins=xbins,weights=weights,histtype='step',color='k')
axR.hist(epos.obs_yvar[cut],bins=ybins,weights= weights,
orientation='horizontal', histtype='step', color='k')
elif Tag:
fname='.tag'
ax.set_title(epos.title+': Tag')
axP.legend(frameon=False, fontsize='small')
# for k, tag in enumerate(subset):
# axP.text(0.98,0.95-0.05*k,tag,ha='right',va='top',color='C1',
# transform=axP.transAxes)
else:
fname=''
if Fancy:
plt.suptitle(title, ha='center')#, x=0.05)
else:
ax.set_title(title)
''' plot main panel'''
#helpers.set_axes(ax, epos, Trim=True)
helpers.set_axes(ax, epos, Trim=True)
if Tag:
for key, f in subset.items():  # dict.iteritems() does not exist in Python 3
todraw= f(pfm['tag'])
ax.plot(pfm['P'][todraw],pfm['R'][todraw], **fmt_symbol)
elif 'draw prob' in pfm:
#fmt_symbol['alpha']= 0.6*pfm['draw prob'][pfm['ID']] # alpha can't be array
todraw= pfm['draw prob'][pfm['ID']]>0
ax.plot(pfm['P'][todraw],pfm['R'][todraw], color=color, **fmt_symbol)
else:
ax.plot(pfm['P'],pfm['R'], color=color, **fmt_symbol)
''' Period side panel '''
#axP.yaxis.tick_right()
#axP.yaxis.set_ticks_position('both')
#axP.tick_params(axis='y', which='minor',left='off',right='off')
helpers.set_axis_distance(axP, epos, Trim=True)
''' Mass side panel'''
helpers.set_axis_size(axR, epos, Trim=True) #, In= epos.MassRadius)
helpers.save(plt, '{}model/input.radius{}'.format(epos.plotdir, fname))
def inclination(epos, color='C0', clr_obs='C3', imin=1e-2, Simple=False):
pfm=epos.pfm
gs = gridspec.GridSpec(1, 2,
width_ratios=[20,6],
)
f= plt.figure()
f.subplots_adjust(wspace=0)
ax = plt.subplot(gs[0, 0])
axh = plt.subplot(gs[0, 1])
axh.tick_params(direction='in', which='both', left=False, right=True, labelleft=False)
axh.yaxis.set_label_position('right')
axh.axis('off')
''' Inc-sma'''
ax.set_title('Inclination {}'.format(epos.title))
ax.set_xlabel('Semi-Major Axis [au]')
#ax.set_ylabel(r'Planet Mass [M$_\bigoplus$]')
ax.set_ylabel('Mutual Inclination [degree]')
#ax.set_xlim(epos.mod_xlim)
ax.set_ylim(imin,90)
ax.set_xscale('log')
ax.set_yscale('log')
ax.axhline(np.median(pfm['inc']), ls='--', color=color)
ax.plot(pfm['sma'], pfm['inc'], color=color, **fmt_symbol)
''' Histogram'''
axh.set_yscale('log')
axh.set_ylim(imin,90)
inc= np.logspace(np.log10(imin),2)
axh.hist(pfm['inc'], bins=inc, orientation='horizontal', color=color)
#Model best-fit
xmax= axh.get_xlim()[-1]
if Simple:
scale=2.
pdf= scipy.stats.rayleigh(scale=scale).pdf(inc)
pdf*= xmax/max(pdf)
axh.plot(pdf, inc, ls='-', color=clr_obs)
ax.axhline(scale, color=clr_obs, ls='--')
else:
for scale, ls in zip([1,2,2.7],[':','--',':']):
pdf= scipy.stats.rayleigh(scale=scale).pdf(inc)
#pdf/= np.log(inc) #log scale?
pdf*= xmax/max(pdf)
axh.plot(pdf, inc, ls=ls, color=clr_obs)
ax.axhline(scale, color=clr_obs, ls=ls)
helpers.save(plt, epos.plotdir+'model/inc-sma')
def periodratio(epos, color='C0', clr_obs='C3', Simple=False, Fancy=True):
pfm=epos.pfm
if Fancy:
f, (ax, axR, axP)= helpers.make_panels(plt, Fancy=True)
f.suptitle('Multi-planets {}'.format(epos.title))
else:
f, (ax, axR, axP)= helpers.make_panels_right(plt)
ax.set_title('Input Multi-planets {}'.format(epos.title))
''' Inc-sma'''
ax.set_xlabel('Semi-Major Axis [au]')
ax.set_ylabel('Period ratio')
#ax.set_xlim(epos.mod_xlim)
ax.set_ylim(0.9,10)
ax.set_xscale('log')
ax.set_yscale('log')
#ax.axhline(np.median(pfm['inc']), ls='--')
single= np.isnan(pfm['dP'])  # '== np.nan' is always False; use np.isnan
inner= pfm['dP'] == 1
nth= pfm['dP']> 1
# print pfm['dP'][single]
# print pfm['dP'][nth]
# print pfm['dP'][inner]
# print pfm['dP'][nth].size, pfm['np'] # ok
ax.plot(pfm['sma'][single], pfm['dP'][single], color='0.7', **fmt_symbol)
ax.plot(pfm['sma'][nth], pfm['dP'][nth], color=color, **fmt_symbol)
ax.plot(pfm['sma'][inner], pfm['dP'][inner], color='C1', **fmt_symbol)
''' Histogram Period Ratio'''
axR.set_yscale('log')
axR.set_ylim(0.9,10)
#dP= np.logspace(0,1,15)
dP= np.logspace(0,1, 25)
axR.hist(pfm['dP'][nth], bins=dP, orientation='horizontal', color=color)
if not Simple:
ax.axhline(np.median(pfm['dP'][nth]), ls='--', color=color, zorder=-1)
#Model best-fit
dP= np.logspace(0,1)
xmax= axR.get_xlim()[-1]
with np.errstate(divide='ignore'):
Dgrid= np.log10(2.*(dP**(2./3.)-1.)/(dP**(2./3.)+1.))
Dgrid[0]= -2
if Simple:
scale= -0.37
pdf= scipy.stats.norm(scale,0.19).pdf(Dgrid)
pdf*= xmax/max(pdf)
axR.plot(pdf, dP, ls='-', color=clr_obs)
pscale= np.interp(scale,Dgrid,dP)
#ax.axhline(pscale, color=clr_obs, ls='--')
else:
for scale, ls in zip([-0.30,-0.37,-0.41],[':','--',':']):
pdf= scipy.stats.norm(scale,0.19).pdf(Dgrid)
pdf*= xmax/max(pdf)
axR.plot(pdf, dP, ls=ls, color='purple')
pscale= np.interp(scale,Dgrid,dP)
ax.axhline(pscale, color='purple', ls=ls, zorder=-1)
''' Histogram Inner Planet'''
axP.set_xscale('log')
axP.set_xlim(ax.get_xlim())
sma= np.geomspace(*ax.get_xlim(), num=25)
axP.hist(pfm['sma'][inner], bins=sma, color='C1', label='Inner Planet')
ymax= axP.get_ylim()[-1]
P= np.geomspace(0.5,730)
smaP= (P/365.25)**(1./1.5)
pdf= brokenpowerlaw1D(P, 10,1.5,-0.8)
pdf*= ymax/max(pdf)
#axP.legend(frameon=False)
if Fancy:
ax.text(0.02,0.98,'Inner Planet',ha='left',va='top',color='C1',
transform=ax.transAxes)
ax.text(0.02,0.98,'\nOuter Planet(s)',ha='left',va='top',color=color,
transform=ax.transAxes)
else:
axP.text(0.98,0.95,'Inner Planet',ha='right',va='top',color='C1',
transform=axP.transAxes)
axP.plot(smaP, pdf, ls='-', color=clr_obs)
#helpers.set_axes(ax, epos, Trim=True)
#helpers.set_axis_distance(axP, epos, Trim=True)
#helpers.set_axis_size(axR, epos, Trim=True) #, In= epos.MassRadius)
helpers.save(plt, epos.plotdir+'model/Pratio-sma')
def periodratio_size(epos, color='C1'):
pfm=epos.pfm
f, (ax, axR, axP)= helpers.make_panels_right(plt)
''' Inc-sma'''
ax.set_title('Input Multi-planets {}'.format(epos.name))
axP.set_xlabel('Period ratio')
#ax.set_ylabel(r'Size [$R_\bigoplus$]')
ax.set_xlim(0.9,10)
#ax.set_ylim(0.3,20)
ax.set_xscale('log')
#ax.set_yscale('log')
helpers.set_axis_size(ax, epos, Trim=True)
''' grids '''
dP= np.logspace(0,1)
dP_bins= np.logspace(0,1,15)
# exoplanet data + hist
ax.plot(epos.multi['Pratio'], epos.multi['Rpair'], ls='', marker='.', ms=5.0, color='0.5')
#ax.axhline(np.median(pfm['inc']), ls='--')
single= np.isnan(pfm['dP'])  # '== np.nan' is always False; use np.isnan
inner= pfm['dP'] == 1
nth= pfm['dP']> 1
# print pfm['dP'][single]
# print pfm['dP'][nth]
# print pfm['dP'][inner]
# print pfm['dP'][nth].size, pfm['np'] # ok
ax.plot(pfm['dP'][single], pfm['R'][single], color='0.7', **fmt_symbol)
ax.plot(pfm['dP'][nth], pfm['R'][nth], color=color, **fmt_symbol)
ax.plot(pfm['dP'][inner], pfm['R'][inner], color='C1', **fmt_symbol)
''' Histogram Period Ratio'''
axP.set_xscale('log')
axP.set_xlim(0.9,10)
axP.hist(pfm['dP'][nth], bins=dP_bins, color=color)
ax.axvline(np.median(pfm['dP'][nth]), ls='--', color=color)
''' Model best-fit '''
xmax= axP.get_ylim()[-1]
with np.errstate(divide='ignore'):
Dgrid= np.log10(2.*(dP**(2./3.)-1.)/(dP**(2./3.)+1.))
Dgrid[0]= -2
for scale, ls in zip([-0.30,-0.37,-0.41],[':','--',':']):
pdf= scipy.stats.norm(scale,0.19).pdf(Dgrid)
pdf*= xmax/max(pdf)
axP.plot(dP, pdf, ls=ls, color='purple')
pscale= np.interp(scale,Dgrid,dP)
ax.axvline(pscale, color='purple', ls=ls)
''' Raw data'''
scale= 1.* pfm['dP'][nth].size/ epos.multi['Pratio'].size
weights= np.full(epos.multi['Pratio'].size, scale)
axP.hist(epos.multi['Pratio'], bins=dP_bins, weights=weights, histtype='step', color='0.5', zorder=1)
''' Histogram Planet Size'''
axR.set_yscale('log')
axR.set_ylim(ax.get_ylim())
radius= np.geomspace(*ax.get_ylim(), num=15)
axR.hist(pfm['R'][inner], bins=radius, orientation='horizontal', color='C1', label='Inner Planet')
#helpers.set_axes(ax, epos, Trim=True)
#helpers.set_axis_distance(axP, epos, Trim=True)
''' Linear regression '''
try:
# data
slope, intercept, r_value, p_value, std_err= \
scipy.stats.linregress(np.log(epos.multi['Pratio']), np.log(epos.multi['Rpair']))
ax.plot(dP, np.exp(intercept+slope*np.log(dP)), label='r={:.2f}'.format(r_value),
marker='', ls='-', color='0.5')
#print slope, intercept, r_value, p_value
slope, intercept, r_value, p_value, std_err= \
scipy.stats.linregress(np.log(pfm['dP'][nth]), np.log(pfm['R'][nth]))
ax.plot(dP, np.exp(intercept+slope*np.log(dP)), label='r={:.2f}'.format(r_value),
marker='', ls='-', color=color)
ax.legend(loc='lower right')
except Exception as e: print(e)
helpers.save(plt, epos.plotdir+'model/Pratio-size')
def periodratio_inc(epos, color='C1', imin=1e-2):
pfm=epos.pfm
f, (ax, axy, axx)= helpers.make_panels(plt, Fancy=True)
''' Inc-sma'''
ax.set_title('Input Multi-planets {}'.format(epos.name))
ax.set_xlabel('Period ratio')
ax.set_ylabel('Inclination [degree]')
ax.set_xlim(0.9,10)
ax.set_ylim(imin,90)
ax.set_xscale('log')
ax.set_yscale('log')
''' grids '''
dP= np.logspace(0,1)
dP_bins= np.logspace(0,1,15)
#ax.axhline(np.median(pfm['inc']), ls='--')
single= np.isnan(pfm['dP'])  # '== np.nan' is always False; use np.isnan
inner= pfm['dP'] == 1
nth= pfm['dP']> 1
# exoplanet data + hist
ax.plot(pfm['dP'][nth], epos.pfm['inc'][nth], color=color, **fmt_symbol)
# print pfm['dP'][single]
# print pfm['dP'][nth]
# print pfm['dP'][inner]
# print pfm['dP'][nth].size, pfm['np'] # ok
#ax.plot(pfm['dP'][single], pfm['R'][single], color='0.7', **fmt_symbol)
#ax.plot(pfm['dP'][nth], pfm['R'][nth], color=color, **fmt_symbol)
#ax.plot(pfm['dP'][inner], pfm['R'][inner], color='C1', **fmt_symbol)
''' Histogram Period Ratio'''
axx.hist(pfm['dP'][nth], bins=dP_bins, color=color)
ax.axvline(np.median(pfm['dP'][nth]), ls='--', color=color)
ax.axvline(2, ls='--', color='b', zorder=0)
ax.axvline(1.5, ls='--', color='b', zorder=0)
''' Histogram Inclination'''
#axy.set_yscale('log')
#axy.set_ylim(ax.get_ylim())
inc= np.logspace(np.log10(imin),2)
axy.hist(pfm['inc'], bins=inc, orientation='horizontal', color=color)
#helpers.set_axes(ax, epos, Trim=True)
#helpers.set_axis_distance(axP, epos, Trim=True)
helpers.save(plt, epos.plotdir+'model/Pratio-inc')
def massratio(epos, color='C0', clr_obs='C3', Fancy=True):
pfm=epos.pfm
if Fancy:
f, (ax, axR, axP)= helpers.make_panels(plt, Fancy=True)
f.suptitle('Multi-planets {}'.format(epos.title))
else:
f, (ax, axR, axP)= helpers.make_panels_right(plt)
ax.set_title('Input Multi-planets {}'.format(epos.title))
''' Inc-sma'''
ax.set_xlabel('Semi-Major Axis [au]')
ax.set_ylabel('Mass ratio')
#ax.set_xlim(epos.mod_xlim)
ax.set_ylim(0.1,10)
ax.set_xscale('log')
ax.set_yscale('log')
#ax.axhline(np.median(pfm['inc']), ls='--')
single= np.isnan(pfm['dP'])  # '== np.nan' is always False; use np.isnan
inner= pfm['dP'] == 1
nth= pfm['dP']> 1
# print pfm['dP'][single]
# print pfm['dP'][nth]
# print pfm['dP'][inner]
# print pfm['dP'][nth].size, pfm['np'] # ok
ax.plot(pfm['sma'][single], pfm['dM'][single], color='0.7', **fmt_symbol)
ax.plot(pfm['sma'][nth], pfm['dM'][nth], color=color, **fmt_symbol)
ax.plot(pfm['sma'][inner], pfm['dM'][inner], color='C1', **fmt_symbol)
''' Histogram Period Ratio'''
axR.set_yscale('log')
axR.set_ylim(0.1,10)
dM= np.logspace(-1,1, 25)
axR.hist(pfm['dM'][nth], bins=dM, orientation='horizontal', color=color)
#Model best-fit
dM= np.logspace(-1,1)
xmax= axR.get_xlim()[-1]
pdf= scipy.stats.norm(loc=0, scale=0.25).pdf(np.log10(dM)) # 0.25--0.5 for diff MR indices
pdf*= xmax/max(pdf)
axR.plot(pdf, dM, ls='-', color=clr_obs)
''' Histogram Inner Planet'''
axP.set_xscale('log')
axP.set_xlim(ax.get_xlim())
sma= np.geomspace(*ax.get_xlim(), num=25)
axP.hist(pfm['sma'][inner], bins=sma, color='C1', label='Inner Planet')
ymax= axP.get_ylim()[-1]
P= np.geomspace(0.5,730)
smaP= (P/365.25)**(1./1.5)
pdf= brokenpowerlaw1D(P, 10,1.5,-0.8)
pdf*= ymax/max(pdf)
#axP.legend(frameon=False)
if Fancy:
ax.text(0.02,0.98,'Inner Planet',ha='left',va='top',color='C1',
transform=ax.transAxes)
ax.text(0.02,0.98,'\nOuter Planet(s)',ha='left',va='top',color=color,
transform=ax.transAxes)
else:
axP.text(0.98,0.95,'Inner Planet',ha='right',va='top',color='C1',
transform=axP.transAxes)
axP.plot(smaP, pdf, ls='-', color=clr_obs)
#helpers.set_axes(ax, epos, Trim=True)
#helpers.set_axis_distance(axP, epos, Trim=True)
#helpers.set_axis_size(axR, epos, Trim=True) #, In= epos.MassRadius)
helpers.save(plt, epos.plotdir+'model/Mratio-sma')
def radiusratio(epos, color='C0', clr_obs='C3', Fancy=True):
pfm=epos.pfm
if Fancy:
f, (ax, axR, axP)= helpers.make_panels(plt, Fancy=True)
f.suptitle('Multi-planets {}'.format(epos.title))
else:
f, (ax, axR, axP)= helpers.make_panels_right(plt)
ax.set_title('Input Multi-planets {}'.format(epos.title))
''' Inc-sma'''
ax.set_xlabel('Semi-Major Axis [au]')
ax.set_ylabel('Radius ratio')
#ax.set_xlim(epos.mod_xlim)
ax.set_ylim(0.1,10)
ax.set_xscale('log')
ax.set_yscale('log')
#ax.axhline(np.median(pfm['inc']), ls='--')
single= np.isnan(pfm['dP'])  # '== np.nan' is always False; use np.isnan
inner= pfm['dP'] == 1
nth= pfm['dP']> 1
# print pfm['dP'][single]
# print pfm['dP'][nth]
# print pfm['dP'][inner]
# print pfm['dP'][nth].size, pfm['np'] # ok
ax.plot(pfm['sma'][single], pfm['dR'][single], color='0.7', **fmt_symbol)
ax.plot(pfm['sma'][nth], pfm['dR'][nth], color=color, **fmt_symbol)
ax.plot(pfm['sma'][inner], pfm['dR'][inner], color='C1', **fmt_symbol)
''' Histogram Period Ratio'''
axR.set_yscale('log')
axR.set_ylim(0.1,10)
dR= np.logspace(-1,1, 25)
axR.hist(pfm['dR'][nth], bins=dR, orientation='horizontal', color=color)
#Model best-fit
dR= np.logspace(-1,1)
xmax= axR.get_xlim()[-1]
pdf= scipy.stats.norm(loc=0, scale=0.15).pdf(np.log10(dR))
pdf*= xmax/max(pdf)
axR.plot(pdf, dR, ls='-', color=clr_obs)
#ax.axhline(pscale, color=clr_obs, ls='--')
''' Histogram Inner Planet'''
axP.set_xscale('log')
axP.set_xlim(ax.get_xlim())
sma= np.geomspace(*ax.get_xlim(), num=25)
axP.hist(pfm['sma'][inner], bins=sma, color='C1', label='Inner Planet')
ymax= axP.get_ylim()[-1]
P= np.geomspace(0.5,730)
smaP= (P/365.25)**(1./1.5)
pdf= brokenpowerlaw1D(P, 10,1.5,-0.8)
pdf*= ymax/max(pdf)
#axP.legend(frameon=False)
if Fancy:
ax.text(0.02,0.98,'Inner Planet',ha='left',va='top',color='C1',
transform=ax.transAxes)
ax.text(0.02,0.98,'\nOuter Planet(s)',ha='left',va='top',color=color,
transform=ax.transAxes)
else:
axP.text(0.98,0.95,'Inner Planet',ha='right',va='top',color='C1',
transform=axP.transAxes)
axP.plot(smaP, pdf, ls='-', color=clr_obs)
#helpers.set_axes(ax, epos, Trim=True)
#helpers.set_axis_distance(axP, epos, Trim=True)
#helpers.set_axis_size(axR, epos, Trim=True) #, In= epos.MassRadius)
helpers.save(plt, epos.plotdir+'model/Rratio-sma')
def multiplicity(epos, color='C1', Planets=False, Kepler=False):
# plot multiplicity
f, ax = plt.subplots()
ax.set_title('Input Multi-planets {}'.format(epos.name))
ax.set_xlabel('planets per system')
ax.set_ylabel('number of planets' if Planets else 'number of systems')
if Kepler:
ax.set_xlim(0.5, 7.5)
#else:
#ax.set_xlim(0.5, 10.5)
''' Model planets '''
_ , counts= np.unique(epos.pfm['ID'],return_counts=True)
#bins= np.arange(10)
#ax.hist(counts, bins=bins, align='left', color=color)
if Kepler:
bins= np.arange(1,9)
bins[-1]=1000
else:
bins= np.arange(20)
hist, bin_edges = np.histogram(counts, bins=bins)
ax.bar(bins[:-1], hist, width=1, color=color)
''' Kepler '''
# intrinsic= np.zeros_like(hist)
# intrinsic[0]= 0.5*epos.pfm['ns']
# intrinsic[-1]= 0.5*epos.pfm['ns']
# #intrinsic=np.arange(len(hist))
# ax.bar(bins[:-1], intrinsic, width=1, color='', ec='k')
#ax.plot(, drawstyle='steps-mid',
# ls='--', marker='', color='gray', label='Kepler all')
ax.legend(loc='upper right', shadow=False, prop={'size':14}, numpoints=1)
helpers.save(plt, epos.plotdir+'model/multi')
def period(epos, Population=False, Occurrence=False, Observation=False, Tag=False,
color='C1', Zoom=False, Rbin=[1.,4.]):
''' Model occurrence as function of orbital period'''
f, ax = plt.subplots()
helpers.set_axis_distance(ax, epos, Trim=True)
ax.set_xlim(0.5,200)
ax.set_ylabel(r'Planet Occurrence {:.2g}-{:.2g} $R_\bigoplus$'.format(*Rbin))
ax.set_yscale('log')
pfm=epos.pfm
eta= epos.modelpars.get('eta',Init=True)
if not 'R' in pfm:
pfm['R'], _= epos.MR(pfm['M'])
if Tag:
# function that return a simulation subset based on the tag
subset={'Fe/H<=0': lambda tag: tag<=0,
'Fe/H>0': lambda tag: tag>0}
''' Bins '''
#xbins= np.geomspace(1,1000,10)/(10.**(1./3.))
#xbins= np.geomspace(0.5,200,15)
dwP=0.3 # bin width in ln space
if Zoom:
xbins= np.exp(np.arange(np.log(epos.xzoom[0]),np.log(epos.xzoom[-1])+dwP,dwP))
else:
xbins= np.exp(np.arange(np.log(epos.xtrim[0]),np.log(epos.xtrim[-1])+dwP,dwP))
''' Plot model occurrence or observed counts'''
weights=np.full(pfm['np'], eta/pfm['ns'])
if 'draw prob' in pfm and not Tag:
prob= pfm['draw prob'][pfm['ID']]
weights*= prob*pfm['ns'] # system weights sum up to 1
# histograms
if Tag:
for key, f in subset.items():  # dict.iteritems() does not exist in Python 3
toplot= f(pfm['tag']) #& (pfm['R']>1.)
weights= np.where(toplot,eta,0.)/f(pfm['system tag']).sum()
ax.hist(pfm['P'], bins=xbins, weights=weights, histtype='step', label=key,
color='#88CCEE' if key=='Fe/H<=0' else '#332288')
else:
ax.hist(pfm['P'], bins=xbins, weights=weights, color=color, histtype='step')
if Tag:
fname='.tag'
ax.set_title(epos.name+': Disk Fe/H')
ax.legend(fontsize='small', loc='lower right')
else:
fname=''
if Occurrence:
cut= (Rbin[0]<epos.obs_yvar) & (epos.obs_yvar<Rbin[-1])
weights= epos.occurrence['planet']['occ'][cut]
ax.hist(epos.obs_xvar[cut],bins=xbins,weights=weights,histtype='step',color='k')
helpers.save(plt, '{}occurrence/model.period{}'.format(epos.plotdir, fname))
# elif Observation: # counts
else:
helpers.save(plt, '{}model/input.period{}'.format(epos.plotdir, fname))
def HansenMurray(epos, color='purple'):
''' figures not included '''
import matplotlib.image as mpimg
pfm= epos.pfm
print(pfm.keys())
''' Hansen & Murray 2013 Figure 1 '''
f, ax = plt.subplots()
ax.set_xlabel('$N_p$')
ax.set_ylabel('counts')
ax.set_xlim([0.5,10.5])
ax.set_ylim([0,35])
fig1 = mpimg.imread('HM13figs/fig1_cut.png')
ax.imshow(fig1, aspect='auto', extent=[0.5,10.5,0,35])
_ , counts= np.unique(pfm['ID'],return_counts=True)
ax.hist(counts, bins=np.arange(0,11), align='left',
color='purple', alpha=0.7, weights=np.full_like(counts, 2) )
helpers.save(plt, '{}HM13/fig1'.format(epos.plotdir), dpi=300)
''' Hansen & Murray 2013 Figure 5 '''
f, ax = plt.subplots()
ax.set_xlabel('$P_2/P_1$')
ax.set_ylabel('counts')
ax.set_xlim([1,4])
ax.set_ylim([0,32])
#ax.set_xscale('log')
#ax.set_yscale('log')
fig1 = mpimg.imread('HM13figs/fig5_cut.png')
ax.imshow(fig1, aspect='auto', extent=[1,4,0,32])
dP= pfm['dP'][pfm['dP']>1]
ax.hist(dP, bins=np.linspace(1.+(1.5/37.),4,38), align='mid',
color='purple', alpha=0.7, weights=np.full_like(dP, 2))
helpers.save(plt, '{}HM13/fig5'.format(epos.plotdir), dpi=300)
''' Hansen & Murray 2013 Figure 9 '''
f, ax = plt.subplots()
ax.set_xlabel('Eccentricity')
ax.set_ylabel('counts')
ax.set_xlim([0,0.24])
ax.set_ylim([0,35])
#ax.set_xscale('log')
#ax.set_yscale('log')
fig1 = mpimg.imread('HM13figs/fig9_cut.png')
ax.imshow(fig1, aspect='auto', extent=[0,0.24,0,35])
ax.hist(pfm['ecc'], bins=np.linspace(0.01,0.25,25), align='left',
color='purple', alpha=0.7, weights=np.full_like(pfm['ecc'], 2))
helpers.save(plt, '{}HM13/fig9'.format(epos.plotdir), dpi=300)
''' Hansen & Murray 2013 Figure 10 '''
f, ax = plt.subplots()
ax.set_xlabel('sma [au]')
ax.set_ylabel('Inclination')
ax.set_xlim([0.04,1.2])
ax.set_ylim([0,33])
#ax.set_xscale('log')
fig10 = mpimg.imread('HM13figs/fig10_cut.png')
ax.imshow(fig10, aspect='auto', extent=[0.04,1.2,0,33])
ax.get_xaxis().set_ticks([])
axlog=ax.twiny()
axlog.set_xlim([0.04,1.2])
axlog.set_xscale('log')
axlog.plot(pfm['sma'], pfm['inc'], color='purple', alpha=0.7,
marker='o', ls='')
helpers.save(plt, '{}HM13/fig10'.format(epos.plotdir), dpi=300)
''' Hansen & Murray 2013 Figure 11 '''
f, (axa, axb) = plt.subplots(2)
fig11a = mpimg.imread('HM13figs/fig11a_cut.png')
fig11b = mpimg.imread('HM13figs/fig11b_cut.png')
#ax[0].set_xlabel('Inclination')
axa.set_ylabel('counts')
axa.set_xlim([0,24.5])
axa.set_ylim([0,20])
axb.set_xlabel('Inclination')
axb.set_ylabel('counts')
axb.set_xlim([0,14.5])
axb.set_ylim([0,35])
axa.imshow(fig11a, aspect='auto', extent=[0,24.5,0,20])
axb.imshow(fig11b, aspect='auto', extent=[0,14.5,0,35])
inner= pfm['inc'][pfm['sma']<0.1]
outer= pfm['inc'][(pfm['sma']>0.1) & (pfm['sma']<1.0)]
axa.hist(inner, bins=np.linspace(0,25,26), align='mid',
color='purple', alpha=0.7, weights=np.full_like(inner, 2))
axb.hist(outer, bins=np.linspace(0,15,50), align='mid',
color='purple', alpha=0.7, weights=np.full_like(outer, 2))
helpers.save(plt, '{}HM13/fig11'.format(epos.plotdir), dpi=300)
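The histograms above all pass `weights=np.full_like(x, 2)`, which makes every sample count double when binned. A minimal, self-contained sketch of that trick (hypothetical data, not from EPOS):

```python
import numpy as np

# weights=np.full_like(x, 2) doubles each sample's contribution,
# so every bin count comes out multiplied by 2
x = np.array([0.5, 1.5, 1.5, 2.5])
counts, edges = np.histogram(x, bins=np.arange(0, 4), weights=np.full_like(x, 2))
# counts -> [2., 4., 2.]
```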
{
    "filename": "model.py",
    "repo_name": "GijsMulders/epos",
    "repo_path": "epos_extracted/epos-master/EPOS/plot/model.py",
    "type": "Python"
}
{
"filename": "__init__.py",
"repo_name": "GeminiDRSoftware/DRAGONS",
"repo_path": "DRAGONS_extracted/DRAGONS-master/gemini_instruments/ghost/tests/__init__.py",
"type": "Python"
}
{
"filename": "_side.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/validators/scatter3d/line/colorbar/title/_side.py",
"type": "Python"
}
import _plotly_utils.basevalidators
class SideValidator(_plotly_utils.basevalidators.EnumeratedValidator):
def __init__(
self, plotly_name="side", parent_name="scatter3d.line.colorbar.title", **kwargs
):
super(SideValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "calc"),
role=kwargs.pop("role", "style"),
values=kwargs.pop("values", ["right", "top", "bottom"]),
**kwargs
)
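`SideValidator` above delegates all of its behavior to plotly's `EnumeratedValidator`. As a rough, hypothetical sketch (not plotly's actual implementation) of what enumerated validation amounts to:

```python
class MiniEnumValidator:
    """Toy stand-in for an enumerated validator: accept only whitelisted values."""

    def __init__(self, plotly_name, values):
        self.plotly_name = plotly_name
        self.values = list(values)

    def validate(self, value):
        # reject anything outside the allowed set, echo the value back otherwise
        if value not in self.values:
            raise ValueError(f"{self.plotly_name}: {value!r} not one of {self.values}")
        return value
```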
{
"filename": "alignment_pixels.py",
"repo_name": "adolliou/euispice_coreg",
"repo_path": "euispice_coreg_extracted/euispice_coreg-main/euispice_coreg/pxlshift/alignment_pixels.py",
"type": "Python"
}
import warnings
import numpy as np
from astropy.io import fits
from ..utils import rectify
from ..utils import c_correlate
from tqdm import tqdm
from ..utils import matrix_transform
import astropy.units as u
from astropy.time import Time
from ..utils import Util
class AlignmentPixels:
def __init__(self, large_fov_known_pointing: str, window_large: int, small_fov_to_correct: str,
window_small: int, ):
with fits.open(large_fov_known_pointing) as hdul_large:
hdu_large = hdul_large[window_large]
self.hdr_large = hdu_large.header.copy()
self.data_large = np.array(hdu_large.data.copy(), dtype=np.float64)
hdul_large.close()
with fits.open(small_fov_to_correct) as hdul_small:
hdu_small = hdul_small[window_small]
self.hdr_small = hdu_small.header.copy()
self.data_small = np.array(hdu_small.data.copy(), dtype=np.float64)
hdul_small.close()
self.slc_small_ref = None
self.x_large = None
self.y_large = None
def _iteration_along_dy(self, dx, ):
results = np.zeros((len(self.lag_dy)), dtype=np.float64)
for jj, dy in enumerate(tqdm(self.lag_dy, desc="dx = %i" % dx)):
results[jj] = self._step(dx=dx, dy=dy)
return results
def _step(self, dx, dy):
slc = (slice(self.slc_small_ref[0].start + dy,
self.slc_small_ref[0].stop + dy),
slice(self.slc_small_ref[1].start + dx,
self.slc_small_ref[1].stop + dx)
)
self._check_boundaries(slc, self.data_large.shape)
if (self.data_large[slc[0], slc[1]].shape != self.data_small_rotated.shape):
raise ValueError("shapes not similar")
lag = [0]
is_nan = np.array(
(np.isnan(self.data_large[slc[0], slc[1]].ravel())) | (np.isnan(self.data_small_rotated.ravel())),
dtype="bool")
return c_correlate.c_correlate(s_2=self.data_large[slc[0], slc[1]].ravel()[~is_nan],
s_1=self.data_small_rotated.ravel()[~is_nan],
lags=lag)
def find_best_parameters(self, lag_dx: np.array, lag_dy: np.array, lag_drot: np.array, unit_rot="degree",
shift_solar_rotation_dx_large=False):
if shift_solar_rotation_dx_large:
self._shift_large_fov()
self._sub_resolution_large_fov()
self._initialise_slice_corresponding_to_small()
corr = np.zeros((len(lag_dx), len(lag_dy), len(lag_drot)), dtype=np.float64)
self.lag_dx = lag_dx
self.lag_dy = lag_dy
self.lag_drot = lag_drot
self.unit_rot = unit_rot
self.data_small_rotated = self.data_small.copy()
for kk, drot in enumerate(lag_drot):
if drot != 0:
xx, yy = np.meshgrid(np.arange(self.data_small.shape[1]),
np.arange(self.data_small.shape[0]),
)
nx, ny = matrix_transform.MatrixTransform.polar_transform(xx, yy, theta=drot, units=self.unit_rot)
self.data_small_rotated = rectify.interpol2d(self.data_small.copy(), x=nx, y=ny, fill=-32762, order=1)
self.data_small_rotated[self.data_small_rotated == -32762] = np.nan
else:
self.data_small_rotated = self.data_small.copy()
for ii, dx in enumerate(lag_dx):
corr[ii, :, kk] = self._iteration_along_dy(dx=dx, )
return corr
def _shift_large_fov(self):
xx, yy = np.meshgrid(np.arange(self.data_large.shape[1]),
np.arange(self.data_large.shape[0]), )
data_large = Util.AlignCommonUtil.interpol2d(self.data_large, xx, yy, fill=-32762, order=1)
data_large = np.where(data_large == -32762, np.nan, data_large)
dcrval = self._return_shift_large_fov_solar_rotation()
if "CROTA" in self.hdr_large:
warnings.warn("CROTA must be in degree", Warning)
theta = np.deg2rad(self.hdr_large["CROTA"])
dx = (dcrval.to(self.hdr_large["CUNIT1"]).value / self.hdr_large["CDELT1"]) * np.cos(-theta)
dy = (dcrval.to(self.hdr_large["CUNIT2"]).value / self.hdr_large["CDELT2"]) * np.sin(-theta)
else:
dx = dcrval.to(self.hdr_large["CUNIT1"]).value / self.hdr_large["CDELT1"]
dy = 0
mat = matrix_transform.MatrixTransform.displacement_matrix(dx=dx, dy=dy)
nx, ny = matrix_transform.MatrixTransform.linear_transform(xx, yy, matrix=mat)
data_large = Util.AlignCommonUtil.interpol2d(data_large, nx, ny, fill=-32762, order=1)
# norm = ImageNormalize(stretch=LogStretch(20), vmin=1, vmax=1000)
# PlotFunctions.plot_fov(self.data_large, norm=norm)
self.data_large = np.where(data_large == -32762, np.nan, data_large)
# PlotFunctions.plot_fov(self.data_large, norm=norm)
print(f"corrected solar rotation on FSI on CRVAL1: {dx=}, {dy=}")
def _return_shift_large_fov_solar_rotation(self):
band = self.hdr_large['WAVELNTH']
B0 = np.deg2rad(self.hdr_large['SOLAR_B0'])
omega_car = np.deg2rad(360 / 25.38 / 86400) # rad s-1
if band == 174:
band = 171
omega = omega_car + Util.AlignEUIUtil.diff_rot(B0, f'EIT {band}') # rad s-1
# helioprojective rotation rate for s/c
Rsun = self.hdr_large['RSUN_REF'] # m
Dsun = self.hdr_large['DSUN_OBS'] # m
phi = omega * Rsun / (Dsun - Rsun) # rad s-1
phi = np.rad2deg(phi) * 3600 # arcsec s-1
time_spice = Time(self.hdr_small["DATE-AVG"])
time_fsi = Time(self.hdr_large["DATE-AVG"])
dt = (time_spice - time_fsi).to("s").value
return u.Quantity(dt * phi, "arcsec")
def _sub_resolution_large_fov(self):
cdelt1_conv = u.Quantity(self.hdr_small["CDELT1"],
self.hdr_small["CUNIT1"]).to(self.hdr_large["CUNIT1"]).value
cdelt2_conv = u.Quantity(self.hdr_small["CDELT2"],
self.hdr_small["CUNIT2"]).to(self.hdr_large["CUNIT2"]).value
self.ratio_res_1 = cdelt1_conv / self.hdr_large["CDELT1"]
self.ratio_res_2 = cdelt2_conv / self.hdr_large["CDELT2"]
x, y = np.meshgrid(np.arange(0, self.data_large.shape[1], self.ratio_res_1),
np.arange(0, self.data_large.shape[0], self.ratio_res_2), )
self.data_large = rectify.interpol2d(self.data_large, x=x, y=y, order=1, fill=-32768)
self.data_large[self.data_large == -32768] = np.nan
y_new, x_new = np.meshgrid(np.arange(self.data_large.shape[0]),
np.arange(self.data_large.shape[1]))
self.x_large = x_new
self.y_large = y_new
def _initialise_slice_corresponding_to_small(self, ):
l = [int((self.data_large.shape[n] - self.data_small.shape[n] - 1) / 2) for n in range(2)]
self.slc_small_ref = (slice(l[0], l[0] + self.data_small.shape[0]),
slice(l[1], l[1] + self.data_small.shape[1]))
@staticmethod
def _check_boundaries(slc, shape, ):
for n in range(2):
if (slc[n].start < 0):
raise ValueError("too large shift : outside FSI")
if (slc[n].stop > shape[n]):
raise ValueError("too large shift : outside FSI")
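The dx/dy scan in `find_best_parameters` boils down to sliding the small field of view across the large one and keeping the offset with the highest correlation. A self-contained sketch of that idea, using plain Pearson correlation in place of the project's `c_correlate` helper:

```python
import numpy as np

def best_shift(large, small, max_shift=3):
    # brute-force (dx, dy) scan around the centred position, keeping the
    # offset whose patch correlates best with the small image
    ny, nx = small.shape
    best_off, best_c = None, -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            y0 = (large.shape[0] - ny) // 2 + dy
            x0 = (large.shape[1] - nx) // 2 + dx
            if y0 < 0 or x0 < 0 or y0 + ny > large.shape[0] or x0 + nx > large.shape[1]:
                continue  # shifted window falls outside the large image
            patch = large[y0:y0 + ny, x0:x0 + nx]
            c = np.corrcoef(patch.ravel(), small.ravel())[0, 1]
            if c > best_c:
                best_off, best_c = (dx, dy), c
    return best_off
```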
{
"filename": "history_fitting_script.py",
"repo_name": "ArgonneCPAC/diffstar",
"repo_path": "diffstar_extracted/diffstar-main/scripts/history_fitting_script.py",
"type": "Python"
}
"""Script to fit Bolshoi or Multidark MAHs with a smooth model."""
import argparse
import os
import subprocess
from time import time
import h5py
import numpy as np
from mpi4py import MPI
from diffstar.data_loaders.load_smah_data import (
BEBOP,
TASSO,
load_bolshoi_data,
load_bolshoi_small_data,
load_fit_mah,
load_mdpl_data,
load_mdpl_small_data,
load_tng_data,
load_tng_small_data,
)
from diffstar.fitting_helpers.fit_smah_helpers import (
MIN_MASS_CUT,
SSFRH_FLOOR,
get_header,
get_loss_data_default,
get_loss_data_fixed_depl_noquench,
get_loss_data_fixed_hi,
get_loss_data_fixed_hi_depl,
get_loss_data_fixed_hi_rej,
get_loss_data_fixed_noquench,
get_loss_data_free,
get_outline_default,
get_outline_fixed_depl_noquench,
get_outline_fixed_hi,
get_outline_fixed_hi_depl,
get_outline_fixed_hi_rej,
get_outline_fixed_noquench,
get_outline_free,
loss_default,
loss_fixed_depl_noquench,
loss_fixed_depl_noquench_deriv_np,
loss_fixed_hi,
loss_fixed_hi_depl,
loss_fixed_hi_depl_deriv_np,
loss_fixed_hi_deriv_np,
loss_fixed_hi_rej,
loss_fixed_hi_rej_deriv_np,
loss_fixed_noquench,
loss_fixed_noquench_deriv_np,
loss_free,
loss_free_deriv_np,
loss_grad_default_np,
)
from diffstar.utils import minimizer_wrapper
TMP_OUTPAT = "_tmp_smah_fits_rank_{0}.dat"
TODAY = 13.8
def _write_collated_data(outname, data, header):
nrows, ncols = np.shape(data)
colnames = header[1:].strip().split()
assert len(colnames) == ncols, "data mismatched with header"
with h5py.File(outname, "w") as hdf:
for i, name in enumerate(colnames):
if name == "halo_id":
hdf[name] = data[:, i].astype("i8")
else:
hdf[name] = data[:, i]
if __name__ == "__main__":
comm = MPI.COMM_WORLD
rank, nranks = comm.Get_rank(), comm.Get_size()
parser = argparse.ArgumentParser()
parser.add_argument(
"simulation", help="name of the simulation (used to select the data loader)"
)
parser.add_argument("outdir", help="Output directory")
parser.add_argument("outbase", help="Basename of the output hdf5 file")
parser.add_argument(
"modelname",
help="Version of the model and loss",
choices=(
"default",
"free",
"fixed_noquench",
"fixed_hi",
"fixed_hi_rej",
"fixed_hi_depl",
"fixed_depl_noquench",
),
default="default",
)
parser.add_argument("-indir", help="Input directory", default="BEBOP")
parser.add_argument("-fitmahfn", help="Filename of fit mah parameters")
parser.add_argument("-test", help="Short test run?", type=bool, default=False)
parser.add_argument(
"-gal_type",
help="Galaxy type (only relevant for Bolshoi and MDPl2)",
default="cens",
)
parser.add_argument(
"-fstar_tdelay",
help="Time interval in Gyr for fstar definition.",
type=float,
default=1.0,
)
parser.add_argument(
"-mass_fit_min",
help="Minimum mass included in stellar mass histories.",
type=float,
default=MIN_MASS_CUT,
)
parser.add_argument(
"-ssfrh_floor",
help="Clipping floor for sSFH",
type=float,
default=SSFRH_FLOOR,
)
args = parser.parse_args()
start = time()
rank_basepat = args.outbase + TMP_OUTPAT
rank_outname = os.path.join(args.outdir, rank_basepat).format(rank)
if args.indir == "TASSO":
indir = TASSO
elif args.indir == "BEBOP":
indir = BEBOP
else:
indir = args.indir
if args.simulation == "bpl":
_smah_data = load_bolshoi_data(args.gal_type, data_drn=indir)
halo_ids, log_smahs, sfrhs, tarr, dt = _smah_data
_smah_data = load_fit_mah(args.fitmahfn, data_drn=indir)
mah_fit_params, logmp = _smah_data
elif args.simulation == "tng":
_smah_data = load_tng_data(args.gal_type, data_drn=indir)
halo_ids, log_smahs, sfrhs, tarr, dt = _smah_data
_smah_data = load_fit_mah(args.fitmahfn, data_drn=indir)
mah_fit_params, logmp = _smah_data
elif args.simulation == "mdpl":
_smah_data = load_mdpl_data(args.gal_type, data_drn=indir)
halo_ids, log_smahs, sfrhs, tarr, dt = _smah_data
_smah_data = load_fit_mah(args.fitmahfn, data_drn=indir)
mah_fit_params, logmp = _smah_data
elif args.simulation == "bpl_small":
_smah_data = load_bolshoi_small_data(args.gal_type, data_drn=indir)
halo_ids, log_smahs, sfrhs, tarr, dt = _smah_data
_smah_data = load_fit_mah(args.fitmahfn, data_drn=indir)
mah_fit_params, logmp = _smah_data
elif args.simulation == "tng_small":
_smah_data = load_tng_small_data(args.gal_type, data_drn=indir)
halo_ids, log_smahs, sfrhs, tarr, dt = _smah_data
_smah_data = load_fit_mah(args.fitmahfn, data_drn=indir)
mah_fit_params, logmp = _smah_data
elif args.simulation == "mdpl_small":
_smah_data = load_mdpl_small_data(args.gal_type, data_drn=indir)
halo_ids, log_smahs, sfrhs, tarr, dt = _smah_data
_smah_data = load_fit_mah(args.fitmahfn, data_drn=indir)
mah_fit_params, logmp = _smah_data
else:
raise NotImplementedError
# Get data for rank
if args.test:
nhalos_tot = nranks * 5
else:
nhalos_tot = len(halo_ids)
_a = np.arange(0, nhalos_tot).astype("i8")
indx = np.array_split(_a, nranks)[rank]
halo_ids_for_rank = halo_ids[indx]
log_smahs_for_rank = log_smahs[indx]
sfrhs_for_rank = sfrhs[indx]
mah_params_for_rank = mah_fit_params[indx]
logmp_for_rank = logmp[indx]
nhalos_for_rank = len(halo_ids_for_rank)
if args.modelname == "default":
get_loss_data = get_loss_data_default
loss_func = loss_default
loss_func_deriv = loss_grad_default_np
get_outline = get_outline_default
header = get_header
kwargs = {
"fstar_tdelay": args.fstar_tdelay,
"mass_fit_min": args.mass_fit_min,
"ssfrh_floor": args.ssfrh_floor,
}
elif args.modelname == "free":
get_loss_data = get_loss_data_free
loss_func = loss_free
loss_func_deriv = loss_free_deriv_np
get_outline = get_outline_free
header = get_header
kwargs = {
"fstar_tdelay": args.fstar_tdelay,
"mass_fit_min": args.mass_fit_min,
"ssfrh_floor": args.ssfrh_floor,
}
elif args.modelname == "fixed_noquench":
get_loss_data = get_loss_data_fixed_noquench
loss_func = loss_fixed_noquench
loss_func_deriv = loss_fixed_noquench_deriv_np
get_outline = get_outline_fixed_noquench
header = get_header
kwargs = {
"fstar_tdelay": args.fstar_tdelay,
"mass_fit_min": args.mass_fit_min,
"ssfrh_floor": args.ssfrh_floor,
}
elif args.modelname == "fixed_hi":
get_loss_data = get_loss_data_fixed_hi
loss_func = loss_fixed_hi
loss_func_deriv = loss_fixed_hi_deriv_np
get_outline = get_outline_fixed_hi
header = get_header
kwargs = {
"fstar_tdelay": args.fstar_tdelay,
"mass_fit_min": args.mass_fit_min,
"ssfrh_floor": args.ssfrh_floor,
}
elif args.modelname == "fixed_hi_rej":
get_loss_data = get_loss_data_fixed_hi_rej
loss_func = loss_fixed_hi_rej
loss_func_deriv = loss_fixed_hi_rej_deriv_np
get_outline = get_outline_fixed_hi_rej
header = get_header
kwargs = {
"fstar_tdelay": args.fstar_tdelay,
"mass_fit_min": args.mass_fit_min,
"ssfrh_floor": args.ssfrh_floor,
}
elif args.modelname == "fixed_hi_depl":
get_loss_data = get_loss_data_fixed_hi_depl
loss_func = loss_fixed_hi_depl
loss_func_deriv = loss_fixed_hi_depl_deriv_np
get_outline = get_outline_fixed_hi_depl
header = get_header
kwargs = {
"fstar_tdelay": args.fstar_tdelay,
"mass_fit_min": args.mass_fit_min,
"ssfrh_floor": args.ssfrh_floor,
}
elif args.modelname == "fixed_depl_noquench":
get_loss_data = get_loss_data_fixed_depl_noquench
loss_func = loss_fixed_depl_noquench
loss_func_deriv = loss_fixed_depl_noquench_deriv_np
get_outline = get_outline_fixed_depl_noquench
header = get_header
kwargs = {
"fstar_tdelay": args.fstar_tdelay,
"mass_fit_min": args.mass_fit_min,
"ssfrh_floor": args.ssfrh_floor,
}
header = header()
with open(rank_outname, "w") as fout:
fout.write(header)
for i in range(nhalos_for_rank):
halo_id = halo_ids_for_rank[i]
lgsmah = log_smahs_for_rank[i, :]
sfrh = sfrhs_for_rank[i, :]
mah_params = mah_params_for_rank[i]
logmp_halo = logmp_for_rank[i]
p_init, loss_data = get_loss_data(
tarr, dt, sfrh, lgsmah, logmp_halo, mah_params, **kwargs
)
_res = minimizer_wrapper(loss_func, loss_func_deriv, p_init, loss_data)
p_best, loss_best, success = _res
outline = get_outline(halo_id, loss_data, p_best, loss_best, success)
fout.write(outline)
comm.Barrier()
end = time()
msg = (
"\n\nWallclock runtime to fit {0} galaxies with {1} ranks = {2:.1f} seconds\n\n"
)
if rank == 0:
runtime = end - start
print(msg.format(nhalos_tot, nranks, runtime))
# collate data from ranks and rewrite to disk
pat = os.path.join(args.outdir, rank_basepat)
fit_data_fnames = [pat.format(i) for i in range(nranks)]
data_collection = [np.loadtxt(fn) for fn in fit_data_fnames]
all_fit_data = np.concatenate(data_collection)
outname = os.path.join(args.outdir, args.outbase)
outname = outname + ".h5"
_write_collated_data(outname, all_fit_data, header)
# clean up temporary files
_remove_basename = pat.replace("{0}", "*")
command = "rm -rf " + _remove_basename
raw_result = subprocess.check_output(command, shell=True)
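The collation step above relies on `_write_collated_data`, which maps the header's column names onto data columns and casts `halo_id` to integers before writing HDF5. A small sketch of that header-driven mapping (pure NumPy, no h5py):

```python
import numpy as np

def collate_columns(header, data):
    # header is "# name1 name2 ..." as produced by get_header(); build one
    # array per column, with halo_id stored as 64-bit integers like
    # _write_collated_data does
    colnames = header[1:].strip().split()
    assert len(colnames) == data.shape[1], "data mismatched with header"
    return {
        name: data[:, i].astype("i8") if name == "halo_id" else data[:, i]
        for i, name in enumerate(colnames)
    }
```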
{
"filename": "clearml.py",
"repo_name": "ultralytics/ultralytics",
"repo_path": "ultralytics_extracted/ultralytics-main/ultralytics/utils/callbacks/clearml.py",
"type": "Python"
}
# Ultralytics YOLO 🚀, AGPL-3.0 license
from ultralytics.utils import LOGGER, SETTINGS, TESTS_RUNNING
try:
assert not TESTS_RUNNING # do not log pytest
assert SETTINGS["clearml"] is True # verify integration is enabled
import clearml
from clearml import Task
assert hasattr(clearml, "__version__") # verify package is not directory
except (ImportError, AssertionError):
clearml = None
def _log_debug_samples(files, title="Debug Samples") -> None:
"""
Log files (images) as debug samples in the ClearML task.
Args:
files (list): A list of file paths in PosixPath format.
title (str): A title that groups together images with the same values.
"""
import re
if task := Task.current_task():
for f in files:
if f.exists():
it = re.search(r"_batch(\d+)", f.name)
iteration = int(it.groups()[0]) if it else 0
task.get_logger().report_image(
title=title, series=f.name.replace(it.group(), ""), local_path=str(f), iteration=iteration
)
def _log_plot(title, plot_path) -> None:
"""
Log an image as a plot in the plot section of ClearML.
Args:
title (str): The title of the plot.
plot_path (str): The path to the saved image file.
"""
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
img = mpimg.imread(plot_path)
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1], frameon=False, aspect="auto", xticks=[], yticks=[]) # no ticks
ax.imshow(img)
Task.current_task().get_logger().report_matplotlib_figure(
title=title, series="", figure=fig, report_interactive=False
)
def on_pretrain_routine_start(trainer):
    """Runs at start of pretraining routine; initializes and connects/logs task to ClearML."""
try:
if task := Task.current_task():
# WARNING: make sure the automatic pytorch and matplotlib bindings are disabled!
# We are logging these plots and model files manually in the integration
from clearml.binding.frameworks.pytorch_bind import PatchPyTorchModelIO
from clearml.binding.matplotlib_bind import PatchedMatplotlib
PatchPyTorchModelIO.update_current_task(None)
PatchedMatplotlib.update_current_task(None)
else:
task = Task.init(
project_name=trainer.args.project or "Ultralytics",
task_name=trainer.args.name,
tags=["Ultralytics"],
output_uri=True,
reuse_last_task_id=False,
auto_connect_frameworks={"pytorch": False, "matplotlib": False},
)
LOGGER.warning(
"ClearML Initialized a new task. If you want to run remotely, "
"please add clearml-init and connect your arguments before initializing YOLO."
)
task.connect(vars(trainer.args), name="General")
except Exception as e:
LOGGER.warning(f"WARNING ⚠️ ClearML installed but not initialized correctly, not logging this run. {e}")
def on_train_epoch_end(trainer):
    """Logs debug samples for the first epoch of YOLO training and reports current training progress."""
if task := Task.current_task():
# Log debug samples
if trainer.epoch == 1:
_log_debug_samples(sorted(trainer.save_dir.glob("train_batch*.jpg")), "Mosaic")
# Report the current training progress
for k, v in trainer.label_loss_items(trainer.tloss, prefix="train").items():
task.get_logger().report_scalar("train", k, v, iteration=trainer.epoch)
for k, v in trainer.lr.items():
task.get_logger().report_scalar("lr", k, v, iteration=trainer.epoch)
def on_fit_epoch_end(trainer):
"""Reports model information to logger at the end of an epoch."""
if task := Task.current_task():
# You should have access to the validation bboxes under jdict
task.get_logger().report_scalar(
title="Epoch Time", series="Epoch Time", value=trainer.epoch_time, iteration=trainer.epoch
)
for k, v in trainer.metrics.items():
task.get_logger().report_scalar("val", k, v, iteration=trainer.epoch)
if trainer.epoch == 0:
from ultralytics.utils.torch_utils import model_info_for_loggers
for k, v in model_info_for_loggers(trainer).items():
task.get_logger().report_single_value(k, v)
def on_val_end(validator):
"""Logs validation results including labels and predictions."""
if Task.current_task():
# Log val_labels and val_pred
_log_debug_samples(sorted(validator.save_dir.glob("val*.jpg")), "Validation")
def on_train_end(trainer):
"""Logs final model and its name on training completion."""
if task := Task.current_task():
# Log final results, CM matrix + PR plots
files = [
"results.png",
"confusion_matrix.png",
"confusion_matrix_normalized.png",
*(f"{x}_curve.png" for x in ("F1", "PR", "P", "R")),
]
files = [(trainer.save_dir / f) for f in files if (trainer.save_dir / f).exists()] # filter
for f in files:
_log_plot(title=f.stem, plot_path=f)
# Report final metrics
for k, v in trainer.validator.metrics.results_dict.items():
task.get_logger().report_single_value(k, v)
# Log the final model
task.update_output_model(model_path=str(trainer.best), model_name=trainer.args.name, auto_delete_file=False)
callbacks = (
{
"on_pretrain_routine_start": on_pretrain_routine_start,
"on_train_epoch_end": on_train_epoch_end,
"on_fit_epoch_end": on_fit_epoch_end,
"on_val_end": on_val_end,
"on_train_end": on_train_end,
}
if clearml
else {}
)
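`_log_debug_samples` recovers the training iteration from filenames like `train_batch2.jpg` with a regex. The same pattern, isolated as a tiny standalone helper (hypothetical name, not part of the Ultralytics API):

```python
import re

def batch_iteration(fname):
    # pull the integer that follows "_batch" in the filename, defaulting to 0
    it = re.search(r"_batch(\d+)", fname)
    return int(it.groups()[0]) if it else 0
```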
{
"filename": "nndc.py",
"repo_name": "tardis-sn/carsus",
"repo_path": "carsus_extracted/carsus-master/carsus/io/nuclear/nndc.py",
"type": "Python"
}
import logging
import pandas as pd
from pathlib import Path
import subprocess
DECAY_DATA_SOURCE_DIR = Path.home() / "Downloads" / "carsus-data-nndc"
DECAY_DATA_FINAL_DIR = Path.home() / "Downloads" / "tardis-data" / "decay-data"
NNDC_SOURCE_URL = "https://github.com/tardis-sn/carsus-data-nndc"
logger = logging.getLogger(__name__)
class NNDCReader:
"""
Class for extracting nuclear decay data from NNDC archives
Attributes
----------
dirname: path to directory containing the decay data in CSV format
Methods
--------
decay_data:
Return pandas DataFrame representation of the decay data
"""
def __init__(self, dirname=None, remote=False):
"""
Parameters
----------
dirname : path
Path to the directory containing the source CSV data (local file).
"""
if dirname is None:
if remote:
try:
subprocess.run(['git', 'clone', NNDC_SOURCE_URL, DECAY_DATA_SOURCE_DIR], check=True)
logger.info(f"Downloading NNDC decay data from {NNDC_SOURCE_URL}")
except subprocess.CalledProcessError:
logger.warning(f"Failed to clone the repository.\n"
f"Check if the repository already exists at {DECAY_DATA_SOURCE_DIR}")
self.dirname = DECAY_DATA_SOURCE_DIR / "csv"
else:
self.dirname = dirname
logger.info(f"Parsing decay data from: {self.dirname}")
self._decay_data = None
@property
def decay_data(self):
if self._decay_data is None:
self._decay_data = self._prepare_nuclear_dataframes()
return self._decay_data
def _get_nuclear_decay_dataframe(self):
"""
Convert the CSV files from the source directory into dataframes
Returns
-------
pandas.DataFrame
pandas Dataframe representation of the decay data
"""
all_data = []
dirpath = Path(self.dirname)
for file in dirpath.iterdir():
# convert every csv file to Dataframe and append it to all_data
if file.suffix == ".csv" and file.stat().st_size != 0:
data = pd.read_csv(
file,
)
all_data.append(data)
decay_data = pd.concat(all_data)
return decay_data
def _set_group_true(self, group):
"""
Sets the entire 'Metastable' column to True if any of the values in the group is True.
Parameters
----------
group: pandas.DataFrameGroupBy object
A groupby object that contains information about the groups.
"""
if group['Metastable'].any():
group['Metastable'] = True
return group
def _add_metastable_column(self, decay_data=None):
"""
Adds a 'Metastable' column to decay_data indicating the metastable isotopes (e.g: Mn52).
Returns
-------
pandas.Dataframe
Decay dataframe after the 'Metastable' column has been added.
"""
metastable_df = decay_data if decay_data is not None else self.decay_data.copy()
# Create a boolean metastable state column before the 'Decay Mode' column
metastable_df.insert(7, "Metastable", False)
metastable_filters = (metastable_df["Decay Mode"] == "IT") & (metastable_df["Decay Mode Value"] != 0.0) & (
metastable_df["Parent E(level)"] != 0.0)
metastable_df.loc[metastable_filters, 'Metastable'] = True
# avoid duplicate indices since metastable_df is a result of pd.concat operation
metastable_df = metastable_df.reset_index()
# Group by the combination of these columns
group_criteria = ['Parent E(level)', 'T1/2 (sec)', 'Isotope']
metastable_df = metastable_df.groupby(group_criteria).apply(self._set_group_true)
return metastable_df
def _prepare_nuclear_dataframes(self):
"""
Creates the decay dataframe to be stored in HDF Format and formats it by adding
the 'Metastable' and 'Isotope' columns, setting the latter as the index.
"""
decay_data_raw = self._get_nuclear_decay_dataframe()
decay_data_raw["Isotope"] = decay_data_raw.Element.map(str) + decay_data_raw.A.map(str)
decay_data = self._add_metastable_column(decay_data_raw)
decay_data = decay_data.set_index(['Isotope']).drop(['index'], axis=1)
decay_data = decay_data.sort_values(by=decay_data.columns.tolist())
return decay_data
def to_hdf(self, fpath=None):
"""
Parameters
----------
fpath: path
Path to the HDF5 output file
"""
if fpath is None:
fpath = DECAY_DATA_FINAL_DIR
if not Path(fpath).exists():
Path(fpath).mkdir(parents=True)
target_fname = Path().joinpath(fpath, "compiled_ensdf_csv.h5")
with pd.HDFStore(target_fname, 'w') as f:
f.put('/decay_data', self.decay_data)
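The groupby/apply step in `_add_metastable_column` propagates a `True` 'Metastable' flag to every row of a group. The same effect can be sketched with `transform("any")` (a simplification of the class's logic, using a subset of its grouping keys):

```python
import pandas as pd

def propagate_metastable(df):
    # if any row of an (Isotope, T1/2) group is metastable, flag the whole group
    out = df.copy()
    out["Metastable"] = out.groupby(["Isotope", "T1/2 (sec)"])["Metastable"].transform("any")
    return out
```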
{
"filename": "cat_hsc_nep.py",
"repo_name": "HETDEX/elixer",
"repo_path": "elixer_extracted/elixer-main/elixer/cat_hsc_nep.py",
"type": "Python"
}
from __future__ import print_function
"""
This is for a special, restricted use of HSC data in the NEP
Updated 2024-02-xx
Substantial re-organization of prior early data. See bottom of this file for notes on that earlier version
"""
try:
from elixer import global_config as G
from elixer import science_image
from elixer import cat_base
from elixer import match_summary
from elixer import line_prob
from elixer import utilities
from elixer import spectrum_utilities as SU
from elixer import hsc_nep_meta
except:
import global_config as G
import science_image
import cat_base
import match_summary
import line_prob
import utilities
import spectrum_utilities as SU
import hsc_nep_meta
import os.path as op
import copy
import io
import matplotlib
#matplotlib.use('agg')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
import matplotlib.gridspec as gridspec
import astropy.table
#log = G.logging.getLogger('Cat_logger')
#log.setLevel(G.logging.DEBUG)
log = G.Global_Logger('cat_logger')
log.setlevel(G.LOG_LEVEL)
pd.options.mode.chained_assignment = None #turn off warning about setting the distance field
def hsc_count_to_mag(count,cutout=None,headers=None):
"""
NEP is using the same zero point as the HETDEX HSC, so this is still valid
:param count:
:param cutout:
:param headers:
:return:
"""
#We can convert the counts into flux
# with a keyword in the header of the imaging data;
# FLUXMAG0= 63095734448.0194
#
# Because the HSC pipeline uses the zeropoint value (corresponds to 27 mag) to all filters,
# we can convert the counts values in the R-band imaging data as follows:
# -2.5*log(flux_R) -48.6 = -2.5*log(count_R / FLUXMAG0)
# --> flux_R = count_R / ( 10^(30.24) )
#note: zero point is 27 mag, tiles are no longer different?
#release 2:
#The magnitude zero point is R_ZP = 27.0 mag.
# You can convert the count values in the imaging data into flux as follows:
# -2.5*log(flux_R) -48.6 = -2.5*log(count_R) + R_ZP
# --> flux_R = count_R / (10^(30.24))
fluxmag0 = None
try:
# if 'FLUXMAG0' in headers[0]:
# fluxmag0 = float(headers[0]['FLUXMAG0'])
#this can come from a compressed .fz fits which may have added a new header at [0]
for h in headers:
if 'FLUXMAG0' in h:
fluxmag0 = float(h['FLUXMAG0'])
break
except:
fluxmag0 = None
if count is not None:
if count > 0:
if fluxmag0 is not None:
return -2.5 * np.log10(count/fluxmag0) #+ 48.6
else:
return -2.5 * np.log10(count) + 27.0
else:
return 99.9 # need a better floor
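The zeropoint arithmetic in the comments above can be checked numerically: FLUXMAG0 = 63095734448.0194 is 10**10.8, so -2.5*log10(count/FLUXMAG0) equals -2.5*log10(count) + 27. A standalone check of that identity (a hedged sketch, not part of the ELiXer module):

```python
import numpy as np

def demo_count_to_mag(count, fluxmag0=63095734448.0194):
    # FLUXMAG0 = 10**10.8, i.e. a 27-mag zeropoint:
    # -2.5*log10(count/FLUXMAG0) == -2.5*log10(count) + 27.0
    if count is None or count <= 0:
        return 99.9
    return -2.5 * np.log10(count / fluxmag0)
```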
class HSC_NEP(cat_base.Catalog):#Hyper Suprime Cam, North Ecliptic Pole
# class variables
# HSC_BASE_PATH = G.HSC_BASE_PATH
# HSC_CAT_PATH = G.HSC_CAT_PATH
# HSC_IMAGE_PATH = G.HSC_IMAGE_PATH
#todo: Oscar ... change to your path(here)
HSC_BASE_PATH = None
HSC_CAT_PATH = HSC_BASE_PATH
HSC_IMAGE_PATH = HSC_BASE_PATH
HSC_CATALOG_FILE = None
# Note: the catalog is now a single large fits file (~ 5GB) (old catalog: H20_NEP_subset_catalog.fits")
# it would be more memory and time efficient to make a meta-file with an index on the RA, Dec
# and find matches first, then load the matching row(s) from the fits, but
# for the sake of code time and the limited use of this data, we just load the whole thing
# and drop the columns we aren't using ... generally NEP is used exclusively here for NEP targets,
# so the expectation is that the catalog is loaded once for many observations
INCLUDE_KPNO_G = False
MAG_LIMIT = 27.3 #mostly care about r (this gives a little slop for error and for smaller apertures before the limit kicks in)
#based off of average data from the prior version
MAG_LIMIT_DICT = {'default':{'g':27.2,'r':26.7,'i':26.4,'z':26.0,'y':24.9},
}
mean_FWHM = 1.0 #average 0.6 to 1.0
CONT_EST_BASE = None
df = None
loaded_tracts = []
MainCatalog = None #there is no Main Catalog ... must load individual catalog tracts
Name = "HSC-NEP"#"HyperSuprimeCam_NEP"
Tile_Dict = hsc_nep_meta.HSC_META_DICT
Image_Coord_Range = hsc_nep_meta.Image_Coord_Range
#Image_Coord_Range = {'RA_min':270.3579555, 'RA_max':270.84872930, 'Dec_min':67.5036877, 'Dec_max':67.8488075}
#Image_Coord_Range = hsc_meta.Image_Coord_Range
#Tile_Dict = HSC_META_DICT #hsc_meta.HSC_META_DICT
#correct the basepaths
for k in Tile_Dict.keys():
Tile_Dict[k]['path'] = op.join(HSC_IMAGE_PATH,op.basename(Tile_Dict[k]['path']))
# older version, not used anymore
# Filters = ['g','r','i','z','y'] #case is important ... needs to be lowercase
# Filter_HDU_Image_Idx = {'g':1,'r':4,'i':7,'z':10,'y':13}
# Filter_HDU_Weight_Idx = {'g':2,'r':5,'i':8,'z':11,'y':14}
# Filter_HDU_Mask_Idx = {'g':3,'r':6,'i':9,'z':12,'y':15}
Filters = ['g','r','i','z']
Cat_Coord_Range = {'RA_min': None, 'RA_max': None, 'Dec_min': None, 'Dec_max': None}
WCS_Manual = False
AstroTable = None
#HETDEX HSC values
#Masks (bitmapped) (from *_mask.fits headers)
MP_BAD = 1 #2**0 (was 0, inconsistent with the bit pattern of the other planes)
MP_SAT = 2 #2**1
MP_INTRP = 4 #2**2
MP_CR = 8
MP_EDGE = 16
MP_DETECTED = 32
MP_DETECTED_NEGATIVE = 64
MP_SUSPECT = 128
MP_NO_DATA = 256
MP_BRIGHT_OBJECT = 512
MP_CROSSTALK = 1024
MP_NOT_DEBLENDED = 2048
MP_UNMASKEDNAN = 4096
MP_REJECTED = 8192
MP_CLIPPED = 16384
MP_SENSOR_EDGE = 32768
MP_INEXACT_PSF = 65536 #2**16
MASK_LENGTH = 17
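The bitmapped mask planes above can be unpacked into a list of set flags. A minimal standalone sketch (not part of the class; the hypothetical `decode_mask` helper and its `MASK_PLANES` dict simply mirror the `MP_*` constants defined here):

```python
# Illustrative sketch: decode an HSC *_mask.fits pixel value into named planes.
# The (name, value) pairs mirror the MP_* constants above (bits 0..16).
MASK_PLANES = {
    "BAD": 1, "SAT": 2, "INTRP": 4, "CR": 8, "EDGE": 16,
    "DETECTED": 32, "DETECTED_NEGATIVE": 64, "SUSPECT": 128,
    "NO_DATA": 256, "BRIGHT_OBJECT": 512, "CROSSTALK": 1024,
    "NOT_DEBLENDED": 2048, "UNMASKEDNAN": 4096, "REJECTED": 8192,
    "CLIPPED": 16384, "SENSOR_EDGE": 32768, "INEXACT_PSF": 65536,
}

def decode_mask(pixel_value):
    """Return the list of mask-plane names whose bits are set in a pixel value."""
    return [name for name, bit in MASK_PLANES.items() if pixel_value & bit]
```

For example, a pixel flagged as both saturated and near a bright object carries the value 2 + 512 = 514.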
#
# Notice: sizes are in pixels at 0.168"/pixel and are diameters,
# so 17.0 is a 17 pixel diameter aperture (~2.86" diameter or ~1.43" radius (about 2 touching HETDEX fibers))
#
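The pixel-to-arcsec note above can be checked directly; at the HSC plate scale of 0.168"/pixel a 17-pixel diameter works out to ~2.86" (~1.43" radius). A small illustrative helper (not part of the class):

```python
# Illustrative: HSC pixel scale is 0.168 arcsec/pixel; apertures here are quoted
# as diameters in pixels, so convert to an arcsec (diameter, radius) pair.
PIXEL_SCALE = 0.168  # arcsec per pixel

def aperture_arcsec(diameter_pix):
    """Return (diameter, radius) in arcsec for an aperture given in pixel diameter."""
    d = diameter_pix * PIXEL_SCALE
    return d, d / 2.0
```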
#BidCols = [] not used for this catalog, see read_catalog()
CatalogImages = [] #built in constructor
def __init__(self):
super(HSC_NEP, self).__init__()
self.dataframe_of_bid_targets = None
self.dataframe_of_bid_targets_unique = None
self.dataframe_of_bid_targets_photoz = None
self.num_targets = 0
self.master_cutout = None
self.build_catalog_of_images()
@classmethod
def read_catalog(cls, catalog_loc=None, name=None,tract=None,position=None):
"""
:param catalog_loc:
:param name:
:param tract: list of strings as the HSC tract id, i.e. ['16814']
:param position: a tuple or array with exactly 2 elements as integers 0-9, i.e. (0,1) or (2,8), etc
:return:
"""
if name is None:
name = cls.Name
try:
fqtract = [op.join(cls.HSC_CAT_PATH,cls.HSC_CATALOG_FILE),]
except:
fqtract = []
# fqtract =[] #fully qualified track (as a partial path)
# if (tract is not None) and (len(tract) > 0) and (position is not None) and (len(position) == len(tract)): #should be a list of positions and the same length as tract
#
# for i in range(len(tract)):
# fqtract.append(op.join(tract[i],"R_P%d_%d.cat" %(position[i][0],position[i][1])))
# else:
# log.warning("Unexpected tract and positions in cat_hsc::read_catalogs: %s, %s" %(str(tract),str(position)))
# return None
#
if set(fqtract).issubset(cls.loaded_tracts):
log.info("Catalog tract (%s) already loaded." %fqtract)
return cls.df
#todo: future more than just the R filter if any are ever added
for t in fqtract:
if t in cls.loaded_tracts: #skip if already loaded
continue
cat_name = t
cat_loc = op.join(cls.HSC_CAT_PATH, cat_name)
#not used anymore
#header = cls.BidCols
if not op.exists(cat_loc):
log.error("Cannot load catalog tract for HSC. File does not exist: %s" %cat_loc)
continue
log.debug("Building " + cls.Name + " " + cat_name + " dataframe...")
try:
table = astropy.table.Table.read(cat_loc)#,format='fits')
#all columns
#'ID', 'ALPHA_J2000', 'DELTA_J2000', 'X_MODEL', 'Y_MODEL', 'ERRX_MODEL', 'ERRY_MODEL',
# 'ALPHA_DETECTION', 'DELTA_DETECTION', 'FARMER_ID', 'GROUP_ID', 'N_GROUP',
# 'MODEL_FLAG', 'SOLUTION_MODEL', 'EBV_MW', 'FULL_DEPTH_DR1',
# 'CFHT_u_FLUX', 'CFHT_u_FLUXERR', 'CFHT_u_MAG', 'CFHT_u_MAGERR', 'CFHT_u_CHISQ', 'CFHT_u_DRIFT', 'CFHT_u_VALID',
# 'HSC_g_FLUX', 'HSC_g_FLUXERR', 'HSC_g_MAG', 'HSC_g_MAGERR', 'HSC_g_CHISQ', 'HSC_g_DRIFT', 'HSC_g_VALID',
# 'HSC_r_FLUX', 'HSC_r_FLUXERR', 'HSC_r_MAG', 'HSC_r_MAGERR', 'HSC_r_CHISQ', 'HSC_r_DRIFT', 'HSC_r_VALID',
# 'HSC_i_FLUX', 'HSC_i_FLUXERR', 'HSC_i_MAG', 'HSC_i_MAGERR', 'HSC_i_CHISQ', 'HSC_i_DRIFT', 'HSC_i_VALID',
# 'HSC_z_FLUX', 'HSC_z_FLUXERR', 'HSC_z_MAG', 'HSC_z_MAGERR', 'HSC_z_CHISQ', 'HSC_z_DRIFT', 'HSC_z_VALID',
# 'HSC_y_FLUX', 'HSC_y_FLUXERR', 'HSC_y_MAG', 'HSC_y_MAGERR', 'HSC_y_CHISQ', 'HSC_y_DRIFT', 'HSC_y_VALID',
# 'HSC_NB0816_MAG', 'HSC_NB0816_MAGERR', 'HSC_NB0816_FLUX', 'HSC_NB0816_FLUXERR', 'HSC_NB0816_CHISQ',
# 'HSC_NB0816_DRIFT', 'HSC_NB0816_VALID',
# 'HSC_NB0921_MAG', 'HSC_NB0921_MAGERR', 'HSC_NB0921_FLUX', 'HSC_NB0921_FLUXERR', 'HSC_NB0921_CHISQ',
# 'HSC_NB0921_DRIFT', 'HSC_NB0921_VALID',
# 'IRAC_CH1_FLUX', 'IRAC_CH1_FLUXERR', 'IRAC_CH1_MAG', 'IRAC_CH1_MAGERR', 'IRAC_CH1_CHISQ',
# 'IRAC_CH1_DRIFT', 'IRAC_CH1_VALID',
# 'IRAC_CH2_FLUX', 'IRAC_CH2_FLUXERR', 'IRAC_CH2_MAG', 'IRAC_CH2_MAGERR', 'IRAC_CH2_CHISQ',
# 'IRAC_CH2_DRIFT', 'IRAC_CH2_VALID',
# 'lp_zPDF', 'lp_zPDF_l68', 'lp_zPDF_u68', 'lp_zMinChi2', 'lp_chi2_best', 'lp_zp_2', 'lp_chi2_2',
# 'lp_NbFilt', 'lp_zq', 'lp_chiq', 'lp_modq', 'lp_mods', 'lp_chis', 'lp_model',
# 'lp_age', 'lp_dust', 'lp_Attenuation', 'lp_MNUV', 'lp_MR', 'lp_MJ',
# 'lp_mass_med', 'lp_mass_med_min68', 'lp_mass_med_max68', 'lp_mass_best',
# 'lp_SFR_med', 'lp_SFR_med_min68', 'lp_SFR_med_max68', 'lp_SFR_best', 'lp_sSFR_med',
# 'lp_sSFR_med_min68', 'lp_sSFR_med_max68', 'lp_sSFR_best',
# 'ez_z_phot', 'ez_z_phot_chi2', 'ez_z_phot_risk', 'ez_z_min_risk', 'ez_min_risk', 'ez_z_raw_chi2',
# 'ez_raw_chi2', 'ez_z_ml', 'ez_z_ml_chi2', 'ez_z_ml_risk', 'ez_z025', 'ez_z160', 'ez_z500', 'ez_z840',
# 'ez_z975', 'ez_nusefilt', 'ez_lc_min', 'ez_lc_max', 'ez_star_min_chi2', 'ez_star_teff'
#this table is too big, so only keep necessary columns
table.keep_columns(['ID', 'ALPHA_J2000', 'DELTA_J2000',
'HSC_g_FLUX', 'HSC_g_FLUXERR', 'HSC_g_MAG', 'HSC_g_MAGERR', 'HSC_g_VALID',
'HSC_r_FLUX', 'HSC_r_FLUXERR', 'HSC_r_MAG', 'HSC_r_MAGERR', 'HSC_r_VALID',
'lp_zPDF', 'lp_zPDF_l68', 'lp_zPDF_u68',])
#'ez_z_phot','ez_z_phot_chi2',
#'ez_z_ml','ez_z_ml_chi2'])
table['ALPHA_J2000'].name = 'RA'
table['DELTA_J2000'].name = 'DEC'
except Exception as e:
if type(e) is astropy.io.registry.IORegistryError:
log.error(name + " Exception attempting to open catalog file: (IORegistryError, bad format)" + cat_loc, exc_info=False)
else:
log.error(name + " Exception attempting to open catalog file: " + cat_loc, exc_info=True)
continue #try the next one #exc_info = sys.exc_info()
try:
df = table.to_pandas()
#df = pd.read_csv(cat_loc, names=header,
# delim_whitespace=True, header=None, index_col=None, skiprows=0)
# old_names = ['RA_MODELING','DEC_MODELING']
# new_names = ['RA','DEC']
# df.rename(columns=dict(zip(old_names, new_names)), inplace=True)
# df['FILTER'] = 'r' #add the FILTER to the dataframe !!! case is important. must be lowercase
if cls.df is not None:
cls.df = pd.concat([cls.df, df])
else:
cls.df = df
cls.loaded_tracts.append(t)
except:
log.error(name + " Exception attempting to build pandas dataframe", exc_info=True)
continue
return cls.df
def get_mag_limit(self,image_identification=None,aperture_diameter=None):
"""
to be overwritten by subclasses to return their particular format of maglimit
:param image_identification: some way (sub-class specific) to identify which image
HERE we want a tuple ... [0] = tile name and [1] = filter name
:param aperture_diameter: in arcsec
:return:
"""
try:
#0.2 ~= 2.5 * log(1.2) ... or a 20% error
if image_identification[0] in self.MAG_LIMIT_DICT.keys():
return self.MAG_LIMIT_DICT[image_identification[0]][image_identification[1]] + 0.2
else:
return self.MAG_LIMIT_DICT['default'][image_identification[1]] + 0.2
except:
log.warning("cat_hsc_nep.py get_mag_limit fail.",exc_info=True)
try:
return self.MAG_LIMIT
except:
return 99.9
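The +0.2 mag allowance in get_mag_limit corresponds to roughly a 20% flux error, since 2.5 * log10(1.2) ≈ 0.198. A standalone sketch of the same lookup (the `mag_limit` helper is illustrative only; the per-filter values are copied from MAG_LIMIT_DICT above):

```python
import math

# Illustrative copy of the 'default' per-filter limits, with the ~20% flux
# (≈0.2 mag) slop applied the same way get_mag_limit does.
MAG_LIMITS = {'g': 27.2, 'r': 26.7, 'i': 26.4, 'z': 26.0, 'y': 24.9}

def mag_limit(filter_name, slop_fraction=0.2):
    """Faint-limit lookup: base limit plus 2.5*log10(1 + slop_fraction) of slack."""
    slop = 2.5 * math.log10(1.0 + slop_fraction)
    return MAG_LIMITS[filter_name] + slop
```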
def build_catalog_of_images(self):
for t in self.Tile_Dict.keys(): #tile is the key (the filename)
#for f in self.Filters: # each image now only has one filter
path = self.HSC_IMAGE_PATH #op.join(self.HSC_IMAGE_PATH,self.Tile_Dict[t]['tract'])
name = t
wcs_manual = False
f = self.Tile_Dict[t]['filter']
self.CatalogImages.append(
{'path': path,
'name': name, #filename is the tilename
'tile': t,
'pos': self.Tile_Dict[t]['pos'], #the position tuple i.e. (0,3) or (2,8) ... in the name as 03 or 28
'filter': f,
'instrument': "HSC NEP",
'cols': [],
'labels': [],
'image': None,
'expanded': False,
'wcs_manual': wcs_manual,
'aperture': self.mean_FWHM * 0.5 + 0.5, #since a radius, half the FWHM + 0.5" for astrometric error
'mag_func': hsc_count_to_mag
})
def find_target_tile(self,ra,dec):
#assumed to have already confirmed this target is at least in coordinate range of this catalog
#return at most one tile, but maybe more than one tract (for the catalog ... HSC does not completely
# overlap the tracts so if multiple tiles are valid, depending on which one is selected, you may
# not find matching objects for the associated tract)
tile = None
tracts = []
positions = []
keys = []
for k in self.Tile_Dict.keys():
#20240212 DD: since the new version has just one filter per image, only check 'r'
if self.Tile_Dict[k]['filter'].lower() != 'r':
continue
# don't bother to load if ra, dec not in range
# try:
# if not ((ra >= self.Tile_Dict[k]['RA_min']) and (ra <= self.Tile_Dict[k]['RA_max']) and
# (dec >= self.Tile_Dict[k]['Dec_min']) and (dec <= self.Tile_Dict[k]['Dec_max'])) :
# continue
# else:
# keys.append(k)
# except:
# pass
try:
if self.Tile_Dict[k]['RA_max'] - self.Tile_Dict[k]['RA_min'] < 30: #30 deg as a big value
#ie. we are NOT crossing the 0 or 360 deg line
if not ((ra >= self.Tile_Dict[k]['RA_min']) and (ra <= self.Tile_Dict[k]['RA_max']) and
(dec >= self.Tile_Dict[k]['Dec_min']) and (dec <= self.Tile_Dict[k]['Dec_max'])) :
continue
else:
keys.append(k)
else: # we are crossing the 0/360 boundary, so we need to be greater than the max (ie. between max and 360)
# OR less than the min (between 0 and the minimum)
if not (((ra <= self.Tile_Dict[k]['RA_min']) or (ra >= self.Tile_Dict[k]['RA_max'])) and
(dec >= self.Tile_Dict[k]['Dec_min']) and (dec <= self.Tile_Dict[k]['Dec_max'])) :
continue
else:
keys.append(k)
except:
pass
if len(keys) == 0: #we're done ... did not find any
return None, None, None
elif len(keys) == 1: #found exactly one
tile = keys[0] #remember tile is a string ... there can be only one
positions.append(self.Tile_Dict[tile]['pos'])
tracts.append(self.Tile_Dict[tile]['tract']) #remember, tract is a list (there can be more than one)
elif len(keys) > 1: #find the best one
log.info("Multiple overlapping tiles %s. Sub-selecting tile with maximum angular coverage around target." %keys)
# min = 9e9
# #we don't have the actual corners anymore, so just assume a rectangle
# #so there are 2 of each min, max coords. Only need the smallest distance so just sum one
# for k in keys:
# tracts.append(self.Tile_Dict[k]['tract'])
# positions.append(self.Tile_Dict[k]['pos'])
# sqdist = (ra-self.Tile_Dict[k]['RA_min'])**2 + (dec-self.Tile_Dict[k]['Dec_min'])**2 + \
# (ra-self.Tile_Dict[k]['RA_max'])**2 + (dec-self.Tile_Dict[k]['Dec_max'])**2
# if sqdist < min:
# min = sqdist
# tile = k
#
max_dist = 0
#we don't have the actual corners anymore, so just assume a rectangle
#and use the smallest distance from the target to any edge
for k in keys:
tracts.append(self.Tile_Dict[k]['tract'])
positions.append(self.Tile_Dict[k]['pos'])
#should not be negative, but could be?
#in any case, the min is the smallest distance to an edge in RA and Dec
inside_ra = min((ra-self.Tile_Dict[k]['RA_min']),(self.Tile_Dict[k]['RA_max']-ra))
inside_dec = min((dec-self.Tile_Dict[k]['Dec_min']),(self.Tile_Dict[k]['Dec_max']-dec))
edge_dist = min(inside_dec,inside_ra)
#we want the tile with the largest minimum edge distance
if edge_dist > max_dist and op.exists(self.Tile_Dict[k]['path']):
max_dist = edge_dist
tile = k
else: #unreachable (len(keys) cannot be negative) ... just a sanity catch
log.error("ERROR! len(keys) < 0 in cat_hsc::find_target_tile.")
return None, None, None
#e.g. 'H20_EDFN_v2.3_HSC-R.fits' ... wildcard the filter so any filter's version of the tile can be matched later
tile = tile.replace("HSC-R.fits", "HSC-?.fits")
log.info("Selected tile: %s" % tile)
#now we have the tile key (filename)
#do we want to find the matching catalog and see if there is an entry in it?
#sanity check the image
# try:
# image = science_image.science_image(wcs_manual=self.WCS_Manual,wcs_idx=0,
# image_location=op.join(self.HSC_IMAGE_PATH,tile))
# if image.contains_position(ra, dec):
# pass
# else:
# log.debug("position (%f, %f) is not in image. %s" % (ra, dec,tile))
# tile = None
# except:
# pass
return tile, tracts, positions
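The boundary test in find_target_tile is easier to see in isolation: when a tile spans the 0/360 deg RA seam, the stored RA_min/RA_max flip roles and the target must lie in [RA_max, 360) or [0, RA_min] rather than between min and max. A minimal sketch (the `in_tile` helper is hypothetical; it assumes a tile dict with the same RA_min/RA_max/Dec_min/Dec_max keys used here):

```python
def in_tile(ra, dec, tile):
    """True if (ra, dec), in decimal degrees, falls inside a tile's bounding box,
    handling tiles that wrap across the 0/360 deg RA seam."""
    if not (tile['Dec_min'] <= dec <= tile['Dec_max']):
        return False
    if tile['RA_max'] - tile['RA_min'] < 30.0:  # normal, non-wrapping tile
        return tile['RA_min'] <= ra <= tile['RA_max']
    # wrapping tile: the valid RA range is [RA_max, 360) union [0, RA_min]
    return ra >= tile['RA_max'] or ra <= tile['RA_min']
```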
def get_mask_cutout(self,tile,ra,dec,error):
"""
Given an image tile, get the corresponding mask tile
:param tile:
:return:
"""
mask_cutout = None
try:
#modify name for mask
mask_tile_name = tile.replace(".fits", "_mask.fits") #note: str.rstrip(".fits") strips characters, not the suffix
# Find in Tile Dict for the path
path = op.join(G.HSC_IMAGE_PATH,self.Tile_Dict[tile]['tract'],op.basename(self.Tile_Dict[tile]['path']))
path = path.replace('image_tract_patch','mask_tract_patch')
path = path.replace(tile,mask_tile_name)
#now get the fits image cutout (don't use get_cutout as that is strictly for images)
mask_image = science_image.science_image(wcs_manual=self.WCS_Manual,wcs_idx=0, image_location=path)
mask_cutout = mask_image.get_mask_cutout(ra,dec,error,mask_image.image_location)
except:
log.info(f"Could not get mask cutout for tile ({tile})",exc_info=True)
return mask_cutout
#reworked as average of 3" (17pix), lsq and model mags
def get_filter_flux(self, df):
"""
:param df:
:return: flux in uJy
"""
filter_fl = None
filter_fl_err = None
mag = None
mag_bright = None
mag_faint = None
filter_str = None # has grizy, but we will use r (preferred) then g
try:
valid_g = df['HSC_g_VALID'].values[0]
valid_r = df['HSC_r_VALID'].values[0]
except:
valid_g = True
valid_r = True
if (G.BANDPASS_PREFER_G and valid_g) or (not valid_r):
first_name = "HSC_g_FLUX" #'FLUX_hsc_g'
second_name = "HSC_r_FLUX" #'FLUX_hsc_r'
else:
first_name = "HSC_r_FLUX" #'FLUX_hsc_r'
second_name = "HSC_g_FLUX" #'FLUX_hsc_g'
try:
if df[first_name].values[0]:
filter_str = first_name.split("_")[1]
elif df[second_name].values[0]:
filter_str = second_name.split("_")[1]
else:
log.info("Unable to use r or g filter flux.")
if filter_str:
filter_fl = df[f'HSC_{filter_str}_FLUX'].values[0] #df['FLUX_hsc_'+filter_str].values[0]
filter_fl_err = df[f'HSC_{filter_str}_FLUXERR'].values[0] #df['FLUXERR_hsc_'+filter_str].values[0]
#!!Notice: the flux HERE uses a different zero point than the image counts:
# flux to mag HERE is -2.5 * np.log10(filter_fl) + 23.9
# in the images it is -2.5 * np.log10(counts) + 27.0
mag = df[f'HSC_{filter_str}_MAG'].values[0]
mag_err = df[f'HSC_{filter_str}_MAGERR'].values[0]
mag_bright = mag - mag_err
mag_faint = mag + mag_err
# mag = df['MAG_hsc_'+filter_str].values[0]
# mag_bright = mag - df['MAGERR_hsc_'+filter_str].values[0]
# mag_faint = mag + df['MAGERR_hsc_'+filter_str].values[0]
except:
log.error("Exception in cat_hsc_nep.get_filter_flux", exc_info=True)
return filter_fl, filter_fl_err, mag, mag_bright, mag_faint, filter_str
try:
log.debug(f"HSC NEP {filter_str} mag {mag},{mag_bright},{mag_faint}")
except:
pass
return filter_fl, filter_fl_err, mag, mag_bright, mag_faint, filter_str
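The zero-point note inside get_filter_flux (catalog fluxes in micro-Jansky vs image counts) can be sketched directly from the two relations quoted there. These hypothetical helpers are illustrative only, not part of the class:

```python
import math

def ujy_to_abmag(flux_ujy):
    """Catalog zero point: AB magnitude from a flux in micro-Jansky."""
    return -2.5 * math.log10(flux_ujy) + 23.9

def counts_to_abmag(counts):
    """Image zero point used by the HSC tiles (see the inline comment above)."""
    return -2.5 * math.log10(counts) + 27.0
```

For example, 1 µJy corresponds to AB 23.9 exactly, and each factor of 10 in flux is 2.5 mag.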
def get_photz(self, df):
"""
similar to get_filter_flux
:param df:
:return:
"""
lp_zPDF, lp_zPDF_low, lp_zPDF_hi = None, None, None
try:
lp_zPDF = df['lp_zPDF'].values[0]
lp_zPDF_low = df['lp_zPDF_l68'].values[0]
lp_zPDF_hi = df['lp_zPDF_u68'].values[0]
#todo: could also use other photz measures and decide based on risk or chi2, which is the "best"
#ez_z = df['ez_z_ml'].values[0]
except:
log.error("Exception in cat_hsc_nep.get_photz", exc_info=True)
return None, None, None
try:
log.debug(f"HSC NEP photz {lp_zPDF} [{lp_zPDF_low},{lp_zPDF_hi}]")
except:
pass
return lp_zPDF, lp_zPDF_low, lp_zPDF_hi
def build_list_of_bid_targets(self, ra, dec, error):
'''ra and dec in decimal degrees. error in arcsec.
returns a pandas dataframe'''
#even if not None, could be we need a different catalog, so check and append
tile, tracts, positions = self.find_target_tile(ra,dec)
if tile is None:
log.info("Could not locate tile for HSC. Discontinuing search of this catalog.")
return -1,None,None
#could be none or could be not loaded yet
#if self.df is None or not (self.Tile_Dict[tile]['tract'] in self.loaded_tracts):
if self.df is None or not (set(tracts).issubset(self.loaded_tracts)):
#self.read_main_catalog()
#self.read_catalog(tract=self.Tile_Dict[tile]['tract'])
self.read_catalog(tract=tracts,position=positions)
error_in_deg = np.float64(error) / 3600.0
self.dataframe_of_bid_targets = None
self.dataframe_of_bid_targets_photoz = None
self.num_targets = 0
coord_scale = np.cos(np.deg2rad(dec))
# can't actually happen for this catalog
if coord_scale < 0.1: # about 85deg
print("Warning! Excessive declination (%f) for this method of defining error window. Not supported" % (dec))
log.error(
"Warning! Excessive declination (%f) for this method of defining error window. Not supported" % (dec))
return 0, None, None
ra_min = np.float64(ra - error_in_deg)
ra_max = np.float64(ra + error_in_deg)
dec_min = np.float64(dec - error_in_deg)
dec_max = np.float64(dec + error_in_deg)
log.info(self.Name + " searching for bid targets in range: RA [%f +/- %f], Dec [%f +/- %f] ..."
% (ra, error_in_deg, dec, error_in_deg))
try:
self.dataframe_of_bid_targets = \
self.df[ (self.df['RA'] >= ra_min) & (self.df['RA'] <= ra_max)
& (self.df['DEC'] >= dec_min) & (self.df['DEC'] <= dec_max)
].copy()
#may contain duplicates (across tiles)
#remove duplicates (assuming same RA,DEC between tiles has same data)
#so, different tiles that have the same ra,dec and filter get dropped (keep only 1)
#but if the filter is different, it is kept
#this could be done at construction time, but given the smaller subset I think
#this is faster here
try:
self.dataframe_of_bid_targets = self.dataframe_of_bid_targets.drop_duplicates(
subset=['RA','DEC'])
except:
pass
#relying on auto garbage collection here ...
try:
self.dataframe_of_bid_targets_unique = self.dataframe_of_bid_targets.copy()
self.dataframe_of_bid_targets_unique = \
self.dataframe_of_bid_targets_unique.drop_duplicates(subset=['RA','DEC'])#,'FILTER'])
self.num_targets = self.dataframe_of_bid_targets_unique.iloc[:,0].count()
except:
self.num_targets = 0
except:
if self.df is not None:
log.error(self.Name + " Exception in build_list_of_bid_targets", exc_info=True)
self.num_targets = 0
if self.dataframe_of_bid_targets_unique is not None:
#self.num_targets = self.dataframe_of_bid_targets.iloc[:, 0].count()
self.sort_bid_targets_by_likelihood(ra, dec)
log.info(self.Name + " searching for objects in [%f - %f, %f - %f] " % (ra_min, ra_max, dec_min, dec_max) +
". Found = %d" % (self.num_targets))
return self.num_targets, self.dataframe_of_bid_targets_unique, None
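The counterpart search above selects rows in a simple RA/Dec box of half-width error/3600 deg; note that no cos(Dec) stretch is applied to the RA half-width, which is why the method rejects extreme declinations up front. A standalone sketch of the same selection over a plain list of (ra, dec) rows (the `bid_targets_in_box` helper is hypothetical):

```python
def bid_targets_in_box(rows, ra, dec, error_arcsec):
    """Return the (ra, dec) rows, in decimal degrees, inside a +/- error_arcsec
    box around (ra, dec). Mirrors build_list_of_bid_targets: the RA half-width
    is NOT scaled by cos(dec), so this is only valid away from the poles."""
    e = error_arcsec / 3600.0
    return [(r, d) for (r, d) in rows
            if (ra - e) <= r <= (ra + e) and (dec - e) <= d <= (dec + e)]
```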
def build_bid_target_reports(self, cat_match, target_ra, target_dec, error, num_hits=0, section_title="",
base_count=0,
target_w=0, fiber_locs=None, target_flux=None,detobj=None):
self.clear_pages()
num_targets, _, _ = self.build_list_of_bid_targets(target_ra, target_dec, error)
#could be there is no matching tile, if so, the dataframe will be none
#if (num_targets == 0) or
if (self.dataframe_of_bid_targets_unique is None):
ras = []
decs = []
else:
try:
ras = self.dataframe_of_bid_targets_unique.loc[:, ['RA']].values
decs = self.dataframe_of_bid_targets_unique.loc[:, ['DEC']].values
except:
ras = []
decs = []
# display the exact (target) location
if G.SINGLE_PAGE_PER_DETECT:
if G.BUILD_REPORT_BY_FILTER:
#here we return a list of dictionaries (the "cutouts" from this catalog)
return self.build_cat_summary_details(cat_match,target_ra, target_dec, error, ras, decs,
target_w=target_w, fiber_locs=fiber_locs, target_flux=target_flux,
detobj=detobj)
else:
entry = self.build_cat_summary_figure(cat_match,target_ra, target_dec, error, ras, decs,
target_w=target_w, fiber_locs=fiber_locs, target_flux=target_flux,
detobj=detobj)
else:
log.error("ERROR!!! Unexpected state of G.SINGLE_PAGE_PER_DETECT")
return None
if entry is not None:
self.add_bid_entry(entry)
if G.SINGLE_PAGE_PER_DETECT: # and (len(ras) <= G.MAX_COMBINE_BID_TARGETS):
entry = self.build_multiple_bid_target_figures_one_line(cat_match, ras, decs, error,
target_ra=target_ra, target_dec=target_dec,
target_w=target_w, target_flux=target_flux,
detobj=detobj)
if entry is not None:
self.add_bid_entry(entry)
else:
return None
# else: # each bid target gets its own line
if (not G.FORCE_SINGLE_PAGE) and (len(ras) > G.MAX_COMBINE_BID_TARGETS): # each bid target gets its own line
log.error("ERROR!!! Unexpected state of G.FORCE_SINGLE_PAGE")
return self.pages
def get_stacked_cutout(self,ra,dec,window):
stacked_cutout = None
error = window
# for a given Tile, iterate over all filters
tile, tracts, positions = self.find_target_tile(ra, dec)
if tile is None:
# problem
print("No appropriate tile found in HSC for RA,DEC = [%f,%f]" % (ra, dec))
log.error("No appropriate tile found in HSC for RA,DEC = [%f,%f]" % (ra, dec))
return None
for f in self.Filters:
try:
i = self.CatalogImages[
next(i for (i, d) in enumerate(self.CatalogImages)
if ((d['filter'] == f) and (d['tile'] == tile)))]
except:
i = None
if i is None:
continue
try:
wcs_manual = i['wcs_manual']
except:
wcs_manual = self.WCS_Manual
wcs_idx = 1
try:
if i['image'] is None:
i['image'] = science_image.science_image(wcs_manual=wcs_manual,wcs_idx=wcs_idx,
image_location=op.join(i['path'], i['name']))
sci = i['image']
cutout, _, _, _ = sci.get_cutout(ra, dec, error, window=window, aperture=None, mag_func=None)
#don't need pix_counts or mag, etc here, so don't pass aperture or mag_func
if cutout is not None: # construct master cutout
if stacked_cutout is None:
stacked_cutout = copy.deepcopy(cutout)
ref_exptime = sci.exptime
total_adjusted_exptime = 1.0
else:
stacked_cutout.data = np.add(stacked_cutout.data, cutout.data * sci.exptime / ref_exptime)
total_adjusted_exptime += sci.exptime / ref_exptime
except:
log.error("Error in get_stacked_cutout.",exc_info=True)
return stacked_cutout
def build_cat_summary_details(self,cat_match, ra, dec, error, bid_ras, bid_decs, target_w=0,
fiber_locs=None, target_flux=None,detobj=None):
"""
similar to build_cat_summary_figure, but rather than build up an image section to be displayed in the
elixer report, this builds up a dictionary of information to be aggregated later over multiple catalogs
***note: here we call the base class implementation to get the cutouts and then update those cutouts with
any catalog specific changes
:param cat_match: a match summary object (contains info about the PDF location, etc)
:param ra: the RA of the HETDEX detection
:param dec: the Dec of the HETDEX detection
:param error: radius (or half-side of a box) in which to search for matches (the cutout is 3x this on a side)
:param bid_ras: RAs of potential catalog counterparts
:param bid_decs: Decs of potential catalog counterparts
:param target_w: observed wavelength (from HETDEX)
:param fiber_locs: array (or list) of 6-tuples that describe fiber locations (which fiber, position, color, etc)
:param target_flux: HETDEX integrated line flux in CGS flux units (erg/s/cm2)
:param detobj: the DetObj instance
:return: cutouts list of dictionaries with bid-target objects as well
"""
cutouts = super().build_cat_summary_details(cat_match, ra, dec, error, bid_ras, bid_decs, target_w,
fiber_locs, target_flux,detobj)
if not cutouts:
return cutouts
for c in cutouts:
try:
details = c['details']
except:
pass
#####################################################
# BidTarget format is Unique to each child catalog
#####################################################
#now the bid targets
#2. catalog entries as a new key under cutouts (like 'details') ... 'counterparts'
# this should be similar to the build_multiple_bid_target_figures_one_line()
if len(bid_ras) > 0:
#if there are no cutouts (but we do have a catalog), create a cutouts list of dictionaries to hold the
#counterparts
if not cutouts or len(cutouts) == 0:
cutouts = [{}]
cutouts[0]['counterparts'] = []
#create an empty list of counterparts under the 1st cutout
#counterparts are not filter specific, so we will just keep one list under the 1st cutout
target_count = 0
# targets are in order of increasing distance
for r, d in zip(bid_ras, bid_decs):
target_count += 1
if target_count > G.MAX_COMBINE_BID_TARGETS:
break
try: #DO NOT WANT _unique as that has wiped out the filters
df = self.dataframe_of_bid_targets.loc[(self.dataframe_of_bid_targets['RA'] == r[0]) &
(self.dataframe_of_bid_targets['DEC'] == d[0])]
#multiple filters
except:
log.error("Exception attempting to find object in dataframe_of_bid_targets", exc_info=True)
continue # this must be here, so skip to next ra,dec
if df is not None:
#add flux (cont est)
try:
#fluxes for HSC NEP are in micro-Jansky
filter_fl, filter_fl_err, filter_mag, filter_mag_bright, filter_mag_faint, filter_str = self.get_filter_flux(df)
except:
filter_fl = 0.0
filter_fl_err = 0.0
filter_mag = 0.0
filter_mag_bright = 0.0
filter_mag_faint = 0.0
filter_str = "NA"
bid_target = None
if (target_flux is not None) and (filter_fl):
if (filter_fl is not None):# and (filter_fl > 0):
#fluxes for HSC NEP are in micro-Jansky
filter_fl_cgs = self.micro_jansky_to_cgs(filter_fl,SU.filter_iso(filter_str,target_w)) #filter_fl * 1e-32 * 3e18 / (target_w ** 2) # 3e18 ~ c in angstroms/sec
filter_fl_cgs_unc = self.micro_jansky_to_cgs(filter_fl_err, SU.filter_iso(filter_str,target_w))
# assumes no error in wavelength or c
try:
bid_target = match_summary.BidTarget()
bid_target.catalog_name = self.Name
bid_target.bid_ra = df['RA'].values[0]
bid_target.bid_dec = df['DEC'].values[0]
bid_target.distance = df['distance'].values[0] * 3600
bid_target.prob_match = df['dist_prior'].values[0]
bid_target.bid_flux_est_cgs = filter_fl_cgs
bid_target.bid_filter = filter_str
bid_target.bid_mag = filter_mag
bid_target.bid_mag_err_bright = filter_mag_bright
bid_target.bid_mag_err_faint = filter_mag_faint
bid_target.bid_flux_est_cgs_unc = filter_fl_cgs_unc
if G.CHECK_ALL_CATALOG_BID_Z: #only load if we are going to use it
bid_target.phot_z, bid_target.phot_z_low, bid_target.phot_z_hi = \
self.get_photz(df)
#notice for the 'tract' we want it as the array
if target_w:
lineFlux_err = 0.
if detobj is not None:
try:
lineFlux_err = detobj.estflux_unc
except:
lineFlux_err = 0.
try:
# ew = (target_flux / filter_fl_cgs / (target_w / G.LyA_rest))
# ew_u = abs(ew * np.sqrt(
# (detobj.estflux_unc / target_flux) ** 2 +
# (filter_fl_err / filter_fl) ** 2))
#
# bid_target.bid_ew_lya_rest = ew
# bid_target.bid_ew_lya_rest_err = ew_u
#
bid_target.bid_ew_lya_rest, bid_target.bid_ew_lya_rest_err = \
SU.lya_ewr(target_flux,lineFlux_err,target_w, bid_target.bid_filter,
bid_target.bid_flux_est_cgs,bid_target.bid_flux_est_cgs_unc)
except:
log.debug("Exception computing catalog EW: ", exc_info=True)
addl_waves = None
addl_flux = None
addl_ferr = None
try:
addl_waves = cat_match.detobj.spec_obj.addl_wavelengths
addl_flux = cat_match.detobj.spec_obj.addl_fluxes
addl_ferr = cat_match.detobj.spec_obj.addl_fluxerrs
except:
pass
# # build EW error from lineFlux_err and aperture estimate error
# ew_obs = (target_flux / bid_target.bid_flux_est_cgs)
# try:
# ew_obs_err = abs(ew_obs * np.sqrt(
# (lineFlux_err / target_flux) ** 2 +
# (bid_target.bid_flux_est_cgs_unc / bid_target.bid_flux_est_cgs) ** 2))
# except:
# ew_obs_err = 0.
ew_obs, ew_obs_err = SU.ew_obs(target_flux,lineFlux_err,target_w,
filter_str, filter_fl_cgs,filter_fl_cgs_unc)
bid_target.p_lae_oii_ratio, bid_target.p_lae, bid_target.p_oii,plae_errors = \
line_prob.mc_prob_LAE(
wl_obs=target_w,
lineFlux=target_flux,
lineFlux_err=lineFlux_err,
continuum=bid_target.bid_flux_est_cgs * SU.continuum_band_adjustment(target_w,bid_target.bid_filter),
continuum_err=bid_target.bid_flux_est_cgs_unc * SU.continuum_band_adjustment(target_w,bid_target.bid_filter),
c_obs=None, which_color=None,
addl_wavelengths=addl_waves,
addl_fluxes=addl_flux,
addl_errors=addl_ferr,
sky_area=None,
cosmo=None, lae_priors=None,
ew_case=None, W_0=None,
z_OII=None, sigma=None)
try:
if plae_errors:
bid_target.p_lae_oii_ratio_min = plae_errors['ratio'][1]
bid_target.p_lae_oii_ratio_max = plae_errors['ratio'][2]
except:
pass
try:
bid_target.add_filter('HSC NEP',filter_str,filter_fl_cgs,filter_fl_err)
except:
log.debug('Unable to build filter entry for bid_target.',exc_info=True)
cat_match.add_bid_target(bid_target)
try: # no downstream edits so they can both point to same bid_target
detobj.bid_target_list.append(bid_target)
except:
if detobj is not None:
log.warning("Unable to append bid_target to detobj.", exc_info=True)
try:
cutouts[0]['counterparts'].append(bid_target)
except:
log.warning("Unable to append bid_target to cutouts.", exc_info=True)
except:
log.debug('Unable to build bid_target.',exc_info=True)
return cutouts
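build_cat_summary_details converts the catalog's µJy fluxes to cgs f_lambda at the filter's iso wavelength; per the inline comment, the underlying relation is f_lambda = f_nu * c / lambda^2 with 1 µJy = 1e-29 erg/s/cm^2/Hz and c ≈ 3e18 Å/s. A hedged sketch of that conversion (the actual micro_jansky_to_cgs helper lives in the base class and may differ in detail):

```python
def micro_jansky_to_cgs_sketch(flux_ujy, wavelength_aa):
    """f_lambda [erg/s/cm^2/AA] from f_nu in micro-Jansky at a wavelength in Angstroms.
    Uses 1 uJy = 1e-29 erg/s/cm^2/Hz and c ~ 3e18 AA/s."""
    return flux_ujy * 1e-29 * 3e18 / (wavelength_aa ** 2)
```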
def build_cat_summary_figure (self, cat_match, ra, dec, error,bid_ras, bid_decs, target_w=0,
fiber_locs=None, target_flux=None,detobj=None):
'''Builds the figure (page) the exact target location. Contains just the filter images ...
Returns the matplotlib figure. Due to limitations of matplotlib pdf generation, each figure = 1 page'''
# note: error is essentially a radius, but this is done as a box, with the 0,0 position in lower-left
# not the middle, so need the total length of each side to be twice translated error or 2*2*error
# ... change to 1.5 times twice the translated error (really sqrt(2) * 2* error, but 1.5 is close enough)
window = error * 3
target_box_side = error/4.0 #basically, the box side is 1/12 of the window size
rows = 10
#cols = 1 + len(self.CatalogImages)/len(self.Tiles)
#note: setting size to 7 from 6 so they will be the right size (the 7th position will not be populated)
cols = 7 # 1 for the fiber position and up to 5 filters for any one tile (u,g,r,i,z)
fig_sz_x = 18 #cols * 3
fig_sz_y = 3 #was rows * 3
fig = plt.figure(figsize=(fig_sz_x, fig_sz_y))
plt.subplots_adjust(left=0.05, right=0.95, top=0.95, bottom=0.05)
gs = gridspec.GridSpec(rows, cols, wspace=0.25, hspace=0.0)
# reminder gridspec indexing is 0 based; matplotlib.subplot is 1-based
font = FontProperties()
font.set_family('monospace')
font.set_size(12)
# for a given Tile, iterate over all filters
tile, tracts, positions = self.find_target_tile(ra, dec)
if tile is None:
# problem
print("No appropriate tile found in HSC for RA,DEC = [%f,%f]" % (ra, dec))
log.error("No appropriate tile found in HSC for RA,DEC = [%f,%f]" % (ra, dec))
return None
# All on one line now across top of plots
if G.ZOO:
title = "Possible Matches = %d (within +/- %g\")" \
% (len(self.dataframe_of_bid_targets_unique), error)
else:
title = self.Name + " : Possible Matches = %d (within +/- %g\")" \
% (len(self.dataframe_of_bid_targets_unique), error)
cont_est = -1
if target_flux and self.CONT_EST_BASE:
title += " Minimum (no match) 3$\sigma$ rest-EW: "
cont_est = self.CONT_EST_BASE*3
if cont_est != -1:
title += " LyA = %g $\AA$ " % ((target_flux / cont_est) / (target_w / G.LyA_rest))
if target_w >= G.OII_rest:
title = title + " OII = %g $\AA$" % ((target_flux / cont_est) / (target_w / G.OII_rest))
else:
title = title + " OII = N/A"
else:
title += " LyA = N/A OII = N/A"
plt.subplot(gs[0, :])
text = plt.text(0, 0.7, title, ha='left', va='bottom', fontproperties=font)
plt.gca().set_frame_on(False)
plt.gca().axis('off')
ref_exptime = 1.0
total_adjusted_exptime = 1.0
bid_colors = self.get_bid_colors(len(bid_ras))
exptime_cont_est = -1
index = 0 #images go in positions 1+ (0 is for the fiber positions)
if self.INCLUDE_KPNO_G:
try:
#IF KPNO g-band is available,
# advance the index by 1 and insert the KPNO g-band image in 1st position AFTER the "master" cutout (made
# from only HSC)
kpno = cat_kpno.KPNO()
kpno_cuts = kpno.get_cutouts(ra,dec,window/3600.0,aperture=kpno.mean_FWHM * 0.5 + 0.5,filter='g',first=True,detobj=detobj)
if (kpno_cuts is not None) and (len(kpno_cuts) == 1):
index += 1
_ = plt.subplot(gs[1:, index])
sci = science_image.science_image()
vmin, vmax = sci.get_vrange(kpno_cuts[0]['cutout'].data)
pix_size = sci.calc_pixel_size(kpno_cuts[0]['cutout'].wcs)
ext = kpno_cuts[0]['cutout'].shape[0] * pix_size / 2.
plt.imshow(kpno_cuts[0]['cutout'].data, origin='lower', interpolation='none', cmap=plt.get_cmap('gray_r'),
vmin=vmin, vmax=vmax, extent=[-ext, ext, -ext, ext])
#plt.title(i['instrument'] + " " + i['filter'])
plt.title("(" + kpno.name + " g)")
plt.xticks([int(ext), int(ext / 2.), 0, int(-ext / 2.), int(-ext)])
plt.yticks([int(ext), int(ext / 2.), 0, int(-ext / 2.), int(-ext)])
# plt.plot(0, 0, "r+")
self.add_zero_position(plt)
if kpno_cuts[0]['details'] is not None:
try:
if (kpno.MAG_LIMIT < kpno_cuts[0]['details']['mag'] < 100) and (kpno_cuts[0]['details']['radius'] > 0):
kpno_cuts[0]['details']['fail_mag_limit'] = True
kpno_cuts[0]['details']['raw_mag'] = kpno_cuts[0]['details']['mag']
kpno_cuts[0]['details']['raw_mag_bright'] = kpno_cuts[0]['details']['mag_bright']
kpno_cuts[0]['details']['raw_mag_faint'] = kpno_cuts[0]['details']['mag_faint']
kpno_cuts[0]['details']['raw_mag_err'] = kpno_cuts[0]['details']['mag_err']
log.warning(f"Cutout mag {kpno_cuts[0]['details']['mag']} greater than limit {kpno.MAG_LIMIT}. Setting to limit.")
kpno_cuts[0]['details']['mag'] = kpno.MAG_LIMIT
try:
kpno_cuts[0]['details']['mag_bright'] = min(kpno.MAG_LIMIT, kpno_cuts[0]['details']['mag_bright'])
except:
kpno_cuts[0]['details']['mag_bright'] = kpno.MAG_LIMIT
try:
kpno_cuts[0]['details']['mag_faint'] = max(kpno.MAG_LIMIT, G.MAX_MAG_FAINT)
except:
kpno_cuts[0]['details']['mag_faint'] = G.MAX_MAG_FAINT
except:
pass
#this will happen anyway under KPNO itself
# (note: the "details" are actually populated in the independent KPNO catalog calls)
#build up the needed parameters from the kpno_cuts
cx = kpno_cuts[0]['ap_center'][0]
cy = kpno_cuts[0]['ap_center'][1]
#and need to get an EW and PLAE/POII
if detobj is not None:
try:
lineFlux_err = detobj.estflux_unc
except:
lineFlux_err = 0.
try:
flux_faint = None
flux_bright = None
bid_flux_est_cgs_unc = None
bid_flux_est_cgs = None
if kpno_cuts[0]['details']['mag'] < 99:
bid_flux_est_cgs = self.obs_mag_to_cgs_flux(kpno_cuts[0]['details']['mag'],
SU.filter_iso('g', target_w))
if kpno_cuts[0]['details']['mag_faint'] < 99:
flux_faint = self.obs_mag_to_cgs_flux(kpno_cuts[0]['details']['mag_faint'],
SU.filter_iso('g', target_w))
if kpno_cuts[0]['details']['mag_bright'] < 99:
flux_bright = self.obs_mag_to_cgs_flux(kpno_cuts[0]['details']['mag_bright'],
SU.filter_iso('g', target_w))
if flux_bright and flux_faint:
bid_flux_est_cgs_unc = max((bid_flux_est_cgs - flux_faint),
(flux_bright - bid_flux_est_cgs))
elif flux_bright:
bid_flux_est_cgs_unc = flux_bright - bid_flux_est_cgs
except:
pass
addl_waves = None
addl_flux = None
addl_ferr = None
try:
addl_waves = cat_match.detobj.spec_obj.addl_wavelengths
addl_flux = cat_match.detobj.spec_obj.addl_fluxes
addl_ferr = cat_match.detobj.spec_obj.addl_fluxerrs
except:
pass
p_lae_oii_ratio, p_lae, p_oii, plae_errors = \
line_prob.mc_prob_LAE(
wl_obs=target_w,
lineFlux=target_flux,
lineFlux_err=lineFlux_err,
continuum=bid_flux_est_cgs,
continuum_err=bid_flux_est_cgs_unc,
c_obs=None, which_color=None,
addl_wavelengths=addl_waves,
addl_fluxes=addl_flux,
addl_errors=addl_ferr,
sky_area=None,
cosmo=None, lae_priors=None,
ew_case=None, W_0=None,
z_OII=None, sigma=None)
ew_obs = (target_flux / bid_flux_est_cgs)
cutout_ewr = ew_obs / (1. + target_w / G.LyA_rest)
cutout_plae = p_lae_oii_ratio
if (kpno_cuts[0]['details']['sep_objects'] is not None): # and (details['sep_obj_idx'] is not None):
self.add_elliptical_aperture_positions(plt, kpno_cuts[0]['details']['sep_objects'],
kpno_cuts[0]['details']['sep_obj_idx'],
kpno_cuts[0]['details']['radius'],
kpno_cuts[0]['details']['mag'],
cx, cy, cutout_ewr, cutout_plae)
else:
self.add_aperture_position(plt, kpno_cuts[0]['details']['radius'],
kpno_cuts[0]['details']['mag'],
cx, cy, cutout_ewr, cutout_plae)
self.add_north_box(plt, sci, kpno_cuts[0]['cutout'], error, 0, 0, theta=None)
#don't want KPNO catalog objects, just the HSC ones
except:
log.warning("Exception adding KPNO to HSC report",exc_info=True)
for f in self.Filters:
try:
i = self.CatalogImages[
next(i for (i, d) in enumerate(self.CatalogImages)
if ((d['filter'] == f) and (d['tile'] == tile)))]
except:
i = None
if i is None:
continue
index += 1
if index > cols:
log.warning("Exceeded max number of grid spec columns.")
break # no more grid columns available, so stop
try:
wcs_manual = i['wcs_manual']
aperture = i['aperture']
mag_func = i['mag_func']
except:
wcs_manual = self.WCS_Manual
aperture = 0.0
mag_func = None
wcs_idx = 1
if i['image'] is None:
i['image'] = science_image.science_image(wcs_manual=wcs_manual,wcs_idx=wcs_idx,
image_location=op.join(i['path'], i['name']))
sci = i['image']
#the filters are in order, use r if g is not there
if (f == 'r') and (sci.exptime is not None) and (exptime_cont_est == -1):
exptime_cont_est = sci.exptime
# the filters are in order, so this will overwrite r
if (f == 'g') and (sci.exptime is not None):
exptime_cont_est = sci.exptime
# sci.load_image(wcs_manual=True)
cutout, pix_counts, mag, mag_radius,details = sci.get_cutout(ra, dec, error, window=window,
aperture=aperture,mag_func=mag_func,return_details=True,detobj=detobj)
if (self.MAG_LIMIT < mag < 100) and (mag_radius > 0):
log.warning(f"Cutout mag {mag} greater than limit {self.MAG_LIMIT}. Setting to limit.")
details['fail_mag_limit'] = True
details['raw_mag'] = mag
details['raw_mag_bright'] = details['mag_bright']
details['raw_mag_faint'] = details['mag_faint']
details['raw_mag_err'] = details['mag_err']
mag = self.MAG_LIMIT
if details:
details['mag'] = mag
try:
details['mag_bright'] = min(mag,details['mag_bright'])
except:
details['mag_bright'] = mag
try:
details['mag_faint'] = max(mag,G.MAX_MAG_FAINT)
except:
details['mag_faint'] = G.MAX_MAG_FAINT
ext = sci.window / 2. # extent is from the 0,0 center, so window/2
bid_target = None
cutout_ewr = None
cutout_ewr_err = None
cutout_plae = None
try: # update non-matched source line with PLAE()
if ((mag < 99) or (cont_est != -1)) and (target_flux is not None) and (i['filter'] == 'r'):
# make a "blank" catalog match (e.g. at this specific RA, Dec (not actually from catalog)
bid_target = match_summary.BidTarget()
bid_target.catalog_name = self.Name
bid_target.bid_ra = 666 # nonsense RA
bid_target.bid_dec = 666 # nonsense Dec
bid_target.distance = 0.0
bid_target.bid_filter = i['filter']
bid_target.bid_mag = mag
bid_target.bid_mag_err_bright = 0.0 #todo: right now don't have error on aperture mag
bid_target.bid_mag_err_faint = 0.0
bid_target.bid_flux_est_cgs_unc = 0.0
if mag < 99:
#bid_target.bid_flux_est_cgs = self.obs_mag_to_cgs_flux(mag, target_w)
bid_target.bid_flux_est_cgs = self.obs_mag_to_cgs_flux(mag,SU.filter_iso(i['filter'],target_w))
try:
flux_faint = None
flux_bright = None
if details['mag_faint'] < 99:
flux_faint = self.obs_mag_to_cgs_flux(details['mag_faint'], SU.filter_iso(i['filter'],target_w))
if details['mag_bright'] < 99:
flux_bright = self.obs_mag_to_cgs_flux(details['mag_bright'], SU.filter_iso(i['filter'],target_w))
if flux_bright and flux_faint:
bid_target.bid_flux_est_cgs_unc = max((bid_target.bid_flux_est_cgs - flux_faint),
(flux_bright -bid_target.bid_flux_est_cgs))
elif flux_bright:
bid_target.bid_flux_est_cgs_unc = flux_bright -bid_target.bid_flux_est_cgs
except:
pass
else:
bid_target.bid_flux_est_cgs = cont_est
try:
bid_target.bid_mag_err_bright = mag - details['mag_bright']
bid_target.bid_mag_err_faint = details['mag_faint'] - mag
except:
pass
bid_target.add_filter(i['instrument'], i['filter'], bid_target.bid_flux_est_cgs, -1)
addl_waves = None
addl_flux = None
addl_ferr = None
try:
addl_waves = cat_match.detobj.spec_obj.addl_wavelengths
addl_flux = cat_match.detobj.spec_obj.addl_fluxes
addl_ferr = cat_match.detobj.spec_obj.addl_fluxerrs
except:
pass
lineFlux_err = 0.
if detobj is not None:
try:
lineFlux_err = detobj.estflux_unc
except:
lineFlux_err = 0.
# build EW error from lineFlux_err and aperture estimate error
# ew_obs = (target_flux / bid_target.bid_flux_est_cgs)
# try:
# ew_obs_err = abs(ew_obs * np.sqrt(
# (lineFlux_err / target_flux) ** 2 +
# (bid_target.bid_flux_est_cgs_unc / bid_target.bid_flux_est_cgs) ** 2))
# except:
# ew_obs_err = 0.
ew_obs, ew_obs_err = SU.ew_obs(target_flux,lineFlux_err,target_w, bid_target.bid_filter,
bid_target.bid_flux_est_cgs,bid_target.bid_flux_est_cgs_unc)
# bid_target.p_lae_oii_ratio, bid_target.p_lae, bid_target.p_oii, plae_errors = \
# line_prob.prob_LAE(wl_obs=target_w, lineFlux=target_flux,
# ew_obs=ew_obs,
# lineFlux_err=lineFlux_err,
# ew_obs_err=ew_obs_err,
# c_obs=None, which_color=None, addl_fluxes=addl_flux,
# addl_wavelengths=addl_waves, addl_errors=addl_ferr, sky_area=None,
# cosmo=None, lae_priors=None, ew_case=None, W_0=None, z_OII=None,
# sigma=None,estimate_error=True)
bid_target.p_lae_oii_ratio, bid_target.p_lae, bid_target.p_oii, plae_errors = \
line_prob.mc_prob_LAE(
wl_obs=target_w,
lineFlux=target_flux,
lineFlux_err=lineFlux_err,
continuum=bid_target.bid_flux_est_cgs * SU.continuum_band_adjustment(target_w,bid_target.bid_filter),
continuum_err=bid_target.bid_flux_est_cgs_unc * SU.continuum_band_adjustment(target_w,bid_target.bid_filter),
c_obs=None, which_color=None,
addl_wavelengths=addl_waves,
addl_fluxes=addl_flux,
addl_errors=addl_ferr,
sky_area=None,
cosmo=None, lae_priors=None,
ew_case=None, W_0=None,
z_OII=None, sigma=None)
try:
if plae_errors:
bid_target.p_lae_oii_ratio_min = plae_errors['ratio'][1]
bid_target.p_lae_oii_ratio_max = plae_errors['ratio'][2]
except:
pass
cutout_plae = bid_target.p_lae_oii_ratio
cutout_ewr = ew_obs / (1. + target_w / G.LyA_rest)
cutout_ewr_err = ew_obs_err / (1. + target_w / G.LyA_rest)
# if (not G.ZOO) and (bid_target is not None) and (bid_target.p_lae_oii_ratio is not None):
# text.set_text(text.get_text() + " P(LAE)/P(OII) = %0.4g (%s)" % (bid_target.p_lae_oii_ratio,i['filter']))
if (not G.ZOO) and (bid_target is not None) and (bid_target.p_lae_oii_ratio is not None):
try:
text.set_text(
text.get_text() + " P(LAE)/P(OII): $%.4g\ ^{%.4g}_{%.4g}$ (%s)" %
(round(bid_target.p_lae_oii_ratio, 3),
round(bid_target.p_lae_oii_ratio_max, 3),
round(bid_target.p_lae_oii_ratio_min, 3),
f))
except:
log.debug("Exception adding PLAE with range", exc_info=True)
try:
text.set_text(
text.get_text() + " P(LAE)/P(OII) = %0.4g (%s)" % (bid_target.p_lae_oii_ratio, f))
except:
text.set_text(
text.get_text() + " P(LAE)/P(OII): (%s) (%s)" % ("---", f))
# text.set_text(text.get_text() + " P(LAE)/P(OII) = %0.4g [%0.4g:%0.4g] (%s)"
# %(utilities.saferound(bid_target.p_lae_oii_ratio,3),
# utilities.saferound(bid_target.p_lae_oii_ratio_min,3),
# utilities.saferound(bid_target.p_lae_oii_ratio_max,3),i['filter']))
cat_match.add_bid_target(bid_target)
try: # no downstream edits so they can both point to same bid_target
if detobj is not None:
detobj.bid_target_list.append(bid_target)
except:
log.warning("Unable to append bid_target to detobj.", exc_info=True)
except:
log.debug('Could not build exact location photometry info.', exc_info=True)
if cutout is not None: # construct master cutout
# 1st cutout might not be what we want for the master (could be a summary image from elsewhere)
if self.master_cutout:
if self.master_cutout.shape != cutout.shape:
del self.master_cutout
self.master_cutout = None
# master cutout needs a copy of the data since it is going to be modified (stacked)
# repeat the cutout call, but get a copy
if self.master_cutout is None:
self.master_cutout,_,_, _ = sci.get_cutout(ra, dec, error, window=window, copy=True,reset_center=False,detobj=detobj)
#self.master_cutout,_,_, _ = sci.get_cutout(ra, dec, error, window=window, copy=True)
if sci.exptime:
ref_exptime = sci.exptime
total_adjusted_exptime = 1.0
else:
try:
self.master_cutout.data = np.add(self.master_cutout.data, cutout.data * sci.exptime / ref_exptime)
total_adjusted_exptime += sci.exptime / ref_exptime
except:
log.warning("Unexpected exception.", exc_info=True)
_ = plt.subplot(gs[1:, index])
plt.imshow(cutout.data, origin='lower', interpolation='none', cmap=plt.get_cmap('gray_r'),
vmin=sci.vmin, vmax=sci.vmax, extent=[-ext, ext, -ext, ext])
plt.title(i['instrument'] + " " + i['filter'])
plt.xticks([int(ext), int(ext / 2.), 0, int(-ext / 2.), int(-ext)])
plt.yticks([int(ext), int(ext / 2.), 0, int(-ext / 2.), int(-ext)])
#plt.plot(0, 0, "r+")
self.add_zero_position(plt)
if pix_counts is not None:
details['catalog_name'] = self.name
details['filter_name'] = f
details['aperture_eqw_rest_lya'] = cutout_ewr
details['aperture_eqw_rest_lya_err'] = cutout_ewr_err
details['aperture_plae'] = cutout_plae
try:
if plae_errors:
details['aperture_plae_min'] = plae_errors['ratio'][1]
details['aperture_plae_max'] = plae_errors['ratio'][2]
except:
details['aperture_plae_min'] = None
details['aperture_plae_max'] = None
cx = sci.last_x0_center
cy = sci.last_y0_center
if (details['sep_objects'] is not None): # and (details['sep_obj_idx'] is not None):
self.add_elliptical_aperture_positions(plt,details['sep_objects'],details['sep_obj_idx'],
mag_radius,mag,cx,cy,cutout_ewr,cutout_plae)
else:
self.add_aperture_position(plt,mag_radius,mag,cx,cy,cutout_ewr,cutout_plae)
self.add_north_box(plt, sci, cutout, error, 0, 0, theta=None)
x, y = sci.get_position(ra, dec, cutout) # zero (absolute) position
for br, bd, bc in zip(bid_ras, bid_decs, bid_colors):
fx, fy = sci.get_position(br, bd, cutout)
self.add_catalog_position(plt,
x=(fx-x)-target_box_side / 2.0,
y=(fy-y)-target_box_side / 2.0,
size=target_box_side, color=bc)
# plt.gca().add_patch(plt.Rectangle(((fx - x) - target_box_side / 2.0, (fy - y) - target_box_side / 2.0),
# width=target_box_side, height=target_box_side,
# angle=0.0, color=bc, fill=False, linewidth=1.0, zorder=1))
if (details is not None) and (detobj is not None):
#check for flags
#get the mask cutout
mask_cutout = self.get_mask_cutout(tile,ra,dec,error)
if mask_cutout is not None:
#iterate over the Elixer Apertures and the SEP apertures
if details['elixer_apertures']:
for a in details['elixer_apertures']:
# get the masks under the aperture
# do this for each step (increase in size)
pixels = sci.get_pixels_under_aperture(mask_cutout,a['ra'],a['dec'],
a['radius'],a['radius'],
angle= 0., north_angle=np.pi/2.)
mask_frac = self.update_mask_counts(pixels,None) / len(pixels)
#check the mask for any counts > 10% of total pixels
trip_mask = np.where(mask_frac > 0.10)[0]
# not all flags are 'bad' (i.e. 32 = detection)
a['image_flags'] = np.sum([2**x for x in trip_mask])
if details['sep_objects']:
for a in details['sep_objects']:
mask_frac = None
pixels = sci.get_pixels_under_aperture(mask_cutout, a['ra'], a['dec'],
a['a'], a['b'],
angle=a['theta'], north_angle=np.pi / 2.)
mask_frac = self.update_mask_counts(pixels, None) / len(pixels)
# check the mask for any counts > 10% of total pixels
trip_mask = np.where(mask_frac > 0.10)[0]
#not all flags are 'bad' (i.e. 32 = detection)
a['image_flags'] = np.sum([2 ** x for x in trip_mask])
detobj.aperture_details_list.append(details)
if self.master_cutout is None:
# cannot continue
print("No catalog image available in %s" % self.Name)
plt.close()
return None
else:
self.master_cutout.data /= total_adjusted_exptime
plt.subplot(gs[1:, 0])
self.add_fiber_positions(plt, ra, dec, fiber_locs, error, ext, self.master_cutout)
#self.add_zero_position(plt)
# complete the entry
plt.close()
# get zoo style cutout as png
if G.ZOO_MINI and (detobj is not None):
plt.figure()
self.add_fiber_positions(plt, ra, dec, fiber_locs, error, ext, self.master_cutout, unlabeled=True)
plt.gca().set_axis_off()
box_ratio = 1.0#0.99
# add window outline
xl, xr = plt.gca().get_xlim()
yb, yt = plt.gca().get_ylim()
zero_x = (xl + xr) / 2.
zero_y = (yb + yt) / 2.
rx = (xr - xl) * box_ratio / 2.0
ry = (yt - yb) * box_ratio / 2.0
plt.gca().add_patch(plt.Rectangle((zero_x - rx, zero_y - ry), width=rx * 2 , height=ry * 2,
angle=0, color='red', fill=False,linewidth=8))
buf = io.BytesIO()
plt.savefig(buf, format='png', dpi=300,transparent=True)
detobj.image_cutout_fiber_pos = buf
plt.close()
return fig
def update_mask_counts(self,pixels=None,mask_counts=None):
"""
Tally the number of pixels set for each mask flag bit.
:param pixels: array (or list) of integer mask values under an aperture
:param mask_counts: optional running tally (length self.MASK_LENGTH) to add to; created if None
:return: array of pixel counts, one entry per mask bit
"""
try:
if mask_counts is None:
mask_counts = np.zeros(self.MASK_LENGTH,dtype=int)
if pixels is None:
return mask_counts
if type(pixels) is list:
pixels = np.array(pixels)
for i in range(len(mask_counts)):
try:
num_pix = len(np.where(pixels & 2**i)[0])
mask_counts[i] += num_pix
except:
pass
except:
pass
return mask_counts
def build_multiple_bid_target_figures_one_line(self, cat_match, ras, decs, error, target_ra=None, target_dec=None,
target_w=0, target_flux=None,detobj=None):
rows = 1
cols = 6
fig_sz_x = cols * 3
fig_sz_y = rows * 3
fig = plt.figure(figsize=(fig_sz_x, fig_sz_y))
plt.subplots_adjust(left=0.05, right=0.95, top=0.9, bottom=0.2)
#col(0) = "labels", 1..3 = bid targets, 4..5= Zplot
gs = gridspec.GridSpec(rows, cols, wspace=0.25, hspace=0.5)
# entry text
font = FontProperties()
font.set_family('monospace')
font.set_size(12)
#row labels
plt.subplot(gs[0, 0])
plt.gca().set_frame_on(False)
plt.gca().axis('off')
if len(ras) < 1:
# per Karl insert a blank row
text = "No matching targets in catalog.\nRow intentionally blank."
plt.text(0, 0, text, ha='left', va='bottom', fontproperties=font)
plt.close()
return fig
elif (not G.FORCE_SINGLE_PAGE) and (len(ras) > G.MAX_COMBINE_BID_TARGETS):
text = "Too many matching targets in catalog.\nIndividual target reports on following pages."
plt.text(0, 0, text, ha='left', va='bottom', fontproperties=font)
plt.close()
return fig
bid_colors = self.get_bid_colors(len(ras))
if G.ZOO:
text = "Separation\n" + \
"Match score\n" + \
"Spec z\n" + \
"Photo z\n" + \
"Est LyA rest-EW\n" + \
"mag\n\n"
else:
text = "Separation\n" + \
"Match score\n" + \
"RA, Dec\n" + \
"Spec z\n" + \
"Photo z\n" + \
"Est LyA rest-EW\n" + \
"mag\n" + \
"P(LAE)/P(OII)\n"
plt.text(0, 0, text, ha='left', va='bottom', fontproperties=font)
col_idx = 0
target_count = 0
# targets are in order of increasing distance
for r, d in zip(ras, decs):
target_count += 1
if target_count > G.MAX_COMBINE_BID_TARGETS:
break
col_idx += 1
try: #DO NOT WANT _unique as that has wiped out the filters
df = self.dataframe_of_bid_targets.loc[(self.dataframe_of_bid_targets['RA'] == r[0]) &
(self.dataframe_of_bid_targets['DEC'] == d[0]) &
(self.dataframe_of_bid_targets['FILTER'] == 'r')]
if (df is None) or (len(df) == 0):
df = self.dataframe_of_bid_targets.loc[(self.dataframe_of_bid_targets['RA'] == r[0]) &
(self.dataframe_of_bid_targets['DEC'] == d[0]) &
(self.dataframe_of_bid_targets['FILTER'] == 'g')]
if (df is None) or (len(df) == 0):
df = self.dataframe_of_bid_targets.loc[(self.dataframe_of_bid_targets['RA'] == r[0]) &
(self.dataframe_of_bid_targets['DEC'] == d[0])]
except:
log.error("Exception attempting to find object in dataframe_of_bid_targets", exc_info=True)
continue # this must be here, so skip to next ra,dec
if df is not None:
text = ""
if G.ZOO:
text = text + "%g\"\n%0.3f\n" \
% (df['distance'].values[0] * 3600.,df['dist_prior'].values[0])
else:
text = text + "%g\"\n%0.3f\n%f, %f\n" \
% ( df['distance'].values[0] * 3600.,df['dist_prior'].values[0],
df['RA'].values[0], df['DEC'].values[0])
text += "N/A\nN/A\n" #don't have spec-z or photo-z for HSC
#todo: add flux (cont est)
try:
filter_fl, filter_fl_err, filter_mag, filter_mag_bright, filter_mag_faint, filter_str = self.get_filter_flux(df)
except:
filter_fl = 0.0
filter_fl_err = 0.0
filter_mag = 0.0
filter_mag_bright = 0.0
filter_mag_faint = 0.0
filter_str = "NA"
bid_target = None
if (target_flux is not None) and (filter_fl != 0.0):
if (filter_fl is not None):# and (filter_fl > 0):
filter_fl_cgs = self.nano_jansky_to_cgs(filter_fl,SU.filter_iso(filter_str,target_w)) #filter_fl * 1e-32 * 3e18 / (target_w ** 2) # 3e18 ~ c in angstroms/sec
filter_fl_cgs_unc = self.nano_jansky_to_cgs(filter_fl_err, SU.filter_iso(filter_str,target_w))
# assumes no error in wavelength or c
# try:
# ew = (target_flux / filter_fl_cgs / (target_w / G.LyA_rest))
# ew_u = abs(ew * np.sqrt(
# (detobj.estflux_unc / target_flux) ** 2 +
# (filter_fl_err / filter_fl) ** 2))
# text = text + utilities.unc_str((ew,ew_u)) + "$\AA$\n"
# except:
# log.debug("Exception computing catalog EW: ",exc_info=True)
# text = text + "%g $\AA$\n" % (target_flux / filter_fl_cgs / (target_w / G.LyA_rest))
#
# if target_w >= G.OII_rest:
# text = text + "%g $\AA$\n" % (target_flux / filter_fl_cgs / (target_w / G.OII_rest))
# else:
# text = text + "N/A\n"
try:
bid_target = match_summary.BidTarget()
bid_target.catalog_name = self.Name
bid_target.bid_ra = df['RA'].values[0]
bid_target.bid_dec = df['DEC'].values[0]
bid_target.distance = df['distance'].values[0] * 3600
bid_target.prob_match = df['dist_prior'].values[0]
bid_target.bid_flux_est_cgs = filter_fl_cgs
bid_target.bid_filter = filter_str
bid_target.bid_mag = filter_mag
bid_target.bid_mag_err_bright = filter_mag_bright
bid_target.bid_mag_err_faint = filter_mag_faint
bid_target.bid_flux_est_cgs_unc = filter_fl_cgs_unc
lineFlux_err = 0.
if detobj is not None:
try:
lineFlux_err = detobj.estflux_unc
except:
lineFlux_err = 0.
try:
# ew = (target_flux / filter_fl_cgs / (target_w / G.LyA_rest))
# ew_u = abs(ew * np.sqrt(
# (detobj.estflux_unc / target_flux) ** 2 +
# (filter_fl_err / filter_fl) ** 2))
#
# bid_target.bid_ew_lya_rest = ew
# bid_target.bid_ew_lya_rest_err = ew_u
bid_target.bid_ew_lya_rest, bid_target.bid_ew_lya_rest_err = \
SU.lya_ewr(target_flux,lineFlux_err,target_w, bid_target.bid_filter,
bid_target.bid_flux_est_cgs,bid_target.bid_flux_est_cgs_unc)
text = text + utilities.unc_str((bid_target.bid_ew_lya_rest, bid_target.bid_ew_lya_rest_err)) + "$\AA$\n"
except:
log.debug("Exception computing catalog EW: ", exc_info=True)
text = text + "%g $\AA$\n" % (target_flux / filter_fl_cgs / (target_w / G.LyA_rest))
addl_waves = None
addl_flux = None
addl_ferr = None
try:
addl_waves = cat_match.detobj.spec_obj.addl_wavelengths
addl_flux = cat_match.detobj.spec_obj.addl_fluxes
addl_ferr = cat_match.detobj.spec_obj.addl_fluxerrs
except:
pass
# build EW error from lineFlux_err and aperture estimate error
# ew_obs = (target_flux / bid_target.bid_flux_est_cgs)
# try:
# ew_obs_err = abs(ew_obs * np.sqrt(
# (lineFlux_err / target_flux) ** 2 +
# (bid_target.bid_flux_est_cgs_unc / bid_target.bid_flux_est_cgs) ** 2))
# except:
# ew_obs_err = 0.
ew_obs, ew_obs_err = SU.ew_obs(target_flux,lineFlux_err,target_w, bid_target.bid_filter,
bid_target.bid_flux_est_cgs,bid_target.bid_flux_est_cgs_unc)
# bid_target.p_lae_oii_ratio, bid_target.p_lae, bid_target.p_oii,plae_errors = \
# line_prob.prob_LAE(wl_obs=target_w,
# lineFlux=target_flux,
# ew_obs=ew_obs,
# lineFlux_err=lineFlux_err,
# ew_obs_err=ew_obs_err,
# c_obs=None, which_color=None, addl_wavelengths=addl_waves,
# addl_fluxes=addl_flux, addl_errors=addl_ferr, sky_area=None,
# cosmo=None, lae_priors=None,
# ew_case=None, W_0=None,
# z_OII=None, sigma=None,estimate_error=True)
bid_target.p_lae_oii_ratio, bid_target.p_lae, bid_target.p_oii,plae_errors = \
line_prob.mc_prob_LAE(
wl_obs=target_w,
lineFlux=target_flux,
lineFlux_err=lineFlux_err,
continuum=bid_target.bid_flux_est_cgs * SU.continuum_band_adjustment(target_w,bid_target.bid_filter),
continuum_err=bid_target.bid_flux_est_cgs_unc * SU.continuum_band_adjustment(target_w,bid_target.bid_filter),
c_obs=None, which_color=None,
addl_wavelengths=addl_waves,
addl_fluxes=addl_flux,
addl_errors=addl_ferr,
sky_area=None,
cosmo=None, lae_priors=None,
ew_case=None, W_0=None,
z_OII=None, sigma=None)
#dfx = self.dataframe_of_bid_targets.loc[(self.dataframe_of_bid_targets['RA'] == r[0]) &
# (self.dataframe_of_bid_targets['DEC'] == d[0])]
try:
if plae_errors:
bid_target.p_lae_oii_ratio_min = plae_errors['ratio'][1]
bid_target.p_lae_oii_ratio_max = plae_errors['ratio'][2]
except:
pass
try:
bid_target.add_filter('HSC','R',filter_fl_cgs,filter_fl_err)
except:
log.debug('Unable to build filter entry for bid_target.',exc_info=True)
cat_match.add_bid_target(bid_target)
try: # no downstream edits so they can both point to same bid_target
detobj.bid_target_list.append(bid_target)
except:
log.warning("Unable to append bid_target to detobj.", exc_info=True)
except:
log.debug('Unable to build bid_target.',exc_info=True)
else:
text += "N/A\nN/A\n"
try:
text = text + "%0.2f(%0.2f,%0.2f)\n" % (filter_mag, filter_mag_bright, filter_mag_faint)
except:
log.warning("Magnitude info is none: mag(%s), mag_bright(%s), mag_faint(%s)"
% (filter_mag, filter_mag_bright, filter_mag_faint))
text += "No mag info\n"
if (not G.ZOO) and (bid_target is not None) and (bid_target.p_lae_oii_ratio is not None):
try:
text += r"$%0.4g\ ^{%.4g}_{%.4g}$" % (utilities.saferound(bid_target.p_lae_oii_ratio, 3),
utilities.saferound(bid_target.p_lae_oii_ratio_max, 3),
utilities.saferound(bid_target.p_lae_oii_ratio_min, 3))
text += "\n"
except:
text += "%0.4g\n" % ( utilities.saferound(bid_target.p_lae_oii_ratio,3))
else:
text += "\n"
else:
text = "%s\n%f\n%f\n" % ("--",r, d)
plt.subplot(gs[0, col_idx])
plt.gca().set_frame_on(False)
plt.gca().axis('off')
plt.text(0, 0, text, ha='left', va='bottom', fontproperties=font,color=bid_colors[col_idx-1])
# fig holds the entire page
#todo: photo z plot if becomes available
plt.subplot(gs[0, 4:])
plt.gca().set_frame_on(False)
plt.gca().axis('off')
text = "Photo z plot not available."
plt.text(0, 0.5, text, ha='left', va='bottom', fontproperties=font)
plt.close()
return fig
def get_single_cutout(self, ra, dec, window, catalog_image,aperture=None,error=None,do_sky_subtract=True,detobj=None):
"""
Build a single cutout dictionary for this catalog image at (ra, dec).
:param ra: target right ascension (decimal degrees)
:param dec: target declination (decimal degrees)
:param window: cutout width (degrees; converted to arcsec internally)
:param catalog_image: entry from self.CatalogImages
:param aperture: aperture radius in arcsec for photometry (optional)
:return: dict with the cutout, hdu, path, filter, instrument, mag, aperture, and details
"""
d = {'cutout':None,
'hdu':None,
'path':None,
'filter':catalog_image['filter'],
'instrument':catalog_image['instrument'],
'mag':None,
'aperture':None,
'ap_center':None,
'mag_limit':None,
'details': None}
try:
wcs_manual = catalog_image['wcs_manual']
mag_func = catalog_image['mag_func']
except:
wcs_manual = self.WCS_Manual
mag_func = None
# 20240212 DD: older tiles had multiple filters per tile, new ones are single large image per filter
# try:
# wcs_idx = self.Filter_HDU_Image_Idx[catalog_image['filter']]
# except:
# wcs_idx = 0
wcs_idx = 1
try:
if catalog_image['image'] is None:
catalog_image['image'] = science_image.science_image(wcs_manual=wcs_manual,wcs_idx=wcs_idx,
image_location=op.join(catalog_image['path'],
catalog_image['name']))
catalog_image['image'].catalog_name = catalog_image['name']
catalog_image['image'].filter_name = catalog_image['filter']
sci = catalog_image['image']
if (sci.headers is None) or (len(sci.headers) == 0): #the catalog_image['image'] is no good? reload?
sci.load_image(wcs_manual=wcs_manual)
d['path'] = sci.image_location
d['hdu'] = sci.headers
# to here, window is in degrees so ...
window = 3600. * window
if not error:
error = window
cutout,pix_counts, mag, mag_radius,details = sci.get_cutout(ra, dec, error=error, window=window, aperture=aperture,
mag_func=mag_func,copy=True,return_details=True,detobj=detobj)
# don't need pix_counts or mag, etc here, so don't pass aperture or mag_func
if cutout is not None: # construct master cutout
d['cutout'] = cutout
details['catalog_name']=self.name
details['filter_name']=catalog_image['filter']
d['mag_limit']=self.get_mag_limit([catalog_image['name'],catalog_image['filter']],mag_radius*2.)
try:
if d['mag_limit']:
details['mag_limit']=d['mag_limit']
else:
details['mag_limit'] = None
except:
details['mag_limit'] = None
if (mag is not None) and (mag < 999):
if d['mag_limit'] and (d['mag_limit'] < mag < 100):
log.warning(f"Cutout mag {mag} greater than limit {d['mag_limit']}. Setting to limit.")
details['fail_mag_limit'] = True
details['raw_mag'] = mag
details['raw_mag_bright'] = details['mag_bright']
details['raw_mag_faint'] = details['mag_faint']
details['raw_mag_err'] = details['mag_err']
mag = d['mag_limit']
details['mag'] = mag
try:
details['mag_bright'] = min(mag,details['mag_bright'])
except:
details['mag_bright'] = mag
try:
details['mag_faint'] = max(mag,G.MAX_MAG_FAINT)
except:
details['mag_faint'] = G.MAX_MAG_FAINT
d['mag'] = mag
d['aperture'] = mag_radius
d['ap_center'] = (sci.last_x0_center, sci.last_y0_center)
d['details'] = details
except:
log.error("Error in get_single_cutout.", exc_info=True)
return d
def get_cutouts(self,ra,dec,window,aperture=None,filter=None,first=False,error=None,do_sky_subtract=True,detobj=None):
l = list()
tile, tracts, positions = self.find_target_tile(ra, dec)
if tile is None:
# problem
log.error("No appropriate tile found in HSC for RA,DEC = [%f,%f]" % (ra, dec))
return None
# try:
# #cat_filters = list(set([x['filter'].lower() for x in self.CatalogImages]))
# cat_filters = list(dict((x['filter'], {}) for x in self.CatalogImages).keys())
# except:
# cat_filters = None
if filter:
outer = filter
inner = [x.lower() for x in self.Filters]
else:
outer = [x.lower() for x in self.Filters]
inner = None
if aperture == -1:
try:
aperture = self.mean_FWHM * 0.5 + 0.5
except:
pass
wild_filters = iter([x.lower() for x in self.Filters])
if outer:
for f in outer:
try:
if f == '*':
f = next(wild_filters, None)
if f is None:
break
elif inner and (f not in inner):
# if filter list provided but the image is NOT in the filter list go to next one
continue
i = self.CatalogImages[
next(i for (i, d) in enumerate(self.CatalogImages)
if ((d['filter'] == f) and (d['tile'] == tile.replace("?", f.upper()))))]
#if ((d['filter'] == f) and (d['tile'] == tile)))]
if i is not None:
cutout = self.get_single_cutout(ra, dec, window, i, aperture,error,detobj=detobj)
if first:
if cutout['cutout'] is not None:
l.append(cutout)
break
else:
# if we are not escaping on the first hit, append ALL cutouts (even if no image was collected)
l.append(cutout)
except Exception as e:
if type(e) is StopIteration:
#just did not find any more catalog images to use that match the criteria
pass
else:
log.error("Exception! collecting image cutouts.", exc_info=True)
else:
for f in self.Filters:
try:
i = self.CatalogImages[
next(i for (i, d) in enumerate(self.CatalogImages)
if ((d['filter'] == f) and (d['tile'] == tile)))]
except:
i = None
if i is None:
continue
l.append(self.get_single_cutout(ra,dec,window,i,aperture,detobj=detobj))
return l
#END ALL ... this is just as temporary prior notes
"""
This is for a special, restricted use of HSC data in the NEP
################################################################
IMAGES:
################################################################
01/2021
Hawaii Two-0 NEP subset of imaging
image file names: 'H20_NEP_18211_B15_e.fits'
: 'H20_NEP_18211_B16_e.fits'
Included here are two sets of images, where each image is 3800*3800 pixels, with a pixel scale of 0.168"/pix. In the framework of the Farmer (Weaver, Zalesky et al. in prep)
photometry pipeline, each image is called a brick. A brick is a subset of a larger mosaic created to enable efficient multiprocessing. Each brick overlaps with its neighbors by
N pixels, where in this case N=100. This overlap region is called the brick-buffer. Sources which extend into the brick-buffer shared by bricks A & B, but have centroids
within brick A are considered to belong to brick A -- thus sources are not double counted nor are they improperly segmented. The total effective area across which sources can
be detected, not accounting for masking (e.g., bright star masks), is then 3600*3600 pixels = 10.08*10.08 arcminutes.
contact: Lukas Zalesky - zalesky@hawaii.edu
########################################################
IMAGE INFORMATION
########################################################
Images are included for the following bands: ['hsc_g', 'hsc_r', 'hsc_i', 'hsc_z', 'hsc_y']
Each band has the following information contained in this multi-extension .fits file:
'###_IMAGE' # reduced science image. unit = 'uJy'
'###_WEIGHT # weight image obtained during reduction. unit = '1/variance'
'###_MASK' # bright star masks, obtained in a manner similar to that of the HSC-SSP survey. Masks do not cover 100% of all bright stars. Same for all bands.
#########################################################
#CATALOGS
#########################################################
01/2021
Hawaii Two-0 NEP subset catalog
catalog file name: 'H20_NEP_subset_catalog.fits'
This catalog contains photometry and output from the EAZY-py SED fitting code for 18164 detected sources. These sources were detected
in the two sets of images attached along with these files. Each image is square with approximately 10 arcminutes on one side. A brief description of
the source extraction and photometry method follows.
Source extraction and photometry is performed using the new software called "The Farmer" (Weaver, Zalesky et al. in prep). This software utilizes the profile-fitting
tools provided by Tractor (Lang et al. 2014, 2016). Sources are detected with pythonic source extractor (SEP) from an r+i+z CHIMEAN stack created with SWARP. Each
source is then modeled as either a point source or an extended source, with nearby sources modeled simultaneously.
Appropriate models are determined from the r, i, and z bands collectively. Flux is measured by forcing the final model onto each source, convolving the model with
the appropriate PSF, and optimizing the flux which is left as a free parameter.
*** DISCLAIMER: all detected sources are reported, but not all detected sources have valid photometry. Some sources which are detected cannot be modeled, as they are near
bright stars which were not masked out, artifacts, or they may be artifacts themselves, such as those created by scattered light. To identify sources with valid photometry,
use the flag below called 'VALID_SOURCE_MODELING'. Sources with 'VALID_SOURCE_MODELING' == False will not have FLUX, FLUXERR, MAG, MAGERR, nor photometric redshifts.
However, it may be useful to determine whether or not a source was detected at all. For this reason, the detection parameters (e.g., position at source detection)
have been included even for sources that could not be modeled.
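Following the disclaimer above, a minimal illustration of filtering to sources with usable photometry. The column names match the catalog; the rows here are fake data for demonstration.

```python
# Keep only rows flagged as having valid photometry, per the disclaimer:
# sources with VALID_SOURCE_MODELING == False lack FLUX/MAG/photo-z values.
catalog = [
    {"id": 1, "VALID_SOURCE_MODELING": True,  "FLUX_hsc_g": 2.5},
    {"id": 2, "VALID_SOURCE_MODELING": False, "FLUX_hsc_g": None},
    {"id": 3, "VALID_SOURCE_MODELING": True,  "FLUX_hsc_g": 0.8},
]

good_ids = [row["id"] for row in catalog if row["VALID_SOURCE_MODELING"]]
# -> [1, 3]
```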
contact: Lukas Zalesky - zalesky@hawaii.edu
########################################################
History
########################################################
01/2021 -- catalog created and shared with Steve Finkelstein
########################################################
DETECTION
########################################################
Except for the identifier ('id') which was added at the end of catalog creation, each parameter value provided here is measured directly with SEP.
Please keep the 'id' flag intact, as it uniquely identifies the source within the entire region of the NEP completed to full depth by H20.
Note that these parameters are measured on the CHIMEAN detection image and thus do not correspond to physical units.
Nonetheless, the ratio of peak flux / convolved peak flux recorded here is still useful for separating stars from galaxies.
'id' # identifier, arbitrary
'cpeak' # convolved peak flux; unit = 'signal/noise' i.e., no unit, since this is measured from the CHIMEAN image
'peak' # peak flux; unit = 'signal/noise', i.e., no unit, since this is measured from the CHIMEAN image
'RA_DETECTION' # right ascension (J2000) of detected source; unit = 'deg'
'DEC_DETECTION' # declination (J2000) of detected source; unit = 'deg'
########################################################
SOURCE MODELING
########################################################
'N_BLOB' # number of sources modeled simultaneously with this source
'VALID_SOURCE_MODELING' # flag, True if model optimization succeeded, False if model optimization failed
'SOLMODEL_MODELING' # flag, indicates final model type for successful models (PointSource, SimpleGalaxy, ExpGalaxy, DevGalaxy, FixedCompositeGalaxy)
'CHISQ_MODELING_hsc_r' # chi-squared statistic during modeling as measured in the hsc-r band
'CHISQ_MODELING_hsc_i' # chi-squared statistic during modeling as measured in the hsc-i band
'CHISQ_MODELING_hsc_z' # chi-squared statistic during modeling as measured in the hsc-z band
'RA_MODELING' # right ascension (J2000) of source determined during model optimization; unit = 'deg'
'DEC_MODELING' # declination (J2000) of source determined during model optimization; unit = 'deg'
########################################################
PHOTOMETRY
########################################################
Photometry is included for the following bands: ['hsc_g', 'hsc_r', 'hsc_i', 'hsc_z', 'hsc_y', 'irac_ch1', 'irac_ch2']
Total model magnitudes and fluxes are reported -- no aperture corrections are needed!
Catalog entries follow the convention below for each available band:
'MAG_###' # AB magnitude; unit = 'mag'
'MAGERR_###' # AB magnitude error; unit = 'mag'
'FLUX_###' # flux; unit = 'uJy'
'FLUXERR_###' # flux error; unit = 'uJy'
'CHISQ_###' # chi-squared statistic in given band during forced photometry
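Since the fluxes are tabulated in uJy, AB magnitudes follow directly from the standard zero point (3631 Jy corresponds to mag 0), i.e. m_AB = 23.9 - 2.5 log10(F/uJy). A sketch of the conversion, assuming the MAG columns use exactly this convention:

```python
import math

def uJy_to_abmag(flux_uJy):
    # AB zero point: 3631 Jy = 3.631e9 uJy, and 2.5*log10(3.631e9) = 23.9
    return 23.9 - 2.5 * math.log10(flux_uJy)

def abmag_err(flux_uJy, fluxerr_uJy):
    # first-order error propagation: dm = (2.5 / ln 10) * dF / F
    return 2.5 / math.log(10) * fluxerr_uJy / flux_uJy

m = uJy_to_abmag(1.0)   # -> 23.9
```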
########################################################
SED FITTING
########################################################
Photometric redshifts are calculated using EAZY-py, a pythonic implementation of EAZY (Brammer et al. 2008).
Github page: https://github.com/gbrammer/eazy-py
The templates used are the same as those used in the new COSMOS2020 catalog. These templates are not available online, but may be available upon request (contact Gabe Brammer).
'nusefilt' # number of filters used for photo-z
'lc_min' # minimum effective wavelength of valid filters, Angstrom
'lc_max' # maximum effective wavelength of valid filters, Angstrom
'z_raw_chi2' # redshift where chi2 is minimized
'z_phot_chi2' # min chi2
'z025' # 2.5 percentile of pdf(z) (2-sigma)
'z160' # 16 percentile of pdf(z) (1-sigma)
'z500' # 50 percentile of pdf(z)
'z840' # 84 percentile of pdf(z) (1-sigma)
'z975' # 97.5 percentile of pdf(z) (2-sigma)
'PSTAR_chi2' # chi-squared of best stellar template, using the PHOENIX stellar library
'LSTAR_chi2' # chi-squared of best stellar template, using the LePhare stellar library
To compare the chi-squared of the fit with galaxy templates with that of stellar templates, one should use chi2/(nusefilt-1), if the goal is to compare the reduced chi-squared.
Also, note that z500 is typically more reliable than z_raw_chi2.
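The star/galaxy comparison suggested above can be sketched as follows; the function names are illustrative, not part of the catalog.

```python
# Divide each chi-squared by (nusefilt - 1) before comparing the best
# galaxy fit (z_phot_chi2) against the best stellar fit (e.g. PSTAR_chi2),
# as recommended above.
def reduced_chi2(chisq, nusefilt):
    return chisq / max(nusefilt - 1, 1)

def likely_star(z_phot_chi2, pstar_chi2, nusefilt):
    """True if the best stellar template beats the best galaxy template."""
    return reduced_chi2(pstar_chi2, nusefilt) < reduced_chi2(z_phot_chi2, nusefilt)
```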
########################################################
MISCELLANEOUS EXTRA COLUMNS
########################################################
'EBV' # E(B-V) values from Schlegel, Finkbeiner & Davis (1998) dust map, with 2011 recalibration; unit = 'mag'
'tract_id' # The tract id corresponds to a region of the sky pre-defined by the HSC-SSP team.
"""
#NOTE: there is a problem with the photometric catalog the H20 team sent us. It has two
# pairs of RA/DEC columns: RA_DETECTION/DEC_DETECTION and RA_MODELING/DEC_MODELING.
# Something went wrong on their end with the DETECTION RA and DEC, so use
# RA_MODELING/DEC_MODELING instead to get the proper RA and DEC.
# Brick 15: 2" at 3 sigma (5 sigma) [will use 5-sigma]
# g - 27.7 (27.1)
# r - 27.3 (26.7)
# i - 26.9 (26.4)
# z - 26.6 (26.0)
# y - 25.5 (24.9)
# ch1 - 25.6 (25.0) #not in the FITS (IRAC channel1 3.6 micron)
# ch2 - 25.6 (25.0) #not in the FITS (IRAC channel2 4.5 micron)
#
# Brick 16:
# g - 27.7 (27.2)
# r - 27.3 (26.7)
# i - 26.9 (26.4)
# z - 26.5 (26.0)
# y - 25.4 (24.9)
# ch1 - 24.9 (24.3) #not in the FITS (IRAC channel1 3.6 micron)
# ch2 - 25.1 (24.6) #not in the FITS (IRAC channel2 4.5 micron)
#
# MAG_LIMIT_DICT = {'default': {'g': 27.2, 'r': 26.7, 'i': 26.4, 'z': 26.0, 'y': 24.9},
# 'H20_NEP_18211_B15_e.fits': {'g': 27.1, 'r': 26.7, 'i': 26.9, 'z': 26.0, 'y': 24.9},
# 'H20_NEP_18211_B16_e.fits': {'g': 27.2, 'r': 26.7, 'i': 26.4, 'z': 26.0, 'y': 24.9}}
#20240214 old catalog format
##----------------------------
## Catalog Format
##----------------------------
# 1 unique IDs
# 2 position in X [pixel]
# 3 position in Y [pixel]
# 4 position in RA [degree]
# 5 position in Dec [degree]
# 6 flux within 3.000000-pixel aperture
# 7 1-sigma flux uncertainty
# 8 General Failure Flag
# 9 magnitude within 3.000000-pixel aperture
# 10 1-sigma magnitude uncertainty
# 11 flux within 4.500000-pixel aperture
# 12 1-sigma flux uncertainty
# 13 General Failure Flag
# 14 magnitude within 4.500000-pixel aperture
# 15 1-sigma magnitude uncertainty
# 16 flux within 6.000000-pixel aperture
# 17 1-sigma flux uncertainty
# 18 General Failure Flag
# 19 magnitude within 6.000000-pixel aperture
# 20 1-sigma magnitude uncertainty
# 21 flux within 9.000000-pixel aperture
# 22 1-sigma flux uncertainty
# 23 General Failure Flag
# 24 magnitude within 9.000000-pixel aperture
# 25 1-sigma magnitude uncertainty
# 26 flux within 12.000000-pixel aperture
# 27 1-sigma flux uncertainty
# 28 General Failure Flag
# 29 magnitude within 12.000000-pixel aperture
# 30 1-sigma magnitude uncertainty
# 31 flux within 17.000000-pixel aperture
# 32 1-sigma flux uncertainty
# 33 General Failure Flag
# 34 magnitude within 17.000000-pixel aperture
# 35 1-sigma magnitude uncertainty
# 36 flux within 25.000000-pixel aperture
# 37 1-sigma flux uncertainty
# 38 General Failure Flag
# 39 magnitude within 25.000000-pixel aperture
# 40 1-sigma magnitude uncertainty
# 41 flux within 35.000000-pixel aperture
# 42 1-sigma flux uncertainty
# 43 General Failure Flag
# 44 magnitude within 35.000000-pixel aperture
# 45 1-sigma magnitude uncertainty
# 46 flux within 50.000000-pixel aperture
# 47 1-sigma flux uncertainty
# 48 General Failure Flag
# 49 magnitude within 50.000000-pixel aperture
# 50 1-sigma magnitude uncertainty
# 51 flux within 70.000000-pixel aperture
# 52 1-sigma flux uncertainty
# 53 General Failure Flag
# 54 magnitude within 70.000000-pixel aperture
# 55 1-sigma magnitude uncertainty
# 56 flux derived from linear least-squares fit of PSF model
# 57 1-sigma flux uncertainty
# 58 General Failure Flag
# 59 magnitude derived from linear least-squares fit of PSF model
# 60 1-sigma magnitude uncertainty
# 61 convolved Kron flux: seeing 3.500000
# 62 1-sigma flux uncertainty
# 63 convolved Kron flux failed: seeing 3.500000
# 64 convolved magnitude: seeing 3.500000
# 65 1-sigma magnitude uncertainty
# 66 convolved Kron flux: seeing 5.000000
# 67 1-sigma flux uncertainty
# 68 convolved Kron flux failed: seeing 5.000000
# 69 convolved magnitude: seeing 5.00000
# 70 1-sigma magnitude uncertainty
# 71 convolved Kron flux: seeing 6.500000
# 72 1-sigma flux uncertainty
# 73 convolved Kron flux failed: seeing 6.500000
# 74 convolved magnitude: seeing 6.500000
# 75 1-sigma magnitude uncertainty
# 76 convolved Kron flux: seeing 8.000000
# 77 1-sigma flux uncertainty
# 78 convolved Kron flux failed: seeing 8.000000
# 79 convolved magnitude: seeing 8.000000
# 80 1-sigma magnitude uncertainty
# 81 flux from the final cmodel fit
# 82 flux uncertainty from the final cmodel fit
# 83 flag set if the final cmodel fit (or any previous fit) failed
# 84 magnitude from the final cmodel fit
# 85 magnitude uncertainty from the final cmodel fit
# 86 Number of children this object has (defaults to 0)
# 87 Source is outside usable exposure region (masked EDGE or NO_DATA)
# 88 Interpolated pixel in the Source center
# 89 Saturated pixel in the Source center
# 90 Cosmic ray in the Source center
# 91 Bad pixel in the Source footprint
# 92 Source center is close to BRIGHT_OBJECT pixels
# 93 Source footprint includes BRIGHT_OBJECT pixels
# 94 General Failure Flag
# 95 true if source is in the inner region of a coadd tract
# 96 true if source is in the inner region of a coadd patch
# 97 Number of images contributing at center, not including any clipping
# 98 original seeing (Gaussian sigma) at position [pixel]
#
# BidCols = [
# 'id',
# 'cpeak',
# 'peak',
# 'RA_DETECTION', #DON'T USE, ... use RA_MODELING instead
# 'Dec_DETECTION', #DON'T USE, ... use DEC_MODELING instead
#
# 'N_BLOB', # number of sources modeled simultaneously with this source
# 'VALID_SOURCE_MODELING' , # flag, True if model optimization succeeded, False if model optimization failed
# 'SOLMODEL_MODELING' , # flag, indicates final model type for successful models (PointSource, SimpleGalaxy, ExpGalaxy, DevGalaxy, FixedCompositeGalaxy)
# 'CHISQ_MODELING_hsc_r' , # chi-squared statistic during modeling as measured in the hsc-r band
# 'CHISQ_MODELING_hsc_i' , # chi-squared statistic during modeling as measured in the hsc-i band
# 'CHISQ_MODELING_hsc_z' , # chi-squared statistic during modeling as measured in the hsc-z band
#
#
# 'RA_MODELING' , # right ascension (J2000) of source determined during model optimization; unit = 'deg'
# 'DEC_MODELING' , # declination (J2000) of source determined during model optimization; unit = 'deg'
#
# 'MAG_hsc_g' , # AB magnitude; unit = 'mag'
# 'MAGERR_hsc_g' , # AB magnitude error; unit = 'mag'
# 'FLUX_hsc_g' , # flux; unit = 'uJy'
# 'FLUXERR_hsc_g' , # flux error; unit = 'uJy'
# 'CHISQ_hsc_g',
#
# 'MAG_hsc_r' , # AB magnitude; unit = 'mag'
# 'MAGERR_hsc_r' , # AB magnitude error; unit = 'mag'
# 'FLUX_hsc_r' , # flux; unit = 'uJy'
# 'FLUXERR_hsc_r' , # flux error; unit = 'uJy'
# 'CHISQ_hsc_r',
#
# 'MAG_hsc_i' , # AB magnitude; unit = 'mag'
# 'MAGERR_hsc_i' , # AB magnitude error; unit = 'mag'
# 'FLUX_hsc_i' , # flux; unit = 'uJy'
# 'FLUXERR_hsc_i' , # flux error; unit = 'uJy'
# 'CHISQ_hsc_i',
#
# 'MAG_hsc_z' , # AB magnitude; unit = 'mag'
# 'MAGERR_hsc_z' , # AB magnitude error; unit = 'mag'
# 'FLUX_hsc_z' , # flux; unit = 'uJy'
# 'FLUXERR_hsc_z' , # flux error; unit = 'uJy'
# 'CHISQ_hsc_z',
#
# 'MAG_hsc_y' , # AB magnitude; unit = 'mag'
# 'MAGERR_hsc_y' , # AB magnitude error; unit = 'mag'
# 'FLUX_hsc_y' , # flux; unit = 'uJy'
# 'FLUXERR_hsc_y' , # flux error; unit = 'uJy'
# 'CHISQ_hsc_y',
#
# 'MAG_irac_ch1' , # AB magnitude; unit = 'mag'
# 'MAGERR_irac_ch1' , # AB magnitude error; unit = 'mag'
# 'FLUX_irac_ch1' , # flux; unit = 'uJy'
# 'FLUXERR_irac_ch1' , # flux error; unit = 'uJy'
# 'CHISQ_irac_ch1',
#
#
# 'MAG_irac_ch2' , # AB magnitude; unit = 'mag'
# 'MAGERR_irac_ch2' , # AB magnitude error; unit = 'mag'
# 'FLUX_irac_ch2' , # flux; unit = 'uJy'
# 'FLUXERR_irac_ch2' , # flux error; unit = 'uJy'
# 'CHISQ_irac_ch2',
#
# 'nusefilt' , # number of filters used for photo-z
# 'lc_min' , # minimum effective wavelength of valid filters, Angstrom
# 'lc_max' , # maximum effective wavelength of valid filters, Angstrom
# 'z_raw_chi2' , # redshift where chi2 is minimized
# 'z_phot_chi2' , # min chi2
# 'z025', # 2.5 percentile of pdf(z) (2-sigma)
# 'z160', # 16 percentile of pdf(z) (1-sigma)
# 'z500', # 50 percentile of pdf(z)
# 'z840', # 84 percentile of pdf(z) (1-sigma)
# 'z975', # 97.5 percentile of pdf(z) (2-sigma)
#
# 'PSTAR_chi2', # chi-squared of best stellar template, using the PHOENIX stellar library
# 'LSTAR_chi2' , # chi-squared of best stellar template, using the LePhare stellar library
# 'EBV' , # E(B-V) values from Schlegel, Finkbeiner & Davis (1998) dust map, with 2011 recalibration; unit = 'mag'
# 'tract_id', # The tract id corresponds to a region of the sky pre-defined by the HSC-SSP team.
# ]
|
HETDEXREPO_NAMEelixerPATH_START.@elixer_extracted@elixer-main@elixer@cat_hsc_nep.py@.PATH_END.py
|
{
"filename": "s3arrow.py",
"repo_name": "vaexio/vaex",
"repo_path": "vaex_extracted/vaex-master/packages/vaex-core/vaex/file/s3arrow.py",
"type": "Python"
}
|
import os
import io
from .s3 import patch_profile
import pyarrow as pa
import pyarrow.fs
from . import split_options, FileProxy, split_scheme
from .cache import FileSystemHandlerCached
from ..cache import fingerprint
region_cache = {}
fs_arrow_cache = {}
def glob(path, fs_options={}):
from .s3 import glob
return glob(path, fs_options)
def parse(path, fs_options, for_arrow=False):
# Remove this line for testing purposes to fake not having s3 support
# raise pyarrow.lib.ArrowNotImplementedError('FOR TESTING')
path, fs_options = split_options(path, fs_options)
path = path.replace('arrow+s3://', 's3://')
fullpath = path
scheme, path = split_scheme(path)
assert scheme == 's3'
# anon is for backwards compatibility
fs_options['anonymous'] = (fs_options.pop('anon', None) in [True, 'true', 'True', '1']) or (fs_options.pop('anonymous', None) in [True, 'true', 'True', '1'])
fs_options = patch_profile(fs_options)
use_cache = fs_options.pop('cache', 'true') in [True, 'true', 'True', '1']
bucket = path.split('/')[0]
if 'region' not in fs_options:
# cache region
if bucket not in region_cache:
# we use this to get the default region
file_system, _ = pa.fs.FileSystem.from_uri(fullpath)
region = file_system.region
region_cache[bucket] = region
else:
region = region_cache[bucket]
fs_options['region'] = region
# bucket and options make up a unique key
key = fingerprint(bucket, fs_options)
if key not in fs_arrow_cache:
fs = pa.fs.S3FileSystem(**fs_options)
fs_arrow_cache[key] = fs
else:
fs = fs_arrow_cache[key]
if use_cache:
fs = FileSystemHandlerCached(fs, scheme='s3', for_arrow=for_arrow)
if for_arrow:
fs = pyarrow.fs.PyFileSystem(fs)
return fs, path
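The bucket/options memoization in `parse()` above can be illustrated generically. Here `make_fs` is a hypothetical stand-in for the `pa.fs.S3FileSystem` constructor, and the sorted-items tuple is a simplified analogue of the `fingerprint()` key.

```python
# Generic version of the caching pattern used in parse(): constructing a
# filesystem handle is expensive, so memoize it on (bucket, options).
_fs_cache = {}

def cached_fs(bucket, fs_options, make_fs):
    # Sort the options so equal dicts always hash to the same key,
    # regardless of insertion order (simplified fingerprint).
    key = (bucket, tuple(sorted(fs_options.items())))
    if key not in _fs_cache:
        _fs_cache[key] = make_fs(**fs_options)
    return _fs_cache[key]
```

Repeated calls with the same bucket and options return the identical cached object, so the (potentially slow) constructor runs only once per key.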
|
vaexioREPO_NAMEvaexPATH_START.@vaex_extracted@vaex-master@packages@vaex-core@vaex@file@s3arrow.py@.PATH_END.py
|
{
"filename": "deconvolve_vis.py",
"repo_name": "SKA-INAF/caesar",
"repo_path": "caesar_extracted/caesar-master/scripts/deconvolve_vis.py",
"type": "Python"
}
|
#!/usr/bin/env python
##################################################
### MODULE IMPORT
##################################################
## STANDARD MODULES
import os
import sys
import subprocess
import string
import time
import signal
from threading import Thread
import datetime
import numpy as np
import random
import math
import errno
## ASTRO MODULES
from scipy import constants
from astropy.io import fits
## COMMAND-LINE ARG MODULES
import getopt
import argparse
import collections
def str2bool(v):
if v.lower() in ('yes', 'true', 't', 'y', '1'):
return True
elif v.lower() in ('no', 'false', 'f', 'n', '0'):
return False
else:
raise argparse.ArgumentTypeError('Boolean value expected.')
def get_args():
"""This function parses and return arguments passed in"""
parser = argparse.ArgumentParser(description="Parse args.")
# - MANDATORY OPTIONS
parser.add_argument('-vis', '--vis', dest='vis', required=True, type=str,action='store',help='Input visibility CASA table name to be deconvolved')
# OPTIONAL OPTIONS
parser.add_argument('-outvis', '--outvis', dest='outvis', required=False, type=str, default='vis_deconv.ms',action='store',help='Output visibility CASA table name stored after deconvolution')
parser.add_argument('-bmaj', '--bmaj', dest='bmaj', required=True, type=float, default=10, action='store',help='Beam bmaj in arcsec (default=10)')
parser.add_argument('-bmin', '--bmin', dest='bmin', required=True, type=float, default=5, action='store',help='Beam bmin in arcsec (default=5)')
parser.add_argument('-uvdist_min', '--uvdist_min', dest='uvdist_min', required=False, type=float, default=0,action='store',help='uvdist min value used in flagging (default=0)')
parser.add_argument('-uvdist_max', '--uvdist_max', dest='uvdist_max', required=False, type=float, default=1000000,action='store',help='uvdist max value used in flagging (default=1000000)')
parser.add_argument('-flagdata','--flagdata', dest='flagdata', action='store_true')
parser.set_defaults(flagdata=False)
parser.add_argument('-c', dest='scriptname', required=False, type=str, default='',action='store',help='Script name')
args = parser.parse_args()
return args
def sigma2fwhm():
f= 2.*np.sqrt(2*np.log(2.))
return f
def gauss(x,sigma,A=1,mu=0):
return A*np.exp(-(x-mu)**2/(2.*sigma**2))
def get_deconv_gaus_sigma(bmaj,bmin,freq):
""" Get gaussian sigma of deconvolution gaussian (in fourier plane)"""
""" bmaj/bmin in arcsec """
""" freq in GHz """
c= constants.c
f= sigma2fwhm()
sigmaX= 91000./bmaj*c/freq*2/f
sigmaY= 91000./bmin*c/freq*2/f
return (sigmaX,sigmaY)
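The `sigma2fwhm()` and `gauss()` helpers above can be sanity-checked numerically; they are restated here in plain `math` so the snippet is self-contained.

```python
import math

# FWHM = 2*sqrt(2*ln 2) * sigma, so a unit-sigma Gaussian falls to half
# its peak value at x = FWHM/2.
fwhm_factor = 2.0 * math.sqrt(2.0 * math.log(2.0))      # ~2.3548

def gauss(x, sigma, A=1.0, mu=0.0):
    return A * math.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

half = gauss(fwhm_factor / 2.0, sigma=1.0)              # -> 0.5
```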
def deconvolve(vis,visout,bmaj,bmin):
""" Deconvolve visibility """
## Create a new visibility set and work with that
print('INFO: Copying input visibility set in %s for later modification...' % str(visout))
concat(vis=vis,concatvis=visout)
###################################
## Open SPECTRAL_WINDOW table and compute sigmaX/sigmaY in Fourier plane for all frequency channels
###################################
print('INFO: Opening SPECTRAL_WINDOW table and computing sigmaX/sigmaY in Fourier plane for all frequency channels present...')
tb.open(visout + '/SPECTRAL_WINDOW')
freq_channels= tb.getcol('CHAN_FREQ')
sigmaU_list= []
sigmaV_list= []
for freq in freq_channels:
print('INFO: Computing gaus sigma for frequency %s ...' % str(freq) )
(sigmaU,sigmaV)= get_deconv_gaus_sigma(bmaj,bmin,freq)
sigmaU_list.append(sigmaU)
sigmaV_list.append(sigmaV)
print('INFO: sigmaU')
print(sigmaU_list)
print('INFO: sigmaV')
print(sigmaV_list)
nchannels= len(freq_channels)
print('INFO: %s channels present' % str(nchannels) )
if nchannels <= 0:
print('ERROR: Empty number of channels (check table reading or data integrity!)')
return -1
tb.close()
#############################
## Open vis MAIN table
#############################
print ('Opening vis file %s' % str(vis))
tb.open(visout,nomodify=False)
## Compute uvdistance
uvdist_list= np.sqrt(tb.getcol('UVW')[0]**2+tb.getcol('UVW')[1]**2+tb.getcol('UVW')[2]**2)
## Compute corrected u & v
data= tb.getcol('DATA')
data_corr= data.copy()
for freq_index in range(data.shape[1]):
for uv_index in range(data.shape[2]):
ucorr= gauss(uvdist_list[uv_index],sigmaU_list[freq_index])
vcorr= gauss(uvdist_list[uv_index],sigmaV_list[freq_index])
data_corr[0,freq_index,uv_index]/= ucorr
data_corr[1,freq_index,uv_index]/= vcorr
#data_corr[0,freq_index,uv_index]= data[0,freq_index,uv_index]/ucorr[0]
#data_corr[1,freq_index,uv_index]= data[1,freq_index,uv_index]/vcorr[0]
## Write corrected u & v to table
tb.putcol('CORRECTED_DATA',data_corr)
## Close table
tb.close()
return 0
def flag(vis,uvdist_min,uvdist_max):
""" Flag visibility data by uvdist range """
uvrange= str(uvdist_min) + '~' + str(uvdist_max)
flagdata(vis=vis,uvrange=uvrange)
##############
## MAIN ##
##############
def main():
"""Main function"""
#===========================
#== Get script args
#===========================
print('INFO: Get script args')
try:
args= get_args()
except Exception as ex:
print("Failed to get and parse options (err=%s)",str(ex))
return 1
vis= args.vis
outvis= args.outvis
Bmaj= args.bmaj
Bmin= args.bmin
flagdata= args.flagdata
uvdist_min= args.uvdist_min
uvdist_max= args.uvdist_max
print("*** ARGS ***")
print("vis: %s" % vis)
print("outvis: %s" % outvis)
print("Beam (Bmaj/Bmin): (%s,%s)" % (Bmaj, Bmin))
print("flag data? %s uvdist min/max=%s/%s" % (flagdata,uvdist_min,uvdist_max) )
print("************")
#===========================
#== Deconvolve vis
#===========================
print('INFO: Running visibility deconvolution...')
deconvolve(vis=vis,visout=outvis,bmaj=Bmaj,bmin=Bmin)
#===========================
#== Flagging data
#===========================
if flagdata:
print('INFO: Flagging deconvolved visibilities by uvdist range...')
flag(outvis,uvdist_min,uvdist_max)
###################
## MAIN EXEC ##
###################
if __name__ == "__main__":
#main()
sys.exit(main())
|
SKA-INAFREPO_NAMEcaesarPATH_START.@caesar_extracted@caesar-master@scripts@deconvolve_vis.py@.PATH_END.py
|
{
"filename": "MetroTool.py",
"repo_name": "lsst-ts/ts_phosim",
"repo_path": "ts_phosim_extracted/ts_phosim-main/python/lsst/ts/phosim/utils/MetroTool.py",
"type": "Python"
}
|
# This file is part of ts_phosim.
#
# Developed for the LSST Telescope and Site Systems.
# This product includes software developed by the LSST Project
# (https://www.lsst.org).
# See the COPYRIGHT file at the top-level directory of this distribution
# for details of code ownership.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
import numpy as np
import warnings
import scipy.special as sp
from lsst.ts.wep.utils import padArray, extractArray
def calc_pssn(
array,
wlum,
aType="opd",
D=8.36,
r0inmRef=0.1382,
zen=0,
pmask=0,
imagedelta=0,
fno=1.2335,
debugLevel=0,
):
"""Calculate the normalized point source sensitivity (PSSN).
Parameters
----------
array : numpy.ndarry
Array that contains either opd or pdf. opd need to be in microns.
wlum : float
Wavelength in microns.
aType : str, optional
What is used to calculate pssn - either opd or psf. (the default is
"opd".)
D : float, optional
Side length of OPD image in meter. (the default is 8.36.)
r0inmRef : float, optional
Fiducial atmosphere r0 @ 500nm in meter; Konstantinos uses 0.20. (the
default is 0.1382.)
zen : float, optional
Telescope zenith angle in degree. (the default is 0.)
pmask : int or numpy.ndarray[int], optional
Pupil mask. when opd is used, it can be generated using opd image, we
can put 0 or -1 or whatever here. When psf is used, this needs to be
provided separately with same size as array. (the default is 0.)
imagedelta : float, optional
Only needed when psf is used. use 0 for opd. (the default is 0.)
fno : float, optional
Only needed when psf is used. use 0 for opd. (the default is 1.2335.)
debugLevel : int, optional
Debug level. The higher value gives more information. (the default
is 0.)
Returns
-------
float
PSSN value.
"""
# Only needed for psf: pmask, imagedelta, fno
# THE INTERNAL RESOLUTION THAT FFTS OPERATE ON IS VERY IMPORTANT
# TO THE ACCURACY OF PSSN.
# WHEN TYPE='OPD', NRESO=SIZE(ARRAY,1)
# WHEN TYPE='PSF', NRESO=SIZE(PMASK,1)
# for the psf option, we can not first convert psf back to opd then
# start over,
# because psf=|exp(-2*OPD)|^2. information has been lost in the | |^2.
# we need to go forward with psf->mtf,
# and take care of the coordinates properly.
# PSSN = (n_eff)_atm / (n_eff)_atm+sys
# (n_eff)_atm = 1 / (int (PSF^2)_atm dOmega)
# (n_eff)_atm+sys = 1 / (int (PSF^2)_atm+sys dOmega)
# Check the type is "OPD" or "PSF"
if aType not in ("opd", "psf"):
raise ValueError("The type of %s is not allowed." % aType)
# Squeeze the array if necessary
if array.ndim == 3:
array2D = array[0, :, :].squeeze()
# Get the k value (magnification ratio used in creating MTF)
if aType == "opd":
try:
m = max(array2D.shape)
except NameError:
m = max(array.shape)
k = 1
elif aType == "psf":
m = max(pmask.shape)
# Pupil needs to be padded k times larger to get imagedelta
# Do not know where to find this formula. Check with Bo.
k = fno * wlum / imagedelta
# Get the modulation transfer function with the von Karman power spectrum
mtfa = createMTFatm(D, m, k, wlum, zen, r0inmRef, model="vonK")
# Get the pupil function
if aType == "opd":
try:
iad = array2D != 0
except NameError:
iad = array != 0
elif aType == "psf":
# Add even number
mk = int(m + np.rint((m * (k - 1) + 1e-5) / 2) * 2)
# padArray(pmask, m)
iad = pmask
# OPD --> PSF --> OTF --> OTF' (OTF + atmosphere) --> PSF'
# Check with Bo that we could get OTF' or PSF' from PhoSim or not directly.
# The above question might not be a concern in the simulation.
# However, for the real image, it looks like this is hard to do
# What should be the standard way to judge the PSSN in the real telescope?
# OPD is zero for perfect telescope
opdt = np.zeros((m, m))
# OPD to PSF
psft = opd2psf(
opdt,
iad,
wlum,
imagedelta=imagedelta,
sensorFactor=1,
fno=fno,
debugLevel=debugLevel,
)
# PSF to optical transfer function (OTF)
otft = psf2otf(psft)
# Add atmosphere to perfect telescope
otfa = otft * mtfa
# OTF to PSF
psfa = otf2psf(otfa)
# Atmospheric PSS (point spread sensitivity) = 1/neff_atm
pssa = np.sum(psfa**2)
# Calculate PSF with error (atmosphere + system)
if aType == "opd":
if array.ndim == 2:
ninst = 1
else:
ninst = array.shape[0]
for ii in range(ninst):
if array.ndim == 2:
array2D = array
else:
array2D = array[ii, :, :].squeeze()
psfei = opd2psf(array2D, iad, wlum, debugLevel=debugLevel)
if ii == 0:
psfe = psfei
else:
psfe += psfei
# Do the normalization based on the number of instrument
psfe = psfe / ninst
elif aType == "psf":
if array.shape[0] == mk:
psfe = array
elif array.shape[0] > mk:
psfe = extractArray(array, mk)
else:
print(
"calc_pssn: image provided too small, %d < %d x %6.4f."
% (array.shape[0], m, k)
)
print("IQ is over-estimated !!!")
psfe = padArray(array, mk)
# Do the normalization of PSF
psfe = psfe / np.sum(psfe) * np.sum(psft)
# OTF with system error
otfe = psf2otf(psfe)
# Add the atmosphere error
# OTF with system and atmosphere errors
otftot = otfe * mtfa
# PSF with system and atmosphere errors
psftot = otf2psf(otftot)
# atmospheric + error PSS
pss = np.sum(psftot**2)
# normalized PSS
pssn = pss / pssa
if debugLevel >= 3:
print("pssn = %10.8e/%10.8e = %6.4f." % (pss, pssa, pssn))
return pssn
def createMTFatm(D, m, k, wlum, zen, r0inmRef, model="vonK"):
"""Generate the modulation transfer function (MTF) for atmosphere.
Parameters
----------
D : float
Side length of optical path difference (OPD) image in m.
m : int
Dimension of OPD image in pixel, i.e. the number of pixels used
to cover the length of D.
k : int
Use a k-times bigger array to pad the MTF. Use k=1 for the same size.
wlum : float
Wavelength in um.
zen : float
Telescope zenith angle in degree.
r0inmRef : float
Reference r0 in meter at the wavelength of 0.5 um.
model : str, optional
Kolmogorov power spectrum ("Kolm") or von Karman power spectrum
("vonK"). (the default is "vonK".)
Returns
-------
numpy.ndarray
MTF at specific atmosphere model.
"""
# Get the atmosphere phase structure function
sfa = atmSF(D, m, wlum, zen, r0inmRef, model)
# Get the modular transfer function for atmosphere
mtfa = np.exp(-0.5 * sfa)
# Add even number
N = int(m + np.rint((m * (k - 1) + 1e-5) / 2) * 2)
# Pad the matrix if necessary
mtfa = padArray(mtfa, N)
return mtfa
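The exp(-D/2) relation applied above can be checked on its own; this is a scalar restatement for illustration, independent of the array padding.

```python
import math

# Long-exposure atmospheric MTF from the phase structure function D(r):
# MTF(r) = exp(-D(r)/2). D = 0 gives MTF = 1 (no atmospheric blur),
# and D = 2*ln(2) gives MTF = 0.5.
def mtf_from_sf(sfa):
    return math.exp(-0.5 * sfa)

mtf_from_sf(0.0)   # -> 1.0
```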
def atmSF(D, m, wlum, zen, r0inmRef, model):
"""Get the atmosphere phase structure function.
Parameters
----------
D : float
Side length of optical path difference (OPD) image in m.
m : int
Dimension of OPD image in pixel.
wlum : float
Wavelength in um.
zen : float
Telescope zenith angle in degree.
r0inmRef : float
Reference r0 in meter at the wavelength of 0.5 um.
model : str
Kolmogorov power spectrum ("Kolm") or von Karman power spectrum
("vonK").
Returns
-------
numpy.ndarray
Atmosphere phase structure function.
Raises
------
ValueError
The model type is not supported.
"""
# Check the model
if model not in ("Kolm", "vonK"):
raise ValueError("Does not support %s atmosphere model." % model)
# Get the atmosphere reference r0 in meter.
r0a = r0Wz(r0inmRef, zen, wlum)
# Round elements of the array to the nearest integer.
m0 = np.rint(0.5 * (m + 1) + 1e-5)
# Get the x, y coordinates index
aa = np.arange(1, m + 1)
x, y = np.meshgrid(aa, aa)
# Frequency resolution in 1/rad
dr = D / (m - 1)
# Atmosphere r
r = dr * np.sqrt((x - m0) ** 2 + (y - m0) ** 2)
# Calculate the structure function
# Kolmogorov power spectrum
if model == "Kolm":
# D(r) = 6.88 * (r/r0)^(5/3) in p.117, Chap. 11 of PhoSim reference
sfa = 6.88 * (r / r0a) ** (5 / 3)
# von Karman power spectrum
elif model == "vonK":
# Outer scale in meter
L0 = 30
# Gamma function is used
sfa_c = (
2
* sp.gamma(11 / 6)
/ 2 ** (5 / 6)
/ np.pi ** (8 / 3)
* (24 / 5 * sp.gamma(6 / 5)) ** (5 / 6)
* (r0a / L0) ** (-5 / 3)
)
# Modified Bessel function of the second kind
sfa_k = sp.kv(5 / 6, (2 * np.pi / L0 * r))
# There is the undefined value (nan = 0 * inf, 0 from 'r' and inf from
# 'sfa_k')
with warnings.catch_warnings():
warnings.simplefilter("ignore", category=RuntimeWarning)
sfa = sfa_c * (
2 ** (-1 / 6) * sp.gamma(5 / 6)
- (2 * np.pi / L0 * r) ** (5 / 6) * sfa_k
)
np.nan_to_num(sfa, copy=False)
return sfa
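The Kolmogorov branch above admits a quick sanity check: with D(r) = 6.88 (r/r0)^(5/3), the phase structure function equals 6.88 rad^2 exactly at r = r0.

```python
# Scalar restatement of the Kolmogorov structure function used above.
def kolm_sf(r, r0):
    return 6.88 * (r / r0) ** (5.0 / 3.0)

d_at_r0 = kolm_sf(0.1382, 0.1382)   # -> 6.88
```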
def r0Wz(r0inmRef, zen, wlum):
"""Get the atomosphere reference r0, which is a function of zenith angle
and wavelength.
Parameters
----------
r0inmRef : float
Reference r0 in meter at the wavelength of 0.5 um.
zen : float
Telescope zenith angle in degree.
wlum : float
Wavelength in um.
Returns
-------
float
Atmosphere reference r0 in meter.
"""
# Telescope zenith angle, change the unit from degree to radian
zen = zen * np.pi / 180
# Get the atmosphere reference r0
r0aref = r0inmRef * np.cos(zen) ** 0.6
# Atmosphere reference r0 at the specific wavelength in um
# 0.5 um is the reference wavelength
r0a = r0aref * (wlum / 0.5) ** 1.2
return r0a
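The two scalings in `r0Wz()` above, r0 proportional to cos(zenith)^0.6 and to (lambda / 0.5 um)^1.2, can be combined into a one-line restatement:

```python
import math

# Self-contained restatement of r0Wz(): scale the reference r0 (defined
# at zenith, 0.5 um) to the actual zenith angle and wavelength.
def r0_wz(r0_ref, zen_deg, wlum):
    zen = math.radians(zen_deg)
    return r0_ref * math.cos(zen) ** 0.6 * (wlum / 0.5) ** 1.2

# At zenith and 0.5 um the reference value is returned unchanged.
r0 = r0_wz(0.1382, 0.0, 0.5)   # -> 0.1382
```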
def psf2eAtmW(
array,
wlum,
aType="opd",
D=8.36,
pmask=0,
r0inmRef=0.1382,
sensorFactor=1,
zen=0,
imagedelta=0.2,
fno=1.2335,
debugLevel=0,
):
"""Calculate the ellipticity with the error of atmosphere and weighting
function.
Parameters
----------
array : numpy.ndarray
Wavefront OPD in micron, or psf image.
wlum : float
Wavelength in microns.
aType : str, optional
Type of image ("opd" or "psf"). (the default is "opd".)
D : float, optional
Side length of optical path difference (OPD) image in m. (the default
is 8.36.)
pmask : int or numpy.ndarray[int], optional
Pupil mask. (the default is 0.)
r0inmRef : float, optional
Fiducial atmosphere r0 @ 500nm in meter. (the default is 0.1382.)
sensorFactor : float, optional
Factor of sensor. (the default is 1.)
zen : float, optional
Telescope zenith angle in degree. (the default is 0.)
imagedelta : float, optional
Only needed when psf is used. 1 pixel = 0.2 arcsec. (the default is
0.2.)
fno : float, optional
Only needed when psf is used. use 0 for opd. (the default is 1.2335.)
debugLevel : int, optional
The higher value gives more information. (the default is 0.)
Returns
-------
float
Ellipticity.
numpy.ndarray
Correlation function (XX).
numpy.ndarray
Correlation function (YY).
numpy.ndarray
Correlation function (XY).
"""
# Unlike calc_pssn(), here imagedelta needs to be provided for type='opd'
# because the ellipticity calculation operates on psf.
# Get the k value
k = fno * wlum / imagedelta
# Get the PSF with the system error
if aType == "opd":
m = array.shape[0] / sensorFactor
psfe = opd2psf(
array,
0,
wlum,
imagedelta=imagedelta,
sensorFactor=sensorFactor,
fno=fno,
debugLevel=debugLevel,
)
else:
m = max(pmask.shape)
psfe = array
# Optical transfer function (OTF) of system error
otfe = psf2otf(psfe)
# Modulation transfer function (MTF) with atmosphere
mtfa = createMTFatm(D, m, k, wlum, zen, r0inmRef)
# OTF with system and atmosphere errors
otf = otfe * mtfa
# PSF with system and atmosphere errors
psf = otf2psf(otf)
if debugLevel >= 3:
print("Below from the Gaussian weigting function on ellipticity.")
# Get the ellipticity and correlation function
# The second input of psf2eW should be pixinum (1 pixel = 10 um).
# Check this part with Bo.
e, q11, q22, q12 = psf2eW(
psf, imagedelta, wlum, atmModel="Gau", debugLevel=debugLevel
)
return e, q11, q22, q12
def psf2eW(psf, pixinum, wlum, atmModel="Gau", debugLevel=0):
"""Calculate the ellipticity with the weighting function.
Parameters
----------
psf : numpy.ndarray
Point spread function (PSF).
pixinum : float
Pixel in um.
wlum : float
Wavelength in microns.
atmModel : str, optional
Atmosphere model ("Gau" or "2Gau"). (the default is "Gau".)
debugLevel : int, optional
The higher value gives more information. (the default is 0.)
Returns
-------
float
Ellipticity.
numpy.ndarray
Correlation function (XX).
numpy.ndarray
Correlation function (YY).
numpy.ndarray
Correlation function (XY).
"""
# x, y positions
x, y = np.meshgrid(np.arange(1, psf.shape[0] + 1), np.arange(1, psf.shape[1] + 1))
# Average x and y
xbar = np.sum(x * psf) / np.sum(psf)
ybar = np.sum(y * psf) / np.sum(psf)
# Show the averaged x and y
if debugLevel >= 3:
print("xbar=%6.3f, ybar=%6.3f" % (xbar, ybar))
# Distance^2 to center
r2 = (x - xbar) ** 2 + (y - ybar) ** 2
# Weighting function based on the atmospheric model
# FWHM is assigned to be 0.6 arcsec. Need to check with Bo for this.
fwhminarcsec = 0.6
oversample = 1
W = createAtm(
wlum,
fwhminarcsec,
r2,
pixinum,
oversample,
model=atmModel,
debugLevel=debugLevel,
)
# Apply the weighting function to PSF
psf = psf * W
# Correlation function
Q11 = np.sum(((x - xbar) ** 2) * psf) / np.sum(psf)
Q22 = np.sum(((y - ybar) ** 2) * psf) / np.sum(psf)
Q12 = np.sum(((x - xbar) * (y - ybar)) * psf) / np.sum(psf)
# Calculate the ellipticity
T = Q11 + Q22
if T > 1e-20:
e1 = (Q11 - Q22) / T
e2 = 2 * Q12 / T
e = np.sqrt(e1**2 + e2**2)
# No correlation
else:
e = 0
return e, Q11, Q22, Q12
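The moment arithmetic above can be exercised on a synthetic elliptical Gaussian. This sketch omits the atmospheric weighting `W` and uses unweighted second moments only:

```python
import numpy as np

# Synthetic elliptical Gaussian: sigma_x = 6, sigma_y = 3 pixels, centered
# on a grid point so the discrete moments match the analytic ones closely.
n = 64
y, x = np.mgrid[0:n, 0:n]
img = np.exp(-((x - 32) ** 2 / (2 * 6.0 ** 2) + (y - 32) ** 2 / (2 * 3.0 ** 2)))

xbar = (x * img).sum() / img.sum()
ybar = (y * img).sum() / img.sum()
q11 = (((x - xbar) ** 2) * img).sum() / img.sum()   # ~ sigma_x^2 = 36
q22 = (((y - ybar) ** 2) * img).sum() / img.sum()   # ~ sigma_y^2 = 9
q12 = (((x - xbar) * (y - ybar)) * img).sum() / img.sum()

e1 = (q11 - q22) / (q11 + q22)
e2 = 2 * q12 / (q11 + q22)
print(round(float(np.hypot(e1, e2)), 3))  # (36 - 9) / (36 + 9) = 0.6
```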
def createAtm(
wlum, fwhminarcsec, gridsize, pixinum, oversample, model="Gau", debugLevel=0
):
"""Calculate the weighting function for a certain atmosphere model.
Parameters
----------
wlum : float
Wavelength in microns.
fwhminarcsec : float
Full width at half maximum (FWHM) in arcsec.
gridsize : int or numpy.ndarray[int]
Size of grid. If it is the array, it should be (distance to center)^2.
That means r2.
pixinum : int
Pixel in um.
oversample : int
Oversampling factor: the image resolution is k times the original one.
model : str, optional
Atmosphere model ("Gau" or "2Gau"). (the default is "Gau".)
debugLevel : int, optional
The higher value gives more information. (the default is 0.)
Returns
-------
numpy.ndarray
Weighting function of atmosphere.
"""
# Get the weighting function
# Distance^2 to center
if isinstance(gridsize, int):
nreso = gridsize * oversample
# n for radius length
nr = nreso / 2
aa = np.linspace(-nr + 0.5, nr - 0.5, nreso)
x, y = np.meshgrid(aa, aa)
r2 = x * x + y * y
else:
r2 = gridsize
# FWHM in arcsec --> FWHM in um
fwhminum = fwhminarcsec / 0.2 * 10
# Calculate the weighting function
if model == "Gau":
# Sigma in um
sig = fwhminum / 2 / np.sqrt(2 * np.log(2))
sig = sig / (pixinum / oversample)
z = np.exp(-r2 / 2 / sig**2)
elif model == "2Gau":
# Solve manually for sigma of the double Gaussian:
# with x = exp(-r^2 / (2 * sig^2)), the profile is z = x + 0.1 * x**0.25
# (peak value 1.1), so the half-maximum condition is
# x + 0.1 * x**0.25 = 0.55, whose solution is x = 0.4673194304
sig = fwhminum / (2 * np.sqrt(-2 * np.log(0.4673194304)))
# In (oversampled) pixel
sig = sig / (pixinum / oversample)
z = np.exp(-r2 / 2 / sig**2) + 0.4 / 4 * np.exp(-r2 / 8 / sig**2)
if debugLevel >= 3:
print("sigma1=%6.4f arcsec" % (sig * (pixinum / oversample) / 10 * 0.2))
return z
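A small sketch of the FWHM-to-sigma conversion used in the "Gau" branch (FWHM = 2 sqrt(2 ln 2) σ for a Gaussian; the "2Gau" branch solves a similar half-maximum condition numerically):

```python
import numpy as np

# sigma = FWHM / (2 * sqrt(2 * ln 2)) ~= FWHM / 2.3548
def fwhm_to_sigma(fwhm):
    return fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

sigma = fwhm_to_sigma(1.0)
half = np.exp(-(0.5 ** 2) / (2.0 * sigma ** 2))  # value at r = FWHM / 2
print(round(sigma, 4))  # 0.4247
```

By construction the Gaussian drops to half its peak at r = FWHM / 2.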
def opd2psf(
opd, pupil, wavelength, imagedelta=0, sensorFactor=1, fno=1.2335, debugLevel=0
):
"""Optical path difference (OPD) to point spread function (PSF).
Parameters
----------
opd : numpy.ndarray
Optical path difference.
pupil : float or numpy.ndarray
Pupil function. If pupil is a number, not an array, we will get pupil
geometry from OPD.
wavelength : float
Wavelength in um.
imagedelta : float, optional
Pixel size in um. Use 0 if pixel size is not specified. (the default
is 0.)
sensorFactor : float, optional
Factor of sensor. Only need this if imagedelta != 0. (the default
is 1.)
fno : float, optional
Only need this if imagedelta=0. (the default is 1.2335.)
debugLevel : int, optional
The higher value gives more information. (the default is 0.)
Returns
-------
numpy.ndarray
Normalized PSF.
Raises
------
ValueError
Shapes of OPD and pupil are different.
ValueError
OPD shape is not square.
ValueError
Padding value is less than 1.
"""
# Make sure all NaN in OPD to be 0
opd[np.isnan(opd)] = 0
# Get the pupil function from OPD if necessary
if not isinstance(pupil, np.ndarray):
pupil = opd != 0
# Check the dimension of pupil and OPD should be the same
if opd.shape != pupil.shape:
raise ValueError("Shapes of OPD and pupil are different.")
# For the PSF
if imagedelta != 0:
# Check the dimension of OPD
if opd.shape[0] != opd.shape[1]:
raise ValueError(
"opd2psf: OPD image must be square; got size (%d, %d)."
% (opd.shape[0], opd.shape[1])
)
# Get the k value and the padding
k = fno * wavelength / imagedelta
padding = k / sensorFactor
# Check the padding
if padding < 1:
errorMes = "opd2psf: Sampling too low, data inaccurate.\n"
errorMes += "Imagedelta needs to be smaller than"
errorMes += " fno * wlum = %4.2f um.\n" % (fno * wavelength)
errorMes += "So that the padding factor > 1.\n"
errorMes += "Otherwise we have to cut pupil to be < D."
raise ValueError(errorMes)
# Size of sensor
sensorSamples = opd.shape[0]
# Add even number for padding
N = int(sensorSamples + np.rint(((padding - 1) * sensorSamples + 1e-5) / 2) * 2)
pupil = padArray(pupil, N)
opd = padArray(opd, N)
# Show the padding information or not
if debugLevel >= 3:
print("padding = %8.6f." % padding)
# If imagedelta = 0, we don't do any padding, and go with below
z = pupil * np.exp(-2j * np.pi * opd / wavelength)
z = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(z), s=z.shape))
z = np.absolute(z**2)
# Normalize the PSF
z = z / np.sum(z)
# Show the information of PSF from OPD
if debugLevel >= 3:
print("opd2psf(): imagedelta = %8.6f." % imagedelta, end="")
if imagedelta == 0:
print("0 means using OPD with padding as provided.")
print("Verify psf has been normalized: %4.1f." % np.sum(z))
return z
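A hedged sketch of the sampling criterion enforced above: the padding factor k = fno · λ / imagedelta must exceed 1 (i.e. the pixel must be smaller than fno · λ) for the FFT-sampled PSF to be accurate:

```python
# The criterion: imagedelta (pixel size in um) must be smaller than
# fno * wavelength so that padding = k / sensorFactor can exceed 1.
def padding_factor(fno, wlum, imagedelta_um):
    return fno * wlum / imagedelta_um

print(padding_factor(1.2335, 0.5, 0.2) > 1)  # True: 0.2 um pixels sample finely enough
print(padding_factor(1.2335, 0.5, 2.0) > 1)  # False: pixels too coarse
```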
def psf2otf(psf):
"""Point spread function (PSF) to optical transfer function (OTF).
Parameters
----------
psf : numpy.ndarray
Point spread function.
Returns
-------
numpy.ndarray
Optical transfer function.
"""
otf = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(psf), s=psf.shape))
return otf
def otf2psf(otf):
"""Optical transfer function (OTF) to point spread function (PSF).
Parameters
----------
otf : numpy.ndarray
Optical transfer function.
Returns
-------
numpy.ndarray
Point spread function.
"""
psf = np.absolute(np.fft.fftshift(np.fft.ifft2(np.fft.fftshift(otf), s=otf.shape)))
return psf
if __name__ == "__main__":
pass
|
lsst-tsREPO_NAMEts_phosimPATH_START.@ts_phosim_extracted@ts_phosim-main@python@lsst@ts@phosim@utils@MetroTool.py@.PATH_END.py
|
{
"filename": "_nbinsy.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/validators/histogram2dcontour/_nbinsy.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class NbinsyValidator(_plotly_utils.basevalidators.IntegerValidator):
def __init__(
self, plotly_name="nbinsy", parent_name="histogram2dcontour", **kwargs
):
super(NbinsyValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "calc"),
min=kwargs.pop("min", 0),
role=kwargs.pop("role", "style"),
**kwargs
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py2@plotly@validators@histogram2dcontour@_nbinsy.py@.PATH_END.py
|
{
"filename": "airbyte.py",
"repo_name": "langchain-ai/langchain",
"repo_path": "langchain_extracted/langchain-master/libs/community/langchain_community/document_loaders/airbyte.py",
"type": "Python"
}
|
from typing import Any, Callable, Iterator, Mapping, Optional
from langchain_core.documents import Document
from langchain_core.utils.utils import guard_import
from langchain_community.document_loaders.base import BaseLoader
RecordHandler = Callable[[Any, Optional[str]], Document]
class AirbyteCDKLoader(BaseLoader):
"""Load with an `Airbyte` source connector implemented using the `CDK`."""
def __init__(
self,
config: Mapping[str, Any],
source_class: Any,
stream_name: str,
record_handler: Optional[RecordHandler] = None,
state: Optional[Any] = None,
) -> None:
"""Initializes the loader.
Args:
config: The config to pass to the source connector.
source_class: The source connector class.
stream_name: The name of the stream to load.
record_handler: A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state: The state to pass to the source connector. Defaults to None.
"""
from airbyte_cdk.models.airbyte_protocol import AirbyteRecordMessage
from airbyte_cdk.sources.embedded.base_integration import (
BaseEmbeddedIntegration,
)
from airbyte_cdk.sources.embedded.runner import CDKRunner
class CDKIntegration(BaseEmbeddedIntegration):
"""A wrapper around the CDK integration."""
def _handle_record(
self, record: AirbyteRecordMessage, id: Optional[str]
) -> Document:
if record_handler:
return record_handler(record, id)
return Document(page_content="", metadata=record.data)
self._integration = CDKIntegration(
config=config,
runner=CDKRunner(source=source_class(), name=source_class.__name__),
)
self._stream_name = stream_name
self._state = state
def lazy_load(self) -> Iterator[Document]:
return self._integration._load_data(
stream_name=self._stream_name, state=self._state
)
@property
def last_state(self) -> Any:
return self._integration.last_state
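A sketch of what a `record_handler` might look like. `FakeRecord` and the field names `"title"`/`"body"` are made-up stand-ins for `AirbyteRecordMessage` and real stream fields; a real handler would return `langchain_core.documents.Document` rather than a dict:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional

@dataclass
class FakeRecord:
    """Stand-in for AirbyteRecordMessage; only the `.data` mapping is used."""
    data: Dict[str, Any] = field(default_factory=dict)

def record_handler(record: FakeRecord, id: Optional[str]) -> dict:
    # A real handler returns langchain_core.documents.Document; a plain
    # dict keeps this sketch dependency-free. "body"/"title" are made up.
    return {
        "page_content": record.data.get("body", ""),
        "metadata": {"id": id, "title": record.data.get("title")},
    }

doc = record_handler(FakeRecord({"title": "t", "body": "hello"}), "42")
print(doc["page_content"])  # hello
```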
class AirbyteHubspotLoader(AirbyteCDKLoader):
"""Load from `Hubspot` using an `Airbyte` source connector."""
def __init__(
self,
config: Mapping[str, Any],
stream_name: str,
record_handler: Optional[RecordHandler] = None,
state: Optional[Any] = None,
) -> None:
"""Initializes the loader.
Args:
config: The config to pass to the source connector.
stream_name: The name of the stream to load.
record_handler: A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state: The state to pass to the source connector. Defaults to None.
"""
source_class = guard_import(
"source_hubspot", pip_name="airbyte-source-hubspot"
).SourceHubspot
super().__init__(
config=config,
source_class=source_class,
stream_name=stream_name,
record_handler=record_handler,
state=state,
)
class AirbyteStripeLoader(AirbyteCDKLoader):
"""Load from `Stripe` using an `Airbyte` source connector."""
def __init__(
self,
config: Mapping[str, Any],
stream_name: str,
record_handler: Optional[RecordHandler] = None,
state: Optional[Any] = None,
) -> None:
"""Initializes the loader.
Args:
config: The config to pass to the source connector.
stream_name: The name of the stream to load.
record_handler: A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state: The state to pass to the source connector. Defaults to None.
"""
source_class = guard_import(
"source_stripe", pip_name="airbyte-source-stripe"
).SourceStripe
super().__init__(
config=config,
source_class=source_class,
stream_name=stream_name,
record_handler=record_handler,
state=state,
)
class AirbyteTypeformLoader(AirbyteCDKLoader):
"""Load from `Typeform` using an `Airbyte` source connector."""
def __init__(
self,
config: Mapping[str, Any],
stream_name: str,
record_handler: Optional[RecordHandler] = None,
state: Optional[Any] = None,
) -> None:
"""Initializes the loader.
Args:
config: The config to pass to the source connector.
stream_name: The name of the stream to load.
record_handler: A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state: The state to pass to the source connector. Defaults to None.
"""
source_class = guard_import(
"source_typeform", pip_name="airbyte-source-typeform"
).SourceTypeform
super().__init__(
config=config,
source_class=source_class,
stream_name=stream_name,
record_handler=record_handler,
state=state,
)
class AirbyteZendeskSupportLoader(AirbyteCDKLoader):
"""Load from `Zendesk Support` using an `Airbyte` source connector."""
def __init__(
self,
config: Mapping[str, Any],
stream_name: str,
record_handler: Optional[RecordHandler] = None,
state: Optional[Any] = None,
) -> None:
"""Initializes the loader.
Args:
config: The config to pass to the source connector.
stream_name: The name of the stream to load.
record_handler: A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state: The state to pass to the source connector. Defaults to None.
"""
source_class = guard_import(
"source_zendesk_support", pip_name="airbyte-source-zendesk-support"
).SourceZendeskSupport
super().__init__(
config=config,
source_class=source_class,
stream_name=stream_name,
record_handler=record_handler,
state=state,
)
class AirbyteShopifyLoader(AirbyteCDKLoader):
"""Load from `Shopify` using an `Airbyte` source connector."""
def __init__(
self,
config: Mapping[str, Any],
stream_name: str,
record_handler: Optional[RecordHandler] = None,
state: Optional[Any] = None,
) -> None:
"""Initializes the loader.
Args:
config: The config to pass to the source connector.
stream_name: The name of the stream to load.
record_handler: A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state: The state to pass to the source connector. Defaults to None.
"""
source_class = guard_import(
"source_shopify", pip_name="airbyte-source-shopify"
).SourceShopify
super().__init__(
config=config,
source_class=source_class,
stream_name=stream_name,
record_handler=record_handler,
state=state,
)
class AirbyteSalesforceLoader(AirbyteCDKLoader):
"""Load from `Salesforce` using an `Airbyte` source connector."""
def __init__(
self,
config: Mapping[str, Any],
stream_name: str,
record_handler: Optional[RecordHandler] = None,
state: Optional[Any] = None,
) -> None:
"""Initializes the loader.
Args:
config: The config to pass to the source connector.
stream_name: The name of the stream to load.
record_handler: A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state: The state to pass to the source connector. Defaults to None.
"""
source_class = guard_import(
"source_salesforce", pip_name="airbyte-source-salesforce"
).SourceSalesforce
super().__init__(
config=config,
source_class=source_class,
stream_name=stream_name,
record_handler=record_handler,
state=state,
)
class AirbyteGongLoader(AirbyteCDKLoader):
"""Load from `Gong` using an `Airbyte` source connector."""
def __init__(
self,
config: Mapping[str, Any],
stream_name: str,
record_handler: Optional[RecordHandler] = None,
state: Optional[Any] = None,
) -> None:
"""Initializes the loader.
Args:
config: The config to pass to the source connector.
stream_name: The name of the stream to load.
record_handler: A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state: The state to pass to the source connector. Defaults to None.
"""
source_class = guard_import(
"source_gong", pip_name="airbyte-source-gong"
).SourceGong
super().__init__(
config=config,
source_class=source_class,
stream_name=stream_name,
record_handler=record_handler,
state=state,
)
|
langchain-aiREPO_NAMElangchainPATH_START.@langchain_extracted@langchain-master@libs@community@langchain_community@document_loaders@airbyte.py@.PATH_END.py
|
{
"filename": "_textcase.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/surface/colorbar/tickfont/_textcase.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class TextcaseValidator(_plotly_utils.basevalidators.EnumeratedValidator):
def __init__(
self, plotly_name="textcase", parent_name="surface.colorbar.tickfont", **kwargs
):
super(TextcaseValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "calc"),
values=kwargs.pop("values", ["normal", "word caps", "upper", "lower"]),
**kwargs,
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@surface@colorbar@tickfont@_textcase.py@.PATH_END.py
|
{
"filename": "DataGeneration.py",
"repo_name": "wlxu/RelicClass",
"repo_path": "RelicClass_extracted/RelicClass-master/RealSpaceInterface/Calc2D/DataGeneration.py",
"type": "Python"
}
|
import logging
import numpy as np
import cv2
from Calc2D.rFourier import realFourier, realInverseFourier
def GenerateGaussianData(sigma, size, points, A=1):
xr = np.linspace(-size / 2.0, size / 2.0, points)
yr = np.linspace(-size / 2.0, size / 2.0, points)
step = xr[1] - xr[0]
x, y = np.meshgrid(
xr, yr, indexing='ij', sparse=True) # indexing is important
del xr, yr
#use the more easy formula
Value = A * np.exp(-(x**2 + y**2) / (2 * sigma**2))
kx, ky, FValue = realFourier(step, Value)
kxr, kyr = np.meshgrid(kx, ky, indexing='ij', sparse=True)
k = np.sqrt(kxr**2 + kyr**2)
del kxr, kyr
kx = (min(kx), max(kx)) #just return the extremal values to save memory
ky = (min(ky), max(ky))
ValueE = (Value.min(), Value.max())
return ValueE, FValue, k, kx, ky
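`GenerateGaussianData` relies on the fact that the Fourier transform of a Gaussian is again a Gaussian; a standalone numpy sketch of that property (not using the project's `realFourier`):

```python
import numpy as np

# A centered Gaussian transforms to a (real, positive) Gaussian peaked at
# zero frequency; fftshift/ifftshift keep both grids centered.
n, sigma = 64, 4.0
ax = np.arange(n) - n // 2
x, y = np.meshgrid(ax, ax, indexing="ij")
g = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
G = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(g))))
i, j = np.unravel_index(np.argmax(G), G.shape)
print(int(i), int(j))  # 32 32 -- the DC bin at the center of the shifted grid
```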
def GenerateSIData(A, size, points, limit=None, ns=0.96):
xr = np.linspace(-size / 2.0, size / 2.0, points)
yr = np.linspace(-size / 2.0, size / 2.0, points)
step = xr[1] - xr[0]
x, y = np.meshgrid(
xr, yr, indexing='ij', sparse=True) # indexing is important
del xr, yr
Value = 0 * x + 0 * y
kx, ky, FValue = realFourier(step, Value) #FValue==0
kxr, kyr = np.meshgrid(kx, ky, indexing='ij', sparse=True)
k = np.sqrt(kxr**2 + kyr**2)
del kxr, kyr
if limit is None:
ktilde = k.flatten()
ktilde[np.argmin(k)] = 10**9  # just let the background be arbitrarily low
ktilde = ktilde.reshape(k.shape)
scale = np.sqrt(A / ktilde ** (2 - (ns - 1) * 2. / 3.)) / np.sqrt(2)
FValue = (np.random.normal(loc=0, scale=scale)
+ np.random.normal(loc=0, scale=scale) * 1j)
elif isinstance(limit, (list, tuple)):
iunder, junder = np.where(k < limit[1])
for i, j in zip(iunder, junder):
if k[i][j] > limit[0] and k[i][j] > 0:
scale = np.sqrt(A / k[i][j] ** (2 - (ns - 1) * 2. / 3.)) / np.sqrt(2)
FValue[i][j] = (np.random.normal(loc=0, scale=scale)
+ np.random.normal(loc=0, scale=scale) * 1j)
else:
raise ValueError("limit must be None or tuple or list")
Value = realInverseFourier(FValue)
kx = (min(kx), max(kx))
ky = (min(ky), max(ky))
ValueE = (Value.min(), Value.max())
return ValueE, FValue, k, kx, ky
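The random-mode drawing above can be sketched with plain numpy FFT conventions; this is an illustrative P(k) ∝ A/k² (the ns = 1 case of the exponent above), not the code's `realFourier` grid:

```python
import numpy as np

# Draw complex Gaussian Fourier modes with variance ~ A / k^2, then
# invert to a real-space map.
rng = np.random.default_rng(0)
n, A = 32, 1.0
k = np.sqrt(np.add.outer(np.fft.fftfreq(n) ** 2, np.fft.fftfreq(n) ** 2))
k[0, 0] = np.inf                      # suppress the k = 0 (mean) mode
sigma = np.sqrt(A / k ** 2) / np.sqrt(2.0)
modes = rng.normal(scale=sigma) + 1j * rng.normal(scale=sigma)
field_map = np.fft.ifft2(modes).real
print(field_map.shape)  # (32, 32)
```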
|
wlxuREPO_NAMERelicClassPATH_START.@RelicClass_extracted@RelicClass-master@RealSpaceInterface@Calc2D@DataGeneration.py@.PATH_END.py
|
{
"filename": "emulator_analysis_convergence.py",
"repo_name": "dylancromer/maszcal",
"repo_path": "maszcal_extracted/maszcal-main/scripts/emulator_analysis_convergence.py",
"type": "Python"
}
|
import datetime
import numpy as np
import pathos.pools as pp
from sklearn.gaussian_process.kernels import Matern
import supercubos
import maszcal.cosmology
import maszcal.corrections
import maszcal.data.sims
import maszcal.data.obs
import maszcal.density
import maszcal.fitutils
import maszcal.lensing
import maszcal.likelihoods
import maszcal.twohalo
PARAM_MINS = np.array([-2, 0, 1, 0.1, 2.2]) # a_sz, a_2h, con, alpha, beta
PARAM_MAXES = np.array([2, 5, 6, 2.1, 8.1])
GAMMA = 0.2
MIN_MU = np.log(3e13)
MAX_MU = np.log(3e14)
Z_MIN = 0.2
Z_MAX = 1
FROM_ARCMIN = 2 * np.pi / 360 / 60
TO_ARCMIN = 1/FROM_ARCMIN
THETAS = np.linspace(0.01*FROM_ARCMIN, 20*FROM_ARCMIN, 88)
SEED = 13
NUM_CLUSTERS = 101
NUM_ERRORCHECK_SAMPLES = 10
NUM_PROCESSES = 1
MIN_SAMPLES = 200
MAX_SAMPLES = 2000
SAMPLE_STEP_SIZE = 200
FID_SAMPLE_SIZE = 1000
FID_PRINCIPAL_COMPONENTS = 8
TIMESTAMP = datetime.datetime.now().strftime("%Y-%m-%d-%H%M%S")
DIR = 'data/emulator/'
SETUP_SLUG = 'emulator-errors_bary-2h_convergence'
class bcolors:
HEADER = '\033[95m'
OKBLUE = '\033[94m'
OKGREEN = '\033[92m'
WARNING = '\033[93m'
FAIL = '\033[91m'
ENDC = '\033[0m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
def get_density_model():
return maszcal.density.Gnfw(
cosmo_params=maszcal.cosmology.CosmoParams(),
mass_definition='mean',
delta=200,
comoving_radii=True,
nfw_class=maszcal.density.NfwModel,
)
def get_hmf_interp():
return maszcal.tinker.HmfInterpolator(
mu_samples=np.log(np.geomspace(1e12, 1e16, 600)),
redshift_samples=np.linspace(0.01, 4, 120),
delta=200,
mass_definition='mean',
cosmo_params=maszcal.cosmology.CosmoParams(),
)
def get_two_halo_convergence():
model = maszcal.twohalo.TwoHaloConvergenceModel(
cosmo_params=maszcal.cosmology.CosmoParams(),
mass_definition='mean',
delta=200,
)
return model.radius_space_convergence
def get_2h_emulator(two_halo_conv):
return maszcal.twohalo.TwoHaloEmulator.from_function(
two_halo_func=two_halo_conv,
r_grid=np.geomspace(1e-4, 100, 120),
z_lims=np.array([0, Z_MAX+0.01]),
mu_lims=np.log(np.array([1e13, 3e15])),
num_emulator_samples=800,
separate_mu_and_z_axes=True,
).with_redshift_dependent_radii
def get_corrected_lensing_func(density_model, two_halo_emulator):
return maszcal.corrections.TwoHaloCorrection(
one_halo_func=density_model.convergence,
two_halo_func=two_halo_emulator,
).corrected_profile
def get_convergence_model(hmf_interp, lensing_func):
rng = np.random.default_rng(seed=SEED)
sz_mus = (MAX_MU - MIN_MU) * rng.random(size=NUM_CLUSTERS) + MIN_MU
sz_masses = np.exp(sz_mus)
zs = Z_MAX*rng.random(size=NUM_CLUSTERS) + Z_MIN
weights = np.ones(NUM_CLUSTERS)
cosmo_params = maszcal.cosmology.CosmoParams()
return maszcal.lensing.ScatteredMatchingConvergenceModel(
sz_masses=sz_masses,
redshifts=zs,
lensing_weights=weights,
cosmo_params=cosmo_params,
lensing_func=lensing_func,
logmass_prob_dist_func=hmf_interp,
)
def _pool_map(func, array):
pool = pp.ProcessPool(NUM_PROCESSES)
mapped_array = np.array(
pool.map(func, array),
).T
pool.close()
pool.join()
pool.clear()
pool.terminate()
pool.restart()
return mapped_array
def get_emulator_errors(PARAM_MINS, PARAM_MAXES, emulator, conv_func):
rand = supercubos.LatinSampler().get_rand_sample(
param_mins=PARAM_MINS,
param_maxes=PARAM_MAXES,
num_samples=NUM_ERRORCHECK_SAMPLES,
)
convs_check = _pool_map(conv_func, rand)
data_check = convs_check
emulated_data = emulator(rand)
return (emulated_data - data_check)/data_check
def generate_header():
terminator = '\n'
configs = [
f'PARAM_MINS = {PARAM_MINS}',
f'PARAM_MAXES = {PARAM_MAXES}',
f'GAMMA = {GAMMA}',
f'MIN_MU = {MIN_MU}',
f'MAX_MU = {MAX_MU}',
f'Z_MIN = {Z_MIN}',
f'Z_MAX = {Z_MAX}',
f'THETAS = {THETAS}',
f'SEED = {SEED}',
f'NUM_CLUSTERS = {NUM_CLUSTERS}',
f'NUM_ERRORCHECK_SAMPLES = {NUM_ERRORCHECK_SAMPLES}',
f'MIN_SAMPLES = {MIN_SAMPLES}',
f'MAX_SAMPLES = {MAX_SAMPLES}',
f'SAMPLE_STEP_SIZE = {SAMPLE_STEP_SIZE}',
f'FID_SAMPLE_SIZE = {FID_SAMPLE_SIZE}',
f'FID_PRINCIPAL_COMPONENTS = {FID_PRINCIPAL_COMPONENTS}',
]
header = [conf + terminator for conf in configs]
return ''.join(header)
def save_arr(array, name):
filename = name + '_' + TIMESTAMP
header = generate_header()
nothing = np.array([])
np.savetxt(DIR + filename + '.header.txt', nothing, header=header)
np.save(DIR + filename + '.npy', array)
def do_estimation(sample_size, num_pcs):
rng = np.random.default_rng(seed=SEED)
lh = supercubos.LatinSampler(rng=rng).get_lh_sample(PARAM_MINS, PARAM_MAXES, sample_size)
density_model = get_density_model()
two_halo_conv = get_two_halo_convergence()
two_halo_emulator = get_2h_emulator(two_halo_conv)
corrected_lensing_func = get_corrected_lensing_func(density_model, two_halo_emulator)
hmf_interp = get_hmf_interp()
conv_model = get_convergence_model(hmf_interp, corrected_lensing_func)
def wrapped_conv_func(params):
a_sz = params[0:1]
a_2h = params[1:2]
con = params[2:3]
alpha = params[3:4]
beta = params[4:5]
gamma = np.array([GAMMA])
conv = conv_model.stacked_convergence(THETAS, a_sz, a_2h, con, alpha, beta, gamma).squeeze()
return THETAS * conv
convs = _pool_map(wrapped_conv_func, lh)
emulator = maszcal.emulate.PcaEmulator.create_from_data(
coords=lh,
data=convs,
interpolator_class=maszcal.interpolate.GaussianProcessInterpolator,
interpolator_kwargs={'kernel': Matern()},
num_components=num_pcs,
)
return get_emulator_errors(PARAM_MINS, PARAM_MAXES, emulator, wrapped_conv_func)
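The idea behind `maszcal.emulate.PcaEmulator` (not reimplemented here) is to compress training curves into a few principal components and interpolate only the component weights; a toy SVD sketch of the compression step:

```python
import numpy as np

# Toy training set: 50 curves that are random mixtures of 3 smooth basis
# functions, so a rank-3 PCA reconstructs them to round-off.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 40)
basis = np.stack([t, t ** 2, np.sin(np.pi * t)])      # (3, 40)
data = rng.random((50, 3)) @ basis                    # (50, 40)

mean = data.mean(axis=0)
u, s, vt = np.linalg.svd(data - mean, full_matrices=False)
ncomp = 3
coeffs = u[:, :ncomp] * s[:ncomp]                     # per-curve PC weights
recon = coeffs @ vt[:ncomp] + mean                    # truncated reconstruction
print(np.allclose(data, recon))  # True: data is exactly rank 3 about its mean
```

In the real emulator, a Gaussian process interpolates `coeffs` over the Latin-hypercube parameter samples.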
if __name__ == '__main__':
sample_sizes = np.arange(MIN_SAMPLES, MAX_SAMPLES+SAMPLE_STEP_SIZE, SAMPLE_STEP_SIZE)
print('Calculating errors as a function of LH size...')
for num_samples in sample_sizes:
emulator_errs = do_estimation(num_samples, FID_PRINCIPAL_COMPONENTS)
save_arr(emulator_errs, SETUP_SLUG+f'_nsamples={num_samples}')
print(f'Finished calculation for LH of size {num_samples}')
print('Calculating errors as a function of principal components...')
num_component_range = np.arange(3, 13, 1)
for num_components in num_component_range:
emulator_errs = do_estimation(FID_SAMPLE_SIZE, num_components)
save_arr(emulator_errs, SETUP_SLUG+f'_ncomponents={num_components}')
print(f'Finished calculation for {num_components} PCs')
print(bcolors.OKGREEN + '--- Analysis complete ---' + bcolors.ENDC)
|
dylancromerREPO_NAMEmaszcalPATH_START.@maszcal_extracted@maszcal-main@scripts@emulator_analysis_convergence.py@.PATH_END.py
|
{
"filename": "__init__.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/pyzmq/py2/zmq/green/eventloop/__init__.py",
"type": "Python"
}
|
from zmq.green.eventloop.ioloop import IOLoop
__all__ = ['IOLoop']
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@pyzmq@py2@zmq@green@eventloop@__init__.py@.PATH_END.py
|
{
"filename": "msvc9compiler.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/numpy/py3/numpy/distutils/msvc9compiler.py",
"type": "Python"
}
|
import os
from distutils.msvc9compiler import MSVCCompiler as _MSVCCompiler
from .system_info import platform_bits
def _merge(old, new):
"""Concatenate two environment paths avoiding repeats.
Here `old` is the environment string before the base class initialize
function is called and `new` is the string after the call. The new string
will be a fixed string if it is not obtained from the current environment,
or the same as the old string if obtained from the same environment. The aim
here is not to append the new string if it is already contained in the old
string so as to limit the growth of the environment string.
Parameters
----------
old : string
Previous environment string.
new : string
New environment string.
Returns
-------
ret : string
Updated environment string.
"""
if not old:
return new
if new in old:
return old
# Neither new nor old is empty. Give old priority.
return ';'.join([old, new])
class MSVCCompiler(_MSVCCompiler):
def __init__(self, verbose=0, dry_run=0, force=0):
_MSVCCompiler.__init__(self, verbose, dry_run, force)
def initialize(self, plat_name=None):
# The 'lib' and 'include' variables may be overwritten
# by MSVCCompiler.initialize, so save them for later merge.
environ_lib = os.getenv('lib')
environ_include = os.getenv('include')
_MSVCCompiler.initialize(self, plat_name)
# Merge current and previous values of 'lib' and 'include'
os.environ['lib'] = _merge(environ_lib, os.environ['lib'])
os.environ['include'] = _merge(environ_include, os.environ['include'])
# msvc9 building for 32 bits requires SSE2 to work around a
# compiler bug.
if platform_bits == 32:
self.compile_options += ['/arch:SSE2']
self.compile_options_debug += ['/arch:SSE2']
def manifest_setup_ldargs(self, output_filename, build_temp, ld_args):
ld_args.append('/MANIFEST')
_MSVCCompiler.manifest_setup_ldargs(self, output_filename,
build_temp, ld_args)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@numpy@py3@numpy@distutils@msvc9compiler.py@.PATH_END.py
|
{
"filename": "__init__.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/scattermapbox/selected/marker/__init__.py",
"type": "Python"
}
|
import sys
from typing import TYPE_CHECKING
if sys.version_info < (3, 7) or TYPE_CHECKING:
from ._size import SizeValidator
from ._opacity import OpacityValidator
from ._color import ColorValidator
else:
from _plotly_utils.importers import relative_import
__all__, __getattr__, __dir__ = relative_import(
__name__,
[],
[
"._size.SizeValidator",
"._opacity.OpacityValidator",
"._color.ColorValidator",
],
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@scattermapbox@selected@marker@__init__.py@.PATH_END.py
|
{
"filename": "test_jump.py",
"repo_name": "nanograv/PINT",
"repo_path": "PINT_extracted/PINT-master/tests/test_jump.py",
"type": "Python"
}
|
"""Tests for jump model component """
import logging
import os
import pytest
import astropy.units as u
import numpy as np
import pint.models.model_builder as mb
import pint.toa as toa
from pint.residuals import Residuals
from pinttestdata import datadir
from pint.models import parameter as p
from pint.models import PhaseJump
import pint.models.timing_model
import pint.fitter
class SimpleSetup:
def __init__(self, par, tim):
self.par = par
self.tim = tim
self.m = mb.get_model(self.par)
self.t = toa.get_TOAs(
self.tim, ephem="DE405", planets=False, include_bipm=False
)
@pytest.fixture
def setup_NGC6440E():
os.chdir(datadir)
return SimpleSetup("NGC6440E.par", "NGC6440E.tim")
def test_add_jumps_and_flags(setup_NGC6440E):
setup_NGC6440E.m.add_component(PhaseJump(), validate=False)
cp = setup_NGC6440E.m.components["PhaseJump"]
# simulate selecting TOAs in pintk and jumping them
selected_toa_ind = [1, 2, 3] # arbitrary set of TOAs
cp.add_jump_and_flags(setup_NGC6440E.t.table["flags"][selected_toa_ind])
for d in setup_NGC6440E.t.table["flags"][selected_toa_ind]:
assert d["gui_jump"] == "1"
# add second jump to different set of TOAs
selected_toa_ind2 = [10, 11, 12]
cp.add_jump_and_flags(setup_NGC6440E.t.table["flags"][selected_toa_ind2])
# check previous jump flags unaltered
for d in setup_NGC6440E.t.table["flags"][selected_toa_ind]:
assert d["gui_jump"] == "1"
# check appropriate flags added
for d in setup_NGC6440E.t.table["flags"][selected_toa_ind2]:
assert d["gui_jump"] == "2"
def test_add_overlapping_jump(setup_NGC6440E):
setup_NGC6440E.m.add_component(PhaseJump(), validate=False)
cp = setup_NGC6440E.m.components["PhaseJump"]
selected_toa_ind = [1, 2, 3]
selected_toa_ind2 = [10, 11, 12]
cp.add_jump_and_flags(setup_NGC6440E.t.table["flags"][selected_toa_ind])
cp.add_jump_and_flags(setup_NGC6440E.t.table["flags"][selected_toa_ind2])
# attempt to add overlapping jump - should not add jump
selected_toa_ind3 = [9, 10, 11]
cp.add_jump_and_flags(setup_NGC6440E.t.table["flags"][selected_toa_ind3])
# check previous jump flags unaltered
for d in setup_NGC6440E.t.table["flags"][selected_toa_ind]:
assert d["gui_jump"] == "1"
for d in setup_NGC6440E.t.table["flags"][selected_toa_ind2]:
assert d["gui_jump"] == "2"
# check that no flag added to index 9
assert "jump" not in setup_NGC6440E.t.table[9].colnames
assert "gui_jump" not in setup_NGC6440E.t.table[9].colnames
def test_remove_jump_and_flags(setup_NGC6440E):
setup_NGC6440E.m.add_component(PhaseJump(), validate=False)
cp = setup_NGC6440E.m.components["PhaseJump"]
selected_toa_ind = [1, 2, 3]
selected_toa_ind2 = [10, 11, 12]
cp.add_jump_and_flags(setup_NGC6440E.t.table["flags"][selected_toa_ind])
cp.add_jump_and_flags(setup_NGC6440E.t.table["flags"][selected_toa_ind2])
# test delete_jump_and_flags
setup_NGC6440E.m.delete_jump_and_flags(setup_NGC6440E.t.table["flags"], 1)
assert len(cp.jumps) == 1
f = pint.fitter.Fitter.auto(setup_NGC6440E.t, setup_NGC6440E.m)
# delete last jump
setup_NGC6440E.m.delete_jump_and_flags(setup_NGC6440E.t.table["flags"], 2)
for d in setup_NGC6440E.t.table["flags"][selected_toa_ind2]:
assert "jump" not in d
assert "PhaseJump" not in setup_NGC6440E.m.components
f = pint.fitter.Fitter.auto(setup_NGC6440E.t, setup_NGC6440E.m)
def test_jump_params_to_flags(setup_NGC6440E):
"""Check jump_params_to_flags function."""
setup_NGC6440E.m.add_component(PhaseJump(), validate=False)
cp = setup_NGC6440E.m.components["PhaseJump"]
par = p.maskParameter(
name="JUMP",
key="freq",
value=0.2,
key_value=[1440, 1700],
units=u.s,
tcb2tdb_scale_factor=u.Quantity(1),
) # TOAs indexed 48, 49, 54 in NGC6440E are within this frequency range
cp.add_param(par, setup=True)
# sanity check - ensure no jump flags from initialization
for i in range(setup_NGC6440E.t.ntoas):
assert "jump" not in setup_NGC6440E.t.table["flags"][i]
# add flags based off jumps added to model
setup_NGC6440E.m.jump_params_to_flags(setup_NGC6440E.t)
# index to affected TOAs and ensure appropriate flags set
toa_indices = [48, 49, 54]
for i in toa_indices:
assert "jump" in setup_NGC6440E.t.table["flags"][i]
assert setup_NGC6440E.t.table["flags"][i]["jump"][0] == "1"
# ensure no extraneous flags added to unaffected TOAs
for i in range(setup_NGC6440E.t.ntoas):
if i not in toa_indices:
assert "jump" not in setup_NGC6440E.t.table["flags"][i]
# check case where multiple calls are performed (no-ops)
old_table = setup_NGC6440E.t.table
setup_NGC6440E.m.jump_params_to_flags(setup_NGC6440E.t)
assert all(old_table) == all(setup_NGC6440E.t.table)
# check that adding overlapping jump works
par2 = p.maskParameter(
name="JUMP",
key="freq",
value=0.2,
key_value=[1600, 1900],
units=u.s,
tcb2tdb_scale_factor=u.Quantity(1),
)  # frequency range overlaps with par, so the 2nd jump shares TOAs with the 1st
cp.add_param(par2, setup=True)
# add flags based off jumps added to model
setup_NGC6440E.m.jump_params_to_flags(setup_NGC6440E.t)
mask2 = par2.select_toa_mask(setup_NGC6440E.t)
intersect = np.intersect1d(toa_indices, mask2)
assert len(intersect) > 0
for i in mask2:
assert "2" in setup_NGC6440E.t.table["flags"][i]["jump"]
for i in toa_indices:
assert "1" in setup_NGC6440E.t.table["flags"][i]["jump"]
def test_multijump_toa(setup_NGC6440E):
setup_NGC6440E.m.add_component(PhaseJump(), validate=False)
cp = setup_NGC6440E.m.components["PhaseJump"]
par = p.maskParameter(
name="JUMP",
key="freq",
value=0.2,
key_value=[1440, 1700],
units=u.s,
tcb2tdb_scale_factor=u.Quantity(1),
) # TOAs indexed 48, 49, 54 in NGC6440E are within this frequency range
selected_toa_ind = [48, 49, 54]
cp.add_param(par, setup=True)
# check that one can still add "gui jumps" to model-jumped TOAs
cp.add_jump_and_flags(setup_NGC6440E.t.table["flags"][selected_toa_ind])
# add flags based off jumps added to model
setup_NGC6440E.m.jump_params_to_flags(setup_NGC6440E.t)
for dict in setup_NGC6440E.t.table["flags"][selected_toa_ind]:
assert dict["jump"] in ["1,2", "2,1"]
assert dict["gui_jump"] == "2"
assert len(cp.jumps) == 2
setup_NGC6440E.m.delete_jump_and_flags(setup_NGC6440E.t.table["flags"], 2)
for dict in setup_NGC6440E.t.table["flags"][selected_toa_ind]:
assert "jump" in dict
assert len(cp.jumps) == 1
assert "JUMP1" in cp.jumps
def test_unfrozen_jump(setup_NGC6440E):
setup_NGC6440E.m.add_component(PhaseJump(), validate=False)
# this has no TOAs
par = p.maskParameter(
name="JUMP",
key="freq",
value=0.2,
key_value=[3000, 3200],
units=u.s,
tcb2tdb_scale_factor=u.Quantity(1),
)
setup_NGC6440E.m.components["PhaseJump"].add_param(par, setup=True)
setup_NGC6440E.m.JUMP1.frozen = False
with pytest.raises(pint.models.timing_model.MissingTOAs):
setup_NGC6440E.m.validate_toas(setup_NGC6440E.t)
def test_find_empty_masks(setup_NGC6440E):
setup_NGC6440E.m.add_component(PhaseJump(), validate=False)
# this has no TOAs
par = p.maskParameter(
name="JUMP",
key="freq",
value=0.2,
key_value=[3000, 3200],
units=u.s,
tcb2tdb_scale_factor=u.Quantity(1),
)
setup_NGC6440E.m.components["PhaseJump"].add_param(par, setup=True)
setup_NGC6440E.m.JUMP1.frozen = False
bad_parameters = setup_NGC6440E.m.find_empty_masks(setup_NGC6440E.t)
assert "JUMP1" in bad_parameters
bad_parameters = setup_NGC6440E.m.find_empty_masks(setup_NGC6440E.t, freeze=True)
setup_NGC6440E.m.validate_toas(setup_NGC6440E.t)
class TestJUMP:
@classmethod
def setup_class(cls):
os.chdir(datadir)
cls.parf = "B1855+09_NANOGrav_dfg+12_TAI.par"
cls.timf = "B1855+09_NANOGrav_dfg+12.tim"
cls.JUMPm = mb.get_model(cls.parf)
cls.toas = toa.get_TOAs(
cls.timf, ephem="DE405", planets=False, include_bipm=False
)
# libstempo calculation
cls.ltres = np.genfromtxt(
f"{cls.parf}.tempo_test", unpack=False, names=True, dtype=np.longdouble
)
def test_jump(self):
presids_s = Residuals(
self.toas, self.JUMPm, use_weighted_mean=False
).time_resids.to(u.s)
assert np.all(
np.abs(presids_s.value - self.ltres["residuals"]) < 1e-7
), "JUMP test failed."
def test_derivative(self):
log = logging.getLogger("Jump phase test")
p = "JUMP2"
log.debug("Runing derivative for %s", f"d_delay_d_{p}")
ndf = self.JUMPm.d_phase_d_param_num(self.toas, p)
adf = self.JUMPm.d_phase_d_param(self.toas, self.JUMPm.delay(self.toas), p)
diff = adf - ndf
if not np.all(diff.value == 0.0):
mean_der = (adf + ndf) / 2.0
relative_diff = np.abs(diff) / np.abs(mean_der)
# print "Diff Max is :", np.abs(diff).max()
msg = (
"Derivative test failed at d_phase_d_%s with max relative difference %lf"
% (p, np.nanmax(relative_diff).value)
)
assert np.nanmax(relative_diff) < 0.001, msg
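The tests above exercise PINT's `PhaseJump` machinery; as a dependency-free illustration of the underlying idea (names here are ours, not PINT's API), a jump is simply a constant offset applied to a masked subset of residuals:

```python
import numpy as np

# Illustrative sketch (not PINT's API): a "jump" adds a constant
# offset to the residuals of a selected subset of TOAs only.
def apply_jump(residuals, mask, offset):
    out = residuals.copy()
    out[mask] += offset
    return out

r = np.zeros(10)
jumped = apply_jump(r, np.array([1, 2, 3]), 0.5)
```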
|
nanogravREPO_NAMEPINTPATH_START.@PINT_extracted@PINT-master@tests@test_jump.py@.PATH_END.py
|
{
"filename": "base.py",
"repo_name": "langchain-ai/langchain",
"repo_path": "langchain_extracted/langchain-master/libs/langchain/langchain/evaluation/exact_match/base.py",
"type": "Python"
}
|
import string
from typing import Any, List
from langchain.evaluation.schema import StringEvaluator
class ExactMatchStringEvaluator(StringEvaluator):
"""Compute an exact match between the prediction and the reference.
Examples
--------
>>> evaluator = ExactMatchStringEvaluator()
>>> evaluator.evaluate_strings(
...     prediction="Mindy is the CTO",
...     reference="Mindy is the CTO",
... )  # This will return {'score': 1.0}
>>> evaluator.evaluate_strings(
...     prediction="Mindy is the CTO",
...     reference="Mindy is the CEO",
... )  # This will return {'score': 0.0}
"""
def __init__(
self,
*,
ignore_case: bool = False,
ignore_punctuation: bool = False,
ignore_numbers: bool = False,
**kwargs: Any,
):
super().__init__()
self.ignore_case = ignore_case
self.ignore_punctuation = ignore_punctuation
self.ignore_numbers = ignore_numbers
@property
def requires_input(self) -> bool:
"""
This evaluator does not require input.
"""
return False
@property
def requires_reference(self) -> bool:
"""
This evaluator requires a reference.
"""
return True
@property
def input_keys(self) -> List[str]:
"""
Get the input keys.
Returns:
List[str]: The input keys.
"""
return ["reference", "prediction"]
@property
def evaluation_name(self) -> str:
"""
Get the evaluation name.
Returns:
str: The evaluation name.
"""
return "exact_match"
def _evaluate_strings( # type: ignore[arg-type,override]
self,
*,
prediction: str,
reference: str,
**kwargs: Any,
) -> dict:
"""
Evaluate the exact match between the prediction and the reference.
Args:
prediction (str): The prediction string.
reference (str): The reference string.
Returns:
dict: The evaluation results containing the score.
"""
if self.ignore_case:
prediction = prediction.lower()
reference = reference.lower()
if self.ignore_punctuation:
prediction = prediction.translate(str.maketrans("", "", string.punctuation))
reference = reference.translate(str.maketrans("", "", string.punctuation))
if self.ignore_numbers:
prediction = prediction.translate(str.maketrans("", "", string.digits))
reference = reference.translate(str.maketrans("", "", string.digits))
return {"score": int(prediction == reference)}
|
langchain-aiREPO_NAMElangchainPATH_START.@langchain_extracted@langchain-master@libs@langchain@langchain@evaluation@exact_match@base.py@.PATH_END.py
|
{
"filename": "test_erfinv.py",
"repo_name": "scipy/scipy",
"repo_path": "scipy_extracted/scipy-main/scipy/special/tests/test_erfinv.py",
"type": "Python"
}
|
import numpy as np
from numpy.testing import assert_allclose, assert_equal
import pytest
import scipy.special as sc
class TestInverseErrorFunction:
def test_complement(self):
# Test erfcinv(1 - x) == erfinv(x)
x = np.linspace(-1, 1, 101)
assert_allclose(sc.erfcinv(1 - x), sc.erfinv(x), rtol=0, atol=1e-15)
def test_literal_values(self):
# The expected values were calculated with mpmath:
#
# import mpmath
# mpmath.mp.dps = 200
# for y in [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]:
# x = mpmath.erfinv(y)
# print(x)
#
y = np.array([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
actual = sc.erfinv(y)
expected = [
0.0,
0.08885599049425769,
0.1791434546212917,
0.2724627147267543,
0.37080715859355795,
0.4769362762044699,
0.5951160814499948,
0.7328690779592167,
0.9061938024368233,
1.1630871536766743,
]
assert_allclose(actual, expected, rtol=0, atol=1e-15)
@pytest.mark.parametrize(
'f, x, y',
[
(sc.erfinv, -1, -np.inf),
(sc.erfinv, 0, 0),
(sc.erfinv, 1, np.inf),
(sc.erfinv, -100, np.nan),
(sc.erfinv, 100, np.nan),
(sc.erfcinv, 0, np.inf),
(sc.erfcinv, 1, -0.0),
(sc.erfcinv, 2, -np.inf),
(sc.erfcinv, -100, np.nan),
(sc.erfcinv, 100, np.nan),
],
ids=[
'erfinv at lower bound',
'erfinv at midpoint',
'erfinv at upper bound',
'erfinv below lower bound',
'erfinv above upper bound',
'erfcinv at lower bound',
'erfcinv at midpoint',
'erfcinv at upper bound',
'erfcinv below lower bound',
'erfcinv above upper bound',
]
)
def test_domain_bounds(self, f, x, y):
assert_equal(f(x), y)
def test_erfinv_asympt(self):
# regression test for gh-12758: erfinv(x) loses precision at small x
# expected values precomputed with mpmath:
# >>> mpmath.mp.dps = 100
# >>> expected = [float(mpmath.erfinv(t)) for t in x]
x = np.array([1e-20, 1e-15, 1e-14, 1e-10, 1e-8, 0.9e-7, 1.1e-7, 1e-6])
expected = np.array([8.86226925452758e-21,
8.862269254527581e-16,
8.86226925452758e-15,
8.862269254527581e-11,
8.86226925452758e-09,
7.97604232907484e-08,
9.74849617998037e-08,
8.8622692545299e-07])
assert_allclose(sc.erfinv(x), expected,
rtol=1e-15)
# also test the roundtrip consistency
assert_allclose(sc.erf(sc.erfinv(x)),
x,
rtol=5e-15)
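As a stdlib-only cross-check of the values these tests pin down, `erfinv` can be recovered by Newton iteration on `math.erf` (a sketch, not scipy's actual implementation):

```python
import math

# Newton's method on erf(x) - y = 0; erf'(x) = (2/sqrt(pi)) * exp(-x^2).
def erfinv_newton(y, tol=1e-14):
    x = 0.0
    for _ in range(60):
        err = math.erf(x) - y
        if abs(err) < tol:
            break
        x -= err / ((2.0 / math.sqrt(math.pi)) * math.exp(-x * x))
    return x
```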
|
scipyREPO_NAMEscipyPATH_START.@scipy_extracted@scipy-main@scipy@special@tests@test_erfinv.py@.PATH_END.py
|
{
"filename": "_fresnel.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/surface/lighting/_fresnel.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class FresnelValidator(_plotly_utils.basevalidators.NumberValidator):
def __init__(self, plotly_name="fresnel", parent_name="surface.lighting", **kwargs):
super(FresnelValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "calc"),
max=kwargs.pop("max", 5),
min=kwargs.pop("min", 0),
**kwargs,
)
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@surface@lighting@_fresnel.py@.PATH_END.py
|
{
"filename": "calc_act_general.ipynb",
"repo_name": "gomesdasilva/ACTIN2",
"repo_path": "ACTIN2_extracted/ACTIN2-master/docs/calc_act_general.ipynb",
"type": "Jupyter Notebook"
}
|
## Using ACTIN with any spectra
In this tutorial we will learn how to use `ACTIN` for spectrographs other than the ones included, or when we only have access to wavelength and flux vectors.
```python
from actin2 import ACTIN
actin = ACTIN()
```
Let's use the test spectra that come with `ACTIN` as an example, extracting only the wavelength and flux:
```python
import glob, os
files = glob.glob(os.path.join(os.pardir, "actin2/test/HARPS/HD41248", "*_s1d_A.fits"))
files[0]
```
'../actin2/test/HARPS/HD41248/HARPS.2014-01-24T01:18:06.472_s1d_A.fits'
Now we read this file and retrieve the spectrum (in this case already at the stellar rest frame):
```python
read_spec = actin.ReadSpec(files[0])
wave = read_spec.spectrum['wave']
flux = read_spec.spectrum['flux']
```
To calculate indices for a spectrograph that is not included in `ACTIN`, create a `spectrum` dictionary with `wave` and `flux` keys and a `headers` dictionary with extra information such as the time of observation or target name (it can be empty).
```python
spectrum = dict(wave=wave, flux=flux)
headers = dict()
```
Now we call `actin.CalcIndices` with the two dictionaries and a list of index IDs as given in the indices table (to inspect the table, print `actin.IndTable().table`). The results are stored in the `indices` dictionary. Below is an example of the output for the Ca II H&K index (aka S-index), `I_CaII`.
```python
indices = actin.CalcIndices(spectrum, headers, ['I_CaII']).indices
indices
```
{'I_CaII': 0.13968442101525144,
'I_CaII_err': 0.0010479854259144701,
'I_CaII_Rneg': 0.00019538018295861548}
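As a generic illustration of what such an index measures (this is not ACTIN's exact recipe; the window edges and names below are invented for the example), an activity index compares flux in a narrow line core to flux in a reference band:

```python
import numpy as np

# Toy activity index: mean flux in a line window over mean flux in a
# reference window. Window edges here are illustrative only.
def simple_index(wave, flux, line=(3933.0, 3934.0), ref=(3990.0, 4000.0)):
    in_line = (wave >= line[0]) & (wave <= line[1])
    in_ref = (wave >= ref[0]) & (wave <= ref[1])
    return flux[in_line].mean() / flux[in_ref].mean()

wave = np.linspace(3900.0, 4010.0, 1101)
flux = np.ones_like(wave)
flux[(wave >= 3933.0) & (wave <= 3934.0)] = 0.2  # absorption core
```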
|
gomesdasilvaREPO_NAMEACTIN2PATH_START.@ACTIN2_extracted@ACTIN2-master@docs@calc_act_general.ipynb@.PATH_END.py
|
{
"filename": "__init__.py",
"repo_name": "astropy/astropy",
"repo_path": "astropy_extracted/astropy-main/astropy/utils/__init__.py",
"type": "Python"
}
|
# Licensed under a 3-clause BSD style license - see LICENSE.rst
"""
This subpackage contains developer-oriented utilities used by Astropy.
Public functions and classes in this subpackage are safe to be used by other
packages, but this subpackage is for utilities that are primarily of use for
developers or to implement python hacks.
This subpackage also includes the ``astropy.utils.compat`` package,
which houses utilities that provide compatibility and bugfixes across
all versions of Python that Astropy supports. However, the content of this
module is solely for internal use of ``astropy`` and subject to changes
without deprecations. Do not use it in external packages or code.
"""
from .codegen import *
from .decorators import *
from .introspection import *
from .misc import *
from .shapes import *
|
astropyREPO_NAMEastropyPATH_START.@astropy_extracted@astropy-main@astropy@utils@__init__.py@.PATH_END.py
|
{
"filename": "test_grid.py",
"repo_name": "nanograv/PINT",
"repo_path": "PINT_extracted/PINT-master/tests/test_grid.py",
"type": "Python"
}
|
"""Test chi^2 gridding routines"""
import concurrent.futures
import pytest
import astropy.units as u
import numpy as np
import pint.config
import pint.gridutils
import pint.models.parameter as param
from pint.fitter import GLSFitter, WLSFitter, DownhillWLSFitter, DownhillGLSFitter
from pint.models.model_builder import get_model_and_toas
import pint.logging
pint.logging.setup("INFO")
# for multi-core tests, don't use all available CPUs
ncpu = 2
@pytest.fixture
def get_data_and_fit():
parfile = pint.config.examplefile("NGC6440E.par")
timfile = pint.config.examplefile("NGC6440E.tim")
m, t = get_model_and_toas(parfile, timfile)
f = WLSFitter(t, m)
f.fit_toas()
bestfit = f.resids.chi2
return f, bestfit
def test_grid_singleprocessor(get_data_and_fit):
f, bestfit = get_data_and_fit
F0 = np.linspace(
f.model.F0.quantity - 1 * f.model.F0.uncertainty,
f.model.F0.quantity + 1 * f.model.F0.uncertainty,
3,
)
F1 = np.linspace(
f.model.F1.quantity - 1 * f.model.F1.uncertainty,
f.model.F1.quantity + 1 * f.model.F1.uncertainty,
5,
)
chi2grid, _ = pint.gridutils.grid_chisq(f, ("F0", "F1"), (F0, F1), ncpu=1)
assert np.isclose(bestfit, chi2grid.min())
def test_grid_extraparams_singleprocessor(get_data_and_fit):
f, bestfit = get_data_and_fit
F0 = np.linspace(
f.model.F0.quantity - 1 * f.model.F0.uncertainty,
f.model.F0.quantity + 1 * f.model.F0.uncertainty,
3,
)
F1 = np.linspace(
f.model.F1.quantity - 1 * f.model.F1.uncertainty,
f.model.F1.quantity + 1 * f.model.F1.uncertainty,
5,
)
chi2grid, extraparams = pint.gridutils.grid_chisq(
f, ("F0", "F1"), (F0, F1), ("DM",), ncpu=1
)
assert np.isclose(bestfit, chi2grid.min())
def test_grid_multiprocessor(get_data_and_fit):
f, bestfit = get_data_and_fit
F0 = np.linspace(
f.model.F0.quantity - 1 * f.model.F0.uncertainty,
f.model.F0.quantity + 1 * f.model.F0.uncertainty,
3,
)
F1 = np.linspace(
f.model.F1.quantity - 1 * f.model.F1.uncertainty,
f.model.F1.quantity + 1 * f.model.F1.uncertainty,
5,
)
chi2grid, _ = pint.gridutils.grid_chisq(f, ("F0", "F1"), (F0, F1), ncpu=ncpu)
assert np.isclose(bestfit, chi2grid.min())
def test_grid_oneparam(get_data_and_fit):
f, bestfit = get_data_and_fit
F0 = np.linspace(
f.model.F0.quantity - 1 * f.model.F0.uncertainty,
f.model.F0.quantity + 1 * f.model.F0.uncertainty,
3,
)
chi2grid, _ = pint.gridutils.grid_chisq(f, ("F0",), (F0,), ncpu=ncpu)
assert np.isclose(bestfit, chi2grid.min())
def test_grid_oneparam_extraparam(get_data_and_fit):
f, bestfit = get_data_and_fit
F0 = np.linspace(
f.model.F0.quantity - 1 * f.model.F0.uncertainty,
f.model.F0.quantity + 1 * f.model.F0.uncertainty,
3,
)
chi2grid, extraparams = pint.gridutils.grid_chisq(
f, ("F0",), (F0,), ("DM",), ncpu=ncpu
)
assert np.isclose(bestfit, chi2grid.min())
def test_grid_oneparam_existingexecutor(get_data_and_fit):
f, bestfit = get_data_and_fit
F0 = np.linspace(
f.model.F0.quantity - 1 * f.model.F0.uncertainty,
f.model.F0.quantity + 1 * f.model.F0.uncertainty,
3,
)
with concurrent.futures.ProcessPoolExecutor(
max_workers=ncpu,
) as executor:
chi2grid, _ = pint.gridutils.grid_chisq(f, ("F0",), (F0,), executor=executor)
assert np.isclose(bestfit, chi2grid.min())
def test_grid_3param_singleprocessor(get_data_and_fit):
f, bestfit = get_data_and_fit
F0 = np.linspace(
f.model.F0.quantity - 1 * f.model.F0.uncertainty,
f.model.F0.quantity + 1 * f.model.F0.uncertainty,
3,
)
F1 = np.linspace(
f.model.F1.quantity - 1 * f.model.F1.uncertainty,
f.model.F1.quantity + 1 * f.model.F1.uncertainty,
3,
)
DM = np.linspace(
f.model.DM.quantity - 1 * f.model.DM.uncertainty,
f.model.DM.quantity + 1 * f.model.DM.uncertainty,
5,
)
chi2grid, _ = pint.gridutils.grid_chisq(f, ("F0", "F1", "DM"), (F0, F1, DM), ncpu=1)
assert np.isclose(bestfit, chi2grid.min())
def test_grid_3param_multiprocessor(get_data_and_fit):
f, bestfit = get_data_and_fit
F0 = np.linspace(
f.model.F0.quantity - 1 * f.model.F0.uncertainty,
f.model.F0.quantity + 1 * f.model.F0.uncertainty,
3,
)
F1 = np.linspace(
f.model.F1.quantity - 1 * f.model.F1.uncertainty,
f.model.F1.quantity + 1 * f.model.F1.uncertainty,
3,
)
DM = np.linspace(
f.model.DM.quantity - 1 * f.model.DM.uncertainty,
f.model.DM.quantity + 1 * f.model.DM.uncertainty,
5,
)
chi2grid, _ = pint.gridutils.grid_chisq(
f, ("F0", "F1", "DM"), (F0, F1, DM), ncpu=ncpu
)
assert np.isclose(bestfit, chi2grid.min())
def test_grid_derived_singleprocessor(get_data_and_fit):
f, bestfit = get_data_and_fit
F0 = np.linspace(
f.model.F0.quantity - 1 * f.model.F0.uncertainty,
f.model.F0.quantity + 1 * f.model.F0.uncertainty,
5,
)
tau = (-f.model.F0.quantity / 2 / f.model.F1.quantity) * np.linspace(0.99, 1.01, 3)
chi2grid_tau, params, _ = pint.gridutils.grid_chisq_derived(
f,
("F0", "F1"),
(lambda x, y: x, lambda x, y: -x / 2 / y),
(F0, tau),
ncpu=1,
)
assert np.isclose(bestfit, chi2grid_tau.min(), atol=1)
def test_grid_derived_extraparam_singleprocessor(get_data_and_fit):
f, bestfit = get_data_and_fit
F0 = np.linspace(
f.model.F0.quantity - 3 * f.model.F0.uncertainty,
f.model.F0.quantity + 3 * f.model.F0.uncertainty,
5,
)
tau = (-f.model.F0.quantity / 2 / f.model.F1.quantity) * np.linspace(0.99, 1.01, 3)
chi2grid_tau, params, extraparams = pint.gridutils.grid_chisq_derived(
f,
("F0", "F1"),
(lambda x, y: x, lambda x, y: -x / 2 / y),
(F0, tau),
("DM",),
ncpu=1,
)
assert np.isclose(bestfit, chi2grid_tau.min(), atol=1)
def test_grid_derived_multiprocessor(get_data_and_fit):
f, bestfit = get_data_and_fit
F0 = np.linspace(
f.model.F0.quantity - 3 * f.model.F0.uncertainty,
f.model.F0.quantity + 3 * f.model.F0.uncertainty,
5,
)
tau = (-f.model.F0.quantity / 2 / f.model.F1.quantity) * np.linspace(0.99, 1.01, 3)
chi2grid_tau, params, _ = pint.gridutils.grid_chisq_derived(
f, ("F0", "F1"), (lambda x, y: x, lambda x, y: -x / 2 / y), (F0, tau), ncpu=ncpu
)
assert np.isclose(bestfit, chi2grid_tau.min(), atol=1)
def test_grid_derived_existingexecutor(get_data_and_fit):
f, bestfit = get_data_and_fit
F0 = np.linspace(
f.model.F0.quantity - 3 * f.model.F0.uncertainty,
f.model.F0.quantity + 3 * f.model.F0.uncertainty,
5,
)
tau = (-f.model.F0.quantity / 2 / f.model.F1.quantity) * np.linspace(0.99, 1.01, 3)
with concurrent.futures.ProcessPoolExecutor(max_workers=ncpu) as executor:
chi2grid_tau, params, _ = pint.gridutils.grid_chisq_derived(
f,
("F0", "F1"),
(lambda x, y: x, lambda x, y: -x / 2 / y),
(F0, tau),
executor=executor,
)
assert np.isclose(bestfit, chi2grid_tau.min(), atol=1)
def test_grid_derived_extraparam_existingexecutor(get_data_and_fit):
f, bestfit = get_data_and_fit
F0 = np.linspace(
f.model.F0.quantity - 3 * f.model.F0.uncertainty,
f.model.F0.quantity + 3 * f.model.F0.uncertainty,
5,
)
tau = (-f.model.F0.quantity / 2 / f.model.F1.quantity) * np.linspace(0.99, 1.01, 3)
with concurrent.futures.ProcessPoolExecutor(max_workers=ncpu) as executor:
chi2grid_tau, params, extraparams = pint.gridutils.grid_chisq_derived(
f,
("F0", "F1"),
(lambda x, y: x, lambda x, y: -x / 2 / y),
(F0, tau),
("DM",),
executor=executor,
)
assert np.isclose(bestfit, chi2grid_tau.min(), atol=1)
def test_grid_3param_prefix_singleprocessor():
parfile = pint.config.examplefile("NGC6440E.par")
timfile = pint.config.examplefile("NGC6440E.tim")
m, t = get_model_and_toas(parfile, timfile)
# add a F2 to the model
modelcomponent = m.components["Spindown"]
p = param.prefixParameter(
parameter_type="float",
name="F2",
value=0,
units=modelcomponent.F_unit(2),
uncertainty=0,
description=modelcomponent.F_description(2),
longdouble=True,
frozen=False,
tcb2tdb_scale_factor=u.Quantity(1),
)
modelcomponent.add_param(p, setup=True)
m.validate()
f = WLSFitter(t, m)
f.fit_toas()
bestfit = f.resids.chi2
F0 = np.linspace(
f.model.F0.quantity - 1 * f.model.F0.uncertainty,
f.model.F0.quantity + 1 * f.model.F0.uncertainty,
3,
)
F1 = np.linspace(
f.model.F1.quantity - 1 * f.model.F1.uncertainty,
f.model.F1.quantity + 1 * f.model.F1.uncertainty,
3,
)
F2 = np.linspace(
f.model.F2.quantity - 1 * f.model.F2.uncertainty,
f.model.F2.quantity + 1 * f.model.F2.uncertainty,
5,
)
chi2grid, _ = pint.gridutils.grid_chisq(f, ("F0", "F1", "F2"), (F0, F1, F2), ncpu=1)
assert np.isclose(bestfit, chi2grid.min())
def test_grid_3param_prefix_multiprocessor():
parfile = pint.config.examplefile("NGC6440E.par")
timfile = pint.config.examplefile("NGC6440E.tim")
m, t = get_model_and_toas(parfile, timfile)
# add a F2 to the model
modelcomponent = m.components["Spindown"]
p = param.prefixParameter(
parameter_type="float",
name="F2",
value=0,
units=modelcomponent.F_unit(2),
uncertainty=0,
description=modelcomponent.F_description(2),
longdouble=True,
frozen=False,
tcb2tdb_scale_factor=u.Quantity(1),
)
modelcomponent.add_param(p, setup=True)
m.validate()
f = WLSFitter(t, m)
f.fit_toas()
bestfit = f.resids.chi2
F0 = np.linspace(
f.model.F0.quantity - 1 * f.model.F0.uncertainty,
f.model.F0.quantity + 1 * f.model.F0.uncertainty,
3,
)
F1 = np.linspace(
f.model.F1.quantity - 1 * f.model.F1.uncertainty,
f.model.F1.quantity + 1 * f.model.F1.uncertainty,
3,
)
F2 = np.linspace(
f.model.F2.quantity - 1 * f.model.F2.uncertainty,
f.model.F2.quantity + 1 * f.model.F2.uncertainty,
5,
)
chi2grid, _ = pint.gridutils.grid_chisq(
f, ("F0", "F1", "F2"), (F0, F1, F2), ncpu=ncpu
)
assert np.isclose(bestfit, chi2grid.min())
@pytest.mark.parametrize(
"fitter", [GLSFitter, WLSFitter, DownhillWLSFitter, DownhillGLSFitter]
)
def test_grid_fitters_singleprocessor(fitter):
parfile = pint.config.examplefile("NGC6440E.par")
timfile = pint.config.examplefile("NGC6440E.tim")
m, t = get_model_and_toas(parfile, timfile)
f = fitter(t, m)
f.fit_toas()
bestfit = f.resids.chi2
F0 = np.linspace(
f.model.F0.quantity - 1 * f.model.F0.uncertainty,
f.model.F0.quantity + 1 * f.model.F0.uncertainty,
3,
)
chi2grid, _ = pint.gridutils.grid_chisq(
f, ("F0",), (F0,), printprogress=False, ncpu=1
)
assert np.isclose(bestfit, chi2grid.min())
@pytest.mark.parametrize(
"fitter", [GLSFitter, WLSFitter, DownhillWLSFitter, DownhillGLSFitter]
)
def test_grid_fitters_multiprocessor(fitter):
parfile = pint.config.examplefile("NGC6440E.par")
timfile = pint.config.examplefile("NGC6440E.tim")
m, t = get_model_and_toas(parfile, timfile)
f = fitter(t, m)
f.fit_toas()
bestfit = f.resids.chi2
F0 = np.linspace(
f.model.F0.quantity - 3 * f.model.F0.uncertainty,
f.model.F0.quantity + 3 * f.model.F0.uncertainty,
7,
)
chi2grid, _ = pint.gridutils.grid_chisq(
f, ("F0",), (F0,), printprogress=False, ncpu=ncpu
)
assert np.isclose(bestfit, chi2grid.min())
def test_tuple_fit():
parfile = pint.config.examplefile("NGC6440E.par")
timfile = pint.config.examplefile("NGC6440E.tim")
m, t = get_model_and_toas(parfile, timfile)
f = WLSFitter(t, m)
# find the best-fit
f.fit_toas()
bestfit = f.resids.chi2
F0 = np.linspace(
f.model.F0.quantity - 3 * f.model.F0.uncertainty,
f.model.F0.quantity + 3 * f.model.F0.uncertainty,
25,
)
F1 = np.ones(len(F0)) * f.model.F1.quantity
parnames = ("F0", "F1")
parvalues = list(zip(F0, F1))
chi2, extra = pint.gridutils.tuple_chisq(
f, parnames, parvalues, extraparnames=("DM",)
)
f.model.F0.quantity = F0[3]
f.model.F1.quantity = F1[3]
f.model.F0.frozen = True
f.model.F1.frozen = True
f.fit_toas()
assert np.isclose(chi2[3], f.resids.calc_chi2())
assert np.isclose(chi2.min(), bestfit)
assert np.isclose(extra["DM"][3], f.model.DM.quantity)
def test_derived_tuple_fit():
parfile = pint.config.examplefile("NGC6440E.par")
timfile = pint.config.examplefile("NGC6440E.tim")
m, t = get_model_and_toas(parfile, timfile)
f = WLSFitter(t, m)
# find the best-fit
f.fit_toas()
bestfit = f.resids.chi2
F0 = np.linspace(
f.model.F0.quantity - 3 * f.model.F0.uncertainty,
f.model.F0.quantity + 3 * f.model.F0.uncertainty,
53,
)
tau = np.linspace(8.1, 8.3, 53) * 100 * u.Myr
parvalues = list(zip(F0, tau))
chi2_tau, params, _ = pint.gridutils.tuple_chisq_derived(
f,
("F0", "F1"),
(lambda x, y: x, lambda x, y: -x / 2 / y),
parvalues, # ncpu=1,
)
assert np.isclose(bestfit, chi2_tau.min(), atol=3)
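The brute-force idea behind these gridding tests can be sketched without PINT: evaluate the chi^2 of a simple model over a parameter grid and check that the grid minimum lands near the least-squares fit (all names and numbers here are illustrative):

```python
import numpy as np

# Chi^2 of y = a*x + b over an (a, b) grid; the grid minimum should
# sit near the true parameters used to generate the data.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.05, x.size)

a_grid = np.linspace(1.5, 2.5, 41)
b_grid = np.linspace(0.5, 1.5, 41)
chi2 = np.array([[np.sum((y - a * x - b) ** 2) for b in b_grid]
                 for a in a_grid])
ia, ib = np.unravel_index(np.argmin(chi2), chi2.shape)
best_a, best_b = a_grid[ia], b_grid[ib]
```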
|
nanogravREPO_NAMEPINTPATH_START.@PINT_extracted@PINT-master@tests@test_grid.py@.PATH_END.py
|
{
"filename": "data.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/tests/test_optional/test_matplotlylib/data/data.py",
"type": "Python"
}
|
D = dict(
x1=[0, 1, 2, 3, 4, 5],
y1=[10, 20, 50, 80, 100, 200],
x2=[0, 1, 2, 3, 4, 5, 6],
y2=[1, 4, 8, 16, 32, 64, 128],
)
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@tests@test_optional@test_matplotlylib@data@data.py@.PATH_END.py
|
{
"filename": "__init__.py",
"repo_name": "sdss/lvmagp",
"repo_path": "lvmagp_extracted/lvmagp-main/python/lvmagp/guide/offset/__init__.py",
"type": "Python"
}
|
"""
TODO: write docs
"""
__title__ = "Guide Offset"
from .base import GuideOffset
from .pwi import GuideOffsetPWI
|
sdssREPO_NAMElvmagpPATH_START.@lvmagp_extracted@lvmagp-main@python@lvmagp@guide@offset@__init__.py@.PATH_END.py
|
{
"filename": "snsedextend_README.md",
"repo_name": "jpierel14/SNSED_Repository",
"repo_path": "SNSED_Repository_extracted/SNSED_Repository-master/SEDs.H18-WFIRST/NON1A.H18-WFIRST/snsedextend_README.md",
"type": "Markdown"
}
|
J.R. Pierel & S.Rodney
2017.04.26
__SUMMARY__
Extrapolate an SED up to the J, H and K bands, allowing the user to define V-J, V-H and/or V-K. The extrapolation improves on the previous method in that
it chooses a linear slope such that the integral over the J (H, K) band corresponds to the user-defined V-J (V-H, V-K) color. The script assumes
the defined color is in AB magnitudes; Vega can be selected with the --vega flag (see below).
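The slope choice described above can be sketched in a few lines: because the band-integrated flux is linear in the slope, the slope that makes the synthetic J (H, K) flux match the value implied by the user's color follows from two integrals (all names and numbers below are illustrative, not the script's internals):

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule (avoids NumPy-version differences)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Linear tail F(w) = F0 + m*(w - w0). Its band flux through a filter
# T(w) is F0*I0 + m*I1, so the slope matching a target band flux is
# m = (target - F0*I0) / I1.
def tail_slope(w0, F0, wave, trans, target_flux):
    I0 = trapz(trans, wave)
    I1 = trapz(trans * (wave - w0), wave)
    return (target_flux - F0 * I0) / I1

wave_J = np.linspace(11000.0, 13500.0, 200)   # toy tophat J band
trans_J = np.ones_like(wave_J)
m = tail_slope(10000.0, 1.0, wave_J, trans_J, target_flux=2000.0)
```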
__SETUP__
The script assumes that you have set the SNDATA_ROOT environment variable (set automatically if you
run the normal SNDATA setup process). You can then place this script inside any directory; it will
look for your filenames (or .SED files) in the '~/SNDATA_ROOT/snsed/NON1A' directory. The directory you
place this file in will be populated with the extended SED files (same filenames). Be careful: the SED
files will be overwritten if you run the script from within the NON1A directory. An error.log file is
also created, containing error messages from the run.
__SYNOPSIS__
python snsedextend.py -i <file1,file2,...> -p <day# or all> -v <vTrans.dat> -j <jTrans.dat> -h <hTrans.dat> -k <kTrans.dat> --vh <V-H> --vk <V-K> --jh <J-H> --jk <J-K> --vj <V-J> --vega
__DESCRIPTION__
The options are as follows:
-i Define a filename (example.SED) or a list of filenames
(example.SED,example2.SED,...) separated by commas (no spaces). If this flag
is not set, all .SED files in the SNDATA_ROOT/snsed/NON1A directory will
be extrapolated.
-p Plot the extrapolation. Provide a day number (e.g. 6) to plot that
epoch, or 'all' to plot all epochs sequentially.
-v Define a transmission file to use for the V band.
-j Define a transmission file to use for the J band.
-h Define a transmission file to use for the H band.
-k Define a transmission file to use for the K band.
--vh User-defined V-H color
--vk User-defined V-K color
--jh User-defined J-H color
--jk User-defined J-K color
--vj User-defined V-J color
--vega Interpret the input colors as Vega magnitudes instead of the assumed AB.
__Transmission Files__
You may use your own transmission files to define the filters used in the extrapolation. The default filters are tophat filters for J, H, and K, and Bessell for V.
To use your own transmission files, use the flags described above and place the files in the appropriate folders (i.e. a V-band transmission file goes in the vBand folder, etc.)
Alternatively, you may change the dictionary 'filters' at the top of the script:
filters={
'V':'vBand/bessellv.dat',
'J':'jBand/tophatJ.dat',
'H':'hBand/tophatH.dat',
'K':'kBand/tophatK.dat'
}
This is the default dictionary; simply change the '.dat' filenames to the filenames of your transmission files, and place the files in the appropriate folders. The
transmission files should have two columns: wavelength in the first column and transmission in the second.
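The expected file format can be checked with a small reader (a stdlib-only sketch; load_transmission is not part of the script):

```python
def load_transmission(path):
    """Read a two-column transmission file: wavelength, transmission.

    Illustrative sketch only; blank lines and '#' comments are skipped.
    """
    wave, trans = [], []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            cols = line.split()
            wave.append(float(cols[0]))
            trans.append(float(cols[1]))
    return wave, trans
```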
{
"filename": "wrightomega.py",
"repo_name": "scipy/scipy",
"repo_path": "scipy_extracted/scipy-main/scipy/special/_precompute/wrightomega.py",
"type": "Python"
}
import numpy as np

try:
    import mpmath
except ImportError:
    pass


def mpmath_wrightomega(x):
    # Wright omega: the solution w of w + log(w) = x,
    # evaluated here to arbitrary precision via mpmath's Lambert W.
    return mpmath.lambertw(mpmath.exp(x), mpmath.mpf('-0.5'))


def wrightomega_series_error(x):
    # Relative error of the leading asymptotic term omega(x) ~ x
    # for large positive x.
    series = x
    desired = mpmath_wrightomega(x)
    return abs(series - desired) / desired


def wrightomega_exp_error(x):
    # Relative error of the approximation omega(x) ~ exp(x)
    # for large negative x.
    exponential_approx = mpmath.exp(x)
    desired = mpmath_wrightomega(x)
    return abs(exponential_approx - desired) / desired


def main():
    # Check where each approximation reaches double precision.
    desired_error = 2 * np.finfo(float).eps

    print('Series Error')
    for x in [1e5, 1e10, 1e15, 1e20]:
        with mpmath.workdps(100):
            error = wrightomega_series_error(x)
        print(x, error, error < desired_error)

    print('Exp error')
    for x in [-10, -25, -50, -100, -200, -400, -700, -740]:
        with mpmath.workdps(100):
            error = wrightomega_exp_error(x)
        print(x, error, error < desired_error)


if __name__ == '__main__':
    main()
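The quantity being tabulated above is the Wright omega function, the solution of w + log(w) = x. Its defining identity can be checked with a minimal pure-Python Newton solver (a sketch for illustration, not the scipy implementation):

```python
import math

def wright_omega(x, tol=1e-14):
    # Solve w + log(w) = x for w > 0 by Newton iteration.
    # Crude starting guess: exp(x) for small x, x for large x.
    w = math.exp(x) if x < 1 else x
    for _ in range(100):
        f = w + math.log(w) - x
        w_next = w - f / (1.0 + 1.0 / w)
        if abs(w_next - w) <= tol * abs(w_next):
            return w_next
        w = w_next
    return w

print(wright_omega(1.0))  # 1.0, since 1 + log(1) = 1
```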
{
"filename": "test_resource.py",
"repo_name": "astropy/astropy",
"repo_path": "astropy_extracted/astropy-main/astropy/io/votable/tests/test_resource.py",
"type": "Python"
}
# Licensed under a 3-clause BSD style license - see LICENSE.rst
import io

from astropy.io.votable import parse
from astropy.utils.data import get_pkg_data_filename


def test_resource_groups():
    # Read the VOTable
    votable = parse(get_pkg_data_filename("data/resource_groups.xml"))

    resource = votable.resources[0]
    groups = resource.groups
    params = resource.params

    # Test that params inside groups are not also listed outside
    assert len(groups[0].entries) == 1
    assert groups[0].entries[0].name == "ID"
    assert len(params) == 2
    assert params[0].name == "standardID"
    assert params[1].name == "accessURL"


def test_roundtrip():
    # Issue #16511: VOTable writer does not write out GROUPs within RESOURCEs
    # Read the VOTable, write it back out, and re-parse it
    votable = parse(get_pkg_data_filename("data/resource_groups.xml"))
    bio = io.BytesIO()
    votable.to_xml(bio)
    bio.seek(0)
    votable = parse(bio)

    resource = votable.resources[0]
    groups = resource.groups
    params = resource.params

    # The same assertions must hold after the roundtrip
    assert len(groups[0].entries) == 1
    assert groups[0].entries[0].name == "ID"
    assert len(params) == 2
    assert params[0].name == "standardID"
    assert params[1].name == "accessURL"
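The roundtrip pattern in test_roundtrip — serialize to an in-memory buffer, re-parse, re-assert — can be sketched with the standard library alone (simplified XML standing in for a real VOTable):

```python
import io
import xml.etree.ElementTree as ET

# Simplified stand-in for a VOTable RESOURCE with a GROUP and PARAMs.
src = (b"<RESOURCE>"
       b"<GROUP><PARAM name='ID'/></GROUP>"
       b"<PARAM name='standardID'/><PARAM name='accessURL'/>"
       b"</RESOURCE>")

def roundtrip(data):
    # Parse, write to an in-memory buffer, and parse again.
    tree = ET.ElementTree(ET.fromstring(data))
    buf = io.BytesIO()
    tree.write(buf)
    buf.seek(0)
    return ET.parse(buf).getroot()

root = roundtrip(src)
# The grouped PARAM survives the roundtrip and stays inside the GROUP.
assert root.find("GROUP/PARAM").get("name") == "ID"
assert [p.get("name") for p in root.findall("PARAM")] == ["standardID", "accessURL"]
```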
{
"filename": "__init__.py",
"repo_name": "threeML/hawc_hal",
"repo_path": "hawc_hal_extracted/hawc_hal-master/hawc_hal/convenience_functions/__init__.py",
"type": "Python"
}
{
"filename": "survey_paper.py",
"repo_name": "gbrammer/unicorn",
"repo_path": "unicorn_extracted/unicorn-master/survey_paper.py",
"type": "Python"
}
import os
#import pyfits
import astropy.io.fits as pyfits
import numpy as np
import glob
import shutil
import matplotlib.pyplot as plt
USE_PLOT_GUI=False
from matplotlib.figure import Figure
from matplotlib.backends.backend_agg import FigureCanvasAgg
import matplotlib
import threedhst
import threedhst.eazyPy as eazy
import threedhst.catIO as catIO
import unicorn
import re
root = None
def throughput():
os.chdir('/research/HST/GRISM/3DHST/ANALYSIS/SURVEY_PAPER')
xg141, yg141 = np.loadtxt('g141.dat', unpack=True)
xf140, yf140 = np.loadtxt('f140w.dat', unpack=True)
xf814, yf814 = np.loadtxt('f814w.dat', unpack=True)
xg800l, yg800l = np.loadtxt('g800l.dat', unpack=True)
plt.rcParams['text.usetex'] = True
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = 'Times'
fig = unicorn.catalogs.plot_init(square=True, xs=8, aspect=1./3, left=0.105, bottom=0.08, top=0.01, right=0.01)
ax = fig.add_subplot(111)
ax.plot(xg141, yg141, color='black', linewidth=2, alpha=0.5)
ax.fill(xg141, yg141, color='red', linewidth=2, alpha=0.1)
ax.plot(xf140, yf140, color='black', linewidth=2, alpha=0.7)
ax.plot(xg800l, yg800l, color='black', linewidth=2, alpha=0.5)
ax.fill(xg800l, yg800l, color='blue', linewidth=2, alpha=0.1)
ax.plot(xf814, yf814, color='black', linewidth=2, alpha=0.7)
em_lines = [3727, 4861, 4959, 5007, 6563.]
offset = np.array([0,-0.05,0,0,0])
offset = 0.25+np.array([-0.2,-0.05,0,-0.01,-0.1])
xoffset = np.array([0,-120,200,150,0])
line_scale = np.array([1,1,1./2.98,1,1])
em_names = ['[OII]',r'H$\beta$','','[OIII]',r'H$\alpha$']
dlam = 30
zi = 1
show_spectra = True
colors=['blue','green','red']
if show_spectra:
for zi in [1,2,3]:
sedx, sedy = np.loadtxt('templates/EAZY_v1.0_lines/eazy_v1.0_sed4_nolines.dat', unpack=True)
sedy *= 1.*sedy.max()
dl = dlam/(1+zi)
#dl = dlam
for i,em_line in enumerate(em_lines):
em_gauss = 1./np.sqrt(2*np.pi*dl**2)*np.exp(-1*(sedx-em_line)**2/2/dl**2)
sedy += em_gauss/em_gauss.max()*0.6*line_scale[i]
ax.plot(sedx*(1+zi), sedy*0.4+0.5, color=colors[zi-1], alpha=0.7, linewidth=2)
ax.text(5500.,1.18-zi*0.13,r'$z=%d$' %(zi), color=colors[zi-1], fontsize=11)
for i in range(len(em_lines)):
ax.text(em_lines[i]*(1+1)+xoffset[i], 1+offset[i], em_names[i], horizontalalignment='center', fontsize=10)
show_continuous = False
if show_continuous:
em_lines = [3727, 5007, 6563.]
zgrid = np.arange(1000)/1000.*4
for line in em_lines:
ax.plot(line*(1+zgrid), zgrid/4.*0.8+0.5, linewidth=2, alpha=0.5, color='black')
for zi in [0,1,2,3]:
ax.plot([0.1,2.e4],np.array([zi,zi])/4.*0.8+0.5, linestyle='--', color='black', alpha=0.2)
ax.text(5800, 0.08,'G800L',rotation=33., color='black', alpha=0.7)
ax.text(5800, 0.08,'G800L',rotation=33., color='blue', alpha=0.4)
ax.text(7100, 0.03,'F814W',rotation=80., color='black', alpha=0.9)
ax.text(1.115e4, 0.17,'G141',rotation=15., color='black', alpha=0.7)
ax.text(1.115e4, 0.17,'G141',rotation=15., color='red', alpha=0.4)
ax.text(1.21e4, 0.03,'F140W',rotation=88., color='black', alpha=0.9)
ax.set_xlim(4500, 1.79e4)
ax.set_ylim(0,1.4)
ax.set_xlabel(r'$\lambda$ [\AA]')
ax.set_ylabel('throughput')
#ax.set_yticklabels([]);
ytick = ax.set_yticks([0,0.25,0.5,0.75,1.0])
fig.savefig('throughput.pdf')
plt.rcParams['text.usetex'] = False
#
def throughput_v2():
from matplotlib.ticker import MultipleLocator, FormatStrFormatter
#os.chdir('/research/HST/GRISM/3DHST/ANALYSIS/SURVEY_PAPER')
# xg141, yg141 = np.loadtxt('g141.dat', unpack=True)
# xf140, yf140 = np.loadtxt('f140w.dat', unpack=True)
# xf814, yf814 = np.loadtxt('f814w.dat', unpack=True)
# xg800l, yg800l = np.loadtxt('g800l.dat', unpack=True)
import pysynphot as S
bp = S.ObsBandpass('wfc3,ir,g141')
xg141, yg141 = bp.wave, bp.throughput
bp = S.ObsBandpass('wfc3,ir,g102')
xg102, yg102 = bp.wave, bp.throughput
bp = S.ObsBandpass('wfc3,ir,f140w')
xf140, yf140 = bp.wave, bp.throughput
bp = S.ObsBandpass('acs,wfc1,f814w')
xf814, yf814 = bp.wave, bp.throughput
bp = S.ObsBandpass('acs,wfc1,g800l')
xg800l, yg800l = bp.wave, bp.throughput
plt.rcParams['text.usetex'] = True
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = 'Times'
plt.ioff()
fig = unicorn.catalogs.plot_init(square=True, xs=8, aspect=1./3, left=0.095, bottom=0.1, top=0.095, right=0.01)
#ax = fig.add_subplot(111)
#ax = fig.add_axes(((x0+(dx+x0)*0), y0+0.5, dx, 0.5-top_panel-y0))
ysplit = 0.65
ax = fig.add_axes((0.06, 0.135, 0.935, (ysplit-0.135)))
ax.plot(xg141, yg141, color='black', linewidth=2, alpha=0.5)
ax.fill(xg141, yg141, color='red', linewidth=2, alpha=0.1)
ax.plot(xf140, yf140, color='black', linewidth=2, alpha=0.7)
ax.plot(xg102, yg102, color='black', linewidth=2, alpha=0.5)
ax.fill(xg102, yg102, color='orange', linewidth=2, alpha=0.1)
ax.plot(xg800l, yg800l, color='black', linewidth=2, alpha=0.5)
ax.fill(xg800l, yg800l, color='blue', linewidth=2, alpha=0.1)
ax.plot(xf814, yf814, color='black', linewidth=2, alpha=0.7)
em_names = ['[OII]',r'H$\beta$','','[OIII]',r'H$\alpha$']
dlam = 30
zi = 1
yy = 0.1
ax.text(5800, 0.12+yy,'G800L',rotation=48., color='black', alpha=0.7)
ax.text(5800, 0.12+yy,'G800L',rotation=48., color='blue', alpha=0.4)
ax.text(7100, 0.14+yy,'F814W',rotation=80., color='black', alpha=0.9)
# ax.text(1.115e4, 0.17+yy,'G141',rotation=15., color='black', alpha=0.7)
# ax.text(1.115e4, 0.17+yy,'G141',rotation=15., color='red', alpha=0.4)
ax.text(1.3e4, 0.29+yy,'G141',rotation=10., color='black', alpha=0.7)
ax.text(1.3e4, 0.29+yy,'G141',rotation=10., color='red', alpha=0.4)
ax.text(1.e4, 0.24+yy,'G102',rotation=15., color='black', alpha=0.7)
ax.text(1.e4, 0.24+yy,'G102',rotation=15., color='orange', alpha=0.4)
ax.text(1.21e4, 0.14+yy,'F140W',rotation=88., color='black', alpha=0.9)
#ax.set_yticklabels([]);
#ax2 = ax.twiny()
ax2 = fig.add_axes((0.06, ysplit+0.02, 0.935, (0.86-ysplit)))
ax2.xaxis.set_label_position('top')
ax2.xaxis.set_ticks_position('top')
### H-alpha
xbox = np.array([0,1,1,0,0])
dy= 0.333333
y0= 1.0
ybox = np.array([0,0,1,1,0])*dy
width_acs = np.array([6000.,9000.])
width_wfc3 = np.array([1.1e4,1.65e4])
width_g102 = np.array([0.82e4,1.13e4])
line_names = [r'H$\alpha$',r'H$\beta$ / [OIII]','[OII]']
for i, l0 in enumerate([6563.,4934.,3727]):
zline = width_acs/l0-1
ax2.fill(xbox*(zline[1]-zline[0])+zline[0],y0-ybox-i*dy, color='blue', alpha=0.1)
zline = width_wfc3/l0-1
ax2.fill(xbox*(zline[1]-zline[0])+zline[0],y0-ybox-i*dy, color='red', alpha=0.1)
zline = width_g102/l0-1
ax2.fill(xbox*(zline[1]-zline[0])+zline[0],y0-ybox-i*dy, color='orange', alpha=0.1)
ax2.plot([0,4],np.array([0,0])+y0-(i+1)*dy, color='black')
ax2.text(3.7,y0-(i+0.5)*dy, line_names[i], horizontalalignment='right', verticalalignment='center')
ax.set_xlim(4500, 1.79e4)
ax.set_ylim(0,0.65)
ytick = ax.set_yticks([0,0.2,0.4,0.6])
ax.set_xlabel(r'$\lambda$ [\AA]')
ax.set_ylabel('Throughput')
minorLocator = MultipleLocator(1000)
ax.xaxis.set_minor_locator(minorLocator)
minorLocator = MultipleLocator(0.1)
ax.yaxis.set_minor_locator(minorLocator)
ax2.set_xlim(0,3.8)
ytick = ax2.set_yticks([])
ax2.set_ylim(0,1)
minorLocator = MultipleLocator(0.1)
ax2.xaxis.set_minor_locator(minorLocator)
ax2.set_xlabel(r'$z_\mathrm{line}$')
#fig.savefig('throughput.eps')
fig.savefig('throughput.pdf')
plt.rcParams['text.usetex'] = False
def orbit_structure():
"""
Show the POSTARG offsets in WFC3 / ACS
"""
os.chdir('/research/HST/GRISM/3DHST/ANALYSIS/SURVEY_PAPER')
plt.rcParams['text.usetex'] = True
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = 'Times'
wfc3_color, acs_color = 'red','blue'
wfc3_color, acs_color = 'blue','green'
fig = unicorn.catalogs.plot_init(square=True, xs=4.4, aspect=1, left=0.09, bottom=0.08)
ax = fig.add_subplot(111)
a11 = 0.1355
b10 = 0.1211 # arcsec / pix, from the instrument handbook
#dxs = np.array([0,-20,-13,7]) + np.int(np.round(xsh[0]))*0
#dys = np.array([0,-7,-20,-13]) + np.int(np.round(ysh[0]))*0
x3dhst = np.array([0, 1.355, 0.881, -0.474])/a11
y3dhst = np.array([0, 0.424, 1.212, 0.788])/b10
xgoodsn = np.array([0,0.6075, 0.270, -0.3375])/a11
ygoodsn = np.array([0,0.1815, 0.6655, 0.484])/b10
#### SN fields:
test = """
files=`ls ibfuw1*flt.fits.gz`
for file in $files; do result=`dfitsgz $file |fitsort FILTER APERTURE POSTARG1 POSTARG2 | grep -v POST`; echo "${file} ${result}"; done
"""
xmarshall = np.array([-0.34, -0.540, 0.0, 0.608, 0.273])/a11
ymarshall = np.array([-0.34, -0.243, 0.0, 0.244, 0.302])/b10
xgeorge = np.array([0.0, -0.608, 0.273, -0.340, 0.540])/a11
ygeorge = np.array([0.0, 0.244, 0.302, -0.340, -0.243])/b10
x41 = np.array([0.273, -0.608, 0.540, -0.340, -0.340, 0.540])/a11
y41 = np.array([0.302, 0.244, -0.243, -0.301, -0.301, -0.243])/b10
xprimo = np.array([0.0, 0.474, 0.290, 0.764, -0.290, 0.184])/a11
yprimo = np.array([0.0, 0.424, -0.290, 0.134, 0.290, 0.714])/b10
xers = np.array([-10.012, 9.988, 9.971, -9.958])/a11
yers = np.array([5.058, 5.050, -5.045, -5.045])/b10
xcooper = np.array([0, 0.6075, 0.270 ,-0.3375])/a11
ycooper = np.array([0, 0.1815, 0.6655, 0.484])/b10
xstanford = np.array([-0.169, 0.372, 0.169, -0.372])/a11
ystanford = np.array([-0.242, 0.06064, 0.242, 0.06064])/b10
xstanford += 0.2
xoff = x3dhst
yoff = y3dhst
print(np.round(xoff*10)/10.*2)
print(np.round(yoff*10)/10.*2)
plt.plot(np.round(xoff*10)/10. % 1, np.round(yoff*10)/10. % 1)
plt.xlim(-0.1,1.1); plt.ylim(-0.1,1.1)
ax.plot(xoff, yoff, marker='o', markersize=10, color=wfc3_color, alpha=0.8, zorder=10)
if 1 == 1:
for i in range(4):
ax.text(xoff[i], yoff[i]+0.5, 'F140W + G141', horizontalalignment='center', backgroundcolor='white', zorder=20)
ax.set_xlabel(r'$x$ offset [pix]')
ax.set_ylabel(r'$y$ offset [pix]')
scale = 4
x0 = -2.9
y0 = -5.5
ax.fill(np.array([0,1,1,0])*scale+x0, np.array([0,0,1,1])*scale+y0, color='white', zorder=10)
ax.fill(np.array([0,1,1,0])*scale+x0, np.array([0,0,1,1])*scale+y0, color='black', alpha=0.1, zorder=11)
ax.plot(np.array([0.5,0.5])*scale+x0, np.array([0,1])*scale+y0, color='black', alpha=0.2, zorder=12)
ax.plot(np.array([0,1])*scale+x0, np.array([0.5,0.5])*scale+y0, color='black', alpha=0.2, zorder=12)
ax.plot(np.abs(xoff-np.trunc(xoff))*scale+x0, np.abs(yoff-np.trunc(yoff))*scale+y0, marker='o', markersize=10, color=wfc3_color, alpha=0.8, zorder=13)
ax.text(x0+scale/2., y0-1, 'WFC3 Primary', horizontalalignment='center')
#plt.xlim(-5,11)
#plt.ylim(-5,11)
#### ACS:
# XPOS = x*a11 + y*a10
# YPOS = x*b11 + y*b10
#
# a10 a11 b10 b11
# WFC: 0.0000 0.0494 0.0494 0.0040
a10, a11, b10, b11 = 0.0000,0.0494,0.0494,0.0040
#xoff = np.cumsum(xpostarg)
#yoff = np.cumsum(ypostarg)
acsang = 45.
acsang = 92.16- -45.123
xpos_acs, ypos_acs = threedhst.utils.xyrot(xpostarg, ypostarg, acsang)
xpix_acs = xpos_acs / a11
ypix_acs = (ypos_acs-xpix_acs*b11)/b10
x0 = 5.5
#y0 = -4.5
ax.fill(np.array([0,1,1,0])*scale+x0, np.array([0,0,1,1])*scale+y0, color='white', zorder=10)
ax.fill(np.array([0,1,1,0])*scale+x0, np.array([0,0,1,1])*scale+y0, color='black', alpha=0.1, zorder=11)
ax.plot(np.array([0.5,0.5])*scale+x0, np.array([0,1])*scale+y0, color='black', alpha=0.2, zorder=12)
ax.plot(np.array([0,1])*scale+x0, np.array([0.5,0.5])*scale+y0, color='black', alpha=0.2, zorder=12)
ax.plot(np.abs(xpix_acs-np.trunc(xpix_acs))*scale+x0, np.abs(ypix_acs-np.trunc(ypix_acs))*scale+y0, marker='o', markersize=10, color=acs_color, alpha=0.8, zorder=13)
#ax.plot(np.array([0,0.5,1,0.5])*scale+x0, np.array([0,0.5,0.5,1])*scale+y0, marker='o', marker='None', color=acs_color, linestyle='--', alpha=0.6, zorder=13)
ax.plot(np.array([0,0.5,0.5,1])*scale+x0, np.array([0,1,0.5,0.5])*scale+y0, marker='o', color=acs_color, linestyle='--', alpha=0.6, zorder=13)
ax.text(x0+scale/2., y0-1, 'ACS Parallel', horizontalalignment='center')
#plt.grid(alpha=0.5, zorder=1, markevery=5)
from matplotlib.ticker import MultipleLocator, FormatStrFormatter
ax.set_xlim(-5.9,12.5)
#ax.set_ylim(-5.9,12.5)
ax.set_ylim(-6.9,11.5)
majorLocator = MultipleLocator(5)
majorFormatter = FormatStrFormatter('%d')
minorLocator = MultipleLocator(1)
ax.xaxis.set_major_locator(majorLocator)
ax.xaxis.set_minor_locator(minorLocator)
ax.xaxis.set_major_formatter(majorFormatter)
ax.xaxis.grid(alpha=0.5, zorder=1, which='major')
ax.xaxis.grid(alpha=0.2, zorder=1, which='minor')
ax.yaxis.set_major_locator(majorLocator)
ax.yaxis.set_minor_locator(minorLocator)
ax.yaxis.set_major_formatter(majorFormatter)
ax.yaxis.grid(alpha=0.5, zorder=1, which='major')
ax.yaxis.grid(alpha=0.2, zorder=1, which='minor')
fig.savefig('dither_box.pdf')
plt.rcParams['text.usetex'] = False
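The arcsec-to-pixel arithmetic in orbit_structure inverts a linear POSTARG mapping; as a standalone sketch (the coefficients are the ACS/WFC values quoted in the comments above):

```python
# POSTARG (arcsec) -> detector pixels, inverting
#   XPOS = x*a11 + y*a10,  YPOS = x*b11 + y*b10   (a10 = 0 for ACS/WFC)
a10, a11, b10, b11 = 0.0000, 0.0494, 0.0494, 0.0040

def postarg_to_pix(xpos, ypos):
    # Invert the mapping: x follows directly since a10 = 0,
    # then y is recovered from the cross-term.
    xpix = xpos / a11
    ypix = (ypos - xpix * b11) / b10
    return xpix, ypix

def pix_to_postarg(xpix, ypix):
    # Forward mapping, useful for checking the inversion.
    return xpix * a11 + ypix * a10, xpix * b11 + ypix * b10
```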
def exptimes():
"""
Extract the range of exposure times from the formatted Phase-II files
"""
os.system('grep F814W 12???.pro |grep " S " > f814w.exptime')
os.system('grep G800L 12???.pro |grep " S " > g800l.exptime')
os.system('grep F140W *.pro |grep MULTIAC |grep NSAMP > f140w.exptime')
os.system('grep G141 *.pro |grep MULTIAC |grep NSAMP > g141.exptime')
### G141
fp = open('g141.exptime')
lines = fp.readlines()
fp.close()
nsamp = []
object = []
for line in lines:
nsamp.append(line.split('NSAMP=')[1][0:2])
object.append(line.split()[1])
expsamp = np.zeros(17)
expsamp[12] = 1002.94
expsamp[13] = 1102.94
expsamp[14] = 1202.94
expsamp[15] = 1302.94
expsamp[16] = 1402.94
nsamp = np.asarray(nsamp, dtype=int)
nsamp += 1 ### this seems to be the case for all actual observations !!
objects = object[::4]
NOBJ = len(nsamp)//4
texp = np.zeros(NOBJ)
for i in range(NOBJ):
texp[i] = np.sum(expsamp[nsamp[i*4:(i+1)*4]])
print(objects[i], texp[i])
print('G141: %.1f - %.1f' %(texp.min(), texp.max()))
def spectral_features():
wmin, wmax = 1.1e4, 1.65e4
print('%-8s %.1f -- %.1f' %('Halpha', wmin/6563.-1, wmax/6563.-1))
print('%-8s %.1f -- %.1f' %('OIII', wmin/5007.-1, wmax/5007.-1))
print('%-8s %.1f -- %.1f' %('OII', wmin/3727.-1, wmax/3727.-1))
print('%-8s %.1f -- %.1f' %('4000', wmin/4000.-1, wmax/4000.-1))
wmin, wmax = 0.55e4, 1.0e4
print('\n\nACS\n\n')
print('%-8s %.1f -- %.1f' %('Halpha', wmin/6563.-1, wmax/6563.-1))
print('%-8s %.1f -- %.1f' %('OIII', wmin/5007.-1, wmax/5007.-1))
print('%-8s %.1f -- %.1f' %('OII', wmin/3727.-1, wmax/3727.-1))
print('%-8s %.1f -- %.1f' %('4000', wmin/4000.-1, wmax/4000.-1))
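spectral_features simply evaluates z = lambda_obs/lambda_rest - 1 at the grism wavelength limits; the same calculation as a small helper:

```python
def z_range(line_rest, wmin, wmax):
    # Redshift interval over which a rest-frame feature at line_rest
    # (Angstroms) falls inside the observed window [wmin, wmax].
    return wmin / line_rest - 1.0, wmax / line_rest - 1.0

# G141 coverage for H-alpha (6563 A):
zmin, zmax = z_range(6563., 1.1e4, 1.65e4)
```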
def aXe_model():
import copy
import scipy.ndimage as nd
os.chdir('/research/HST/GRISM/3DHST/ANALYSIS/SURVEY_PAPER')
dir = pyfits.open('/research/HST/GRISM/3DHST/GOODS-S/PREP_FLT/UDF-F140W_drz.fits')
gri = pyfits.open('/research/HST/GRISM/3DHST/GOODS-S/PREP_FLT/UDF-FC-G141_drz.fits')
mod = pyfits.open('/research/HST/GRISM/3DHST/GOODS-S/PREP_FLT/UDF-FC-G141CONT_drz.fits')
#### rotate all the images so that dispersion axis is along X
angle = gri[1].header['PA_APER']#+180
direct = nd.rotate(dir[1].data, angle, reshape=False)
grism = nd.rotate(gri[1].data, angle, reshape=False)
model = nd.rotate(mod[1].data, angle, reshape=False)
xc, yc = 1877, 2175
NX, NY = 1270, 365
aspect = 3.*NY/NX
xc, yc = 1731, 977
NX, NY = 882, 467
aspect = 1.*NY/NX
plt.gray()
plt.rcParams['lines.linewidth'] = 2
plt.rcParams['text.usetex'] = True
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = 'Times'
fig = unicorn.catalogs.plot_init(square=True, xs=8, aspect=aspect, left=0.12)
fig.subplots_adjust(wspace=0.0,hspace=0.0,left=0.01,
bottom=0.005,right=0.99,top=0.995)
fs1 = 12 ### Font size of label
xlab, ylab = 0.04*NX/3., NY-0.02*NY-0.02*NY
xd, yd = 65, 412 ### my direct images don't line up any more (deleted old ones?)
xd, yd = 412, 65*2
ax = fig.add_subplot(221)
ax.imshow(0-direct[yc-NY/2+yd:yc+NY/2+yd, xc-NX/2+xd:xc+NX/2+xd], vmin=-0.2, vmax=0.02, interpolation='nearest')
ax.set_yticklabels([]); ax.set_xticklabels([])
xtick = ax.set_xticks([0,NX]); ytick = ax.set_yticks([0,NY])
ax.text(xlab, ylab, 'a) Direct F140W', fontsize=fs1, verticalalignment='top')
#ax.text(xlab, ylab, r'$%d\times\ $$%d^{\prime\prime}$' %(NX*0.06, NY*0.06), fontsize=18, backgroundcolor='white', verticalalignment='top')
ax = fig.add_subplot(222)
ax.imshow(0-grism[yc-NY/2:yc+NY/2, xc-NX/2:xc+NX/2], vmin=-0.04, vmax=0.004, interpolation='nearest')
ax.set_yticklabels([]); ax.set_xticklabels([])
xtick = ax.set_xticks([0,NX]); ytick = ax.set_yticks([0,NY])
ax.text(xlab, ylab, 'b) Grism G141', fontsize=fs1, verticalalignment='top')
ax = fig.add_subplot(224)
diff = grism-model
### Flag em lines and 0th order
dy0 = 20
emx, emy = [223, 272, 487, 754, 520, 850, 565, 558, 51, 834, 345, 495], [122, 189, 83, 240, 148, 124, 336, 418, 338, 225, 197, 268]
ax.plot(np.array(emx)+(xc-1731), np.array(emy)-dy0+(yc-977), marker='^', markersize=6, linestyle='None', color='blue', alpha=0.9)
zx, zy = [301, 393, 648], [183, 321, 446]
ax.plot(np.array(zx)+(xc-1731), np.array(zy)-dy0+(yc-977), marker='^', markersize=6, linestyle='None', markeredgecolor='black', markerfacecolor='None', alpha=0.9, markeredgewidth=1.2)
fonts = matplotlib.font_manager.FontProperties()
fonts.set_size(9)
ax.legend(['Emission',r'0th order'], numpoints=1, prop=fonts, handletextpad=0.001, borderaxespad=0.001)
# ax.text(0.04*NX/3., 0.02*NY, 'Em.', fontsize=fs1*0.8, backgroundcolor='white', verticalalignment='bottom', color='green')
# ax.text(0.3*NX/3., 0.02*NY, r'0th', fontsize=fs1*0.8, backgroundcolor='white', verticalalignment='bottom', color='red')
ax.imshow(0-diff[yc-NY/2:yc+NY/2, xc-NX/2:xc+NX/2], vmin=-0.02, vmax=0.002, interpolation='nearest')
ax.set_yticklabels([]); ax.set_xticklabels([])
xtick = ax.set_xticks([0,NX]); ytick = ax.set_yticks([0,NY])
ax.text(xlab, ylab, r'd) Model-subtracted grism', fontsize=fs1, verticalalignment='top')
ax = fig.add_subplot(223)
ax.imshow(0-model[yc-NY/2:yc+NY/2, xc-NX/2:xc+NX/2], vmin=-0.04, vmax=0.004, interpolation='nearest')
ax.set_yticklabels([]); ax.set_xticklabels([])
xtick = ax.set_xticks([0,NX]); ytick = ax.set_yticks([0,NY])
ax.text(xlab, ylab, r'c) aXe Model', fontsize=fs1, verticalalignment='top')
fig.savefig('grism_model.pdf')
plt.rcParams['text.usetex'] = False
def sync():
pass
"""
paths="RAW HTML/scripts PREP_FLT"
dirs="AEGIS COSMOS ERS GOODS-N GOODS-S SN-GEORGE SN-MARSHALL SN-PRIMO UDS"
for dir in $dirs; do
mkdir ${dir}
mkdir ${dir}/HTML
for path in $paths; do
# du -sh ${dir}/${path}
mkdir ${dir}/${path}
rsync -avz --progress $UNICORN:/3DHST/Spectra/Work/${dir}/${path} ${dir}/${path}
done
done
"""
def all_pointings():
pointings(ROOT='GOODS-SOUTH')
pointings(ROOT='COSMOS')
pointings(ROOT='AEGIS')
pointings(ROOT='UDS')
pointings(ROOT='GOODS-N')
def all_pointings_width():
"""
This is the mosaic figure from the paper. The individual fields are combined
manually with Adobe Illustrator.
"""
from unicorn.survey_paper import pointings
fs, left = 10, 0.22
fs, left = 12, 0.25
fs, left = 14, 0.28
fs, left = 16, 0.32
pointings(ROOT='GOODS-SOUTH', width=7, corner='ll', fontsize=fs, left=left)
pointings(ROOT='COSMOS', width=6, corner='lr', fontsize=fs, left=left, right=0.03, bottom=0.115)
pointings(ROOT='AEGIS', width=7, corner='ll', fontsize=fs, left=left, right=0.045)
pointings(ROOT='UDS', width=9, corner='lr', fontsize=fs, left=left-0.02, right=0.04, top=0.02)
pointings(ROOT='GOODS-N', width=6, corner='ur', fontsize=fs, left=left, bottom=0.115)
def pointings_with_status():
"""
Highlight pointings that have been observed (status == 'Archived')
"""
from unicorn.survey_paper import pointings
pointings(ROOT='GOODS-SOUTH', width=7, corner='ll', use_status=True)
pointings(ROOT='GOODS-SOUTH', width=7, corner='ll', use_status=True, show_acs=False)
pointings(ROOT='GOODS-SOUTH', width=7, corner='ll', use_status=True, show_wfc3=False)
pointings(ROOT='COSMOS', width=6, corner='lr', use_status=True)
pointings(ROOT='COSMOS', width=6, corner='lr', use_status=True, show_acs=False)
pointings(ROOT='COSMOS', width=6, corner='lr', use_status=True, show_wfc3=False)
pointings(ROOT='AEGIS', width=7, corner='ll', use_status=True)
pointings(ROOT='AEGIS', width=7, corner='ll', use_status=True, show_acs=False)
pointings(ROOT='AEGIS', width=7, corner='ll', use_status=True, show_wfc3=False)
pointings(ROOT='UDS', width=9, corner='lr', use_status=True)
pointings(ROOT='UDS', width=9, corner='lr', use_status=True, show_acs=False)
pointings(ROOT='UDS', width=9, corner='lr', use_status=True, show_wfc3=False)
pointings(ROOT='UDS', width=9, corner='lr', show_sn_fields=True, use_status=True)
pointings(ROOT='GOODS-SOUTH', width=7, corner='ll', show_sn_fields=True, use_status=True)
def pointings(ROOT='GOODS-SOUTH', width=None, corner='lr', use_status=False, show_acs=True, show_wfc3=True, show_sn_fields=False, fontsize=10, left=22, right=0.02, top=0.01, bottom=0.11):
"""
Make a figure showing the 3D-HST pointing poisitions, read from region files
"""
import unicorn.survey_paper as sup
plt.rcParams['lines.linewidth'] = 0.3
wfc3_color = 'blue'
acs_color = 'green'
os.chdir(unicorn.GRISM_HOME+'ANALYSIS/SURVEY_PAPER')
yticklab = None
dx_ref = (53.314005633802822-52.886197183098595)*np.cos(-27.983151830808076/360*2*np.pi)
CANDELS_PATH = '/research/HST/GRISM/3DHST/REGIONS/CANDELS/'
candels_files = []
candels_alpha = 0.3
candels_color = '0.1'
if use_status:
pointing_list, pointing_status = np.loadtxt('/research/HST/GRISM/3DHST/REGIONS/pointing_status.dat', dtype=str, unpack=True)
pointing_list, pointing_status = np.array(pointing_list), np.array(pointing_status)
else:
pointing_list, pointing_status = np.array([]), np.array([])
#### GOODS-S
if ROOT=='GOODS-SOUTH':
x0, x1 = 53.314005633802822, 52.886197183098595
y0, y1 = -27.983151830808076, -27.654474431818176
xticklab = [r'$3^\mathrm{h}33^\mathrm{m}00^\mathrm{s}$', r'$3^\mathrm{h}32^\mathrm{m}30^\mathrm{s}$', r'$3^\mathrm{h}32^\mathrm{m}00^\mathrm{s}$']
xtickv = [degrees(3,33,00, hours=True), degrees(3,32,30, hours=True), degrees(3,32,00, hours=True)]
yticklab = [r'$-27^\circ40^\prime00^{\prime\prime}$', r'$45^\prime00^{\prime\prime}$', r'$-27^\circ50^\prime00^{\prime\prime}$', r'$55^\prime00^{\prime\prime}$']
ytickv = [degrees(-27, 40, 00, hours=False), degrees(-27, 45, 00, hours=False), degrees(-27, 50, 00, hours=False), degrees(-27, 55, 00, hours=False)]
candels_files = glob.glob(CANDELS_PATH+'/GOODS-S*reg')
candels_files.extend(glob.glob(CANDELS_PATH+'/GOODS-W*reg'))
#### COSMOS
if ROOT=='COSMOS':
x1, x0 = 149.99120563380279, 150.23823661971829
y0, y1 = 2.1678109478476815, 2.5996973302980129
xticklab = [r'$10^\mathrm{h}00^\mathrm{m}30^\mathrm{s}$', r'$00^\mathrm{m}00^\mathrm{s}$']
xtickv = [degrees(10,00,30, hours=True), degrees(10,00,00, hours=True)]
yticklab = [r'$+2^\circ15^\prime00^{\prime\prime}$', r'$25^\prime00^{\prime\prime}$', r'$35^\prime00^{\prime\prime}$']
ytickv = [degrees(2, 15, 0, hours=False), degrees(2, 25, 0, hours=False), degrees(2, 35, 0, hours=False)]
candels_files = glob.glob(CANDELS_PATH+'/COSMOS*reg')
#### AEGIS
if ROOT=='AEGIS':
x1, x0 = 214.49707154104345, 215.12704734584406
y0, y1 = 52.680946433013482, 53.01597137966467
xticklab = [r'$18^\mathrm{m}00^\mathrm{s}$', r'$14^\mathrm{h}19^\mathrm{m}00^\mathrm{s}$', r'$20^\mathrm{m}00^\mathrm{s}$']
xticklab = [r'$18^\mathrm{m}$', r'$14^\mathrm{h}19^\mathrm{m}$', r'$20^\mathrm{m}$']
xtickv = [degrees(14,18,00, hours=True), degrees(14,19,00, hours=True), degrees(14,20,00, hours=True)]
yticklab = [r'$+52^\circ45^\prime00^{\prime\prime}$', r'$50^\prime00^{\prime\prime}$', r'$55^\prime00^{\prime\prime}$']
ytickv = [degrees(52, 45, 00, hours=False), degrees(52, 50, 00, hours=False), degrees(52, 55, 00, hours=False)]
candels_files = glob.glob(CANDELS_PATH+'/EGS*reg')
#### UDS
if ROOT=='UDS':
x1, x0 = 34.116935194128146, 34.51871547581829
y0, y1 = -5.2957542206957582, -5.0834327182123147+2./3600
xticklab = [r'$18^\mathrm{m}00^\mathrm{s}$', r'$2^\mathrm{h}17^\mathrm{m}30^\mathrm{s}$', r'$17^\mathrm{m}00^\mathrm{s}$', r'$16^\mathrm{m}30^\mathrm{s}$']
xtickv = [degrees(2,18,00, hours=True), degrees(2,17,30, hours=True), degrees(2,17,00, hours=True), degrees(2,16,30, hours=True)]
yticklab = [r'$05^\prime00^{\prime\prime}$', r'$-5^\circ10^\prime00^{\prime\prime}$', r'$15^\prime00^{\prime\prime}$']
ytickv = [degrees(-5, 5, 0, hours=False), degrees(-5, 10, 0, hours=False), degrees(-5, 15, 0, hours=False)]
candels_files = glob.glob(CANDELS_PATH+'/UDS*reg')
#### GOODS-N
if ROOT=='GOODS-N':
wfc3_color = 'orange'
acs_color = None
x1, x0 = 188.9139017749491, 189.44688055895648
y0, y1 = 62.093791549511998, 62.384068625281309
xticklab = [r'$12^\mathrm{h}37^\mathrm{m}30^\mathrm{s}$', r'$37^\mathrm{m}00^\mathrm{s}$', r'$36^\mathrm{m}30^\mathrm{s}$', r'$36^\mathrm{m}00^\mathrm{s}$']
xtickv = [degrees(12,37,30, hours=True), degrees(12,37,00, hours=True), degrees(12,36,30, hours=True), degrees(12,36,00, hours=True)]
yticklab = [r'$+62^\circ10^\prime00^{\prime\prime}$', r'$15^\prime00^{\prime\prime}$', r'$20^\prime00^{\prime\prime}$']
ytickv = [degrees(62, 10, 00, hours=False), degrees(62, 15, 00, hours=False), degrees(62, 20, 00, hours=False)]
candels_files = glob.glob(CANDELS_PATH+'/GOODSN-OR*reg')
candels_files.extend(glob.glob(CANDELS_PATH+'/GOODSN-SK*reg'))
#### Make square for given plot dimensions
dx = np.abs(x1-x0)*np.cos(y0/360*2*np.pi)
dy = (y1-y0)
if width is None:
width = 7*dx/dx_ref
print('%s: plot width = %.2f\n' %(ROOT, width))
fig = unicorn.catalogs.plot_init(square=True, xs=width, aspect=dy/dx, fontsize=fontsize, left=left, right=right, top=top, bottom=bottom)
#fig = unicorn.catalogs.plot_init(square=True)
ax = fig.add_subplot(111)
polys = []
for file in candels_files:
fp = open(file)
lines = fp.readlines()
fp.close()
#
polys.append(sup.polysplit(lines[1], get_shapely=True))
#fi = ax.fill(wfcx, wfcy, alpha=candels_alpha, color=candels_color)
sup.polys = polys
un = polys[0]
for pp in polys[1:]:
un = un.union(pp)
if un.geometryType() == 'MultiPolygon':
for sub_poly in un.geoms:
x,y = sub_poly.exterior.xy
ax.plot(x,y, alpha=candels_alpha, color=candels_color, linewidth=1)
ax.fill(x,y, alpha=0.1, color='0.7')
else:
x,y = un.exterior.xy
ax.plot(x,y, alpha=candels_alpha, color=candels_color, linewidth=1)
ax.fill(x,y, alpha=0.1, color='0.7')
files=glob.glob(unicorn.GRISM_HOME+'REGIONS/'+ROOT+'-[0-9]*reg')
if ROOT == 'UDS':
p18 = files.pop(9)
print('\n\nPOP %s\n\n' %(p18))
if show_sn_fields:
files.extend(glob.glob(unicorn.GRISM_HOME+'REGIONS/SN*reg'))
files.extend(glob.glob(unicorn.GRISM_HOME+'REGIONS/ERS*reg'))
wfc3_polys = []
acs_polys = []
for file in files:
#
base = os.path.basename(file.split('.reg')[0])
#print base, base in pointing_list
if base in pointing_list:
status = pointing_status[pointing_list == base][0] == 'Archived'
else:
status = False
if not use_status:
status = True
field = re.split('-[0-9]', file)[0]
pointing = file.split(field+'-')[1].split('.reg')[0]
fp = open(file)
lines = fp.readlines()
fp.close()
if base.startswith('SN') | base.startswith('ERS'):
wfc3_color = 'purple'
acs_color = None
pointing = os.path.basename(field)
status = True
#
wfc3_polys.append(sup.polysplit(lines[1], get_shapely=True))
acs_polys.append(sup.polysplit(lines[2], get_shapely=True))
acs_polys.append(sup.polysplit(lines[3], get_shapely=True))
#
wfcx, wfcy = sup.polysplit(lines[1])
if show_wfc3:
if status:
fi = ax.fill(wfcx, wfcy, alpha=0.2, color=wfc3_color)
fi = ax.plot(wfcx, wfcy, alpha=0.8, color=wfc3_color)
else:
fi = ax.fill(wfcx, wfcy, alpha=0.05, color=wfc3_color)
fi = ax.plot(wfcx, wfcy, alpha=0.8, color=wfc3_color)
#
if acs_color is not None:
acsx1, acsy1 = sup.polysplit(lines[2])
acsx2, acsy2 = sup.polysplit(lines[3])
#
if show_acs:
if show_wfc3:
afact = 3
else:
afact = 3
if status:
fi = ax.fill(acsx1, acsy1, alpha=0.05*afact, color=acs_color)
fi = ax.fill(acsx2, acsy2, alpha=0.05*afact, color=acs_color)
#
pl = ax.plot(acsx1, acsy1, alpha=0.1*afact, color=acs_color)
pl = ax.plot(acsx2, acsy2, alpha=0.1*afact, color=acs_color)
else:
pl = ax.plot(acsx1, acsy1, alpha=0.3*afact, color=acs_color)
pl = ax.plot(acsx2, acsy2, alpha=0.3*afact, color=acs_color)
#
xoff, yoff = 0.0, 0.0
if ROOT=='GOODS-SOUTH':
#print pointing
if pointing == '36':
xoff, yoff = 0.002,0.0075
if pointing == '37':
xoff, yoff = -0.005,-0.007
if pointing == '38':
xoff, yoff = 0.007,-0.007
#
if show_wfc3:
te = ax.text(np.mean(wfcx[:-1])+xoff, np.mean(wfcy[:-1])+yoff, pointing, va='center', ha='center', fontsize=13)
#### Get field area from full WFC3 polygons
un_wfc3 = wfc3_polys[0]
for pp in wfc3_polys[1:]:
un_wfc3 = un_wfc3.union(pp)
#
wfc3_union= []
if un_wfc3.geometryType() == 'MultiPolygon':
total_area = 0
xavg, yavg, wht = 0, 0, 0
for sub_poly in un_wfc3.geoms:
area_i = sub_poly.area*np.cos(y0/360.*2*np.pi)
total_area += area_i
x,y = sub_poly.exterior.xy
wfc3_union.append(sub_poly)
xavg += np.mean(x)*area_i**2
yavg += np.mean(y)*area_i**2
wht += area_i**2
#ax.plot(x,y, alpha=0.8, color='orange', linewidth=1)
xavg, yavg = xavg/wht, yavg/wht
else:
total_area = un_wfc3.area*np.cos(y0/360.*2*np.pi)
x,y = un_wfc3.exterior.xy
wfc3_union.append(un_wfc3)
#ax.plot(x,y, alpha=0.8, color='orange', linewidth=1)
xavg, yavg = np.mean(x), np.mean(y)
#plt.plot([xavg,xavg],[yavg,yavg], marker='x', markersize=20)
#### Get ACS overlap fraction
if ROOT != 'GOODS-N':
un_acs = acs_polys[0]
for pp in acs_polys[1:]:
un_acs = un_acs.union(pp)
acs_union = []
        if un_acs.geometryType() == 'MultiPolygon':
for sub_poly in un_acs.geoms:
x,y = sub_poly.exterior.xy
acs_union.append(sub_poly)
else:
x,y = un_acs.exterior.xy
acs_union.append(un_acs)
wfc3_area = 0.
acs_overlap_area = 0.
for wun in wfc3_union:
wfc3_area += wun.area*np.cos(y0/360.*2*np.pi)*3600.
for aun in acs_union:
overlap = wun.intersection(aun)
acs_overlap_area += overlap.area*np.cos(y0/360.*2*np.pi)*3600.
print '== Combined areas ==\nWFC3, ACS, frac: %.1f %.1f %.1f' %(wfc3_area, acs_overlap_area ,acs_overlap_area/wfc3_area*100)
dummy = """
wf = 147.3 + 122.2 + 121.9 + 114.0
ac = 134.6 + 112.7 + 102.4 + 102.8
print ac/wf*100.
"""
#
if yticklab is not None:
ax.set_xticklabels(xticklab)
xtick = ax.set_xticks(xtickv)
ax.set_yticklabels(yticklab)
ytick = ax.set_yticks(ytickv)
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\delta$')
fsi = '20'
if ROOT == 'GOODS-SOUTH':
field_label = 'GOODS-S'
else:
field_label = ROOT
if corner=='lr':
ax.text(0.95, 0.05,r'$\mathit{%s}$' %(field_label),
horizontalalignment='right',
verticalalignment='bottom',
transform = ax.transAxes, fontsize=fsi)
#
if corner=='ll':
ax.text(0.05, 0.05,r'$\mathit{%s}$' %(field_label),
horizontalalignment='left',
verticalalignment='bottom',
transform = ax.transAxes, fontsize=fsi)
#
if corner=='ur':
ax.text(0.95, 0.95,r'$\mathit{%s}$' %(field_label),
horizontalalignment='right',
verticalalignment='top',
transform = ax.transAxes, fontsize=fsi)
#
if corner=='ul':
ax.text(0.05, 0.95,r'$\mathit{%s}$' %(field_label),
horizontalalignment='left',
verticalalignment='top',
transform = ax.transAxes, fontsize=fsi)
ax.set_xlim(x0, x1)
ax.set_ylim(y0, y1)
# print 'RA - ', hexagesimal(x0), hexagesimal(x1)
# print 'Dec - ', hexagesimal(y0, hours=False), hexagesimal(y1, hours=False)
print 'RA - ', hexagesimal(xavg)
print 'Dec - ', hexagesimal(yavg, hours=False)
print 'Area: %.1f\n' %(total_area*3600.)
tag = ''
if not show_acs:
tag += '_noacs'
if not show_wfc3:
tag += '_nowfc3'
if show_sn_fields:
tag += '_sn'
if use_status:
plt.savefig('%s_pointings_status%s.pdf' %(ROOT, tag))
else:
plt.savefig('%s_pointings%s.pdf' %(ROOT, tag))
def get_UDF_center():
file='/research/HST/GRISM/3DHST/REGIONS/GOODS-SOUTH-38.reg'
field = re.split('-[0-9]', file)[0]
pointing = file.split(field+'-')[1].split('.reg')[0]
fp = open(file)
lines = fp.readlines()
fp.close()
#
px, py = sup.polysplit(lines[1], get_shapely=False)
print 'UDF: ', hexagesimal(np.mean(px[:-1]), hours=True), hexagesimal(np.mean(py[:-1]), hours=False)
def degrees(deg, min, sec, hours=True):
adeg = np.abs(deg)
degrees = adeg + min/60. + sec/3600.
if deg < 0:
degrees *= -1
if hours:
degrees *= 360./24
return degrees
def hexagesimal(degrees, hours=True, string=True):
if hours:
degrees *= 24/360.
if degrees < 0:
sign = -1
si = '-'
else:
sign = 1
si = ''
degrees = np.abs(degrees)
    deg = int(degrees)
    minute = int((degrees - deg)*60)
    sec = (degrees - deg - minute/60.)*3600
    if string:
        return '%s%02d:%02d:%05.2f' %(si, deg, minute, sec)
    else:
        return sign*deg, minute, sec
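A standalone sketch of the same decimal-to-sexagesimal conversion done by `hexagesimal()` above, using only the standard library; `to_sexagesimal` is an illustrative name, not part of this module:

```python
def to_sexagesimal(value, hours=True):
    """Convert decimal degrees to an H:M:S (RA) or D:M:S (Dec) string."""
    if hours:
        value *= 24. / 360.   # degrees of RA -> hours
    sign = '-' if value < 0 else ''
    value = abs(value)
    d = int(value)
    m = int((value - d) * 60)
    s = (value - d - m / 60.) * 3600
    return '%s%02d:%02d:%05.2f' % (sign, d, m, s)

# 150.1 deg of RA is 10h 00m 24s; -27.5 deg of Dec is -27d 30m 00s
print(to_sexagesimal(150.1))               # -> 10:00:24.00
print(to_sexagesimal(-27.5, hours=False))  # -> -27:30:00.00
```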
def polysplit(region='polygon(150.099223,2.391097,150.086084,2.422515,150.050573,2.407277,150.064586,2.376241)', get_shapely=False):
spl = region[region.find('(')+1:region.find(')')].split(',')
px = spl[0::2]
py = spl[1::2]
px.append(px[0])
py.append(py[0])
px, py = np.cast[float](px), np.cast[float](py)
if get_shapely:
from shapely.geometry import Polygon
        coords = []
        for i in range(len(px)):
            coords.append((px[i], py[i]))
        poly = Polygon(tuple(coords))
return poly
else:
return px, py
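The parsing step in `polysplit()` can be sketched without numpy or shapely; `parse_polygon_region` is a hypothetical helper shown only to make the string format explicit:

```python
def parse_polygon_region(region):
    """Parse a ds9-style 'polygon(x1,y1,x2,y2,...)' string into closed
    x/y vertex lists (first vertex repeated at the end)."""
    inner = region[region.find('(') + 1:region.find(')')]
    vals = [float(v) for v in inner.split(',')]
    xs, ys = vals[0::2], vals[1::2]
    xs.append(xs[0])   # close the polygon
    ys.append(ys[0])
    return xs, ys

xs, ys = parse_polygon_region('polygon(1.0,2.0,3.0,2.0,2.0,4.0)')
# xs == [1.0, 3.0, 2.0, 1.0], ys == [2.0, 2.0, 4.0, 2.0]
```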
#
def demo_background_subtract(root='COSMOS-13'):
"""
Make a figure demonstrating the background subtraction of the grism images
"""
import threedhst
import threedhst.prep_flt_files
import threedhst.grism_sky as bg
import unicorn
import unicorn.survey_paper as sup
import pyfits
path = unicorn.analysis.get_grism_path(root)
os.chdir(path)
if not os.path.exists('EXAMPLE'):
os.system('mkdir EXAMPLE')
os.chdir('EXAMPLE')
files = glob.glob('../PREP_FLT/%s-G141*' %(root))
files.append('../PREP_FLT/%s-F140W_tweak.fits' %(root))
for file in files:
if 'drz.fits' not in file:
os.system('cp %s .' %(file))
print file
threedhst.process_grism.fresh_flt_files(root+'-G141_asn.fits')
asn = threedhst.utils.ASNFile(root+'-G141_asn.fits')
flt = pyfits.open(asn.exposures[0]+'_flt.fits')
angle = flt[1].header['PA_APER']
#### First run on uncorrected images
threedhst.prep_flt_files.startMultidrizzle(root+'-G141_asn.fits',
use_shiftfile=True, skysub=False,
final_scale=0.128254, pixfrac=0.8, driz_cr=True,
updatewcs=True, median=True, clean=True, final_rot=angle)
os.system('mv %s-G141_drz.fits %s-G141_drz_first.fits' %(root, root))
sup.root = root
sup.first_prof = []
for exp in asn.exposures:
xp, yp = threedhst.grism_sky.profile(exp+'_flt.fits', flatcorr=False, biweight=True)
sup.first_prof.append(yp)
# for i,exp in enumerate(asn.exposures):
# plt.plot(xp, prof[i])
#### Now divide by the flat
threedhst.process_grism.fresh_flt_files(root+'-G141_asn.fits')
sup.flat_prof = []
for exp in asn.exposures:
bg.remove_grism_sky(flt=exp+'_flt.fits', list=['sky_cosmos.fits', 'sky_goodsn_lo.fits', 'sky_goodsn_hi.fits', 'sky_goodsn_vhi.fits'], path_to_sky = '../CONF/', out_path='./', verbose=False, plot=False, flat_correct=True, sky_subtract=False, second_pass=False, overall=False)
xp, yp = threedhst.grism_sky.profile(exp+'_flt.fits', flatcorr=False, biweight=True)
sup.flat_prof.append(yp)
threedhst.prep_flt_files.startMultidrizzle(root+'-G141_asn.fits',
use_shiftfile=True, skysub=False,
final_scale=0.128254, pixfrac=0.8, driz_cr=True,
updatewcs=True, median=True, clean=True, final_rot=angle)
os.system('mv %s-G141_drz.fits %s-G141_drz_flat.fits' %(root, root))
#### Divide by the sky
threedhst.process_grism.fresh_flt_files(root+'-G141_asn.fits')
sup.sky_prof = []
for exp in asn.exposures:
print exp
bg.remove_grism_sky(flt=exp+'_flt.fits', list=['sky_cosmos.fits', 'sky_goodsn_lo.fits', 'sky_goodsn_hi.fits', 'sky_goodsn_vhi.fits'], path_to_sky = '../CONF/', out_path='./', verbose=False, plot=False, flat_correct=True, sky_subtract=True, second_pass=False, overall=False)
xp, yp = threedhst.grism_sky.profile(exp+'_flt.fits', flatcorr=False, biweight=True)
sup.sky_prof.append(yp)
threedhst.prep_flt_files.startMultidrizzle(root+'-G141_asn.fits',
use_shiftfile=True, skysub=False,
final_scale=0.128254, pixfrac=0.8, driz_cr=True,
updatewcs=True, median=True, clean=True, final_rot=angle)
os.system('mv %s-G141_drz.fits %s-G141_drz_sky.fits' %(root, root))
#### Last pass along columns
threedhst.process_grism.fresh_flt_files(root+'-G141_asn.fits')
sup.final_prof = []
for exp in asn.exposures:
print exp
bg.remove_grism_sky(flt=exp+'_flt.fits', list=['sky_cosmos.fits', 'sky_goodsn_lo.fits', 'sky_goodsn_hi.fits', 'sky_goodsn_vhi.fits'], path_to_sky = '../CONF/', out_path='./', verbose=False, plot=False, flat_correct=True, sky_subtract=True, second_pass=True, overall=True)
xp, yp = threedhst.grism_sky.profile(exp+'_flt.fits', flatcorr=False, biweight=True)
sup.final_prof.append(yp)
threedhst.prep_flt_files.startMultidrizzle(root+'-G141_asn.fits',
use_shiftfile=True, skysub=False,
final_scale=0.128254, pixfrac=0.8, driz_cr=True,
updatewcs=True, median=True, clean=True, final_rot=angle)
# ##### Make segmentation images
# run = threedhst.prep_flt_files.MultidrizzleRun(root+'-G141')
# for i,exp in enumerate(asn.exposures):
# run.blot_back(ii=i, copy_new=(i is 0))
# threedhst.prep_flt_files.make_segmap(run.flt[i])
os.system('mv %s-G141_drz.fits %s-G141_drz_final.fits' %(root, root))
make_background_demo(root=root)
def make_background_demo(root='AEGIS-11', range1=(0.90,1.08), range2=(-0.02, 0.02)):
import unicorn.survey_paper as sup
    if sup.root != root:
        print "Need to run sup.demo_background_subtract(root='%s') first." %(root)
        return
path = unicorn.analysis.get_grism_path(root)
os.chdir(path+'EXAMPLE')
first = pyfits.open('%s-G141_drz_first.fits' %(root))
flat = pyfits.open('%s-G141_drz_flat.fits' %(root))
sky = pyfits.open('%s-G141_drz_sky.fits' %(root))
final = pyfits.open('%s-G141_drz_final.fits' %(root))
im_shape = first[1].data.shape
sup.im_shape = im_shape
med = threedhst.utils.biweight(flat[1].data, mean=True)
ysize = 3.
#fig = plt.figure(figsize=[ysize*im_shape[1]*1./im_shape[0]*4,ysize], dpi=100)
top_panel = 0.2
NPANEL = 4
#plt.hot()
plt.gray()
plt.close()
plt.rcParams['image.origin'] = 'lower'
plt.rcParams['image.interpolation'] = 'nearest'
if USE_PLOT_GUI:
fig = plt.figure(figsize=[ysize*im_shape[1]*1./im_shape[0]*NPANEL*(1-top_panel)/2,ysize*2],dpi=100)
else:
fig = Figure(figsize=[ysize*im_shape[1]*1./im_shape[0]*NPANEL*(1-top_panel)/2.,ysize*2], dpi=100)
fig.subplots_adjust(wspace=0.02,hspace=0.02,left=0.02,
bottom=0.07,right=0.99,top=0.97)
#
plt.rcParams['lines.linewidth'] = 1
vmin, vmax = -0.15, 0.075
vmin, vmax= -0.08, 0.08
x0 = 0.005*2
y0 = x0/2.
dx = (1.-(NPANEL+1)*x0)/NPANEL*2
top_panel/=2.
#ax = fig.add_subplot(141)
ax = fig.add_axes(((x0+(dx+x0)*0), y0+0.5, dx, 0.5-top_panel-y0))
ax.imshow((first[1].data-threedhst.utils.biweight(first[1].data, mean=True)), interpolation='nearest',aspect='auto',vmin=vmin-0.1*0,vmax=vmax+0.15*0)
sup.axis_imshow(ax, text='a)\ Raw')
ax.text(0.12, 0.85, r'$\mathrm{%s}$' %(root), horizontalalignment='left', verticalalignment='center',
transform = ax.transAxes, color='black', fontsize=14)
#
#show_limits(ax, -(vmax+0.15)+med, -(vmin-0.1)+med)
#### Show profiles
ax = fig.add_axes(((x0+(dx+x0)*0), (0.5-top_panel)+0.5, dx, top_panel-2*y0))
pp = sup.first_prof[0]*0.
for i in range(4):
#ax.plot(sup.first_prof[i])
pp += sup.first_prof[i]
ax.plot(pp/4., color='black')
sup.axis_profile(ax, yrange=range1, text='a)\ Raw')
#ax = fig.add_subplot(142)
ax = fig.add_axes(((x0+(dx+x0)*1)+x0, y0+0.5, dx, 0.5-top_panel-y0))
ax.imshow((flat[1].data-med), interpolation='nearest',aspect='auto',vmin=vmin,vmax=vmax)
sup.axis_imshow(ax, text='b)\ Flat')
#show_limits(ax, -vmax+med, -vmin+med)
#### Show profiles
ax = fig.add_axes(((x0+(dx+x0)*1)+x0, (0.5-top_panel)+0.5, dx, top_panel-2*y0))
pp = sup.flat_prof[0]*0.
for i in range(4):
#ax.plot(sup.flat_prof[i])
pp += sup.flat_prof[i]
ax.plot(pp/4.+1, color='black')
sup.axis_profile(ax, yrange=range1, text='b)\ Flat')
###########
#ax = fig.add_subplot(143)
ax = fig.add_axes(((x0+(dx+x0)*0), y0, dx, 0.5-top_panel-y0))
ax.imshow((sky[1].data-med), interpolation='nearest',aspect='auto',vmin=vmin,vmax=vmax)
sup.axis_imshow(ax, text='c)\ Background')
#show_limits(ax, -vmax+med, -vmin+med)
#### Show profiles
ax = fig.add_axes(((x0+(dx+x0)*0), (0.5-top_panel), dx, top_panel-2*y0))
pp = sup.sky_prof[0]*0.
for i in range(4):
#ax.plot(sup.sky_prof[i])
pp += sup.sky_prof[i]
ax.plot(pp/4., color='black')
sup.axis_profile(ax, yrange=range2, text='c)\ Background')
#ax = fig.add_subplot(144)
ax = fig.add_axes(((x0+(dx+x0)*1)+x0, y0, dx, 0.5-top_panel-y0))
ax.imshow((final[1].data), interpolation='nearest',aspect='auto',vmin=vmin,vmax=vmax)
sup.axis_imshow(ax, text='d)\ Final')
#show_limits(ax, -vmax, -vmin)
#### Show profiles
ax = fig.add_axes(((x0+(dx+x0)*1)+x0, (0.5-top_panel), dx, top_panel-2*y0))
pp = sup.final_prof[0]*0.
for i in range(4):
#ax.plot(sup.final_prof[i])
pp += sup.final_prof[i]
ax.plot(pp/4., color='black')
sup.axis_profile(ax, yrange=range2, text='d)\ Final')
outfile = '%s-G141_demo.pdf' %(root)
if USE_PLOT_GUI:
fig.savefig(outfile,dpi=100,transparent=False)
else:
canvas = FigureCanvasAgg(fig)
canvas.print_figure(outfile, dpi=100, transparent=False)
def show_limits(ax, vmin, vmax):
ax.text(0.98, -0.02,r'$\mathrm{[%.1f,\ %.1f]}$' %(vmin, vmax),
horizontalalignment='right',
verticalalignment='top',
transform = ax.transAxes, color='black', fontsize=12)
def axis_profile(ax, yrange=None, prof=None, text=''):
ax.set_yticklabels([])
ax.set_xticklabels([])
ax.set_xlim(0,1000)
if yrange is not None:
ax.set_ylim(yrange[0], yrange[1])
ylimits = yrange
else:
ylimits = ax.get_ylim()
#
    if text != '':
ax.text(0.5, 0.06,r'$\mathrm{'+text+'}$',
horizontalalignment='center',
verticalalignment='bottom',
transform = ax.transAxes, color='black', fontsize=10)
ax.text(0.98, 0.08,r'$\mathrm{[%.2f,\ %.2f]}$' %(ylimits[0], ylimits[1]),
horizontalalignment='right',
verticalalignment='bottom',
transform = ax.transAxes, color='black', fontsize=8)
def axis_imshow(ax, text='', shape=None):
import numpy as np
import unicorn.survey_paper as sup
if shape is None:
shape = sup.im_shape
ax.set_yticklabels([])
xtick = ax.set_xticks([0,shape[1]])
ax.set_xticklabels([])
ytick = ax.set_yticks([0,shape[0]])
#
# if text is not '':
# ax.text(0.5, 1.02,r'$\mathrm{'+text+'}$',
# horizontalalignment='center',
# verticalalignment='bottom',
# transform = ax.transAxes, color='black', fontsize=12)
def compare_sky():
"""
Make a figure showing the aXe default and derived sky images.
"""
import unicorn.survey_paper as sup
sco = pyfits.open('../CONF/sky_cosmos.fits')
shi = pyfits.open('../CONF/sky_goodsn_hi.fits')
slo = pyfits.open('../CONF/sky_goodsn_lo.fits')
svh = pyfits.open('../CONF/sky_goodsn_vhi.fits')
shape = sco[0].data.shape
figsize = 6
fig = Figure(figsize=[figsize,figsize], dpi=200)
#fig = plt.figure(figsize=[figsize,figsize], dpi=100)
fig.subplots_adjust(wspace=0.02,hspace=0.02,left=0.02,
bottom=0.02,right=0.98,top=0.98)
####### COSMOS
ax = fig.add_subplot(221)
ax.imshow(sco[0].data, interpolation='nearest',aspect='auto',vmin=0.95,vmax=1.05)
sup.axis_imshow(ax, shape=shape)
ax.fill_between([850,950],[50,50],[150,150], color='white', alpha=0.8)
ax.text(900/1014., 100./1014,r'$\mathrm{a)}$', horizontalalignment='center', verticalalignment='center', transform = ax.transAxes, color='black', fontsize=12)
####### GOODS-N Lo
ax = fig.add_subplot(222)
ax.imshow(slo[0].data, interpolation='nearest',aspect='auto',vmin=0.95,vmax=1.05)
sup.axis_imshow(ax, shape=shape)
ax.fill_between([850,950],[50,50],[150,150], color='white', alpha=0.8)
ax.text(900/1014., 100./1014,r'$\mathrm{b)}$', horizontalalignment='center', verticalalignment='center', transform = ax.transAxes, color='black', fontsize=12)
####### GOODS-N Hi
ax = fig.add_subplot(223)
ax.imshow(shi[0].data, interpolation='nearest',aspect='auto',vmin=0.95,vmax=1.05)
sup.axis_imshow(ax, shape=shape)
ax.fill_between([850,950],[50,50],[150,150], color='white', alpha=0.8)
ax.text(900/1014., 100./1014,r'$\mathrm{c)}$', horizontalalignment='center', verticalalignment='center', transform = ax.transAxes, color='black', fontsize=12)
####### GOODS-N Very hi
ax = fig.add_subplot(224)
ax.imshow(svh[0].data, interpolation='nearest',aspect='auto',vmin=0.95,vmax=1.05)
sup.axis_imshow(ax, shape=shape)
ax.fill_between([850,950],[50,50],[150,150], color='white', alpha=0.8)
ax.text(900/1014., 100./1014,r'$\mathrm{d)}$', horizontalalignment='center', verticalalignment='center', transform = ax.transAxes, color='black', fontsize=12)
#### Done
canvas = FigureCanvasAgg(fig)
canvas.print_figure('sky_backgrounds.pdf', dpi=100, transparent=False)
def axeFlat(flat_file='/research/HST/GRISM/3DHST/CONF/WFC3.IR.G141.flat.2.fits', wave=1.4e4):
"""
Compute the aXe flat-field image at a specified wavelength.
"""
flat = pyfits.open(flat_file)
wmin = flat[0].header['WMIN']
wmax = flat[0].header['WMAX']
x = (wave-wmin)/(wmax-wmin)
img = np.zeros((1014,1014), dtype='float')
for i in range(len(flat)):
img += flat[i].data*x**i
return img
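The loop in `axeFlat()` evaluates a per-pixel polynomial in the normalized wavelength x = (wave - WMIN)/(WMAX - WMIN), one FITS extension per coefficient order. A minimal scalar sketch of that evaluation (illustrative name, stdlib only):

```python
def eval_poly_flat(coeffs, wave, wmin, wmax):
    """Evaluate flat = sum_i a_i * x**i at x = (wave-wmin)/(wmax-wmin),
    as axeFlat() does with image-valued coefficients."""
    x = (wave - wmin) / (wmax - wmin)
    total = 0.0
    for i, a in enumerate(coeffs):
        total += a * x ** i
    return total

# coefficients [1.0, 0.1] give flat = 1 + 0.1*x; at x=0.5 this is 1.05
print(eval_poly_flat([1.0, 0.1], wave=1.4e4, wmin=1.1e4, wmax=1.7e4))  # -> 1.05
```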
def get_flat_function(x=507, y=507, wave=np.arange(1.1e4,1.6e4,500), flat_file='/research/HST/GRISM/3DHST/CONF/WFC3.IR.G141.flat.2.fits'):
#wave = np.arange(1.1e4, 1.6e4, .5e3)
flat = pyfits.open(flat_file)
wmin = flat[0].header['WMIN']
wmax = flat[0].header['WMAX']
xx = (wave-wmin)/(wmax-wmin)
flat_func = xx*0.
for i in range(len(flat)):
flat_func += flat[i].data[y,x]*xx**i
return flat_func
def show_flat_function():
wave = np.arange(1.05e4, 1.7e4, 250.)
color='blue'
for xi in range(50,951,50):
print unicorn.noNewLine+'%d' %(xi)
for yi in range(50,951,50):
ffunc = unicorn.survey_paper.get_flat_function(x=xi, y=yi, wave=wave)
ffunc /= np.interp(1.4e4, wave, ffunc)
p = plt.plot(wave, ffunc , alpha=0.05, color=color)
def grism_flat_dependence():
"""
Compute the higher order terms for the grism flat-field
"""
import unicorn
import threedhst
# f140 = threedhst.grism_sky.flat_f140[1].data[5:-5, 5:-5]
#
# flat = pyfits.open(unicorn.GRISM_HOME+'CONF/WFC3.IR.G141.flat.2.fits')
# wmin, wmax = flat[0].header['WMIN'], flat[0].header['WMAX']
#
# a0 = flat[0].data
#
# lam = 1.1e4
# x = (lam-wmin)/(wmax-wmin)
#
# aX = a0*0.
# for i,ext in enumerate(flat[1:]):
# print i
# aX += ext.data*x**(i+1)
f105 = pyfits.open(os.getenv('iref')+'/uc72113oi_pfl.fits')[1].data[5:-5,5:-5]
#f105 = pyfits.open(os.getenv('iref')+'/uc72113ni_pfl.fits')[1].data[5:-5,5:-5] # F098M
f140 = pyfits.open(os.getenv('iref')+'/uc721143i_pfl.fits')[1].data[5:-5,5:-5]
yi, xi= np.indices(f140.shape)
death_star = (f140 < 0.65) & (xi < 390) & (yi < 80) & (xi > 330) & (yi > 30)
REF = 'F140W'
f160 = pyfits.open(os.getenv('iref')+'/uc721145i_pfl.fits')[1].data[5:-5,5:-5]
#### Narrow bands
#f140 = pyfits.open(os.getenv('iref')+'/PFL/uc72113si_pfl.fits')[1].data[5:-5,5:-5]
#REF = 'F127M'
#f140 = pyfits.open(os.getenv('iref')+'/PFL/uc721140i_pfl.fits')[1].data[5:-5,5:-5]
#f160 = pyfits.open(os.getenv('iref')+'/PFL/uc721146i_pfl.fits')[1].data[5:-5,5:-5]
plt.rcParams['patch.edgecolor'] = 'None'
#plt.rcParams['font.size'] = 12
plt.rcParams['image.origin'] = 'lower'
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['text.usetex'] = True
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = 'Times'
xs = 8
fig = plt.figure(figsize=(xs,xs/3.), dpi=100)
fig.subplots_adjust(wspace=0.01,hspace=0.01,left=0.01, bottom=0.01,right=0.99,top=0.99)
vmin, vmax = 0.95, 1.05
#vmin, vmax = 0.9, 1.1
### put scale within the box
NX = 100
y0, y1 = 1014-1.5*NX, 1014-1*NX
y0 -= NX; y1 -= NX
### F140W flat
textbbox = dict(facecolor='white', alpha=0.6, edgecolor='white')
## correct for pixel area map
PIXEL_AREA = True
if PIXEL_AREA:
pam = pyfits.open(os.getenv('iref')+'/ir_wfc3_map.fits')[1].data
else:
pam = np.ones((1014,1014),dtype='float')
ax = fig.add_subplot(131)
ax.imshow(f140/pam, vmin=vmin, vmax=vmax, interpolation='nearest')
ax.text(50,950, REF, verticalalignment='top', fontsize=14, bbox=textbbox)
#ax.text(50,950, REF, verticalalignment='top', fontsize=14)
ax.set_xlim(0,1014)
ax.set_ylim(0,1014)
ax.set_yticklabels([])
ax.set_xticklabels([])
### F105W/F140W, with label
ratio = f105/f140
label = 'F105W / ' + REF
ratio = unicorn.survey_paper.axeFlat(wave=1.1e4)/unicorn.survey_paper.axeFlat(wave=1.6e4)
label = r'aXe 1.1 $\mu$m / 1.6 $\mu$m'
ratio[death_star] = 0.
#### Color bar for label
x0 = 300
# ratio[y0:y1,x0-2.5*NX:x0-1.5*NX] = vmin
# ratio[y0:y1,x0-1.5*NX:x0-0.5*NX] = (vmin+1)/2.
# ratio[y0:y1,x0-0.5*NX:x0+0.5*NX] = 1.
# ratio[y0:y1,x0+0.5*NX:x0+1.5*NX] = (vmax+1)/2.
# ratio[y0:y1,x0+1.5*NX:x0+2.5*NX] = vmax
NSPLIT = 5
NXi = NX*2./NSPLIT
    ### numpy slice bounds must be integers
    for i in range(1,NSPLIT+1):
        ratio[int(y0):int(y1), int(x0-(i+0.5)*NXi):int(x0-(i-0.5)*NXi)] = 1+(vmin-1)*i/NSPLIT
    #
    ratio[int(y0):int(y1), int(x0-0.5*NXi):int(x0+0.5*NXi)] = 1
    for i in range(1,NSPLIT+1):
        ratio[int(y0):int(y1), int(x0+(i-0.5)*NXi):int(x0+(i+0.5)*NXi)] = 1+(vmax-1)*i/NSPLIT
xbox = np.array([0,1,1,0,0])*NXi
ybox = np.array([0,0,1,1,0])*NX/2
ax = fig.add_subplot(132)
ax.imshow(ratio, vmin=vmin, vmax=vmax, interpolation='nearest')
ax.plot(xbox+x0-0.5*NXi, ybox+y1-0.5*NX, color='0.6', alpha=0.1)
fs = 9
ax.text(x0-2*NX, y0-0.5*NX, '%.2f' %(vmin), horizontalalignment='center', verticalalignment='center', fontsize=fs)
ax.text(x0-0*NX, y0-0.5*NX, '%.2f' %(1), horizontalalignment='center', verticalalignment='center', fontsize=fs)
ax.text(x0+2*NX, y0-0.5*NX, '%.2f' %(vmax), horizontalalignment='center', verticalalignment='center', fontsize=fs)
ax.text(50,950, label, verticalalignment='top', fontsize=14, bbox=textbbox)
ax.set_xlim(0,1014)
ax.set_ylim(0,1014)
ax.set_yticklabels([])
ax.set_xticklabels([])
### F160W/F140W
ax = fig.add_subplot(133)
ratio = f160/f140
label = 'F160W / '+REF
ratio = f105/f160
label = 'F105W / F160W'
# ratio = unicorn.survey_paper.axeFlat(wave=1.0552e4)/unicorn.survey_paper.axeFlat(wave=1.392e4)
ratio[death_star] = 0.
ax.imshow(ratio, vmin=vmin, vmax=vmax, interpolation='nearest')
ax.text(50,950, label, verticalalignment='top', fontsize=14,bbox=textbbox)
ax.set_xlim(0,1014)
ax.set_ylim(0,1014)
ax.set_yticklabels([])
ax.set_xticklabels([])
fig.savefig('compare_flats_v2.pdf')
def process_sky_background():
import threedhst.catIO as catIO
import re
info = catIO.Readfile('/research/HST/GRISM/3DHST/ANALYSIS/SURVEY_PAPER/sky_background.dat')
field = []
for targ in info.targname:
targ = targ.replace('GNGRISM','GOODS-NORTH-')
field.append(re.split('-[1-9]',targ)[0].upper())
field = np.array(field)
    is_UDF = field == 'xxx'  # no matches: initialize to all False
for i,targ in enumerate(info.targname):
m = re.match('GOODS-SOUTH-3[4678]',targ)
if m is not None:
is_UDF[i] = True
fields = ['AEGIS','COSMOS','GOODS-SOUTH','GOODS-NORTH','UDS']
colors = ['red','blue','green','purple','orange']
for i in range(len(fields)):
match = (field == fields[i]) & (info.filter == 'G141')
h = plt.hist(info.bg_mean[match], range=(0,5), bins=50, color=colors[i], alpha=0.5)
bg = threedhst.utils.biweight(info.bg_mean[match], both=True)
print '%-14s %.3f %.3f ' %(fields[i], bg[0], bg[1])
match = is_UDF & (info.filter == 'G141')
bg = threedhst.utils.biweight(info.bg_mean[match], both=True)
print '%-14s %.3f %.3f ' %('HUDF09', bg[0], bg[1])
plt.xlim(0,3.5)
def get_background_level():
"""
Get the sky background levels from the raw FLT images, with an object and dq mask
"""
xi, yi = np.indices((1014, 1014))
DPIX = 300
flat = pyfits.open(os.getenv('iref')+'uc721143i_pfl.fits')[1].data[5:-5,5:-5]
fp = open(unicorn.GRISM_HOME+'ANALYSIS/sky_background.dat','w')
fp.write('# file targname filter date_obs bg_mean bg_sigma\n')
for path in ['AEGIS','COSMOS','GOODS-S','GOODS-N','UDS']:
os.chdir(unicorn.GRISM_HOME+path+'/PREP_FLT')
info = catIO.Readfile('files.info')
print '\n\n%s\n\n' %(path)
for ii, file in enumerate(info.file):
file = file.replace('.gz','')
print file
#
try:
flt_raw = pyfits.open(threedhst.utils.find_fits_gz('../RAW/'+file))
flt_fix = pyfits.open(threedhst.utils.find_fits_gz(file))
seg = pyfits.open(threedhst.utils.find_fits_gz(file.replace('fits','seg.fits')))
except:
continue
#
mask = (seg[0].data == 0) & ((flt_fix[3].data & (4+32+16+512+2048+4096)) == 0) & (np.abs(xi-507) < DPIX) & (np.abs(yi-507) < DPIX)
#
if info.filter[ii].startswith('G'):
flt_raw[1].data /= flat
#
background = threedhst.utils.biweight(flt_raw[1].data[mask], both=True)
fp.write('%s %16s %5s %s %.3f %.3f\n' %(file, info.targname[ii], info.filter[ii], info.date_obs[ii], background[0], background[1]))
fp.close()
def get_spec_signal_to_noise():
"""
Measure the S/N directly from the spectrum of all objects.
"""
fp = open(unicorn.GRISM_HOME+'ANALYSIS/spec_signal_to_noise.dat','w')
fp.write('# object mag_auto flux_radius sig_noise n_bins\n')
for path in ['AEGIS','COSMOS','GOODS-S','GOODS-N','UDS']:
os.chdir(unicorn.GRISM_HOME+path)
SPC_files = glob.glob('DRIZZLE_G141/*opt.SPC.fits')
for file in SPC_files:
pointing = os.path.basename(file).split('_2_opt')[0]
#
SPC = threedhst.plotting.SPCFile(file,axe_drizzle_dir='./')
cat = threedhst.sex.mySexCat('DATA/%s_drz.cat' %(pointing))
try:
mag_auto = np.cast[float](cat.MAG_F1392W)
flux_radius = np.cast[float](cat.FLUX_RADIUS)
except:
continue
#
for id in SPC._ext_map:
print unicorn.noNewLine+'%s_%05d' %(pointing, id)
#
spec = SPC.getSpec(id)
mask = (spec.LAMBDA > 1.15e4) & (spec.LAMBDA < 1.6e4) & (spec.CONTAM/spec.FLUX < 0.1) & np.isfinite(spec.FLUX)
if len(mask[mask]) > 1:
signal_noise = threedhst.utils.biweight(spec.FLUX[mask]/spec.FERROR[mask], mean=True)
mat = cat.id == id
fp.write('%s_%05d %8.3f %8.3f %13.3e %-0d\n' %(pointing, id, mag_auto[mat][0], flux_radius[mat][0], signal_noise, len(mask[mask])))
#
else:
continue
fp.close()
def process_signal_to_noise():
import threedhst.catIO as catIO
import re
os.chdir('/research/HST/GRISM/3DHST/ANALYSIS/SURVEY_PAPER/')
info = catIO.Readfile('spec_signal_to_noise.dat')
field = []
for targ in info.object:
targ = targ.replace('GNGRISM','GOODS-NORTH-')
field.append(re.split('-[1-9]',targ)[0].upper())
field = np.array(field)
snscale = 2.5 ### aXe errors too large
ma = 'o'
ms = 5
##### S/N vs mag.
plt.rcParams['text.usetex'] = True
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = 'Times'
fig = unicorn.catalogs.plot_init(square=True, xs=5, aspect=1, left=0.10)
fig.subplots_adjust(wspace=0.2,hspace=0.24,left=0.09, bottom=0.08,right=0.975,top=0.99)
ax = fig.add_subplot(211)
ff = (field == 'COSMOS') & (info.sig_noise > 0)
ax.plot(info.mag_auto[ff], info.sig_noise[ff]*snscale, marker=ma, markersize=ms, color='red', alpha=0.1, linestyle='None')
xm, ym, ys, ns = threedhst.utils.runmed(info.mag_auto[ff], info.sig_noise[ff]*snscale, NBIN=30)
ff = (field == 'AEGIS') & (info.sig_noise > 0)
ax.plot(info.mag_auto[ff], info.sig_noise[ff]*snscale, marker=ma, markersize=ms, color='blue', alpha=0.1, linestyle='None')
ax.plot(xm,ym,color='white',linewidth=6,alpha=0.5)
ax.plot(xm,ym,color='red',linewidth=3,alpha=0.9)
xm, ym, ys, ns = threedhst.utils.runmed(info.mag_auto[ff], info.sig_noise[ff]*snscale, NBIN=30)
ax.plot(xm,ym,color='white',linewidth=6,alpha=0.5)
ax.plot(xm,ym,color='blue',linewidth=3,alpha=0.9)
ax.semilogy()
ax.plot([12,30],[3,3], linewidth=3, alpha=0.4, color='black', linestyle='--')
ax.set_ylim(0.2,300)
ax.set_xlim(16,25)
ax.set_yticklabels(['1','3','10','100']) #; ax.set_xticklabels([])
ytick = ax.set_yticks([1,3,10,100]) #; xtick = ax.set_xticks([0,NX]);
if plt.rcParams['text.usetex']:
ax.set_xlabel('MAG\_AUTO (F140W)')
else:
ax.set_xlabel('MAG_AUTO (F140W)')
ax.set_ylabel('S / N')
ax.text(16.5,1,'AEGIS',color='blue', fontsize=12)
ax.text(16.5,0.5,'COSMOS',color='red', fontsize=12)
##### S/N vs size for a mag bin
ax = fig.add_subplot(212)
m0, m1 = 22., 22.5
mm = (info.mag_auto > m0) & (info.mag_auto < m1) & (info.sig_noise > 0)
ax.plot(info.flux_radius[mm & (field == 'COSMOS')], info.sig_noise[mm & (field == 'COSMOS')]*snscale, marker=ma, markersize=ms, color='red', alpha=0.3, linestyle='None')
xm, ym, ys, ns = threedhst.utils.runmed(info.flux_radius[mm & (field == 'COSMOS')], info.sig_noise[mm & (field == 'COSMOS')]*snscale, NBIN=5)
ax.plot(info.flux_radius[mm & (field == 'AEGIS')], info.sig_noise[mm & (field == 'AEGIS')]*snscale, marker=ma, markersize=ms, color='blue', alpha=0.3, linestyle='None')
ax.plot(xm,ym,color='white',linewidth=6,alpha=0.5)
ax.plot(xm,ym,color='red',linewidth=3,alpha=0.9)
xm, ym, ys, ns = threedhst.utils.runmed(info.flux_radius[mm & (field == 'AEGIS')], info.sig_noise[mm & (field == 'AEGIS')]*snscale, NBIN=5)
ax.plot(xm,ym,color='white',linewidth=6,alpha=0.5)
ax.plot(xm,ym,color='blue',linewidth=3,alpha=0.9)
#plt.plot(xm,ym/np.sqrt(2),color='blue',linewidth=3,alpha=0.5)
ax.semilogy()
#ax.plot([0,30],[3,3], linewidth=3, alpha=0.5, color='black')
ax.set_ylim(2,15)
ax.set_xlim(1.5,12)
ax.set_yticklabels(['3','5','10']) #; ax.set_xticklabels([])
ytick = ax.set_yticks([3,5,10]) #; xtick = ax.set_xticks([0,NX]);
ax.set_xlabel(r'R$_{50}$ [$0.06^{\prime\prime}$ pix]')
ax.set_ylabel('S / N')
if plt.rcParams['text.usetex']:
ax.text(11.8,13, r'$%.1f < H_{140} < %.1f$' %(m0, m1), horizontalalignment='right', verticalalignment='top')
else:
ax.text(11.8,13, r'%.1f < $m_{140}$ < %.1f' %(m0, m1), horizontalalignment='right', verticalalignment='top')
fig.savefig('spec_signal_to_noise.pdf')
plt.rcParams['text.usetex'] = False
def clash_empty_apertures():
for cluster in ['a2261','a383','macs1149','macs1206','macs2129']:
os.chdir('/Users/gbrammer/CLASH/%s' %(cluster))
images = glob.glob('*drz.fits')
for image in images:
wht = image.replace('drz','wht')
head = pyfits.getheader(image)
zp=-2.5*np.log10(head['PHOTFLAM']) - 21.10 - 5 *np.log10(head['PHOTPLAM']) + 18.6921
unicorn.candels.clash_make_rms_map(image=wht, include_poisson=False)
unicorn.survey_paper.empty_apertures(SCI_IMAGE=image, SCI_EXT=0, WHT_IMAGE=wht.replace('wht','rms'), WHT_EXT=0, aper_params=(0.4/0.065/2.,0.4/0.065/2.+1,2), ZP=zp, make_plot=False, NSIM=1000, MAP_TYPE='MAP_RMS')
#### Sequence of apertures for measuring Beta
for cluster in ['a2261','a383','macs1149','macs1206','macs2129']:
os.chdir('/Users/gbrammer/CLASH/%s' %(cluster))
images = glob.glob('*_f160w*_drz.fits')
for image in images:
wht = image.replace('drz','wht')
head = pyfits.getheader(image)
zp=-2.5*np.log10(head['PHOTFLAM']) - 21.10 - 5 *np.log10(head['PHOTPLAM']) + 18.6921
#unicorn.candels.clash_make_rms_map(image=wht, include_poisson=False)
unicorn.survey_paper.empty_apertures(SCI_IMAGE=image, SCI_EXT=0, WHT_IMAGE=wht.replace('wht','rms'), WHT_EXT=0, aper_params=(0.2/0.065,3.05/0.065,0.2/0.065), ZP=zp, make_plot=False, NSIM=500, MAP_TYPE='MAP_RMS')
for cluster in ['a2261','a383','macs1149','macs1206','macs2129'][:-1]:
os.chdir('/Users/gbrammer/CLASH/%s' %(cluster))
files = glob.glob('*empty.fits')
print '\n--------------\n%s\n--------------\n' %(cluster.center(14))
for file in files:
head = pyfits.getheader(file.replace('_empty',''))
zp=-2.5*np.log10(head['PHOTFLAM']) - 21.10 - 5 *np.log10(head['PHOTPLAM']) + 18.6921
em = pyfits.open(file)
print '%-7s %.2f' %(file.split('_')[5], zp-2.5*np.log10(5*np.std(em[2].data)))
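The zeropoint used repeatedly above is the standard HST AB relation, ZP = -2.5 log10(PHOTFLAM) - 21.10 - 5 log10(PHOTPLAM) + 18.6921, built from the image header keywords. A stdlib sketch (function name illustrative):

```python
import math

def ab_zeropoint(photflam, photplam):
    """AB zeropoint from the HST header keywords PHOTFLAM
    (erg/s/cm^2/A per count/s) and PHOTPLAM (pivot wavelength, A)."""
    return (-2.5 * math.log10(photflam) - 21.10
            - 5. * math.log10(photplam) + 18.6921)

# e.g. PHOTFLAM=1e-19, PHOTPLAM=15000 A gives ZP ~ 24.21
print('%.4f' % ab_zeropoint(1e-19, 1.5e4))
```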
def run_empty_apertures_fields():
import glob
import os
import unicorn
os.chdir(unicorn.GRISM_HOME+'ANALYSIS/EMPTY_APERTURES/')
files = glob.glob('/3DHST/Spectra/Work/COSMOS/PREP_FLT/COSMOS-*-F140W_drz.fits')
files = glob.glob('/3DHST/Spectra/Work/GOODS-N/PREP_FLT/GOODS-N-*-F140W_drz.fits')
for file in files[1:]:
unicorn.survey_paper.empty_apertures(SCI_IMAGE=file, SCI_EXT=1, WHT_IMAGE=file, WHT_EXT=2, aper_params=(1,17,1), NSIM=1000, ZP=26.46, make_plot=True)
def grism_apertures_plot(SHOW_FLUX=False):
test = """
The following derives the correction, R, needed to scale the pixel standard
deviations in the drizzled images with small pixels, to the noise in
"nominal" pixels.
files = glob.glob('/research/HST/GRISM/3DHST/ANALYSIS/SURVEY_PAPER/EMPTY_APERTURES/*pix*empty.fits')
ratio = []
for file in files:
impix = pyfits.open(file)
im = pyfits.open(file.replace('pix',''))
print threedhst.utils.biweight(im[2].data[:,2]) / threedhst.utils.biweight(impix[2].data[:,2])
ratio.append(threedhst.utils.biweight(im[2].data[:,2]) / threedhst.utils.biweight(impix[2].data[:,2]))
print 'R ~ %.2f' %(1./np.mean(ratio))
# Emission line flux
texp, Nexp, R, dark, sky, ee_fraction = 1277, 4, 2, 0.05, 1.5, 0.75
area = np.pi*R**2
total_counts_resel = (xarr+dark)*texp*area*Nexp #
eq1_cts = np.sqrt(total_counts_resel+rn**2*area*Nexp)
eq1_cps = eq1_cts/texp/Nexp
#eq1_flam = eq1_cps/(sens_14um*46.5)
print eq1_cps
"""
os.chdir(unicorn.GRISM_HOME+'ANALYSIS/SURVEY_PAPER')
# files = glob.glob('/research/HST/GRISM/3DHST/ANALYSIS/SURVEY_PAPER/EMPTY_APERTURES_CIRCULAR/*G141_drz_empty.fits')
# aper_use = 0
files = glob.glob('/research/HST/GRISM/3DHST/ANALYSIS/SURVEY_PAPER/EMPTY_APERTURES/*G141_drz_empty.fits')
aper_use = 1
#### Parameters of the plot
lam_int = 1.4e4
SN_show = 5
if SHOW_FLUX:
aper_use = 0
bg = threedhst.catIO.Readfile('/research/HST/GRISM/3DHST/ANALYSIS/SURVEY_PAPER/sky_background.dat')
sens = pyfits.open(unicorn.GRISM_HOME+'CONF/WFC3.IR.G141.1st.sens.2.fits')[1].data
sens_14um = np.interp(lam_int,sens.WAVELENGTH,sens.SENSITIVITY)
colors = {'AEGIS':'red','COSMOS':'blue','UDS':'purple', 'GNGRISM':'orange','GOODSS':'green'}
plt.rcParams['text.usetex'] = True
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = 'Times'
fig = unicorn.catalogs.plot_init(xs=4, left=0.125, bottom=0.085, right=0.01, top=0.01, square=True, fontsize=12)
ax = fig.add_subplot(111)
for file in files:
root = os.path.basename(file).split('-G141')[0].replace('GOODS-N-','GNGRISM').replace('S-S','S-SOUTH')
print root
mat = (bg.targname == root) & (bg.filter == 'G141')
if len(mat[mat]) == 0:
continue
mean_bg = np.mean(bg.bg_mean[mat])
err_bg = np.std(bg.bg_mean[mat])
aps = pyfits.open(file)
#
fluxes = aps[2].data
stats = threedhst.utils.biweight(fluxes, both=True)
sigmas = fluxes[0,:].flatten()*0
for i in range(len(sigmas)):
sigmas[i] = threedhst.utils.biweight(fluxes[:,i])
#sigmas[i] = threedhst.utils.nmad(fluxes[:,i])
#sigmas[i] = np.std(fluxes[:,i])
#sigmas *= 1.34 ### scale factor for correlated pixels, determined empirically
inv_sens_flam = sigmas/(sens_14um*22*4)
field = os.path.basename(file).replace('GOODS-N','GNGRISM').replace('GOODS-S','GOODSS').split('-')[0]
print field, aps[1].data[aper_use]
if SHOW_FLUX:
sig3_flux = inv_sens_flam*SN_show
sig3_flux /= 0.75**2 ### Include for encircled energy within 3 nominal spatial pixels
sig3_ab = sig3_flux*2*46.5/1.e-17
else:
inv_sens_fnu = inv_sens_flam*lam_int**2/3.e18
sig3_flux = inv_sens_fnu*SN_show
sig3_flux /= 0.75 ### Include for encircled energy within 3 nominal spatial pixels
sig3_ab = -2.5*np.log10(sig3_flux)-48.6#-2.5*np.log10(np.sqrt(2))
p = ax.errorbar(mean_bg,sig3_ab[aper_use], xerr=err_bg, marker='o', ms=8, alpha=0.4, color=colors[field], ecolor=colors[field])
xarr = np.arange(0,5,0.02) ### background rate, cts / s
yarr = 22.9-2.5*np.log10(np.sqrt(xarr/2.))
#### Eq. 1 from paper
scale = 0.06/0.128254
Nexp, texp, rn, R, dark = 4, 1277, 20, 2*scale, 0.05
#dark += 0.181/3.
#area = np.pi*R**2
area = 3
ee_fraction = 0.75 # for 3 pix aperture in spatial direction, nominal pixels
total_counts_resel = (xarr+dark)*texp*area*2*Nexp #
eq1_cts = np.sqrt(total_counts_resel+rn**2*area*Nexp)
eq1_cps = eq1_cts/texp/Nexp/ee_fraction/2
eq1_flam = eq1_cps/(sens_14um*46.5)
eq1_fnu = eq1_flam*lam_int**2/3.e18
eq1_ab = -2.5*np.log10(eq1_fnu*SN_show)-48.6#-2.5*np.log10(np.sqrt(0.5))
plt.plot(xarr, eq1_ab+2.5*np.log10(1.35), color='black', alpha=0.4, linewidth=2, linestyle='--')
plt.plot(xarr, eq1_ab, color='black', alpha=0.4, linewidth=2)
plt.text(1.,24.21-2.5*np.log10(SN_show/3.),'ETC', rotation=-44, color='black', alpha=0.4,horizontalalignment='center')
plt.arrow(1.5, 24.1-2.5*np.log10(SN_show/3.), 0, 0.1, color='0.6', alpha=1, fill=True, width=0.02, head_width=0.06, head_length=0.02, overhang=0.05)
plt.text(1.5+0.05, 24.1-2.5*np.log10(SN_show/3.)+0.05,r'$R_\mathrm{driz}$', verticalalignment='center', alpha=0.4)
#plt.plot(xarr, eq1_ab-2.5*np.log10(np.sqrt(2*46.5/22.5)), color='purple', linewidth=2)
### Predict source counts
mag = 23
source_fnu = 10**(-0.4*(mag+48.6))
source_flam = source_fnu/lam_int**2*3.e18
source_cps = source_flam*sens_14um*46.5*ee_fraction
source_counts = source_cps*Nexp*texp*2
print np.interp(1.88,xarr,total_counts_resel), np.interp(1.88,xarr,np.sqrt(total_counts_resel+rn**2*area*Nexp)), np.interp(1.88,xarr,eq1_ab), source_counts, source_counts / np.interp(1.88,xarr,np.sqrt(total_counts_resel+rn**2*area*Nexp))
#ax.plot(xarr, yarr, color='black', alpha=0.2, linewidth=3)
ax.set_xlim(0.6,3.2)
if SHOW_FLUX:
ax.set_ylim(1,5.1)
ax.set_ylabel(r'$%0d\sigma$ emission line sensitivity ($10^{-17}\,\mathrm{erg\,s^{-1}\,cm^{-2}}$)' %(SN_show))
#ax.semilogy()
else:
ax.set_ylim(23.8-2.5*np.log10(SN_show/3.),25.0-2.5*np.log10(SN_show/3.))
ax.set_ylabel(r'$%0d\sigma$ continuum depth (1.4$\mu$m, $\Delta=92\,$\AA)' %(SN_show))
ax.set_xlabel(r'Background level [electrons / s]')
#ax.set_ylabel(r'$3\sigma$ continuum depth @ 1.4$\mu$m, $D_\mathrm{ap}=0.24^{\prime\prime}/\,90\,$\AA')
x0, y0, dy = 3, 24.8-2.5*np.log10(SN_show/3.), 0.1
for i,field in enumerate(colors.keys()):
field_txt = field.replace('GNGRISM','GOODS-N').replace('GOODSS','GOODS-S')
ax.text(x0, y0-i*dy, field_txt, color=colors[field], horizontalalignment='right')
plt.savefig('grism_empty_apertures.pdf')
plt.close()
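#### A minimal sketch of the "Eq. 1" noise model used in the routine above,
#### pulled out into a standalone helper for clarity. The defaults mirror the
#### hard-coded values in the function body (texp=1277, Nexp=4, rn=20,
#### dark=0.05, area=3 pix, ee_fraction=0.75); the name `eq1_sigma_cps` is
#### invented here for illustration and is not part of the module API.

```python
import numpy as np

def eq1_sigma_cps(bg_rate, texp=1277., Nexp=4, rn=20., dark=0.05,
                  area=3., ee_fraction=0.75):
    """1-sigma count rate per resolution element from the Eq. 1 noise model."""
    # Background + dark counts accumulated over all exposures; the
    # factor of 2 accounts for the two spectral pixels per resel
    total_counts = (bg_rate + dark) * texp * area * 2 * Nexp
    # Add read noise in quadrature (rn electrons per pixel per read)
    sigma_cts = np.sqrt(total_counts + rn**2 * area * Nexp)
    # Convert back to a count rate, corrected for encircled energy
    return sigma_cts / texp / Nexp / ee_fraction / 2
```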
def grism_empty_apertures():
"""
Try simple empty apertures routine to measure depth of grism exposures,
to compare with the values measured directly from the spectra.
"""
unicorn.survey_paper.empty_apertures(SCI_IMAGE='GOODS-S-34-G141_drz.fits', WHT_IMAGE='GOODS-S-34-G141_drz.fits', aper_params=(2,8.1,2), NSIM=500, ZP=25, make_plot=False, verbose=True, threshold=0.8, is_grism=True)
for field in ['AEGIS','GOODS-S','UDS']: #,'COSMOS','GOODS-N']:
os.chdir(unicorn.GRISM_HOME+field+'/PREP_FLT/')
images = glob.glob(field+'*[0-9]-G141_drz.fits')
print images
for image in images[1:]:
unicorn.survey_paper.empty_apertures(SCI_IMAGE=image, WHT_IMAGE=image, aper_params=(2,8.1,2), NSIM=1000, ZP=25, make_plot=False, verbose=True, threshold=0.8, is_grism=True, rectangle_apertures = [(4,4),(2,6),(4,6)])
#
#### Make a version with nominal pixels for testing
threedhst.shifts.make_grism_shiftfile(image.replace('drz','asn').replace('G141','F140W'), image.replace('drz','asn'))
threedhst.utils.combine_asn_shifts([image.replace('drz','asn')], out_root=image.split('_drz')[0]+'pix')
threedhst.prep_flt_files.startMultidrizzle(image.split('_drz')[0] +'pix_asn.fits',
use_shiftfile=True, skysub=False,
final_scale=0.128254, pixfrac=0.01, driz_cr=False,
updatewcs=False, clean=True, median=False)
new = image.replace('G141','G141pix')
unicorn.survey_paper.empty_apertures(SCI_IMAGE=new, WHT_IMAGE=new, aper_params=(2,8.1,2), NSIM=500, ZP=25, make_plot=False, verbose=True, threshold=1.0, is_grism=True, rectangle_apertures = [(2,2),(1,3),(2,3),(4,4)])
aps = pyfits.open('GOODS-S-34-G141_drz_empty.fits')
fluxes = aps[2].data.flatten()
stats = threedhst.utils.biweight(fluxes, both=True)
sens = pyfits.open('../../CONF/WFC3.IR.G141.1st.sens.2.fits')[1].data
wave = sens.WAVELENGTH
inv_sens_flam = 1./(sens.SENSITIVITY*0.06/0.128254*46.5)
inv_sens_fnu = inv_sens_flam*wave**2/3.e18
sig3 = inv_sens_fnu*3*stats[1]
sig3_ab = -2.5*np.log10(sig3)-48.6
plt.plot(wave, sig3_ab)
mag = sig3_ab*0+23
input_fnu = 10**(-0.4*(mag+48.6))
input_flam = input_fnu*3.e18/wave**2
input_counts = input_flam * sens.SENSITIVITY * 46.5
#plt.plot(sens.WAVELENGTH, sens.SENSITIVITY)
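#### The flux-density / AB-magnitude conversions above recur throughout this
#### module. A self-contained sketch of the two transforms, with the same
#### c = 3e18 Angstrom/s convention used in the code; the helper names are
#### invented here for illustration.

```python
import numpy as np

C_ANGSTROM = 3.e18  # speed of light in Angstrom/s, as used throughout

def flam_to_ab(flam, lam):
    # f_nu = f_lambda * lambda^2 / c, then m_AB = -2.5 log10(f_nu) - 48.6
    fnu = flam * lam**2 / C_ANGSTROM
    return -2.5 * np.log10(fnu) - 48.6

def ab_to_flam(mag, lam):
    # Inverse transform back to f_lambda
    fnu = 10**(-0.4 * (mag + 48.6))
    return fnu * C_ANGSTROM / lam**2
```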
def empty_apertures(SCI_IMAGE='PRIMO_F125W_drz.fits', SCI_EXT=1, WHT_IMAGE='PRIMO_F125W_drz.fits', WHT_EXT=2, aper_params=(1,17,0.5), NSIM=1000, ZP=26.25, make_plot=True, verbose=True, MAP_TYPE='MAP_WEIGHT', threshold=1.5, is_grism=False, rectangle_apertures = None):
"""
1) Run SExtractor on the input image to generate a segmentation map.
2) Place `NSIM` empty apertures on the image, avoiding objects and areas with
zero weight as defined in the `WHT_IMAGE`.
The list of aperture radii used is `np.arange(*aper_params)`.
3) Store the results in a FITS file `SCI_IMAGE`_empty.fits.
Circular apertures are placed on the science image with the fractional pixel
coverage determined with a polygon approximation, and this can take a while.
"""
from shapely.geometry import Point, Polygon
if SCI_EXT == 0:
SCI_EXT_SEX = 1
else:
SCI_EXT_SEX = SCI_EXT
if WHT_EXT == 0:
WHT_EXT_SEX = 1
else:
WHT_EXT_SEX = WHT_EXT
ROOT = os.path.basename(SCI_IMAGE).split('.fits')[0]
#### Open the science image
if verbose:
print 'Read images...'
img = pyfits.open(SCI_IMAGE)
img_data = img[SCI_EXT].data
img_head = img[SCI_EXT].header
img_shape = img_data.shape
#### Setup SExtractor and run to generate a segmentation image
if is_grism:
threedhst.sex.USE_CONVFILE = 'grism.conv'
else:
threedhst.sex.USE_CONVFILE = 'gauss_4.0_7x7.conv'
se = threedhst.sex.SExtractor()
se.aXeParams()
se.copyConvFile()
se.overwrite = True
se.options['CATALOG_NAME'] = '%s_empty.cat' %(ROOT)
se.options['CHECKIMAGE_NAME'] = '%s_empty_seg.fits' %(ROOT)
se.options['CHECKIMAGE_TYPE'] = 'SEGMENTATION'
if WHT_IMAGE is None:
se.options['WEIGHT_TYPE'] = 'NONE'
img_wht = img_data*0.
img_wht[img_data != 0] = 1
else:
se.options['WEIGHT_TYPE'] = MAP_TYPE
se.options['WEIGHT_IMAGE'] = '%s[%d]' %(WHT_IMAGE, WHT_EXT_SEX-1)
wht = pyfits.open(WHT_IMAGE)
img_wht = wht[WHT_EXT].data
##### Needed for very faint limits
se.options['MEMORY_OBJSTACK'] = '8000'
se.options['MEMORY_PIXSTACK'] = '800000'
se.options['FILTER'] = 'Y'
se.options['DETECT_THRESH'] = '%.2f' %(threshold)
se.options['ANALYSIS_THRESH'] = '%.2f' %(threshold)
se.options['MAG_ZEROPOINT'] = '%.2f' %(ZP) ### arbitrary, actual mags don't matter
status = se.sextractImage('%s[%d]' %(SCI_IMAGE, SCI_EXT_SEX-1))
#### Read the Segmentation image
segim = pyfits.open('%s_empty_seg.fits' %(ROOT))
seg = segim[0].data
segim.close()
#### Set up the apertures
#NSIM = 1000
#apertures = np.arange(1,17,0.5)
apertures = np.arange(aper_params[0], aper_params[1], aper_params[2])
if rectangle_apertures is not None:
IS_RECTANGLE = True
apertures = rectangle_apertures ### list of tuples with (dx,dy) sizes
else:
IS_RECTANGLE = False
fluxes = np.zeros((NSIM, len(apertures)))
centers = np.zeros((NSIM, len(apertures), 2))
#### Loop through the desired apertures and randomly place NSIM of them
aper = np.zeros(img_shape, dtype=np.float)
for iap, ap in enumerate(apertures):
#aper_image = np.zeros(img_shape)
icount = 0
if IS_RECTANGLE:
print 'Aperture %.2f x %.2f pix\n' %(ap[0], ap[1])
rap = (ap[0]/2.,ap[1]/2.)
else:
print 'Aperture radius: %.2f pix\n' %(ap)
rap = (ap, ap)
while icount < NSIM:
#### Random coordinate
xc = np.random.rand()*(img_shape[1]-4*rap[0])+2*rap[0]
yc = np.random.rand()*(img_shape[0]-4*rap[1])+2*rap[1]
#### Quick test to see if the coordinate is within an object or
#### where weight is zero
if (seg[int(yc), int(xc)] != 0) | (img_wht[int(yc), int(xc)] <= 0) | (img_data[int(yc), int(xc)] == 0):
continue
#### Shapely point + buffer to define the circular aperture
if IS_RECTANGLE:
aperture_polygon = Polygon(((xc-ap[0]/2.,yc-ap[1]/2.), (xc+ap[0]/2.,yc-ap[1]/2.), (xc+ap[0]/2.,yc+ap[1]/2.), (xc-ap[0]/2.,yc+ap[1]/2.)))
else:
point = Point(xc, yc)
aperture_polygon = point.buffer(ap, resolution=16)
#### initialize the aperture
aper*=0
#### Loop through pixels to compute fractional pixel coverage within
#### the circular aperture using the intersection of Shapely polygons
smax = 0
wmin = 1.e10
for i in range(int(np.floor(xc-rap[0])),int(np.ceil(xc+rap[0]))):
for j in range(int(np.floor(yc-rap[1])),int(np.ceil(yc+rap[1]))):
pix = Polygon(((i+0.5,j+0.5), (i+1.5,j+0.5), (i+1.5,j+1.5), (i+0.5,j+1.5)))
isect = pix.intersection(aperture_polygon)
aper[j,i] = isect.area
if isect.area > 0:
smax = np.array([smax, seg[j,i]]).max()
wmin = np.array([wmin, img_wht[j,i]]).min()
#### Only keep the result if the aperture doesn't intersect with an object
#### as defined in the segmentation image and if all weights within the
#### aperture are greater than zero
if (smax == 0) & (wmin > 0):
fluxes[icount, iap] = (aper*img_data).sum()
centers[icount, iap, : ] = np.array([xc, yc])
#aper_image += aper
print unicorn.noNewLine+'%d' %(icount)
icount += 1
else:
print unicorn.noNewLine+'Skip: %f %f' %((seg*aper).max(), (img_wht*aper).min())
continue
#### Make the output FITS file. List of aperture radii in extension 1, aperture
#### fluxes in extension 2.
ap_head = pyfits.Header()
ap_head.update('NSIM',NSIM, comment='Number of apertures')
ap_head.update('SCI_IMG',SCI_IMAGE, comment='Science image')
ap_head.update('SCI_EXT',SCI_EXT, comment='Science extension')
ap_head.update('WHT_IMG',WHT_IMAGE, comment='Weight image')
ap_head.update('WHT_EXT',WHT_EXT, comment='Weight extension')
prim = pyfits.PrimaryHDU(header=ap_head)
ap_hdu = pyfits.ImageHDU(data=np.array(apertures))
fl_hdu = pyfits.ImageHDU(data=fluxes)
ce_hdu = pyfits.ImageHDU(data=centers)
pyfits.HDUList([prim, ap_hdu, fl_hdu, ce_hdu]).writeto('%s_empty.fits' %(ROOT), clobber=True)
if make_plot is True:
make_empty_apertures_plot(empty_file='%s_empty.fits' %(ROOT), ZP=ZP)
def make_empty_apertures_plot(empty_file='PRIMO_F125W_drz_empty.fits', ZP=26.25, NSIG=5):
"""
Plot the results from the `empty_apertures` routine.
"""
import matplotlib.pyplot as plt
import numpy as np
from scipy import polyfit, polyval
import threedhst
import unicorn
im = pyfits.open(empty_file)
apertures = im[1].data
fluxes = im[2].data
ROOT = empty_file.split('_empty')[0]
sigma = apertures*0.
means = apertures*0.
for iap, ap in enumerate(apertures):
sigma[iap] = threedhst.utils.biweight(fluxes[:,iap])
means[iap] = threedhst.utils.biweight(fluxes[:,iap], mean=True)
#plt.plot(apertures, means/np.pi/apertures**2)
#plt.ylim(0,1.5*np.max(means/np.pi/apertures**2))
#threedhst.utils.biweight(img_data[(seg == 0) & (img_wht > 0)], both=True)
fig = unicorn.catalogs.plot_init(xs=6, aspect=0.5, left=0.12)
fig.subplots_adjust(wspace=0.24,hspace=0.0,left=0.095,
bottom=0.17,right=0.97,top=0.97)
ax = fig.add_subplot(121)
################################## plot sigma vs radius
ax.plot(apertures, sigma, marker='o', linestyle='None', color='black', alpha=0.8, markersize=5)
coeffs = polyfit(np.log10(apertures), np.log10(sigma), 1)
ax.plot(apertures, 10**polyval(coeffs, np.log10(apertures)), color='red')
xint = 1
y2 = apertures**2
y2 = y2*np.interp(xint,apertures,sigma) / np.interp(xint,apertures,y2)
ax.plot(apertures, y2, linestyle='--', color='black', alpha=0.3)
y1 = apertures**1
y1 = y1*np.interp(xint,apertures,sigma) / np.interp(xint,apertures,y1)
ax.plot(apertures, y1, linestyle='--', color='black', alpha=0.3)
ax.set_xlabel(r'$R_\mathrm{aper}$ [pix]')
ax.set_ylabel(r'$\sigma_\mathrm{biw}$')
#ax.text(apertures.max(), 0.1*sigma.max(), r'$\beta=%.2f$' %(coeffs[0]), horizontalalignment='right')
ax.text(0.08, 0.85, r'$N_\mathrm{ap}=%d$' %(im[0].header['NSIM']), transform=ax.transAxes)
ax.set_ylim(0,2)
################################# Plot AB depth
ax = fig.add_subplot(122)
ax.plot(apertures, ZP-2.5*np.log10(sigma*NSIG), marker='o', linestyle='None', color='black', alpha=0.8, markersize=5)
ax.plot(apertures, ZP-2.5*np.log10(10**polyval(coeffs, np.log10(apertures))*NSIG), color='red')
ax.plot(apertures, ZP-2.5*np.log10(y2*NSIG), linestyle='--', color='black', alpha=0.3)
ax.plot(apertures, ZP-2.5*np.log10(y1*NSIG), linestyle='--', color='black', alpha=0.3)
ax.set_xlabel(r'$R_\mathrm{aper}$ [pix]')
ax.set_ylabel(r'Depth AB mag (%d$\sigma$)' %(NSIG))
ax.text(0.95, 0.9, ROOT, horizontalalignment='right', verticalalignment='top', transform=ax.transAxes)
ax.text(0.95, 0.8, r'$\beta=%.2f$' %(coeffs[0]), horizontalalignment='right', verticalalignment='top', transform=ax.transAxes)
ax.set_ylim(23, 30)
################################## Save the result
outfile = ROOT+'_empty.pdf'
if USE_PLOT_GUI:
fig.savefig(outfile,dpi=100,transparent=False)
else:
canvas = FigureCanvasAgg(fig)
canvas.print_figure(outfile, dpi=100, transparent=False)
print ROOT+'_empty.pdf'
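#### The depth estimate in the plotting routine above rests on fitting the
#### empty-aperture noise as a power law, log10(sigma) = beta*log10(R) + const,
#### and converting to an N-sigma AB depth via ZP - 2.5*log10(N*sigma). A
#### minimal sketch with a fabricated sigma(R) curve of known slope (real
#### curves come from the *_empty.fits tables):

```python
import numpy as np

# Fake noise curve with known slope beta = 1.3
apertures = np.array([1., 2., 4., 8.])
sigma = 0.01 * apertures**1.3

# Power-law fit in log-log space, as in make_empty_apertures_plot
coeffs = np.polyfit(np.log10(apertures), np.log10(sigma), 1)
beta = coeffs[0]

# N-sigma AB depth at each aperture from the fitted noise curve
ZP, NSIG = 26.25, 5
depth = ZP - 2.5 * np.log10(NSIG * 10**np.polyval(coeffs, np.log10(apertures)))
```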
def show_background_flux_distribution():
"""
Extract the SKYSCALE parameter from the G141 FLT images and plot their distribution
by field.
"""
from matplotlib.ticker import MultipleLocator, FormatStrFormatter
# for field in ['AEGIS','COSMOS','GOODS-S','UDS']:
# os.chdir(unicorn.GRISM_HOME+'%s/PREP_FLT' %(field))
# print field
# status = os.system('dfits *flt.fits |fitsort TARGNAME FILTER SKYSCALE |grep G141 > %s_skyscale.dat' %(field))
#
# os.chdir(unicorn.GRISM_HOME)
# status = os.system('cat AEGIS/PREP_FLT/AEGIS_skyscale.dat COSMOS/PREP_FLT/COSMOS_skyscale.dat GOODS-S/PREP_FLT/GOODS-S_skyscale.dat UDS/PREP_FLT/UDS_skyscale.dat > 3DHST_skyscale.dat')
###
os.chdir("/research/HST/GRISM/3DHST/SIMULATIONS")
sky = catIO.Readfile('3DHST_skyscale.dat')
pointings = np.unique(sky.pointing)
colors = {}
colors['UDS'] = 'purple'; colors['COSMOS'] = 'blue'; colors['AEGIS'] = 'red'; colors['GOODS-SOUTH'] = 'green'
off = {}
off['UDS'] = 1
nexp = np.arange(1,5)
fields = []
for pointing in pointings:
this = sky.pointing == pointing
field = '-'.join(pointing.split('-')[:-1])
fields.extend([field]*4)
#print pointing, field
#p = plt.plot(sky.skyscale[this], nexp, marker='o', linestyle='-', alpha=0.5, color=colors[field])
fields = np.array(fields)
plt.ylim(0,5)
fig = unicorn.catalogs.plot_init(xs=3.8, left=0.06, bottom=0.08, use_tex=True, fontsize=10)
ax = fig.add_subplot(111)
for i, field in enumerate(np.unique(fields)[::-1]):
this = fields == field
yh, xh = np.histogram(sky.skyscale[this], range=(0,4), bins=40)
p = ax.plot(xh[1:], yh*1./yh.max()+i*3.5, marker='None', linestyle='steps-', color=colors[field], alpha=0.5, linewidth=3)
#
pointings = np.unique(sky.pointing[fields == field])
for pointing in pointings:
this = sky.pointing == pointing
p = ax.plot(sky.skyscale[this], np.arange(4)/4.*2+1.5+i*3.5, marker='o', linestyle='-', alpha=0.3, color=colors[field], ms=4)
#
ax.text(0.1, i*3.5+1.5, field.replace('SOUTH','S'), horizontalalignment='left')
ax.set_xlim(0,3.4)
ax.yaxis.set_major_locator(MultipleLocator(3.5))
ax.set_yticklabels([])
ax.set_ylabel('Field')
ax.set_xlabel(r'Background per exposure [e$^-$/ s]')
unicorn.catalogs.savefig(fig, 'pointing_backgrounds.pdf')
def eqw_as_fn_mag():
"""
For a given line flux, plot the equivalent width
as a function of magnitude to show the equivalent width sensitivity.
"""
lam = np.arange(1.e4,1.8e4,0.1)
l0 = 1.4e4
dv = 120 # km/s
line = 1./np.sqrt(2*np.pi*(dv/3.e5*l0)**2)*np.exp(-(lam-l0)**2/2/(dv/3.e5*l0)**2)
continuum = lam*0.+line.max()*0.1
line_fnu = line*lam**2/3.e18
continuum_fnu = continuum*lam**2/3.e18
xfilt, yfilt = np.loadtxt(os.getenv('iref')+'/F140W.dat', unpack=True)
yfilt_int = np.interp(lam, xfilt, yfilt) #/np.trapz(yfilt, xfilt)
## filter width
piv = np.sqrt(np.trapz(yfilt*xfilt, xfilt)/np.trapz(yfilt/xfilt, xfilt))
INT, SQRT, LAM, THRU, LN = np.trapz, np.sqrt, xfilt, yfilt, np.log
BARLAM = INT(THRU * LN(LAM) / LAM, LAM) / INT(THRU / LAM, LAM)
BANDW = BARLAM * SQRT(INT(THRU * LN(LAM / BARLAM)**2 / LAM, LAM) / INT(THRU / LAM, LAM))
barlam = np.trapz(yfilt*np.log(xfilt)/xfilt, xfilt) / np.trapz(yfilt/xfilt, xfilt)
bandw = barlam*np.sqrt(np.trapz(yfilt*np.log(xfilt/barlam)**2/xfilt, xfilt) / np.trapz(yfilt/xfilt, xfilt))
nu = 3.e8/(lam*1.e-10)
bigL = -np.trapz(line_fnu*yfilt_int, nu)
bigC = -np.trapz(continuum_fnu*yfilt_int, nu)
bigF = -np.trapz(yfilt_int, nu)
bigW = np.trapz(line/continuum, lam)
xfilt_125, yfilt_125 = np.loadtxt(os.getenv('iref')+'/F125W.dat', unpack=True)
yfilt_int_125 = np.interp(lam, xfilt_125, yfilt_125) #/np.trapz(yfilt, xfilt)
bigL_125 = -np.trapz(line_fnu*yfilt_int_125, nu)
bigC_125 = -np.trapz(continuum_fnu*yfilt_int_125, nu)
bigF_125 = -np.trapz(yfilt_int_125, nu)
# integrated line flux
alpha = 5.e-17
### plot trend
mag = np.arange(20, 25.5, 0.05)
fnu = 10**(-0.4*(mag+48.6))
EQW = bigW*bigC / (bigF/alpha*fnu-bigL)
#plt.plot(mag, EQW)
#
# EQW2 = bigW*bigC*alpha/bigF / (fnu-alpha/bigF*bigL)
# plt.plot(mag, EQW2)
#
# EQW3 = 3.19e-27 / (fnu - 8.04e-31)
# plt.plot(mag, EQW3)
#
# EQW4 = 3.19e-27 / (10**(-0.4*mag)*3.63e-20 - 8.04e-31)
# plt.plot(mag, EQW4)
#
# EQW5 = 8.78e-8*(alpha/5.e-17) / (10**(-0.4*mag)-2.21e-11*(alpha/5.e-17))
# plt.plot(mag, EQW5)
#
# m0 = 23
# EQW6 = 8.78e-8*(alpha/5.e-17) / (10**(-0.4*m0)-2.21e-11*(alpha/5.e-17))
#### The above equation diverges as the continuum goes to zero: the
#### magnitude can't be fainter than that of the integrated line alone
line_only = (line+continuum*0)
line_only_fnu = line_only*lam**2/3.e18
fnu_filt = np.trapz(line_only_fnu*yfilt_int, nu) / np.trapz(yfilt_int, nu)
mag_limit = -2.5*np.log10(alpha*fnu_filt)-48.6
mag_limit2 = -2.5*np.log10(alpha/5.e-17)-2.5*np.log10(5.e-17)-2.5*np.log10(fnu_filt)-48.6
mag_limit3 = -2.5*np.log10(alpha/5.e-17)+26.64
### test if test case comes out right
spec_obs = alpha*(line+continuum)
spec_obs_fnu = spec_obs*lam**2/3.e18
fnu_filt = np.trapz(spec_obs_fnu*yfilt_int, nu) / np.trapz(yfilt_int, nu)
mag_obs = -2.5*np.log10(fnu_filt)-48.6
eqw_obs = np.trapz(line/continuum, lam)
#plt.plot([mag_obs,mag_obs], [eqw_obs,eqw_obs], marker='o', ms=15)
#### Make figure
mag = np.arange(20, 26, 0.1)
fnu = 10**(-0.4*(mag+48.6))
#fig = unicorn.catalogs.plot_init(xs=3.8, left=0.10, bottom=0.08, use_tex=True, square=True, fontsize=11)
fig = unicorn.catalogs.plot_init(left=0.11, bottom=0.08, xs=3.8, right=0.09, top=0.01, use_tex=True)
ax = fig.add_subplot(111)
lst = ['-.','-','--',':']
#### Show Arjen's sample
elg = catIO.Readfile('elg_mag.txt')
elg.oiii = (5+15)*1./(5+15+23)*elg.ew_obs_tot
elg.avmag = -2.5*np.log10(0.5*10**(-0.4*elg.j)+0.5*10**(-0.4*elg.h))
ax.scatter(elg.j, elg.oiii, alpha=0.4, s=10, color='red', label=r'van der Wel et al. 2011 ($J_{125}$)')
EQW_125 = bigW*bigC_125 / (bigF_125/5.e-17*fnu-bigL_125)
ax.plot(mag, EQW_125, color='red', alpha=0.5)
for ii, limit in enumerate([1.e-17, 3.e-17, 5.e-17, 1.e-16][::-1]):
EQW = bigW*bigC / (bigF/limit*fnu-bigL)
ll = np.log10(limit)
l0 = np.round(10**(ll-np.floor(ll)))
l10 = np.floor(ll)
ax.plot(mag, EQW, label=r'$f_{\lambda,\mathrm{line}} = %0d\times10^{%0d}$' %(l0,l10), linestyle=lst[ii], color='black')
ax.semilogy()
ax.set_xlabel(r'$m_{140}$')
ax.set_ylabel(r'Equivalent width (\AA)')
ax.set_ylim(1,1.e4)
ax.set_xlim(20,26)
#8.78e-8*(alpha/5.e-17) / (10**(-0.4*mag)-2.21e-11*(alpha/5.e-17))
ax.text(23,1.7,r'$\mathrm{EQW} = \frac{8.78\times10^{-8}\left(f_\mathrm{line}/5\times10^{-17}\right)}{10^{-0.4\,m_{140}}-2.21\times10^{-11}\left(f_\mathrm{line}/5\times10^{-17}\right)}$', horizontalalignment='center')
ax.legend(prop=matplotlib.font_manager.FontProperties(size=8), loc=2, bbox_to_anchor=(0.02,0.98), frameon=False)
unicorn.catalogs.savefig(fig, 'eqw_as_fn_mag.pdf')
#### compute where poisson error of source counts (1.4 um) is similar to sky background error
sky, texp = 1.6, 1200.
var = sky*texp + 20**2
bg_error = np.sqrt(var)/1200.
sens = np.interp(1.4e4, unicorn.reduce.sens_files['A'].field('WAVELENGTH'), unicorn.reduce.sens_files['A'].field('SENSITIVITY'))*46.5
mag = np.arange(14,25,0.1)
fnu = 10**(-0.4*(mag+48.6))
ctrate = fnu*3.e18/1.4e4**2*sens
re = 3 # pix
peak = 1./np.sqrt(2*np.pi*re**2)
poisson = np.sqrt(ctrate*peak*texp)/texp
m_crit = np.interp(bg_error, poisson[::-1], mag[::-1])
#### SIMULATIONS
stats = catIO.Readfile('all_simspec.dat')
plt.scatter(stats.mag, stats.ha_eqw, marker='s', alpha=0.1, s=4)
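#### The closed-form EW vs. magnitude relation quoted in the figure above can
#### be packaged as a small helper. The constants 8.78e-8 and 2.21e-11 are the
#### F140W-specific values derived in the function body; the function name
#### `eqw_limit` is invented here for illustration, and the relation is only
#### valid where the denominator stays positive.

```python
import numpy as np

def eqw_limit(m140, line_flux=5.e-17):
    # EQW = 8.78e-8*(f_line/5e-17) / (10**(-0.4*m140) - 2.21e-11*(f_line/5e-17))
    scale = line_flux / 5.e-17
    denom = 10**(-0.4 * m140) - 2.21e-11 * scale
    return 8.78e-8 * scale / denom
```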
def make_star_thumbnails():
"""
Extract thumbnails for isolated stars in COSMOS
"""
os.chdir(unicorn.GRISM_HOME+'ANALYSIS/SURVEY_PAPER')
######### Make full COSMOS catalog
file=unicorn.GRISM_HOME+'COSMOS/PREP_FLT/COSMOS-F140W_drz.fits'
ROOT_GRISM = os.path.basename(file).split('_drz.fits')[0]
se = threedhst.sex.SExtractor()
se.aXeParams()
se.copyConvFile()
se.overwrite = True
se.options['CATALOG_NAME'] = ROOT_GRISM+'_drz.cat'
se.options['CHECKIMAGE_NAME'] = ROOT_GRISM+'_seg.fits'
se.options['CHECKIMAGE_TYPE'] = 'SEGMENTATION'
se.options['WEIGHT_TYPE'] = 'MAP_WEIGHT'
se.options['WEIGHT_IMAGE'] = file+'[1]'
se.options['FILTER'] = 'Y'
se.options['DETECT_THRESH'] = '1.4'
se.options['ANALYSIS_THRESH'] = '1.4'
se.options['MAG_ZEROPOINT'] = '26.46'
status = se.sextractImage(file+'[0]', mode='direct')
cat = threedhst.sex.mySexCat('COSMOS-F140W_drz.cat')
mag, radius = np.cast[float](cat.MAG_AUTO), np.cast[float](cat.FLUX_RADIUS)
xpix, ypix = np.cast[float](cat.X_IMAGE), np.cast[float](cat.Y_IMAGE)
ra, dec = np.cast[float](cat.X_WORLD), np.cast[float](cat.Y_WORLD)
#### Find isolated point sources
points = (mag > 17) & (mag < 22) & (radius < 2.7)
# plt.plot(mag, radius, marker='o', linestyle='None', alpha=0.5, color='blue')
# plt.plot(mag[points], radius[points], marker='o', linestyle='None', alpha=0.8, color='red')
# plt.ylim(0,20)
# plt.xlim(14,26)
idx = np.arange(len(points))
isolated = mag > 1.e10
buff = 3 ## buffer, in arcsec
dmag = 2.5
scale = 0.06
for i in idx[points]:
dr = np.sqrt((xpix[i]-xpix)**2+(ypix[i]-ypix)**2)*scale
near = (dr > 0) & (dr < buff) & (mag < (mag[i]+dmag))
if len(near[near]) == 0:
isolated[i] = True
else:
isolated[i] = False
#### Make thumbnails
img = pyfits.open(unicorn.GRISM_HOME+'COSMOS/PREP_FLT/COSMOS-F140W_drz.fits')
img_data = img[1].data
img_wht = img[2].data
NPIX = int(np.ceil(buff/scale))
prim = pyfits.PrimaryHDU()
list_d = [prim]
list_w = [prim]
head = img[1].header
head['CRPIX1'], head['CRPIX2'] = NPIX, NPIX
for i in idx[points & isolated]:
print unicorn.noNewLine+'%d' %(i)
id = np.int(cat.NUMBER[i])
xi, yi = int(np.round(xpix[i])), int(np.round(ypix[i]))
sub_data = img_data[yi-NPIX:yi+NPIX, xi-NPIX: xi+NPIX]
sub_wht = img_wht[yi-NPIX:yi+NPIX, xi-NPIX: xi+NPIX]
#
head['CRVAL1'], head['CRVAL2'] = ra[i], dec[i]
head.update('MAG',mag[i])
head.update('RADIUS',radius[i])
head.update('XCENTER',xpix[i]-xi+NPIX)
head.update('YCENTER',ypix[i]-yi+NPIX)
#
list_d.append(pyfits.ImageHDU(sub_data, header=head))
list_w.append(pyfits.ImageHDU(sub_wht, header=head))
pyfits.HDUList(list_d).writeto('stars_sci.fits', clobber=True)
pyfits.HDUList(list_w).writeto('stars_wht.fits', clobber=True)
def curve_of_growth():
"""
Some code to evaluate the curve of growth of the F140W PSF, the
optimal aperture taken to be the ratio of the CoG divided by the empty aperture
sigmas, and the overall depth within some aperture.
"""
import threedhst
import unicorn
sci = pyfits.open('stars_sci.fits')
wht = pyfits.open('stars_wht.fits')
apers = np.arange(1,25,0.5)
lstep = np.arange(0, np.log10(25), 0.05)
apers = 10**lstep
NOBJ = len(sci)-1
count = 0
average_fluxes = apers*0.
stack = sci[1].data*0.
for i in range(NOBJ):
print unicorn.noNewLine+'%d' %(i)
star = sci[i+1].data
yy, xx = np.indices(star.shape)
center = (np.abs(xx-50) < 5) & (np.abs(yy-50) < 5)
xc = np.sum((star*xx)[center])/np.sum(star[center])
yc = np.sum((star*yy)[center])/np.sum(star[center])
#xc, yc = sci[i+1].header['XCENTER'], sci[i+1].header['YCENTER']
#
bg = threedhst.utils.biweight(star, both=True)
bg = threedhst.utils.biweight(star[star < (bg[0]+4*bg[1])], both=True)
star = star-bg[0]
stack = stack + star
#
NAP = len(apers)
fluxes = np.zeros(apers.shape)
for i in range(NAP):
#print unicorn.noNewLine+'%.2f' %(apers[i])
fluxes[i] = unicorn.survey_paper.aper_phot(star, xc, yc, apers[i])
#
pp = plt.plot(apers, fluxes/fluxes[20], alpha=0.2, color='blue')
average_fluxes += fluxes/fluxes[20]
count = count + 1
stack = stack / count
stack_fluxes = np.zeros(apers.shape)
for i in range(NAP):
print unicorn.noNewLine+'%.2f' %(apers[i])
stack_fluxes[i] = unicorn.survey_paper.aper_phot(stack, xc, yc, apers[i])
plt.xlabel(r'$R_\mathrm{aper}$')
plt.ylabel(r'$f/f_{10}$')
plt.text(15,0.4,'$N=%d$' %(count))
plt.savefig('curve_of_growth.pdf')
plt.close()
#plt.plot(apers, average_fluxes/count, color='black', linewidth=2)
#plt.plot(apers, stack_fluxes/stack_fluxes[20], color='red', linewidth=2)
# plt.plot(apers, average_fluxes/count / (stack_fluxes/stack_fluxes[20]))
fp = open('curve_of_growth.dat','w')
for i in range(len(apers)):
fp.write('%.2f %.3f\n' %(apers[i], (average_fluxes/count)[i]))
fp.close()
#### optimal color aperture:
empty = pyfits.open('../../EMPTY_APERTURES/COSMOS-1-F140W_drz_empty.fits')
files = glob.glob('../../EMPTY_APERTURES/*empty.fits')
for file in files:
empty = pyfits.open(file)
apertures = empty[1].data
fluxes = empty[2].data
#
sigma = apertures*0.
means = apertures*0.
for iap, ap in enumerate(apertures):
sigma[iap] = threedhst.utils.biweight(fluxes[:,iap])
means[iap] = threedhst.utils.biweight(fluxes[:,iap], mean=True)
#
ycog = np.interp(apertures, apers, (average_fluxes/count))
pp = plt.plot(apertures, ycog/(sigma/np.interp(6, apertures, sigma)), alpha=0.5)
plt.xlabel(r'$R_\mathrm{aper}$')
plt.ylabel(r'CoG / $\sigma$')
plt.plot(apertures, ycog/ycog.max(), color='black', linewidth=2)
plt.plot(apertures, (sigma/np.interp(6, apertures, sigma))*0.3, color='black', alpha=0.4, linewidth=2)
plt.savefig('optimal_aperture.pdf', dpi=100)
plt.close()
#### Actual calculation of the depth
from scipy import polyfit, polyval
APER = 0.5 # arcsec, diameter
ycog = average_fluxes/count
ycog = ycog / ycog.max()
print 'Aperture, D=%.2f"' %(APER)
files = glob.glob('../../EMPTY_APERTURES/[CG]*empty.fits')
for file in files:
empty = pyfits.open(file)
apertures = empty[1].data
fluxes = empty[2].data
#
sigma = apertures*0.
means = apertures*0.
for iap, ap in enumerate(apertures):
sigma[iap] = threedhst.utils.biweight(fluxes[:,iap])
means[iap] = threedhst.utils.biweight(fluxes[:,iap], mean=True)
#
#plt.plot(apertures, 26.46-2.5*np.log10(5*sigma), marker='o', color='black', linestyle='None')
coeffs = polyfit(np.log10(apertures), np.log10(sigma), 1)
yfit = 10**polyval(coeffs, np.log10(apertures))
pp = plt.plot(apertures, 26.46-2.5*np.log10(5*yfit), color='red')
#
apcorr = -2.5*np.log10(np.interp(APER/0.06/2, apers, ycog))
#
sig_at_aper = 10**polyval(coeffs, np.log10(APER/0.06/2))
depth = 26.46-2.5*np.log10(5*sig_at_aper)-apcorr
print '%s - %.2f' %(os.path.basename(file).split('-F14')[0], depth)
plt.ylim(23,30)
def aper_phot(array, xc, yc, aper_radius):
"""
Aperture photometry on an array
"""
from shapely.geometry import Point, Polygon
point = Point(xc, yc)
buff = point.buffer(aper_radius, resolution=16)
#### Make the aperture
im_aper = array*0.
yy, xx = np.indices(array.shape)
dr = np.sqrt((xx-xc)**2+(yy-yc)**2)
#### these are obviously in the aperture
solid = dr < (aper_radius-1.5)
im_aper[solid] = 1.
#### This is the edge
edge = (dr <= (aper_radius+1.5)) & (dr >= (aper_radius-1.5))
# for i in range(int(np.floor(xc-aper_radius)),int(np.ceil(xc+aper_radius))):
# for j in range(int(np.floor(yc-aper_radius)),int(np.ceil(yc+aper_radius))):
for i, j in zip(xx[edge], yy[edge]):
pix = Polygon(((i+0.5,j+0.5), (i+1.5,j+0.5), (i+1.5,j+1.5), (i+0.5,j+1.5)))
isect = pix.intersection(buff)
im_aper[j,i] = isect.area
return np.sum(array*im_aper)
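#### An alternative sketch of fractional-pixel circular aperture photometry
#### that avoids the Shapely polygon intersections used in `aper_phot` above,
#### approximating each edge pixel's coverage by subsampling it on an
#### oversample x oversample grid. Note the pixel-center convention here
#### (pixel i centered at integer i) differs from the half-pixel offset used
#### in `aper_phot`; the function name is invented for illustration.

```python
import numpy as np

def aper_phot_subpixel(array, xc, yc, aper_radius, oversample=10):
    """Circular aperture sum with fractional pixel coverage via subsampling."""
    ny, nx = array.shape
    yy, xx = np.indices((ny, nx)).astype(float)
    # Subpixel sample offsets within each unit pixel, centered on the pixel
    offsets = (np.arange(oversample) + 0.5) / oversample - 0.5
    frac = np.zeros((ny, nx))
    for dy in offsets:
        for dx in offsets:
            r2 = (xx + dx - xc)**2 + (yy + dy - yc)**2
            frac += r2 <= aper_radius**2
    # Fraction of each pixel's area inside the aperture
    frac /= oversample**2
    return np.sum(array * frac)
```

For a uniform unit image the result approaches the aperture area, pi*R**2.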
def make_examples():
import unicorn
unicorn.survey_paper.redshift_fit_example(id='GOODS-N-33-G141_00946')
unicorn.survey_paper.redshift_fit_example(id='GOODS-N-17-G141_00573')
unicorn.survey_paper.redshift_fit_example(id='GOODS-N-33-G141_01028')
unicorn.survey_paper.redshift_fit_example(id='COSMOS-1-G141_00252')
unicorn.survey_paper.redshift_fit_example(id='AEGIS-4-G141_00266')
unicorn.survey_paper.redshift_fit_example(id='COSMOS-5-G141_00751')
unicorn.survey_paper.redshift_fit_example(id='PRIMO-1101-G141_01022')
unicorn.survey_paper.redshift_fit_example(id='GOODS-S-24-G141_00029')
#### Examples
unicorn.survey_paper.redshift_fit_example(id='COSMOS-14-G141_00100')
unicorn.survey_paper.redshift_fit_example(id='COSMOS-18-G141_00485')
import unicorn
import unicorn.catalogs
unicorn.catalogs.read_catalogs()
from unicorn.catalogs import zout, phot, mcat, lines, rest, gfit
mat = lines.id == 'COSMOS-14-G141_00100'
print lines.id[mat][0], lines.halpha_eqw[mat][0], lines.halpha_eqw_err[mat][0], lines.halpha_flux[mat][0]
mat = lines.id == 'COSMOS-18-G141_00485'
print lines.id[mat][0], lines.halpha_eqw[mat][0], lines.halpha_eqw_err[mat][0], lines.halpha_flux[mat][0]
def redshift_fit_example(id='COSMOS-18-G141_00485', force=False):
"""
Make a big plot showing how the whole redshift fitting works
"""
#id = 'COSMOS-14-G141_00100' ### aligned along dispersion axis, weak line
#id = 'GOODS-N-33-G141_00946' ### classic spiral
#id = 'GOODS-N-17-G141_00573'
#id = 'COSMOS-18-G141_00485' ### asymmetric line, spiral
os.chdir('/research/HST/GRISM/3DHST/ANALYSIS/SURVEY_PAPER/EXAMPLE_FITS')
#### Get the necessary files from unicorn
if (not os.path.exists('%s_thumb.fits.gz' %(id))) | force:
os.system('wget http://3dhst:getspecs@unicorn.astro.yale.edu/P/GRISM_v1.6/images/%s_thumb.fits.gz' %(id))
os.system('wget http://3dhst:getspecs@unicorn.astro.yale.edu/P/GRISM_v1.6/images/%s_2D.fits.gz' %(id))
os.system('wget http://3dhst:getspecs@unicorn.astro.yale.edu/P/GRISM_v1.6/ascii/%s.dat' %(id))
os.system('rsync -avz --progress $UNICORN:/3DHST/Spectra/Work/ANALYSIS/REDSHIFT_FITS/OUTPUT/%s* OUTPUT/ ' %(id))
os.system('rsync -avz --progress $UNICORN:/3DHST/Spectra/Work/ANALYSIS/REDSHIFT_FITS/%s* ./ ' %(id))
zo = threedhst.catIO.Readfile('OUTPUT/%s.zout' %(id))
plt.rcParams['text.usetex'] = True
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = 'Times'
fig = plt.figure(figsize=(6,6))
dsep = 0.05
xsep = 0.6
left = 0.085
bottom = 0.07
spec_color = 'purple'
dy2d = 0.13
#spec_color = 'blue'
spec_color = (8/255.,47/255.,101/255.)
spec_color = 'red'
phot_color = 'orange'
#phot_color = (78/255.,97/255.,131/255.)
#phot_color = '0.7'
spec_color = 'black'
phot_color = '0.7'
temp_color = (8/255.,47/255.,101/255.)
temp_color = 'red'
########### Full spectrum
ax = fig.add_axes((left, 0.5+bottom+dy2d, 0.99-left-(1-xsep), 0.49-bottom-dy2d))
lambdaz, temp_sed, lci, obs_sed, fobs, efobs = eazy.getEazySED(0, MAIN_OUTPUT_FILE='%s' %(id), OUTPUT_DIRECTORY='OUTPUT', CACHE_FILE = 'Same')
tempfilt, coeffs, temp_seds, pz = eazy.readEazyBinary(MAIN_OUTPUT_FILE=id, OUTPUT_DIRECTORY='OUTPUT', CACHE_FILE = 'Same')
dlam_spec = lci[-1]-lci[-2]
is_spec = np.append(np.abs(1-np.abs(lci[1:]-lci[0:-1])/dlam_spec) < 0.05,True)
obs_convert = 10**(-0.4*(25+48.6))*3.e18/lci**2/10.**-19*(lci/5500.)**2
temp_convert = 10**(-0.4*(25+48.6))*3.e18/lambdaz**2/10.**-19*(lambdaz/5500.)**2
fobs, efobs, obs_sed, temp_sed = fobs*obs_convert, efobs*obs_convert, obs_sed*obs_convert, temp_sed*temp_convert
ymax = max(fobs[is_spec & (fobs > 0)])
ax.semilogx([1],[1])
## photometry
ax.plot(lci[~is_spec], obs_sed[~is_spec], marker='o', color='black', linestyle='None', markersize=6, alpha=0.2)
## best-fit SED
## Spectrum + convolved fit
#ax.plot(lci[is_spec], obs_sed[is_spec], color='black', markersize=6, alpha=0.7, linewidth=1)
ax.plot(lci[is_spec], fobs[is_spec], marker='None', alpha=0.8, color=spec_color, linewidth=2)
ax.plot(lambdaz, temp_sed, color='white', linewidth=3, alpha=0.6)
ax.plot(lambdaz, temp_sed, color=temp_color, alpha=0.6)
ax.errorbar(lci[~is_spec], fobs[~is_spec], efobs[~is_spec], marker='o', linestyle='None', alpha=0.6, color=phot_color, markersize=10)
ax.set_yticklabels([])
ax.set_ylabel(r'$f_\lambda$')
ax.set_xlabel(r'$\lambda$')
xtick = ax.set_xticks(np.array([0.5, 1., 2, 4])*1.e4)
ax.set_xticklabels(np.array([0.5, 1., 2, 4]))
#ax.set_xlim(3000,9.e4)
ax.set_xlim(3290,2.5e4)
ax.set_ylim(-0.1*ymax, 1.2*ymax)
############# Sub spectrum
ax = fig.add_axes((left, bottom, 0.99-left, 0.49-bottom))
obs_sed_continuum = np.dot(tempfilt['tempfilt'][:,0:7,coeffs['izbest'][0]],coeffs['coeffs'][0:7,0])/(lci/5500.)**2*obs_convert
temp_sed_continuum = np.dot(temp_seds['temp_seds'][:,0:7],coeffs['coeffs'][0:7,0])/(1+zo.z_peak[0])**2*temp_convert
ymax = max(fobs[is_spec & (fobs > 0)]-obs_sed_continuum[is_spec & (fobs > 0)])
#ymin = min(fobs[is_spec & (fobs > 0)])
# ax.semilogx([1],[1])
## photometry
ax.plot(lci[~is_spec], obs_sed[~is_spec]-obs_sed_continuum[~is_spec], marker='o', color='black', linestyle='None', markersize=6, alpha=0.2, zorder=10)
## best-fit SED
ax.plot(lci[is_spec], fobs[is_spec]-obs_sed_continuum[is_spec], marker='None', alpha=0.8, color=spec_color, linewidth=2, zorder=10)
ax.plot(lambdaz, temp_sed-temp_sed_continuum, color=temp_color, alpha=0.3, zorder=10)
## Spectrum + convolved fit
ax.plot(lci[is_spec], obs_sed[is_spec]-obs_sed_continuum[is_spec], color='white', markersize=6, alpha=0.7, linewidth=4, zorder=10)
ax.plot(lci[is_spec], obs_sed[is_spec]-obs_sed_continuum[is_spec], color=temp_color, markersize=6, alpha=0.7, linewidth=1, zorder=10)
#ax.plot(lci[is_spec], obs_sed_continuum[is_spec]-obs_sed_continuum[is_spec], color='black', markersize=6, alpha=0.3, linewidth=2)
ax.errorbar(lci[~is_spec], fobs[~is_spec]-obs_sed_continuum[~is_spec], efobs[~is_spec], marker='o', linestyle='None', alpha=0.6, color=phot_color, markersize=10)
#ax.set_yticklabels([])
#ax.set_ylabel(r'$f_\lambda-\ \mathrm{continuum}$')
ax.set_ylabel(r'$f_\lambda - f_{\lambda,\ \mathrm{cont.}}\ [10^{-19}\ \mathrm{erg\ s^{-1}\ cm^{-2}\ \AA^{-1}}]$')
ax.set_xlabel(r'$\lambda\ [\mu\mathrm{m}]$')
xtick = ax.set_xticks(np.array([1.2, 1.4,1.6])*1.e4)
ax.set_xticklabels(np.array([1.2, 1.4,1.6]))
#ax.set_xlim(3000,9.e4)
ax.set_xlim(1.05e4,1.7e4)
ax.set_ylim(-0.2*ymax, 1.2*ymax)
########### p(z)
ax = fig.add_axes((xsep+left, 0.5+bottom+dy2d, 0.99-left-xsep, 0.49-bottom-dy2d))
colors = [spec_color,phot_color,'blue']
alpha = [0.5, 0.5, 0.2]
zmin = 4
zmax = 0
ymax = 0
for i in range(2):
zgrid, pz = eazy.getEazyPz(i, MAIN_OUTPUT_FILE='%s' %(id),
OUTPUT_DIRECTORY='./OUTPUT',
CACHE_FILE='Same')
ax.fill_between(zgrid, pz, pz*0., color=colors[i], alpha=alpha[i], edgecolor=colors[i])
ax.fill_between(zgrid, pz, pz*0., color=colors[i], alpha=alpha[i], edgecolor=colors[i])
#
if pz.max() > ymax:
ymax = pz.max()
#
if zgrid[pz > 1.e-2].min() < zmin:
zmin = zgrid[pz > 1.e-2].min()
#
if zgrid[pz > 1.e-2].max() > zmax:
zmax = zgrid[pz > 1.e-2].max()
ax.plot(zo.z_spec[0]*np.array([1,1]),[0,1.e4], color='green', linewidth=1)
ax.set_yticklabels([])
ax.set_xlabel(r'$z$')
ax.set_ylabel(r'$p(z)$')
ax.xaxis.set_major_locator(unicorn.analysis.MyLocator(4, prune='both'))
### Plot labels
#ax.text(0.5, 0.9, '%s' %(id), transform = ax.transAxes, horizontalalignment='center')
xtxt, align = 0.95,'right'
xtxt, align = 0.5,'right'
fs, dyt = 9, 0.1
fs, dyt = 10,0.13
ax.text(xtxt, 0.8, r'$z_\mathrm{phot}=$'+'%5.3f' %(zo.z_peak[1]), transform = ax.transAxes, horizontalalignment=align, fontsize=fs)
ax.text(xtxt, 0.8-dyt, r'$z_\mathrm{gris}=$'+'%5.3f' %(zo.z_peak[0]), transform = ax.transAxes, horizontalalignment=align, fontsize=fs)
if zo.z_spec[0] > 0:
ax.text(xtxt, 0.8-2*dyt, r'$z_\mathrm{spec}=$'+'%5.3f' %(zo.z_spec[0]), transform = ax.transAxes, horizontalalignment=align, fontsize=fs)
ax.set_xlim(zmin, zmax)
#ax.set_xlim(zgrid.min(), zgrid.max())
ax.set_ylim(0,1.1*ymax)
#################### 2D spectrum
thumb = pyfits.open('%s_thumb.fits.gz' %(id))
thumb_data = thumb[0].data
#thumb_data[10,:] = 1000
profile = np.sum(thumb_data, axis=1)
NSUB = int(np.round(0.5*thumb_data.shape[0]))/2
yc = thumb_data.shape[0]/2
dx = NSUB*2*22/(ax.get_xlim()[1]-ax.get_xlim()[0])*(0.98-left)
dx = dy2d
ax = fig.add_axes((left, 0.49, 0.99-left, dy2d))
#ax.errorbar(lci[~is_spec], fobs[~is_spec]-obs_sed_continuum[~is_spec], efobs[~is_spec], marker='o', linestyle='None', alpha=0.6, color=phot_color, markersize=10)
twod_file = '%s_2D.fits.gz' %(id)
twod = pyfits.open(twod_file)
spec2d = twod[1].data-twod[4].data
head = twod[1].header
lam_idx = np.arange(head['NAXIS1'])
lam = (lam_idx+1-head['CRPIX1'])*head['CDELT1']+head['CRVAL1']
lam_mima = np.cast[int](np.round(np.interp(np.array([1.05e4,1.7e4]), lam, lam_idx)))
tick_int = np.interp(np.array([1.2,1.4,1.6])*1.e4, lam, lam_idx)-np.interp(1.05e4, lam, lam_idx)
spec2d_sub = spec2d[yc-NSUB:yc+NSUB,lam_mima[0]:lam_mima[1]]
ax.imshow(0-spec2d_sub, aspect='auto', vmin=-0.1, vmax=0.01, interpolation='nearest')
ax.set_yticklabels([]); ax.set_xticklabels([])
xtick = ax.set_xticks(tick_int); ytick = ax.set_yticks([0,2*NSUB])
########### Thumbnail
#ax = fig.add_axes((left+left*0.3, 0.49-dx-left*0.3, dx, dx))
ax = fig.add_axes((left, 0.49, dx, dx))
#ax.imshow(thumb_data[yc-NSUB:yc+NSUB, yc-NSUB:yc+NSUB], vmin=-0.05, vmax=0.5, interpolation='nearest')
ax.imshow(0-thumb_data[yc-NSUB:yc+NSUB, yc-NSUB:yc+NSUB], vmin=-0.7, vmax=0.05, interpolation='nearest', zorder=2)
ax.set_yticklabels([])
ax.set_xticklabels([])
#ax = fig.add_axes((left+left*0.3*2+dx, 0.49-dx-left*0.3, dx, dx))
#profile = np.sum(thumb_data[yc-NSUB:yc+NSUB, yc-NSUB:yc+NSUB], axis=0)
#ax.plot(profile/profile.max(), color='black', alpha=0.4)
size = thumb[0].data.shape
twod_file = '%s_2D.fits.gz' %(id)
twod = pyfits.open(twod_file)
model1D = np.matrix(twod[5].data.sum(axis=1))
model1D /= np.max(model1D)
model2D = np.array(np.dot(np.transpose(model1D),np.ones((1,size[0]))))
thumb_data *= model2D
profile = np.sum(thumb_data[yc-NSUB:yc+NSUB, yc-NSUB:yc+NSUB], axis=0)
ax.plot(profile/profile.max()*2*NSUB*0.8, color='black', alpha=0.3, zorder=2)
ax.set_xlim(0,2*NSUB); ax.set_ylim(0,2*NSUB)
ax.set_yticklabels([])
ax.set_xticklabels([])
xtick = ax.set_xticks([0,2*NSUB]); ytick = ax.set_yticks([0,2*NSUB])
fig.savefig('%s_example.pdf' %(id))
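The 2D-spectrum wavelength axis above is reconstructed from the linear FITS WCS keywords (`CRVAL1`, `CRPIX1`, `CDELT1`). A minimal, self-contained sketch of that mapping, with made-up header values (the real ones come from the first extension of the `*_2D.fits.gz` file):

```python
import numpy as np

# Hypothetical linear-WCS header values, for illustration only.
header = {'NAXIS1': 5, 'CRPIX1': 1.0, 'CRVAL1': 1.05e4, 'CDELT1': 46.5}

def linear_wave(header):
    """Wavelength of each pixel for a linear FITS WCS.

    FITS pixels are 1-indexed, so 0-indexed array pixel i maps to
    lam = (i + 1 - CRPIX1) * CDELT1 + CRVAL1, as in the plotting code.
    """
    idx = np.arange(header['NAXIS1'])
    return (idx + 1 - header['CRPIX1']) * header['CDELT1'] + header['CRVAL1']

lam = linear_wave(header)
```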
def equivalent_width_errors():
"""
Plot line S/N versus line flux and observed equivalent width, to estimate
the limiting equivalent widths as a function of magnitude.
"""
import unicorn
import unicorn.catalogs
unicorn.catalogs.read_catalogs()
from unicorn.catalogs import zout, phot, mcat, lines, rest, gfit
os.chdir('/research/HST/GRISM/3DHST/ANALYSIS/SURVEY_PAPER')
keep = unicorn.catalogs.run_selection(zmin=0.8, zmax=5.5, fcontam=0.2, qzmin=0., qzmax=0.1, dr=1.0, has_zspec=False, fcovermin=0.9, fcovermax=1.0, massmin=8.5, massmax=15, magmin=17, magmax=23.5)
keep_22 = unicorn.catalogs.run_selection(zmin=0.8, zmax=5.5, fcontam=0.2, qzmin=0., qzmax=0.1, dr=1.0, has_zspec=False, fcovermin=0.9, fcovermax=1.0, massmin=8.5, massmax=15, magmin=21.7, magmax=22.3)
halpha_sn = lines.halpha_eqw / lines.halpha_eqw_err
#halpha_sn[(halpha_sn > 0) & (halpha_sn < 1)] = 2
keep_ha = keep & (halpha_sn[lines.idx] > 0)
oiii_sn = lines.oiii_eqw / lines.oiii_eqw_err
keep_oiii = keep & (oiii_sn[lines.idx] > 0)
#plt.scatter(phot.mag_f1392w[phot.idx][keep], phot.flux_radius[phot.idx][keep], marker='o')
marker_size = phot.flux_radius[phot.idx]**1.5
colors = 'purple'
# colors = (phot.mag_f1392w[phot.idx]-17)
# colors[colors < 0] = 0
# colors[colors > 5] = 5
##### Flux
plt.rcParams['text.usetex'] = True
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = 'Times'
fig = unicorn.catalogs.plot_init(square=True, xs=5, aspect=5/4., left=0.12)
fig.subplots_adjust(wspace=0.20,hspace=0.24,left=0.12,
bottom=0.08,right=0.98,top=0.98)
plt.rcParams['patch.edgecolor'] = 'k'
ax = fig.add_subplot(211)
ax.scatter(lines.halpha_flux[lines.idx][keep_ha], halpha_sn[lines.idx][keep_ha], marker='o', c='purple', alpha=0.1, s=marker_size[keep_ha])
ax.scatter(lines.oiii_flux[lines.idx][keep_oiii], oiii_sn[lines.idx][keep_oiii], marker='o', c='orange', alpha=0.1, s=marker_size[keep_oiii])
xm, ym, ys, ns = threedhst.utils.runmed(lines.halpha_flux[lines.idx][keep_ha], halpha_sn[lines.idx][keep_ha], NBIN=20, median=True)
ax.plot(xm, ym, color='white', alpha=0.6, linewidth=4)
ax.plot(xm, ym, color='purple', alpha=0.8, linewidth=3)
xm, ym, ys, ns = threedhst.utils.runmed(lines.oiii_flux[lines.idx][keep_oiii], oiii_sn[lines.idx][keep_oiii], NBIN=20, median=True)
ax.plot(xm[:-1], ym[:-1], color='white', alpha=0.6, linewidth=4)
ax.plot(xm[:-1], ym[:-1], color='orange', alpha=0.8, linewidth=3)
## label
for si in [2,4,8,16]:
ax.scatter(np.array([1,1])*2.e-17, np.array([1,1])*25*si**0.4, s=si**1.5, color='black', alpha=0.2)
ax.text(2.e-17*1.3, 25*si**0.4, '%.1f' %(si*0.06), verticalalignment='center')
nha = len(lines.halpha_flux[lines.idx][keep_ha])
noiii = len(lines.oiii_flux[lines.idx][keep_oiii])
ax.text(2.e-17*1.15, 25*(0.5)**0.4, r'$N_\mathrm{H\alpha}=%d$' %(nha), color='purple', horizontalalignment='center')
ax.text(2.e-17*1.15, 25*(0.5/3)**0.4, r'$N_\mathrm{O III}=%d$' %(noiii), color='orange', horizontalalignment='center')
ax.semilogy()
ax.semilogx()
ax.set_ylim(1,100)
ax.set_xlim(1.e-17,2.e-15)
ax.set_yticklabels([1,3,10,30,100])
ytick = ax.set_yticks([1,3,10,30,100])
ax.set_ylabel('line S / N')
ax.set_xlabel(r'line flux $\mathrm{[ergs\ s^{-1}\ cm^{-2}]}$')
#### EqW
#fig = unicorn.catalogs.plot_init(square=True, xs=5, aspect=1, left=0.12)
#plt.rcParams['patch.edgecolor'] = 'k'
ax = fig.add_subplot(212)
marker_size = 10**(-0.4*(18-phot.mag_f1392w[phot.idx]))**0.8
zz = lines.z_grism[lines.idx]*0
zz = lines.z_grism[lines.idx]
ax.scatter(lines.halpha_eqw[lines.idx][keep_ha]*(1+zz[keep_ha]), halpha_sn[lines.idx][keep_ha], marker='o', c='purple', alpha=0.1, s=marker_size[keep_ha])
ax.scatter(lines.oiii_eqw[lines.idx][keep_oiii]*(1+zz[keep_oiii]), oiii_sn[lines.idx][keep_oiii], marker='o', c='orange', alpha=0.1, s=marker_size[keep_oiii])
xm, ym, ys, ns = threedhst.utils.runmed(lines.halpha_eqw[lines.idx][keep_ha]*(1+zz[keep_ha]), halpha_sn[lines.idx][keep_ha], NBIN=20, median=False)
ax.plot(xm, ym, color='white', alpha=0.6, linewidth=4)
ax.plot(xm, ym, color='purple', alpha=0.8, linewidth=3)
xm, ym, ys, ns = threedhst.utils.runmed(lines.oiii_eqw[lines.idx][keep_oiii]*(1+zz[keep_oiii]), oiii_sn[lines.idx][keep_oiii], NBIN=20, median=True)
ax.plot(xm, ym, color='white', alpha=0.6, linewidth=4)
ax.plot(xm, ym, color='orange', alpha=0.8, linewidth=3)
for si, mag in enumerate([19, 21, 23]):
ax.scatter(np.array([1,1])*10, np.array([1,1])*25*(2**(si+1))**0.4, s=10**(-0.4*(18-mag))**0.8, color='black', alpha=0.2)
ax.text(10*1.3, 25*(2**(si+1))**0.4, '%d' %(mag), verticalalignment='center')
ax.semilogy()
ax.semilogx()
ax.set_ylim(1,100)
ax.set_xlim(5,1000)
ax.set_yticklabels([1,3,10,30,100])
ytick = ax.set_yticks([1,3,10,30,100])
ax.set_xticklabels([5,10,100,500])
xtick = ax.set_xticks([5,10,100, 500])
ax.set_ylabel('line S / N')
if plt.rcParams['text.usetex']:
ax.set_xlabel(r'Equivalent width [\AA]')
else:
ax.set_xlabel(r'Equivalent width [$\AA$]')
fig.savefig('equivalent_width_errors.pdf')
plt.rcParams['text.usetex'] = False
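The running trend lines in both panels come from `threedhst.utils.runmed`. A rough stand-in, assuming equal-occupancy bins with a median and NMAD scatter per bin (the actual utility may differ in detail; the name `runmed_sketch` is made up):

```python
import numpy as np

def runmed_sketch(x, y, nbin=20):
    """Running median of y in nbin equal-occupancy bins of x.

    Hypothetical stand-in for threedhst.utils.runmed; returns bin
    centers, medians, NMAD scatter, and counts per bin.
    """
    order = np.argsort(x)
    xs, ys = np.asarray(x)[order], np.asarray(y)[order]
    edges = np.linspace(0, len(xs), nbin + 1).astype(int)
    xm, ym, ysig, ns = [], [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        if hi <= lo:
            continue
        med = np.median(ys[lo:hi])
        xm.append(np.median(xs[lo:hi]))
        ym.append(med)
        ysig.append(1.48 * np.median(np.abs(ys[lo:hi] - med)))
        ns.append(hi - lo)
    return np.array(xm), np.array(ym), np.array(ysig), np.array(ns)
```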
def zphot_zspec_plot():
import unicorn
import unicorn.catalogs
import copy
os.chdir(unicorn.GRISM_HOME+'/ANALYSIS/SURVEY_PAPER')
unicorn.catalogs.read_catalogs()
from unicorn.catalogs import zout, phot, mcat, lines, rest, gfit
if unicorn.catalogs.zsp is None:
unicorn.catalogs.make_specz_catalog()
zsp = unicorn.catalogs.zsp
USE_NEW_FITS=True
if USE_NEW_FITS:
##### Refit redshifts gets rid of the offset
zout_new = catIO.Readfile('/research/HST/GRISM/3DHST/UDF/CATALOGS/LINE_TEMPLATES/full_redshift_fixed_noTilt.cat')
#zout_new = catIO.Readfile('/research/HST/GRISM/3DHST/UDF/CATALOGS/LINE_TEMPLATES/full_redshift_origTemp_noTilt.cat')
zout_new = catIO.Readfile('/research/HST/GRISM/3DHST/UDF/CATALOGS/LINE_TEMPLATES/full_redshift_scaleSpecErr3_noTilt.cat')
zout_new = catIO.Readfile('/research/HST/GRISM/3DHST/UDF/CATALOGS/LINE_TEMPLATES/full_redshift_scaleSpecErr2_noTilt.cat')
zout_new = catIO.Readfile('/research/HST/GRISM/3DHST/UDF/CATALOGS/LINE_TEMPLATES/full_redshift_scaleSpecErr2_yesTilt.cat')
refit = zout.id[0::3] == 'x'
refit_idx = zout.z_peak[0::3]*0.
for i in range(len(zout.id[0::3])):
print unicorn.noNewLine+'%d' %(i)
if zout.id[i*3] in zout_new.id:
refit[i] = True
refit_idx[i] = np.where(zout_new.id[0::3] == zout.id[i*3])[0][0]
refit_idx = np.cast[int](refit_idx)
zphot = zout_new.z_peak[0::3][refit_idx]
qz = zout_new.q_z[0::3][refit_idx]
qz2 = zout_new.q_z[2::3][refit_idx]
else:
zphot = zout.z_peak[0::3]
qz = zout.q_z[0::3]
qz2 = zout.q_z[2::3]
maglim = 24
qzmax = 0.2
contam_max = 0.05
stats_zmin = 0.7
keep = (phot.mag_f1392w[phot.idx] < maglim) & (phot.fcontam[phot.idx] < contam_max) & (qz < qzmax) & (phot.fcover[phot.idx] > 0.9) & (mcat.logm[mcat.idx] > 0) & (mcat.rmatch[mcat.idx] < 0.5) & (zsp.zspec[zsp.mat_idx] > 0) & (zsp.dr < 1)
#### Same selection but nothing on specz
keep_nospec = (phot.mag_f1392w[phot.idx] < maglim) & (phot.fcontam[phot.idx] < 0.05) & (qz < qzmax) & (phot.fcover[phot.idx] > 0.9) & (mcat.logm[mcat.idx] > 0) & (mcat.rmatch[mcat.idx] < 0.5) & (zout.z_peak[0::3] > stats_zmin)
keep_nospec_goods = (phot.mag_f1392w[phot.idx] < maglim) & (phot.fcontam[phot.idx] < 0.05) & (qz < qzmax) & (phot.fcover[phot.idx] > 0.9) & (mcat.logm[mcat.idx] > 0) & (mcat.rmatch[mcat.idx] < 0.5) & (zout.z_peak[0::3] > stats_zmin) & ((phot.field[phot.idx] == 'GOODS-N') | (phot.field[phot.idx] == 'GOODS-S'))
keep_hasspec = (phot.mag_f1392w[phot.idx] < maglim) & (phot.fcontam[phot.idx] < 0.05) & (qz < qzmax) & (phot.fcover[phot.idx] > 0.9) & (mcat.logm[mcat.idx] > 0) & (mcat.rmatch[mcat.idx] < 0.5) & (zout.z_peak[0::3] > stats_zmin) & (zsp.zspec[zsp.mat_idx] > 0) & (zsp.dr < 1)
keep_hasspec_goods = (phot.mag_f1392w[phot.idx] < maglim) & (phot.fcontam[phot.idx] < 0.05) & (qz < qzmax) & (phot.fcover[phot.idx] > 0.9) & (mcat.logm[mcat.idx] > 0) & (mcat.rmatch[mcat.idx] < 0.5) & (zout.z_peak[0::3] > stats_zmin) & (zsp.zspec[zsp.mat_idx] > 0) & (zsp.dr < 1) & ((phot.field[phot.idx] == 'GOODS-N') | (phot.field[phot.idx] == 'GOODS-S'))
#### Spectroscopic redshift ratio by field
for field in ['GOODS-N', 'GOODS-S', 'COSMOS', 'AEGIS']:
print '%s %.2f' %(field, len(keep[keep_hasspec & (phot.field[phot.idx] == field)])*1. / len(keep[keep_nospec & (phot.field[phot.idx] == field)]))
print len(keep[keep_hasspec])*1./len(keep[keep_nospec]), len(keep[keep_nospec])
#### Only way to weed out the few objects where the photometry wasn't found for the fit
keep = keep & (qz != qz2)
if USE_NEW_FITS:
keep = keep & (refit_idx > 0)
plt.rcParams['text.usetex'] = True
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = 'Times'
fig = unicorn.catalogs.plot_init(left=0.07, xs=3, bottom=0.07)
ax = fig.add_subplot(111)
zsplit = stats_zmin
ms=2
ax.plot(np.log10(1+zsp.zspec[zsp.mat_idx][keep & (zsp.zspec[zsp.mat_idx] > zsplit)]), np.log10(1+zphot[keep & (zsp.zspec[zsp.mat_idx] > zsplit)]), marker='o', linestyle='None', alpha=0.2, color='black', markersize=ms)
ax.plot(np.log10(1+zsp.zspec[zsp.mat_idx][keep & (zsp.zspec[zsp.mat_idx] < zsplit)]), np.log10(1+zphot[keep & (zsp.zspec[zsp.mat_idx] < zsplit)]), marker='o', linestyle='None', alpha=0.2, color='0.9', markersize=ms)
ax.plot([0,5],[0,5], color='white', alpha=0.2, linewidth=3)
#ax.plot([0,5],[0,5], color='black', alpha=0.3, linewidth=1)
zz = np.array([0,4])
ax.plot(np.log10(1+zz), np.log10(1+zz+0.1*(1+zz)), linestyle='--', color='0.8', alpha=0.5)
ax.plot(np.log10(1+zz), np.log10(1+zz-0.1*(1+zz)), linestyle='--', color='0.8', alpha=0.5)
ax.set_xticklabels(['0','1','2','3','4'])
ax.set_xticks(np.log10(1+np.array([0,1,2,3,4])))
ax.set_yticklabels(['0','1','2','3','4'])
ax.set_yticks(np.log10(1+np.array([0,1,2,3,4])))
ax.set_xlim(np.log10(1+0),np.log10(4+1))
ax.set_ylim(np.log10(1+0),np.log10(4+1))
ax.set_xlabel(r'$z_\mathrm{spec}$')
ax.set_ylabel(r'$z_\mathrm{G141+phot}$')
dz = (zphot - zsp.zspec[zsp.mat_idx])/(1+zsp.zspec[zsp.mat_idx])
clip = np.abs(dz) < 0.1
sigma_gt1 = threedhst.utils.nmad(dz[keep & (zout.z_spec[0::3] > 1)])
sigma_gt1_clip = threedhst.utils.nmad(dz[keep & (zout.z_spec[0::3] > 1) & clip])
sigma_gt0_biw = threedhst.utils.biweight(dz[keep & (zout.z_spec[0::3] > stats_zmin)])
sigma_gt0 = threedhst.utils.nmad(dz[keep & (zout.z_spec[0::3] > stats_zmin)])
sigma_gt0_clip = threedhst.utils.nmad(dz[keep & (zout.z_spec[0::3] > stats_zmin) & clip])
NOUT = len(dz[keep & (zout.z_spec[0::3] > stats_zmin) & ~clip])*1./len(dz[keep & (zout.z_spec[0::3] > stats_zmin)])
fs = 9
print sigma_gt0, sigma_gt0_clip, sigma_gt1, sigma_gt1_clip, NOUT
ax.text(0.1,0.9,r'$H_{140} <\ %.1f,\ z_\mathrm{spec} >\ %.1f,\ Q_z <\ %.2f$' %(maglim, stats_zmin, qzmax), transform=ax.transAxes, fontsize=fs)
ax.text(0.1,0.81,r'$N=%d$' %(len(dz[keep & (zout.z_spec[0::3] > stats_zmin)])), transform=ax.transAxes, fontsize=fs)
ax.text(0.1,0.72,r'$\sigma_\mathrm{NMAD}=%.4f$' %(sigma_gt0), transform=ax.transAxes, fontsize=fs)
pct = '\%'
ax.text(0.1,0.63,r'$f_\mathrm{>0.1}=%.1f%s$' %(NOUT*100,pct), transform=ax.transAxes, fontsize=fs)
# zbox = np.log10(1+stats_zmin)
# ax.fill_between([0,zbox],[0,0],[zbox,zbox], color='red', alpha=0.1)
ax.set_xlim(np.log10(0.0+1),np.log10(3.5+1))
ax.set_ylim(np.log10(0.0+1),np.log10(3.5+1))
fig.savefig('zphot_zspec.pdf')
plt.rcParams['text.usetex'] = False
##### Show line misidentifications
# zha = np.log10(np.array([1.05e4,1.68e4])/6563.)
# ax.plot(zha, zha+np.log10(6563./5007), color='green', alpha=0.5)
# ax.plot(zha, zha+np.log10(6563./3727), color='purple', alpha=0.5)
# ax.plot(zha, zha+np.log10(6563./4863), color='orange', alpha=0.5)
# zhb = np.log10(np.array([1.05e4,1.68e4])/4861.)
# ax.plot(zhb, zhb+np.log10(4861./3727), color='blue', alpha=0.5)
# zoiii = np.log10(np.array([1.05e4,1.68e4])/3727.)
# ax.plot(zoiii, zoiii+np.log10(3727./4861), color='blue', alpha=0.5)
# plt.xlim(0,np.log10(1+5))
# plt.ylim(0,np.log10(1+5))
#### Show dz as a function of parameters
if 1 == 0:
"""
Make plots to see how the redshift residuals depend on things like mag,
contamination fraction, Qz.
"""
keep = (phot.mag_f1392w[phot.idx] < 25) & (phot.fcontam[phot.idx] < 1) & (zout.q_z[0::3] < 1000) & (phot.fcover[phot.idx] > 0.9) & (mcat.logm[mcat.idx] > 0) & (mcat.rmatch[mcat.idx] < 0.5) & (zsp.zspec[zsp.mat_idx] > stats_zmin) & (zsp.dr < 1)
keep = keep & (zout.q_z[0::3] != zout.q_z[2::3])
dz = (zphot - zsp.zspec[zsp.mat_idx])/(1+zsp.zspec[zsp.mat_idx])
yr = (-0.5,0.5)
alpha, ms, color = 0.5, 2,'black'
fig = unicorn.catalogs.plot_init(xs=8,aspect=0.7,left=0.12)
#### Mag
ax = fig.add_subplot(221)
ax.plot(phot.mag_f1392w[phot.idx][keep], dz[keep], marker='o', alpha=alpha, linestyle='None', ms=ms, color=color)
xm, ym, ys, ns = threedhst.utils.runmed(phot.mag_f1392w[phot.idx][keep], dz[keep], NBIN=20)
ax.plot(xm, ys*10, color='red', linewidth=2)
ax.set_ylim(yr[0], yr[1])
ax.set_xlim(19,25)
ax.set_xlabel(r'$H_{140}$')
#### Contam
ax = fig.add_subplot(222)
ax.plot(phot.fcontam[phot.idx][keep], dz[keep], marker='o', alpha=alpha, linestyle='None', ms=ms, color=color)
xm, ym, ys, ns = threedhst.utils.runmed(phot.fcontam[phot.idx][keep], dz[keep], NBIN=20)
ax.plot(xm, ys*10, color='red', linewidth=2)
#ax.semilogx()
ax.set_ylim(yr[0], yr[1])
ax.set_xlim(0.01,1)
ax.set_xlabel(r'$f_\mathrm{cont}\ \mathrm{at}\ 1.4\ \mu m$')
#### Q_z
ax = fig.add_subplot(223)
ax.plot(zout.q_z[0::3][keep], dz[keep], marker='o', alpha=alpha, linestyle='None', ms=ms, color=color)
xm, ym, ys, ns = threedhst.utils.runmed(zout.q_z[0::3][keep], dz[keep], NBIN=20)
ax.plot(xm, ys*10, color='red', linewidth=2)
ax.semilogx()
ax.set_ylim(yr[0], yr[1])
ax.set_xlim(0.001,10)
ax.set_xlabel(r'$Q_z$')
#### Offset near z=1, appears to be due to the tilted slope of the spectrum being a bit too steep. If you just use a 0th order offset correction to the spectrum (TILT_ORDER=0), the redshifts for many of these objects become correct
#keep = keep & (np.array(zsp.source)[zsp.mat_idx] == 'Barger')
if (1==0):
dzlog = np.log10(1+zout.z_peak[0::3]) - np.log10(1+zsp.zspec[zsp.mat_idx])
bad = (dzlog > 0.027) & (dzlog < 0.047 ) & (np.log10(1+zsp.zspec[zsp.mat_idx]) > 0.2) & keep
bad = (dzlog > 0.18) & keep
bad = (np.abs(dz) > 0.1) & (zsp.zspec[zsp.mat_idx] > stats_zmin) & keep
print np.array(zsp.source)[zsp.mat_idx][bad]
print phot.id[phot.idx][bad]
for id in phot.id[phot.idx][bad]:
os.system('wget http://3dhst:getspecs@unicorn.astro.yale.edu/P/GRISM_v1.6/EAZY/%s_eazy.png' %(id))
fig = unicorn.catalogs.plot_init(left=0.12)
ax = fig.add_subplot(111)
ax.plot(np.log10(1+zsp.zspec[zsp.mat_idx][keep]), dzlog[keep], marker='o', linestyle='None', alpha=0.5, color='black', markersize=5)
ax.set_xlim(0,np.log10(4+1))
ax.set_ylim(-1,1)
#### nearby offset at z~1 ~ 0.035 in log(1+z)
offset = 0.036
print 6563*10**(-offset), 5007*10**(-offset), 4861*10**(-offset), 3727*10**(-offset)
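The commented block above sketches the line-misidentification loci. The underlying relation: a single emission line observed at wavelength lam_obs that is really line A at z_true but gets fit as line B yields log10(1+z_fit) = log10(1+z_true) + log10(lam_A/lam_B), so misidentifications appear as constant offsets in log(1+z). A small illustration (rest wavelengths as in the comments; the helper name is made up):

```python
import numpy as np

# Rest wavelengths [Angstrom] of the lines used in the commented loci above.
LAM = {'halpha': 6563., 'oiii': 5007., 'hbeta': 4861., 'oii': 3727.}

def misid_redshift(z_true, true_line, fit_line):
    """Redshift recovered when true_line is misidentified as fit_line.

    Hypothetical helper: a line observed at lam = LAM[true_line]*(1+z_true)
    interpreted as fit_line gives 1+z_fit = lam/LAM[fit_line], i.e. an
    offset of log10(LAM[true_line]/LAM[fit_line]) in log10(1+z).
    """
    lam_obs = LAM[true_line] * (1 + z_true)
    return lam_obs / LAM[fit_line] - 1.0

z_fit = misid_redshift(1.0, 'halpha', 'oiii')  # Ha at z=1 called [OIII]
```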
def zphot_zspec_lines():
"""
Investigate how the redshift errors depend on the emission-line signal-to-noise.
"""
import unicorn
import unicorn.catalogs
import copy
os.chdir(unicorn.GRISM_HOME+'/ANALYSIS/SURVEY_PAPER')
unicorn.catalogs.read_catalogs()
from unicorn.catalogs import zout, phot, mcat, lines, rest, gfit
if unicorn.catalogs.zsp is None:
unicorn.catalogs.make_specz_catalog()
zsp = unicorn.catalogs.zsp
zphot = zout.z_peak[0::3]
##### Refit redshifts gets rid of the offset
zout_new = catIO.Readfile('/research/HST/GRISM/3DHST/UDF/CATALOGS/LINE_TEMPLATES/full_redshift_fixed_centering.cat')
refit = zout.id[0::3] == 'x'
refit_idx = zout.z_peak[0::3]*0.
for i in range(len(zout.id[0::3])):
print unicorn.noNewLine+'%d' %(i)
if zout.id[i*3] in zout_new.id:
refit[i] = True
refit_idx[i] = np.where(zout_new.id[0::3] == zout.id[i*3])[0][0]
refit_idx = np.cast[int](refit_idx)
zphot = zout_new.z_peak[0::3][refit_idx]
maglim = 24
qzmax = 0.2
contam_max = 0.05
keep = (phot.mag_f1392w[phot.idx] < maglim) & (phot.fcontam[phot.idx] < contam_max) & (zout.q_z[0::3] < qzmax) & (phot.fcover[phot.idx] > 0.9) & (mcat.logm[mcat.idx] > 0) & (mcat.rmatch[mcat.idx] < 0.5) & (zsp.zspec[zsp.mat_idx] > 0) & (zsp.dr < 1)
keep = keep & (zout.q_z[0::3] != zout.q_z[2::3])
lmin, lmax = 1.2e4, 1.6e4
z_ha = (zsp.zspec[zsp.mat_idx] > (lmin/6563.-1)) & (zsp.zspec[zsp.mat_idx] < (lmax/6563.-1))
z_oiii = (zsp.zspec[zsp.mat_idx] > (lmin/5007.-1)) & (zsp.zspec[zsp.mat_idx] < (lmax/5007.-1))
dz = (zphot - zsp.zspec[zsp.mat_idx])/(1+zsp.zspec[zsp.mat_idx])
halpha_eqw = lines.halpha_eqw*1.
oiii_eqw = lines.oiii_eqw*1.
eqw_min = 0.5
rnd_halpha = np.random.rand(len(halpha_eqw))*2+1
rnd_oiii = np.random.rand(len(oiii_eqw))*2+1
halpha_eqw[halpha_eqw < eqw_min] = eqw_min*rnd_halpha[halpha_eqw < eqw_min]
oiii_eqw[oiii_eqw < eqw_min] = eqw_min*rnd_oiii[oiii_eqw < eqw_min]
ha_color, oiii_color = 'black', 'orange'
fig = unicorn.catalogs.plot_init(left=0.15, bottom=0.075, xs=4, right=0.01, top=0.01, square=True)
plt.rcParams['text.usetex'] = True
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = 'Times'
fig.subplots_adjust(wspace=0.0)
################ Ha eqw
# ax = fig.add_subplot(122)
# unicorn.survey_paper.dz_trend(halpha_eqw[lines.idx][keep & z_ha], dz[keep & z_ha], xrange=[0.8*eqw_min,1000], yrange=[-0.015, 0.02], xlog=True, ax=ax, xlabel=r'EQW H$\alpha$')
# yticks = [r'$0.01$',r'$0.1$',r'$1$',r'$10$',r'$10^{2}$']
# ax.set_yticklabels([])
# xtick = ax.set_xticks([1,10,100,1000])
# ax.set_xticklabels([1,10,100,1000])
halpha_sn = lines.halpha_eqw / lines.halpha_eqw_err
sn_min = 0.2
rnd_halpha = np.random.rand(len(halpha_sn))*2+1
halpha_sn[halpha_sn < sn_min] = sn_min*rnd_halpha[halpha_sn < sn_min]
ax = fig.add_subplot(122)
unicorn.survey_paper.dz_trend(halpha_sn[lines.idx][keep & z_ha], dz[keep & z_ha], xrange=[0.1,300], yrange=[-0.015, 0.015], xlog=True, ax=ax, xlabel=r'H$\alpha$ S/N')
yticks = [r'$0.01$',r'$0.1$',r'$1$',r'$10$',r'$10^{2}$']
ax.set_yticklabels([])
xtick = ax.set_xticks([1,10,100])
ax.set_xticklabels([1,10,100])
################ Mag F140W
ax = fig.add_subplot(121)
unicorn.survey_paper.dz_trend(phot.mag_f1392w[phot.idx][keep & z_ha], dz[keep & z_ha], xrange=[19,24], yrange=[-0.015, 0.015], ax=ax, xlabel=r'$m_{140}$')
ax.text(0.08,0.9,r'H$\alpha$, $%.1f < z < %.1f$' %((lmin/6563.-1), (lmax/6563.-1)), color='black', transform=ax.transAxes, fontsize=12)
ax.text(0.08,0.83,r'$N=%d$' %(len(z_ha[keep & z_ha])), color='black', transform=ax.transAxes, fontsize=12)
ax.text(0.08,0.12,r'$\sigma_\mathrm{NMAD}=0.0025$', color='black', transform=ax.transAxes, alpha=0.8)
ax.text(0.08,0.12,r'$\sigma_\mathrm{NMAD}=0.0025$', color='orange', transform=ax.transAxes, alpha=0.8)
ax.text(0.08,0.07,r'$\sigma_\mathrm{NMAD}=0.0050$', color='red', transform=ax.transAxes)
# ################ z
# ax = fig.add_subplot(131)
# unicorn.survey_paper.dz_trend(zsp.zspec[zsp.mat_idx][keep & z_ha], dz[keep & z_ha], xrange=[0.7,1.5], yrange=[-0.015, 0.02], ax=ax)
fig.savefig('zphot_zspec_lines.pdf')
def dz_trend(xin, yin, xrange=[0.7,1.5], yrange=[-0.015, 0.015], xlabel=r'$z_\mathrm{spec}$', xlog=False, ax=None, ms=3):
if ax is None:
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(xin, yin, linestyle='None', marker='o', alpha=0.2, color='white', zorder=200, ms=ms)
ax.plot(xin, yin, linestyle='None', marker='o', alpha=0.2, color='black', zorder=201, ms=ms)
xm, ym, ys, ns = threedhst.utils.runmed(xin, yin, NBIN=12, use_nmad=True)
xm_, ym_, ys_, ns_ = threedhst.utils.runmed(xin, yin, NBIN=10, use_nmad=True)
xm[0], ym[0], ys[0], ns[0] = xrange[0]*0.8, ym_[0], ys_[0], ns_[0]
xm[1:11], ym[1:11], ys[1:11], ns[1:11] = xm_, ym_, ys_, ns_
xm[-1], ym[-1], ys[-1], ns[-1] = xrange[1]*1.2, ym_[-1], ys_[-1], ns_[-1]
ax.plot(xm, ym, color='black', alpha=0.9, zorder=101, marker='o', linewidth=3)
ax.fill_between(xm, ym+ys, ym-ys, color='black', alpha=0.4, zorder=100)
yx = ys*0+0.0025
ax.plot(xm, ym+yx, color='orange', alpha=0.9, zorder=101, linewidth=3)
ax.plot(xm, ym-yx, color='orange', alpha=0.9, zorder=101, linewidth=3)
yx = ys*0+0.005
ax.plot(xm, ym+yx, color='red', alpha=0.9, zorder=101, linewidth=3)
ax.plot(xm, ym-yx, color='red', alpha=0.9, zorder=101, linewidth=3)
ax.plot(xm, ym*0, color='white', alpha=0.8, zorder=301, linewidth=3, linestyle='--')
ax.plot(xm, ym*0, color='black', alpha=0.8, zorder=302, linewidth=3, linestyle='--')
if xlog:
ax.semilogx()
ax.set_xlim(xrange[0],xrange[1])
ax.set_ylim(yrange[0],yrange[1])
ax.set_ylabel(r'$\Delta z / (1+z)$')
ax.set_xlabel(xlabel)
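`dz_trend` and the sigma_NMAD values quoted in the plots rely on the normalized median absolute deviation (`threedhst.utils.nmad`). A minimal sketch of that statistic applied to the dz = (z_phot - z_spec)/(1 + z_spec) residuals used throughout (the numbers below are invented for illustration):

```python
import numpy as np

def nmad(arr):
    """Normalized median absolute deviation, a robust sigma estimate:
    1.48 * median(|x - median(x)|), which equals the Gaussian sigma
    for normally distributed data."""
    arr = np.asarray(arr)
    return 1.48 * np.median(np.abs(arr - np.median(arr)))

# Redshift residuals as used throughout the plotting code.
zspec = np.array([0.9, 1.1, 1.4, 2.0])
zphot = np.array([0.905, 1.09, 1.41, 2.02])
dz = (zphot - zspec) / (1 + zspec)
sigma = nmad(dz)
```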
def compute_SFR_limits():
import cosmocalc as cc
limiting_flux = 4.e-17
### z=1, H-alpha
cosmo = cc.cosmocalc(z=1.0)
SFR_ha = 7.9e-42 * limiting_flux * 4 * np.pi * cosmo['DL_cm']**2
### z=2, OII
cosmo = cc.cosmocalc(z=2.0)
SFR_oii = 1.4e-41 * limiting_flux * 4 * np.pi * cosmo['DL_cm']**2
print SFR_ha, SFR_oii
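`compute_SFR_limits` uses the Kennicutt (1998) calibrations (SFR = 7.9e-42 L_Halpha, 1.4e-41 L_[OII], in erg/s) with `cosmocalc` supplying the luminosity distance. A self-contained sketch of the H-alpha case, assuming a flat LCDM with H0 = 71, Om = 0.27 (the cosmocalc defaults; not stated in this file):

```python
import numpy as np

C_KMS = 299792.458   # speed of light, km/s
MPC_CM = 3.0857e24   # cm per Mpc

def lum_dist_cm(z, h0=71.0, om=0.27, nstep=2048):
    """Luminosity distance [cm] for flat LCDM via a trapezoidal integral."""
    zgrid = np.linspace(0.0, z, nstep)
    inv_e = 1.0 / np.sqrt(om * (1.0 + zgrid)**3 + (1.0 - om))
    # comoving distance in Mpc, then DL = (1+z) * DC for flat space
    dc = C_KMS / h0 * np.sum(0.5 * (inv_e[1:] + inv_e[:-1]) * np.diff(zgrid))
    return (1.0 + z) * dc * MPC_CM

def sfr_halpha(line_flux, z):
    """SFR [Msun/yr] from an observed H-alpha flux [erg/s/cm^2], Kennicutt (1998)."""
    dl = lum_dist_cm(z)
    return 7.9e-42 * line_flux * 4.0 * np.pi * dl**2

sfr_limit = sfr_halpha(4.e-17, 1.0)  # limiting flux used above, at z=1
```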
def number_counts():
import unicorn
import unicorn.catalogs
import copy
os.chdir(unicorn.GRISM_HOME+'/ANALYSIS/SURVEY_PAPER')
unicorn.catalogs.read_catalogs()
from unicorn.catalogs import zout, phot, mcat, lines, rest, gfit, zsp
keep = unicorn.catalogs.run_selection(zmin=0, zmax=8, fcontam=1, qzmin=0., qzmax=10, dr=1.0, has_zspec=False, fcovermin=0.5, fcovermax=1.0, massmin=0, massmax=15, magmin=12, magmax=27)
fields = (phot.field[phot.idx] == 'AEGIS') | (phot.field[phot.idx] == 'COSMOS') | (phot.field[phot.idx] == 'GOODS-N') | (phot.field[phot.idx] == 'GOODS-S')
#fields = (phot.field[phot.idx] == 'COSMOS') | (phot.field[phot.idx] == 'AEGIS') | (phot.field[phot.idx] == 'GOODS-S')
#fields = (phot.field[phot.idx] == 'COSMOS') | (phot.field[phot.idx] == 'AEGIS')
#fields = (phot.field[phot.idx] == 'GOODS-N')
fields = fields & (phot.fcover[phot.idx] > 0.5)
pointings = []
for field, pointing in zip(phot.field, phot.pointing):
pointings.append('%s-%d' %(field, pointing))
pointings = np.array(pointings)
NPOINT = len(np.unique(pointings[phot.idx][fields]))
xrange = (12,25)
nbin = np.int(np.round((xrange[1]-xrange[0])*10.))
binwidth = (xrange[1]-xrange[0])*1./nbin
normal = 1./binwidth/NPOINT
cumul = True
normal = 1./NPOINT
#normal = 1./NPOINT*148
##### OFFSET TO TOTAL!
m140 = phot.mag_f1392w - 0.22
#m140 = phot.mag_f1392w
#### Full histogram
y_full, x_full = np.histogram(m140[phot.idx][fields], bins=nbin, range=xrange)
x_full = (x_full[1:]+x_full[:-1])/2.
if cumul:
y_full = np.cumsum(y_full)
lo_full, hi_full = threedhst.utils.gehrels(y_full)
#### Matched in photometric catalogs
matched = mcat.rmatch[mcat.idx] < 1.
matched = zout.z_peak[0::3] != zout.z_peak[2::3]
y_matched, x_matched = np.histogram(m140[phot.idx][fields & matched], bins=nbin, range=xrange)
x_matched = (x_matched[1:]+x_matched[:-1])/2.
if cumul:
y_matched = np.cumsum(y_matched)
#### point sources
xpoint, ypoint = np.array([14,18,23]), np.array([6,3.18, 2.8])
ypoint_int = np.interp(m140, xpoint, ypoint)
points = (phot.flux_radius[phot.idx] < ypoint_int[phot.idx]) #& (m140[phot.idx] < 23)
y_points, x_points = np.histogram(m140[phot.idx][fields & matched & points], bins=nbin, range=xrange)
x_points = (x_points[1:]+x_points[:-1])/2.
if cumul:
y_points = np.cumsum(y_points)
#### Low contamination
contam = phot.fcontam[phot.idx] < 0.1
y_contam, x_contam = np.histogram(m140[phot.idx][fields & contam & matched], bins=nbin, range=xrange)
x_contam = (x_contam[1:]+x_contam[:-1])/2.
if cumul:
y_contam = np.cumsum(y_contam)
#### z > 1
z1 = (zout.z_peak[0::3] > 1) & (zout.q_z[0::3] < 50.5) & ~points
y_z1, x_z1 = np.histogram(m140[phot.idx][fields & matched & z1], bins=nbin, range=xrange)
x_z1 = (x_z1[1:]+x_z1[:-1])/2.
if cumul:
y_z1 = np.cumsum(y_z1)
lo_z1, hi_z1 = threedhst.utils.gehrels(y_z1)
#wx, wy = np.loadtxt('whitaker_completeness.dat', unpack=True)
#wscale = np.interp(x_z1, wx, wy)
wscale = y_matched*1. / y_full
wscale[~np.isfinite(wscale)] = 1
wscale[wscale > 1] = 1
wscale[wscale == 0] = 1
#hi_z1 /= wscale
# lo_z1 /= wscale
# y_z1 /= wscale
#### No cut on Q_z
z1q = (zout.z_peak[0::3] > 1) & (zout.q_z[0::3] < 100) & ~points
y_z1q, x_z1q = np.histogram(m140[phot.idx][fields & matched & z1q], bins=nbin, range=xrange)
x_z1q = (x_z1q[1:]+x_z1q[:-1])/2.
if cumul:
y_z1q = np.cumsum(y_z1q)
#### Total number at z>1
print 'NPOINT: %d' %(NPOINT)
#z1q_mag = unicorn.catalogs.run_selection(zmin=1, zmax=5.5, fcontam=1, qzmin=0., qzmax=100, dr=1.0, has_zspec=False, fcovermin=0.5, fcovermax=1.0, massmin=0, massmax=15, magmin=0, magmax=23)
z1q_mag = z1q & fields & (m140[phot.idx] <= 23.8) & ~points
N_z1_total = len(z1q_mag[z1q_mag])*1./NPOINT*149.
N_total = len(z1q_mag[matched & fields & (m140[phot.idx] <= 23.8)])*1./NPOINT*149.
print 'N (z>1, m<23) = %d, N_total = %d' %(N_z1_total, N_total)
print 'N (z>1, m<23) = %d' %(np.interp(23.8, x_z1, y_z1*149./NPOINT))
#### z > 2
z2 = (zout.z_peak[0::3] > 2) & (zout.q_z[0::3] < 50.5) & ~points
y_z2, x_z2 = np.histogram(m140[phot.idx][fields & matched & z2], bins=nbin, range=xrange)
x_z2 = (x_z2[1:]+x_z2[:-1])/2.
if cumul:
y_z2 = np.cumsum(y_z2)
lo_z2, hi_z2 = threedhst.utils.gehrels(y_z2)
#hi_z2 /= wscale
#### Tail of bright objects in the z>2 set
tail = (zout.z_peak[0::3] > 2) & (zout.q_z[0::3] < 50.5) & ~points & fields & matched & (m140[phot.idx] < 21)
print('z2 tail:', zout.id[0::3][tail], mcat.rmatch[mcat.idx][tail], phot.flux_radius[phot.idx][tail], np.interp(m140[phot.idx][tail], xpoint, ypoint))
#### No cut on Q_z
z2q = (zout.z_peak[0::3] > 2) & (zout.q_z[0::3] < 100) & ~points
y_z2q, x_z2q = np.histogram(m140[phot.idx][fields & matched & z2q], bins=nbin, range=xrange)
x_z2q = (x_z2q[1:]+x_z2q[:-1])/2.
if cumul:
y_z2q = np.cumsum(y_z2q)
#### NMBS comparison
cat_nmbs, zout_nmbs, fout_nmbs = unicorn.analysis.read_catalogs(root='COSMOS-1')
#nmbs_hmag = 25-2.5*np.log10(cat_nmbs.H1*cat_nmbs.Ktot/cat_nmbs.K)
#nmbs_hmag = 25-2.5*np.log10((cat_nmbs.H1+cat_nmbs.J3+cat_nmbs.J2+cat_nmbs.H2)/4.*cat_nmbs.Ktot/cat_nmbs.K)
nmbs_hmag = 25-2.5*np.log10((cat_nmbs.H1+cat_nmbs.J3)/2.*cat_nmbs.Ktot/cat_nmbs.K)
keep_nmbs = (cat_nmbs.wmin > 0.3)
y_nmbs, x_nmbs = np.histogram(nmbs_hmag[keep_nmbs], bins=nbin, range=xrange)
x_nmbs = (x_nmbs[1:]+x_nmbs[:-1])/2.
if cumul:
y_nmbs = np.cumsum(y_nmbs)
y_nmbs = y_nmbs*1./(0.21*2*3600.)*4*NPOINT  # avoid in-place int->float cast
z1_nmbs = (zout_nmbs.z_peak > 1) & (cat_nmbs.star_flag == 0)
y_nmbs_z1, x_nmbs_z1 = np.histogram(nmbs_hmag[keep_nmbs & z1_nmbs], bins=nbin, range=xrange)
x_nmbs_z1 = (x_nmbs_z1[1:]+x_nmbs_z1[:-1])/2.
if cumul:
y_nmbs_z1 = np.cumsum(y_nmbs_z1)
y_nmbs_z1 = y_nmbs_z1*1./(0.21*2*3600)*4*NPOINT
z2_nmbs = (zout_nmbs.z_peak > 2) & (cat_nmbs.star_flag == 0)
y_nmbs_z2, x_nmbs_z2 = np.histogram(nmbs_hmag[keep_nmbs & z2_nmbs], bins=nbin, range=xrange)
x_nmbs_z2 = (x_nmbs_z2[1:]+x_nmbs_z2[:-1])/2.
if cumul:
y_nmbs_z2 = np.cumsum(y_nmbs_z2)
y_nmbs_z2 = y_nmbs_z2*1./(0.21*2*3600)*4*NPOINT
#
nmbs_stars = (cat_nmbs.star_flag == 1)
y_nmbs_stars, x_nmbs_stars = np.histogram(nmbs_hmag[keep_nmbs & nmbs_stars], bins=nbin, range=xrange)
x_nmbs_stars = (x_nmbs_stars[1:]+x_nmbs_stars[:-1])/2.
if cumul:
y_nmbs_stars = np.cumsum(y_nmbs_stars)
y_nmbs_stars = y_nmbs_stars*1./(0.21*2*3600)*4*NPOINT
#### Make the plot
fig = unicorn.catalogs.plot_init(left=0.11, bottom=0.08, xs=3.8, right=0.09, top=0.01)
plt.rcParams['text.usetex'] = True
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = 'Times'
ax = fig.add_subplot(111)
ax.plot(x_full, y_full*normal, color='black')
ax.fill_between(x_full,lo_full*normal, hi_full*normal, color='black', alpha=0.4)
ax.plot(x_matched, y_matched*normal, color='blue',alpha=0.8)
ax.plot(x_contam, y_contam*normal, color='green',alpha=0.8)
ax.plot(x_points[x_points <= 23], y_points[x_points <= 23]*normal, color='purple',alpha=0.8)
ax.plot(x_points[x_points >= 23], y_points[x_points >= 23]*normal, color='purple',alpha=0.8, linestyle=':')
ax.plot(x_z1, y_z1*normal, color='orange',alpha=0.7)
ax.fill_between(x_z1,lo_z1*normal, hi_z1*normal, color='orange', alpha=0.4)
ax.plot(x_z1q, y_z1q*normal, color='orange',alpha=0.7, linestyle='--')
ax.plot(x_z2, y_z2*normal, color='red',alpha=0.7)
ax.fill_between(x_z2,lo_z2*normal, hi_z2*normal, color='red', alpha=0.4)
ax.plot(x_z2q, y_z2q*normal, color='red',alpha=0.7, linestyle='--')
# ax.plot(x_nmbs, y_nmbs*normal, color='black',alpha=0.8, linewidth=3, alpha=0.2)
# ax.plot(x_nmbs_z1, y_nmbs_z1*normal, color='orange',alpha=0.8, linewidth=3, alpha=0.2)
# ax.plot(x_nmbs_z2, y_nmbs_z2*normal, color='red',alpha=0.8, linewidth=3, alpha=0.2)
# ax.plot(x_nmbs_stars, y_nmbs_stars*normal, color='purple',alpha=0.8, linewidth=3, alpha=0.2)
#ax.text(0.05,0.92,r'%s ($N=%d$)' %(', '.join(np.unique(phot.field[phot.idx][fields])), NPOINT), color='black', transform=ax.transAxes)
ax.text(0.05,0.92,r'Total, from $N=%d$ pointings' %(NPOINT), color='black', transform=ax.transAxes)
ax.text(0.05,0.87,r'matched', color='blue', transform=ax.transAxes)
ax.text(0.05,0.82,r'$f_\mathrm{contam} < 10\%$', color='green', transform=ax.transAxes)
ax.text(0.05,0.77,r'point sources', color='purple', transform=ax.transAxes)
ax.text(0.05,0.72,r'$z > 1$', color='orange', transform=ax.transAxes)
ax.text(0.05,0.67,r'$z > 2$', color='red', transform=ax.transAxes)
ax.set_xlabel(r'MAG\_AUTO (F140W $\sim$ $H$)')
if cumul:
ax.set_ylabel('N($<m$) per WFC3 pointing')
else:
ax.set_ylabel('N / pointing / mag')
ax.semilogy()
yticks = [r'$0.01$',r'$0.1$',r'$1$',r'$10$',r'$10^{2}$']
ax.set_yticklabels(yticks)
ytick = ax.set_yticks([0.01,0.1,1,10,100])
from matplotlib.ticker import MultipleLocator, FormatStrFormatter
minorLocator = MultipleLocator(1)
ax.xaxis.set_minor_locator(minorLocator)
ax.set_xlim(xrange[0], xrange[1])
ax.set_ylim(0.01, 500)
ax2 = ax.twinx()
ax2.semilogy()
yticks = [r'$10$',r'$10^{2}$',r'$10^{3}$',r'$10^{4}$']
ax2.set_yticklabels(yticks)
ytick = ax2.set_yticks([10,100,1000,1.e4])
ax2.set_ylim(0.01*149, 500*149)
ax2.set_ylabel('N($<m$), full survey')
ax2.set_xlim(xrange[0], xrange[1])
ax2.xaxis.set_minor_locator(minorLocator)
### Grid
ax.xaxis.grid(alpha=0.35, zorder=1, which='major')
ax.xaxis.grid(alpha=0.2, zorder=1, which='minor')
ax2.yaxis.grid(alpha=0.35, zorder=1, which='major')
fig.savefig('number_counts.pdf')
plt.rcParams['text.usetex'] = False
def ancillary_matches():
"""
Get an idea of how the matching to the ancillary catalogs depends on mag:
matched fraction
multiple matches
"""
import unicorn
import unicorn.catalogs
import copy
os.chdir(unicorn.GRISM_HOME+'/ANALYSIS/SURVEY_PAPER')
unicorn.catalogs.read_catalogs()
from unicorn.catalogs import zout, phot, mcat, lines, rest, gfit, zsp
keep = unicorn.catalogs.run_selection(zmin=0, zmax=8, fcontam=1, qzmin=0., qzmax=10, dr=1.0, has_zspec=False, fcovermin=0.5, fcovermax=1.0, massmin=0, massmax=15, magmin=12, magmax=27)
fields = (phot.field[phot.idx] == 'AEGIS') | (phot.field[phot.idx] == 'COSMOS') | (phot.field[phot.idx] == 'GOODS-N') | (phot.field[phot.idx] == 'GOODS-S')
phot_dr = np.zeros(phot.field.shape)+100
phot_id = np.zeros(phot.field.shape)
phot_kmag = np.zeros(phot.field.shape)
idx = np.arange(phot.field.shape[0])
#### Do separate matching again on every object in photometric catalog
for field in ['COSMOS','AEGIS','GOODS-S','GOODS-N']:
this = phot.field == field
cat, zout, fout = unicorn.analysis.read_catalogs(field+'-1')
cos_dec = np.cos(np.median(cat.dec)/360.*2*np.pi)**2
for i in idx[this]:
print(unicorn.noNewLine+'%d / %d' %(i, idx[this][-1]))
dr = np.sqrt((cat.ra-phot.x_world[i])**2*cos_dec+(cat.dec-phot.y_world[i])**2)*3600.
ma = dr == dr.min()
phot_dr[i] = dr.min()
phot_id[i] = cat.id[ma][0]
phot_kmag[i] = cat.kmag[ma][0]
#### Ask, "what fraction of F140W objects have multiple matches to the same ancillary object"
n_match = phot_dr*0
n_brighter = phot_dr*0.
i = 0
base_selection = (phot.fcover > 0.5) & (phot.has_spec == 1) & (phot_dr < 1.0)
for f,id,m,p in zip(phot.field, phot_id, phot.mag_f1392w,phot.pointing):
print(unicorn.noNewLine+'%d / %d' %(i, len(phot_id)))
mat = (phot.field == f) & (phot_id == id) & (phot.pointing == p) & base_selection
n_match[i] = mat.sum()
brighter = mat & (phot.mag_f1392w-m < 0.75)
n_brighter[i] = brighter.sum()-1
i = i+1
use = n_match > 0
yh_full, xh_full = np.histogram(phot.mag_f1392w[use], range=(12,26), bins=14*4)
fig = unicorn.plotting.plot_init(square=True, use_tex=True, left=0.09, bottom=0.07, xs=3.5)
ax = fig.add_subplot(111)
yh_n, xh_n = np.histogram(phot.mag_f1392w[use & (n_match > 1)], range=(12,26), bins=14*4)
ax.plot(xh_n[1:]-0.22, yh_n*1./yh_full, color='blue', linestyle='steps', label=r'$N_{\rm match} > 1$')
yh_n, xh_n = np.histogram(phot.mag_f1392w[use & (n_match > 1) & (n_brighter == 1)], range=(12,26), bins=14*4)
ax.plot(xh_n[1:]-0.22, yh_n*1./yh_full, color='red', linestyle='steps', label=r'$N_{\rm brighter} = 1$')
yh_n, xh_n = np.histogram(phot.mag_f1392w[use & (n_match > 1) & (n_brighter > 1)], range=(12,26), bins=14*4)
ax.plot(xh_n[1:]-0.22, yh_n*1./yh_full, color='orange', linestyle='steps', label=r'$N_{\rm brighter} > 1$')
ax.set_xlabel(r'$m_{140}$')
ax.set_ylabel('fraction')
ax.set_xlim(19,24.5)
ax.set_ylim(0,0.21)
ax.legend(loc='upper left', frameon=False)
unicorn.plotting.savefig(fig, 'ancillary_matched_from_f140w.pdf')
#### Check these cases of n_brighter == 1
test = (n_match > 0) & (n_brighter == 1)
idx = np.arange(len(n_match))[test]
i = 0
id = phot_id[idx][i]
mat = base_selection & (phot.field == phot.field[idx][i]) & (phot_id == phot_id[idx][i])
### Some pointings, such as GOODS-S flanking fields don't overlap with photom. catalog
test_field_goodsn = (phot.field == 'GOODS-N')
test_field_goodss = (phot.field == 'GOODS-S') & (phot.pointing != 1) & (phot.pointing != 28)
test_field_cosmos = phot.field == 'COSMOS'
test_field_aegis = phot.field == 'AEGIS' ### out of NMBS
for i in [11,2,1,6]:
test_field_aegis = test_field_aegis & (phot.pointing != i)
fig = unicorn.plotting.plot_init(square=True, use_tex=True, left=0.09, bottom=0.07, xs=3.5)
ax = fig.add_subplot(111)
#### Make a plot showing the fraction of matched galaxies
for test_field, c in zip([test_field_goodsn, test_field_goodss, test_field_cosmos, test_field_aegis], ['orange','red','blue','green']):
base_selection = (phot.fcover > 0.5) & test_field & (phot.has_spec == 1)
has_match = phot_dr < 1.0
yh_full, xh_full = np.histogram(phot.mag_f1392w[base_selection], range=(12,26), bins=14*4)
yh_mat, xh_mat = np.histogram(phot.mag_f1392w[base_selection & has_match], range=(12,26), bins=14*4)
yh_full, yh_mat = np.maximum(yh_full, 0.01), np.maximum(yh_mat, 0.01)
# plt.plot(xh_full[1:], yh_full, linestyle='steps', color='blue', alpha=0.5)
# plt.plot(xh_mat[1:], yh_mat, linestyle='steps', color='red', alpha=0.5)
# plt.semilogy()
# plt.ylim(0.5,500)
#
ax.plot(xh_full[1:]-0.22, yh_mat/yh_full, linestyle='-', linewidth=3, color=c, alpha=0.5, label=np.unique(phot.field[test_field])[0])
ax.legend(loc='lower left')
ax.plot([0,100],[1,1], color='black', linestyle='-', alpha=0.8, linewidth=2)
ax.plot([0,100],[0.9,0.9], color='black', linestyle=':', alpha=0.8, linewidth=2)
ax.set_ylim(0,1.1)
ax.set_xlim(21,25.)
ax.set_xlabel(r'$m_{140}$')
ax.set_ylabel(r'Matched fraction')
unicorn.plotting.savefig(fig, 'ancillary_matched_fraction.pdf')
#### Look at multiple matches
base_selection = (phot.fcover > 0.5) & (phot.has_spec == 1)
use = base_selection & test_field_cosmos & (phot_dr < 1.0)
plt.scatter(phot.mag_f1392w[use], phot_kmag[use], color='blue', alpha=0.1, s=10)
matched_id = np.unique(phot_id[use])
kmag = matched_id*0.
dmag1 = matched_id*0.+100
dmag2 = matched_id*0.+100
N = matched_id*0
for ii, id in enumerate(matched_id):
print(unicorn.noNewLine+'%d / %d' %(ii, len(matched_id)))
this = (phot_id == id) & use
dmag = phot.mag_f1392w[this]-phot_kmag[this]
kmag[ii] = phot_kmag[this][0]
dmag1[ii] = dmag[0]
N[ii] = this.sum()
if this.sum() > 1:
so = np.argsort(dmag)
dmag2[ii] = dmag[so][1]
#
fig = unicorn.plotting.plot_init(square=True, use_tex=True, left=0.09, bottom=0.07, xs=3.5)
ax = fig.add_subplot(111)
ax.scatter(kmag, dmag1-0.22, color='blue', alpha=0.2, s=10, label='1st matched')
ax.scatter(kmag, dmag2-0.22, color='red', alpha=0.2, s=10, label='2nd matched')
ax.set_xlim(17,24)
ax.set_ylim(-1,5)
ax.legend(loc='upper left')
ax.set_xlabel(r'$K_\mathrm{matched}$')
ax.set_ylabel(r'$m_{140} - K_\mathrm{matched}$')
unicorn.plotting.savefig(fig,'ancillary_delta_mag.pdf')
### Show fraction of ancillary objects that have multiple matches a function of magnitude
fig = unicorn.plotting.plot_init(square=True, use_tex=True, left=0.09, bottom=0.07, xs=3.5)
ax = fig.add_subplot(111)
yh_full, xh_full = np.histogram(kmag, range=(17,24), bins=7*4)
yh, xh = np.histogram(kmag[dmag2 < 1.2], range=(17,24), bins=7*4)
ax.plot(xh[1:], yh*1./yh_full, linestyle='steps', color='red', linewidth=3, alpha=0.5, label=r'$\Delta 2^{\rm nd} < 1.2$')
yh, xh = np.histogram(kmag[N > 1], range=(17,24), bins=7*4)
ax.plot(xh[1:], yh*1./yh_full, linestyle='steps', color='red', linewidth=3, alpha=0.3, label=r'$N_\mathrm{match} > 1$')
yh, xh = np.histogram(kmag[N > 3], range=(17,24), bins=7*4)
ax.plot(xh[1:], yh*1./yh_full, linestyle='steps', color='blue', linewidth=3, alpha=0.5, label=r'$N_\mathrm{match} > 3$')
ax.set_xlabel(r'$K_\mathrm{matched}$')
ax.set_ylabel(r'fraction')
ax.legend(loc='upper left', prop=matplotlib.font_manager.FontProperties(size=9))
unicorn.plotting.savefig(fig,'ancillary_multiple_fraction.pdf')
def get_iband_mags():
"""
On Unicorn, loop through the ascii spectra to retrieve the iband mags, should all be ZP=25.
"""
os.chdir(unicorn.GRISM_HOME+'ANALYSIS/')
unicorn.catalogs.read_catalogs()
from unicorn.catalogs import zout, phot, mcat, lines, rest, gfit, zsp
ids = zout.id[0::3]
fields = phot.field[phot.idx]
iflux = zout.z_peak[0::3]*0.-1
imod = iflux*1.
lc_i = iflux*1.
hflux = iflux*1
hmod = iflux*1.
lc_h = iflux*1
count = 0
for id, field in zip(ids, fields):
path = unicorn.GRISM_HOME+'ANALYSIS/REDSHIFT_FITS_v1.6/ASCII/%s/%s_obs_sed.dat' %(field, id)
if os.path.exists(path):
print(unicorn.noNewLine+id)
obs = catIO.Readfile(path)
dlam_spec = obs.lc[-1]-obs.lc[-2]
is_spec = np.append(np.abs(1-np.abs(obs.lc[1:]-obs.lc[0:-1])/dlam_spec) < 0.05,True)
dl_i = np.abs(obs.lc-7688.1)
dl_h = np.abs(obs.lc[~is_spec]-1.6315e4)
ix_i = np.where(dl_i == dl_i.min())[0][0]
ix_h = np.where(dl_h == dl_h.min())[0][0]
iflux[count] = obs.fnu[ix_i]
imod[count] = obs.obs_sed[ix_i]
lc_i[count] = obs.lc[ix_i]
hflux[count] = obs.fnu[ix_h]
hmod[count] = obs.obs_sed[ix_h]
lc_h[count] = obs.lc[ix_h]
#
count = count+1
fp = open('full_imag_hmag.dat','w')
fp.write('# id iflux imodel lc_i hflux hmodel lc_h\n')
for i in range(len(ids)):
fp.write('%s %.5e %.5e %.1f %.5e %.5e %.1f\n' %(ids[i], iflux[i], imod[i], lc_i[i], hflux[i], hmod[i], lc_h[i]))
fp.close()
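The `dlam_spec`/`is_spec` construction above (and again in `find_brown_dwarf` below) separates grism-spectrum points from broad-band photometry by testing whether each wavelength step matches the spacing of the last two grid points. A standalone sketch of the same test, using illustrative values rather than real wavelength grids:

```python
def spectrum_mask(lc, tol=0.05):
    """Flag entries of wavelength grid `lc` whose step matches the spacing of
    the last two points to within fractional tolerance `tol` (the uniformly
    spaced grism channels). The final point is flagged True by construction,
    mirroring the np.append(..., True) in the original."""
    dlam = lc[-1] - lc[-2]
    mask = [abs(1 - abs(lc[i + 1] - lc[i]) / dlam) < tol for i in range(len(lc) - 1)]
    return mask + [True]
```

Broad-band points, with much larger and irregular spacing, fail the test and can be selected with `~is_spec` as done above.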
def zspec_colors():
"""
Show as a function if H / (i-H) where the galaxies with zspec fall
"""
import unicorn
import unicorn.catalogs
import copy
from matplotlib.ticker import MultipleLocator, FormatStrFormatter
os.chdir(unicorn.GRISM_HOME+'/ANALYSIS/SURVEY_PAPER')
unicorn.catalogs.read_catalogs()
from unicorn.catalogs import zout, phot, mcat, lines, rest, gfit
if unicorn.catalogs.zsp is None:
unicorn.catalogs.make_specz_catalog()
zsp = unicorn.catalogs.zsp
maglim = 25
qzmax = 200
contam_max = 0.5
###### Selection criteria
keep = (phot.mag_f1392w[phot.idx] < maglim) & (phot.fcontam[phot.idx] < contam_max) & (zout.q_z[0::3] < qzmax) & (phot.fcover[phot.idx] > 0.9) & (mcat.logm[mcat.idx] > 0) & (mcat.rmatch[mcat.idx] < 0.5) #& (zsp.zspec[zsp.mat_idx] > 0) & (zsp.dr < 1)
keep = keep & (zout.q_z[0::3] != zout.q_z[2::3])
has_specz = (zsp.zspec[zsp.mat_idx] > 0) & (zsp.dr < 1)
#mag, radius = np.cast[float](cat.MAG_AUTO), np.cast[float](cat.FLUX_RADIUS)
#### Find isolated point sources
points = (phot.flux_radius[phot.idx] < 2.7)
keep = keep & (~points)
zphot = zout.z_peak[0::3]
##### H mags from F140W and matches
icat = catIO.Readfile('full_imag_hmag.dat')
IH = -2.5*np.log10(icat.iflux / icat.hflux)
phot_zp = zphot*0.+25
phot_zp[(phot.field[phot.idx] == 'GOODS-S') | (phot.field[phot.idx] == 'PRIMO') | (phot.field[phot.idx] == 'WFC3-ERSII-G01') | (phot.field[phot.idx] == 'GEORGE')] = 23.86
m140 = phot.mag_f1392w[phot.idx]-0.22 #### Offset to total in catalogs!
hmag = phot_zp-2.5*np.log10(icat.hflux)
fin = np.isfinite(hmag) & (icat.iflux > 0) & (mcat.rmatch[mcat.idx] < 1)
#### Few wierd objects with very discrepant H mags in GOODS-N
bad = (zout.z_peak[0::3] < 1) & (IH > 3.5)
fin = fin & (~bad)
######### Compare h mags
# use = fin
#
# use = (phot.field[phot.idx] == 'GOODS-S') | (phot.field[phot.idx] == 'PRIMO') | (phot.field[phot.idx] == 'WFC3-ERSII-G01') | (phot.field[phot.idx] == 'GEORGE')
# use = phot.field[phot.idx] == 'GOODS-N'
#
# dmag = m140-hmag
# plt.plot(m140[use & fin], dmag[use & fin], marker='o', linestyle='None', alpha=0.5)
# plt.plot([0,30],[0,0], color='black', alpha=0.5)
# plt.xlim(15,25)
# plt.ylim(-2,2)
#
# plt.plot(phot.kron_radius[phot.idx][use & fin], dmag[use & fin], marker='o', linestyle='None', alpha=0.2)
# xm, ym, ys, ns = threedhst.utils.runmed(phot.kron_radius[phot.idx][use & fin & (m140 < 23)], dmag[use & fin & (m140 < 23)], NBIN=30)
# plt.plot(xm, ym, color='orange', linewidth=2)
#
# plt.xlim(3,8)
# plt.ylim(-2,2)
#
#
# plt.plot(phot.kron_radius[phot.idx][use & fin], m140[use & fin], marker='o', linestyle='None', alpha=0.2)
# plt.xlim(3,8)
# plt.ylim(16,25)
########## H vs I-H
field = phot.field[phot.idx] != 'xx'
fields = {'COSMOS': ['COSMOS'], 'AEGIS': ['AEGIS'], 'GOODS-N':['GOODS-N'], 'GOODS-S':['GOODS-S','PRIMO','WFC3-ERSII-G01','GEORGE']}
field_use = 'COSMOS'
ix = 220
fig = unicorn.catalogs.plot_init(square=True, xs=8, aspect=1./2, left=0.1, right=0.12, bottom=0.10, top=0.01, fontsize=10)
fig.subplots_adjust(wspace=0.01,hspace=0.02, left=0.05, right=0.94, bottom=0.10, top=0.98)
#ax = fig.add_subplot(111)
for field_use in ['AEGIS','COSMOS','GOODS-N','GOODS-S']:
ix += 1
ax = fig.add_subplot(ix)
field = phot.field[phot.idx] == 'xx'
print(field_use)
for mat in fields[field_use]:
field = field | (phot.field[phot.idx] == mat)
ms = 6
plt.rcParams['text.usetex'] = True
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = 'Times'
ax2 = ax.twinx()
ax.plot(m140[fin & keep & ~has_specz & field], IH[fin & keep & ~has_specz & field], marker='.', linestyle='None', color='black', alpha=0.1, ms=ms)
ax.plot(m140[fin & keep & has_specz & field], IH[fin & keep & has_specz & field], marker='.', linestyle='None', color='green', alpha=0.5, ms=ms)
#ax.plot(m140[fin & keep & field & (zout.z_peak[0::3] > 1)], IH[fin & keep & field & (zout.z_peak[0::3] > 1)], marker='o', linestyle='None', color='orange', alpha=0.5, ms=ms)
ax.plot(np.array([10,30]), 22.5-np.array([10,30]), color='black', alpha=0.5, linewidth=3, linestyle='--')
#ax.plot(np.array([10,30]), 24-np.array([10,30]), color='orange', alpha=0.5, linewidth=3)
ax.plot(np.array([10,30]), [2.25, 2.25], color='purple', alpha=0.8, linewidth=3)
#### Fraction histograms
z1_red = IH > 2.25
yh_a, xh_a = np.histogram(m140[fin & keep & field & ~z1_red], range=(16,25), bins=18)
yh_z, xh_z = np.histogram(m140[fin & keep & field & has_specz & ~z1_red], range=(16,25), bins=18)
show = yh_a > 0
ax2.plot((xh_a[1:]+xh_a[:-1])[show]/2., (yh_z*1./yh_a)[show], color='white', linewidth=4, alpha=0.7, linestyle='steps-mid')
ax2.plot((xh_a[1:]+xh_a[:-1])[show]/2., (yh_z*1./yh_a)[show], color='blue', linewidth=3, alpha=0.8, linestyle='steps-mid')
yh_a, xh_a = np.histogram(m140[fin & keep & field & z1_red], range=(16,25), bins=18)
yh_z, xh_z = np.histogram(m140[fin & keep & field & has_specz & z1_red], range=(16,25), bins=18)
show = yh_a > 0
ax2.plot((xh_a[1:]+xh_a[:-1])[show]/2., (yh_z*1./yh_a)[show], color='white', linewidth=4, alpha=0.7, linestyle='steps-mid')
ax2.plot((xh_a[1:]+xh_a[:-1])[show]/2., (yh_z*1./yh_a)[show], color='red', linewidth=3, alpha=0.8, linestyle='steps-mid')
ax.text(0.95, 0.88, field_use, transform=ax.transAxes, fontsize=12, backgroundcolor='white', horizontalalignment='right')
if field_use == 'AEGIS':
ax.text(19,3.5, r'$i=22.5$', horizontalalignment='center', verticalalignment='bottom', rotation=-30)
if field_use == 'GOODS-N':
ax.text(17.5,2.4, r'$\uparrow\ z > 1$, red $\uparrow$', horizontalalignment='left', verticalalignment='bottom')
minorLocator = MultipleLocator(1)
ax.xaxis.set_minor_locator(minorLocator)
ax.set_xlim(17,24.5)
ax.set_ylim(-0.5,5.2)
minorLocator = MultipleLocator(0.1)
ax2.yaxis.set_minor_locator(minorLocator)
ax2.set_xlim(17.1,24.5)
ax2.set_ylim(-0.1,1.1)
if field_use in ['AEGIS','COSMOS']:
ax.set_xticklabels([])
else:
ax.set_xlabel(r'$m_{140}\ \sim\ H$')
if field_use in ['COSMOS','GOODS-S']:
ax.set_yticklabels([])
else:
ax.set_ylabel(r'$(i-H)$')
#
if field_use in ['AEGIS','GOODS-N']:
ax2.set_yticklabels([])
else:
ax2.set_ylabel(r'$f(z_\mathrm{spec})$')
#fig.savefig('zspec_fraction_%s.pdf' %(field_use))
fig.savefig('zspec_fraction_all.pdf')
#plt.plot(zout.z_peak[0::3][fin], IH[fin], marker='o', linestyle='None', color='black', alpha=0.1, ms=4)
# plt.plot(zout.z_peak[0::3][fin & keep & field], IH[fin & keep & field], marker='o', linestyle='None', color='black', alpha=0.1, ms=ms)
# plt.plot(zout.z_peak[0::3][fin & keep & has_specz & field], IH[fin & keep & has_specz & field], marker='o', linestyle='None', color='blue', alpha=0.5, ms=ms)
# plt.plot(np.array([0,30]), [2.25, 2.25], color='red', alpha=0.8, linewidth=3)
#
#
# z1_red = IH > 2.25
# yh_a, xh_a = np.histogram(zout.z_peak[0::3][fin & keep & field & ~z1_red], range=(0,4), bins=8)
# yh_z, xh_z = np.histogram(zout.z_peak[0::3][fin & keep & field & has_specz & ~z1_red], range=(0,4), bins=8)
# show = yh_a > 0
# plt.plot((xh_a[1:]+xh_a[:-1])[show]/2., (yh_z*1./yh_a)[show], color='blue', linewidth=3, alpha=0.8)
#
# yh_a, xh_a = np.histogram(zout.z_peak[0::3][fin & keep & field & z1_red], range=(0,4), bins=8)
# yh_z, xh_z = np.histogram(zout.z_peak[0::3][fin & keep & field & has_specz & z1_red], range=(0,4), bins=8)
# show = yh_a > 0
# plt.plot((xh_a[1:]+xh_a[:-1])[show]/2., (yh_z*1./yh_a)[show], color='red', linewidth=3, alpha=0.8)
#
# plt.xlim(0,4)
# plt.ylim(-0.5,5)
def find_brown_dwarf():
import unicorn
import unicorn.catalogs
import copy
os.chdir(unicorn.GRISM_HOME+'/ANALYSIS/SURVEY_PAPER')
unicorn.catalogs.read_catalogs()
from unicorn.catalogs import zout, phot, mcat, lines, rest, gfit, zsp
int_lam = np.array([0.77e4, 1.25e4, 2.1e4])
fp = open('stars_ijk.dat','w')
fp.write('# id ii jj kk\n')
##### Known Brown dwarf
object = 'AEGIS-3-G141_00195'
lambdaz, temp_sed, lci, obs_sed, fobs, efobs = eazy.getEazySED(0, MAIN_OUTPUT_FILE='%s' %(object), OUTPUT_DIRECTORY=unicorn.GRISM_HOME+'/ANALYSIS/REDSHIFT_FITS_v1.6/OUTPUT/', CACHE_FILE = 'Same')
dlam_spec = lci[-1]-lci[-2]
is_spec = np.append(np.abs(1-np.abs(lci[1:]-lci[0:-1])/dlam_spec) < 0.05,True)
so = np.argsort(lci[~is_spec])
yint = np.interp(int_lam, lci[~is_spec][so], fobs[~is_spec][so])/(int_lam/5500.)**2
fp.write('%s %.3e %.3e %.3e\n' %(object, yint[0], yint[1], yint[2]))
###### Loop through all point sources
stars = (phot.flux_radius[phot.idx] < 3) & (phot.mag_f1392w[phot.idx] < 24) & (mcat.rmatch[mcat.idx] < 0.5)
for object in phot.id[phot.idx][stars]:
print(unicorn.noNewLine+'%s' %(object))
try:
lambdaz, temp_sed, lci, obs_sed, fobs, efobs = eazy.getEazySED(0, MAIN_OUTPUT_FILE='%s' %(object), OUTPUT_DIRECTORY=unicorn.GRISM_HOME+'/ANALYSIS/REDSHIFT_FITS_v1.6/OUTPUT/', CACHE_FILE = 'Same')
except:
continue
#
dlam_spec = lci[-1]-lci[-2]
is_spec = np.append(np.abs(1-np.abs(lci[1:]-lci[0:-1])/dlam_spec) < 0.05,True)
so = np.argsort(lci[~is_spec])
yint = np.interp(int_lam, lci[~is_spec][so], fobs[~is_spec][so])/(int_lam/5500.)**2
fp.write('%s %.3e %.3e %.3e\n' %(object, yint[0], yint[1], yint[2]))
fp.close()
if False:
ijk = catIO.Readfile('stars_ijk.dat')
ij = -2.5*np.log10(ijk.ii/ijk.jj)
jk = -2.5*np.log10(ijk.jj/ijk.kk)
plt.plot(ij, jk, marker='o', markersize=3, color='black', alpha=0.8, linestyle='None')
plt.plot(ij[0], jk[0], marker='o', markersize=8, color='red', alpha=0.5, linestyle='None')
mat = ijk.id == 'AEGIS-11-G141_00314'
plt.plot(ij[mat], jk[mat], marker='o', markersize=8, color='orange', alpha=0.5, linestyle='None')
bd = phot.id[phot.idx] == 'x'
for i, obj in enumerate(ijk.id):
bd[phot.id[phot.idx] == obj] = (ij[i] > -0.) & (jk[i] < -1.7)
#
bdf = unicorn.analysis.BD_fit()
for obj in phot.id[phot.idx][bd]:
bdf.fit('/Users/gbrammer/Sites_GLOBAL/P/GRISM/ascii/%s.dat' %(obj), chi2_limit=100, trim_mtype=False, max_contam=0.8)
unicorn.catalogs.make_selection_catalog(bd, filename='massive_lines.cat', make_html=True)
os.system('rsync -avz massive_lines* ~/Sites_GLOBAL/P/GRISM_v1.6/ANALYSIS/')
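Many of the count plots in this file repeat one small pattern: histogram the magnitudes, take bin centers, and optionally accumulate to get N(<m). A minimal, self-contained sketch of that pattern (pure Python; the bin choices here are illustrative, not the survey's):

```python
def binned_counts(values, nbin, lo, hi, cumul=False):
    """Histogram `values` into `nbin` equal bins over [lo, hi);
    return bin centers and counts, cumulative if `cumul` is set."""
    width = (hi - lo) / float(nbin)
    counts = [0] * nbin
    for v in values:
        if lo <= v < hi:
            counts[int((v - lo) / width)] += 1
    centers = [lo + (i + 0.5) * width for i in range(nbin)]
    if cumul:  # running total, as the np.cumsum calls above produce
        for i in range(1, nbin):
            counts[i] += counts[i - 1]
    return centers, counts
```

With `cumul=True` each bin reports the running total N(<m), which is what gets scaled by `normal` and plotted on the log axis above.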
|
{
"filename": "survey_paper.py",
"repo_name": "gbrammer/unicorn",
"repo_path": "unicorn_extracted/unicorn-master/survey_paper.py",
"type": "Python"
}
|
{
"filename": "_color.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/scatter3d/textfont/_color.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class ColorValidator(_plotly_utils.basevalidators.ColorValidator):
def __init__(self, plotly_name="color", parent_name="scatter3d.textfont", **kwargs):
super(ColorValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
array_ok=kwargs.pop("array_ok", True),
edit_type=kwargs.pop("edit_type", "calc"),
**kwargs,
)
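This generated validator shows the pattern used throughout these classes: defaults are supplied through `kwargs.pop`, so callers can override `array_ok`/`edit_type` while any remaining keywords still flow to the base class. A simplified sketch with a stand-in base class (hypothetical names, not the actual plotly hierarchy):

```python
class BaseValidator:
    # stand-in for _plotly_utils.basevalidators.ColorValidator
    def __init__(self, plotly_name, parent_name, edit_type="plot", array_ok=False):
        self.plotly_name = plotly_name
        self.parent_name = parent_name
        self.edit_type = edit_type
        self.array_ok = array_ok

class ColorLikeValidator(BaseValidator):
    def __init__(self, plotly_name="color", parent_name="demo", **kwargs):
        # kwargs.pop(...) supplies a default but still lets callers override it;
        # anything left in kwargs is forwarded unchanged
        super().__init__(
            plotly_name=plotly_name,
            parent_name=parent_name,
            array_ok=kwargs.pop("array_ok", True),
            edit_type=kwargs.pop("edit_type", "calc"),
            **kwargs,
        )
```

Popping before the `**kwargs` forward also prevents a duplicate-keyword `TypeError` when a caller supplies one of the defaulted arguments.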
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@scatter3d@textfont@_color.py@.PATH_END.py
|
{
"filename": "test_cache.py",
"repo_name": "D-arioSpace/astroquery",
"repo_path": "astroquery_extracted/astroquery-main/astroquery/tests/test_cache.py",
"type": "Python"
}
|
import requests
import os
import pytest
from time import mktime
from datetime import datetime
from astropy.config import paths
from astroquery.query import QueryWithLogin
from astroquery import cache_conf
URL1 = "http://fakeurl.edu"
URL2 = "http://fakeurl.ac.uk"
TEXT1 = "Penguin"
TEXT2 = "Walrus"
def _create_response(response_text):
mock_response = requests.Response()
mock_response._content = response_text
mock_response.request = requests.PreparedRequest()
mock_response.status_code = 200
return mock_response
@pytest.fixture
def changing_mocked_response(monkeypatch):
"""Provide responses that can change after being queried once."""
first_responses = {URL1: _create_response(TEXT1), "ceb": _create_response(TEXT1)}
default_response = _create_response(TEXT2)
def get_mockreturn(*args, **kwargs):
return first_responses.pop(args[2], default_response)
monkeypatch.setattr(requests.Session, "request", get_mockreturn)
class CacheTestClass(QueryWithLogin):
"""Bare bones class for testing caching"""
def test_func(self, requrl):
return self._request(method="GET", url=requrl)
def _login(self, username):
return self._request(method="GET", url=username).content == "Penguin"
def test_conf():
cache_conf.reset()
default_timeout = cache_conf.cache_timeout
default_active = cache_conf.cache_active
assert default_timeout == 604800
assert default_active is True
with cache_conf.set_temp("cache_timeout", 5):
assert cache_conf.cache_timeout == 5
with cache_conf.set_temp("cache_active", False):
assert cache_conf.cache_active is False
assert cache_conf.cache_timeout == default_timeout
assert cache_conf.cache_active == default_active
cache_conf.cache_timeout = 5
cache_conf.cache_active = False
cache_conf.reset()
assert cache_conf.cache_timeout == default_timeout
assert cache_conf.cache_active == default_active
def test_basic_caching(changing_mocked_response):
cache_conf.reset()
mytest = CacheTestClass()
assert cache_conf.cache_active
mytest.clear_cache()
assert len(os.listdir(mytest.cache_location)) == 0
resp = mytest.test_func(URL1)
assert resp.content == TEXT1
assert len(os.listdir(mytest.cache_location)) == 1
resp = mytest.test_func(URL2) # query that has not been cached
assert resp.content == TEXT2
assert len(os.listdir(mytest.cache_location)) == 2
resp = mytest.test_func(URL1)
assert resp.content == TEXT1 # query that was cached
assert len(os.listdir(mytest.cache_location)) == 2 # no new cache file
mytest.clear_cache()
assert len(os.listdir(mytest.cache_location)) == 0
resp = mytest.test_func(URL1)
assert resp.content == TEXT2 # Now get new response
def test_change_location(tmp_path):
cache_conf.reset()
mytest = CacheTestClass()
default_cache_location = mytest.cache_location
assert paths.get_cache_dir() in str(default_cache_location)
assert "astroquery" in mytest.cache_location.parts
assert mytest.name in mytest.cache_location.parts
new_loc = tmp_path.joinpath("new_dir")
mytest.cache_location = new_loc
assert mytest.cache_location == new_loc
mytest.reset_cache_location()
assert mytest.cache_location == default_cache_location
new_loc.mkdir(parents=True, exist_ok=True)
with paths.set_temp_cache(new_loc):
assert str(new_loc) in str(mytest.cache_location)
assert "astroquery" in mytest.cache_location.parts
assert mytest.name in mytest.cache_location.parts
def test_login(changing_mocked_response):
cache_conf.reset()
mytest = CacheTestClass()
assert cache_conf.cache_active
mytest.clear_cache()
assert len(os.listdir(mytest.cache_location)) == 0
mytest.login("ceb")
assert mytest.authenticated()
assert len(os.listdir(mytest.cache_location)) == 0 # request should not be cached
mytest.login("ceb")
assert not mytest.authenticated() # Should not be accessing cache
def test_timeout(changing_mocked_response, monkeypatch):
cache_conf.reset()
mytest = CacheTestClass()
assert cache_conf.cache_active
mytest.clear_cache()
assert len(os.listdir(mytest.cache_location)) == 0
resp = mytest.test_func(URL1) # should be cached
assert resp.content == TEXT1
resp = mytest.test_func(URL1) # should access cached value
assert resp.content == TEXT1
# Changing the file date so the cache will consider it expired
cache_file = next(mytest.cache_location.iterdir())
modTime = mktime(datetime(1970, 1, 1).timetuple())
os.utime(cache_file, (modTime, modTime))
resp = mytest.test_func(URL1)
assert resp.content == TEXT2 # now see the new response
# Testing a cache timeout of "none"
cache_conf.cache_timeout = None
# Ensure response can only come from cache.
monkeypatch.delattr(requests.Session, "request")
resp = mytest.test_func(URL1)
assert resp.content == TEXT2 # cache is accessed
def test_deactivate_directly(changing_mocked_response):
cache_conf.reset()
mytest = CacheTestClass()
cache_conf.cache_active = False
mytest.clear_cache()
assert len(os.listdir(mytest.cache_location)) == 0
resp = mytest.test_func(URL1)
assert resp.content == TEXT1
assert len(os.listdir(mytest.cache_location)) == 0
resp = mytest.test_func(URL1)
assert resp.content == TEXT2
assert len(os.listdir(mytest.cache_location)) == 0
cache_conf.reset()
assert cache_conf.cache_active is True
def test_deactivate_with_set_temp(changing_mocked_response):
mytest = CacheTestClass()
with cache_conf.set_temp('cache_active', False):
mytest.clear_cache()
assert len(os.listdir(mytest.cache_location)) == 0
resp = mytest.test_func(URL1)
assert resp.content == TEXT1
assert len(os.listdir(mytest.cache_location)) == 0
resp = mytest.test_func(URL1)
assert resp.content == TEXT2
assert len(os.listdir(mytest.cache_location)) == 0
assert cache_conf.cache_active is True
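The `changing_mocked_response` fixture above relies on a compact trick: `dict.pop(key, default)` returns the one-shot payload the first time a key is requested and the default ever after. The same idea isolated, with hypothetical payloads:

```python
def make_one_shot_server(first, later):
    """Return a callable that yields first[key] once per key, then `later`."""
    pending = dict(first)  # copy so the factory can be reused
    def respond(key):
        # pop removes the first-time payload as it returns it;
        # subsequent lookups fall through to the default
        return pending.pop(key, later)
    return respond
```

This is why `test_basic_caching` sees `TEXT1` only when the cache answers: by the second request the mocked endpoint has already moved on to `TEXT2`.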
|
|
{
"filename": "riess2020.py",
"repo_name": "ggalloni/cobaya",
"repo_path": "cobaya_extracted/cobaya-master/cobaya/likelihoods/H0/riess2020.py",
"type": "Python"
}
|
from cobaya.likelihoods.base_classes import H0
class riess2020(H0):
r"""
Local $H_0$ measurement from \cite{Riess:2020fzl}.
"""
pass
|
|
{
"filename": "main.py",
"repo_name": "cdslaborg/paramonte",
"repo_path": "paramonte_extracted/paramonte-main/benchmark/fortran/pm_distUnif/setUnifRand_vs_random_number/main.py",
"type": "Python"
}
|
#!/usr/bin/env python
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
fontsize = 14
df = pd.read_csv("main.out")
####################################################################################################################################
#### Plot the runtimes.
####################################################################################################################################
fig = plt.figure(figsize = 1.25 * np.array([6.4,4.6]), dpi = 200)
ax = plt.subplot()
plt.plot( df.values[:, 0]
, df.values[:,1:]
, linewidth = 2
)
plt.xticks(fontsize = fontsize)
plt.yticks(fontsize = fontsize)
ax.set_xlabel("Array Size", fontsize = fontsize)
ax.set_ylabel("Runtime [ seconds ]", fontsize = fontsize)
ax.set_title("Runtime:\nsetUnifRand() vs. random_number().\nLower is better.", fontsize = fontsize)
ax.set_xscale("log")
ax.set_yscale("log")
plt.minorticks_on()
plt.grid(visible = True, which = "both", axis = "both", color = "0.85", linestyle = "-")
ax.tick_params(axis = "y", which = "minor")
ax.tick_params(axis = "x", which = "minor")
ax.legend ( list(df.columns.values[1:])
#, loc='center left'
#, bbox_to_anchor=(1, 0.5)
, fontsize = fontsize
)
plt.tight_layout()
plt.savefig("benchmark.setUnifRand_vs_random_number.runtime.png")
####################################################################################################################################
#### Plot the runtime ratios.
####################################################################################################################################
fig = plt.figure(figsize = 1.25 * np.array([6.4,4.6]), dpi = 200)
ax = plt.subplot()
# baseline
plt.plot( df.values[:, 0]
, np.ones( len(df["arraySize"].values) )
, linestyle = "-"
, linewidth = 2
)
for colname in df.columns.values[2:]:
plt.plot( df.values[:, 0]
, df[colname].values / df.values[:, 1]
, linewidth = 2
)
plt.xticks(fontsize = fontsize)
plt.yticks(fontsize = fontsize)
ax.set_xlabel("Array Size", fontsize = fontsize)
ax.set_ylabel("Runtime Ratio", fontsize = fontsize)
ax.set_title("""Uniform RNG Runtime Ratio: setUnifRand() to random_number().
A value < 1 implies better performance of setUnifRand().""", fontsize = fontsize)
ax.set_xscale("log")
#ax.set_yscale("log")
plt.minorticks_on()
plt.grid(visible = True, which = "both", axis = "both", color = "0.85", linestyle = "-")
ax.tick_params(axis = "y", which = "minor")
ax.tick_params(axis = "x", which = "minor")
ax.legend ( list(df.columns.values[1:])
#, bbox_to_anchor=(1, 0.5)
#, loc='center left'
, fontsize = fontsize
)
plt.tight_layout()
plt.savefig("benchmark.setUnifRand_vs_random_number.runtime.ratio.png")
|
cdslaborgREPO_NAMEparamontePATH_START.@paramonte_extracted@paramonte-main@benchmark@fortran@pm_distUnif@setUnifRand_vs_random_number@main.py@.PATH_END.py
|
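The second figure in the benchmark script divides every later timing column by the baseline column (`df.values[:, 1]`). A minimal sketch of that ratio computation on a made-up frame (the column names are stand-ins for whatever `main.out` actually contains):

```python
import pandas as pd

# Hypothetical stand-in for "main.out": one abscissa column followed by
# a baseline timing column and one competing method.
df = pd.DataFrame({
    "arraySize":     [10, 100, 1000],
    "setUnifRand":   [1.0, 2.0, 4.0],   # baseline (column index 1)
    "random_number": [2.0, 3.0, 4.0],
})

# Same ratio the plotting loop computes: each later column over the baseline.
ratios = df[df.columns[2:]].div(df[df.columns[1]], axis=0)
assert ratios["random_number"].tolist() == [2.0, 1.5, 1.0]
```

A ratio below 1 would mean the competing method beat the baseline at that array size, matching the figure title's reading.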
{
"filename": "__init__.py",
"repo_name": "annadeg/jwst-msafit",
"repo_path": "jwst-msafit_extracted/jwst-msafit-main/msafit/__init__.py",
"type": "Python"
}
|
from . import fpa
from . import model
from . import fitting
from . import lsf
|
annadegREPO_NAMEjwst-msafitPATH_START.@jwst-msafit_extracted@jwst-msafit-main@msafit@__init__.py@.PATH_END.py
|
{
"filename": "rayleigh.py",
"repo_name": "pcubillos/pyratbay",
"repo_path": "pyratbay_extracted/pyratbay-master/pyratbay/atmosphere/rayleigh/rayleigh.py",
"type": "Python"
}
|
# Copyright (c) 2021 Patricio Cubillos
# Pyrat Bay is open-source software under the GNU GPL-2.0 license (see LICENSE)
__all__ = [
'Dalgarno',
'Lecavelier',
]
import numpy as np
from ... import tools as pt
class Dalgarno():
"""
Rayleigh-scattering model from Dalgarno (1962), Kurucz (1970), and
Dalgarno & Williams (1962).
"""
def __init__(self, mol):
"""
Parameters
----------
mol: String
The species, which can be H, He, or H2.
"""
self.name = 'dalgarno_{:s}'.format(mol) # Model name
self.mol = mol # Species causing the extinction
self.npars = 0 # Number of model fitting parameters
self.pars = [] # Model fitting parameters
self.ec = None # Model extinction coefficient (cm2 molec-1)
self.pnames = [] # Fitting-parameter names
self.texnames = [] # Fitting-parameter TeX names
if self.mol == 'H':
self.coef = np.array([5.799e-45, 1.422e-54, 2.784e-64])
self.extinction = self._extH
elif self.mol == 'He':
self.coef = np.array([5.484e-46, 2.440e-11, 5.940e-42, 2.900e-11])
self.extinction = self._extHe
elif self.mol == 'H2':
self.coef = np.array([8.140e-45, 1.280e-54, 1.610e-64])
self.extinction = self._extH
def _extH(self, wn):
"""
Calculate the opacity cross-section in cm2 molec-1 units.
Parameters
----------
wn: 1D float ndarray
Wavenumber in cm-1.
"""
self.ec = (self.coef[0]*wn**4.0 + self.coef[1]*wn**6.0
+ self.coef[2]*wn**8.0)
def _extHe(self, wn):
"""
Calculate the opacity cross-section in cm2 molec-1 units.
Parameters
----------
wn: 1D float ndarray
Wavenumber in cm-1.
"""
self.ec = self.coef[0]*wn**4 * (1 + self.coef[1]*wn**2 +
self.coef[2]*wn**4/(1 - self.coef[3]*wn**2))**2
def __str__(self):
fw = pt.Formatted_Write()
fw.write("Model name (name): '{}'", self.name)
fw.write('Model species (mol): {}', self.mol)
fw.write('Number of model parameters (npars): {}', self.npars)
fw.write('Extinction-coefficient (ec, cm2 molec-1):\n {}', self.ec,
fmt={'float':'{: .3e}'.format}, edge=3)
return fw.text
class Lecavelier():
"""
Rayleigh-scattering model from Lecavelier des Etangs et al. (2008).
AA, 485, 865.
"""
def __init__(self):
self.name = 'lecavelier' # Model name
self.mol = 'H2' # Species causing the extinction
self.pars = [ 0.0, # Cross-section scale factor (unitless)
-4.0] # Power-law exponent
self.npars = len(self.pars) # Number of model fitting parameters
self.ec = None # Model extinction coefficient
self.pnames = ['log(f_ray)', 'alpha_ray']
self.texnames = [r'$\log_{10}(f_{\rm ray})$', r'$\alpha_{\rm ray}$']
self.s0 = 5.31e-27 # Cross section (cm2 molec-1) at l0
self.l0 = 3.5e-5 # Nominal wavelength (cm)
def extinction(self, wn):
"""
Calculate the Rayleigh cross section in cm2 molec-1:
cross section = f_ray * s0 * (lambda/l0)**alpha_ray,
parameterized as params = [log10(f_ray), alpha_ray].
Parameters
----------
wn: 1D float ndarray
Wavenumber array in cm-1.
"""
# Rayleigh cross section opacity in cm2 molec-1:
self.ec = 10.0**self.pars[0] * self.s0 * (wn*self.l0)**(-self.pars[1])
def __str__(self):
fw = pt.Formatted_Write()
fw.write("Model name (name): '{}'", self.name)
fw.write('Model species (mol): {}', self.mol)
fw.write('Number of model parameters (npars): {}', self.npars)
fw.write('Parameter name Value\n'
' (pnames) (pars)\n')
for pname, param in zip(self.pnames, self.pars):
fw.write(' {:15s} {: .3e}', pname, param)
fw.write('Opacity cross section (ec, cm2 molec-1):\n {}', self.ec,
fmt={'float':'{: .3e}'.format}, edge=3)
return fw.text
|
pcubillosREPO_NAMEpyratbayPATH_START.@pyratbay_extracted@pyratbay-master@pyratbay@atmosphere@rayleigh@rayleigh.py@.PATH_END.py
|
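The Lecavelier `extinction` method above is a closed-form power law: sigma = 10**log(f_ray) * s0 * (wn*l0)**(-alpha_ray). A standalone sketch of that formula, NumPy only, with the constants copied from the class (the function name is ours):

```python
import numpy as np

S0 = 5.31e-27  # cm2 molec-1, cross section at the nominal wavelength
L0 = 3.5e-5    # cm, nominal wavelength

def lecavelier_cross_section(wn, log_f=0.0, alpha=-4.0):
    """Rayleigh cross section in cm2 molec-1 for wavenumber wn (cm-1)."""
    return 10.0**log_f * S0 * (wn * L0) ** (-alpha)

# At the nominal wavelength (wn = 1/l0) the cross section reduces to s0,
# and the default alpha = -4 gives the classic nu^4 Rayleigh scaling.
wn = np.array([1.0 / L0, 2.0 / L0])
sigma = lecavelier_cross_section(wn)
assert np.isclose(sigma[0], S0)
assert np.isclose(sigma[1] / sigma[0], 16.0)  # doubling wn -> 2**4
```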
{
"filename": "exp.py",
"repo_name": "tensorflow/tensorflow",
"repo_path": "tensorflow_extracted/tensorflow-master/tensorflow/lite/testing/op_tests/exp.py",
"type": "Python"
}
|
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Test configs for exp."""
import tensorflow as tf
from tensorflow.lite.testing.zip_test_utils import create_tensor_data
from tensorflow.lite.testing.zip_test_utils import make_zip_of_tests
from tensorflow.lite.testing.zip_test_utils import register_make_test_function
@register_make_test_function()
def make_exp_tests(options):
"""Make a set of tests to do exp."""
test_parameters = [
{
"input_dtype": [tf.float32],
"input_shape": [[], [3], [1, 100], [4, 2, 3], [5, 224, 224, 3]],
"input_range": [(-100, 9)],
},
{
"input_dtype": [tf.float32],
"input_shape": [[], [3], [1, 100], [4, 2, 3], [5, 224, 224, 3]],
"input_range": [(-2, 2)],
"fully_quantize": [True],
"quant_16x8": [False, True],
},
]
def build_graph(parameters):
"""Build the exp op testing graph."""
input_tensor = tf.compat.v1.placeholder(
dtype=parameters["input_dtype"],
name="input",
shape=parameters["input_shape"])
out = tf.exp(input_tensor)
return [input_tensor], [out]
def build_inputs(parameters, sess, inputs, outputs):
min_value, max_value = parameters["input_range"]
values = [
create_tensor_data(
parameters["input_dtype"],
parameters["input_shape"],
min_value=min_value,
max_value=max_value,
)
]
return values, sess.run(outputs, feed_dict=dict(zip(inputs, values)))
make_zip_of_tests(options, test_parameters, build_graph, build_inputs)
|
tensorflowREPO_NAMEtensorflowPATH_START.@tensorflow_extracted@tensorflow-master@tensorflow@lite@testing@op_tests@exp.py@.PATH_END.py
|
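In the TFLite test above, `build_inputs` draws uniform data inside `input_range` via the `create_tensor_data` helper. A NumPy stand-in for that sampling step (the function name is ours; the real helper lives in `zip_test_utils`):

```python
import numpy as np

def make_input(shape, min_value, max_value, seed=0):
    """NumPy stand-in for create_tensor_data: uniform floats in [min, max)."""
    rng = np.random.default_rng(seed)
    return rng.uniform(min_value, max_value, size=shape).astype(np.float32)

# Mirrors one parameter combination from test_parameters above.
x = make_input([4, 2, 3], min_value=-100, max_value=9)
y = np.exp(x)  # the reference op the TFLite kernel is compared against
assert x.shape == (4, 2, 3)
assert x.min() >= -100 and x.max() <= 9
assert np.allclose(y, np.exp(x.astype(np.float64)))
```

The wide range (-100, 9) exercises both underflow toward 0 and large outputs (exp(9) is about 8100), which is exactly the kind of span the quantized variants restrict to (-2, 2).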
{
"filename": "cloud_run_v2.py",
"repo_name": "PrefectHQ/prefect",
"repo_path": "prefect_extracted/prefect-main/src/integrations/prefect-gcp/prefect_gcp/models/cloud_run_v2.py",
"type": "Python"
}
|
import time
from typing import Dict, List, Literal, Optional
# noinspection PyProtectedMember
from googleapiclient.discovery import Resource
from pydantic import BaseModel, Field
class SecretKeySelector(BaseModel):
"""
SecretKeySelector is a data model for specifying a GCP secret to inject
into a Cloud Run V2 Job as an environment variable.
Follows Cloud Run V2 rest API, docs:
https://cloud.google.com/run/docs/reference/rest/v2/Container#SecretKeySelector
"""
secret: str
version: str
class JobV2(BaseModel):
"""
JobV2 is a data model for a job that will be run on Cloud Run with the V2 API.
"""
name: str
uid: str
generation: str
labels: Dict[str, str] = Field(default_factory=dict)
annotations: Dict[str, str] = Field(default_factory=dict)
createTime: str
updateTime: str
deleteTime: Optional[str] = Field(None)
expireTime: Optional[str] = Field(None)
creator: Optional[str] = Field(None)
lastModifier: Optional[str] = Field(None)
client: Optional[str] = Field(None)
clientVersion: Optional[str] = Field(None)
launchStage: Literal[
"ALPHA",
"BETA",
"GA",
"DEPRECATED",
"EARLY_ACCESS",
"PRELAUNCH",
"UNIMPLEMENTED",
"LAUNCH_TAG_UNSPECIFIED",
]
binaryAuthorization: Dict = Field(default_factory=dict)
template: Dict = Field(default_factory=dict)
observedGeneration: Optional[str] = Field(None)
terminalCondition: Dict = Field(default_factory=dict)
conditions: List[Dict] = Field(default_factory=list)
executionCount: int
latestCreatedExecution: Dict = Field(default_factory=dict)
reconciling: bool = Field(False)
satisfiesPzs: bool = Field(False)
etag: Optional[str] = Field(None)
def is_ready(self) -> bool:
"""
Check if the job is ready to run.
Returns:
Whether the job is ready to run.
"""
ready_condition = self.get_ready_condition()
if self._is_missing_container(ready_condition=ready_condition):
raise Exception(f"{ready_condition.get('message')}")
return ready_condition.get("state") == "CONDITION_SUCCEEDED"
def get_ready_condition(self) -> Dict:
"""
Get the ready condition for the job.
Returns:
The ready condition for the job.
"""
if self.terminalCondition.get("type") == "Ready":
return self.terminalCondition
return {}
@classmethod
def get(
cls,
cr_client: Resource,
project: str,
location: str,
job_name: str,
):
"""
Get a job from Cloud Run with the V2 API.
Args:
cr_client: The base client needed for interacting with GCP
Cloud Run V2 API.
project: The GCP project ID.
location: The GCP region.
job_name: The name of the job to get.
"""
# noinspection PyUnresolvedReferences
request = cr_client.jobs().get(
name=f"projects/{project}/locations/{location}/jobs/{job_name}",
)
response = request.execute()
return cls(
name=response["name"],
uid=response["uid"],
generation=response["generation"],
labels=response.get("labels", {}),
annotations=response.get("annotations", {}),
createTime=response["createTime"],
updateTime=response["updateTime"],
deleteTime=response.get("deleteTime"),
expireTime=response.get("expireTime"),
creator=response.get("creator"),
lastModifier=response.get("lastModifier"),
client=response.get("client"),
clientVersion=response.get("clientVersion"),
launchStage=response.get("launchStage", "GA"),
binaryAuthorization=response.get("binaryAuthorization", {}),
template=response.get("template"),
observedGeneration=response.get("observedGeneration"),
terminalCondition=response.get("terminalCondition", {}),
conditions=response.get("conditions", []),
executionCount=response.get("executionCount", 0),
latestCreatedExecution=response["latestCreatedExecution"],
reconciling=response.get("reconciling", False),
satisfiesPzs=response.get("satisfiesPzs", False),
etag=response["etag"],
)
@staticmethod
def create(
cr_client: Resource,
project: str,
location: str,
job_id: str,
body: Dict,
) -> Dict:
"""
Create a job on Cloud Run with the V2 API.
Args:
cr_client: The base client needed for interacting with GCP
Cloud Run V2 API.
project: The GCP project ID.
location: The GCP region.
job_id: The ID of the job to create.
body: The job body.
Returns:
The response from the Cloud Run V2 API.
"""
# noinspection PyUnresolvedReferences
request = cr_client.jobs().create(
parent=f"projects/{project}/locations/{location}",
jobId=job_id,
body=body,
)
response = request.execute()
return response
@staticmethod
def delete(
cr_client: Resource,
project: str,
location: str,
job_name: str,
) -> Dict:
"""
Delete a job on Cloud Run with the V2 API.
Args:
cr_client (Resource): The base client needed for interacting with GCP
Cloud Run V2 API.
project: The GCP project ID.
location: The GCP region.
job_name: The name of the job to delete.
Returns:
Dict: The response from the Cloud Run V2 API.
"""
# noinspection PyUnresolvedReferences
list_executions_request = (
cr_client.jobs()
.executions()
.list(
parent=f"projects/{project}/locations/{location}/jobs/{job_name}",
)
)
list_executions_response = list_executions_request.execute()
for execution_to_delete in list_executions_response.get("executions", []):
# noinspection PyUnresolvedReferences
delete_execution_request = (
cr_client.jobs()
.executions()
.delete(
name=execution_to_delete["name"],
)
)
delete_execution_request.execute()
# Sleep 3 seconds so that the execution is deleted before deleting the job
time.sleep(3)
# noinspection PyUnresolvedReferences
request = cr_client.jobs().delete(
name=f"projects/{project}/locations/{location}/jobs/{job_name}",
)
response = request.execute()
return response
@staticmethod
def run(
cr_client: Resource,
project: str,
location: str,
job_name: str,
):
"""
Run a job on Cloud Run with the V2 API.
Args:
cr_client: The base client needed for interacting with GCP
Cloud Run V2 API.
project: The GCP project ID.
location: The GCP region.
job_name: The name of the job to run.
"""
# noinspection PyUnresolvedReferences
request = cr_client.jobs().run(
name=f"projects/{project}/locations/{location}/jobs/{job_name}",
)
response = request.execute()
return response
@staticmethod
def _is_missing_container(ready_condition: Dict) -> bool:
"""
Check if the job is missing a container.
Args:
ready_condition: The ready condition for the job.
Returns:
Whether the job is missing a container.
"""
if (
ready_condition.get("state") == "CONTAINER_FAILED"
and ready_condition.get("reason") == "ContainerMissing"
):
return True
return False
class ExecutionV2(BaseModel):
"""
ExecutionV2 is a data model for an execution of a job that will be run on
Cloud Run API v2.
"""
name: str
uid: str
generation: str
labels: Dict[str, str]
annotations: Dict[str, str]
createTime: str
startTime: Optional[str]
completionTime: Optional[str]
deleteTime: Optional[str]
expireTime: Optional[str]
launchStage: Literal[
"ALPHA",
"BETA",
"GA",
"DEPRECATED",
"EARLY_ACCESS",
"PRELAUNCH",
"UNIMPLEMENTED",
"LAUNCH_TAG_UNSPECIFIED",
]
job: str
parallelism: int
taskCount: int
template: Dict
reconciling: bool
conditions: List[Dict]
observedGeneration: Optional[str]
runningCount: Optional[int]
succeededCount: Optional[int]
failedCount: Optional[int]
cancelledCount: Optional[int]
retriedCount: Optional[int]
logUri: str
satisfiesPzs: bool
etag: str
def is_running(self) -> bool:
"""
Return whether the execution is running.
Returns:
Whether the execution is running.
"""
return self.completionTime is None
def succeeded(self):
"""Whether or not the Execution completed in a successful state."""
completed_condition = self.condition_after_completion()
if (
completed_condition
and completed_condition["state"] == "CONDITION_SUCCEEDED"
):
return True
return False
def condition_after_completion(self) -> Dict:
"""
Return the condition after completion.
Returns:
The condition after completion.
"""
if isinstance(self.conditions, List):
for condition in self.conditions:
if condition["type"] == "Completed":
return condition
@classmethod
def get(
cls,
cr_client: Resource,
execution_id: str,
):
"""
Get an execution from Cloud Run with the V2 API.
Args:
cr_client: The base client needed for interacting with GCP
Cloud Run V2 API.
execution_id: The name of the execution to get, in the form of
projects/{project}/locations/{location}/jobs/{job}/executions
/{execution}
"""
# noinspection PyUnresolvedReferences
request = cr_client.jobs().executions().get(name=execution_id)
response = request.execute()
return cls(
name=response["name"],
uid=response["uid"],
generation=response["generation"],
labels=response.get("labels", {}),
annotations=response.get("annotations", {}),
createTime=response["createTime"],
startTime=response.get("startTime"),
completionTime=response.get("completionTime"),
deleteTime=response.get("deleteTime"),
expireTime=response.get("expireTime"),
launchStage=response.get("launchStage", "GA"),
job=response["job"],
parallelism=response["parallelism"],
taskCount=response["taskCount"],
template=response["template"],
reconciling=response.get("reconciling", False),
conditions=response.get("conditions", []),
observedGeneration=response.get("observedGeneration"),
runningCount=response.get("runningCount"),
succeededCount=response.get("succeededCount"),
failedCount=response.get("failedCount"),
cancelledCount=response.get("cancelledCount"),
retriedCount=response.get("retriedCount"),
logUri=response["logUri"],
satisfiesPzs=response.get("satisfiesPzs", False),
etag=response["etag"],
)
|
PrefectHQREPO_NAMEprefectPATH_START.@prefect_extracted@prefect-main@src@integrations@prefect-gcp@prefect_gcp@models@cloud_run_v2.py@.PATH_END.py
|
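`JobV2.is_ready` above chains `get_ready_condition` and `_is_missing_container`. The decision logic can be sketched on plain dicts, without the Pydantic model or a live API client (the helper name is ours, not part of prefect-gcp):

```python
def job_is_ready(terminal_condition):
    """Sketch of JobV2.is_ready on a bare terminalCondition dict:
    raise when the container is missing, else report readiness."""
    ready = terminal_condition if terminal_condition.get("type") == "Ready" else {}
    if (ready.get("state") == "CONTAINER_FAILED"
            and ready.get("reason") == "ContainerMissing"):
        raise RuntimeError(ready.get("message"))
    return ready.get("state") == "CONDITION_SUCCEEDED"

assert job_is_ready({"type": "Ready", "state": "CONDITION_SUCCEEDED"})
assert not job_is_ready({"type": "Ready", "state": "CONDITION_PENDING"})
assert not job_is_ready({})  # no Ready condition reported yet
```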
{
"filename": "_value.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/heatmap/colorbar/tickformatstop/_value.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class ValueValidator(_plotly_utils.basevalidators.StringValidator):
def __init__(
self,
plotly_name="value",
parent_name="heatmap.colorbar.tickformatstop",
**kwargs,
):
super(ValueValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "colorbars"),
**kwargs,
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@heatmap@colorbar@tickformatstop@_value.py@.PATH_END.py
|
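The plotly validator above is pure boilerplate around a base-class pattern: the subclass pins `plotly_name`, `parent_name`, and a default `edit_type` that callers may override. A dependency-free sketch of that pattern (the base class here is a stand-in for `_plotly_utils.basevalidators.StringValidator`, not its real implementation):

```python
class StringValidator:
    """Minimal stand-in for _plotly_utils.basevalidators.StringValidator."""
    def __init__(self, plotly_name, parent_name, edit_type=None, **kwargs):
        self.plotly_name = plotly_name
        self.parent_name = parent_name
        self.edit_type = edit_type

class ValueValidator(StringValidator):
    def __init__(self, plotly_name="value",
                 parent_name="heatmap.colorbar.tickformatstop", **kwargs):
        # kwargs.pop lets a caller override the default edit_type.
        super().__init__(plotly_name=plotly_name, parent_name=parent_name,
                         edit_type=kwargs.pop("edit_type", "colorbars"), **kwargs)

v = ValueValidator()
assert v.edit_type == "colorbars"
assert v.parent_name == "heatmap.colorbar.tickformatstop"
assert ValueValidator(edit_type="calc").edit_type == "calc"
```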