{
"filename": "detectors.md",
"repo_name": "LouisDesdoigts/dLux",
"repo_path": "dLux_extracted/dLux-main/docs/API/core/detectors.md",
"type": "Markdown"
}
# Detectors
???+ info "LayeredDetector"
::: dLux.detectors.LayeredDetector
{
"filename": "abscal_inspect_2458115.ipynb",
"repo_name": "HERA-Team/H1C_IDR3_Notebooks",
"repo_path": "H1C_IDR3_Notebooks-main/abscal_inspect/abscal_inspect_2458115.ipynb",
"type": "Jupyter Notebook"
}
# Stage 2 Absolute Calibration Nightly Notebook
**Josh Dillon**, Last Revised 9/23/20
```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from hera_cal import io, redcal, apply_cal, abscal, utils
from hera_cal.smooth_cal import build_time_blacklist
from hera_qm.metrics_io import load_metric_file
import pyuvdata
import glob
import os
from copy import deepcopy
import inspect
import h5py
import matplotlib.cm as cm
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
```
```python
# If you want to run this notebook locally, copy the output of the next cell into the first few lines of this cell.
# JD = '2459122'
# data_path = '/lustre/aoc/projects/hera/H4C/2459122'
# lst_blacklist_string = '0-1.3 2.5-4.3 5.0-5.7 6.5-9.1 10.6-11.5 11.9-14.3 16.3-1.3'
# abscal_model_glob = '/lustre/aoc/projects/hera/zmartino/hera_calib_model/H3C/abscal_files_unique_baselines/zen.2458894.?????.uvh5'
# os.environ["JULIANDATE"] = JD
# os.environ["DATA_PATH"] = data_path
# os.environ["LST_BLACKLIST_STRING"] = lst_blacklist_string
# os.environ["ABSCAL_MODEL_GLOB"] = abscal_model_glob
```
```python
# Use environment variables to figure out path to data
JD = os.environ['JULIANDATE']
data_path = os.environ['DATA_PATH']
lst_blacklist_string = os.environ['LST_BLACKLIST_STRING']
abscal_model_glob = os.environ['ABSCAL_MODEL_GLOB']
print(f'JD = "{JD}"')
print(f'data_path = "{data_path}"')
print(f'lst_blacklist_string = "{lst_blacklist_string}"')
print(f'abscal_model_glob = "{abscal_model_glob}"')
```
JD = "2458115"
data_path = "/lustre/aoc/projects/hera/H1C_IDR3/IDR3_2/2458115"
lst_blacklist_string = ""
abscal_model_glob = "/lustre/aoc/projects/hera/H1C_IDR3/abscal_model/zen.245804*.HH.uvRXLS.uvh5"
```python
print('Looking for data in', data_path, 'on JD', JD)
data_list = sorted(glob.glob(os.path.join(data_path, f'zen.{JD}.?????.sum.uvh5')))
if len(data_list) == 0:
data_list = sorted(glob.glob(os.path.join(data_path, f'zen.{JD}.?????.uvh5')))
print('...found {} data files.'.format(len(data_list)))
abscal_list = sorted(glob.glob(os.path.join(data_path, f'zen.{JD}.*.abs.calfits')))
print('...found {} abscal files.'.format(len(abscal_list)))
omnical_list = sorted(glob.glob(os.path.join(data_path, f'zen.{JD}.*.sum.omni.calfits')))
print('...found {} omnical files.'.format(len(omnical_list)))
```
Looking for data in /lustre/aoc/projects/hera/H1C_IDR3/IDR3_2/2458115 on JD 2458115
...found 73 data files.
...found 73 abscal files.
...found 73 omnical files.
# Load And Inspect a Single File
```python
# get all JDs and LSTs
_, _, file_lst_arrays, file_time_arrays = io.get_file_times(data_list)
# parse lst_blacklist_string
lst_blacklists = []
if len(lst_blacklist_string) > 0:
lst_blacklists = [tuple([float(arg) for arg in arg_pair.split('-', maxsplit=1)])
for arg_pair in lst_blacklist_string.split(' ')]
# get times that are blacklisted and reshape them like file_time_arrays
time_blacklisted_flat = build_time_blacklist(np.hstack(file_time_arrays), lst_blacklists=lst_blacklists)
time_blacklisted = [fta.astype(bool) for fta in file_time_arrays]
n = 0
for i in range(len(file_time_arrays)):
time_blacklisted[i] = np.zeros_like(time_blacklisted[i], dtype=bool)
for j in range(len(file_time_arrays[i])):
time_blacklisted[i][j] = time_blacklisted_flat[n]
n += 1
# pick the central time from among the not-LST blacklisted files, if possible
good_indices = [i for i, tb in enumerate(time_blacklisted) if not np.any(tb)]
if len(good_indices) > 0:
file_index = good_indices[len(good_indices)//2]
else:
file_index = len(data_list)//2
file_JD = '.'.join([s for s in data_list[file_index].split('.') if s.isdigit()])
```
/lustre/aoc/projects/hera/heramgr/anaconda2/envs/h1c_idr3/lib/python3.7/site-packages/numpy/core/_asarray.py:83: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
return array(a, dtype, copy=False, order=order)
```python
# Load abscal gains and determine ex_ants
hc = io.HERACal(abscal_list[file_index])
gains, gain_flags, _, _ = hc.read()
ex_ants = [ant for ant in gain_flags if np.all(gain_flags[ant])]
# Get min_bl_cut; we only want to compare baselines actually used in absolute calibration
try:
min_bl_cut = float(hc.history.replace('\n','').split('--min_bl_cut')[-1].split('--')[0].strip())
except ValueError:
print('Could not find min_bl_cut, setting to 1 m.')
min_bl_cut = 1.0
# Load the most common redundant baseline longer than min_bl_cut
hd = io.HERAData(data_list[file_index])
bls_to_plot = []
for pol in ['ee', 'nn']:
reds = redcal.get_reds(hd.antpos, pols=[pol])
# reds = redcal.filter_reds(reds, ex_ants=ex_ants)
reds = sorted(reds, key=len, reverse=True)
bl_lens = np.array([np.linalg.norm(hd.antpos[red[0][1]] - hd.antpos[red[0][0]]) for red in reds])
try:
bl_group_to_plot = (np.array(reds)[bl_lens >= min_bl_cut])[0]
except IndexError:
bl_group_to_plot = reds[0]
bls_to_plot.extend(bl_group_to_plot)
# reds = sorted(reds, key=len, reverse=True)
data, flags, nsamples = hd.read(bls=bls_to_plot)
apply_cal.calibrate_in_place(data, gains, data_flags=flags, cal_flags=gain_flags)
```
Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
```python
plt.figure(figsize=(8,8))
plt.scatter(np.array(list(hd.antpos.values()))[:,0],
np.array(list(hd.antpos.values()))[:,1], c='w', s=0)
for ant,pos in hd.antpos.items():
bad = ant in [ant[0] for ant in ex_ants]
plt.gca().add_artist(plt.Circle(tuple(pos[0:2]), radius=7,
fill=(not bad), color=['grey','r'][bad]))
plt.text(pos[0],pos[1],str(ant), va='center', ha='center', color='w')
plt.xlabel("Antenna East-West Position (meters)")
plt.ylabel("Antenna North-South Position (meters)")
plt.title('Antenna Positions on {} (Red = Flagged)'.format(file_JD));
plt.axis('equal')
plt.tight_layout()
plt.show()
```

### Figure 1: Array and Flagged Antennas
#### OBSERVER CHECKLIST:
* Check that the array configuration looks reasonable.
* Check that all flags expected to be flagged are actually flagged but also that not everything is getting flagged.
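The `ex_ants` logic used below (an antenna is excluded only if its gains are flagged at every time and frequency) can be sketched with synthetic flag waterfalls; this is an illustrative stand-in, not the notebook's own data:

```python
import numpy as np

# Synthetic flag waterfalls keyed by (antenna number, Jones polarization);
# an antenna is "excluded" only if it is flagged for all times and frequencies.
gain_flags = {
    (0, 'Jee'): np.zeros((10, 64), dtype=bool),  # never flagged
    (1, 'Jee'): np.ones((10, 64), dtype=bool),   # always flagged
    (2, 'Jee'): np.eye(10, 64, dtype=bool),      # partially flagged
}
ex_ants = [ant for ant in gain_flags if np.all(gain_flags[ant])]
print(ex_ants)  # → [(1, 'Jee')]
```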
```python
# Check whether the model is redundant by looking at the history
model_is_redundant = ('--model_is_redundant' in "".join(hc.history.split()))
# Find files that overlap with this file
abscal_matched_files = list(abscal.match_times(data_list[file_index],
sorted(glob.glob(abscal_model_glob)),
filetype='uvh5', atol=1e-5))
hdm = io.HERAData(abscal_matched_files)
# Get model baselines to load
model_bls = hdm.bls
model_antpos = hdm.antpos
if isinstance(model_bls, dict):
model_bls = list(model_bls.values())[0]
model_antpos = {ant: pos for antpos in hdm.antpos.values() for ant, pos in antpos.items()}
_, model_bl_to_load, data_to_model_bl_map = abscal.match_baselines(bls_to_plot, model_bls,
hd.antpos, model_antpos=model_antpos,
model_is_redundant=model_is_redundant)
model, model_flags, _ = hdm.read(bls=model_bl_to_load)
# Rephase model at index of best match to mean LST in the data
model_index = np.argmin(np.abs(model.lsts - np.mean(data.lsts)))
model_blvecs = {bl: model.antpos[bl[0]] - model.antpos[bl[1]] for bl in model.keys()}
utils.lst_rephase(model, model_blvecs, model.freqs, np.mean(data.lsts) - model.lsts[model_index],
lat=hdm.telescope_location_lat_lon_alt_degrees[0], inplace=True)
if not model_is_redundant:
model, _, _ = utils.red_average(model, flags=model_flags)
```
Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
```python
import warnings
with warnings.catch_warnings():
warnings.filterwarnings('ignore', r'All-NaN (slice|axis) encountered')
for pol in ['ee', 'nn']:
for func, plot, ylabel in zip([np.abs, np.angle], [plt.semilogy, plt.plot], ['Amplitude (Jy)', 'Phase (Radians)']):
plt.figure(figsize=(16,4))
for bl in [k for k in bls_to_plot if k[2] == pol]:
ant0, ant1 = utils.split_bl(bl)
blvec = hd.antpos[ant0[0]] - hd.antpos[ant1[0]]
if (ant0 not in ex_ants) and (ant1 not in ex_ants):
to_plot = deepcopy(data[bl])
to_plot[flags[bl]] = np.nan + 1.0j * np.nan
to_plot = np.nanmedian(np.real(to_plot), axis=0) + 1.0j * np.nanmedian(np.imag(to_plot), axis=0)
plot(hd.freqs/1e6, func(to_plot))
for bl in [k for k in model if k[2] == pol]:
plot(hd.freqs/1e6, func(model[bl][model_index]), 'k-', label='Abscal Model')
plt.xlabel('Frequency (MHz)')
plt.ylabel(ylabel)
plt.legend(loc='lower right')
plt.title('{}-Polarized, {:f} m East, {:f} m North Visibility on {}'.format(pol, blvec[0], blvec[1], file_JD))
```




### Figure 2: Example redundant baseline group, absolute calibrated, compared to the Abscal Model
#### OBSERVER CHECKLIST:
* Check that the data all look pretty redundant.
* Check that the model isn't wildly out of line with the data.
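A simple quantitative version of the "looks pretty redundant" check is the fractional scatter of each baseline about its redundant-group mean. This is a self-contained sketch with synthetic visibilities, not the notebook's own metric:

```python
import numpy as np

rng = np.random.default_rng(0)
nbl, ntimes, nfreqs = 5, 4, 16
# One true visibility shared by the group, plus small per-baseline noise
truth = rng.normal(size=(ntimes, nfreqs)) + 1j * rng.normal(size=(ntimes, nfreqs))
vis = truth + 0.01 * (rng.normal(size=(nbl, ntimes, nfreqs))
                      + 1j * rng.normal(size=(nbl, ntimes, nfreqs)))
# Fractional deviation of each baseline from the group mean
group_mean = vis.mean(axis=0)
frac_dev = np.abs(vis - group_mean) / np.abs(group_mean)
print(f'median fractional deviation: {np.median(frac_dev):.3f}')
```

A large median here would indicate non-redundancy (or a miscalibrated antenna) within the group.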
# Load a whole day
```python
# Load chisq and flagging info from abscal gains
ant_flags_dict = {}
chisq_ee_dict = {}
chisq_nn_dict = {}
cspa_med_dict = {}
ants = set([])
for cal in abscal_list:
hc = io.HERACal(cal)
_, flags, cspa, chisq = hc.read()
ants |= set(flags.keys())
ant_flags_dict[cal] = {ant: np.all(flags[ant]) for ant in flags}
chisq_ee_dict[cal] = chisq['Jee']
chisq_nn_dict[cal] = chisq['Jnn']
cspa_med_dict[cal] = {ant: np.nanmedian(cspa[ant], axis=1) for ant in cspa}
all_flagged_dict = {ant: np.all([af[ant] for af in ant_flags_dict.values()]) for ant in ants}
cspa = {ant: np.hstack([np.squeeze(cspa_med_dict[cal][ant]) / \
~ant_flags_dict[cal][ant] for cal in abscal_list]) for ant in ants}
ee_chisq = np.vstack(np.array(list(chisq_ee_dict.values())))
nn_chisq = np.vstack(np.array(list(chisq_nn_dict.values())))
```
invalid value encountered in true_divide
Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
```python
# save middle-numbered ants with a minimal number of flags
ants_to_save = {}
for pol in ['Jee', 'Jnn']:
min_flags = np.min([np.sum(~np.isfinite(cspa[ant]))
for ant in cspa if ant[1] == pol])
ant_candidates = sorted([ant for ant in cspa if ant[1] == pol and
np.sum(~np.isfinite(cspa[ant])) == min_flags])
Nac = len(ant_candidates)
ants_to_save[pol] = ant_candidates[(Nac // 2 - 1):(Nac // 2 + 1)]
# Reload abscal gains
times_dict = {}
gain_dict = {}
flag_dict = {}
for cal in abscal_list:
hc = io.HERACal(cal)
gains, flags, _, _ = hc.read()
times_dict[cal] = hc.times
gain_dict[cal] = {ant: gains[ant] for pol in ants_to_save for ant in ants_to_save[pol]}
flag_dict[cal] = {ant: flags[ant] for pol in ants_to_save for ant in ants_to_save[pol]}
times = np.hstack(list(times_dict.values()))
lsts = 12 / np.pi * pyuvdata.utils.get_lst_for_time(times, *hd.telescope_location_lat_lon_alt_degrees)
gains = {ant: np.vstack([gain_dict[cal][ant] for cal in gain_dict])
for pol in ants_to_save for ant in ants_to_save[pol]}
flags = {ant: np.vstack([flag_dict[cal][ant] for cal in flag_dict])
for pol in ants_to_save for ant in ants_to_save[pol]}
flag_mask = np.all([f for f in flags.values()], axis=0)
```
# Inspect a whole day
```python
# for overplotting blacklisted LSTs
my_cmap = deepcopy(cm.binary)  # copy to avoid modifying the globally registered colormap
my_cmap.set_under('k', alpha=0)
blacklist = np.ones_like(ee_chisq) * np.hstack(time_blacklisted)[:, np.newaxis]
```
You are modifying the state of a globally registered colormap. In future versions, you will not be able to modify a registered colormap in-place. To remove this warning, you can make a copy of the colormap first. cmap = copy.copy(mpl.cm.get_cmap("binary"))
```python
# Grid and plot overall chi^2 for each polarization
ee_chisq = np.vstack(np.array(list(chisq_ee_dict.values())))
nn_chisq = np.vstack(np.array(list(chisq_nn_dict.values())))
fig, axes = plt.subplots(1, 2, figsize=(20,12))
for ax, cs, t in zip(axes, [ee_chisq, nn_chisq], ['ee-polarized', 'nn-polarized']):
extent=[hd.freqs[0]/1e6, hd.freqs[-1]/1e6, times[-1], times[0]]
im = ax.imshow(cs / ~flag_mask, aspect='auto', vmin=0, cmap='inferno', vmax=10, interpolation='nearest', extent=extent)
ax.imshow(blacklist, aspect='auto', cmap=my_cmap, interpolation='none', clim=[0.9, 1], alpha=.25, extent=extent)
ax.set_title(r'Overall Abscal $\chi^2$ / $N_{bls}$: ' + t)
ax.set_xlabel('Frequency (MHz)')
ax.set_ylabel('LST (Hours)')
ax.set_yticklabels(np.around(lsts[[min(max(np.searchsorted(times, t), 0), len(times) - 1) for t in ax.get_yticks()]], 2))
plt.colorbar(im, ax=ax, label=r'$\chi^2$ / $N_{bls}$ (unitless)')
```
Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
invalid value encountered in true_divide
FixedFormatter should only be used together with FixedLocator

### Figure 3: Overall Abscal $\chi^2 / N_{bls}$
This computes the difference between the calibrated data and the abscal model, normalized by the thermal noise. Grayed out regions are "blacklisted," meaning they are not flagged but they are given zero weight when performing calibration smoothing.
#### OBSERVER CHECKLIST:
* Look for regions of high $\chi^2$ that are not blacklisted.
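The normalization described above can be sketched with synthetic data; `sigma` stands in for the per-visibility thermal noise, and the real computation lives in `hera_cal`'s abscal machinery:

```python
import numpy as np

rng = np.random.default_rng(1)
nbl, ntimes, nfreqs = 10, 3, 8
sigma = 0.1  # per-visibility thermal noise standard deviation
model = np.ones((ntimes, nfreqs), dtype=complex)
# Data = model + complex noise with total variance sigma^2 per visibility
data = model + sigma * (rng.normal(size=(nbl, ntimes, nfreqs))
                        + 1j * rng.normal(size=(nbl, ntimes, nfreqs))) / np.sqrt(2)
# chi^2 per (time, freq) pixel, normalized by the number of baselines:
chisq = np.sum(np.abs(data - model)**2 / sigma**2, axis=0) / nbl
print(f'mean chi^2 / N_bls: {chisq.mean():.2f}')
```

For pure thermal noise this statistic averages to 1, which is why pixels well above 1 (and not blacklisted) deserve scrutiny.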
```python
# Pick vmax to not saturate 90% of the data
vmax = np.max([np.percentile(np.abs(gains[ants_to_save[pol][1]][~flag_mask]), 90) for pol in ['Jee', 'Jnn']])
# Plot abscal gain amplitude waterfalls for a single antenna
fig, axes = plt.subplots(3, 2, figsize=(16,16), gridspec_kw={'height_ratios': [1, .25, .25]})
for ax, pol in zip(axes[0], ['Jee', 'Jnn']):
ant = ants_to_save[pol][1]
gains_here = deepcopy(gains[ant])
gains_here[flags[ant]] = np.nan
extent=[hd.freqs[0]/1e6, hd.freqs[-1]/1e6, times[-1], times[0]]
im = ax.imshow(np.abs(gains_here), aspect='auto', cmap='inferno',
interpolation='nearest', vmin=0, vmax=vmax, extent=extent)
ax.imshow(blacklist, aspect='auto', cmap=my_cmap, interpolation='none', clim=[0.9, 1], alpha=.25, extent=extent)
ax.set_title(f'Abscal Gain Amplitude of Antenna {ant[0]}: {pol[1:]}-polarized' )
ax.set_xlabel('Frequency (MHz)')
ax.set_ylabel('LST (Hours)')
ax.set_yticklabels(np.around(lsts[[min(max(np.searchsorted(times, t), 0), len(times) - 1) for t in ax.get_yticks()]], 2))
plt.colorbar(im, ax=ax, orientation='horizontal', pad=.07)
# Now plot median gain spectra and time series
for ax, pol in zip(axes[1], ['Jee', 'Jnn']):
ant = ants_to_save[pol][1]
gains_here = deepcopy(gains[ant])
gains_here[flags[ant]] = np.nan
if not np.all(np.hstack(time_blacklisted)):
ax.plot(hd.freqs / 1e6, np.nanmedian(np.abs(gains_here[~np.hstack(time_blacklisted), :]), axis=0))
ax.set_ylim([0, vmax])
ax.set_xlabel('Frequency (MHz)')
ax.set_ylabel('|g| (unitless)')
ax.set_title(f'Median Non-Blacklisted Abscal Gain Amplitude Spectrum of Antenna {ant[0]}: {pol[1:]}-polarized')
# Now plot median gain time series
for ax, pol in zip(axes[2], ['Jee', 'Jnn']):
ant = ants_to_save[pol][1]
gains_here = deepcopy(gains[ant])
gains_here[flags[ant]] = np.nan
if not np.all(np.hstack(time_blacklisted)):
ax.plot(lsts[~np.hstack(time_blacklisted)],
np.nanmedian(np.abs(gains_here[~np.hstack(time_blacklisted), :]), axis=1),
'b.', label='Not Blacklisted LSTs')
if np.any(np.hstack(time_blacklisted)):
ax.plot(lsts[np.hstack(time_blacklisted)],
np.nanmedian(np.abs(gains_here[np.hstack(time_blacklisted), :]), axis=1),
'r.', label='Blacklisted LSTs')
ax.set_ylim([0, vmax])
ax.set_xlabel('LST (hours)')
ax.set_ylabel('|g| (unitless)')
ax.set_title(f'Median Abscal Gain Amplitude Time-Series of Antenna {ant[0]}: {pol[1:]}-polarized')
ax.legend()
plt.tight_layout()
```
FixedFormatter should only be used together with FixedLocator
All-NaN slice encountered
All-NaN slice encountered
All-NaN slice encountered
All-NaN slice encountered

### Figure 4: Example Abscal Gain Amplitudes
Abscal gain amplitudes for an example antenna. In the waterfall, grayed out regions are "blacklisted," meaning they are not flagged but they are given zero weight when performing calibration smoothing. We also plot the median non-blacklisted amplitude as a function of frequency (middle row) and the median amplitude as a function of time (bottom row).
#### OBSERVER CHECKLIST:
* Look to see that non-blacklisted times are relatively stable in amplitude
* Check to see if the bandpass looks reasonable
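One way to quantify "relatively stable in amplitude" is the per-channel fractional standard deviation of |g| over time. A self-contained sketch with a synthetic bandpass and 1% gain noise (illustrative values, not the notebook's data):

```python
import numpy as np

rng = np.random.default_rng(2)
ntimes, nfreqs = 50, 32
# Smooth synthetic bandpass, modulated by 1% multiplicative gain noise in time
bandpass = 1.0 + 0.3 * np.sin(np.linspace(0, np.pi, nfreqs))
gain = bandpass[np.newaxis, :] * (1 + 0.01 * rng.normal(size=(ntimes, nfreqs)))
# Fractional amplitude stability over time, per frequency channel
frac_std = np.std(np.abs(gain), axis=0) / np.mean(np.abs(gain), axis=0)
print(f'worst-channel fractional std: {frac_std.max():.3f}')
```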
```python
# Plot abscal gain phase waterfalls for a single antenna/refant
fig, axes = plt.subplots(3, 2, figsize=(16,16), gridspec_kw={'height_ratios': [1, .25, .25]})
for ax, pol in zip(axes[0], ['Jee', 'Jnn']):
ant0, ant1 = ants_to_save[pol]
gains_ratio_here = gains[ant0] / gains[ant1]
gains_ratio_here[flags[ant0] | flags[ant1]] = np.nan
extent=[hd.freqs[0]/1e6, hd.freqs[-1]/1e6, times[-1], times[0]]
im = ax.imshow(np.angle(gains_ratio_here), aspect='auto', cmap='inferno',
interpolation='nearest', vmin=-np.pi, vmax=np.pi, extent=extent)
ax.imshow(blacklist, aspect='auto', cmap=my_cmap, interpolation='none', clim=[0.9, 1], alpha=.25, extent=extent)
ax.set_title(f'Abscal Gain Phase of Ant {ant0[0]} / Ant {ant1[0]}: {pol[1:]}-polarized' )
ax.set_xlabel('Frequency (MHz)')
ax.set_ylabel('LST (Hours)')
ax.set_yticklabels(np.around(lsts[[min(max(np.searchsorted(times, t), 0), len(times) - 1) for t in ax.get_yticks()]], 2))
plt.colorbar(im, ax=ax, orientation='horizontal', pad=.07)
# Now plot median gain spectra and time series
for ax, pol in zip(axes[1], ['Jee', 'Jnn']):
ant0, ant1 = ants_to_save[pol]
gains_ratio_here = gains[ant0] / gains[ant1]
gains_ratio_here[flags[ant0] | flags[ant1]] = np.nan
if not np.all(np.hstack(time_blacklisted)):
re_med = np.nanmedian(gains_ratio_here[~np.hstack(time_blacklisted), :].real, axis=0)
im_med = np.nanmedian(gains_ratio_here[~np.hstack(time_blacklisted), :].imag, axis=0)
ax.plot(hd.freqs / 1e6, np.angle(re_med + 1.0j * im_med))
ax.set_ylim([-np.pi, np.pi])
ax.set_xlabel('Frequency (MHz)')
ax.set_ylabel(f'Phase of g$_{{{ant0[0]}}}$ / g$_{{{ant1[0]}}}$')
ax.set_title(f'Median Non-Blacklisted Abscal Gain Phase Spectrum of Ant {ant0[0]} / Ant {ant1[0]}: {pol[1:]}-polarized')
# Now plot a single gain angle time series
for ax, pol in zip(axes[2], ['Jee', 'Jnn']):
ant0, ant1 = ants_to_save[pol]
gains_ratio_here = gains[ant0] / gains[ant1]
gains_ratio_here[flags[ant0] | flags[ant1]] = np.nan
# pick channel with minimum phase variance in the middle 100 channels
possible_chans = np.arange(len(hd.freqs))[len(hd.freqs)//2 - 50:len(hd.freqs)//2 + 50]
best_chan = np.argmin(np.var(np.angle(gains_ratio_here), axis=0)[len(hd.freqs)//2 - 50:len(hd.freqs)//2 + 50])
chan = possible_chans[best_chan]
if not np.all(np.hstack(time_blacklisted)):
ax.plot(lsts[~np.hstack(time_blacklisted)],
np.angle(gains_ratio_here[~np.hstack(time_blacklisted), chan]),
'b.', label='Not Blacklisted LSTs')
if np.any(np.hstack(time_blacklisted)):
ax.plot(lsts[np.hstack(time_blacklisted)],
np.angle(gains_ratio_here[np.hstack(time_blacklisted), chan]),
'r.', label='Blacklisted LSTs')
ax.set_ylim([-np.pi, np.pi])
ax.set_xlabel('LST (hours)')
ax.set_ylabel(f'Phase of g$_{{{ant0[0]}}}$ / g$_{{{ant1[0]}}}$')
ax.set_title(f'Abscal Gain Phase of Ant {ant0[0]} / Ant {ant1[0]} at Channel {chan}: {pol[1:]}-polarized')
ax.legend()
plt.tight_layout()
```
FixedFormatter should only be used together with FixedLocator
All-NaN slice encountered
All-NaN slice encountered

### Figure 5: Example Abscal Gain Phases
Relative gain phases of two example antennas. In the waterfall, grayed out regions are "blacklisted," meaning they are not flagged but they are given zero weight when performing calibration smoothing. We also plot median non-blacklisted phases as a function of frequency (middle row) and the phase of the specific channel within 50 channels of the middle with minimal phase variance (bottom row).
#### OBSERVER CHECKLIST:
* Look for regions of "hashy" phase structure that are not blacklisted or attributable to RFI.
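"Hashy" phase structure can be quantified with a circular-statistics spread (one minus the mean resultant length over time), which is near 0 for stable phases and near 1 for random ones. A self-contained sketch (`circular_spread` is a hypothetical helper, not part of hera_cal):

```python
import numpy as np

rng = np.random.default_rng(3)
ntimes, nfreqs = 40, 32
# Stable, slowly varying phases vs. fully random ("hashy") phases
smooth_phase = np.linspace(0, 0.5, nfreqs)[np.newaxis, :] * np.ones((ntimes, 1))
hashy_phase = rng.uniform(-np.pi, np.pi, size=(ntimes, nfreqs))

def circular_spread(phases, axis=0):
    # 1 - |mean resultant vector|: 0 for perfectly stable phases, ~1 for random ones
    return 1 - np.abs(np.mean(np.exp(1j * phases), axis=axis)).mean()

print(f'smooth: {circular_spread(smooth_phase):.2f}, '
      f'hashy: {circular_spread(hashy_phase):.2f}')
```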
# Metadata
```python
print(redcal.version.history_string())
```
------------
This file was produced by the function <module>() in <ipython-input-1-c6de44361328> using:
git_branch: master
git_description: v3.0-733-gd2dd8ccf
git_hash: d2dd8ccf3fe43d5e5eb6a4c28ceaf4a6e3d1fcb7
git_origin: git@github.com:HERA-Team/hera_cal.git
version: 3.0
------------
{
"filename": "simulate_sed.py",
"repo_name": "asmuzsoy/bayesn-VI-paper",
"repo_path": "bayesn-VI-paper_extracted/bayesn-VI-paper-main/simulate_sed.py",
"type": "Python"
}
import numpy as np
from timeit import default_timer
from bayesn_model import SEDmodel
model = SEDmodel(load_model='T21_model')
N = 100
start = default_timer()
lc, params = model.simulate_light_curve(np.arange(-8, 40, 4), N, ['g_PS1', 'r_PS1', 'i_PS1', 'z_PS1'],
z=np.random.uniform(0, 0.1, N), mu='z', write_to_files=True, sim_name='T21_sim_100')
end = default_timer()
print(f'Simulating {N} objects took {end - start} seconds')
print(lc.shape)
{
"filename": "utils.py",
"repo_name": "nanograv/PINT",
"repo_path": "PINT_extracted/PINT-master/src/pint/utils.py",
"type": "Python"
}
"""Miscellaneous potentially-helpful functions.
Warning
-------
Functions:
- :func:`~pint.derived_quantities.a1sini`
- :func:`~pint.derived_quantities.companion_mass`
- :func:`~pint.derived_quantities.gamma`
- :func:`~pint.derived_quantities.mass_funct`
- :func:`~pint.derived_quantities.mass_funct2`
- :func:`~pint.derived_quantities.omdot`
- :func:`~pint.derived_quantities.omdot_to_mtot`
- :func:`~pint.derived_quantities.p_to_f`
- :func:`~pint.derived_quantities.pbdot`
- :func:`~pint.derived_quantities.pferrs`
- :func:`~pint.derived_quantities.pulsar_B`
- :func:`~pint.derived_quantities.pulsar_B_lightcyl`
- :func:`~pint.derived_quantities.pulsar_age`
- :func:`~pint.derived_quantities.pulsar_edot`
- :func:`~pint.derived_quantities.pulsar_mass`
- :func:`~pint.derived_quantities.shklovskii_factor`
have moved to :mod:`pint.derived_quantities`.
- :func:`pint.simulation.calculate_random_models`
has moved to :mod:`pint.simulation`.
"""
import configparser
import datetime
import getpass
import hashlib
import os
import platform
import re
import sys
import textwrap
from contextlib import contextmanager
from pathlib import Path
from warnings import warn
from scipy.optimize import minimize
from numdifftools import Hessian
from typing import (
Optional,
List,
Union,
Callable,
Any,
Tuple,
IO,
Dict,
Iterable,
Type,
Iterator,
Literal,
)
import uncertainties
import astropy.constants as const
import astropy.coordinates as coords
import astropy.units as u
import numpy as np
from astropy import constants
from astropy.time import Time
from loguru import logger as log
from scipy.special import fdtrc
from scipy.linalg import cho_factor, cho_solve
from copy import deepcopy
import warnings
import pint
import pint.pulsar_ecliptic
from pint.toa_select import TOASelect
from pint.types import file_like, quantity_like
from pint.exceptions import PINTPrecisionError, PrefixError
__all__ = [
"check_longdouble_precision",
"require_longdouble_precision",
"PosVel",
"numeric_partial",
"numeric_partials",
"check_all_partials",
"has_astropy_unit",
"PrefixError",
"split_prefixed_name",
"taylor_horner",
"taylor_horner_deriv",
"open_or_use",
"lines_of",
"interesting_lines",
"pmtot",
"dmxrange",
"sum_print",
"dmx_ranges_old",
"dmx_ranges",
"dmx_setup",
"dmxselections",
"xxxselections",
"dmxstats",
"dmxparse",
"get_prefix_timerange",
"get_prefix_timeranges",
"find_prefix_bytime",
"merge_dmx",
"split_dmx",
"split_swx",
"wavex_setup",
"translate_wave_to_wavex",
"get_wavex_freqs",
"get_wavex_amps",
"translate_wavex_to_wave",
"weighted_mean",
"ELL1_check",
"FTest",
"add_dummy_distance",
"remove_dummy_distance",
"info_string",
"list_parameters",
"colorize",
"print_color_examples",
"group_iterator",
"compute_hash",
"get_conjunction",
"divide_times",
"convert_dispersion_measure",
"parse_time",
"get_unit",
"normalize_designmatrix",
"akaike_information_criterion",
"bayesian_information_criterion",
"sherman_morrison_dot",
"woodbury_dot",
"plrednoise_from_wavex",
"pldmnoise_from_dmwavex",
"find_optimal_nharms",
]
COLOR_NAMES = ["black", "red", "green", "yellow", "blue", "magenta", "cyan", "white"]
TEXT_ATTRIBUTES = [
"normal",
"bold",
"subdued",
"italic",
"underscore",
"blink",
"reverse",
"concealed",
]
# Actual exported tools
# A warning is emitted in pint.pulsar_mjd if sufficient precision is not available
def check_longdouble_precision() -> bool:
"""Check whether long doubles have adequate precision.
Returns True if long doubles have enough precision to use PINT
for sub-microsecond timing on this machine.
"""
return np.finfo(np.longdouble).eps < 2e-19
def require_longdouble_precision() -> None:
"""Raise an exception if long doubles do not have enough precision.
Raises RuntimeError if PINT cannot be run with high precision on this
machine.
"""
if not check_longdouble_precision():
raise PINTPrecisionError(
"PINT needs higher precision floating point than you have available. PINT uses the numpy longdouble type to represent modified Julian days, and this machine does not have sufficient numerical precision to represent sub-microsecond times with np.longdouble. On an M1 Mac you will need to use a Rosetta environment, or on a Windows machine you will need to us a different Python interpreter. Some PINT operations can work with reduced precision, but you have requested one that cannot."
)
class PosVel:
"""Position/Velocity class.
The class is used to represent the 6 values describing position
and velocity vectors. Instances have 'pos' and 'vel' attributes
that are numpy arrays of floats (and can have attached astropy
units). The 'pos' and 'vel' params are 3-vectors of the positions
and velocities respectively.
The coordinates are generally assumed to be aligned with ICRF (J2000),
i.e. they are in an inertial, not earth-rotating frame
The 'obj' and 'origin' components are strings that can optionally
be used to specify names for endpoints of the vectors. If present,
addition/subtraction will check that vectors are being combined in
a consistent way.
Specifically, if two PosVel objects are added, the obj of one must
equal the origin of the other (either way around). If the two
vectors agree on both ends, then the result vector will choose the
origin of the vector on the left.
"""
def __init__(
self,
pos: Union[List, np.ndarray, u.Quantity],
vel: Union[List, np.ndarray, u.Quantity],
obj: Optional[str] = None,
origin: Optional[str] = None,
) -> None:
if len(pos) != 3:
raise ValueError(f"Position vector has length {len(pos)} instead of 3")
self.pos = pos if isinstance(pos, u.Quantity) else np.asarray(pos)
if len(vel) != 3:
raise ValueError(f"Velocity vector has length {len(vel)} instead of 3")
self.vel = vel if isinstance(vel, u.Quantity) else np.asarray(vel)
if len(self.pos.shape) != len(self.vel.shape):
# FIXME: could broadcast them, but have to be careful
raise ValueError(
f"pos and vel must have the same number of dimensions but are {self.pos.shape} and {self.vel.shape}"
)
elif self.pos.shape != self.vel.shape:
self.pos, self.vel = np.broadcast_arrays(self.pos, self.vel, subok=True)
if (obj is None) != (origin is None):
raise ValueError(
"If one of obj and origin is specified, the other must be too."
)
self.obj = obj
self.origin = origin
# FIXME: what about dtype compatibility?
def _has_labels(self) -> bool:
return (self.obj is not None) and (self.origin is not None)
def __neg__(self) -> "PosVel":
return PosVel(-self.pos, -self.vel, obj=self.origin, origin=self.obj)
def __add__(self, other: "PosVel") -> "PosVel":
obj = None
origin = None
if self._has_labels() and other._has_labels():
# here we check that the addition "makes sense", ie the endpoint
# of self is the origin of other (or vice-versa)
if self.obj == other.origin:
origin = self.origin
obj = other.obj
elif self.origin == other.obj:
origin = other.origin
obj = self.obj
else:
raise ValueError(
f"Attempting to add incompatible vectors: {self.origin}->{self.obj} + {other.origin}->{other.obj}"
)
return PosVel(
self.pos + other.pos, self.vel + other.vel, obj=obj, origin=origin
)
def __sub__(self, other: "PosVel") -> "PosVel":
return self.__add__(other.__neg__())
def __str__(self) -> str:
return (
f"PosVel({str(self.pos)}, {str(self.vel)} {self.origin}->{self.obj})"
if self._has_labels()
else f"PosVel({str(self.pos)}, {str(self.vel)})"
)
def __getitem__(self, k: Union[int, Tuple]) -> "PosVel":
"""Allow extraction of slices of the contained arrays"""
colon = slice(None, None, None)
ix = (colon,) + k if isinstance(k, tuple) else (colon, k)
return self.__class__(
self.pos[ix], self.vel[ix], obj=self.obj, origin=self.origin
)
def numeric_partial(
f: Callable, args: Union[List, Tuple], ix: int = 0, delta: float = 1e-6
) -> float:
"""Compute the partial derivative of f numerically.
This uses symmetric differences to estimate the partial derivative
of a function (that takes some number of numeric arguments and may
return an array) with respect to one of its arguments.
"""
# r = np.array(f(*args))
args2 = list(args)
args2[ix] = args[ix] + delta / 2.0
r2 = np.array(f(*args2))
args3 = list(args)
args3[ix] = args[ix] - delta / 2.0
r3 = np.array(f(*args3))
return (r2 - r3) / delta
def numeric_partials(
f: Callable, args: Union[List, Tuple], delta: float = 1e-6
) -> float:
"""Compute all the partial derivatives of f numerically.
Returns a matrix of the partial derivative of every return value
with respect to every input argument. f is assumed to take a flat list
of numeric arguments and return a list or array of values.
"""
r = [numeric_partial(f, args, i, delta) for i in range(len(args))]
return np.array(r).T
def check_all_partials(
f: Callable,
args: Union[List, Tuple],
delta: float = 1e-6,
atol: float = 1e-4,
rtol: float = 1e-4,
) -> None:
"""Check the partial derivatives of a function that returns derivatives.
The function is assumed to return a pair (values, partials), where
partials is supposed to be a matrix of the partial derivatives of f
with respect to all its arguments. These values are checked against
numerical partial derivatives.
"""
_, jac = f(*args)
jac = np.asarray(jac)
njac = numeric_partials(lambda *args: f(*args)[0], args, delta)
try:
np.testing.assert_allclose(jac, njac, atol=atol, rtol=rtol)
except AssertionError:
d = np.abs(jac - njac) / (atol + rtol * np.abs(njac))
print("fail fraction:", np.sum(d > 1) / float(np.sum(d >= 0)))
worst_ix = np.unravel_index(np.argmax(d.reshape((-1,))), d.shape)
print("max fail:", np.amax(d), "at", worst_ix)
print("jac there:", jac[worst_ix], "njac there:", njac[worst_ix])
raise
def has_astropy_unit(x: Any) -> bool:
"""Test whether x has a unit attribute containing an astropy unit.
This is useful, because different data types can still have units
associated with them.
"""
return hasattr(x, "unit") and isinstance(x.unit, u.core.UnitBase)
# Define prefix parameter pattern
prefix_pattern = [
re.compile(r"^([a-zA-Z]*\d+[a-zA-Z]+)(\d+)$"), # For the prefix like T2EFAC2
re.compile(r"^([a-zA-Z]+)0*(\d+)$"), # For the prefix like F12
re.compile(r"^([a-zA-Z0-9]+_)(\d+)$"), # For the prefix like DMXR1_3
re.compile(r"([a-zA-Z]+_[a-zA-Z]+)(\d+)$"), # for prefixes like NE_SW2?
]
def split_prefixed_name(name: str) -> Tuple[str, str, int]:
"""Split a prefixed name.
Parameters
----------
name : str
Prefixed name
Returns
-------
prefixPart : str
The prefix part of the name
indexPart : str
The index part from the name
indexValue : int
The absolute index value
Example
-------
>>> split_prefixed_name("DMX_0123")
('DMX_', '0123', 123)
>>> split_prefixed_name("T2EFAC17")
('T2EFAC', '17', 17)
>>> split_prefixed_name("F12")
('F', '12', 12)
>>> split_prefixed_name("DMXR1_2")
('DMXR1_', '2', 2)
>>> split_prefixed_name("PEPOCH")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pint/utils.py", line 406, in split_prefixed_name
raise PrefixError("Unrecognized prefix name pattern '%s'." % name)
pint.utils.PrefixError: Unrecognized prefix name pattern 'PEPOCH'.
"""
for pt in prefix_pattern:
try:
prefix_part, index_part = pt.match(name).groups()
break
except AttributeError:
continue
else:
raise PrefixError(f"Unrecognized prefix name pattern '{name}'.")
return prefix_part, index_part, int(index_part)
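The first-match-wins pattern loop above can be illustrated with a trimmed-down, standalone sketch (`split_name` and `patterns` are illustrative names; the regexes are copied from the list above):

```python
import re

patterns = [
    re.compile(r"^([a-zA-Z]*\d+[a-zA-Z]+)(\d+)$"),  # T2EFAC2  -> ('T2EFAC', '2')
    re.compile(r"^([a-zA-Z]+)0*(\d+)$"),            # F12      -> ('F', '12')
    re.compile(r"^([a-zA-Z0-9]+_)(\d+)$"),          # DMXR1_3  -> ('DMXR1_', '3')
]


def split_name(name: str):
    """Split a prefixed parameter name into (prefix, index string, index value)."""
    for pt in patterns:
        m = pt.match(name)
        if m:
            prefix, index = m.groups()
            return prefix, index, int(index)
    raise ValueError(f"Unrecognized prefix name pattern {name!r}")


result = split_name("DMX_0123")
```

Note that the order of the patterns matters: `T2EFAC17` must be caught by the first pattern before the plain letters-then-digits pattern gets a chance.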
def taylor_horner(
x: quantity_like,
coeffs: Union[List[u.Quantity], List[uncertainties.ufloat]],
) -> quantity_like:
"""Evaluate a Taylor series of coefficients at x via the Horner scheme.
For example, if we want: 10 + 3*x/1! + 4*x^2/2! + 12*x^3/3! with
x evaluated at 2.0, we would do::
In [1]: taylor_horner(2.0, [10, 3, 4, 12])
Out[1]: 40.0
Parameters
----------
x: float or numpy.ndarray or astropy.units.Quantity
Input value; may be an array.
coeffs: list of astropy.units.Quantity or uncertainties.ufloat
Coefficient array; must have length at least one. The coefficient in
position ``i`` is multiplied by ``x**i``. Each coefficient should
just be a number, not an array. The units should be compatible once
multiplied by an appropriate power of x.
Returns
-------
float or numpy.ndarray or astropy.units.Quantity
Output value; same shape as input. Units as inferred from inputs.
"""
return taylor_horner_deriv(x, coeffs, deriv_order=0)
def taylor_horner_deriv(
x: quantity_like,
coeffs: Union[List[u.Quantity], List[uncertainties.ufloat]],
deriv_order: int = 1,
) -> quantity_like:
"""Evaluate the nth derivative of a Taylor series.
For example, if we want: first order of (10 + 3*x/1! + 4*x^2/2! + 12*x^3/3!)
with respect to x evaluated at 2.0, we would do::
In [1]: taylor_horner_deriv(2.0, [10, 3, 4, 12], 1)
        Out[1]: 35.0
Parameters
----------
x: float or numpy.ndarray or astropy.units.Quantity
Input value; may be an array.
coeffs: list of astropy.units.Quantity or uncertainties.ufloat
Coefficient array; must have length at least one. The coefficient in
position ``i`` is multiplied by ``x**i``. Each coefficient should
just be a number, not an array. The units should be compatible once
multiplied by an appropriate power of x.
deriv_order: int
The order of the derivative to take (that is, how many times to differentiate).
Must be non-negative.
Returns
-------
float or numpy.ndarray or astropy.units.Quantity
Output value; same shape as input. Units as inferred from inputs.
"""
assert deriv_order >= 0
result = 0.0
if hasattr(coeffs[-1], "unit"):
if not hasattr(x, "unit"):
x = x * u.Unit("")
result *= coeffs[-1].unit / x.unit
der_coeffs = coeffs[deriv_order::]
fact = len(der_coeffs)
for coeff in der_coeffs[::-1]:
result = result * x / fact + coeff
fact -= 1.0
return result
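The Horner recurrence used above can be checked against a direct factorial sum. This standalone sketch (unit handling omitted; `horner_taylor` and `direct_taylor` are illustrative names) evaluates `sum(c[i] * x**i / i!)` both ways:

```python
from math import factorial


def horner_taylor(x, coeffs):
    """Evaluate sum(coeffs[i] * x**i / i!) via Horner's scheme."""
    result = 0.0
    fact = float(len(coeffs))
    for c in reversed(coeffs):
        result = result * x / fact + c
        fact -= 1.0
    return result


def direct_taylor(x, coeffs):
    """Same series, evaluated term by term with explicit factorials."""
    return sum(c * x**i / factorial(i) for i, c in enumerate(coeffs))


val = horner_taylor(2.0, [10, 3, 4, 12])  # 10 + 3*2 + 4*4/2 + 12*8/6
deriv = horner_taylor(2.0, [3, 4, 12])    # first derivative: drop coeffs[0]
```

Dropping the leading coefficients and re-running the same recurrence is exactly how `taylor_horner_deriv` obtains the nth derivative.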
@contextmanager
def open_or_use(f: file_like, mode: Literal["r", "rb", "w", "wb"] = "r") -> Iterator:
"""Open a filename or use an open file.
Specifically, if f is a string, try to use it as an argument to
open. Otherwise just yield it. In particular anything that is not
a subclass of ``str`` will be passed through untouched.
"""
if isinstance(f, (str, bytes, Path)):
with open(f, mode) as fl:
yield fl
else:
yield f
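The open-or-pass-through idiom above can be exercised with a temporary file and an in-memory stream. This sketch uses only the standard library (`open_or_pass` is an illustrative stand-in for the context manager above):

```python
import io
import tempfile
from contextlib import contextmanager
from pathlib import Path


@contextmanager
def open_or_pass(f, mode="r"):
    """Open str/bytes/Path arguments; yield already-open files unchanged."""
    if isinstance(f, (str, bytes, Path)):
        with open(f, mode) as fl:
            yield fl
    else:
        yield f


with tempfile.TemporaryDirectory() as d:
    p = Path(d) / "demo.txt"
    p.write_text("hello\n")
    with open_or_pass(p) as fh:                    # Path: opened (and closed) for us
        from_path = fh.read()
    with open_or_pass(io.StringIO("hi\n")) as fh:  # open stream: passed through
        from_stream = fh.read()
```

Note the asymmetry: files the context manager opens are also closed by it, while streams passed through are left open for the caller to manage.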
def lines_of(f: Any) -> Iterator:
"""Iterate over the lines of a file, an open file, or an iterator.
If ``f`` is a string, try to open a file of that name. Otherwise
treat it as an iterator and yield its values. For open files, this
results in the lines one-by-one. For lists or other iterators it
just yields them right through.
"""
with open_or_use(f) as fo:
yield from fo
def interesting_lines(
    lines: Iterable, comments: Optional[Union[str, Iterable[str]]] = None
) -> Iterator[str]:
    """Iterate over lines skipping whitespace and comments.
    Each line has its whitespace stripped and is then checked with
    ``.startswith(comments)``; since ``str.startswith`` accepts a tuple,
    ``comments`` can be a single string or a list of strings.
"""
if comments is None:
cc = ()
elif isinstance(comments, (str, bytes)):
cc = (comments,)
else:
cc = tuple(comments)
for c in cc:
cs = c.strip()
if not cs or not c.startswith(cs):
raise ValueError(
"Unable to deal with comments that start with whitespace, "
"but comment string {!r} was requested.".format(c)
)
for ln in lines:
ln = ln.strip()
if not ln:
continue
if ln.startswith(cc):
continue
yield ln
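Feeding the generator a plain list shows the stripping and comment filtering in action; a standalone sketch (the input-validation step is omitted and `filter_lines` is an illustrative name):

```python
def filter_lines(lines, comments=None):
    """Yield stripped, non-empty lines that do not start with a comment marker."""
    cc = () if comments is None else (
        (comments,) if isinstance(comments, (str, bytes)) else tuple(comments)
    )
    for ln in lines:
        ln = ln.strip()
        if ln and not ln.startswith(cc):
            yield ln


raw = ["  # a comment", "", "C fortran-style comment", "  DATA 1 2 3  ", "value"]
kept = list(filter_lines(raw, comments=("#", "C ")))
```

An empty tuple is a safe default here because `"x".startswith(())` is always False, so with no comment markers only blank lines are dropped.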
def pmtot(model: "pint.models.TimingModel") -> u.Quantity:
"""Compute and return the total proper motion from a model object
    Calculates total proper motion from the parameters of the model, in either
    equatorial or ecliptic coordinates. Note that in both cases, pulsar timing
    codes define the proper motion in the longitude coordinate to be the actual
    angular rate of change of position on the sky rather than the change in
    coordinate value, so PMRA = (d(RAJ)/dt)*cos(DECJ). This differs from the
    astrometry convention, where mu_alpha = d(alpha)/dt.
    Thus, we don't need to include cos(DECJ) or cos(ELAT) in our calculation.
Parameters
----------
model: pint.models.timing_model.TimingModel
Returns
-------
pmtot : astropy.units.Quantity
Returns total proper motion with units of ``u.mas/u.yr``
Raises
------
AttributeError
If no Astrometry component is found in the model
"""
if "AstrometryEcliptic" in model.components.keys():
return np.sqrt(model.PMELONG.quantity**2 + model.PMELAT.quantity**2).to(
u.mas / u.yr
)
elif "AstrometryEquatorial" in model.components.keys():
return np.sqrt(model.PMRA.quantity**2 + model.PMDEC.quantity**2).to(
u.mas / u.yr
)
else:
raise AttributeError("No Astrometry component found")
class dmxrange:
"""Internal class for building DMX ranges"""
def __init__(self, lofreqs: List[float], hifreqs: List[float]):
"""lofreqs and hifreqs are lists of MJDs that are in the low or high band respectively"""
self.los = lofreqs
self.his = hifreqs
self.min = min(lofreqs + hifreqs) - 0.001 * u.d
self.max = max(lofreqs + hifreqs) + 0.001 * u.d
def sum_print(self) -> None:
print(
"{:8.2f}-{:8.2f} ({:8.2f}): NLO={:5d} NHI={:5d}".format(
self.min.value,
self.max.value,
self.max - self.min,
len(self.los),
len(self.his),
)
)
def dmx_ranges_old(
toas: "pint.toa.TOAs",
divide_freq: u.Quantity = 1000.0 * u.MHz,
offset: u.Quantity = 0.01 * u.d,
max_diff: u.Quantity = 15.0 * u.d,
verbose: bool = False,
) -> Tuple[np.ndarray, "pint.models.Component"]:
"""Compute initial DMX ranges for a set of TOAs
This is a rudimentary translation of $TEMPO/utils/dmx_ranges/DMX_ranges2.py
Parameters
----------
toas : pint.toa.TOAs
divide_freq : Quantity, MHz
Requires TOAs above and below this freq for a good DMX range
offset : Quantity, days
The buffer to include around each DMX range. Warning, may cause bins to overlap?!?
max_diff : Quantity, days
Maximum duration of a DMX bin
verbose : bool
If True, print out verbose information about the DMX ranges including par file lines.
Returns
-------
mask : bool array
Array with True for all TOAs that got assigned to a DMX bin
component : TimingModel.Component object
A DMX Component class with the DMX ranges included
"""
import pint.models.parameter
from pint.models.timing_model import Component
MJDs = toas.get_mjds()
freqs = toas.table["freq"]
loMJDs = MJDs[freqs < divide_freq]
hiMJDs = MJDs[freqs > divide_freq]
# Round off the dates to 0.1 days and only keep unique values so we ignore closely spaced TOAs
loMJDs = np.unique(loMJDs.round(1))
hiMJDs = np.unique(hiMJDs.round(1))
log.info(f"There are {len(hiMJDs)} dates with freqs > {divide_freq} MHz")
log.info(f"There are {len(loMJDs)} dates with freqs < {divide_freq} MHz\n")
DMXs = []
good_his = set([])
bad_los = []
# Walk through all of the low freq obs
for ii, loMJD in enumerate(loMJDs):
# find all the high freq obs within max_diff days
# of the low freq obs
hi_close = hiMJDs[np.fabs(hiMJDs - loMJD) < max_diff]
# and where they are closer to this loMJD compared to the
# other nearby ones
if ii > 0:
diffs = np.fabs(hi_close - loMJD)
lodiffs = np.fabs(hi_close - loMJDs[ii - 1])
hi_close = hi_close[diffs < lodiffs]
if ii < len(loMJDs) - 1:
diffs = np.fabs(hi_close - loMJD)
hidiffs = np.fabs(hi_close - loMJDs[ii + 1])
hi_close = hi_close[diffs < hidiffs]
if len(hi_close): # add a DMXrange
DMXs.append(dmxrange([loMJD], list(hi_close)))
good_his = good_his.union(set(hi_close))
else:
bad_los.append(loMJD)
bad_los = set(bad_los)
saved_los = []
# print bad_los
# Now walk through the DMXs and see if we can't fit a bad_lo freq in
for bad_lo in bad_los:
absmindiff = 2 * max_diff
ind = 0
for ii, DMX in enumerate(DMXs):
if (
np.fabs(bad_lo - DMX.min) < max_diff
and np.fabs(bad_lo - DMX.max) < max_diff
):
mindiff = min(np.fabs(bad_lo - DMX.min), np.fabs(bad_lo - DMX.max))
if mindiff < absmindiff:
absmindiff = mindiff
ind = ii
if absmindiff < max_diff:
# print DMXs[ind].min, DMXs[ind].max, bad_lo
DMXs[ind].los.append(bad_lo)
# update the min and max vals
DMXs[ind].min = min(DMXs[ind].los + DMXs[ind].his)
DMXs[ind].max = max(DMXs[ind].los + DMXs[ind].his)
saved_los.append(bad_lo)
# These are the low-freq obs we can't save
bad_los -= set(saved_los)
bad_los = sorted(list(bad_los))
# These are the high-freq obs we can't save
bad_his = set(hiMJDs) - good_his
bad_his = sorted(list(bad_his))
if verbose:
print("\n These are the 'good' ranges for DMX and days are low/high freq:")
for DMX in DMXs:
DMX.sum_print()
print("\nRemove high-frequency data from these days:")
for hibad in bad_his:
print("{:8.2f}".format(hibad.value))
print("\nRemove low-frequency data from these days:")
for lobad in bad_los:
print("{:8.2f}".format(lobad.value))
print("\n Enter the following in your parfile")
print("-------------------------------------")
print("DMX {:.2f}".format(max_diff.value))
oldmax = 0.0
for ii, DMX in enumerate(DMXs):
print("DMX_{:04d} 0.0 {}".format(ii + 1, 1))
print("DMXR1_{:04d} {:10.4f}".format(ii + 1, (DMX.min - offset).value))
print("DMXR2_{:04d} {:10.4f}".format(ii + 1, (DMX.max + offset).value))
if DMX.min < oldmax:
print("Ack! This shouldn't be happening!")
oldmax = DMX.max
# Init mask to all False
mask = np.zeros_like(MJDs.value, dtype=bool)
# Mark TOAs as True if they are in any DMX bin
for DMX in DMXs:
mask[np.logical_and(MJDs > DMX.min - offset, MJDs < DMX.max + offset)] = True
log.info(f"{mask.sum()} out of {len(mask)} TOAs are in a DMX bin")
# Instantiate a DMX component
dmx_class = Component.component_types["DispersionDMX"]
dmx_comp = dmx_class()
# Add parameters
for ii, DMX in enumerate(DMXs):
if ii == 0:
# Already have DMX_0001 in component, so just set parameters
dmx_comp.DMX_0001.value = 0.0
dmx_comp.DMX_0001.frozen = False
dmx_comp.DMXR1_0001.value = (DMX.min - offset).value
dmx_comp.DMXR2_0001.value = (DMX.max + offset).value
else:
# Add the DMX parameters
dmx_par = pint.models.parameter.prefixParameter(
parameter_type="float",
name="DMX_{:04d}".format(ii + 1),
value=0.0,
units=u.pc / u.cm**3,
frozen=False,
)
dmx_comp.add_param(dmx_par, setup=True)
dmxr1_par = pint.models.parameter.prefixParameter(
parameter_type="mjd",
name="DMXR1_{:04d}".format(ii + 1),
value=(DMX.min - offset).value,
units=u.d,
)
dmx_comp.add_param(dmxr1_par, setup=True)
dmxr2_par = pint.models.parameter.prefixParameter(
parameter_type="mjd",
name="DMXR2_{:04d}".format(ii + 1),
value=(DMX.max + offset).value,
units=u.d,
)
dmx_comp.add_param(dmxr2_par, setup=True)
# Validate component
dmx_comp.validate()
return mask, dmx_comp
def dmx_ranges(
toas: "pint.toa.TOAs",
divide_freq=1000.0 * u.MHz,
binwidth=15.0 * u.d,
verbose=False,
) -> Tuple[np.ndarray, "pint.models.timing_model.Component"]:
"""Compute initial DMX ranges for a set of TOAs
This is an alternative algorithm for computing DMX ranges
Parameters
----------
    toas : pint.toa.TOAs
    divide_freq : Quantity, MHz
        Requires TOAs above and below this freq for a good DMX range
    binwidth : Quantity, days
        Maximum duration of a DMX bin
verbose : bool
If True, print out verbose information about the DMX ranges including par file lines.
Returns
-------
mask : bool array
Array with True for all TOAs that got assigned to a DMX bin
component : TimingModel.Component object
A DMX Component class with the DMX ranges included
"""
import pint.models.parameter
from pint.models.timing_model import Component
MJDs = toas.get_mjds()
freqs = toas.table["freq"].quantity
DMXs = []
prevbinR2 = MJDs[0] - 0.001 * u.d
while np.any(MJDs > prevbinR2):
# Consider all TOAs with times after the last bin up through a total span of binwidth
# Get indexes that should be in this bin
# If there are no more MJDs to process, we are done.
startMJD = MJDs[MJDs > prevbinR2][0]
binidx = np.logical_and(MJDs > prevbinR2, MJDs <= startMJD + binwidth)
if not np.any(binidx):
break
binMJDs = MJDs[binidx]
binfreqs = freqs[binidx]
loMJDs = binMJDs[binfreqs < divide_freq]
hiMJDs = binMJDs[binfreqs >= divide_freq]
# If we have freqs below and above the divide, this is a good bin
        if np.any(binfreqs < divide_freq) and np.any(binfreqs >= divide_freq):
DMXs.append(dmxrange(list(loMJDs), list(hiMJDs)))
else:
# These TOAs cannot be used
pass
prevbinR2 = binMJDs.max()
if verbose:
print(
"\n These are the good DMX ranges with number of TOAs above/below the dividing freq:"
)
for DMX in DMXs:
DMX.sum_print()
# Init mask to all False
mask = np.zeros_like(MJDs.value, dtype=bool)
# Mark TOAs as True if they are in any DMX bin
for DMX in DMXs:
mask[np.logical_and(MJDs >= DMX.min, MJDs <= DMX.max)] = True
log.info(f"{mask.sum()} out of {len(mask)} TOAs are in a DMX bin")
# Instantiate a DMX component
dmx_class = Component.component_types["DispersionDMX"]
dmx_comp = dmx_class()
# Add parameters
for ii, DMX in enumerate(DMXs):
if ii == 0:
# Already have DMX_0001 in component, so just set parameters
dmx_comp.DMX_0001.value = 0.0
dmx_comp.DMX_0001.frozen = False
dmx_comp.DMXR1_0001.value = DMX.min.value
dmx_comp.DMXR2_0001.value = DMX.max.value
else:
# Add the DMX parameters
dmx_par = pint.models.parameter.prefixParameter(
parameter_type="float",
name="DMX_{:04d}".format(ii + 1),
value=0.0,
units=u.pc / u.cm**3,
frozen=False,
)
dmx_comp.add_param(dmx_par, setup=True)
dmxr1_par = pint.models.parameter.prefixParameter(
parameter_type="mjd",
name="DMXR1_{:04d}".format(ii + 1),
value=DMX.min.value,
units=u.d,
)
dmx_comp.add_param(dmxr1_par, setup=True)
dmxr2_par = pint.models.parameter.prefixParameter(
parameter_type="mjd",
name="DMXR2_{:04d}".format(ii + 1),
value=DMX.max.value,
units=u.d,
)
dmx_comp.add_param(dmxr2_par, setup=True)
# Validate component
dmx_comp.validate()
return mask, dmx_comp
def dmx_setup(
t: Union["pint.toa.TOAs", u.Quantity, Time],
minwidth: u.Quantity = 10 * u.d,
mintoas: int = 1,
) -> Tuple[u.Quantity, u.Quantity, np.ndarray]:
"""Set up DMX bins with a minimal binning strategy
The nominal binwidth will be >=`minwidth`, but will always include >=`mintoas` TOAs.
No dividing based on observing frequency is done.
Parameters
----------
t : `pint.toa.TOAs` or astropy.units.Quantity or astropy.time.Time
Input TOAs to divide. If Quantity, assume MJD
minwidth : astropy.units.Quantity
Minimum bin width
mintoas : int
Minimum number of TOAs in a bin
Returns
-------
R1 : astropy.units.Quantity
Start times of the bins
R2 : astropy.units.Quantity
Stop times of the bins
N : np.ndarray
Number of TOAs in each bin
Example
-------
To use the output of this function::
>>> R1, R2, N = dmx_setup(t)
>>> model.add_component(pint.models.dispersion_model.DispersionDMX())
>>> model.DMXR1_0001.value = R1[0].value
>>> model.DMXR2_0001.value = R2[0].value
>>> model.add_DMX_ranges(R1[1:].value, R2[1:].value, frozens=False)
Since the first DMX range already exists, we update those values before adding the other ranges.
"""
if isinstance(t, Time):
MJDs = np.sort(t.mjd * u.d)
elif isinstance(t, u.Quantity):
MJDs = np.sort(t)
else:
# assume TOAs, although we don't want to check explicitly to avoid circular imports
MJDs = np.sort(t.get_mjds())
itoa = 0
idmx = 0
R1 = []
R2 = []
while itoa < len(MJDs) - 1:
if idmx == 0:
R1.append(MJDs[itoa])
else:
R1.append(R2[-1])
R2.append(R1[idmx] + minwidth)
itoa = np.where(MJDs <= R2[-1])[0].max()
while ((MJDs >= R1[idmx]) & (MJDs < R2[idmx])).sum() < mintoas:
itoa += 1
if itoa < len(MJDs):
R2[idmx] = MJDs[itoa] + 1 * u.d
else:
R2[idmx] = MJDs[itoa - 1] + 1 * u.d
break
idmx += 1
if (R2[-1] - R1[-1] < minwidth) or (
((MJDs >= R1[-1]) & (MJDs < R2[-1])).sum() < mintoas
):
# in case the last bin is too short
R2[-2] = R2[-1]
R1.pop()
R2.pop()
R1 = np.array([x.value for x in R1]) * u.d
R2 = np.array([x.value for x in R2]) * u.d
N = np.zeros(len(R1), dtype=int)
for idmx in range(len(R1)):
N[idmx] = ((MJDs >= R1[idmx]) & (MJDs < R2[idmx])).sum()
return R1, R2, N
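The greedy bin-widening strategy of `dmx_setup` can be sketched with plain floats (no units, no TOAs objects). This is a simplified illustration, not the function above: `greedy_bins`, its merging of a short trailing bin, and the example MJDs are all assumptions for demonstration:

```python
def greedy_bins(mjds, minwidth=10.0, mintoas=2):
    """Greedily cut sorted MJDs into contiguous bins spanning >= minwidth days
    and containing >= mintoas points each (standalone sketch)."""
    mjds = sorted(mjds)
    bins = []
    start = mjds[0]
    count = 0
    for t in mjds:
        count += 1
        # Close the current bin once both the width and count criteria are met.
        if t - start >= minwidth and count >= mintoas:
            bins.append((start, t))
            start = t
            count = 0
    if count:  # sweep any leftover points into the last bin
        if bins:
            bins[-1] = (bins[-1][0], mjds[-1])
        else:
            bins.append((start, mjds[-1]))
    return bins


bins = greedy_bins([0, 1, 12, 13, 30, 31, 33])
```

As in `dmx_setup`, adjacent bins share a boundary, so the intervals should be interpreted half-open when assigning points to bins.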
def xxxselections(
model: "pint.models.TimingModel", toas: "pint.toa.TOAs", prefix: str = "DM"
) -> Dict[str, np.ndarray]:
"""Map DMX/SWX/other selections to TOAs
Parameters
----------
model : pint.models.TimingModel
toas : pint.toa.TOAs
prefix : str
Name of selection
Returns
-------
dict :
keys are XXX indices, values are the TOAs selected for each index
"""
if not any(p.startswith(f"{prefix}X") for p in model.params):
return {}
toas_selector = TOASelect(is_range=True)
X_mapping = model.get_prefix_mapping(f"{prefix}X_")
XR1_mapping = model.get_prefix_mapping(f"{prefix}XR1_")
XR2_mapping = model.get_prefix_mapping(f"{prefix}XR2_")
condition = {}
for ii in X_mapping:
r1 = getattr(model, XR1_mapping[ii]).quantity
r2 = getattr(model, XR2_mapping[ii]).quantity
condition[X_mapping[ii]] = (r1.mjd, r2.mjd)
return toas_selector.get_select_index(condition, toas["mjd_float"])
def dmxselections(
model: "pint.models.TimingModel", toas: "pint.toa.TOAs"
) -> Dict[str, np.ndarray]:
"""Map DMX selections to TOAs
Parameters
----------
model : pint.models.TimingModel
toas : pint.toa.TOAs
Returns
-------
dict :
keys are DMX indices, values are the TOAs selected for each index
"""
toas_selector = TOASelect(is_range=True)
DMX_mapping = model.get_prefix_mapping("DMX_")
DMXR1_mapping = model.get_prefix_mapping("DMXR1_")
DMXR2_mapping = model.get_prefix_mapping("DMXR2_")
condition = {}
for ii in DMX_mapping:
r1 = getattr(model, DMXR1_mapping[ii]).quantity
r2 = getattr(model, DMXR2_mapping[ii]).quantity
condition[DMX_mapping[ii]] = (r1.mjd, r2.mjd)
return toas_selector.get_select_index(condition, toas["mjd_float"])
def dmxstats(
model: "pint.models.TimingModel", toas: "pint.toa.TOAs", file: IO = sys.stdout
) -> None:
"""Print DMX statistics
Based off dmxparse by P. Demorest (https://github.com/nanograv/tempo/tree/master/util/dmxparse)
Parameters
----------
model : pint.models.TimingModel
toas : pint.toa.TOAs
file : a file-like object (stream); defaults to the current sys.stdout
"""
mjds = toas.get_mjds()
freqs = toas.table["freq"]
selected = np.zeros(len(toas), dtype=np.bool_)
DMX_mapping = model.get_prefix_mapping("DMX_")
select_idx = dmxselections(model, toas)
for ii in DMX_mapping:
if f"DMX_{ii:04d}" in select_idx:
selection = select_idx[f"DMX_{ii:04d}"]
selected[selection] = True
print(
"DMX_{:04d}: NTOAS={:5d}, MJDSpan={:14.4f}, FreqSpan={:8.3f}-{:8.3f}".format(
ii,
len(selection),
                    (mjds[selection].max() - mjds[selection].min()),
freqs[selection].min() * u.MHz,
freqs[selection].max() * u.MHz,
),
file=file,
)
else:
print(
"DMX_{:04d}: NTOAS={:5d}, MJDSpan={:14.4f}, FreqSpan={:8.3f}-{:8.3f}".format(
ii, 0, 0 * u.d, 0 * u.MHz, 0 * u.MHz
),
file=file,
)
if not np.all(selected):
        print(f"{(~selected).sum()} TOAs not selected in any DMX window", file=file)
def dmxparse(
fitter: "pint.fitter.Fitter", save: bool = False
) -> Dict[str, Union[u.Quantity, List]]:
"""Run dmxparse in python using PINT objects and results.
Based off dmxparse by P. Demorest (https://github.com/nanograv/tempo/tree/master/util/dmxparse)
Parameters
----------
fitter
PINT fitter used to get timing residuals, must have already run a fit
save : bool or str or file-like object, optional
If not False or None, saves output to specified file in the format of the TEMPO version. If ``True``, assumes output file is ``dmxparse.out``
Returns
-------
dict :
        ``dmxs`` : mean-subtracted dmx values
``dmx_verrs`` : dmx variance errors
``dmxeps`` : center mjds of the dmx bins
``r1s`` : lower mjd bounds on the dmx bins
``r2s`` : upper mjd bounds on the dmx bins
``bins`` : dmx bins
``mean_dmx`` : mean dmx value
``avg_dm_err`` : uncertainty in average dmx
Raises
------
RuntimeError
If the model has no DMX parameters, or if there is a parsing problem
"""
# We get the DMX values, errors, and mjds (same as in getting the DMX values for DMX v. time)
# Get number of DMX epochs
try:
DMX_mapping = fitter.model.get_prefix_mapping("DMX_")
except ValueError as e:
raise RuntimeError("No DMX values in model!") from e
dmx_epochs = [f"{x:04d}" for x in DMX_mapping.keys()]
DMX_keys = list(DMX_mapping.values())
DMXs = np.zeros(len(dmx_epochs))
DMX_Errs = np.zeros(len(dmx_epochs))
DMX_R1 = np.zeros(len(dmx_epochs))
DMX_R2 = np.zeros(len(dmx_epochs))
mask_idxs = np.zeros(len(dmx_epochs), dtype=np.bool_)
# Get DMX values (will be in units of 10^-3 pc cm^-3)
for ii, epoch in enumerate(dmx_epochs):
DMXs[ii] = getattr(fitter.model, "DMX_{:}".format(epoch)).value
mask_idxs[ii] = getattr(fitter.model, "DMX_{:}".format(epoch)).frozen
DMX_Errs[ii] = getattr(fitter.model, "DMX_{:}".format(epoch)).uncertainty_value
DMX_R1[ii] = getattr(fitter.model, "DMXR1_{:}".format(epoch)).value
DMX_R2[ii] = getattr(fitter.model, "DMXR2_{:}".format(epoch)).value
DMX_center_MJD = (DMX_R1 + DMX_R2) / 2
# If any value need to be masked, do it
    if np.any(mask_idxs):
log.warning(
"Some DMX bins were not fit for, masking these bins for computation."
)
DMX_Errs = np.ma.array(DMX_Errs, mask=mask_idxs)
DMX_keys_ma = np.ma.array(DMX_keys, mask=mask_idxs)
else:
DMX_keys_ma = None
# Make sure that the fitter has a covariance matrix, otherwise return the initial values
if hasattr(fitter, "parameter_covariance_matrix"):
# now get the full parameter covariance matrix from pint
# access by label name to make sure we get the right values
# make sure they are sorted in ascending order
cc = fitter.parameter_covariance_matrix.get_label_matrix(
sorted([f"DMX_{x}" for x in dmx_epochs])
)
n = len(DMX_Errs) - np.sum(mask_idxs)
# Find error in mean DM
DMX_mean = np.mean(DMXs)
DMX_mean_err = np.sqrt(cc.matrix.sum()) / float(n)
# Do the correction for varying DM
m = np.identity(n) - np.ones((n, n)) / float(n)
cc = np.dot(np.dot(m, cc.matrix), m)
DMX_vErrs = np.zeros(n)
# We also need to correct for the units here
for i in range(n):
DMX_vErrs[i] = np.sqrt(cc[i, i])
# If array was masked, we need to add values back in where they were masked
if DMX_keys_ma is not None:
            # Restore NaN placeholders at the masked (unfit) positions so the
            # returned arrays line up with the full set of DMX epochs
            full_vErrs = np.full(len(dmx_epochs), np.nan)
            full_vErrs[~mask_idxs] = DMX_vErrs
            DMX_vErrs = full_vErrs
else:
log.warning(
"Fitter does not have covariance matrix, returning values from model"
)
DMX_mean = np.mean(DMXs)
DMX_mean_err = np.mean(DMX_Errs)
DMX_vErrs = DMX_Errs
# Check we have the right number of params
if len(DMXs) != len(DMX_Errs) or len(DMXs) != len(DMX_vErrs):
raise RuntimeError("Number of DMX entries do not match!")
    # Output the results
    if save:
if isinstance(save, bool):
save = "dmxparse.out"
lines = [
f"# Mean DMX value = {DMX_mean:+.6e} \n",
f"# Uncertainty in average DM = {DMX_mean_err:.5e} \n",
            "# Columns: DMXEP DMX_value DMX_var_err DMXR1 DMXR2 DMX_bin \n",
]
lines.extend(
f"{DMX_center_MJD[k]:.4f} {DMXs[k] - DMX_mean:+.7e} {DMX_vErrs[k]:.3e} {DMX_R1[k]:.4f} {DMX_R2[k]:.4f} {DMX_keys[k]} \n"
for k in range(len(dmx_epochs))
)
with open_or_use(save, mode="w") as dmxout:
dmxout.writelines(lines)
if isinstance(save, (str, Path)):
log.debug(f"Wrote dmxparse output to '{save}'")
# return the new mean subtracted values
mean_sub_DMXs = DMXs - DMX_mean
# Get units to multiply returned arrays by
DMX_units = getattr(fitter.model, "DMX_{:}".format(dmx_epochs[0])).units
DMXR_units = getattr(fitter.model, "DMXR1_{:}".format(dmx_epochs[0])).units
return {
"dmxs": mean_sub_DMXs * DMX_units,
"dmx_verrs": DMX_vErrs * DMX_units,
"dmxeps": DMX_center_MJD * DMXR_units,
"r1s": DMX_R1 * DMXR_units,
"r2s": DMX_R2 * DMXR_units,
"bins": DMX_keys,
"mean_dmx": DMX_mean * DMX_units,
"avg_dm_err": DMX_mean_err * DMX_units,
}
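The variance correction in `dmxparse` applies the mean-subtraction projection m = I - J/n (J the all-ones matrix) to the parameter covariance. A small NumPy check with an arbitrary synthetic covariance (the random matrix and seed are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
a = rng.normal(size=(n, n))
cov = a @ a.T  # a valid (symmetric, positive semi-definite) covariance

# Error on the mean: sqrt of the sum of all covariance entries, divided by n
mean_err = np.sqrt(cov.sum()) / n

# Covariance of the mean-subtracted values: m C m with m = I - 11^T/n
m = np.eye(n) - np.ones((n, n)) / n
cov_ms = m @ cov @ m
verrs = np.sqrt(np.diag(cov_ms))
```

Because m annihilates the all-ones vector, the mean-subtracted covariance has (numerically) zero total sum: uncertainty in the overall mean has been projected out, leaving only the variance of the deviations.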
def get_prefix_timerange(
model: "pint.models.TimingModel", prefixname: str
) -> Tuple[Time, ...]:
"""Get time range for a prefix quantity like DMX or SWX
Parameters
----------
model: pint.models.timing_model.TimingModel
prefixname : str
Something like ``DMX_0001`` or ``SWX_0005``
Returns
-------
tuple
Each element is astropy.time.Time
Example
-------
    To make a DMX bin match the time range of an SWX bin, you can do:
    >>> r1, r2 = get_prefix_timerange(m, "SWX_0002")
    >>> m.add_DMX_range(r1.mjd, r2.mjd, index=1, frozen=False)
    which sets ``DMX_0001`` to cover the same time range as ``SWX_0002``
"""
prefix, index, indexnum = split_prefixed_name(prefixname)
r1 = prefix.replace("_", "R1_") + index
r2 = prefix.replace("_", "R2_") + index
return getattr(model, r1).quantity, getattr(model, r2).quantity
def get_prefix_timeranges(
model: "pint.models.TimingModel", prefixname: str
) -> Tuple[np.ndarray, Time, Time]:
"""Get all time ranges and indices for a prefix quantity like DMX or SWX
Parameters
----------
model: pint.models.timing_model.TimingModel
prefixname : str
Something like ``DMX`` or ``SWX`` (no trailing ``_``)
Returns
-------
indices : np.ndarray
starts : astropy.time.Time
ends : astropy.time.Time
"""
if prefixname.endswith("_"):
prefixname = prefixname[:-1]
prefix_mapping = model.get_prefix_mapping(f"{prefixname}_")
r1 = np.zeros(len(prefix_mapping))
r2 = np.zeros(len(prefix_mapping))
indices = np.zeros(len(prefix_mapping), dtype=np.int32)
for j, index in enumerate(prefix_mapping.keys()):
if (
getattr(model, f"{prefixname}R1_{index:04d}").quantity is not None
and getattr(model, f"{prefixname}R2_{index:04d}").quantity is not None
):
r1[j] = getattr(model, f"{prefixname}R1_{index:04d}").quantity.mjd
r2[j] = getattr(model, f"{prefixname}R2_{index:04d}").quantity.mjd
indices[j] = index
return (
indices,
Time(r1, format="pulsar_mjd"),
Time(r2, format="pulsar_mjd"),
)
def find_prefix_bytime(
model: "pint.models.TimingModel", prefixname: str, t: Union[float, Time, u.Quantity]
) -> Union[int, np.ndarray]:
"""Identify matching index(es) for a prefix parameter like DMX
Parameters
----------
model: pint.models.timing_model.TimingModel
prefixname : str
Something like ``DMX`` or ``SWX`` (no trailing ``_``)
t : astropy.time.Time or float or astropy.units.Quantity
If not :class:`astropy.time.Time`, then MJD is assumed
Returns
-------
int or np.ndarray
Index or indices that match
"""
if not isinstance(t, Time):
t = Time(t, format="pulsar_mjd")
indices, r1, r2 = get_prefix_timeranges(model, prefixname)
matches = np.where((t >= r1) & (t < r2))[0]
    if len(matches) == 1:
        matches = int(matches[0])
return indices[matches]
def merge_dmx(
model: "pint.models.TimingModel",
index1: int,
index2: int,
value: Literal["first", "second", "mean"] = "mean",
frozen: bool = True,
) -> int:
"""Merge two DMX bins
Parameters
----------
model: pint.models.timing_model.TimingModel
index1: int
index2 : int
value : str, optional
One of "first", "second", "mean". Determines value of new bin
frozen : bool, optional
Returns
-------
int
New DMX index
"""
assert value.lower() in ["first", "second", "mean"]
tstart1, tend1 = get_prefix_timerange(model, f"DMX_{index1:04d}")
tstart2, tend2 = get_prefix_timerange(model, f"DMX_{index2:04d}")
tstart = min([tstart1, tstart2])
tend = max([tend1, tend2])
intervening_indices = find_prefix_bytime(model, "DMX", (tstart.mjd + tend.mjd) / 2)
if len(np.setdiff1d(intervening_indices, [index1, index2])) > 0:
for k in np.setdiff1d(intervening_indices, [index1, index2]):
log.warning(
f"Attempting to merge DMX_{index1:04d} and DMX_{index2:04d}, but DMX_{k:04d} is in between"
)
if value.lower() == "first":
dmx = getattr(model, f"DMX_{index1:04d}").quantity
    elif value.lower() == "second":
dmx = getattr(model, f"DMX_{index2:04d}").quantity
elif value.lower() == "mean":
dmx = (
getattr(model, f"DMX_{index1:04d}").quantity
+ getattr(model, f"DMX_{index2:04d}").quantity
) / 2
# add the new one before we delete previous ones to make sure we have >=1 present
newindex = model.add_DMX_range(tstart, tend, dmx=dmx, frozen=frozen)
model.remove_DMX_range([index1, index2])
return newindex
def split_dmx(model: "pint.models.TimingModel", time: Time) -> Tuple[int, int]:
"""
Split an existing DMX bin at the desired time
Parameters
----------
model : pint.models.timing_model.TimingModel
time : astropy.time.Time
Returns
-------
index : int
Index of existing bin that was split
newindex : int
Index of new bin that was added
"""
try:
DMX_mapping = model.get_prefix_mapping("DMX_")
    except ValueError as e:
        raise RuntimeError("No DMX values in model!") from e
dmx_epochs = [f"{x:04d}" for x in DMX_mapping.keys()]
DMX_R1 = np.zeros(len(dmx_epochs))
DMX_R2 = np.zeros(len(dmx_epochs))
for ii, epoch in enumerate(dmx_epochs):
DMX_R1[ii] = getattr(model, "DMXR1_{:}".format(epoch)).value
DMX_R2[ii] = getattr(model, "DMXR2_{:}".format(epoch)).value
ii = np.where((time.mjd > DMX_R1) & (time.mjd < DMX_R2))[0]
if len(ii) == 0:
raise ValueError(f"Time {time} not in any DMX bins")
ii = ii[0]
index = int(dmx_epochs[ii])
    t2 = DMX_R2[ii]
getattr(model, f"DMXR2_{index:04d}").value = time.mjd
newindex = model.add_DMX_range(
time.mjd,
t2,
dmx=getattr(model, f"DMX_{index:04d}").quantity,
frozen=getattr(model, f"DMX_{index:04d}").frozen,
)
return index, newindex
def split_swx(model: "pint.models.TimingModel", time: Time) -> Tuple[int, int]:
"""
Split an existing SWX bin at the desired time
Parameters
----------
model : pint.models.timing_model.TimingModel
time : astropy.time.Time
Returns
-------
index : int
Index of existing bin that was split
newindex : int
Index of new bin that was added
"""
try:
SWX_mapping = model.get_prefix_mapping("SWXDM_")
    except ValueError as e:
        raise RuntimeError("No SWX values in model!") from e
swx_epochs = [f"{x:04d}" for x in SWX_mapping.keys()]
SWX_R1 = np.zeros(len(swx_epochs))
SWX_R2 = np.zeros(len(swx_epochs))
for ii, epoch in enumerate(swx_epochs):
SWX_R1[ii] = getattr(model, "SWXR1_{:}".format(epoch)).value
SWX_R2[ii] = getattr(model, "SWXR2_{:}".format(epoch)).value
ii = np.where((time.mjd > SWX_R1) & (time.mjd < SWX_R2))[0]
if len(ii) == 0:
raise ValueError(f"Time {time} not in any SWX bins")
ii = ii[0]
index = int(swx_epochs[ii])
    t2 = SWX_R2[ii]
getattr(model, f"SWXR2_{index:04d}").value = time.mjd
newindex = model.add_swx_range(
time.mjd,
t2,
swxdm=getattr(model, f"SWXDM_{index:04d}").quantity,
frozen=getattr(model, f"SWXDM_{index:04d}").frozen,
)
return index, newindex
def wavex_setup(
model: "pint.models.TimingModel",
T_span: Union[float, u.Quantity],
freqs: Optional[Iterable[Union[float, u.Quantity]]] = None,
n_freqs: Optional[int] = None,
freeze_params: bool = False,
) -> List[int]:
"""
Set-up a WaveX model based on either an array of user-provided frequencies or the wave number
frequency calculation. Sine and Cosine amplitudes are initially set to zero
User specifies T_span and either freqs or n_freqs. This function assumes that the timing model does not already
have any WaveX components. See add_wavex_component() or add_wavex_components() to add WaveX components
to an existing WaveX model.
Parameters
----------
model : pint.models.timing_model.TimingModel
T_span : float, astropy.quantity.Quantity
    Time span used to calculate the Nyquist frequency when using freqs
    Time span used to calculate WaveX frequencies when using n_freqs
    Usually set to the length of the timing baseline the model is being used for
freqs : iterable of float or astropy.quantity.Quantity, None
    User-provided base frequencies
n_freqs : int, None
    Number of wave frequencies to calculate using the equation: freq_n = n / T_span
    where n is the wave number, and T_span is the total time span of the TOAs in the fitter object
freeze_params : bool, optional
Whether the new parameters should be frozen
Returns
-------
indices : list
Indices that have been assigned to new WaveX components
"""
from pint.models.wavex import WaveX
if (freqs is None) and (n_freqs is None):
raise ValueError(
"WaveX component base frequencies are not specified. "
"Please input either freqs or n_freqs"
)
if (freqs is not None) and (n_freqs is not None):
raise ValueError(
"Both freqs and n_freqs are specified. Only one or the other should be used"
)
if n_freqs is not None and n_freqs <= 0:
raise ValueError("Must use a non-zero number of wave frequencies")
model.add_component(WaveX())
if isinstance(T_span, u.quantity.Quantity):
    T_span = T_span.to(u.d)
else:
    T_span *= u.d
nyquist_freq = 1.0 / (2.0 * T_span)
if freqs is not None:
    if isinstance(freqs, u.quantity.Quantity):
        freqs = freqs.to(u.d**-1)
    else:
        freqs = np.asarray(freqs) * u.d**-1
    if freqs.isscalar or len(freqs) == 1:
        model.WXFREQ_0001.quantity = freqs
    else:
        freqs = np.sort(freqs)
        if min(np.diff(freqs)) < nyquist_freq:
            warnings.warn(
                "Wave frequency spacing is finer than frequency resolution of data"
            )
        model.WXFREQ_0001.quantity = freqs[0]
        model.components["WaveX"].add_wavex_components(freqs[1:])
if n_freqs is not None:
if n_freqs == 1:
wave_freq = 1 / T_span
model.WXFREQ_0001.quantity = wave_freq
else:
wave_numbers = np.arange(1, n_freqs + 1)
wave_freqs = wave_numbers / T_span
model.WXFREQ_0001.quantity = wave_freqs[0]
model.components["WaveX"].add_wavex_components(wave_freqs[1:])
for p in model.params:
if p.startswith("WXSIN") or p.startswith("WXCOS"):
model[p].frozen = freeze_params
return model.components["WaveX"].get_indices()
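The `n_freqs` branch above amounts to building a harmonic grid `freq_n = n / T_span` and comparing its spacing against the `1 / (2 * T_span)` resolution of the data. A standalone numpy sketch (`wave_number_freqs` is a hypothetical helper, not PINT API):

```python
import numpy as np

def wave_number_freqs(n_freqs, t_span_days):
    # Harmonic grid freq_n = n / T_span for wave numbers n = 1..n_freqs, in 1/day
    return np.arange(1, n_freqs + 1) / t_span_days

t_span = 1000.0                       # days of timing baseline
freqs = wave_number_freqs(4, t_span)  # [0.001, 0.002, 0.003, 0.004] / day
resolution = 1.0 / (2.0 * t_span)     # finest frequency spacing the data resolve
assert np.all(np.diff(freqs) >= resolution)
```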
def dmwavex_setup(
model: "pint.models.TimingModel",
T_span: Union[float, u.Quantity],
freqs: Optional[Iterable[Union[float, u.Quantity]]] = None,
n_freqs: Optional[int] = None,
freeze_params: bool = False,
) -> List[int]:
"""
Set-up a DMWaveX model based on either an array of user-provided frequencies or the wave number
frequency calculation. Sine and Cosine amplitudes are initially set to zero
User specifies T_span and either freqs or n_freqs. This function assumes that the timing model does not already
have any DMWaveX components. See add_dmwavex_component() or add_dmwavex_components() to add components
to an existing DMWaveX model.
Parameters
----------
model : pint.models.timing_model.TimingModel
T_span : float, astropy.quantity.Quantity
    Time span used to calculate the Nyquist frequency when using freqs
    Time span used to calculate DMWaveX frequencies when using n_freqs
    Usually set to the length of the timing baseline the model is being used for
freqs : iterable of float or astropy.quantity.Quantity, None
    User-provided base frequencies
n_freqs : int, None
    Number of wave frequencies to calculate using the equation: freq_n = n / T_span
    where n is the wave number, and T_span is the total time span of the TOAs in the fitter object
freeze_params : bool, optional
Whether the new parameters should be frozen
Returns
-------
indices : list
    Indices that have been assigned to new DMWaveX components
"""
from pint.models.dmwavex import DMWaveX
if (freqs is None) and (n_freqs is None):
raise ValueError(
"DMWaveX component base frequencies are not specified. "
"Please input either freqs or n_freqs"
)
if (freqs is not None) and (n_freqs is not None):
raise ValueError(
"Both freqs and n_freqs are specified. Only one or the other should be used"
)
if n_freqs is not None and n_freqs <= 0:
raise ValueError("Must use a non-zero number of wave frequencies")
model.add_component(DMWaveX())
if isinstance(T_span, u.quantity.Quantity):
    T_span = T_span.to(u.d)
else:
    T_span *= u.d
nyquist_freq = 1.0 / (2.0 * T_span)
if freqs is not None:
    if isinstance(freqs, u.quantity.Quantity):
        freqs = freqs.to(u.d**-1)
    else:
        freqs = np.asarray(freqs) * u.d**-1
    if freqs.isscalar or len(freqs) == 1:
        model.DMWXFREQ_0001.quantity = freqs
    else:
        freqs = np.sort(freqs)
        if min(np.diff(freqs)) < nyquist_freq:
            warnings.warn(
                "DMWaveX frequency spacing is finer than frequency resolution of data"
            )
        model.DMWXFREQ_0001.quantity = freqs[0]
        model.components["DMWaveX"].add_dmwavex_components(freqs[1:])
if n_freqs is not None:
if n_freqs == 1:
wave_freq = 1 / T_span
model.DMWXFREQ_0001.quantity = wave_freq
else:
wave_numbers = np.arange(1, n_freqs + 1)
wave_freqs = wave_numbers / T_span
model.DMWXFREQ_0001.quantity = wave_freqs[0]
model.components["DMWaveX"].add_dmwavex_components(wave_freqs[1:])
for p in model.params:
if p.startswith("DMWXSIN") or p.startswith("DMWXCOS"):
model[p].frozen = freeze_params
return model.components["DMWaveX"].get_indices()
def cmwavex_setup(
model: "pint.models.TimingModel",
T_span: Union[float, u.Quantity],
freqs: Optional[Iterable[Union[float, u.Quantity]]] = None,
n_freqs: Optional[int] = None,
freeze_params: bool = False,
) -> List[int]:
"""
Set-up a CMWaveX model based on either an array of user-provided frequencies or the wave number
frequency calculation. Sine and Cosine amplitudes are initially set to zero
User specifies T_span and either freqs or n_freqs. This function assumes that the timing model does not already
have any CMWaveX components. See add_cmwavex_component() or add_cmwavex_components() to add components
to an existing CMWaveX model.
Parameters
----------
model : pint.models.timing_model.TimingModel
T_span : float, astropy.quantity.Quantity
    Time span used to calculate the Nyquist frequency when using freqs
    Time span used to calculate CMWaveX frequencies when using n_freqs
    Usually set to the length of the timing baseline the model is being used for
freqs : iterable of float or astropy.quantity.Quantity, None
    User-provided base frequencies
n_freqs : int, None
    Number of wave frequencies to calculate using the equation: freq_n = n / T_span
    where n is the wave number, and T_span is the total time span of the TOAs in the fitter object
freeze_params : bool, optional
Whether the new parameters should be frozen
Returns
-------
indices : list
    Indices that have been assigned to new CMWaveX components
"""
from pint.models.cmwavex import CMWaveX
if (freqs is None) and (n_freqs is None):
raise ValueError(
"CMWaveX component base frequencies are not specified. "
"Please input either freqs or n_freqs"
)
if (freqs is not None) and (n_freqs is not None):
raise ValueError(
"Both freqs and n_freqs are specified. Only one or the other should be used"
)
if n_freqs is not None and n_freqs <= 0:
raise ValueError("Must use a non-zero number of wave frequencies")
model.add_component(CMWaveX())
if isinstance(T_span, u.quantity.Quantity):
    T_span = T_span.to(u.d)
else:
    T_span *= u.d
nyquist_freq = 1.0 / (2.0 * T_span)
if freqs is not None:
    if isinstance(freqs, u.quantity.Quantity):
        freqs = freqs.to(u.d**-1)
    else:
        freqs = np.asarray(freqs) * u.d**-1
    if freqs.isscalar or len(freqs) == 1:
        model.CMWXFREQ_0001.quantity = freqs
    else:
        freqs = np.sort(freqs)
        if min(np.diff(freqs)) < nyquist_freq:
            warnings.warn(
                "CMWaveX frequency spacing is finer than frequency resolution of data"
            )
        model.CMWXFREQ_0001.quantity = freqs[0]
        model.components["CMWaveX"].add_cmwavex_components(freqs[1:])
if n_freqs is not None:
if n_freqs == 1:
wave_freq = 1 / T_span
model.CMWXFREQ_0001.quantity = wave_freq
else:
wave_numbers = np.arange(1, n_freqs + 1)
wave_freqs = wave_numbers / T_span
model.CMWXFREQ_0001.quantity = wave_freqs[0]
model.components["CMWaveX"].add_cmwavex_components(wave_freqs[1:])
for p in model.params:
if p.startswith("CMWXSIN") or p.startswith("CMWXCOS"):
model[p].frozen = freeze_params
return model.components["CMWaveX"].get_indices()
def _translate_wave_freqs(om: Union[float, u.Quantity], k: int) -> u.Quantity:
"""
Use Wave model WAVEOM parameter to calculate a WaveX WXFREQ_ frequency parameter for wave number k
Parameters
----------
om : float or astropy.quantity.Quantity
    Base frequency of Wave model solution - parameter WAVEOM
    If a float is given, default units of rad/d are assigned
k : int
wave number to use to calculate WaveX WXFREQ_ frequency parameter
Returns
-------
astropy.units.Quantity
WXFREQ_ quantity in units 1/d that can be used in WaveX model
"""
om <<= u.rad / u.d
return (om * (k + 1)) / (2.0 * np.pi * u.rad)
def _translate_wavex_freqs(wxfreq: Union[float, u.Quantity], k: int) -> u.Quantity:
"""
Use WaveX model WXFREQ_ parameters and wave number k to calculate the Wave model WAVEOM frequency parameter.
Parameters
----------
wxfreq : float or astropy.quantity.Quantity
WaveX frequency from which the WAVEOM parameter will be calculated
If float is given default units of 1/d assigned
k : int
wave number to use to calculate Wave WAVEOM parameter
Returns
-------
astropy.units.Quantity
    WAVEOM quantity in units rad/d that can be used in Wave model
"""
wxfreq <<= u.d**-1
if wxfreq.isscalar or len(wxfreq) == 1:
return (2.0 * np.pi * u.rad * wxfreq) / (k + 1.0)
wave_om = [
((2.0 * np.pi * u.rad * wxfreq[i]) / (k[i] + 1.0)) for i in range(len(wxfreq))
]
return (
sum(wave_om) / len(wave_om)
if np.allclose(wave_om, wave_om[0], atol=1e-3)
else False
)
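The two helpers above are inverse mappings between Wave's angular WAVEOM (rad/d) and WaveX's cyclic WXFREQ_ (1/d). Stripped of astropy units, the round trip looks like this (plain-float sketch; these function names are hypothetical, not PINT API):

```python
import numpy as np

def waveom_to_wxfreq(om, k):
    # Wave -> WaveX: WXFREQ_000(k+1) = WAVEOM * (k + 1) / (2 * pi), in 1/d
    return om * (k + 1) / (2.0 * np.pi)

def wxfreq_to_waveom(wxfreq, k):
    # WaveX -> Wave: inverse mapping for a single frequency
    return 2.0 * np.pi * wxfreq / (k + 1)

om = 0.05                               # WAVEOM in rad/day
f1 = waveom_to_wxfreq(om, k=1)          # frequency of the second harmonic
assert np.isclose(wxfreq_to_waveom(f1, k=1), om)
```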
def translate_wave_to_wavex(
model: "pint.models.TimingModel",
) -> "pint.models.TimingModel":
"""
Go from a Wave model to a WaveX model
WaveX frequencies get calculated based on the Wave model WAVEOM parameter and the number of WAVE parameters.
WXFREQ_000k = [WAVEOM * (k+1)] / [2 * pi]
WaveX amplitudes are taken from the WAVE pair parameters
Parameters
----------
model : pint.models.timing_model.TimingModel
TimingModel containing a Wave model to be converted to a WaveX model
Returns
-------
pint.models.timing_model.TimingModel
New timing model with converted WaveX model included
"""
from pint.models.wavex import WaveX
new_model = deepcopy(model)
wave_names = [
f"WAVE{ii}" for ii in range(1, model.components["Wave"].num_wave_terms + 1)
]
wave_terms = [getattr(model.components["Wave"], name) for name in wave_names]
wave_om = model.components["Wave"].WAVE_OM.quantity
wave_epoch = model.components["Wave"].WAVEEPOCH.quantity
new_model.remove_component("Wave")
new_model.add_component(WaveX())
new_model.WXEPOCH.value = wave_epoch.value
for k, wave_term in enumerate(wave_terms):
wave_sin_amp, wave_cos_amp = wave_term.quantity
wavex_freq = _translate_wave_freqs(wave_om, k)
if k == 0:
new_model.WXFREQ_0001.value = wavex_freq.value
new_model.WXSIN_0001.value = -wave_sin_amp.value
new_model.WXCOS_0001.value = -wave_cos_amp.value
else:
new_model.components["WaveX"].add_wavex_component(
wavex_freq, wxsin=-wave_sin_amp, wxcos=-wave_cos_amp
)
return new_model
def get_wavex_freqs(
model: "pint.models.TimingModel",
index: Optional[Union[float, int, List, np.ndarray]] = None,
quantity: bool = False,
) -> List[Union[float, u.Quantity]]:
"""
Return the WaveX frequencies for a timing model.
If index is specified, returns the frequencies corresponding to the user-provided indices.
If index isn't specified, returns all WaveX frequencies in timing model
Parameters
----------
model : pint.models.timing_model.TimingModel
Timing model from which to return WaveX frequencies
index : float, int, list, np.ndarray, None
Number or list/array of numbers corresponding to WaveX frequencies to return
quantity : bool
If set to True, returns a list of astropy.quantity.Quantity rather than a list of prefixParameters
Returns
-------
List of WXFREQ_ parameters
"""
if index is None:
    freqs = model.components["WaveX"].get_prefix_mapping_component("WXFREQ_")
    if len(freqs) == 1:
        values = getattr(model.components["WaveX"], next(iter(freqs.values())))
    else:
        values = [
            getattr(model.components["WaveX"], param) for param in freqs.values()
        ]
elif isinstance(index, (int, float, np.int64)):
idx_rf = f"{int(index):04d}"
values = getattr(model.components["WaveX"], f"WXFREQ_{idx_rf}")
elif isinstance(index, (list, set, np.ndarray)):
idx_rf = [f"{int(idx):04d}" for idx in index]
values = [getattr(model.components["WaveX"], f"WXFREQ_{ind}") for ind in idx_rf]
else:
raise TypeError(
f"index must be a float, int, set, list, array, or None - not {type(index)}"
)
if quantity:
    if isinstance(values, list):
        values = [v.quantity for v in values]
    else:
        values = [values.quantity]
return values
def get_wavex_amps(
model: "pint.models.TimingModel",
index: Optional[Union[float, int, List, np.ndarray]] = None,
quantity: bool = False,
) -> List[Union[float, u.Quantity]]:
"""
Return the WaveX amplitudes for a timing model.
If index is specified, returns the sine/cosine amplitudes corresponding to the user-provided indices.
If index isn't specified, returns all WaveX sine/cosine amplitudes in timing model
Parameters
----------
model : pint.models.timing_model.TimingModel
Timing model from which to return WaveX frequencies
index : float, int, list, np.ndarray, None
Number or list/array of numbers corresponding to WaveX amplitudes to return
quantity : bool
If set to True, returns a list of tuples of astropy.quantity.Quantity rather than a list of prefixParameters tuples
Returns
-------
List of WXSIN_ and WXCOS_ parameters
"""
if index is None:
indices = (
model.components["WaveX"].get_prefix_mapping_component("WXSIN_").keys()
)
if len(indices) == 1:
    idx = next(iter(indices))
    values = getattr(
        model.components["WaveX"], f"WXSIN_{int(idx):04d}"
    ), getattr(model.components["WaveX"], f"WXCOS_{int(idx):04d}")
else:
values = [
(
getattr(model.components["WaveX"], f"WXSIN_{int(idx):04d}"),
getattr(model.components["WaveX"], f"WXCOS_{int(idx):04d}"),
)
for idx in indices
]
elif isinstance(index, (int, float, np.int64)):
idx_rf = f"{int(index):04d}"
values = getattr(model.components["WaveX"], f"WXSIN_{idx_rf}"), getattr(
model.components["WaveX"], f"WXCOS_{idx_rf}"
)
elif isinstance(index, (list, set, np.ndarray)):
idx_rf = [f"{int(idx):04d}" for idx in index]
values = [
(
getattr(model.components["WaveX"], f"WXSIN_{ind}"),
getattr(model.components["WaveX"], f"WXCOS_{ind}"),
)
for ind in idx_rf
]
else:
raise TypeError(
f"index must be a float, int, set, list, array, or None - not {type(index)}"
)
if quantity:
if isinstance(values, tuple):
values = tuple(v.quantity for v in values)
if isinstance(values, list):
values = [(v[0].quantity, v[1].quantity) for v in values]
return values
def translate_wavex_to_wave(
model: "pint.models.TimingModel",
) -> "pint.models.TimingModel":
"""
Go from a WaveX timing model to a Wave timing model.
WARNING: Not every WaveX model can be appropriately translated into a Wave model. This is dependent on the user's choice of frequencies in the WaveX model.
In order for a WaveX model to be able to be converted into a Wave model, every WaveX frequency must produce the same value of WAVEOM in the calculation:
WAVEOM = [2 * pi * WXFREQ_000k] / (k + 1)
Parameters
----------
model : pint.models.timing_model.TimingModel
TimingModel containing a WaveX model to be converted to a Wave model
Returns
-------
pint.models.timing_model.TimingModel
New timing model with converted Wave model included
"""
from pint.models.wave import Wave
new_model = deepcopy(model)
indices = model.components["WaveX"].get_indices()
wxfreqs = get_wavex_freqs(model, indices, quantity=True)
wave_om = _translate_wavex_freqs(wxfreqs, (indices - 1))
if wave_om is False:
raise ValueError(
"This WaveX model cannot be properly translated into a Wave model due to the WaveX frequencies not producing a consistent WAVEOM value"
)
wave_amps = get_wavex_amps(model, index=indices, quantity=True)
new_model.remove_component("WaveX")
new_model.add_component(Wave())
new_model.WAVEEPOCH.quantity = model.WXEPOCH.quantity
new_model.WAVE_OM.quantity = wave_om
new_model.WAVE1.quantity = tuple(w * -1.0 for w in wave_amps[0])
if len(indices) > 1:
    for i in range(1, len(indices)):
        wave_amps[i] = tuple(w * -1.0 for w in wave_amps[i])
new_model.components["Wave"].add_wave_component(
wave_amps[i], index=indices[i]
)
return new_model
def weighted_mean(
arrin: np.ndarray,
weights_in: np.ndarray,
inputmean: Optional[float] = None,
calcerr: bool = False,
sdev: bool = False,
) -> Tuple[float, ...]:
"""Compute weighted mean of input values
Calculate the weighted mean, error, and optionally standard deviation of
an input array. By default error is calculated assuming the weights are
1/err^2, but if you send calcerr=True this assumption is dropped and the
error is determined from the weighted scatter.
Parameters
----------
arrin : array
Array containing the numbers whose weighted mean is desired.
weights: array
A set of weights for each element in array. For measurements with
uncertainties, these should be 1/sigma^2.
inputmean: float, optional
    An input mean value to use instead of computing the weighted mean.
calcerr : bool, optional
Calculate the weighted error. By default the error is calculated as
1/sqrt( weights.sum() ). If calcerr=True it is calculated as
sqrt((w**2 * (arr-mean)**2).sum() )/weights.sum().
sdev : bool, optional
If True, also return the weighted standard deviation as a third
element in the tuple. Defaults to False.
Returns
-------
wmean, werr: tuple
A tuple of the weighted mean and error. If sdev=True the
tuple will also contain sdev: wmean,werr,wsdev
Notes
-----
Converted from IDL: 2006-10-23. Erin Sheldon, NYU
Copied from PRESTO to PINT : 2020-04-18
"""
arr = arrin
weights = weights_in
wtot = weights.sum()
# user has input a mean value
wmean = (weights * arr).sum() / wtot if inputmean is None else float(inputmean)
# how should error be calculated?
if calcerr:
werr2 = (weights**2 * (arr - wmean) ** 2).sum()
werr = np.sqrt(werr2) / wtot
else:
werr = 1.0 / np.sqrt(wtot)
# should output include the weighted standard deviation?
if sdev:
wvar = (weights * (arr - wmean) ** 2).sum() / wtot
wsdev = np.sqrt(wvar)
return wmean, werr, wsdev
else:
return wmean, werr
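As a quick sanity check of `weighted_mean`'s default behavior with inverse-variance weights, the same arithmetic in bare numpy:

```python
import numpy as np

vals = np.array([10.0, 12.0, 11.0])
errs = np.array([1.0, 2.0, 1.0])
w = 1.0 / errs**2                 # inverse-variance weights, as the docstring suggests

wtot = w.sum()
wmean = (w * vals).sum() / wtot   # what weighted_mean(vals, w)[0] computes
werr = 1.0 / np.sqrt(wtot)        # default error estimate (calcerr=False)
assert np.isclose(wmean, 32.0 / 3.0) and np.isclose(werr, 2.0 / 3.0)
```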
@u.quantity_input
def ELL1_check(
A1: u.cm, E: u.dimensionless_unscaled, TRES: u.us, NTOA: int, outstring=True
):
"""Check for validity of assumptions in ELL1 binary model
Checks whether the assumptions that allow ELL1 to be safely used are
satisfied. To work properly, we should have:
:math:`asini/c \\, e^4 \\ll {\\rm timing precision} / \\sqrt{N_{\\rm TOA}}`
or :math:`A1 E^4 \\ll TRES / \\sqrt{N_{\\rm TOA}}`
since the ELL1 model now includes terms up to O(E^3)
Parameters
----------
A1 : astropy.units.Quantity
Projected semi-major axis (aka ASINI) in `pint.ls`
E : astropy.units.Quantity (dimensionless)
Eccentricity
TRES : astropy.units.Quantity
RMS TOA uncertainty
NTOA : int
Number of TOAs in the fit
outstring : bool, optional
    If True, return a string summary instead of a boolean.
Returns
-------
bool or str
Returns True if ELL1 is safe to use, otherwise False.
If outstring is True then returns a string summary instead.
"""
lhs = A1 / const.c * E**4.0
rhs = TRES / np.sqrt(NTOA)
if outstring:
s = (
f"Checking applicability of ELL1 model -- \n"
f" Condition is asini/c * ecc**4 << timing precision / sqrt(# TOAs) to use ELL1\n"
f" asini/c * ecc**4 = {lhs.to(u.us):.3g} \n"
f" TRES / sqrt(# TOAs) = {rhs.to(u.us):.3g} \n"
)
if lhs * 50.0 < rhs:
if outstring:
s += " Should be fine.\n"
return s
return True
elif lhs * 5.0 < rhs:
if outstring:
s += " Should be OK, but not optimal.\n"
return s
return True
else:
if outstring:
s += " *** WARNING*** Should probably use BT or DD instead!\n"
return s
return False
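Stripped of astropy units, the ELL1 criterion is a single inequality between the neglected O(e^4) delay and the achievable timing precision. A plain-float sketch with hypothetical binary parameters:

```python
import numpy as np

# Hypothetical binary: asini/c = 10 s, e = 1e-3, 1 us RMS TOA error, 400 TOAs
asini_c, ecc, tres, ntoa = 10.0, 1.0e-3, 1.0e-6, 400

lhs = asini_c * ecc**4        # size of the O(e^4) terms ELL1 drops, in seconds
rhs = tres / np.sqrt(ntoa)    # achievable timing precision, in seconds
assert lhs * 50.0 < rhs       # the same margin ELL1_check calls "fine"
```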
def FTest(chi2_1: float, dof_1: int, chi2_2: float, dof_2: int) -> float:
"""Run F-test.
Compute an F-test to see if a model with extra parameters is
significant compared to a simpler model. The input values are the
(non-reduced) chi^2 values and the numbers of DOF for '1' the
original model and '2' for the new model (with more fit params).
The probability is computed exactly like Sherpa's F-test routine
(in Ciao) and is also described in the Wikipedia article on the
F-test: http://en.wikipedia.org/wiki/F-test
The returned value is the probability that the improvement in
chi2 is due to chance (i.e. a low probability means that the
new fit is quantitatively better, while a value near 1 means
that the new model should likely be rejected).
Parameters
-----------
chi2_1 : float
Chi-squared value of model with fewer parameters
dof_1 : int
Degrees of freedom of model with fewer parameters
chi2_2 : float
Chi-squared value of model with more parameters
dof_2 : int
Degrees of freedom of model with more parameters
Returns
--------
ft : float
F-test significance value for the model with the larger number of
components over the other.
"""
delta_chi2 = chi2_1 - chi2_2
if delta_chi2 > 0 and dof_1 != dof_2:
delta_dof = dof_1 - dof_2
new_redchi2 = chi2_2 / dof_2
F = float((delta_chi2 / delta_dof) / new_redchi2) # fdtr doesn't like float128
return fdtrc(delta_dof, dof_2, F)
elif dof_1 == dof_2:
log.warning("Models have equal degrees of freedom, cannot perform F-test.")
return np.nan
else:
log.warning(
"Chi^2 for Model 2 is larger than Chi^2 for Model 1, cannot perform F-test."
)
return 1.0
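A worked example of the statistic `FTest` computes, using the same `scipy.special.fdtrc` call; the chi-squared numbers are hypothetical:

```python
from scipy.special import fdtrc

# Hypothetical fits: base model chi2=120 (100 dof); adding 2 parameters gives chi2=100 (98 dof)
chi2_1, dof_1 = 120.0, 100
chi2_2, dof_2 = 100.0, 98

F = ((chi2_1 - chi2_2) / (dof_1 - dof_2)) / (chi2_2 / dof_2)
p = fdtrc(dof_1 - dof_2, dof_2, F)   # chance probability, as FTest returns
assert p < 0.01                      # the improvement is unlikely to be noise
```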
def add_dummy_distance(
c: coords.SkyCoord, distance: u.Quantity = 1 * u.kpc
) -> coords.SkyCoord:
"""Adds a dummy distance to a SkyCoord object for applying proper motion
Parameters
----------
c: astropy.coordinates.SkyCoord
current SkyCoord object without distance but with proper motion and obstime
distance: astropy.units.Quantity, optional
distance to supply
Returns
-------
cnew : astropy.coordinates.SkyCoord
new SkyCoord object with a distance attached
"""
# import here to avoid circular imports
import pint.pulsar_ecliptic
if c.frame.data.differentials == {}:
log.warning(
f"No proper motions available for {c}: returning coordinates unchanged"
)
return c
if isinstance(c.frame, coords.builtin_frames.icrs.ICRS):
return (
coords.SkyCoord(
ra=c.ra,
dec=c.dec,
pm_ra_cosdec=c.pm_ra_cosdec,
pm_dec=c.pm_dec,
obstime=c.obstime,
distance=distance,
frame=coords.ICRS,
)
if hasattr(c, "pm_ra_cosdec")
else coords.SkyCoord(
ra=c.ra,
dec=c.dec,
pm_ra_cosdec=c.pm_ra,
pm_dec=c.pm_dec,
obstime=c.obstime,
distance=distance,
frame=coords.ICRS,
)
)
elif isinstance(c.frame, coords.builtin_frames.galactic.Galactic):
return coords.SkyCoord(
l=c.l,
b=c.b,
pm_l_cosb=c.pm_l_cosb,
pm_b=c.pm_b,
obstime=c.obstime,
distance=distance,
frame=coords.Galactic,
)
elif isinstance(c.frame, pint.pulsar_ecliptic.PulsarEcliptic):
return coords.SkyCoord(
lon=c.lon,
lat=c.lat,
pm_lon_coslat=c.pm_lon_coslat,
pm_lat=c.pm_lat,
obstime=c.obstime,
distance=distance,
obliquity=c.obliquity,
frame=pint.pulsar_ecliptic.PulsarEcliptic,
)
else:
log.warning(
f"Do not know coordinate frame for {c}: returning coordinates unchanged"
)
return c
def remove_dummy_distance(c: coords.SkyCoord) -> coords.SkyCoord:
"""Removes a dummy distance from a SkyCoord object after applying proper motion
Parameters
----------
c: astropy.coordinates.SkyCoord
current SkyCoord object with distance and with proper motion and obstime
Returns
-------
cnew : astropy.coordinates.SkyCoord
new SkyCoord object with a distance removed
"""
# import here to avoid circular imports
import pint.pulsar_ecliptic
if c.frame.data.differentials == {}:
log.warning(
f"No proper motions available for {c}: returning coordinates unchanged"
)
return c
if isinstance(c.frame, coords.builtin_frames.icrs.ICRS):
return (
coords.SkyCoord(
ra=c.ra,
dec=c.dec,
pm_ra_cosdec=c.pm_ra_cosdec,
pm_dec=c.pm_dec,
obstime=c.obstime,
frame=coords.ICRS,
)
if hasattr(c, "pm_ra_cosdec")
else coords.SkyCoord(
ra=c.ra,
dec=c.dec,
pm_ra_cosdec=c.pm_ra,
pm_dec=c.pm_dec,
obstime=c.obstime,
frame=coords.ICRS,
)
)
elif isinstance(c.frame, coords.builtin_frames.galactic.Galactic):
return coords.SkyCoord(
l=c.l,
b=c.b,
pm_l_cosb=c.pm_l_cosb,
pm_b=c.pm_b,
obstime=c.obstime,
frame=coords.Galactic,
)
elif isinstance(c.frame, pint.pulsar_ecliptic.PulsarEcliptic):
return coords.SkyCoord(
lon=c.lon,
lat=c.lat,
pm_lon_coslat=c.pm_lon_coslat,
pm_lat=c.pm_lat,
obstime=c.obstime,
obliquity=c.obliquity,
frame=pint.pulsar_ecliptic.PulsarEcliptic,
)
else:
log.warning(
f"Do not know coordinate frame for {c}: returning coordinates unchanged"
)
return c
def info_string(
prefix_string: str = "# ", comment: Optional[str] = None, detailed: bool = False
) -> str:
"""Returns an informative string about the current state of PINT.
Adds:
* Creation date
* PINT version
* Username (given by the `gitpython`_ global configuration ``user.name``
if available, in addition to :func:`getpass.getuser`).
* Host (given by :func:`platform.node`)
* OS (given by :func:`platform.platform`)
* plus a user-supplied comment (if present).
Parameters
----------
prefix_string: str, default='# '
a string to be prefixed to the output (often to designate as a
comment or similar)
comment: str, optional
a free-form comment string to be included if present
detailed: bool, optional
Include detailed version info on dependencies.
Returns
-------
str
informative string
Examples
--------
>>> import pint.utils
>>> print(pint.utils.info_string(prefix_string="# ",comment="Example comment"))
# Created: 2021-07-21T09:39:45.606894
# PINT_version: 0.8.2+311.ge351099d
# User: David Kaplan (dlk)
# Host: margle-2.local
# OS: macOS-10.14.6-x86_64-i386-64bit
# Comment: Example comment
Multi-line comments are allowed:
>>> import pint.utils
>>> print(pint.utils.info_string(prefix_string="C ",
... comment="Example multi-line comment\\nAlso using a different comment character"))
C Created: 2021-07-21T09:40:34.172333
C PINT_version: 0.8.2+311.ge351099d
C User: David Kaplan (dlk)
C Host: margle-2.local
C OS: macOS-10.14.6-x86_64-i386-64bit
C Comment: Example multi-line comment
C Comment: Also using a different comment character
Full example of writing a par and tim file:
>>> from pint.models import get_model_and_toas
>>> # the locations of these may vary
>>> timfile = "tests/datafile/NGC6440E.tim"
>>> parfile = "tests/datafile/NGC6440E.par"
>>> m, t = get_model_and_toas(parfile, timfile)
>>> print(m.as_parfile(comment="Here is a comment on the par file"))
# Created: 2021-07-22T08:24:27.101479
# PINT_version: 0.8.2+439.ge81c9b11.dirty
# User: David Kaplan (dlk)
# Host: margle-2.local
# OS: macOS-10.14.6-x86_64-i386-64bit
# Comment: Here is a comment on the par file
PSR 1748-2021E
EPHEM DE421
CLK UTC(NIST)
...
>>> from pint.models import get_model_and_toas
>>> import io
>>> # the locations of these may vary
>>> timfile = "tests/datafile/NGC6440E.tim"
>>> parfile = "tests/datafile/NGC6440E.par"
>>> m, t = get_model_and_toas(parfile, timfile)
>>> f = io.StringIO(parfile)
>>> t.write_TOA_file(f, comment="Here is a comment on the tim file")
>>> f.seek(0)
>>> print(f.getvalue())
FORMAT 1
C Created: 2021-07-22T08:24:27.213529
C PINT_version: 0.8.2+439.ge81c9b11.dirty
C User: David Kaplan (dlk)
C Host: margle-2.local
C OS: macOS-10.14.6-x86_64-i386-64bit
C Comment: Here is a comment on the tim file
unk 1949.609000 53478.2858714192189005 21.710 gbt -format Princeton -ddm 0.0
unk 1949.609000 53483.2767051885165973 21.950 gbt -format Princeton -ddm 0.0
unk 1949.609000 53489.4683897879295023 29.950 gbt -format Princeton -ddm 0.0
....
Notes
-----
This can be called via :func:`~pint.toa.TOAs.write_TOA_file` on a :class:`~pint.toa.TOAs` object,
or :func:`~pint.models.timing_model.TimingModel.as_parfile` on a
:class:`~pint.models.timing_model.TimingModel` object.
.. _gitpython: https://gitpython.readthedocs.io/en/stable/
"""
# try to get the git user if defined
try:
import git
# user-level git config
c = git.GitConfigParser()
username = c.get_value("user", option="name") + f" ({getpass.getuser()})"
except (configparser.NoOptionError, configparser.NoSectionError, ImportError):
username = getpass.getuser()
info_dict = {
"Created": f"{datetime.datetime.now().isoformat()}",
"PINT_version": pint.__version__,
"User": username,
"Host": platform.node(),
"OS": platform.platform(),
"Python": sys.version,
}
if detailed:
from numpy import __version__ as numpy_version
from scipy import __version__ as scipy_version
from astropy import __version__ as astropy_version
from erfa import __version__ as erfa_version
from jplephem import __version__ as jpleph_version
from matplotlib import __version__ as matplotlib_version
from loguru import __version__ as loguru_version
from pint import __file__ as pint_file
info_dict.update(
{
"endian": sys.byteorder,
"numpy_version": numpy_version,
"numpy_longdouble_precision": np.dtype(np.longdouble).name,
"scipy_version": scipy_version,
"astropy_version": astropy_version,
"pyerfa_version": erfa_version,
"jplephem_version": jpleph_version,
"matplotlib_version": matplotlib_version,
"loguru_version": loguru_version,
"Python_prefix": sys.prefix,
"PINT_file": pint_file,
}
)
if "CONDA_PREFIX" in os.environ:
conda_prefix = os.environ["CONDA_PREFIX"]
info_dict.update(
{
"Environment": "conda",
"conda_prefix": conda_prefix,
}
)
elif "VIRTUAL_ENV" in os.environ:
venv_prefix = os.environ["VIRTUAL_ENV"]
info_dict.update(
{
"Environment": "virtualenv",
"virtualenv_prefix": venv_prefix,
}
)
s = "".join(f"{key}: {val}\n" for key, val in info_dict.items())
s = textwrap.dedent(s)
# remove blank lines
s = os.linesep.join([x for x in s.splitlines() if x])
if comment is not None:
if os.linesep in comment:
s += os.linesep + os.linesep.join(
[f"Comment: {x}" for x in comment.splitlines()]
)
else:
s += f"{os.linesep}Comment: {comment}"
if prefix_string is not None and prefix_string != "":
s = os.linesep.join([prefix_string + x for x in s.splitlines()])
return s
def list_parameters(class_: Optional[Type] = None) -> List[Dict[str, Union[str, List]]]:
"""List parameters understood by PINT.
Parameters
----------
class_: type, optional
If provided, produce a list of parameters understood by the Component type; if None,
return a list of parameters understood by all Components known to PINT.
Returns
-------
list of dict
Each entry is a dictionary describing one parameter. Dictionary values are all strings
or lists of strings, and will include at least "name", "classes", and "description".
"""
if class_ is not None:
from pint.models.parameter import (
boolParameter,
intParameter,
maskParameter,
prefixParameter,
strParameter,
)
result = []
inst = class_()
for p in inst.params:
pm = getattr(inst, p)
d = dict(
name=pm.name,
class_=f"{class_.__module__}.{class_.__name__}",
description=pm.description,
)
if pm.aliases:
d["aliases"] = [a for a in pm.aliases if a != pm.name]
if pm.units:
d["kind"] = pm.units.to_string()
if not d["kind"]:
d["kind"] = "number"
elif isinstance(pm, boolParameter):
d["kind"] = "boolean"
elif isinstance(pm, strParameter):
d["kind"] = "string"
elif isinstance(pm, intParameter):
d["kind"] = "integer"
if isinstance(pm, prefixParameter):
d["name"] = pm.prefix + "{number}"
d["aliases"] = [a + "{number}" for a in pm.prefix_aliases]
if isinstance(pm, maskParameter):
d["name"] = pm.origin_name + " {flag} {value}"
d["aliases"] = [a + " {flag} {value}" for a in pm.prefix_aliases]
if "aliases" in d and not d["aliases"]:
del d["aliases"]
result.append(d)
return result
else:
import pint.models.timing_model
results = {}
ct = pint.models.timing_model.Component.component_types.copy()
ct["TimingModel"] = pint.models.timing_model.TimingModel
for v in ct.values():
for d in list_parameters(v):
n = d["name"]
class_ = d.pop("class_")
if n not in results:
d["classes"] = [class_]
results[n] = d
else:
r = results[n].copy()
r.pop("classes")
if r != d:
raise ValueError(
f"Parameter {d} in class {class_} does not match {results[n]}"
)
results[n]["classes"].append(class_)
return sorted(results.values(), key=lambda d: d["name"])
def colorize(
text: str,
fg_color: Optional[str] = None,
bg_color: Optional[str] = None,
attribute: Optional[str] = None,
) -> str:
"""Colorizes a string (including unicode strings) for printing on the terminal
For an example of usage, as well as a demonstration as to what the
attributes and colors look like, check out :func:`~pint.utils.print_color_examples`
Parameters
----------
text : string
The text to colorize. Can include unicode.
fg_color : str, optional
Foreground color name. The color names (fg or bg) are one of:
'black', 'red', 'green', 'yellow', 'blue', 'magenta', 'cyan',
or 'white'.
bg_color : str, optional
Background color name, by default None. Same choices as for `fg_color`.
attribute : str, optional
Text attribute, by default None. The text attributes are one of:
'normal', 'bold', 'subdued', 'italic', 'underscore', 'blink',
'reverse', or 'concealed'.
Returns
-------
string
The colorized string using the defined text attribute.
"""
COLOR_FORMAT = "\033[%dm\033[%d;%dm%s\033[0m"
FOREGROUND = dict(zip(COLOR_NAMES, list(range(30, 38))))
BACKGROUND = dict(zip(COLOR_NAMES, list(range(40, 48))))
ATTRIBUTE = dict(zip(TEXT_ATTRIBUTES, [0, 1, 2, 3, 4, 5, 7, 8]))
fg = FOREGROUND.get(fg_color, 39)
bg = BACKGROUND.get(bg_color, 49)
att = ATTRIBUTE.get(attribute, 0)
return COLOR_FORMAT % (att, bg, fg, text)
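As a minimal, self-contained sketch of the escape-sequence layout assembled above (assuming only a terminal that understands standard ANSI SGR codes — `tiny_colorize` is an illustrative stand-in, not part of the PINT API):

```python
def tiny_colorize(text, fg=31, bg=49, attr=1):
    # Same layout as COLOR_FORMAT above: attribute first, then
    # "background;foreground", the text, and a final reset so later
    # terminal output is unaffected.
    return "\033[%dm\033[%d;%dm%s\033[0m" % (attr, bg, fg, text)

s = tiny_colorize("warning")  # bold (1) red (31) on the default background (49)
```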
def print_color_examples() -> None:
"""Print example terminal colors and attributes for/using :func:`~pint.utils.colorize`"""
for att in TEXT_ATTRIBUTES:
for fg in COLOR_NAMES:
for bg in COLOR_NAMES:
print(
colorize(f"{fg:>8} {att:<11}", fg, bg_color=bg, attribute=att),
end="",
)
print("")
def group_iterator(items: np.ndarray) -> Iterator[Tuple]:
"""An iterator to step over identical items in a :class:`numpy.ndarray`
Example
-------
This will step over all of the observatories in the TOAs.
For each iteration it gives the observatory name and the indices that correspond to it:
>>> t = pint.toa.get_TOAs("grouptest.tim")
>>> for o, i in group_iterator(t["obs"]):
>>> print(f"{o} {i}")
"""
for item in np.unique(items):
yield item, np.where(items == item)[0]
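A quick self-contained illustration of the grouping behaviour (the iterator is re-declared locally so the snippet runs on its own):

```python
import numpy as np

def group_iterator(items):
    # Same logic as the function above, repeated so this snippet is standalone
    for item in np.unique(items):
        yield item, np.where(items == item)[0]

obs = np.array(["ao", "gbt", "ao", "vla", "gbt", "ao"])
groups = {name: idx.tolist() for name, idx in group_iterator(obs)}
# groups == {"ao": [0, 2, 5], "gbt": [1, 4], "vla": [3]}
```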
def compute_hash(filename: file_like) -> bytes:
"""Compute a unique hash of a file.
This is designed to keep around to detect changes, not to be
cryptographically robust. It uses the SHA256 algorithm, which
is known to be vulnerable to a length-extension attack.
Parameters
----------
filename : str or Path or file-like
The source of input. If file-like, it should return ``bytes`` not ``str`` -
that is, the file should be opened in binary mode.
Returns
-------
bytes
A cryptographic hash of the input.
"""
h = hashlib.sha256()
with open_or_use(filename, "rb") as f:
# Reading in larger chunks saves looping without using
# huge amounts of memory; and multiples of the hash
# function block size are more efficient.
blocks = 128
while block := f.read(blocks * h.block_size):
h.update(block)
return h.digest()
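The chunked loop above yields the same digest as hashing the data in one shot; a quick sanity check on a throwaway temporary file (nothing here is PINT-specific):

```python
import hashlib
import os
import tempfile

data = b"x" * (1024 * 1024 + 17)  # deliberately not a multiple of the block size
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(data)
    path = tmp.name

h = hashlib.sha256()
with open(path, "rb") as f:
    # Same chunking strategy as compute_hash(): multiples of the
    # hash block size, read until EOF.
    while block := f.read(128 * h.block_size):
        h.update(block)
os.remove(path)

chunked = h.digest()
whole = hashlib.sha256(data).digest()
```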
def get_conjunction(
coord: coords.SkyCoord,
t0: Time,
precision: Literal["low", "high"] = "low",
ecl: str = "IERS2010",
) -> Time:
"""
Find first time of Solar conjunction after t0 and approximate elongation at conjunction
Offers a low-precision version (based on analytic expression of Solar longitude)
Or a higher-precision version (based on interpolating :func:`astropy.coordinates.get_sun`)
Parameters
----------
coord : astropy.coordinates.SkyCoord
t0 : astropy.time.Time
precision : str, optional
"low" or "high" precision
ecl : str, optional
Obliquity for PulsarEcliptic coordinates
Returns
-------
astropy.time.Time
Time of conjunction
astropy.units.Quantity
Elongation at conjunction
"""
# import here to avoid circular import
import pint.pulsar_ecliptic
assert precision.lower() in ["low", "high"]
coord = coord.transform_to(pint.pulsar_ecliptic.PulsarEcliptic(ecl=ecl))
# low precision version
# use analytic form for Sun's ecliptic longitude
# and interpolate
tt = t0 + np.linspace(0, 365) * u.d
# Allen's Astrophysical Quantities
# Low precision solar coordinates (27.4.1)
# number of days since J2000
n = tt.jd - 2451545
# mean longitude of Sun, corrected for aberration
L = 280.460 * u.deg + 0.9854674 * u.deg * n
# Mean anomaly
g = 357.528 * u.deg + 0.9856003 * u.deg * n
# Ecliptic longitude
longitude = L + 1.915 * u.deg * np.sin(g) + 0.20 * u.deg * np.sin(2 * g)
dlongitude = longitude - coord.lon
dlongitude -= (dlongitude // (360 * u.deg)).max() * 360 * u.deg
conjunction = Time(np.interp(0, dlongitude.value, tt.mjd), format="mjd")
if precision.lower() == "low":
return conjunction, coord.lat
# do higher precision
# use astropy solar coordinates
# start with 10 days on either side of the low precision value
tt = conjunction + np.linspace(-10, 10) * u.d
csun = coords.get_sun(tt)
# this seems to be needed in old astropy
csun = coords.SkyCoord(ra=csun.ra, dec=csun.dec)
elongation = csun.separation(coord)
# get min value and interpolate with a quadratic fit
j = np.where(elongation == elongation.min())[0][0]
x = tt.mjd[j - 3 : j + 4]
y = elongation.value[j - 3 : j + 4]
f = np.polyfit(x, y, 2)
conjunction = Time(-f[1] / 2 / f[0], format="mjd")
csun = coords.get_sun(conjunction)
# this seems to be needed in old astropy
csun = coords.SkyCoord(ra=csun.ra, dec=csun.dec)
return conjunction, csun.separation(coord)
def divide_times(t: Time, t0: Time, offset: float = 0.5) -> np.ndarray:
"""
Divide input times into years relative to t0
Years are centered around the requested offset value
Parameters
----------
t : astropy.time.Time
t0 : astropy.time.Time
Reference time
offset : float, optional
Offset value for division. A value of 0.5 divides the results into intervals [-0.5,0.5].
Returns
-------
np.ndarray
Array of indices for division
Example
-------
Divide into years around each conjunction
>>> elongation = astropy.coordinates.get_sun(Time(t.get_mjds(), format="mjd")).separation(m.get_psr_coords())
>>> t0 = get_conjunction(m.get_psr_coords(), m.PEPOCH.quantity, precision="high")[0]
>>> indices = divide_times(Time(t.get_mjds(), format="mjd"), t0)
>>> plt.clf()
>>> for i in np.unique(indices):
plt.plot(t.get_mjds()[indices == i], elongation[indices == i].value, ".")
"""
dt = t - t0
values = (dt.to(u.yr).value + offset) // 1
return np.digitize(values, np.unique(values), right=True)
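The index arithmetic can be exercised with plain floats (elapsed years relative to the reference epoch) rather than astropy `Time` objects — this sketch reproduces only the binning step:

```python
import numpy as np

# Elapsed years relative to the reference epoch; offset=0.5 centres each
# year-long interval on the reference, i.e. bins are [-0.5, 0.5), [0.5, 1.5), ...
dt_years = np.array([-0.6, -0.2, 0.1, 0.4, 0.6, 1.4, 1.6])
offset = 0.5
values = (dt_years + offset) // 1
indices = np.digitize(values, np.unique(values), right=True)
# indices -> [0, 1, 1, 1, 2, 2, 3]: one label per year-long bucket
```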
def convert_dispersion_measure(
dm: u.Quantity, dmconst: Optional[u.Quantity] = None
) -> u.Quantity:
"""Convert dispersion measure to a different value of the DM constant.
Parameters
----------
dm : astropy.units.Quantity
DM measured according to the conventional value of the DM constant
dmconst : astropy.units.Quantity
Value of the DM constant. Default value is computed from CODATA physical
constants.
Returns
-------
dm : astropy.units.Quantity
DM measured according to the value of the DM constant computed from the
latest values of the physical constants
Notes
-----
See https://nanograv-pint.readthedocs.io/en/latest/explanation.html#dispersion-measure
for an explanation.
"""
if dmconst is None:
e = constants.e.si
eps0 = constants.eps0.si
c = constants.c.si
me = constants.m_e.si
dmconst = e**2 / (8 * np.pi**2 * c * eps0 * me)
return (dm * pint.DMconst / dmconst).to(pint.dmu)
def parse_time(
input: Union[float, Time, u.Quantity, int, str],
scale: str = "tdb",
precision: int = 9,
) -> Time:
"""Parse an :class:`astropy.time.Time` object from a range of input types
Parameters
----------
input : astropy.time.Time, astropy.units.Quantity, numpy.ndarray, float, int, str
Value to parse
scale : str, optional
Scale of time for conversion
precision : int, optional
Precision for time
Returns
-------
astropy.time.Time
"""
if isinstance(input, Time):
return input if input.scale == scale else getattr(input, scale)
elif isinstance(input, u.Quantity):
return Time(
input.to(u.d), format="pulsar_mjd", scale=scale, precision=precision
)
elif isinstance(input, (np.ndarray, float, int)):
return Time(input, format="pulsar_mjd", scale=scale, precision=precision)
elif isinstance(input, str):
return Time(input, format="pulsar_mjd_string", scale=scale, precision=precision)
else:
raise TypeError(f"Do not know how to parse times from {type(input)}")
def get_unit(parname: str) -> u.Unit:
"""Return the unit associated with a parameter
Handles normal parameters, along with aliases and indexed parameters
(e.g., `pint.models.parameter.prefixParameter`
and `pint.models.parameter.maskParameter`) with an index beyond those currently
initialized.
This can be used without an existing :class:`~pint.models.TimingModel`.
Parameters
----------
name : str
Name of PINT parameter or alias
Returns
-------
astropy.u.Unit
"""
# import in the function to avoid circular dependencies
from pint.models.timing_model import AllComponents
ac = AllComponents()
return ac.param_to_unit(parname)
def normalize_designmatrix(M, params):
"""Normalize each row of the design matrix.
This is used while computing the GLS chi2 and the GLS fitting step. The
normalized and unnormalized design matrices Mn and M are related by
M = Mn @ S
where S is a diagonal matrix containing the norms. This normalization is
OK because the GLS operations (fitting step, chi2 computation etc.) involve
the form
M @ (M.T @ N.inv() @ M).inv() @ M.T
and it is easy to see that the above expression doesn't change if we replace
M -> Mn.
Different parameters can have different units and numerically vastly different
design matrix entries. The normalization step forces the design matrix entries
to have similar numerical values and hence improves the numerical precision of
the matrix operations.
"""
from pint.fitter import DegeneracyWarning
norm = np.sqrt(np.sum(M**2, axis=0))
bad_params = [params[i] for i in np.where(norm == 0)[0]]
if len(bad_params) > 0 and params is not None:
warn(
f"Parameter degeneracy found in designmatrix! The offending parameters are {bad_params}.",
DegeneracyWarning,
)
norm[norm == 0] = 1
return M / norm, norm
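The invariance claimed above — M = Mn @ diag(norm) — is easy to verify numerically with a small random matrix (a standalone re-implementation of the normalization step, without the degeneracy warning):

```python
import numpy as np

rng = np.random.default_rng(0)
# Columns with wildly different scales, as happens for mixed-unit parameters
M = rng.normal(size=(8, 3)) * np.array([1.0, 1e6, 1e-6])

norm = np.sqrt(np.sum(M**2, axis=0))
Mn = M / norm

recon_ok = np.allclose(Mn @ np.diag(norm), M)                 # M = Mn @ S
unit_cols = np.allclose(np.sqrt(np.sum(Mn**2, axis=0)), 1.0)  # unit-norm columns
```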
def akaike_information_criterion(
model: "pint.models.timing_model.TimingModel", toas: "pint.toas.TOAs"
) -> float:
"""Compute the Akaike information criterion (AIC). The AIC is used for comparing different
models for the given dataset.
Given a model with best-fit parameters, the AIC is defined as
AIC = 2*k - 2*ln(L)
where k is the number of free parameters in the model and L is the maximum value of the likelihood
for the model.
Given n models with AIC values AIC1, ..., AICn, the preferred model is the one that minimizes the
AIC value.
If AIC_min is the minimum AIC value, then the i'th model can be said to be exp[AIC_min - AICi]
times as probable as the favored model in minimizing information loss.
See, e.g., Burnham & Anderson 2004 for further details.
Unlike the F-test (:func:`~pint.utils.FTest`), the AIC does not require the models to be nested.
See also :func:`~pint.utils.bayesian_information_criterion` for the Bayesian Information Criterion (BIC),
a similar quantity used for model comparison. The main practical difference between AIC and BIC is that the
BIC more heavily penalizes the number of free parameters.
Parameters
----------
model: pint.models.timing_model.TimingModel
The best-fit timing model
toas: pint.toas.TOAs
TOAs
Returns
-------
aic: float
The Akaike information criterion
"""
from pint.residuals import Residuals
if not toas.is_wideband():
k = (
len(model.free_params)
if "PhaseOffset" in model.components
else len(model.free_params) + 1
)
lnL = Residuals(toas, model).lnlikelihood()
return 2 * (k - lnL)
else:
raise NotImplementedError(
"akaike_information_criterion is not yet implemented for wideband data."
)
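A toy illustration of the definition (not using PINT objects): for unit-variance Gaussian errors, -2 ln L reduces to chi² plus a constant, so a model with one extra parameter wins on AIC whenever its chi² drop exceeds 2:

```python
import numpy as np

def gaussian_aic(resid, k):
    # AIC = 2k - 2 ln L; for unit-variance Gaussian errors,
    # -2 ln L = chi^2 + n * log(2 * pi)
    n = len(resid)
    return 2 * k + np.sum(resid**2) + n * np.log(2 * np.pi)

resid_simple = np.full(10, 1.0)   # chi^2 = 10.0 with k = 2 free parameters
resid_complex = np.full(10, 0.5)  # chi^2 =  2.5 with k = 3 free parameters
delta = gaussian_aic(resid_simple, 2) - gaussian_aic(resid_complex, 3)
prefer_complex = delta > 0  # chi^2 drop (7.5) outweighs the 2-point penalty
```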
def bayesian_information_criterion(
model: "pint.models.timing_model.TimingModel", toas: "pint.toas.TOAs"
) -> float:
"""Compute the Bayesian information criterion (BIC). The BIC is used for comparing different
models for the given dataset.
Given a model with best-fit parameters, the BIC is defined as
BIC = k*ln(N) - 2*ln(L)
where k is the number of free parameters in the model, N is the number of data points/samples,
and L is the maximum value of the likelihood for the model.
Given n models with BIC values BIC1, ..., BICn, the preferred model is the one that minimizes the
BIC value.
The BIC is an approximation for the Bayesian evidence. It is computed by Taylor-expanding the log-likelihood
function up to the second order in the vicinity of the maximum-likelihood point and assuming that the
prior distribution doesn't vary appreciably in this neighbourhood.
See, e.g., Burnham & Anderson 2004 for further details.
Unlike the F-test (:func:`~pint.utils.FTest`), the BIC does not require the models to be nested.
See also :func:`~pint.utils.akaike_information_criterion` for the Akaike Information Criterion (AIC),
a similar quantity used for model comparison. The main practical difference between AIC and BIC is that the
BIC more heavily penalizes the number of free parameters.
Parameters
----------
model: pint.models.timing_model.TimingModel
The best-fit timing model
toas: pint.toas.TOAs
TOAs
Returns
-------
bic: float
The Bayesian information criterion
"""
from pint.residuals import Residuals
if not toas.is_wideband():
k = (
len(model.free_params)
if "PhaseOffset" in model.components
else len(model.free_params) + 1
)
lnN = np.log(len(toas))
lnL = Residuals(toas, model).lnlikelihood()
return k * lnN - 2 * lnL
else:
raise NotImplementedError(
"bayesian_information_criterion is not yet implemented for wideband data."
)
def sherman_morrison_dot(
Ndiag: np.ndarray, v: np.ndarray, w: float, x: np.ndarray, y: np.ndarray
) -> Tuple[float, float]:
"""
Compute an inner product of the form
(x| C^-1 |y)
where
C = N + w |v)(v| ,
N is a diagonal matrix, and w is a positive real number,
using the Sherman-Morrison identity
C^-1 = N^-1 - ( w N^-1 |v)(v| N^-1 / (1 + w (v| N^-1 |v)) )
Additionally,
det[C] = det[N] * (1 + w (v| N^-1 |v))
Parameters
----------
Ndiag: array-like
Diagonal elements of the diagonal matrix N
v: array-like
A vector that represents a rank-1 update to N
w: float
Weight associated with the rank-1 update
x: array-like
Vector 1 for the inner product
y: array-like
Vector 2 for the inner product
Returns
-------
result: float
The inner product
logdetC: float
log-determinant of C
"""
Ninv = 1 / Ndiag
Ninv_v = Ninv * v
denom = 1 + w * np.dot(v, Ninv_v)
numer = w * np.dot(x, Ninv_v) * np.dot(y, Ninv_v)
result = np.dot(x, Ninv * y) - numer / denom
logdet_C = np.sum(np.log(Ndiag.to_value(u.s**2))) + np.log(
denom.to_value(u.dimensionless_unscaled)
)
return result, logdet_C
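The identity can be checked against a dense inverse for small dimensionless inputs (a unitless re-implementation, dropping the astropy units the function above attaches to the log-determinant):

```python
import numpy as np

rng = np.random.default_rng(1)
Ndiag = rng.uniform(1.0, 2.0, size=6)
v = rng.normal(size=6)
w = 0.7
x = rng.normal(size=6)
y = rng.normal(size=6)

# Sherman-Morrison evaluation, as in the function above
Ninv_v = v / Ndiag
denom = 1 + w * (v @ Ninv_v)
result = x @ (y / Ndiag) - w * (x @ Ninv_v) * (y @ Ninv_v) / denom
logdet = np.sum(np.log(Ndiag)) + np.log(denom)

# Brute-force reference against the dense matrix C = N + w |v)(v|
C = np.diag(Ndiag) + w * np.outer(v, v)
ref = x @ np.linalg.solve(C, y)
ref_logdet = np.linalg.slogdet(C)[1]
```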
def woodbury_dot(
Ndiag: np.ndarray, U: np.ndarray, Phidiag: np.ndarray, x: np.ndarray, y: np.ndarray
) -> Tuple[float, float]:
"""
Compute an inner product of the form
(x| C^-1 |y)
where
C = N + U Phi U^T ,
N and Phi are diagonal matrices, using the Woodbury
identity
C^-1 = N^-1 - N^-1 U Sigma^-1 U^T N^-1
where
Sigma = Phi^-1 + U^T N^-1 U
Additionally,
det[C] = det[N] * det[Phi] * det[Sigma]
Parameters
----------
Ndiag: array-like
Diagonal elements of the diagonal matrix N
U: array-like
A matrix that represents a rank-n update to N
Phidiag: array-like
Weights associated with the rank-n update
x: array-like
Vector 1 for the inner product
y: array-like
Vector 2 for the inner product
Returns
-------
result: float
The inner product
logdetC: float
log-determinant of C
"""
x_Ninv_y = np.sum(x * y / Ndiag)
x_Ninv_U = (x / Ndiag) @ U
y_Ninv_U = (y / Ndiag) @ U
Sigma = np.diag(1 / Phidiag) + (U.T / Ndiag) @ U
Sigma_cf = cho_factor(Sigma)
x_Cinv_y = x_Ninv_y - x_Ninv_U @ cho_solve(Sigma_cf, y_Ninv_U)
logdet_N = np.sum(np.log(Ndiag))
logdet_Phi = np.sum(np.log(Phidiag))
_, logdet_Sigma = np.linalg.slogdet(Sigma.astype(float))
logdet_C = logdet_N + logdet_Phi + logdet_Sigma
return x_Cinv_y, logdet_C
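As with the rank-1 case, the Woodbury result can be verified against a dense solve on small dimensionless inputs (avoiding the scipy Cholesky factorization for simplicity):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 7, 3
Ndiag = rng.uniform(1.0, 2.0, size=n)
U = rng.normal(size=(n, m))
Phidiag = rng.uniform(0.5, 1.5, size=m)
x = rng.normal(size=n)
y = rng.normal(size=n)

# Woodbury evaluation, mirroring the function above
Sigma = np.diag(1 / Phidiag) + (U.T / Ndiag) @ U
x_Ninv_U = (x / Ndiag) @ U
y_Ninv_U = (y / Ndiag) @ U
result = np.sum(x * y / Ndiag) - x_Ninv_U @ np.linalg.solve(Sigma, y_Ninv_U)
logdet = (np.sum(np.log(Ndiag)) + np.sum(np.log(Phidiag))
          + np.linalg.slogdet(Sigma)[1])

# Brute-force reference against the dense matrix C = N + U Phi U^T
C = np.diag(Ndiag) + U @ np.diag(Phidiag) @ U.T
ref = x @ np.linalg.solve(C, y)
ref_logdet = np.linalg.slogdet(C)[1]
```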
def _get_wx2pl_lnlike(
model: "pint.models.TimingModel", component_name: str, ignore_fyr: bool = True
) -> float:
from pint.models.noise_model import powerlaw
from pint import DMconst
assert component_name in {"WaveX", "DMWaveX", "CMWaveX"}
prefix_dict = {"WaveX": "WX", "DMWaveX": "DMWX", "CMWaveX": "CMWX"}
prefix = prefix_dict[component_name]
idxs = np.array(model.components[component_name].get_indices())
fs = np.array(
[model[f"{prefix}FREQ_{idx:04d}"].quantity.to_value(u.Hz) for idx in idxs]
)
f0 = np.min(fs)
fyr = (1 / u.year).to_value(u.Hz)
assert np.allclose(
np.diff(np.diff(fs)), 0
), "WaveX/DMWaveX/CMWaveX frequencies must be uniformly spaced for this conversion to work."
if ignore_fyr:
year_mask = np.abs(((fs - fyr) / f0)) > 0.5
idxs = idxs[year_mask]
fs = np.array(
[model[f"{prefix}FREQ_{idx:04d}"].quantity.to_value(u.Hz) for idx in idxs]
)
f0 = np.min(fs)
scaling_factor = (
1
if component_name == "WaveX"
else (
DMconst / (1400 * u.MHz) ** 2
if component_name == "DMWaveX"
else DMconst / 1400**model.TNCHROMIDX.value
)
)
a = np.array(
[
(scaling_factor * model[f"{prefix}SIN_{idx:04d}"].quantity).to_value(u.s)
for idx in idxs
]
)
da = np.array(
[
(scaling_factor * model[f"{prefix}SIN_{idx:04d}"].uncertainty).to_value(u.s)
for idx in idxs
]
)
b = np.array(
[
(scaling_factor * model[f"{prefix}COS_{idx:04d}"].quantity).to_value(u.s)
for idx in idxs
]
)
db = np.array(
[
(scaling_factor * model[f"{prefix}COS_{idx:04d}"].uncertainty).to_value(u.s)
for idx in idxs
]
)
def powl_model(params: Tuple[float, float]) -> float:
"""Get the powerlaw spectrum for the WaveX frequencies for a given
set of parameters. This calls the powerlaw function used by `PLRedNoise`/`PLDMNoise`/`PLChromNoise`.
"""
gamma, log10_A = params
return (powerlaw(fs, A=10**log10_A, gamma=gamma) * f0) ** 0.5
def mlnlike(params: Tuple[float, ...]) -> float:
"""Negative of the likelihood function that acts on the
`[DM/CM]WaveX` amplitudes."""
sigma = powl_model(params)
return 0.5 * float(
np.sum(
(a**2 / (sigma**2 + da**2))
+ (b**2 / (sigma**2 + db**2))
+ np.log(sigma**2 + da**2)
+ np.log(sigma**2 + db**2)
)
)
return mlnlike
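To see why minimizing this objective recovers the spectral parameters: with noiseless amplitudes a = b = sigma(f) and negligible uncertainties, each term sigma_true**2 / sigma**2 + log(sigma**2) is minimized exactly at sigma = sigma_true. A hedged sketch using a generic power-law stand-in (not PINT's `powerlaw` function) and a grid search over log10_A at the true gamma:

```python
import numpy as np

def toy_sigma(fs, log10_A, gamma):
    # Illustrative stand-in for (powerlaw(fs, A, gamma) * f0) ** 0.5;
    # only the overall 10**log10_A scaling matters for this check.
    return 10.0**log10_A * (fs / fs[0]) ** (-gamma / 2)

fs = np.arange(1, 21) * 1e-8          # uniformly spaced frequencies, Hz
gamma_true, log10_A_true = 4.0, -13.0
a = b = toy_sigma(fs, log10_A_true, gamma_true)   # noiseless amplitudes

def mlnlike(log10_A):
    # Same objective as above with da = db = 0
    sigma = toy_sigma(fs, log10_A, gamma_true)
    return 0.5 * np.sum(a**2 / sigma**2 + b**2 / sigma**2
                        + 2 * np.log(sigma**2))

grid = np.linspace(-14.0, -12.0, 201)
best = grid[np.argmin([mlnlike(g) for g in grid])]
```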
def plrednoise_from_wavex(
model: "pint.models.TimingModel", ignore_fyr: bool = True
) -> "pint.models.TimingModel":
"""Convert a `WaveX` representation of red noise to a `PLRedNoise`
representation. This is done by minimizing a likelihood function
that acts on the `WaveX` amplitudes over the powerlaw spectral
parameters.
Parameters
----------
model: pint.models.timing_model.TimingModel
The timing model with a `WaveX` component.
ignore_fyr: bool
Whether to ignore the frequency bin containing 1 yr^-1
while fitting for the spectral parameters.
Returns
-------
pint.models.timing_model.TimingModel
The timing model with a converted `PLRedNoise` component.
"""
from pint.models.noise_model import PLRedNoise
mlnlike = _get_wx2pl_lnlike(model, "WaveX", ignore_fyr=ignore_fyr)
result = minimize(mlnlike, [4, -13], method="Nelder-Mead")
if not result.success:
raise ValueError("Log-likelihood maximization failed to converge.")
gamma_val, log10_A_val = result.x
hess = Hessian(mlnlike)
gamma_err, log10_A_err = np.sqrt(
np.diag(np.linalg.pinv(hess((gamma_val, log10_A_val))))
)
tnredc = len(model.components["WaveX"].get_indices())
model1 = deepcopy(model)
model1.remove_component("WaveX")
model1.add_component(PLRedNoise())
model1.TNREDAMP.value = log10_A_val
model1.TNREDGAM.value = gamma_val
model1.TNREDC.value = tnredc
model1.TNREDAMP.uncertainty_value = log10_A_err
model1.TNREDGAM.uncertainty_value = gamma_err
return model1
def pldmnoise_from_dmwavex(
model: "pint.models.TimingModel", ignore_fyr: bool = False
) -> "pint.models.TimingModel":
"""Convert a `DMWaveX` representation of red noise to a `PLDMNoise`
representation. This is done by minimizing a likelihood function
that acts on the `DMWaveX` amplitudes over the powerlaw spectral
parameters.
Parameters
----------
model: pint.models.timing_model.TimingModel
The timing model with a `DMWaveX` component.
Returns
-------
pint.models.timing_model.TimingModel
The timing model with a converted `PLDMNoise` component.
"""
from pint.models.noise_model import PLDMNoise
mlnlike = _get_wx2pl_lnlike(model, "DMWaveX", ignore_fyr=ignore_fyr)
result = minimize(mlnlike, [4, -13], method="Nelder-Mead")
if not result.success:
raise ValueError("Log-likelihood maximization failed to converge.")
gamma_val, log10_A_val = result.x
hess = Hessian(mlnlike)
H = hess((gamma_val, log10_A_val))
assert np.all(np.linalg.eigvals(H) > 0), "The Hessian is not positive definite!"
Hinv = np.linalg.pinv(H)
assert np.all(
np.linalg.eigvals(Hinv) > 0
), "The inverse Hessian is not positive definite!"
gamma_err, log10_A_err = np.sqrt(np.diag(Hinv))
tndmc = len(model.components["DMWaveX"].get_indices())
model1 = deepcopy(model)
model1.remove_component("DMWaveX")
model1.add_component(PLDMNoise())
model1.TNDMAMP.value = log10_A_val
model1.TNDMGAM.value = gamma_val
model1.TNDMC.value = tndmc
model1.TNDMAMP.uncertainty_value = log10_A_err
model1.TNDMGAM.uncertainty_value = gamma_err
return model1
def plchromnoise_from_cmwavex(
model: "pint.models.TimingModel", ignore_fyr: bool = False
) -> "pint.models.TimingModel":
"""Convert a `CMWaveX` representation of red noise to a `PLChromNoise`
representation. This is done by minimizing a likelihood function
that acts on the `CMWaveX` amplitudes over the powerlaw spectral
parameters.
Parameters
----------
model: pint.models.timing_model.TimingModel
The timing model with a `CMWaveX` component.
Returns
-------
pint.models.timing_model.TimingModel
The timing model with a converted `PLChromNoise` component.
"""
from pint.models.noise_model import PLChromNoise
mlnlike = _get_wx2pl_lnlike(model, "CMWaveX", ignore_fyr=ignore_fyr)
result = minimize(mlnlike, [4, -13], method="Nelder-Mead")
if not result.success:
raise ValueError("Log-likelihood maximization failed to converge.")
gamma_val, log10_A_val = result.x
hess = Hessian(mlnlike)
H = hess((gamma_val, log10_A_val))
assert np.all(np.linalg.eigvals(H) > 0), "The Hessian is not positive definite!"
Hinv = np.linalg.pinv(H)
assert np.all(
np.linalg.eigvals(Hinv) > 0
), "The inverse Hessian is not positive definite!"
gamma_err, log10_A_err = np.sqrt(np.diag(Hinv))
tndmc = len(model.components["CMWaveX"].get_indices())
model1 = deepcopy(model)
model1.remove_component("CMWaveX")
model1.add_component(PLChromNoise())
model1.TNCHROMAMP.value = log10_A_val
model1.TNCHROMGAM.value = gamma_val
model1.TNCHROMC.value = tndmc
model1.TNCHROMAMP.uncertainty_value = log10_A_err
model1.TNCHROMGAM.uncertainty_value = gamma_err
return model1
def find_optimal_nharms(
model: "pint.models.TimingModel",
toas: "pint.toa.TOAs",
component: Literal["WaveX", "DMWaveX"],
nharms_max: int = 45,
) -> Tuple[int, np.ndarray]:
"""Find the optimal number of harmonics for `WaveX`/`DMWaveX` using the Akaike Information
Criterion.
Parameters
----------
model: `pint.models.timing_model.TimingModel`
The timing model. Should not already contain `WaveX`/`DMWaveX` or `PLRedNoise`/`PLDMNoise`.
toas: `pint.toa.TOAs`
Input TOAs
component: str
Component name; "WaveX" or "DMWaveX"
nharms_max: int
Maximum number of harmonics
Returns
-------
nharms_opt: int
Optimal number of harmonics
aics: ndarray
Array of normalized AIC values.
"""
from pint.fitter import Fitter
assert component in ["WaveX", "DMWaveX"]
assert (
component not in model.components
), f"{component} is already included in the model."
assert (
"PLRedNoise" not in model.components and "PLDMNoise" not in model.components
), "PLRedNoise/PLDMNoise cannot be included in the model."
model1 = deepcopy(model)
ftr = Fitter.auto(toas, model1, downhill=False)
ftr.fit_toas(maxiter=5)
aics = [akaike_information_criterion(model1, toas)]
model1 = ftr.model
T_span = toas.get_mjds().max() - toas.get_mjds().min()
setup_component = wavex_setup if component == "WaveX" else dmwavex_setup
setup_component(model1, T_span, n_freqs=1, freeze_params=False)
for _ in range(nharms_max):
ftr = Fitter.auto(toas, model1, downhill=False)
ftr.fit_toas(maxiter=5)
aics.append(akaike_information_criterion(ftr.model, toas))
model1 = ftr.model
if component == "WaveX":
model1.components[component].add_wavex_component(
(len(model1.components[component].get_indices()) + 1) / T_span,
frozen=False,
)
else:
model1.components[component].add_dmwavex_component(
(len(model1.components[component].get_indices()) + 1) / T_span,
frozen=False,
)
assert all(np.isfinite(aics)), "Infs/NaNs found in AICs!"
return np.argmin(aics), np.array(aics) - np.min(aics)
|
nanogravREPO_NAMEPINTPATH_START.@PINT_extracted@PINT-master@src@pint@utils.py@.PATH_END.py
|
{
"filename": "trackzone.md",
"repo_name": "ultralytics/ultralytics",
"repo_path": "ultralytics_extracted/ultralytics-main/docs/en/guides/trackzone.md",
"type": "Markdown"
}
|
---
comments: true
description: Discover how TrackZone leverages Ultralytics YOLO11 to precisely track objects within specific zones, enabling real-time insights for crowd analysis, surveillance, and targeted monitoring.
keywords: TrackZone, object tracking, YOLO11, Ultralytics, real-time object detection, AI, deep learning, crowd analysis, surveillance, zone-based tracking, resource optimization
---
# TrackZone using Ultralytics YOLO11
## What is TrackZone?
TrackZone specializes in monitoring objects within designated areas of a frame instead of the whole frame. Built on [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics/), it integrates object detection and tracking specifically within zones for videos and live camera feeds. YOLO11's advanced algorithms and [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) technologies make it a perfect choice for real-time use cases, offering precise and efficient object tracking in applications like crowd monitoring and surveillance.
## Advantages of Object Tracking in Zones (TrackZone)
- **Targeted Analysis:** Tracking objects within specific zones allows for more focused insights, enabling precise monitoring and analysis of areas of interest, such as entry points or restricted zones.
- **Improved Efficiency:** By narrowing the tracking scope to defined zones, TrackZone reduces computational overhead, ensuring faster processing and optimal performance.
- **Enhanced Security:** Zonal tracking improves surveillance by monitoring critical areas, aiding in the early detection of unusual activity or security breaches.
- **Scalable Solutions:** The ability to focus on specific zones makes TrackZone adaptable to various scenarios, from retail spaces to industrial settings, ensuring seamless integration and scalability.
## Real World Applications
| Agriculture | Transportation |
| :-----------------------------------------------------------------------------------------------------------------------------------------------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
|  |  |
| Plants Tracking in Field Using Ultralytics YOLO11 | Vehicles Tracking on Road using Ultralytics YOLO11 |
!!! example "TrackZone using YOLO11 Example"
=== "CLI"
```bash
# Run a trackzone example
yolo solutions trackzone show=True
# Pass a source video
yolo solutions trackzone show=True source="path/to/video/file.mp4"
# Pass region coordinates
yolo solutions trackzone show=True region=[(150, 150), (1130, 150), (1130, 570), (150, 570)]
```
=== "Python"
```python
import cv2
from ultralytics import solutions
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Define region points
region_points = [(150, 150), (1130, 150), (1130, 570), (150, 570)]
# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Init TrackZone (Object Tracking in Zones, not complete frame)
trackzone = solutions.TrackZone(
show=True, # Display the output
region=region_points, # Pass region points
model="yolo11n.pt",  # You can use any model that Ultralytics supports, e.g. YOLOv9, YOLOv10
# line_width=2, # Adjust the line width for bounding boxes and text display
# classes=[0, 2],  # If you want to count specific classes, e.g. person and car with a COCO-pretrained model
)
# Process video
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
im0 = trackzone.trackzone(im0)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
### Argument `TrackZone`
Here's a table with the `TrackZone` arguments:
| Name | Type | Default | Description |
| ------------ | ------ | ---------------------------------------------------- | ---------------------------------------------------- |
| `model` | `str` | `None` | Path to Ultralytics YOLO Model File |
| `region` | `list` | `[(150, 150), (1130, 150), (1130, 570), (150, 570)]` | List of points defining the object tracking region. |
| `line_width` | `int` | `2` | Line thickness for bounding boxes. |
| `show` | `bool` | `False` | Flag to control whether to display the video stream. |
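Under the hood, restricting tracking to a zone comes down to a point-in-polygon test on each detection's anchor point, which the solution performs internally (typically via OpenCV). The pure-Python ray-casting sketch below only illustrates the idea and is not part of the Ultralytics API:

```python
def point_in_zone(point, polygon):
    """Ray-casting test: is (x, y) inside the polygon of (x, y) vertices?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Toggle on each edge that a horizontal ray from (x, y) crosses
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

region_points = [(150, 150), (1130, 150), (1130, 570), (150, 570)]
print(point_in_zone((640, 360), region_points))  # centre of the zone -> True
print(point_in_zone((10, 10), region_points))    # outside the zone  -> False
```

A point inside the region crosses the boundary an odd number of times along the ray, so only those detections would be passed on to the tracker.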
### Arguments `model.track`
{% include "macros/track-args.md" %}
## FAQ
### How do I track objects in a specific area or zone of a video frame using Ultralytics YOLO11?
Tracking objects in a defined area or zone of a video frame is straightforward with Ultralytics YOLO11. Simply use the command provided below to initiate tracking. This approach ensures efficient analysis and accurate results, making it ideal for applications like surveillance, crowd management, or any scenario requiring zonal tracking.
```bash
yolo solutions trackzone source="path/to/video/file.mp4" show=True
```
### How can I use TrackZone in Python with Ultralytics YOLO11?
With just a few lines of code, you can set up object tracking in specific zones, making it easy to integrate into your projects.
```python
import cv2
from ultralytics import solutions
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))
# Define region points
region_points = [(150, 150), (1130, 150), (1130, 570), (150, 570)]
# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
# Init TrackZone (Object Tracking in Zones, not complete frame)
trackzone = solutions.TrackZone(
show=True, # Display the output
region=region_points, # Pass region points
model="yolo11n.pt",
)
# Process video
while cap.isOpened():
success, im0 = cap.read()
if not success:
print("Video frame is empty or video processing has been successfully completed.")
break
im0 = trackzone.trackzone(im0)
video_writer.write(im0)
cap.release()
video_writer.release()
cv2.destroyAllWindows()
```
### How do I configure the zone points for video processing using Ultralytics TrackZone?
Configuring zone points for video processing with Ultralytics TrackZone is simple and customizable. You can directly define and adjust the zones through a Python script, allowing precise control over the areas you want to monitor.
```python
# Define region points
region_points = [(150, 150), (1130, 150), (1130, 570), (150, 570)]
# Init TrackZone (Object Tracking in Zones, not complete frame)
trackzone = solutions.TrackZone(
show=True, # Display the output
region=region_points, # Pass region points
)
```
|
ultralyticsREPO_NAMEultralyticsPATH_START.@ultralytics_extracted@ultralytics-main@docs@en@guides@trackzone.md@.PATH_END.py
|
{
"filename": "modified_blackbody.py",
"repo_name": "aconley/mbb_emcee",
"repo_path": "mbb_emcee_extracted/mbb_emcee-master/mbb_emcee/modified_blackbody.py",
"type": "Python"
}
|
import math
import numpy
import scipy.optimize
from scipy.special import lambertw
from .utility import isiterable
import fnu
"""Modified blackbody and blackbody SEDs"""
__all__ = ["modified_blackbody", "blackbody"]
# Some constants
c = 299792458e6 # in microns
h = 6.6260693e-34 # J/s
k = 1.3806505e-23 # J/K
um_to_GHz = 299792458e-3
class blackbody(object):
"""A class representing a Blackbody"""
def __init__(self, T, fnorm, wavenorm=500.0):
"""Initializer
Parameters:
-----------
T : float
Temperature/(1+z) in K
fnorm : float
Normalization flux, in mJy
wavenorm : float
Wavelength of normalization flux, in microns (def: 500)
"""
self._T = float(T)
self._fnorm = float(fnorm)
self._wavenorm = float(wavenorm)
# Some constants -- eventually, replace these with
# astropy.constants, but that is in development, so hardwire
# for now
self._hcokt = h * c / (k * self._T)
self._xnorm = self._hcokt / self._wavenorm
self._normfac = self._fnorm * math.expm1(self._xnorm) / \
self._xnorm**3
@property
def T(self):
""" Get temperature / (1+z) in K"""
return self._T
@property
def fnorm(self):
""" Get normalization flux at wavenorm in mJy"""
return self._fnorm
@property
def wavenorm(self):
""" Get normalization flux wavelength in microns"""
return self._wavenorm
def __repr__(self):
retstr = "blackbody({:.2g}, {:.2g}, wavenorm={:.2g})"
return retstr.format(self._T, self._fnorm, self._wavenorm)
def __str__(self):
retstr = "blackbody(T: {:.2g} fnorm: {:.2g} wavenorm: {:.2g})"
return retstr.format(self._T, self._fnorm, self._wavenorm)
def f_nu(self, freq):
"""Evaluate blackbody at specified frequencies.
Parameters
----------
freq : array_like
Input frequencies, in GHz
Returns
-------
fnu : ndarray, or float if input scalar
The flux density in mJy
"""
        # Convert input to an ndarray of floats
        if not isiterable(freq):
            frequency = numpy.asarray([freq], dtype=float)
        else:
            frequency = numpy.asanyarray(freq, dtype=float)
hokt = h / (k * self._T)
# Convert wavelengths to x = h nu / k T
x = hokt * 1e9 * frequency # 1e9 to convert to Hz from GHz
return self._normfac * x**3.0 / numpy.expm1(x)
def __call__(self, wave):
        """ Evaluate blackbody at specified wavelengths
Parameters
----------
wave : array_like
Input wavelengths, in microns
Returns
-------
fnu : ndarray, or float if input scalar
The flux density in mJy
"""
        wviter = isiterable(wave)
        if wviter:
            wave = numpy.asanyarray(wave, dtype=float)
            # f_nu expects GHz; c / wave would give Hz
            return self.f_nu(um_to_GHz / wave)
        else:
            return self.f_nu(um_to_GHz / float(wave))
def alpha_merge_eqn(x, alpha, beta, x0, opthin=False):
"""Equation we need the root for to merge power law to modified
blackbody
Parameters
----------
x : float
h nu / k T to evaluate at
alpha : float
blue side power law index
beta : float
Dust attenuation power law index
x0 : float
h nu_0 / k T
opthin : bool
Assume optically thin case
"""
try:
# This can overflow badly
xox0beta = (x / x0)**beta
bterm = xox0beta / math.expm1(xox0beta)
except OverflowError:
# If xox0beta is very large, then the bterm is zero
bterm = 0.0
return x - (1.0 - math.exp(-x)) * (3.0 + alpha + beta * bterm)
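# Sanity-check sketch (illustrative; not part of the original module): in
# the optically thin case the merge point used below is the nonzero root
# of x = (1 - exp(-x)) * A with A = 3 + alpha + beta, which the class
# solves in closed form via the Lambert W function.  Newton's method
# recovers the same root numerically; the alpha/beta defaults here are
# example values only.
def _thin_merge_root(alpha=2.0, beta=1.8):
    a_const = 3.0 + alpha + beta
    x = a_const  # for large A the nonzero root lies close to A itself
    for _ in range(50):
        fval = x - (1.0 - math.exp(-x)) * a_const
        fprime = 1.0 - a_const * math.exp(-x)
        x -= fval / fprime
    return x, x - (1.0 - math.exp(-x)) * a_const  # (root, residual)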
class modified_blackbody(object):
"""A class representing a modified greybody
The form for the modified blackbody is
.. math::
        f_{\\nu} \\propto \\left(1 - \\exp\\left[ - \\left(\\nu /
        \\nu_0\\right)^{\\beta}\\right]\\right) B_{\\nu}\\left( \\nu ; T \\right)

    where :math:`B_{\\nu}` is the Planck blackbody function in frequency
units. Class instances are static.
"""
def __init__(self, T, beta, lambda0, alpha, fnorm, wavenorm=500.0,
noalpha=False, opthin=False):
"""Initializer
Parameters:
-----------
T : float
Temperature/(1+z) in K
beta : float
Extinction slope
lambda0 : float
Wavelength where emission becomes optically thick * (1+z), in
microns
alpha : float
Blue side power law slope
fnorm : float
Normalization flux, in mJy
wavenorm : float
Wavelength of normalization flux, in microns (def: 500)
noalpha : bool
Do not use blue side power law
opthin : bool
Assume emission is optically thin
"""
self._T = float(T)
self._beta = float(beta)
if bool(noalpha):
self._hasalpha = False
self._alpha = None
else:
self._hasalpha = True
self._alpha = float(alpha)
self._fnorm = float(fnorm)
self._wavenorm = float(wavenorm)
if bool(opthin):
self._opthin = True
self._lambda0 = None
else:
self._opthin = False
self._lambda0 = float(lambda0)
        if self._hasalpha and self._alpha <= 0.0:
            errmsg = "alpha must be positive. You gave: {:.5g}"
            raise ValueError(errmsg.format(self._alpha))
if self._beta < 0.0:
errmsg = "beta must be non-negative. You gave: {:.5g}"
raise ValueError(errmsg.format(self._beta))
# Some constants -- eventually, replace these with
# astropy.constants, but that is in development, so hardwire for now
self._hcokt = h * c / (k * self._T)
# Convert wavelengths to x = h nu / k T
if not self._opthin:
self._x0 = self._hcokt / lambda0
self._xnorm = self._hcokt / self._wavenorm
# Two cases -- optically thin and not.
# Each has two sub-cases -- with power law merge and without
if self._opthin:
if not self._hasalpha:
# No merge to power law, easy
self._normfac = self._fnorm * math.expm1(self._xnorm) / \
self._xnorm**(3.0 + beta)
else:
            # First, figure out the x (frequency) where the join
            # happens.  Frequencies above this (x > xmerge)
            # are on the blue, alpha power law side
# The equation we are trying to find the root for is:
# x - (1-exp(-x))*(3+alpha+beta)
# Amazingly, this has a special function solution
# A = (3+alpha+beta)
# xmerge = A + LambertW[ -A Exp[-A] ]
# This has a positive solution for all A > 1 -- and since
# we require alpha and beta > 0, this is always the case
a = 3.0 + self._alpha + self._beta
self._xmerge = a + lambertw(-a * math.exp(-a)).real
# Get merge constant -- note this is -before- flux
# normalization to allow for the case where wavenorm is
# on the power law part
self._kappa = self._xmerge**(3.0 + self._alpha +
self._beta) / \
math.expm1(self._xmerge)
# Compute normalization constant
if self._xnorm > self._xmerge:
self._normfac = self._fnorm * self._xnorm**self._alpha / \
self._kappa
else:
self._normfac = self._fnorm * math.expm1(self._xnorm) / \
self._xnorm**(3.0 + self._beta)
else:
#Optically thick case
if not self._hasalpha:
self._normfac = - self._fnorm * math.expm1(self._xnorm) / \
(math.expm1(-(self._xnorm / self._x0)**self._beta) *
self._xnorm**3)
else:
# This is harder, and does not have a special function
# solution. Hence, we have to do this numerically.
# The equation we need to find the root for is given by
# alpha_merge_eqn.
# First, we bracket. For positive alpha, beta
# we expect this to be negative for small a and positive
# for large a. We try to step out until we achieve that
maxiters = 100
a = 0.1
aval = alpha_merge_eqn(a, self._alpha, self._beta, self._x0)
iter = 0
while aval >= 0.0:
a /= 2.0
aval = alpha_merge_eqn(a, self._alpha,
self._beta, self._x0)
if iter > maxiters:
errmsg = "Couldn't bracket low alpha merge point for "\
"T: {:f} beta: {:f} lambda0: {:f} "\
"alpha {:f}, last a: {:f} value: {:f}"
raise ValueError(errmsg.format(self._T, self._beta,
self._lambda0,
self._alpha, a, aval))
iter += 1
b = 15.0
bval = alpha_merge_eqn(b, self._alpha, self._beta, self._x0)
iter = 0
while bval <= 0.0:
b *= 2.0
bval = alpha_merge_eqn(b, self._alpha, self._beta,
self._x0)
if iter > maxiters:
                        errmsg = "Couldn't bracket high alpha merge point "\
                                 "for T: {:f} beta: {:f} lambda0: {:f} "\
                                 "alpha {:f}, last b: {:f} value: {:f}"
                        raise ValueError(errmsg.format(self._T, self._beta,
                                                       self._lambda0,
                                                       self._alpha, b, bval))
iter += 1
# Now find root
args = (self._alpha, self._beta, self._x0)
self._xmerge = scipy.optimize.brentq(alpha_merge_eqn, a, b,
args=args, disp=True)
#Merge constant
# Note this will overflow and crash for large xmerge, alpha
self._kappa = - self._xmerge**(3 + self._alpha) * \
math.expm1(-(self._xmerge / self._x0)**self._beta) / \
math.expm1(self._xmerge)
#Normalization factor
if self._xnorm > self._xmerge:
self._normfac = self._fnorm * self._xnorm**self._alpha / \
self._kappa
else:
expmfac = math.expm1(-(self._xnorm / self._x0)**self._beta)
self._normfac = -self._fnorm * math.expm1(self._xnorm) / \
(self._xnorm**3 * expmfac)
@property
def T(self):
""" Get temperature / (1+z) in K"""
return self._T
@property
def beta(self):
""" Get Beta"""
return self._beta
@property
def lambda0(self):
""" Get lambda_0 (1+z) in microns"""
if self._opthin:
return None
return self._lambda0
@property
def alpha(self):
""" Get alpha"""
if not self._hasalpha:
return None
return self._alpha
@property
def fnorm(self):
""" Get normalization flux at wavenorm in mJy"""
return self._fnorm
@property
def wavenorm(self):
""" Get normalization flux wavelength in microns"""
return self._wavenorm
@property
def has_alpha(self):
""" Does this modified_blackbody use a blue side power law?"""
return self._hasalpha
@property
def optically_thin(self):
""" Does this modified_blackbody assume it is optically thin?"""
return self._opthin
@property
def wavemerge(self):
"""Get the merge wavelength in microns"""
if not self._hasalpha:
return None
else:
return self._hcokt / self._xmerge
def __repr__(self):
if self._hasalpha:
if self._opthin:
retstr = "modified_blackbody({:.2g}, {:.2g}, None, {:.2g}," + \
"{:.2g}, opthin=True, wavenorm={:.2g})"
return retstr.format(self._T, self._beta, self._alpha,
self._fnorm, self._wavenorm)
else:
retstr = "modified_blackbody({:.2g}, {:.2g}, {:.2g}," + \
" {:.2g}, {:.2g}, wavenorm={:.2g})"
return retstr.format(self._T, self._beta, self.lambda0,
self.alpha, self._fnorm, self._wavenorm)
else:
if self._opthin:
retstr = "modified_blackbody({:.2g}, {:.2g}, None, None, " + \
"{:.2g}, noalpha=True, opthin=True, wavenorm={:.2g})"
return retstr.format(self._T, self._beta, self._fnorm,
self._wavenorm)
else:
retstr = "modified_blackbody({:.2g}, {:.2g}, {:.2g}, None," + \
" {:.2g}, noalpha=True, wavenorm={:.2g})"
return retstr.format(self._T, self._beta, self.lambda0,
self._fnorm, self._wavenorm)
def __str__(self):
if self._hasalpha:
if self._opthin:
retstr = "modified_blackbody(T: {:.2g} beta: {:.2g} " + \
"alpha: {:.2g} fnorm: {:.2g} wavenorm: {:.2g})"
return retstr.format(self._T, self._beta, self._alpha,
self._fnorm, self._wavenorm)
else:
retstr = "modified_blackbody(T: {:.2g} beta: {:.2g} " + \
"lambda0: {:.2g} alpha: {:.2g} fnorm: {:.2g} " + \
"wavenorm: {:.2g})"
return retstr.format(self._T, self._beta, self.lambda0,
self._alpha, self._fnorm,
self._wavenorm)
else:
if self._opthin:
retstr = "modified_blackbody(T: {:.2g} beta: {:.2g} " +\
"fnorm: {:.2g} wavenorm: {:.2g})"
return retstr.format(self._T, self._beta, self._fnorm,
self._wavenorm)
else:
retstr = "modified_blackbody(T: {:.2g} beta: {:.2g} " + \
"lambda0: {:.2g} fnorm: {:.2g} wavenorm: {:.2g})"
return retstr.format(self._T, self._beta, self.lambda0,
self._fnorm, self._wavenorm)
def f_nu(self, freq):
        """Evaluate modified blackbody at specified frequencies.
Parameters
----------
freq : array_like
Input frequencies, in GHz
Returns
-------
fnu : ndarray, or float if input scalar
The flux density in mJy
"""
        # Convert input to an ndarray of floats
        if not isiterable(freq):
            frequency = numpy.asarray([freq], dtype=float)
        else:
            frequency = numpy.asanyarray(freq, dtype=float)
hokt = h / (k * self._T)
# Convert wavelengths to x = h nu / k T
x = hokt * 1e9 * frequency # 1e9 to convert to Hz from GHz
# Two cases -- optically thin and not.
# Each has two sub-cases -- with power law merge and without
if self._opthin:
if not self._hasalpha:
retval = self._normfac * x**(3.0 + self._beta) / numpy.expm1(x)
else:
retval = numpy.zeros_like(frequency)
ispower = x > self._xmerge
retval[ispower] = self._kappa * x[ispower]**(-self._alpha)
retval[~ispower] = x[~ispower]**(3.0 + self._beta) / \
numpy.expm1(x[~ispower])
retval *= self._normfac
else:
if not self._hasalpha:
retval = - self._normfac * \
numpy.expm1(-(x / self._x0)**self._beta) * x**3 / \
numpy.expm1(x)
else:
retval = numpy.zeros_like(frequency)
ispower = x > self._xmerge
retval[ispower] = self._kappa * x[ispower]**(-self._alpha)
retval[~ispower] = \
- numpy.expm1(-(x[~ispower]/self._x0)**self._beta) * \
x[~ispower]**3/numpy.expm1(x[~ispower])
retval *= self._normfac
return retval
def _f_nu_c(self, freq):
        """Evaluate modified blackbody at specified frequencies.
This internal version makes various assumptions about freq
Parameters
----------
freq : array_like
Input frequencies, in GHz
Returns
-------
fnu : ndarray, or float if input scalar
The flux density in mJy
"""
# Convert to some form of numarray
if not isiterable(freq):
frequency = numpy.asarray([freq], dtype=numpy.float64)
else:
frequency = numpy.asanyarray(freq, dtype=numpy.float64)
if self._opthin:
if not self._hasalpha:
retval = fnu.fnueval_thin_noalpha(frequency, self._T,
self._beta, self._normfac)
else:
retval = fnu.fnueval_thin_walpha(frequency, self._T,
self._beta, self._alpha,
self._normfac, self._xmerge,
self._kappa)
else:
if not self._hasalpha:
retval = fnu.fnueval_thick_noalpha(frequency, self._T,
self._beta, self._x0,
self._normfac)
else:
retval = fnu.fnueval_thick_walpha(frequency, self._T,
self._beta, self._x0,
self._alpha, self._normfac,
self._xmerge, self._kappa)
return retval
def __call__(self, wave):
""" Evaluate modified blackbody at specified wavelengths
Parameters
----------
wave : array_like
Input wavelengths, in microns
Returns
-------
fnu : ndarray, or float if input scalar
The flux density in mJy
"""
wviter = isiterable(wave)
if wviter:
return self._f_nu_c(um_to_GHz /
numpy.asanyarray(wave, dtype=numpy.float64))
else:
return self.f_nu(um_to_GHz / float(wave))
def _snudev(self, x):
""" Evaluates derivative (modulo normalization) of S_nu at x
x = h nu / k T (scalar)
Ignores alpha side, since that should be rising -- so the peak
should lie at lower frequency than the merge to the alpha law"""
if self._opthin:
efac = math.expm1(x)
return x**(2.0 + self._beta) * (3.0 + self._beta) / efac - \
math.exp(x) * x**(3.0 + self._beta) / efac**2
else:
efac = math.expm1(x)
xx0 = x / self._x0
try:
xx0b = xx0**self._beta
ebfac = - math.expm1(-xx0b)
return 3 * x**2 * ebfac / efac - \
math.exp(x) * x**3 * ebfac / efac**2 + \
self._beta * x**3 * math.exp(-xx0b) * xx0b / (x * efac)
except OverflowError:
# (x/x0)**beta is too large, which simplifies the expression
return 3 * x**2 / efac - math.exp(x) * x**3 / efac**2
def max_wave(self):
""" Get the wavelength of maximum emission in f_nu units.
Returns
-------
wave : float
The wavelength of the maximum in microns
"""
# Note that the alpha portion is ignored, since we
# require alpha to be positive. That means that
# the power law part should be rising where it joins
# the modified blackbody part, and therefore it should
# not affect anything
from scipy.optimize import brentq
# Start with an expression for the maximum of a normal
# blackbody. We work in x = h nu / k T
xmax_bb = 2.82144
numax_bb = xmax_bb * k * self._T / h
if (self._opthin and self._beta == 0):
# This is just a blackbody, so easy cakes
return c / numax_bb
# Now, bracket the root in the derivative.
# At low x (low frequency) the derivative should be positive
a = xmax_bb / 2.0
aval = self._snudev(a)
maxiters = 20
iter = 0
while aval <= 0.0:
if iter > maxiters:
errmsg = "Couldn't bracket maximum from low frequency side"
raise Exception(errmsg)
a /= 2.0
aval = self._snudev(a)
iter += 1
# And the derivative should be negative at high frequencies
b = xmax_bb * 2.0
bval = self._snudev(b)
iter = 0
while bval >= 0.0:
if iter > maxiters:
errmsg = "Couldn't bracket maximum from high frequency side"
raise Exception(errmsg)
b *= 2.0
bval = self._snudev(b)
iter += 1
# Now find the actual root
xmax = brentq(self._snudev, a, b, disp=True)
# Convert to more conventional units
numax = xmax * k * self._T / h
return c / numax
def freq_integrate(self, minwave, maxwave):
"""Integrate f_nu over specified wavelength range
Parameters:
-----------
minwave : float
            Minimum wavelength, in microns
maxwave : float
Maximum wavelength, in microns
Returns
-------
fint : float
The integral in erg/s/cm^2
"""
from scipy.integrate import quad
minwave = float(minwave)
maxwave = float(maxwave)
if minwave <= 0.0:
raise ValueError("Minimum wavelength must be > 0.0")
if minwave > maxwave:
minwave, maxwave = maxwave, minwave
# Tricky thing -- we are integrating over frequency (in GHz),
# not wavelength
minfreq = um_to_GHz / maxwave
maxfreq = um_to_GHz / minwave
fint = quad(self.f_nu, minfreq, maxfreq)[0]
# Integral comes back in mJy-GHz, convert to erg/s/cm^2
return 1e-17 * fint
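# Usage sketch (illustrative; appended for reference, not part of the
# original module): for a pure blackbody -- the opthin, beta = 0 branch
# of max_wave() above -- the peak reduces to the f_nu form of Wien's
# law, lambda_max = h * c / (2.82144 * k * T), with c in microns/s.
def _fnu_wien_peak(T):
    # peak wavelength, in microns, of B_nu for temperature T in K
    return 6.6260693e-34 * 299792458e6 / (2.82144 * 1.3806505e-23 * T)
# e.g. a 35 K blackbody peaks near 146 microns in f_nu units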
|
aconleyREPO_NAMEmbb_emceePATH_START.@mbb_emcee_extracted@mbb_emcee-master@mbb_emcee@modified_blackbody.py@.PATH_END.py
|
{
"filename": "mac_arabic.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/tools/python3/Lib/encodings/mac_arabic.py",
"type": "Python"
}
|
""" Python Character Mapping Codec generated from 'VENDORS/APPLE/ARABIC.TXT' with gencodec.py.
"""#"
import codecs
### Codec APIs
class Codec(codecs.Codec):
def encode(self,input,errors='strict'):
return codecs.charmap_encode(input,errors,encoding_map)
def decode(self,input,errors='strict'):
return codecs.charmap_decode(input,errors,decoding_table)
class IncrementalEncoder(codecs.IncrementalEncoder):
def encode(self, input, final=False):
return codecs.charmap_encode(input,self.errors,encoding_map)[0]
class IncrementalDecoder(codecs.IncrementalDecoder):
def decode(self, input, final=False):
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
class StreamWriter(Codec,codecs.StreamWriter):
pass
class StreamReader(Codec,codecs.StreamReader):
pass
### encodings module API
def getregentry():
return codecs.CodecInfo(
name='mac-arabic',
encode=Codec().encode,
decode=Codec().decode,
incrementalencoder=IncrementalEncoder,
incrementaldecoder=IncrementalDecoder,
streamreader=StreamReader,
streamwriter=StreamWriter,
)
### Decoding Map
decoding_map = codecs.make_identity_dict(range(256))
decoding_map.update({
0x0080: 0x00c4, # LATIN CAPITAL LETTER A WITH DIAERESIS
0x0081: 0x00a0, # NO-BREAK SPACE, right-left
0x0082: 0x00c7, # LATIN CAPITAL LETTER C WITH CEDILLA
0x0083: 0x00c9, # LATIN CAPITAL LETTER E WITH ACUTE
0x0084: 0x00d1, # LATIN CAPITAL LETTER N WITH TILDE
0x0085: 0x00d6, # LATIN CAPITAL LETTER O WITH DIAERESIS
0x0086: 0x00dc, # LATIN CAPITAL LETTER U WITH DIAERESIS
0x0087: 0x00e1, # LATIN SMALL LETTER A WITH ACUTE
0x0088: 0x00e0, # LATIN SMALL LETTER A WITH GRAVE
0x0089: 0x00e2, # LATIN SMALL LETTER A WITH CIRCUMFLEX
0x008a: 0x00e4, # LATIN SMALL LETTER A WITH DIAERESIS
0x008b: 0x06ba, # ARABIC LETTER NOON GHUNNA
0x008c: 0x00ab, # LEFT-POINTING DOUBLE ANGLE QUOTATION MARK, right-left
0x008d: 0x00e7, # LATIN SMALL LETTER C WITH CEDILLA
0x008e: 0x00e9, # LATIN SMALL LETTER E WITH ACUTE
0x008f: 0x00e8, # LATIN SMALL LETTER E WITH GRAVE
0x0090: 0x00ea, # LATIN SMALL LETTER E WITH CIRCUMFLEX
0x0091: 0x00eb, # LATIN SMALL LETTER E WITH DIAERESIS
0x0092: 0x00ed, # LATIN SMALL LETTER I WITH ACUTE
0x0093: 0x2026, # HORIZONTAL ELLIPSIS, right-left
0x0094: 0x00ee, # LATIN SMALL LETTER I WITH CIRCUMFLEX
0x0095: 0x00ef, # LATIN SMALL LETTER I WITH DIAERESIS
0x0096: 0x00f1, # LATIN SMALL LETTER N WITH TILDE
0x0097: 0x00f3, # LATIN SMALL LETTER O WITH ACUTE
0x0098: 0x00bb, # RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK, right-left
0x0099: 0x00f4, # LATIN SMALL LETTER O WITH CIRCUMFLEX
0x009a: 0x00f6, # LATIN SMALL LETTER O WITH DIAERESIS
0x009b: 0x00f7, # DIVISION SIGN, right-left
0x009c: 0x00fa, # LATIN SMALL LETTER U WITH ACUTE
0x009d: 0x00f9, # LATIN SMALL LETTER U WITH GRAVE
0x009e: 0x00fb, # LATIN SMALL LETTER U WITH CIRCUMFLEX
0x009f: 0x00fc, # LATIN SMALL LETTER U WITH DIAERESIS
0x00a0: 0x0020, # SPACE, right-left
0x00a1: 0x0021, # EXCLAMATION MARK, right-left
0x00a2: 0x0022, # QUOTATION MARK, right-left
0x00a3: 0x0023, # NUMBER SIGN, right-left
0x00a4: 0x0024, # DOLLAR SIGN, right-left
0x00a5: 0x066a, # ARABIC PERCENT SIGN
0x00a6: 0x0026, # AMPERSAND, right-left
0x00a7: 0x0027, # APOSTROPHE, right-left
0x00a8: 0x0028, # LEFT PARENTHESIS, right-left
0x00a9: 0x0029, # RIGHT PARENTHESIS, right-left
0x00aa: 0x002a, # ASTERISK, right-left
0x00ab: 0x002b, # PLUS SIGN, right-left
0x00ac: 0x060c, # ARABIC COMMA
0x00ad: 0x002d, # HYPHEN-MINUS, right-left
0x00ae: 0x002e, # FULL STOP, right-left
0x00af: 0x002f, # SOLIDUS, right-left
0x00b0: 0x0660, # ARABIC-INDIC DIGIT ZERO, right-left (need override)
0x00b1: 0x0661, # ARABIC-INDIC DIGIT ONE, right-left (need override)
0x00b2: 0x0662, # ARABIC-INDIC DIGIT TWO, right-left (need override)
0x00b3: 0x0663, # ARABIC-INDIC DIGIT THREE, right-left (need override)
0x00b4: 0x0664, # ARABIC-INDIC DIGIT FOUR, right-left (need override)
0x00b5: 0x0665, # ARABIC-INDIC DIGIT FIVE, right-left (need override)
0x00b6: 0x0666, # ARABIC-INDIC DIGIT SIX, right-left (need override)
0x00b7: 0x0667, # ARABIC-INDIC DIGIT SEVEN, right-left (need override)
0x00b8: 0x0668, # ARABIC-INDIC DIGIT EIGHT, right-left (need override)
0x00b9: 0x0669, # ARABIC-INDIC DIGIT NINE, right-left (need override)
0x00ba: 0x003a, # COLON, right-left
0x00bb: 0x061b, # ARABIC SEMICOLON
0x00bc: 0x003c, # LESS-THAN SIGN, right-left
0x00bd: 0x003d, # EQUALS SIGN, right-left
0x00be: 0x003e, # GREATER-THAN SIGN, right-left
0x00bf: 0x061f, # ARABIC QUESTION MARK
0x00c0: 0x274a, # EIGHT TEARDROP-SPOKED PROPELLER ASTERISK, right-left
0x00c1: 0x0621, # ARABIC LETTER HAMZA
0x00c2: 0x0622, # ARABIC LETTER ALEF WITH MADDA ABOVE
0x00c3: 0x0623, # ARABIC LETTER ALEF WITH HAMZA ABOVE
0x00c4: 0x0624, # ARABIC LETTER WAW WITH HAMZA ABOVE
0x00c5: 0x0625, # ARABIC LETTER ALEF WITH HAMZA BELOW
0x00c6: 0x0626, # ARABIC LETTER YEH WITH HAMZA ABOVE
0x00c7: 0x0627, # ARABIC LETTER ALEF
0x00c8: 0x0628, # ARABIC LETTER BEH
0x00c9: 0x0629, # ARABIC LETTER TEH MARBUTA
0x00ca: 0x062a, # ARABIC LETTER TEH
0x00cb: 0x062b, # ARABIC LETTER THEH
0x00cc: 0x062c, # ARABIC LETTER JEEM
0x00cd: 0x062d, # ARABIC LETTER HAH
0x00ce: 0x062e, # ARABIC LETTER KHAH
0x00cf: 0x062f, # ARABIC LETTER DAL
0x00d0: 0x0630, # ARABIC LETTER THAL
0x00d1: 0x0631, # ARABIC LETTER REH
0x00d2: 0x0632, # ARABIC LETTER ZAIN
0x00d3: 0x0633, # ARABIC LETTER SEEN
0x00d4: 0x0634, # ARABIC LETTER SHEEN
0x00d5: 0x0635, # ARABIC LETTER SAD
0x00d6: 0x0636, # ARABIC LETTER DAD
0x00d7: 0x0637, # ARABIC LETTER TAH
0x00d8: 0x0638, # ARABIC LETTER ZAH
0x00d9: 0x0639, # ARABIC LETTER AIN
0x00da: 0x063a, # ARABIC LETTER GHAIN
0x00db: 0x005b, # LEFT SQUARE BRACKET, right-left
0x00dc: 0x005c, # REVERSE SOLIDUS, right-left
0x00dd: 0x005d, # RIGHT SQUARE BRACKET, right-left
0x00de: 0x005e, # CIRCUMFLEX ACCENT, right-left
0x00df: 0x005f, # LOW LINE, right-left
0x00e0: 0x0640, # ARABIC TATWEEL
0x00e1: 0x0641, # ARABIC LETTER FEH
0x00e2: 0x0642, # ARABIC LETTER QAF
0x00e3: 0x0643, # ARABIC LETTER KAF
0x00e4: 0x0644, # ARABIC LETTER LAM
0x00e5: 0x0645, # ARABIC LETTER MEEM
0x00e6: 0x0646, # ARABIC LETTER NOON
0x00e7: 0x0647, # ARABIC LETTER HEH
0x00e8: 0x0648, # ARABIC LETTER WAW
0x00e9: 0x0649, # ARABIC LETTER ALEF MAKSURA
0x00ea: 0x064a, # ARABIC LETTER YEH
0x00eb: 0x064b, # ARABIC FATHATAN
0x00ec: 0x064c, # ARABIC DAMMATAN
0x00ed: 0x064d, # ARABIC KASRATAN
0x00ee: 0x064e, # ARABIC FATHA
0x00ef: 0x064f, # ARABIC DAMMA
0x00f0: 0x0650, # ARABIC KASRA
0x00f1: 0x0651, # ARABIC SHADDA
0x00f2: 0x0652, # ARABIC SUKUN
0x00f3: 0x067e, # ARABIC LETTER PEH
0x00f4: 0x0679, # ARABIC LETTER TTEH
0x00f5: 0x0686, # ARABIC LETTER TCHEH
0x00f6: 0x06d5, # ARABIC LETTER AE
0x00f7: 0x06a4, # ARABIC LETTER VEH
0x00f8: 0x06af, # ARABIC LETTER GAF
0x00f9: 0x0688, # ARABIC LETTER DDAL
0x00fa: 0x0691, # ARABIC LETTER RREH
0x00fb: 0x007b, # LEFT CURLY BRACKET, right-left
0x00fc: 0x007c, # VERTICAL LINE, right-left
0x00fd: 0x007d, # RIGHT CURLY BRACKET, right-left
0x00fe: 0x0698, # ARABIC LETTER JEH
0x00ff: 0x06d2, # ARABIC LETTER YEH BARREE
})
### Decoding Table
decoding_table = (
'\x00' # 0x0000 -> CONTROL CHARACTER
'\x01' # 0x0001 -> CONTROL CHARACTER
'\x02' # 0x0002 -> CONTROL CHARACTER
'\x03' # 0x0003 -> CONTROL CHARACTER
'\x04' # 0x0004 -> CONTROL CHARACTER
'\x05' # 0x0005 -> CONTROL CHARACTER
'\x06' # 0x0006 -> CONTROL CHARACTER
'\x07' # 0x0007 -> CONTROL CHARACTER
'\x08' # 0x0008 -> CONTROL CHARACTER
'\t' # 0x0009 -> CONTROL CHARACTER
'\n' # 0x000a -> CONTROL CHARACTER
'\x0b' # 0x000b -> CONTROL CHARACTER
'\x0c' # 0x000c -> CONTROL CHARACTER
'\r' # 0x000d -> CONTROL CHARACTER
'\x0e' # 0x000e -> CONTROL CHARACTER
'\x0f' # 0x000f -> CONTROL CHARACTER
'\x10' # 0x0010 -> CONTROL CHARACTER
'\x11' # 0x0011 -> CONTROL CHARACTER
'\x12' # 0x0012 -> CONTROL CHARACTER
'\x13' # 0x0013 -> CONTROL CHARACTER
'\x14' # 0x0014 -> CONTROL CHARACTER
'\x15' # 0x0015 -> CONTROL CHARACTER
'\x16' # 0x0016 -> CONTROL CHARACTER
'\x17' # 0x0017 -> CONTROL CHARACTER
'\x18' # 0x0018 -> CONTROL CHARACTER
'\x19' # 0x0019 -> CONTROL CHARACTER
'\x1a' # 0x001a -> CONTROL CHARACTER
'\x1b' # 0x001b -> CONTROL CHARACTER
'\x1c' # 0x001c -> CONTROL CHARACTER
'\x1d' # 0x001d -> CONTROL CHARACTER
'\x1e' # 0x001e -> CONTROL CHARACTER
'\x1f' # 0x001f -> CONTROL CHARACTER
' ' # 0x0020 -> SPACE, left-right
'!' # 0x0021 -> EXCLAMATION MARK, left-right
'"' # 0x0022 -> QUOTATION MARK, left-right
'#' # 0x0023 -> NUMBER SIGN, left-right
'$' # 0x0024 -> DOLLAR SIGN, left-right
'%' # 0x0025 -> PERCENT SIGN, left-right
'&' # 0x0026 -> AMPERSAND, left-right
"'" # 0x0027 -> APOSTROPHE, left-right
'(' # 0x0028 -> LEFT PARENTHESIS, left-right
')' # 0x0029 -> RIGHT PARENTHESIS, left-right
'*' # 0x002a -> ASTERISK, left-right
'+' # 0x002b -> PLUS SIGN, left-right
',' # 0x002c -> COMMA, left-right; in Arabic-script context, displayed as 0x066C ARABIC THOUSANDS SEPARATOR
'-' # 0x002d -> HYPHEN-MINUS, left-right
'.' # 0x002e -> FULL STOP, left-right; in Arabic-script context, displayed as 0x066B ARABIC DECIMAL SEPARATOR
'/' # 0x002f -> SOLIDUS, left-right
'0' # 0x0030 -> DIGIT ZERO; in Arabic-script context, displayed as 0x0660 ARABIC-INDIC DIGIT ZERO
'1' # 0x0031 -> DIGIT ONE; in Arabic-script context, displayed as 0x0661 ARABIC-INDIC DIGIT ONE
'2' # 0x0032 -> DIGIT TWO; in Arabic-script context, displayed as 0x0662 ARABIC-INDIC DIGIT TWO
'3' # 0x0033 -> DIGIT THREE; in Arabic-script context, displayed as 0x0663 ARABIC-INDIC DIGIT THREE
'4' # 0x0034 -> DIGIT FOUR; in Arabic-script context, displayed as 0x0664 ARABIC-INDIC DIGIT FOUR
'5' # 0x0035 -> DIGIT FIVE; in Arabic-script context, displayed as 0x0665 ARABIC-INDIC DIGIT FIVE
'6' # 0x0036 -> DIGIT SIX; in Arabic-script context, displayed as 0x0666 ARABIC-INDIC DIGIT SIX
'7' # 0x0037 -> DIGIT SEVEN; in Arabic-script context, displayed as 0x0667 ARABIC-INDIC DIGIT SEVEN
'8' # 0x0038 -> DIGIT EIGHT; in Arabic-script context, displayed as 0x0668 ARABIC-INDIC DIGIT EIGHT
'9' # 0x0039 -> DIGIT NINE; in Arabic-script context, displayed as 0x0669 ARABIC-INDIC DIGIT NINE
':' # 0x003a -> COLON, left-right
';' # 0x003b -> SEMICOLON, left-right
'<' # 0x003c -> LESS-THAN SIGN, left-right
'=' # 0x003d -> EQUALS SIGN, left-right
'>' # 0x003e -> GREATER-THAN SIGN, left-right
'?' # 0x003f -> QUESTION MARK, left-right
'@' # 0x0040 -> COMMERCIAL AT
'A' # 0x0041 -> LATIN CAPITAL LETTER A
'B' # 0x0042 -> LATIN CAPITAL LETTER B
'C' # 0x0043 -> LATIN CAPITAL LETTER C
'D' # 0x0044 -> LATIN CAPITAL LETTER D
'E' # 0x0045 -> LATIN CAPITAL LETTER E
'F' # 0x0046 -> LATIN CAPITAL LETTER F
'G' # 0x0047 -> LATIN CAPITAL LETTER G
'H' # 0x0048 -> LATIN CAPITAL LETTER H
'I' # 0x0049 -> LATIN CAPITAL LETTER I
'J' # 0x004a -> LATIN CAPITAL LETTER J
'K' # 0x004b -> LATIN CAPITAL LETTER K
'L' # 0x004c -> LATIN CAPITAL LETTER L
'M' # 0x004d -> LATIN CAPITAL LETTER M
'N' # 0x004e -> LATIN CAPITAL LETTER N
'O' # 0x004f -> LATIN CAPITAL LETTER O
'P' # 0x0050 -> LATIN CAPITAL LETTER P
'Q' # 0x0051 -> LATIN CAPITAL LETTER Q
'R' # 0x0052 -> LATIN CAPITAL LETTER R
'S' # 0x0053 -> LATIN CAPITAL LETTER S
'T' # 0x0054 -> LATIN CAPITAL LETTER T
'U' # 0x0055 -> LATIN CAPITAL LETTER U
'V' # 0x0056 -> LATIN CAPITAL LETTER V
'W' # 0x0057 -> LATIN CAPITAL LETTER W
'X' # 0x0058 -> LATIN CAPITAL LETTER X
'Y' # 0x0059 -> LATIN CAPITAL LETTER Y
'Z' # 0x005a -> LATIN CAPITAL LETTER Z
'[' # 0x005b -> LEFT SQUARE BRACKET, left-right
'\\' # 0x005c -> REVERSE SOLIDUS, left-right
']' # 0x005d -> RIGHT SQUARE BRACKET, left-right
'^' # 0x005e -> CIRCUMFLEX ACCENT, left-right
'_' # 0x005f -> LOW LINE, left-right
'`' # 0x0060 -> GRAVE ACCENT
'a' # 0x0061 -> LATIN SMALL LETTER A
'b' # 0x0062 -> LATIN SMALL LETTER B
'c' # 0x0063 -> LATIN SMALL LETTER C
'd' # 0x0064 -> LATIN SMALL LETTER D
'e' # 0x0065 -> LATIN SMALL LETTER E
'f' # 0x0066 -> LATIN SMALL LETTER F
'g' # 0x0067 -> LATIN SMALL LETTER G
'h' # 0x0068 -> LATIN SMALL LETTER H
'i' # 0x0069 -> LATIN SMALL LETTER I
'j' # 0x006a -> LATIN SMALL LETTER J
'k' # 0x006b -> LATIN SMALL LETTER K
'l' # 0x006c -> LATIN SMALL LETTER L
'm' # 0x006d -> LATIN SMALL LETTER M
'n' # 0x006e -> LATIN SMALL LETTER N
'o' # 0x006f -> LATIN SMALL LETTER O
'p' # 0x0070 -> LATIN SMALL LETTER P
'q' # 0x0071 -> LATIN SMALL LETTER Q
'r' # 0x0072 -> LATIN SMALL LETTER R
's' # 0x0073 -> LATIN SMALL LETTER S
't' # 0x0074 -> LATIN SMALL LETTER T
'u' # 0x0075 -> LATIN SMALL LETTER U
'v' # 0x0076 -> LATIN SMALL LETTER V
'w' # 0x0077 -> LATIN SMALL LETTER W
'x' # 0x0078 -> LATIN SMALL LETTER X
'y' # 0x0079 -> LATIN SMALL LETTER Y
'z' # 0x007a -> LATIN SMALL LETTER Z
'{' # 0x007b -> LEFT CURLY BRACKET, left-right
'|' # 0x007c -> VERTICAL LINE, left-right
'}' # 0x007d -> RIGHT CURLY BRACKET, left-right
'~' # 0x007e -> TILDE
'\x7f' # 0x007f -> CONTROL CHARACTER
'\xc4' # 0x0080 -> LATIN CAPITAL LETTER A WITH DIAERESIS
'\xa0' # 0x0081 -> NO-BREAK SPACE, right-left
'\xc7' # 0x0082 -> LATIN CAPITAL LETTER C WITH CEDILLA
'\xc9' # 0x0083 -> LATIN CAPITAL LETTER E WITH ACUTE
'\xd1' # 0x0084 -> LATIN CAPITAL LETTER N WITH TILDE
'\xd6' # 0x0085 -> LATIN CAPITAL LETTER O WITH DIAERESIS
'\xdc' # 0x0086 -> LATIN CAPITAL LETTER U WITH DIAERESIS
'\xe1' # 0x0087 -> LATIN SMALL LETTER A WITH ACUTE
'\xe0' # 0x0088 -> LATIN SMALL LETTER A WITH GRAVE
'\xe2' # 0x0089 -> LATIN SMALL LETTER A WITH CIRCUMFLEX
'\xe4' # 0x008a -> LATIN SMALL LETTER A WITH DIAERESIS
'\u06ba' # 0x008b -> ARABIC LETTER NOON GHUNNA
'\xab' # 0x008c -> LEFT-POINTING DOUBLE ANGLE QUOTATION MARK, right-left
'\xe7' # 0x008d -> LATIN SMALL LETTER C WITH CEDILLA
'\xe9' # 0x008e -> LATIN SMALL LETTER E WITH ACUTE
'\xe8' # 0x008f -> LATIN SMALL LETTER E WITH GRAVE
'\xea' # 0x0090 -> LATIN SMALL LETTER E WITH CIRCUMFLEX
'\xeb' # 0x0091 -> LATIN SMALL LETTER E WITH DIAERESIS
'\xed' # 0x0092 -> LATIN SMALL LETTER I WITH ACUTE
'\u2026' # 0x0093 -> HORIZONTAL ELLIPSIS, right-left
'\xee' # 0x0094 -> LATIN SMALL LETTER I WITH CIRCUMFLEX
'\xef' # 0x0095 -> LATIN SMALL LETTER I WITH DIAERESIS
'\xf1' # 0x0096 -> LATIN SMALL LETTER N WITH TILDE
'\xf3' # 0x0097 -> LATIN SMALL LETTER O WITH ACUTE
'\xbb' # 0x0098 -> RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK, right-left
'\xf4' # 0x0099 -> LATIN SMALL LETTER O WITH CIRCUMFLEX
'\xf6' # 0x009a -> LATIN SMALL LETTER O WITH DIAERESIS
'\xf7' # 0x009b -> DIVISION SIGN, right-left
'\xfa' # 0x009c -> LATIN SMALL LETTER U WITH ACUTE
'\xf9' # 0x009d -> LATIN SMALL LETTER U WITH GRAVE
'\xfb' # 0x009e -> LATIN SMALL LETTER U WITH CIRCUMFLEX
'\xfc' # 0x009f -> LATIN SMALL LETTER U WITH DIAERESIS
' ' # 0x00a0 -> SPACE, right-left
'!' # 0x00a1 -> EXCLAMATION MARK, right-left
'"' # 0x00a2 -> QUOTATION MARK, right-left
'#' # 0x00a3 -> NUMBER SIGN, right-left
'$' # 0x00a4 -> DOLLAR SIGN, right-left
'\u066a' # 0x00a5 -> ARABIC PERCENT SIGN
'&' # 0x00a6 -> AMPERSAND, right-left
"'" # 0x00a7 -> APOSTROPHE, right-left
'(' # 0x00a8 -> LEFT PARENTHESIS, right-left
')' # 0x00a9 -> RIGHT PARENTHESIS, right-left
'*' # 0x00aa -> ASTERISK, right-left
'+' # 0x00ab -> PLUS SIGN, right-left
'\u060c' # 0x00ac -> ARABIC COMMA
'-' # 0x00ad -> HYPHEN-MINUS, right-left
'.' # 0x00ae -> FULL STOP, right-left
'/' # 0x00af -> SOLIDUS, right-left
'\u0660' # 0x00b0 -> ARABIC-INDIC DIGIT ZERO, right-left (need override)
'\u0661' # 0x00b1 -> ARABIC-INDIC DIGIT ONE, right-left (need override)
'\u0662' # 0x00b2 -> ARABIC-INDIC DIGIT TWO, right-left (need override)
'\u0663' # 0x00b3 -> ARABIC-INDIC DIGIT THREE, right-left (need override)
'\u0664' # 0x00b4 -> ARABIC-INDIC DIGIT FOUR, right-left (need override)
'\u0665' # 0x00b5 -> ARABIC-INDIC DIGIT FIVE, right-left (need override)
'\u0666' # 0x00b6 -> ARABIC-INDIC DIGIT SIX, right-left (need override)
'\u0667' # 0x00b7 -> ARABIC-INDIC DIGIT SEVEN, right-left (need override)
'\u0668' # 0x00b8 -> ARABIC-INDIC DIGIT EIGHT, right-left (need override)
'\u0669' # 0x00b9 -> ARABIC-INDIC DIGIT NINE, right-left (need override)
':' # 0x00ba -> COLON, right-left
'\u061b' # 0x00bb -> ARABIC SEMICOLON
'<' # 0x00bc -> LESS-THAN SIGN, right-left
'=' # 0x00bd -> EQUALS SIGN, right-left
'>' # 0x00be -> GREATER-THAN SIGN, right-left
'\u061f' # 0x00bf -> ARABIC QUESTION MARK
'\u274a' # 0x00c0 -> EIGHT TEARDROP-SPOKED PROPELLER ASTERISK, right-left
'\u0621' # 0x00c1 -> ARABIC LETTER HAMZA
'\u0622' # 0x00c2 -> ARABIC LETTER ALEF WITH MADDA ABOVE
'\u0623' # 0x00c3 -> ARABIC LETTER ALEF WITH HAMZA ABOVE
'\u0624' # 0x00c4 -> ARABIC LETTER WAW WITH HAMZA ABOVE
'\u0625' # 0x00c5 -> ARABIC LETTER ALEF WITH HAMZA BELOW
'\u0626' # 0x00c6 -> ARABIC LETTER YEH WITH HAMZA ABOVE
'\u0627' # 0x00c7 -> ARABIC LETTER ALEF
'\u0628' # 0x00c8 -> ARABIC LETTER BEH
'\u0629' # 0x00c9 -> ARABIC LETTER TEH MARBUTA
'\u062a' # 0x00ca -> ARABIC LETTER TEH
'\u062b' # 0x00cb -> ARABIC LETTER THEH
'\u062c' # 0x00cc -> ARABIC LETTER JEEM
'\u062d' # 0x00cd -> ARABIC LETTER HAH
'\u062e' # 0x00ce -> ARABIC LETTER KHAH
'\u062f' # 0x00cf -> ARABIC LETTER DAL
'\u0630' # 0x00d0 -> ARABIC LETTER THAL
'\u0631' # 0x00d1 -> ARABIC LETTER REH
'\u0632' # 0x00d2 -> ARABIC LETTER ZAIN
'\u0633' # 0x00d3 -> ARABIC LETTER SEEN
'\u0634' # 0x00d4 -> ARABIC LETTER SHEEN
'\u0635' # 0x00d5 -> ARABIC LETTER SAD
'\u0636' # 0x00d6 -> ARABIC LETTER DAD
'\u0637' # 0x00d7 -> ARABIC LETTER TAH
'\u0638' # 0x00d8 -> ARABIC LETTER ZAH
'\u0639' # 0x00d9 -> ARABIC LETTER AIN
'\u063a' # 0x00da -> ARABIC LETTER GHAIN
'[' # 0x00db -> LEFT SQUARE BRACKET, right-left
'\\' # 0x00dc -> REVERSE SOLIDUS, right-left
']' # 0x00dd -> RIGHT SQUARE BRACKET, right-left
'^' # 0x00de -> CIRCUMFLEX ACCENT, right-left
'_' # 0x00df -> LOW LINE, right-left
'\u0640' # 0x00e0 -> ARABIC TATWEEL
'\u0641' # 0x00e1 -> ARABIC LETTER FEH
'\u0642' # 0x00e2 -> ARABIC LETTER QAF
'\u0643' # 0x00e3 -> ARABIC LETTER KAF
'\u0644' # 0x00e4 -> ARABIC LETTER LAM
'\u0645' # 0x00e5 -> ARABIC LETTER MEEM
'\u0646' # 0x00e6 -> ARABIC LETTER NOON
'\u0647' # 0x00e7 -> ARABIC LETTER HEH
'\u0648' # 0x00e8 -> ARABIC LETTER WAW
'\u0649' # 0x00e9 -> ARABIC LETTER ALEF MAKSURA
'\u064a' # 0x00ea -> ARABIC LETTER YEH
'\u064b' # 0x00eb -> ARABIC FATHATAN
'\u064c' # 0x00ec -> ARABIC DAMMATAN
'\u064d' # 0x00ed -> ARABIC KASRATAN
'\u064e' # 0x00ee -> ARABIC FATHA
'\u064f' # 0x00ef -> ARABIC DAMMA
'\u0650' # 0x00f0 -> ARABIC KASRA
'\u0651' # 0x00f1 -> ARABIC SHADDA
'\u0652' # 0x00f2 -> ARABIC SUKUN
'\u067e' # 0x00f3 -> ARABIC LETTER PEH
'\u0679' # 0x00f4 -> ARABIC LETTER TTEH
'\u0686' # 0x00f5 -> ARABIC LETTER TCHEH
'\u06d5' # 0x00f6 -> ARABIC LETTER AE
'\u06a4' # 0x00f7 -> ARABIC LETTER VEH
'\u06af' # 0x00f8 -> ARABIC LETTER GAF
'\u0688' # 0x00f9 -> ARABIC LETTER DDAL
'\u0691' # 0x00fa -> ARABIC LETTER RREH
'{' # 0x00fb -> LEFT CURLY BRACKET, right-left
'|' # 0x00fc -> VERTICAL LINE, right-left
'}' # 0x00fd -> RIGHT CURLY BRACKET, right-left
'\u0698' # 0x00fe -> ARABIC LETTER JEH
'\u06d2' # 0x00ff -> ARABIC LETTER YEH BARREE
)
### Encoding Map
encoding_map = {
0x0000: 0x0000, # CONTROL CHARACTER
0x0001: 0x0001, # CONTROL CHARACTER
0x0002: 0x0002, # CONTROL CHARACTER
0x0003: 0x0003, # CONTROL CHARACTER
0x0004: 0x0004, # CONTROL CHARACTER
0x0005: 0x0005, # CONTROL CHARACTER
0x0006: 0x0006, # CONTROL CHARACTER
0x0007: 0x0007, # CONTROL CHARACTER
0x0008: 0x0008, # CONTROL CHARACTER
0x0009: 0x0009, # CONTROL CHARACTER
0x000a: 0x000a, # CONTROL CHARACTER
0x000b: 0x000b, # CONTROL CHARACTER
0x000c: 0x000c, # CONTROL CHARACTER
0x000d: 0x000d, # CONTROL CHARACTER
0x000e: 0x000e, # CONTROL CHARACTER
0x000f: 0x000f, # CONTROL CHARACTER
0x0010: 0x0010, # CONTROL CHARACTER
0x0011: 0x0011, # CONTROL CHARACTER
0x0012: 0x0012, # CONTROL CHARACTER
0x0013: 0x0013, # CONTROL CHARACTER
0x0014: 0x0014, # CONTROL CHARACTER
0x0015: 0x0015, # CONTROL CHARACTER
0x0016: 0x0016, # CONTROL CHARACTER
0x0017: 0x0017, # CONTROL CHARACTER
0x0018: 0x0018, # CONTROL CHARACTER
0x0019: 0x0019, # CONTROL CHARACTER
0x001a: 0x001a, # CONTROL CHARACTER
0x001b: 0x001b, # CONTROL CHARACTER
0x001c: 0x001c, # CONTROL CHARACTER
0x001d: 0x001d, # CONTROL CHARACTER
0x001e: 0x001e, # CONTROL CHARACTER
0x001f: 0x001f, # CONTROL CHARACTER
0x0020: 0x0020, # SPACE, left-right
0x0020: 0x00a0, # SPACE, right-left
0x0021: 0x0021, # EXCLAMATION MARK, left-right
0x0021: 0x00a1, # EXCLAMATION MARK, right-left
0x0022: 0x0022, # QUOTATION MARK, left-right
0x0022: 0x00a2, # QUOTATION MARK, right-left
0x0023: 0x0023, # NUMBER SIGN, left-right
0x0023: 0x00a3, # NUMBER SIGN, right-left
0x0024: 0x0024, # DOLLAR SIGN, left-right
0x0024: 0x00a4, # DOLLAR SIGN, right-left
0x0025: 0x0025, # PERCENT SIGN, left-right
0x0026: 0x0026, # AMPERSAND, left-right
0x0026: 0x00a6, # AMPERSAND, right-left
0x0027: 0x0027, # APOSTROPHE, left-right
0x0027: 0x00a7, # APOSTROPHE, right-left
0x0028: 0x0028, # LEFT PARENTHESIS, left-right
0x0028: 0x00a8, # LEFT PARENTHESIS, right-left
0x0029: 0x0029, # RIGHT PARENTHESIS, left-right
0x0029: 0x00a9, # RIGHT PARENTHESIS, right-left
0x002a: 0x002a, # ASTERISK, left-right
0x002a: 0x00aa, # ASTERISK, right-left
0x002b: 0x002b, # PLUS SIGN, left-right
0x002b: 0x00ab, # PLUS SIGN, right-left
0x002c: 0x002c, # COMMA, left-right; in Arabic-script context, displayed as 0x066C ARABIC THOUSANDS SEPARATOR
0x002d: 0x002d, # HYPHEN-MINUS, left-right
0x002d: 0x00ad, # HYPHEN-MINUS, right-left
0x002e: 0x002e, # FULL STOP, left-right; in Arabic-script context, displayed as 0x066B ARABIC DECIMAL SEPARATOR
0x002e: 0x00ae, # FULL STOP, right-left
0x002f: 0x002f, # SOLIDUS, left-right
0x002f: 0x00af, # SOLIDUS, right-left
0x0030: 0x0030, # DIGIT ZERO; in Arabic-script context, displayed as 0x0660 ARABIC-INDIC DIGIT ZERO
0x0031: 0x0031, # DIGIT ONE; in Arabic-script context, displayed as 0x0661 ARABIC-INDIC DIGIT ONE
0x0032: 0x0032, # DIGIT TWO; in Arabic-script context, displayed as 0x0662 ARABIC-INDIC DIGIT TWO
0x0033: 0x0033, # DIGIT THREE; in Arabic-script context, displayed as 0x0663 ARABIC-INDIC DIGIT THREE
0x0034: 0x0034, # DIGIT FOUR; in Arabic-script context, displayed as 0x0664 ARABIC-INDIC DIGIT FOUR
0x0035: 0x0035, # DIGIT FIVE; in Arabic-script context, displayed as 0x0665 ARABIC-INDIC DIGIT FIVE
0x0036: 0x0036, # DIGIT SIX; in Arabic-script context, displayed as 0x0666 ARABIC-INDIC DIGIT SIX
0x0037: 0x0037, # DIGIT SEVEN; in Arabic-script context, displayed as 0x0667 ARABIC-INDIC DIGIT SEVEN
0x0038: 0x0038, # DIGIT EIGHT; in Arabic-script context, displayed as 0x0668 ARABIC-INDIC DIGIT EIGHT
0x0039: 0x0039, # DIGIT NINE; in Arabic-script context, displayed as 0x0669 ARABIC-INDIC DIGIT NINE
0x003a: 0x003a, # COLON, left-right
0x003a: 0x00ba, # COLON, right-left
0x003b: 0x003b, # SEMICOLON, left-right
0x003c: 0x003c, # LESS-THAN SIGN, left-right
0x003c: 0x00bc, # LESS-THAN SIGN, right-left
0x003d: 0x003d, # EQUALS SIGN, left-right
0x003d: 0x00bd, # EQUALS SIGN, right-left
0x003e: 0x003e, # GREATER-THAN SIGN, left-right
0x003e: 0x00be, # GREATER-THAN SIGN, right-left
0x003f: 0x003f, # QUESTION MARK, left-right
0x0040: 0x0040, # COMMERCIAL AT
0x0041: 0x0041, # LATIN CAPITAL LETTER A
0x0042: 0x0042, # LATIN CAPITAL LETTER B
0x0043: 0x0043, # LATIN CAPITAL LETTER C
0x0044: 0x0044, # LATIN CAPITAL LETTER D
0x0045: 0x0045, # LATIN CAPITAL LETTER E
0x0046: 0x0046, # LATIN CAPITAL LETTER F
0x0047: 0x0047, # LATIN CAPITAL LETTER G
0x0048: 0x0048, # LATIN CAPITAL LETTER H
0x0049: 0x0049, # LATIN CAPITAL LETTER I
0x004a: 0x004a, # LATIN CAPITAL LETTER J
0x004b: 0x004b, # LATIN CAPITAL LETTER K
0x004c: 0x004c, # LATIN CAPITAL LETTER L
0x004d: 0x004d, # LATIN CAPITAL LETTER M
0x004e: 0x004e, # LATIN CAPITAL LETTER N
0x004f: 0x004f, # LATIN CAPITAL LETTER O
0x0050: 0x0050, # LATIN CAPITAL LETTER P
0x0051: 0x0051, # LATIN CAPITAL LETTER Q
0x0052: 0x0052, # LATIN CAPITAL LETTER R
0x0053: 0x0053, # LATIN CAPITAL LETTER S
0x0054: 0x0054, # LATIN CAPITAL LETTER T
0x0055: 0x0055, # LATIN CAPITAL LETTER U
0x0056: 0x0056, # LATIN CAPITAL LETTER V
0x0057: 0x0057, # LATIN CAPITAL LETTER W
0x0058: 0x0058, # LATIN CAPITAL LETTER X
0x0059: 0x0059, # LATIN CAPITAL LETTER Y
0x005a: 0x005a, # LATIN CAPITAL LETTER Z
0x005b: 0x005b, # LEFT SQUARE BRACKET, left-right
0x005b: 0x00db, # LEFT SQUARE BRACKET, right-left
0x005c: 0x005c, # REVERSE SOLIDUS, left-right
0x005c: 0x00dc, # REVERSE SOLIDUS, right-left
0x005d: 0x005d, # RIGHT SQUARE BRACKET, left-right
0x005d: 0x00dd, # RIGHT SQUARE BRACKET, right-left
0x005e: 0x005e, # CIRCUMFLEX ACCENT, left-right
0x005e: 0x00de, # CIRCUMFLEX ACCENT, right-left
0x005f: 0x005f, # LOW LINE, left-right
0x005f: 0x00df, # LOW LINE, right-left
0x0060: 0x0060, # GRAVE ACCENT
0x0061: 0x0061, # LATIN SMALL LETTER A
0x0062: 0x0062, # LATIN SMALL LETTER B
0x0063: 0x0063, # LATIN SMALL LETTER C
0x0064: 0x0064, # LATIN SMALL LETTER D
0x0065: 0x0065, # LATIN SMALL LETTER E
0x0066: 0x0066, # LATIN SMALL LETTER F
0x0067: 0x0067, # LATIN SMALL LETTER G
0x0068: 0x0068, # LATIN SMALL LETTER H
0x0069: 0x0069, # LATIN SMALL LETTER I
0x006a: 0x006a, # LATIN SMALL LETTER J
0x006b: 0x006b, # LATIN SMALL LETTER K
0x006c: 0x006c, # LATIN SMALL LETTER L
0x006d: 0x006d, # LATIN SMALL LETTER M
0x006e: 0x006e, # LATIN SMALL LETTER N
0x006f: 0x006f, # LATIN SMALL LETTER O
0x0070: 0x0070, # LATIN SMALL LETTER P
0x0071: 0x0071, # LATIN SMALL LETTER Q
0x0072: 0x0072, # LATIN SMALL LETTER R
0x0073: 0x0073, # LATIN SMALL LETTER S
0x0074: 0x0074, # LATIN SMALL LETTER T
0x0075: 0x0075, # LATIN SMALL LETTER U
0x0076: 0x0076, # LATIN SMALL LETTER V
0x0077: 0x0077, # LATIN SMALL LETTER W
0x0078: 0x0078, # LATIN SMALL LETTER X
0x0079: 0x0079, # LATIN SMALL LETTER Y
0x007a: 0x007a, # LATIN SMALL LETTER Z
0x007b: 0x007b, # LEFT CURLY BRACKET, left-right
0x007b: 0x00fb, # LEFT CURLY BRACKET, right-left
0x007c: 0x007c, # VERTICAL LINE, left-right
0x007c: 0x00fc, # VERTICAL LINE, right-left
0x007d: 0x007d, # RIGHT CURLY BRACKET, left-right
0x007d: 0x00fd, # RIGHT CURLY BRACKET, right-left
0x007e: 0x007e, # TILDE
0x007f: 0x007f, # CONTROL CHARACTER
0x00a0: 0x0081, # NO-BREAK SPACE, right-left
0x00ab: 0x008c, # LEFT-POINTING DOUBLE ANGLE QUOTATION MARK, right-left
0x00bb: 0x0098, # RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK, right-left
0x00c4: 0x0080, # LATIN CAPITAL LETTER A WITH DIAERESIS
0x00c7: 0x0082, # LATIN CAPITAL LETTER C WITH CEDILLA
0x00c9: 0x0083, # LATIN CAPITAL LETTER E WITH ACUTE
0x00d1: 0x0084, # LATIN CAPITAL LETTER N WITH TILDE
0x00d6: 0x0085, # LATIN CAPITAL LETTER O WITH DIAERESIS
0x00dc: 0x0086, # LATIN CAPITAL LETTER U WITH DIAERESIS
0x00e0: 0x0088, # LATIN SMALL LETTER A WITH GRAVE
0x00e1: 0x0087, # LATIN SMALL LETTER A WITH ACUTE
0x00e2: 0x0089, # LATIN SMALL LETTER A WITH CIRCUMFLEX
0x00e4: 0x008a, # LATIN SMALL LETTER A WITH DIAERESIS
0x00e7: 0x008d, # LATIN SMALL LETTER C WITH CEDILLA
0x00e8: 0x008f, # LATIN SMALL LETTER E WITH GRAVE
0x00e9: 0x008e, # LATIN SMALL LETTER E WITH ACUTE
0x00ea: 0x0090, # LATIN SMALL LETTER E WITH CIRCUMFLEX
0x00eb: 0x0091, # LATIN SMALL LETTER E WITH DIAERESIS
0x00ed: 0x0092, # LATIN SMALL LETTER I WITH ACUTE
0x00ee: 0x0094, # LATIN SMALL LETTER I WITH CIRCUMFLEX
0x00ef: 0x0095, # LATIN SMALL LETTER I WITH DIAERESIS
0x00f1: 0x0096, # LATIN SMALL LETTER N WITH TILDE
0x00f3: 0x0097, # LATIN SMALL LETTER O WITH ACUTE
0x00f4: 0x0099, # LATIN SMALL LETTER O WITH CIRCUMFLEX
0x00f6: 0x009a, # LATIN SMALL LETTER O WITH DIAERESIS
0x00f7: 0x009b, # DIVISION SIGN, right-left
0x00f9: 0x009d, # LATIN SMALL LETTER U WITH GRAVE
0x00fa: 0x009c, # LATIN SMALL LETTER U WITH ACUTE
0x00fb: 0x009e, # LATIN SMALL LETTER U WITH CIRCUMFLEX
0x00fc: 0x009f, # LATIN SMALL LETTER U WITH DIAERESIS
0x060c: 0x00ac, # ARABIC COMMA
0x061b: 0x00bb, # ARABIC SEMICOLON
0x061f: 0x00bf, # ARABIC QUESTION MARK
0x0621: 0x00c1, # ARABIC LETTER HAMZA
0x0622: 0x00c2, # ARABIC LETTER ALEF WITH MADDA ABOVE
0x0623: 0x00c3, # ARABIC LETTER ALEF WITH HAMZA ABOVE
0x0624: 0x00c4, # ARABIC LETTER WAW WITH HAMZA ABOVE
0x0625: 0x00c5, # ARABIC LETTER ALEF WITH HAMZA BELOW
0x0626: 0x00c6, # ARABIC LETTER YEH WITH HAMZA ABOVE
0x0627: 0x00c7, # ARABIC LETTER ALEF
0x0628: 0x00c8, # ARABIC LETTER BEH
0x0629: 0x00c9, # ARABIC LETTER TEH MARBUTA
0x062a: 0x00ca, # ARABIC LETTER TEH
0x062b: 0x00cb, # ARABIC LETTER THEH
0x062c: 0x00cc, # ARABIC LETTER JEEM
0x062d: 0x00cd, # ARABIC LETTER HAH
0x062e: 0x00ce, # ARABIC LETTER KHAH
0x062f: 0x00cf, # ARABIC LETTER DAL
0x0630: 0x00d0, # ARABIC LETTER THAL
0x0631: 0x00d1, # ARABIC LETTER REH
0x0632: 0x00d2, # ARABIC LETTER ZAIN
0x0633: 0x00d3, # ARABIC LETTER SEEN
0x0634: 0x00d4, # ARABIC LETTER SHEEN
0x0635: 0x00d5, # ARABIC LETTER SAD
0x0636: 0x00d6, # ARABIC LETTER DAD
0x0637: 0x00d7, # ARABIC LETTER TAH
0x0638: 0x00d8, # ARABIC LETTER ZAH
0x0639: 0x00d9, # ARABIC LETTER AIN
0x063a: 0x00da, # ARABIC LETTER GHAIN
0x0640: 0x00e0, # ARABIC TATWEEL
0x0641: 0x00e1, # ARABIC LETTER FEH
0x0642: 0x00e2, # ARABIC LETTER QAF
0x0643: 0x00e3, # ARABIC LETTER KAF
0x0644: 0x00e4, # ARABIC LETTER LAM
0x0645: 0x00e5, # ARABIC LETTER MEEM
0x0646: 0x00e6, # ARABIC LETTER NOON
0x0647: 0x00e7, # ARABIC LETTER HEH
0x0648: 0x00e8, # ARABIC LETTER WAW
0x0649: 0x00e9, # ARABIC LETTER ALEF MAKSURA
0x064a: 0x00ea, # ARABIC LETTER YEH
0x064b: 0x00eb, # ARABIC FATHATAN
0x064c: 0x00ec, # ARABIC DAMMATAN
0x064d: 0x00ed, # ARABIC KASRATAN
0x064e: 0x00ee, # ARABIC FATHA
0x064f: 0x00ef, # ARABIC DAMMA
0x0650: 0x00f0, # ARABIC KASRA
0x0651: 0x00f1, # ARABIC SHADDA
0x0652: 0x00f2, # ARABIC SUKUN
0x0660: 0x00b0, # ARABIC-INDIC DIGIT ZERO, right-left (need override)
0x0661: 0x00b1, # ARABIC-INDIC DIGIT ONE, right-left (need override)
0x0662: 0x00b2, # ARABIC-INDIC DIGIT TWO, right-left (need override)
0x0663: 0x00b3, # ARABIC-INDIC DIGIT THREE, right-left (need override)
0x0664: 0x00b4, # ARABIC-INDIC DIGIT FOUR, right-left (need override)
0x0665: 0x00b5, # ARABIC-INDIC DIGIT FIVE, right-left (need override)
0x0666: 0x00b6, # ARABIC-INDIC DIGIT SIX, right-left (need override)
0x0667: 0x00b7, # ARABIC-INDIC DIGIT SEVEN, right-left (need override)
0x0668: 0x00b8, # ARABIC-INDIC DIGIT EIGHT, right-left (need override)
0x0669: 0x00b9, # ARABIC-INDIC DIGIT NINE, right-left (need override)
0x066a: 0x00a5, # ARABIC PERCENT SIGN
0x0679: 0x00f4, # ARABIC LETTER TTEH
0x067e: 0x00f3, # ARABIC LETTER PEH
0x0686: 0x00f5, # ARABIC LETTER TCHEH
0x0688: 0x00f9, # ARABIC LETTER DDAL
0x0691: 0x00fa, # ARABIC LETTER RREH
0x0698: 0x00fe, # ARABIC LETTER JEH
0x06a4: 0x00f7, # ARABIC LETTER VEH
0x06af: 0x00f8, # ARABIC LETTER GAF
0x06ba: 0x008b, # ARABIC LETTER NOON GHUNNA
0x06d2: 0x00ff, # ARABIC LETTER YEH BARREE
0x06d5: 0x00f6, # ARABIC LETTER AE
0x2026: 0x0093, # HORIZONTAL ELLIPSIS, right-left
0x274a: 0x00c0, # EIGHT TEARDROP-SPOKED PROPELLER ASTERISK, right-left
}
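The decoding table and hand-maintained encoding map above are the two halves of a standard Python charmap codec. A minimal sketch of how the `codecs` machinery consumes such tables — the 4-entry table below is hypothetical, standing in for the full 256-entry mac-arabic table:

```python
import codecs

# Hypothetical 4-entry decoding table: bytes 0x00-0x03 map to these chars.
decoding_table = 'AB\u0621\u0622'

# charmap_build inverts a decoding table into an encoding map, which is
# what the hand-written encoding_map dict above spells out explicitly.
encoding_map = codecs.charmap_build(decoding_table)

decoded, _ = codecs.charmap_decode(b'\x02\x03', 'strict', decoding_table)
encoded, _ = codecs.charmap_encode(decoded, 'strict', encoding_map)
print(decoded == '\u0621\u0622', encoded == b'\x02\x03')  # True True
```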
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@tools@python3@Lib@encodings@mac_arabic.py@.PATH_END.py
|
{
"filename": "_lineposition.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/heatmap/textfont/_lineposition.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators


class LinepositionValidator(_plotly_utils.basevalidators.FlaglistValidator):
def __init__(
self, plotly_name="lineposition", parent_name="heatmap.textfont", **kwargs
):
super(LinepositionValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "plot"),
extras=kwargs.pop("extras", ["none"]),
flags=kwargs.pop("flags", ["under", "over", "through"]),
**kwargs,
)
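A flaglist validator accepts any `+`-joined combination of the declared flags, plus standalone extras such as `"none"`. A standalone sketch of that acceptance rule — the helper below is illustrative, not part of plotly's API:

```python
# Hypothetical standalone check mirroring what a flaglist validator accepts:
# any '+'-joined combination of distinct flags, or one of the extras.
flags = ['under', 'over', 'through']
extras = ['none']

def is_valid(value):
    if value in extras:
        return True
    parts = value.split('+')
    # every part must be a known flag, and no flag may repeat
    return all(p in flags for p in parts) and len(set(parts)) == len(parts)

print(is_valid('under+through'))  # True
print(is_valid('none'))           # True
print(is_valid('under+bold'))    # False
```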
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@heatmap@textfont@_lineposition.py@.PATH_END.py
|
{
"filename": "_textcase.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/barpolar/marker/colorbar/tickfont/_textcase.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators


class TextcaseValidator(_plotly_utils.basevalidators.EnumeratedValidator):
def __init__(
self,
plotly_name="textcase",
parent_name="barpolar.marker.colorbar.tickfont",
**kwargs,
):
super(TextcaseValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "colorbars"),
values=kwargs.pop("values", ["normal", "word caps", "upper", "lower"]),
**kwargs,
)
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@barpolar@marker@colorbar@tickfont@_textcase.py@.PATH_END.py
|
{
"filename": "local.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/py/py2/py/_path/local.py",
"type": "Python"
}
|
"""
local path implementation.
"""
from __future__ import with_statement
from contextlib import contextmanager
import sys, os, atexit, io, uuid
import py
from py._path import common
from py._path.common import iswin32, fspath
from stat import S_ISLNK, S_ISDIR, S_ISREG
from os.path import abspath, normpath, isabs, exists, isdir, isfile, islink, dirname
if sys.version_info > (3,0):
def map_as_list(func, iter):
return list(map(func, iter))
else:
map_as_list = map
ALLOW_IMPORTLIB_MODE = sys.version_info > (3,5)
if ALLOW_IMPORTLIB_MODE:
import importlib
class Stat(object):
def __getattr__(self, name):
return getattr(self._osstatresult, "st_" + name)
def __init__(self, path, osstatresult):
self.path = path
self._osstatresult = osstatresult
@property
def owner(self):
if iswin32:
raise NotImplementedError("XXX win32")
import pwd
entry = py.error.checked_call(pwd.getpwuid, self.uid)
return entry[0]
@property
def group(self):
""" return group name of file. """
if iswin32:
raise NotImplementedError("XXX win32")
import grp
entry = py.error.checked_call(grp.getgrgid, self.gid)
return entry[0]
def isdir(self):
return S_ISDIR(self._osstatresult.st_mode)
def isfile(self):
return S_ISREG(self._osstatresult.st_mode)
def islink(self):
        st = self.path.lstat()
        return S_ISLNK(st.mode)
class PosixPath(common.PathBase):
def chown(self, user, group, rec=0):
""" change ownership to the given user and group.
user and group may be specified by a number or
by a name. if rec is True change ownership
recursively.
"""
uid = getuserid(user)
gid = getgroupid(group)
if rec:
for x in self.visit(rec=lambda x: x.check(link=0)):
if x.check(link=0):
py.error.checked_call(os.chown, str(x), uid, gid)
py.error.checked_call(os.chown, str(self), uid, gid)
def readlink(self):
""" return value of a symbolic link. """
return py.error.checked_call(os.readlink, self.strpath)
def mklinkto(self, oldname):
""" posix style hard link to another name. """
py.error.checked_call(os.link, str(oldname), str(self))
def mksymlinkto(self, value, absolute=1):
""" create a symbolic link with the given value (pointing to another name). """
if absolute:
py.error.checked_call(os.symlink, str(value), self.strpath)
else:
base = self.common(value)
# with posix local paths '/' is always a common base
relsource = self.__class__(value).relto(base)
reldest = self.relto(base)
n = reldest.count(self.sep)
target = self.sep.join(('..', )*n + (relsource, ))
py.error.checked_call(os.symlink, target, self.strpath)
def getuserid(user):
import pwd
if not isinstance(user, int):
user = pwd.getpwnam(user)[2]
return user
def getgroupid(group):
import grp
if not isinstance(group, int):
group = grp.getgrnam(group)[2]
return group
FSBase = not iswin32 and PosixPath or common.PathBase
class LocalPath(FSBase):
""" object oriented interface to os.path and other local filesystem
related information.
"""
class ImportMismatchError(ImportError):
""" raised on pyimport() if there is a mismatch of __file__'s"""
sep = os.sep
class Checkers(common.Checkers):
def _stat(self):
try:
return self._statcache
except AttributeError:
try:
self._statcache = self.path.stat()
except py.error.ELOOP:
self._statcache = self.path.lstat()
return self._statcache
def dir(self):
return S_ISDIR(self._stat().mode)
def file(self):
return S_ISREG(self._stat().mode)
def exists(self):
return self._stat()
def link(self):
st = self.path.lstat()
return S_ISLNK(st.mode)
def __init__(self, path=None, expanduser=False):
""" Initialize and return a local Path instance.
Path can be relative to the current directory.
If path is None it defaults to the current working directory.
If expanduser is True, tilde-expansion is performed.
Note that Path instances always carry an absolute path.
Note also that passing in a local path object will simply return
the exact same path object. Use new() to get a new copy.
"""
if path is None:
self.strpath = py.error.checked_call(os.getcwd)
else:
try:
path = fspath(path)
except TypeError:
raise ValueError("can only pass None, Path instances "
"or non-empty strings to LocalPath")
if expanduser:
path = os.path.expanduser(path)
self.strpath = abspath(path)
def __hash__(self):
s = self.strpath
if iswin32:
s = s.lower()
return hash(s)
def __eq__(self, other):
s1 = fspath(self)
try:
s2 = fspath(other)
except TypeError:
return False
if iswin32:
s1 = s1.lower()
try:
s2 = s2.lower()
except AttributeError:
return False
return s1 == s2
def __ne__(self, other):
return not (self == other)
def __lt__(self, other):
return fspath(self) < fspath(other)
def __gt__(self, other):
return fspath(self) > fspath(other)
def samefile(self, other):
""" return True if 'other' references the same file as 'self'.
"""
other = fspath(other)
if not isabs(other):
other = abspath(other)
if self == other:
return True
if not hasattr(os.path, "samefile"):
return False
return py.error.checked_call(
os.path.samefile, self.strpath, other)
def remove(self, rec=1, ignore_errors=False):
""" remove a file or directory (or a directory tree if rec=1).
if ignore_errors is True, errors while removing directories will
be ignored.
"""
if self.check(dir=1, link=0):
if rec:
# force remove of readonly files on windows
if iswin32:
self.chmod(0o700, rec=1)
import shutil
py.error.checked_call(
shutil.rmtree, self.strpath,
ignore_errors=ignore_errors)
else:
py.error.checked_call(os.rmdir, self.strpath)
else:
if iswin32:
self.chmod(0o700)
py.error.checked_call(os.remove, self.strpath)
def computehash(self, hashtype="md5", chunksize=524288):
""" return hexdigest of hashvalue for this file. """
try:
try:
import hashlib as mod
except ImportError:
if hashtype == "sha1":
hashtype = "sha"
mod = __import__(hashtype)
hash = getattr(mod, hashtype)()
except (AttributeError, ImportError):
raise ValueError("Don't know how to compute %r hash" %(hashtype,))
f = self.open('rb')
try:
while 1:
buf = f.read(chunksize)
if not buf:
return hash.hexdigest()
hash.update(buf)
finally:
f.close()
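`computehash` streams the file in fixed-size chunks so large files are never read fully into memory. A modern-stdlib sketch of the same pattern (the function name is illustrative):

```python
import hashlib
import io

def compute_hash(fileobj, hashtype='md5', chunksize=524288):
    # Stream the file-like object chunk by chunk, feeding the hash object
    # incrementally, exactly as computehash() above does with f.read().
    h = hashlib.new(hashtype)
    for chunk in iter(lambda: fileobj.read(chunksize), b''):
        h.update(chunk)
    return h.hexdigest()

print(compute_hash(io.BytesIO(b'abc'), 'sha1'))
# a9993e364706816aba3e25717850c26c9cd0d89d
```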
def new(self, **kw):
""" create a modified version of this path.
the following keyword arguments modify various path parts::
a:/some/path/to/a/file.ext
xx drive
xxxxxxxxxxxxxxxxx dirname
xxxxxxxx basename
xxxx purebasename
xxx ext
"""
obj = object.__new__(self.__class__)
if not kw:
obj.strpath = self.strpath
return obj
drive, dirname, basename, purebasename,ext = self._getbyspec(
"drive,dirname,basename,purebasename,ext")
if 'basename' in kw:
if 'purebasename' in kw or 'ext' in kw:
raise ValueError("invalid specification %r" % kw)
else:
pb = kw.setdefault('purebasename', purebasename)
try:
ext = kw['ext']
except KeyError:
pass
else:
if ext and not ext.startswith('.'):
ext = '.' + ext
kw['basename'] = pb + ext
if ('dirname' in kw and not kw['dirname']):
kw['dirname'] = drive
else:
kw.setdefault('dirname', dirname)
kw.setdefault('sep', self.sep)
obj.strpath = normpath(
"%(dirname)s%(sep)s%(basename)s" % kw)
return obj
def _getbyspec(self, spec):
""" see new for what 'spec' can be. """
res = []
parts = self.strpath.split(self.sep)
args = filter(None, spec.split(',') )
append = res.append
for name in args:
if name == 'drive':
append(parts[0])
elif name == 'dirname':
append(self.sep.join(parts[:-1]))
else:
basename = parts[-1]
if name == 'basename':
append(basename)
else:
i = basename.rfind('.')
if i == -1:
purebasename, ext = basename, ''
else:
purebasename, ext = basename[:i], basename[i:]
if name == 'purebasename':
append(purebasename)
elif name == 'ext':
append(ext)
else:
raise ValueError("invalid part specification %r" % name)
return res
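The dirname/basename/purebasename/ext decomposition implemented by `new()` and `_getbyspec` above corresponds closely to this stdlib sketch (the example path is hypothetical):

```python
import os.path

path = '/some/dir/archive.tar'  # hypothetical example path

# dirname/basename split, then last-dot split of the basename,
# mirroring the rfind('.') logic in _getbyspec above.
dirname, basename = os.path.split(path)
purebasename, ext = os.path.splitext(basename)
print(dirname, basename, purebasename, ext)
# /some/dir archive.tar archive .tar
```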
def dirpath(self, *args, **kwargs):
""" return the directory path joined with any given path arguments. """
if not kwargs:
path = object.__new__(self.__class__)
path.strpath = dirname(self.strpath)
if args:
path = path.join(*args)
return path
return super(LocalPath, self).dirpath(*args, **kwargs)
def join(self, *args, **kwargs):
""" return a new path by appending all 'args' as path
components. if abs=1 is used restart from root if any
of the args is an absolute path.
"""
sep = self.sep
strargs = [fspath(arg) for arg in args]
strpath = self.strpath
if kwargs.get('abs'):
newargs = []
for arg in reversed(strargs):
if isabs(arg):
strpath = arg
strargs = newargs
break
newargs.insert(0, arg)
# special case for when we have e.g. strpath == "/"
actual_sep = "" if strpath.endswith(sep) else sep
for arg in strargs:
arg = arg.strip(sep)
if iswin32:
# allow unix style paths even on windows.
arg = arg.strip('/')
arg = arg.replace('/', sep)
strpath = strpath + actual_sep + arg
actual_sep = sep
obj = object.__new__(self.__class__)
obj.strpath = normpath(strpath)
return obj
def open(self, mode='r', ensure=False, encoding=None):
""" return an opened file with the given mode.
If ensure is True, create parent directories if needed.
"""
if ensure:
self.dirpath().ensure(dir=1)
if encoding:
return py.error.checked_call(io.open, self.strpath, mode, encoding=encoding)
return py.error.checked_call(open, self.strpath, mode)
def _fastjoin(self, name):
child = object.__new__(self.__class__)
child.strpath = self.strpath + self.sep + name
return child
def islink(self):
return islink(self.strpath)
def check(self, **kw):
if not kw:
return exists(self.strpath)
if len(kw) == 1:
if "dir" in kw:
return not kw["dir"] ^ isdir(self.strpath)
if "file" in kw:
return not kw["file"] ^ isfile(self.strpath)
return super(LocalPath, self).check(**kw)
_patternchars = set("*?[" + os.path.sep)
def listdir(self, fil=None, sort=None):
""" list directory contents, possibly filter by the given fil func
and possibly sorted.
"""
if fil is None and sort is None:
names = py.error.checked_call(os.listdir, self.strpath)
return map_as_list(self._fastjoin, names)
if isinstance(fil, py.builtin._basestring):
if not self._patternchars.intersection(fil):
child = self._fastjoin(fil)
if exists(child.strpath):
return [child]
return []
fil = common.FNMatcher(fil)
names = py.error.checked_call(os.listdir, self.strpath)
res = []
for name in names:
child = self._fastjoin(name)
if fil is None or fil(child):
res.append(child)
self._sortlist(res, sort)
return res
def size(self):
""" return size of the underlying file object """
return self.stat().size
def mtime(self):
""" return last modification time of the path. """
return self.stat().mtime
def copy(self, target, mode=False, stat=False):
""" copy path to target.
            If mode is True, will copy permission from path to target.
If stat is True, copy permission, last modification
time, last access time, and flags from path to target.
"""
if self.check(file=1):
if target.check(dir=1):
target = target.join(self.basename)
assert self!=target
copychunked(self, target)
if mode:
copymode(self.strpath, target.strpath)
if stat:
copystat(self, target)
else:
def rec(p):
return p.check(link=0)
for x in self.visit(rec=rec):
relpath = x.relto(self)
newx = target.join(relpath)
newx.dirpath().ensure(dir=1)
if x.check(link=1):
newx.mksymlinkto(x.readlink())
continue
elif x.check(file=1):
copychunked(x, newx)
elif x.check(dir=1):
newx.ensure(dir=1)
if mode:
copymode(x.strpath, newx.strpath)
if stat:
copystat(x, newx)
def rename(self, target):
""" rename this path to target. """
target = fspath(target)
return py.error.checked_call(os.rename, self.strpath, target)
def dump(self, obj, bin=1):
""" pickle object into path location"""
f = self.open('wb')
import pickle
try:
py.error.checked_call(pickle.dump, obj, f, bin)
finally:
f.close()
def mkdir(self, *args):
""" create & return the directory joined with args. """
p = self.join(*args)
py.error.checked_call(os.mkdir, fspath(p))
return p
def write_binary(self, data, ensure=False):
""" write binary data into path. If ensure is True create
missing parent directories.
"""
if ensure:
self.dirpath().ensure(dir=1)
with self.open('wb') as f:
f.write(data)
def write_text(self, data, encoding, ensure=False):
""" write text data into path using the specified encoding.
If ensure is True create missing parent directories.
"""
if ensure:
self.dirpath().ensure(dir=1)
with self.open('w', encoding=encoding) as f:
f.write(data)
def write(self, data, mode='w', ensure=False):
""" write data into path. If ensure is True create
missing parent directories.
"""
if ensure:
self.dirpath().ensure(dir=1)
if 'b' in mode:
if not py.builtin._isbytes(data):
raise ValueError("can only process bytes")
else:
if not py.builtin._istext(data):
if not py.builtin._isbytes(data):
data = str(data)
else:
data = py.builtin._totext(data, sys.getdefaultencoding())
f = self.open(mode)
try:
f.write(data)
finally:
f.close()
def _ensuredirs(self):
parent = self.dirpath()
if parent == self:
return self
if parent.check(dir=0):
parent._ensuredirs()
if self.check(dir=0):
try:
self.mkdir()
except py.error.EEXIST:
# race condition: file/dir created by another thread/process.
# complain if it is not a dir
if self.check(dir=0):
raise
return self
def ensure(self, *args, **kwargs):
""" ensure that an args-joined path exists (by default as
a file). if you specify a keyword argument 'dir=True'
then the path is forced to be a directory path.
"""
p = self.join(*args)
if kwargs.get('dir', 0):
return p._ensuredirs()
else:
p.dirpath()._ensuredirs()
if not p.check(file=1):
p.open('w').close()
return p
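For the file case, `ensure` amounts to creating the parent directories and then touching the target if it does not exist yet. A stdlib sketch of that behavior under a temporary directory:

```python
import os
import tempfile

# Stdlib equivalent of LocalPath.ensure() for a file target:
# create parent dirs (~ _ensuredirs()), then touch the file.
root = tempfile.mkdtemp()
target = os.path.join(root, 'a', 'b', 'c.txt')
os.makedirs(os.path.dirname(target), exist_ok=True)
if not os.path.isfile(target):
    open(target, 'w').close()
print(os.path.isfile(target))  # True
```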
def stat(self, raising=True):
""" Return an os.stat() tuple. """
if raising == True:
return Stat(self, py.error.checked_call(os.stat, self.strpath))
try:
return Stat(self, os.stat(self.strpath))
except KeyboardInterrupt:
raise
except Exception:
return None
def lstat(self):
""" Return an os.lstat() tuple. """
return Stat(self, py.error.checked_call(os.lstat, self.strpath))
def setmtime(self, mtime=None):
""" set modification time for the given path. if 'mtime' is None
(the default) then the file's mtime is set to current time.
Note that the resolution for 'mtime' is platform dependent.
"""
if mtime is None:
return py.error.checked_call(os.utime, self.strpath, mtime)
try:
return py.error.checked_call(os.utime, self.strpath, (-1, mtime))
except py.error.EINVAL:
return py.error.checked_call(os.utime, self.strpath, (self.atime(), mtime))
def chdir(self):
""" change directory to self and return old current directory """
try:
old = self.__class__()
except py.error.ENOENT:
old = None
py.error.checked_call(os.chdir, self.strpath)
return old
@contextmanager
def as_cwd(self):
"""
Return a context manager, which changes to the path's dir during the
managed "with" context.
On __enter__ it returns the old dir, which might be ``None``.
"""
old = self.chdir()
try:
yield old
finally:
if old is not None:
old.chdir()
def realpath(self):
""" return a new path which contains no symbolic links."""
return self.__class__(os.path.realpath(self.strpath))
def atime(self):
""" return last access time of the path. """
return self.stat().atime
def __repr__(self):
return 'local(%r)' % self.strpath
def __str__(self):
""" return string representation of the Path. """
return self.strpath
def chmod(self, mode, rec=0):
""" change permissions to the given mode. If mode is an
integer it directly encodes the os-specific modes.
if rec is True perform recursively.
"""
if not isinstance(mode, int):
raise TypeError("mode %r must be an integer" % (mode,))
if rec:
for x in self.visit(rec=rec):
py.error.checked_call(os.chmod, str(x), mode)
py.error.checked_call(os.chmod, self.strpath, mode)
def pypkgpath(self):
""" return the Python package path by looking for the last
directory upwards which still contains an __init__.py.
Return None if a pkgpath can not be determined.
"""
pkgpath = None
for parent in self.parts(reverse=True):
if parent.isdir():
if not parent.join('__init__.py').exists():
break
if not isimportable(parent.basename):
break
pkgpath = parent
return pkgpath
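`pypkgpath` walks upwards until it reaches a directory without an `__init__.py`. A simplified stand-alone sketch of that walk (it omits the `isimportable` check on directory names):

```python
import os

def pypkgpath(path):
    # Walk upwards from `path`; the package root is the last
    # directory that still contains an __init__.py.
    path = os.path.abspath(path)
    current = path if os.path.isdir(path) else os.path.dirname(path)
    pkgpath = None
    while os.path.isfile(os.path.join(current, "__init__.py")):
        pkgpath = current
        parent = os.path.dirname(current)
        if parent == current:  # reached the filesystem root
            break
        current = parent
    return pkgpath
```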
def _ensuresyspath(self, ensuremode, path):
if ensuremode:
s = str(path)
if ensuremode == "append":
if s not in sys.path:
sys.path.append(s)
else:
if s != sys.path[0]:
sys.path.insert(0, s)
def pyimport(self, modname=None, ensuresyspath=True):
""" return path as an imported python module.
If modname is None, look for the containing package
and construct an according module name.
The module will be put/looked up in sys.modules.
if ensuresyspath is True then the root dir for importing
the file (taking __init__.py files into account) will
be prepended to sys.path if it isn't there already.
If ensuresyspath=="append" the root dir will be appended
if it isn't already contained in sys.path.
if ensuresyspath is False no modification of syspath happens.
Special value of ensuresyspath=="importlib" is intended
purely for using in pytest, it is capable only of importing
separate .py files outside packages, e.g. for test suite
without any __init__.py file. It effectively allows having
same-named test modules in different places and offers
mild opt-in via this option. Note that it works only in
recent versions of python.
"""
if not self.check():
raise py.error.ENOENT(self)
if ensuresyspath == 'importlib':
if modname is None:
modname = self.purebasename
if not ALLOW_IMPORTLIB_MODE:
raise ImportError(
"Can't use importlib due to old version of Python")
spec = importlib.util.spec_from_file_location(
modname, str(self))
if spec is None:
raise ImportError(
"Can't find module %s at location %s" %
(modname, str(self))
)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)
return mod
pkgpath = None
if modname is None:
pkgpath = self.pypkgpath()
if pkgpath is not None:
pkgroot = pkgpath.dirpath()
names = self.new(ext="").relto(pkgroot).split(self.sep)
if names[-1] == "__init__":
names.pop()
modname = ".".join(names)
else:
pkgroot = self.dirpath()
modname = self.purebasename
self._ensuresyspath(ensuresyspath, pkgroot)
__import__(modname)
mod = sys.modules[modname]
if self.basename == "__init__.py":
return mod # we don't check anything as we might
# be in a namespace package ... too icky to check
modfile = mod.__file__
if modfile[-4:] in ('.pyc', '.pyo'):
modfile = modfile[:-1]
elif modfile.endswith('$py.class'):
modfile = modfile[:-9] + '.py'
if modfile.endswith(os.path.sep + "__init__.py"):
if self.basename != "__init__.py":
modfile = modfile[:-12]
try:
issame = self.samefile(modfile)
except py.error.ENOENT:
issame = False
if not issame:
ignore = os.getenv('PY_IGNORE_IMPORTMISMATCH')
if ignore != '1':
raise self.ImportMismatchError(modname, modfile, self)
return mod
else:
try:
return sys.modules[modname]
except KeyError:
# we have a custom modname, do a pseudo-import
import types
mod = types.ModuleType(modname)
mod.__file__ = str(self)
sys.modules[modname] = mod
try:
py.builtin.execfile(str(self), mod.__dict__)
except:
del sys.modules[modname]
raise
return mod
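The `importlib` branch of `pyimport` can be sketched on its own with `importlib.util`; `import_from_path` here is an illustrative helper name, not part of the py API:

```python
import importlib.util

def import_from_path(modname, filepath):
    # Load a module from an explicit file location without
    # touching sys.path (mirrors the 'importlib' branch above).
    spec = importlib.util.spec_from_file_location(modname, filepath)
    if spec is None:
        raise ImportError("cannot load %s from %s" % (modname, filepath))
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod
```

Unlike the classic `__import__` path, this does not register the module in `sys.modules`, which is exactly what makes same-named test modules in different directories possible.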
def sysexec(self, *argv, **popen_opts):
""" return stdout text from executing a system child process,
where the 'self' path points to executable.
The process is directly invoked and not through a system shell.
"""
from subprocess import Popen, PIPE
argv = map_as_list(str, argv)
popen_opts['stdout'] = popen_opts['stderr'] = PIPE
proc = Popen([str(self)] + argv, **popen_opts)
stdout, stderr = proc.communicate()
ret = proc.wait()
if py.builtin._isbytes(stdout):
stdout = py.builtin._totext(stdout, sys.getdefaultencoding())
if ret != 0:
if py.builtin._isbytes(stderr):
stderr = py.builtin._totext(stderr, sys.getdefaultencoding())
raise py.process.cmdexec.Error(ret, ret, str(self),
stdout, stderr,)
return stdout
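`sysexec` is essentially a direct child-process invocation with captured output. A modern equivalent using `subprocess.run` (the helper name is illustrative):

```python
import sys
import subprocess

def sysexec(executable, *argv):
    # Run the executable directly (no shell), capture stdout as text,
    # and raise on a non-zero exit status.
    proc = subprocess.run([executable] + list(argv),
                          capture_output=True, text=True)
    if proc.returncode != 0:
        raise RuntimeError(proc.stderr)
    return proc.stdout
```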
def sysfind(cls, name, checker=None, paths=None):
""" return a path object found by looking at the systems
underlying PATH specification. If the checker is not None
it will be invoked to filter matching paths. If a binary
cannot be found, None is returned
Note: This is probably not working on plain win32 systems
but may work on cygwin.
"""
if isabs(name):
p = py.path.local(name)
if p.check(file=1):
return p
else:
if paths is None:
if iswin32:
paths = os.environ['Path'].split(';')
if '' not in paths and '.' not in paths:
paths.append('.')
try:
systemroot = os.environ['SYSTEMROOT']
except KeyError:
pass
else:
paths = [path.replace('%SystemRoot%', systemroot)
for path in paths]
else:
paths = os.environ['PATH'].split(':')
tryadd = []
if iswin32:
tryadd += os.environ['PATHEXT'].split(os.pathsep)
tryadd.append("")
for x in paths:
for addext in tryadd:
p = py.path.local(x).join(name, abs=True) + addext
try:
if p.check(file=1):
if checker:
if not checker(p):
continue
return p
except py.error.EACCES:
pass
return None
sysfind = classmethod(sysfind)
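The core of `sysfind` is a linear scan over candidate directories. A POSIX-flavoured sketch that skips the Windows `PATHEXT` handling, with the search directories supplied explicitly rather than read from the environment:

```python
import os

def sysfind(name, paths):
    # Scan the given directories for an existing file with `name`,
    # mimicking a minimal PATH lookup (no extension probing).
    for directory in paths:
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate):
            return candidate
    return None
```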
def _gethomedir(cls):
try:
x = os.environ['HOME']
except KeyError:
try:
x = os.environ["HOMEDRIVE"] + os.environ['HOMEPATH']
except KeyError:
return None
return cls(x)
_gethomedir = classmethod(_gethomedir)
# """
# special class constructors for local filesystem paths
# """
@classmethod
def get_temproot(cls):
""" return the system's temporary directory
(where tempfiles are usually created in)
"""
import tempfile
return py.path.local(tempfile.gettempdir())
@classmethod
def mkdtemp(cls, rootdir=None):
""" return a Path object pointing to a fresh new temporary directory
(which we created ourself).
"""
import tempfile
if rootdir is None:
rootdir = cls.get_temproot()
return cls(py.error.checked_call(tempfile.mkdtemp, dir=str(rootdir)))
def make_numbered_dir(cls, prefix='session-', rootdir=None, keep=3,
lock_timeout=172800): # two days
""" return unique directory with a number greater than the current
maximum one. The number is assumed to start directly after prefix.
if keep is true directories with a number less than (maxnum-keep)
will be removed. If .lock files are used (lock_timeout non-zero),
algorithm is multi-process safe.
"""
if rootdir is None:
rootdir = cls.get_temproot()
nprefix = prefix.lower()
def parse_num(path):
""" parse the number out of a path (if it matches the prefix) """
nbasename = path.basename.lower()
if nbasename.startswith(nprefix):
try:
return int(nbasename[len(nprefix):])
except ValueError:
pass
def create_lockfile(path):
""" exclusively create lockfile. Throws when failed """
mypid = os.getpid()
lockfile = path.join('.lock')
if hasattr(lockfile, 'mksymlinkto'):
lockfile.mksymlinkto(str(mypid))
else:
fd = py.error.checked_call(os.open, str(lockfile), os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644)
with os.fdopen(fd, 'w') as f:
f.write(str(mypid))
return lockfile
def atexit_remove_lockfile(lockfile):
""" ensure lockfile is removed at process exit """
mypid = os.getpid()
def try_remove_lockfile():
# in a fork() situation, only the last process should
# remove the .lock, otherwise the other processes run the
# risk of seeing their temporary dir disappear. For now
# we remove the .lock in the parent only (i.e. we assume
# that the children finish before the parent).
if os.getpid() != mypid:
return
try:
lockfile.remove()
except py.error.Error:
pass
atexit.register(try_remove_lockfile)
# compute the maximum number currently in use with the prefix
lastmax = None
while True:
maxnum = -1
for path in rootdir.listdir():
num = parse_num(path)
if num is not None:
maxnum = max(maxnum, num)
# make the new directory
try:
udir = rootdir.mkdir(prefix + str(maxnum+1))
if lock_timeout:
lockfile = create_lockfile(udir)
atexit_remove_lockfile(lockfile)
except (py.error.EEXIST, py.error.ENOENT, py.error.EBUSY):
# race condition (1): another thread/process created the dir
# in the meantime - try again
# race condition (2): another thread/process spuriously acquired
# lock treating empty directory as candidate
# for removal - try again
# race condition (3): another thread/process tried to create the lock at
# the same time (happened in Python 3.3 on Windows)
# https://ci.appveyor.com/project/pytestbot/py/build/1.0.21/job/ffi85j4c0lqwsfwa
if lastmax == maxnum:
raise
lastmax = maxnum
continue
break
def get_mtime(path):
""" read file modification time """
try:
return path.lstat().mtime
except py.error.Error:
pass
garbage_prefix = prefix + 'garbage-'
def is_garbage(path):
""" check if path denotes directory scheduled for removal """
bn = path.basename
return bn.startswith(garbage_prefix)
# prune old directories
udir_time = get_mtime(udir)
if keep and udir_time:
for path in rootdir.listdir():
num = parse_num(path)
if num is not None and num <= (maxnum - keep):
try:
# try acquiring lock to remove directory as exclusive user
if lock_timeout:
create_lockfile(path)
except (py.error.EEXIST, py.error.ENOENT, py.error.EBUSY):
path_time = get_mtime(path)
if not path_time:
# assume directory doesn't exist now
continue
if abs(udir_time - path_time) < lock_timeout:
# assume directory with lockfile exists
# and lock timeout hasn't expired yet
continue
# path dir locked for exclusive use
# and scheduled for removal to avoid another thread/process
# treating it as a new directory or removal candidate
garbage_path = rootdir.join(garbage_prefix + str(uuid.uuid4()))
try:
path.rename(garbage_path)
garbage_path.remove(rec=1)
except KeyboardInterrupt:
raise
except: # this might be py.error.Error, WindowsError ...
pass
if is_garbage(path):
try:
path.remove(rec=1)
except KeyboardInterrupt:
raise
except: # this might be py.error.Error, WindowsError ...
pass
# make link...
try:
username = os.environ['USER'] #linux, et al
except KeyError:
try:
username = os.environ['USERNAME'] #windows
except KeyError:
username = 'current'
src = str(udir)
dest = src[:src.rfind('-')] + '-' + username
try:
os.unlink(dest)
except OSError:
pass
try:
os.symlink(src, dest)
except (OSError, AttributeError, NotImplementedError):
pass
return udir
make_numbered_dir = classmethod(make_numbered_dir)
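Stripped of locking and pruning, `make_numbered_dir` reduces to: find the highest existing number for the prefix and create the next one. A single-process sketch of just that step:

```python
import os

def make_numbered_dir(rootdir, prefix="session-"):
    # Create `prefix<N+1>` where N is the highest number already
    # used with this prefix (no locking -- single-process sketch).
    maxnum = -1
    for entry in os.listdir(rootdir):
        if entry.startswith(prefix):
            try:
                maxnum = max(maxnum, int(entry[len(prefix):]))
            except ValueError:
                pass
    path = os.path.join(rootdir, prefix + str(maxnum + 1))
    os.mkdir(path)
    return path
```

The full implementation above additionally loops on `EEXIST` to survive concurrent creators; this sketch omits that.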
def copymode(src, dest):
""" copy permission from src to dst. """
import shutil
shutil.copymode(src, dest)
def copystat(src, dest):
""" copy permission, last modification time,
last access time, and flags from src to dst."""
import shutil
shutil.copystat(str(src), str(dest))
def copychunked(src, dest):
chunksize = 524288 # half a meg of bytes
fsrc = src.open('rb')
try:
fdest = dest.open('wb')
try:
while 1:
buf = fsrc.read(chunksize)
if not buf:
break
fdest.write(buf)
finally:
fdest.close()
finally:
fsrc.close()
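`copychunked` streams a file in fixed-size chunks so large files never need to fit in memory at once. The same loop written with context managers:

```python
def copychunked(src, dest, chunksize=524288):
    # Stream `src` into `dest` half a megabyte at a time.
    with open(src, "rb") as fsrc, open(dest, "wb") as fdest:
        while True:
            buf = fsrc.read(chunksize)
            if not buf:
                break
            fdest.write(buf)
```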
def isimportable(name):
if name and (name[0].isalpha() or name[0] == '_'):
name = name.replace("_", '')
return not name or name.isalnum()
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@py@py2@py@_path@local.py@.PATH_END.py
|
{
"filename": "DEMNodeLists.py",
"repo_name": "LLNL/spheral",
"repo_path": "spheral_extracted/spheral-main/src/NodeList/DEMNodeLists.py",
"type": "Python"
}
|
from SpheralCompiledPackages import *
from spheralDimensions import spheralDimensions
dims = spheralDimensions()
#-------------------------------------------------------------------------------
# The generic DEMNodeList definition.
#-------------------------------------------------------------------------------
def DEMNodeListFactory(ndim):
suffix = "{}d".format(ndim)
DEMNodeList = eval("DEMNodeList" + suffix)
TreeNeighbor = eval("TreeNeighbor" + suffix)
NestedGridNeighbor = eval("NestedGridNeighbor" + suffix)
Vector = eval("Vector" + suffix)
def factory(name,
numInternal = 0,
numGhost = 0,
hmin = 1.0e-20,
hmax = 1.0e20,
hminratio = 0.1,
nPerh = 1.01,
maxNumNeighbors = 500,
neighborSearchBuffer=0.1,
# Neighboring stuff
NeighborType = TreeNeighbor,
searchType = GatherScatter,
kernelExtent = 1.0,
# Parameters for TreeNeighbor
xmin = Vector.one * -10.0,
xmax = Vector.one * 10.0):
result = DEMNodeList(name, numInternal, numGhost,
hmin, hmax, hminratio,
nPerh, neighborSearchBuffer, maxNumNeighbors)
if NeighborType == TreeNeighbor:
result._neighbor = TreeNeighbor(result, searchType, kernelExtent, xmin, xmax)
elif NeighborType == NestedGridNeighbor:
print("DEMNodeList Deprecation Warning: NestedGridNeighbor is deprecated: suggest using TreeNeighbor.")
result._neighbor = NestedGridNeighbor(result, searchType,
kernelExtent = kernelExtent)
else:
raise ValueError("Unknown NeighborType")
result.registerNeighbor(result._neighbor)
return result
return factory
#-------------------------------------------------------------------------------
# Create the different DEMNodeLists.
#-------------------------------------------------------------------------------
for ndim in dims:
exec("makeDEMNodeList{ndim}d = DEMNodeListFactory({ndim})".format(ndim=ndim))
|
LLNLREPO_NAMEspheralPATH_START.@spheral_extracted@spheral-main@src@NodeList@DEMNodeLists.py@.PATH_END.py
|
{
"filename": "_tickformatstop.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/graph_objs/parcats/line/colorbar/_tickformatstop.py",
"type": "Python"
}
|
from plotly.basedatatypes import BaseTraceHierarchyType as _BaseTraceHierarchyType
import copy as _copy
class Tickformatstop(_BaseTraceHierarchyType):
# class properties
# --------------------
_parent_path_str = "parcats.line.colorbar"
_path_str = "parcats.line.colorbar.tickformatstop"
_valid_props = {"dtickrange", "enabled", "name", "templateitemname", "value"}
# dtickrange
# ----------
@property
def dtickrange(self):
"""
range [*min*, *max*], where "min", "max" - dtick values which
describe some zoom level, it is possible to omit "min" or "max"
value by passing "null"
The 'dtickrange' property is an info array that may be specified as:
* a list or tuple of 2 elements where:
(0) The 'dtickrange[0]' property accepts values of any type
(1) The 'dtickrange[1]' property accepts values of any type
Returns
-------
list
"""
return self["dtickrange"]
@dtickrange.setter
def dtickrange(self, val):
self["dtickrange"] = val
# enabled
# -------
@property
def enabled(self):
"""
Determines whether or not this stop is used. If `false`, this
stop is ignored even within its `dtickrange`.
The 'enabled' property must be specified as a bool
(either True, or False)
Returns
-------
bool
"""
return self["enabled"]
@enabled.setter
def enabled(self, val):
self["enabled"] = val
# name
# ----
@property
def name(self):
"""
When used in a template, named items are created in the output
figure in addition to any items the figure already has in this
array. You can modify these items in the output figure by
making your own item with `templateitemname` matching this
`name` alongside your modifications (including `visible: false`
or `enabled: false` to hide it). Has no effect outside of a
template.
The 'name' property is a string and must be specified as:
- A string
- A number that will be converted to a string
Returns
-------
str
"""
return self["name"]
@name.setter
def name(self, val):
self["name"] = val
# templateitemname
# ----------------
@property
def templateitemname(self):
"""
Used to refer to a named item in this array in the template.
Named items from the template will be created even without a
matching item in the input figure, but you can modify one by
making an item with `templateitemname` matching its `name`,
alongside your modifications (including `visible: false` or
`enabled: false` to hide it). If there is no template or no
matching item, this item will be hidden unless you explicitly
show it with `visible: true`.
The 'templateitemname' property is a string and must be specified as:
- A string
- A number that will be converted to a string
Returns
-------
str
"""
return self["templateitemname"]
@templateitemname.setter
def templateitemname(self, val):
self["templateitemname"] = val
# value
# -----
@property
def value(self):
"""
string - dtickformat for described zoom level, the same as
"tickformat"
The 'value' property is a string and must be specified as:
- A string
- A number that will be converted to a string
Returns
-------
str
"""
return self["value"]
@value.setter
def value(self, val):
self["value"] = val
# Self properties description
# ---------------------------
@property
def _prop_descriptions(self):
return """\
dtickrange
range [*min*, *max*], where "min", "max" - dtick values
which describe some zoom level, it is possible to omit
"min" or "max" value by passing "null"
enabled
Determines whether or not this stop is used. If
`false`, this stop is ignored even within its
`dtickrange`.
name
When used in a template, named items are created in the
output figure in addition to any items the figure
already has in this array. You can modify these items
in the output figure by making your own item with
`templateitemname` matching this `name` alongside your
modifications (including `visible: false` or `enabled:
false` to hide it). Has no effect outside of a
template.
templateitemname
Used to refer to a named item in this array in the
template. Named items from the template will be created
even without a matching item in the input figure, but
you can modify one by making an item with
`templateitemname` matching its `name`, alongside your
modifications (including `visible: false` or `enabled:
false` to hide it). If there is no template or no
matching item, this item will be hidden unless you
explicitly show it with `visible: true`.
value
string - dtickformat for described zoom level, the same
as "tickformat"
"""
def __init__(
self,
arg=None,
dtickrange=None,
enabled=None,
name=None,
templateitemname=None,
value=None,
**kwargs,
):
"""
Construct a new Tickformatstop object
Parameters
----------
arg
dict of properties compatible with this constructor or
an instance of :class:`plotly.graph_objs.parcats.line.c
olorbar.Tickformatstop`
dtickrange
range [*min*, *max*], where "min", "max" - dtick values
which describe some zoom level, it is possible to omit
"min" or "max" value by passing "null"
enabled
Determines whether or not this stop is used. If
`false`, this stop is ignored even within its
`dtickrange`.
name
When used in a template, named items are created in the
output figure in addition to any items the figure
already has in this array. You can modify these items
in the output figure by making your own item with
`templateitemname` matching this `name` alongside your
modifications (including `visible: false` or `enabled:
false` to hide it). Has no effect outside of a
template.
templateitemname
Used to refer to a named item in this array in the
template. Named items from the template will be created
even without a matching item in the input figure, but
you can modify one by making an item with
`templateitemname` matching its `name`, alongside your
modifications (including `visible: false` or `enabled:
false` to hide it). If there is no template or no
matching item, this item will be hidden unless you
explicitly show it with `visible: true`.
value
string - dtickformat for described zoom level, the same
as "tickformat"
Returns
-------
Tickformatstop
"""
super(Tickformatstop, self).__init__("tickformatstops")
if "_parent" in kwargs:
self._parent = kwargs["_parent"]
return
# Validate arg
# ------------
if arg is None:
arg = {}
elif isinstance(arg, self.__class__):
arg = arg.to_plotly_json()
elif isinstance(arg, dict):
arg = _copy.copy(arg)
else:
raise ValueError(
"""\
The first argument to the plotly.graph_objs.parcats.line.colorbar.Tickformatstop
constructor must be a dict or
an instance of :class:`plotly.graph_objs.parcats.line.colorbar.Tickformatstop`"""
)
# Handle skip_invalid
# -------------------
self._skip_invalid = kwargs.pop("skip_invalid", False)
self._validate = kwargs.pop("_validate", True)
# Populate data dict with properties
# ----------------------------------
_v = arg.pop("dtickrange", None)
_v = dtickrange if dtickrange is not None else _v
if _v is not None:
self["dtickrange"] = _v
_v = arg.pop("enabled", None)
_v = enabled if enabled is not None else _v
if _v is not None:
self["enabled"] = _v
_v = arg.pop("name", None)
_v = name if name is not None else _v
if _v is not None:
self["name"] = _v
_v = arg.pop("templateitemname", None)
_v = templateitemname if templateitemname is not None else _v
if _v is not None:
self["templateitemname"] = _v
_v = arg.pop("value", None)
_v = value if value is not None else _v
if _v is not None:
self["value"] = _v
# Process unknown kwargs
# ----------------------
self._process_kwargs(**dict(arg, **kwargs))
# Reset skip_invalid
# ------------------
self._skip_invalid = False
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@graph_objs@parcats@line@colorbar@_tickformatstop.py@.PATH_END.py
|
{
"filename": "py39.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/setuptools/py3/setuptools/compat/py39.py",
"type": "Python"
}
|
import sys
# Explicitly use the ``"locale"`` encoding in versions that support it,
# otherwise just rely on the implicit handling of ``encoding=None``.
# Since all platforms that support ``EncodingWarning`` also support
# ``encoding="locale"``, this can be used to suppress the warning.
# However, please try to use UTF-8 when possible
# (.pth files are the notorious exception: python/cpython#77102, pypa/setuptools#3937).
LOCALE_ENCODING = "locale" if sys.version_info >= (3, 10) else None
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@setuptools@py3@setuptools@compat@py39.py@.PATH_END.py
|
{
"filename": "_xcalendar.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/heatmap/_xcalendar.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class XcalendarValidator(_plotly_utils.basevalidators.EnumeratedValidator):
def __init__(self, plotly_name="xcalendar", parent_name="heatmap", **kwargs):
super(XcalendarValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "calc"),
values=kwargs.pop(
"values",
[
"chinese",
"coptic",
"discworld",
"ethiopian",
"gregorian",
"hebrew",
"islamic",
"jalali",
"julian",
"mayan",
"nanakshahi",
"nepali",
"persian",
"taiwan",
"thai",
"ummalqura",
],
),
**kwargs,
)
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@heatmap@_xcalendar.py@.PATH_END.py
|
{
"filename": "copy_injection_recovery.py",
"repo_name": "ThibeauWouters/TurboPE-BNS",
"repo_path": "TurboPE-BNS_extracted/TurboPE-BNS-main/injections/outdir_NRTv2/injection_79/copy_injection_recovery.py",
"type": "Python"
}
|
"""
Idea: try different learning rate schemes to try and fix the injections
"""
import psutil
p = psutil.Process()
p.cpu_affinity([0])
import os
os.environ['CUDA_VISIBLE_DEVICES'] = "3"
os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = "0.10"
import numpy as np
import argparse
# Regular imports
import argparse
import copy
import numpy as np
from astropy.time import Time
import time
import shutil
import json
import jax
jax.config.update("jax_enable_x64", True)
import jax.numpy as jnp
from jimgw.jim import Jim
from jimgw.single_event.detector import H1, L1, V1
from jimgw.single_event.likelihood import HeterodynedTransientLikelihoodFD, TransientLikelihoodFD
from jimgw.single_event.waveform import RippleTaylorF2, RippleIMRPhenomD_NRTidalv2, RippleIMRPhenomD_NRTidalv2_no_taper
from jimgw.prior import Uniform, Composite
import utils # our plotting and postprocessing utilities script
import optax
# Names of the parameters and their ranges for sampling parameters for the injection
NAMING = ['M_c', 'q', 's1_z', 's2_z', 'lambda_1', 'lambda_2', 'd_L', 't_c', 'phase_c', 'cos_iota', 'psi', 'ra', 'sin_dec']
PRIOR = {
"M_c": [0.8759659737275101, 2.6060030916165484],
"q": [0.5, 1.0],
"s1_z": [-0.05, 0.05],
"s2_z": [-0.05, 0.05],
"lambda_1": [0.0, 5000.0],
"lambda_2": [0.0, 5000.0],
"d_L": [30.0, 300.0],
"t_c": [-0.1, 0.1],
"phase_c": [0.0, 2 * jnp.pi],
"cos_iota": [-1.0, 1.0],
"psi": [0.0, jnp.pi],
"ra": [0.0, 2 * jnp.pi],
"sin_dec": [-1, 1]
}
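Given the `PRIOR` ranges above, drawing one injection parameter set is a uniform draw per entry. A sketch using a subset of those ranges:

```python
import random

# Subset of the PRIOR dict above, for illustration.
PRIOR = {
    "M_c": [0.8759659737275101, 2.6060030916165484],
    "q": [0.5, 1.0],
    "d_L": [30.0, 300.0],
}

def sample_prior(prior, rng=random):
    # Draw one value uniformly within each parameter's [low, high] range.
    return {name: rng.uniform(low, high)
            for name, (low, high) in prior.items()}
```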
################
### ARGPARSE ###
################
# TODO save these into a new file
def get_parser(**kwargs):
add_help = kwargs.get("add_help", True)
parser = argparse.ArgumentParser(
description="Perform an injection recovery.",
add_help=add_help,
)
    # TODO: unused; GPU selection is done via os.environ above
# parser.add_argument(
# "--GPU-device",
# type=int,
# default=0,
# help="Select GPU index to use.",
# )
# parser.add_argument(
# "--GPU-memory-fraction",
# type=float,
# default=0.5,
# help="Select percentage of GPU memory to use.",
# )
parser.add_argument(
"--outdir",
type=str,
default="./outdir/",
help="Output directory for the injection.",
)
parser.add_argument(
"--load-existing-config",
type=bool,
default=False,
help="Whether to load and redo an existing injection (True) or to generate a new set of parameters (False).",
)
parser.add_argument(
"--N",
type=str,
default="",
help="Number (or generically, a custom identifier) of this injection, used to locate the output directory. If an empty string is passed (default), we generate a new injection.",
)
parser.add_argument(
"--SNR-threshold",
type=float,
default=12,
help="Skip injections with SNR below this threshold.",
)
parser.add_argument(
"--waveform-approximant",
type=str,
default="TaylorF2",
help="Which waveform approximant to use. Recommended to use TaylorF2 for now, NRTidalv2 might still be a bit unstable.",
)
parser.add_argument(
"--relative-binning-binsize",
type=int,
default=100,
help="Number of bins for the relative binning.",
)
parser.add_argument(
"--relative-binning-ref-params-equal-true-params",
type=bool,
default=True,
help="Whether to set the reference parameters in the relative binning code to injection parameters.",
)
parser.add_argument(
"--save-training-chains",
type=bool,
default=False,
help="Whether to save training chains or not (can be very large!)",
)
parser.add_argument(
"--eps-mass-matrix",
type=float,
default=1e-6,
help="Overall scale factor to rescale the step size of the local sampler.",
)
parser.add_argument(
"--which-local-sampler",
type=str,
default="MALA",
help="Which local sampler to use.",
)
parser.add_argument(
"--smart-initial-guess",
type=bool,
default=False,
help="Distribute the walkers around the injected parameters. TODO change this to reference parameters found by the relative binning code.",
)
parser.add_argument(
"--use-scheduler",
type=bool,
default=True,
help="Use a learning rate scheduler instead of a fixed learning rate.",
)
parser.add_argument(
"--stopping-criterion-global-acc",
type=float,
default=1.0,
help="Stop the run once we reach this global acceptance rate.",
)
parser.add_argument(
"--save-likelihood",
type=bool,
default=False,
help="Whether to save the likelihood object",
)
parser.add_argument(
"--tight-Mc-prior",
type=bool,
default=False,
help="Whether to use a tight prior on the Mc values or not",
)
# # TODO this has to be implemented
# parser.add_argument(
# "--autotune_local_sampler",
# type=bool,
# default=False,
# help="TODO Still has to be implemented! Specify whether to use autotuning for the local sampler.",
# )
return parser
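The parser above can be exercised without running the script; a trimmed-down version with just a few of the options (defaults copied from above) shows the `parse_args` behaviour, including the dash-to-underscore attribute mapping:

```python
import argparse

def get_parser():
    # Reduced version of the injection-recovery parser above.
    parser = argparse.ArgumentParser(
        description="Perform an injection recovery.")
    parser.add_argument("--outdir", type=str, default="./outdir/")
    parser.add_argument("--SNR-threshold", type=float, default=12)
    parser.add_argument("--N", type=str, default="")
    return parser
```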
####################
### Script setup ###
####################
def body(args):
"""
Run an injection and recovery. To get an explanation of the hyperparameters, go to:
- jim hyperparameters: https://github.com/ThibeauWouters/jim/blob/8cb4ef09fefe9b353bfb89273a4bc0ee52060d72/src/jimgw/jim.py#L26
- flowMC hyperparameters: https://github.com/ThibeauWouters/flowMC/blob/ad1a32dcb6984b2e178d7204a53d5da54b578073/src/flowMC/sampler/Sampler.py#L40
"""
start_time = time.time()
# TODO move and get these as arguments
# Deal with the hyperparameters
naming = NAMING
HYPERPARAMETERS = {
"flowmc":
{
"n_loop_training": 400,
"n_loop_production": 50,
"n_local_steps": 5,
"n_global_steps": 400,
"n_epochs": 50,
"n_chains": 1000,
"learning_rate": 0.001, # using a scheduler below
"max_samples": 50000,
"momentum": 0.9,
"batch_size": 50000,
"use_global": True,
"logging": True,
"keep_quantile": 0.0,
"local_autotune": None,
"train_thinning": 10,
"output_thinning": 30,
"n_sample_max": 10000,
"precompile": False,
"verbose": False,
"outdir": args.outdir,
"stopping_criterion_global_acc": args.stopping_criterion_global_acc,
"which_local_sampler": "MALA"
},
"jim":
{
"seed": 0,
"n_chains": 1000,
"num_layers": 10,
"hidden_size": [128, 128],
"num_bins": 8,
}
}
flowmc_hyperparameters = HYPERPARAMETERS["flowmc"]
jim_hyperparameters = HYPERPARAMETERS["jim"]
hyperparameters = {**flowmc_hyperparameters, **jim_hyperparameters}
# TODO can I just replace this with update dict?
for key, value in args.__dict__.items():
if key in hyperparameters:
hyperparameters[key] = value
### POLYNOMIAL SCHEDULER
if args.use_scheduler:
print("Using polynomial learning rate scheduler")
total_epochs = hyperparameters["n_epochs"] * hyperparameters["n_loop_training"]
start = int(total_epochs / 10)
start_lr = 1e-3
end_lr = 1e-5
power = 4.0
schedule_fn = optax.polynomial_schedule(start_lr, end_lr, power, total_epochs-start, transition_begin=start)
hyperparameters["learning_rate"] = schedule_fn
print(f"Saving output to {args.outdir}")
# Fetch waveform used
supported_waveforms = ["TaylorF2", "NRTidalv2", "IMRPhenomD_NRTidalv2"]
if args.waveform_approximant not in supported_waveforms:
print(f"Waveform approximant {args.waveform_approximant} not supported. Supported waveforms are {supported_waveforms}. Changing to TaylorF2.")
args.waveform_approximant = "TaylorF2"
if args.waveform_approximant == "TaylorF2":
ripple_waveform_fn = RippleTaylorF2
elif args.waveform_approximant in ["IMRPhenomD_NRTidalv2", "NRTv2", "NRTidalv2"]:
ripple_waveform_fn = RippleIMRPhenomD_NRTidalv2
else:
raise ValueError(f"Waveform approximant {args.waveform_approximant} not supported.")
# Before main code, check if outdir is correct dir format TODO improve with sys?
if args.outdir[-1] != "/":
args.outdir += "/"
outdir = f"{args.outdir}injection_{args.N}/"
# Get the prior bounds, both as 1D and 2D arrays
prior_ranges = jnp.array([PRIOR[name] for name in naming])
prior_low, prior_high = prior_ranges[:, 0], prior_ranges[:, 1]
bounds = np.array(list(PRIOR.values()))
# Now go over to creating parameters, and potentially check SNR cutoff
network_snr = 0.0
print(f"The SNR threshold parameter is set to {args.SNR_threshold}")
while network_snr < args.SNR_threshold:
# Generate the parameters or load them from an existing file
if args.load_existing_config:
config_path = f"{outdir}config.json"
print(f"Loading existing config, path: {config_path}")
config = json.load(open(config_path))
else:
print(f"Generating new config")
config = utils.generate_config(prior_low, prior_high, naming, args.N, args.outdir)
key = jax.random.PRNGKey(config["seed"])
# Save the given script hyperparams
with open(f"{outdir}script_args.json", 'w') as json_file:
json.dump(args.__dict__, json_file)
# Start injections
print("Injecting signals . . .")
waveform = ripple_waveform_fn(f_ref=config["fref"])
# Create frequency grid
freqs = jnp.arange(
config["fmin"],
config["f_sampling"] / 2, # maximum frequency being halved of sampling frequency
1. / config["duration"]
)
# convert injected mass ratio to eta, and apply arccos and arcsin
q = config["q"]
eta = q / (1 + q) ** 2
iota = float(jnp.arccos(config["cos_iota"]))
dec = float(jnp.arcsin(config["sin_dec"]))
# Setup the timing setting for the injection
epoch = config["duration"] - config["post_trigger_duration"]
gmst = Time(config["trigger_time"], format='gps').sidereal_time('apparent', 'greenwich').rad
# Array of injection parameters
true_param = {
'M_c': config["M_c"], # chirp mass
'eta': eta, # symmetric mass ratio 0 < eta <= 0.25
        's1_z': config["s1_z"],  # aligned spin of primary component s1_z.
        's2_z': config["s2_z"],  # aligned spin of secondary component s2_z.
        'lambda_1': config["lambda_1"],  # tidal deformability of primary component lambda_1.
'lambda_2': config["lambda_2"], # tidal deformability of secondary component lambda_2.
'd_L': config["d_L"], # luminosity distance
't_c': config["t_c"], # timeshift w.r.t. trigger time
'phase_c': config["phase_c"], # merging phase
'iota': iota, # inclination angle
'psi': config["psi"], # polarization angle
'ra': config["ra"], # right ascension
'dec': dec # declination
}
# Get the true parameter values for the plots
truths = copy.deepcopy(true_param)
truths["eta"] = q
truths = np.fromiter(truths.values(), dtype=float)
detector_param = {
'ra': config["ra"],
'dec': dec,
'gmst': gmst,
'psi': config["psi"],
'epoch': epoch,
't_c': config["t_c"],
}
print(f"The injected parameters are {true_param}")
# Generating the geocenter waveform
h_sky = waveform(freqs, true_param)
# Setup interferometers
ifos = [H1, L1, V1]
psd_files = ["./psds/psd.txt", "./psds/psd.txt", "./psds/psd_virgo.txt"]
# inject signal into ifos
for idx, ifo in enumerate(ifos):
key, subkey = jax.random.split(key)
ifo.inject_signal(
subkey,
freqs,
h_sky,
detector_param,
psd_file=psd_files[idx] # note: the function load_psd actually loads the ASD
)
print("Signal injected")
# Compute the SNR
h1_snr = utils.compute_snr(H1, h_sky, detector_param)
l1_snr = utils.compute_snr(L1, h_sky, detector_param)
v1_snr = utils.compute_snr(V1, h_sky, detector_param)
network_snr = np.sqrt(h1_snr**2 + l1_snr**2 + v1_snr**2)
# If the SNR is too low, we need to generate new parameters
if network_snr < args.SNR_threshold:
print(f"Network SNR is less than {args.SNR_threshold}, generating new parameters")
if args.load_existing_config:
raise ValueError("SNR is less than threshold, but loading existing config. This should not happen!")
print("H1 SNR:", h1_snr)
print("L1 SNR:", l1_snr)
print("V1 SNR:", v1_snr)
print("Network SNR:", network_snr)
print(f"Saving network SNR")
with open(outdir + 'network_snr.txt', 'w') as file:
file.write(str(network_snr))
print("Start prior setup")
# Priors without transformation
if args.tight_Mc_prior:
print("INFO: Using a tight chirp mass prior")
true_mc = true_param["M_c"]
Mc_prior = Uniform(true_mc - 0.1, true_mc + 0.1, naming=['M_c'])
else:
Mc_prior = Uniform(prior_low[0], prior_high[0], naming=['M_c'])
q_prior = Uniform(prior_low[1], prior_high[1], naming=['q'],
transforms={
'q': (
'eta',
lambda params: params['q'] / (1 + params['q']) ** 2
)
}
)
s1z_prior = Uniform(prior_low[2], prior_high[2], naming=['s1_z'])
s2z_prior = Uniform(prior_low[3], prior_high[3], naming=['s2_z'])
lambda_1_prior = Uniform(prior_low[4], prior_high[4], naming=['lambda_1'])
lambda_2_prior = Uniform(prior_low[5], prior_high[5], naming=['lambda_2'])
dL_prior = Uniform(prior_low[6], prior_high[6], naming=['d_L'])
tc_prior = Uniform(prior_low[7], prior_high[7], naming=['t_c'])
phic_prior = Uniform(prior_low[8], prior_high[8], naming=['phase_c'])
cos_iota_prior = Uniform(prior_low[9], prior_high[9], naming=["cos_iota"],
transforms={
"cos_iota": (
"iota",
lambda params: jnp.arccos(
jnp.arcsin(jnp.sin(params["cos_iota"] / 2 * jnp.pi)) * 2 / jnp.pi
),
)
},
)
psi_prior = Uniform(prior_low[10], prior_high[10], naming=["psi"])
ra_prior = Uniform(prior_low[11], prior_high[11], naming=["ra"])
sin_dec_prior = Uniform(prior_low[12], prior_high[12], naming=["sin_dec"],
transforms={
"sin_dec": (
"dec",
lambda params: jnp.arcsin(
jnp.arcsin(jnp.sin(params["sin_dec"] / 2 * jnp.pi)) * 2 / jnp.pi
),
)
},
)
# Save the prior bounds
print("Saving prior bounds")
utils.save_prior_bounds(prior_low, prior_high, outdir)
# Compose the prior
prior_list = [
Mc_prior,
q_prior,
s1z_prior,
s2z_prior,
lambda_1_prior,
lambda_2_prior,
dL_prior,
tc_prior,
phic_prior,
cos_iota_prior,
psi_prior,
ra_prior,
sin_dec_prior,
]
complete_prior = Composite(prior_list)
bounds = jnp.array([[p.xmin, p.xmax] for p in complete_prior.priors])
print("Finished prior setup")
print("Initializing likelihood")
if args.relative_binning_ref_params_equal_true_params:
ref_params = true_param
print("Using the true parameters as reference parameters for the relative binning")
else:
ref_params = None
print("Will search for reference waveform for relative binning")
# ### TODO remove
# # Explicitly fix relative binning for NRTidalv2
# if args.waveform_approximant in ["IMRPhenomD_NRTidalv2", "NRTidalv2"]:
# # ## TODO this might be broken?
# # # # Explicitly set the f_min and f_max used there
# # # relbin_kwargs = {"f_min": config["fmin"], "f_max": config["f_sampling"] / 2}
# # relbin_kwargs = {}
# # # Set the reference parameters at the ideal location for not breaking relative binning
# # print("Setting the reference parameters to not break the relative binning for NRTidalv2")
# # ref_params = true_param
# # ref_params["lambda_1"] = 1.0
# # ref_params["lambda_2"] = 1.0
# print("Now, the reference parameters are: ")
# print(ref_params)
# else:
# relbin_kwargs = {}
relbin_kwargs = {}
if args.waveform_approximant == "IMRPhenomD_NRTidalv2":
print("Using IMRPhenomD_NRTidalv2 no taper as the reference waveform for the likelihood")
reference_waveform = RippleIMRPhenomD_NRTidalv2_no_taper(f_ref=config["fref"])
else:
reference_waveform = waveform
likelihood = HeterodynedTransientLikelihoodFD(
ifos,
prior=complete_prior,
bounds=bounds,
n_bins = args.relative_binning_binsize,
waveform=waveform,
reference_waveform=reference_waveform,
trigger_time=config["trigger_time"],
duration=config["duration"],
post_trigger_duration=config["post_trigger_duration"],
ref_params=ref_params,
**relbin_kwargs
)
if args.save_likelihood:
print(f"INFO: Saving the likelihood to {outdir}")
import pickle
with open(f'{outdir}likelihood.pickle', 'wb') as handle:
pickle.dump(likelihood, handle, protocol=pickle.HIGHEST_PROTOCOL)
# Save the ref params
utils.save_relative_binning_ref_params(likelihood, outdir)
# Generate arguments for the local sampler
mass_matrix = jnp.eye(len(prior_list))
for idx, prior in enumerate(prior_list):
mass_matrix = mass_matrix.at[idx, idx].set(prior.xmax - prior.xmin) # fetch the prior range
local_sampler_arg = {'step_size': mass_matrix * args.eps_mass_matrix} # set the overall step size
hyperparameters["local_sampler_arg"] = local_sampler_arg
# Create jim object
jim = Jim(
likelihood,
complete_prior,
**hyperparameters
)
if args.smart_initial_guess:
n_chains = hyperparameters["n_chains"]
n_dim = len(prior_list)
initial_guess = utils.generate_smart_initial_guess(gmst, [H1, L1, V1], true_param, n_chains, n_dim, prior_low, prior_high)
# Plot it
utils.plot_chains(initial_guess, "initial_guess", outdir, truths = truths)
else:
initial_guess = jnp.array([])
### Finally, do the sampling
jim.sample(jax.random.PRNGKey(24), initial_guess = initial_guess)
# === Show results, save output ===
# Print a summary to screen:
jim.print_summary()
# Save and plot the results of the run
# - training phase
name = outdir + f'results_training.npz'
print(f"Saving samples to {name}")
state = jim.Sampler.get_sampler_state(training = True)
chains, log_prob, local_accs, global_accs, loss_vals = state["chains"], state["log_prob"], state["local_accs"], state["global_accs"], state["loss_vals"]
local_accs = jnp.mean(local_accs, axis=0)
global_accs = jnp.mean(global_accs, axis=0)
if args.save_training_chains:
np.savez(name, log_prob=log_prob, local_accs=local_accs, global_accs=global_accs, loss_vals=loss_vals, chains=chains)
else:
np.savez(name, log_prob=log_prob, local_accs=local_accs, global_accs=global_accs, loss_vals=loss_vals)
utils.plot_accs(local_accs, "Local accs (training)", "local_accs_training", outdir)
utils.plot_accs(global_accs, "Global accs (training)", "global_accs_training", outdir)
utils.plot_loss_vals(loss_vals, "Loss", "loss_vals", outdir)
utils.plot_log_prob(log_prob, "Log probability (training)", "log_prob_training", outdir)
# - production phase
name = outdir + f'results_production.npz'
state = jim.Sampler.get_sampler_state(training = False)
chains, log_prob, local_accs, global_accs = state["chains"], state["log_prob"], state["local_accs"], state["global_accs"]
local_accs = jnp.mean(local_accs, axis=0)
global_accs = jnp.mean(global_accs, axis=0)
np.savez(name, chains=chains, log_prob=log_prob, local_accs=local_accs, global_accs=global_accs)
utils.plot_accs(local_accs, "Local accs (production)", "local_accs_production", outdir)
utils.plot_accs(global_accs, "Global accs (production)", "global_accs_production", outdir)
utils.plot_log_prob(log_prob, "Log probability (production)", "log_prob_production", outdir)
# Plot the chains as corner plots
utils.plot_chains(chains, "chains_production", outdir, truths = truths)
# Save the NF and show a plot of samples from the flow
print("Saving the NF")
jim.Sampler.save_flow(outdir + "nf_model")
name = outdir + 'results_NF.npz'
chains = jim.Sampler.sample_flow(10_000)
np.savez(name, chains = chains)
# Finally, copy over this script to the outdir for reproducibility
shutil.copy2(__file__, outdir + "copy_injection_recovery.py")
print("Saving the jim hyperparameters")
jim.save_hyperparameters(outdir = outdir)
end_time = time.time()
runtime = end_time - start_time
print(f"Time taken: {runtime} seconds ({(runtime)/60} minutes)")
print(f"Saving runtime")
with open(outdir + 'runtime.txt', 'w') as file:
file.write(str(runtime))
print("Finished injection recovery successfully!")
############
### MAIN ###
############
def main(given_args = None):
parser = get_parser()
args = parser.parse_args()
print(given_args)
# Update with given args
if given_args is not None:
args.__dict__.update(given_args)
if args.load_existing_config and args.N == "":
raise ValueError("If load_existing_config is True, you need to specify the N argument to locate the existing injection. ")
print("------------------------------------")
print("Arguments script:")
for key, value in args.__dict__.items():
print(f"{key}: {value}")
print("------------------------------------")
print("Starting main code")
# If no N is given, fetch N from the structure of outdir
if len(args.N) == 0:
N = utils.get_N(args.outdir)
args.N = N
# TODO fix that os uses these
# import os
# os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = str(args.GPU_memory_fraction)
# os.environ['CUDA_VISIBLE_DEVICES'] = str(args.GPU_device)
# print(f"Running on GPU {args.GPU_device}")
# Execute the script
body(args)
if __name__ == "__main__":
main()
|
ThibeauWoutersREPO_NAMETurboPE-BNSPATH_START.@TurboPE-BNS_extracted@TurboPE-BNS-main@injections@outdir_NRTv2@injection_79@copy_injection_recovery.py@.PATH_END.py
|
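The prior transforms in the injection script above map sampled coordinates (q, cos iota) into physical parameters (eta, iota). Both maps can be checked in a standalone NumPy sketch (illustrative values only, independent of the repository code):

```python
import numpy as np

def q_to_eta(q):
    """Symmetric mass ratio eta = q / (1 + q)**2, as in the q prior transform."""
    return q / (1.0 + q) ** 2

# eta peaks at 0.25 for equal masses (q = 1) and decreases as q -> 0
assert abs(q_to_eta(1.0) - 0.25) < 1e-12
assert abs(q_to_eta(0.5) - 2.0 / 9.0) < 1e-12

# the cos_iota transform arccos(arcsin(sin(x * pi / 2)) * 2 / pi) reduces to a
# plain arccos(x) on [-1, 1]; the inner sin/arcsin round trip only matters for
# out-of-range proposals, which it reflects back into [-1, 1]
x = np.linspace(-1.0, 1.0, 101)
wrapped = np.arccos(np.arcsin(np.sin(x / 2 * np.pi)) * 2 / np.pi)
assert np.allclose(wrapped, np.arccos(x))
print("transforms verified")
```

The wrapping keeps the transform defined for any real input a sampler might propose, which is why the prior applies it before `arccos`/`arcsin`.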
{
"filename": "_thickness.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/volume/colorbar/_thickness.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class ThicknessValidator(_plotly_utils.basevalidators.NumberValidator):
def __init__(
self, plotly_name="thickness", parent_name="volume.colorbar", **kwargs
):
super(ThicknessValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "calc"),
min=kwargs.pop("min", 0),
**kwargs,
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@volume@colorbar@_thickness.py@.PATH_END.py
|
{
"filename": "_fill.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/isosurface/caps/z/_fill.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class FillValidator(_plotly_utils.basevalidators.NumberValidator):
def __init__(self, plotly_name="fill", parent_name="isosurface.caps.z", **kwargs):
super(FillValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "calc"),
max=kwargs.pop("max", 1),
min=kwargs.pop("min", 0),
**kwargs,
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@isosurface@caps@z@_fill.py@.PATH_END.py
|
{
"filename": "__init__.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/contourcarpet/colorbar/__init__.py",
"type": "Python"
}
|
import sys
from typing import TYPE_CHECKING
if sys.version_info < (3, 7) or TYPE_CHECKING:
from ._yref import YrefValidator
from ._ypad import YpadValidator
from ._yanchor import YanchorValidator
from ._y import YValidator
from ._xref import XrefValidator
from ._xpad import XpadValidator
from ._xanchor import XanchorValidator
from ._x import XValidator
from ._title import TitleValidator
from ._tickwidth import TickwidthValidator
from ._tickvalssrc import TickvalssrcValidator
from ._tickvals import TickvalsValidator
from ._ticktextsrc import TicktextsrcValidator
from ._ticktext import TicktextValidator
from ._ticksuffix import TicksuffixValidator
from ._ticks import TicksValidator
from ._tickprefix import TickprefixValidator
from ._tickmode import TickmodeValidator
from ._ticklen import TicklenValidator
from ._ticklabelstep import TicklabelstepValidator
from ._ticklabelposition import TicklabelpositionValidator
from ._ticklabeloverflow import TicklabeloverflowValidator
from ._tickformatstopdefaults import TickformatstopdefaultsValidator
from ._tickformatstops import TickformatstopsValidator
from ._tickformat import TickformatValidator
from ._tickfont import TickfontValidator
from ._tickcolor import TickcolorValidator
from ._tickangle import TickangleValidator
from ._tick0 import Tick0Validator
from ._thicknessmode import ThicknessmodeValidator
from ._thickness import ThicknessValidator
from ._showticksuffix import ShowticksuffixValidator
from ._showtickprefix import ShowtickprefixValidator
from ._showticklabels import ShowticklabelsValidator
from ._showexponent import ShowexponentValidator
from ._separatethousands import SeparatethousandsValidator
from ._outlinewidth import OutlinewidthValidator
from ._outlinecolor import OutlinecolorValidator
from ._orientation import OrientationValidator
from ._nticks import NticksValidator
from ._minexponent import MinexponentValidator
from ._lenmode import LenmodeValidator
from ._len import LenValidator
from ._labelalias import LabelaliasValidator
from ._exponentformat import ExponentformatValidator
from ._dtick import DtickValidator
from ._borderwidth import BorderwidthValidator
from ._bordercolor import BordercolorValidator
from ._bgcolor import BgcolorValidator
else:
from _plotly_utils.importers import relative_import
__all__, __getattr__, __dir__ = relative_import(
__name__,
[],
[
"._yref.YrefValidator",
"._ypad.YpadValidator",
"._yanchor.YanchorValidator",
"._y.YValidator",
"._xref.XrefValidator",
"._xpad.XpadValidator",
"._xanchor.XanchorValidator",
"._x.XValidator",
"._title.TitleValidator",
"._tickwidth.TickwidthValidator",
"._tickvalssrc.TickvalssrcValidator",
"._tickvals.TickvalsValidator",
"._ticktextsrc.TicktextsrcValidator",
"._ticktext.TicktextValidator",
"._ticksuffix.TicksuffixValidator",
"._ticks.TicksValidator",
"._tickprefix.TickprefixValidator",
"._tickmode.TickmodeValidator",
"._ticklen.TicklenValidator",
"._ticklabelstep.TicklabelstepValidator",
"._ticklabelposition.TicklabelpositionValidator",
"._ticklabeloverflow.TicklabeloverflowValidator",
"._tickformatstopdefaults.TickformatstopdefaultsValidator",
"._tickformatstops.TickformatstopsValidator",
"._tickformat.TickformatValidator",
"._tickfont.TickfontValidator",
"._tickcolor.TickcolorValidator",
"._tickangle.TickangleValidator",
"._tick0.Tick0Validator",
"._thicknessmode.ThicknessmodeValidator",
"._thickness.ThicknessValidator",
"._showticksuffix.ShowticksuffixValidator",
"._showtickprefix.ShowtickprefixValidator",
"._showticklabels.ShowticklabelsValidator",
"._showexponent.ShowexponentValidator",
"._separatethousands.SeparatethousandsValidator",
"._outlinewidth.OutlinewidthValidator",
"._outlinecolor.OutlinecolorValidator",
"._orientation.OrientationValidator",
"._nticks.NticksValidator",
"._minexponent.MinexponentValidator",
"._lenmode.LenmodeValidator",
"._len.LenValidator",
"._labelalias.LabelaliasValidator",
"._exponentformat.ExponentformatValidator",
"._dtick.DtickValidator",
"._borderwidth.BorderwidthValidator",
"._bordercolor.BordercolorValidator",
"._bgcolor.BgcolorValidator",
],
)
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@contourcarpet@colorbar@__init__.py@.PATH_END.py
|
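The `else` branch of the validator `__init__.py` above defers all submodule imports until first attribute access through module-level `__getattr__`/`__dir__` hooks (PEP 562). A minimal standalone sketch of that lazy-import idea (not plotly's actual `relative_import` implementation; the `math`/`sqrt` mapping is just an illustration):

```python
import importlib

def make_lazy(mapping):
    """Build PEP 562 module hooks that import an attribute on first access.

    mapping: exported name -> (module name, attribute inside that module).
    """
    def __getattr__(name):
        if name in mapping:
            module_name, attr = mapping[name]
            return getattr(importlib.import_module(module_name), attr)
        raise AttributeError(name)

    def __dir__():
        return sorted(mapping)

    return __getattr__, __dir__

# in a package __init__.py one would assign these at module level:
__getattr__, __dir__ = make_lazy({"sqrt": ("math", "sqrt")})
print(__getattr__("sqrt")(9.0))  # math is only imported when sqrt is first requested
```

Deferring the imports this way keeps `import plotly`-style package loads cheap when only a few of the many generated validator classes are ever used.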
{
"filename": "denoising_generalisation_no_thresh.ipynb",
"repo_name": "utsav-akhaury/understanding-unets",
"repo_path": "understanding-unets_extracted/understanding-unets-master/experiments/denoising_tests/denoising_generalisation_no_thresh.ipynb",
"type": "Jupyter Notebook"
}
|
```python
%cd ../..
```
/volatile/home/Zaccharie/workspace/understanding-unets
```python
# this just to make sure we are using only on CPU
import os
os.environ["CUDA_VISIBLE_DEVICES"]="-1"
```
```python
%load_ext autoreload
%autoreload 2
import copy
import time
import numpy as np
import pandas as pd
from tqdm import tqdm_notebook
from learning_wavelets.data.datasets import im_dataset_bsd68
from learning_wavelets.models.learned_wavelet import learnlet
from learning_wavelets.utils.metrics import metrics_from_ds, metrics_original_from_ds
```
.|'''| /.\ '||'''|,
|| // \\ || ||
'||''|, '|| ||` `|'''|, //...\\ ||...|'
|| || `|..|| . || // \\ ||
||..|' || |...|' .// \\. .||
|| , |'
.|| ''
Package version: 0.0.3
License: CeCILL-B
Authors:
Antoine Grigis <antoine.grigis@cea.fr>
Samuel Farrens <samuel.farrens@cea.fr>
Jean-Luc Starck <jl.stark@cea.fr>
Philippe Ciuciu <philippe.ciuciu@cea.fr>
Dependencies:
scipy : >=1.3.0 - required | 1.4.1 installed
numpy : >=1.16.4 - required | 1.17.4 installed
matplotlib : >=3.0.0 - required | 3.1.2 installed
astropy : >=3.0.0 - required | 3.2.3 installed
nibabel : >=2.3.2 - required | 2.5.1 installed
pyqtgraph : >=0.10.0 - required | 0.10.0 installed
progressbar2 : >=3.34.3 - required | ? installed
modopt : >=1.4.0 - required | 1.4.1 installed
scikit-learn : >=0.19.1 - required | ? installed
pywt : >=1.0.0 - required | 1.1.1 installed
pysparse : >=0.0.1 - required | 0.1.0 installed
```python
np.random.seed(0)
```
```python
dynamic_denoising_net_params = [
{
'name': 'learnlet_0_55_big_bsd',
'init_function': learnlet,
'run_params': {
'denoising_activation': 'dynamic_soft_thresholding',
'learnlet_analysis_kwargs':{
'n_tiling': 256,
'mixing_details': False,
'kernel_size': 11,
'skip_connection': True,
},
'learnlet_synthesis_kwargs': {
'res': True,
'kernel_size': 13,
},
'n_scales': 5,
'clip': True,
'input_size': (None, None, 1),
},
'run_id': 'learnlet_dynamic_st_bsd500_0_55_1580806694',
'epoch': 500,
},
]
```
```python
noise_stds = [0.0001, 5, 15, 20, 25, 30, 50, 55, 60, 75]
```
```python
noise_std_metrics = {}
n_samples = 2
for noise_std in tqdm_notebook(noise_stds, 'Noise stds'):
metrics = []
im_ds = im_dataset_bsd68(
mode='testing',
batch_size=1,
patch_size=None,
noise_std=noise_std,
return_noise_level=False,
n_pooling=5,
n_samples=n_samples,
)
metrics.append(('original', metrics_original_from_ds(im_ds)))
for net_params in dynamic_denoising_net_params:
im_ds = im_dataset_bsd68(
mode='testing',
batch_size=1,
patch_size=None,
noise_std=noise_std,
return_noise_level=True,
n_pooling=5,
n_samples=n_samples,
)
metrics.append((net_params['name'], metrics_from_ds(im_ds, **net_params)))
im_ds = im_dataset_bsd68(
mode='testing',
batch_size=1,
patch_size=None,
noise_std=noise_std,
return_noise_level=True,
n_pooling=5,
n_samples=n_samples,
set_noise_zero=True,
)
metrics.append((net_params['name']+'_no_thresh', metrics_from_ds(im_ds, **net_params)))
# metrics.append(('bm3d', metrics_bm3d(im_gen_test)))
# metrics.append(('wavelets_24', metrics_wavelets(im_gen_test, '24', noise_std=noise_std)))
# metrics.sort(key=lambda x: x[1].metrics['PSNR'].mean())
noise_std_metrics[noise_std] = metrics
```
/volatile/home/Zaccharie/workspace/understanding-unets/venv/lib/python3.6/site-packages/ipykernel_launcher.py:3: TqdmDeprecationWarning: This function will be removed in tqdm==5.0.0
Please use `tqdm.notebook.tqdm` instead of `tqdm.tqdm_notebook`
This is separate from the ipykernel package so we can avoid doing imports until
HBox(children=(FloatProgress(value=0.0, description='Noise stds', max=10.0, style=ProgressStyle(description_wi…
```python
noise_std_metrics;
```
```python
# PSNR table
psnr_metrics_table = pd.DataFrame(
columns=['noise_std'] + [p['name'] for p in dynamic_denoising_net_params] + [p['name']+'_no_thresh' for p in dynamic_denoising_net_params],
)
for i, (noise_std, metrics) in enumerate(noise_std_metrics.items()):
psnr_metrics_table.loc[i, 'noise_std'] = noise_std
for name, m in metrics:
psnr_metrics_table.loc[i, name] = "{mean:.4} ({std:.2})".format(
mean=m.metrics['PSNR'].mean(),
std=m.metrics['PSNR'].stddev(),
)
psnr_metrics_table;
```
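The cell above fills the summary table one entry at a time with `mean (std)` strings. The same pattern in isolation, with made-up PSNR values rather than the notebook's results:

```python
import numpy as np
import pandas as pd

# hypothetical per-image PSNRs at one noise level (illustrative numbers only)
results = {'original': np.array([20.2, 20.9, 19.7]), 'learnlet': np.array([28.1, 29.3, 27.8])}

table = pd.DataFrame(columns=['noise_std'] + list(results))
table.loc[0, 'noise_std'] = 30
for name, psnrs in results.items():
    # same "mean (std)" cell convention as the notebook's tables
    table.loc[0, name] = "{mean:.4} ({std:.2})".format(mean=psnrs.mean(), std=psnrs.std())
print(table)
```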
```python
# SSIM table
ssim_metrics_table = pd.DataFrame(
columns=['noise_std'] + [p['name'] for p in dynamic_denoising_net_params] + [p['name']+'_no_thresh' for p in dynamic_denoising_net_params])
for i, (noise_std, metrics) in enumerate(noise_std_metrics.items()):
ssim_metrics_table.loc[i, 'noise_std'] = noise_std
for name, m in metrics:
ssim_metrics_table.loc[i, name] = "{mean:.4} ({std:.4})".format(
mean=m.metrics['SSIM'].mean(),
std=m.metrics['SSIM'].stddev(),
)
ssim_metrics_table;
```
```python
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
```
```python
sns.set(style="whitegrid", palette="muted", rc={'figure.figsize': (9, 5), 'image.cmap': 'gray'})
```
```python
relative_to_original = True
model_family_str = r'$\bf{Learnlets}$'
noise_std_str = r'$\sigma$'
psnr_str = 'Ratio over original PSNR'
# PSNR to plot
psnr_to_plot = pd.DataFrame(
columns=[noise_std_str, psnr_str, 'psnr-std-dev', 'model_name', model_family_str]
)
def from_name_to_family(model_name):
if 'no_thresh' in model_name:
return 'w/o thresholding'
else:
return 'w thresholding'
index = 0
orig_psnrs = {}
for i_noise, (noise_std, metrics) in enumerate(noise_std_metrics.items()):
for j_model, (name, m) in enumerate(metrics):
if relative_to_original and name == 'original':
orig_psnrs[noise_std] = m.metrics['PSNR'].mean()
else:
psnr_to_plot.loc[index, noise_std_str] = noise_std
psnr_to_plot.loc[index, psnr_str] = m.metrics['PSNR'].mean()
psnr_to_plot.loc[index, 'psnr-std-dev'] = m.metrics['PSNR'].stddev() / 2
psnr_to_plot.loc[index, 'model_name'] = name
psnr_to_plot.loc[index, model_family_str] = from_name_to_family(name)
index += 1
if relative_to_original:
for noise_std, orig_psnr in orig_psnrs.items():
psnr_to_plot.loc[psnr_to_plot[noise_std_str] == noise_std, psnr_str] = psnr_to_plot[psnr_to_plot[noise_std_str] == noise_std][psnr_str] / orig_psnr
psnr_to_plot;
```
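The normalization above divides every model's PSNR by the noisy-image PSNR at the same noise level. A compact standalone version of that step, on toy numbers (not the notebook's measurements):

```python
import pandas as pd

# toy long-form records: two denoiser variants plus the noisy originals, at two noise levels
df = pd.DataFrame({
    'sigma': [15, 15, 15, 30, 30, 30],
    'model': ['original', 'w thresholding', 'w/o thresholding'] * 2,
    'psnr':  [24.0, 30.0, 27.0, 18.0, 27.0, 21.0],
})

# look up the original PSNR per noise level, then divide row-wise
orig = df[df.model == 'original'].set_index('sigma')['psnr']
df['ratio'] = df['psnr'] / df['sigma'].map(orig)
df = df[df.model != 'original']
print(df)
```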
```python
plt.figure()
psnr_to_plot[psnr_str] = psnr_to_plot[psnr_str].astype(float)
lplot = sns.lineplot(
x=noise_std_str,
y=psnr_str,
style=model_family_str,
data=psnr_to_plot,
color='C1',
linewidth=3.1,
)
plt.legend(bbox_to_anchor=(0., 1.01, 1., .05), loc='center', borderaxespad=0., ncol=3, fontsize=12.45)
plt.subplots_adjust(right=0.785)
plt.savefig(f'gen_wo_error_bars.png')
```

```python
```
|
utsav-akhauryREPO_NAMEunderstanding-unetsPATH_START.@understanding-unets_extracted@understanding-unets-master@experiments@denoising_tests@denoising_generalisation_no_thresh.ipynb@.PATH_END.py
|
{
"filename": "__init__.py",
"repo_name": "spacetelescope/specviz",
"repo_path": "specviz_extracted/specviz-master/specviz/__init__.py",
"type": "Python"
}
|
# Licensed under a 3-clause BSD style license - see LICENSE.rst
try:
from specviz.version import version as __version__
except Exception:
# package is not installed
__version__ = "unknown"
__all__ = ['__version__']
print('specviz is no longer supported, please use jdaviz. '
'If you must use legacy specviz, please try v0.7.1 '
'in Python 3.6.')
|
spacetelescopeREPO_NAMEspecvizPATH_START.@specviz_extracted@specviz-master@specviz@__init__.py@.PATH_END.py
|
{
"filename": "custom_init.py",
"repo_name": "rometsch/fargocpt",
"repo_path": "fargocpt_extracted/fargocpt-master/test/self_gravity_azi/custom_init.py",
"type": "Python"
}
|
from dataclasses import dataclass
import numpy as np
from types import SimpleNamespace
import argparse
@dataclass
class FargoCPTField:
outputdir: str
snapshotid: str
name: str
def __post_init__(self):
self.filename = f"{self.outputdir}/snapshots/{self.snapshotid}/{self.name}.dat"
self.grid = get_fargo_grid(self.outputdir)
if self.name == "vrad":
self.R, self.Phi = np.meshgrid(self.grid.ri, self.grid.phic, indexing="ij")
elif self.name == "vtheta":
self.R, self.Phi = np.meshgrid(self.grid.rc, self.grid.phii[:-1], indexing="ij")
else:
self.R, self.Phi = np.meshgrid(self.grid.rc, self.grid.phic, indexing="ij")
self._load()
def _load(self):
self._data = np.fromfile(self.filename, dtype=np.float64)
self._data = self._data.reshape(self.R.shape[0], self.R.shape[1])
def save(self, altid=None):
filename = self.filename
if altid is not None:
filename = f"{self.outputdir}/snapshots/{altid}/{self.name}.dat"
self._data.tofile(filename)
@property
def array(self):
return self._data
@array.setter
def array(self, data):
if not self._data.shape == data.shape:
raise ValueError("Shape of data does not match shape of field")
self._data = data
def get_fargo_grid(outputdir):
Nrad, Naz = np.genfromtxt(f"{outputdir}/dimensions.dat", usecols=(4,5), dtype=int, unpack=True)
ri = np.genfromtxt(f"{outputdir}/used_rad.dat")
phii = np.linspace(0, 2*np.pi, Naz+1)
Ri, Phii = np.meshgrid(ri, phii, indexing="ij")
Xi = Ri*np.cos(Phii)
Yi = Ri*np.sin(Phii)
rc = 2/3*(ri[1:]**2/(ri[1:]+ri[:-1]) + ri[:-1]) # approx center in polar coords
phic = 0.5*(phii[1:]+phii[:-1])
Rc, Phic = np.meshgrid(rc, phic, indexing="ij")
Xc = Rc*np.cos(Phic)
Yc = Rc*np.sin(Phic)
dphi = phii[1] - phii[0]
dr = ri[1:] - ri[:-1]
A = 0.5*(Ri[1:,1:]**2 - Ri[:-1,1:]**2)*dphi
return SimpleNamespace(
Nrad=Nrad, Naz=Naz,
ri=ri, phii=phii, Ri=Ri, Phii=Phii, Xi=Xi, Yi=Yi,
rc=rc, phic=phic, Rc=Rc, Phic=Phic, Xc=Xc, Yc=Yc,
dphi=dphi, dr=dr, A=A
)
def main():
parser = argparse.ArgumentParser()
parser.add_argument("outputdir")
opts = parser.parse_args()
field = FargoCPTField(opts.outputdir, 0, "Sigma")
R = field.R
print(R)
Phi = field.Phi
print(Phi)
R0 = 4
phi1 = np.pi
phi2 = np.pi/2
Deltar = 1
Deltaphi = 0.3
gauss1 = np.exp(-0.5*(R-R0)**2/Deltar**2) * np.exp(-0.5*((Phi-phi1))**2/Deltaphi**2)
gauss2 = np.exp(-0.5*(R-R0)**2/Deltar**2) * np.exp(-0.5*((Phi-phi2))**2/Deltaphi**2)
krad = np.argmin(np.abs(field.grid.ri - R0))
field.array = field.array[krad,0] * (gauss1 + gauss2)
field.save()
field.save("reference")
if __name__ == "__main__":
main()
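A note on the cell-center formula in `get_fargo_grid` above: the expression `rc = 2/3*(ri[1:]**2/(ri[1:]+ri[:-1]) + ri[:-1])` is an algebraically rearranged form of the area-weighted centroid radius of an annulus, r_c = (2/3)(r2³ − r1³)/(r2² − r1²). A minimal standalone check of this equivalence (illustrative sketch, not part of the repository; the interface radii below are made up):

```python
import numpy as np

# Hypothetical logarithmically spaced interface radii, as FARGO-style grids use.
ri = np.geomspace(0.4, 2.5, 65)
r1, r2 = ri[:-1], ri[1:]

# Form used in get_fargo_grid ("approx center in polar coords").
rc_code = 2 / 3 * (r2**2 / (r2 + r1) + r1)

# Area-weighted centroid radius of the annulus [r1, r2].
rc_centroid = 2 / 3 * (r2**3 - r1**3) / (r2**2 - r1**2)

# The two forms agree to floating-point precision.
assert np.allclose(rc_code, rc_centroid)
```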
|
rometschREPO_NAMEfargocptPATH_START.@fargocpt_extracted@fargocpt-master@test@self_gravity_azi@custom_init.py@.PATH_END.py
|
{
"filename": "np_test.py",
"repo_name": "tensorflow/tensorflow",
"repo_path": "tensorflow_extracted/tensorflow-master/tensorflow/python/ops/numpy_ops/tests/np_test.py",
"type": "Python"
}
|
# Copyright 2023 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import collections
import functools
from functools import partial
import itertools
import operator
import unittest
from unittest import SkipTest
import warnings
from absl.testing import absltest
from absl.testing import parameterized
import numpy as onp
import six
from tensorflow.python.framework import errors_impl
from tensorflow.python.framework import ops
from tensorflow.python.ops.numpy_ops import np_config
from tensorflow.python.ops.numpy_ops.tests.config import config
from tensorflow.python.ops.numpy_ops.tests.config import FLAGS
import tensorflow.python.ops.numpy_ops.tests.extensions as nje
import tensorflow.python.ops.numpy_ops.tests.np_wrapper as tnp
import tensorflow.python.ops.numpy_ops.tests.test_util as jtu
from tensorflow.python.util import nest
from tensorflow.python.util.numpy_compat import np_where
config.parse_flags_with_absl()
nonempty_nonscalar_array_shapes = [(4,), (3, 4), (3, 1), (1, 4), (2, 1, 4), (2, 3, 4)]
nonempty_array_shapes = [()] + nonempty_nonscalar_array_shapes
empty_array_shapes = [(0,), (0, 4), (3, 0),]
scalar_shapes = [jtu.NUMPY_SCALAR_SHAPE, jtu.PYTHON_SCALAR_SHAPE]
array_shapes = nonempty_array_shapes + empty_array_shapes
nonzerodim_shapes = nonempty_nonscalar_array_shapes + empty_array_shapes
nonempty_shapes = scalar_shapes + nonempty_array_shapes
all_shapes = scalar_shapes + array_shapes
# TODO(wangpeng): float_dtypes = [tnp.bfloat16, onp.float16, onp.float32,
# onp.float64]
float_dtypes = [onp.float16, onp.float32, onp.float64]
complex_dtypes = [onp.complex64, onp.complex128]
int_dtypes = [onp.int32, onp.int64]
unsigned_dtypes = [onp.uint32, onp.uint64]
bool_dtypes = [onp.bool_]
default_dtypes = float_dtypes + int_dtypes
inexact_dtypes = float_dtypes + complex_dtypes
number_dtypes = float_dtypes + complex_dtypes + int_dtypes
all_dtypes = number_dtypes + bool_dtypes
python_scalar_dtypes = [tnp.bool_, tnp.int_, tnp.float64, tnp.complex128]
# pylint: disable=unnecessary-lambda,g-long-lambda,expression-not-assigned
def _valid_dtypes_for_shape(shape, dtypes):
# Not all (shape, dtype) pairs are valid. In particular, Python scalars only
# have one type in each category (float, bool, etc.)
if shape is jtu.PYTHON_SCALAR_SHAPE:
return [t for t in dtypes if t in python_scalar_dtypes]
return dtypes
def _shape_and_dtypes(shapes, dtypes):
for shape in shapes:
for dtype in _valid_dtypes_for_shape(shape, dtypes):
yield (shape, dtype)
OpRecord = collections.namedtuple(
"OpRecord",
["name", "nargs", "dtypes", "shapes", "rng_factory", "diff_modes",
"test_name", "check_dtypes", "tolerance", "inexact",
"check_incomplete_shape"])
def op_record(name, nargs, dtypes, shapes, rng_factory, diff_modes,
test_name=None, check_dtypes=True, tolerance=None, inexact=False,
check_incomplete_shape=True):
test_name = test_name or name
return OpRecord(name, nargs, dtypes, shapes, rng_factory, diff_modes,
test_name, check_dtypes, tolerance, inexact,
check_incomplete_shape)
def minus(a, b):
return [x for x in a if x not in b]
JAX_ONE_TO_ONE_OP_RECORDS = [
op_record("abs", 1, number_dtypes, all_shapes, jtu.rand_default, ["rev"]),
op_record("add", 2, all_dtypes, all_shapes, jtu.rand_default, ["rev"]),
op_record("ceil", 1, float_dtypes, all_shapes, jtu.rand_default, []),
op_record("conj", 1, number_dtypes, all_shapes, jtu.rand_default, ["rev"]),
op_record("equal", 2, all_dtypes, all_shapes, jtu.rand_some_equal, []),
op_record("exp", 1, number_dtypes, all_shapes, jtu.rand_default, ["rev"],
inexact=True),
op_record("fabs", 1, float_dtypes, all_shapes, jtu.rand_default, ["rev"]),
op_record("float_power", 2, inexact_dtypes, all_shapes,
partial(jtu.rand_default, scale=1), ["rev"],
tolerance={
# TODO(wangpeng): tnp.bfloat16: 1e-2,
onp.float32: 1e-3,
onp.float64: 1e-12, onp.complex64: 2e-4,
onp.complex128: 1e-12}, check_dtypes=False),
op_record("floor", 1, float_dtypes, all_shapes, jtu.rand_default, []),
op_record("greater", 2, minus(all_dtypes, complex_dtypes), all_shapes,
jtu.rand_some_equal, []),
op_record("greater_equal", 2, minus(all_dtypes, complex_dtypes), all_shapes,
jtu.rand_some_equal, []),
op_record("less", 2, minus(all_dtypes, complex_dtypes), all_shapes,
jtu.rand_some_equal, []),
op_record("less_equal", 2, minus(all_dtypes, complex_dtypes), all_shapes,
jtu.rand_some_equal, []),
op_record("log", 1, number_dtypes, all_shapes, jtu.rand_positive, ["rev"],
inexact=True),
op_record("logical_and", 2, all_dtypes, all_shapes, jtu.rand_bool, []),
op_record("logical_not", 1, all_dtypes, all_shapes, jtu.rand_bool, []),
op_record("logical_or", 2, all_dtypes, all_shapes, jtu.rand_bool, []),
op_record("logical_xor", 2, all_dtypes, all_shapes, jtu.rand_bool, []),
op_record("maximum", 2, minus(all_dtypes, complex_dtypes), all_shapes,
jtu.rand_some_inf, []),
op_record("minimum", 2, minus(all_dtypes, complex_dtypes), all_shapes,
jtu.rand_some_inf, []),
op_record("multiply", 2, all_dtypes, all_shapes, jtu.rand_default, ["rev"]),
op_record("negative", 1, number_dtypes, all_shapes, jtu.rand_default, ["rev"]),
op_record("nextafter", 2, [f for f in float_dtypes
if f not in (tnp.bfloat16, onp.float16)],
all_shapes, jtu.rand_default, ["rev"], inexact=True, tolerance=0),
op_record("not_equal", 2, all_dtypes, all_shapes, jtu.rand_some_equal, ["rev"]),
op_record("array_equal", 2, number_dtypes, all_shapes, jtu.rand_some_equal, ["rev"]),
op_record("reciprocal", 1, inexact_dtypes, all_shapes, jtu.rand_default, []),
op_record("subtract", 2, number_dtypes, all_shapes, jtu.rand_default, ["rev"]),
op_record("signbit", 1, default_dtypes + bool_dtypes, all_shapes,
jtu.rand_some_inf_and_nan, ["rev"]),
op_record("sin", 1, number_dtypes, all_shapes, jtu.rand_default, ["rev"],
inexact=True),
op_record("cos", 1, number_dtypes, all_shapes, jtu.rand_default, ["rev"],
inexact=True),
op_record("tan", 1, number_dtypes, all_shapes,
partial(jtu.rand_uniform, -1.5, 1.5), ["rev"],
tolerance={onp.complex64: 3e-5, onp.complex128: 4e-14},
inexact=True),
# TODO(wangpeng): Add float16 support
op_record("sinh", 1, minus(number_dtypes, [onp.float16]), all_shapes, jtu.rand_default, ["rev"],
inexact=True),
op_record("cosh", 1, minus(number_dtypes, [onp.float16]), all_shapes, jtu.rand_default, ["rev"],
inexact=True),
# TODO(b/142975473): on CPU, tanh for complex128 is only accurate to
# ~float32 precision.
# TODO(b/143135720): on GPU, tanh has only ~float32 precision.
op_record("tanh", 1, number_dtypes, all_shapes, jtu.rand_default, ["rev"],
tolerance={onp.float64: 1e-7, onp.complex128: 1e-7},
inexact=True),
op_record("arcsin", 1, minus(float_dtypes, [onp.float16]), all_shapes, jtu.rand_small, ["rev"],
inexact=True),
op_record("arccos", 1, minus(float_dtypes, [onp.float16]), all_shapes, jtu.rand_small, ["rev"],
inexact=True),
op_record("arctan", 1, minus(float_dtypes, [onp.float16]), all_shapes, jtu.rand_small, ["rev"],
inexact=True),
op_record("arctan2", 2, minus(float_dtypes, [onp.float16]), all_shapes, jtu.rand_small, ["rev"],
inexact=True),
op_record("arcsinh", 1, minus(number_dtypes, [onp.float16]), all_shapes, jtu.rand_positive, ["rev"],
inexact=True),
op_record("arccosh", 1, minus(number_dtypes, [onp.float16]), all_shapes, jtu.rand_positive, ["rev"],
inexact=True),
op_record("arctanh", 1, minus(number_dtypes, [onp.float16]), all_shapes, jtu.rand_small, ["rev"],
inexact=True),
]
JAX_COMPOUND_OP_RECORDS = [
# angle has inconsistent 32/64-bit return types across numpy versions.
op_record("angle", 1, number_dtypes, all_shapes, jtu.rand_default, [],
check_dtypes=False, inexact=True),
op_record("atleast_1d", 1, default_dtypes, all_shapes, jtu.rand_default, []),
op_record("atleast_2d", 1, default_dtypes, all_shapes, jtu.rand_default, []),
op_record("atleast_3d", 1, default_dtypes, all_shapes, jtu.rand_default, []),
op_record("cbrt", 1, default_dtypes, all_shapes, jtu.rand_default, ["rev"],
inexact=True),
op_record("conjugate", 1, number_dtypes, all_shapes, jtu.rand_default, ["rev"]),
op_record("deg2rad", 1, float_dtypes, all_shapes, jtu.rand_default, []),
op_record("divide", 2, number_dtypes, all_shapes, jtu.rand_nonzero, ["rev"],
inexact=six.PY3),
op_record("divmod", 2, minus(int_dtypes + float_dtypes, [onp.float16]),
all_shapes, jtu.rand_nonzero, []),
op_record("exp2", 1, number_dtypes, all_shapes, jtu.rand_default, ["rev"],
tolerance={
# TODO(wangpeng): tnp.bfloat16: 2e-2,
onp.float16: 1e-2}, inexact=True),
# TODO(b/142975473): on CPU, expm1 for float64 is only accurate to ~float32
# precision.
op_record("expm1", 1, number_dtypes, all_shapes, jtu.rand_positive, [],
test_name="expm1_large", tolerance={onp.float64: 1e-8}, inexact=True),
op_record("expm1", 1, number_dtypes, all_shapes, jtu.rand_small_positive,
[], tolerance={onp.float64: 1e-8}, inexact=True),
op_record("fix", 1, float_dtypes, all_shapes, jtu.rand_default, []),
op_record("floor_divide", 2, minus(number_dtypes, complex_dtypes),
all_shapes, jtu.rand_nonzero, ["rev"]),
op_record("heaviside", 2, default_dtypes, all_shapes, jtu.rand_default, [],
inexact=True),
op_record("hypot", 2, default_dtypes, all_shapes, jtu.rand_default, [],
inexact=True),
op_record("kron", 2, number_dtypes, nonempty_shapes, jtu.rand_default, [],
check_incomplete_shape=False),
op_record("outer", 2, number_dtypes, all_shapes, jtu.rand_default, []),
op_record("imag", 1, number_dtypes, all_shapes, jtu.rand_some_inf, []),
op_record("iscomplex", 1, number_dtypes, all_shapes, jtu.rand_some_inf, []),
op_record("isfinite", 1, minus(inexact_dtypes, complex_dtypes), all_shapes,
jtu.rand_some_inf_and_nan, []),
op_record("isinf", 1, minus(inexact_dtypes, complex_dtypes), all_shapes,
jtu.rand_some_inf_and_nan, []),
op_record("isnan", 1, minus(inexact_dtypes, complex_dtypes), all_shapes,
jtu.rand_some_inf_and_nan, []),
op_record("isneginf", 1, float_dtypes, all_shapes, jtu.rand_some_inf_and_nan, []),
op_record("isposinf", 1, float_dtypes, all_shapes, jtu.rand_some_inf_and_nan, []),
op_record("isreal", 1, number_dtypes, all_shapes, jtu.rand_some_inf, []),
op_record("isrealobj", 1, number_dtypes, all_shapes, jtu.rand_some_inf, []),
op_record("log2", 1, number_dtypes, all_shapes, jtu.rand_positive, ["rev"],
inexact=True),
op_record("log10", 1, number_dtypes, all_shapes, jtu.rand_positive, ["rev"],
inexact=True),
op_record("log1p", 1, number_dtypes, all_shapes, jtu.rand_positive, [],
test_name="log1p_large", tolerance={onp.float64: 1e-12},
inexact=True),
op_record("log1p", 1, number_dtypes, all_shapes, jtu.rand_small_positive, [],
tolerance={onp.float64: 1e-12}, inexact=True),
op_record("logaddexp", 2, float_dtypes, all_shapes,
jtu.rand_some_inf_and_nan, ["rev"],
tolerance={onp.float64: 1e-12}, inexact=True),
op_record("logaddexp2", 2, float_dtypes, all_shapes,
jtu.rand_some_inf_and_nan, ["rev"],
tolerance={onp.float16: 1e-2}, inexact=True),
op_record("polyval", 2, number_dtypes, nonempty_nonscalar_array_shapes,
jtu.rand_default, [], check_dtypes=False,
tolerance={onp.float16: 1e-2, onp.float64: 1e-12},
check_incomplete_shape=False),
op_record("positive", 1, number_dtypes, all_shapes, jtu.rand_default, ["rev"]),
op_record("power", 2, number_dtypes, all_shapes, jtu.rand_positive, ["rev"],
tolerance={onp.complex128: 1e-14}),
op_record("rad2deg", 1, float_dtypes, all_shapes, jtu.rand_default, [],
tolerance={onp.float64: 5e-6}),
op_record("ravel", 1, all_dtypes, all_shapes, jtu.rand_default, ["rev"]),
op_record("real", 1, number_dtypes, all_shapes, jtu.rand_some_inf, []),
op_record("remainder", 2, minus(default_dtypes, [onp.float16]), all_shapes,
jtu.rand_nonzero, [], tolerance={onp.float16: 1e-2}),
op_record("mod", 2, minus(default_dtypes, [onp.float16]), all_shapes,
jtu.rand_nonzero, []),
op_record("sinc", 1, [t for t in number_dtypes if t != tnp.bfloat16],
all_shapes, jtu.rand_default, ["rev"],
tolerance={onp.complex64: 1e-5}, inexact=True,
check_dtypes=False),
op_record("square", 1, number_dtypes, all_shapes, jtu.rand_default, ["rev"]),
op_record("sqrt", 1, number_dtypes, all_shapes, jtu.rand_positive, ["rev"],
inexact=True),
op_record("transpose", 1, all_dtypes, all_shapes, jtu.rand_default, ["rev"],
check_dtypes=False),
op_record("true_divide", 2, all_dtypes, all_shapes, jtu.rand_nonzero,
["rev"], inexact=True),
op_record("diff", 1, number_dtypes, nonzerodim_shapes, jtu.rand_default,
["rev"], check_incomplete_shape=False),
]
JAX_BITWISE_OP_RECORDS = [
op_record("bitwise_and", 2, int_dtypes, all_shapes,
jtu.rand_default, []),
op_record("bitwise_not", 1, int_dtypes, all_shapes,
jtu.rand_default, []),
op_record("bitwise_or", 2, int_dtypes, all_shapes,
jtu.rand_default, []),
op_record("bitwise_xor", 2, int_dtypes, all_shapes,
jtu.rand_default, []),
]
JAX_REDUCER_RECORDS = [
op_record("mean", 1, number_dtypes, nonempty_shapes, jtu.rand_default, [],
inexact=True),
op_record("prod", 1, all_dtypes, all_shapes, jtu.rand_small_positive, []),
op_record("sum", 1, all_dtypes, all_shapes, jtu.rand_default, []),
op_record("nanmean", 1, minus(inexact_dtypes, complex_dtypes),
nonempty_shapes, jtu.rand_some_nan, [], inexact=True),
op_record("nanprod", 1, minus(inexact_dtypes, complex_dtypes), all_shapes,
jtu.rand_some_nan, []),
op_record("nansum", 1, minus(number_dtypes, complex_dtypes), all_shapes,
jtu.rand_some_nan, []),
]
JAX_REDUCER_NO_DTYPE_RECORDS = [
op_record("all", 1, all_dtypes, all_shapes, jtu.rand_some_zero, []),
op_record("any", 1, all_dtypes, all_shapes, jtu.rand_some_zero, []),
op_record("max", 1, minus(all_dtypes, complex_dtypes), nonempty_shapes,
jtu.rand_default, []),
op_record("min", 1, minus(all_dtypes, complex_dtypes), nonempty_shapes,
jtu.rand_default, []),
op_record("var", 1, all_dtypes, nonempty_shapes, jtu.rand_default, [],
inexact=True),
op_record("std", 1, all_dtypes, nonempty_shapes, jtu.rand_default, [],
inexact=True),
]
JAX_ARGMINMAX_RECORDS = [
op_record("argmin", 1, minus(all_dtypes, complex_dtypes), nonempty_shapes,
jtu.rand_some_equal, []),
op_record("argmax", 1, minus(all_dtypes, complex_dtypes), nonempty_shapes,
jtu.rand_some_equal, []),
]
JAX_OPERATOR_OVERLOADS = [
op_record("__add__", 2, number_dtypes, all_shapes, jtu.rand_default, []),
op_record("__sub__", 2, number_dtypes, all_shapes, jtu.rand_default, []),
op_record("__mul__", 2, number_dtypes, all_shapes, jtu.rand_default, []),
op_record("__eq__", 2, number_dtypes, all_shapes, jtu.rand_default, []),
op_record("__ne__", 2, number_dtypes, all_shapes, jtu.rand_default, []),
op_record("__lt__", 2, default_dtypes, all_shapes, jtu.rand_default, []),
op_record("__gt__", 2, default_dtypes, all_shapes, jtu.rand_default, []),
op_record("__ge__", 2, default_dtypes, all_shapes, jtu.rand_default, []),
op_record("__pos__", 1, number_dtypes, all_shapes, jtu.rand_default, []),
op_record("__neg__", 1, number_dtypes, all_shapes, jtu.rand_default, []),
op_record("__pow__", 2, inexact_dtypes, all_shapes, jtu.rand_positive, [],
tolerance={onp.float32: 2e-4, onp.complex64: 2e-4, onp.complex128: 1e-14}),
op_record("__mod__", 2, minus(default_dtypes, [onp.float16]), all_shapes, jtu.rand_nonzero, [],
tolerance={onp.float16: 1e-1}),
op_record("__floordiv__", 2, default_dtypes, all_shapes, jtu.rand_nonzero, []),
op_record("__truediv__", 2, number_dtypes, all_shapes, jtu.rand_nonzero, [],
inexact=True),
op_record("__abs__", 1, number_dtypes, all_shapes, jtu.rand_default, []),
# TODO(mattjj): __invert__ fails on bool dtypes because ~True == -2
op_record("__invert__", 1, int_dtypes, all_shapes, jtu.rand_default, []),
# TODO(mattjj): investigate these failures
# op_record("__or__", 2, number_dtypes, all_shapes, jtu.rand_bool, []),
# op_record("__and__", 2, number_dtypes, all_shapes, jtu.rand_default, []),
# op_record("__xor__", 2, number_dtypes, all_shapes, jtu.rand_bool, []),
# op_record("__divmod__", 2, number_dtypes, all_shapes, jtu.rand_nonzero, []),
# TODO(mattjj): lshift, rshift
]
JAX_RIGHT_OPERATOR_OVERLOADS = [
op_record("__radd__", 2, number_dtypes, all_shapes, jtu.rand_default, []),
op_record("__rsub__", 2, number_dtypes, all_shapes, jtu.rand_default, []),
op_record("__rmul__", 2, number_dtypes, all_shapes, jtu.rand_default, []),
op_record("__rpow__", 2, inexact_dtypes, all_shapes, jtu.rand_positive, [],
tolerance={onp.float32: 2e-4, onp.complex64: 1e-3}),
op_record("__rmod__", 2, minus(default_dtypes, [onp.float16]), all_shapes, jtu.rand_nonzero, [],
tolerance={onp.float16: 1e-1}),
op_record("__rfloordiv__", 2, default_dtypes, all_shapes, jtu.rand_nonzero, []),
op_record("__rtruediv__", 2, number_dtypes, all_shapes, jtu.rand_nonzero, [],
inexact=True),
# op_record("__ror__", 2, number_dtypes, all_shapes, jtu.rand_bool, []),
# op_record("__rand__", 2, number_dtypes, all_shapes, jtu.rand_default, []),
# op_record("__rxor__", 2, number_dtypes, all_shapes, jtu.rand_bool, []),
# op_record("__rdivmod__", 2, number_dtypes, all_shapes, jtu.rand_nonzero, []),
]
numpy_version = tuple(map(int, onp.version.version.split('.')))
if numpy_version >= (1, 15):
JAX_COMPOUND_OP_RECORDS += [
op_record("isclose", 2, [t for t in all_dtypes if t != tnp.bfloat16],
all_shapes, jtu.rand_small_positive, []),
op_record("gcd", 2, int_dtypes, all_shapes, jtu.rand_default, []),
op_record("lcm", 2, int_dtypes, all_shapes, jtu.rand_default, []),
]
JAX_REDUCER_NO_DTYPE_RECORDS += [
op_record("ptp", 1, minus(number_dtypes, complex_dtypes), nonempty_shapes,
jtu.rand_default, []),
]
if six.PY2:
JAX_OPERATOR_OVERLOADS += [
op_record("__div__", 2, number_dtypes, all_shapes, jtu.rand_nonzero, []),
]
JAX_RIGHT_OPERATOR_OVERLOADS += [
op_record("__rdiv__", 2, number_dtypes, all_shapes, jtu.rand_nonzero, []),
]
CombosWithReplacement = itertools.combinations_with_replacement
def _dtypes_are_compatible_for_bitwise_ops(args):
if len(args) <= 1:
return True
is_signed = lambda dtype: tnp.issubdtype(dtype, onp.signedinteger)
width = lambda dtype: tnp.iinfo(dtype).bits
x, y = args
# `tnp.iinfo(dtype).bits` can't be called on bools, so we convert bools to
# ints.
if x == tnp.bool_:
x = tnp.int32
if y == tnp.bool_:
y = tnp.int32
if width(x) > width(y):
x, y = y, x
if x == tnp.uint32 and y == tnp.uint64:
return False
# The following condition seems a little ad hoc, but seems to capture what
# numpy actually implements.
return (
is_signed(x) == is_signed(y)
or (width(x) == 32 and width(y) == 32)
or (width(x) == 32 and width(y) == 64 and is_signed(y)))
def _shapes_are_broadcast_compatible(shapes):
accumulator = onp.zeros([])
for shape in shapes:
try:
accumulator = accumulator + onp.zeros(shape)
except ValueError:
return False
return True
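The `_shapes_are_broadcast_compatible` helper above tests compatibility by literally attempting the broadcast on zero-filled arrays and catching the failure. A self-contained sketch of the same trick (illustrative only, not part of the TensorFlow source; the function name here is made up):

```python
import numpy as np

def shapes_broadcast(*shapes):
    # Attempt the broadcast with zero-filled placeholders; NumPy raises
    # ValueError exactly when the shapes are incompatible, so the try/except
    # doubles as the compatibility check.
    try:
        acc = np.zeros([])
        for shape in shapes:
            acc = acc + np.zeros(shape)
        return True
    except ValueError:
        return False

print(shapes_broadcast((3, 1), (1, 4)))  # True: broadcasts to (3, 4)
print(shapes_broadcast((3,), (4,)))      # False: trailing dims 3 vs 4 clash
```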
def _shapes_are_equal_length(shapes):
return all(len(shape) == len(shapes[0]) for shape in shapes[1:])
# pylint: disable=g-doc-return-or-yield
def _promote_like_lnp(fun, inexact=False):
"""Decorator that promotes the arguments of `fun` to `tnp.result_type(*args)`.
tnp and onp have different type promotion semantics; this decorator allows
  tests to make an onp reference implementation act more like a tnp
implementation.
"""
def wrapper(*args, **kw):
flat_args = nest.flatten(args)
if inexact and not any(
tnp.issubdtype(tnp.result_type(x).as_numpy_dtype, tnp.inexact)
for x in flat_args):
dtype = tnp.result_type(tnp.float64, *flat_args)
else:
dtype = tnp.result_type(*flat_args)
dtype = dtype.as_numpy_dtype
args = nest.map_structure(lambda a: onp.asarray(a, dtype), args)
return fun(*args, **kw)
return wrapper
# pylint: enable=g-doc-return-or-yield
def new_test(f):
def wrapper(self, *args, **kwargs):
if not FLAGS.tf_numpy_additional_tests:
self.skipTest("Newly added test is disabled, since flag is False.")
else:
f(self, *args, **kwargs)
return wrapper
def named_parameters(ls):
"""A version that allows an empty param list."""
def noop(_):
def wrapper(self, *args, **kwargs):
self.skipTest("Empty parameter list")
return wrapper
if isinstance(ls, (list, tuple)) and not ls:
return noop
if isinstance(ls, itertools.chain):
try:
first = next(ls)
except StopIteration:
return noop
else:
ls = itertools.chain([first], ls)
return parameterized.named_parameters(ls)
# TODO(wangpeng): Enable all disabled tests in this class
class LaxBackedNumpyTests(jtu.TestCase):
"""Tests for LAX-backed Numpy implementation."""
def _GetArgsMaker(self, rng, shapes, dtypes, onp_arrays=True):
def f():
out = [rng(shape, dtype or tnp.float64)
for shape, dtype in zip(shapes, dtypes)]
return out if onp_arrays else [tnp.asarray(a) for a in out]
return f
@named_parameters(itertools.chain.from_iterable(
jtu.cases_from_list(
{"testcase_name": jtu.format_test_name_suffix(rec.test_name, shapes,
dtypes),
"rng_factory": rec.rng_factory, "shapes": shapes, "dtypes": dtypes,
"onp_op": getattr(onp, rec.name), "lnp_op": getattr(tnp, rec.name),
"check_dtypes": rec.check_dtypes, "tolerance": rec.tolerance,
"inexact": rec.inexact,
"check_incomplete_shape": rec.check_incomplete_shape}
for shapes in filter(
_shapes_are_broadcast_compatible,
CombosWithReplacement(rec.shapes, rec.nargs))
for dtypes in itertools.product(
*(_valid_dtypes_for_shape(s, rec.dtypes) for s in shapes)))
for rec in itertools.chain(JAX_ONE_TO_ONE_OP_RECORDS,
JAX_COMPOUND_OP_RECORDS)))
@unittest.skipIf(onp.__version__ >= onp.lib.NumpyVersion('2.0.0'),
'tf numpy is implemented to be numpy 1.x compatible')
def testOp(self, onp_op, lnp_op, rng_factory, shapes, dtypes, check_dtypes,
tolerance, inexact, check_incomplete_shape):
# TODO(b/147769803): Remove this skipping
if lnp_op.__name__ == "kron" and shapes == ((2, 3, 4), (2, 3, 4)):
self.skipTest("Case disabled because of b/147769803")
rng = rng_factory()
args_maker = self._GetArgsMaker(rng, shapes, dtypes, onp_arrays=False)
tol = max(jtu.tolerance(dtype, tolerance) for dtype in dtypes)
tol = functools.reduce(jtu.join_tolerance,
[tolerance, tol, jtu.default_tolerance()])
self._CheckAgainstNumpy(_promote_like_lnp(onp_op, inexact), lnp_op,
args_maker, check_dtypes=check_dtypes, tol=tol)
# tf.math.pow doesn't support int32/int64 on XLA (b/169191476).
check_xla = not (lnp_op.__name__ == "power" and set(dtypes).intersection(
(onp.int32, onp.int64)))
self._CompileAndCheck(lnp_op, args_maker, check_dtypes=check_dtypes,
atol=tol, rtol=tol,
check_incomplete_shape=check_incomplete_shape,
check_experimental_compile=check_xla,
check_xla_forced_compile=check_xla)
@named_parameters(itertools.chain.from_iterable(
jtu.cases_from_list(
{"testcase_name": jtu.format_test_name_suffix(rec.test_name, shapes,
dtypes),
"rng_factory": rec.rng_factory, "shapes": shapes, "dtypes": dtypes, "name": rec.name,
"tol": rec.tolerance}
for shapes in filter(
_shapes_are_broadcast_compatible,
CombosWithReplacement(rec.shapes, rec.nargs))
for dtypes in itertools.product(
*(_valid_dtypes_for_shape(s, rec.dtypes) for s in shapes)))
for rec in JAX_OPERATOR_OVERLOADS))
def testOperatorOverload(self, name, rng_factory, shapes, dtypes, tol):
rng = rng_factory()
# onp and tnp arrays have different type promotion rules; force the use of
# tnp arrays.
args_maker = self._GetArgsMaker(rng, shapes, dtypes, onp_arrays=False)
fun = lambda *xs: getattr(operator, name.strip('_'))(*xs)
scalar_arg = (jtu.PYTHON_SCALAR_SHAPE in shapes or
jtu.NUMPY_SCALAR_SHAPE in shapes or
() in shapes)
empty_shape = any(isinstance(s, tuple) and 0 in s for s in shapes)
self._CompileAndCheck(
fun, args_maker, check_dtypes=True, #not scalar_arg and not empty_shape,
atol=tol, rtol=tol)
@named_parameters(itertools.chain.from_iterable(
jtu.cases_from_list(
{"testcase_name": jtu.format_test_name_suffix(rec.test_name, shapes,
dtypes),
"rng_factory": rec.rng_factory, "shapes": shapes, "dtypes": dtypes, "name": rec.name,
"op_tolerance": rec.tolerance}
for shapes in filter(
_shapes_are_broadcast_compatible,
CombosWithReplacement(rec.shapes, rec.nargs))
for dtypes in itertools.product(
*(_valid_dtypes_for_shape(s, rec.dtypes) for s in shapes)))
for rec in JAX_RIGHT_OPERATOR_OVERLOADS))
def testRightOperatorOverload(self, name, rng_factory, shapes, dtypes,
op_tolerance):
if shapes[1] is jtu.PYTHON_SCALAR_SHAPE:
raise SkipTest() # TODO(mattjj): clean up
rng = rng_factory()
args_maker = self._GetArgsMaker(rng, shapes, dtypes, onp_arrays=False)
fun = lambda fst, snd: getattr(snd, name)(fst)
tol = max(jtu.tolerance(dtype, op_tolerance) for dtype in dtypes)
scalar_arg = (jtu.PYTHON_SCALAR_SHAPE in shapes or
jtu.NUMPY_SCALAR_SHAPE in shapes or
() in shapes)
empty_shape = any(isinstance(s, tuple) and 0 in s for s in shapes)
self._CompileAndCheck(
fun, args_maker, check_dtypes=True, # not scalar_arg and not empty_shape,
atol=tol, rtol=tol)
@named_parameters(itertools.chain.from_iterable(
jtu.cases_from_list(
{"testcase_name": jtu.format_test_name_suffix(
rec.test_name, shapes, dtypes),
"rng_factory": rec.rng_factory, "shapes": shapes, "dtypes": dtypes,
"onp_op": getattr(onp, rec.name), "lnp_op": getattr(tnp, rec.name)}
for shapes in filter(
_shapes_are_broadcast_compatible,
CombosWithReplacement(rec.shapes, rec.nargs))
for dtypes in filter(
_dtypes_are_compatible_for_bitwise_ops,
CombosWithReplacement(rec.dtypes, rec.nargs)))
for rec in JAX_BITWISE_OP_RECORDS))
def testBitwiseOp(self, onp_op, lnp_op, rng_factory, shapes, dtypes):
rng = rng_factory()
args_maker = self._GetArgsMaker(rng, shapes, dtypes)
has_python_scalar = jtu.PYTHON_SCALAR_SHAPE in shapes
self._CheckAgainstNumpy(onp_op, lnp_op, args_maker, check_dtypes=True)
if onp_op == onp.bitwise_not and has_python_scalar:
# For bitwise_not with a Python `int`, nje.jit may choose a different
# dtype for the `int` from onp's choice, which may result in a different
# result value, so we skip _CompileAndCheck.
return
# Numpy does value-dependent dtype promotion on Python/numpy/array scalars
# which `jit` can't do (when np.result_type is called inside `jit`, tensor
# values are not available), so we skip dtype check in this case.
check_dtypes = not(set(shapes) & set([jtu.NUMPY_SCALAR_SHAPE,
jtu.PYTHON_SCALAR_SHAPE, ()]))
self._CompileAndCheck(lnp_op, args_maker, check_dtypes=check_dtypes)
@named_parameters(itertools.chain.from_iterable(
jtu.cases_from_list(
{"testcase_name": "{}_inshape={}_axis={}_dtype={}_keepdims={}".format(
rec.test_name.capitalize(),
jtu.format_shape_dtype_string(shape, dtype), axis,
"None" if out_dtype is None else onp.dtype(out_dtype).name, keepdims),
"rng_factory": rec.rng_factory, "shape": shape, "dtype": dtype, "out_dtype": out_dtype,
"onp_op": getattr(onp, rec.name), "lnp_op": getattr(tnp, rec.name),
"axis": axis, "keepdims": keepdims, "inexact": rec.inexact}
for shape in rec.shapes for dtype in rec.dtypes
for out_dtype in [None] + rec.dtypes
for axis in set(range(-len(shape), len(shape))) | set([None])
for keepdims in [False, True])
for rec in JAX_REDUCER_RECORDS))
def testReducer(self, onp_op, lnp_op, rng_factory, shape, dtype, out_dtype,
axis, keepdims, inexact):
rng = rng_factory()
def onp_fun(x):
x_cast = x if dtype != tnp.bfloat16 else x.astype(onp.float32)
t = out_dtype if out_dtype != tnp.bfloat16 else onp.float32
return onp_op(x_cast, axis, dtype=t, keepdims=keepdims)
onp_fun = _promote_like_lnp(onp_fun, inexact)
lnp_fun = lambda x: lnp_op(x, axis, dtype=out_dtype, keepdims=keepdims)
args_maker = lambda: [rng(shape, dtype)]
tol_spec = {onp.float16: 1e-2, onp.float32: 1e-3, onp.complex64: 1e-3,
onp.float64: 1e-5, onp.complex128: 1e-5}
tol = jtu.tolerance(dtype, tol_spec)
tol = max(tol, jtu.tolerance(out_dtype, tol_spec)) if out_dtype else tol
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker,
check_dtypes=tnp.bfloat16 not in (dtype, out_dtype),
tol=tol)
self._CompileAndCheck(lnp_fun, args_maker, check_dtypes=True, atol=tol,
rtol=tol)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "{}_inshape={}_axis={}_keepdims={}".format(
rec.test_name.capitalize(),
jtu.format_shape_dtype_string(shape, dtype), axis, keepdims),
"rng_factory": rec.rng_factory, "shape": shape, "dtype": dtype,
"onp_op": getattr(onp, rec.name), "lnp_op": getattr(tnp, rec.name),
"axis": axis, "keepdims": keepdims, "inexact": rec.inexact}
for rec in JAX_REDUCER_NO_DTYPE_RECORDS
for shape in rec.shapes for dtype in rec.dtypes
for axis in set(range(-len(shape), len(shape))) | set([None])
for keepdims in [False, True]))
def testReducerNoDtype(self, onp_op, lnp_op, rng_factory, shape, dtype, axis,
keepdims, inexact):
rng = rng_factory()
onp_fun = lambda x: onp_op(x, axis, keepdims=keepdims)
onp_fun = _promote_like_lnp(onp_fun, inexact)
lnp_fun = lambda x: lnp_op(x, axis, keepdims=keepdims)
args_maker = lambda: [rng(shape, dtype)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(lnp_fun, args_maker, check_dtypes=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_shape={}_axis={}".format(
jtu.format_shape_dtype_string(shape, dtype), axis),
"shape": shape, "dtype": dtype, "axis": axis}
for shape in all_shapes for dtype in all_dtypes
for axis in set(range(-len(shape), len(shape))) | set([None])))
def testCountNonzero(self, shape, dtype, axis):
rng = jtu.rand_some_zero()
onp_fun = lambda x: onp.count_nonzero(x, axis)
lnp_fun = lambda x: tnp.count_nonzero(x, axis)
args_maker = lambda: [rng(shape, dtype)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=False)
self._CompileAndCheck(lnp_fun, args_maker, check_dtypes=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_shape={}".format(
jtu.format_shape_dtype_string(shape, dtype)),
"shape": shape, "dtype": dtype}
for shape in all_shapes for dtype in all_dtypes))
def testNonzero(self, shape, dtype):
rng = jtu.rand_some_zero()
onp_fun = lambda x: onp.nonzero(onp.atleast_1d(x)) # pylint: disable=unnecessary-lambda
lnp_fun = lambda x: tnp.nonzero(x) # pylint: disable=unnecessary-lambda
args_maker = lambda: [rng(shape, dtype)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=False)
# The shapes of `nonzero`'s results are value-dependent, so `eval_on_shapes`
# won't return concrete shapes.
# Also, `nonzero` requires a known rank.
# Turns off XLA check because there are no XLA kernels for `Where`, which
    # XLA can't support because its output shape is dynamic.
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_eval_on_shapes=False,
check_incomplete_shape=True, check_unknown_rank=False,
check_experimental_compile=False, check_xla_forced_compile=False)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "{}_inshape={}_axis={}".format(
rec.test_name.capitalize(),
jtu.format_shape_dtype_string(shape, dtype), axis),
"rng_factory": rec.rng_factory, "shape": shape, "dtype": dtype,
"onp_op": getattr(onp, rec.name), "lnp_op": getattr(tnp, rec.name),
"axis": axis}
for rec in JAX_ARGMINMAX_RECORDS
for shape, dtype in _shape_and_dtypes(rec.shapes, rec.dtypes)
for axis in range(-len(shape), len(shape))))
def testArgMinMax(self, onp_op, lnp_op, rng_factory, shape, dtype, axis):
rng = rng_factory()
if dtype == onp.complex128 and jtu.device_under_test() == "gpu":
raise unittest.SkipTest("complex128 reductions not supported on GPU")
def onp_fun(array_to_reduce):
return onp_op(array_to_reduce, axis).astype(tnp.int_)
def lnp_fun(array_to_reduce):
return lnp_op(array_to_reduce, axis)
args_maker = lambda: [rng(shape, dtype)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_{}_{}".format(
jtu.format_shape_dtype_string(lhs_shape, lhs_dtype),
jtu.format_shape_dtype_string(rhs_shape, rhs_dtype),
axes),
"lhs_shape": lhs_shape, "lhs_dtype": lhs_dtype,
"rhs_shape": rhs_shape, "rhs_dtype": rhs_dtype,
"axes": axes, "rng_factory": rng_factory}
for rng_factory in [jtu.rand_default]
for lhs_shape, rhs_shape, axes in [
[(2,), (2,), (-1, -1, -1, None)], # scalar output
[(2, 4), (2, 4), (-1, -1, -1, 0)], # 2D vectors
[(3, 4), (3, 4), (-1, -1, -1, 0)], # 3D vectors
[(3, 4), (3, 6, 5, 4), (-1, -1, -1, 0)], # broadcasting
[(4, 3), (3, 6, 5, 4), (1, 0, -1, None)], # different axes
[(6, 1, 3), (5, 3), (-1, -1, -1, None)], # more broadcasting
[(6, 1, 2), (5, 3), (-1, -1, -1, None)], # mixed 2D and 3D vectors
[(10, 5, 2, 8), (1, 5, 1, 3), (-2, -1, -3, None)], # axes/broadcasting
[(4, 5, 2), (4, 5, 2), (-1, -1, 0, None)], # axisc should do nothing
[(4, 5, 2), (4, 5, 2), (-1, -1, -1, None)] # same as before
]
for lhs_dtype, rhs_dtype in CombosWithReplacement(
minus(number_dtypes, complex_dtypes), 2)))
def testCross(self, lhs_shape, lhs_dtype, rhs_shape, rhs_dtype, axes, rng_factory):
rng = rng_factory()
args_maker = lambda: [rng(lhs_shape, lhs_dtype), rng(rhs_shape, rhs_dtype)]
axisa, axisb, axisc, axis = axes
lnp_fun = lambda a, b: tnp.cross(a, b, axisa, axisb, axisc, axis)
def onp_fun(a, b):
a = a.astype(onp.float32) if lhs_dtype == tnp.bfloat16 else a
b = b.astype(onp.float32) if rhs_dtype == tnp.bfloat16 else b
out = onp.cross(a, b, axisa, axisb, axisc, axis)
return out.astype(tnp.promote_types(lhs_dtype, rhs_dtype))
tol_spec = {
# TODO(wangpeng): dtypes.bfloat16: 3e-1,
onp.float16: 0.15}
tol = max(jtu.tolerance(lhs_dtype, tol_spec),
jtu.tolerance(rhs_dtype, tol_spec))
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True,
tol=tol)
self._CompileAndCheck(lnp_fun, args_maker, check_dtypes=True, atol=tol,
rtol=tol, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_{}_{}".format(
name,
jtu.format_shape_dtype_string(lhs_shape, lhs_dtype),
jtu.format_shape_dtype_string(rhs_shape, rhs_dtype)),
"lhs_shape": lhs_shape, "lhs_dtype": lhs_dtype,
"rhs_shape": rhs_shape, "rhs_dtype": rhs_dtype,
"rng_factory": rng_factory}
for rng_factory in [jtu.rand_default]
for name, lhs_shape, rhs_shape in [
("matrix-scalar", (3, 3), ()),
("scalar-matrix", (), (3, 3)),
("matrix-vector", (4, 5), (5,)),
("vector-matrix", (6,), (6, 4)),
("matrix-matrix", (3, 4), (4, 5)),
("tensor-vector", (4, 3, 2), (2,)),
("vector-tensor", (2,), (3, 2, 4)),
("tensor-matrix", (4, 3, 2), (2, 5)),
("matrix-tensor", (5, 2), (3, 2, 4)),
("tensor-tensor", (2, 3, 4), (5, 4, 1))]
for lhs_dtype, rhs_dtype in CombosWithReplacement(number_dtypes, 2)))
def testDot(self, lhs_shape, lhs_dtype, rhs_shape, rhs_dtype, rng_factory):
rng = rng_factory()
args_maker = lambda: [rng(lhs_shape, lhs_dtype), rng(rhs_shape, rhs_dtype)]
tol = {onp.float16: 1e-2, onp.float32: 1e-5, onp.float64: 1e-14,
onp.complex128: 1e-14}
if jtu.device_under_test() == "tpu":
tol[onp.float32] = tol[onp.complex64] = 2e-1
def onp_dot(x, y):
x = x.astype(onp.float32) if lhs_dtype == tnp.bfloat16 else x
y = y.astype(onp.float32) if rhs_dtype == tnp.bfloat16 else y
# `onp.dot(x, y).dtype` sometimes differs from `onp.result_type(x, y)`
# (e.g. when x is float64[] and y is complex64[3,3], or when x is
# float16[3,3] and y is int64[]). We ignore this corner case and pretend
# that they agree.
return onp.dot(x, y).astype(onp.result_type(x, y))
self._CheckAgainstNumpy(onp_dot, tnp.dot, args_maker,
check_dtypes=True, tol=tol)
# We disable dtype check in the following cases because `np.dot` does
# value-dependent type promotion in those cases.
check_dtypes = () not in (lhs_shape, rhs_shape)
# XLA lacks int32/int64 MatMul kernels (b/168657656).
check_xla = not set((lhs_dtype, rhs_dtype)).intersection(
(onp.int32, onp.int64))
self._CompileAndCheck(tnp.dot, args_maker, check_dtypes=check_dtypes,
atol=tol, rtol=tol, check_incomplete_shape=True,
check_experimental_compile=check_xla,
check_xla_forced_compile=check_xla)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_{}_{}".format(
name,
jtu.format_shape_dtype_string(lhs_shape, lhs_dtype),
jtu.format_shape_dtype_string(rhs_shape, rhs_dtype)),
"lhs_shape": lhs_shape, "lhs_dtype": lhs_dtype,
"rhs_shape": rhs_shape, "rhs_dtype": rhs_dtype,
"rng_factory": rng_factory}
for rng_factory in [jtu.rand_default]
for name, lhs_shape, rhs_shape in [
("vector-vector", (3,), (3,)),
("matrix-vector", (3, 3), (3,)),
("vector-matrix", (3,), (3, 3)),
("matrix-matrix", (3, 3), (3, 3)),
("vector-tensor", (3,), (5, 3, 2)),
("tensor-vector", (5, 3, 2), (2,)),
("matrix-tensor", (5, 2), (3, 2, 4)),
("tensor-matrix", (5, 2, 3), (3, 2)),
("tensor-tensor", (5, 3, 4), (5, 4, 1)),
("tensor-tensor-broadcast", (3, 1, 3, 4), (5, 4, 1))]
for lhs_dtype, rhs_dtype in CombosWithReplacement(number_dtypes, 2)))
def testMatmul(self, lhs_shape, lhs_dtype, rhs_shape, rhs_dtype, rng_factory):
rng = rng_factory()
def onp_fun(x, y):
dtype = tnp.promote_types(lhs_dtype, rhs_dtype)
return (onp.matmul(x, y).astype(dtype),
onp.array(x).__matmul__(y).astype(dtype),
onp.array(y).__rmatmul__(x).astype(dtype))
def lnp_fun(x, y):
return (tnp.matmul(x, y),
tnp.array(x).__matmul__(y),
tnp.array(y).__rmatmul__(x))
args_maker = lambda: [rng(lhs_shape, lhs_dtype), rng(rhs_shape, rhs_dtype)]
tol = {onp.float16: 1e-2, onp.float32: 2e-2, onp.float64: 1e-12,
onp.complex128: 1e-12}
if jtu.device_under_test() == "tpu":
tol[onp.float32] = tol[onp.complex64] = 4e-2
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker,
check_dtypes=True, tol=tol)
# XLA lacks int32/int64 MatMul kernels (b/168657656).
check_xla = not set((lhs_dtype, rhs_dtype)).intersection(
(onp.int32, onp.int64))
self._CompileAndCheck(lnp_fun, args_maker, check_dtypes=True, atol=tol,
rtol=tol, check_incomplete_shape=True,
check_experimental_compile=check_xla,
check_xla_forced_compile=check_xla)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_{}_{}".format(
name,
jtu.format_shape_dtype_string(lhs_shape, lhs_dtype),
jtu.format_shape_dtype_string(rhs_shape, rhs_dtype)),
"lhs_shape": lhs_shape, "lhs_dtype": lhs_dtype,
"rhs_shape": rhs_shape, "rhs_dtype": rhs_dtype,
"rng_factory": rng_factory}
for rng_factory in [jtu.rand_default]
for name, lhs_shape, rhs_shape in [
("vector-vector", (3,), (3,)),
("vector-matrix", (9,), (3, 3)),
("matrix-matrix", (3, 3), (3, 3)),
("tensor-vector", (5, 3, 2), (30,))]
for lhs_dtype, rhs_dtype in CombosWithReplacement(number_dtypes, 2)))
@new_test
def testVDot(self, lhs_shape, lhs_dtype, rhs_shape, rhs_dtype, rng_factory):
rng = rng_factory()
args_maker = lambda: [rng(lhs_shape, lhs_dtype), rng(rhs_shape, rhs_dtype)]
tol = {onp.float16: 1e-2, onp.float32: 2e-2, onp.float64: 1e-12,
onp.complex128: 1e-12}
self._CheckAgainstNumpy(onp.vdot, tnp.vdot, args_maker,
check_dtypes=True, tol=tol)
# XLA lacks int32/int64 MatMul kernels (b/168657656).
check_xla = not set((lhs_dtype, rhs_dtype)).intersection(
(onp.int32, onp.int64))
self._CompileAndCheck(tnp.vdot, args_maker, check_dtypes=True, atol=tol,
rtol=tol, check_incomplete_shape=True,
check_experimental_compile=check_xla,
check_xla_forced_compile=check_xla)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_{}_{}".format(
jtu.format_shape_dtype_string(lhs_shape, lhs_dtype),
jtu.format_shape_dtype_string(rhs_shape, rhs_dtype),
axes),
"lhs_shape": lhs_shape, "lhs_dtype": lhs_dtype,
"rhs_shape": rhs_shape, "rhs_dtype": rhs_dtype,
"axes": axes, "rng_factory": rng_factory}
for rng_factory in [jtu.rand_default]
for lhs_shape, rhs_shape, axes in [
[(2, 3, 4), (5, 6, 7), 0], # from issue #740
[(2, 3, 4), (3, 4, 5, 6), 2],
[(2, 3, 4), (5, 4, 3, 6), [1, 2]],
[(2, 3, 4), (5, 4, 3, 6), [[1, 2], [2, 1]]],
[(1, 2, 3, 4), (4, 5, 3, 6), [[2, 3], [2, 0]]],
]
for lhs_dtype, rhs_dtype in CombosWithReplacement(number_dtypes, 2)))
def testTensordot(self, lhs_shape, lhs_dtype, rhs_shape, rhs_dtype, axes, rng_factory):
rng = rng_factory()
args_maker = lambda: [rng(lhs_shape, lhs_dtype), rng(rhs_shape, rhs_dtype)]
lnp_fun = lambda a, b: tnp.tensordot(a, b, axes)
def onp_fun(a, b):
a = a if lhs_dtype != tnp.bfloat16 else a.astype(onp.float32)
b = b if rhs_dtype != tnp.bfloat16 else b.astype(onp.float32)
dtype = tnp.promote_types(lhs_dtype, rhs_dtype)
return onp.tensordot(a, b, axes).astype(dtype)
tol = {onp.float16: 1e-1, onp.float32: 1e-3, onp.float64: 1e-12,
onp.complex64: 1e-3, onp.complex128: 1e-12}
if jtu.device_under_test() == "tpu":
tol[onp.float32] = tol[onp.complex64] = 2e-1
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True,
tol=tol)
# XLA lacks int32/int64 MatMul kernels (b/168657656).
check_xla = not set((lhs_dtype, rhs_dtype)).intersection(
(onp.int32, onp.int64))
tol = {onp.float64: 1e-14, onp.float16: 0.04, onp.complex128: 6e-15}
tol = max(jtu.tolerance(lhs_dtype, tol), jtu.tolerance(rhs_dtype, tol))
self._CompileAndCheck(lnp_fun, args_maker, check_dtypes=True,
check_incomplete_shape=True,
check_experimental_compile=check_xla,
check_xla_forced_compile=check_xla,
atol=tol, rtol=tol)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_{}".format(
jtu.format_shape_dtype_string(lhs_shape, lhs_dtype),
jtu.format_shape_dtype_string(rhs_shape, rhs_dtype)),
"lhs_shape": lhs_shape, "lhs_dtype": lhs_dtype,
"rhs_shape": rhs_shape, "rhs_dtype": rhs_dtype,
"rng_factory": jtu.rand_default}
# TODO(phawkins): support integer dtypes too.
for lhs_shape, lhs_dtype in _shape_and_dtypes(all_shapes, inexact_dtypes)
for rhs_shape, rhs_dtype in _shape_and_dtypes(all_shapes, inexact_dtypes)
if len(jtu._dims_of_shape(lhs_shape)) == 0
or len(jtu._dims_of_shape(rhs_shape)) == 0
or lhs_shape[-1] == rhs_shape[-1]))
def testInner(self, lhs_shape, lhs_dtype, rhs_shape, rhs_dtype, rng_factory):
rng = rng_factory()
args_maker = lambda: [rng(lhs_shape, lhs_dtype), rng(rhs_shape, rhs_dtype)]
def onp_fun(lhs, rhs):
lhs = lhs if lhs_dtype != tnp.bfloat16 else lhs.astype(onp.float32)
rhs = rhs if rhs_dtype != tnp.bfloat16 else rhs.astype(onp.float32)
dtype = tnp.promote_types(lhs_dtype, rhs_dtype)
return onp.inner(lhs, rhs).astype(dtype)
lnp_fun = lambda lhs, rhs: tnp.inner(lhs, rhs) # pylint: disable=unnecessary-lambda
tol_spec = {onp.float16: 1e-2, onp.float32: 1e-5, onp.float64: 2e-6}
if jtu.device_under_test() == "tpu":
tol_spec[onp.float32] = tol_spec[onp.complex64] = 2e-1
tol = max(jtu.tolerance(lhs_dtype, tol_spec),
jtu.tolerance(rhs_dtype, tol_spec))
# TODO(phawkins): there are float32/float64 disagreements for some inputs.
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=False,
tol=tol)
self._CompileAndCheck(lnp_fun, args_maker, check_dtypes=False, atol=tol,
rtol=tol, check_incomplete_shape=True)
@named_parameters(
jtu.cases_from_list(
{
"testcase_name": "_{}_amin={}_amax={}".format(
jtu.format_shape_dtype_string(shape, dtype), a_min, a_max
),
"shape": shape,
"dtype": dtype,
"a_min": a_min,
"a_max": a_max,
"rng_factory": jtu.rand_default,
}
for shape in all_shapes
for dtype in minus(number_dtypes, complex_dtypes)
for a_min, a_max in [
(-1, None),
(None, 1),
(-onp.ones(1), None),
(None, onp.ones(1)),
]
+ (
[]
if onp.__version__ >= onp.lib.NumpyVersion("2.0.0")
else [(-1, 1), (-onp.ones(1), onp.ones(1))]
)
)
)
def testClipStaticBounds(self, shape, dtype, a_min, a_max, rng_factory):
rng = rng_factory()
onp_fun = lambda x: onp.clip(x, a_min=a_min, a_max=a_max)
lnp_fun = lambda x: tnp.clip(x, a_min=a_min, a_max=a_max)
args_maker = lambda: [rng(shape, dtype)]
tol_spec = {onp.float64: 2e-7}
tol = jtu.tolerance(dtype, tol_spec)
is_x32_scalar = (dtype in [onp.int32, onp.float32] and
shape in [jtu.NUMPY_SCALAR_SHAPE, ()])
# Turns check_dtypes off if is_x32_scalar is True because there is
# a weird promotion inconsistency in numpy:
# ```
# print(np.result_type(np.ones([], np.int32), 1))
# print(np.result_type(np.ones([1], np.int32), 1))
# print(np.result_type(np.int32(1), 1))
# print(np.result_type(np.int32, 1))
# print(np.result_type(np.ones([], np.float32), 1))
# print(np.result_type(np.ones([1], np.float32), 1))
# print(np.result_type(np.float32(1), 1))
# print(np.result_type(np.float32, 1))
# ```
# >>>
# int64
# int32
# int64
# int32
# float64
# float32
# float64
# float32
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker,
check_dtypes=not is_x32_scalar, tol=tol)
self._CompileAndCheck(lnp_fun, args_maker, check_dtypes=not is_x32_scalar,
atol=tol, rtol=tol, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_amin={}_amax={}".format(
jtu.format_shape_dtype_string(shape, dtype), a_min, a_max),
"shape": shape, "dtype": dtype, "a_min": a_min, "a_max": a_max,
"rng_factory": jtu.rand_default}
for shape in array_shapes + [jtu.NUMPY_SCALAR_SHAPE]
for dtype in minus(number_dtypes, complex_dtypes)
for a_min, a_max in [(-1, None), (None, 1), (-1, 1),
(-onp.ones(1), None),
(None, onp.ones(1)),
(-onp.ones(1), onp.ones(1))]))
@new_test
def testClipAsMethodStaticBounds(
self, shape, dtype, a_min, a_max, rng_factory):
rng = rng_factory()
onp_fun = lambda x: onp.clip(x, a_min=a_min, a_max=a_max)
lnp_fun = lambda x: tnp.asarray(x).clip(a_min=a_min, a_max=a_max)
args_maker = lambda: [rng(shape, dtype)]
tol_spec = {onp.float64: 2e-7}
tol = jtu.tolerance(dtype, tol_spec)
is_x32_scalar = (dtype in [onp.int32, onp.float32] and
shape in [jtu.NUMPY_SCALAR_SHAPE, ()])
# Turns check_dtypes off if is_x32_scalar is True; see the comment in
# testClipStaticBounds above for the numpy promotion inconsistency.
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker,
check_dtypes=not is_x32_scalar, tol=tol)
self._CompileAndCheck(lnp_fun, args_maker, check_dtypes=not is_x32_scalar,
atol=tol, rtol=tol, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_decimals={}".format(
jtu.format_shape_dtype_string(shape, dtype), decimals),
"shape": shape, "dtype": dtype, "decimals": decimals,
"rng_factory": jtu.rand_default}
for shape, dtype in _shape_and_dtypes(
all_shapes, minus(number_dtypes, complex_dtypes))
for decimals in [0, 1, -2]))
def testRoundStaticDecimals(self, shape, dtype, decimals, rng_factory):
rng = rng_factory()
if tnp.issubdtype(dtype, onp.integer) and decimals < 0:
self.skipTest("Integer rounding with decimals < 0 not implemented")
onp_fun = lambda x: onp.round(x, decimals=decimals)
lnp_fun = lambda x: tnp.round(x, decimals=decimals)
args_maker = lambda: [rng(shape, dtype)]
tol = {
# TODO(b/154768983): tnp.bfloat16: 5e-2,
onp.float16: 1e-2}
check_dtypes = shape is not jtu.PYTHON_SCALAR_SHAPE
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker,
check_dtypes=check_dtypes, tol=tol)
self._CompileAndCheck(lnp_fun, args_maker, check_dtypes=check_dtypes,
atol=tol, rtol=tol, check_incomplete_shape=True)
def testOperatorRound(self):
self.assertAllClose(round(onp.float32(7.532), 1),
round(tnp.float32(7.5), 1), check_dtypes=True)
self.assertAllClose(round(onp.float32(1.234), 2),
round(tnp.float32(1.234), 2), check_dtypes=True)
self.assertAllClose(round(onp.float32(1.234)),
round(tnp.float32(1.234)), check_dtypes=False)
self.assertAllClose(
round(onp.float32(7.532), 1),
round(tnp.array(7.5, tnp.float32), 1),
check_dtypes=True,
)
self.assertAllClose(
round(onp.float32(1.234), 2),
round(tnp.array(1.234, tnp.float32), 2),
check_dtypes=True,
)
self.assertAllClose(round(onp.float32(1.234)),
round(tnp.array(1.234, tnp.float32)),
check_dtypes=False)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_shape={}_mode={}_rpadwidth={}_rconstantvalues={}".format(
jtu.format_shape_dtype_string(shape, dtype), mode, pad_width_rank,
constant_values_rank),
"shape": shape, "dtype": dtype, "mode": mode,
"pad_width_rank": pad_width_rank,
"constant_values_rank": constant_values_rank,
"rng_factory": jtu.rand_default,
"irng_factory": partial(jtu.rand_int, 3)}
for mode, constant_values_rank, shapes in [
('constant', 0, all_shapes),
('constant', 1, all_shapes),
('constant', 2, all_shapes),
('symmetric', None, nonempty_shapes),
('reflect', None, nonempty_shapes),
('wrap', None, nonempty_shapes),
]
for shape, dtype in _shape_and_dtypes(shapes, all_dtypes)
for pad_width_rank in range(3)))
@jtu.disable
def testPad(self, shape, dtype, mode, pad_width_rank, constant_values_rank,
rng_factory, irng_factory):
rng = rng_factory()
irng = irng_factory()
pad_width = irng([len(shape), 2][2 - pad_width_rank:], onp.int32)
def onp_fun(x, kwargs):
if pad_width.size == 0:
return x
return onp.pad(x, pad_width, mode=mode, **kwargs)
def lnp_fun(x, kwargs):
return tnp.pad(x, pad_width, mode=mode, **kwargs)
def args_maker():
kwargs = {}
if constant_values_rank:
kwargs["constant_values"] = rng(
[len(shape), 2][2 - constant_values_rank:], dtype)
return rng(shape, dtype), kwargs
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker,
check_dtypes=shape is not jtu.PYTHON_SCALAR_SHAPE)
self._CompileAndCheck(lnp_fun, args_maker, check_dtypes=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_shape=[{}]_reps={}".format(
jtu.format_shape_dtype_string(shape, dtype), reps),
"shape": shape, "dtype": dtype, "reps": reps,
"rng_factory": jtu.rand_default}
for reps in [(), (2,), (3, 4), (2, 3, 4)]
for shape, dtype in _shape_and_dtypes(all_shapes, default_dtypes)
))
def testTile(self, shape, dtype, reps, rng_factory):
rng = rng_factory()
onp_fun = lambda arg: onp.tile(arg, reps)
lnp_fun = lambda arg: tnp.tile(arg, reps)
args_maker = lambda: [rng(shape, dtype)]
tol_spec = {onp.float64: 2e-7}
tol = jtu.tolerance(dtype, tol_spec)
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker,
check_dtypes=shape is not jtu.PYTHON_SCALAR_SHAPE,
tol=tol)
self._CompileAndCheck(lnp_fun, args_maker, check_dtypes=True, atol=tol,
rtol=tol)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_axis={}_baseshape=[{}]_dtypes=[{}]".format(
axis, ",".join(str(d) for d in base_shape),
",".join(onp.dtype(dtype).name for dtype in arg_dtypes)),
"axis": axis, "base_shape": base_shape, "arg_dtypes": arg_dtypes,
"rng_factory": jtu.rand_default}
for num_arrs in [3]
for arg_dtypes in CombosWithReplacement(default_dtypes, num_arrs)
for base_shape in [(4,), (3, 4), (2, 3, 4)]
for axis in range(-len(base_shape)+1, len(base_shape))))
def testConcatenate(self, axis, base_shape, arg_dtypes, rng_factory):
rng = rng_factory()
wrapped_axis = axis % len(base_shape)
shapes = [base_shape[:wrapped_axis] + (size,) + base_shape[wrapped_axis+1:]
for size, _ in zip(itertools.cycle([3, 1, 4]), arg_dtypes)]
def onp_fun(*args):
# TODO(nareshmodi): enable once bfloat16 has better support
# args = [x if x.dtype != bfloat16 else x.astype(onp.float32)
# for x in args]
dtype = functools.reduce(tnp.promote_types, arg_dtypes)
return onp.concatenate(args, axis=axis).astype(dtype)
lnp_fun = lambda *args: tnp.concatenate(args, axis=axis)
def args_maker():
return [rng(shape, dtype) for shape, dtype in zip(shapes, arg_dtypes)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(lnp_fun, args_maker, check_dtypes=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_axis={}_baseshape=[{}]_dtypes=[{}]".format(
axis, ",".join(str(d) for d in base_shape),
",".join(onp.dtype(dtype).name for dtype in arg_dtypes)),
"axis": axis, "base_shape": base_shape, "arg_dtypes": arg_dtypes,
"rng_factory": jtu.rand_default}
for arg_dtypes in CombosWithReplacement(default_dtypes, 2)
for base_shape in [(4,), (3, 4), (2, 3, 4)]
for axis in range(-len(base_shape)+1, len(base_shape))))
def testAppend(self, axis, base_shape, arg_dtypes, rng_factory):
rng = rng_factory()
wrapped_axis = axis % len(base_shape)
shapes = [base_shape[:wrapped_axis] + (size,) + base_shape[wrapped_axis+1:]
for size, _ in zip(itertools.cycle([3, 1, 4]), arg_dtypes)]
def onp_fun(arr, values):
arr = arr.astype(onp.float32) if tnp.bfloat16 == arr.dtype else arr
values = (
values.astype(onp.float32)
if tnp.bfloat16 == values.dtype else values)
out = onp.append(arr, values, axis=axis)
return out.astype(tnp.promote_types(*arg_dtypes))
lnp_fun = lambda arr, values: tnp.append(arr, values, axis=axis)
def args_maker():
return [rng(shape, dtype) for shape, dtype in zip(shapes, arg_dtypes)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_shape=[{}]_axis={}_repeats={}".format(
jtu.format_shape_dtype_string(shape, dtype), axis, repeats),
"axis": axis, "shape": shape, "dtype": dtype, "repeats": repeats,
"rng_factory": jtu.rand_default}
for repeats in [0, 1, 2]
for shape, dtype in _shape_and_dtypes(all_shapes, default_dtypes)
for axis in [None] + list(range(-len(shape), len(shape)))))
def testRepeat(self, axis, shape, dtype, repeats, rng_factory):
rng = rng_factory()
onp_fun = lambda arg: onp.repeat(arg, repeats=repeats, axis=axis)
onp_fun = _promote_like_lnp(onp_fun)
lnp_fun = lambda arg: tnp.repeat(arg, repeats=repeats, axis=axis)
args_maker = lambda: [rng(shape, dtype)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=False)
def testIssue1233(self):
'''
Follows the numpy test suite's `test_repeat` at https://github.com/numpy/numpy/blob/master/numpy/core/tests/test_multiarray.py
'''
# pylint: disable=bad-whitespace
tol = 1e-5
def test_single(m, args_maker, repeats, axis):
lax_ans = tnp.repeat(m, repeats, axis)
numpy_ans = onp.repeat(m, repeats, axis)
self.assertAllClose(lax_ans, numpy_ans, check_dtypes=True, rtol=tol, atol=tol)
lnp_fun = lambda arg: tnp.repeat(arg, repeats=repeats, axis=axis)
# Turns off XLA check because there are no XLA kernels for `Where` used by
# tf.repeat (b/169192730).
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=False,
check_experimental_compile=False, check_xla_forced_compile=False)
m = tnp.array([1,2,3,4,5,6])
args_maker = lambda: [m]
for repeats in [
2,
[1, 3, 2, 1, 1, 2],
[1, 3, 0, 1, 1, 2],
[2],
tnp.array([1, 3, 2, 1, 1, 2]),
tnp.array([2]),
]:
test_single(m, args_maker, repeats, None)
m_rect = m.reshape((2,3))
args_maker = lambda: [m_rect]
for repeats in [2, [2,1], [2], tnp.array([2,1]), tnp.array([2])]:
test_single(m_rect, args_maker, repeats, axis=0)
for repeats in [2, [1,3,2], [2], tnp.array([1,3,2]), tnp.array([2])]:
test_single(m_rect, args_maker, repeats, axis=1)
# pylint: enable=bad-whitespace
@named_parameters(jtu.cases_from_list(
{"testcase_name": "op={}_shape=[{}]_axis={}_out_dtype={}".format(
op, jtu.format_shape_dtype_string(shape, dtype), axis, out_dtype),
"axis": axis, "shape": shape, "dtype": dtype, "out_dtype": out_dtype,
"rng_factory": jtu.rand_default, "lnp_op": getattr(tnp, op),
"onp_op": getattr(onp, op)}
for op in ["cumsum", "cumprod"]
for dtype in default_dtypes
for out_dtype in default_dtypes
for shape in all_shapes
for axis in [None] + list(range(-len(shape), len(shape)))))
def testCumSumProd(self, axis, shape, dtype, out_dtype, onp_op, lnp_op, rng_factory):
rng = rng_factory()
onp_fun = lambda arg: onp_op(arg, axis=axis, dtype=out_dtype)
lnp_fun = lambda arg: lnp_op(arg, axis=axis, dtype=out_dtype)
args_maker = lambda: [rng(shape, dtype)]
tol = max(jtu.tolerance(dtype), jtu.tolerance(out_dtype))
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True,
tol=tol)
# XLA lacks int64 Cumsum/Cumprod kernels (b/168841378).
check_xla = out_dtype != onp.int64
rtol = None
if out_dtype == onp.float16:
rtol = 2e-3
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, rtol=rtol,
check_incomplete_shape=True,
check_experimental_compile=check_xla,
check_xla_forced_compile=check_xla)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_dtype={}_m={}_n={}_k={}".format(
onp.dtype(dtype).name, m, n, k),
"m": m, "n": n, "k": k, "dtype": dtype, "rng_factory": jtu.rand_default}
for dtype in default_dtypes
for n in [0, 4]
for m in [None, 0, 1, 3, 4]
for k in list(range(-4, 4))))
def testTri(self, m, n, k, dtype, rng_factory):
rng = rng_factory()
onp_fun = lambda: onp.tri(n, M=m, k=k, dtype=dtype)
lnp_fun = lambda: tnp.tri(n, M=m, k=k, dtype=dtype)
args_maker = lambda: []
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_op={}_shape={}_k={}".format(
op, jtu.format_shape_dtype_string(shape, dtype), k),
"dtype": dtype, "shape": shape, "op": op, "k": k,
"rng_factory": jtu.rand_default}
for dtype in default_dtypes
for shape in [shape for shape in all_shapes if len(shape) >= 2]
for op in ["tril", "triu"]
for k in list(range(-3, 3))))
def testTriLU(self, dtype, shape, op, k, rng_factory):
rng = rng_factory()
onp_fun = lambda arg: getattr(onp, op)(arg, k=k)
lnp_fun = lambda arg: getattr(tnp, op)(arg, k=k)
args_maker = lambda: [rng(shape, dtype)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
# Incomplete shape support is not implemented at the moment.
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=False)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_ndim={}_n={}".format(ndim, n),
"ndim": ndim, "n": n}
for ndim in [0, 1, 4]
for n in [0, 1, 7]))
def testDiagIndices(self, ndim, n):
onp.testing.assert_equal(onp.diag_indices(n, ndim),
tnp.diag_indices(n, ndim))
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_shape={}_k={}".format(
jtu.format_shape_dtype_string(shape, dtype), k),
"dtype": dtype, "shape": shape, "k": k, "rng_factory": jtu.rand_default}
for dtype in default_dtypes
for shape in [shape for shape in all_shapes if len(shape) in (1, 2)]
for k in list(range(-4, 4))))
def testDiag(self, shape, dtype, k, rng_factory):
rng = rng_factory()
onp_fun = lambda arg: onp.diag(arg, k)
lnp_fun = lambda arg: tnp.diag(arg, k)
args_maker = lambda: [rng(shape, dtype)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_shape={}_offset={}_axis1={}_axis2={}".format(
jtu.format_shape_dtype_string(shape, dtype), offset, axis1, axis2),
"dtype": dtype, "shape": shape, "offset": offset, "axis1": axis1,
"axis2": axis2, "rng_factory": jtu.rand_default}
for dtype in default_dtypes
for shape in [shape for shape in all_shapes if len(shape) >= 2]
for axis1 in range(-len(shape), len(shape))
for axis2 in [a for a in range(-len(shape), len(shape))
if a % len(shape) != axis1 % len(shape)]
for offset in list(range(-4, 4))))
def testDiagonal(self, shape, dtype, offset, axis1, axis2, rng_factory):
rng = rng_factory()
onp_fun = lambda arg: onp.diagonal(arg, offset, axis1, axis2)
lnp_fun = lambda arg: tnp.diagonal(arg, offset, axis1, axis2)
args_maker = lambda: [rng(shape, dtype)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_dtype={}_n={}".format(onp.dtype(dtype).name, n),
"dtype": dtype, "n": n}
for dtype in default_dtypes
for n in list(range(4))))
def testIdentity(self, n, dtype):
onp_fun = lambda: onp.identity(n, dtype)
lnp_fun = lambda: tnp.identity(n, dtype)
args_maker = lambda: []
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_shape={}_dtype_{}_offset={}_axis1={}_axis2={}".format(
jtu.format_shape_dtype_string(shape, dtype),
out_dtype, offset, axis1, axis2),
"dtype": dtype, "out_dtype": out_dtype, "shape": shape, "offset": offset,
"axis1": axis1, "axis2": axis2, "rng_factory": jtu.rand_default}
for dtype in default_dtypes
for out_dtype in [None] + number_dtypes
for shape in [shape for shape in all_shapes if len(shape) >= 2]
for axis1 in range(-len(shape), len(shape))
for axis2 in range(-len(shape), len(shape))
if (axis1 % len(shape)) != (axis2 % len(shape))
for offset in list(range(-4, 4))))
def testTrace(self, shape, dtype, out_dtype, offset, axis1, axis2, rng_factory):
rng = rng_factory()
def onp_fun(arg):
if out_dtype == tnp.bfloat16:
return onp.trace(arg, offset, axis1, axis2, onp.float32).astype(
tnp.bfloat16
)
else:
return onp.trace(arg, offset, axis1, axis2, out_dtype)
lnp_fun = lambda arg: tnp.trace(arg, offset, axis1, axis2, out_dtype)
args_maker = lambda: [rng(shape, dtype)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_axis={}".format(
jtu.format_test_name_suffix("", [shape] * len(dtypes), dtypes), axis),
"shape": shape, "axis": axis, "dtypes": dtypes, "rng_factory": rng_factory}
for dtypes in [
[onp.float32],
[onp.float32, onp.float32],
[onp.float32, onp.int32, onp.float32],
[onp.float32, onp.int64, onp.float32],
[onp.float32, onp.int32, onp.float64],
]
for shape in [(), (2,), (3, 4), (1, 100)]
for axis in range(-len(shape), len(shape) + 1)
for rng_factory in [jtu.rand_default]))
def testStack(self, shape, axis, dtypes, rng_factory):
rng = rng_factory()
args_maker = lambda: [[rng(shape, dtype) for dtype in dtypes]]
onp_fun = _promote_like_lnp(partial(onp.stack, axis=axis))
lnp_fun = partial(tnp.stack, axis=axis)
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_op={}_{}".format(
op, jtu.format_test_name_suffix("", [shape] * len(dtypes), dtypes)),
"shape": shape, "op": op, "dtypes": dtypes, "rng_factory": rng_factory}
for op in ["hstack", "vstack", "dstack"]
for dtypes in [
[onp.float32],
[onp.float32, onp.float32],
[onp.float32, onp.int32, onp.float32],
[onp.float32, onp.int64, onp.float32],
[onp.float32, onp.int32, onp.float64],
]
for shape in [(), (2,), (3, 4), (1, 100), (2, 3, 4)]
for rng_factory in [jtu.rand_default]))
def testHVDStack(self, shape, op, dtypes, rng_factory):
rng = rng_factory()
args_maker = lambda: [[rng(shape, dtype) for dtype in dtypes]]
onp_fun = _promote_like_lnp(getattr(onp, op))
lnp_fun = getattr(tnp, op)
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_inshape={}_outdtype={}".format(
jtu.format_shape_dtype_string(shape, fill_value_dtype),
onp.dtype(out_dtype).name if out_dtype else "None"),
"shape": shape, "fill_value_dtype": fill_value_dtype,
"out_dtype": out_dtype, "rng_factory": jtu.rand_default}
for shape in array_shapes + [3, onp.array(7, dtype=onp.int32)]
for fill_value_dtype in default_dtypes
for out_dtype in [None] + default_dtypes))
def testFull(self, shape, fill_value_dtype, out_dtype, rng_factory):
rng = rng_factory()
onp_fun = lambda fill_value: onp.full(shape, fill_value, dtype=out_dtype)
lnp_fun = lambda fill_value: tnp.full(shape, fill_value, dtype=out_dtype)
args_maker = lambda: [rng((), fill_value_dtype)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(
jtu.cases_from_list(
{"testcase_name": ("_op={}_shape={}_dtype={}").format(op, shape, dtype),
"onp_op": getattr(onp, op), "lnp_op": getattr(tnp, op),
"shape": shape, "dtype": dtype}
for op in ["zeros", "ones"]
for shape in [2, (), (2,), (3, 0), onp.array((4, 5, 6), dtype=onp.int32),
onp.array(4, dtype=onp.int32)]
for dtype in all_dtypes))
def testZerosOnes(self, onp_op, lnp_op, shape, dtype):
rng = jtu.rand_default()
def args_maker(): return []
onp_op = partial(onp_op, shape, dtype)
lnp_op = partial(lnp_op, shape, dtype)
self._CheckAgainstNumpy(onp_op, lnp_op, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_op, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_inshape={}_filldtype={}_outdtype={}".format(
jtu.format_shape_dtype_string(shape, in_dtype),
onp.dtype(fill_value_dtype).name,
onp.dtype(out_dtype).name),
"shape": shape, "in_dtype": in_dtype,
"fill_value_dtype": fill_value_dtype, "out_dtype": out_dtype,
"rng_factory": jtu.rand_default}
for shape in array_shapes
for in_dtype in default_dtypes
for fill_value_dtype in default_dtypes
for out_dtype in default_dtypes))
def testFullLike(self, shape, in_dtype, fill_value_dtype, out_dtype, rng_factory):
rng = rng_factory()
onp_fun = lambda x, fill_value: onp.full_like(
x, fill_value, dtype=out_dtype
)
lnp_fun = lambda x, fill_value: tnp.full_like(
x, fill_value, dtype=out_dtype
)
args_maker = lambda: [rng(shape, in_dtype), rng((), fill_value_dtype)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_axis={}_{}sections".format(
jtu.format_shape_dtype_string(shape, dtype), axis, num_sections),
"shape": shape, "num_sections": num_sections, "axis": axis,
"dtype": dtype, "rng_factory": jtu.rand_default}
for shape, axis, num_sections in [
((3,), 0, 3), ((12,), 0, 3), ((12, 4), 0, 4), ((12, 4), 1, 2),
((2, 3, 4), -1, 2), ((2, 3, 4), -2, 3)]
for dtype in default_dtypes))
def testSplitStaticInt(self, shape, num_sections, axis, dtype, rng_factory):
rng = rng_factory()
onp_fun = lambda x: onp.split(x, num_sections, axis=axis)
lnp_fun = lambda x: tnp.split(x, num_sections, axis=axis)
args_maker = lambda: [rng(shape, dtype)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_axis={}_{}sections".format(
jtu.format_shape_dtype_string(shape, dtype), axis, num_sections),
"shape": shape, "num_sections": num_sections, "axis": axis,
"dtype": dtype, "rng_factory": jtu.rand_default}
for shape, axis, num_sections in [
((12, 4), 0, 4), ((12, 4), 1, 2),
((2, 3, 4), 2, 2), ((4, 3, 4), 0, 2)]
for dtype in default_dtypes))
def testHVDSplit(self, shape, num_sections, axis, dtype, rng_factory):
rng = rng_factory()
def fn(module, axis):
if axis == 0:
return module.vsplit
elif axis == 1:
return module.hsplit
else:
assert axis == 2
return module.dsplit
onp_fun = lambda x: fn(onp, axis)(x, num_sections)
lnp_fun = lambda x: fn(tnp, axis)(x, num_sections)
args_maker = lambda: [rng(shape, dtype)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_inshape={}_outshape={}_order={}".format(
jtu.format_shape_dtype_string(arg_shape, dtype),
jtu.format_shape_dtype_string(out_shape, dtype),
order),
"arg_shape": arg_shape, "out_shape": out_shape, "dtype": dtype,
"order": order, "rng_factory": jtu.rand_default}
for dtype in default_dtypes
for order in ["C", "F"]
for arg_shape, out_shape in [
(jtu.NUMPY_SCALAR_SHAPE, (1, 1, 1)),
((), (1, 1, 1)),
((7, 0), (0, 42, 101)),
((3, 4), 12),
((3, 4), (12,)),
((3, 4), -1),
((2, 1, 4), (-1,)),
((2, 2, 4), (2, 8))
]))
def testReshape(self, arg_shape, out_shape, dtype, order, rng_factory):
rng = rng_factory()
onp_fun = lambda x: onp.reshape(x, out_shape, order=order)
lnp_fun = lambda x: tnp.reshape(x, out_shape, order=order)
args_maker = lambda: [rng(arg_shape, dtype)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_inshape={}_outshape={}".format(
jtu.format_shape_dtype_string(arg_shape, dtype),
jtu.format_shape_dtype_string(out_shape, dtype)),
"arg_shape": arg_shape, "out_shape": out_shape, "dtype": dtype,
"rng_factory": jtu.rand_default}
for dtype in default_dtypes
for arg_shape, out_shape in [
((7, 0), (0, 42, 101)),
((2, 1, 4), (-1,)),
((2, 2, 4), (2, 8))
]))
def testReshapeMethod(self, arg_shape, out_shape, dtype, rng_factory):
rng = rng_factory()
onp_fun = lambda x: onp.reshape(x, out_shape)
lnp_fun = lambda x: x.reshape(*out_shape)
args_maker = lambda: [rng(arg_shape, dtype)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_inshape={}_expanddim={}".format(
jtu.format_shape_dtype_string(arg_shape, dtype), dim),
"arg_shape": arg_shape, "dtype": dtype, "dim": dim,
"rng_factory": jtu.rand_default}
for arg_shape in [(), (3,), (3, 4)]
for dtype in default_dtypes
for dim in range(-len(arg_shape)+1, len(arg_shape))))
def testExpandDimsStaticDim(self, arg_shape, dtype, dim, rng_factory):
rng = rng_factory()
onp_fun = lambda x: onp.expand_dims(x, dim)
lnp_fun = lambda x: tnp.expand_dims(x, dim)
args_maker = lambda: [rng(arg_shape, dtype)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_inshape={}_axes=({},{})".format(
jtu.format_shape_dtype_string(arg_shape, dtype), ax1, ax2),
"arg_shape": arg_shape, "dtype": dtype, "ax1": ax1, "ax2": ax2,
"rng_factory": jtu.rand_default}
for arg_shape, ax1, ax2 in [
((3, 4), 0, 1), ((3, 4), 1, 0), ((3, 4, 5), 1, 2),
((3, 4, 5), -1, -2), ((3, 4, 5), 0, 1)]
for dtype in default_dtypes))
def testSwapAxesStaticAxes(self, arg_shape, dtype, ax1, ax2, rng_factory):
rng = rng_factory()
onp_fun = lambda x: onp.swapaxes(x, ax1, ax2)
lnp_fun = lambda x: tnp.swapaxes(x, ax1, ax2)
args_maker = lambda: [rng(arg_shape, dtype)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_shape={}_axes=({},{})".format(
jtu.format_shape_dtype_string(arg_shape, dtype), source, destination),
"arg_shape": arg_shape, "dtype": dtype, "source": source,
"destination": destination, "rng_factory": jtu.rand_default}
for arg_shape, source, destination in [
(tuple(range(6)), (0, 2), (3, 5)),
(tuple(range(6)), (0, 2), (-1, -3)),
(tuple(range(6)), (-6, -4),(3, 5)),
(tuple(range(6)), (-6, -4), (-1, -3)),
(tuple(range(6)), 0, 4),
(tuple(range(6)), -6, -2),
(tuple(range(6)), tuple(range(6)), tuple(range(6))),
(tuple(range(6)), tuple(range(6)), tuple(reversed(range(6)))),
(tuple(range(6)), (), ()),
] for dtype in default_dtypes))
@new_test
def testMoveaxisStaticAxes(self, arg_shape, dtype, source, destination,
rng_factory):
rng = rng_factory()
onp_fun = lambda x: onp.moveaxis(x, source, destination)
lnp_fun = lambda x: tnp.moveaxis(x, source, destination)
args_maker = lambda: [rng(arg_shape, dtype)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_inshape={}_axis={}".format(
jtu.format_shape_dtype_string(arg_shape, dtype), ax),
"arg_shape": arg_shape, "dtype": dtype, "ax": ax,
"rng_factory": jtu.rand_default}
for arg_shape, ax in [
((3, 1), None),
((3, 1), 1),
((1, 3, 1), (0, 2)),
((1, 4, 1), (0,))]
for dtype in default_dtypes))
def testSqueeze(self, arg_shape, dtype, ax, rng_factory):
rng = rng_factory()
onp_fun = lambda x: onp.squeeze(x, ax)
lnp_fun = lambda x: tnp.squeeze(x, ax)
args_maker = lambda: [rng(arg_shape, dtype)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_shape={}_axis={}_weights={}_returned={}".format(
jtu.format_shape_dtype_string(shape, dtype),
axis,
(None if weights_shape is None else jtu.format_shape_dtype_string(weights_shape, dtype)),
returned),
"rng_factory": jtu.rand_default, "shape": shape, "dtype": dtype, "axis": axis,
"weights_shape": weights_shape, "returned": returned}
for shape, dtype in _shape_and_dtypes(nonempty_shapes, number_dtypes)
for axis in set(range(-len(shape), len(shape))) | set([None])
# `weights_shape` is either `None`, same as the averaged axis, or same as
# that of the input
for weights_shape in ([None, shape] if axis is None or len(shape) == 1
else [None, (shape[axis],), shape])
for returned in [False, True]))
def testAverage(self, shape, dtype, axis, weights_shape, returned, rng_factory):
rng = rng_factory()
if weights_shape is None:
onp_fun = lambda x: onp.average(x, axis, returned=returned)
lnp_fun = lambda x: tnp.average(x, axis, returned=returned)
args_maker = lambda: [rng(shape, dtype)]
else:
onp_fun = lambda x, weights: onp.average(x, axis, weights, returned)
lnp_fun = lambda x, weights: tnp.average(x, axis, weights, returned)
args_maker = lambda: [rng(shape, dtype), rng(weights_shape, dtype)]
onp_fun = _promote_like_lnp(onp_fun, inexact=True)
tol = {
# TODO(b/154768983): tnp.bfloat16: 1e-1,
onp.float16: 1e-1, onp.float32: 1e-3, onp.float64: 2e-7,
onp.complex64: 1e-3, onp.complex128: 1e-10,
}
check_dtypes = shape is not jtu.PYTHON_SCALAR_SHAPE
try:
self._CheckAgainstNumpy(
onp_fun, lnp_fun, args_maker, check_dtypes=check_dtypes, tol=tol)
except ZeroDivisionError:
self.skipTest("don't support checking for ZeroDivisionError")
self._CompileAndCheck(lnp_fun, args_maker, check_dtypes=check_dtypes,
rtol=tol, atol=tol, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_arg{}_ndmin={}".format(i, ndmin),
"arg": arg, "ndmin": ndmin, "dtype": dtype}
for i, (arg, dtype) in enumerate([
([True, False, True], tnp.bool_),
(3., tnp.float64),
([1, 2, 3], tnp.int_),
([1., 2., 3.], tnp.float64),
([[1, 2], [3, 4], [5, 6]], tnp.int_),
([[1, 2.], [3, 4], [5, 6]], tnp.float64),
([[1., 2j], [3., 4.], [5., 6.]], tnp.complex128),
([[3, onp.array(2, dtype=tnp.float64), 1],
onp.arange(3., dtype=tnp.float64)], tnp.float64), # pylint: disable=bad-continuation
])
for ndmin in [None, onp.ndim(arg), onp.ndim(arg) + 1, onp.ndim(arg) + 2]))
def testArray(self, arg, ndmin, dtype):
args_maker = lambda: [arg]
dtype = tnp.canonicalize_dtype(dtype)
if ndmin is not None:
onp_fun = partial(onp.array, ndmin=ndmin, dtype=dtype)
lnp_fun = partial(tnp.array, ndmin=ndmin)
else:
onp_fun = partial(onp.array, dtype=dtype)
lnp_fun = tnp.array
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(lnp_fun, args_maker, check_dtypes=True,
check_incomplete_shape=True, static_argnums=[0])
def testIssue121(self):
assert not onp.isscalar(tnp.array(3))
@jtu.disable
def testArrayMethod(self):
class arraylike(object):
dtype = onp.float32
def __array__(self, dtype=None):
return 3.0
a = arraylike()
ans = tnp.array(a)
assert ans == 3.0
@jtu.skip_on_devices("tpu") # TODO(b/32368900): TPUs don't support uint8 yet.
@jtu.disable
def testMemoryView(self):
ans = tnp.array(bytearray(b"\x2a"))
self.assertAllClose(
ans, onp.array([0x2A], dtype=onp.uint8), check_dtypes=True
)
def testAllClose(self):
rng = onp.random.RandomState(0)
x = rng.randn(2, 2)
y = rng.randn(2)
def same(list1, list2):
allclose = functools.partial(tnp.allclose, atol=1e-3, rtol=1e-3)
elements_close = list(map(allclose, list1, list2))
return tnp.all(tnp.array(elements_close))
csame = nje.jit(same)
a1 = same((x, y), (x, y))
a2 = csame((x, y), (x, y))
a3 = csame((x, y), (x, 2 * y))
self.assertTrue(a1)
self.assertTrue(a2)
self.assertFalse(a3)
@jtu.skip_on_devices("tpu") # TODO(mattjj): investigate this failure
@jtu.disable
def testOnesBroadcastingConstantHandler(self):
# TODO(mattjj): update this test for jax3
self.skipTest("test needs jax3 update")
def fun(x):
ones = tnp.ones((3, 4))
assert isinstance(ones, onp.ndarray) and ones.strides == (0, 0)
# To check that the constant handler generates a Broadcast for stride-zero
# arrays, we monkey-patch the client instance.
# TODO(mattjj): once we have better HLO dumping and inspecting facilities,
# we can check the HLO more directly.
c = x._node.c
Broadcast = c.Broadcast # pylint: disable=invalid-name
was_called = []
c.Broadcast = lambda *args: was_called.append(True) or Broadcast(*args)
out = x + ones # the ndarray constant handler should call Broadcast here
assert was_called, "Broadcast was not called."
return out
fun = api.jit(fun)
out_val = fun(tnp.ones(4))
self.assertAllClose(out_val, onp.full((3, 4), 2.), check_dtypes=False)
def testZeroStridesConstantHandler(self):
raw_const = onp.random.RandomState(0).randn(1, 2, 1, 1, 5, 1)
const = onp.broadcast_to(raw_const, (3, 2, 3, 4, 5, 6))
def fun(x):
return x * const
fun = nje.jit(fun)
out_val = fun(3.)
self.assertAllClose(out_val, 3. * const, check_dtypes=False)
def testIsInstanceNdarrayDuringTracing(self):
arr = onp.ones(3)
@nje.jit
def f(x):
self.assertIsInstance(x, tnp.ndarray)
return tnp.sum(x)
f(arr)
@jtu.disable
def testNonArrayErrorMessage(self):
x = [1., 2.]
y = onp.array([3., 4.])
def g(x, y):
return tnp.add(x, y)
def f(x, y):
return tnp.dot(x, y)
self.assertRaises(TypeError, lambda: g(x, y))
self.assertRaises(TypeError, lambda: f(x, y))
self.assertRaises(TypeError, lambda: api.jit(g)(x, y))
self.assertRaises(TypeError, lambda: api.jit(f)(x, y))
@jtu.disable
def testAbstractionErrorMessage(self):
@api.jit
def f(x, n):
for _ in range(n):
x = x * x
return x
self.assertRaises(TypeError, lambda: f(3., 3))
@api.jit
def g(x):
if x > 0.:
return x * 2
else:
return x + 2
self.assertRaises(TypeError, lambda: g(3.))
@jtu.disable
def testTracingPrimitiveWithNoTranslationErrorMessage(self):
# TODO(mattjj): update this for jax3
self.skipTest("test needs jax3 update")
foo = tnp._not_implemented(lambda x: x)
# No error if there's no tracing.
foo(onp.arange(3))
cfoo = api.jit(foo)
self.assertRaises(NotImplementedError, lambda: cfoo(onp.arange(3)))
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_axis={}".format(
jtu.format_shape_dtype_string(shape, dtype), axis),
"rng_factory": rng_factory, "shape": shape, "dtype": dtype, "axis": axis}
for shape in [(3,), (2, 3)]
for dtype in default_dtypes
for axis in list(range(-len(shape), len(shape))) + [None] # Test negative axes
for rng_factory in [jtu.rand_default]))
def testFlip(self, shape, dtype, axis, rng_factory):
rng = rng_factory()
args_maker = self._GetArgsMaker(rng, [shape], [dtype])
lnp_op = lambda x: tnp.flip(x, axis)
onp_op = lambda x: onp.flip(x, axis)
self._CheckAgainstNumpy(onp_op, lnp_op, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_op, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}".format(
jtu.format_shape_dtype_string(shape, dtype)),
"rng_factory": rng_factory, "shape": shape, "dtype": dtype}
for shape in [(3,), (2, 3), (3, 2, 4)]
for dtype in default_dtypes
for rng_factory in [jtu.rand_default]))
def testFlipud(self, shape, dtype, rng_factory):
rng = rng_factory()
args_maker = self._GetArgsMaker(rng, [shape], [dtype])
lnp_op = lambda x: tnp.flipud(x)
onp_op = lambda x: onp.flipud(x)
self._CheckAgainstNumpy(onp_op, lnp_op, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_op, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}".format(
jtu.format_shape_dtype_string(shape, dtype)),
"rng_factory": rng_factory, "shape": shape, "dtype": dtype}
for shape in [(3, 2), (2, 3), (3, 2, 4)]
for dtype in default_dtypes
for rng_factory in [jtu.rand_default]))
def testFliplr(self, shape, dtype, rng_factory):
rng = rng_factory()
args_maker = self._GetArgsMaker(rng, [shape], [dtype])
lnp_op = lambda x: tnp.fliplr(x)
onp_op = lambda x: onp.fliplr(x)
self._CheckAgainstNumpy(onp_op, lnp_op, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_op, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_k={}_axes={}".format(
jtu.format_shape_dtype_string(shape, dtype), k, axes),
"rng_factory": rng_factory, "shape": shape, "dtype": dtype, "k": k, "axes": axes}
for shape, axes in [
[(2, 3), (0, 1)],
[(2, 3), (1, 0)],
[(4, 3, 2), (0, 2)],
[(4, 3, 2), (2, 1)],
]
for k in range(-3, 4)
for dtype in default_dtypes
for rng_factory in [jtu.rand_default]))
def testRot90(self, shape, dtype, k, axes, rng_factory):
rng = rng_factory()
args_maker = self._GetArgsMaker(rng, [shape], [dtype])
lnp_op = lambda x: tnp.rot90(x, k, axes)
onp_op = lambda x: onp.rot90(x, k, axes)
self._CheckAgainstNumpy(onp_op, lnp_op, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_op, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_k={}_axes={}".format(
jtu.format_shape_dtype_string(shape, dtype), k, axes),
"rng_factory": rng_factory, "shape": shape, "dtype": dtype, "k": k,
"axes": axes}
for shape, axes in [
[(2, 3), (-2, -1)],
[(2, 3), (-2, 1)],
[(4, 3, 2), (-1, -2)],
[(4, 3, 2), (2, -2)],
]
for k in range(-3, 4)
for dtype in default_dtypes
for rng_factory in [jtu.rand_default]))
@new_test
  # This is kept as a separate test from testRot90 only because we would
  # like to measure coverage directly against the existing baseline. Once we
  # stop measuring that, it can be merged into the test above.
def testRot90Additional(self, shape, dtype, k, axes, rng_factory):
rng = rng_factory()
args_maker = self._GetArgsMaker(rng, [shape], [dtype])
lnp_op = lambda x: tnp.rot90(x, k, axes)
onp_op = lambda x: onp.rot90(x, k, axes)
self._CheckAgainstNumpy(onp_op, lnp_op, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_op, args_maker, check_dtypes=True, check_incomplete_shape=True)
# TODO(mattjj): test infix operator overrides
def testRavel(self):
rng = onp.random.RandomState(0)
args_maker = lambda: [rng.randn(3, 4).astype("float32")]
self._CompileAndCheck(lambda x: x.ravel(), args_maker, check_dtypes=True,
check_incomplete_shape=True)
def testAstype(self):
rng = onp.random.RandomState(0)
args_maker = lambda: [rng.randn(3, 4).astype("float32")]
op = lambda x: x.astype(tnp.int32)
self._CheckAgainstNumpy(op, op, args_maker, check_dtypes=True)
self._CompileAndCheck(
op, args_maker, check_dtypes=True, check_incomplete_shape=True)
# TODO(mattjj): test other ndarray-like method overrides
def testOnpMean(self):
# from https://github.com/google/jax/issues/125
x = tnp.add(tnp.eye(3, dtype=tnp.float64), 0.)
ans = onp.mean(x)
self.assertAllClose(ans, onp.array(1./3), check_dtypes=False)
@jtu.disable
def testArangeOnFloats(self):
# from https://github.com/google/jax/issues/145
expected = onp.arange(0.0, 1.0, 0.1, dtype=tnp.float64)
ans = tnp.arange(0.0, 1.0, 0.1)
self.assertAllClose(expected, ans, check_dtypes=True)
def testSortManually(self):
def _test(*args, **kwargs):
raw_ans = tnp.sort(*args, **kwargs)
fn_ans = nje.jit(tnp.sort, static_argnums=(1,))(*args, **kwargs)
expected = onp.sort(*args, **kwargs)
self.assertAllClose(expected, raw_ans, check_dtypes=True)
self.assertAllClose(expected, fn_ans, check_dtypes=True)
# manual tests for sort are nice because we don't have to worry about ties.
# lax.sort is tested combinatorially.
_test(onp.array([16, 15, 23, 42, 8, 4]))
_test(onp.array([[1, 4], [3, 1]]), None)
_test(onp.array([[1, 4], [3, 1]]))
_test(onp.array([[1, 4], [3, 1]]), 0)
def testArgsortManually(self):
def _test(*args, **kwargs):
raw_ans = tnp.argsort(*args, **kwargs)
fn_ans = nje.jit(tnp.argsort, static_argnums=(1,))(*args, **kwargs)
expected = onp.argsort(*args, **kwargs)
self.assertAllClose(expected, raw_ans, check_dtypes=True)
self.assertAllClose(expected, fn_ans, check_dtypes=True)
_test(onp.array([16, 15, 23, 42, 8, 4]))
_test(onp.array([[16, 15, 23], [42, 8, 4]]), 0)
_test(onp.array([[16, 15, 23], [42, 8, 4]]), 1)
_test(onp.array([[16, 15, 23], [42, 8, 4]]), None)
_test(onp.array([[16, 15, 23], [42, 8, 4]]))
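  # Sketch (not part of the original suite): with ties, argsort results can
  # legitimately differ between backends because the default sort need not be
  # stable, which is why the manual cases above avoid ties. Requesting
  # kind="stable" pins down a unique answer in plain NumPy:
  def testArgsortTiesStableSketch(self):
    x = onp.array([1, 0, 1, 0])
    # Stable sort keeps equal elements in original order: the 0s at indices
    # 1 and 3 come first, then the 1s at indices 0 and 2.
    self.assertAllClose(onp.argsort(x, kind="stable"),
                        onp.array([1, 3, 0, 2]), check_dtypes=False)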
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_shifts={}_axis={}".format(
jtu.format_shape_dtype_string(shape, dtype),
shifts, axis),
"rng_factory": rng_factory, "shape": shape, "dtype": dtype, "shifts": shifts,
"axis": axis}
for dtype in all_dtypes
for shape in [(3, 4), (3, 4, 5), (7, 4, 0)]
for shifts, axis in [
(3, None),
(1, 1),
((3,), (0,)),
((-2,), (-2,)),
((1, 2), (0, -1))
]
for rng_factory in [jtu.rand_default]))
def testRoll(self, shape, dtype, shifts, axis, rng_factory):
rng = rng_factory()
args_maker = lambda: [rng(shape, dtype), onp.array(shifts)]
lnp_op = partial(tnp.roll, axis=axis)
onp_op = partial(onp.roll, axis=axis)
self._CheckAgainstNumpy(lnp_op, onp_op, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_op, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_index={}_axis={}_mode={}".format(
jtu.format_shape_dtype_string(shape, dtype),
jtu.format_shape_dtype_string(index_shape, index_dtype),
axis, mode),
"rng_factory": rng_factory, "rng_indices_factory": rng_indices_factory,
"shape": shape, "index_shape": index_shape, "dtype": dtype,
"index_dtype": index_dtype, "axis": axis, "mode": mode}
for shape in [(3,), (3, 4), (3, 4, 5)]
for index_shape in scalar_shapes + [(3,), (2, 1, 3)]
for axis in itertools.chain(range(-len(shape), len(shape)), [None])
for dtype in all_dtypes
for index_dtype in int_dtypes
for mode in ['wrap', 'clip']
for rng_factory in [jtu.rand_default]
for rng_indices_factory in [partial(jtu.rand_int, -5, 5)]))
def testTake(self, shape, dtype, index_shape, index_dtype, axis, mode,
rng_factory, rng_indices_factory):
def args_maker():
x = rng(shape, dtype)
i = rng_indices(index_shape, index_dtype)
return x, i
rng = rng_factory()
rng_indices = rng_indices_factory()
lnp_op = lambda x, i: tnp.take(x, i, axis=axis, mode=mode)
onp_op = lambda x, i: onp.take(x, i, axis=axis, mode=mode)
self._CheckAgainstNumpy(lnp_op, onp_op, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_op, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}_ishape={}_axis={}".format(
jtu.format_shape_dtype_string(x_shape, dtype), i_shape, axis),
"rng_factory": rng_factory, "x_shape": x_shape, "i_shape": i_shape, "dtype": dtype,
"axis": axis}
for x_shape, i_shape in filter(
_shapes_are_equal_length,
filter(_shapes_are_broadcast_compatible,
CombosWithReplacement(nonempty_nonscalar_array_shapes, 2)))
for axis in itertools.chain(range(len(x_shape)), [-1], [None])
for dtype in default_dtypes
for rng_factory in [jtu.rand_default]))
def testTakeAlongAxis(self, x_shape, i_shape, dtype, axis, rng_factory):
rng = rng_factory()
i_shape = onp.array(i_shape)
if axis is None:
i_shape = [onp.prod(i_shape, dtype=onp.int64)]
else:
# Test the case where the size of the axis doesn't necessarily broadcast.
i_shape[axis] *= 3
i_shape = list(i_shape)
def args_maker():
x = rng(x_shape, dtype)
n = onp.prod(x_shape, dtype=onp.int32) if axis is None else x_shape[axis]
i = rng(i_shape, onp.int32) % (2 * n - 1) - (n - 1)
return x, i
lnp_op = lambda x, i: tnp.take_along_axis(x, i, axis=axis)
if hasattr(onp, "take_along_axis"):
onp_op = lambda x, i: onp.take_along_axis(x, i, axis=axis)
self._CheckAgainstNumpy(lnp_op, onp_op, args_maker, check_dtypes=True)
self._CompileAndCheck(lnp_op, args_maker, check_dtypes=True,
check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_shape={}_n={}_increasing={}".format(
jtu.format_shape_dtype_string([shape], dtype),
n, increasing),
"dtype": dtype, "shape": shape, "n": n, "increasing": increasing,
"rng_factory": jtu.rand_default}
for dtype in inexact_dtypes
for shape in [0, 5]
for n in [2, 4]
for increasing in [False, True]))
def testVander(self, shape, dtype, n, increasing, rng_factory):
rng = rng_factory()
def onp_fun(arg):
arg = arg.astype(onp.float32) if dtype == tnp.bfloat16 else arg
return onp.vander(arg, N=n, increasing=increasing)
lnp_fun = lambda arg: tnp.vander(arg, N=n, increasing=increasing)
args_maker = lambda: [rng([shape], dtype)]
# np.vander seems to return float64 for all floating types. We could obey
# those semantics, but they seem like a bug.
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=False,
tol={onp.float32: 1e-3})
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=False, check_incomplete_shape=True,
rtol={onp.complex128: 2e-15})
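  # Sketch (not part of the original suite): a standalone check of the
  # np.vander behavior noted in the comment above. NumPy allocates the output
  # with promote_types(x.dtype, int), so a float32 input yields a float64
  # Vandermonde matrix, which is why testVander uses check_dtypes=False.
  def testVanderDtypePromotionSketch(self):
    x = onp.ones(3, dtype=onp.float32)
    self.assertEqual(onp.vander(x).dtype, onp.float64)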
@named_parameters(jtu.cases_from_list(
{"testcase_name": jtu.format_test_name_suffix("nan_to_num", [shape],
[dtype]),
"rng_factory": jtu.rand_some_inf_and_nan, "shape": shape,
"dtype": dtype}
for shape in all_shapes
for dtype in inexact_dtypes))
@jtu.disable
def testNanToNum(self, rng_factory, shape, dtype):
rng = rng_factory()
dtype = onp.dtype(dtypes.canonicalize_dtype(dtype)).type
def onp_fun(x):
if dtype == tnp.bfloat16:
x = onp.where(onp.isnan(x), dtype(0), x)
x = onp.where(onp.isposinf(x), tnp.finfo(dtype).max, x)
x = onp.where(onp.isneginf(x), tnp.finfo(dtype).min, x)
return x
else:
return onp.nan_to_num(x).astype(dtype)
args_maker = lambda: [rng(shape, dtype)]
check_dtypes = shape is not jtu.PYTHON_SCALAR_SHAPE
self._CheckAgainstNumpy(onp_fun, tnp.nan_to_num, args_maker,
check_dtypes=check_dtypes)
self._CompileAndCheck(tnp.nan_to_num, args_maker,
check_dtypes=check_dtypes)
@named_parameters(jtu.cases_from_list(
{"testcase_name": jtu.format_test_name_suffix("ix_", shapes, dtypes),
"rng_factory": jtu.rand_default, "shapes": shapes, "dtypes": dtypes}
for shapes, dtypes in (
((), ()),
(((7,),), (onp.int32,)),
(((3,), (4,)), (onp.int32, onp.int32)),
(((3,), (1,), (4,)), (onp.int32, onp.int32, onp.int32)),
)))
def testIx_(self, rng_factory, shapes, dtypes):
rng = rng_factory()
args_maker = lambda: [rng(shape, dtype)
for shape, dtype in zip(shapes, dtypes)]
self._CheckAgainstNumpy(onp.ix_, tnp.ix_, args_maker,
check_dtypes=True)
self._CompileAndCheck(
tnp.ix_, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name":
"_op={}_a_shape={}_q_shape={}_axis={}_keepdims={}".format(
op,
jtu.format_shape_dtype_string(a_shape, a_dtype),
jtu.format_shape_dtype_string(q_shape, q_dtype),
axis, keepdims),
"a_rng": jtu.rand_default(), "q_rng": q_rng, "op": op,
"a_shape": a_shape, "a_dtype": a_dtype,
"q_shape": q_shape, "q_dtype": q_dtype, "axis": axis,
"keepdims": keepdims}
for (op, q_rng) in (
("percentile", jtu.rand_uniform(low=0., high=100.)),
("quantile", jtu.rand_uniform(low=0., high=1.)),
("median", jtu.rand_uniform(low=0., high=1.)),
)
for a_dtype in float_dtypes
for a_shape, axis in (
((7,), None),
((47, 7), 0),
((4, 101), 1),
)
for q_dtype in [onp.float32]
for q_shape in scalar_shapes + [(4,)]
for keepdims in [False, True]))
@jtu.disable
def testQuantile(self, op, a_rng, q_rng, a_shape, a_dtype, q_shape, q_dtype,
axis, keepdims):
if op == "quantile" and numpy_version < (1, 15):
raise SkipTest("Numpy < 1.15 does not have np.quantile")
if op == "median":
args_maker = lambda: [a_rng(a_shape, a_dtype)]
else:
args_maker = lambda: [a_rng(a_shape, a_dtype), q_rng(q_shape, q_dtype)]
def onp_fun(*args):
args = [x if tnp.result_type(x) != tnp.bfloat16 else
onp.asarray(x, onp.float32) for x in args]
return getattr(onp, op)(*args, axis=axis, keepdims=keepdims)
lnp_fun = partial(getattr(tnp, op), axis=axis, keepdims=keepdims)
# TODO(phawkins): we currently set dtype=False because we aren't as
# aggressive about promoting to float64. It's not clear we want to mimic
# Numpy here.
tol_spec = {onp.float32: 2e-4, onp.float64: 5e-6}
tol = max(jtu.tolerance(a_dtype, tol_spec),
jtu.tolerance(q_dtype, tol_spec))
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=False,
tol=tol)
self._CompileAndCheck(lnp_fun, args_maker, check_dtypes=True, rtol=tol)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_shape={}".format(
jtu.format_shape_dtype_string(shape, dtype)),
"shape": shape, "dtype": dtype}
for shape in all_shapes for dtype in all_dtypes))
def testWhereOneArgument(self, shape, dtype):
rng = jtu.rand_some_zero()
onp_fun = lambda x: np_where(x)
lnp_fun = lambda x: tnp.where(x)
args_maker = lambda: [rng(shape, dtype)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=False)
    # Turns off the XLA check because there are no XLA kernels for `Where`,
    # which XLA can't support because its output shape is dynamic.
self._CompileAndCheck(
tnp.where,
args_maker,
check_dtypes=True,
check_eval_on_shapes=False,
check_incomplete_shape=True,
check_unknown_rank=False,
check_experimental_compile=False, check_xla_forced_compile=False)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_{}".format("_".join(
jtu.format_shape_dtype_string(shape, dtype)
for shape, dtype in zip(shapes, dtypes))),
"rng_factory": jtu.rand_default, "shapes": shapes, "dtypes": dtypes}
for shapes in filter(_shapes_are_broadcast_compatible,
CombosWithReplacement(all_shapes, 3))
for dtypes in CombosWithReplacement(all_dtypes, 3)))
def testWhereThreeArgument(self, rng_factory, shapes, dtypes):
rng = rng_factory()
    args_maker = self._GetArgsMaker(rng, shapes, dtypes)
def onp_fun(cond, x, y):
return _promote_like_lnp(partial(onp.where, cond))(x, y)
self._CheckAgainstNumpy(onp_fun, tnp.where, args_maker, check_dtypes=True)
self._CompileAndCheck(
tnp.where, args_maker, check_dtypes=True, check_incomplete_shape=True)
def testWhereScalarPromotion(self):
x = tnp.where(tnp.array([True, False]), 3,
tnp.ones((2,), dtype=tnp.float32))
self.assertEqual(x.dtype, onp.dtype(onp.float32))
@named_parameters(jtu.cases_from_list(
{"testcase_name": jtu.format_test_name_suffix("", shapes,
(onp.bool_,) * n + dtypes),
"rng_factory": jtu.rand_default, "shapes": shapes, "dtypes": dtypes}
for n in range(0, 3)
for shapes in filter(
_shapes_are_broadcast_compatible,
CombosWithReplacement(all_shapes, 2 * n + 1))
for dtypes in CombosWithReplacement(all_dtypes, n + 1)))
def testSelect(self, rng_factory, shapes, dtypes):
rng = rng_factory()
n = len(dtypes) - 1
def args_maker():
condlist = [rng(shape, onp.bool_) for shape in shapes[:n]]
choicelist = [rng(shape, dtype)
for shape, dtype in zip(shapes[n:-1], dtypes[:n])]
default = rng(shapes[-1], dtypes[-1])
return condlist, choicelist, default
# TODO(phawkins): float32/float64 type mismatches
def onp_fun(condlist, choicelist, default):
choicelist = [x if tnp.bfloat16 != tnp.result_type(x)
else x.astype(onp.float32) for x in choicelist]
dtype = tnp.result_type(default, *choicelist).as_numpy_dtype
return onp.select(condlist,
[onp.asarray(x, dtype=dtype) for x in choicelist],
onp.asarray(default, dtype=dtype))
self._CheckAgainstNumpy(onp_fun, tnp.select, args_maker,
check_dtypes=False)
self._CompileAndCheck(tnp.select, args_maker, check_dtypes=True,
check_incomplete_shape=True,
rtol={onp.float64: 1e-7, onp.complex128: 1e-7})
@jtu.disable
def testIssue330(self):
x = tnp.full((1, 1), tnp.array([1])[0]) # doesn't crash
self.assertEqual(x[0, 0], 1)
@jtu.disable
def testScalarDtypePromotion(self):
orig_numpy_result = (1 + onp.eye(1, dtype=onp.float32)).dtype
jax_numpy_result = (1 + tnp.eye(1, dtype=tnp.float32)).dtype
self.assertEqual(orig_numpy_result, jax_numpy_result)
@jtu.disable
def testSymmetrizeDtypePromotion(self):
x = onp.eye(3, dtype=onp.float32)
orig_numpy_result = ((x + x.T) / 2).dtype
x = tnp.eye(3, dtype=tnp.float32)
jax_numpy_result = ((x + x.T) / 2).dtype
self.assertEqual(orig_numpy_result, jax_numpy_result)
@jtu.disable
def testIssue347(self):
# https://github.com/google/jax/issues/347
def test_fail(x):
x = tnp.sqrt(tnp.sum(x ** 2, axis=1))
ones = tnp.ones_like(x)
x = tnp.where(x > 0.5, x, ones)
return tnp.sum(x)
x = tnp.array([[1, 2], [3, 4], [0, 0]], dtype=tnp.float64)
result = api.grad(test_fail)(x)
assert not onp.any(onp.isnan(result))
def testIssue453(self):
# https://github.com/google/jax/issues/453
a = onp.arange(6) + 1
ans = tnp.reshape(a, (3, 2), order="F")
expected = onp.reshape(a, (3, 2), order="F")
self.assertAllClose(ans, expected, check_dtypes=True)
@named_parameters(jtu.cases_from_list(
{"testcase_name": "_op={}_dtype={}".format(op, pytype.__name__),
"pytype": pytype, "dtype": dtype, "op": op}
for pytype, dtype in [(int, tnp.int_), (float, tnp.float64),
(bool, tnp.bool_), (complex, tnp.complex128)]
for op in ["atleast_1d", "atleast_2d", "atleast_3d"]))
def testAtLeastNdLiterals(self, pytype, dtype, op):
# Fixes: https://github.com/google/jax/issues/634
onp_fun = lambda arg: getattr(onp, op)(arg).astype(dtype)
lnp_fun = lambda arg: getattr(tnp, op)(arg)
args_maker = lambda: [pytype(2)]
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=True)
def testLongLong(self):
self.assertAllClose(
onp.int64(7), nje.jit(lambda x: x)(onp.longlong(7)), check_dtypes=True
)
def testArange(self):
# test cases inspired by dask tests at
# https://github.com/dask/dask/blob/master/dask/array/tests/test_creation.py#L92
self.assertAllClose(tnp.arange(77),
onp.arange(77, dtype=tnp.int_), check_dtypes=True)
self.assertAllClose(tnp.arange(2, 13),
onp.arange(2, 13, dtype=tnp.int_), check_dtypes=True)
self.assertAllClose(tnp.arange(4, 21, 9),
onp.arange(4, 21, 9, dtype=tnp.int_), check_dtypes=True)
self.assertAllClose(tnp.arange(53, 5, -3),
onp.arange(53, 5, -3, dtype=tnp.int_),
check_dtypes=True)
# TODO(mattjj): make these tests work when enable_x64=True
self.assertAllClose(
tnp.arange(77, dtype=float),
onp.arange(77, dtype=float),
check_dtypes=True)
self.assertAllClose(
tnp.arange(2, 13, dtype=int),
onp.arange(2, 13, dtype=int),
check_dtypes=True)
self.assertAllClose(tnp.arange(0, 1, -0.5),
onp.arange(0, 1, -0.5, dtype=tnp.float64),
check_dtypes=True)
self.assertRaises(TypeError, lambda: tnp.arange())
# # The following have been disabled since they test JAX specific behavior
# # test that tnp.arange(N) doesn't instantiate an ndarray
# self.assertFalse(type(tnp.arange(77)) == type(onp.arange(77)))
# self.assertTrue(type(tnp.arange(77)) == type(lax.iota(onp.int32, 77)))
# # test that tnp.arange(N, dtype=int32) doesn't instantiate an ndarray
# self.assertFalse(type(tnp.arange(77, dtype=tnp.int32)) ==
# type(onp.arange(77, dtype=onp.int32)))
# self.assertTrue(type(tnp.arange(77, dtype=tnp.int32)) ==
# type(lax.iota(onp.int32, 77)))
def testIssue830(self):
a = tnp.arange(4, dtype=tnp.complex64)
self.assertEqual(a.dtype, tnp.complex64)
def testIssue728(self):
assert tnp.allclose(tnp.eye(5000), onp.eye(5000))
self.assertEqual(0, onp.sum(tnp.eye(1050) - onp.eye(1050)))
def testIssue746(self):
tnp.arange(12).reshape(3, 4) # doesn't crash
def testIssue764(self):
x = tnp.linspace(190, 200, 4)
f = nje.grad(lambda x: tnp.sum(tnp.tanh(x)))
# Expected values computed with autograd in float64 precision.
expected = onp.array([3.71669453e-165, 4.72999108e-168, 6.01954653e-171,
7.66067839e-174], onp.float64)
self.assertAllClose(f(x), expected, check_dtypes=False)
@jtu.disable
def testIssue776(self):
"""Tests that the scatter-add transpose rule instantiates symbolic zeros."""
def f(u):
_ = onp.ones(10,).at[[2, 4, 5]].add(u)
# The transpose rule for lax.tie_in returns a symbolic zero for its first
# argument.
return 7.
self.assertAllClose(onp.zeros(3,), api.grad(f)(onp.ones(3,)),
check_dtypes=True)
@jtu.disable
def testIssue777(self):
x = tnp.linspace(-200, 0, 4, dtype=onp.float32)
f = nje.grad(lambda x: tnp.sum(1 / (1 + tnp.exp(-x))))
self.assertAllClose(f(x), onp.array([0., 0., 0., 0.25], dtype=onp.float32),
check_dtypes=True)
@named_parameters(
jtu.cases_from_list(
{"testcase_name": jtu.format_test_name_suffix(op, [()], [dtype]),
"dtype": dtype, "op": op}
for dtype in float_dtypes
for op in ("sqrt", "arccos", "arcsin", "arctan", "sin", "cos", "tan",
"sinh", "cosh", "tanh", "arccosh", "arcsinh", "arctanh", "exp",
"log", "expm1", "log1p")))
def testMathSpecialFloatValues(self, op, dtype):
onp_op = getattr(onp, op)
lnp_op = getattr(tnp, op)
dtype = onp.dtype(tnp.canonicalize_dtype(dtype)).type
for x in (onp.nan, -onp.inf, -100., -2., -1., 0., 1., 2., 100., onp.inf,
tnp.finfo(dtype).max, onp.sqrt(tnp.finfo(dtype).max),
onp.sqrt(tnp.finfo(dtype).max) * 2.):
if (op in ("sin", "cos", "tan", "arctan") and
jtu.device_under_test() == "tpu"):
continue # TODO(b/132196789, b/134175194): fix and reenable.
# TODO(b/158006398): fix and reenable.
if (op in ("cosh", "arccosh", "arcsinh", "arcsin", "sinh", "arccos",
"arctan", "arctanh") and dtype == onp.float16):
continue
x = dtype(x)
expected = onp_op(x)
actual = lnp_op(x)
tol = jtu.tolerance(dtype, {onp.float32: 1e-3, onp.float64: 1e-7})
self.assertAllClose(expected, actual, check_dtypes=True, atol=tol,
rtol=tol)
def testIssue883(self):
# from https://github.com/google/jax/issues/883
@partial(nje.jit, static_argnums=(1,))
def f(x, v):
return x
x = tnp.ones((10, 10))
v = tnp.array([1, 2, 3])
    f(x, v)
    f(x, v)  # doesn't crash
def testReductionOfOutOfBoundsAxis(self): # Issue 888
x = tnp.ones((3, 4))
self.assertRaises(
errors_impl.InvalidArgumentError, lambda: tnp.sum(x, axis=2)
)
@jtu.disable
def testIssue956(self):
self.assertRaises(TypeError, lambda: tnp.ndarray((1, 1)))
@named_parameters(
jtu.cases_from_list(
{"testcase_name":
"_shape={}_dtype={}_out_dtype={}_axis={}_ddof={}_keepdims={}"
.format(shape, dtype, out_dtype, axis, ddof, keepdims),
"shape": shape, "dtype": dtype, "out_dtype": out_dtype, "axis": axis,
"ddof": ddof, "keepdims": keepdims, "rng_factory": rng_factory}
for shape in [(5,), (10, 5)]
for dtype in all_dtypes
for out_dtype in inexact_dtypes
for axis in [None, 0, -1]
for ddof in [0, 1, 2]
for keepdims in [False, True]
for rng_factory in [jtu.rand_default]))
def testVar(self, shape, dtype, out_dtype, axis, ddof, keepdims, rng_factory):
rng = rng_factory()
args_maker = self._GetArgsMaker(rng, [shape], [dtype])
def onp_fun(x):
out = onp.var(x.astype(tnp.promote_types(onp.float32, dtype)),
axis=axis, ddof=ddof, keepdims=keepdims)
return out.astype(out_dtype)
lnp_fun = partial(
tnp.var, dtype=out_dtype, axis=axis, ddof=ddof, keepdims=keepdims)
tol = jtu.tolerance(out_dtype, {onp.float16: 1e-1, onp.float32: 1e-3,
onp.float64: 1e-3, onp.complex128: 1e-6})
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True,
tol=tol)
self._CompileAndCheck(lnp_fun, args_maker, check_dtypes=True, rtol=tol,
atol=tol, check_incomplete_shape=True)
@named_parameters(
jtu.cases_from_list(
{"testcase_name": "_shape={}_dtype={}_rowvar={}_ddof={}_bias={}".format(
shape, dtype, rowvar, ddof, bias),
"shape": shape, "dtype": dtype, "rowvar": rowvar, "ddof": ddof,
"bias": bias, "rng_factory": rng_factory}
for shape in [(5,), (10, 5), (5, 10)]
for dtype in all_dtypes
for rowvar in [True, False]
for bias in [True, False]
for ddof in [None, 2, 3]
for rng_factory in [jtu.rand_default]))
@jtu.skip_on_devices("gpu") # TODO(b/138003641): test fails on GPU.
@jtu.disable
def testCov(self, shape, dtype, rowvar, ddof, bias, rng_factory):
rng = rng_factory()
args_maker = self._GetArgsMaker(rng, [shape], [dtype])
onp_fun = partial(onp.cov, rowvar=rowvar, ddof=ddof, bias=bias)
lnp_fun = partial(tnp.cov, rowvar=rowvar, ddof=ddof, bias=bias)
tol = {onp.float32: 1e-5, onp.float64: 1e-13, onp.complex128: 1e-13}
tol = 7e-2 if jtu.device_under_test() == "tpu" else tol
tol = jtu.join_tolerance(tol, jtu.tolerance(dtype))
self._CheckAgainstNumpy(
onp_fun, lnp_fun, args_maker, check_dtypes=False, tol=tol)
self._CompileAndCheck(lnp_fun, args_maker, check_dtypes=True, atol=tol,
rtol=tol)
def testIssue967(self):
self.assertRaises(TypeError, lambda: tnp.zeros(1.5))
@named_parameters(
jtu.cases_from_list(
{"testcase_name": "_shape={}_dtype={}_rowvar={}_ddof={}_bias={}".format(
shape, dtype, rowvar, ddof, bias),
"shape": shape, "dtype": dtype, "rowvar": rowvar, "ddof": ddof,
"bias": bias, "rng_factory": rng_factory}
for shape in [(5,), (10, 5), (3, 10)]
for dtype in number_dtypes
for rowvar in [True, False]
for bias in [True, False]
for ddof in [None, 2, 3]
for rng_factory in [jtu.rand_default]))
@jtu.disable
def testCorrCoef(self, shape, dtype, rowvar, ddof, bias, rng_factory):
rng = rng_factory()
args_maker = self._GetArgsMaker(rng, [shape], [dtype])
mat = onp.asarray([rng(shape, dtype)])
onp_fun = partial(onp.corrcoef, rowvar=rowvar, ddof=ddof, bias=bias)
lnp_fun = partial(tnp.corrcoef, rowvar=rowvar, ddof=ddof, bias=bias)
if not onp.any(onp.isclose(onp.std(mat), 0.0)):
self._CheckAgainstNumpy(
onp_fun, lnp_fun, args_maker, check_dtypes=False,
tol=1e-2 if jtu.device_under_test() == "tpu" else None)
self._CompileAndCheck(lnp_fun, args_maker, check_dtypes=True)
@named_parameters(
jtu.cases_from_list(
{
"testcase_name":
"_shapes={}_dtype={}_indexing={}_sparse={}".format(
shapes, jtu.dtype_str(dtype), indexing, sparse),
"shapes":
shapes,
"dtype":
dtype,
"indexing":
indexing,
"sparse":
sparse,
"rng_factory":
rng_factory
} for shapes in [(), (5,), (5, 3)] for dtype in number_dtypes
for indexing in ["xy", "ij"]
for sparse in [False] # TODO(nareshmodi): Make sparse work
for rng_factory in [jtu.rand_default]))
def testMeshGrid(self, shapes, dtype, indexing, sparse, rng_factory):
rng = rng_factory()
args_maker = self._GetArgsMaker(rng, [(x,) for x in shapes],
[dtype] * len(shapes))
onp_fun = partial(onp.meshgrid, indexing=indexing, sparse=sparse)
lnp_fun = partial(tnp.meshgrid, indexing=indexing, sparse=sparse)
self._CheckAgainstNumpy(onp_fun, lnp_fun, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_fun, args_maker, check_dtypes=True, check_incomplete_shape=True)
@named_parameters(
jtu.cases_from_list(
{
"testcase_name": (
"_start_shape={}_stop_shape={}_num={}_endpoint={}"
"_retstep={}_dtype={}"
).format(start_shape, stop_shape, num, endpoint, retstep, dtype),
"start_shape": start_shape,
"stop_shape": stop_shape,
"num": num,
"endpoint": endpoint,
"retstep": retstep,
"dtype": dtype,
"rng_factory": rng_factory,
}
for start_shape in [(), (2,), (2, 2)]
for stop_shape in [(), (2,), (2, 2)]
for num in [0, 1, 2, 5, 20]
for endpoint in [True, False]
for retstep in [True, False]
for dtype in (
(
float_dtypes
+ complex_dtypes
+ [
None,
]
)
if onp.__version__ >= onp.lib.NumpyVersion("2.0.0")
else (number_dtypes + [None])
)
for rng_factory in [jtu.rand_default]
)
)
def testLinspace(self, start_shape, stop_shape, num, endpoint,
retstep, dtype, rng_factory):
if not endpoint and onp.issubdtype(dtype, onp.integer):
# TODO(b/157597565): Support all dtypes when the tf op supports endpoint
# Currently, subtracting the step early leads to rounding errors for
# integers.
return
rng = rng_factory()
# relax default tolerances slightly
tol = jtu.tolerance(dtype if dtype else onp.float32) * 10
args_maker = self._GetArgsMaker(rng,
[start_shape, stop_shape],
[dtype, dtype])
start, stop = args_maker()
ndim = len(onp.shape(start + stop))
for axis in range(-ndim, ndim):
lnp_op = lambda start, stop: tnp.linspace(
start, stop, num,
endpoint=endpoint, retstep=retstep, dtype=dtype, axis=axis)
onp_op = lambda start, stop: onp.linspace(
start, stop, num,
endpoint=endpoint, retstep=retstep, dtype=dtype, axis=axis)
self._CheckAgainstNumpy(onp_op, lnp_op, args_maker,
check_dtypes=False, tol=tol)
      # Floating-point rounding differences between compiled and op-by-op
      # execution cause unavoidable variation in integer truncation for some
      # inputs, so compilation is only checked for inexact dtypes.
if dtype in (inexact_dtypes + [None,]):
self._CompileAndCheck(lnp_op, args_maker,
check_dtypes=False, atol=tol, rtol=tol,
check_incomplete_shape=True)
@named_parameters(
jtu.cases_from_list(
{"testcase_name": ("_start_shape={}_stop_shape={}_num={}_endpoint={}"
"_base={}_dtype={}").format(
start_shape, stop_shape, num, endpoint, base,
dtype.__name__ if dtype else "None"),
"start_shape": start_shape,
"stop_shape": stop_shape,
"num": num, "endpoint": endpoint, "base": base,
"dtype": dtype, "rng_factory": rng_factory}
for start_shape in [(), (2,), (2, 2)]
for stop_shape in [(), (2,), (2, 2)]
for num in [0, 1, 2, 5, 20]
for endpoint in [True, False]
for base in [10.0, 2, onp.e]
for dtype in inexact_dtypes + [None,]
for rng_factory in [jtu.rand_default]))
def testLogspace(self, start_shape, stop_shape, num,
endpoint, base, dtype, rng_factory):
if (dtype in int_dtypes and
jtu.device_under_test() in ("gpu", "tpu") and
not FLAGS.enable_x64):
raise unittest.SkipTest("GPUx32 truncated exponentiation"
" doesn't exactly match other platforms.")
rng = rng_factory()
# relax default tolerances slightly
tol = {onp.float16: 2e-2, onp.float32: 1e-2, onp.float64: 1e-6,
onp.complex64: 1e-3, onp.complex128: 1e-6}
args_maker = self._GetArgsMaker(rng,
[start_shape, stop_shape],
[dtype, dtype])
start, stop = args_maker()
ndim = len(onp.shape(start + stop))
for axis in range(-ndim, ndim):
lnp_op = lambda start, stop: tnp.logspace(
start, stop, num, endpoint=endpoint, base=base, dtype=dtype, axis=axis)
onp_op = lambda start, stop: onp.logspace(
start, stop, num, endpoint=endpoint, base=base, dtype=dtype, axis=axis)
self._CheckAgainstNumpy(onp_op, lnp_op, args_maker,
check_dtypes=False, tol=tol)
if dtype in (inexact_dtypes + [None,]):
# Why do compiled and op-by-op float16 np.power numbers differ
# slightly more than expected?
atol = {onp.float16: 1e-2}
self._CompileAndCheck(lnp_op, args_maker,
check_dtypes=False, atol=atol, rtol=tol,
check_incomplete_shape=True)
@named_parameters(
jtu.cases_from_list(
{
"testcase_name": (
"_start_shape={}_stop_shape={}_num={}_endpoint={}_dtype={}"
).format(start_shape, stop_shape, num, endpoint, dtype),
"start_shape": start_shape,
"stop_shape": stop_shape,
"num": num,
"endpoint": endpoint,
"dtype": dtype,
"rng_factory": rng_factory,
}
for start_shape in [(), (2,), (2, 2)]
for stop_shape in [(), (2,), (2, 2)]
for num in [0, 1, 2, 5, 20]
for endpoint in [True, False]
# NB: numpy's geomspace gives nonsense results on integer types
for dtype in (
(
float_dtypes
+ [
None,
]
)
if onp.__version__ >= onp.lib.NumpyVersion("2.0.0")
else (
inexact_dtypes
+ [
None,
]
)
)
for rng_factory in [jtu.rand_default]
)
)
def testGeomspace(self, start_shape, stop_shape, num,
endpoint, dtype, rng_factory):
rng = rng_factory()
# relax default tolerances slightly
tol = {onp.float16: 4e-3, onp.float32: 2e-3, onp.complex128: 1e-14}
def args_maker():
"""Test the set of inputs onp.geomspace is well-defined on."""
start, stop = self._GetArgsMaker(rng,
[start_shape, stop_shape],
[dtype, dtype])()
      # onp.geomspace can't handle differently-ranked tensors with negative
      # numbers, so broadcast start and stop to a common shape first.
start, stop = tnp.broadcast_arrays(start, stop)
if dtype in complex_dtypes:
return start, stop
# to avoid NaNs, non-complex start and stop cannot
# differ in sign, elementwise
start = start * tnp.sign(start) * tnp.sign(stop)
return start, stop
start, stop = args_maker()
ndim = len(onp.shape(start + stop))
for axis in range(-ndim, ndim):
def lnp_op(start, stop):
return tnp.geomspace(start, stop, num, endpoint=endpoint, dtype=dtype,
axis=axis)
def onp_op(start, stop):
start = start.astype(onp.float32) if dtype == tnp.bfloat16 else start
stop = stop.astype(onp.float32) if dtype == tnp.bfloat16 else stop
return onp.geomspace(
start, stop, num, endpoint=endpoint,
dtype=dtype if dtype != tnp.bfloat16 else onp.float32,
axis=axis).astype(dtype)
self._CheckAgainstNumpy(onp_op, lnp_op, args_maker,
check_dtypes=False, tol=tol)
if dtype in (inexact_dtypes + [None,]):
self._CompileAndCheck(lnp_op, args_maker,
check_dtypes=False, atol=tol, rtol=tol,
check_incomplete_shape=True)
@jtu.disable
def testDisableNumpyRankPromotionBroadcasting(self):
try:
prev_flag = FLAGS.jax_numpy_rank_promotion
FLAGS.jax_numpy_rank_promotion = "allow"
tnp.ones(2) + tnp.ones((1, 2)) # works just fine
finally:
FLAGS.jax_numpy_rank_promotion = prev_flag
try:
prev_flag = FLAGS.jax_numpy_rank_promotion
FLAGS.jax_numpy_rank_promotion = "raise"
self.assertRaises(ValueError, lambda: tnp.ones(2) + tnp.ones((1, 2)))
finally:
FLAGS.jax_numpy_rank_promotion = prev_flag
try:
prev_flag = FLAGS.jax_numpy_rank_promotion
FLAGS.jax_numpy_rank_promotion = "warn"
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always")
tnp.ones(2) + tnp.ones((1, 2))
assert len(w) > 0
msg = str(w[-1].message)
expected_msg = ("Following NumPy automatic rank promotion for add on "
"shapes (2,) (1, 2).")
self.assertEqual(msg[:len(expected_msg)], expected_msg)
prev_len = len(w)
tnp.ones(2) + 3
self.assertEqual(len(w), prev_len) # don't want to warn for scalars
finally:
FLAGS.jax_numpy_rank_promotion = prev_flag
def testStackArrayArgument(self):
# tests https://github.com/google/jax/issues/1271
@nje.jit
def foo(x):
return tnp.stack(x)
foo(onp.zeros(2)) # doesn't crash
@nje.jit
def foo(x):
return tnp.concatenate(x)
foo(onp.zeros((2, 2))) # doesn't crash
@jtu.disable
def testReluGradientConstants(self):
# This is a regression test that verifies that constants associated with the
# gradient of np.maximum (from lax._balanced_eq) aren't hoisted into the
# outermost jaxpr. This was producing some large materialized constants for
# every relu activation in a model.
def body(i, xy):
x, y = xy
y = y + jax.grad(lambda z: tnp.sum(tnp.maximum(z, 0.)))(x) # pylint: disable=undefined-variable
return x, y
f = lambda y: lax.fori_loop(0, 5, body, (y, y))
wrapped = linear_util.wrap_init(f)
pv = partial_eval.PartialVal(
(jax.core.ShapedArray((3, 4), onp.float32), jax.core.unit))
_, _, consts = partial_eval.trace_to_jaxpr(wrapped, [pv])
self.assertFalse(
any(onp.array_equal(x, onp.full((3, 4), 2., dtype=onp.float32))
for x in consts))
@named_parameters(
{"testcase_name": "_from={}_to={}".format(from_shape, to_shape),
"rng_factory": rng_factory, "from_shape": from_shape, "to_shape": to_shape}
for from_shape, to_shape in [
[(1, 3), (4, 3)],
[(3,), (2, 1, 3)],
[(3,), (3, 3)],
[(1,), (3,)],
]
for rng_factory in [jtu.rand_default])
def testBroadcastTo(self, from_shape, to_shape, rng_factory):
rng = rng_factory()
args_maker = self._GetArgsMaker(rng, [from_shape], [onp.float32])
onp_op = lambda x: onp.broadcast_to(x, to_shape)
lnp_op = lambda x: tnp.broadcast_to(x, to_shape)
self._CheckAgainstNumpy(onp_op, lnp_op, args_maker, check_dtypes=True)
self._CompileAndCheck(
lnp_op, args_maker, check_dtypes=True, check_incomplete_shape=True)
def testBroadcastToIssue1522(self):
self.assertRaisesRegex(
Exception, "Unable to broadcast",
lambda: tnp.broadcast_to(onp.ones((2, 3)), (1, 3)))
def testBroadcastToIntIssue1548(self):
self.assertAllClose(tnp.broadcast_to(1, (3, 2)), onp.ones((3, 2)),
check_dtypes=False)
def testBroadcastToOnScalar(self):
self.assertIsInstance(tnp.broadcast_to(10.0, ()), tnp.ndarray)
self.assertIsInstance(onp.broadcast_to(10.0, ()), onp.ndarray)
@jtu.disable
def testPrecision(self):
ones_1d = onp.ones((2,))
ones_2d = onp.ones((2, 2))
ones_3d = onp.ones((2, 2, 2))
HIGHEST = lax.Precision.HIGHEST
jtu.assert_dot_precision(None, tnp.dot, ones_1d, ones_1d)
jtu.assert_dot_precision(
HIGHEST,
partial(tnp.dot, precision=HIGHEST),
ones_1d, ones_1d)
jtu.assert_dot_precision(
HIGHEST,
partial(tnp.dot, precision=HIGHEST),
ones_3d, ones_3d)
jtu.assert_dot_precision(
HIGHEST,
partial(tnp.matmul, precision=HIGHEST),
ones_2d, ones_2d)
jtu.assert_dot_precision(
HIGHEST,
partial(tnp.vdot, precision=HIGHEST),
ones_1d, ones_1d)
jtu.assert_dot_precision(
HIGHEST,
partial(tnp.tensordot, axes=2, precision=HIGHEST),
ones_2d, ones_2d)
jtu.assert_dot_precision(
HIGHEST,
partial(tnp.tensordot, axes=(0, 0), precision=HIGHEST),
ones_1d, ones_1d)
jtu.assert_dot_precision(
HIGHEST,
partial(tnp.tensordot, axes=((0,), (0,)), precision=HIGHEST),
ones_1d, ones_1d)
jtu.assert_dot_precision(
HIGHEST,
partial(tnp.einsum, "i,i", precision=HIGHEST),
ones_1d, ones_1d)
jtu.assert_dot_precision(
HIGHEST,
partial(tnp.einsum, "ij,ij", precision=HIGHEST),
ones_2d, ones_2d)
jtu.assert_dot_precision(
HIGHEST,
partial(tnp.inner, precision=HIGHEST),
ones_1d, ones_1d)
@named_parameters(jtu.cases_from_list(
{"testcase_name":
"_{}_{}_{}_{}".format(
shape, jtu.dtype_str(key_dtype), jtu.dtype_str(value_dtype),
dimension).replace(" ", ""),
"shape": shape, "key_dtype": key_dtype, "value_dtype": value_dtype,
"dimension": dimension, "rng_factory": rng_factory}
for shape in all_shapes
for key_dtype in minus(number_dtypes, complex_dtypes)
for value_dtype in all_dtypes
for dimension in range(-len(shape), len(shape))
for rng_factory in [jtu.rand_default]))
@new_test
def testSortKeyValue(self, shape, key_dtype, value_dtype, dimension,
rng_factory):
if key_dtype == onp.float32 and value_dtype == onp.bool_:
self.skipTest(
"Temporarily disable this test because of TF nightly build failure"
)
def onp_ref(keys, values):
idxs = list(onp.ix_(*[onp.arange(d) for d in keys.shape]))
idxs[dimension] = onp.argsort(keys, axis=dimension)
return keys[tuple(idxs)], values[tuple(idxs)]
rng = rng_factory()
args_maker = self._GetArgsMaker(
rng, [shape, shape], [key_dtype, value_dtype])
op = partial(nje.sort_key_val, dimension=dimension)
self._CheckAgainstNumpy(onp_ref, op, args_maker,
check_dtypes=True)
# sort_key_val requires known rank.
# XLA only has TopKV2 (used by tf.argsort) kernels on those dtypes
# (b/169194137).
check_xla = key_dtype in (onp.uint32, onp.int32, onp.float32, tnp.bfloat16)
self._CompileAndCheck(op, args_maker, check_dtypes=True,
check_incomplete_shape=True, check_unknown_rank=False,
check_experimental_compile=check_xla,
check_xla_forced_compile=check_xla)
# Most grad tests are at the lax level (see lax_test.py), but we add some here
# as needed for e.g. particular compound ops of interest.
GradTestSpec = collections.namedtuple(
"GradTestSpec",
["op", "nargs", "order", "rng_factory", "dtypes", "name", "tol"])
def grad_test_spec(op, nargs, order, rng_factory, dtypes, name=None, tol=None):
return GradTestSpec(
op, nargs, order, rng_factory, dtypes, name or op.__name__, tol)
GRAD_TEST_RECORDS = [
grad_test_spec(tnp.arcsinh, nargs=1, order=2,
rng_factory=jtu.rand_positive,
dtypes=[onp.float64, onp.complex64], tol=1e-4),
grad_test_spec(tnp.arccosh, nargs=1, order=2,
rng_factory=jtu.rand_positive,
dtypes=[onp.float64, onp.complex64], tol=1e-4),
grad_test_spec(tnp.arctanh, nargs=1, order=2,
rng_factory=partial(jtu.rand_uniform, -0.9, 0.9),
dtypes=[onp.float64, onp.complex64], tol=1e-4),
]
GradSpecialValuesTestSpec = collections.namedtuple(
"GradSpecialValuesTestSpec", ["op", "values", "order"])
GRAD_SPECIAL_VALUE_TEST_RECORDS = [
GradSpecialValuesTestSpec(tnp.arcsinh, [0., 1000.], 2),
GradSpecialValuesTestSpec(tnp.arccosh, [1000.], 2),
GradSpecialValuesTestSpec(tnp.arctanh, [0.], 2),
# TODO(wangpeng): Add `GradSpecialValuesTestSpec(tnp.sinc, [0.], 1)`
]
# def num_float_bits(dtype):
# return tnp.finfo(dtypes.canonicalize_dtype(dtype)).bits
class NumpyGradTests(jtu.TestCase):
@named_parameters(itertools.chain.from_iterable(
jtu.cases_from_list(
{"testcase_name": jtu.format_test_name_suffix(
rec.name, shapes, itertools.repeat(dtype)),
"op": rec.op, "rng_factory": rec.rng_factory, "shapes": shapes, "dtype": dtype,
"order": rec.order, "tol": rec.tol}
for shapes in CombosWithReplacement(nonempty_shapes, rec.nargs)
for dtype in rec.dtypes)
for rec in GRAD_TEST_RECORDS))
@jtu.disable
def testOpGrad(self, op, rng_factory, shapes, dtype, order, tol):
rng = rng_factory()
tol = {onp.float32: 1e-1, onp.complex64: 1e-1}
args = tuple(rng(shape, dtype) for shape in shapes)
check_grads(op, args, order, ["fwd", "rev"], tol, tol)
@named_parameters(itertools.chain.from_iterable(
jtu.cases_from_list(
{"testcase_name": "_{}_{}".format(rec.op.__name__, special_value),
"op": rec.op, "special_value": special_value, "order": rec.order}
for special_value in rec.values)
for rec in GRAD_SPECIAL_VALUE_TEST_RECORDS))
@jtu.disable
def testOpGradSpecialValue(self, op, special_value, order):
check_grads(op, (special_value,), order, ["fwd", "rev"],
atol={onp.float32: 3e-3})
@jtu.disable
def testTakeAlongAxisIssue1521(self):
# https://github.com/google/jax/issues/1521
idx = tnp.repeat(tnp.arange(3), 10).reshape((30, 1))
def f(x):
y = x * tnp.arange(3.).reshape((1, 3))
return tnp.take_along_axis(y, idx, -1).sum()
check_grads(f, (1.,), order=1)
if __name__ == "__main__":
absltest.main()
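The `testWhereThreeArgument` / `testSelect` cases above compare `tnp` against NumPy's three-argument `where`; as a standalone illustration of the broadcasting-and-selection semantics being checked (plain NumPy only, independent of the test harness above):

```python
import numpy as np

# np.where(cond, x, y) broadcasts all three operands to a common shape,
# then picks from x where cond is True and from y elsewhere.
cond = np.array([[True], [False]])       # shape (2, 1)
x = np.arange(3, dtype=np.float32)       # shape (3,)
y = np.full(3, -1.0, dtype=np.float32)   # shape (3,)

out = np.where(cond, x, y)               # broadcast result: shape (2, 3)
```

Row 0 takes every element from `x` and row 1 from `y` — exactly the elementwise selection the tests exercise across many shape/dtype combinations.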
|
{
"filename": "meilisearch.py",
"repo_name": "langchain-ai/langchain",
"repo_path": "langchain_extracted/langchain-master/libs/langchain/langchain/vectorstores/meilisearch.py",
"type": "Python"
}
|
from typing import TYPE_CHECKING, Any
from langchain._api import create_importer
if TYPE_CHECKING:
from langchain_community.vectorstores import Meilisearch
# Create a way to dynamically look up deprecated imports.
# Used to consolidate logic for raising deprecation warnings and
# handling optional imports.
DEPRECATED_LOOKUP = {"Meilisearch": "langchain_community.vectorstores"}
_import_attribute = create_importer(__package__, deprecated_lookups=DEPRECATED_LOOKUP)
def __getattr__(name: str) -> Any:
"""Look up attributes dynamically."""
return _import_attribute(name)
__all__ = [
"Meilisearch",
]
|
{
"filename": "__init__.py",
"repo_name": "pymc-devs/pymc",
"repo_path": "pymc_extracted/pymc-main/tests/__init__.py",
"type": "Python"
}
|
# Copyright 2024 The PyMC Developers
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pymc as pm
_log = pm._log
|
{
"filename": "reduce_utils.py",
"repo_name": "GeminiDRSoftware/DRAGONS",
"repo_path": "DRAGONS_extracted/DRAGONS-master/recipe_system/utils/reduce_utils.py",
"type": "Python"
}
|
#
# reduce_utils.py
# ------------------------------------------------------------------------------
# Utility function library for reduce and the Reduce class.
from argparse import ArgumentParser
from argparse import RawDescriptionHelpFormatter
import astrodata
import gemini_instruments
from .reduceActions import PosArgAction, UploadArgumentAction
from .reduceActions import BooleanAction
from .reduceActions import ParameterAction
from .reduceActions import CalibrationAction
from .reduceActions import UnitaryArgumentAction
# ------------------------------------------------------------------------------
class ReduceHelpFormatter(RawDescriptionHelpFormatter):
"""
ReduceHelpFormatter class overrides default help formatting on customized
reduce actions.
"""
def _format_args(self, action, default_metavar):
get_metavar = self._metavar_formatter(action, default_metavar)
if action.nargs is None:
result = '%s' % get_metavar(1)
elif isinstance(action, BooleanAction):
result = ''
elif isinstance(action, PosArgAction):
result = '%s [%s ...]' % get_metavar(2)
elif isinstance(action, UnitaryArgumentAction):
result = '%s' % get_metavar(1)
elif isinstance(action, ParameterAction):
result = '%s [%s ...]' % get_metavar(2)
elif isinstance(action, CalibrationAction):
result = '%s [%s ...]' % get_metavar(2)
else:
formats = ['%s' for _ in range(action.nargs)]
result = ' '.join(formats) % get_metavar(action.nargs)
return result
class ReduceArgumentParser(ArgumentParser):
"""
    Converts an argument line from a user param file into actual arguments,
    yielding each one to the calling parser.
"""
def convert_arg_line_to_args(self, arg_line):
if not arg_line.startswith("#"):
for arg in arg_line.split():
if not arg.strip():
continue
if arg.strip().startswith("#"):
break
yield arg
def buildParser(version):
description = '\n'.join([
" Gemini Observatory ".center(77, '_'),
" DRAGONS Recipe Processing Management System ".center(77, '_'),
f" Recipe System Release v{version} ".center(77, '_'),
])
parser = ReduceArgumentParser(description=description,
prog="reduce",
formatter_class=ReduceHelpFormatter,
fromfile_prefix_chars='@')
parser.add_argument("-v", "--version", action='version',
version='%(prog)s v'+ version)
parser.add_argument("-d", "--displayflags", dest='displayflags',
default=False, nargs='*', action=BooleanAction,
help="display all parsed option flags and exit.")
    parser.add_argument('files', metavar='fitsfile', nargs="*",
action=PosArgAction, default=[],
help="fitsfile [fitsfile ...] ")
parser.add_argument("--adpkg", dest='adpkg', default=None,
nargs=1, action=UnitaryArgumentAction,
help="Specify an external astrodata definitions package. "
                        "This is only passed for non-Gemini instruments. "
"The package must be importable. E.g., "
"--adpkg soar_instruments ")
parser.add_argument("--drpkg", dest='drpkg', default='geminidr',
nargs=1, action=UnitaryArgumentAction,
help="Specify another data reduction (dr) package. "
"The package must be importable. Recipe system default is "
"'geminidr'. E.g., --drpkg ghostdr ")
parser.add_argument("--logfile", dest="logfile", default="reduce.log",
nargs=1, action=UnitaryArgumentAction,
help="name of log (default is 'reduce.log')")
parser.add_argument("--logmode", dest="logmode", default="standard",
nargs=1, action=UnitaryArgumentAction,
help="Set log mode: 'standard', 'quiet', 'debug'. "
"Default is 'standard'. 'quiet' writes only to log file.")
parser.add_argument("-p", "--param", dest="userparam", default=None,
nargs="*", action=ParameterAction,
help="Set a parameter from the command line. The form "
"'-p par=val' sets a parameter such that all primitives "
"with that defined parameter will 'see' it. The form: "
"'-p primitivename:par=val', sets the parameter only "
"for 'primitivename'. Separate par/val pairs by "
"whitespace: "
                        "(e.g., '-p par1=val1 par2=val2')")
parser.add_argument("--qa", action='store_const', dest="mode",
                        default='sq', const='qa', help="Use 'qa' recipes. "
                        "Default is to use 'sq' recipes.")
parser.add_argument("--ql", action='store_const', dest="mode",
                        default='sq', const='ql', help="Use 'quicklook' recipes. "
                        "Default is to use 'sq' recipes.")
parser.add_argument("-r", "--recipe", dest="recipename", default=None,
nargs=1, action=UnitaryArgumentAction,
help="Specify a recipe by name. Users can request "
"non-default system recipe functions by their simple "
"names, e.g., -r qaStack, OR may specify their own "
"recipe file and recipe function. A user defined "
"recipe function must be 'dotted' with the recipe file."
" E.g., "
" '-r /path/to/recipes/recipefile.recipe_function' "
"For a recipe file in the current working directory "
"(cwd), only the file name is needed, as in, "
"'-r recipefile.recipe_function' "
"The fact that the recipe function is dotted with the "
"recipe file name implies that multiple user defined "
"recipe functions can be defined in a single file. "
"Readers should understand that these recipe files "
"shall behave as python modules and should be named "
"accordingly. I.e., in the example above, 'recipefile'"
"is a python module named, 'recipefile.py' ")
parser.add_argument("--suffix", dest='suffix', default=None,
nargs=1, action=UnitaryArgumentAction,
help="Add 'suffix' to filenames at end of reduction; "
"strip all other suffixes marked by '_'; ")
parser.add_argument("--upload", dest='upload', default=None,
action=UploadArgumentAction, nargs="*",
                        help="Send these pipeline products to fitsstore. "
                        "Default is None. "
                        "Eg., --upload metrics calibs science")
parser.add_argument("--user_cal", dest='user_cal', default=None,
nargs="*", action=CalibrationAction,
help="Specify user supplied calibrations for "
"calibration types. "
"Eg., --user_cal processed_arc:gsTest_arc.fits")
parser.add_argument("-c", "--config", dest='config', default=None,
nargs=1, action=UnitaryArgumentAction,
help="Load a specific config file, overriding the "
"~/.dragons/dragonsrc file and the $DRAGONSRC "
"environment variable.")
return parser
# --------------------------- Emulation functions ------------------------------
# The functions below encapsulate ArgumentParser access to option strings and
# match them to 'dest' attributes and attribute values. There is no public
# interface as with OptionParser.has_option() and OptionParser.get_option() for
# testing and getting option flags.
# The functions
#
# parser_has_option()
# get_option_flags()
#
# emulate those methods.
#
# insert_option_value() -- assigns an option value to matching 'dest' attr
# show_parser_options() -- pretty print options, 'dest' attrs, values.
# ------------------------------------------------------------------------------
def parser_has_option(parser, option):
return option in parser._option_string_actions
def get_option_flags(parser, option):
return parser._option_string_actions[option].option_strings
def insert_option_value(parser, args, option, value):
dest = parser._option_string_actions[option].dest
setattr(args, dest, value)
return
def show_parser_options(parser, args):
all_opts = list(parser.__dict__['_option_string_actions'].keys())
handled_flag_set = []
print("\n\t"+"-"*20+" switches, vars, vals "+"-"*20+"\n")
print("\t Literals\t\t\tvar 'dest'\t\tValue")
print("\t", "-"*65)
for opt in all_opts:
all_option_flags = get_option_flags(parser, opt)
if opt in handled_flag_set:
continue
elif "--help" in all_option_flags:
continue
elif "--version" in all_option_flags:
continue
else:
handled_flag_set.extend(all_option_flags)
dvar = parser.__dict__['_option_string_actions'][opt].__dict__['dest']
val = args.__dict__[dvar]
fmt1 = "\t{}".format(all_option_flags)
fmt2 = ":: {} ".format(dvar)
fmt3 = ":: {}".format(val)
fmtf = fmt1.ljust(33) + fmt2.ljust(24) + fmt3
print(fmtf)
print("\t"+"-"*65+"\n")
return
def set_btypes(userparams):
"""
    All cmd line args are delivered as strings. Find any user parameters that
    should be other Python types and convert them to those actual types.
    I.e.,
        'None' --> None
        'True' --> True
        'False' --> False
    :parameter userparams: user parameters (if any) passed on the command line.
    :type userparams: <list>
    :returns: A list of the same parameters as (spec, val) tuples, with
        'None', 'True', and 'False' converted to their Python types and
        any specified primitive name preserved.
E.g., [('foo','bar'), ('tileArrays:par1','val1')]
:rtype: <list> of tuples.
"""
upars = []
if userparams:
for upar in userparams:
tmp = upar.split("=", 1)
spec, val = tmp[0].strip(), tmp[1].strip()
if val == 'None':
val = None
elif val == 'True':
val = True
elif val == 'False':
val = False
upars.append((spec,val))
return upars
def normalize_args(args):
"""
Convert argparse argument lists to single string values.
:parameter args: argparse Namespace object or equivalent
:type args: <Namespace>
:return: Same with converted types.
:rtype: <Namespace>
"""
    for attr in ('adpkg', 'drpkg', 'recipename', 'config',
                 'logmode', 'logfile', 'suffix'):
        val = getattr(args, attr)
        if isinstance(val, list):
            setattr(args, attr, val[0])
return args
def normalize_upload(upload):
"""
For Recipe System v2.0, upload shall now be a list of things to send
to fitsstore.
E.g.,
$ reduce --upload metrics <file.fits> <file2.fits>
$ reduce --upload metrics, calibs <file.fits> <file2.fits>
$ reduce --upload metrics, calibs, science <file.fits> <file2.fits>
    result in
upload == ['metrics']
upload == ['metrics', 'calibs']
upload == ['metrics', 'calibs', 'science']
:parameter upload: upload argument received by the reduce command line.
:type upload: <list>
:return: list of coerced or defaulted upload instructions.
:rtype: <list>
"""
if upload and isinstance(upload, list):
splitc = upload if len(upload) > 1 else upload[0].split(',')
return [c.lower() for c in splitc]
elif upload is None:
pass
else:
raise TypeError("upload must be None or a list")
return
def normalize_ucals(cals):
"""
When a user passes a --user_cal argument of the form,
--user_cal processed_bias:/path/to/foo.fits
The parser produces a user calibrations list like,
['processed_bias:/path/to/foo.fits']
This list would pass to the Reduce __init__ as such, but, this function
will translate into a dict and confirm that the provided file exists and
is of the correct type.
{'processed_bias': '/path/to/foo.fits'}
User calibrations always take precedence over nominal calibration
retrieval. User calibrations are not cached because they are not
retrieved from fitsstore and are presumably on disk.
Parameters
----------
cals : list
A list of strings like, 'caltype:calfilepath'.
Returns
-------
normalz : dict
a dictionary of the cal types applied to input files.
"""
normalz = {}
if cals is None:
return normalz
for cal in cals:
ctype, cpath = cal.split(":")
scal, stype = ctype.split("_")
caltags = {scal.upper(), stype.upper()}
cad = astrodata.open(cpath)
try:
assert caltags.issubset(cad.tags)
except AssertionError:
errmsg = "Calibration type {}\ndoes not match file {}"
raise TypeError(errmsg.format(ctype, cpath))
normalz[ctype] = cpath
return normalz
|
GeminiDRSoftwareREPO_NAMEDRAGONSPATH_START.@DRAGONS_extracted@DRAGONS-master@recipe_system@utils@reduce_utils.py@.PATH_END.py
|
{
"filename": "test_calibration_stacker.py",
"repo_name": "LCOGT/banzai",
"repo_path": "banzai_extracted/banzai-main/banzai/tests/test_calibration_stacker.py",
"type": "Python"
}
|
from banzai.lco import LCOCalibrationFrame
from banzai.data import CCDData
from banzai.context import Context
from banzai.calibrations import CalibrationStacker
from banzai.dbs import Instrument
import numpy as np
from astropy.io import fits
import pytest
nx, ny = 102, 105
header = {'DATASEC': f'[1:{nx},1:{ny}]', 'DETSEC': f'[1:{nx},1:{ny}]', 'CCDSUM': '1 1',
'OBSTYPE': 'TEST', 'RDNOISE': 3.0, 'TELESCOP': '1m0-02', 'DAY-OBS': '20191209',
'DATE-OBS': '2019-12-09T00:00:00', 'RA': 0.0, 'DEC': 0.0}
context = {'CALIBRATION_MIN_FRAMES': {'TEST': 1},
'CALIBRATION_FILENAME_FUNCTIONS': {'TEST': ['banzai.utils.file_utils.ccdsum_to_filename']},
'CALIBRATION_SET_CRITERIA': {'TEST': ['binning']},
'CALIBRATION_FRAME_CLASS': 'banzai.lco.LCOCalibrationFrame',
'TELESCOPE_FILENAME_FUNCTION': 'banzai.utils.file_utils.telescope_to_filename',
'MASTER_CALIBRATION_EXTENSION_ORDER': {'BIAS': ['SCI', 'BPM', 'ERR'],
'DARK': ['SCI', 'BPM', 'ERR'],
'SKYFLAT': ['SCI', 'BPM', 'ERR']}}
context = Context(context)
instrument = Instrument(site='cpt', camera='fa11', name='fa11')
@pytest.fixture(scope='module')
def set_random_seed():
np.random.seed(84651611)
class FakeStacker(CalibrationStacker):
@property
def calibration_type(self):
return 'TEST'
def test_stacking():
test_images = [LCOCalibrationFrame([CCDData(np.ones((ny, nx)) * i, meta=fits.Header(header))], '')
for i in range(9)]
for image in test_images:
image.instrument = instrument
stage = FakeStacker(context)
stacked_data = stage.do_stage(test_images)
np.testing.assert_allclose(stacked_data.data, np.ones((ny, nx)) * np.mean(np.arange(9)))
np.testing.assert_allclose(stacked_data.primary_hdu.uncertainty, np.ones((ny, nx)))
assert np.all(stacked_data.mask == 0)
def test_stacking_with_noise():
test_images = [LCOCalibrationFrame([CCDData(np.random.normal(0.0, 3.0, size=(ny, nx)), meta=fits.Header(header))], '')
for i in range(81)]
for image in test_images:
image.instrument = instrument
stage = FakeStacker(context)
stacked_data = stage.do_stage(test_images)
np.testing.assert_allclose(stacked_data.data, np.zeros((ny, nx)), atol=5.0/3.0)
np.testing.assert_allclose(stacked_data.primary_hdu.uncertainty, np.ones((ny, nx)) / 3.0, atol=0.05)
assert np.all(stacked_data.mask == 0)
def test_stacking_with_different_pixels():
d = np.arange(nx*ny, dtype=np.float64).reshape(ny, nx)
test_images = [LCOCalibrationFrame([CCDData(d * i, meta=fits.Header(header))], '')
for i in range(9)]
for image in test_images:
image.instrument = instrument
stage = FakeStacker(context)
stacked_data = stage.do_stage(test_images)
np.testing.assert_allclose(stacked_data.data, d * np.mean(np.arange(9)))
np.testing.assert_allclose(stacked_data.primary_hdu.uncertainty, np.ones((ny, nx)))
assert np.all(stacked_data.mask == 0)
|
LCOGTREPO_NAMEbanzaiPATH_START.@banzai_extracted@banzai-main@banzai@tests@test_calibration_stacker.py@.PATH_END.py
|
{
"filename": "README.md",
"repo_name": "rasg-affiliates/healvis",
"repo_path": "healvis_extracted/healvis-master/README.md",
"type": "Markdown"
}
|
# healvis

[](https://codecov.io/gh/rasg-affiliates/healvis)
Radio interferometric visibility simulator based on HEALpix maps.
**Note** This is a tool developed for specific research uses, and is not yet at the development standards of other RASG projects. Use at your own risk.
## Dependencies
Python dependencies for `healvis` include
* numpy
* astropy
* astropy-healpix
* scipy
* h5py
* pyyaml
* multiprocessing
* [pyuvdata](https://github.com/HERA-Team/pyuvdata/)
These will be installed automatically. Optional dependencies include
* [pygsm](https://github.com/telegraphic/PyGSM)
* [scikit-learn](https://scikit-learn.org/stable/)
The use of PyGSM within this package is subject to the GNU General Public License (GPL), due to its dependency on `healpy`.
## Installation
Clone this repository and install it with
```pip install .```
To install optional dependencies, use ```pip install .[gsm]``` to install with PyGSM or ```pip install .[all]``` to install scikit-learn as well.
To install `healvis` for development, use ```pip install .[dev]```.
## Getting Started
To get started running `healvis`, see our [tutorial notebooks](https://github.com/RadioAstronomySoftwareGroup/healvis/tree/master/notebooks).
|
rasg-affiliatesREPO_NAMEhealvisPATH_START.@healvis_extracted@healvis-master@README.md@.PATH_END.py
|
{
"filename": "copy_reg.py",
"repo_name": "waynebhayes/SpArcFiRe",
"repo_path": "SpArcFiRe_extracted/SpArcFiRe-master/scripts/SpArcFiRe-pyvenv/lib/python2.7/copy_reg.py",
"type": "Python"
}
|
/usr/lib64/python2.7/copy_reg.py
|
waynebhayesREPO_NAMESpArcFiRePATH_START.@SpArcFiRe_extracted@SpArcFiRe-master@scripts@SpArcFiRe-pyvenv@lib@python2.7@copy_reg.py@.PATH_END.py
|
{
"filename": "debug.md",
"repo_name": "lgrcia/prose",
"repo_path": "prose_extracted/prose-main/docs/md/debug.md",
"type": "Markdown"
}
|
# Debugging
Finding why a specific error is thrown when running a [Sequence](prose.Sequence) can be challenging. Here are a few steps to debug a [Sequence](prose.Sequence).
## 1. Find from which block the error comes from
The error might be raised in the [Sequence](prose.Sequence) `_run` function, but scrolling up the traceback will reveal in which block the error actually occurs (if not already specified). For each block, the documentation (should) contain an explanation for each possible exception raised. If not, [open a Github issue](https://github.com/lgrcia/prose/issues/new/choose)!
## 2. Show the last image
A [Sequence](prose.Sequence) holds a `last_image` attribute that corresponds to the image currently being processed by the blocks. When an error occurs, it can be helpful to show this last image with
```python
sequence.last_image.show()
```
to understand why the next block struggles (for example, because sources are not well detected in the image).
## 3. Run all blocks individually
If none of these methods help, you can always load an image by hand and run all blocks manually. That's what a [Sequence](prose.Sequence) does internally. Here is how to do it:
```python
from prose import FITSImage
# your test image
test_image = FITSImage(your_path)
# your sequence
sequence = Sequence([...])
# run all blocks
for block in sequence.blocks:
test_image = block(test_image)
# terminate all blocks
for block in sequence.blocks:
block.terminate()
```
This way, the error will be thrown by a specific block, and you can track how each block changes your test image.
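
If it is still unclear which block breaks, the manual loop above can be instrumented to report the offender. A minimal self-contained sketch (`DummyBlock` and `FailingBlock` are hypothetical stand-ins for illustration, not prose classes):

```python
# Stand-in blocks mimicking the callable interface a Sequence block exposes
# (DummyBlock / FailingBlock are illustrative, not part of the prose API).
class DummyBlock:
    def __call__(self, image):
        return image  # pass the image through unchanged

class FailingBlock:
    def __call__(self, image):
        raise ValueError("sources not detected")

blocks = [DummyBlock(), FailingBlock()]
image = "test image"
failed = None
for block in blocks:
    try:
        image = block(image)
    except Exception as err:
        # record which block broke before inspecting or re-raising
        failed = type(block).__name__
        print(f"{failed} failed: {err}")
        break
```

In practice, replace the stand-in list with `sequence.blocks` and the string with your `FITSImage`.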
And if you have any questions, just [open a Github issue](https://github.com/lgrcia/prose/issues/new/choose)!
|
lgrciaREPO_NAMEprosePATH_START.@prose_extracted@prose-main@docs@md@debug.md@.PATH_END.py
|
{
"filename": "thermo.ipynb",
"repo_name": "misharash/class_public",
"repo_path": "class_public_extracted/class_public-master/notebooks/thermo.ipynb",
"type": "Jupyter Notebook"
}
|
```python
# import necessary modules
# uncomment to get plots displayed in notebook
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from classy import Class
from scipy.optimize import fsolve
from scipy.interpolate import interp1d
import math
```
```python
# esthetic definitions for the plots
font = {'size' : 16, 'family':'STIXGeneral'}
axislabelfontsize='large'
matplotlib.rc('font', **font)
matplotlib.mathtext.rcParams['legend.fontsize']='medium'
plt.rcParams["figure.figsize"] = [8.0,6.0]
```
```python
common_settings = {'output' : 'tCl',
# LambdaCDM parameters
'h':0.67556,
'omega_b':0.022032,
'omega_cdm':0.12038,
'A_s':2.215e-9,
'n_s':0.9619,
'tau_reio':0.0925,
# Take fixed value for primordial Helium (instead of automatic BBN adjustment)
'YHe':0.246,
'thermodynamics_verbose':1
}
##############
#
# call CLASS
#
###############
M = Class()
M.set(common_settings)
M.compute()
derived = M.get_current_derived_parameters(['tau_rec','conformal_age'])
thermo = M.get_thermodynamics()
print(thermo.keys())
```
```python
tau = thermo['conf. time [Mpc]']
g = thermo['g [Mpc^-1]']
# to make the reionisation peak visible, rescale g by 100 for late times
g[:500] *= 100
#################
#
# start plotting
#
#################
#
plt.xlim([1.e2,derived['conformal_age']])
plt.xlabel(r'$\tau \,\,\, \mathrm{[Mpc]}$')
plt.ylabel(r'$\mathrm{visibility} \,\,\, g \,\,\, [\mathrm{Mpc}^{-1}]$')
plt.axvline(x=derived['tau_rec'],color='k')
# The conformal time at reionisation could be extracted from the code.
# But we know it because it is part of the standard output
# when thermodynamics_verbose=1
plt.axvline(x=4255.316282,color='k')
#
# Print functions one by one, saving between each (for slides)
#
plt.semilogx(tau,g,'r',label=r'$g$')
```
```python
plt.savefig('thermo.pdf',bbox_inches='tight')
```
|
misharashREPO_NAMEclass_publicPATH_START.@class_public_extracted@class_public-master@notebooks@thermo.ipynb@.PATH_END.py
|
{
"filename": "config.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/setuptools/py2/setuptools/config.py",
"type": "Python"
}
|
from __future__ import absolute_import, unicode_literals
import io
import os
import sys
import warnings
import functools
from collections import defaultdict
from functools import partial
from functools import wraps
from importlib import import_module
from distutils.errors import DistutilsOptionError, DistutilsFileError
from setuptools.extern.packaging.version import LegacyVersion, parse
from setuptools.extern.packaging.specifiers import SpecifierSet
from setuptools.extern.six import string_types, PY3
__metaclass__ = type
def read_configuration(
filepath, find_others=False, ignore_option_errors=False):
"""Read given configuration file and returns options from it as a dict.
:param str|unicode filepath: Path to configuration file
to get options from.
:param bool find_others: Whether to search for other configuration files
which could be on in various places.
:param bool ignore_option_errors: Whether to silently ignore
options, values of which could not be resolved (e.g. due to exceptions
in directives such as file:, attr:, etc.).
If False exceptions are propagated as expected.
:rtype: dict
"""
from setuptools.dist import Distribution, _Distribution
filepath = os.path.abspath(filepath)
if not os.path.isfile(filepath):
raise DistutilsFileError(
'Configuration file %s does not exist.' % filepath)
current_directory = os.getcwd()
os.chdir(os.path.dirname(filepath))
try:
dist = Distribution()
filenames = dist.find_config_files() if find_others else []
if filepath not in filenames:
filenames.append(filepath)
_Distribution.parse_config_files(dist, filenames=filenames)
handlers = parse_configuration(
dist, dist.command_options,
ignore_option_errors=ignore_option_errors)
finally:
os.chdir(current_directory)
return configuration_to_dict(handlers)
def _get_option(target_obj, key):
"""
Given a target object and option key, get that option from
the target object, either through a get_{key} method or
from an attribute directly.
"""
getter_name = 'get_{key}'.format(**locals())
by_attribute = functools.partial(getattr, target_obj, key)
getter = getattr(target_obj, getter_name, by_attribute)
return getter()
def configuration_to_dict(handlers):
"""Returns configuration data gathered by given handlers as a dict.
:param list[ConfigHandler] handlers: Handlers list,
usually from parse_configuration()
:rtype: dict
"""
config_dict = defaultdict(dict)
for handler in handlers:
for option in handler.set_options:
value = _get_option(handler.target_obj, option)
config_dict[handler.section_prefix][option] = value
return config_dict
def parse_configuration(
distribution, command_options, ignore_option_errors=False):
"""Performs additional parsing of configuration options
for a distribution.
Returns a list of used option handlers.
:param Distribution distribution:
:param dict command_options:
:param bool ignore_option_errors: Whether to silently ignore
options, values of which could not be resolved (e.g. due to exceptions
in directives such as file:, attr:, etc.).
If False exceptions are propagated as expected.
:rtype: list
"""
options = ConfigOptionsHandler(
distribution, command_options, ignore_option_errors)
options.parse()
meta = ConfigMetadataHandler(
distribution.metadata, command_options, ignore_option_errors,
distribution.package_dir)
meta.parse()
return meta, options
class ConfigHandler:
"""Handles metadata supplied in configuration files."""
section_prefix = None
"""Prefix for config sections handled by this handler.
Must be provided by class heirs.
"""
aliases = {}
"""Options aliases.
For compatibility with various packages. E.g.: d2to1 and pbr.
Note: `-` in keys is replaced with `_` by config parser.
"""
def __init__(self, target_obj, options, ignore_option_errors=False):
sections = {}
section_prefix = self.section_prefix
for section_name, section_options in options.items():
if not section_name.startswith(section_prefix):
continue
section_name = section_name.replace(section_prefix, '').strip('.')
sections[section_name] = section_options
self.ignore_option_errors = ignore_option_errors
self.target_obj = target_obj
self.sections = sections
self.set_options = []
@property
def parsers(self):
"""Metadata item name to parser function mapping."""
raise NotImplementedError(
'%s must provide .parsers property' % self.__class__.__name__)
def __setitem__(self, option_name, value):
unknown = tuple()
target_obj = self.target_obj
# Translate alias into real name.
option_name = self.aliases.get(option_name, option_name)
current_value = getattr(target_obj, option_name, unknown)
if current_value is unknown:
raise KeyError(option_name)
if current_value:
# Already inhabited. Skipping.
return
skip_option = False
parser = self.parsers.get(option_name)
if parser:
try:
value = parser(value)
except Exception:
skip_option = True
if not self.ignore_option_errors:
raise
if skip_option:
return
setter = getattr(target_obj, 'set_%s' % option_name, None)
if setter is None:
setattr(target_obj, option_name, value)
else:
setter(value)
self.set_options.append(option_name)
@classmethod
def _parse_list(cls, value, separator=','):
"""Represents value as a list.
Value is split either by separator (defaults to comma) or by lines.
:param value:
:param separator: List items separator character.
:rtype: list
"""
if isinstance(value, list): # _get_parser_compound case
return value
if '\n' in value:
value = value.splitlines()
else:
value = value.split(separator)
return [chunk.strip() for chunk in value if chunk.strip()]
@classmethod
def _parse_dict(cls, value):
"""Represents value as a dict.
:param value:
:rtype: dict
"""
separator = '='
result = {}
for line in cls._parse_list(value):
key, sep, val = line.partition(separator)
if sep != separator:
raise DistutilsOptionError(
'Unable to parse option value to dict: %s' % value)
result[key.strip()] = val.strip()
return result
@classmethod
def _parse_bool(cls, value):
"""Represents value as boolean.
:param value:
:rtype: bool
"""
value = value.lower()
return value in ('1', 'true', 'yes')
@classmethod
def _exclude_files_parser(cls, key):
"""Returns a parser function to make sure field inputs
are not files.
Parses a value after getting the key so error messages are
more informative.
:param key:
:rtype: callable
"""
def parser(value):
exclude_directive = 'file:'
if value.startswith(exclude_directive):
raise ValueError(
'Only strings are accepted for the {0} field, '
'files are not accepted'.format(key))
return value
return parser
@classmethod
def _parse_file(cls, value):
"""Represents value as a string, allowing including text
from nearest files using `file:` directive.
Directive is sandboxed and won't reach anything outside
directory with setup.py.
Examples:
file: README.rst, CHANGELOG.md, src/file.txt
:param str value:
:rtype: str
"""
include_directive = 'file:'
if not isinstance(value, string_types):
return value
if not value.startswith(include_directive):
return value
spec = value[len(include_directive):]
filepaths = (os.path.abspath(path.strip()) for path in spec.split(','))
return '\n'.join(
cls._read_file(path)
for path in filepaths
if (cls._assert_local(path) or True)
and os.path.isfile(path)
)
@staticmethod
def _assert_local(filepath):
if not filepath.startswith(os.getcwd()):
raise DistutilsOptionError(
'`file:` directive can not access %s' % filepath)
@staticmethod
def _read_file(filepath):
with io.open(filepath, encoding='utf-8') as f:
return f.read()
@classmethod
def _parse_attr(cls, value, package_dir=None):
"""Represents value as a module attribute.
Examples:
attr: package.attr
attr: package.module.attr
:param str value:
:rtype: str
"""
attr_directive = 'attr:'
if not value.startswith(attr_directive):
return value
attrs_path = value.replace(attr_directive, '').strip().split('.')
attr_name = attrs_path.pop()
module_name = '.'.join(attrs_path)
module_name = module_name or '__init__'
parent_path = os.getcwd()
if package_dir:
if attrs_path[0] in package_dir:
# A custom path was specified for the module we want to import
custom_path = package_dir[attrs_path[0]]
parts = custom_path.rsplit('/', 1)
if len(parts) > 1:
parent_path = os.path.join(os.getcwd(), parts[0])
module_name = parts[1]
else:
module_name = custom_path
elif '' in package_dir:
# A custom parent directory was specified for all root modules
parent_path = os.path.join(os.getcwd(), package_dir[''])
sys.path.insert(0, parent_path)
try:
module = import_module(module_name)
value = getattr(module, attr_name)
finally:
sys.path = sys.path[1:]
return value
@classmethod
def _get_parser_compound(cls, *parse_methods):
"""Returns parser function to represents value as a list.
Parses a value applying given methods one after another.
:param parse_methods:
:rtype: callable
"""
def parse(value):
parsed = value
for method in parse_methods:
parsed = method(parsed)
return parsed
return parse
@classmethod
def _parse_section_to_dict(cls, section_options, values_parser=None):
"""Parses section options into a dictionary.
Optionally applies a given parser to values.
:param dict section_options:
:param callable values_parser:
:rtype: dict
"""
value = {}
values_parser = values_parser or (lambda val: val)
for key, (_, val) in section_options.items():
value[key] = values_parser(val)
return value
def parse_section(self, section_options):
"""Parses configuration file section.
:param dict section_options:
"""
for (name, (_, value)) in section_options.items():
try:
self[name] = value
except KeyError:
pass # Keep silent for a new option may appear anytime.
def parse(self):
"""Parses configuration file items from one
or more related sections.
"""
for section_name, section_options in self.sections.items():
method_postfix = ''
if section_name: # [section.option] variant
method_postfix = '_%s' % section_name
section_parser_method = getattr(
self,
# Dots in section names are translated into dunderscores.
('parse_section%s' % method_postfix).replace('.', '__'),
None)
if section_parser_method is None:
raise DistutilsOptionError(
'Unsupported distribution option section: [%s.%s]' % (
self.section_prefix, section_name))
section_parser_method(section_options)
def _deprecated_config_handler(self, func, msg, warning_class):
""" this function will wrap around parameters that are deprecated
:param msg: deprecation message
:param warning_class: class of warning exception to be raised
:param func: function to be wrapped around
"""
@wraps(func)
def config_handler(*args, **kwargs):
warnings.warn(msg, warning_class)
return func(*args, **kwargs)
return config_handler
class ConfigMetadataHandler(ConfigHandler):
section_prefix = 'metadata'
aliases = {
'home_page': 'url',
'summary': 'description',
'classifier': 'classifiers',
'platform': 'platforms',
}
strict_mode = False
"""We need to keep it loose, to be partially compatible with
    `pbr` and `d2to1` packages which also use the `metadata` section.
"""
def __init__(self, target_obj, options, ignore_option_errors=False,
package_dir=None):
super(ConfigMetadataHandler, self).__init__(target_obj, options,
ignore_option_errors)
self.package_dir = package_dir
@property
def parsers(self):
"""Metadata item name to parser function mapping."""
parse_list = self._parse_list
parse_file = self._parse_file
parse_dict = self._parse_dict
exclude_files_parser = self._exclude_files_parser
return {
'platforms': parse_list,
'keywords': parse_list,
'provides': parse_list,
'requires': self._deprecated_config_handler(
parse_list,
"The requires parameter is deprecated, please use "
"install_requires for runtime dependencies.",
DeprecationWarning),
'obsoletes': parse_list,
'classifiers': self._get_parser_compound(parse_file, parse_list),
'license': exclude_files_parser('license'),
'license_files': parse_list,
'description': parse_file,
'long_description': parse_file,
'version': self._parse_version,
'project_urls': parse_dict,
}
def _parse_version(self, value):
"""Parses `version` option value.
:param value:
:rtype: str
"""
version = self._parse_file(value)
if version != value:
version = version.strip()
# Be strict about versions loaded from file because it's easy to
# accidentally include newlines and other unintended content
if isinstance(parse(version), LegacyVersion):
tmpl = (
'Version loaded from {value} does not '
'comply with PEP 440: {version}'
)
raise DistutilsOptionError(tmpl.format(**locals()))
return version
version = self._parse_attr(value, self.package_dir)
if callable(version):
version = version()
if not isinstance(version, string_types):
if hasattr(version, '__iter__'):
version = '.'.join(map(str, version))
else:
version = '%s' % version
return version
class ConfigOptionsHandler(ConfigHandler):
section_prefix = 'options'
@property
def parsers(self):
"""Metadata item name to parser function mapping."""
parse_list = self._parse_list
parse_list_semicolon = partial(self._parse_list, separator=';')
parse_bool = self._parse_bool
parse_dict = self._parse_dict
return {
'zip_safe': parse_bool,
'use_2to3': parse_bool,
'include_package_data': parse_bool,
'package_dir': parse_dict,
'use_2to3_fixers': parse_list,
'use_2to3_exclude_fixers': parse_list,
'convert_2to3_doctests': parse_list,
'scripts': parse_list,
'eager_resources': parse_list,
'dependency_links': parse_list,
'namespace_packages': parse_list,
'install_requires': parse_list_semicolon,
'setup_requires': parse_list_semicolon,
'tests_require': parse_list_semicolon,
'packages': self._parse_packages,
'entry_points': self._parse_file,
'py_modules': parse_list,
'python_requires': SpecifierSet,
}
def _parse_packages(self, value):
"""Parses `packages` option value.
:param value:
:rtype: list
"""
find_directives = ['find:', 'find_namespace:']
trimmed_value = value.strip()
if trimmed_value not in find_directives:
return self._parse_list(value)
findns = trimmed_value == find_directives[1]
if findns and not PY3:
raise DistutilsOptionError(
'find_namespace: directive is unsupported on Python < 3.3')
# Read function arguments from a dedicated section.
find_kwargs = self.parse_section_packages__find(
self.sections.get('packages.find', {}))
if findns:
from setuptools import find_namespace_packages as find_packages
else:
from setuptools import find_packages
return find_packages(**find_kwargs)
def parse_section_packages__find(self, section_options):
"""Parses `packages.find` configuration file section.
To be used in conjunction with _parse_packages().
:param dict section_options:
"""
section_data = self._parse_section_to_dict(
section_options, self._parse_list)
valid_keys = ['where', 'include', 'exclude']
find_kwargs = dict(
[(k, v) for k, v in section_data.items() if k in valid_keys and v])
where = find_kwargs.get('where')
if where is not None:
find_kwargs['where'] = where[0] # cast list to single val
return find_kwargs
def parse_section_entry_points(self, section_options):
"""Parses `entry_points` configuration file section.
:param dict section_options:
"""
parsed = self._parse_section_to_dict(section_options, self._parse_list)
self['entry_points'] = parsed
def _parse_package_data(self, section_options):
parsed = self._parse_section_to_dict(section_options, self._parse_list)
root = parsed.get('*')
if root:
parsed[''] = root
del parsed['*']
return parsed
def parse_section_package_data(self, section_options):
"""Parses `package_data` configuration file section.
:param dict section_options:
"""
self['package_data'] = self._parse_package_data(section_options)
def parse_section_exclude_package_data(self, section_options):
"""Parses `exclude_package_data` configuration file section.
:param dict section_options:
"""
self['exclude_package_data'] = self._parse_package_data(
section_options)
def parse_section_extras_require(self, section_options):
"""Parses `extras_require` configuration file section.
:param dict section_options:
"""
parse_list = partial(self._parse_list, separator=';')
self['extras_require'] = self._parse_section_to_dict(
section_options, parse_list)
def parse_section_data_files(self, section_options):
"""Parses `data_files` configuration file section.
:param dict section_options:
"""
parsed = self._parse_section_to_dict(section_options, self._parse_list)
self['data_files'] = [(k, v) for k, v in parsed.items()]
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@setuptools@py2@setuptools@config.py@.PATH_END.py
|
{
"filename": "test_examples.py",
"repo_name": "ytree-project/ytree",
"repo_path": "ytree_extracted/ytree-main/tests/test_examples.py",
"type": "Python"
}
|
"""
tests for example scripts
"""
#-----------------------------------------------------------------------------
# Copyright (c) ytree development team. All rights reserved.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file COPYING.txt, distributed with this software.
#-----------------------------------------------------------------------------
from ytree.utilities.testing import \
ExampleScriptTest, \
TempDirTest
class TestPlotMostMassive(TempDirTest, ExampleScriptTest):
script_filename = "plot_most_massive.py"
input_filename = "tiny_ctrees/locations.dat"
output_files = ("most_massive.png",)
class TestPlotMostHalos(TempDirTest, ExampleScriptTest):
script_filename = "plot_most_halos.py"
input_filename = "tiny_ctrees/locations.dat"
output_files = ("most_halos.png",)
class TestHaloAge(TempDirTest, ExampleScriptTest):
ncores = 2
script_filename = "halo_age.py"
input_filename = "tiny_ctrees/locations.dat"
output_files = (
"halo_age/halo_age-analysis.h5",
"halo_age/halo_age.h5",
"halo_age/halo_age_0000-analysis.h5",
"halo_age/halo_age_0000.h5"
)
class TestHaloSignificance(TempDirTest, ExampleScriptTest):
ncores = 2
script_filename = "halo_significance.py"
input_filename = "tiny_ctrees/locations.dat"
output_files = (
"halo_significance/halo_significance-analysis.h5",
"halo_significance/halo_significance.h5",
"halo_significance/halo_significance_0000-analysis.h5",
"halo_significance/halo_significance_0000.h5"
)
|
{
"filename": "_lineposition.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/choropleth/colorbar/title/font/_lineposition.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class LinepositionValidator(_plotly_utils.basevalidators.FlaglistValidator):
def __init__(
self,
plotly_name="lineposition",
parent_name="choropleth.colorbar.title.font",
**kwargs,
):
super(LinepositionValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "colorbars"),
extras=kwargs.pop("extras", ["none"]),
flags=kwargs.pop("flags", ["under", "over", "through"]),
**kwargs,
)
|
{
"filename": "tuto3_fit_with_likelihood.ipynb",
"repo_name": "mamartinod/grip",
"repo_path": "grip_extracted/grip-main/tutorials/tuto3_fit_with_likelihood.ipynb",
"type": "Jupyter Notebook"
}
|
# Tuto\#3 Fit the histograms with a likelihood
In this tutorial, we fit the histograms to obtain a self-calibrated null depth.
**Important: the self-calibration is only as good as the model**.
Once again, we load the data, build the histograms, prepare the model, then perform the fit.
## Load data and get the histogram
```python
import h5py
import numpy as np
import os
from timeit import default_timer as time
from datetime import datetime
import matplotlib.pyplot as plt
import sys
import grip
try:
import cupy as cp # To use the GPU to speed up the process
onGpu = True
except ModuleNotFoundError:
import numpy as cp # If no GPU, we load numpy again with the same acronym to still use the functions
onGpu = False
datafolder = 'dataset/'
darkfolder = datafolder
save_path = 'results/'
```
No module named 'torch'
GRIP is imported without NPE features
```python
wl_min, wl_max = 11000, 11200
dark_list = ['dataset/UT2015-02-08_ID009_SCI_bet_Leo_DIT-60ms_11um_BCKG.hdf5']
data_list = ['dataset/UT2015-02-08_ID009_SCI_bet_Leo_DIT-60ms_11um_NULL.hdf5']
dark = grip.load_data(dark_list, ['wl_scale', 'Iminus1', 'p1', 'p2'], (wl_min, wl_max), 'hdf5')
data = grip.load_data(data_list, ['wl_scale', 'Iminus1', 'p1', 'p2', 'piston_rms'], (wl_min, wl_max), 'hdf5')
wl_scale = data['wl_scale']
dark_IA, dark_IB = dark['p1'], dark['p2']
dark_Iminus = dark['Iminus1']
Iminus = data['Iminus1']
data_IA, data_IB = data['p1'], data['p2']
# Calculate the null depth
Iplus = data_IA + data_IB + 2 * (data_IA * data_IB)**0.5 # Using the estimator above
data_null = Iminus / Iplus # Calculated null depth
# Get the histogram
bin_bounds = (-0.01, 0.1) # Minimum and maximum values of the bins of the histogram
normed = False # Keep raw bin counts; the histogram is not normalised by its sum here
null_axis, null_pdf, null_pdf_err, sz = grip.compute_data_histogram(data_null, \
bin_bounds, \
wl_scale, normed=normed)
injection, spectra = grip.get_injection_and_spectrum(
data_IA, data_IB, wl_scale, (wl_min, wl_max))
nb_frames_binning_photometry = -1 # Bin over all the sample
injection, dummy = grip.binning(
injection, nb_frames_binning_photometry, axis=1, avg=True)
data_IA_axis = cp.linspace(injection[0].min(), injection[0].max(),
np.size(np.unique(injection[0])),
dtype=cp.float32)
cdf_data_IA = grip.computeCdf(data_IA_axis, injection[0], 'cdf', True)
cdf_data_IA = cp.array(cdf_data_IA, dtype=cp.float32)
data_IB_axis = cp.linspace(injection[1].min(), injection[1].max(),
np.size(np.unique(injection[1])),
dtype=cp.float32)
cdf_data_IB = grip.computeCdf(data_IB_axis, injection[1], 'cdf', True)
cdf_data_IB = cp.array(cdf_data_IB, dtype=cp.float32)
sigma_eps = data['piston_rms']
sigma_eps = np.radians(sigma_eps)
sigma_eps *= 2200 / wl_scale
sigma_eps = sigma_eps.reshape((1, -1))
sigma_eps_axis, sigma_eps_cdf = grip.get_dark_cdf(sigma_eps, wl_scale)
std_dark_Iminus = np.std(dark_Iminus)
dark_Iminus -= np.mean(dark_Iminus, 1, keepdims=True) # The model is better when the data are forced to be of average 0
dark_Iminus_axis, dark_Iminus_cdf = grip.get_dark_cdf(
dark_Iminus, wl_scale)
rvu_opd = {'normal1':None} # To generate RV reproducing the statistics of the OPD fluctuations
rvu_IA = None # To generate RV reproducing the statistics of the injection of beam A
rvu_IB = None # To generate RV reproducing the statistics of the injection of beam B
rvu_bg = [None]*wl_scale.size # To generate RV reproducing the statistics of the thermal background, per spectral channel
rvu_eps = [None]*wl_scale.size # To generate RV reproducing the statistics of the fringe blurring, per spectral channel
# Uncomment the lines below to play with "deterministic" Monte-Carlo
# rvu_opd = cp.random.uniform(0, 1, size=n_samp_per_loop, dtype=cp.float32)
# rvu_IA = cp.random.uniform(0, 1, size=n_samp_per_loop, dtype=cp.float32)
# rvu_IB = cp.random.uniform(0, 1, size=n_samp_per_loop, dtype=cp.float32)
# rvu_bg = cp.random.uniform(0, 1, size=(wl_scale.size, n_samp_per_loop), dtype=cp.float32)
# rvu_eps = cp.random.uniform(0, 1, size=(wl_scale.size, n_samp_per_loop), dtype=cp.float32)
# Embed all of the above in lists
rvus = [rvu_IA, rvu_IB, rvu_bg, rvu_eps]
cdfs = [(data_IA_axis, cdf_data_IA), (data_IB_axis, cdf_data_IB),\
(dark_Iminus_axis, dark_Iminus_cdf), (sigma_eps_axis, sigma_eps_cdf)]
```
dataset/UT2015-02-08_ID009_SCI_bet_Leo_DIT-60ms_11um_BCKG.hdf5
dataset/UT2015-02-08_ID009_SCI_bet_Leo_DIT-60ms_11um_NULL.hdf5
## Fit the histograms
Now, let's dive into the core of this tutorial.
We will use the negative binomial likelihood as a cost function.
As a reminder, the number of elements in each bin follows a binomial distribution.
The code can be used in the same way with a $\chi^2$ cost function.
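To make the cost function concrete, here is a minimal toy re-implementation of a negative log-multinomial likelihood over histogram bins (the function name and code below are illustrative sketches, not GRIP's internals — the notebook itself builds its cost via `grip.return_neg_func(grip.log_multinomial)`):

```python
import numpy as np
from scipy.special import gammaln

def neg_log_multinomial(counts, model_pdf):
    """Negative log-likelihood of observed bin counts under a multinomial
    whose event probabilities come from the (normalised) model histogram."""
    p = np.clip(model_pdf / model_pdf.sum(), 1e-300, None)  # bin probabilities
    n = counts.sum()
    # log n! - sum(log k_i!) + sum(k_i * log p_i), negated for minimisation
    loglike = gammaln(n + 1) - gammaln(counts + 1).sum() + (counts * np.log(p)).sum()
    return -loglike

counts = np.array([5, 20, 50, 20, 5])
good = np.array([0.05, 0.2, 0.5, 0.2, 0.05])  # model matching the data
bad = np.array([0.5, 0.2, 0.05, 0.2, 0.05])   # mismatched model
assert neg_log_multinomial(counts, good) < neg_log_multinomial(counts, bad)
```

A better-matching model yields a lower cost, which is what the minimiser exploits.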
The null estimator is:
$ N = \frac{I_{-}}{I_{+}} $
$I_{-}(\lambda) =I_1(\lambda) + I_2(\lambda) + 2 \sqrt{I_1(\lambda) I_2(\lambda)} |V| \cos(\frac{2\pi}{\lambda} \Delta OPD) (1 - 0.5\sigma^2_\epsilon + 0.125 \sigma^4_\epsilon) + B(\lambda)$
$I_{+}(\lambda) = I_1(\lambda) + I_2(\lambda) + 2 \sqrt{I_1(\lambda) I_2(\lambda)}$
- $I_x$ is the intensity of the output
- $\Delta OPD$ is the phase of the fringe due to atmospheric turbulence; the phase is centered around $\pi$
- $\sigma_\epsilon$ is the standard deviation of the phase within an exposure time, thus doing fringe blurring
- $B$ is the thermal background
The LBTI does not record the bright fringe; the paper uses an ad hoc expression for it.
The parameters to explore are:
- the astrophysical null depth ($N_a = \frac{1-|V|}{1+|V|}$)
- the location $\mu$ and
- the scale $\sigma$ of the normal distribution describing the statistics of the fluctuations of phase $\Delta \phi$.
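The relation $N_a = \frac{1-|V|}{1+|V|}$ between the astrophysical null depth and the visibility can be inverted in closed form; a minimal sketch (helper names are mine, not GRIP's):

```python
def na_from_visibility(v):
    """Astrophysical null depth N_a = (1 - |V|) / (1 + |V|)."""
    return (1 - abs(v)) / (1 + abs(v))

def visibility_from_na(na):
    """Inverse relation: |V| = (1 - N_a) / (1 + N_a)."""
    return (1 - na) / (1 + na)

# Round trip: a deep null corresponds to a visibility close to 1
v = 0.9875
na = na_from_visibility(v)
assert abs(visibility_from_na(na) - v) < 1e-12
```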
We first need to set the boundaries, the initial guesses, and the type of each parameter to fit (deterministic or a distribution parameter).
```python
bounds_mu = (0, 1000) # Boundaries of mu opd
bounds_sig = (100, 300) # Boundaries of sig opd
bounds_na = (0.0, 0.02) # Boundaries of Na
mu_opd = 250. # initial guess of DeltaPhi mu
sig_opd = 250. # initial guess of DeltaPhi sig
na = 0.0063 # initial guess of astro null
# Compile them into a readable tuple called by the TRF algorithm
bounds_fit = np.array(([bounds_na[0], bounds_mu[0], bounds_sig[0]],
[bounds_na[1], bounds_mu[1], bounds_sig[1]])).T
initial_guess = [na, mu_opd, sig_opd]
initial_guess = np.array(initial_guess, dtype=np.float64)
label_guess = ['deterministic', 'normal1', 'normal1']
spec_chan_width = 2600
instrument_constants = (spec_chan_width, np.pi)
n_samp_total = int(2e6)
n_samp_per_loop = int(1e6)
nloop = n_samp_total // n_samp_per_loop
```
We are going to use `minimize_fit` to perform the model fitting.
It lets you choose the cost function.
It is powered by `scipy.optimize.minimize` with the L-BFGS-B algorithm.
If a $\chi^2$ cost is preferred, GRIP provides the function `lstsqrs_fit`, powered by `scipy.optimize.curve_fit`.
Regardless of the method, a critical parameter of the algorithm to tune is the **differential step**.
It is the typical step size used to explore the parameter space and estimate the Jacobian.
Badly tuned, the fit will perform poorly and the algorithm will likely converge back to the initial guess, regardless of how close it is to the true solution.
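To illustrate what tuning the differential step means, here is a sketch using `scipy.optimize.minimize` directly: with L-BFGS-B, the finite-difference step is the `eps` entry of `options`. The quadratic cost below is a stand-in for the real histogram likelihood, and the chosen `eps` value is illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def cost(p):
    # Toy quadratic cost; GRIP's real cost is the histogram likelihood.
    return (p[0] - 0.0063)**2 + ((p[1] - 250.0) / 1000.0)**2

x0 = np.array([0.01, 300.0])
# `eps` is the finite-difference step used to estimate the gradient.
# Too small, and the step drowns in (Monte-Carlo) noise; too large,
# and the gradient estimate is inaccurate and the fit stalls.
res = minimize(cost, x0, method='L-BFGS-B',
               bounds=[(0.0, 0.02), (0.0, 1000.0)],
               options={'eps': 1e-4})
assert abs(res.x[0] - 0.0063) < 1e-3
```

For noisy Monte-Carlo cost functions like the one in this tutorial, the step must be large enough that the change in cost exceeds the noise floor between evaluations.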
```python
cost_fun_args = (label_guess, wl_scale, grip.lbti_model, instrument_constants, rvu_opd, cdfs, rvus)
cost_fun_kwargs = {'n_samp_per_loop':n_samp_per_loop, 'nloop':nloop, 'verbose':True, 'normed':True}
neg_log_multinomial = grip.return_neg_func(grip.log_multinomial)
start = time()
popt, pcov, res = grip.minimize_fit(neg_log_multinomial, grip.create_histogram_model, initial_guess,
null_axis, null_pdf, yerr=None, bounds=bounds_fit,
func_args=cost_fun_args, func_kwargs=cost_fun_kwargs)
stop = time()
print('Duration of fit', stop - start)
```
(1, 0.0063, 250.0, 250.0)
(2, 0.007639320225002102, 250.0, 250.0)
(3, 0.012360679774997895, 250.0, 250.0)
(4, 0.004721359549995793, 250.0, 250.0)
(5, 0.00631679846156664, 250.0, 250.0)
(6, 0.006350131795149135, 250.0, 250.0)
(7, 0.005707395024320714, 250.0, 250.0)
(8, 0.006108430572379088, 250.0, 250.0)
(9, 0.0062409540736968645, 250.0, 250.0)
(10, 0.006283465127984145, 250.0, 250.0)
(11, 0.00631679846156664, 381.9660112501051, 250.0)
(12, 0.00631679846156664, 618.0339887498948, 250.0)
(13, 0.00631679846156664, 236.0679774997897, 250.0)
(14, 0.00631679846156664, 255.04611754824808, 250.0)
(15, 0.00631679846156664, 246.74285246376127, 250.0)
(16, 0.00631679846156664, 303.52520309383374, 250.0)
(17, 0.00631679846156664, 273.56348048314806, 250.0)
(18, 0.00631679846156664, 262.1191208073624, 250.0)
(19, 0.00631679846156664, 251.87455250357436, 250.0)
(20, 0.00631679846156664, 249.91441750843495, 250.0)
(21, 0.00631679846156664, 252.38500004127997, 250.0)
(22, 0.00631679846156664, 251.87451914243692, 250.0)
(23, 0.00631679846156664, 252.06952611350422, 250.0)
(24, 0.00631679846156664, 251.94902579565832, 250.0)
(25, 0.00631679846156664, 251.9118318121551, 250.0)
(26, 0.00631679846156664, 251.99505282142027, 250.0)
(27, 0.00631679846156664, 251.9666065550983, 250.0)
(28, 0.00631679846156664, 251.93481895813707, 250.0)
(29, 0.00631679846156664, 251.95574104821637, 250.0)
(30, 0.00631679846156664, 251.94359926659786, 250.0)
(31, 0.00631679846156664, 251.95159079389245, 250.0)
(32, 0.00631679846156664, 251.94695304599816, 250.0)
(33, 0.00631679846156664, 251.95000553780267, 250.0)
(34, 0.00631679846156664, 251.9482340757383, 250.0)
(35, 0.00631679846156664, 251.94940002385724, 250.0)
(36, 0.00631679846156664, 251.94963130960375, 250.0)
(37, 0.00631679846156664, 251.9492570814048, 250.0)
(38, 0.00631679846156664, 251.94929044365028, 250.0)
(39, 0.00631679846156664, 251.9492237191593, 250.0)
(40, 0.00631679846156664, 251.9492570814048, 176.39320225002103)
(41, 0.00631679846156664, 251.9492570814048, 223.60679774997897)
(42, 0.00631679846156664, 251.9492570814048, 252.78640450004204)
(43, 0.00631679846156664, 251.9492570814048, 247.29594191428095)
(44, 0.00631679846156664, 251.9492570814048, 260.0897917327019)
(45, 0.00631679846156664, 251.9492570814048, 254.11135862588657)
(46, 0.00631679846156664, 251.9492570814048, 254.08276776901295)
(47, 0.00631679846156664, 251.9492570814048, 256.39491687322237)
(48, 0.00631679846156664, 251.9492570814048, 254.98360026107872)
(49, 0.00631679846156664, 251.9492570814048, 255.52267523803025)
(50, 0.00631679846156664, 251.9492570814048, 254.84247479096553)
(51, 0.00631679846156664, 251.9492570814048, 255.17170542429835)
(52, 0.00631679846156664, 251.9492570814048, 255.05545003996926)
(53, 0.00631679846156664, 251.9492570814048, 254.9296951281738)
(54, 0.00631679846156664, 251.9492570814048, 255.01104443453073)
(55, 0.00631679846156664, 251.9492570814048, 255.02800586651725)
(56, 0.00631679846156664, 251.9492570814048, 255.00402884688344)
(57, 0.00631679846156664, 251.9492570814048, 255.01752312505172)
(58, 0.00631679846156664, 251.9492570814048, 255.00836471850053)
(59, 0.00631679846156664, 251.9492570814048, 255.01351907410717)
(60, 0.00631679846156664, 251.9492570814048, 255.0100208740874)
(61, 0.00631679846156664, 251.9492570814048, 255.01198966273904)
(62, 0.00631679846156664, 251.9492570814048, 255.01065346923093)
(63, 0.00631679846156664, 251.9492570814048, 255.0114054795792)
(64, 0.00631679846156664, 251.9492570814048, 255.01089509907465)
(65, 0.00631679846156664, 251.9492570814048, 255.0111823414678)
(66, 0.00631679846156664, 251.9492570814048, 255.01126757264214)
(67, 0.00631679846156664, 251.9492570814048, 255.0111296657051)
(68, 0.00631679846156664, 251.9492570814048, 255.01121574912895)
(69, 0.00633359692313328, 253.8985141628096, 260.0223646829356)
(70, 0.007639320225002102, 251.9492570814048, 255.0111823414678)
(71, 0.012360679774997895, 251.9492570814048, 255.0111823414678)
(72, 0.004721359549995793, 251.9492570814048, 255.0111823414678)
(73, 0.006212121074544898, 251.9492570814048, 255.0111823414678)
(74, 0.006245454409430848, 251.9492570814048, 255.0111823414678)
(75, 0.006616747468524619, 251.9492570814048, 255.0111823414678)
(76, 0.006387275738217746, 251.9492570814048, 255.0111823414678)
(77, 0.006312159875476822, 251.9492570814048, 255.0111823414678)
(78, 0.006278787743822385, 251.9492570814048, 255.0111823414678)
(79, 0.006245454409430848, 381.9660112501051, 255.0111823414678)
(80, 0.006245454409430848, 618.0339887498948, 255.0111823414678)
(81, 0.006245454409430848, 236.0679774997897, 255.0111823414678)
(82, 0.006245454409430848, 252.4808360460145, 255.0111823414678)
(83, 0.006245454409430848, 241.7300328503845, 255.0111823414678)
(84, 0.006245454409430848, 240.3356142603766, 255.0111823414678)
(85, 0.006245454409430848, 239.15328529174633, 255.0111823414678)
(86, 0.006245454409430848, 240.38356284215425, 255.0111823414678)
(87, 0.006245454409430848, 239.77538491779814, 255.0111823414678)
(88, 0.006245454409430848, 240.12162569300665, 255.0111823414678)
(89, 0.006245454409430848, 240.25387790084517, 255.0111823414678)
(90, 0.006245454409430848, 240.27848953183167, 255.0111823414678)
(91, 0.006245454409430848, 240.25800346829683, 255.0111823414678)
(92, 0.006245454409430848, 240.26618007376743, 255.0111823414678)
(93, 0.006245454409430848, 240.26112665367398, 255.0111823414678)
(94, 0.006245454409430848, 240.26013586749693, 255.0111823414678)
(95, 0.006245454409430848, 240.25642764175316, 255.0111823414678)
(96, 0.006245454409430848, 240.2588179723137, 255.0111823414678)
(97, 0.006245454409430848, 240.25740155611751, 255.0111823414678)
(98, 0.006245454409430848, 240.2583145811473, 255.0111823414678)
(99, 0.006245454409430848, 240.25777355830257, 255.0111823414678)
(100, 0.006245454409430848, 240.25812230283137, 255.0111823414678)
(101, 0.006245454409430848, 240.25791565049337, 255.0111823414678)
(102, 0.006245454409430848, 240.25804885904998, 255.0111823414678)
(103, 0.006245454409430848, 240.2579699248807, 255.0111823414678)
(104, 0.006245454409430848, 240.25800346829683, 176.39320225002103)
(105, 0.006245454409430848, 240.25800346829683, 223.60679774997897)
(106, 0.006245454409430848, 240.25800346829683, 252.78640450004204)
(107, 0.006245454409430848, 240.25800346829683, 251.41397770206336)
(108, 0.006245454409430848, 240.25800346829683, 252.2162410533434)
(109, 0.006245454409430848, 240.25800346829683, 270.8203932499369)
(110, 0.006245454409430848, 240.25800346829683, 281.96601125010517)
(111, 0.006245454409430848, 240.25800346829683, 263.9320225002103)
(112, 0.006245454409430848, 240.25800346829683, 263.2442579881386)
(113, 0.006245454409430848, 240.25800346829683, 267.08977568245257)
(114, 0.006245454409430848, 240.25800346829683, 265.1381768877437)
(115, 0.006245454409430848, 240.25800346829683, 264.39273248056827)
(116, 0.006245454409430848, 240.25800346829683, 263.6693198328549)
(117, 0.006245454409430848, 240.25800346829683, 264.10799805375075)
(118, 0.006245454409430848, 240.25800346829683, 263.8316790102158)
(119, 0.006245454409430848, 240.25800346829683, 263.99923918047364)
(120, 0.006245454409430848, 240.25800346829683, 263.8936946975822)
(121, 0.006245454409430848, 240.25800346829683, 263.95769698745994)
(122, 0.006245454409430848, 240.25800346829683, 263.91738258232044)
(123, 0.006245454409430848, 240.25800346829683, 263.94182928169596)
(124, 0.006245454409430848, 240.25800346829683, 263.92643054916886)
(125, 0.006245454409430848, 240.25800346829683, 263.93576835741754)
(126, 0.006245454409430848, 240.25800346829683, 263.9298865649759)
(127, 0.006245454409430848, 240.25800346829683, 263.93345329034645)
(128, 0.006245454409430848, 240.25800346829683, 263.9312066455485)
(129, 0.006245454409430848, 240.25800346829683, 263.9325690134115)
(130, 0.006245454409430848, 240.25800346829683, 263.93171087145936)
(131, 0.006245454409430848, 240.25800346829683, 263.9322312496779)
(132, 0.006245454409430848, 240.25800346829683, 263.93190346861934)
(133, 0.006245454409430848, 240.25800346829683, 263.9321022354118)
(134, 0.006245454409430848, 240.25800346829683, 263.93197703418826)
(135, 0.006245454409430848, 240.25800346829683, 263.9320559658611)
(136, 0.0061741103572950555, 228.56674985518887, 272.85286265895274)
(137, 0.007639320225002102, 240.25800346829683, 263.9320225002103)
(138, 0.012360679774997895, 240.25800346829683, 263.9320225002103)
(139, 0.004721359549995793, 240.25800346829683, 263.9320225002103)
(140, 0.006354505801224418, 240.25800346829683, 263.9320225002103)
(141, 0.006387839136175245, 240.25800346829683, 263.9320225002103)
(142, 0.006242671440597287, 240.25800346829683, 263.9320225002103)
(143, 0.006311788876574967, 240.25800346829683, 263.9320225002103)
(144, 0.006354505801224418, 381.9660112501051, 263.9320225002103)
(145, 0.006354505801224418, 618.0339887498948, 263.9320225002103)
(146, 0.006354505801224418, 236.0679774997897, 263.9320225002103)
(147, 0.006354505801224418, 243.14434217989387, 263.9320225002103)
(148, 0.006354505801224418, 234.7986786127948, 263.9320225002103)
(149, 0.006354505801224418, 145.11356389627022, 263.9320225002103)
(150, 0.006354505801224418, 200.5420130760158, 263.9320225002103)
(151, 0.006354505801224418, 221.7137967189824, 263.9320225002103)
(152, 0.006354505801224418, 229.1304754140872, 263.9320225002103)
(153, 0.006354505801224418, 232.59776669096456, 263.9320225002103)
(154, 0.006354505801224418, 233.9580050649005, 263.9320225002103)
(155, 0.006354505801224418, 235.28350764574444, 263.9320225002103)
(156, 0.006354505801224418, 234.53375968823963, 263.9320225002103)
(157, 0.006354505801224418, 234.98386682464883, 263.9320225002103)
(158, 0.006354505801224418, 234.69748858787779, 263.9320225002103)
(159, 0.006354505801224418, 234.8694142154072, 263.9320225002103)
(160, 0.006354505801224418, 234.76002746259894, 263.9320225002103)
(161, 0.006354505801224418, 234.82569720877802, 263.9320225002103)
(162, 0.006354505801224418, 234.78391518712425, 263.9320225002103)
(163, 0.006354505801224418, 234.77479088826948, 263.9320225002103)
(164, 0.006354505801224418, 234.78955431394002, 263.9320225002103)
(165, 0.006354505801224418, 234.78043001508524, 263.9320225002103)
(166, 0.006354505801224418, 234.786069141901, 263.9320225002103)
(167, 0.006354505801224418, 234.782583969862, 263.9320225002103)
(168, 0.006354505801224418, 234.78176123234752, 263.9320225002103)
(169, 0.006354505801224418, 234.78309244960977, 263.9320225002103)
(170, 0.006354505801224418, 234.78195819945861, 263.9320225002103)
(171, 0.006354505801224418, 234.78234494683704, 263.9320225002103)
(172, 0.006354505801224418, 234.78277819184305, 263.9320225002103)
(173, 0.006354505801224418, 234.78249267119057, 263.9320225002103)
(174, 0.006354505801224418, 234.7826581560574, 263.9320225002103)
(175, 0.006354505801224418, 234.78254909687263, 263.9320225002103)
(176, 0.006354505801224418, 234.78261738440892, 263.9320225002103)
(177, 0.006354505801224418, 234.782583969862, 176.39320225002103)
(178, 0.006354505801224418, 234.782583969862, 223.60679774997897)
(179, 0.006354505801224418, 234.782583969862, 252.78640450004204)
(180, 0.006354505801224418, 234.782583969862, 252.20292581319111)
(181, 0.006354505801224418, 234.782583969862, 270.8203932499369)
(182, 0.006354505801224418, 234.782583969862, 262.5345537430438)
(183, 0.006354505801224418, 234.782583969862, 266.56786767748906)
(184, 0.006354505801224418, 234.782583969862, 266.3878277395123)
(185, 0.006354505801224418, 234.782583969862, 264.57943178033804)
(186, 0.006354505801224418, 234.782583969862, 263.7983578729398)
(187, 0.006354505801224418, 234.782583969862, 265.30750742476937)
(188, 0.006354505801224418, 234.782583969862, 264.2810880954376)
(189, 0.006354505801224418, 234.782583969862, 264.8575319301298)
(190, 0.006354505801224418, 234.782583969862, 264.46547463303494)
(191, 0.006354505801224418, 234.782583969862, 264.68565658528206)
(192, 0.006354505801224418, 234.782583969862, 264.53590402332924)
(193, 0.006354505801224418, 234.782583969862, 264.6200060453783)
(194, 0.006354505801224418, 234.782583969862, 264.6450823202418)
(195, 0.006354505801224418, 234.782583969862, 264.6045080552015)
(196, 0.006354505801224418, 234.782583969862, 264.6295843300649)
(197, 0.006354505801224418, 234.782583969862, 264.6140863398881)
(198, 0.006354505801224418, 234.782583969862, 264.6236646245747)
(199, 0.006354505801224418, 234.782583969862, 264.61774491908443)
(200, 0.006354505801224418, 234.782583969862, 264.6214034982808)
(201, 0.006354505801224418, 234.782583969862, 264.6191423719869)
(202, 0.006354505801224418, 234.782583969862, 264.6205398248894)
(203, 0.006354505801224418, 234.782583969862, 264.619676151498)
(204, 0.006354505801224418, 234.782583969862, 264.6202099310091)
(205, 0.006354505801224418, 234.782583969862, 264.6198800371287)
(206, 0.006354505801224418, 234.782583969862, 264.6200839227595)
(207, 0.006354505801224418, 234.782583969862, 264.6199579145098)
(208, 0.006354505801224418, 234.782583969862, 264.6200393889161)
(209, 0.006463557193017989, 229.30716447142717, 265.3080562776219)
(210, 0.0042132996479009215, 342.2915458386765, 251.11094086703696)
(211, 0.006817262035091656, 211.54780939004473, 267.53961793824124)
(212, 0.008426599295801845, 130.7437364486318, 277.6930987584415)
(213, 0.006876512058623017, 208.57289335467266, 267.9134326706837)
(214, 0.005822636908611109, 261.4874728972636, 261.2644216872373)
(215, 0.0065309168613445654, 225.92506695485807, 265.7330356260197)
(216, 0.006673704019955302, 218.75579034568568, 266.6338950740171)
(217, 0.006673633297636739, 218.7593412803016, 266.6334488793489)
(218, 0.006673707659736169, 218.75560759398664, 266.6339180377837)
(219, 0.006728540551888205, 216.0024736641313, 266.9798646161511)
(220, 0.00669465196083679, 217.70400400836246, 266.76605787242835)
(221, 0.006681707670885995, 218.35393076696948, 266.68439096335015)
(222, 0.006677701708673743, 218.5550682604595, 266.6591169195052)
(223, 0.0066757010363091614, 218.65552108636882, 266.6464944636799)
(224, 0.006674469061834675, 218.71737794989193, 266.6387218050191)
(225, 0.006673998489458694, 218.74100516932026, 266.6357529135936)
(226, 0.0066738187468052355, 218.75002996408224, 266.63461889797793)
(227, 0.0066737500912208425, 218.75347712893983, 266.63418574255655)
(228, 0.006673726739529151, 218.75464960648372, 266.6340384142373)
(229, 0.00667371494756859, 218.75524167532134, 266.63396401749753)
(230, 0.006673711299517091, 218.7554248422849, 266.6339410015507)
(231, 0.007639320225002102, 218.75560759398664, 266.6339180377837)
(232, 0.012360679774997895, 218.75560759398664, 266.6339180377837)
(233, 0.004721359549995793, 218.75560759398664, 266.6339180377837)
(234, 0.0066951602267397705, 218.75560759398664, 266.6339180377837)
(235, 0.006728493560391297, 218.75560759398664, 266.6339180377837)
(236, 0.005941235455241116, 218.75560759398664, 266.6339180377837)
(237, 0.006411786223488726, 218.75560759398664, 266.6339180377837)
(238, 0.006579387098020228, 218.75560759398664, 266.6339180377837)
(239, 0.006655645996134819, 218.75560759398664, 266.6339180377837)
(240, 0.0066951602267397705, 218.75560759398664, 176.39320225002103)
(241, 0.0066951602267397705, 218.75560759398664, 223.60679774997897)
(242, 0.0066951602267397705, 218.75560759398664, 252.78640450004204)
(243, 0.0066951602267397705, 218.75560759398664, 256.49551860830724)
(244, 0.0066951602267397705, 218.75560759398664, 270.6610974126271)
(245, 0.0066951602267397705, 218.75560759398664, 281.8675610083813)
(246, 0.0066951602267397705, 218.75560759398664, 273.62246266051335)
(247, 0.0066951602267397705, 218.75560759398664, 275.4579737817224)
(248, 0.0066951602267397705, 218.75560759398664, 272.9634584441685)
(249, 0.0066951602267397705, 218.75560759398664, 274.32356552208677)
(250, 0.0066951602267397705, 218.75560759398664, 274.75687092014897)
(251, 0.0066951602267397705, 218.75560759398664, 274.0242602890713)
(252, 0.0066951602267397705, 218.75560759398664, 274.48907345663775)
(253, 0.0066951602267397705, 218.75560759398664, 274.2092410960856)
(254, 0.0066951602267397705, 218.75560759398664, 274.38678392767747)
(255, 0.0066951602267397705, 218.75560759398664, 274.27989747709864)
(256, 0.0066951602267397705, 218.75560759398664, 274.34771280430783)
(257, 0.0066951602267397705, 218.75560759398664, 274.3068858131236)
(258, 0.0066951602267397705, 218.75560759398664, 274.3327889631593)
(259, 0.0066951602267397705, 218.75560759398664, 274.33848936323534)
(260, 0.0066951602267397705, 218.75560759398664, 274.32926592216285)
(261, 0.0066951602267397705, 218.75560759398664, 274.33496632223887)
(262, 0.0066951602267397705, 218.75560759398664, 274.33631200415573)
(263, 0.0066951602267397705, 218.75560759398664, 274.3341346450762)
(264, 0.0066951602267397705, 218.75560759398664, 274.3354803269931)
(265, 0.0066951602267397705, 218.75560759398664, 274.3346486498304)
(266, 0.0066951602267397705, 218.75560759398664, 274.3351626545846)
(267, 0.0066951602267397705, 218.75560759398664, 274.3348449821761)
(268, 0.0066951602267397705, 218.75560759398664, 274.33504131452185)
(269, 0.0066951602267397705, 218.75560759398664, 274.33491997445907)
(270, 0.0066951602267397705, 218.75560759398664, 274.3349997697972)
(271, 0.004111136364166658, 348.4982400249089, 258.03208353432785)
(272, 0.006651958369607318, 220.9247526825647, 274.06240118719825)
(273, 0.008222272728333316, 142.08000144164146, 283.9696823471296)
(274, 0.006978838046593363, 204.5122766235751, 276.12472001359913)
(275, 0.006868206207053931, 210.0670496658907, 275.42673191111066)
(276, 0.0074537878322276175, 180.66526949906296, 279.1212289845451)
(277, 0.007160252721756172, 195.40353043197237, 277.2692845929065)
(278, 0.006872501810162099, 209.8513694381635, 275.45383333033135)
(279, 0.0070481322864475345, 201.03304517327902, 276.56190478057533)
(280, 0.006938221218512366, 206.5516286096575, 275.86846410319055)
(281, 0.007005306090993069, 203.1833284642896, 276.29170973522037)
(282, 0.0069742636167209274, 204.74195661303523, 276.09585944651536)
(283, 0.006988360340703752, 204.03416667940718, 276.18479718509985)
(284, 0.006982475239292659, 204.32965487526226, 276.14766745116447)
(285, 0.006977090769861246, 204.60000657301313, 276.11369625790775)
(286, 0.006980227330580862, 204.44252132280454, 276.13348515479436)
(287, 0.0069781706462694465, 204.54578648242912, 276.1205093136087)
(288, 0.006979368705856561, 204.48563246957622, 276.12806799961953)
(289, 0.006979696671317663, 204.46916547680343, 276.130137168774)
(290, 0.006979899365119759, 204.45898831557733, 276.13141598563993)
(291, 0.006979571399658658, 204.47545530835012, 276.12934681648545)
(292, 0.006979774093460755, 204.46527814712402, 276.13062563335137)
(293, 0.006979597516981982, 204.4741439697324, 276.12951159347034)
(294, 0.0069796587977315645, 204.47106709188952, 276.1298982202705)
(295, 0.006979726243944842, 204.46768064899138, 276.1303237456403)
(296, 0.00697968220489505, 204.4698918291328, 276.13004589856723)
(297, 0.006979707967056109, 204.46859832304665, 276.1302084347954)
(298, 0.007604887541410909, 174.15574698374485, 287.6402349486319)
(299, 0.007639320225002102, 204.46916547680343, 276.130137168774)
(300, 0.012360679774997895, 204.46916547680343, 276.130137168774)
(301, 0.004721359549995793, 204.46916547680343, 276.130137168774)
(302, 0.0068046881350992214, 204.46916547680343, 276.130137168774)
(303, 0.006881051235531547, 204.46916547680343, 276.130137168774)
(304, 0.007020685302290622, 204.46916547680343, 276.130137168774)
(305, 0.006914384570328029, 204.46916547680343, 276.130137168774)
(306, 0.006947717904630097, 204.46916547680343, 276.130137168774)
(307, 0.006914384570328029, 204.46916547680343, 176.39320225002103)
(308, 0.006914384570328029, 204.46916547680343, 223.60679774997897)
(309, 0.006914384570328029, 204.46916547680343, 252.78640450004204)
(310, 0.006914384570328029, 204.46916547680343, 258.3944941259597)
(311, 0.006914384570328029, 204.46916547680343, 270.89511447969596)
(312, 0.006914384570328029, 204.46916547680343, 282.01219150977744)
(313, 0.006914384570328029, 204.46916547680343, 280.5418684555228)
(314, 0.006914384570328029, 204.46916547680343, 282.3229016992552)
(315, 0.006914384570328029, 204.46916547680343, 289.07495242766674)
(316, 0.006914384570328029, 204.46916547680343, 284.90195558374495)
(317, 0.006914384570328029, 204.46916547680343, 283.3080126243129)
(318, 0.006914384570328029, 204.46916547680343, 282.72053049119455)
(319, 0.006914384570328029, 204.46916547680343, 282.77441306305684)
(320, 0.006914384570328029, 204.46916547680343, 282.6459781792098)
(321, 0.006914384570328029, 204.46916547680343, 282.52257394483286)
(322, 0.006914384570328029, 204.46916547680343, 282.610220359577)
(323, 0.006914384570328029, 204.46916547680343, 282.6744546284481)
(324, 0.006914384570328029, 204.46916547680343, 282.63231990747363)
(325, 0.006914384570328029, 204.46916547680343, 282.6568552149399)
(326, 0.006914384570328029, 204.46916547680343, 282.6407611836342)
(327, 0.006914384570328029, 204.46916547680343, 282.65013283716183)
(328, 0.006914384570328029, 204.46916547680343, 282.6439854642191)
(329, 0.006914384570328029, 204.46916547680343, 282.64756511733583)
(330, 0.006914384570328029, 204.46916547680343, 282.64521702981324)
(331, 0.006914384570328029, 204.46916547680343, 282.6465843356359)
(332, 0.006914384570328029, 204.46916547680343, 282.6456874460108)
(333, 0.006914384570328029, 204.46916547680343, 282.6462097103621)
(334, 0.006914384570328029, 204.46916547680343, 282.64586712900945)
(335, 0.006914384570328029, 204.46916547680343, 282.64606661624055)
(336, 0.006914384570328029, 204.46916547680343, 282.6459357618077)
(337, 0.006914384570328029, 204.46916547680343, 282.6460119591497)
(338, 0.003691707251856393, 366.2782899318627, 262.3137624643033)
(339, 0.005973307810018113, 251.72019050433175, 276.70862429483736)
(340, 0.007383414503712787, 180.91939137152775, 285.60513816946593)
(341, 0.006996319938528865, 200.35522887058812, 283.1629171762949)
(342, 0.007019999432029707, 199.16629255079314, 283.3123136321918)
(343, 0.0066055640763411335, 219.97489838384385, 280.69759666892463)
(344, 0.006847064480476422, 207.8492757766117, 282.2212485356416)
(345, 0.006937797511738731, 203.29361261308077, 282.79369292947484)
(346, 0.0069739663605991615, 201.47759158823018, 283.0218860634802)
(347, 0.0069975766275490485, 200.29213110125744, 283.1708457616666)
(348, 0.006987781631529888, 200.7839332810217, 283.1090480846709)
(349, 0.0069930585954616365, 200.51897938424676, 283.1423410142377)
(350, 0.006995074216326158, 200.41777600113048, 283.1550577817471)
(351, 0.006996799951021287, 200.3311276673181, 283.1659456264242)
(352, 0.006995844114987972, 200.37911974855652, 283.15991515470864)
(353, 0.006996503286985946, 200.34602303010874, 283.1640739413111)
(354, 0.006996138190108891, 200.36435437395096, 283.16177050608394)
(355, 0.006996389971407685, 200.35171255242, 283.1633590212141)
(356, 0.006996250516809837, 200.35871450270827, 283.1624791872482)
(357, 0.006996346688708245, 200.35388575656316, 283.1630859460363)
(358, 0.006996293421791754, 200.35656026358575, 283.1627498793658)
(359, 0.006996330579806113, 200.3546945770227, 283.16298431325066)
(360, 0.006996309297251617, 200.35576316415353, 283.16285003933916)
Duration of fit 5.276994066999578
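Before plotting, a quick reminder of where the error bars in the next cell come from: `np.diag(pcov)**0.5` extracts the parameter variances from the diagonal of the covariance matrix and converts them into 1-σ uncertainties. A self-contained sketch with a made-up covariance matrix (the numbers below are illustrative only, not values from the fit):

```python
import numpy as np

# Toy covariance matrix in the same parameter order as the fit:
# (na, mu_opd, sig_opd). The values are purely illustrative.
pcov = np.array([[0.25,  0.0,  0.0],
                 [0.0,  25.0,  5.0],
                 [0.0,   5.0, 36.0]])

# 1-sigma uncertainties are the square roots of the diagonal (the variances);
# the off-diagonal covariances are ignored by this estimate
uncertainties = np.diag(pcov) ** 0.5
print(uncertainties.tolist())  # [0.5, 5.0, 6.0]
```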
After the fit, let's display the plots, as explained in Tutorial \#1.
```python
uncertainties = np.diag(pcov)**0.5
na_opt = popt[0]
out = grip.create_histogram_model(popt, null_axis, label_guess, wl_scale, grip.lbti_model, instrument_constants, rvu_opd, cdfs, rvus, n_samp_per_loop=n_samp_per_loop, nloop=nloop, normed=normed)[0]
out = out / out.sum() * null_pdf.sum()
key = 'LBTI Null'
basin_hopping_count = 0
nb_rows_plot = 3
label_optimizer = 'lklh'
text_params = '%s ' % key+'Fitted values: ' +\
'Na2$ = %.2E \pm %.2E$, ' % (na_opt,
uncertainties[0]) +\
r'$\mu_{OPD} = %.2E \pm %.2E$ nm, ' % (popt[1],
uncertainties[1]) +\
'\n' + r'$\sigma_{OPD} = %.2E \pm %.2E$ nm,' % (
popt[2], uncertainties[2]) +\
' '+label_optimizer+' = %.2E ' % (res.fun) +\
'(Duration = %.3f s)' % (stop-start)
save_name = key + '_' + '%03d' % (basin_hopping_count) + '_' + str(wl_min) + '-' + str(wl_max) + '_' + os.path.basename(datafolder[:-1])
grip.plot_null_distributions(nb_rows_plot, wl_scale, text_params, null_axis, null_pdf,\
null_pdf_err, save_path, save_name, model=out,\
save_fig=False)
```
(497, 0.006996319938528865, 200.35522887058812, 283.1629171762949)

Now we apparently have a good fit.
We can explore its quality with the diagnostic tools.
```python
out = grip.create_histogram_model(popt, null_axis, label_guess, wl_scale, grip.lbti_model, instrument_constants, rvu_opd, cdfs, rvus, n_samp_per_loop=n_samp_per_loop, nloop=nloop)
synth_histo = out[0]  # Histogram flattened along the wavelength axis.
diag = out[1][0]
if onGpu:
    # Move the diagnostic arrays from GPU memory back to the host
    diag = cp.asnumpy(cp.array(diag))
    diag_IA = cp.asnumpy(cp.array(out[2][0][0][0]))
    diag_IB = cp.asnumpy(cp.array(out[2][0][0][1]))
    diag_dkIA = cp.asnumpy(cp.array(out[3])[0, :, 0])
    diag_dkIB = cp.asnumpy(cp.array(out[3])[0, :, 1])
else:
    # On CPU the arrays are already NumPy-compatible
    diag = np.array(diag)
    diag_IA = out[2][0][0][0]
    diag_IB = out[2][0][0][1]
    diag_dkIA = np.array(out[3])[0, :, 0]
    diag_dkIB = np.array(out[3])[0, :, 1]
liste_rv_interfminus = diag[:,0,:]
liste_rv_interfplus = diag[:,1,:]
histom = [np.histogram(Iminus[k], bins=int(
Iminus[k].size**0.5), density=True)[::-1]
for k in range(wl_scale.size)]
histom = [(elt[0][:-1], elt[1]) for elt in histom]
histom2 = [np.histogram(liste_rv_interfminus[k], bins=int(
liste_rv_interfminus[k].size**0.5), density=True)[::-1]
for k in range(wl_scale.size)]
histom2 = [(elt[0][:-1], elt[1]) for elt in histom2]
histop = [np.histogram(Iplus[k], bins=int(
Iplus[k].size**0.5), density=True)[::-1]
for k in range(wl_scale.size)]
histop = [(elt[0][:-1], elt[1]) for elt in histop]
histop2 = [np.histogram(liste_rv_interfplus[k], bins=int(
liste_rv_interfplus[k].size**0.5), density=True)[::-1]
for k in range(wl_scale.size)]
histop2 = [(elt[0][:-1], elt[1]) for elt in histop2]
histodkm = [np.histogram(dark['Iminus1'][k], bins=int(
dark['Iminus1'][k].size**0.5), density=True)[::-1]
for k in range(wl_scale.size)]
histodkm = [(elt[0][:-1], elt[1]) for elt in histodkm]
# Diagnostic on the synthetic sequence of nulled and anti-nulled signal
markers = ['.', '-', '-']
data_xy = ((histom, histop), (histom2, histop2), (histodkm,))
labels = (('Iminus', 'Iplus'), ('rv minus', 'rv plus'), ('Dark minus',))
save_name = key +\
'_details_' + str(wl_min) + '-' + str(wl_max) + '_' +\
os.path.basename(datafolder[:-1])
grip.plot_diag_spectral_data(nb_rows_plot, wl_scale, data_xy, labels, save_path, \
save_name, markers, save_fig=False)
```
(498, 0.006996319938528865, 200.35522887058812, 283.1629171762949)

Apparently it is all good: the results look reliable.
But we are hitting some limitations of GRIP that we need to test:
- the Monte-Carlo is a random process, so the sampled parameter space changes slightly from one run to the next; a picky scientist may suspect the result is just luck;
- it is not obvious that the differential step is well tuned: not all parameters loop back to their initial guesses.

A way to address both points is to repeat the fit with different initial guesses.
This technique, known as basin hopping, looks for the global minimum of the cost function by restarting an algorithm designed to find local minima.
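Before running the GRIP version below, the basin-hopping idea itself can be sketched in a few lines of plain Python, independent of GRIP (the toy cost function, the crude local minimizer, and all the parameter values here are illustrative only):

```python
import random

def local_min(f, x0, step=0.1, iters=200):
    # Crude descending local minimizer (stand-in for the bounded fit above)
    x = x0
    for _ in range(iters):
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x = cand
    return x

def basin_hopping(f, x0, hops=20, jump=2.0, seed=0):
    # Restart the local minimizer from randomly perturbed guesses and
    # keep the best local minimum found across all hops
    rng = random.Random(seed)
    best = local_min(f, x0)
    for _ in range(hops):
        trial = local_min(f, best + rng.uniform(-jump, jump))
        if f(trial) < f(best):
            best = trial
    return best

# Double-well cost: local minimum near x = +1, global minimum near x = -1
f = lambda x: (x**2 - 1)**2 + 0.2 * x

# Starting in the wrong basin, the hops still reach the global one
print(basin_hopping(f, 1.0) < 0)  # True
```

The GRIP loop below does the same thing, except that each "hop" draws a fresh initial guess within the fit boundaries and reruns the full likelihood fit.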
```python
bounds_mu = (0, 1000) # Boundaries of mu opd
bounds_sig = (100, 400) # Boundaries of sig opd
bounds_na = (0.0, 0.01) # Boundaries of Na
# Compile them into a readable tuple called by the TRF algorithm
bounds_fit = np.array(([bounds_na[0], bounds_mu[0], bounds_sig[0]],
[bounds_na[1], bounds_mu[1], bounds_sig[1]])).T
mu_opd0 = 20 # initial guess of DeltaPhi mu
sig_opd0 = 340 # initial guess of DeltaPhi sig
na0 = 0.0063 # initial guess of astro null
key = 'LBTI Null'
spec_chan_width = 2600
instrument_constants = (spec_chan_width, np.pi)
n_samp_total = int(2e6)
n_samp_per_loop = int(1e6)
nloop = n_samp_total // n_samp_per_loop
nb_files_data = len(data_list)
nb_files_dark = len(dark_list)
k = 0
basin_hopping_nloop = (10*k, 10*k+2) # Let's do the fit 2 times
for idx_basin, basin_hopping_count in enumerate(
range(basin_hopping_nloop[0], basin_hopping_nloop[1])):
plt.close('all')
grip.create_histogram_model.count = 0
# Save the content written in the console into a txt file
print('-------------')
print(basin_hopping_count)
print('-------------')
print('Fitting '+key)
print('%s-%02d-%02d %02dh%02d' % (datetime.now().year,
datetime.now().month,
datetime.now().day,
datetime.now().hour,
datetime.now().minute))
print('Spectral bandwidth (in nm):%s,%s' % (wl_min, wl_max))
print('Time binning of photometry: %s' % (
nb_frames_binning_photometry))
print('Bin bounds and number of bins %s %s' %
(bin_bounds, int(sz**0.5)))
print('Boundaries (na, mu, sig):')
print(str(bounds_na)+str(bounds_mu)+str(bounds_sig))
print('Number of elements ', n_samp_total)
print('Number of loaded points', data_null.shape[1], nb_files_data)
print('Number of loaded dark points',
dark['Iminus1'].shape[1], nb_files_dark)
print('Injection - shape array', injection.shape)
print('Normed PDF', normed)
print('')
"""
Prepare initial guess
"""
# Create the set of initial guesses for each hop; the order must
# match the rows of bounds_fit, i.e. (na, mu, sig)
guess_std = [0.03, 50, 50]
guess_ini = [na0, mu_opd0, sig_opd0]
if idx_basin > 0:
    na, mu_opd, sig_opd = grip.basin_hoppin_values(guess_ini, guess_std, bounds_fit)
else:
    mu_opd = mu_opd0
    sig_opd = sig_opd0
    na = na0
initial_guess = [na, mu_opd, sig_opd]
initial_guess = np.array(initial_guess, dtype=np.float64)
cost_fun_args = (label_guess, wl_scale, grip.lbti_model, instrument_constants, rvu_opd, cdfs, rvus)
cost_fun_kwargs = {'n_samp_per_loop':n_samp_per_loop, 'nloop':nloop, 'verbose':True}
start = time()
neg_log_multinomial = grip.return_neg_func(grip.log_multinomial)
popt, pcov, res = grip.minimize_fit(neg_log_multinomial, grip.create_histogram_model, initial_guess,
null_axis, null_pdf, yerr=None, bounds=bounds_fit,
func_args=cost_fun_args, func_kwargs=cost_fun_kwargs)
stop = time()
print('Duration of fit', stop - start)
uncertainties = np.diag(pcov)**0.5
na_opt = popt[0]
out = grip.create_histogram_model(popt, null_axis, label_guess, wl_scale, grip.lbti_model, instrument_constants, rvu_opd, cdfs, rvus, n_samp_per_loop=n_samp_per_loop, nloop=nloop)[0]
out = out / out.sum() * null_pdf.sum()
nb_rows_plot = 3
label_optimizer = 'lklh'
text_params = '%s ' % key+'Fitted values: ' +\
'Na2$ = %.2E \pm %.2E$, ' % (na_opt,
uncertainties[0]) +\
r'$\mu_{OPD} = %.2E \pm %.2E$ nm, ' % (popt[1],
uncertainties[1]) +\
'\n' + r'$\sigma_{OPD} = %.2E \pm %.2E$ nm,' % (
popt[2], uncertainties[2]) +\
' '+label_optimizer+' = %.2E ' % (res.fun) +\
'(Duration = %.3f s)' % (stop-start)
save_name = key + '_' + '%03d' % (basin_hopping_count) + '_' + str(wl_min) + '-' + str(wl_max) + '_' + os.path.basename(datafolder[:-1])
grip.plot_null_distributions(nb_rows_plot, wl_scale, text_params, null_axis, null_pdf,\
null_pdf_err, save_path, save_name, model=out,\
save_fig=False)
```
-------------
0
-------------
Fitting LBTI Null
2024-08-23 11h31
Spectral bandwidth (in nm):11000,11200
Time binning of photometry: -1
Bin bounds and number of bins (-0.01, 0.1) 5
Boundaries (na, mu, sig):
(0.0, 0.01)(0, 1000)(100, 400)
Number of elements 2000000
Number of loaded points 984 1
Number of loaded dark points 990 1
Injection - shape array (2, 1)
Normed PDF False
(1, 0.0063, 20.0, 340.0)
(2, 0.003819660112501051, 20.0, 340.0)
(3, 0.006180339887498948, 20.0, 340.0)
(4, 0.007639320225002103, 20.0, 340.0)
(5, 0.007176199071309889, 20.0, 340.0)
(6, 0.007142865724980423, 20.0, 340.0)
(7, 0.006775213570112458, 20.0, 340.0)
(8, 0.007042610361928581, 20.0, 340.0)
(9, 0.0071095323791453705, 20.0, 340.0)
(10, 0.007142865724980423, 381.9660112501051, 340.0)
(11, 0.007142865724980423, 618.0339887498948, 340.0)
(12, 0.007142865724980423, 236.0679774997897, 340.0)
(13, 0.007142865724980423, 159.36856354983593, 340.0)
(14, 0.007142865724980423, 128.22646164381382, 340.0)
(15, 0.007142865724980423, 111.30675184441465, 340.0)
(16, 0.007142865724980423, 68.79135581719831, 340.0)
(17, 0.007142865724980423, 76.7580779835163, 340.0)
(18, 0.007142865724980423, 81.2550728199729, 340.0)
(19, 0.007142865724980423, 81.00337946865145, 340.0)
(20, 0.007142865724980423, 79.03574934412046, 340.0)
(21, 0.007142865724980423, 78.16575629957188, 340.0)
(22, 0.007142865724980423, 79.65362298236025, 340.0)
(23, 0.007142865724980423, 79.49339092944508, 340.0)
(24, 0.007142865724980423, 79.65365720049981, 340.0)
(25, 0.007142865724980423, 79.59241978423384, 340.0)
(26, 0.007142865724980423, 79.63024544089615, 340.0)
(27, 0.007142865724980423, 79.64469355609438, 340.0)
(28, 0.007142865724980423, 79.63917486716203, 340.0)
(29, 0.007142865724980423, 79.6481042934279, 340.0)
(30, 0.007142865724980423, 79.64258560449557, 340.0)
(31, 0.007142865724980423, 79.64599634182909, 340.0)
(32, 0.007142865724980423, 79.64388839023027, 340.0)
(33, 0.007142865724980423, 79.64519117596498, 340.0)
(34, 0.007142865724980423, 79.64438601010087, 340.0)
(35, 0.007142865724980423, 79.64488362997147, 340.0)
(36, 0.007142865724980423, 79.64457608397797, 340.0)
(37, 0.007142865724980423, 79.64476615785506, 340.0)
(38, 0.007142865724980423, 79.64464868573864, 340.0)
(39, 0.007142865724980423, 79.64472777410148, 340.0)
(40, 0.007142865724980423, 79.64469355609438, 214.58980337503152)
(41, 0.007142865724980423, 79.64469355609438, 285.4101966249684)
(42, 0.007142865724980423, 79.64469355609438, 329.17960675006304)
(43, 0.007142865724980423, 79.64469355609438, 322.7980590648217)
(44, 0.007142865724980423, 79.64469355609438, 326.9755330337276)
(45, 0.007142865724980423, 79.64469355609438, 327.60462106962575)
(46, 0.007142865724980423, 79.64469355609438, 328.0575559712594)
(47, 0.007142865724980423, 79.64469355609438, 327.7182357418939)
(48, 0.007142865724980423, 79.64469355609438, 327.36433082182856)
(49, 0.007142865724980423, 79.64469355609438, 327.51283836213236)
(50, 0.007142865724980423, 79.64469355609438, 327.6480180128115)
(51, 0.007142865724980423, 79.64469355609438, 327.6576734318493)
(52, 0.007142865724980423, 79.64469355609438, 327.6314418555224)
(53, 0.007142865724980423, 79.64469355609438, 327.64168648412993)
(54, 0.007142865724980423, 79.64469355609438, 327.6517060547083)
(55, 0.007142865724980423, 79.64469355609438, 327.6539853899525)
(56, 0.007142865724980423, 79.64469355609438, 327.65233387776135)
(57, 0.007142865724980423, 79.64469355609438, 327.65167253822017)
(58, 0.007142865724980423, 79.64469355609438, 327.65194586177563)
(59, 0.007142865724980423, 79.64469355609438, 327.6517976528573)
(60, 0.007142865724980423, 79.64469355609438, 327.6517410420879)
(61, 0.007985731449960846, 139.28938711218876, 315.3034820841758)
(62, 0.003819660112501051, 79.64469355609438, 327.6517410420879)
(63, 0.006180339887498948, 79.64469355609438, 327.6517410420879)
(64, 0.007639320225002103, 79.64469355609438, 327.6517410420879)
(65, 0.007237841752945843, 79.64469355609438, 327.6517410420879)
(66, 0.007204508418203787, 79.64469355609438, 327.6517410420879)
(67, 0.006813310849682579, 79.64469355609438, 327.6517410420879)
(68, 0.007055084243345002, 79.64469355609438, 327.6517410420879)
(69, 0.007155560839134704, 79.64469355609438, 327.6517410420879)
(70, 0.007204508418203787, 381.9660112501051, 327.6517410420879)
(71, 0.007204508418203787, 618.0339887498948, 327.6517410420879)
(72, 0.007204508418203787, 236.06797749978972, 327.6517410420879)
(73, 0.007204508418203787, 170.6740288677147, 327.6517410420879)
(74, 0.007204508418203787, 144.37402647547685, 327.6517410420879)
(75, 0.007204508418203787, 119.87980637240719, 327.6517410420879)
(76, 0.007204508418203787, 110.001619357001, 327.6517410420879)
(77, 0.007204508418203787, 99.49931889196053, 327.6517410420879)
(78, 0.007204508418203787, 61.49396093269614, 327.6517410420879)
(79, 0.007204508418203787, 79.41230314350922, 327.6517410420879)
(80, 0.007204508418203787, 89.4534484758174, 327.6517410420879)
(81, 0.007204508418203787, 72.56810544103055, 327.6517410420879)
(82, 0.007204508418203787, 83.24767937447358, 327.6517410420879)
(83, 0.007204508418203787, 76.79805224688631, 327.6517410420879)
(84, 0.007204508418203787, 80.87728650409414, 327.6517410420879)
(85, 0.007204508418203787, 78.41374815611915, 327.6517410420879)
(86, 0.007204508418203787, 79.9718769942996, 327.6517410420879)
(87, 0.007204508418203787, 79.03088907796193, 327.6517410420879)
(88, 0.007204508418203787, 79.62604133529548, 327.6517410420879)
(89, 0.007204508418203787, 79.26661593425743, 327.6517410420879)
(90, 0.007204508418203787, 79.49394386807762, 327.6517410420879)
(91, 0.007204508418203787, 79.35665558130115, 327.6517410420879)
(92, 0.007204508418203787, 79.44348712542818, 327.6517410420879)
(93, 0.007204508418203787, 79.39104766613681, 327.6517410420879)
(94, 0.007204508418203787, 79.4242143646977, 327.6517410420879)
(95, 0.007204508418203787, 79.40418427360007, 327.6517410420879)
(96, 0.007204508418203787, 79.4168528251557, 327.6517410420879)
(97, 0.007204508418203787, 79.40920201115416, 327.6517410420879)
(98, 0.007204508418203787, 79.41404096726018, 327.6517410420879)
(99, 0.007204508418203787, 79.41111861635319, 327.6517410420879)
(100, 0.007204508418203787, 79.41296693311563, 327.6517410420879)
(101, 0.007204508418203787, 79.41185069439621, 327.6517410420879)
(102, 0.007204508418203787, 79.41255668857748, 327.6517410420879)
(103, 0.007204508418203787, 79.41213032332622, 327.6517410420879)
(104, 0.007204508418203787, 79.41239998910761, 327.6517410420879)
(105, 0.007204508418203787, 79.41245984297909, 327.6517410420879)
(106, 0.007204508418203787, 79.41236299738068, 327.6517410420879)
(107, 0.007204508418203787, 79.41239998910761, 214.58980337503152)
(108, 0.007204508418203787, 79.41239998910761, 285.4101966249684)
(109, 0.007204508418203787, 79.41239998910761, 329.17960675006304)
(110, 0.007204508418203787, 79.41239998910761, 322.60506376137164)
(111, 0.007204508418203787, 79.41239998910761, 331.9115431527679)
(112, 0.007204508418203787, 79.41239998910761, 329.9000031396077)
(113, 0.007204508418203787, 79.41239998910761, 326.66835478888027)
(114, 0.007204508418203787, 79.41239998910761, 328.22039385520605)
(115, 0.007204508418203787, 79.41239998910761, 327.62756768373725)
(116, 0.007204508418203787, 79.41239998910761, 328.58678057859424)
(117, 0.007204508418203787, 79.41239998910761, 327.99395440712544)
(118, 0.007204508418203787, 79.41239998910761, 328.36034113051363)
(119, 0.007204508418203787, 79.41239998910761, 328.133901682433)
(120, 0.007204508418203787, 79.41239998910761, 328.27384895774065)
(121, 0.007204508418203787, 79.41239998910761, 328.1873567849676)
(122, 0.007204508418203787, 79.41239998910761, 328.24081188750216)
(123, 0.007204508418203787, 79.41239998910761, 328.20777481726367)
(124, 0.007204508418203787, 79.41239998910761, 328.2281928495598)
(125, 0.007204508418203787, 79.41239998910761, 328.2177731447581)
(126, 0.007204508418203787, 79.41239998910761, 328.2204271969739)
(127, 0.007204508418203787, 79.41239998910761, 328.2193928328896)
(128, 0.007204508418203787, 79.41239998910761, 328.22001149870465)
(129, 0.007204508418203787, 79.41239998910761, 328.22024780801837)
(130, 0.007204508418203787, 79.41239998910761, 328.22033807014435)
(131, 0.007266151111427151, 79.18010642212084, 328.7890466683242)
(132, 0.003819660112501051, 79.41239998910761, 328.22039385520605)
(133, 0.006180339887498948, 79.41239998910761, 328.22039385520605)
(134, 0.007639320225002103, 79.41239998910761, 328.22039385520605)
(135, 0.007240993887469054, 79.41239998910761, 328.22039385520605)
(136, 0.007207660553594553, 79.41239998910761, 328.22039385520605)
(137, 0.006815258976491214, 79.41239998910761, 328.22039385520605)
(138, 0.00705777648838014, 79.41239998910761, 328.22039385520605)
(139, 0.007147359887101556, 79.41239998910761, 328.22039385520605)
(140, 0.007207660553594553, 381.9660112501051, 328.22039385520605)
(141, 0.007207660553594553, 618.0339887498948, 328.22039385520605)
(142, 0.007207660553594553, 236.06797749978972, 328.22039385520605)
(143, 0.007207660553594553, 172.91254261751067, 328.22039385520605)
(144, 0.007207660553594553, 140.9763738869055, 328.22039385520605)
(145, 0.007207660553594553, 129.04649954503495, 328.22039385520605)
(146, 0.007207660553594553, 79.75512284802943, 328.22039385520605)
(147, 0.007207660553594553, 102.53603572277281, 328.22039385520605)
(148, 0.007207660553594553, 101.08030920002332, 328.22039385520605)
(149, 0.007207660553594553, 112.66213184535249, 328.22039385520605)
(150, 0.007207660553594553, 107.01376374733857, 328.22039385520605)
(151, 0.007207660553594553, 104.246375635779, 328.22039385520605)
(152, 0.007207660553594553, 103.18932743722564, 328.22039385520605)
(153, 0.007207660553594553, 103.15479207985678, 328.22039385520605)
(154, 0.007207660553594553, 103.1893611232278, 328.22039385520605)
(155, 0.007207660553594553, 103.17613610452435, 328.22039385520605)
(156, 0.007207660553594553, 103.18428879649065, 328.22039385520605)
(157, 0.007207660553594553, 103.18740284772197, 328.22039385520605)
(158, 0.007207660553594553, 103.18859230944963, 328.22039385520605)
(159, 0.007207660553594553, 103.18904664340127, 328.22039385520605)
(160, 0.007207660553594553, 103.18922018352856, 328.22039385520605)
(161, 0.007207660553594553, 103.18928646995877, 328.22039385520605)
(162, 0.007207660553594553, 103.18932743722564, 214.58980337503152)
(163, 0.007207660553594553, 103.18932743722564, 285.4101966249684)
(164, 0.007207660553594553, 103.18932743722564, 329.17960675006304)
(165, 0.007207660553594553, 103.18932743722564, 318.5068726640352)
(166, 0.007207660553594553, 103.18932743722564, 323.7113020153934)
(167, 0.007207660553594553, 103.18932743722564, 323.78689159730624)
(168, 0.007207660553594553, 103.18932743722564, 321.1788237893053)
(169, 0.007207660553594553, 103.18932743722564, 322.7439814087968)
(170, 0.007207660553594553, 103.18932743722564, 323.2770820852459)
(171, 0.007207660553594553, 103.18932743722564, 323.512229818753)
(172, 0.007207660553594553, 103.18932743722564, 323.6352632024919)
(173, 0.007207660553594553, 103.18932743722564, 323.68225777332924)
(174, 0.007207660553594553, 103.18932743722564, 323.7401746664887)
(175, 0.007207660553594553, 103.18932743722564, 323.70020810210235)
(176, 0.007207660553594553, 103.18932743722564, 323.7223303867665)
(177, 0.007207660553594553, 103.18932743722564, 323.7112114429463)
(178, 0.007207660553594553, 103.18932743722564, 323.7155144784174)
(179, 0.007207660553594553, 103.18932743722564, 323.7129110330922)
(180, 0.007207660553594553, 103.18932743722564, 323.7119166054659)
(181, 0.007207660553594553, 103.18932743722564, 323.7122964430198)
(182, 0.007207660553594553, 103.18932743722564, 323.7116818529473)
(183, 0.007207660553594553, 103.18932743722564, 323.71206169050123)
(184, 0.007207660553594553, 103.18932743722564, 323.7118269379827)
(185, 0.007207660553594553, 103.18932743722564, 323.7119720230181)
(186, 0.007207660553594553, 103.18932743722564, 323.711882355535)
Duration of fit 3.3135499099998924
(323, 0.007207660553594553, 103.18932743722564, 323.7119166054659)
-------------
1
-------------
Fitting LBTI Null
2024-08-23 11h31
Spectral bandwidth (in nm):11000,11200
Time binning of photometry: -1
Bin bounds and number of bins (-0.01, 0.1) 5
Boundaries (na, mu, sig):
(0.0, 0.01)(0, 1000)(100, 400)
Number of elements 2000000
Number of loaded points 984 1
Number of loaded dark points 990 1
Injection - shape array (2, 1)
Normed PDF False
Random withdrawing of init guesses
Index 0 : No new guess for index take initial one
Index 2 : No new guess for index take initial one
Random drawing done
(1, 0.0063, 20.0, 328.75598714396915)
(2, 0.003819660112501051, 20.0, 328.75598714396915)
(3, 0.006180339887498948, 20.0, 328.75598714396915)
(4, 0.007639320225002103, 20.0, 328.75598714396915)
(5, 0.007453269223555636, 20.0, 328.75598714396915)
(6, 0.007405691236530114, 20.0, 328.75598714396915)
(7, 0.0074866025739947165, 20.0, 328.75598714396915)
(8, 0.007453269223555636, 381.9660112501051, 328.75598714396915)
(9, 0.007453269223555636, 618.0339887498948, 328.75598714396915)
(10, 0.007453269223555636, 236.0679774997897, 328.75598714396915)
(11, 0.007453269223555636, 165.65659378596143, 328.75598714396915)
(12, 0.007453269223555636, 138.77422220957524, 328.75598714396915)
(13, 0.007453269223555636, 107.35701563913028, 328.75598714396915)
(14, 0.007453269223555636, 103.33021456208995, 328.75598714396915)
(15, 0.007453269223555636, 103.52157635296533, 328.75598714396915)
(16, 0.007453269223555636, 105.33934038936978, 328.75598714396915)
(17, 0.007453269223555636, 104.21590043134462, 328.75598714396915)
(18, 0.007453269223555636, 103.69793421107833, 328.75598714396915)
(19, 0.007453269223555636, 103.52154178080681, 328.75598714396915)
(20, 0.007453269223555636, 103.58893906058137, 328.75598714396915)
(21, 0.007453269223555636, 103.54730661770043, 328.75598714396915)
(22, 0.007453269223555636, 103.5314044395546, 328.75598714396915)
(23, 0.007453269223555636, 103.52533034799805, 328.75598714396915)
(24, 0.007453269223555636, 103.52301025147423, 328.75598714396915)
(25, 0.007453269223555636, 103.52212405345931, 328.75598714396915)
(26, 0.007453269223555636, 103.52178555593837, 328.75598714396915)
(27, 0.007453269223555636, 103.52165626139048, 328.75598714396915)
(28, 0.007453269223555636, 103.52161092512384, 328.75598714396915)
(29, 0.007453269223555636, 103.52157635296533, 214.58980337503152)
(30, 0.007453269223555636, 103.52157635296533, 285.4101966249684)
(31, 0.007453269223555636, 103.52157635296533, 329.17960675006304)
(32, 0.007453269223555636, 103.52157635296533, 316.9102006018719)
(33, 0.007453269223555636, 103.52157635296533, 324.04392489801506)
(34, 0.007453269223555636, 103.52157635296533, 324.7286435764512)
(35, 0.007453269223555636, 103.52157635296533, 324.04389149479056)
(36, 0.007453269223555636, 103.52157635296533, 324.30546416044575)
(37, 0.007453269223555636, 103.52157635296533, 324.4671043140205)
(38, 0.007453269223555636, 103.52157635296533, 324.25831264278645)
(39, 0.007453269223555636, 103.52157635296533, 324.3594139849124)
(40, 0.007453269223555636, 103.52157635296533, 324.3098468256834)
(41, 0.007453269223555636, 103.52157635296533, 324.3054307611005)
(42, 0.007453269223555636, 103.52157635296533, 324.3071381896052)
(43, 0.007453269223555636, 103.52157635296533, 324.30610358268655)
(44, 0.007453269223555636, 103.52157635296533, 324.3057380663061)
(45, 0.007453269223555636, 103.52157635296533, 324.30549755979104)
(46, 0.008606538447111272, 187.04315270593065, 319.85494117692235)
(47, 0.003819660112501051, 103.52157635296533, 324.30546416044575)
(48, 0.006180339887498948, 103.52157635296533, 324.30546416044575)
(49, 0.007639320225002103, 103.52157635296533, 324.30546416044575)
(50, 0.007200480717573261, 103.52157635296533, 324.30546416044575)
(51, 0.007165521821311428, 103.52157635296533, 324.30546416044575)
(52, 0.0073427261202727405, 103.52157635296533, 324.30546416044575)
(53, 0.007254813626661046, 103.52157635296533, 324.30546416044575)
(54, 0.007288146962937951, 103.52157635296533, 324.30546416044575)
(55, 0.007254813626661046, 381.9660112501051, 324.30546416044575)
(56, 0.007254813626661046, 618.0339887498948, 324.30546416044575)
(57, 0.007254813626661046, 236.0679774997897, 324.30546416044575)
(58, 0.007254813626661046, 174.6679121485783, 324.30546416044575)
(59, 0.007254813626661046, 147.90285755424335, 324.30546416044575)
(60, 0.007254813626661046, 125.71342132117428, 324.30546416044575)
(61, 0.007254813626661046, 116.26992041501859, 324.30546416044575)
(62, 0.007254813626661046, 71.85876268572677, 324.30546416044575)
(63, 0.007254813626661046, 98.7155406807762, 324.30546416044575)
(64, 0.007254813626661046, 97.8313772896027, 324.30546416044575)
(65, 0.007254813626661046, 106.81037186133241, 324.30546416044575)
(66, 0.007254813626661046, 101.80749105855624, 324.30546416044575)
(67, 0.007254813626661046, 99.78988825948377, 324.30546416044575)
(68, 0.007254813626661046, 99.12590494011134, 324.30546416044575)
(69, 0.007254813626661046, 98.5327404471442, 324.30546416044575)
(70, 0.007254813626661046, 98.71557408539456, 324.30546416044575)
(71, 0.007254813626661046, 98.6457172046802, 324.30546416044575)
(72, 0.007254813626661046, 98.6888704861202, 324.30546416044575)
(73, 0.007254813626661046, 98.70535357290419, 324.30546416044575)
(74, 0.007254813626661046, 98.71164955181615, 324.30546416044575)
(75, 0.007254813626661046, 98.71405440176807, 324.30546416044575)
(76, 0.007254813626661046, 98.71313583082429, 324.30546416044575)
(77, 0.007254813626661046, 98.71462210983242, 324.30546416044575)
(78, 0.007254813626661046, 98.71370353888862, 324.30546416044575)
(79, 0.007254813626661046, 98.71348669370373, 324.30546416044575)
(80, 0.007254813626661046, 98.71383755658319, 324.30546416044575)
(81, 0.007254813626661046, 98.71362071139829, 324.30546416044575)
(82, 0.007254813626661046, 98.71375472909286, 324.30546416044575)
(83, 0.007254813626661046, 98.71367013424302, 324.30546416044575)
(84, 0.007254813626661046, 98.71370353888862, 214.58980337503152)
(85, 0.007254813626661046, 98.71370353888862, 285.4101966249684)
(86, 0.007254813626661046, 98.71370353888862, 329.17960675006304)
(87, 0.007254813626661046, 98.71370353888862, 319.0468636028804)
(88, 0.007254813626661046, 98.71370353888862, 325.38583526475736)
(89, 0.007254813626661046, 98.71370353888862, 356.2305898749054)
(90, 0.007254813626661046, 98.71370353888862, 339.512162874653)
(91, 0.007254813626661046, 98.71370353888862, 333.1262919989905)
(92, 0.007254813626661046, 98.71370353888862, 329.2404008498287)
(93, 0.007254813626661046, 98.71370353888862, 330.7246791922261)
(94, 0.007254813626661046, 98.71370353888862, 329.94998936792666)
(95, 0.007254813626661046, 98.71370353888862, 329.5713630517754)
(96, 0.007254813626661046, 98.71370353888862, 329.36681716198086)
(97, 0.007254813626661046, 98.71370353888862, 329.2886875843384)
(98, 0.007254813626661046, 98.71370353888862, 329.2171795700337)
(99, 0.007254813626661046, 98.71370353888862, 329.25561110854983)
(100, 0.007254813626661046, 98.71370353888862, 329.2315311102093)
(101, 0.007254813626661046, 98.71370353888862, 329.2462106516825)
(102, 0.007254813626661046, 98.71370353888862, 329.23701291076543)
(103, 0.007254813626661046, 98.71370353888862, 329.24261999666896)
(104, 0.007254813626661046, 98.71370353888862, 329.23910677225837)
(105, 0.007254813626661046, 98.71370353888862, 329.24124848849567)
(106, 0.007254813626661046, 98.71370353888862, 329.2399065561809)
(107, 0.007254813626661046, 98.71370353888862, 329.2407246189893)
(108, 0.007254813626661046, 98.71370353888862, 329.24021204645567)
(109, 0.007254813626661046, 98.71370353888862, 329.24052451864355)
(110, 0.007254813626661046, 98.71370353888862, 329.2403287333574)
(111, 0.007254813626661046, 98.71370353888862, 329.2404480871126)
(112, 0.007254813626661046, 98.71370353888862, 329.24036744329845)
(113, 0.007056358029766455, 93.90583072481192, 334.1753375392116)
(114, 0.003819660112501051, 98.71370353888862, 329.2404008498287)
(115, 0.006180339887498948, 98.71370353888862, 329.2404008498287)
(116, 0.007639320225002103, 98.71370353888862, 329.2404008498287)
(117, 0.007143078221422572, 98.71370353888862, 329.2404008498287)
(118, 0.007176411556413209, 98.71370353888862, 329.2404008498287)
(119, 0.007324250805695689, 98.71370353888862, 329.2404008498287)
(120, 0.007232881124767848, 98.71370353888862, 329.2404008498287)
(121, 0.007266214458426493, 98.71370353888862, 329.2404008498287)
(122, 0.007232881124767848, 381.9660112501051, 329.2404008498287)
(123, 0.007232881124767848, 618.0339887498948, 329.2404008498287)
(124, 0.007232881124767848, 236.06797749978966, 329.2404008498287)
(125, 0.007232881124767848, 168.1394274343256, 329.2404008498287)
(126, 0.007232881124767848, 144.56644451133735, 329.2404008498287)
(127, 0.007232881124767848, 89.34697634073217, 329.2404008498287)
(128, 0.007232881124767848, 102.87034595275331, 329.2404008498287)
(129, 0.007232881124767848, 95.37804697909868, 329.2404008498287)
(130, 0.007232881124767848, 55.2194681706052, 329.2404008498287)
(131, 0.007232881124767848, 76.3114281710834, 329.2404008498287)
(132, 0.007232881124767848, 89.19471215587788, 329.2404008498287)
(133, 0.007232881124767848, 89.34700981299653, 329.2404008498287)
(134, 0.007232881124767848, 91.6506610230337, 329.2404008498287)
(135, 0.007232881124767848, 90.49886348214613, 329.2404008498287)
(136, 0.007232881124767848, 93.07439576906151, 329.2404008498287)
(137, 0.007232881124767848, 91.04985285102133, 329.2404008498287)
(138, 0.007232881124767848, 92.19447930505213, 329.2404008498287)
(139, 0.007232881124767848, 91.42117272204366, 329.2404008498287)
(140, 0.007232881124767848, 91.85838112306116, 329.2404008498287)
(141, 0.007232881124767848, 91.56300429207597, 329.2404008498287)
(142, 0.007232881124767848, 91.73000304109767, 329.2404008498287)
(143, 0.007232881124767848, 91.61717913115056, 329.2404008498287)
(144, 0.007232881124767848, 91.68096697719812, 329.2404008498287)
(145, 0.007232881124767848, 91.637872078342, 329.2404008498287)
(146, 0.007232881124767848, 91.66223686746302, 329.2404008498287)
(147, 0.007232881124767848, 91.64577608084171, 329.2404008498287)
(148, 0.007232881124767848, 91.65508260215722, 329.2404008498287)
(149, 0.007232881124767848, 91.64879514114945, 329.2404008498287)
(150, 0.007232881124767848, 91.65234991597494, 329.2404008498287)
(151, 0.007232881124767848, 91.64994831957291, 329.2404008498287)
(152, 0.007232881124767848, 91.6513061227339, 329.2404008498287)
(153, 0.007232881124767848, 91.65038879453559, 329.2404008498287)
(154, 0.007232881124767848, 91.65022054807103, 329.2404008498287)
(155, 0.007232881124767848, 91.65049277656917, 329.2404008498287)
(156, 0.007232881124767848, 91.65032453010461, 329.2404008498287)
(157, 0.007232881124767848, 91.65042851213819, 329.2404008498287)
(158, 0.007232881124767848, 91.65038879453559, 214.58980337503152)
(159, 0.007232881124767848, 91.65038879453559, 285.4101966249684)
(160, 0.007232881124767848, 91.65038879453559, 329.17960675006304)
(161, 0.007232881124767848, 91.65038879453559, 319.6026288406325)
(162, 0.007232881124767848, 91.65038879453559, 327.1826821939228)
(163, 0.007232881124767848, 91.65038879453559, 326.35581124362096)
(164, 0.007232881124767848, 91.65038879453559, 323.7763250979071)
(165, 0.007232881124767848, 91.65038879453559, 325.37053520946773)
(166, 0.007232881124767848, 91.65038879453559, 326.0597384888493)
(167, 0.007232881124767848, 91.65038879453559, 326.67164784232637)
(168, 0.007232881124767848, 91.65038879453559, 326.4764500894353)
(169, 0.007232881124767848, 91.65038879453559, 326.24272151444103)
(170, 0.007232881124767848, 91.65038879453559, 326.4018911823585)
(171, 0.007232881124767848, 91.65038879453559, 326.3126148108527)
(172, 0.007232881124767848, 91.65038879453559, 326.3734122140192)
(173, 0.007232881124767848, 91.65038879453559, 326.3393116744963)
(174, 0.007232881124767848, 91.65038879453559, 326.36253421607813)
(175, 0.007232881124767848, 91.65038879453559, 326.34950896901506)
(176, 0.007232881124767848, 91.65038879453559, 326.3583791905942)
(177, 0.007232881124767848, 91.65038879453559, 326.35340398892794)
(178, 0.007232881124767848, 91.65038879453559, 326.35679211208344)
(179, 0.007232881124767848, 91.65038879453559, 326.35489175414784)
(180, 0.007232881124767848, 91.65038879453559, 326.3561859020351)
(181, 0.007232881124767848, 91.65038879453559, 326.3554600298945)
(182, 0.007232881124767848, 91.65038879453559, 326.355954350401)
(183, 0.007232881124767848, 91.65038879453559, 326.3556770919148)
(184, 0.007232881124767848, 91.65038879453559, 326.3558659055469)
(185, 0.007232881124767848, 91.65038879453559, 326.35576000222886)
(186, 0.00721094862287465, 84.58707405018255, 323.47122163741324)
(187, 0.003819660112501051, 91.65038879453559, 326.35581124362096)
(188, 0.006180339887498948, 91.65038879453559, 326.35581124362096)
(189, 0.007639320225002103, 91.65038879453559, 326.35581124362096)
(190, 0.00722644906490278, 91.65038879453559, 326.35581124362096)
(191, 0.0072597823983315156, 91.65038879453559, 326.35581124362096)
(192, 0.006826870915077709, 91.65038879453559, 326.35581124362096)
(193, 0.0071257396810411764, 91.65038879453559, 326.35581124362096)
(194, 0.007092406346118679, 91.65038879453559, 326.35581124362096)
(195, 0.006990980836678382, 91.65038879453559, 326.35581124362096)
(196, 0.007053665248838758, 91.65038879453559, 326.35581124362096)
(197, 0.007092406346118679, 381.9660112501051, 326.35581124362096)
(198, 0.007092406346118679, 618.0339887498948, 326.35581124362096)
(199, 0.007092406346118679, 236.06797749978966, 326.35581124362096)
(200, 0.007092406346118679, 176.31079899621577, 326.35581124362096)
(201, 0.007092406346118679, 148.21721257077974, 326.35581124362096)
(202, 0.007092406346118679, 124.71206893188541, 326.35581124362096)
(203, 0.007092406346118679, 115.96673209426582, 326.35581124362096)
(204, 0.007092406346118679, 120.45917482852775, 326.35581124362096)
(205, 0.007092406346118679, 120.34914551166175, 326.35581124362096)
(206, 0.007092406346118679, 122.52634752575352, 326.35581124362096)
(207, 0.007092406346118679, 121.24876453825219, 326.35581124362096)
(208, 0.007092406346118679, 120.7973927213979, 326.35581124362096)
(209, 0.007092406346118679, 120.59051547897869, 326.35581124362096)
(210, 0.007092406346118679, 120.50934249289548, 326.35581124362096)
(211, 0.007092406346118679, 120.41714736924386, 326.35581124362096)
(212, 0.007092406346118679, 120.47833717118002, 326.35581124362096)
(213, 0.007092406346118679, 120.44312176754211, 326.35581124362096)
(214, 0.007092406346118679, 120.46649419211684, 326.35581124362096)
(215, 0.007092406346118679, 120.4530431048547, 326.35581124362096)
(216, 0.007092406346118679, 120.46197057664276, 326.35581124362096)
(217, 0.007092406346118679, 120.45683271849427, 326.35581124362096)
(218, 0.007092406346118679, 120.4602427092837, 326.35581124362096)
(219, 0.007092406346118679, 120.45828022210034, 326.35581124362096)
(220, 0.007092406346118679, 120.45958272268058, 326.35581124362096)
(221, 0.007092406346118679, 120.45883311927903, 326.35581124362096)
(222, 0.007092406346118679, 120.45933063023031, 326.35581124362096)
(223, 0.007092406346118679, 120.45904430720901, 326.35581124362096)
(224, 0.007092406346118679, 120.45923433948262, 326.35581124362096)
(225, 0.007092406346118679, 120.45912497382024, 326.35581124362096)
(226, 0.007092406346118679, 120.45917482852775, 214.58980337503152)
(227, 0.007092406346118679, 120.45917482852775, 285.4101966249684)
(228, 0.007092406346118679, 120.45917482852775, 329.17960675006304)
(229, 0.007092406346118679, 120.45917482852775, 316.5064312139593)
(230, 0.007092406346118679, 120.45917482852775, 322.75641171800055)
(231, 0.007092406346118679, 120.45917482852775, 322.7754781878647)
(232, 0.007092406346118679, 120.45917482852775, 319.6608388133837)
(233, 0.007092406346118679, 120.45917482852775, 321.5740080830902)
(234, 0.007092406346118679, 120.45917482852775, 322.3047737178862)
(235, 0.007092406346118679, 120.45917482852775, 322.5839013525679)
(236, 0.007092406346118679, 120.45917482852775, 322.6905186218169)
(237, 0.007092406346118679, 120.45917482852775, 322.7312427948824)
(238, 0.007092406346118679, 120.45917482852775, 322.75149093541273)
(239, 0.007092406346118679, 120.45917482852775, 322.76369446144315)
(240, 0.007092406346118679, 120.45917482852775, 322.7591934784643)
(241, 0.007092406346118679, 120.45917482852775, 322.7609127009794)
(242, 0.007092406346118679, 120.45917482852775, 322.7581309405157)
(243, 0.007092406346118679, 120.45917482852775, 322.7598501630309)
(244, 0.007092406346118679, 120.45917482852775, 322.7587876250823)
(245, 0.007092406346118679, 120.45917482852775, 322.7594443096488)
(246, 0.007092406346118679, 120.45917482852775, 322.7590384562668)
(247, 0.007092406346118679, 120.45917482852775, 322.75928928745134)
(248, 0.007092406346118679, 120.45917482852775, 322.75913426525386)
(249, 0.007092406346118679, 120.45917482852775, 322.7592300742409)
(250, 0.0069519315674695095, 149.2679608625199, 319.1626489048609)
(251, 0.003819660112501051, 120.45917482852775, 322.7592300742409)
(252, 0.006180339887498948, 120.45917482852775, 322.7592300742409)
(253, 0.007639320225002103, 120.45917482852775, 322.7592300742409)
(254, 0.00714635776468319, 120.45917482852775, 322.7592300742409)
(255, 0.007113024430549628, 120.45917482852775, 322.7592300742409)
(256, 0.006756770635885933, 120.45917482852775, 322.7592300742409)
(257, 0.006984037156828695, 120.45917482852775, 322.7592300742409)
(258, 0.007061342462289819, 120.45917482852775, 322.7592300742409)
(259, 0.007113024430549628, 381.9660112501051, 322.7592300742409)
(260, 0.007113024430549628, 618.0339887498949, 322.7592300742409)
(261, 0.007113024430549628, 236.0679774997897, 322.7592300742409)
(262, 0.007113024430549628, 177.15577049006538, 322.7592300742409)
(263, 0.007113024430549628, 152.5494622162275, 322.7592300742409)
(264, 0.007113024430549628, 94.28075261514647, 322.7592300742409)
(265, 0.007113024430549628, 116.62074597954286, 322.7592300742409)
(266, 0.007113024430549628, 108.8927224861577, 322.7592300742409)
(267, 0.007113024430549628, 106.21757520166216, 322.7592300742409)
(268, 0.007113024430549628, 108.89275599104924, 322.7592300742409)
(269, 0.007113024430549628, 107.87090714839239, 322.7592300742409)
(270, 0.007113024430549628, 108.5024237573573, 322.7592300742409)
(271, 0.007113024430549628, 108.74364163752182, 322.7592300742409)
(272, 0.007113024430549628, 108.83577866905047, 322.7592300742409)
(273, 0.007113024430549628, 108.8709718834719, 322.7592300742409)
(274, 0.007113024430549628, 108.88441449520752, 322.7592300742409)
(275, 0.007113024430549628, 108.88954911599295, 322.7592300742409)
(276, 0.007113024430549628, 108.89151036661366, 322.7592300742409)
(277, 0.007113024430549628, 108.8922594976903, 322.7592300742409)
(278, 0.007113024430549628, 108.89254564029955, 322.7592300742409)
(279, 0.007113024430549628, 108.89265493705065, 322.7592300742409)
(280, 0.007113024430549628, 108.89268898126615, 322.7592300742409)
(281, 0.007113024430549628, 108.8927224861577, 214.58980337503152)
(282, 0.007113024430549628, 108.8927224861577, 285.4101966249684)
(283, 0.007113024430549628, 108.8927224861577, 329.17960675006304)
(284, 0.007113024430549628, 108.8927224861577, 317.64187038441264)
(285, 0.007113024430549628, 108.8927224861577, 324.20005478331615)
(286, 0.007113024430549628, 108.8927224861577, 324.2801439133989)
(287, 0.007113024430549628, 108.8927224861577, 321.69505124742426)
(288, 0.007113024430549628, 108.8927224861577, 322.9220883038279)
(289, 0.007113024430549628, 108.8927224861577, 320.1468739203045)
(290, 0.007113024430549628, 108.8927224861577, 321.10370012907646)
(291, 0.007113024430549628, 108.8927224861577, 322.1637376975148)
(292, 0.007113024430549628, 108.8927224861577, 321.4691752195007)
(293, 0.007113024430549628, 108.8927224861577, 321.8740735412923)
(294, 0.007113024430549628, 108.8927224861577, 321.60877428200126)
(295, 0.007113024430549628, 108.8927224861577, 321.76343167893793)
(296, 0.007113024430549628, 108.8927224861577, 321.6620963790789)
(297, 0.007113024430549628, 108.8927224861577, 321.7211702480971)
(298, 0.007113024430549628, 108.8927224861577, 321.68246360781114)
(299, 0.007113024430549628, 108.8927224861577, 321.70502781792914)
(300, 0.007113024430549628, 108.8927224861577, 321.69024319693017)
(301, 0.007113024430549628, 108.8927224861577, 321.68727165830524)
(302, 0.007113024430549628, 108.8927224861577, 321.6920797087993)
(303, 0.007113024430549628, 108.8927224861577, 321.68910817017434)
(304, 0.007113024430549628, 108.8927224861577, 321.69094468204344)
(305, 0.007113024430549628, 108.8927224861577, 321.6898096552876)
(306, 0.007113024430549628, 108.8927224861577, 321.6905111404009)
(307, 0.007113024430549628, 108.8927224861577, 321.69007759875825)
(308, 0.007113024430549628, 108.8927224861577, 321.6903455422289)
(309, 0.007113024430549628, 108.8927224861577, 321.69040879510214)
(310, 0.007113024430549628, 108.8927224861577, 321.69031219304145)
Duration of fit 4.743383852000079
(447, 0.007113024430549628, 108.8927224861577, 321.6903455422289)

We need to save the fitted parameters at each iteration and select the set that gives the smallest value of the cost function.
This is beyond the scope of this tutorial.
The result is the self-calibrated null depth.
However, if the model does not account for some effects, the use of a calibrator is necessary.
Its data go through exactly the same process (pre-processing + GRIP), and the classic bias correction applies.
On this dataset, the null depth given by GRIP still needs a classical calibration, as per Defrère+ (2016), because GRIP implements the exact same model as described in that article.
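The iteration-selection idea mentioned above can be sketched as follows (illustrative only; the `history` list is a hypothetical container one would fill inside the basin-hopping loop, holding one `(cost, parameters)` pair per iteration):

```python
import numpy as np

# Hypothetical record of (cost, fitted_parameters) per basin-hopping iteration
history = [
    (0.0123, [0.00723, 91.65, 326.36]),
    (0.0098, [0.00711, 108.89, 321.69]),
    (0.0150, [0.00709, 120.46, 322.76]),
]

# Pick the iteration with the smallest cost-function value
costs = np.array([cost for cost, _ in history])
best_cost, best_params = history[int(np.argmin(costs))]
print("Best cost:", best_cost)
print("Best-fit parameters:", best_params)
```

The selected `best_params` then plays the role of the self-calibrated solution discussed above.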
# Congratulations, we now know the core of GRIP: self-calibration.
## Going beyond this tutorial
We can also use a $\chi^2$ cost function to find the self-calibrated null depth.
The function to use is `lstsqrs_fit`; it has the same interface as `minimize_fit`.
Everything is the same, except that we:
- use the error bars on the PDF
- change the differential step (recommended values are `[0.005, 10, 10]`)
## Bonus:
In order to keep track of everything done, GRIP has a Logger (`grip.Logger`) to save the content of the terminal. However, it does not work in a Jupyter notebook so it is not used here.
Below is a snippet:
```python
"""
Any piece of code
"""
# Start logging everything displayed in the terminal
sys.stdout = gff.Logger('%s_%03d_basin_hop.log' % (key, basin_hopping_count))
"""
Any piece of code
"""
# Stop logging
sys.stdout.close()
```
|
mamartinodREPO_NAMEgripPATH_START.@grip_extracted@grip-main@tutorials@tuto3_fit_with_likelihood.ipynb@.PATH_END.py
|
{
"filename": "_font.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/graph_objs/choropleth/hoverlabel/_font.py",
"type": "Python"
}
|
from plotly.basedatatypes import BaseTraceHierarchyType as _BaseTraceHierarchyType
import copy as _copy
class Font(_BaseTraceHierarchyType):
# class properties
# --------------------
_parent_path_str = "choropleth.hoverlabel"
_path_str = "choropleth.hoverlabel.font"
_valid_props = {
"color",
"colorsrc",
"family",
"familysrc",
"lineposition",
"linepositionsrc",
"shadow",
"shadowsrc",
"size",
"sizesrc",
"style",
"stylesrc",
"textcase",
"textcasesrc",
"variant",
"variantsrc",
"weight",
"weightsrc",
}
# color
# -----
@property
def color(self):
"""
The 'color' property is a color and may be specified as:
- A hex string (e.g. '#ff0000')
- An rgb/rgba string (e.g. 'rgb(255,0,0)')
- An hsl/hsla string (e.g. 'hsl(0,100%,50%)')
- An hsv/hsva string (e.g. 'hsv(0,100%,100%)')
- A named CSS color:
aliceblue, antiquewhite, aqua, aquamarine, azure,
beige, bisque, black, blanchedalmond, blue,
blueviolet, brown, burlywood, cadetblue,
chartreuse, chocolate, coral, cornflowerblue,
cornsilk, crimson, cyan, darkblue, darkcyan,
darkgoldenrod, darkgray, darkgrey, darkgreen,
darkkhaki, darkmagenta, darkolivegreen, darkorange,
darkorchid, darkred, darksalmon, darkseagreen,
darkslateblue, darkslategray, darkslategrey,
darkturquoise, darkviolet, deeppink, deepskyblue,
dimgray, dimgrey, dodgerblue, firebrick,
floralwhite, forestgreen, fuchsia, gainsboro,
ghostwhite, gold, goldenrod, gray, grey, green,
greenyellow, honeydew, hotpink, indianred, indigo,
ivory, khaki, lavender, lavenderblush, lawngreen,
lemonchiffon, lightblue, lightcoral, lightcyan,
lightgoldenrodyellow, lightgray, lightgrey,
lightgreen, lightpink, lightsalmon, lightseagreen,
lightskyblue, lightslategray, lightslategrey,
lightsteelblue, lightyellow, lime, limegreen,
linen, magenta, maroon, mediumaquamarine,
mediumblue, mediumorchid, mediumpurple,
mediumseagreen, mediumslateblue, mediumspringgreen,
mediumturquoise, mediumvioletred, midnightblue,
mintcream, mistyrose, moccasin, navajowhite, navy,
oldlace, olive, olivedrab, orange, orangered,
orchid, palegoldenrod, palegreen, paleturquoise,
palevioletred, papayawhip, peachpuff, peru, pink,
plum, powderblue, purple, red, rosybrown,
royalblue, rebeccapurple, saddlebrown, salmon,
sandybrown, seagreen, seashell, sienna, silver,
skyblue, slateblue, slategray, slategrey, snow,
springgreen, steelblue, tan, teal, thistle, tomato,
turquoise, violet, wheat, white, whitesmoke,
yellow, yellowgreen
- A list or array of any of the above
Returns
-------
str|numpy.ndarray
"""
return self["color"]
@color.setter
def color(self, val):
self["color"] = val
# colorsrc
# --------
@property
def colorsrc(self):
"""
Sets the source reference on Chart Studio Cloud for `color`.
The 'colorsrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["colorsrc"]
@colorsrc.setter
def colorsrc(self, val):
self["colorsrc"] = val
# family
# ------
@property
def family(self):
"""
HTML font family - the typeface that will be applied by the web
browser. The web browser will only be able to apply a font if
it is available on the system which it operates. Provide
multiple font families, separated by commas, to indicate the
preference in which to apply fonts if they aren't available on
the system. The Chart Studio Cloud (at https://chart-
studio.plotly.com or on-premise) generates images on a server,
where only a select number of fonts are installed and
supported. These include "Arial", "Balto", "Courier New",
"Droid Sans", "Droid Serif", "Droid Sans Mono", "Gravitas One",
"Old Standard TT", "Open Sans", "Overpass", "PT Sans Narrow",
"Raleway", "Times New Roman".
The 'family' property is a string and must be specified as:
- A non-empty string
- A tuple, list, or one-dimensional numpy array of the above
Returns
-------
str|numpy.ndarray
"""
return self["family"]
@family.setter
def family(self, val):
self["family"] = val
# familysrc
# ---------
@property
def familysrc(self):
"""
Sets the source reference on Chart Studio Cloud for `family`.
The 'familysrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["familysrc"]
@familysrc.setter
def familysrc(self, val):
self["familysrc"] = val
# lineposition
# ------------
@property
def lineposition(self):
"""
Sets the kind of decoration line(s) with text, such as an
"under", "over" or "through" as well as combinations e.g.
"under+over", etc.
The 'lineposition' property is a flaglist and may be specified
as a string containing:
- Any combination of ['under', 'over', 'through'] joined with '+' characters
(e.g. 'under+over')
OR exactly one of ['none'] (e.g. 'none')
- A list or array of the above
Returns
-------
Any|numpy.ndarray
"""
return self["lineposition"]
@lineposition.setter
def lineposition(self, val):
self["lineposition"] = val
# linepositionsrc
# ---------------
@property
def linepositionsrc(self):
"""
Sets the source reference on Chart Studio Cloud for
`lineposition`.
The 'linepositionsrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["linepositionsrc"]
@linepositionsrc.setter
def linepositionsrc(self, val):
self["linepositionsrc"] = val
# shadow
# ------
@property
def shadow(self):
"""
Sets the shape and color of the shadow behind text. "auto"
places minimal shadow and applies contrast text font color. See
https://developer.mozilla.org/en-US/docs/Web/CSS/text-shadow
for additional options.
The 'shadow' property is a string and must be specified as:
- A string
- A number that will be converted to a string
- A tuple, list, or one-dimensional numpy array of the above
Returns
-------
str|numpy.ndarray
"""
return self["shadow"]
@shadow.setter
def shadow(self, val):
self["shadow"] = val
# shadowsrc
# ---------
@property
def shadowsrc(self):
"""
Sets the source reference on Chart Studio Cloud for `shadow`.
The 'shadowsrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["shadowsrc"]
@shadowsrc.setter
def shadowsrc(self, val):
self["shadowsrc"] = val
# size
# ----
@property
def size(self):
"""
The 'size' property is a number and may be specified as:
- An int or float in the interval [1, inf]
- A tuple, list, or one-dimensional numpy array of the above
Returns
-------
int|float|numpy.ndarray
"""
return self["size"]
@size.setter
def size(self, val):
self["size"] = val
# sizesrc
# -------
@property
def sizesrc(self):
"""
Sets the source reference on Chart Studio Cloud for `size`.
The 'sizesrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["sizesrc"]
@sizesrc.setter
def sizesrc(self, val):
self["sizesrc"] = val
# style
# -----
@property
def style(self):
"""
Sets whether a font should be styled with a normal or italic
face from its family.
The 'style' property is an enumeration that may be specified as:
- One of the following enumeration values:
['normal', 'italic']
- A tuple, list, or one-dimensional numpy array of the above
Returns
-------
Any|numpy.ndarray
"""
return self["style"]
@style.setter
def style(self, val):
self["style"] = val
# stylesrc
# --------
@property
def stylesrc(self):
"""
Sets the source reference on Chart Studio Cloud for `style`.
The 'stylesrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["stylesrc"]
@stylesrc.setter
def stylesrc(self, val):
self["stylesrc"] = val
# textcase
# --------
@property
def textcase(self):
"""
Sets capitalization of text. It can be used to make text appear
in all-uppercase or all-lowercase, or with each word
capitalized.
The 'textcase' property is an enumeration that may be specified as:
- One of the following enumeration values:
['normal', 'word caps', 'upper', 'lower']
- A tuple, list, or one-dimensional numpy array of the above
Returns
-------
Any|numpy.ndarray
"""
return self["textcase"]
@textcase.setter
def textcase(self, val):
self["textcase"] = val
# textcasesrc
# -----------
@property
def textcasesrc(self):
"""
Sets the source reference on Chart Studio Cloud for `textcase`.
The 'textcasesrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["textcasesrc"]
@textcasesrc.setter
def textcasesrc(self, val):
self["textcasesrc"] = val
# variant
# -------
@property
def variant(self):
"""
Sets the variant of the font.
The 'variant' property is an enumeration that may be specified as:
- One of the following enumeration values:
['normal', 'small-caps', 'all-small-caps',
'all-petite-caps', 'petite-caps', 'unicase']
- A tuple, list, or one-dimensional numpy array of the above
Returns
-------
Any|numpy.ndarray
"""
return self["variant"]
@variant.setter
def variant(self, val):
self["variant"] = val
# variantsrc
# ----------
@property
def variantsrc(self):
"""
Sets the source reference on Chart Studio Cloud for `variant`.
The 'variantsrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["variantsrc"]
@variantsrc.setter
def variantsrc(self, val):
self["variantsrc"] = val
# weight
# ------
@property
def weight(self):
"""
Sets the weight (or boldness) of the font.
The 'weight' property is a integer and may be specified as:
- An int (or float that will be cast to an int)
in the interval [1, 1000]
OR exactly one of ['normal', 'bold'] (e.g. 'bold')
- A tuple, list, or one-dimensional numpy array of the above
Returns
-------
int|numpy.ndarray
"""
return self["weight"]
@weight.setter
def weight(self, val):
self["weight"] = val
# weightsrc
# ---------
@property
def weightsrc(self):
"""
Sets the source reference on Chart Studio Cloud for `weight`.
The 'weightsrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["weightsrc"]
@weightsrc.setter
def weightsrc(self, val):
self["weightsrc"] = val
# Self properties description
# ---------------------------
@property
def _prop_descriptions(self):
return """\
color
colorsrc
Sets the source reference on Chart Studio Cloud for
`color`.
family
HTML font family - the typeface that will be applied by
the web browser. The web browser will only be able to
apply a font if it is available on the system which it
operates. Provide multiple font families, separated by
commas, to indicate the preference in which to apply
fonts if they aren't available on the system. The Chart
Studio Cloud (at https://chart-studio.plotly.com or on-
premise) generates images on a server, where only a
select number of fonts are installed and supported.
These include "Arial", "Balto", "Courier New", "Droid
Sans", "Droid Serif", "Droid Sans Mono", "Gravitas
One", "Old Standard TT", "Open Sans", "Overpass", "PT
Sans Narrow", "Raleway", "Times New Roman".
familysrc
Sets the source reference on Chart Studio Cloud for
`family`.
lineposition
Sets the kind of decoration line(s) with text, such as
an "under", "over" or "through" as well as combinations
e.g. "under+over", etc.
linepositionsrc
Sets the source reference on Chart Studio Cloud for
`lineposition`.
shadow
Sets the shape and color of the shadow behind text.
"auto" places minimal shadow and applies contrast text
font color. See https://developer.mozilla.org/en-
US/docs/Web/CSS/text-shadow for additional options.
shadowsrc
Sets the source reference on Chart Studio Cloud for
`shadow`.
size
sizesrc
Sets the source reference on Chart Studio Cloud for
`size`.
style
Sets whether a font should be styled with a normal or
italic face from its family.
stylesrc
Sets the source reference on Chart Studio Cloud for
`style`.
textcase
Sets capitalization of text. It can be used to make
text appear in all-uppercase or all-lowercase, or with
each word capitalized.
textcasesrc
Sets the source reference on Chart Studio Cloud for
`textcase`.
variant
Sets the variant of the font.
variantsrc
Sets the source reference on Chart Studio Cloud for
`variant`.
weight
Sets the weight (or boldness) of the font.
weightsrc
Sets the source reference on Chart Studio Cloud for
`weight`.
"""
def __init__(
self,
arg=None,
color=None,
colorsrc=None,
family=None,
familysrc=None,
lineposition=None,
linepositionsrc=None,
shadow=None,
shadowsrc=None,
size=None,
sizesrc=None,
style=None,
stylesrc=None,
textcase=None,
textcasesrc=None,
variant=None,
variantsrc=None,
weight=None,
weightsrc=None,
**kwargs,
):
"""
Construct a new Font object
Sets the font used in hover labels.
Parameters
----------
arg
dict of properties compatible with this constructor or
an instance of
:class:`plotly.graph_objs.choropleth.hoverlabel.Font`
color
colorsrc
Sets the source reference on Chart Studio Cloud for
`color`.
family
HTML font family - the typeface that will be applied by
the web browser. The web browser will only be able to
apply a font if it is available on the system which it
operates. Provide multiple font families, separated by
commas, to indicate the preference in which to apply
fonts if they aren't available on the system. The Chart
Studio Cloud (at https://chart-studio.plotly.com or on-
premise) generates images on a server, where only a
select number of fonts are installed and supported.
These include "Arial", "Balto", "Courier New", "Droid
Sans", "Droid Serif", "Droid Sans Mono", "Gravitas
One", "Old Standard TT", "Open Sans", "Overpass", "PT
Sans Narrow", "Raleway", "Times New Roman".
familysrc
Sets the source reference on Chart Studio Cloud for
`family`.
lineposition
Sets the kind of decoration line(s) with text, such as
an "under", "over" or "through" as well as combinations
e.g. "under+over", etc.
linepositionsrc
Sets the source reference on Chart Studio Cloud for
`lineposition`.
shadow
Sets the shape and color of the shadow behind text.
"auto" places minimal shadow and applies contrast text
font color. See https://developer.mozilla.org/en-
US/docs/Web/CSS/text-shadow for additional options.
shadowsrc
Sets the source reference on Chart Studio Cloud for
`shadow`.
size
sizesrc
Sets the source reference on Chart Studio Cloud for
`size`.
style
Sets whether a font should be styled with a normal or
italic face from its family.
stylesrc
Sets the source reference on Chart Studio Cloud for
`style`.
textcase
Sets capitalization of text. It can be used to make
text appear in all-uppercase or all-lowercase, or with
each word capitalized.
textcasesrc
Sets the source reference on Chart Studio Cloud for
`textcase`.
variant
Sets the variant of the font.
variantsrc
Sets the source reference on Chart Studio Cloud for
`variant`.
weight
Sets the weight (or boldness) of the font.
weightsrc
Sets the source reference on Chart Studio Cloud for
`weight`.
Returns
-------
Font
"""
super(Font, self).__init__("font")
if "_parent" in kwargs:
self._parent = kwargs["_parent"]
return
# Validate arg
# ------------
if arg is None:
arg = {}
elif isinstance(arg, self.__class__):
arg = arg.to_plotly_json()
elif isinstance(arg, dict):
arg = _copy.copy(arg)
else:
raise ValueError(
"""\
The first argument to the plotly.graph_objs.choropleth.hoverlabel.Font
constructor must be a dict or
an instance of :class:`plotly.graph_objs.choropleth.hoverlabel.Font`"""
)
# Handle skip_invalid
# -------------------
self._skip_invalid = kwargs.pop("skip_invalid", False)
self._validate = kwargs.pop("_validate", True)
# Populate data dict with properties
# ----------------------------------
_v = arg.pop("color", None)
_v = color if color is not None else _v
if _v is not None:
self["color"] = _v
_v = arg.pop("colorsrc", None)
_v = colorsrc if colorsrc is not None else _v
if _v is not None:
self["colorsrc"] = _v
_v = arg.pop("family", None)
_v = family if family is not None else _v
if _v is not None:
self["family"] = _v
_v = arg.pop("familysrc", None)
_v = familysrc if familysrc is not None else _v
if _v is not None:
self["familysrc"] = _v
_v = arg.pop("lineposition", None)
_v = lineposition if lineposition is not None else _v
if _v is not None:
self["lineposition"] = _v
_v = arg.pop("linepositionsrc", None)
_v = linepositionsrc if linepositionsrc is not None else _v
if _v is not None:
self["linepositionsrc"] = _v
_v = arg.pop("shadow", None)
_v = shadow if shadow is not None else _v
if _v is not None:
self["shadow"] = _v
_v = arg.pop("shadowsrc", None)
_v = shadowsrc if shadowsrc is not None else _v
if _v is not None:
self["shadowsrc"] = _v
_v = arg.pop("size", None)
_v = size if size is not None else _v
if _v is not None:
self["size"] = _v
_v = arg.pop("sizesrc", None)
_v = sizesrc if sizesrc is not None else _v
if _v is not None:
self["sizesrc"] = _v
_v = arg.pop("style", None)
_v = style if style is not None else _v
if _v is not None:
self["style"] = _v
_v = arg.pop("stylesrc", None)
_v = stylesrc if stylesrc is not None else _v
if _v is not None:
self["stylesrc"] = _v
_v = arg.pop("textcase", None)
_v = textcase if textcase is not None else _v
if _v is not None:
self["textcase"] = _v
_v = arg.pop("textcasesrc", None)
_v = textcasesrc if textcasesrc is not None else _v
if _v is not None:
self["textcasesrc"] = _v
_v = arg.pop("variant", None)
_v = variant if variant is not None else _v
if _v is not None:
self["variant"] = _v
_v = arg.pop("variantsrc", None)
_v = variantsrc if variantsrc is not None else _v
if _v is not None:
self["variantsrc"] = _v
_v = arg.pop("weight", None)
_v = weight if weight is not None else _v
if _v is not None:
self["weight"] = _v
_v = arg.pop("weightsrc", None)
_v = weightsrc if weightsrc is not None else _v
if _v is not None:
self["weightsrc"] = _v
# Process unknown kwargs
# ----------------------
self._process_kwargs(**dict(arg, **kwargs))
# Reset skip_invalid
# ------------------
self._skip_invalid = False
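# Example usage (illustrative sketch, not part of the generated module;
# assumes plotly is installed and uses the magic-underscore shorthand):
#
#     import plotly.graph_objects as go
#
#     fig = go.Figure(go.Choropleth(
#         locations=["CA", "TX"], z=[1, 2], locationmode="USA-states",
#     ))
#     fig.update_traces(hoverlabel_font=go.choropleth.hoverlabel.Font(
#         family="Arial", size=14, color="white",
#     ))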
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@graph_objs@choropleth@hoverlabel@_font.py@.PATH_END.py
|
{
"filename": "postgres_indexing.md",
"repo_name": "EranOfek/AstroPack",
"repo_path": "AstroPack_extracted/AstroPack-main/matlab/util/+db/doc/postgres_indexing.md",
"type": "Markdown"
}
|
# Postgres Indexing
Updated: 02/2022
## IDENTITY COLUMN (Auto-Increment)
After experiments and performance tests, we concluded that an integer
IDENTITY column is the fastest and easiest way to generate primary keys for
tables.
The syntax to define such a field is:
pk BIGINT GENERATED ALWAYS AS IDENTITY
### Example
CREATE TABLE public.gcs_telemetry (
pk BIGINT GENERATED ALWAYS AS IDENTITY,
rcv_time DOUBLE PRECISION DEFAULT 0,
param_name VARCHAR,
param_index INTEGER DEFAULT 0
);
CREATE INDEX gcs_telemetry_idx_rcv_time ON public.gcs_telemetry
USING btree (rcv_time);
CREATE INDEX gcs_telemetry_idx_param_name ON public.gcs_telemetry
USING btree (param_name);
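With the IDENTITY column in place, inserts simply omit `pk` and let PostgreSQL assign it; adding `RETURNING` retrieves the generated key (an illustrative statement with made-up values, assuming the `gcs_telemetry` table defined above):

```sql
INSERT INTO public.gcs_telemetry (rcv_time, param_name, param_index)
VALUES (2459580.5, 'battery_voltage', 0)
RETURNING pk;
```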
### UUID (128-bit / String)
https://www.postgresql.org/docs/current/datatype-uuid.html
The data type uuid stores Universally Unique Identifiers (UUID) as
defined by RFC 4122, ISO/IEC 9834-8:2005, and related standards.
(Some systems refer to this data type as a globally unique identifier,
or GUID, instead.) This identifier is a 128-bit quantity that is
generated by an algorithm chosen to make it very unlikely that the
same identifier will be generated by anyone else in the known universe
using the same algorithm. Therefore, for distributed systems, these
identifiers provide a better uniqueness guarantee than sequence generators,
which are only unique within a single database.
A UUID is written as a sequence of lower-case hexadecimal digits,
in several groups separated by hyphens, specifically a group of 8 digits
followed by three groups of 4 digits followed by a group of 12 digits,
for a total of 32 digits representing the 128 bits. An example of a UUID
in this standard form is:
a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11
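As a quick illustration (using Python's standard uuid module, which also appears later in this document), this canonical form parses directly into a 128-bit value:

```python
import uuid

# Parse the example UUID shown above; the stdlib accepts the standard
# 8-4-4-4-12 hyphenated form.
u = uuid.UUID("a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11")

# The value is a 128-bit quantity, i.e. 32 hexadecimal digits.
assert len(u.hex) == 32
assert u.int.bit_length() <= 128

# str() reproduces the canonical lower-case hyphenated form.
print(str(u))  # a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11
```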
https://www.javatpoint.com/postgresql-uuid
https://www.postgresqltutorial.com/postgresql-uuid/
https://www.postgresql.org/docs/current/uuid-ossp.html
### UUID - Version 4 (random)
https://en.wikipedia.org/wiki/Universally_unique_identifier#Version_4_(random)
A version 4 UUID is randomly generated. As in other UUIDs, 4 bits are used to
indicate version 4, and 2 or 3 bits to indicate the variant
(10 or 110 in binary for variants 1 and 2 respectively). Thus, for variant 1
(that is, most UUIDs) a random version-4 UUID will have 6 predetermined
variant and version bits, leaving 122 bits for the randomly generated part,
for a total of 2^122, or 5.3×10^36 (5.3 undecillion) possible version-4 variant-1 UUIDs.
There are half as many possible version-4 variant-2 UUIDs (legacy GUIDs)
because there is one fewer random bit available, 3 bits being consumed for the variant.
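These version and variant bits can be checked directly with Python's stdlib uuid module (a quick sketch, not Postgres code):

```python
import uuid

u = uuid.uuid4()

# 4 bits encode the version number...
assert u.version == 4
# ...and the variant bits mark the UUID as an RFC 4122 (variant 1) value,
# leaving 122 bits for the random part.
assert u.variant == uuid.RFC_4122
```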
### Postgres - How to create UUID values in PostgreSQL
https://www.javatpoint.com/postgresql-uuid
PostgreSQL can store and compare UUID values, but its core does not include
functions to generate them. Instead, it relies on third-party modules that
provide the generation algorithms; for example, the uuid-ossp module offers
convenient functions implementing the standard UUID-creation algorithms.
Use the following CREATE EXTENSION command to install the uuid-ossp module:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
Among the supported functions are those for v1 and v4 UUIDs:
SELECT uuid_generate_v1();
SELECT uuid_generate_v4();
### Creating a table with UUID column (example)
We will create a table whose primary key is of the UUID data type.
In addition, the values of the primary key column will be generated
automatically using the uuid_generate_v4() function.
First, create the contacts table using the following statement:
CREATE TABLE contacts (
contact_id uuid DEFAULT uuid_generate_v4 (),
first_name VARCHAR NOT NULL,
last_name VARCHAR NOT NULL,
email VARCHAR NOT NULL,
phone VARCHAR,
PRIMARY KEY (contact_id)
);
INSERT INTO contacts (
first_name,
last_name,
email,
phone
)
VALUES
(
'John',
'Smith',
'john.smith@example.com',
'408-237-2345'
);
### Postgres UUID Examples
https://stackoverflow.com/questions/12505158/generating-a-uuid-in-postgres-for-insert-statement
INSERT INTO items VALUES( gen_random_uuid(), 54.321, 31, 'desc 1', 31.94 ) ;
### Postgres - Manually insert UUID
https://stackoverflow.com/questions/64914884/syntax-to-manually-insert-a-uuid-value-in-postgres
Syntax to manually insert a UUID value in Postgres
CREATE TABLE IF NOT EXISTS DIM_Jour (
jour_id uuid NOT NULL,
AAAA int,
MM int,
JJ int,
Jour_Semaine int,
Num_Semaine int,
PRIMARY KEY (jour_id)
);
Insert
INSERT INTO DIM_Jour (jour_id, AAAA, MM, JJ, Jour_Semaine, Num_Semaine) VALUES (
'292a485f-a56a-4938-8f1a-bbbbbbbbbbb1',
2020,
11,
19,
4,
47
);
But it's probably a good practice to be explicit about it and perform the cast yourself:
INSERT INTO DIM_Jour (jour_id, AAAA, MM, JJ, Jour_Semaine, Num_Semaine) VALUES (
'292a485f-a56a-4938-8f1a-bbbbbbbbbbb1'::UUID,
2020,
11,
19,
4,
47
);
### UUID - MATLAB - Implemented in AstroPack Component class
function Result = newUuid()
% Generate Uuid using java package, such as '3ac140ba-19b5-4945-9d75-ff9e45d00e91'
% Output: Uuid char array (36 chars)
% Example: U = Component.newUuid()
Temp = java.util.UUID.randomUUID;
% Convert java string to char
Result = string(Temp.toString()).char;
end
Temp = java.util.UUID.randomUUID
Temp =
828e55aa-8d22-437f-a5af-2d8ca176db13
>> class(Temp)
ans =
'java.util.UUID'
### UUID - MATLAB - Using Java
https://www.mathworks.com/matlabcentral/answers/391490-how-can-i-represent-a-hexadecimal-string-in-a-128-bit-format-in-matlab
% Get UUID and store as character
uuid=char(java.util.UUID.randomUUID);
val=uuid;
% Remove the '-' to get the hexadecimal values
val(val == '-') = '';
% Turn 'val' into string elements each representing bytes
% reshape(val,2,16) gives a 2x16 char array
% permute(reshapedvalue,[2 1])' gives a 1x16 string array with each element
% of the string representing a byte
byte_hex = string(permute(reshape(val,2,16),[2 1]))';
% Create an array of bytes
bytes_vector = uint8(hex2dec(byte_hex));
% dec2hex is to verify that the byte set is as expected
dec2hex(bytes_vector);
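For comparison with the MATLAB code above, the same hex-string-to-bytes conversion is built into Python's uuid module (a minimal sketch):

```python
import uuid

u = uuid.uuid4()

# uuid.UUID already exposes the 128-bit value as 16 big-endian bytes,
# so no manual hex reshaping is needed.
raw = u.bytes
assert len(raw) == 16

# Round-trip: rebuilding from the raw bytes yields the same UUID.
assert uuid.UUID(bytes=raw) == u
```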
### UUID - Python
https://docs.python.org/3/library/uuid.htm
uuid.uuid1(node=None, clock_seq=None)
Generate a UUID from a host ID, sequence number, and the current time.
If node is not given, getnode() is used to obtain the hardware address.
If clock_seq is given, it is used as the sequence number; otherwise a
random 14-bit sequence number is chosen.
uuid.uuid3(namespace, name)
Generate a UUID based on the MD5 hash of a namespace identifier
(which is a UUID) and a name (which is a string).
uuid.uuid4()
Generate a random UUID.
uuid.uuid5(namespace, name)
Generate a UUID based on the SHA-1 hash of a namespace identifier
(which is a UUID) and a name (which is a string).
def new_uuid():
s = str(uuid.uuid1())
return s
In [7]: uuid.uuid1()
Out[7]: UUID('8523447d-9e1e-11ec-b5d5-005056c00008')
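One property worth noting: unlike the random uuid4, the name-based uuid3/uuid5 functions are deterministic, so the same namespace and name always map to the same UUID:

```python
import uuid

# Name-based UUIDs are deterministic.
a = uuid.uuid5(uuid.NAMESPACE_DNS, "example.com")
b = uuid.uuid5(uuid.NAMESPACE_DNS, "example.com")
assert a == b
assert a.version == 5

# Random UUIDs differ on every call (with overwhelming probability).
assert uuid.uuid4() != uuid.uuid4()
```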
## Data Types
### Numeric Types
https://www.postgresql.org/docs/current/datatype-numeric.html
### Geometric Types
https://www.postgresql.org/docs/current/datatype-geometric.html
point 16 bytes Point on a plane (x,y)
Points are the fundamental two-dimensional building block for geometric types.
Values of type point are specified using either of the following syntaxes:
( x , y )
x , y
where x and y are the respective coordinates, as floating-point numbers.
Points are output using the first syntax.
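To make the two accepted syntaxes concrete, here is a small Python sketch (parse_point is a hypothetical helper, not part of Postgres) that parses both forms:

```python
def parse_point(text):
    """Parse a Postgres-style point literal: '( x , y )' or 'x , y'."""
    s = text.strip()
    # Strip the optional surrounding parentheses of the first syntax.
    if s.startswith("(") and s.endswith(")"):
        s = s[1:-1]
    x_str, y_str = s.split(",")
    return float(x_str), float(y_str)

# Both input syntaxes yield the same coordinates.
assert parse_point("( 1.5 , 2 )") == (1.5, 2.0)
assert parse_point("-194.0, 53.0") == (-194.0, 53.0)
```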
### Create table with point data type
http://www.java2s.com/Code/PostgreSQL/Data-Type/Createtablewithpointdatatype.htm
postgres=# CREATE TABLE cities (
postgres(# name varchar(80),
postgres(# location point
postgres(# );
CREATE TABLE
postgres=#
postgres=# drop table cities;
DROP TABLE
postgres=#
postgres=#
http://www.java2s.com/Code/PostgreSQL/Data-Type/UsePointdatatypeininsertstatement.htm
postgres=#
postgres=# CREATE TABLE cities (
postgres(# name varchar(80),
postgres(# location point
postgres(# );
CREATE TABLE
postgres=#
postgres=# INSERT INTO cities VALUES ('San Francisco', '(-194.0, 53.0)');
INSERT 0 1
postgres=#
postgres=# select * from cities;
name | location
---------------+-----------
San Francisco | (-194,53)
(1 row)
postgres=#
postgres=# drop table cities;
DROP TABLE
postgres=#
### Geoname
https://tapoueh.org/blog/2018/05/postgresql-data-types-point/
### SP-GiST
https://habr.com/en/company/postgrespro/blog/446624/
#### SP-GiST
First, a few words about this name.
The «GiST» part alludes to some similarity with the same-name access method.
The similarity does exist: both are generalized search trees that provide a
framework for building various access methods.
«SP» stands for space partitioning.
The space here is often just what we are used to call a space,
for example, a two-dimensional plane. But we will see that any search space
is meant, that is, actually any value domain.
SP-GiST is suitable for structures where the space can be recursively
split into non-intersecting areas. This class comprises quadtrees, k-dimensional
trees (k-D trees), and radix trees.
#### Structure
So, the idea of SP-GiST access method is to split the value domain into
non-overlapping subdomains each of which, in turn, can also be split.
Partitioning like this induces non-balanced trees (unlike B-trees and regular GiST).
postgres=# create table points(p point);
postgres=# insert into points(p) values
(point '(1,1)'), (point '(3,2)'), (point '(6,3)'),
(point '(5,5)'), (point '(7,8)'), (point '(8,6)');
postgres=# create index points_quad_idx on points using spgist(p);
### BRIN Index Type
https://www.cybertec-postgresql.com/en/brin-indexes-correlation-correlation-correlation/?gclid=Cj0KCQiAmpyRBhC-ARIsABs2EAoQdxayenamUtgXj7IC8ODxGw794_1mnAK_03SofsXOZfnpz2vVZasaAjS2EALw_wcB
https://www.percona.com/blog/2019/07/16/brin-index-for-postgresql-dont-forget-the-benefits/
BRIN Index is a revolutionary idea in indexing first proposed by PostgreSQL contributor
Alvaro Herrera. BRIN stands for “Block Range INdex”. A block range is a group of pages
adjacent to each other, where summary information about all those pages is stored in Index.
For example, data types with a linear sort order, such as integers and dates,
can be summarized by storing only the minimum and maximum values of each range.
Other database systems, including Oracle, announced similar features later.
A BRIN index often gives gains similar to partitioning a table.
A BRIN lookup returns all the tuples in all the pages of the matching ranges,
so the index is lossy and extra work is needed to filter out non-matching
records. That may sound like a drawback, but there are a few advantages:
Since only summary information about a range of pages is stored, BRIN indexes
are usually very small compared to B-tree indexes. So if we want to squeeze the
working set of data into shared_buffers, this is a great help.
The lossiness of BRIN can be controlled by specifying pages per range
(discussed in a later section).
It offloads the summarization work to vacuum or autovacuum, so the overhead of
index maintenance on transaction / DML operations is minimal.
Limitation:
BRIN indexes are efficient if the ordering of the key values follows the
organization of blocks in the storage layer. In the simplest case, this
could require the physical ordering of the table, which is often the
creation order of its rows, to match the key's order. Keys based on generated
sequence numbers or creation dates are the best candidates for a BRIN index.
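The block-range summary idea can be illustrated with a toy Python sketch (the helper names here are made up for illustration; this is not how Postgres implements BRIN internally):

```python
def build_summaries(values, pages_per_range):
    # Summarize each "block range" of a physically ordered column
    # by its min and max value only.
    return [(min(chunk), max(chunk))
            for chunk in (values[i:i + pages_per_range]
                          for i in range(0, len(values), pages_per_range))]

def candidate_ranges(summaries, key):
    # Lossy lookup: return every range whose [min, max] interval could
    # contain the key; the rows inside still need to be re-checked.
    return [i for i, (lo, hi) in enumerate(summaries) if lo <= key <= hi]

values = list(range(100))           # creation-ordered keys: ideal for BRIN
summaries = build_summaries(values, 10)
assert len(summaries) == 10         # tiny compared to one entry per row
assert candidate_ranges(summaries, 42) == [4]
```

With physically ordered keys only a single range survives the pruning; with randomly ordered keys most ranges would overlap the search key and the index would prune almost nothing, which is exactly the limitation described above.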
### Partial Index
https://www.gojek.io/blog/the-postgres-performance-tuning-manual-indexes
Postgres supports an index over a subset of the rows of a table
(known as a partial index). It’s often the best way to index our data
if we want to repeatedly analyse rows that match a given WHERE clause.
Let us see how we can enhance the performance of Postgres using partial
indexing.
### Hash Index
https://hakibenita.com/postgresql-hash-index
https://dba.stackexchange.com/questions/259477/hash-index-vs-btree-index-performance-on-inserts-and-updates
https://devcenter.heroku.com/articles/postgresql-indexes
### Data Folder Structure
https://www.postgresql.org/docs/current/storage-file-layout.html
### Performance - More
https://devcenter.heroku.com/categories/postgres-performance
|
EranOfekREPO_NAMEAstroPackPATH_START.@AstroPack_extracted@AstroPack-main@matlab@util@+db@doc@postgres_indexing.md@.PATH_END.py
|
{
"filename": "two_d_plotters.py",
"repo_name": "lsst/rubin_sim",
"repo_path": "rubin_sim_extracted/rubin_sim-main/rubin_sim/maf/plots/two_d_plotters.py",
"type": "Python"
}
|
__all__ = ("TwoDMap", "VisitPairsHist")
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import colors
from .perceptual_rainbow import make_pr_cmap
from .plot_handler import BasePlotter
perceptual_rainbow = make_pr_cmap()
class TwoDMap(BasePlotter):
def __init__(self):
self.plot_type = "TwoD"
self.object_plotter = False
self.default_plot_dict = {
"title": None,
"xlabel": None,
"ylabel": None,
"label": None,
"log_scale": False,
"cbar_format": None,
"cbarTitle": "Count",
"cmap": perceptual_rainbow,
"percentile_clip": None,
"color_min": None,
"color_max": None,
"zp": None,
"norm_val": None,
"cbar_edge": True,
"n_ticks": None,
"aspect": "auto",
"xextent": None,
"origin": None,
"figsize": None,
}
def __call__(self, metric_value, slicer, user_plot_dict, fig=None):
"""
Parameters
----------
metric_value : `numpy.ma.MaskedArray`
The metric values from the bundle.
slicer : `rubin_sim.maf.slicers.TwoDSlicer`
The slicer.
user_plot_dict : `dict`
Dictionary of plot parameters set by user
(overrides default values).
fig : `matplotlib.figure.Figure`
Matplotlib figure number to use. Default = None, starts new figure.
Returns
-------
fig : `matplotlib.figure.Figure`
Figure with the plot.
"""
if "Healpix" in slicer.slicer_name:
self.default_plot_dict["ylabel"] = "Healpix ID"
elif "User" in slicer.slicer_name:
self.default_plot_dict["ylabel"] = "User Field ID"
plot_dict = {}
plot_dict.update(self.default_plot_dict)
# Don't clobber with None
for key in user_plot_dict:
if user_plot_dict[key] is not None:
plot_dict[key] = user_plot_dict[key]
if plot_dict["xextent"] is None:
plot_dict["xextent"] = [0, metric_value[0, :].size]
if plot_dict["log_scale"]:
norm = colors.LogNorm()
else:
norm = None
# Mask out values below the color minimum so they show up as white
if plot_dict["color_min"] is not None:
low_vals = np.where(metric_value.data < plot_dict["color_min"])
metric_value.mask[low_vals] = True
if fig is None:
fig = plt.figure(figsize=plot_dict["figsize"])
ax = fig.add_subplot(111)
yextent = [0, slicer.nslice - 1]
xextent = plot_dict["xextent"]
extent = []
extent.extend(xextent)
extent.extend(yextent)
image = ax.imshow(
metric_value,
vmin=plot_dict["color_min"],
vmax=plot_dict["color_max"],
aspect=plot_dict["aspect"],
cmap=plot_dict["cmap"],
norm=norm,
extent=extent,
interpolation="none",
origin=plot_dict["origin"],
)
cb = plt.colorbar(image)
ax.set_xlabel(plot_dict["xlabel"])
ax.set_ylabel(plot_dict["ylabel"])
ax.set_title(plot_dict["title"])
cb.set_label(plot_dict["cbarTitle"])
# Fix white space on pdf's
if plot_dict["cbar_edge"]:
cb.solids.set_edgecolor("face")
return fig
class VisitPairsHist(BasePlotter):
"""
Given a TwoDSlicer, figure out what fraction of observations
are in singles, pairs, triples, etc.
Parameters
----------
metric_value : `numpy.ma.MaskedArray`
The metric values from the bundle.
slicer : `rubin_sim.maf.slicers.TwoDSlicer`
The slicer.
user_plot_dict : `dict`
Dictionary of plot parameters set by user
(overrides default values).
fig : `matplotlib.figure.Figure`
Matplotlib figure number to use. Default = None, starts new figure.
Returns
-------
fig : `matplotlib.figure.Figure`
Figure with the plot.
"""
def __init__(self):
self.plot_type = "TwoD"
self.object_plotter = False
self.default_plot_dict = {
"title": None,
"xlabel": "N visits per night per field",
"ylabel": "N Visits",
"label": None,
"color": "b",
"xlim": [0, 20],
"ylim": None,
"figsize": None,
}
def __call__(self, metric_value, slicer, user_plot_dict, fig=None):
plot_dict = {}
plot_dict.update(self.default_plot_dict)
# Don't clobber with None
for key in user_plot_dict:
if user_plot_dict[key] is not None:
plot_dict[key] = user_plot_dict[key]
max_val = metric_value.max()
bins = np.arange(0.5, max_val + 0.5, 1)
vals, bins = np.histogram(metric_value, bins)
xvals = (bins[:-1] + bins[1:]) / 2.0
if fig is None:
fig = plt.figure(figsize=plot_dict["figsize"])
ax = fig.add_subplot(111)
ax.bar(xvals, vals * xvals, color=plot_dict["color"], label=plot_dict["label"])
ax.set_xlabel(plot_dict["xlabel"])
ax.set_ylabel(plot_dict["ylabel"])
ax.set_title(plot_dict["title"])
ax.set_xlim(plot_dict["xlim"])
ax.set_ylim(plot_dict["ylim"])
return fig
|
lsstREPO_NAMErubin_simPATH_START.@rubin_sim_extracted@rubin_sim-main@rubin_sim@maf@plots@two_d_plotters.py@.PATH_END.py
|
{
"filename": "_ticklabeloverflow.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/surface/colorbar/_ticklabeloverflow.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class TicklabeloverflowValidator(_plotly_utils.basevalidators.EnumeratedValidator):
def __init__(
self, plotly_name="ticklabeloverflow", parent_name="surface.colorbar", **kwargs
):
super(TicklabeloverflowValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "calc"),
values=kwargs.pop("values", ["allow", "hide past div", "hide past domain"]),
**kwargs,
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@surface@colorbar@_ticklabeloverflow.py@.PATH_END.py
|
{
"filename": "rv_end_to_end.py",
"repo_name": "sblunt/orbitize",
"repo_path": "orbitize_extracted/orbitize-main/tests/end-to-end-tests/rv_end_to_end.py",
"type": "Python"
}
|
from orbitize import read_input, system, priors, sampler, results, kepler
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
"""
Simulates RV data from multiple instruments and relative astrometric data
from a single instrument, then runs an orbit-fit and recovers the input
parameters.
Written: Vighnesh Nagpal, 2021
"""
def plot_rv(epochs, rvs):
plt.plot(epochs, rvs)
plt.savefig("rv_trend")
plt.close()
def plot_astro(ras, decs):
plt.plot(ras, decs)
plt.axis("equal")
plt.savefig("orbit_trend")
plt.close()
def gen_data():
"""
Simulates radial velocity and astrometric data for a test system.
Returns:
(rvs,rv_epochs): Tuple of generated radial velocity measurements (rvs) and their corresponding
measurement epochs (rv_epochs)
(ras,decs,astro_epochs): Tuple containing simulated astrometric measurements (ras, decs)
and the corresponding measurement epochs (astro_epochs)
"""
# set parameters for the synthetic data
sma = 1
inc = np.pi / 2
ecc = 0.2
aop = np.pi / 4
pan = np.pi / 4
tau = 0.4
plx = 50
mass_for_kamp = 0.1
mtot = 1.1
# epochs and errors for rv
rv_epochs = np.linspace(51544, 52426, 200)
# epochs and errors for astrometry
astro_epochs = np.linspace(51500, 52500, 10)
astro_err = 0
# generate rv trend
rvset = kepler.calc_orbit(
rv_epochs, sma, ecc, inc, aop, pan, tau, plx, mtot, mass_for_Kamp=mass_for_kamp
)
rvs = rvset[2]
# generate predictions for astrometric epochs
astro_set = kepler.calc_orbit(
astro_epochs,
sma,
ecc,
inc,
aop,
pan,
tau,
plx,
mtot,
mass_for_Kamp=mass_for_kamp,
)
ras, decs = astro_set[0], astro_set[1]
# return model generations
return (rvs, rv_epochs), (ras, decs, astro_epochs)
def scat_model(rvs, calibration_terms):
"""
Function that adds scatter to RV data based on provided calibration terms (gamma, sigma)
that are unique for each instrument in the dataset.
Args:
rvs (array): Array of radial velocity measurements
calibration_terms (tuple): Tuple of the form: (gamma_instrument1,jit_instrument1,
gamma_instrument2,jit_instrument2,
rv_err)
returns:
scat_rvs (array): Array of RV measurements with scatter added
errors (array): Array of measurement uncertainties of the RV measurements
"""
gam1, jit1, gam2, jit2, rv_err = calibration_terms
# create empty arrays to be filled with data from each inst +respective jit and sigmas
length = int(len(rvs) / 2)
off_1 = np.zeros(length)
off_2 = np.zeros(length)
# create an array of normally sampled jitters for each instruments
errors1 = np.abs(rv_err * np.random.randn(length))
errors2 = np.abs(rv_err * np.random.randn(length))
jscat1 = np.random.randn(length) * np.sqrt(jit1**2 + errors1**2)
jscat2 = np.random.randn(length) * np.sqrt(jit2**2 + errors2**2)
# fill off_1 and off_2
off_1[:] = rvs[:length]
off_2[:] = rvs[length:]
# add scatters and gammas for first instrument
off_1 += gam1
off_1 += jscat1
# add scatters and gammas for second instrument
off_2 += gam2
off_2 += jscat2
# put em together
scat_rvs = np.concatenate([off_1, off_2])
# put measurement uncertainties together
errors = np.concatenate([errors1, errors2])
return scat_rvs, errors
def make_csv(fname, rv_epochs, model, astr_info, errors):
"""
Takes the data generated and saves it as an orbitize-compatible csv file.
"""
# unpack astrometric info
ras, decs, astro_epochs = astr_info
# actually make csv
frame = []
for i, val in enumerate(rv_epochs):
if i < 100:
obs = [val, 0, model[i], errors[i], None, None, None, None, "tel_1"]
else:
obs = [val, 0, model[i], errors[i], None, None, None, None, "tel_2"]
frame.append(obs)
for i, val in enumerate(astro_epochs):
obs = [val, 1, None, None, ras[i], 0, decs[i], 0, "default"]
frame.append(obs)
df = pd.DataFrame(
frame,
columns=[
"epoch",
"object",
"rv",
"rv_err",
"raoff",
"raoff_err",
"decoff",
"decoff_err",
"instrument",
],
)
df.set_index("epoch", inplace=True)
df.to_csv(fname)
def run_fit(fname):
"""
Runs the orbit fit! Saves the resultant posterior, orbit plot and corner plot
args:
fname (str): Path to the data file.
"""
# parameters for the system
num_planets = 1
data_table = read_input.read_file(fname)
m0 = 1.0
mass_err = 0.01
plx = 50
plx_err = 0.01
# initialise a system object
sys = system.System(
num_planets,
data_table,
m0,
plx,
mass_err=mass_err,
plx_err=plx_err,
fit_secondary_mass=True,
)
# MCMC parameters
n_temps = 5
n_walkers = 1000
n_threads = 5
total_orbits_MCMC = 5000 # n_walkers x num_steps_per_walker
burn_steps = 1
thin = 1
# set up sampler object and run it
mcmc_sampler = sampler.MCMC(sys, n_temps, n_walkers, n_threads)
orbits = mcmc_sampler.run_sampler(
total_orbits_MCMC, burn_steps=burn_steps, thin=thin
)
myResults = mcmc_sampler.results
try:
save_path = "."
filename = "post.hdf5"
hdf5_filename = os.path.join(save_path, filename)
myResults.save_results(hdf5_filename) # saves results object as an hdf5 file
except:
print("Something went wrong while saving the results")
finally:
corner_figure = myResults.plot_corner()
corner_name = "corner.png"
corner_figure.savefig(corner_name)
orbit_figure = myResults.plot_orbits(rv_time_series=True)
orbit_name = "joint_orbit.png"
orbit_figure.savefig(orbit_name)
print("Done!")
if __name__ == "__main__":
rv_info, astr_info = gen_data()
rvs, rv_epochs = rv_info
# set gammas and jitters
calibration_terms = (0.7, 0.009, -0.3, 0.006, 0.002)
# add scatter to model
model, errors = scat_model(rvs, calibration_terms)
# save this to a new file
fname = "./simulated_data.csv"
make_csv(fname, rv_epochs, model, astr_info, errors)
# run orbit fit
run_fit(fname)
# delete the simulated-data CSV
os.remove(fname)
|
sbluntREPO_NAMEorbitizePATH_START.@orbitize_extracted@orbitize-main@tests@end-to-end-tests@rv_end_to_end.py@.PATH_END.py
|
{
"filename": "LICENSE.md",
"repo_name": "brunzema/truncated-mvn-sampler",
"repo_path": "truncated-mvn-sampler_extracted/truncated-mvn-sampler-main/LICENSE.md",
"type": "Markdown"
}
|
MIT License
Copyright (c) 2021 Paul Brunzema
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
|
brunzemaREPO_NAMEtruncated-mvn-samplerPATH_START.@truncated-mvn-sampler_extracted@truncated-mvn-sampler-main@LICENSE.md@.PATH_END.py
|
{
"filename": "__init__.py",
"repo_name": "jax-ml/jax",
"repo_path": "jax_extracted/jax-main/jax/experimental/colocated_python/__init__.py",
"type": "Python"
}
|
# Copyright 2024 The JAX Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Colocated Python API."""
# Note: import <name> as <name> is required for names to be exported.
# See PEP 484 & https://github.com/google/jax/issues/7570
# pylint: disable=useless-import-alias
from jax.experimental.colocated_python.api import (
colocated_cpu_devices as colocated_cpu_devices,
colocated_python as colocated_python,
)
|
jax-mlREPO_NAMEjaxPATH_START.@jax_extracted@jax-main@jax@experimental@colocated_python@__init__.py@.PATH_END.py
|
{
"filename": "bondi.py",
"repo_name": "AFD-Illinois/ebhlight",
"repo_path": "ebhlight_extracted/ebhlight-master/test/bondi.py",
"type": "Python"
}
|
################################################################################
# #
# BONDI INFLOW #
# #
################################################################################
import os
import sys; sys.dont_write_bytecode = True
from subprocess import call
from shutil import copyfile
import glob
import numpy as np
#import h5py
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import pylab as pl
sys.path.insert(0, '../script/')
sys.path.insert(0, '../script/analysis/')
import util
import hdf5_to_dict as io
TMP_DIR = 'TMP'
TMP_BUILD = 'build_tmp.py'
util.safe_remove(TMP_DIR)
AUTO = False
for arg in sys.argv:
if arg == '-auto':
AUTO = True
RES = [32, 64, 128]
util.make_dir(TMP_DIR)
os.chdir('../prob/bondi/')
copyfile('build.py', TMP_BUILD)
# COMPILE CODE AT MULTIPLE RESOLUTIONS USING SEPARATE BUILD FILE
for n in range(len(RES)):
util.change_cparm('N1TOT', RES[n], TMP_BUILD)
util.change_cparm('N2TOT', RES[n], TMP_BUILD)
call(['python', TMP_BUILD, '-dir', TMP_DIR])
call(['cp', os.path.join(os.getcwd(), TMP_DIR, 'bhlight'),
'../../test/' + TMP_DIR + '/bhlight_' + str(RES[n])])
copyfile(os.path.join(os.getcwd(), TMP_DIR, 'param_template.dat'), '../../test/' +
TMP_DIR + '/param.dat')
util.safe_remove(TMP_BUILD)
util.safe_remove(TMP_DIR)
os.chdir('../../test/')
NVAR = 8
L1 = np.zeros(len(RES))
os.chdir(TMP_DIR)
# RUN PROBLEM FOR EACH RESOLUTION AND ANALYZE RESULT
for m in range(len(RES)):
print(['./bhlight_' + str(RES[m]), '-p', 'param.dat'])
util.change_rparm('tf', 50, 'param.dat')
call(['./bhlight_' + str(RES[m]), '-p', 'param.dat'])
dfiles = np.sort(glob.glob('dumps/dump*.h5'))
hdr = io.load_hdr(dfiles[0])
geom = io.load_geom(hdr)
dump0 = io.load_dump(dfiles[0], geom)
dump1 = io.load_dump(dfiles[-1], geom)
r = geom['r'][:,0,0]
imin = 0
while r[imin] < hdr['Reh']:
imin += 1
rho0 = np.mean(dump0['RHO'][imin:,:,0], axis=1)
rho1 = np.mean(dump1['RHO'][imin:,:,0], axis=1)
L1[m] = np.mean(np.fabs(rho1 - rho0))
files = glob.glob('dumps/*')
for f in files:
os.remove(f)
files = glob.glob('restarts/*')
for f in files:
os.remove(f)
# MEASURE CONVERGENCE
powerfit = np.polyfit(np.log(RES), np.log(L1), 1)[0]
os.chdir('../')
if not AUTO:
# MAKE PLOTS
fig = plt.figure(figsize=(16.18,10))
ax = fig.add_subplot(1,1,1)
ax.plot(RES, L1, marker='s', label='RHO')
amp = 1.
ax.plot([RES[0]/2., RES[-1]*2.],
10.*amp*np.asarray([RES[0]/2., RES[-1]*2.])**-2.,
color='k', linestyle='--', label='N^-2')
plt.xscale('log', basex=2); plt.yscale('log')
plt.xlim([RES[0]/np.sqrt(2.), RES[-1]*np.sqrt(2.)])
plt.xlabel('N'); plt.ylabel('L1')
#plt.title(NAMES[MODES[n]])
plt.legend(loc=1)
plt.savefig('bondi.png', bbox_inches='tight')
if AUTO:
data = {}
data['SOLX'] = [None]
data['CODEX'] = [None]
data['SOLY'] = [-2.]
data['CODEY'] = [powerfit]
data['THRESHOLD'] = 0.15
import pickle
pickle.dump(data, open('data.p', 'wb'))
# CLEAN UP
util.safe_remove(TMP_DIR)
|
AFD-IllinoisREPO_NAMEebhlightPATH_START.@ebhlight_extracted@ebhlight-master@test@bondi.py@.PATH_END.py
|
{
"filename": "gbsix_test.py",
"repo_name": "HERA-Team/aipy",
"repo_path": "aipy_extracted/aipy-main/tests/catalog_tests/gbsix_test.py",
"type": "Python"
}
|
from __future__ import print_function, division, absolute_import
import unittest, aipy._src.gbsix as h, aipy as a, numpy as n
class TestGBSixCatalog(unittest.TestCase):
def setUp(self):
self.cat = h.GBSixCatalog()
self.cat.fromfile(h.GBSIXFILE)
def test_spotcheck(self):
for srcname in self.cat:
src = self.cat[srcname]
self.assertEqual(src.index, 0)
self.assertEqual(len(src.srcshape), 3)
self.assertEqual(src.srcshape[0], 0)
self.assertEqual(src.srcshape[1], 0)
self.assertEqual(src.srcshape[2], 0)
self.assertEqual(src.mfreq, 4.85)
if __name__ == '__main__':
unittest.main()
|
HERA-TeamREPO_NAMEaipyPATH_START.@aipy_extracted@aipy-main@tests@catalog_tests@gbsix_test.py@.PATH_END.py
|
{
"filename": "plot_model.ipynb",
"repo_name": "dmlc/xgboost",
"repo_path": "xgboost_extracted/xgboost-master/demo/CLI/distributed-training/plot_model.ipynb",
"type": "Jupyter Notebook"
}
|
# XGBoost Model Analysis
This notebook can be used to load and analyze a model learnt from any of the xgboost bindings, including distributed training.
```python
import sys
import os
%matplotlib inline
```
## Please change the ```pkg_path``` and ```model_file``` to be correct path
```python
pkg_path = '../../python-package/'
model_file = 's3://my-bucket/xgb-demo/model/0002.model'
sys.path.insert(0, pkg_path)
import xgboost as xgb
```
# Plot the Feature Importance
```python
# load the model and plot the feature importance
bst = xgb.Booster(model_file=model_file)
xgb.plot_importance(bst)
```
# Plot the First Tree
```python
tree_id = 0
xgb.to_graphviz(bst, tree_id)
```
|
dmlcREPO_NAMExgboostPATH_START.@xgboost_extracted@xgboost-master@demo@CLI@distributed-training@plot_model.ipynb@.PATH_END.py
|
{
"filename": "smooth_cal_inspect_2458154.ipynb",
"repo_name": "HERA-Team/H1C_IDR3_Notebooks",
"repo_path": "H1C_IDR3_Notebooks-main/smooth_cal_inspect/smooth_cal_inspect_2458154.ipynb",
"type": "Jupyter Notebook"
}
|
# Stage 2 Calibration Smoothing Nightly Notebook
**Josh Dillon**, Last Revised 12/4/20
```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from hera_cal import io, redcal, apply_cal, abscal, utils
from hera_cal.smooth_cal import build_time_blacklist
from hera_qm.metrics_io import load_metric_file
import pyuvdata
import glob
import os
from copy import deepcopy
import inspect
import h5py
import matplotlib.cm as cm
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
```
```python
# If you want to run this notebook locally, copy the output of the next cell into the first few lines of this cell.
# JD = '2459122'
# data_path = '/lustre/aoc/projects/hera/H4C/2459122'
# lst_blacklist_string = '0-1.3 2.5-4.3 5.0-5.7 6.5-9.1 10.6-11.5 11.9-14.3 16.3-1.3'
# abscal_model_glob = '/lustre/aoc/projects/hera/zmartino/hera_calib_model/H3C/abscal_files_unique_baselines/zen.2458894.?????.uvh5'
# os.environ["JULIANDATE"] = JD
# os.environ["DATA_PATH"] = data_path
# os.environ["LST_BLACKLIST_STRING"] = lst_blacklist_string
# os.environ["ABSCAL_MODEL_GLOB"] = abscal_model_glob
```
```python
# Use environment variables to figure out path to data
JD = os.environ['JULIANDATE']
data_path = os.environ['DATA_PATH']
lst_blacklist_string = os.environ['LST_BLACKLIST_STRING']
abscal_model_glob = os.environ['ABSCAL_MODEL_GLOB']
print(f'JD = "{JD}"')
print(f'data_path = "{data_path}"')
print(f'lst_blacklist_string = "{lst_blacklist_string}"')
print(f'abscal_model_glob = "{abscal_model_glob}"')
```
JD = "2458154"
data_path = "/lustre/aoc/projects/hera/H1C_IDR3/IDR3_2/2458154"
lst_blacklist_string = ""
abscal_model_glob = "/lustre/aoc/projects/hera/H1C_IDR3/abscal_model/zen.245804*.HH.uvRXLS.uvh5"
```python
print('Looking for data in', data_path, 'on JD', JD)
data_list = sorted(glob.glob(os.path.join(data_path, f'zen.{JD}.?????.sum.uvh5')))
if len(data_list) == 0:
data_list = sorted(glob.glob(os.path.join(data_path, f'zen.{JD}.?????.uvh5')))
print('...found {} data files.'.format(len(data_list)))
abscal_list = sorted(glob.glob(os.path.join(data_path, f'zen.{JD}.*.abs.calfits')))
print('...found {} abscal files.'.format(len(abscal_list)))
smooth_cal_list = sorted(glob.glob(os.path.join(data_path, f'zen.{JD}.*.sum.smooth_abs.calfits')))
print('...found {} smooth_cal files.'.format(len(smooth_cal_list)))
```
Looking for data in /lustre/aoc/projects/hera/H1C_IDR3/IDR3_2/2458154 on JD 2458154
...found 73 data files.
...found 73 abscal files.
...found 73 smooth_cal files.
```python
# get all JDs and LSTs
_, _, file_lst_arrays, file_time_arrays = io.get_file_times(data_list)
# parse lst_blacklist_string
lst_blacklists = []
if len(lst_blacklist_string) > 0:
lst_blacklists = [tuple([float(arg) for arg in arg_pair.split('-', maxsplit=1)])
for arg_pair in lst_blacklist_string.split(' ')]
# get times that are blacklisted and reshape them like file_time_arrays
time_blacklisted_flat = build_time_blacklist(np.hstack(file_time_arrays), lst_blacklists=lst_blacklists)
time_blacklisted = [fta.astype(bool) for fta in file_time_arrays]
n = 0
for i in range(len(file_time_arrays)):
time_blacklisted[i] = np.zeros_like(time_blacklisted[i], dtype=bool)
for j in range(len(file_time_arrays[i])):
time_blacklisted[i][j] = time_blacklisted_flat[n]
n += 1
# pick the central time from among the not-LST blacklisted files, if possible
good_indices = [i for i, tb in enumerate(time_blacklisted) if not np.any(tb)]
if len(good_indices) > 0:
file_index = good_indices[len(good_indices)//2]
else:
file_index = len(data_list)//2
file_JD = '.'.join([s for s in data_list[file_index].split('.') if s.isdigit()])
```
/lustre/aoc/projects/hera/heramgr/anaconda2/envs/h1c_idr3/lib/python3.7/site-packages/numpy/core/_asarray.py:83: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
return array(a, dtype, copy=False, order=order)
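The blacklist-string convention parsed above — space-separated `start-end` LST ranges in hours, where a range like `16.3-1.3` wraps through 24 h — can be sketched in isolation. The two functions below are illustrative stand-ins, not the `hera_cal.smooth_cal` implementation:

```python
def parse_lst_blacklists(blacklist_string):
    """Parse a 'start-end start-end ...' string (hours) into (start, end) tuples."""
    if len(blacklist_string) == 0:
        return []
    return [tuple(float(arg) for arg in pair.split('-', maxsplit=1))
            for pair in blacklist_string.split(' ')]

def lst_is_blacklisted(lst, lst_blacklists):
    """Check whether an LST (hours) falls in any range; end < start wraps through 24 h."""
    for start, end in lst_blacklists:
        if start <= end:
            if start <= lst <= end:
                return True
        elif lst >= start or lst <= end:  # wrap-around range, e.g. 16.3-1.3
            return True
    return False

ranges = parse_lst_blacklists('0-1.3 16.3-1.3')
```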
```python
# Load abscal gains
hca = io.HERACal(abscal_list[file_index])
ga, gaf, _, _ = hca.read()
# Get min_bl_cut, we only want to compare baselines actually used in absolute calibration
try:
min_bl_cut = float(hca.history.replace('\n','').split('--min_bl_cut')[-1].split('--')[0].strip())
except:
print('Could not find min_bl_cut, setting to 1 m.')
min_bl_cut = 1.0
# Load the most common redundant baseline longer than min_bl_cut
hd = io.HERAData(data_list[file_index])
bls_to_plot = []
for pol in ['ee', 'nn']:
reds = redcal.get_reds(hd.antpos, pols=[pol])
reds = sorted(reds, key=len, reverse=True)
bl_lens = np.array([np.linalg.norm(hd.antpos[red[0][1]] - hd.antpos[red[0][0]]) for red in reds])
try:
bl_group_to_plot = (np.array(reds)[bl_lens >= min_bl_cut])[0]
except:
bl_group_to_plot = reds[0]
bls_to_plot.extend(bl_group_to_plot)
# Load smooth_cal gains and determine ex_ants
hc = io.HERACal(smooth_cal_list[file_index])
gains, gain_flags, _, _ = hc.read()
ex_ants = [ant for ant in gain_flags if np.all(gain_flags[ant])]
# Load data and calibrate
data, flags, nsamples = hd.read(bls=bls_to_plot)
sc_data, sc_flags = deepcopy(data), deepcopy(flags)
ac_data, ac_flags = deepcopy(data), deepcopy(flags)
apply_cal.calibrate_in_place(sc_data, gains, data_flags=sc_flags, cal_flags=gain_flags)
apply_cal.calibrate_in_place(ac_data, ga, data_flags=ac_flags, cal_flags=gaf)
```
Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
```python
plt.figure(figsize=(8,8))
plt.scatter(np.array(list(hd.antpos.values()))[:,0],
np.array(list(hd.antpos.values()))[:,1], c='w', s=0)
for ant,pos in hd.antpos.items():
    bad = ant in {a[0] for a in ex_ants}
plt.gca().add_artist(plt.Circle(tuple(pos[0:2]), radius=7,
fill=(~bad), color=['grey','r'][bad]))
plt.text(pos[0],pos[1],str(ant), va='center', ha='center', color='w')
plt.xlabel("Antenna East-West Position (meters)")
plt.ylabel("Antenna North-South Position (meters)")
plt.title('Antenna Positions on {} (Red = Flagged)'.format(file_JD));
plt.axis('equal')
plt.tight_layout()
plt.show()
```

### Figure 1: Array and Flagged Antennas
#### OBSERVER CHECKLIST:
* Check that the array configuration looks reasonable.
* Check that all antennas expected to be flagged are actually flagged, but also that not everything is getting flagged.
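As a quick complement to the antenna-position plot, the fully-flagged set can be tabulated directly from the gain flags. This is a minimal standalone sketch using toy flag waterfalls rather than the real `gain_flags` loaded above:

```python
import numpy as np

def split_by_flagging(gain_flags):
    """Split antennas into fully flagged (ex_ants) and remaining active antennas.

    gain_flags maps (ant_num, pol) -> boolean waterfall of shape (Ntimes, Nfreqs);
    an antenna only counts as excluded when every time/frequency sample is flagged.
    """
    ex_ants = sorted(ant for ant, f in gain_flags.items() if np.all(f))
    good_ants = sorted(set(gain_flags) - set(ex_ants))
    return ex_ants, good_ants

# toy flags: antenna 1 is dead (all flagged), antenna 0 has one bad channel
flags = {(0, 'Jee'): np.zeros((3, 5), dtype=bool),
         (1, 'Jee'): np.ones((3, 5), dtype=bool)}
flags[(0, 'Jee')][:, 2] = True  # a single flagged channel does not exclude the antenna
ex_ants, good_ants = split_by_flagging(flags)
```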
```python
# Check whether the model is redundant by looking at the history
model_is_redundant = ('--model_is_redundant' in "".join(hc.history.split()))
# Find files that overlap with this file
abscal_matched_files = list(abscal.match_times(data_list[file_index],
sorted(glob.glob(abscal_model_glob)),
filetype='uvh5', atol=1e-5))
hdm = io.HERAData(abscal_matched_files)
# Get model baselines to load
model_bls = hdm.bls
model_antpos = hdm.antpos
if isinstance(model_bls, dict):
model_bls = list(model_bls.values())[0]
model_antpos = {ant: pos for antpos in hdm.antpos.values() for ant, pos in antpos.items()}
_, model_bl_to_load, data_to_model_bl_map = abscal.match_baselines(bls_to_plot, model_bls,
hd.antpos, model_antpos=model_antpos,
model_is_redundant=model_is_redundant)
model, model_flags, _ = hdm.read(bls=model_bl_to_load)
# Rephase model at index of best match to mean LST in the data
model_index = np.argmin(np.abs(model.lsts - np.mean(data.lsts)))
model_blvecs = {bl: model.antpos[bl[0]] - model.antpos[bl[1]] for bl in model.keys()}
utils.lst_rephase(model, model_blvecs, model.freqs, np.mean(data.lsts) - model.lsts[model_index],
lat=hdm.telescope_location_lat_lon_alt_degrees[0], inplace=True)
if not model_is_redundant:
model, _, _ = utils.red_average(model, flags=model_flags)
```
Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
```python
import warnings
with warnings.catch_warnings():
warnings.filterwarnings('ignore', r'All-NaN (slice|axis) encountered')
for pol in ['ee', 'nn']:
for func, plot, ylabel in zip([np.abs, np.angle], [plt.semilogy, plt.plot], ['Amplitude (Jy)', 'Phase (Radians)']):
plt.figure(figsize=(16,4))
for d, f, l, m in zip([ac_data, sc_data],
[ac_flags, sc_flags],
['Abs Calibrated Data', 'Smooth Calibrated Data'],
['r-', 'b.']):
to_avg = []
for bl in [k for k in bls_to_plot if k[2] == pol]:
blvec = hd.antpos[bl[0]] - hd.antpos[bl[1]]
to_avg.append(deepcopy(d[bl]))
to_avg[-1][f[bl]] = np.nan + 1.0j * np.nan
to_plot = np.nanmedian(np.real(to_avg), axis=(0,1)) + 1.0j * np.nanmedian(np.imag(to_avg), axis=(0,1))
plot(hd.freqs/1e6, func(to_plot), m, label=l)
for bl in [k for k in model if k[2] == pol]:
plot(hd.freqs/1e6, func(model[bl][model_index]), 'k-', label='Abscal Model')
plt.xlabel('Frequency (MHz)')
plt.ylabel(ylabel)
plt.legend(loc='lower right')
plt.title('{}-Polarized, {:f} m East, {:f} m North Visibility on {}'.format(pol, blvec[0], blvec[1], file_JD))
```




### Figure 2: Example redundant baseline average, both absolute calibrated and smoothed, compared to the Abscal Model
#### OBSERVER CHECKLIST:
* Check that the abscaled data and the smoothcaled data are reasonably consistent
* Check that both match the abscal model fairly well.
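One way to quantify the by-eye comparison in Figure 2 is a median fractional residual between the calibrated visibility and the abscal model. The sketch below assumes both spectra are already on the same frequency grid; the function name and toy inputs are illustrative, not part of the pipeline:

```python
import numpy as np

def median_fractional_residual(calibrated, model, flags=None):
    """Median |data - model| / |model| over unflagged channels."""
    calibrated = np.asarray(calibrated)
    model = np.asarray(model)
    good = np.ones(model.shape, dtype=bool) if flags is None else ~np.asarray(flags)
    good &= np.abs(model) > 0  # avoid dividing by an empty model channel
    return np.median(np.abs(calibrated[good] - model[good]) / np.abs(model[good]))

# toy spectra: data is the model with a 5% multiplicative error on every channel
model = 10.0 * np.exp(2j * np.pi * np.linspace(0, 1, 64))
data = 1.05 * model
resid = median_fractional_residual(data, model)
```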
# Load a whole day
```python
# Load relative difference and flagging info from smooth_cal gains
ant_flags_dict = {}
avg_rel_diff_ee_dict = {}
avg_rel_diff_nn_dict = {}
rel_diff_med_dict = {}
ants = set([])
for cal in smooth_cal_list:
hc = io.HERACal(cal)
_, flags, rel_diff, avg_rel_diff = hc.read()
ants |= set(flags.keys())
ant_flags_dict[cal] = {ant: np.all(flags[ant]) for ant in flags}
avg_rel_diff_ee_dict[cal] = avg_rel_diff['Jee']
avg_rel_diff_nn_dict[cal] = avg_rel_diff['Jnn']
rel_diff_med_dict[cal] = {ant: np.nanmedian(rel_diff[ant], axis=1) for ant in rel_diff}
all_flagged_dict = {ant: np.all([af[ant] for af in ant_flags_dict.values()]) for ant in ants}
avg_rel_diff_ee = np.vstack(np.array(list(avg_rel_diff_ee_dict.values())))
avg_rel_diff_nn = np.vstack(np.array(list(avg_rel_diff_nn_dict.values())))
```
Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
```python
# save middle-numbered ants with a minimal number of flags
ants_to_save = {}
ant_to_nflags_dict = {ant: np.sum([af[ant] for af in ant_flags_dict.values()]) for ant in ants}
for pol in ['Jee', 'Jnn']:
min_flags = np.min([ant_to_nflags_dict[ant] for ant in ants if ant[1] == pol])
ant_candidates = sorted([ant for ant in ants if ant_to_nflags_dict[ant] == min_flags and ant[1] == pol])
Nac = len(ant_candidates)
ants_to_save[pol] = ant_candidates[(Nac // 2 - 1):(Nac // 2 + 1)]
```
```python
# Load smooth_cal gains/flags
times_dict = {}
sc_gain_dict = {}
sc_flag_dict = {}
for cal in smooth_cal_list:
hc = io.HERACal(cal)
gains, flags, _, _ = hc.read()
times_dict[cal] = hc.times
sc_gain_dict[cal] = {ant: gains[ant] for pol in ants_to_save for ant in ants_to_save[pol]}
sc_flag_dict[cal] = {ant: flags[ant] for pol in ants_to_save for ant in ants_to_save[pol]}
# Load abscal gains/flags
ac_gain_dict = {}
ac_flag_dict = {}
for cal in abscal_list:
hc = io.HERACal(cal)
gains, flags, _, _ = hc.read()
ac_gain_dict[cal] = {ant: gains[ant] for pol in ants_to_save for ant in ants_to_save[pol]}
ac_flag_dict[cal] = {ant: flags[ant] for pol in ants_to_save for ant in ants_to_save[pol]}
# Organize gains/flags into grids
times = np.hstack(list(times_dict.values()))
lsts = 12 / np.pi * pyuvdata.utils.get_lst_for_time(times, *hd.telescope_location_lat_lon_alt_degrees)
sc_gains = {ant: np.vstack([sc_gain_dict[cal][ant] for cal in sc_gain_dict])
for pol in ants_to_save for ant in ants_to_save[pol]}
sc_flags = {ant: np.vstack([sc_flag_dict[cal][ant] for cal in sc_flag_dict])
for pol in ants_to_save for ant in ants_to_save[pol]}
flag_mask = np.all([f for f in sc_flags.values()], axis=0)
ac_gains = {ant: np.vstack([ac_gain_dict[cal][ant] for cal in ac_gain_dict])
for pol in ants_to_save for ant in ants_to_save[pol]}
ac_flags = {ant: np.vstack([ac_flag_dict[cal][ant] for cal in ac_flag_dict])
for pol in ants_to_save for ant in ants_to_save[pol]}
```
# Inspect a whole day
```python
# for overplotting blacklisted LSTs
my_cmap = cm.binary
my_cmap.set_under('k', alpha=0)
blacklist = np.ones_like(avg_rel_diff_ee) * np.hstack(time_blacklisted)[:, np.newaxis]
```
You are modifying the state of a globally registered colormap. In future versions, you will not be able to modify a registered colormap in-place. To remove this warning, you can make a copy of the colormap first. cmap = copy.copy(mpl.cm.get_cmap("binary"))
```python
# Pick vmax so the color scale does not saturate more than the top 1% of the smoothcal gains
vmax = np.max([np.percentile(np.abs(sc_gains[ants_to_save[pol][1]][~flag_mask]), 99) for pol in ['Jee', 'Jnn']])
# Plot smoothcal and abscal gain amplitude waterfalls for a single antenna
fig, axes = plt.subplots(4, 2, figsize=(16,16), gridspec_kw={'height_ratios': [1, .25, .25, 1]})
for ax, pol in zip(axes[0], ['Jee', 'Jnn']):
ant = ants_to_save[pol][1]
extent=[hd.freqs[0]/1e6, hd.freqs[-1]/1e6, times[-1], times[0]]
im = ax.imshow(np.abs(sc_gains[ant]) / ~sc_flags[ant], aspect='auto', cmap='inferno',
interpolation='nearest', vmin=0, vmax=vmax, extent=extent)
ax.imshow(blacklist, aspect='auto', cmap=my_cmap, interpolation=None, clim=[0.9, 1], alpha=.25, extent=extent)
ax.set_title(f'Smoothcal Gain Amplitude of Antenna {ant[0]}: {pol[1:]}-polarized' )
ax.set_xlabel('Frequency (MHz)')
ax.set_ylabel('LST (Hours)')
ax.set_xlim([hd.freqs[0]/1e6, hd.freqs[-1]/1e6])
ax.set_yticklabels(np.around(lsts[[min(max(np.searchsorted(times, t), 0), len(times) - 1) for t in ax.get_yticks()]], 2))
plt.colorbar(im, ax=ax, orientation='horizontal', pad=.1)
# Now plot median gain spectra
for ax, pol in zip(axes[1], ['Jee', 'Jnn']):
ant = ants_to_save[pol][1]
# plot abscal
to_med = deepcopy(np.abs(ac_gains[ant]))
to_med[sc_flags[ant]] = np.nan
if not np.all(np.hstack(time_blacklisted)):
ax.plot(hd.freqs / 1e6, np.nanmedian(to_med[~np.hstack(time_blacklisted), :], axis=0), 'r.', label='Abscal')
# plot smooth_cal
to_med = deepcopy(np.abs(sc_gains[ant]))
to_med[sc_flags[ant]] = np.nan
if not np.all(np.hstack(time_blacklisted)):
ax.plot(hd.freqs / 1e6, np.nanmedian(to_med[~np.hstack(time_blacklisted), :], axis=0), 'k.', ms=2, label='Smoothcal')
ax.set_ylim([0, vmax])
ax.set_xlim([hd.freqs[0]/1e6, hd.freqs[-1]/1e6])
ax.set_xlabel('Frequency (MHz)')
ax.set_ylabel('|g| (unitless)')
ax.set_title(f'Median Non-Blacklisted or Flagged Gain Amplitude Spectrum of Antenna {ant[0]}: {pol[1:]}-polarized')
ax.legend()
# Now plot median gain time series
for ax, pol in zip(axes[2], ['Jee', 'Jnn']):
ant = ants_to_save[pol][1]
to_med = deepcopy(np.abs(ac_gains[ant]))
to_med[:, np.all(sc_flags[ant], axis=0)] = np.nan
# plot abscal
if not np.all(np.hstack(time_blacklisted)):
ax.plot(lsts[~np.hstack(time_blacklisted)],
np.nanmedian(to_med[~np.hstack(time_blacklisted), :], axis=1),
'b.', label='Abscal: Not Blacklisted LSTs')
if np.any(np.hstack(time_blacklisted)):
ax.plot(lsts[np.hstack(time_blacklisted)],
np.nanmedian(to_med[np.hstack(time_blacklisted), :], axis=1),
'r.', label='Abscal: Blacklisted LSTs')
# plot smooth_cal
to_med = deepcopy(np.abs(sc_gains[ant]))
to_med[:, np.all(sc_flags[ant], axis=0)] = np.nan
ax.plot(lsts, np.nanmedian(to_med, axis=1),'k.', ms=2, label='Smoothcal')
ax.set_ylim([0, vmax])
ax.set_xlabel('LST (hours)')
ax.set_ylabel('|g| (unitless)')
ax.set_title(f'Median Over Unflagged Channels Gain Amplitude Time-Series of Antenna {ant[0]}: {pol[1:]}-polarized')
ax.legend()
# Now plot the flagged abscal waterfall
for ax, pol in zip(axes[3], ['Jee', 'Jnn']):
ant = ants_to_save[pol][1]
extent=[hd.freqs[0]/1e6, hd.freqs[-1]/1e6, times[-1], times[0]]
im = ax.imshow(np.abs(ac_gains[ant]) / ~sc_flags[ant], aspect='auto', cmap='inferno',
interpolation='nearest', vmin=0, vmax=vmax, extent=extent)
ax.imshow(blacklist, aspect='auto', cmap=my_cmap, interpolation=None, clim=[0.9, 1], alpha=.25, extent=extent)
ax.set_title(f'Flagged Abscal Gain Amplitude of Antenna {ant[0]}: {pol[1:]}-polarized' )
ax.set_xlabel('Frequency (MHz)')
ax.set_ylabel('LST (Hours)')
ax.set_xlim([hd.freqs[0]/1e6, hd.freqs[-1]/1e6])
ax.set_yticklabels(np.around(lsts[[min(max(np.searchsorted(times, t), 0), len(times) - 1) for t in ax.get_yticks()]], 2))
plt.colorbar(im, ax=ax, orientation='horizontal', pad=.1)
plt.tight_layout()
```
divide by zero encountered in true_divide
FixedFormatter should only be used together with FixedLocator
All-NaN slice encountered
All-NaN slice encountered
All-NaN slice encountered
All-NaN slice encountered
divide by zero encountered in true_divide
FixedFormatter should only be used together with FixedLocator

### Figure 3 Example Smoothing of Gain Amplitudes
Smoothcal (top row) and Abscal (bottom row) gain amplitudes for an example antenna. In the waterfalls, grayed out regions are "blacklisted," meaning they are not flagged but they are given zero weight when performing calibration smoothing. We also plot median non-blacklisted amplitudes as a function of frequency (second row) and the median amplitude as a function of time (third row) for both abscal and smoothcal.
#### OBSERVER CHECKLIST:
* Check that the smoothcal solution matches the abscal solution reasonably well in the non-blacklisted regions.
* Check to see that the overall bandpass looks reasonable
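A crude numerical proxy for "the bandpass looks reasonable" is the channel-to-channel fractional scatter of the gain amplitude, which should stay small for a smooth bandpass. This is an illustrative sketch, not a pipeline metric:

```python
import numpy as np

def bandpass_roughness(gain_amp):
    """RMS channel-to-channel fractional step of a gain amplitude spectrum.

    Small values mean a smooth bandpass; spikes or RFI leakage inflate it.
    """
    gain_amp = np.asarray(gain_amp, dtype=float)
    steps = np.diff(gain_amp) / gain_amp[:-1]
    return np.sqrt(np.mean(steps**2))

smooth = np.full(100, 5.0)   # perfectly flat bandpass
spiky = smooth.copy()
spiky[50] = 50.0             # one RFI-like spike
```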
```python
# Plot smoothcal and abscal gain phase waterfalls for an antenna pair
fig, axes = plt.subplots(4, 2, figsize=(16,16), gridspec_kw={'height_ratios': [1, .25, .25, 1]})
for ax, pol in zip(axes[0], ['Jee', 'Jnn']):
ant0, ant1 = ants_to_save[pol]
extent=[hd.freqs[0]/1e6, hd.freqs[-1]/1e6, times[-1], times[0]]
im = ax.imshow(np.angle(sc_gains[ant0] / sc_gains[ant1]) / ~sc_flags[ant0], aspect='auto', cmap='inferno',
interpolation='nearest', vmin=-np.pi, vmax=np.pi, extent=extent)
ax.imshow(blacklist, aspect='auto', cmap=my_cmap, interpolation=None, clim=[0.9, 1], alpha=.25, extent=extent)
ax.set_title(f'Smoothcal Gain Phase of Ant {ant0[0]} / Ant {ant1[0]}: {pol[1:]}-polarized')
ax.set_xlabel('Frequency (MHz)')
ax.set_ylabel('LST (Hours)')
ax.set_xlim([hd.freqs[0]/1e6, hd.freqs[-1]/1e6])
ax.set_yticklabels(np.around(lsts[[min(max(np.searchsorted(times, t), 0), len(times) - 1) for t in ax.get_yticks()]], 2))
plt.colorbar(im, ax=ax, orientation='horizontal', pad=.1)
# Now plot median gain spectra
for ax, pol in zip(axes[1], ['Jee', 'Jnn']):
ant0, ant1 = ants_to_save[pol]
# plot abscal
to_med = deepcopy(ac_gains[ant0] / ac_gains[ant1])
to_med[sc_flags[ant0]] = np.nan + 1.0j * np.nan
if not np.all(np.hstack(time_blacklisted)):
med = 1.0j * np.nanmedian(to_med[~np.hstack(time_blacklisted), :].imag, axis=0)
med += np.nanmedian(to_med[~np.hstack(time_blacklisted), :].real, axis=0)
ax.plot(hd.freqs / 1e6, np.angle(med), 'r.', label='Abscal')
# plot smooth_cal
to_med = deepcopy(sc_gains[ant0] / sc_gains[ant1])
to_med[sc_flags[ant0]] = np.nan + 1.0j * np.nan
if not np.all(np.hstack(time_blacklisted)):
med = 1.0j * np.nanmedian(to_med[~np.hstack(time_blacklisted), :].imag, axis=0)
med += np.nanmedian(to_med[~np.hstack(time_blacklisted), :].real, axis=0)
ax.plot(hd.freqs / 1e6, np.angle(med), 'k.', ms=2, label='Smoothcal')
ax.set_ylim([-np.pi, np.pi])
ax.set_xlabel('Frequency (MHz)')
ax.set_ylabel(f'Phase of g$_{ant0[0]}$ / g$_{ant1[0]}$')
ax.set_title(f'Median Non-Blacklisted or Flagged Gain Phase Spectrum of Ant {ant0[0]} / Ant {ant1[0]}: {pol[1:]}-polarized')
ax.legend()
# Now plot median gain time series
for ax, pol in zip(axes[2], ['Jee', 'Jnn']):
    ant0, ant1 = ants_to_save[pol]
    # plot abscal
    to_med = deepcopy(ac_gains[ant0] / ac_gains[ant1])
    to_med[:, np.all(sc_flags[ant0], axis=0)] = np.nan + 1.0j * np.nan
if not np.all(np.hstack(time_blacklisted)):
med = 1.0j * np.nanmedian(to_med[~np.hstack(time_blacklisted), :].imag, axis=1)
med += np.nanmedian(to_med[~np.hstack(time_blacklisted), :].real, axis=1)
ax.plot(lsts[~np.hstack(time_blacklisted)], np.angle(med), 'b.', label='Abscal: Not Blacklisted LSTs')
if np.any(np.hstack(time_blacklisted)):
med = 1.0j * np.nanmedian(to_med[np.hstack(time_blacklisted), :].imag, axis=1)
med += np.nanmedian(to_med[np.hstack(time_blacklisted), :].real, axis=1)
ax.plot(lsts[np.hstack(time_blacklisted)], np.angle(med), 'r.', label='Abscal: Blacklisted LSTs')
# plot smooth_cal
to_med = deepcopy(sc_gains[ant0] / sc_gains[ant1])
    to_med[:, np.all(sc_flags[ant0], axis=0)] = np.nan + 1.0j * np.nan
med = 1.0j * np.nanmedian(to_med.imag, axis=1) + np.nanmedian(to_med.real, axis=1)
ax.plot(lsts, np.angle(med), 'k.', ms=2, label='Smoothcal')
ax.set_ylim([-np.pi, np.pi])
ax.set_xlabel('LST (hours)')
ax.set_ylabel(f'Phase of g$_{ant0[0]}$ / g$_{ant1[0]}$')
    ax.set_title(f'Median Over Unflagged Channels Gain Phase Time-Series of Ant {ant0[0]} / Ant {ant1[0]}: {pol[1:]}-polarized')
ax.legend()
# Now plot the flagged abscal waterfall
for ax, pol in zip(axes[3], ['Jee', 'Jnn']):
ant0, ant1 = ants_to_save[pol]
extent=[hd.freqs[0]/1e6, hd.freqs[-1]/1e6, times[-1], times[0]]
    im = ax.imshow(np.angle(ac_gains[ant0] / ac_gains[ant1]) / ~sc_flags[ant0], aspect='auto', cmap='inferno',
interpolation='nearest', vmin=-np.pi, vmax=np.pi, extent=extent)
ax.imshow(blacklist, aspect='auto', cmap=my_cmap, interpolation=None, clim=[0.9, 1], alpha=.25, extent=extent)
ax.set_title(f'Flagged Abscal Gain Phase of Ant {ant0[0]} / Ant {ant1[0]}: {pol[1:]}-polarized')
ax.set_xlabel('Frequency (MHz)')
ax.set_ylabel('LST (Hours)')
ax.set_xlim([hd.freqs[0]/1e6, hd.freqs[-1]/1e6])
ax.set_yticklabels(np.around(lsts[[min(max(np.searchsorted(times, t), 0), len(times) - 1) for t in ax.get_yticks()]], 2))
plt.colorbar(im, ax=ax, orientation='horizontal', pad=.1)
plt.tight_layout()
```
divide by zero encountered in true_divide
FixedFormatter should only be used together with FixedLocator
All-NaN slice encountered
All-NaN slice encountered
All-NaN slice encountered
All-NaN slice encountered
divide by zero encountered in true_divide
invalid value encountered in true_divide
FixedFormatter should only be used together with FixedLocator

### Figure 4 Example Smoothing of Gain Phases
Smoothcal (top row) and Abscal (bottom row) gain phases for an example antenna pair (the ratio of two antennas' gains). In the waterfalls, grayed out regions are "blacklisted," meaning they are not flagged but they are given zero weight when performing calibration smoothing. We also plot median non-blacklisted phases as a function of frequency (second row) and the median phases as a function of time (third row) for both abscal and smoothcal.
#### OBSERVER CHECKLIST:
* Check that the smoothcal solution matches the abscal solution reasonably well in the non-blacklisted regions.
* Check to see that the final gain solution is reasonably approximated by a single time-independent delay (linear phase ramp in row 2).
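The second checklist item — that the phase is well approximated by a single time-independent delay — can be checked numerically by fitting a line to the unwrapped phase versus frequency. A minimal sketch, assuming the phase wraps slowly enough per channel for `np.unwrap` to succeed:

```python
import numpy as np

def fit_delay(freqs_hz, gain_ratio):
    """Fit a single delay (seconds) to the unwrapped phase of a complex gain ratio.

    A pure delay tau produces phase = 2*pi*freq*tau, i.e. a linear phase ramp.
    """
    phase = np.unwrap(np.angle(gain_ratio))
    slope = np.polyfit(freqs_hz, phase, 1)[0]  # radians per Hz
    return slope / (2 * np.pi)

# toy gain ratio with a 50 ns delay across a 100-200 MHz band
freqs = np.linspace(100e6, 200e6, 1024)
ratio = np.exp(2j * np.pi * freqs * 50e-9)
tau = fit_delay(freqs, ratio)
```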
```python
fig, axes = plt.subplots(1, 2, figsize=(20,12))
for ax, rd, t in zip(axes, [avg_rel_diff_ee, avg_rel_diff_nn], ['ee-polarized', 'nn-polarized']):
extent=[hd.freqs[0]/1e6, hd.freqs[-1]/1e6, times[-1], times[0]]
im = ax.imshow(rd / ~sc_flags[ant0], aspect='auto', vmin=0, cmap='inferno', vmax=.2, interpolation='nearest', extent=extent)
ax.imshow(blacklist, aspect='auto',
cmap=my_cmap, interpolation=None, clim=[0.9, 1], alpha=.25, extent=extent)
ax.set_title('Relative Difference Between Smoothcal and Abscal: ' + t)
ax.set_xlabel('Frequency (MHz)')
ax.set_ylabel('LST (Hours)')
ax.set_yticklabels(np.around(lsts[[min(max(np.searchsorted(times, t), 0), len(times) - 1) for t in ax.get_yticks()]], 2))
plt.colorbar(im, ax=ax, label='$|g_{smooth} - g_{abs}| / |g_{abs}|$ (unitless)')
```
invalid value encountered in true_divide
FixedFormatter should only be used together with FixedLocator

### Figure 5: Relative difference between Abscal and Smoothcal
Where omnical calfits files store $\chi^2$ per antenna, smooth_cal calfits files store the relative difference between Abscal and Smoothcal gains. This difference is done before taking the absolute value, so this metric is sensitive both to phase errors and amplitude errors.
#### OBSERVER CHECKLIST:
* Look for regions of high relative difference that are not blacklisted. This would indicate a problem with smoothing.
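The checklist item above can be automated: collapse the relative-difference waterfall over frequency and flag non-blacklisted times whose median exceeds a threshold. A sketch (the 0.2 threshold echoes the color scale of Figure 5 but is otherwise arbitrary):

```python
import numpy as np

def suspicious_times(avg_rel_diff, blacklisted, threshold=0.2):
    """Indices of non-blacklisted times with high median relative difference.

    avg_rel_diff: (Ntimes, Nfreqs) waterfall of |g_smooth - g_abs| / |g_abs|.
    blacklisted: (Ntimes,) boolean array of LST-blacklisted integrations.
    """
    med = np.nanmedian(avg_rel_diff, axis=1)
    return np.where((med > threshold) & ~np.asarray(blacklisted))[0]

# toy waterfall: time 2 is bad but blacklisted, time 4 is bad and not blacklisted
rel_diff = np.full((6, 8), 0.01)
rel_diff[2] = rel_diff[4] = 0.5
blacklist = np.array([False, False, True, False, False, False])
bad_times = suspicious_times(rel_diff, blacklist)
```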
# Metadata
```python
print(redcal.version.history_string())
```
------------
This file was produced by the function <module>() in <ipython-input-1-c6de44361328> using:
git_branch: master
git_description: v3.0-733-gd2dd8ccf
git_hash: d2dd8ccf3fe43d5e5eb6a4c28ceaf4a6e3d1fcb7
git_origin: git@github.com:HERA-Team/hera_cal.git
version: 3.0
------------
```python
```
{
"filename": "test_digamma.py",
"repo_name": "scipy/scipy",
"repo_path": "scipy_extracted/scipy-main/scipy/special/tests/test_digamma.py",
"type": "Python"
}
import numpy as np
from numpy import pi, log, sqrt
from numpy.testing import assert_, assert_equal
from scipy.special._testutils import FuncData
import scipy.special as sc
# Euler-Mascheroni constant
euler = 0.57721566490153286
def test_consistency():
# Make sure the implementation of digamma for real arguments
# agrees with the implementation of digamma for complex arguments.
# It's all poles after -1e16
x = np.r_[-np.logspace(15, -30, 200), np.logspace(-30, 300, 200)]
dataset = np.vstack((x + 0j, sc.digamma(x))).T
FuncData(sc.digamma, dataset, 0, 1, rtol=5e-14, nan_ok=True).check()
def test_special_values():
# Test special values from Gauss's digamma theorem. See
#
# https://en.wikipedia.org/wiki/Digamma_function
dataset = [
(1, -euler),
(0.5, -2*log(2) - euler),
(1/3, -pi/(2*sqrt(3)) - 3*log(3)/2 - euler),
(1/4, -pi/2 - 3*log(2) - euler),
(1/6, -pi*sqrt(3)/2 - 2*log(2) - 3*log(3)/2 - euler),
(1/8,
-pi/2 - 4*log(2) - (pi + log(2 + sqrt(2)) - log(2 - sqrt(2)))/sqrt(2) - euler)
]
dataset = np.asarray(dataset)
FuncData(sc.digamma, dataset, 0, 1, rtol=1e-14).check()
def test_nonfinite():
pts = [0.0, -0.0, np.inf]
std = [-np.inf, np.inf, np.inf]
assert_equal(sc.digamma(pts), std)
assert_(all(np.isnan(sc.digamma([-np.inf, -1]))))
{
"filename": "_familysrc.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/image/hoverlabel/font/_familysrc.py",
"type": "Python"
}
import _plotly_utils.basevalidators
class FamilysrcValidator(_plotly_utils.basevalidators.SrcValidator):
def __init__(
self, plotly_name="familysrc", parent_name="image.hoverlabel.font", **kwargs
):
super(FamilysrcValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "none"),
**kwargs,
)
{
"filename": "ReflectingBoundaryInst.cc.py",
"repo_name": "LLNL/spheral",
"repo_path": "spheral_extracted/spheral-main/src/Boundary/ReflectingBoundaryInst.cc.py",
"type": "Python"
}
text = """
//------------------------------------------------------------------------------
// Explicit instantiation.
//------------------------------------------------------------------------------
#include "Boundary/ReflectingBoundary.cc"
#include "Geometry/Dimension.hh"
namespace Spheral {
template class ReflectingBoundary< Dim< %(ndim)s > >;
}
"""
{
"filename": "_width.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/validators/scatter/line/_width.py",
"type": "Python"
}
import _plotly_utils.basevalidators
class WidthValidator(_plotly_utils.basevalidators.NumberValidator):
def __init__(self, plotly_name="width", parent_name="scatter.line", **kwargs):
super(WidthValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
anim=kwargs.pop("anim", True),
edit_type=kwargs.pop("edit_type", "style"),
min=kwargs.pop("min", 0),
role=kwargs.pop("role", "style"),
**kwargs
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py2@plotly@validators@scatter@line@_width.py@.PATH_END.py
|
{
"filename": "index.md",
"repo_name": "youngjookim/sdr",
"repo_path": "sdr_extracted/sdr-master/SITE/themes/hugo-theme-console/exampleSite/content/photos/chicago-us/index.md",
"type": "Markdown"
}
|
+++
image = "chicago-us.jpg"
date = "2020-01-21"
title = "Chicago, US"
type = "gallery"
+++
[Chicago](https://en.wikipedia.org/w/index.php?title=Chicago&oldid=953376675), officially the City of Chicago, is the most populous city in the U.S. state of Illinois, and the third-most-populous city in the United States. With an estimated population of 2,705,994 (2018), it is also the most populous city in the Midwestern United States. Chicago is the county seat of Cook County, the second-most-populous county in the US, with a small portion of the northwest side of the city extending into DuPage County near O'Hare Airport. Chicago is the principal city of the Chicago metropolitan area, often referred to as Chicagoland. At nearly 10 million people, the metropolitan area is the third most populous in the United States.
{
"filename": "bhm_rm.py",
"repo_name": "sdss/target_selection",
"repo_path": "target_selection_extracted/target_selection-main/python/target_selection/cartons/bhm_rm.py",
"type": "Python"
}
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# @Author: Tom Dwelly
# @Date: 2020-03-31
# @Filename: bhm_rm.py
# @License: BSD 3-clause (http://www.opensource.org/licenses/BSD-3-Clause)
# derived from guide.py
# isort: skip_file
import peewee
from peewee import JOIN
from peewee import fn
from target_selection.cartons.base import BaseCarton
from sdssdb.peewee.sdss5db.catalogdb import (
Catalog,
BHM_RM_v1_3,
CatalogToLegacy_Survey_DR8,
CatalogToLegacy_Survey_DR10,
CatalogToGaia_DR2,
CatalogToGaia_DR3,
CatalogToPanstarrs1,
)
# This module provides the following BHM cartons in v1.0:
# bhm_rm_core
# bhm_rm_known_spec
# bhm_rm_var
# bhm_rm_ancillary
# bhm_rm_xrayqso
class BhmRmBaseCarton(BaseCarton):
"""
This class provides common setting and the masking routines used by all RM cartons
"""
name = "bhm_rm_base"
base_name = "bhm_rm_base"
category = "science"
mapper = "BHM"
program = "bhm_rm"
instrument = "BOSS"
tile = False
priority = None
inertial = True
alias_c = None
alias_t = None
alias_tw = None
can_offset = False
def get_fieldlist(self):
"""Read the RM field centres from the yaml"""
fieldlist = []
base_parameters = self.config["parameters"].get(self.base_name, None)
if base_parameters:
fieldlist = base_parameters["fieldlist"]
return fieldlist
def append_spatial_query(self, query, fieldlist):
"""extend the peewee query using a list of field centres"""
if fieldlist is None:
return query
elif len(fieldlist) == 0:
return query
q = False
for f in fieldlist:
q = q | peewee.fn.q3c_radial_query(
self.alias_c.ra, self.alias_c.dec, f["racen"], f["deccen"], f["radius"]
)
return query.where(q)
def build_query(self, version_id, query_region=None):
c = Catalog.alias()
c2ls8 = CatalogToLegacy_Survey_DR8.alias()
c2ls10 = CatalogToLegacy_Survey_DR10.alias()
c2g3 = CatalogToGaia_DR3.alias()
c2g2 = CatalogToGaia_DR2.alias()
c2ps = CatalogToPanstarrs1.alias()
t = BHM_RM_v1_3.alias()
self.alias_c = c
self.alias_t = t
fieldlist = self.get_fieldlist()
# fold in tiers of magnitude-based priority
priority_mag_step = 0.5
priority_mag_bright = 17.0
priority_mag_faint = 22.0
priority_mag_bright_known_spec = 20.5
priority_floor = self.parameters.get("priority", 10000)
priority1 = peewee.Case(
None,
(
((t.mag_i <= priority_mag_bright), priority_floor + 0),
(
(
(self.name == "bhm_rm_known_spec")
& ~(t.rm_field_name.contains("SDSS-RM"))
& (t.mag_i <= priority_mag_bright_known_spec)
),
priority_floor + 0,
),
(
(t.mag_i <= priority_mag_faint),
priority_floor
+ 5
* (
1
+ peewee.fn.floor(
(t.mag_i - priority_mag_bright) / priority_mag_step
).cast("int")
),
),
((t.mag_i > priority_mag_faint), priority_floor + 95),
),
None,
)
# combine the priorities
priority = priority1
value = peewee.Value(self.parameters.get("value", 1.0)).cast("float")
instrument = peewee.Value(self.instrument)
inertial = peewee.Value(self.inertial).cast("bool")
# This is the scheme used in v0
cadence_v0 = peewee.Case(
None, ((t.rm_field_name.contains("S-CVZ"), "bhm_rm_lite5_100x8"),), "bhm_rm_174x8"
)
# this gives the new names for the same cadences assumed in v0
cadence_v0p5 = peewee.Case(
None, ((t.rm_field_name.contains("S-CVZ"), "dark_100x8"),), "dark_174x8"
)
# the following will replace old generic cadences when relevant table has been populated
# TODO - replace when correct cadences are loaded
cadence_v1p0 = peewee.Case(
None,
(
(t.rm_field_name.contains("SDSS-RM"), "bhm_rm_sdss-rm"),
(t.rm_field_name.contains("COSMOS"), "bhm_rm_cosmos"),
(t.rm_field_name.contains("XMM-LSS"), "bhm_rm_xmm-lss"),
(t.rm_field_name.contains("S-CVZ"), "bhm_rm_cvz-s"),
(t.rm_field_name.contains("CDFS"), "bhm_rm_cdfs"),
(t.rm_field_name.contains("ELIAS-S1"), "bhm_rm_elias-s1"),
),
"dark_174x8",
)
opt_prov = peewee.Value("psfmag")
route_taken = peewee.Case(
None,
(
(c2ls10.catalogid.is_null(False), "lsdr10"),
(c2g3.catalogid.is_null(False), "gdr3"),
(c2ls8.catalogid.is_null(False), "lsdr8"),
(c2g2.catalogid.is_null(False), "gdr2"),
(c2ps.catalogid.is_null(False), "ps1dr2"),
),
"unknown",
)
query = (
t.select(
c.catalogid,
c2ls10.catalogid.alias("c2ls10_catalogid"), # extra
c2g3.catalogid.alias("c2g3_catalogid"), # extra
c2ls8.catalogid.alias("c2ls8_catalogid"), # extra
c2g2.catalogid.alias("c2g2_catalogid"), # extra
c2ps.catalogid.alias("c2ps_catalogid"), # extra
c.ra, # extra
c.dec, # extra
t.rm_field_name.alias("rm_field_name"), # extra
t.pkey.alias("rm_pkey"), # extra
instrument.alias("instrument"),
priority.alias("priority"),
value.alias("value"),
cadence_v0p5.alias("cadence"),
cadence_v0.alias("cadence_v0"), # extra
cadence_v0p5.alias("cadence_v0p5"), # extra
cadence_v1p0.alias("cadence_v1p0_maybe"), # extra
t.mag_g.alias("g"),
t.mag_r.alias("r"),
t.mag_i.alias("i"),
t.mag_z.alias("z"),
t.gaia_g.alias("gaia_g"),
t.gaia_bp.alias("bp"),
t.gaia_rp.alias("rp"),
opt_prov.alias("optical_prov"),
inertial.alias("inertial"),
c2ls10.best.alias("c2ls10_best"), # extra
c2g3.best.alias("c2g3_best"), # extra
c2ls8.best.alias("c2ls8_best"), # extra
c2g2.best.alias("c2g2_best"), # extra
c2ps.best.alias("c2ps_best"), # extra
t.catalogidv05.alias("rm_catalogidv05"), # extra
t.ra.alias("rm_ra"), # extra
t.dec.alias("rm_dec"), # extra
route_taken.alias("route_taken"), # extra
)
.join(c2ls10, JOIN.LEFT_OUTER, on=(c2ls10.target_id == t.ls_id_dr10))
.join(c2g3, JOIN.LEFT_OUTER, on=(c2g3.target_id == t.gaia_dr3_source_id))
.join(c2ls8, JOIN.LEFT_OUTER, on=(c2ls8.target_id == t.ls_id_dr8))
.join(c2g2, JOIN.LEFT_OUTER, on=(c2g2.target_id == t.gaia_dr2_source_id))
.join(c2ps, JOIN.LEFT_OUTER, on=(c2ps.target_id == t.panstarrs1_catid_objid))
.join(
c,
on=(
fn.coalesce(
c2ls10.catalogid,
c2g3.catalogid,
c2ls8.catalogid,
c2g2.catalogid,
c2ps.catalogid,
)
== c.catalogid
),
)
.where(
c.version_id == version_id,
(
((route_taken == "lsdr10") & (c2ls10.version_id == version_id))
| ((route_taken == "gdr3") & (c2g3.version_id == version_id))
| ((route_taken == "lsdr8") & (c2ls8.version_id == version_id))
| ((route_taken == "gdr2") & (c2g2.version_id == version_id))
| ((route_taken == "ps1dr2") & (c2ps.version_id == version_id))
),
# the method below was throwing out ~20 cases where the lsdr8 ls_id
# was unexpectedly unmatched to a catalogid in the v1 crossmatch
# fn.coalesce(c2ls10.version_id, version_id) == version_id,
# fn.coalesce(c2g3.version_id, version_id) == version_id,
# fn.coalesce(c2ls8.version_id, version_id) == version_id,
# fn.coalesce(c2g2.version_id, version_id) == version_id,
# fn.coalesce(c2ps.version_id, version_id) == version_id,
# ## fn.coalesce(c2ls10.best,True) >> True # TODO check if this is dropping RM
# ## # targets like it does for AQMES
)
.where(
(
(t.mag_i >= self.parameters["mag_i_min"])
& (t.mag_i < self.parameters["mag_i_max"])
)
| (
# S-CVZ targets often have only Gaia photom
(t.rm_field_name.contains("S-CVZ"))
& (t.gaia_g >= self.parameters["mag_g_min_cvz_s"])
& (t.gaia_g < self.parameters["mag_g_max_cvz_s"])
)
)
.where(
# Reject any objects where the t.rm_unsuitable flag is set
t.rm_unsuitable >> False,
)
.distinct([t.pkey]) # avoid duplicates - trust the RM parent sample
# - only needed if NOT using c2t.best = True condition
)
query = self.append_spatial_query(query, fieldlist)
return query
class BhmRmCoreCarton(BhmRmBaseCarton):
name = "bhm_rm_core"
def build_query(self, version_id, query_region=None):
query = super().build_query(version_id, query_region)
t = self.alias_t
query = query.where(
(t.rm_core >> True),
~(t.rm_field_name.contains("SDSS-RM")), # ignore this carton in the SDSS-RM field
)
return query
class BhmRmKnownSpecCarton(BhmRmBaseCarton):
"""
bhm_rm_known_spec: select all spectroscopically confirmed QSOs with extragalactic redshifts
"""
name = "bhm_rm_known_spec"
def build_query(self, version_id, query_region=None):
query = super().build_query(version_id, query_region)
t = self.alias_t
query = query.where(
((t.rm_known_spec >> True),),
(
~(t.rm_field_name.contains("SDSS-RM"))
|
# include extra constraints on SDSS-RM targets
(t.mag_i < self.parameters["mag_i_max_sdss_rm"])
),
(
~(t.rm_field_name.contains("COSMOS"))
|
# include extra constraints on COSMOS targets
(t.mag_i < self.parameters["mag_i_max_cosmos"])
),
(
~(t.rm_field_name.contains("XMM-LSS"))
|
# include extra constraints on XMM-LSS targets
(t.mag_i < self.parameters["mag_i_max_xmm_lss"])
),
)
return query
class BhmRmVarCarton(BhmRmBaseCarton):
"""bhm_rm_var: selected based on g-band variability > 0.05 mag
and bright enough to be detected by Gaia (G<~21)
"""
name = "bhm_rm_var"
def build_query(self, version_id, query_region=None):
query = super().build_query(version_id, query_region)
t = self.alias_t
query = query.where(
(t.rm_var >> True),
~(t.rm_field_name.contains("SDSS-RM")), # ignore this carton in the SDSS-RM field
)
return query
class BhmRmAncillaryCarton(BhmRmBaseCarton):
"""
bhm_rm_ancillary: from the Gaia_unWISE AGN catalog or the XDQSO catalog,
but requiring no proper motion/parallax detection from Gaia DR2
"""
name = "bhm_rm_ancillary"
def build_query(self, version_id, query_region=None):
query = super().build_query(version_id, query_region)
t = self.alias_t
query = query.where(
(t.rm_ancillary >> True),
~(t.rm_field_name.contains("SDSS-RM")), # ignore this carton in the SDSS-RM field
)
return query
class BhmRmXrayQsoCarton(BhmRmBaseCarton):
"""
bhm_rm_xrayqso:
selected based on X-ray and SED
"""
name = "bhm_rm_xrayqso"
def build_query(self, version_id, query_region=None):
query = super().build_query(version_id, query_region)
t = self.alias_t
query = query.where(
(t.rm_xrayqso > 0),
~(t.rm_field_name.contains("SDSS-RM")), # ignore this carton in the SDSS-RM field
)
return query
|
sdssREPO_NAMEtarget_selectionPATH_START.@target_selection_extracted@target_selection-main@python@target_selection@cartons@bhm_rm.py@.PATH_END.py
|
{
"filename": "__init__.py",
"repo_name": "simonsobs/nextline-rdb",
"repo_path": "nextline-rdb_extracted/nextline-rdb-main/src/nextline_rdb/schema/nodes/__init__.py",
"type": "Python"
}
|
__all__ = [
'PromptNode',
'RunNode',
'StdoutNode',
'TraceCallNode',
'TraceNode',
]
from .prompt_node import PromptNode
from .run_node import RunNode
from .stdout_node import StdoutNode
from .trace_call_node import TraceCallNode
from .trace_node import TraceNode
|
simonsobsREPO_NAMEnextline-rdbPATH_START.@nextline-rdb_extracted@nextline-rdb-main@src@nextline_rdb@schema@nodes@__init__.py@.PATH_END.py
|
{
"filename": "_ticklabelstep.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/mesh3d/colorbar/_ticklabelstep.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class TicklabelstepValidator(_plotly_utils.basevalidators.IntegerValidator):
def __init__(
self, plotly_name="ticklabelstep", parent_name="mesh3d.colorbar", **kwargs
):
super(TicklabelstepValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "colorbars"),
min=kwargs.pop("min", 1),
**kwargs,
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@mesh3d@colorbar@_ticklabelstep.py@.PATH_END.py
|
{
"filename": "data_generator.py",
"repo_name": "SKA-INAF/sclassifier",
"repo_path": "sclassifier_extracted/sclassifier-master/sclassifier/data_generator.py",
"type": "Python"
}
|
#!/usr/bin/env python
from __future__ import print_function
##################################################
### MODULE IMPORT
##################################################
## STANDARD MODULES
import os
import sys
import subprocess
import string
import time
import signal
from threading import Thread
import datetime
import numpy as np
import random
import math
import logging
from collections import Counter
from itertools import chain
import json
## KERAS MODULES
from tensorflow.keras.utils import to_categorical
## SKLEARN MODULES
from sklearn.preprocessing import MultiLabelBinarizer
## ASTROPY MODULES
from astropy.io import ascii
from astropy.stats import sigma_clipped_stats
from sclassifier.data_loader import SourceData
##############################
## GLOBAL VARS
##############################
from sclassifier import logger
##############################
## DATA GENERATOR
##############################
class DataGenerator(object):
""" Read data from disk and provide it to the network
Arguments:
- datalist: Filelist (json) with input data
"""
def __init__(self, filename, preprocessor=None):
""" Return a DataLoader object """
# - Input data
self.datalistfile= filename
self.datalist= {}
self.datasize= 0
self.classids= []
self.classfract_map= {}
self.labels= []
self.snames= []
self.nchannels= 0
# - Pre-processor
self.preprocessor= preprocessor
#############################
## DISABLE AUGMENTATION
#############################
def disable_augmentation(self):
""" Disable augmentation """
if self.preprocessor is None:
logger.warn("Pre-processor is None, nothing will be done...")
return -1
self.preprocessor.disable_augmentation()
return 0
#############################
## READ DATALIST
#############################
def read_datalist(self):
""" Read json filelist """
# - Read data list
self.datalist= {}
try:
with open(self.datalistfile) as fp:
self.datalist= json.load(fp)
except Exception as e:
logger.error("Failed to read data filelist %s!" % self.datalistfile)
return -1
# - Check number of channels per image
nchannels_set= set([len(item["filepaths"]) for item in self.datalist["data"]])
if len(nchannels_set)!=1:
logger.warn("Number of channels in each object instance is different (len(nchannels_set)=%d!=1)!" % (len(nchannels_set)))
print(nchannels_set)
return -1
self.nchannels= list(nchannels_set)[0]
# - Inspect data (store number of instances per class, etc)
self.datasize= len(self.datalist["data"])
self.labels= [item["label"] for item in self.datalist["data"]]
self.snames= [item["sname"] for item in self.datalist["data"]]
self.classids= [item["id"] for item in self.datalist["data"]]
if not self.classids:
logger.error("Read classids is empty, check input data!")
return -1
if isinstance(self.classids[0], list): # multilabel (classids is a 2D list, flatten it)
self.classfract_map= dict(Counter( list(chain.from_iterable(self.classids)) ).items())
else:
self.classfract_map= dict(Counter(self.classids).items())
logger.info("#%d objects in dataset" % self.datasize)
return 0
#############################
## READ IMAGE DATA
#############################
def read_data(self, index, read_crop=False, crop_size=32, crop_range=None):
""" Read data at given index """
# - Check index
if index<0 or index>=self.datasize:
logger.error("Invalid index %d given!" % (index))
return None
# - Read source filelist
logger.debug("Reading source image data at index %d ..." % (index))
d= self.datalist["data"][index]
sdata= SourceData()
if sdata.set_from_dict(d)<0:
logger.error("Failed to set source image data at index %d!" % (index))
return None
sname= sdata.sname
label= sdata.label
classid= sdata.id
# - Read source image data
status= 0
if read_crop:
if crop_range is None:
status= sdata.read_random_img_crops(crop_size)
else:
ixmin= crop_range[0]
ixmax= crop_range[1]
iymin= crop_range[2]
iymax= crop_range[3]
status= sdata.read_img_crops(ixmin, ixmax, iymin, iymax)
else:
status= sdata.read_imgs()
if status<0:
logger.error("Failed to read source image at index %d (sname=%s, label=%s, classid=%s)!" % (index, sname, str(label), str(classid)))
return None
if sdata.img_cube is None:
logger.error("Source image data cube at index %d (sname=%s, label=%s, classid=%s) is None!" % (index, sname, str(label), str(classid)))
return None
# - Check if more augmenters are required for this dataset
augmenter_index= 0
if 'augmenter_index' in d:
augmenter_index= d['augmenter_index']
if augmenter_index is None or augmenter_index<0:
logger.error("Invalid augmenter_index specified (must be in [0, naugmenters-1])!")
return None
# - Apply pre-processing?
if self.preprocessor is not None:
logger.debug("Apply pre-processing ...")
#data_proc= self.preprocessor(sdata.img_cube)
data_proc= self.preprocessor(sdata.img_cube, augmenter_index=augmenter_index)
if data_proc is None:
logger.error("Failed to pre-process source image data at index %d (sname=%s, label=%s, classid=%s)!" % (index, sname, str(label), str(classid)))
return None
sdata.img_cube= data_proc
# - Check data cube integrity
logger.debug("Check bad pixels ...")
has_bad_pixs= sdata.has_bad_pixels(check_fract=False, thr=0)
if has_bad_pixs:
logger.warn("Source image data at index %d (sname=%s, label=%s, classid=%s) has bad pixels!" % (index, sname, str(label), str(classid)))
return None
return sdata
#####################################
## GENERATE CNN TRAIN DATA
#####################################
def generate_cnn_data(self, batch_size=32, shuffle=True, read_crop=False, crop_size=32, classtarget_map={}, nclasses=7, balance_classes=False, class_probs={}, skip_first_class=False):
""" Generator function for CNN classification task """
nb= 0
data_index= -1
data_indexes= np.arange(0, self.datasize)
target_ids= []
logger.info("Starting data generator ...")
while True:
try:
if nb==0:
logger.debug("Starting new batch ...")
# - Generate random data index and read data at this index
data_index = (data_index + 1) % self.datasize
if shuffle:
data_index= np.random.choice(data_indexes)
logger.debug("Reading data at index %d (batch %d/%d) ..." % (data_index, nb, batch_size))
sdata= self.read_data(data_index, read_crop, crop_size)
if sdata is None:
logger.warn("Failed to read source data at index %d, skip to next ..." % data_index)
continue
data_shape= sdata.img_cube.shape
inputs_shape= (batch_size,) + data_shape
logger.debug("Data %d shape=(%d,%d,%d)" % (data_index, data_shape[0], data_shape[1], data_shape[2]))
# - Set class targets
# NB: If id & label are a list treat as multilabel problem
class_id= sdata.id
class_name= sdata.label
target_id= class_id
multilabel= (isinstance(class_id, list)) and (isinstance(class_name, list))
if classtarget_map:
if multilabel:
target_id= [classtarget_map[item] for item in class_id]
else:
target_id= classtarget_map[class_id]
# - Apply class rebalancing?
if balance_classes and class_probs:
accept= True
if multilabel:
# - Find the largest prob among available classes
# Example probs={"COMPACT":0.5,"EXTENDED":0.7,"DIFFUSE":1.0} ==> ["COMPACT"] will be generated less frequently than ["COMPACT","EXTENDED","DIFFUSE"]
prob_max= 0
for item in class_name:
prob= class_probs[item]
if prob>prob_max:
prob_max= prob
prob= prob_max
else:
prob= class_probs[class_name]
r= random.uniform(0, 1)
accept= r<prob
if not accept:
continue
# - Initialize return data
if nb==0:
inputs= np.zeros(inputs_shape, dtype=np.float32)
target_ids= []
class_ids= []
# - Update inputs
inputs[nb]= sdata.img_cube
target_ids.append(target_id)
if not multilabel and class_id>=0:
class_ids.append(class_id)
nb+= 1
# - Return data if number of batch is reached and restart the batch
if nb>=batch_size:
# - Compute class abundances in batch
if not multilabel:
class_counts= np.bincount(class_ids)
class_fracts= class_counts/len(class_ids)
class_counts_str= ' '.join([str(x) for x in class_counts])
class_fracts_str= ' '.join([str(x) for x in class_fracts])
logger.debug("Class counts/fract in batch: counts=[%s], fract=[%s]" % (class_counts_str, class_fracts_str))
# - Return data
logger.debug("Batch size (%d) reached, yielding generated data of size (%d,%d,%d,%d) ..." % (nb,inputs.shape[0],inputs.shape[1],inputs.shape[2],inputs.shape[3]))
if multilabel:
mlb = MultiLabelBinarizer(classes=np.arange(0, nclasses))
output_targets= mlb.fit_transform(target_ids).astype('float32')
if skip_first_class: # do not include first label (e.g. NONE/BACKGROUND) as target, e.g. these instances will have [0,0,0...0] encoding
output_targets= output_targets[:, 1:nclasses]
#print("data_generator --> target_ids")
#print(target_ids)
#print("data_generator --> output_targets")
#print(output_targets)
else:
output_targets= to_categorical(np.array(target_ids), num_classes=nclasses)
yield inputs, output_targets
nb= 0
except (GeneratorExit):
logger.info("Data generator completed execution ...")
raise
except (KeyboardInterrupt):
logger.warn("Keyboard exception caught while generating data...")
raise
except Exception as e:
logger.warn("Exception caught while generating data (err=%s) ..." % str(e))
raise
#####################################
## GENERATE CAE TRAIN DATA
#####################################
def generate_cae_data(self, batch_size=32, shuffle=True, read_crop=False, crop_size=32, balance_classes=False, class_probs={}):
""" Generator function for CAE task """
nb= 0
data_index= -1
data_indexes= np.arange(0,self.datasize)
logger.info("Starting CAE data generator ...")
while True:
try:
if nb==0:
logger.debug("Starting new batch ...")
# - Generate random data index and read data at this index
data_index = (data_index + 1) % self.datasize
if shuffle:
data_index= np.random.choice(data_indexes)
sdata= self.read_data(data_index, read_crop, crop_size)
if sdata is None:
logger.warn("Failed to read source data at index %d, skip to next ..." % data_index)
continue
data_shape= sdata.img_cube.shape
inputs_shape= (batch_size,) + data_shape
# - Apply class rebalancing?
class_id= sdata.id
class_name= sdata.label
multilabel= (isinstance(class_id, list)) and (isinstance(class_name, list))
if balance_classes and class_probs:
accept= True
if multilabel:
# - Find the largest prob among available classes
# Example probs={"COMPACT":0.5,"EXTENDED":0.7,"DIFFUSE":1.0} ==> ["COMPACT"] will be generated less frequently than ["COMPACT","EXTENDED","DIFFUSE"]
prob_max= 0
for item in class_name:
prob= class_probs[item]
if prob>prob_max:
prob_max= prob
prob= prob_max
else:
prob= class_probs[class_name]
r= random.uniform(0, 1)
accept= r<prob
if not accept:
continue
# - Initialize return data
if nb==0:
inputs= np.zeros(inputs_shape, dtype=np.float32)
class_ids= []
# - Update inputs
try:
inputs[nb]= sdata.img_cube
except Exception as e:
logger.error("Exception occurred while filling input data (nb=%d), exit generator!" % (nb))
break
if not multilabel and class_id>=0:
class_ids.append(class_id)
nb+= 1
# - Return data if number of batch is reached and restart the batch
if nb>=batch_size:
# - Compute class abundances in batch
if not multilabel:
class_counts= np.bincount(class_ids)
class_fracts= class_counts/len(class_ids)
class_counts_str= ' '.join([str(x) for x in class_counts])
class_fracts_str= ' '.join([str(x) for x in class_fracts])
logger.debug("Class counts/fract in batch: counts=[%s], fract=[%s]" % (class_counts_str, class_fracts_str))
# - Return data
yield inputs, inputs
nb= 0
except (GeneratorExit):
logger.info("Data generator completed execution ...")
raise
except (KeyboardInterrupt):
logger.warn("Keyboard exception caught while generating data...")
raise
except Exception as e:
logger.warn("Exception caught while generating data (err=%s) ..." % str(e))
raise
#####################################
## GENERATE SIMCLR TRAIN DATA
#####################################
def generate_simclr_data(self, batch_size=32, shuffle=True, read_crop=False, crop_size=32, balance_classes=False, class_probs={}):
""" Generator function for SimCLR task """
nb= 0
data_index= -1
data_indexes= np.arange(0,self.datasize)
logger.info("Starting data generator ...")
while True:
try:
if nb==0:
logger.debug("Starting new batch ...")
# - Generate random data index and pairs of augmented data for SimCLR
data_index = (data_index + 1) % self.datasize
if shuffle:
data_index= np.random.choice(data_indexes)
if read_crop:# NB: must read the same crop range for the data pair
sdata_1= self.read_data(data_index, read_crop=True, crop_size=crop_size, crop_range=None)
crop_range= (sdata_1.ixmin, sdata_1.ixmax, sdata_1.iymin, sdata_1.iymax)
sdata_2= self.read_data(data_index, read_crop=True, crop_size=crop_size, crop_range=crop_range)
else:
sdata_1= self.read_data(data_index)
sdata_2= self.read_data(data_index)
if sdata_1 is None or sdata_2 is None:
logger.warn("Failed to read source data pair at index %d!" % (data_index))
continue
data_shape= sdata_1.img_cube.shape
inputs_shape= (batch_size,) + data_shape
# - Apply class rebalancing?
class_id= sdata_1.id
class_name= sdata_1.label
multilabel= (isinstance(class_id, list)) and (isinstance(class_name, list))
if balance_classes and class_probs:
accept= True
if multilabel:
# - Find the largest prob among available classes
# Example probs={"COMPACT":0.5,"EXTENDED":0.7,"DIFFUSE":1.0} ==> ["COMPACT"] will be generated less frequently than ["COMPACT","EXTENDED","DIFFUSE"]
prob_max= 0
for item in class_name:
prob= class_probs[item]
if prob>prob_max:
prob_max= prob
prob= prob_max
else:
prob= class_probs[class_name]
r= random.uniform(0, 1)
accept= r<prob
if not accept:
continue
# - Initialize return data
if nb==0:
# - The ref implementation (https://github.com/mwdhont/SimCLRv1-keras-tensorflow/blob/master/DataGeneratorSimCLR.py)
# uses a dimension (2*batch, 1, ny, nx, nchan), so that returned inputs is a list of len(2*batch) and item passed to encoder has shape (1,ny,nx,nchan) (NB: batch size=1)
inputs_simclr_shape= (2*batch_size, 1) + data_shape # original ref
inputs_simclr= np.empty(inputs_simclr_shape, dtype=np.float32)
labels_ab_aa = np.zeros((batch_size, 2 * batch_size))
labels_ba_bb = np.zeros((batch_size, 2 * batch_size))
class_ids= []
# - Update inputs
# - The ref implementation (https://github.com/mwdhont/SimCLRv1-keras-tensorflow/blob/master/DataGeneratorSimCLR.py)
# shuffles the position of augmented image pair
inputs_simclr[nb]= sdata_1.img_cube
inputs_simclr[nb + batch_size]= sdata_2.img_cube
labels_ab_aa[nb, nb] = 1
labels_ba_bb[nb, nb] = 1
if class_id>=0:
class_ids.append(class_id)
nb+= 1
# - Return data if number of batch is reached and restart the batch
if nb>=batch_size:
# - Compute class abundances in batch
if not multilabel:
class_counts= np.bincount(class_ids)
class_fracts= class_counts/len(class_ids)
class_counts_str= ' '.join([str(x) for x in class_counts])
class_fracts_str= ' '.join([str(x) for x in class_fracts])
logger.debug("Class counts/fract in batch: counts=[%s], fract=[%s]" % (class_counts_str, class_fracts_str))
# - Return data
y= np.concatenate([labels_ab_aa, labels_ba_bb], 1)
yield list(inputs_simclr), y # original implementation: returns a list (len=2xbatch_size) of arrays of shape (1, ny, nx, nchan). Each Input layer takes one list entry as input.
nb= 0
except (GeneratorExit):
logger.info("Data generator completed execution ...")
raise
except (KeyboardInterrupt):
logger.warn("Keyboard exception caught while generating data...")
raise
except Exception as e:
logger.warn("Exception caught while generating data (err=%s) ..." % str(e))
raise
def generate_simclr_data_v2(self, batch_size=32, shuffle=True, read_crop=False, crop_size=32, balance_classes=False, class_probs={}):
""" Generator function for SimCLR task (version 2) """
nb= 0
data_index= -1
data_indexes= np.arange(0,self.datasize)
logger.info("Starting data generator ...")
while True:
try:
if nb==0:
logger.debug("Starting new batch ...")
# - Generate random data index and pairs of augmented data for SimCLR
data_index = (data_index + 1) % self.datasize
if shuffle:
data_index= np.random.choice(data_indexes)
if read_crop:# NB: must read the same crop range for the data pair
sdata_1= self.read_data(data_index, read_crop=True, crop_size=crop_size, crop_range=None)
crop_range= (sdata_1.ixmin, sdata_1.ixmax, sdata_1.iymin, sdata_1.iymax)
sdata_2= self.read_data(data_index, read_crop=True, crop_size=crop_size, crop_range=crop_range)
else:
sdata_1= self.read_data(data_index)
sdata_2= self.read_data(data_index)
if sdata_1 is None or sdata_2 is None:
logger.warn("Failed to read source data pair at index %d!" % (data_index))
continue
data_shape= sdata_1.img_cube.shape
inputs_shape= (2*batch_size,) + data_shape
# - Apply class rebalancing?
class_id= sdata_1.id
class_name= sdata_1.label
if balance_classes and class_probs:
prob= class_probs[class_name]
r= random.uniform(0, 1)
accept= r<prob
if not accept:
continue
# - Initialize return data
if nb==0:
# - The ref implementation (https://github.com/garder14/simclr-tensorflow2/blob/main/datasets.py)
# uses a dimension (2*batch, ny, nx, nchan)
inputs_simclr= np.empty(inputs_shape, dtype=np.float32)
class_ids= []
# - Update inputs
inputs_simclr[nb]= sdata_1.img_cube
inputs_simclr[nb + 1]= sdata_2.img_cube
if class_id>=0:
class_ids.append(class_id)
nb+= 2
# - Return data if number of batch is reached and restart the batch
if nb>=batch_size:
# - Compute class abundances in batch
class_counts= np.bincount(class_ids)
class_fracts= class_counts/len(class_ids)
class_counts_str= ' '.join([str(x) for x in class_counts])
class_fracts_str= ' '.join([str(x) for x in class_fracts])
logger.debug("Class counts/fract in batch: counts=[%s], fract=[%s]" % (class_counts_str, class_fracts_str))
# - Return data
yield inputs_simclr
nb= 0
except (GeneratorExit):
logger.info("Data generator completed execution ...")
raise
except (KeyboardInterrupt):
logger.warn("Keyboard exception caught while generating data...")
raise
except Exception as e:
logger.warn("Exception caught while generating data (err=%s) ..." % str(e))
raise
#####################################
## GENERATE BYOL TRAIN DATA
#####################################
def generate_byol_data(self, batch_size=32, shuffle=True, read_crop=False, crop_size=32, balance_classes=False, class_probs={}):
""" Generator function for BYOL task """
nb= 0
data_index= -1
data_indexes= np.arange(0,self.datasize)
logger.info("Starting data generator ...")
while True:
try:
if nb==0:
logger.debug("Starting new batch ...")
# - Generate random data index and pairs of augmented data
data_index = (data_index + 1) % self.datasize
if shuffle:
data_index= np.random.choice(data_indexes)
if read_crop:# NB: must read the same crop range for the data pair
sdata_1= self.read_data(data_index, read_crop=True, crop_size=crop_size, crop_range=None)
crop_range= (sdata_1.ixmin, sdata_1.ixmax, sdata_1.iymin, sdata_1.iymax)
sdata_2= self.read_data(data_index, read_crop=True, crop_size=crop_size, crop_range=crop_range)
else:
sdata_1= self.read_data(data_index)
sdata_2= self.read_data(data_index)
if sdata_1 is None or sdata_2 is None:
logger.warn("Failed to read source data pair at index %d!" % (data_index))
continue
data_shape= sdata_1.img_cube.shape
inputs_shape= (batch_size,) + data_shape
# - Apply class rebalancing?
class_id= sdata_1.id
class_name= sdata_1.label
if balance_classes and class_probs:
prob= class_probs[class_name]
r= random.uniform(0, 1)
accept= r<prob
if not accept:
continue
# - Initialize return data
if nb==0:
inputs_1= np.zeros(inputs_shape, dtype=np.float32)
inputs_2= np.zeros(inputs_shape, dtype=np.float32)
class_ids= []
# - Update inputs
inputs_1[nb]= sdata_1.img_cube
inputs_2[nb]= sdata_2.img_cube
if class_id>=0:
class_ids.append(class_id)
nb+= 1
# - Return data if number of batch is reached and restart the batch
if nb>=batch_size:
# - Compute class abundances in batch
class_counts= np.bincount(class_ids)
class_fracts= class_counts/len(class_ids)
class_counts_str= ' '.join([str(x) for x in class_counts])
class_fracts_str= ' '.join([str(x) for x in class_fracts])
logger.debug("Class counts/fract in batch: counts=[%s], fract=[%s]" % (class_counts_str, class_fracts_str))
# - Return data
yield inputs_1, inputs_2
nb= 0
except (GeneratorExit):
logger.info("Data generator completed execution ...")
raise
except (KeyboardInterrupt):
logger.warn("Keyboard exception caught while generating data...")
raise
except Exception as e:
logger.warn("Exception caught while generating data (err=%s) ..." % str(e))
raise
#####################################
## GENERATE TRAIN DATA
#####################################
def generate_data(self, batch_size=32, shuffle=True, read_crop=False, crop_size=32, balance_classes=False, class_probs={}):
""" Generator function reading nsamples images from disk and returning to caller """
nb= 0
data_index= -1
data_indexes= np.arange(0,self.datasize)
logger.info("Starting data generator ...")
while True:
try:
if nb==0:
logger.debug("Starting new batch ...")
# - Generate random data index and read data at this index
data_index = (data_index + 1) % self.datasize
if shuffle:
data_index= np.random.choice(data_indexes)
logger.debug("Reading data at index %d (batch %d/%d) ..." % (data_index,nb, batch_size))
sdata= self.read_data(data_index, read_crop, crop_size)
if sdata is None:
logger.warn("Failed to read source data at index %d, skip to next ..." % data_index)
continue
data_shape= sdata.img_cube.shape
inputs_shape= (batch_size,) + data_shape
# - Apply class rebalancing?
class_id= sdata.id
class_name= sdata.label
multilabel= (isinstance(class_id, list)) and (isinstance(class_name, list))
if balance_classes and class_probs:
accept= True
if multilabel:
for item in class_name:
prob= class_probs[item]
r= random.uniform(0, 1)
if r<prob:
accept= False
break
else:
prob= class_probs[class_name]
r= random.uniform(0, 1)
accept= r<prob
if not accept:
continue
# - Initialize return data
if nb==0:
inputs= np.zeros(inputs_shape, dtype=np.float32)
class_ids= []
# - Update inputs
inputs[nb]= sdata.img_cube
if class_id>=0:
class_ids.append(class_id)
nb+= 1
# - Return data if number of batch is reached and restart the batch
if nb>=batch_size:
# - Compute class abundances in batch
if not multilabel:
class_counts= np.bincount(class_ids)
class_fracts= class_counts/len(class_ids)
class_counts_str= ' '.join([str(x) for x in class_counts])
class_fracts_str= ' '.join([str(x) for x in class_fracts])
logger.debug("Class counts/fract in batch: counts=[%s], fract=[%s]" % (class_counts_str, class_fracts_str))
# - Return data
yield inputs, sdata
nb= 0
except (GeneratorExit):
logger.info("Data generator completed execution ...")
raise
except (KeyboardInterrupt):
logger.warn("Keyboard exception caught while generating data...")
raise
except Exception as e:
logger.warn("Exception caught while generating data (err=%s) ..." % str(e))
raise
|
SKA-INAFREPO_NAMEsclassifierPATH_START.@sclassifier_extracted@sclassifier-master@sclassifier@data_generator.py@.PATH_END.py
|
{
"filename": "sim.py",
"repo_name": "zpenoyre/CausticFrog",
"repo_path": "CausticFrog_extracted/CausticFrog-master/causticTools/sim.py",
"type": "Python"
}
|
#runs the cython simulation (jury is out on whether this needs to be its own file...)
import Cython
import pyximport
pyximport.install()
from . import cythonSim
#this allows line profiler (via ipython magic %lprun) to profile cython functions
from Cython.Compiler.Options import get_directive_defaults
get_directive_defaults()['linetrace'] = True
get_directive_defaults()['binding'] = True
def runSim(nShells,nPhase,nEcc,T,dt,rMin,rMax,name,nOutput,dmMass,baryonInit,baryonMass,findEcc,G=4.96e-15):
cythonSim.updateGlobal(G) #updates global variables
cythonSim.setFunctions(dmMass,baryonInit,baryonMass,findEcc) #sets user-defined functions for the initial state and final baryon mass
cythonSim.runSim(nShells,nPhase,nEcc,T,dt,rMin,rMax,nOutput,name) #runs the simulation
|
zpenoyreREPO_NAMECausticFrogPATH_START.@CausticFrog_extracted@CausticFrog-master@causticTools@sim.py@.PATH_END.py
|
{
"filename": "build_ext.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/setuptools/py3/setuptools/_distutils/command/build_ext.py",
"type": "Python"
}
|
"""distutils.command.build_ext
Implements the Distutils 'build_ext' command, for building extension
modules (currently limited to C extensions, should accommodate C++
extensions ASAP)."""
import contextlib
import os
import re
import sys
from distutils._log import log
from site import USER_BASE
from .._modified import newer_group
from ..core import Command
from ..errors import (
CCompilerError,
CompileError,
DistutilsError,
DistutilsOptionError,
DistutilsPlatformError,
DistutilsSetupError,
)
from ..extension import Extension
from ..sysconfig import customize_compiler, get_config_h_filename, get_python_version
from ..util import get_platform, is_mingw
# An extension name is just a dot-separated list of Python NAMEs (ie.
# the same as a fully-qualified module name).
extension_name_re = re.compile(r'^[a-zA-Z_][a-zA-Z_0-9]*(\.[a-zA-Z_][a-zA-Z_0-9]*)*$')
def show_compilers():
from ..ccompiler import show_compilers
show_compilers()
class build_ext(Command):
description = "build C/C++ extensions (compile/link to build directory)"
# XXX thoughts on how to deal with complex command-line options like
# these, i.e. how to make it so fancy_getopt can suck them off the
# command line and make it look like setup.py defined the appropriate
# lists of tuples of what-have-you.
# - each command needs a callback to process its command-line options
# - Command.__init__() needs access to its share of the whole
# command line (must ultimately come from
# Distribution.parse_command_line())
# - it then calls the current command class' option-parsing
# callback to deal with weird options like -D, which have to
# parse the option text and churn out some custom data
# structure
# - that data structure (in this case, a list of 2-tuples)
# will then be present in the command object by the time
# we get to finalize_options() (i.e. the constructor
# takes care of both command-line and client options
# in between initialize_options() and finalize_options())
sep_by = f" (separated by '{os.pathsep}')"
user_options = [
('build-lib=', 'b', "directory for compiled extension modules"),
('build-temp=', 't', "directory for temporary files (build by-products)"),
(
'plat-name=',
'p',
"platform name to cross-compile for, if supported "
f"[default: {get_platform()}]",
),
(
'inplace',
'i',
"ignore build-lib and put compiled extensions into the source "
"directory alongside your pure Python modules",
),
(
'include-dirs=',
'I',
"list of directories to search for header files" + sep_by,
),
('define=', 'D', "C preprocessor macros to define"),
('undef=', 'U', "C preprocessor macros to undefine"),
('libraries=', 'l', "external C libraries to link with"),
(
'library-dirs=',
'L',
"directories to search for external C libraries" + sep_by,
),
('rpath=', 'R', "directories to search for shared C libraries at runtime"),
('link-objects=', 'O', "extra explicit link objects to include in the link"),
('debug', 'g', "compile/link with debugging information"),
('force', 'f', "forcibly build everything (ignore file timestamps)"),
('compiler=', 'c', "specify the compiler type"),
('parallel=', 'j', "number of parallel build jobs"),
('swig-cpp', None, "make SWIG create C++ files (default is C)"),
('swig-opts=', None, "list of SWIG command line options"),
('swig=', None, "path to the SWIG executable"),
('user', None, "add user include, library and rpath"),
]
boolean_options = ['inplace', 'debug', 'force', 'swig-cpp', 'user']
help_options = [
('help-compiler', None, "list available compilers", show_compilers),
]
def initialize_options(self):
self.extensions = None
self.build_lib = None
self.plat_name = None
self.build_temp = None
self.inplace = False
self.package = None
self.include_dirs = None
self.define = None
self.undef = None
self.libraries = None
self.library_dirs = None
self.rpath = None
self.link_objects = None
self.debug = None
self.force = None
self.compiler = None
self.swig = None
self.swig_cpp = None
self.swig_opts = None
self.user = None
self.parallel = None
@staticmethod
def _python_lib_dir(sysconfig):
"""
Resolve Python's library directory for building extensions
that rely on a shared Python library.
See python/cpython#44264 and python/cpython#48686
"""
if not sysconfig.get_config_var('Py_ENABLE_SHARED'):
return
if sysconfig.python_build:
yield '.'
return
if sys.platform == 'zos':
# On z/OS, a user is not required to install Python to
# a predetermined path, but can use Python portably
installed_dir = sysconfig.get_config_var('base')
lib_dir = sysconfig.get_config_var('platlibdir')
yield os.path.join(installed_dir, lib_dir)
else:
# building third party extensions
yield sysconfig.get_config_var('LIBDIR')
def finalize_options(self): # noqa: C901
from distutils import sysconfig
self.set_undefined_options(
'build',
('build_lib', 'build_lib'),
('build_temp', 'build_temp'),
('compiler', 'compiler'),
('debug', 'debug'),
('force', 'force'),
('parallel', 'parallel'),
('plat_name', 'plat_name'),
)
if self.package is None:
self.package = self.distribution.ext_package
self.extensions = self.distribution.ext_modules
# Make sure Python's include directories (for Python.h, pyconfig.h,
# etc.) are in the include search path.
py_include = sysconfig.get_python_inc()
plat_py_include = sysconfig.get_python_inc(plat_specific=True)
if self.include_dirs is None:
self.include_dirs = self.distribution.include_dirs or []
if isinstance(self.include_dirs, str):
self.include_dirs = self.include_dirs.split(os.pathsep)
# If in a virtualenv, add its include directory
# Issue 16116
if sys.exec_prefix != sys.base_exec_prefix:
self.include_dirs.append(os.path.join(sys.exec_prefix, 'include'))
# Put the Python "system" include dir at the end, so that
# any local include dirs take precedence.
self.include_dirs.extend(py_include.split(os.path.pathsep))
if plat_py_include != py_include:
self.include_dirs.extend(plat_py_include.split(os.path.pathsep))
self.ensure_string_list('libraries')
self.ensure_string_list('link_objects')
# Life is easier if we're not forever checking for None, so
# simplify these options to empty lists if unset
if self.libraries is None:
self.libraries = []
if self.library_dirs is None:
self.library_dirs = []
elif isinstance(self.library_dirs, str):
self.library_dirs = self.library_dirs.split(os.pathsep)
if self.rpath is None:
self.rpath = []
elif isinstance(self.rpath, str):
self.rpath = self.rpath.split(os.pathsep)
# for extensions under windows use different directories
# for Release and Debug builds.
# also Python's library directory must be appended to library_dirs
if os.name == 'nt' and not is_mingw():
# the 'libs' directory is for binary installs - we assume that
# must be the *native* platform. But we don't really support
# cross-compiling via a binary install anyway, so we let it go.
self.library_dirs.append(os.path.join(sys.exec_prefix, 'libs'))
if sys.base_exec_prefix != sys.prefix: # Issue 16116
self.library_dirs.append(os.path.join(sys.base_exec_prefix, 'libs'))
if self.debug:
self.build_temp = os.path.join(self.build_temp, "Debug")
else:
self.build_temp = os.path.join(self.build_temp, "Release")
# Append the source distribution include and library directories,
# this allows distutils on windows to work in the source tree
self.include_dirs.append(os.path.dirname(get_config_h_filename()))
self.library_dirs.append(sys.base_exec_prefix)
# Use the .lib files for the correct architecture
if self.plat_name == 'win32':
suffix = 'win32'
else:
# win-amd64
suffix = self.plat_name[4:]
new_lib = os.path.join(sys.exec_prefix, 'PCbuild')
if suffix:
new_lib = os.path.join(new_lib, suffix)
self.library_dirs.append(new_lib)
# For extensions under Cygwin, Python's library directory must be
# appended to library_dirs
if sys.platform[:6] == 'cygwin':
if not sysconfig.python_build:
# building third party extensions
self.library_dirs.append(
os.path.join(
sys.prefix, "lib", "python" + get_python_version(), "config"
)
)
else:
# building python standard extensions
self.library_dirs.append('.')
self.library_dirs.extend(self._python_lib_dir(sysconfig))
# The argument parsing will result in self.define being a string, but
# it has to be a list of 2-tuples. All the preprocessor symbols
# specified by the 'define' option will be set to '1'. Multiple
# symbols can be separated with commas.
if self.define:
defines = self.define.split(',')
self.define = [(symbol, '1') for symbol in defines]
# The option for macros to undefine is also a string from the
# option parsing, but has to be a list. Multiple symbols can also
# be separated with commas here.
if self.undef:
self.undef = self.undef.split(',')
if self.swig_opts is None:
self.swig_opts = []
else:
self.swig_opts = self.swig_opts.split(' ')
# Finally add the user include and library directories if requested
if self.user:
user_include = os.path.join(USER_BASE, "include")
user_lib = os.path.join(USER_BASE, "lib")
if os.path.isdir(user_include):
self.include_dirs.append(user_include)
if os.path.isdir(user_lib):
self.library_dirs.append(user_lib)
self.rpath.append(user_lib)
if isinstance(self.parallel, str):
try:
self.parallel = int(self.parallel)
except ValueError:
raise DistutilsOptionError("parallel should be an integer")
def run(self): # noqa: C901
from ..ccompiler import new_compiler
# 'self.extensions', as supplied by setup.py, is a list of
# Extension instances. See the documentation for Extension (in
# distutils.extension) for details.
#
# For backwards compatibility with Distutils 0.8.2 and earlier, we
# also allow the 'extensions' list to be a list of tuples:
# (ext_name, build_info)
# where build_info is a dictionary containing everything that
# Extension instances do except the name, with a few things being
# differently named. We convert these 2-tuples to Extension
# instances as needed.
if not self.extensions:
return
# If we were asked to build any C/C++ libraries, make sure that the
# directory where we put them is in the library search path for
# linking extensions.
if self.distribution.has_c_libraries():
build_clib = self.get_finalized_command('build_clib')
self.libraries.extend(build_clib.get_library_names() or [])
self.library_dirs.append(build_clib.build_clib)
# Setup the CCompiler object that we'll use to do all the
# compiling and linking
self.compiler = new_compiler(
compiler=self.compiler,
verbose=self.verbose,
dry_run=self.dry_run,
force=self.force,
)
customize_compiler(self.compiler)
# If we are cross-compiling, init the compiler now (if we are not
# cross-compiling, init would not hurt, but people may rely on
# late initialization of compiler even if they shouldn't...)
if os.name == 'nt' and self.plat_name != get_platform():
self.compiler.initialize(self.plat_name)
# And make sure that any compile/link-related options (which might
# come from the command-line or from the setup script) are set in
# that CCompiler object -- that way, they automatically apply to
# all compiling and linking done here.
if self.include_dirs is not None:
self.compiler.set_include_dirs(self.include_dirs)
if self.define is not None:
# 'define' option is a list of (name,value) tuples
for name, value in self.define:
self.compiler.define_macro(name, value)
if self.undef is not None:
for macro in self.undef:
self.compiler.undefine_macro(macro)
if self.libraries is not None:
self.compiler.set_libraries(self.libraries)
if self.library_dirs is not None:
self.compiler.set_library_dirs(self.library_dirs)
if self.rpath is not None:
self.compiler.set_runtime_library_dirs(self.rpath)
if self.link_objects is not None:
self.compiler.set_link_objects(self.link_objects)
# Now actually compile and link everything.
self.build_extensions()
def check_extensions_list(self, extensions): # noqa: C901
"""Ensure that the list of extensions (presumably provided as a
command option 'extensions') is valid, i.e. it is a list of
Extension objects. We also support the old-style list of 2-tuples,
where the tuples are (ext_name, build_info), which are converted to
Extension instances here.
Raise DistutilsSetupError if the structure is invalid anywhere;
just returns otherwise.
"""
if not isinstance(extensions, list):
raise DistutilsSetupError(
"'ext_modules' option must be a list of Extension instances"
)
for i, ext in enumerate(extensions):
if isinstance(ext, Extension):
continue # OK! (assume type-checking done
# by Extension constructor)
if not isinstance(ext, tuple) or len(ext) != 2:
raise DistutilsSetupError(
"each element of 'ext_modules' option must be an "
"Extension instance or 2-tuple"
)
ext_name, build_info = ext
log.warning(
"old-style (ext_name, build_info) tuple found in "
"ext_modules for extension '%s' "
"-- please convert to Extension instance",
ext_name,
)
if not (isinstance(ext_name, str) and extension_name_re.match(ext_name)):
raise DistutilsSetupError(
"first element of each tuple in 'ext_modules' "
"must be the extension name (a string)"
)
if not isinstance(build_info, dict):
raise DistutilsSetupError(
"second element of each tuple in 'ext_modules' "
"must be a dictionary (build info)"
)
# OK, the (ext_name, build_info) dict is type-safe: convert it
# to an Extension instance.
ext = Extension(ext_name, build_info['sources'])
# Easy stuff: one-to-one mapping from dict elements to
# instance attributes.
for key in (
'include_dirs',
'library_dirs',
'libraries',
'extra_objects',
'extra_compile_args',
'extra_link_args',
):
val = build_info.get(key)
if val is not None:
setattr(ext, key, val)
# Medium-easy stuff: same syntax/semantics, different names.
ext.runtime_library_dirs = build_info.get('rpath')
if 'def_file' in build_info:
log.warning("'def_file' element of build info dict no longer supported")
# Non-trivial stuff: 'macros' split into 'define_macros'
# and 'undef_macros'.
macros = build_info.get('macros')
if macros:
ext.define_macros = []
ext.undef_macros = []
for macro in macros:
if not (isinstance(macro, tuple) and len(macro) in (1, 2)):
raise DistutilsSetupError(
"'macros' element of build info dict "
"must be 1- or 2-tuple"
)
if len(macro) == 1:
ext.undef_macros.append(macro[0])
elif len(macro) == 2:
ext.define_macros.append(macro)
extensions[i] = ext
def get_source_files(self):
self.check_extensions_list(self.extensions)
filenames = []
# Wouldn't it be neat if we knew the names of header files too...
for ext in self.extensions:
filenames.extend(ext.sources)
return filenames
def get_outputs(self):
# Sanity check the 'extensions' list -- can't assume this is being
# done in the same run as a 'build_extensions()' call (in fact, we
# can probably assume that it *isn't*!).
self.check_extensions_list(self.extensions)
# And build the list of output (built) filenames. Note that this
# ignores the 'inplace' flag, and assumes everything goes in the
# "build" tree.
outputs = []
for ext in self.extensions:
outputs.append(self.get_ext_fullpath(ext.name))
return outputs
def build_extensions(self):
# First, sanity-check the 'extensions' list
self.check_extensions_list(self.extensions)
if self.parallel:
self._build_extensions_parallel()
else:
self._build_extensions_serial()
def _build_extensions_parallel(self):
workers = self.parallel
if self.parallel is True:
workers = os.cpu_count() # may return None
try:
from concurrent.futures import ThreadPoolExecutor
except ImportError:
workers = None
if workers is None:
self._build_extensions_serial()
return
with ThreadPoolExecutor(max_workers=workers) as executor:
futures = [
executor.submit(self.build_extension, ext) for ext in self.extensions
]
for ext, fut in zip(self.extensions, futures):
with self._filter_build_errors(ext):
fut.result()
def _build_extensions_serial(self):
for ext in self.extensions:
with self._filter_build_errors(ext):
self.build_extension(ext)
@contextlib.contextmanager
def _filter_build_errors(self, ext):
try:
yield
except (CCompilerError, DistutilsError, CompileError) as e:
if not ext.optional:
raise
self.warn(f'building extension "{ext.name}" failed: {e}')
def build_extension(self, ext):
sources = ext.sources
if sources is None or not isinstance(sources, (list, tuple)):
raise DistutilsSetupError(
f"in 'ext_modules' option (extension '{ext.name}'), "
"'sources' must be present and must be "
"a list of source filenames"
)
# sort to make the resulting .so file build reproducible
sources = sorted(sources)
ext_path = self.get_ext_fullpath(ext.name)
depends = sources + ext.depends
if not (self.force or newer_group(depends, ext_path, 'newer')):
log.debug("skipping '%s' extension (up-to-date)", ext.name)
return
else:
log.info("building '%s' extension", ext.name)
# First, scan the sources for SWIG definition files (.i), run
# SWIG on 'em to create .c files, and modify the sources list
# accordingly.
sources = self.swig_sources(sources, ext)
# Next, compile the source code to object files.
# XXX not honouring 'define_macros' or 'undef_macros' -- the
# CCompiler API needs to change to accommodate this, and I
# want to do one thing at a time!
# Two possible sources for extra compiler arguments:
# - 'extra_compile_args' in Extension object
# - CFLAGS environment variable (not particularly
# elegant, but people seem to expect it and I
# guess it's useful)
# The environment variable should take precedence, and
# any sensible compiler will give precedence to later
# command line args. Hence we combine them in order:
extra_args = ext.extra_compile_args or []
macros = ext.define_macros[:]
for undef in ext.undef_macros:
macros.append((undef,))
objects = self.compiler.compile(
sources,
output_dir=self.build_temp,
macros=macros,
include_dirs=ext.include_dirs,
debug=self.debug,
extra_postargs=extra_args,
depends=ext.depends,
)
        # XXX outdated variable, kept here in case third-party code
# needs it.
self._built_objects = objects[:]
# Now link the object files together into a "shared object" --
# of course, first we have to figure out all the other things
# that go into the mix.
if ext.extra_objects:
objects.extend(ext.extra_objects)
extra_args = ext.extra_link_args or []
# Detect target language, if not provided
language = ext.language or self.compiler.detect_language(sources)
self.compiler.link_shared_object(
objects,
ext_path,
libraries=self.get_libraries(ext),
library_dirs=ext.library_dirs,
runtime_library_dirs=ext.runtime_library_dirs,
extra_postargs=extra_args,
export_symbols=self.get_export_symbols(ext),
debug=self.debug,
build_temp=self.build_temp,
target_lang=language,
)
def swig_sources(self, sources, extension):
"""Walk the list of source files in 'sources', looking for SWIG
interface (.i) files. Run SWIG on all that are found, and
return a modified 'sources' list with SWIG source files replaced
by the generated C (or C++) files.
"""
new_sources = []
swig_sources = []
swig_targets = {}
# XXX this drops generated C/C++ files into the source tree, which
# is fine for developers who want to distribute the generated
# source -- but there should be an option to put SWIG output in
# the temp dir.
if self.swig_cpp:
log.warning("--swig-cpp is deprecated - use --swig-opts=-c++")
if (
self.swig_cpp
or ('-c++' in self.swig_opts)
or ('-c++' in extension.swig_opts)
):
target_ext = '.cpp'
else:
target_ext = '.c'
for source in sources:
(base, ext) = os.path.splitext(source)
if ext == ".i": # SWIG interface file
new_sources.append(base + '_wrap' + target_ext)
swig_sources.append(source)
swig_targets[source] = new_sources[-1]
else:
new_sources.append(source)
if not swig_sources:
return new_sources
swig = self.swig or self.find_swig()
swig_cmd = [swig, "-python"]
swig_cmd.extend(self.swig_opts)
if self.swig_cpp:
swig_cmd.append("-c++")
# Do not override commandline arguments
if not self.swig_opts:
for o in extension.swig_opts:
swig_cmd.append(o)
for source in swig_sources:
target = swig_targets[source]
log.info("swigging %s to %s", source, target)
self.spawn(swig_cmd + ["-o", target, source])
return new_sources
def find_swig(self):
"""Return the name of the SWIG executable. On Unix, this is
just "swig" -- it should be in the PATH. Tries a bit harder on
Windows.
"""
if os.name == "posix":
return "swig"
elif os.name == "nt":
# Look for SWIG in its standard installation directory on
# Windows (or so I presume!). If we find it there, great;
# if not, act like Unix and assume it's in the PATH.
for vers in ("1.3", "1.2", "1.1"):
fn = os.path.join(f"c:\\swig{vers}", "swig.exe")
if os.path.isfile(fn):
return fn
else:
return "swig.exe"
else:
raise DistutilsPlatformError(
"I don't know how to find (much less run) SWIG "
f"on platform '{os.name}'"
)
# -- Name generators -----------------------------------------------
# (extension names, filenames, whatever)
def get_ext_fullpath(self, ext_name):
"""Returns the path of the filename for a given extension.
The file is located in `build_lib` or directly in the package
(inplace option).
"""
fullname = self.get_ext_fullname(ext_name)
modpath = fullname.split('.')
filename = self.get_ext_filename(modpath[-1])
if not self.inplace:
# no further work needed
# returning :
# build_dir/package/path/filename
filename = os.path.join(*modpath[:-1] + [filename])
return os.path.join(self.build_lib, filename)
# the inplace option requires to find the package directory
# using the build_py command for that
package = '.'.join(modpath[0:-1])
build_py = self.get_finalized_command('build_py')
package_dir = os.path.abspath(build_py.get_package_dir(package))
# returning
# package_dir/filename
return os.path.join(package_dir, filename)
def get_ext_fullname(self, ext_name):
"""Returns the fullname of a given extension name.
Adds the `package.` prefix"""
if self.package is None:
return ext_name
else:
return self.package + '.' + ext_name
def get_ext_filename(self, ext_name):
r"""Convert the name of an extension (eg. "foo.bar") into the name
of the file from which it will be loaded (eg. "foo/bar.so", or
"foo\bar.pyd").
"""
from ..sysconfig import get_config_var
ext_path = ext_name.split('.')
ext_suffix = get_config_var('EXT_SUFFIX')
return os.path.join(*ext_path) + ext_suffix
def get_export_symbols(self, ext):
"""Return the list of symbols that a shared extension has to
export. This either uses 'ext.export_symbols' or, if it's not
provided, "PyInit_" + module_name. Only relevant on Windows, where
the .pyd file (DLL) must export the module "PyInit_" function.
"""
name = ext.name.split('.')[-1]
try:
# Unicode module name support as defined in PEP-489
# https://peps.python.org/pep-0489/#export-hook-name
name.encode('ascii')
except UnicodeEncodeError:
suffix = 'U_' + name.encode('punycode').replace(b'-', b'_').decode('ascii')
else:
suffix = "_" + name
initfunc_name = "PyInit" + suffix
if initfunc_name not in ext.export_symbols:
ext.export_symbols.append(initfunc_name)
return ext.export_symbols
def get_libraries(self, ext): # noqa: C901
"""Return the list of libraries to link against when building a
shared extension. On most platforms, this is just 'ext.libraries';
on Windows, we add the Python library (eg. python20.dll).
"""
# The python library is always needed on Windows. For MSVC, this
# is redundant, since the library is mentioned in a pragma in
# pyconfig.h that MSVC groks. The other Windows compilers all seem
# to need it mentioned explicitly, though, so that's what we do.
# Append '_d' to the python import library on debug builds.
if sys.platform == "win32" and not is_mingw():
from .._msvccompiler import MSVCCompiler
if not isinstance(self.compiler, MSVCCompiler):
template = "python%d%d"
if self.debug:
template = template + '_d'
pythonlib = template % (
sys.hexversion >> 24,
(sys.hexversion >> 16) & 0xFF,
)
# don't extend ext.libraries, it may be shared with other
# extensions, it is a reference to the original list
return ext.libraries + [pythonlib]
else:
# On Android only the main executable and LD_PRELOADs are considered
# to be RTLD_GLOBAL, all the dependencies of the main executable
# remain RTLD_LOCAL and so the shared libraries must be linked with
# libpython when python is built with a shared python library (issue
# bpo-21536).
# On Cygwin (and if required, other POSIX-like platforms based on
# Windows like MinGW) it is simply necessary that all symbols in
# shared libraries are resolved at link time.
from ..sysconfig import get_config_var
link_libpython = False
if get_config_var('Py_ENABLE_SHARED'):
# A native build on an Android device or on Cygwin
if hasattr(sys, 'getandroidapilevel'):
link_libpython = True
elif sys.platform == 'cygwin' or is_mingw():
link_libpython = True
elif '_PYTHON_HOST_PLATFORM' in os.environ:
# We are cross-compiling for one of the relevant platforms
if get_config_var('ANDROID_API_LEVEL') != 0:
link_libpython = True
elif get_config_var('MACHDEP') == 'cygwin':
link_libpython = True
if link_libpython:
ldversion = get_config_var('LDVERSION')
return ext.libraries + ['python' + ldversion]
return ext.libraries
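For reference, a minimal sketch of the name-to-filename mapping that `get_ext_filename` performs above. This reimplementation uses only the standard-library `sysconfig` module and is illustrative, not a drop-in replacement (it ignores details such as per-extension suffix overrides):

```python
import os
import sysconfig

def ext_filename(ext_name: str) -> str:
    # Mirrors build_ext.get_ext_filename: dots in the dotted module name
    # become path separators, then the platform-specific extension suffix
    # (e.g. ".so" or ".pyd", possibly with ABI tags) is appended.
    ext_suffix = sysconfig.get_config_var('EXT_SUFFIX')
    return os.path.join(*ext_name.split('.')) + ext_suffix

print(ext_filename('foo.bar'))  # e.g. foo/bar.cpython-311-x86_64-linux-gnu.so
```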
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@setuptools@py3@setuptools@_distutils@command@build_ext.py@.PATH_END.py
|
{
"filename": "_shapesrc.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/scatter/fillpattern/_shapesrc.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class ShapesrcValidator(_plotly_utils.basevalidators.SrcValidator):
def __init__(
self, plotly_name="shapesrc", parent_name="scatter.fillpattern", **kwargs
):
super(ShapesrcValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "none"),
**kwargs,
)
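The `kwargs.pop(..., default)` idiom used by this validator (and the similar ones below) lets a subclass install defaults that callers can still override. A generic, self-contained sketch of that pattern (class names here are hypothetical, not part of plotly):

```python
class BaseValidator:
    def __init__(self, plotly_name, parent_name, edit_type="calc", **kwargs):
        self.plotly_name = plotly_name
        self.parent_name = parent_name
        self.edit_type = edit_type

class ShapesrcLike(BaseValidator):
    def __init__(self, plotly_name="shapesrc", parent_name="scatter.fillpattern", **kwargs):
        # pop() removes the key so it is not forwarded twice via **kwargs;
        # its second argument is the default used when the caller omits it.
        super().__init__(
            plotly_name=plotly_name,
            parent_name=parent_name,
            edit_type=kwargs.pop("edit_type", "none"),
            **kwargs,
        )

v = ShapesrcLike()                   # edit_type falls back to "none"
w = ShapesrcLike(edit_type="calc")   # an explicit caller value wins
```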
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@scatter@fillpattern@_shapesrc.py@.PATH_END.py
|
{
"filename": "_shadow.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/sankey/hoverlabel/font/_shadow.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class ShadowValidator(_plotly_utils.basevalidators.StringValidator):
def __init__(
self, plotly_name="shadow", parent_name="sankey.hoverlabel.font", **kwargs
):
super(ShadowValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
array_ok=kwargs.pop("array_ok", True),
edit_type=kwargs.pop("edit_type", "calc"),
**kwargs,
)
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@sankey@hoverlabel@font@_shadow.py@.PATH_END.py
|
{
"filename": "_trifinder.py",
"repo_name": "matplotlib/matplotlib",
"repo_path": "matplotlib_extracted/matplotlib-main/lib/matplotlib/tri/_trifinder.py",
"type": "Python"
}
|
import numpy as np
from matplotlib import _api
from matplotlib.tri import Triangulation
class TriFinder:
"""
Abstract base class for classes used to find the triangles of a
Triangulation in which (x, y) points lie.
Rather than instantiate an object of a class derived from TriFinder, it is
usually better to use the function `.Triangulation.get_trifinder`.
Derived classes implement __call__(x, y) where x and y are array-like point
coordinates of the same shape.
"""
def __init__(self, triangulation):
_api.check_isinstance(Triangulation, triangulation=triangulation)
self._triangulation = triangulation
def __call__(self, x, y):
raise NotImplementedError
class TrapezoidMapTriFinder(TriFinder):
"""
`~matplotlib.tri.TriFinder` class implemented using the trapezoid
map algorithm from the book "Computational Geometry, Algorithms and
Applications", second edition, by M. de Berg, M. van Kreveld, M. Overmars
and O. Schwarzkopf.
The triangulation must be valid, i.e. it must not have duplicate points,
triangles formed from colinear points, or overlapping triangles. The
algorithm has some tolerance to triangles formed from colinear points, but
this should not be relied upon.
"""
def __init__(self, triangulation):
from matplotlib import _tri
super().__init__(triangulation)
self._cpp_trifinder = _tri.TrapezoidMapTriFinder(
triangulation.get_cpp_triangulation())
self._initialize()
def __call__(self, x, y):
"""
Return an array containing the indices of the triangles in which the
specified *x*, *y* points lie, or -1 for points that do not lie within
a triangle.
*x*, *y* are array-like x and y coordinates of the same shape and any
number of dimensions.
        Returns integer array with the same shape as *x* and *y*.
"""
x = np.asarray(x, dtype=np.float64)
y = np.asarray(y, dtype=np.float64)
if x.shape != y.shape:
raise ValueError("x and y must be array-like with the same shape")
# C++ does the heavy lifting, and expects 1D arrays.
indices = (self._cpp_trifinder.find_many(x.ravel(), y.ravel())
.reshape(x.shape))
return indices
def _get_tree_stats(self):
"""
Return a python list containing the statistics about the node tree:
0: number of nodes (tree size)
1: number of unique nodes
2: number of trapezoids (tree leaf nodes)
3: number of unique trapezoids
4: maximum parent count (max number of times a node is repeated in
tree)
5: maximum depth of tree (one more than the maximum number of
comparisons needed to search through the tree)
6: mean of all trapezoid depths (one more than the average number
of comparisons needed to search through the tree)
"""
return self._cpp_trifinder.get_tree_stats()
def _initialize(self):
"""
Initialize the underlying C++ object. Can be called multiple times if,
for example, the triangulation is modified.
"""
self._cpp_trifinder.initialize()
def _print_tree(self):
"""
Print a text representation of the node tree, which is useful for
debugging purposes.
"""
self._cpp_trifinder.print_tree()
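A short usage sketch of the public entry point described in the `TriFinder` docstring above: build a `Triangulation`, obtain its trifinder via `get_trifinder()`, and query point locations (the specific coordinates below are illustrative):

```python
import numpy as np
import matplotlib.tri as mtri

# Unit square split into two triangles along the (0,0)-(1,1) diagonal.
x = np.array([0.0, 1.0, 1.0, 0.0])
y = np.array([0.0, 0.0, 1.0, 1.0])
triangles = [[0, 1, 2], [0, 2, 3]]
tri = mtri.Triangulation(x, y, triangles)

# Preferred over instantiating TrapezoidMapTriFinder directly.
trifinder = tri.get_trifinder()

inside = int(trifinder(0.75, 0.25))   # below the diagonal -> triangle 0
outside = int(trifinder(2.0, 2.0))    # not in any triangle -> -1
```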
|
matplotlibREPO_NAMEmatplotlibPATH_START.@matplotlib_extracted@matplotlib-main@lib@matplotlib@tri@_trifinder.py@.PATH_END.py
|
{
"filename": "README.md",
"repo_name": "samarth-kashyap/hmi-clean-ls",
"repo_path": "hmi-clean-ls_extracted/hmi-clean-ls-master/README.md",
"type": "Markdown"
}
|
## Cleaning large-scale features from HMI Dopplergrams
This package lets you remove the following features from the
Dopplergram images of the Helioseismic and Magnetic Imager (HMI).
1. Effect of satellite velocity
2. Gravitational redshift
3. Large-scale features (differential rotation, meridional circulation, limb-shift)
### Attribution
This code was developed for the work [Kashyap & Hanasoge (2021)](https://arxiv.org/abs/2105.12055).
Please cite it if you find the code useful in your work. The BiBTex entry for this paper is:
```
@ARTICLE{Kashyap-Hanasoge-2021-ApJ,
author = {{Kashyap}, Samarth G. and {Hanasoge}, Shravan M.},
title = "{Characterizing Solar Surface Convection Using Doppler Measurements}",
journal = {\apj},
keywords = {Solar photosphere, Supergranulation, 1518, 1662, Astrophysics - Solar and Stellar Astrophysics},
year = 2021,
month = aug,
volume = {916},
number = {2},
eid = {87},
pages = {87},
doi = {10.3847/1538-4357/ac05bc},
archivePrefix = {arXiv},
eprint = {2105.12055},
primaryClass = {astro-ph.SR},
adsurl = {https://ui.adsabs.harvard.edu/abs/2021ApJ...916...87K},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
```
|
samarth-kashyapREPO_NAMEhmi-clean-lsPATH_START.@hmi-clean-ls_extracted@hmi-clean-ls-master@README.md@.PATH_END.py
|
{
"filename": "_xpad.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/validators/scatterpolargl/marker/colorbar/_xpad.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class XpadValidator(_plotly_utils.basevalidators.NumberValidator):
def __init__(
self, plotly_name="xpad", parent_name="scatterpolargl.marker.colorbar", **kwargs
):
super(XpadValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "calc"),
min=kwargs.pop("min", 0),
role=kwargs.pop("role", "style"),
**kwargs
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py2@plotly@validators@scatterpolargl@marker@colorbar@_xpad.py@.PATH_END.py
|
{
"filename": "test_baseten.py",
"repo_name": "langchain-ai/langchain",
"repo_path": "langchain_extracted/langchain-master/libs/community/tests/integration_tests/llms/test_baseten.py",
"type": "Python"
}
|
"""Test Baseten API wrapper."""
import os
from langchain_community.llms.baseten import Baseten
# This test requires valid BASETEN_MODEL_ID and BASETEN_API_KEY environment variables
def test_baseten_call() -> None:
"""Test valid call to Baseten."""
llm = Baseten(model=os.environ["BASETEN_MODEL_ID"]) # type: ignore[call-arg]
output = llm.invoke("Test prompt, please respond.")
assert isinstance(output, str)
|
langchain-aiREPO_NAMElangchainPATH_START.@langchain_extracted@langchain-master@libs@community@tests@integration_tests@llms@test_baseten.py@.PATH_END.py
|
{
"filename": "vectors.py",
"repo_name": "hannorein/REBOUND",
"repo_path": "REBOUND_extracted/REBOUND-main/rebound/vectors.py",
"type": "Python"
}
|
from ctypes import Structure, c_double, byref
class Vec6d(Structure):
_fields_ = [("x", c_double),
("y", c_double),
("z", c_double),
("vx", c_double),
("vy", c_double),
("vz", c_double)]
class Vec3dBasic(Structure):
"""
    Internal use only. Not used as Vec3d directly because assignments to numpy arrays don't work.
"""
_fields_ = [("x", c_double),
("y", c_double),
("z", c_double)]
class Vec3d:
"""
Class for 3D Cartesian vectors.
"""
_vec3d = None
@property
def __array_interface__(self):
return {"shape": (3,), "typestr": "<f8", "data": self._vec3d}
def __init__(self, *args):
if len(args) == 1:
vec = args[0]
if isinstance(vec,Vec3dBasic):
vec = [vec.x, vec.y, vec.z]
elif isinstance(vec,str):
vec = vec.lower()
if vec != "x" and vec !="y" and vec != "z":
raise ValueError("When passing a string to create a Vec3D, it needs to be one of 'x', 'y', or 'z'")
if vec == "x":
vec = [1,0,0]
elif vec == "y":
vec = [0,1,0]
                elif vec == "z":
                    vec = [0,0,1]
else:
vec = [float(vec[0]), float(vec[1]), float(vec[2])]
elif len(args) >= 3:
vec = [float(args[0]), float(args[1]), float(args[2])]
self._vec3d =Vec3dBasic(vec[0],vec[1],vec[2])
def __mul__(self, other):
try:
return Vec3d([self.x*other, self.y*other, self.z*other])
except:
return NotImplemented
def __truediv__(self, other):
if other==0.:
raise ZeroDivisionError
try:
return Vec3d([self.x/other, self.y/other, self.z/other])
except:
return NotImplemented
def __add__(self, other):
try:
            o = Vec3d(other)
            return Vec3d([self[0]+o[0], self[1]+o[1], self[2]+o[2]])
except:
return NotImplemented
def __sub__(self, other):
try:
            o = Vec3d(other)
            return Vec3d([self[0]-o[0], self[1]-o[1], self[2]-o[2]])
except:
return NotImplemented
def rotate(self, q):
if not isinstance(q, Rotation):
raise NotImplementedError
        clibrebound.reb_vec3d_irotate(byref(self._vec3d), q)
return self
def normalize(self):
clibrebound.reb_vec3d_normalize.restype = Vec3dBasic
r = clibrebound.reb_vec3d_normalize(self._vec3d)
        self._vec3d = r
return self
def __getitem__(self, key):
if not isinstance(key, int):
raise IndexError("Index must be an integer.")
if key < 0 or key >= 3:
raise IndexError("Vec3d has exactly three elements and can therefore not access the item with index "+str(key)+".")
if key == 0:
return self._vec3d.x
if key == 1:
return self._vec3d.y
if key == 2:
return self._vec3d.z
def __setitem__(self, key, value):
if not isinstance(key, int):
raise IndexError("Index must be an integer.")
if key < 0 or key >= 3:
raise IndexError("Vec3d has exactly three elements and can therefore not access the item with index "+str(key)+".")
if key == 0:
self._vec3d.x = c_double(value)
if key == 1:
self._vec3d.y = c_double(value)
if key == 2:
self._vec3d.z = c_double(value)
@property
def x(self):
return self._vec3d.x
@x.setter
def x(self, v):
self._vec3d.x = v
@property
def y(self):
return self._vec3d.y
@y.setter
def y(self, v):
self._vec3d.y = v
@property
def z(self):
return self._vec3d.z
@z.setter
def z(self, v):
self._vec3d.z = v
def __repr__(self):
return '<{0}.{1} object at {2}, [{3}, {4}, {5}]>'.format(self.__module__, type(self).__name__, hex(id(self)), self._vec3d.x, self._vec3d.y, self._vec3d.z)
|
hannoreinREPO_NAMEREBOUNDPATH_START.@REBOUND_extracted@REBOUND-main@rebound@vectors.py@.PATH_END.py
|
{
"filename": "ImageDataLoader.md",
"repo_name": "tensorflow/tensorflow",
"repo_path": "tensorflow_extracted/tensorflow-master/tensorflow/lite/g3doc/api_docs/python/tflite_model_maker/searcher/ImageDataLoader.md",
"type": "Markdown"
}
|
page_type: reference
description: DataLoader class for Image Searcher Task.
<link rel="stylesheet" href="/site-assets/css/style.css">
<!-- DO NOT EDIT! Automatically generated file. -->
<div itemscope itemtype="http://developers.google.com/ReferenceObject">
<meta itemprop="name" content="tflite_model_maker.searcher.ImageDataLoader" />
<meta itemprop="path" content="Stable" />
<meta itemprop="property" content="__init__"/>
<meta itemprop="property" content="__len__"/>
<meta itemprop="property" content="append"/>
<meta itemprop="property" content="create"/>
<meta itemprop="property" content="load_from_folder"/>
</div>
# tflite_model_maker.searcher.ImageDataLoader
<!-- Insert buttons and diff -->
<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/tensorflow_examples/lite/model_maker/core/data_util/image_searcher_dataloader.py#L33-L155">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub
</a>
</td>
</table>
DataLoader class for Image Searcher Task.
Inherits From: [`DataLoader`](../../tflite_model_maker/searcher/DataLoader)
<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>tflite_model_maker.searcher.ImageDataLoader(
embedder: image_embedder.ImageEmbedder,
metadata_type: <a href="../../tflite_model_maker/searcher/MetadataType"><code>tflite_model_maker.searcher.MetadataType</code></a> = <a href="../../tflite_model_maker/searcher/MetadataType#FROM_FILE_NAME"><code>tflite_model_maker.searcher.MetadataType.FROM_FILE_NAME</code></a>
) -> None
</code></pre>
<!-- Placeholder for "Used in" -->
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Args</h2></th></tr>
<tr>
<td>
`embedder`<a id="embedder"></a>
</td>
<td>
Embedder to generate embedding from raw input image.
</td>
</tr><tr>
<td>
`metadata_type`<a id="metadata_type"></a>
</td>
<td>
Type of MetadataLoader to load metadata for each input
data. By default, load the file name as metadata for each input data.
</td>
</tr>
</table>
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Attributes</h2></th></tr>
<tr>
<td>
`dataset`<a id="dataset"></a>
</td>
<td>
Gets the dataset.
Due to performance considerations, we don't return a copy, but the returned
`self._dataset` should never be changed.
</td>
</tr><tr>
<td>
`embedder_path`<a id="embedder_path"></a>
</td>
<td>
Gets the path to the TFLite Embedder model file.
</td>
</tr><tr>
<td>
`metadata`<a id="metadata"></a>
</td>
<td>
Gets the metadata.
</td>
</tr>
</table>
## Methods
<h3 id="append"><code>append</code></h3>
<a target="_blank" class="external" href="https://github.com/tensorflow/examples/blob/master/tensorflow_examples/lite/model_maker/core/data_util/searcher_dataloader.py#L92-L106">View source</a>
<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>append(
data_loader: 'DataLoader'
) -> None
</code></pre>
Appends the dataset.
Don't check if embedders from the two data loader are the same in this
function. Users are responsible to keep the embedder identical.
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Args</th></tr>
<tr>
<td>
`data_loader`
</td>
<td>
The data loader in which the data will be appended.
</td>
</tr>
</table>
<h3 id="create"><code>create</code></h3>
<a target="_blank" class="external" href="https://github.com/tensorflow/examples/blob/master/tensorflow_examples/lite/model_maker/core/data_util/image_searcher_dataloader.py#L60-L92">View source</a>
<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>@classmethod</code>
<code>create(
image_embedder_path: str,
metadata_type: <a href="../../tflite_model_maker/searcher/MetadataType"><code>tflite_model_maker.searcher.MetadataType</code></a> = <a href="../../tflite_model_maker/searcher/MetadataType#FROM_FILE_NAME"><code>tflite_model_maker.searcher.MetadataType.FROM_FILE_NAME</code></a>,
l2_normalize: bool = False
) -> 'DataLoader'
</code></pre>
Creates DataLoader for the Image Searcher task.
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Args</th></tr>
<tr>
<td>
`image_embedder_path`
</td>
<td>
Path to the ".tflite" image embedder model.
</td>
</tr><tr>
<td>
`metadata_type`
</td>
<td>
Type of MetadataLoader to load metadata for each input
image based on image path. By default, load the file name as metadata
for each input image.
</td>
</tr><tr>
<td>
`l2_normalize`
</td>
<td>
Whether to normalize the returned feature vector with L2
norm. Use this option only if the model does not already contain a
native L2_NORMALIZATION TF Lite Op. In most cases, this is already the
case and L2 norm is thus achieved through TF Lite inference.
</td>
</tr>
</table>
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Returns</th></tr>
<tr class="alt">
<td colspan="2">
DataLoader object created for the Image Searcher task.
</td>
</tr>
</table>
<h3 id="load_from_folder"><code>load_from_folder</code></h3>
<a target="_blank" class="external" href="https://github.com/tensorflow/examples/blob/master/tensorflow_examples/lite/model_maker/core/data_util/image_searcher_dataloader.py#L94-L155">View source</a>
<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>load_from_folder(
path: str, mode: str = 'r'
) -> None
</code></pre>
Loads image data from folder.
Users can load images from different folders one by one. For instance,
```
# Creates data_loader instance.
data_loader = image_searcher_dataloader.DataLoader.create(tflite_path)
# Loads images, first from `image_path1` and secondly from `image_path2`.
data_loader.load_from_folder(image_path1)
data_loader.load_from_folder(image_path2)
```
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Args</th></tr>
<tr>
<td>
`path`
</td>
<td>
image directory to be loaded.
</td>
</tr><tr>
<td>
`mode`
</td>
<td>
Mode in which the file is opened. Used when `metadata_type` is
`FROM_DAT_FILE`. Only 'r' and 'rb' are supported: 'r' means opening for
reading, 'rb' means opening for reading binary.
</td>
</tr>
</table>
<h3 id="__len__"><code>__len__</code></h3>
<a target="_blank" class="external" href="https://github.com/tensorflow/examples/blob/master/tensorflow_examples/lite/model_maker/core/data_util/searcher_dataloader.py#L60-L61">View source</a>
<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>__len__()
</code></pre>
|
tensorflowREPO_NAMEtensorflowPATH_START.@tensorflow_extracted@tensorflow-master@tensorflow@lite@g3doc@api_docs@python@tflite_model_maker@searcher@ImageDataLoader.md@.PATH_END.py
|
{
"filename": "test_editor_load.py",
"repo_name": "matplotlib/viscm",
"repo_path": "viscm_extracted/viscm-main/test/test_editor_load.py",
"type": "Python"
}
|
import json
import numpy as np
import pytest
from viscm.gui import Colormap, viscm_editor
def approxeq(x, y, *, err=0.0001):
return abs(y - x) < err
@pytest.mark.parametrize(
"colormap_file",
[
"viscm/examples/sample_linear.jscm",
"viscm/examples/sample_diverging.jscm",
"viscm/examples/sample_diverging_continuous.jscm",
],
)
class TestEditorLoad:
def expected(self, colormap_file):
with open(colormap_file) as f:
exp = json.loads(f.read())
return exp
def actual(self, colormap_file):
cm = Colormap(None, "CatmulClark", "CAM02-UCS")
cm.load(colormap_file)
act = viscm_editor(
uniform_space=cm.uniform_space,
cmtype=cm.cmtype,
method=cm.method,
**cm.params,
)
return act
def test_editor_loads_jscm_parameters_match(self, colormap_file):
expected = self.expected(colormap_file)
actual = self.actual(colormap_file)
assert actual.name == expected["name"]
extensions = expected["extensions"]["https://matplotlib.org/viscm"]
xp, yp, fixed = actual.control_point_model.get_control_points()
assert extensions["fixed"] == fixed
assert len(extensions["xp"]) == len(xp)
assert len(extensions["yp"]) == len(yp)
assert len(xp) == len(yp)
for i in range(len(xp)):
assert extensions["xp"][i] == xp[i]
assert extensions["yp"][i] == yp[i]
assert extensions["min_Jp"] == actual.min_Jp
assert extensions["max_Jp"] == actual.max_Jp
assert extensions["filter_k"] == actual.cmap_model.filter_k
assert extensions["cmtype"] == actual.cmtype
@pytest.mark.xfail(reason="Test very old; intent unclear")
def test_editor_loads_jscm_data_match(self, colormap_file):
expected = self.expected(colormap_file)
actual = self.actual(colormap_file)
# Decode hexadecimal-encoded colormap string (grouped in units of 3 pairs of
# two-character [00-ff / 0-255] values) to 3-tuples of floats (0-1).
expected_colors_hex = expected["colors"]
expected_colors_hex = [
expected_colors_hex[i : i + 6]
for i in range(0, len(expected_colors_hex), 6)
]
expected_colors = [
[int(c[i : i + 2], 16) / 255 for i in range(0, len(c), 2)]
for c in expected_colors_hex
]
actual_colors = actual.cmap_model.get_sRGB(num=256)[0].tolist()
for i in range(len(expected_colors)):
for z in range(3):
# FIXME: The right-hand side of this comparison will always be 0.
# https://github.com/matplotlib/viscm/pull/66#discussion_r1213818015
assert actual_colors[i][z] == np.rint(expected_colors[i][z] / 256)
# Should the test look more like this?
# assert approxeq(
# expected_colors[i][z],
# actual_colors[i][z],
# err=0.005,
# )
# import matplotlib as mpl
# try:
# from matplotlib.backends.backend_qtagg import FigureCanvasQTAgg as FigureCanvas
# except ImportError:
# try:
# from matplotlib.backends.backend_qt5agg import (
# FigureCanvasQTAgg as FigureCanvas
# )
# except ImportError:
# from matplotlib.backends.backend_qt4agg import (
# FigureCanvasQTAgg as FigureCanvas
# )
# from matplotlib.backends.qt_compat import QtCore, QtGui
#
# def test_editor_add_point():
# # Testing linear
#
# fig = plt.figure()
# figure_canvas = FigureCanvas(fig)
# linear = viscm_editor(
# min_Jp=40,
# max_Jp=60,
# xp=[-10, 10],
# yp=[0,0],
# figure=fig,
# cmtype="linear",
# )
#
# Jp, ap, bp = linear.cmap_model.get_Jpapbp(3)
# eJp, eap, ebp = [40, 50, 60], [-10, 0, 10], [0, 0, 0]
# for i in range(3):
# assert approxeq(Jp[i], eJp[i])
# assert approxeq(ap[i], eap[i])
# assert approxeq(bp[i], ebp[i])
# rgb = linear.cmap_model.get_sRGB(3)[0]
# ergb = [[ 0.27446483, 0.37479529, 0.34722738],
# [ 0.44884374, 0.44012037, 0.43848162],
# [ 0.63153956, 0.49733664, 0.53352363]]
# for i in range(3):
# for z in range(3):
# assert approxeq(rgb[i][z], ergb[i][z])
# # Testing adding a point to linear
# linear.bezier_builder.mode = "add"
# qtEvent = QtGui.QMouseEvent(
# QtCore.QEvent.MouseButtonPress,
# QtCore.QPoint(),
# QtCore.Qt.LeftButton,
# QtCore.Qt.LeftButton,
# QtCore.Qt.ShiftModifier,
# )
# event = mpl.backend_bases.MouseEvent(
# "button_press_event",
# figure_canvas,
# 0,
# 10,
# guiEvent=qtEvent,
# )
# event.xdata = 0
# event.ydata = 10
# event.inaxes = linear.bezier_builder.ax
# linear.bezier_builder.on_button_press(event)
# Jp, ap, bp = linear.cmap_model.get_Jpapbp(3)
# eJp, eap, ebp = [40, 50, 60], [-10, 0, 10], [0, 5, 0]
# for i in range(3):
# assert approxeq(Jp[i], eJp[i])
# assert approxeq(ap[i], eap[i])
# assert approxeq(bp[i], ebp[i])
# rgb = linear.cmap_model.get_sRGB(3)[0]
# ergb = [[ 0.27446483, 0.37479529, 0.34722738],
# [ 0.46101392, 0.44012069, 0.38783966],
# [ 0.63153956, 0.49733664, 0.53352363]]
# for i in range(3):
# for z in range(3):
# assert approxeq(rgb[i][z], ergb[i][z])
# # Removing a point from linear
# linear.bezier_builder.mode = "remove"
# qtEvent = QtGui.QMouseEvent(
# QtCore.QEvent.MouseButtonPress,
# QtCore.QPoint(),
# QtCore.Qt.LeftButton,
# QtCore.Qt.LeftButton,
# QtCore.Qt.ControlModifier,
# )
# event = mpl.backend_bases.MouseEvent(
# "button_press_event",
# figure_canvas,
# 0,
# 10,
# guiEvent=qtEvent,
# )
# event.xdata = 0
# event.ydata = 10
# event.inaxes = linear.bezier_builder.ax
# linear.bezier_builder.on_button_press(event)
# # Jp, ap, bp = linear.cmap_model.get_Jpapbp(3)
# # print(Jp, ap, bp)
# # print(rgb)
# # use mpl transformations
# print(linear.control_point_model.get_control_points())
# # print(linear.cmap_model.get_Jpapbp(3))
|
matplotlibREPO_NAMEviscmPATH_START.@viscm_extracted@viscm-main@test@test_editor_load.py@.PATH_END.py
|
{
"filename": "_font.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py2/plotly/validators/pointcloud/hoverlabel/_font.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class FontValidator(_plotly_utils.basevalidators.CompoundValidator):
def __init__(
self, plotly_name="font", parent_name="pointcloud.hoverlabel", **kwargs
):
super(FontValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
data_class_str=kwargs.pop("data_class_str", "Font"),
data_docs=kwargs.pop(
"data_docs",
"""
color
colorsrc
Sets the source reference on Chart Studio Cloud
for color .
family
HTML font family - the typeface that will be
applied by the web browser. The web browser
will only be able to apply a font if it is
available on the system which it operates.
Provide multiple font families, separated by
commas, to indicate the preference in which to
apply fonts if they aren't available on the
system. The Chart Studio Cloud (at
https://chart-studio.plotly.com or on-premise)
generates images on a server, where only a
select number of fonts are installed and
supported. These include "Arial", "Balto",
            "Courier New", "Droid Sans", "Droid Serif",
"Droid Sans Mono", "Gravitas One", "Old
Standard TT", "Open Sans", "Overpass", "PT Sans
Narrow", "Raleway", "Times New Roman".
familysrc
Sets the source reference on Chart Studio Cloud
for family .
size
sizesrc
Sets the source reference on Chart Studio Cloud
for size .
""",
),
**kwargs
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py2@plotly@validators@pointcloud@hoverlabel@_font.py@.PATH_END.py
|
{
"filename": "triangle.py",
"repo_name": "GabrielaCR/AGNfitter",
"repo_path": "AGNfitter_extracted/AGNfitter-master/functions/triangle.py",
"type": "Python"
}
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import print_function, absolute_import, unicode_literals
__all__ = ["corner", "hist2d", "error_ellipse"]
__version__ = "0.0.6"
__author__ = "Dan Foreman-Mackey (danfm@nyu.edu)"
__copyright__ = "Copyright 2013 Daniel Foreman-Mackey"
__contributors__ = [
# Alphabetical by first name.
"Adrian Price-Whelan @adrn",
"Brendon Brewer @eggplantbren",
"Ekta Patel @ekta1224",
"Emily Rice @emilurice",
"Geoff Ryan @geoffryan",
"Kyle Barbary @kbarbary",
"Phil Marshall @drphilmarshall",
"Pierre Gratier @pirg",
]
import numpy as np
import matplotlib.pyplot as pl
from matplotlib.ticker import MaxNLocator
from matplotlib.colors import LinearSegmentedColormap
from matplotlib.patches import Ellipse
import matplotlib.cm as cm
def corner(xs, weights=None, labels=None, extents=None, truths=None,
truth_color="#4682b4", scale_hist=False, quantiles=[],
verbose=False, plot_contours=True, plot_datapoints=True,
fig=None, **kwargs):
"""
Make a *sick* corner plot showing the projections of a data set in a
multi-dimensional space. kwargs are passed to hist2d() or used for
`matplotlib` styling.
Parameters
----------
xs : array_like (nsamples, ndim)
The samples. This should be a 1- or 2-dimensional array. For a 1-D
array this results in a simple histogram. For a 2-D array, the zeroth
axis is the list of samples and the next axis are the dimensions of
the space.
weights : array_like (nsamples,)
The weight of each sample. If `None` (default), samples are given
equal weight.
labels : iterable (ndim,) (optional)
A list of names for the dimensions.
extents : iterable (ndim,) (optional)
A list where each element is either a length 2 tuple containing
lower and upper bounds (extents) or a float in range (0., 1.)
giving the fraction of samples to include in bounds, e.g.,
[(0.,10.), (1.,5), 0.999, etc.].
If a fraction, the bounds are chosen to be equal-tailed.
truths : iterable (ndim,) (optional)
A list of reference values to indicate on the plots.
truth_color : str (optional)
A ``matplotlib`` style color for the ``truths`` makers.
scale_hist : bool (optional)
Should the 1-D histograms be scaled in such a way that the zero line
is visible?
quantiles : iterable (optional)
A list of fractional quantiles to show on the 1-D histograms as
vertical dashed lines.
verbose : bool (optional)
If true, print the values of the computed quantiles.
plot_contours : bool (optional)
Draw contours for dense regions of the plot.
plot_datapoints : bool (optional)
Draw the individual data points.
fig : matplotlib.Figure (optional)
Overplot onto the provided figure object.
"""
# Deal with 1D sample lists.
xs = np.atleast_1d(xs)
if len(xs.shape) == 1:
xs = np.atleast_2d(xs)
else:
assert len(xs.shape) == 2, "The input sample array must be 1- or 2-D."
xs = xs.T
assert xs.shape[0] <= xs.shape[1], "I don't believe that you want more " \
"dimensions than samples!"
if weights is not None:
weights = np.asarray(weights)
if weights.ndim != 1:
raise ValueError('weights must be 1-D')
if xs.shape[1] != weights.shape[0]:
raise ValueError('lengths of weights must match number of samples')
# backwards-compatibility
plot_contours = kwargs.get("smooth", plot_contours)
K = len(xs)
factor = 2.0 # size of one side of one panel
lbdim = 0.5 * factor # size of left/bottom margin
trdim = 0.05 * factor # size of top/right margin
whspace = 0.05 # w/hspace size
plotdim = factor * K + factor * (K - 1.) * whspace
dim = lbdim + plotdim + trdim
if fig is None:
fig, axes = pl.subplots(K, K, figsize=(dim, dim))
else:
try:
axes = np.array(fig.axes).reshape((K, K))
        except ValueError:
raise ValueError("Provided figure has {0} axes, but data has "
"dimensions K={1}".format(len(fig.axes), K))
lb = lbdim / dim
tr = (lbdim + plotdim) / dim
fig.subplots_adjust(left=lb, bottom=lb, right=tr, top=tr,
wspace=whspace, hspace=whspace)
if extents is None:
extents = [[x.min(), x.max()] for x in xs]
# Check for parameters that never change.
m = np.array([e[0] == e[1] for e in extents], dtype=bool)
if np.any(m):
raise ValueError(("It looks like the parameter(s) in column(s) "
"{0} have no dynamic range. Please provide an "
"`extent` argument.")
.format(", ".join(map("{0}".format,
np.arange(len(m))[m]))))
else:
# If any of the extents are percentiles, convert them to ranges.
for i in range(len(extents)):
try:
emin, emax = extents[i]
except TypeError:
q = [0.5 - 0.5*extents[i], 0.5 + 0.5*extents[i]]
extents[i] = quantile(xs[i], q, weights=weights)
for i, x in enumerate(xs):
ax = axes[i, i]
# Plot the histograms.
n, b, p = ax.hist(x, weights=weights, bins=kwargs.get("bins", 50),
range=extents[i], histtype="step",
color=kwargs.get("color", "k"))
if truths is not None:
ax.axvline(truths[i], color=truth_color)
# Plot quantiles if wanted.
if len(quantiles) > 0:
qvalues = quantile(x, quantiles, weights=weights)
for q in qvalues:
ax.axvline(q, ls="dashed", color=kwargs.get("color", "k"))
if verbose:
print("Quantiles:")
print(list(zip(quantiles, qvalues)))
# Set up the axes.
ax.set_xlim(extents[i])
if scale_hist:
maxn = np.max(n)
ax.set_ylim(-0.1 * maxn, 1.1 * maxn)
else:
ax.set_ylim(0, 1.1 * np.max(n))
ax.set_yticklabels([])
ax.xaxis.set_major_locator(MaxNLocator(5))
# Not so DRY.
if i < K - 1:
ax.set_xticklabels([])
else:
[l.set_rotation(45) for l in ax.get_xticklabels()]
if labels is not None:
ax.set_xlabel(labels[i])
ax.xaxis.set_label_coords(0.5, -0.3)
for j, y in enumerate(xs):
ax = axes[i, j]
if j > i:
ax.set_visible(False)
ax.set_frame_on(False)
continue
elif j == i:
continue
hist2d(y, x, ax=ax, extent=[extents[j], extents[i]],
plot_contours=plot_contours,
plot_datapoints=plot_datapoints,
weights=weights, **kwargs)
if truths is not None:
ax.plot(truths[j], truths[i], "s", color=truth_color)
ax.axvline(truths[j], color=truth_color)
ax.axhline(truths[i], color=truth_color)
ax.xaxis.set_major_locator(MaxNLocator(5))
ax.yaxis.set_major_locator(MaxNLocator(5))
if i < K - 1:
ax.set_xticklabels([])
else:
[l.set_rotation(45) for l in ax.get_xticklabels()]
if labels is not None:
ax.set_xlabel(labels[j])
ax.xaxis.set_label_coords(0.5, -0.3)
if j > 0:
ax.set_yticklabels([])
else:
[l.set_rotation(45) for l in ax.get_yticklabels()]
if labels is not None:
ax.set_ylabel(labels[i])
ax.yaxis.set_label_coords(-0.3, 0.5)
return fig
def quantile(x, q, weights=None):
"""
Like numpy.percentile, but:
* Values of q are quantiles [0., 1.] rather than percentiles [0., 100.]
* scalar q not supported (q must be iterable)
* optional weights on x
"""
if weights is None:
return np.percentile(x, [100. * qi for qi in q])
else:
idx = np.argsort(x)
xsorted = x[idx]
cdf = np.add.accumulate(weights[idx])
cdf /= cdf[-1]
return np.interp(q, cdf, xsorted).tolist()
def error_ellipse(mu, cov, ax=None, factor=1.0, **kwargs):
"""
Plot the error ellipse at a point given its covariance matrix.
"""
# some sane defaults
facecolor = kwargs.pop('facecolor', 'none')
edgecolor = kwargs.pop('edgecolor', 'k')
x, y = mu
U, S, V = np.linalg.svd(cov)
theta = np.degrees(np.arctan2(U[1, 0], U[0, 0]))
ellipsePlot = Ellipse(xy=[x, y],
width=2 * np.sqrt(S[0]) * factor,
height=2 * np.sqrt(S[1]) * factor,
angle=theta,
facecolor=facecolor, edgecolor=edgecolor, **kwargs)
if ax is None:
ax = pl.gca()
ax.add_patch(ellipsePlot)
return ellipsePlot
def hist2d(x, y, *args, **kwargs):
"""
Plot a 2-D histogram of samples.
"""
ax = kwargs.pop("ax", pl.gca())
extent = kwargs.pop("extent", [[x.min(), x.max()], [y.min(), y.max()]])
bins = kwargs.pop("bins", 50)
color = kwargs.pop("color", "k")
linewidths = kwargs.pop("linewidths", None)
plot_datapoints = kwargs.get("plot_datapoints", True)
plot_contours = kwargs.get("plot_contours", True)
cmap = cm.get_cmap("gray")
cmap._init()
cmap._lut[:-3, :-1] = 0.
cmap._lut[:-3, -1] = np.linspace(1, 0, cmap.N)
X = np.linspace(extent[0][0], extent[0][1], bins + 1)
Y = np.linspace(extent[1][0], extent[1][1], bins + 1)
try:
H, X, Y = np.histogram2d(x.flatten(), y.flatten(), bins=(X, Y),
weights=kwargs.get('weights', None))
except ValueError:
raise ValueError("It looks like at least one of your sample columns "
"have no dynamic range. You could try using the "
"`extent` argument.")
V = 1.0 - np.exp(-0.5 * np.arange(0.5, 2.1, 0.5) ** 2)
Hflat = H.flatten()
inds = np.argsort(Hflat)[::-1]
Hflat = Hflat[inds]
sm = np.cumsum(Hflat)
sm /= sm[-1]
for i, v0 in enumerate(V):
try:
V[i] = Hflat[sm <= v0][-1]
        except IndexError:
V[i] = Hflat[0]
X1, Y1 = 0.5 * (X[1:] + X[:-1]), 0.5 * (Y[1:] + Y[:-1])
X, Y = X[:-1], Y[:-1]
if plot_datapoints:
ax.plot(x, y, "o", color=color, ms=1.5, zorder=-1, alpha=0.1,
rasterized=True)
if plot_contours:
ax.contourf(X1, Y1, H.T, [V[-1], H.max()],
cmap=LinearSegmentedColormap.from_list("cmap",
([1] * 3,
[1] * 3),
N=2), antialiased=False)
if plot_contours:
ax.pcolor(X, Y, H.max() - H.T, cmap=cmap)
ax.contour(X1, Y1, H.T, V, colors=color, linewidths=linewidths)
data = np.vstack([x, y])
mu = np.mean(data, axis=1)
cov = np.cov(data)
if kwargs.pop("plot_ellipse", False):
error_ellipse(mu, cov, ax=ax, edgecolor="r", ls="dashed")
ax.set_xlim(extent[0])
ax.set_ylim(extent[1])
|
GabrielaCRREPO_NAMEAGNfitterPATH_START.@AGNfitter_extracted@AGNfitter-master@functions@triangle.py@.PATH_END.py
|
{
"filename": "__init__.py",
"repo_name": "StingraySoftware/stingray",
"repo_path": "stingray_extracted/stingray-main/stingray/simulator/tests/__init__.py",
"type": "Python"
}
|
import warnings
warnings.filterwarnings(
action="ignore", message=r".*RuntimeWarning: (under|over)flow encountered in .*"
)
warnings.filterwarnings(
action="ignore", message=r".*RuntimeWarning: divide by zero encountered in .*"
)
|
StingraySoftwareREPO_NAMEstingrayPATH_START.@stingray_extracted@stingray-main@stingray@simulator@tests@__init__.py@.PATH_END.py
|
{
"filename": "_locationssrc.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/scattergeo/_locationssrc.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class LocationssrcValidator(_plotly_utils.basevalidators.SrcValidator):
def __init__(self, plotly_name="locationssrc", parent_name="scattergeo", **kwargs):
super(LocationssrcValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "none"),
**kwargs,
)
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@scattergeo@_locationssrc.py@.PATH_END.py
|
{
"filename": "__init__.py",
"repo_name": "scikit-learn/scikit-learn",
"repo_path": "scikit-learn_extracted/scikit-learn-main/sklearn/datasets/tests/data/openml/id_2/__init__.py",
"type": "Python"
}
|
scikit-learnREPO_NAMEscikit-learnPATH_START.@scikit-learn_extracted@scikit-learn-main@sklearn@datasets@tests@data@openml@id_2@__init__.py@.PATH_END.py
|
|
{
"filename": "Units.ipynb",
"repo_name": "TeamLEGWORK/LEGWORK",
"repo_path": "LEGWORK_extracted/LEGWORK-main/docs/notebooks/Units.ipynb",
"type": "Jupyter Notebook"
}
|
# Understanding units in LEGWORK
```python
import legwork.source as source
import astropy.units as u
import numpy as np
```
## What units can I use?
We follow the [Standard Units](https://docs.astropy.org/en/stable/units/standard_units.html) defined in Astropy. This means that
- **lengths** are defined in terms of **metres** (or equivalent units)
- **masses** are defined in terms of **kilograms** (or equivalent units)
- **times** are defined in terms of **seconds** (or equivalent units)
However, if you're planning to try to measure the gravitational waves from a source for which kilograms is a sensible unit for the mass, I've got some bad news for you...
Therefore, for `LEGWORK` you are most likely to focus on the following units:
- mass: $\rm M_{\odot}$, accessed via `u.Msun`
- frequency: $\rm Hz$, accessed via `u.Hz`
- distance: $\rm kpc, Mpc, Gpc$, accessed via `u.kpc`, `u.Mpc`, `u.Gpc`
- separation: $\rm AU$, accessed via `u.AU` or perhaps $\rm R_{\odot}$, accessed via `u.Rsun`
- ages: $\rm yr, Myr, Gyr$, accessed via `u.yr`, `u.Myr`, `u.Gyr`
But that doesn't mean you *have* to use these units because of the flexibility of Astropy. ``LEGWORK`` will accept any equivalent unit to those listed above.
Astropy provides a very convenient method for getting equivalent units. Say you know you could input the mass of a source in kilograms but you know that this isn't the best unit. You can find some equivalent choices by running
```python
u.kg.find_equivalent_units()
```
<table style="width:50%"><tr><th>Primary name</th><th>Unit definition</th><th>Aliases</th></tr><tr><td>M_e</td><td>9.10938e-31 kg</td><td></td></tr><tr><td>M_p</td><td>1.67262e-27 kg</td><td></td></tr><tr><td>earthMass</td><td>5.97217e+24 kg</td><td>M_earth, Mearth</td></tr><tr><td>g</td><td>0.001 kg</td><td>gram</td></tr><tr><td>jupiterMass</td><td>1.89812e+27 kg</td><td>M_jup, Mjup, M_jupiter, Mjupiter</td></tr><tr><td>kg</td><td>irreducible</td><td>kilogram</td></tr><tr><td>solMass</td><td>1.98841e+30 kg</td><td>M_sun, Msun</td></tr><tr><td>t</td><td>1000 kg</td><td>tonne</td></tr><tr><td>u</td><td>1.66054e-27 kg</td><td>Da, Dalton</td></tr></table>
And thus you can see that you could even use the mass of an electron (`u.M_e`) as your unit if that is your heart's desire.
## How do I give a variable units?
Okay great, so you know what unit you want to use, now you just need to apply it to a variable. Say you have a list of masses that looks like this
```python
# a list of masses
masses = [1.0, 1.4, 10, 30, 50]
print(masses)
```
[1.0, 1.4, 10, 30, 50]
and you know that each mass is in terms of solar masses. To make sure `LEGWORK` knows this you multiply your variable by the unit.
```python
masses_with_units = masses * u.Msun
print(masses_with_units)
```
[ 1. 1.4 10. 30. 50. ] solMass
And...that's it! Your input has been transformed into an Astropy Quantity rather than a simple Python list and you're good to go!
## Could you show me an example of using units with `LEGWORK` input?
Well, how could I say no when you asked so nicely? Let's create a collection of sources and get their SNRs.
```python
# let's define the primary in solar masses
m_1 = [10, 12, 30] * u.Msun
# and the secondary in electron masses (because why not)
m_2 = [1e60, 5e60, 7.5e60] * u.M_e
# then the frequencies are generally defined in terms of Hertz
f_orb = [1e-3, 1e-4, 1e-2] * u.Hz
# and the distances with kiloparsecs
dist = [1, 8, 50] * u.kpc
# finally, eccentricity has no unit
ecc = [0.7, 0.0, 0.2]
sources = source.Source(m_1=m_1, m_2=m_2, f_orb=f_orb, dist=dist, ecc=ecc, interpolate_g=False)
```
Then if we ask the class for the signal-to-noise ratio it will handle the units cleanly and fully simplify.
```python
sources.get_snr()
```
array([3.76050048e+03, 8.61372971e-01, 4.14167952e+03])
Be careful though, if you don't use correct units then you'll get a `UnitConversionError` that may be hard to isolate.
```python
try:
# give frequency units of length
f_orb = f_orb.value * u.m
# try to create a source
sources = source.Source(m_1=m_1, m_2=m_2, f_orb=f_orb, dist=dist, ecc=ecc, interpolate_g=False)
except u.UnitConversionError as error:
print(error)
```
'm(1/3) solMass(1/3) / (kg(1/3) s(2/3))' and 'AU' (length) are not convertible
## How do you convert between units?
Great work, if you've got this far then you can now provide input to `LEGWORK` with any unit of your choice.
But what about the output? `LEGWORK` tries to choose some sensible units for the output but maybe you want something else and can't for the life of you remember the difference between a kiloparsec and a light year. Never fear, Astropy has you covered!
In order to convert between units you can use the `.to()` method of an Astropy quantity. Let's get the merger times of the sources that we defined in the earlier example and convert the result to different units.
```python
# get the merger times
t_merge = sources.get_merger_time()
# by default LEGWORK uses Gyr for merger times
t_merge
```
$[1.4564593 \times 10^{-5},~0.013231627,~1.8741491 \times 10^{-8}] \; \mathrm{Gyr}$
```python
# but we could look at how many years this is
t_merge.to(u.yr)
```
$[14564.593,~13231627,~18.741491] \; \mathrm{yr}$
```python
# or maybe you just really want to know how many fortnights until your favourite source merges
t_merge.to(u.fortnight)
```
$[379979.83,~3.4520369 \times 10^{8},~488.95211] \; \mathrm{fortnight}$
You can also convert to any *combination* of units as long as they simplify to an equivalent unit.
```python
# let's convert to a combination of units
t_merge.to(u.yr * u.Msun / u.kg)
```
$[7.3247438 \times 10^{-27},~6.6543759 \times 10^{-24},~9.425366 \times 10^{-30}] \; \mathrm{\frac{M_{\odot}\,yr}{kg}}$
Beware though: if you try to convert to a unit that isn't equivalent, you'll get a `UnitConversionError`
```python
try:
t_merge.to(u.yr * u.Msun)
except u.UnitConversionError as error:
print(error)
```
'Gyr' (time) and 'solMass yr' are not convertible
## How do I decompose a variable's value and unit?
So you've got the result, and now the pesky unit is getting in the way of saving it or passing it to another of your functions. If you want to get the value back then just use `.value` like this
```python
masses_with_units = [1.0, 1.4, 10, 30, 50] * u.Msun
print(masses_with_units)
print(masses_with_units.value)
```
[ 1. 1.4 10. 30. 50. ] solMass
[ 1. 1.4 10. 30. 50. ]
You can also use `.unit` to get the unit of a variable (this can be very useful when plotting and labelling axes)
```python
print(masses_with_units)
print(masses_with_units.unit)
```
[ 1. 1.4 10. 30. 50. ] solMass
solMass
That's all for this tutorial, be sure to check out [the other ones](../tutorials.rst) to find *other* ways to keep your feet up and let us do the `LEGWORK`! If you still have questions about units we recommend that you take a look at the [Astropy documentation](https://docs.astropy.org/en/stable/units/) directly.
|
TeamLEGWORKREPO_NAMELEGWORKPATH_START.@LEGWORK_extracted@LEGWORK-main@docs@notebooks@Units.ipynb@.PATH_END.py
|
{
"filename": "_size.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/densitymap/colorbar/tickfont/_size.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class SizeValidator(_plotly_utils.basevalidators.NumberValidator):
def __init__(
self, plotly_name="size", parent_name="densitymap.colorbar.tickfont", **kwargs
):
super(SizeValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "colorbars"),
min=kwargs.pop("min", 1),
**kwargs,
)
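The `kwargs.pop("edit_type", "colorbars")` idiom above lets a caller override the default while everything else is forwarded untouched. A stripped-down, hypothetical sketch of the same pattern:

```python
def make_config(**kwargs):
    # pop with a default: an explicit caller value wins, otherwise "colorbars" applies
    edit_type = kwargs.pop("edit_type", "colorbars")
    return {"edit_type": edit_type, **kwargs}

print(make_config(min=1))             # {'edit_type': 'colorbars', 'min': 1}
print(make_config(edit_type="plot"))  # {'edit_type': 'plot'}
```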
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@densitymap@colorbar@tickfont@_size.py@.PATH_END.py
|
{
"filename": "_dtick.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/scattersmith/marker/colorbar/_dtick.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class DtickValidator(_plotly_utils.basevalidators.AnyValidator):
def __init__(
self, plotly_name="dtick", parent_name="scattersmith.marker.colorbar", **kwargs
):
super(DtickValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "colorbars"),
implied_edits=kwargs.pop("implied_edits", {"tickmode": "linear"}),
**kwargs,
)
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@scattersmith@marker@colorbar@_dtick.py@.PATH_END.py
|
{
"filename": "_weightsrc.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/image/hoverlabel/font/_weightsrc.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class WeightsrcValidator(_plotly_utils.basevalidators.SrcValidator):
def __init__(
self, plotly_name="weightsrc", parent_name="image.hoverlabel.font", **kwargs
):
super(WeightsrcValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "none"),
**kwargs,
)
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@image@hoverlabel@font@_weightsrc.py@.PATH_END.py
|
{
"filename": "__init__.py",
"repo_name": "keras-team/keras",
"repo_path": "keras_extracted/keras-master/keras/api/mixed_precision/__init__.py",
"type": "Python"
}
|
"""DO NOT EDIT.
This file was autogenerated. Do not edit it by hand,
since your modifications would be overwritten.
"""
from keras.src.dtype_policies.dtype_policy import DTypePolicy
from keras.src.dtype_policies.dtype_policy import DTypePolicy as Policy
from keras.src.dtype_policies.dtype_policy import dtype_policy
from keras.src.dtype_policies.dtype_policy import dtype_policy as global_policy
from keras.src.dtype_policies.dtype_policy import set_dtype_policy
from keras.src.dtype_policies.dtype_policy import (
set_dtype_policy as set_global_policy,
)
from keras.src.optimizers.loss_scale_optimizer import LossScaleOptimizer
|
keras-teamREPO_NAMEkerasPATH_START.@keras_extracted@keras-master@keras@api@mixed_precision@__init__.py@.PATH_END.py
|
{
"filename": "MoveEngine.py",
"repo_name": "dokester/BayesicFitting",
"repo_path": "BayesicFitting_extracted/BayesicFitting-master/BayesicFitting/source/MoveEngine.py",
"type": "Python"
}
|
import numpy as numpy
import math
from . import Tools
from .Formatter import formatter as fmt
from .OrderEngine import OrderEngine
__author__ = "Do Kester"
__year__ = 2023
__license__ = "GPL3"
__version__ = "3.2.0"
__url__ = "https://www.bayesicfitting.nl"
__status__ = "Alpha"
# * This file is part of the BayesicFitting package.
# *
# * BayesicFitting is free software: you can redistribute it and/or modify
# * it under the terms of the GNU Lesser General Public License as
# * published by the Free Software Foundation, either version 3 of
# * the License, or ( at your option ) any later version.
# *
# * BayesicFitting is distributed in the hope that it will be useful,
# * but WITHOUT ANY WARRANTY; without even the implied warranty of
# * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# * GNU Lesser General Public License for more details.
# *
# * The GPL3 license can be found at <http://www.gnu.org/licenses/>.
# *
# * 2018 - 2023 Do Kester
class MoveEngine( OrderEngine ):
"""
The MoveEngine tries to move a selection of the parameters to another spot.
Input order : [0,1,2,3,4,5,6,7,8,9]
output order: [4,5,6,7,1,2,3,8,9,0]
It belongs to the class of generalized travelling salesman problems
where the parameters of the problem is an ordered list.
The walker is kept when the logLikelihood > lowLhood
Attributes from Engine
----------------------
walkers, errdis, maxtrials, nstep, slow, rng, report, phantoms, verbose
Author Do Kester.
"""
# *********CONSTRUCTORS***************************************************
def __init__( self, walkers, errdis, copy=None, **kwargs ) :
"""
Constructor.
Parameters
----------
walkers : SampleList
walkers to be diffused
errdis : ErrorDistribution
error distribution to be used
copy : OrderEngine
to be copied
kwargs : dict for Engine
"phantoms", "slow", "seed", "verbose"
"""
super( ).__init__( walkers, errdis, copy=copy, **kwargs )
def copy( self ):
""" Return copy of this. """
return MoveEngine( self.walkers, self.errdis, copy=self )
def __str__( self ):
return str( "MoveEngine" )
# *********EXECUTE***************************************************
def executeOnce( self, kw, lowLhood ) :
walker = self.walkers[kw]
problem = walker.problem
param = walker.allpars
src = self.rng.randint( problem.npars )
des = src
while des == src :
des = self.rng.randint( problem.npars )
mx = ( des - src ) if src < des else ( problem.npars - src )
t = min( self.rng.geometric( 0.2 ), mx )
while t > 0 :
if src < des :
kln = src + t
ptry = param[kln:des]
ptry = numpy.append( ptry, param[src:kln] )
ptry = numpy.append( ptry, param[des:] )
ptry = numpy.append( ptry, param[:src] )
else :
kln = src + t
ptry = param[kln:]
ptry = numpy.append( ptry, param[:des] )
ptry = numpy.append( ptry, param[src:kln] )
ptry = numpy.append( ptry, param[des:src] )
Ltry = self.errdis.logLikelihood( problem, ptry )
if self.verbose > 4 :
print( "Order ", src, kln, t, des, fmt( lowLhood ), fmt( Ltry ) )
print( fmt( param, max=None, format='%3d' ) )
print( fmt( ptry, max=None, format='%3d' ) )
if Ltry >= lowLhood:
self.reportSuccess( )
self.setWalker( walker.id, problem, ptry, Ltry )
## check if better than Lbest in walkers[-1]
## self.checkBest( problem, ptry, Ltry )
return t
t = self.rng.randint( t )
self.reportFailed( )
self.reportReject()
return t # nr of reordered parameters
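To make the reordering concrete, here is a hypothetical pure-Python sketch of the `src < des` branch above, reproducing the docstring's example (input `[0..9]`, `src=1`, `des=8`, `t=3`):

```python
def move_block(param, src, des, t):
    # mirrors the src < des branch of MoveEngine.executeOnce:
    # the block param[src:src+t] is moved so it ends just before index des
    kln = src + t
    return param[kln:des] + param[src:kln] + param[des:] + param[:src]

print(move_block(list(range(10)), 1, 8, 3))
# [4, 5, 6, 7, 1, 2, 3, 8, 9, 0]
```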
|
dokesterREPO_NAMEBayesicFittingPATH_START.@BayesicFitting_extracted@BayesicFitting-master@BayesicFitting@source@MoveEngine.py@.PATH_END.py
|
{
"filename": "_backoff.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/scattercarpet/line/_backoff.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class BackoffValidator(_plotly_utils.basevalidators.NumberValidator):
def __init__(
self, plotly_name="backoff", parent_name="scattercarpet.line", **kwargs
):
super(BackoffValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
array_ok=kwargs.pop("array_ok", True),
edit_type=kwargs.pop("edit_type", "plot"),
min=kwargs.pop("min", 0),
**kwargs,
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@scattercarpet@line@_backoff.py@.PATH_END.py
|
{
"filename": "process_pointings.py",
"repo_name": "JonahDW/Bayesian-dipole",
"repo_path": "Bayesian-dipole_extracted/Bayesian-dipole-main/process_pointings.py",
"type": "Python"
}
|
import os
import sys
import json
from pathlib import Path
import numpy as np
from copy import deepcopy
import cycler
import healpy as hp
from healpy.newvisufunc import projview
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
from scipy.optimize import curve_fit
from scipy.stats import poisson, kstest
import astropy.units as u
from astropy.io import fits, ascii
from astropy.table import Table, vstack
from astropy.coordinates import SkyCoord, Angle
import helpers
class PointingData:
def __init__(self, catalog, catalog_name, ra_col, dec_col, flux_col,
pointing_col, rms_col=None, peak_flux_col=None):
self.catalog = catalog
self.cat_name = catalog_name
self.ra_col = ra_col
self.dec_col = dec_col
self.flux_col = flux_col
self.peak_flux_col = peak_flux_col
self.rms_col = rms_col
self.pointing_col = pointing_col
self.NSIDE = None
self.pointings = None
self.total_sources = len(self.catalog)
self.out_figures = os.path.join('Figures',self.cat_name)
if not os.path.exists(self.out_figures):
os.mkdir(self.out_figures)
def separate_pointings(self):
'''
Build table of separate pointings from the full catalog
'''
self.pointings = Table()
catalog_by_pointings = self.catalog.group_by(self.pointing_col)
self.pointings['source_id'] = catalog_by_pointings.groups.keys[self.pointing_col]
pointing_coord = [f'{pointing[1:3]} {pointing[3:5]} {pointing[5:10]} {pointing[10:13]} {pointing[13:15]} {pointing[15:17]}'
for pointing in self.pointings['source_id']]
pointings_coord = SkyCoord(pointing_coord, unit=(u.hourangle, u.deg))
self.pointings['RA'] = pointings_coord.ra.deg
self.pointings['DEC'] = pointings_coord.dec.deg
self.pointings['theta'], self.pointings['phi'] = helpers.RADECtoTHETAPHI(pointings_coord.ra.deg,
pointings_coord.dec.deg)
l, b = helpers.equatorial_to_galactic(pointings_coord.ra.rad, pointings_coord.dec.rad)
self.pointings['l'] = np.rad2deg(l)
self.pointings['b'] = np.rad2deg(b)
self.pointings['n_sources'] = np.array([len(group) for group in catalog_by_pointings.groups])
self.total_sources = np.sum(self.pointings['n_sources'])
if self.rms_col is not None:
self.pointings['rms'] = np.array([group[0][self.rms_col] for group in catalog_by_pointings.groups])
def read_pointings_catalog(self, catalog, pointing_col, ra_col, dec_col, rms_col):
'''
Read in prepared catalog of pointings
'''
self.pointings = catalog
self.pointings['source_id'] = catalog[pointing_col]
self.pointings['RA'] = catalog[ra_col]
self.pointings['DEC'] = catalog[dec_col]
self.pointings['theta'], self.pointings['phi'] = helpers.RADECtoTHETAPHI(catalog[ra_col],
catalog[dec_col])
l, b = helpers.equatorial_to_galactic(np.deg2rad(catalog[ra_col]),
np.deg2rad(catalog[dec_col]))
self.pointings['l'] = np.rad2deg(l)
self.pointings['b'] = np.rad2deg(b)
self.pointings['n_sources'] = np.array([len(self.catalog[self.catalog[self.pointing_col] == pointing[pointing_col]])
for pointing in catalog])
self.total_sources = np.sum(self.pointings['n_sources'])
if rms_col is not None:
self.pointings['rms'] = catalog[rms_col]
def apply_cuts_sources(self, flux_cut=None, snr_cut=None, bool_col=None, exclude_col=True):
'''
Apply cuts in source catalog to prepare for a dipole measurement
Keyword arguments:
flux_cut (float) -- Lower flux density cut
snr_cut (float) -- Lower signal to noise cut
bool_col (float) -- Boolean column to select sources
exclude_col (bool) -- Whether to select include true values or false
'''
if snr_cut:
snr = self.catalog[self.peak_flux_col]/self.catalog[self.rms_col]
self.catalog = self.catalog[snr > snr_cut]
if flux_cut:
self.catalog = self.catalog[self.catalog[self.flux_col] > flux_cut]
if bool_col:
if exclude_col:
self.catalog = self.catalog[~self.catalog[bool_col]]
else:
self.catalog = self.catalog[self.catalog[bool_col]]
for pointing in self.pointings:
pointing_sources = self.catalog[self.pointing_col] == pointing['source_id']
pointing['n_sources'] = len(self.catalog[pointing_sources])
self.total_sources = np.sum(self.pointings['n_sources'])
print(f'--- Catalog {self.cat_name} loaded and prepped ---')
print(f"Number of sources after source cuts is {self.total_sources}")
def apply_cuts_pointings(self, col, low, high, include=True):
'''
Apply cuts to the pointings table to prepare for a dipole measurement
Keyword arguments:
col (str) -- Column to select on
low (float) -- Lower bound of values
high (float) -- Upper bound of values
include (bool) -- Whether to select inside or outside bounds
'''
if include:
self.pointings = self.pointings[np.logical_and(self.pointings[col] > low,
self.pointings[col] < high)]
else:
self.pointings = self.pointings[np.logical_or(self.pointings[col] < low,
self.pointings[col] > high)]
self.total_sources = np.sum(self.pointings['n_sources'])
print(f"Number of sources after {col} cuts is {self.total_sources} in {len(self.pointings)} pointings")
def completeness_counts(self, completeness_col):
'''
Calculate effective number counts
'''
for pointing in self.pointings:
pointing_sources = self.catalog[self.catalog[self.pointing_col] == pointing['source_id']]
eff_counts = np.sum(1/pointing_sources[completeness_col])
pointing['n_sources'] = eff_counts
def rms_power_law(self, rms, counts, plot=False):
def power_law(x, a, b):
return a*x**(-b)
popt, pcov = curve_fit(power_law, rms/self.sigma_ref, counts)
print(f'Fit power law with index {popt[1]:.2f} and normalization {popt[0]:.2g}')
print('Calculating mean values')
if plot:
rms_range = np.linspace((rms/self.sigma_ref).min(), (rms/self.sigma_ref).max(), 50)
plt.plot(rms_range*self.sigma_ref, power_law(rms_range, *popt), '--k')
plt.scatter(rms, counts, s=1, marker='+', color='k')
plt.xlabel('$\\sigma$ (mJy/beam)')
plt.ylabel('Counts')
plt.xscale('log')
plt.yscale('log')
plt.autoscale(enable=True, axis='x', tight=True)
plt.savefig(os.path.join(self.out_figures,f'{self.cat_name}_rms_pow.png'), dpi=300)
plt.close()
return popt[0], popt[1]
def flux_power_law(self):
'''
Fit power law to flux distribution
'''
def power_law(x, a, b):
return a*x**(-b)
flux = self.catalog[self.flux_col]
bins = np.logspace(np.log10(flux.min()), np.log10(flux.max()), 50)
bin_means = [(bins[i]+bins[i+1])/2 for i in range(len(bins)-1)]
counts, bins = np.histogram(flux, bins)
popt, pcov = curve_fit(power_law, bin_means, counts)
print(f'Fit power law with index {popt[1]:.2f} and normalization {popt[0]:.2f}')
plt.bar(bin_means, counts, width=np.diff(bins),
edgecolor='k', alpha=0.8, align='center', label='Data')
plt.plot(bins, power_law(bins, *popt),
label=f'$N \propto S^{{-{popt[1]:.2f}}}$', color='k')
plt.xscale('log')
plt.yscale('log')
plt.xlabel('Flux density (Jy)')
plt.ylabel('Counts')
plt.legend()
plt.savefig(os.path.join(self.out_figures,
f'{self.cat_name}_flux_dist.png'),
dpi=300)
plt.close()
def plot_poisson(self, counts, poisson_lambda):
bins = np.arange(counts.min(), counts.max())
poisson_prob = poisson.pmf(bins, poisson_lambda)
plt.hist(counts, bins=bins, density=True,
color='navy', label='Data')
plt.step(bins, poisson_prob, where='post',
color='crimson', label=f'Poisson ($\lambda$={poisson_lambda:.2f})')
plt.xlabel('Cell counts')
plt.legend()
plt.savefig(os.path.join(self.out_figures,
f'{self.cat_name}_poisson_counts.png'),
dpi=300)
plt.close()
def catalog_data(params, flux_cut, snr_cut, extra_fit=False, completeness=None):
'''
Prepare catalog for a dipole measurement
'''
source_catalog = Table.read(params['catalog']['path'])
data = PointingData(source_catalog, params['catalog']['name'], **params['columns'])
# Define pointing columns if present
if 'pointing_columns' in params:
pointing_catalog = Table.read(params['catalog']['pointings_path'])
data.read_pointings_catalog(pointing_catalog, **params['pointing_columns'])
else:
data.separate_pointings()
# Apply source cuts if specified
if 'source_cuts' in params:
for cut in params['source_cuts']:
data.apply_cuts_sources(**cut)
# Apply pointing cuts if specified
if 'pointing_cuts' in params:
for cut in params['pointing_cuts']:
data.apply_cuts_pointings(**cut)
if extra_fit == 'noise':
data.apply_cuts_sources(snr_cut=snr_cut)
data.sigma_ref = np.median(data.pointings['rms'])
print(f'Median rms value: {data.sigma_ref}')
flux_cut = 0
else:
data.apply_cuts_sources(flux_cut=flux_cut)
data.flux_power_law()
# Apply completeness correction if specified
if completeness:
data.completeness_counts(completeness)
return data, flux_cut
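`rms_power_law` and `flux_power_law` both fit `N = a * S**(-b)` with `scipy.optimize.curve_fit`. The same fit can be sketched in log-log space with ordinary least squares; the helper below is a hypothetical stdlib-only illustration, not part of the module:

```python
import math

def fit_power_law(s, n):
    # least-squares fit of N = a * S**(-b) in log-log space:
    # log N = log a - b * log S is linear, so a 1-D linear fit suffices
    xs = [math.log(v) for v in s]
    ys = [math.log(v) for v in n]
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return math.exp(ybar - slope * xbar), -slope  # (a, b)

a, b = fit_power_law([1, 2, 4, 8], [100.0, 25.0, 6.25, 1.5625])
print(a, b)  # close to 100.0 and 2.0 for exact N = 100 * S**-2
```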
|
JonahDWREPO_NAMEBayesian-dipolePATH_START.@Bayesian-dipole_extracted@Bayesian-dipole-main@process_pointings.py@.PATH_END.py
|
{
"filename": "viewbox.py",
"repo_name": "macrocosme/shwirl",
"repo_path": "shwirl_extracted/shwirl-master/shwirl/extern/vispy/scene/widgets/viewbox.py",
"type": "Python"
}
|
# -*- coding: utf-8 -*-
# Copyright (c) 2015, Vispy Development Team.
# Distributed under the (new) BSD License. See LICENSE.txt for more info.
from __future__ import division
import numpy as np
from .widget import Widget
from ..subscene import SubScene
from ..cameras import make_camera, BaseCamera
from ...ext.six import string_types
from ...visuals.filters import Clipper
class ViewBox(Widget):
""" Provides a rectangular widget to which its subscene is rendered.
Three classes work together when using a ViewBox:
* The :class:`SubScene` class describes a "world" coordinate system and the
entities that live inside it.
* ViewBox is a "window" through which we view the
subscene. Multiple ViewBoxes may view the same subscene.
* :class:`Camera` describes both the perspective from which the
subscene is rendered, and the way user interaction affects that
perspective.
In general it is only necessary to create the ViewBox; a SubScene and
Camera will be generated automatically.
Parameters
----------
camera : instance of Camera | str | None
The camera through which to view the SubScene. If None, then a
PanZoomCamera (2D interaction) is used. If str, then the string is
used as the argument to :func:`make_camera`.
**kwargs : dict
Extra keyword arguments to pass to `Widget`.
"""
def __init__(self, camera=None, **kwargs):
self._camera = None
self._scene = None
Widget.__init__(self, **kwargs)
self.interactive = True
# Each viewbox has an internal scene node, which has a transform that
# represents the transformation imposed by camera.
if self.name is not None:
name = str(self.name) + "_Scene"
else:
name = None
self._scene = SubScene(name=name, parent=self)
self._scene._clipper = Clipper()
self._scene.clip_children = True
self.transforms.changed.connect(self._update_scene_clipper)
# Camera is a helper object that handles scene transformation
# and user interaction.
if camera is None:
camera = 'base'
if isinstance(camera, string_types):
self.camera = make_camera(camera, parent=self.scene)
elif isinstance(camera, BaseCamera):
self.camera = camera
else:
raise TypeError('Argument "camera" must be None, str, or Camera.')
@property
def camera(self):
""" Get/set the Camera in use by this ViewBox
If a string is given (e.g. 'panzoom', 'turntable', 'fly'). A
corresponding camera is selected if it already exists in the
scene, otherwise a new camera is created.
The camera object is made a child of the scene (if it is not
already in the scene).
Multiple cameras can exist in one scene, although only one can
be active at a time. A single camera can be used by multiple
viewboxes at the same time.
"""
return self._camera
@camera.setter
def camera(self, cam):
if isinstance(cam, string_types):
# Try to select an existing camera
for child in self.scene.children:
if isinstance(child, BaseCamera):
this_cam_type = child.__class__.__name__.lower()[:-6]
if this_cam_type == cam:
self.camera = child
return
else:
# No such camera yet, create it then
self.camera = make_camera(cam)
elif isinstance(cam, BaseCamera):
# Ensure that the camera is in the scene
if not self.is_in_scene(cam):
cam.parent = self.scene
# Disconnect / connect
if self._camera is not None:
self._camera._viewbox_unset(self)
self._camera = cam
if self._camera is not None:
self._camera._viewbox_set(self)
# Update view
cam.view_changed()
else:
raise ValueError('Not a camera object.')
def is_in_scene(self, node):
"""Get whether the given node is inside the scene of this viewbox.
Parameters
----------
node : instance of Node
The node.
"""
return self.scene.is_child(node)
def get_scene_bounds(self, dim=None):
"""Get the total bounds based on the visuals present in the scene
Parameters
----------
dim : int | None
Dimension to return.
Returns
-------
bounds : list | tuple
If ``dim is None``, Returns a list of 3 tuples, otherwise
the bounds for the requested dimension.
"""
# todo: handle sub-children
# todo: handle transformations
# Init
bounds = [(np.inf, -np.inf), (np.inf, -np.inf), (np.inf, -np.inf)]
# Get bounds of all children
for ob in self.scene.children:
if hasattr(ob, 'bounds'):
for axis in (0, 1, 2):
if (dim is not None) and dim != axis:
continue
b = ob.bounds(axis)
if b is not None:
b = min(b), max(b) # Ensure correct order
bounds[axis] = (min(bounds[axis][0], b[0]),
max(bounds[axis][1], b[1]))
# Set defaults
for axis in (0, 1, 2):
if any(np.isinf(bounds[axis])):
bounds[axis] = -1, 1
if dim is not None:
return bounds[dim]
else:
return bounds
@property
def scene(self):
""" The root node of the scene viewed by this ViewBox.
"""
return self._scene
def add(self, node):
""" Add an Node to the scene for this ViewBox.
This is a convenience method equivalent to
`node.parent = viewbox.scene`
Parameters
----------
node : instance of Node
The node to add.
"""
node.parent = self.scene
def on_resize(self, event):
"""Resize event handler
Parameters
----------
event : instance of Event
The event.
"""
if self._scene is None:
# happens during init
return
self._update_scene_clipper()
def _update_scene_clipper(self, event=None):
tr = self.get_transform('visual', 'framebuffer')
self._scene._clipper.bounds = tr.map(self.inner_rect)
|
macrocosmeREPO_NAMEshwirlPATH_START.@shwirl_extracted@shwirl-master@shwirl@extern@vispy@scene@widgets@viewbox.py@.PATH_END.py
|
{
"filename": "_showscale.py",
"repo_name": "plotly/plotly.py",
"repo_path": "plotly.py_extracted/plotly.py-master/packages/python/plotly/plotly/validators/treemap/marker/_showscale.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class ShowscaleValidator(_plotly_utils.basevalidators.BooleanValidator):
def __init__(self, plotly_name="showscale", parent_name="treemap.marker", **kwargs):
super(ShowscaleValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "calc"),
**kwargs,
)
|
plotlyREPO_NAMEplotly.pyPATH_START.@plotly.py_extracted@plotly.py-master@packages@python@plotly@plotly@validators@treemap@marker@_showscale.py@.PATH_END.py
|
{
"filename": "README.md",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/restricted/boost/conversion/README.md",
"type": "Markdown"
}
|
# [Boost.Conversion](https://boost.org/libs/conversion)
Boost.Conversion is one of the [Boost C++ Libraries](https://github.com/boostorg). This library improves program safety and clarity by performing otherwise messy conversions.
### Test results
@ | Build | Tests coverage | More info
----------------|-------------- | -------------- |-----------
Develop branch: | [](https://github.com/boostorg/conversion/actions/workflows/ci.yml) [](https://ci.appveyor.com/project/apolukhin/conversion/branch/develop) | [](https://coveralls.io/github/boostorg/conversion?branch=develop) | [details...](https://www.boost.org/development/tests/develop/developer/conversion.html)
Master branch: | [](https://github.com/boostorg/conversion/actions/workflows/ci.yml) [](https://ci.appveyor.com/project/apolukhin/conversion/branch/master) | [](https://coveralls.io/github/boostorg/conversion?branch=master) | [details...](https://www.boost.org/development/tests/master/developer/conversion.html)
[Latest developer documentation](https://www.boost.org/doc/libs/develop/doc/html/conversion.html)
### License
Distributed under the [Boost Software License, Version 1.0](https://boost.org/LICENSE_1_0.txt).
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@restricted@boost@conversion@README.md@.PATH_END.py
|
{
"filename": "rotations2d.py",
"repo_name": "astropy/halotools",
"repo_path": "halotools_extracted/halotools-master/halotools/utils/rotations2d.py",
"type": "Python"
}
|
r"""
A set of vector rotation utilites for manipulating 2-dimensional vectors
"""
from __future__ import division, print_function, absolute_import, unicode_literals
import numpy as np
from .vector_utilities import (
elementwise_dot,
elementwise_norm,
normalized_vectors,
angles_between_list_of_vectors,
)
__all__ = [
"rotation_matrices_from_angles",
"rotation_matrices_from_vectors",
"rotation_matrices_from_basis",
]
__author__ = ["Duncan Campbell", "Andrew Hearin"]
def rotation_matrices_from_angles(angles):
r"""
Calculate a collection of rotation matrices defined by
an input collection of rotation angles and rotation axes.
Parameters
----------
angles : ndarray
Numpy array of shape (npts, ) storing a collection of rotation angles
Returns
-------
matrices : ndarray
Numpy array of shape (npts, 2, 2) storing a collection of rotation matrices
Examples
--------
>>> from halotools.utils.mcrotations import random_unit_vectors_2d
>>> npts = int(1e4)
>>> angles = np.random.uniform(-np.pi/2., np.pi/2., npts)
>>> rotation_matrices = rotation_matrices_from_angles(angles)
Notes
-----
The function `rotate_vector_collection` can be used to efficiently
apply the returned collection of matrices to a collection of 2d vectors
"""
angles = np.atleast_1d(angles)
npts = len(angles)
sina = np.sin(angles)
cosa = np.cos(angles)
R = np.zeros((npts, 2, 2))
R[:, 0, 0] = cosa
R[:, 1, 1] = cosa
R[:, 0, 1] = -sina
R[:, 1, 0] = sina
return R
def rotation_matrices_from_vectors(v0, v1):
r"""
Calculate a collection of rotation matrices defined by the unique
transformation rotating v0 into v1.
Parameters
----------
v0 : ndarray
Numpy array of shape (npts, 2) storing a collection of initial vector orientations.
Note that the normalization of `v0` will be ignored.
v1 : ndarray
Numpy array of shape (npts, 2) storing a collection of final vectors.
Note that the normalization of `v1` will be ignored.
Returns
-------
matrices : ndarray
Numpy array of shape (npts, 2, 2) rotating each v0 into the corresponding v1
Examples
--------
>>> npts = int(1e4)
>>> v0 = np.random.random((npts, 2))
>>> v1 = np.random.random((npts, 2))
>>> rotation_matrices = rotation_matrices_from_vectors(v0, v1)
Notes
-----
The function `rotate_vector_collection` can be used to efficiently
apply the returned collection of matrices to a collection of 2d vectors
"""
v0 = normalized_vectors(v0)
v1 = normalized_vectors(v1)
# use the atan2 function to get the direction of rotation right
angles = np.arctan2(v0[:, 0], v0[:, 1]) - np.arctan2(v1[:, 0], v1[:, 1])
return rotation_matrices_from_angles(angles)
def rotation_matrices_from_basis(ux, uy, tol=np.pi / 1000.0):
r"""
Calculate a collection of rotation matrices defined by an input collection
of basis vectors.
Parameters
----------
ux : array_like
Numpy array of shape (npts, 2) storing a collection of unit vectors
uy : array_like
Numpy array of shape (npts, 2) storing a collection of unit vectors
tol : float, optional
angular tolerance for orthogonality of the input basis vectors in radians.
Returns
-------
matrices : ndarray
Numpy array of shape (npts, 2, 2) storing a collection of rotation matrices
Example
-------
Let's build a rotation matrix that rotates from a frame
rotated by 45 degrees to the standard frame.
>>> u1 = [np.sqrt(2), np.sqrt(2)]
>>> u2 = [np.sqrt(2), -1.0*np.sqrt(2)]
>>> rot = rotation_matrices_from_basis(u1, u2)
Notes
-----
The rotation matrices transform from the Cartesian frame defined by the standard
basis vectors,
.. math::
u_1 = (1, 0)
u_2 = (0, 1)
The function `rotate_vector_collection` can be used to efficiently
apply the returned collection of matrices to a collection of 2d vectors
"""
N = np.shape(ux)[0]
# assume initial unit vectors are the standard ones
ex = np.array([1.0, 0.0] * N).reshape(N, 2)
ey = np.array([0.0, 1.0] * N).reshape(N, 2)
ux = normalized_vectors(ux)
uy = normalized_vectors(uy)
d_theta = angles_between_list_of_vectors(ux, uy)
if np.any((np.pi / 2.0 - d_theta) > tol):
msg = "At least one set of basis vectors are not orthoginal to within the specified tolerance."
raise ValueError(msg)
r_11 = elementwise_dot(ex, ux)
r_12 = elementwise_dot(ex, uy)
r_21 = elementwise_dot(ey, ux)
r_22 = elementwise_dot(ey, uy)
r = np.zeros((N, 2, 2))
r[:, 0, 0] = r_11
r[:, 0, 1] = r_12
r[:, 1, 0] = r_21
r[:, 1, 1] = r_22
return r
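As a sanity check on the convention used by `rotation_matrices_from_angles` (counter-clockwise, with `R[0, 1] = -sin`), here is a hypothetical stdlib-only sketch rotating the x basis vector by 90 degrees onto the y basis vector:

```python
import math

def rot2d(theta):
    # same layout as rotation_matrices_from_angles: [[cos, -sin], [sin, cos]]
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def apply(R, v):
    return [R[0][0] * v[0] + R[0][1] * v[1],
            R[1][0] * v[0] + R[1][1] * v[1]]

v = apply(rot2d(math.pi / 2), [1.0, 0.0])
print(v)  # approximately [0.0, 1.0]
```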
|
astropyREPO_NAMEhalotoolsPATH_START.@halotools_extracted@halotools-master@halotools@utils@rotations2d.py@.PATH_END.py
|
{
"filename": "README.md",
"repo_name": "Anirbancosmo/limpy",
"repo_path": "limpy_extracted/limpy-master/README.md",
"type": "Markdown"
}
|
<img src="images/Limpy_logo.jpg" alt="logo" height="250"/>
# LIMpy
_A python package for multi-line intensity mapping_
### Description
The LIMpy package is useful for modeling and analyzing multi-line intensity maps of CII (158 $\mu m$), OIII (88 $\mu m$), and CO (1-0) to CO (13-12) transitions.
This code can be used for the following things:
* Analytic model for star formation rate
* Multi-line luminosity models
* Multi-line intensity power spectrum based on Halo model approach
* Simulate line intensity maps based on halo catalogs
* Calculate power spectrum from simulated maps in cube and rectangular box
* Calculate cross-correlated signal between two separate lines
* Apply Gaussian beam convolution
* Can be used to quantify interlopers, signal-to-noise ratio, etc.
### Requirements
This code uses mainly three external packages:
* [CAMB](https://github.com/cmbant/CAMB): used to calculate the matter power spectrum.
* [Colossus](https://bdiemer.bitbucket.io/colossus/): used mainly to calculate the halo mass function.
* [Astropy](https://www.astropy.org/): used to implement beam convolution.
### Installation
You can install **LIMpy** by cloning the package directly from GitHub.
<code>
git clone https://github.com/Anirbancosmo/Limpy.git
cd Limpy
python setup.py install
</code>
### Initialization
Set the default cosmological and astrophysical parameters in the input.py file. These parameters will be used to fix the halo mass function.
### Examples
See my [examples](examples/) folder for a quick start.
[Luminosity_and_sfr](examples/Luminosity_and_sfr.ipynb): check the available models for star formation rate and line luminosities.
[Powerspectra-halo-model.ipynb](examples/powerspectra-halo-model.ipynb): examples that show how to calculate line intensity power spectra based on the halo model approach.
[Simulated_maps_and_powerspectra.ipynb](examples/Simulated_maps_and_powerspectra.ipynb): examples that show how to paint various line intensities onto an external halo catalogue.
### Citation
If you find this package (or the paper) helpful in your research, please cite the following paper: [Arxiv:2304.06748](https://arxiv.org/abs/2304.06748).
@article{Roy:2023cpx,
author = "Roy, Anirban and Valent\'\i{}n-Mart\'\i{}nez, Dariannette and Wang, Kailai and Battaglia, Nicholas and van Engelen, Alexander",
title = "{$\texttt{LIMpy}$: A Semi-analytic Approach to Simulating Multi-line Intensity Maps at Millimetre Wavelengths}",
eprint = "2304.06748",
archivePrefix = "arXiv",
primaryClass = "astro-ph.GA",
month = "4",
year = "2023"}
### Contact
Anirban Roy (ar689@cornell.edu)
|
AnirbancosmoREPO_NAMElimpyPATH_START.@limpy_extracted@limpy-master@README.md@.PATH_END.py
|
{
"filename": "test_disco_conv.py",
"repo_name": "neuraloperator/neuraloperator",
"repo_path": "neuraloperator_extracted/neuraloperator-main/neuralop/layers/tests/test_disco_conv.py",
"type": "Python"
}
|
import pytest
import torch
from torch import nn
from torch.testing import assert_close
from ..discrete_continuous_convolution import (DiscreteContinuousConv2d,
DiscreteContinuousConvTranspose2d,
EquidistantDiscreteContinuousConv2d,
EquidistantDiscreteContinuousConvTranspose2d)
from ..embeddings import regular_grid_2d
# global params
batch_size = 4
in_channels = 6
out_channels = 3
side_length_in = 64
side_length_out = 48
device = "cuda" if torch.backends.cuda.is_built() else "cpu"
@pytest.mark.parametrize('conv_type', [DiscreteContinuousConv2d, DiscreteContinuousConvTranspose2d])
@pytest.mark.parametrize('groups', [1,3])
def test_regular_disco_conv2d(conv_type, groups):
# create regular grids of in and output coords
grid_in = torch.stack(regular_grid_2d(spatial_dims=[side_length_in, side_length_in]))
grid_out = torch.stack(regular_grid_2d(spatial_dims=[side_length_out, side_length_out]))
# reshape grids to point clouds (channels, n_pts)
grid_in = grid_in.view(2, -1)
grid_out = grid_out.view(2, -1)
# quad weights: one weight per point
quadrature_weights = torch.randn(grid_in.shape[1])
conv_layer = conv_type(
in_channels=in_channels,
out_channels=out_channels,
grid_in=grid_in,
grid_out=grid_out,
kernel_shape=3,
quadrature_weights=quadrature_weights,
groups=groups
)
# start with a grid, pass to forward as a point cloud
x = torch.randn(batch_size, in_channels, side_length_in, side_length_in)
x = x.reshape(batch_size, in_channels, -1)
res = conv_layer(x)
assert res.shape == (batch_size, out_channels, side_length_out ** 2)
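The grid and quadrature-weight bookkeeping in the test above can be sketched without torch. Below is a hedged NumPy version: `regular_grid_points` is a hypothetical stand-in for `regular_grid_2d`, and the uniform weights replace the random ones used in the test.

```python
import numpy as np

def regular_grid_points(n):
    # hypothetical stand-in for regular_grid_2d: n*n points in [0, 1)^2,
    # returned as a point cloud of shape (2, n_pts)
    xs, ys = np.meshgrid(np.arange(n) / n, np.arange(n) / n, indexing="ij")
    return np.stack([xs.ravel(), ys.ravel()])

grid_in = regular_grid_points(64)    # (2, 4096) input point cloud
grid_out = regular_grid_points(48)   # (2, 2304) output point cloud
# one quadrature weight per input point; uniform weights integrate to 1
quadrature_weights = np.full(grid_in.shape[1], 1.0 / grid_in.shape[1])
```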
@pytest.mark.parametrize('conv_type', [EquidistantDiscreteContinuousConv2d,
EquidistantDiscreteContinuousConvTranspose2d])
@pytest.mark.parametrize('groups', [1,3])
def test_equidistant_disco_conv2d(conv_type, groups):
in_shape = (side_length_in, side_length_in)
if conv_type == EquidistantDiscreteContinuousConv2d:
out_shape = (side_length_in // 2, side_length_in // 2)
else:
out_shape = (side_length_in * 2, side_length_in * 2)
conv_layer = conv_type(
in_channels=in_channels,
out_channels=out_channels,
in_shape=in_shape,
out_shape=out_shape,
kernel_shape=3,
groups=groups
)
# start with a grid, pass to forward as a grid
x = torch.randn(batch_size, in_channels, *in_shape)
res = conv_layer(x)
assert res.shape == (batch_size, out_channels, *out_shape)
|
neuraloperatorREPO_NAMEneuraloperatorPATH_START.@neuraloperator_extracted@neuraloperator-main@neuralop@layers@tests@test_disco_conv.py@.PATH_END.py
|
{
"filename": "ragged_gather_op_test.py",
"repo_name": "tensorflow/tensorflow",
"repo_path": "tensorflow_extracted/tensorflow-master/tensorflow/python/ops/ragged/ragged_gather_op_test.py",
"type": "Python"
}
|
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for ragged_array_ops.gather."""
from absl.testing import parameterized
import numpy as np
from tensorflow.python.eager import context
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import errors
from tensorflow.python.framework import indexed_slices
from tensorflow.python.framework import ops
from tensorflow.python.framework import test_util
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import gradients_impl
from tensorflow.python.ops.ragged import ragged_factory_ops
from tensorflow.python.ops.ragged import ragged_gather_ops
from tensorflow.python.ops.ragged import ragged_tensor
from tensorflow.python.platform import googletest
@test_util.run_all_in_graph_and_eager_modes
class RaggedGatherOpTest(test_util.TensorFlowTestCase, parameterized.TestCase):
@parameterized.named_parameters([
# Basic gather (axis=0 and batch_dims=0)
dict(testcase_name='Params1DTensor_Indices1DTensor',
params=['a', 'b', 'c', 'd', 'e'],
indices=[2, 0, 2, 1],
expected=['c', 'a', 'c', 'b']),
dict(testcase_name='Params1DTensor_Indices2DRagged',
params=['a', 'b', 'c', 'd', 'e'],
indices=[[3, 1, 2], [1], [], [0]],
expected=[['d', 'b', 'c'], ['b'], [], ['a']]),
dict(testcase_name='Params2DRagged_Indices0DTensor',
params=[['a', 'b'], ['c', 'd', 'e'], ['f'], [], ['g']],
indices=1,
expected=['c', 'd', 'e']),
dict(testcase_name='Params2DRagged_Indices1DTensor',
params=[['a', 'b', 'c'], ['d'], [], ['e']],
indices=[3, 1, 2, 1, 0],
expected=[
['e'], ['d'], [], ['d'], ['a', 'b', 'c']]),
dict(testcase_name='Params2DRagged_Indices2DRagged',
params=[['a', 'b', 'c'], ['d'], [], ['e']],
indices=[[3, 1, 2], [1], [], [0]],
expected=[
[['e'], ['d'], []], [['d']], [], [['a', 'b', 'c']]]),
dict(testcase_name='Params3DRagged_Indices2DTensor',
params=[
[['a', 'b'], []], [['c', 'd'], ['e'], ['f']], [['g']]],
indices=[[1, 2], [0, 1], [2, 2]],
indices_ragged_rank=0,
expected=[
[[['c', 'd'], ['e'], ['f']], [['g']]],
[[['a', 'b'], []], [['c', 'd'], ['e'], ['f']]],
[[['g']], [['g']]]]),
dict(testcase_name='Params3DRagged_Indices3DTensor',
params=[[['a', 'b'], []],
[['c', 'd'], ['e'], ['f']],
[['g']]],
indices=[[[1, 2], [0, 1], [2, 2]], [[0, 0], [1, 2], [0, 1]]],
indices_ragged_rank=0,
expected=[
[[[['c', 'd'], ['e'], ['f']], [['g']]],
[[['a', 'b'], []], [['c', 'd'], ['e'], ['f']]],
[[['g']], [['g']]]],
[[[['a', 'b'], []], [['a', 'b'], []]],
[[['c', 'd'], ['e'], ['f']], [['g']]],
[[['a', 'b'], []], [['c', 'd'], ['e'], ['f']]]]]),
dict(testcase_name='Params1DTensor_Indices4DRaggedRank2',
params=['a', 'b', 'c', 'd', 'e', 'f', 'g'],
indices=[[[[3, 4], [0, 6]], []],
[[[2, 1], [1, 0]], [[2, 5]], [[2, 3]]],
[[[1, 0]]]],
indices_ragged_rank=2,
expected=[
[[['d', 'e'], ['a', 'g']], []],
[[['c', 'b'], ['b', 'a']], [['c', 'f']], [['c', 'd']]],
[[['b', 'a']]]]),
# Batch gather (batch_dims=1)
dict(testcase_name='Batch1D_Params2DRagged_Indices1DTensor',
params=[['a', 'b'], ['c'], ['d', 'e', 'f', 'g'], ['h']],
indices=[1, 0, 3, 0],
batch_dims=1,
expected=['b', 'c', 'g', 'h']),
dict(testcase_name='Batch1D_Params2DRagged_Indices2DTensor',
params=[['a', 'b'], ['c'], ['d', 'e', 'f', 'g'], ['h']],
indices=[[1, 0], [0, 0], [3, 1], [0, 0]],
indices_ragged_rank=0,
batch_dims=1,
expected=[['b', 'a'], ['c', 'c'], ['g', 'e'], ['h', 'h']]),
dict(testcase_name='Batch1D_Params2DRagged_Indices2DRagged',
params=[['a', 'b'], ['c'], ['d', 'e', 'f', 'g'], ['h']],
indices=[[1, 0], [], [3, 2, 1], [0]],
batch_dims=1,
expected=[['b', 'a'], [], ['g', 'f', 'e'], ['h']]),
dict(testcase_name='Batch1D_Params3DRagged_Indices3DRagged',
params=[[['a'], ['b', 'c']],
[],
[['d', 'e', 'f'], ['g'], ['h', 'i'], ['j']],
[['k']]],
indices=[[[1, 0], []], [], [[3, 2, 1], [0]], [[0]]],
batch_dims=1,
expected=[[[['b', 'c'], ['a']], []],
[],
[[['j'], ['h', 'i'], ['g']], [['d', 'e', 'f']]],
[[['k']]]]),
# Batch gather (batch_dims=2)
dict(testcase_name='Batch2D_Params3DRagged_Indices2DRagged',
params=[[['a', 'b', 'c'], ['d', 'e'], ['f']],
[['g'], ['h', 'i']]],
indices=[[0, 1, 0], [0, 1]],
batch_dims=2,
expected=[['a', 'e', 'f'], ['g', 'i']]),
dict(testcase_name='Batch2D_Params3DRagged_Indices3DRagged',
params=[[['a', 'b', 'c'], ['d', 'e'], ['f']],
[['g'], ['h', 'i']]],
indices=[[[2, 1, 0], [1, 1], [0]], [[0], []]],
batch_dims=2,
expected=[[['c', 'b', 'a'], ['e', 'e'], ['f']], [['g'], []]]),
# Batch gather (batch_dims=3)
dict(testcase_name='Batch3D_Params4DRagged_Indices3DRagged',
params=[[[['a', 'b', 'c'], ['d', 'e'], ['f']],
[['g'], ['h', 'i']]], [[['j']]]],
indices=[[[0, 1, 0], [0, 1]], [[0]]],
batch_dims=3,
expected=[[['a', 'e', 'f'], ['g', 'i']], [['j']]]),
# Axis gather (axis=1)
dict(testcase_name='Params2DRagged_Indices0DTensor_axis_1',
params=[['a', 'b'], ['c', 'd', 'e'], ['f', 'g'], ['h', 'i', 'j'],
['k', 'l']],
indices=1,
axis=1,
expected=['b', 'd', 'g', 'i', 'l']),
dict(testcase_name='Params2DRagged_Indices1DTensor_axis_1',
params=[['a', 'b'], ['c', 'd', 'e'], ['f', 'g'], ['h', 'i', 'j'],
['k', 'l']],
indices=[1, 0],
axis=1,
expected=[['b', 'a'], ['d', 'c'], ['g', 'f'], ['i', 'h'],
['l', 'k']]),
dict(testcase_name='Params3DRagged_Indices0DTensor_axis_1',
params=[[['a', 'b'], ['c', 'd', 'e']],
[['f', 'g'], ['h', 'i', 'j'], ['k', 'l']]],
indices=1,
axis=1,
expected=[['c', 'd', 'e'], ['h', 'i', 'j']]),
dict(testcase_name='Params3DRagged_Indices1DTensor_axis_1',
params=[[['a', 'b'], ['c', 'd', 'e']],
[['f', 'g'], ['h', 'i', 'j'], ['k', 'l']]],
indices=[1, 0],
axis=1,
expected=[[['c', 'd', 'e'], ['a', 'b']],
[['h', 'i', 'j'], ['f', 'g']]]),
# Batch/axis gather, batch = 1, axis > batch
dict(testcase_name='Params3DRagged_Indices1DTensor_batch_1_axis_2',
params=[[['a', 'b'], ['c', 'd', 'e']],
[['f', 'g'], ['h', 'i', 'j'], ['k', 'l']]],
indices=[1, 0],
axis=2,
batch_dims=1,
expected=[['b', 'd'], ['f', 'h', 'k']]),
dict(testcase_name='Params4DRagged_Indices1DTensor_batch_1_axis_2',
params=[[[['a', 'b'], ['c', 'd', 'e']]],
[[['f', 'g']], [['h', 'i', 'j'], ['k', 'l']]]],
indices=[0, 1],
axis=2,
batch_dims=1,
expected=[[['a', 'b']],
[['h', 'i', 'j'], ['k', 'l']]]),
]) # pyformat: disable
def testRaggedGather(self,
params,
indices,
expected,
axis=None,
batch_dims=0,
params_ragged_rank=None,
indices_ragged_rank=None):
params = ragged_factory_ops.constant(params, ragged_rank=params_ragged_rank)
indices = ragged_factory_ops.constant(
indices, ragged_rank=indices_ragged_rank)
actual = ragged_gather_ops.gather(
params, indices, axis=axis, batch_dims=batch_dims)
self.assertAllEqual(actual, self._str_to_bytes(expected))
def _str_to_bytes(self, x):
if isinstance(x, list):
return [self._str_to_bytes(v) for v in x]
elif isinstance(x, str) and bytes is not str:
return bytes(x, 'utf-8')
else:
return x
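For the axis=0, batch_dims=0 cases exercised above, the gather semantics can be sketched in pure Python on nested lists (a simplification — the real op works on RaggedTensors and also handles `axis` and `batch_dims`):

```python
def ragged_gather(params, indices):
    # recurse through nested index lists; a leaf index selects from params
    if isinstance(indices, list):
        return [ragged_gather(params, i) for i in indices]
    return params[indices]

# mirrors the first parameterized cases above
assert ragged_gather(['a', 'b', 'c', 'd', 'e'], [2, 0, 2, 1]) == ['c', 'a', 'c', 'b']
assert ragged_gather([['a', 'b', 'c'], ['d'], [], ['e']],
                     [[3, 1, 2], [1], [], [0]]) == [[['e'], ['d'], []], [['d']], [], [['a', 'b', 'c']]]
```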
def testOutOfBoundsError(self):
tensor_params = ['a', 'b', 'c']
tensor_indices = [0, 1, 2]
ragged_params = ragged_factory_ops.constant([['a', 'b'], ['c']])
ragged_indices = ragged_factory_ops.constant([[0, 3]])
with self.assertRaisesRegex(errors.InvalidArgumentError,
r'indices\[1\] = 3 is not in \[0, 3\)'):
self.evaluate(ragged_gather_ops.gather(tensor_params, ragged_indices))
with self.assertRaisesRegex(errors.InvalidArgumentError,
r'indices\[2\] = 2 is not in \[0, 2\)'):
self.evaluate(ragged_gather_ops.gather(ragged_params, tensor_indices))
with self.assertRaisesRegex(errors.InvalidArgumentError,
r'indices\[1\] = 3 is not in \[0, 2\)'):
self.evaluate(ragged_gather_ops.gather(ragged_params, ragged_indices))
def testUnknownIndicesRankError(self):
if context.executing_eagerly():
return
params = ragged_factory_ops.constant([], ragged_rank=1)
indices = constant_op.constant([0], dtype=dtypes.int64)
indices = array_ops.placeholder_with_default(indices, None)
self.assertRaisesRegex(ValueError,
r'rank\(indices\) must be known statically',
ragged_gather_ops.gather, params, indices)
# pylint: disable=bad-whitespace
@parameterized.parameters([
# params.shape=[2, None]; indices.shape=[3]
dict(
params = [[1.0, 2.0], [3.0, 4.0, 5.0]],
indices = [0, 0, 1],
expected_out = [[1.0, 2.0], [1.0, 2.0], [3.0, 4.0, 5.0]],
out_grad = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6, 0.7]],
expected_grad = [[0.4, 0.6], [0.5, 0.6, 0.7]]),
# params.shape=[2, None]; indices.shape=[0]
dict(
params = [[1, 2], [3, 4, 5]],
indices = [],
expected_out = [],
out_grad = [],
expected_grad = [[0, 0], [0, 0, 0]]),
# params.shape=[2, None]; indices.shape=[2, 2]
dict(
params = [[1.0, 2.0], [3.0, 4.0, 5.0]],
indices = [[0, 0], [1, 0]],
expected_out = [[[1.0, 2.0], [1.0, 2.0]],
[[3.0, 4.0, 5.0], [1.0, 2.0]]],
out_grad = [[[0.1, 0.2], [0.3, 0.4]],
[[0.5, 0.6, 0.7], [0.8, 0.9]]],
expected_grad = [[1.2, 1.5], [0.5, 0.6, 0.7]]),
# params.shape=[3, None, None]; indices.shape=[3]
dict(
params = [[[1, 2], [3, 4, 5]], [[6.0]], [[7.0, 8.0]]],
indices = [2, 1, 2],
expected_out = [[[7.0, 8.0]], [[6.0]], [[7.0, 8.0]]],
out_grad = [[[0.1, 0.2]], [[0.3]], [[0.4, 0.5]]],
expected_grad = [[[0, 0], [0, 0, 0]], [[0.3]], [[0.5, 0.7]]]),
# params.shape=[3, None, None]; indices.shape=[0]
dict(
params = [[[1, 2], [3, 4, 5]], [[6.0]], [[7.0, 8.0]]],
indices = [],
expected_out = [],
out_grad = [],
expected_grad = [[[0, 0], [0, 0, 0]], [[0]], [[0, 0]]]),
# params.shape=[0, None]; indices.shape=[0]
dict(
params = [],
indices = [],
expected_out = [],
out_grad = [],
expected_grad = [],
params_ragged_rank = 1),
# params.shape=[2, None, 2]; indices.shape=[3]
dict(
params = [[[1, 2], [3, 4]], [], [[5, 6]]],
indices = [1, 1, 2, 0, 2],
expected_out = [[], [], [[5, 6]], [[1, 2], [3, 4]], [[5, 6]]],
out_grad = [[], [], [[1, 2]], [[3, 4], [5, 6]], [[7, 7]]],
expected_grad = [[[3, 4], [5, 6]], [], [[8, 9]]],
params_ragged_rank = 1),
]) # pyformat: disable
@test_util.run_deprecated_v1
def testGradient(self,
params,
indices,
expected_out,
out_grad,
expected_grad,
params_ragged_rank=None):
"""Tests that ragged_gather generates the right gradient.
Args:
params: The `params` that should be passed to `gather`.
indices: The `indices` that should be passed to `gather`.
expected_out: The expected value of `gather(params, indices)`.
`expected_out.shape = indices.shape + params.shape[1:]`.
out_grad: The value that should be fed in as the gradient for `out`
when testing the gradient of `ragged_gather`. Must have the same
shape as `expected_out`.
expected_grad: The expected gradient that should be returned for
`params`. Must have the same shape as `params`.
params_ragged_rank: The ragged_rank of `params`.
"""
if context.executing_eagerly():
return
params = ragged_factory_ops.constant(
params, dtype=dtypes.float32, ragged_rank=params_ragged_rank)
indices = constant_op.constant(indices, dtype=dtypes.int32)
out_ragged_rank = params.ragged_rank + indices.shape.ndims - 1
out_grad = ragged_factory_ops.constant(
out_grad, dtype=dtypes.float32, ragged_rank=out_ragged_rank)
expected_out = ragged_factory_ops.constant(
expected_out, dtype=dtypes.float32, ragged_rank=out_ragged_rank)
expected_grad = ragged_factory_ops.constant(
expected_grad,
dtype=dtypes.float32,
ragged_rank=params.ragged_rank)
out = ragged_gather_ops.gather(params, indices)
self.assertAllClose(out, expected_out)
grads = gradients_impl.gradients(
out.flat_values,
(params.nested_row_splits + (params.flat_values, indices,)),
out_grad.flat_values)
param_nested_splits_grads = grads[:-2]
params_flat_values_grad = grads[-2]
indices_grad = grads[-1]
self.assertEqual(indices_grad, None)
for splits_grad in param_nested_splits_grads:
self.assertEqual(splits_grad, None)
# The gradient generates an IndexedSlices; convert back to a normal Tensor.
self.assertIsInstance(params_flat_values_grad, indexed_slices.IndexedSlices)
params_flat_values_grad = ops.convert_to_tensor(params_flat_values_grad)
params_grad = params.with_flat_values(params_flat_values_grad)
self.assertAllClose(params_grad, expected_grad, atol=2e-6, rtol=2e-6)
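The gradient behavior being tested — rows gathered multiple times accumulate their output gradients — is a scatter-add, sketched here on a dense stand-in with NumPy:

```python
import numpy as np

# dense analogue of the first parameterized gradient case above
params = np.array([[1.0, 2.0], [3.0, 4.0]])
indices = np.array([0, 0, 1])
out_grad = np.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])

grad = np.zeros_like(params)
np.add.at(grad, indices, out_grad)  # row 0 gathered twice -> grads summed
# grad is now [[0.4, 0.6], [0.5, 0.6]]
```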
@parameterized.parameters([
# Basic gather (batch_dims == 0, axis == 0)
dict(params_shape=[3, 4], indices_shape=[], axis=0),
dict(params_shape=[3, 4], indices_shape=[5], axis=0),
dict(params_shape=[3, 4], indices_shape=[2, 5], axis=0),
# Gather over axis (axis > 0)
dict(params_shape=[3, 4], indices_shape=[], axis=1),
dict(params_shape=[3, 4], indices_shape=[2], axis=1),
dict(params_shape=[3, 4], indices_shape=[2, 5], axis=1),
dict(params_shape=[7, 3, 1], indices_shape=[2, 4], axis=1),
dict(params_shape=[3, 4, 5, 6], indices_shape=[2, 1, 7], axis=1),
dict(params_shape=[7, 3, 5], indices_shape=[], axis=2),
dict(params_shape=[7, 3, 5], indices_shape=[2], axis=2),
dict(params_shape=[7, 3, 5], indices_shape=[4, 2], axis=2),
dict(params_shape=[7, 3, 5, 6], indices_shape=[4, 2], axis=2),
dict(params_shape=[7, 3, 5, 6], indices_shape=[], axis=3),
dict(params_shape=[7, 3, 5, 6], indices_shape=[4], axis=3),
dict(params_shape=[7, 3, 5, 6], indices_shape=[8, 4], axis=3),
dict(params_shape=[7, 3, 5, 6], indices_shape=[2, 3, 2, 3], axis=3),
# Batched gather (batch_dims > 0)
dict(params_shape=[7, 3], indices_shape=[7], batch_dims=1),
dict(params_shape=[7, 3], indices_shape=[7, 5], batch_dims=1),
dict(params_shape=[5, 3], indices_shape=[5, 7, 4, 2], batch_dims=1),
dict(params_shape=[2, 3, 6], indices_shape=[2], batch_dims=1),
dict(params_shape=[7, 3, 6], indices_shape=[7, 5, 4, 2], batch_dims=1),
dict(params_shape=[7, 3, 5], indices_shape=[7, 3], batch_dims=2),
dict(params_shape=[7, 3, 5], indices_shape=[7, 3, 2], batch_dims=2),
dict(params_shape=[7, 3, 5, 6], indices_shape=[7, 3, 5], batch_dims=3),
dict(params_shape=[2, 3, 5, 6], indices_shape=[2, 3, 5, 7], batch_dims=3),
# Batched gather with axis (axis > batch_dims > 0)
dict(params_shape=[2, 3, 6], indices_shape=[2], axis=2, batch_dims=1),
dict(params_shape=[2, 3, 6], indices_shape=[2, 4], axis=2, batch_dims=1),
dict(
params_shape=[3, 1, 6, 7], indices_shape=[3, 4], axis=3,
batch_dims=1),
dict(
params_shape=[3, 2, 6, 7], indices_shape=[3, 4], axis=3,
batch_dims=1),
dict(
params_shape=[2, 3, 6, 7], indices_shape=[2, 3], axis=3,
batch_dims=2),
])
def testMatchesDenseGather(self,
params_shape,
indices_shape,
axis=None,
batch_dims=0):
# Build random params & indices matrices w/ the expected shapes.
if axis is None:
axis = batch_dims
params = np.random.randint(100, size=params_shape, dtype=np.int32)
indices = np.random.randint(
params_shape[axis], size=indices_shape, dtype=np.int32)
# Use array_ops.gather to get the expected value.
expected = array_ops.gather(
params, indices, axis=axis, batch_dims=batch_dims)
# Build ragged tensors with varying ragged_ranks from params & indices.
params_tensors = [params] + [
ragged_tensor.RaggedTensor.from_tensor(params, ragged_rank=i)
for i in range(1, len(params_shape))
]
indices_tensors = [indices] + [
ragged_tensor.RaggedTensor.from_tensor(indices, ragged_rank=i)
for i in range(1, len(indices_shape))
]
# For each combination of params & indices tensors, check that
# ragged_gather_ops.gather matches array_ops.gather.
for params_tensor in params_tensors:
for indices_tensor in indices_tensors:
actual = ragged_gather_ops.gather(
params_tensor, indices_tensor, axis=axis, batch_dims=batch_dims)
if isinstance(actual, ragged_tensor.RaggedTensor):
actual = actual.to_tensor()
self.assertAllEqual(
expected, actual, 'params.ragged_rank=%s, indices.ragged_rank=%s' %
(getattr(params_tensor, 'ragged_rank',
0), getattr(indices_tensor, 'ragged_rank', 0)))
if __name__ == '__main__':
googletest.main()
|
tensorflowREPO_NAMEtensorflowPATH_START.@tensorflow_extracted@tensorflow-master@tensorflow@python@ops@ragged@ragged_gather_op_test.py@.PATH_END.py
|
{
"filename": "_marker.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/graph_objs/treemap/_marker.py",
"type": "Python"
}
|
from plotly.basedatatypes import BaseTraceHierarchyType as _BaseTraceHierarchyType
import copy as _copy
class Marker(_BaseTraceHierarchyType):
# class properties
# --------------------
_parent_path_str = "treemap"
_path_str = "treemap.marker"
_valid_props = {
"autocolorscale",
"cauto",
"cmax",
"cmid",
"cmin",
"coloraxis",
"colorbar",
"colors",
"colorscale",
"colorssrc",
"cornerradius",
"depthfade",
"line",
"pad",
"pattern",
"reversescale",
"showscale",
}
# autocolorscale
# --------------
@property
def autocolorscale(self):
"""
Determines whether the colorscale is a default palette
(`autocolorscale: true`) or the palette determined by
`marker.colorscale`. Has an effect only if colors is set to a
numerical array. In case `colorscale` is unspecified or
`autocolorscale` is true, the default palette will be chosen
according to whether numbers in the `color` array are all
positive, all negative or mixed.
The 'autocolorscale' property must be specified as a bool
(either True, or False)
Returns
-------
bool
"""
return self["autocolorscale"]
@autocolorscale.setter
def autocolorscale(self, val):
self["autocolorscale"] = val
# cauto
# -----
@property
def cauto(self):
"""
Determines whether or not the color domain is computed with
respect to the input data (here colors) or the bounds set in
`marker.cmin` and `marker.cmax` Has an effect only if colors is
set to a numerical array. Defaults to `false` when
`marker.cmin` and `marker.cmax` are set by the user.
The 'cauto' property must be specified as a bool
(either True, or False)
Returns
-------
bool
"""
return self["cauto"]
@cauto.setter
def cauto(self, val):
self["cauto"] = val
# cmax
# ----
@property
def cmax(self):
"""
Sets the upper bound of the color domain. Has an effect only if
colors is set to a numerical array. Value should have the same
units as colors and if set, `marker.cmin` must be set as well.
The 'cmax' property is a number and may be specified as:
- An int or float
Returns
-------
int|float
"""
return self["cmax"]
@cmax.setter
def cmax(self, val):
self["cmax"] = val
# cmid
# ----
@property
def cmid(self):
"""
Sets the mid-point of the color domain by scaling `marker.cmin`
and/or `marker.cmax` to be equidistant to this point. Has an
effect only if colors is set to a numerical array. Value should
have the same units as colors. Has no effect when
`marker.cauto` is `false`.
The 'cmid' property is a number and may be specified as:
- An int or float
Returns
-------
int|float
"""
return self["cmid"]
@cmid.setter
def cmid(self, val):
self["cmid"] = val
# cmin
# ----
@property
def cmin(self):
"""
Sets the lower bound of the color domain. Has an effect only if
colors is set to a numerical array. Value should have the same
units as colors and if set, `marker.cmax` must be set as well.
The 'cmin' property is a number and may be specified as:
- An int or float
Returns
-------
int|float
"""
return self["cmin"]
@cmin.setter
def cmin(self, val):
self["cmin"] = val
# coloraxis
# ---------
@property
def coloraxis(self):
"""
Sets a reference to a shared color axis. References to these
shared color axes are "coloraxis", "coloraxis2", "coloraxis3",
etc. Settings for these shared color axes are set in the
layout, under `layout.coloraxis`, `layout.coloraxis2`, etc.
Note that multiple color scales can be linked to the same color
axis.
The 'coloraxis' property is an identifier of a particular
subplot, of type 'coloraxis', that may be specified as the string 'coloraxis'
optionally followed by an integer >= 1
(e.g. 'coloraxis', 'coloraxis1', 'coloraxis2', 'coloraxis3', etc.)
Returns
-------
str
"""
return self["coloraxis"]
@coloraxis.setter
def coloraxis(self, val):
self["coloraxis"] = val
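The interplay of `cauto`, `cmin`, and `cmax` described in the docstrings above can be sketched as follows — `resolve_color_domain` is a hypothetical helper for illustration, not part of plotly:

```python
def resolve_color_domain(values, cmin=None, cmax=None):
    # hypothetical illustration: the domain follows the data (cauto-style)
    # unless explicit cmin/cmax bounds are supplied
    lo = min(values) if cmin is None else cmin
    hi = max(values) if cmax is None else cmax
    return lo, hi

# cauto-like behavior: domain computed from the color values
assert resolve_color_domain([1, 5, 3]) == (1, 5)
# explicit bounds override the data, as when both cmin and cmax are set
assert resolve_color_domain([1, 5, 3], cmin=0, cmax=10) == (0, 10)
```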
# colorbar
# --------
@property
def colorbar(self):
"""
The 'colorbar' property is an instance of ColorBar
that may be specified as:
- An instance of :class:`plotly.graph_objs.treemap.marker.ColorBar`
- A dict of string/value properties that will be passed
to the ColorBar constructor
Supported dict properties:
bgcolor
Sets the color of padded area.
bordercolor
Sets the axis line color.
borderwidth
Sets the width (in px) or the border enclosing
this color bar.
dtick
Sets the step in-between ticks on this axis.
Use with `tick0`. Must be a positive number, or
special strings available to "log" and "date"
axes. If the axis `type` is "log", then ticks
are set every 10^(n*dtick) where n is the tick
number. For example, to set a tick mark at 1,
10, 100, 1000, ... set dtick to 1. To set tick
marks at 1, 100, 10000, ... set dtick to 2. To
set tick marks at 1, 5, 25, 125, 625, 3125, ...
set dtick to log_10(5), or 0.69897000433. "log"
has several special values; "L<f>", where `f`
is a positive number, gives ticks linearly
spaced in value (but not position). For example
`tick0` = 0.1, `dtick` = "L0.5" will put ticks
at 0.1, 0.6, 1.1, 1.6 etc. To show powers of 10
plus small digits between, use "D1" (all
digits) or "D2" (only 2 and 5). `tick0` is
ignored for "D1" and "D2". If the axis `type`
is "date", then you must convert the time to
milliseconds. For example, to set the interval
between ticks to one day, set `dtick` to
86400000.0. "date" also has special values
"M<n>" gives ticks spaced by a number of
months. `n` must be a positive integer. To set
ticks on the 15th of every third month, set
`tick0` to "2000-01-15" and `dtick` to "M3". To
set ticks every 4 years, set `dtick` to "M48"
exponentformat
Determines a formatting rule for the tick
exponents. For example, consider the number
1,000,000,000. If "none", it appears as
1,000,000,000. If "e", 1e+9. If "E", 1E+9. If
"power", 1x10^9 (with 9 in a super script). If
"SI", 1G. If "B", 1B.
labelalias
Replacement text for specific tick or hover
labels. For example using {US: 'USA', CA:
'Canada'} changes US to USA and CA to Canada.
The labels we would have shown must match the
keys exactly, after adding any tickprefix or
ticksuffix. For negative numbers the minus sign
symbol used (U+2212) is wider than the regular
ascii dash. That means you need to use −1
instead of -1. labelalias can be used with any
axis type, and both keys (if needed) and values
(if desired) can include html-like tags or
MathJax.
len
Sets the length of the color bar This measure
excludes the padding of both ends. That is, the
color bar length is this length minus the
padding on both ends.
lenmode
Determines whether this color bar's length
(i.e. the measure in the color variation
direction) is set in units of plot "fraction"
or in "pixels". Use `len` to set the value.
minexponent
Hide SI prefix for 10^n if |n| is below this
number. This only has an effect when
`tickformat` is "SI" or "B".
nticks
Specifies the maximum number of ticks for the
particular axis. The actual number of ticks
will be chosen automatically to be less than or
equal to `nticks`. Has an effect only if
`tickmode` is set to "auto".
orientation
Sets the orientation of the colorbar.
outlinecolor
Sets the axis line color.
outlinewidth
Sets the width (in px) of the axis line.
separatethousands
If "true", even 4-digit integers are separated
showexponent
If "all", all exponents are shown besides their
significands. If "first", only the exponent of
the first tick is shown. If "last", only the
exponent of the last tick is shown. If "none",
no exponents appear.
showticklabels
Determines whether or not the tick labels are
drawn.
showtickprefix
If "all", all tick labels are displayed with a
prefix. If "first", only the first tick is
displayed with a prefix. If "last", only the
last tick is displayed with a suffix. If
"none", tick prefixes are hidden.
showticksuffix
Same as `showtickprefix` but for tick suffixes.
thickness
Sets the thickness of the color bar This
measure excludes the size of the padding, ticks
and labels.
thicknessmode
Determines whether this color bar's thickness
(i.e. the measure in the constant color
direction) is set in units of plot "fraction"
or in "pixels". Use `thickness` to set the
value.
tick0
Sets the placement of the first tick on this
axis. Use with `dtick`. If the axis `type` is
"log", then you must take the log of your
starting tick (e.g. to set the starting tick to
100, set the `tick0` to 2) except when
`dtick`=*L<f>* (see `dtick` for more info). If
the axis `type` is "date", it should be a date
string, like date data. If the axis `type` is
"category", it should be a number, using the
scale where each category is assigned a serial
number from zero in the order it appears.
tickangle
Sets the angle of the tick labels with respect
to the horizontal. For example, a `tickangle`
of -90 draws the tick labels vertically.
tickcolor
Sets the tick color.
tickfont
Sets the color bar's tick label font
tickformat
Sets the tick label formatting rule using d3
formatting mini-languages which are very
similar to those in Python. For numbers, see: h
ttps://github.com/d3/d3-format/tree/v1.4.5#d3-
format. And for dates see:
https://github.com/d3/d3-time-
format/tree/v2.2.3#locale_format. We add two
items to d3's date formatter: "%h" for half of
the year as a decimal number as well as "%{n}f"
for fractional seconds with n digits. For
example, *2016-10-13 09:15:23.456* with
tickformat "%H~%M~%S.%2f" would display
"09~15~23.46"
tickformatstops
A tuple of :class:`plotly.graph_objects.treemap
.marker.colorbar.Tickformatstop` instances or
dicts with compatible properties
tickformatstopdefaults
When used in a template (as layout.template.dat
a.treemap.marker.colorbar.tickformatstopdefault
s), sets the default property values to use for
elements of
treemap.marker.colorbar.tickformatstops
ticklabeloverflow
Determines how we handle tick labels that would
overflow either the graph div or the domain of
the axis. The default value for inside tick
labels is *hide past domain*. In other cases
the default is *hide past div*.
ticklabelposition
Determines where tick labels are drawn relative
to the ticks. Left and right options are used
when `orientation` is "h", top and bottom when
`orientation` is "v".
ticklabelstep
Sets the spacing between tick labels as
compared to the spacing between ticks. A value
of 1 (default) means each tick gets a label. A
value of 2 means shows every 2nd label. A
larger value n means only every nth tick is
labeled. `tick0` determines which labels are
shown. Not implemented for axes with `type`
"log" or "multicategory", or when `tickmode` is
"array".
ticklen
Sets the tick length (in px).
tickmode
Sets the tick mode for this axis. If "auto",
the number of ticks is set via `nticks`. If
"linear", the placement of the ticks is
determined by a starting position `tick0` and a
tick step `dtick` ("linear" is the default
value if `tick0` and `dtick` are provided). If
"array", the placement of the ticks is set via
`tickvals` and the tick text is `ticktext`.
("array" is the default value if `tickvals` is
provided).
tickprefix
Sets a tick label prefix.
ticks
Determines whether ticks are drawn or not. If
"", this axis' ticks are not drawn. If
"outside" ("inside"), this axis' are drawn
outside (inside) the axis lines.
ticksuffix
Sets a tick label suffix.
ticktext
Sets the text displayed at the ticks position
via `tickvals`. Only has an effect if
`tickmode` is set to "array". Used with
`tickvals`.
ticktextsrc
Sets the source reference on Chart Studio Cloud
for `ticktext`.
tickvals
Sets the values at which ticks on this axis
appear. Only has an effect if `tickmode` is set
to "array". Used with `ticktext`.
tickvalssrc
Sets the source reference on Chart Studio Cloud
for `tickvals`.
tickwidth
Sets the tick width (in px).
title
:class:`plotly.graph_objects.treemap.marker.col
orbar.Title` instance or dict with compatible
properties
titlefont
Deprecated: Please use
treemap.marker.colorbar.title.font instead.
Sets this color bar's title font. Note that the
title's font used to be set by the now
deprecated `titlefont` attribute.
titleside
Deprecated: Please use
treemap.marker.colorbar.title.side instead.
Determines the location of color bar's title
with respect to the color bar. Defaults to
"top" when `orientation` if "v" and defaults
to "right" when `orientation` if "h". Note that
the title's location used to be set by the now
deprecated `titleside` attribute.
x
Sets the x position with respect to `xref` of
the color bar (in plot fraction). When `xref`
is "paper", defaults to 1.02 when `orientation`
is "v" and 0.5 when `orientation` is "h". When
`xref` is "container", defaults to 1 when
`orientation` is "v" and 0.5 when `orientation`
is "h". Must be between 0 and 1 if `xref` is
"container" and between "-2" and 3 if `xref` is
"paper".
xanchor
Sets this color bar's horizontal position
anchor. This anchor binds the `x` position to
the "left", "center" or "right" of the color
bar. Defaults to "left" when `orientation` is
"v" and "center" when `orientation` is "h".
xpad
Sets the amount of padding (in px) along the x
direction.
xref
Sets the container `x` refers to. "container"
spans the entire `width` of the plot. "paper"
refers to the width of the plotting area only.
y
Sets the y position with respect to `yref` of
the color bar (in plot fraction). When `yref`
is "paper", defaults to 0.5 when `orientation`
is "v" and 1.02 when `orientation` is "h". When
`yref` is "container", defaults to 0.5 when
`orientation` is "v" and 1 when `orientation`
is "h". Must be between 0 and 1 if `yref` is
"container" and between "-2" and 3 if `yref` is
"paper".
yanchor
Sets this color bar's vertical position anchor.
This anchor binds the `y` position to the
"top", "middle" or "bottom" of the color bar.
Defaults to "middle" when `orientation` is "v"
and "bottom" when `orientation` is "h".
ypad
Sets the amount of padding (in px) along the y
direction.
yref
Sets the container `y` refers to. "container"
spans the entire `height` of the plot. "paper"
refers to the height of the plotting area only.
Returns
-------
plotly.graph_objs.treemap.marker.ColorBar
"""
return self["colorbar"]
@colorbar.setter
def colorbar(self, val):
self["colorbar"] = val
# colors
# ------
@property
def colors(self):
"""
Sets the color of each sector of this trace. If not specified,
the default trace color set is used to pick the sector colors.
The 'colors' property is an array that may be specified as a tuple,
list, numpy array, or pandas Series
Returns
-------
numpy.ndarray
"""
return self["colors"]
@colors.setter
def colors(self, val):
self["colors"] = val
# colorscale
# ----------
@property
def colorscale(self):
"""
Sets the colorscale. Has an effect only if colors is set to a
numerical array. The colorscale must be an array containing
arrays mapping a normalized value to an rgb, rgba, hex, hsl,
hsv, or named color string. At minimum, a mapping for the
lowest (0) and highest (1) values are required. For example,
`[[0, 'rgb(0,0,255)'], [1, 'rgb(255,0,0)']]`. To control the
bounds of the colorscale in color space, use `marker.cmin` and
`marker.cmax`. Alternatively, `colorscale` may be a palette
name string of the following list: Blackbody,Bluered,Blues,Civi
dis,Earth,Electric,Greens,Greys,Hot,Jet,Picnic,Portland,Rainbow
,RdBu,Reds,Viridis,YlGnBu,YlOrRd.
The 'colorscale' property is a colorscale and may be
specified as:
- A list of colors that will be spaced evenly to create the colorscale.
Many predefined colorscale lists are included in the sequential, diverging,
and cyclical modules in the plotly.colors package.
- A list of 2-element lists where the first element is the
normalized color level value (starting at 0 and ending at 1),
and the second item is a valid color string.
(e.g. [[0, 'green'], [0.5, 'red'], [1.0, 'rgb(0, 0, 255)']])
- One of the following named colorscales:
['aggrnyl', 'agsunset', 'algae', 'amp', 'armyrose', 'balance',
'blackbody', 'bluered', 'blues', 'blugrn', 'bluyl', 'brbg',
'brwnyl', 'bugn', 'bupu', 'burg', 'burgyl', 'cividis', 'curl',
'darkmint', 'deep', 'delta', 'dense', 'earth', 'edge', 'electric',
'emrld', 'fall', 'geyser', 'gnbu', 'gray', 'greens', 'greys',
'haline', 'hot', 'hsv', 'ice', 'icefire', 'inferno', 'jet',
'magenta', 'magma', 'matter', 'mint', 'mrybm', 'mygbm', 'oranges',
'orrd', 'oryel', 'oxy', 'peach', 'phase', 'picnic', 'pinkyl',
'piyg', 'plasma', 'plotly3', 'portland', 'prgn', 'pubu', 'pubugn',
'puor', 'purd', 'purp', 'purples', 'purpor', 'rainbow', 'rdbu',
'rdgy', 'rdpu', 'rdylbu', 'rdylgn', 'redor', 'reds', 'solar',
'spectral', 'speed', 'sunset', 'sunsetdark', 'teal', 'tealgrn',
'tealrose', 'tempo', 'temps', 'thermal', 'tropic', 'turbid',
'turbo', 'twilight', 'viridis', 'ylgn', 'ylgnbu', 'ylorbr',
'ylorrd'].
Appending '_r' to a named colorscale reverses it.
Returns
-------
str
"""
return self["colorscale"]
@colorscale.setter
def colorscale(self, val):
self["colorscale"] = val
# colorssrc
# ---------
@property
def colorssrc(self):
"""
Sets the source reference on Chart Studio Cloud for `colors`.
The 'colorssrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["colorssrc"]
@colorssrc.setter
def colorssrc(self, val):
self["colorssrc"] = val
# cornerradius
# ------------
@property
def cornerradius(self):
"""
Sets the maximum rounding of corners (in px).
The 'cornerradius' property is a number and may be specified as:
- An int or float in the interval [0, inf]
Returns
-------
int|float
"""
return self["cornerradius"]
@cornerradius.setter
def cornerradius(self, val):
self["cornerradius"] = val
# depthfade
# ---------
@property
def depthfade(self):
"""
Determines if the sector colors are faded towards the
background from the leaves up to the headers. This option is
unavailable when a `colorscale` is present, defaults to false
when `marker.colors` is set, but otherwise defaults to true.
When set to "reversed", the fading direction is inverted, that
is the top elements within hierarchy are drawn with fully
saturated colors while the leaves are faded towards the
background color.
The 'depthfade' property is an enumeration that may be specified as:
- One of the following enumeration values:
[True, False, 'reversed']
Returns
-------
Any
"""
return self["depthfade"]
@depthfade.setter
def depthfade(self, val):
self["depthfade"] = val
# line
# ----
@property
def line(self):
"""
The 'line' property is an instance of Line
that may be specified as:
- An instance of :class:`plotly.graph_objs.treemap.marker.Line`
- A dict of string/value properties that will be passed
to the Line constructor
Supported dict properties:
color
Sets the color of the line enclosing each
sector. Defaults to the `paper_bgcolor` value.
colorsrc
Sets the source reference on Chart Studio Cloud
for `color`.
width
Sets the width (in px) of the line enclosing
each sector.
widthsrc
Sets the source reference on Chart Studio Cloud
for `width`.
Returns
-------
plotly.graph_objs.treemap.marker.Line
"""
return self["line"]
@line.setter
def line(self, val):
self["line"] = val
# pad
# ---
@property
def pad(self):
"""
The 'pad' property is an instance of Pad
that may be specified as:
- An instance of :class:`plotly.graph_objs.treemap.marker.Pad`
- A dict of string/value properties that will be passed
to the Pad constructor
Supported dict properties:
b
Sets the padding from the bottom (in px).
l
Sets the padding from the left (in px).
r
Sets the padding from the right (in px).
t
Sets the padding from the top (in px).
Returns
-------
plotly.graph_objs.treemap.marker.Pad
"""
return self["pad"]
@pad.setter
def pad(self, val):
self["pad"] = val
# pattern
# -------
@property
def pattern(self):
"""
Sets the pattern within the marker.
The 'pattern' property is an instance of Pattern
that may be specified as:
- An instance of :class:`plotly.graph_objs.treemap.marker.Pattern`
- A dict of string/value properties that will be passed
to the Pattern constructor
Supported dict properties:
bgcolor
When there is no colorscale, sets the color of
background pattern fill. Defaults to a
`marker.color` background when `fillmode` is
"overlay". Otherwise, defaults to a transparent
background.
bgcolorsrc
Sets the source reference on Chart Studio Cloud
for `bgcolor`.
fgcolor
When there is no colorscale, sets the color of
foreground pattern fill. Defaults to a
`marker.color` background when `fillmode` is
"replace". Otherwise, defaults to dark grey or
white to increase contrast with the `bgcolor`.
fgcolorsrc
Sets the source reference on Chart Studio Cloud
for `fgcolor`.
fgopacity
Sets the opacity of the foreground pattern
fill. Defaults to 0.5 when `fillmode` is
"overlay". Otherwise, defaults to 1.
fillmode
Determines whether `marker.color` should be
used as a default to `bgcolor` or a `fgcolor`.
shape
Sets the shape of the pattern fill. By default,
no pattern is used for filling the area.
shapesrc
Sets the source reference on Chart Studio Cloud
for `shape`.
size
Sets the size of unit squares of the pattern
fill in pixels, which corresponds to the
interval of repetition of the pattern.
sizesrc
Sets the source reference on Chart Studio Cloud
for `size`.
solidity
Sets the solidity of the pattern fill. Solidity
is roughly the fraction of the area filled by
the pattern. Solidity of 0 shows only the
background color without pattern and solidity of
1 shows only the foreground color without
pattern.
soliditysrc
Sets the source reference on Chart Studio Cloud
for `solidity`.
Returns
-------
plotly.graph_objs.treemap.marker.Pattern
"""
return self["pattern"]
@pattern.setter
def pattern(self, val):
self["pattern"] = val
# reversescale
# ------------
@property
def reversescale(self):
"""
Reverses the color mapping if true. Has an effect only if
colors is set to a numerical array. If true, `marker.cmin` will
correspond to the last color in the array and `marker.cmax`
will correspond to the first color.
The 'reversescale' property must be specified as a bool
(either True, or False)
Returns
-------
bool
"""
return self["reversescale"]
@reversescale.setter
def reversescale(self, val):
self["reversescale"] = val
# showscale
# ---------
@property
def showscale(self):
"""
Determines whether or not a colorbar is displayed for this
trace. Has an effect only if colors is set to a numerical
array.
The 'showscale' property must be specified as a bool
(either True, or False)
Returns
-------
bool
"""
return self["showscale"]
@showscale.setter
def showscale(self, val):
self["showscale"] = val
# Self properties description
# ---------------------------
@property
def _prop_descriptions(self):
return """\
autocolorscale
Determines whether the colorscale is a default palette
(`autocolorscale: true`) or the palette determined by
`marker.colorscale`. Has an effect only if colors is
set to a numerical array. In case `colorscale` is
unspecified or `autocolorscale` is true, the default
palette will be chosen according to whether numbers in
the `color` array are all positive, all negative or
mixed.
cauto
Determines whether or not the color domain is computed
with respect to the input data (here colors) or the
bounds set in `marker.cmin` and `marker.cmax`. Has an
effect only if colors is set to a numerical array.
Defaults to `false` when `marker.cmin` and
`marker.cmax` are set by the user.
cmax
Sets the upper bound of the color domain. Has an effect
only if colors is set to a numerical array. Value
should have the same units as colors and if set,
`marker.cmin` must be set as well.
cmid
Sets the mid-point of the color domain by scaling
`marker.cmin` and/or `marker.cmax` to be equidistant to
this point. Has an effect only if colors is set to a
numerical array. Value should have the same units as
colors. Has no effect when `marker.cauto` is `false`.
cmin
Sets the lower bound of the color domain. Has an effect
only if colors is set to a numerical array. Value
should have the same units as colors and if set,
`marker.cmax` must be set as well.
coloraxis
Sets a reference to a shared color axis. References to
these shared color axes are "coloraxis", "coloraxis2",
"coloraxis3", etc. Settings for these shared color axes
are set in the layout, under `layout.coloraxis`,
`layout.coloraxis2`, etc. Note that multiple color
scales can be linked to the same color axis.
colorbar
:class:`plotly.graph_objects.treemap.marker.ColorBar`
instance or dict with compatible properties
colors
Sets the color of each sector of this trace. If not
specified, the default trace color set is used to pick
the sector colors.
colorscale
Sets the colorscale. Has an effect only if colors is
set to a numerical array. The colorscale must be an
array containing arrays mapping a normalized value to
an rgb, rgba, hex, hsl, hsv, or named color string. At
minimum, a mapping for the lowest (0) and highest (1)
values are required. For example, `[[0,
'rgb(0,0,255)'], [1, 'rgb(255,0,0)']]`. To control the
bounds of the colorscale in color space, use
`marker.cmin` and `marker.cmax`. Alternatively,
`colorscale` may be a palette name string of the
following list: Blackbody,Bluered,Blues,Cividis,Earth,E
lectric,Greens,Greys,Hot,Jet,Picnic,Portland,Rainbow,Rd
Bu,Reds,Viridis,YlGnBu,YlOrRd.
colorssrc
Sets the source reference on Chart Studio Cloud for
`colors`.
cornerradius
Sets the maximum rounding of corners (in px).
depthfade
Determines if the sector colors are faded towards the
background from the leaves up to the headers. This
option is unavailable when a `colorscale` is present,
defaults to false when `marker.colors` is set, but
otherwise defaults to true. When set to "reversed", the
fading direction is inverted, that is the top elements
within hierarchy are drawn with fully saturated colors
while the leaves are faded towards the background
color.
line
:class:`plotly.graph_objects.treemap.marker.Line`
instance or dict with compatible properties
pad
:class:`plotly.graph_objects.treemap.marker.Pad`
instance or dict with compatible properties
pattern
Sets the pattern within the marker.
reversescale
Reverses the color mapping if true. Has an effect only
if colors is set to a numerical array. If true,
`marker.cmin` will correspond to the last color in the
array and `marker.cmax` will correspond to the first
color.
showscale
Determines whether or not a colorbar is displayed for
this trace. Has an effect only if colors is set to a
numerical array.
"""
def __init__(
self,
arg=None,
autocolorscale=None,
cauto=None,
cmax=None,
cmid=None,
cmin=None,
coloraxis=None,
colorbar=None,
colors=None,
colorscale=None,
colorssrc=None,
cornerradius=None,
depthfade=None,
line=None,
pad=None,
pattern=None,
reversescale=None,
showscale=None,
**kwargs,
):
"""
Construct a new Marker object
Parameters
----------
arg
dict of properties compatible with this constructor or
an instance of
:class:`plotly.graph_objs.treemap.Marker`
autocolorscale
Determines whether the colorscale is a default palette
(`autocolorscale: true`) or the palette determined by
`marker.colorscale`. Has an effect only if colors is
set to a numerical array. In case `colorscale` is
unspecified or `autocolorscale` is true, the default
palette will be chosen according to whether numbers in
the `color` array are all positive, all negative or
mixed.
cauto
Determines whether or not the color domain is computed
with respect to the input data (here colors) or the
bounds set in `marker.cmin` and `marker.cmax`. Has an
effect only if colors is set to a numerical array.
Defaults to `false` when `marker.cmin` and
`marker.cmax` are set by the user.
cmax
Sets the upper bound of the color domain. Has an effect
only if colors is set to a numerical array. Value
should have the same units as colors and if set,
`marker.cmin` must be set as well.
cmid
Sets the mid-point of the color domain by scaling
`marker.cmin` and/or `marker.cmax` to be equidistant to
this point. Has an effect only if colors is set to a
numerical array. Value should have the same units as
colors. Has no effect when `marker.cauto` is `false`.
cmin
Sets the lower bound of the color domain. Has an effect
only if colors is set to a numerical array. Value
should have the same units as colors and if set,
`marker.cmax` must be set as well.
coloraxis
Sets a reference to a shared color axis. References to
these shared color axes are "coloraxis", "coloraxis2",
"coloraxis3", etc. Settings for these shared color axes
are set in the layout, under `layout.coloraxis`,
`layout.coloraxis2`, etc. Note that multiple color
scales can be linked to the same color axis.
colorbar
:class:`plotly.graph_objects.treemap.marker.ColorBar`
instance or dict with compatible properties
colors
Sets the color of each sector of this trace. If not
specified, the default trace color set is used to pick
the sector colors.
colorscale
Sets the colorscale. Has an effect only if colors is
set to a numerical array. The colorscale must be an
array containing arrays mapping a normalized value to
an rgb, rgba, hex, hsl, hsv, or named color string. At
minimum, a mapping for the lowest (0) and highest (1)
values are required. For example, `[[0,
'rgb(0,0,255)'], [1, 'rgb(255,0,0)']]`. To control the
bounds of the colorscale in color space, use
`marker.cmin` and `marker.cmax`. Alternatively,
`colorscale` may be a palette name string of the
following list: Blackbody,Bluered,Blues,Cividis,Earth,E
lectric,Greens,Greys,Hot,Jet,Picnic,Portland,Rainbow,Rd
Bu,Reds,Viridis,YlGnBu,YlOrRd.
colorssrc
Sets the source reference on Chart Studio Cloud for
`colors`.
cornerradius
Sets the maximum rounding of corners (in px).
depthfade
Determines if the sector colors are faded towards the
background from the leaves up to the headers. This
option is unavailable when a `colorscale` is present,
defaults to false when `marker.colors` is set, but
otherwise defaults to true. When set to "reversed", the
fading direction is inverted, that is the top elements
within hierarchy are drawn with fully saturated colors
while the leaves are faded towards the background
color.
line
:class:`plotly.graph_objects.treemap.marker.Line`
instance or dict with compatible properties
pad
:class:`plotly.graph_objects.treemap.marker.Pad`
instance or dict with compatible properties
pattern
Sets the pattern within the marker.
reversescale
Reverses the color mapping if true. Has an effect only
if colors is set to a numerical array. If true,
`marker.cmin` will correspond to the last color in the
array and `marker.cmax` will correspond to the first
color.
showscale
Determines whether or not a colorbar is displayed for
this trace. Has an effect only if colors is set to a
numerical array.
Returns
-------
Marker
"""
super(Marker, self).__init__("marker")
if "_parent" in kwargs:
self._parent = kwargs["_parent"]
return
# Validate arg
# ------------
if arg is None:
arg = {}
elif isinstance(arg, self.__class__):
arg = arg.to_plotly_json()
elif isinstance(arg, dict):
arg = _copy.copy(arg)
else:
raise ValueError(
"""\
The first argument to the plotly.graph_objs.treemap.Marker
constructor must be a dict or
an instance of :class:`plotly.graph_objs.treemap.Marker`"""
)
# Handle skip_invalid
# -------------------
self._skip_invalid = kwargs.pop("skip_invalid", False)
self._validate = kwargs.pop("_validate", True)
# Populate data dict with properties
# ----------------------------------
_v = arg.pop("autocolorscale", None)
_v = autocolorscale if autocolorscale is not None else _v
if _v is not None:
self["autocolorscale"] = _v
_v = arg.pop("cauto", None)
_v = cauto if cauto is not None else _v
if _v is not None:
self["cauto"] = _v
_v = arg.pop("cmax", None)
_v = cmax if cmax is not None else _v
if _v is not None:
self["cmax"] = _v
_v = arg.pop("cmid", None)
_v = cmid if cmid is not None else _v
if _v is not None:
self["cmid"] = _v
_v = arg.pop("cmin", None)
_v = cmin if cmin is not None else _v
if _v is not None:
self["cmin"] = _v
_v = arg.pop("coloraxis", None)
_v = coloraxis if coloraxis is not None else _v
if _v is not None:
self["coloraxis"] = _v
_v = arg.pop("colorbar", None)
_v = colorbar if colorbar is not None else _v
if _v is not None:
self["colorbar"] = _v
_v = arg.pop("colors", None)
_v = colors if colors is not None else _v
if _v is not None:
self["colors"] = _v
_v = arg.pop("colorscale", None)
_v = colorscale if colorscale is not None else _v
if _v is not None:
self["colorscale"] = _v
_v = arg.pop("colorssrc", None)
_v = colorssrc if colorssrc is not None else _v
if _v is not None:
self["colorssrc"] = _v
_v = arg.pop("cornerradius", None)
_v = cornerradius if cornerradius is not None else _v
if _v is not None:
self["cornerradius"] = _v
_v = arg.pop("depthfade", None)
_v = depthfade if depthfade is not None else _v
if _v is not None:
self["depthfade"] = _v
_v = arg.pop("line", None)
_v = line if line is not None else _v
if _v is not None:
self["line"] = _v
_v = arg.pop("pad", None)
_v = pad if pad is not None else _v
if _v is not None:
self["pad"] = _v
_v = arg.pop("pattern", None)
_v = pattern if pattern is not None else _v
if _v is not None:
self["pattern"] = _v
_v = arg.pop("reversescale", None)
_v = reversescale if reversescale is not None else _v
if _v is not None:
self["reversescale"] = _v
_v = arg.pop("showscale", None)
_v = showscale if showscale is not None else _v
if _v is not None:
self["showscale"] = _v
# Process unknown kwargs
# ----------------------
self._process_kwargs(**dict(arg, **kwargs))
# Reset skip_invalid
# ------------------
self._skip_invalid = False
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@graph_objs@treemap@_marker.py@.PATH_END.py
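The `Marker.__init__` above resolves each property from the explicit keyword argument first, falling back to the same key in the `arg` dict. A minimal sketch of that precedence pattern in plain Python (the function name and keys are illustrative, not part of the plotly API):

```python
import copy

def resolve_properties(arg=None, **explicit):
    """Mimic the arg-dict vs. explicit-keyword precedence used in the
    constructor: an explicit keyword (if not None) overrides the same
    key in `arg`; keys that resolve to None are dropped entirely."""
    arg = copy.copy(arg) if arg else {}
    resolved = {}
    for key in set(arg) | set(explicit):
        _v = arg.pop(key, None)
        _v = explicit.get(key) if explicit.get(key) is not None else _v
        if _v is not None:
            resolved[key] = _v
    return resolved

# Explicit cmax=5 wins over arg's cmax=3; arg alone supplies cmin.
print(sorted(resolve_properties({"cmin": 0, "cmax": 3}, cmax=5).items()))
```

Passing an explicit `None` (e.g. `cmax=None`) falls back to the `arg` value rather than clearing it, which matches the `if _v is not None` guards in the constructor.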
|
{
"filename": "convert.py",
"repo_name": "tensorflow/tensorflow",
"repo_path": "tensorflow_extracted/tensorflow-master/tensorflow/python/data/util/convert.py",
"type": "Python"
}
|
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Helpers constructing Datasets."""
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
def optional_param_to_tensor(argument_name,
argument_value,
argument_default=0,
argument_dtype=dtypes.int64):
if argument_value is not None:
return ops.convert_to_tensor(
argument_value, dtype=argument_dtype, name=argument_name)
else:
return constant_op.constant(
argument_default, dtype=argument_dtype, name=argument_name)
def partial_shape_to_tensor(shape_like):
"""Returns a `tf.Tensor` that represents the given shape.
Args:
shape_like: A value that can be converted to a `tf.TensorShape` or a
`tf.Tensor`.
Returns:
A 1-D `tf.Tensor` of `tf.int64` elements representing the given shape, where
`-1` is substituted for any unknown dimensions.
"""
try:
# First attempt to convert the input to a shape, and return the
# "canonical" tensor representation, which uses `-1` in place of
# `None`.
shape_like = tensor_shape.as_shape(shape_like)
return ops.convert_to_tensor(
[dim if dim is not None else -1 for dim in shape_like.as_list()],
dtype=dtypes.int64)
except (TypeError, ValueError):
# The argument was not trivially convertible to a
# `tf.TensorShape`, so fall back on the conversion to tensor
# machinery.
ret = ops.convert_to_tensor(shape_like, preferred_dtype=dtypes.int64)
if ret.shape.dims is not None and len(ret.shape.dims) != 1:
raise ValueError("The given shape {} must be a 1-D tensor of `tf.int64` "
"values, but the shape was {}.".format(
shape_like, ret.shape))
if ret.dtype != dtypes.int64:
raise TypeError("The given shape {} must be a 1-D tensor of `tf.int64` "
"values, but the element type was {}.".format(
shape_like, ret.dtype.name))
return ret
|
tensorflowREPO_NAMEtensorflowPATH_START.@tensorflow_extracted@tensorflow-master@tensorflow@python@data@util@convert.py@.PATH_END.py
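The docstring of `partial_shape_to_tensor` above says unknown dimensions are substituted with `-1` in the canonical representation. A dependency-free sketch of just that substitution step (the helper name is hypothetical; the real function additionally converts the result to a `tf.int64` tensor):

```python
def canonicalize_shape(shape):
    """Sketch of the -1 substitution performed by partial_shape_to_tensor:
    unknown (None) dimensions become -1, known dimensions pass through."""
    return [dim if dim is not None else -1 for dim in shape]

# A batch dimension left unknown becomes -1.
print(canonicalize_shape([None, 224, 224, 3]))
```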
|
{
"filename": "_values.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/layout/xaxis/rangebreak/_values.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class ValuesValidator(_plotly_utils.basevalidators.InfoArrayValidator):
def __init__(
self, plotly_name="values", parent_name="layout.xaxis.rangebreak", **kwargs
):
super(ValuesValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "calc"),
free_length=kwargs.pop("free_length", True),
items=kwargs.pop("items", {"editType": "calc", "valType": "any"}),
**kwargs,
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@layout@xaxis@rangebreak@_values.py@.PATH_END.py
|
{
"filename": "_shadowsrc.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/scatterpolar/hoverlabel/font/_shadowsrc.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class ShadowsrcValidator(_plotly_utils.basevalidators.SrcValidator):
def __init__(
self,
plotly_name="shadowsrc",
parent_name="scatterpolar.hoverlabel.font",
**kwargs,
):
super(ShadowsrcValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "none"),
**kwargs,
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@scatterpolar@hoverlabel@font@_shadowsrc.py@.PATH_END.py
|
{
"filename": "_templateitemname.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/treemap/marker/colorbar/tickformatstop/_templateitemname.py",
"type": "Python"
}
|
import _plotly_utils.basevalidators
class TemplateitemnameValidator(_plotly_utils.basevalidators.StringValidator):
def __init__(
self,
plotly_name="templateitemname",
parent_name="treemap.marker.colorbar.tickformatstop",
**kwargs,
):
super(TemplateitemnameValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "colorbars"),
**kwargs,
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@treemap@marker@colorbar@tickformatstop@_templateitemname.py@.PATH_END.py
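The validator files above all follow the same pattern: subclass a base validator and supply a class-specific default for each option via `kwargs.pop`, so a caller can still override it. A self-contained sketch of that default-override mechanism (the class names here are illustrative, not the real plotly base classes):

```python
class BaseValidator:
    """Stand-in for _plotly_utils.basevalidators.*: just records fields."""
    def __init__(self, plotly_name, parent_name, edit_type, **kwargs):
        self.plotly_name = plotly_name
        self.parent_name = parent_name
        self.edit_type = edit_type

class ShadowsrcLikeValidator(BaseValidator):
    """Default edit_type is "none" unless the caller overrides it,
    mirroring the kwargs.pop("edit_type", "none") pattern above."""
    def __init__(self, plotly_name="shadowsrc", parent_name="demo", **kwargs):
        super().__init__(
            plotly_name=plotly_name,
            parent_name=parent_name,
            edit_type=kwargs.pop("edit_type", "none"),
            **kwargs,
        )

print(ShadowsrcLikeValidator().edit_type)                  # none
print(ShadowsrcLikeValidator(edit_type="calc").edit_type)  # calc
```

Because `kwargs.pop` removes the key before `**kwargs` is forwarded, the base class never sees `edit_type` twice.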
|
{
"filename": "test1.py",
"repo_name": "teuben/QAC",
"repo_path": "QAC_extracted/QAC-master/test/test1.py",
"type": "Python"
}
|
#
# it is assumed you have done execfile('ngvla.py')
#
# This test takes about 800 MB of disk space, and needs about 2 GB memory
#
# 667.692u 20.628s 9:45.15 117.6% 0+0k 1121096+3180192io 335pf+0w niter=0
# 2073.348u 37.540s 30:59.81 113.4% 0+0k 2335376+3269568io 887pf+0w niter=[0,1000,2000]
pdir = 'test1'
model = '../models/skymodel.fits' # this has phasecenter with dec=-30 for ALMA sims
phasecenter = 'J2000 180.000000deg 40.000000deg' # so modify this for ngVLA
# pick the piece of the model to image, and at what pixel size
# natively this model is 4096 pixels at 0.05"
imsize_m = 4096
pixel_m = 0.005 # 0.01 was the bench
# pick the sky imaging parameters (for tclean)
imsize_s = 512
pixel_s = 0.1 # 0.25 was the bench
# pick a few niter values for tclean to check flux convergence
niter = [0,1000,2000]
# -- do not change parameters below this ---
import sys
for arg in qac_argv(sys.argv):
exec(arg)
test = pdir
ptg = test + '.ptg' # use a single pointing mosaic for the ptg
if type(niter) != type([]): niter = [niter]
# report
qac_log("TEST: %s" % test)
qac_begin(test)
qac_project(test)
qac_version()
# create a single pointing mosaic
qac_ptg(phasecenter,ptg)
# create a MS based on a model and antenna configuration
qac_log("VLA")
qac_vla(test,model,imsize_m,pixel_m,cfg=1,ptg=ptg, phasecenter=phasecenter)
# clean this interferometric map a bit
qac_log("CLEAN")
qac_clean1(test+'/clean1',test+'/'+test+'.SWcore.ms', imsize_s, pixel_s, phasecenter=phasecenter,niter=niter)
# create two OTF TP maps
qac_log("OTF")
qac_tp_otf(test+'/clean1',test+'/'+test+'.SWcore.skymodel', 45.0, label="45")
qac_tp_otf(test+'/clean1',test+'/'+test+'.SWcore.skymodel', 18.0, label="18")
# combine TP + INT using feather, for all niter's
qac_log("FEATHER")
for idx in range(len(niter)):
qac_feather(test+'/clean1',label="45",niteridx=idx)
qac_feather(test+'/clean1',label="18",niteridx=idx)
#
# --------------------------------------------------------------------------------------------------------------
# regression
regress51 = [
"0.0067413167369070152 0.010552344105428218 0.0 0.10000000149011612 113100.52701950417",
"376.81918701712272 791.0277433970175 0.25461947454935646 20152.279939787601 0.0",
"2.0449149476928925 22.901153529996495 -33.027679443359375 96.835914611816406 1479.648825106718",
"6570.3070261008925 8953.8105374926636 309.4671630859375 24888.931640625 60026.507913198388",
"42668.1077336737 44654.739711457922 13273.1376953125 67745.1171875 70071.789310851134",
"6570.3070261008925 8953.8105374926636 309.4671630859375 24888.931640625 60026.507913198388",
"42668.1077336737 44654.739711457922 13273.1376953125 67745.1171875 70071.789310851134",
"14.774882361675623 23.377433067737787 -7.3271188735961914 137.7615966796875 63317.084178311772",
"25.239792422269797 33.841414373223429 -36.166255950927734 180.94865417480469 108163.97872576566",
"9.9542300510560686 17.099277097148892 -17.992568969726562 114.72189331054688 42658.398669071932",
"19.474530926399115 28.794431828862734 -30.066720962524414 181.16015625 83457.213655954009",
]
r = regress51
qac_log("**** REGRESSION STATS ****")
# regression
qac_stats(model, r[0])
qac_stats('test1/test1.SWcore.ms', r[1])
qac_stats('test1/clean1/dirtymap.image', r[2])
qac_stats('test1/clean1/otf18.image.pbcor')
qac_stats('test1/clean1/otf45.image.pbcor')
qac_stats('test1/clean1/dirtymap.image.pbcor')
qac_stats('test1/clean1/dirtymap_2.image.pbcor')
qac_stats('test1/clean1/dirtymap_3.image.pbcor')
qac_stats('test1/clean1/feather18.image.pbcor')
qac_stats('test1/clean1/feather18_2.image.pbcor')
qac_stats('test1/clean1/feather18_3.image.pbcor')
qac_stats('test1/clean1/feather45.image.pbcor')
qac_stats('test1/clean1/feather45_2.image.pbcor')
qac_stats('test1/clean1/feather45_3.image.pbcor')
# done
qac_end()
|
teubenREPO_NAMEQACPATH_START.@QAC_extracted@QAC-master@test@test1.py@.PATH_END.py
|
{
"filename": "Dedner-divB.py",
"repo_name": "LLNL/spheral",
"repo_path": "spheral_extracted/spheral-main/tests/functional/MHD/Dedner-divB/Dedner-divB.py",
"type": "Python"
}
|
#-------------------------------------------------------------------------------
# The evolution of a uniform, magnetized conducting fluid.
#-------------------------------------------------------------------------------
from math import *
from Spheral import *
from SpheralTestUtilities import *
from SpheralVisitDump import dumpPhysicsState
from findLastRestart import *
# Load the mpi module if we're parallel.
import loadmpi
mpi, procID, numProcs = loadmpi.loadmpi()
from GenerateNodeDistribution3d import *
from CubicNodeGenerator import GenerateCubicNodeDistribution
title("Dedner magnetic divergence test")
#-------------------------------------------------------------------------------
# Generic problem parameters
#-------------------------------------------------------------------------------
commandLine(seed = "lattice",
n = 20,
rho0 = 1.0,
V0 = Vector3d(1.0, 1.0, 0.0),
Bz = 1.0/sqrt(4*pi),
P0 = 6.0,
nPerh = 1.3,
mu0 = 1.0,
gamma = 5.0/3.0,
r0 = 1.0/sqrt(8),
divBCleaner = 'none',
mu = 1.0,
Qlimiter = True,
balsaraCorrection = False,
epsilon2 = 1e-2,
negligibleSoundSpeed = 1e-5,
csMultiplier = 1e-4,
hmin = 1e-5,
hmax = 1.0,
hminratio = 0.05,
HsmoothFraction = 0.0,
cfl = 0.25,
XSPH = True,
epsilonTensile = 0.0,
nTensile = 8,
HEvolution = Hydro3d.HEvolutionType.IdealH,
compatibleEnergy = False,
gradhCorrection = True,
limitIdealH = False,
neighborSearchType = Neighbor3d.NeighborSearchType.GatherScatter,
numGridLevels = 20,
topGridCellSize = 2.0,
origin = Vector3d(0.0, 0.0, 0.0),
goalTime = 1.0,
maxSteps = None,
statsStep = 10,
smoothIters = 0,
sumForMassDensity = Hydro3d.MassDensityType.RigorousSumDensity,
restoreCycle = None,
graphics = False,
)
def plotField(x, F, titleStr, filename):
import pylab as p
import griddata as g
import numpy
p.ion()
p.clf()
xhat = Vector3d(1, 0, 0)
yhat = Vector3d(0, 1, 0)
numInternalNodes = len(x.internalValues())
indices = [i for i in range(numInternalNodes) if abs(x[i].z) < 1e-8]
xs = numpy.array([x[i].dot(xhat) for i in indices])
ys = numpy.array([x[i].dot(yhat) for i in indices])
x1 = p.linspace(-0.5, 1.5, 50)
y1 = p.linspace(-0.5, 1.5, 50)
xg, yg = p.meshgrid(x1, y1)
if isinstance(F, VectorField3d) or isinstance(F[0], Vector3d):
Fxs = numpy.array([F[i].dot(xhat) for i in indices])
Fys = numpy.array([F[i].dot(yhat) for i in indices])
Fxg = g.griddata(xs, ys, Fxs, xg, yg)
Fyg = g.griddata(xs, ys, Fys, xg, yg)
p.quiver(xg, yg, Fxg, Fyg)
else:
# levels = [0.1*i for i in xrange(32)]
Fs = numpy.array([F[i] for i in indices])
Fg = g.griddata(xs, ys, Fs, xg, yg)
p.contour(xg, yg, Fg, 30)
p.colorbar()
p.title(titleStr)
p.savefig(filename)
#-------------------------------------------------------------------------------
# Interpolation kernels.
#-------------------------------------------------------------------------------
WT = TableKernel3d(BSplineKernel3d(), 1000)
WTPi = TableKernel3d(BSplineKernel3d(), 1000)
output("WT")
output("WTPi")
kernelExtent = WT.kernelExtent()
#-------------------------------------------------------------------------------
# A few derived variables.
#-------------------------------------------------------------------------------
nx = ny = n
nz = int(2 * 2 * kernelExtent * nPerh)
nzx = 1.0*nz/nx
xmin = (-0.5, -0.5, -0.5*nzx)
xmax = (1.5, 1.5, 1.5*nzx)
u0 = P0 / ((gamma-1.0)*rho0)
dataDir = "Dedner-divB-%ix%ix%i" % (n, n, n)
#-------------------------------------------------------------------------------
# Material properties.
#-------------------------------------------------------------------------------
eos = GammaLawGasMKS3d(gamma, mu)
#-------------------------------------------------------------------------------
# Make the NodeList.
#-------------------------------------------------------------------------------
# Perfectly conducting node list.
nodes = ConductingFluidNodeList("nodes", eos, WT, WTPi)
output("nodes")
nodes.HsmoothFraction = HsmoothFraction
nodes.XSPH = XSPH
nodes.nodesPerSmoothingScale = nPerh
nodes.epsilonTensile = epsilonTensile
nodes.nTensile = nTensile
nodes.hmin = hmin
nodes.hmax = hmax
nodes.hminratio = hminratio
output("nodes.HsmoothFraction")
output("nodes.nodesPerSmoothingScale")
output("nodes.epsilonTensile")
output("nodes.nTensile")
output("nodes.XSPH")
output("nodes.hmin")
output("nodes.hmax")
output("nodes.hminratio")
#-------------------------------------------------------------------------------
# Construct the neighbor object.
#-------------------------------------------------------------------------------
neighbor1 = NestedGridNeighbor3d(nodes,
neighborSearchType,
numGridLevels,
topGridCellSize,
origin,
kernelExtent)
nodes.registerNeighbor(neighbor1)
#-------------------------------------------------------------------------------
# Set the node properties.
#-------------------------------------------------------------------------------
x = nodes.positions()
v = nodes.velocity()
B = nodes.magneticInduction()
if restoreCycle is None:
from ParMETISDistributeNodes import distributeNodes3d
generator = GenerateNodeDistribution3d(nx, ny, nz, rho0, seed,
xmin = xmin,
xmax = xmax,
nNodePerh = nPerh,
SPH = True)
distributeNodes3d((nodes, generator))
output("mpi.reduce(nodes.numInternalNodes, mpi.MIN)")
output("mpi.reduce(nodes.numInternalNodes, mpi.MAX)")
output("mpi.reduce(nodes.numInternalNodes, mpi.SUM)")
# Set node specific thermal energies
nodes.specificThermalEnergy(ScalarField3d("tmp", nodes, u0))
# Set nodal magnetic inductions.
r = [sqrt(xi.x**2 + xi.y**2) for xi in x.internalValues()]
for nodeID in range(nodes.numInternalNodes):
ri = r[nodeID]/r0
if ri < 1.0:
Bx = (ri**8 - 2*ri**4 + 1)/sqrt(4*pi)
else:
Bx = 0.0
B[nodeID] = Vector3d(Bx, 0, Bz)
v[nodeID] = V0
# Plot the B field configuration "before."
#plotField(x, B, 'B before div cleaning', 'B-before.png')
# plotField(x, [Bi.x for Bi in B.internalValues()], 'Bx before div cleaning', 'Bx-before.png')
# Jot down the analytic maximum divergence of B. The expression for
# div B = dBx/dx + dBy/dy + dBz/dz is (16*x*r**2/r0**4)*((r/r0)**4 - 1).
#proj = Vector3d(1., 1., 0)
#rs = [xi.dot(proj) for xi in x.internalValues()]
#divBs = [(16*x[i].x*rs[i]**2/r0**4)*((rs[i]/r0)**4 - 1) for i in xrange(len(x.internalValues()))]
#maxDivB0 = max(divBs)
# Plot div B "before."
#plotField(x, divBs, 'div B before div cleaning', 'divB-before.png')
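The commented-out analysis above quotes an analytic expression for div B of the initial field. A stdlib-only finite-difference sketch of the same check (helper names hypothetical; `R0` taken from the `commandLine` defaults above):

```python
import math

R0 = 1.0 / math.sqrt(8)  # matches r0 in the commandLine defaults above

def Bx_init(x, y):
    # Initial Bx from the node loop above; zero outside the core radius.
    ri = math.hypot(x, y) / R0
    return (ri**8 - 2 * ri**4 + 1) / math.sqrt(4 * math.pi) if ri < 1.0 else 0.0

def div_B(x, y, h=1e-6):
    # By = 0 and Bz is constant, so div B reduces to dBx/dx.
    return (Bx_init(x + h, y) - Bx_init(x - h, y)) / (2.0 * h)
```

By symmetry div B vanishes on the x = 0 plane and everywhere outside r0, which gives a quick sanity check on whatever cleaner is selected.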
#-------------------------------------------------------------------------------
# Check if the necessary output directories exist. If not, create them.
#-------------------------------------------------------------------------------
simName = 'Dedner-divB-%ix%ix%i'%(n, n, nz)
dataDir = '/p/lscratcha/jnjohnso/' + simName
visitDir = dataDir + "/visit"
restartDir = dataDir + "/restart"
import os, sys
if mpi.rank == 0:
if restoreCycle is None:
import shutil
if os.path.exists(visitDir):
shutil.rmtree(visitDir)
if os.path.exists(restartDir):
shutil.rmtree(restartDir)
if not os.path.exists(visitDir):
os.makedirs(visitDir)
if not os.path.exists(restartDir):
os.makedirs(restartDir)
mpi.barrier()
#-------------------------------------------------------------------------------
# Construct a DataBase to hold our node list
#-------------------------------------------------------------------------------
db = DataBase3d()
output("db")
output("db.appendNodeList(nodes)")
output("db.numNodeLists")
output("db.numFluidNodeLists")
#-------------------------------------------------------------------------------
# Construct the artificial viscosities for the problem.
#-------------------------------------------------------------------------------
q = MonaghanGingoldViscosity3d(1.0, 0.75)
#q = PriceMonaghanDissipation(1.0, 1.0, 1.0, 0.75, 1.0)
##-------------------------------------------------------------------------------
## Construct the hydro physics object.
##-------------------------------------------------------------------------------
hydro = Hydro3d(WT, WTPi, q, compatibleEnergy, gradhCorrection)
hydro.cfl = cfl
hydro.HEvolution = HEvolution
hydro.sumForMassDensity = sumForMassDensity
hydro.HsmoothMin = hmin
hydro.HsmoothMax = hmax
#output("hydro")
#output("hydro.cfl")
#output("hydro.HEvolution")
#output("hydro.sumForMassDensity")
#output("hydro.HsmoothMin")
#output("hydro.HsmoothMax")
#output("hydro.kernel()")
#output("hydro.PiKernel()")
#output("hydro.valid()")
#-------------------------------------------------------------------------------
# Construct an MHD object.
#-------------------------------------------------------------------------------
mhd = MHD(WT, mu0)
if divBCleaner == 'none':
mhd.divBCleaner = MHD.BDivergenceCleanerType.noCleaner
elif divBCleaner == 'hyperbolic':
mhd.divBCleaner = MHD.BDivergenceCleanerType.hyperbolicCleaner
elif divBCleaner == 'GreensFn':
mhd.divBCleaner = MHD.BDivergenceCleanerType.GreensFnProjCleaner
elif divBCleaner == 'BiotSavart':
mhd.divBCleaner = MHD.BDivergenceCleanerType.BiotSavartProjCleaner
else:
    raise ValueError("divBCleaner must be 'hyperbolic', 'GreensFn', 'BiotSavart', or 'none'.")
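The if/elif chain above can also be table-driven, which keeps the accepted options and the error message in sync automatically. A sketch, with plain strings standing in for the real `MHD.BDivergenceCleanerType` enum members:

```python
# Keys are the accepted divBCleaner strings; values stand in for the
# MHD.BDivergenceCleanerType enum members used above.
_CLEANERS = {
    'none': 'noCleaner',
    'hyperbolic': 'hyperbolicCleaner',
    'GreensFn': 'GreensFnProjCleaner',
    'BiotSavart': 'BiotSavartProjCleaner',
}

def resolve_cleaner(name):
    try:
        return _CLEANERS[name]
    except KeyError:
        raise ValueError("divBCleaner must be one of %s"
                         % ", ".join(sorted(_CLEANERS)))
```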
#-------------------------------------------------------------------------------
# Create boundary conditions.
#-------------------------------------------------------------------------------
xPlane1 = Plane3d(Vector3d(-0.5, 0.0, 0.0), Vector3d( 1.0, 0.0, 0.0))
xPlane2 = Plane3d(Vector3d( 1.5, 0.0, 0.0), Vector3d(-1.0, 0.0, 0.0))
yPlane1 = Plane3d(Vector3d( 0.0,-0.5, 0.0), Vector3d( 0.0, 1.0, 0.0))
yPlane2 = Plane3d(Vector3d( 0.0, 1.5, 0.0), Vector3d( 0.0,-1.0, 0.0))
zPlane1 = Plane3d(Vector3d( 0.0, 0.0,-0.5*nzx), Vector3d( 0.0, 0.0, 1.0))
zPlane2 = Plane3d(Vector3d( 0.0, 0.0, 1.5*nzx), Vector3d( 0.0, 0.0,-1.0))
xbc = PeriodicBoundary3d(xPlane1, xPlane2)
ybc = PeriodicBoundary3d(yPlane1, yPlane2)
zbc = PeriodicBoundary3d(zPlane1, zPlane2)
hydro.appendBoundary(xbc)
hydro.appendBoundary(ybc)
hydro.appendBoundary(zbc)
mhd.appendBoundary(xbc)
mhd.appendBoundary(ybc)
mhd.appendBoundary(zbc)
#-------------------------------------------------------------------------------
# Construct a time integrator.
#-------------------------------------------------------------------------------
integrator = SynchronousRK2Integrator3d(db)
integrator.appendPhysicsPackage(hydro)
integrator.appendPhysicsPackage(mhd)
integrator.verbose = True
integrator.rigorousBoundaries = True
integrator.lastDt = 1e-3
output("integrator")
output("integrator.havePhysicsPackage(hydro)")
output("integrator.havePhysicsPackage(mhd)")
output("integrator.valid()")
#-------------------------------------------------------------------------------
# Build the controller.
#-------------------------------------------------------------------------------
#raw_input()
restartBaseName = '%s/%s'%(restartDir, simName)
control = SpheralController(integrator, WT,
statsStep = statsStep,
initializeMassDensity = True,
restartBaseName = restartBaseName)
output("control")
#print 'max |div B| (0):', maxDivB0
# Restore if desired.
if restoreCycle is not None:
if restoreCycle == -1:
restoreCycle = findLastRestart(simName)
control.loadRestartFile(restoreCycle)
else:
dumpPhysicsState(integrator, simName, visitDir, dumpDerivatives = True)
output("integrator.dtGrowth")
# If we're using a projection scheme to clean div B, advance one step and
# read off our diagnostics.
if mhd.divBCleaner == MHD.BDivergenceCleanerType.GreensFnProjCleaner or \
mhd.divBCleaner == MHD.BDivergenceCleanerType.BiotSavartProjCleaner:
control.advance(control.time() + 1e-10, 1)
maxDivB1 = max(mhd.maxDivB(), abs(mhd.minDivB()))
# Otherwise, go get 'em!
else:
while control.time() < goalTime:
dt = goalTime/10
control.advance(min(goalTime, control.time() + dt), maxSteps)
control.dropRestartFile()
dumpPhysicsState(integrator, simName, visitDir, dumpDerivatives = True)
maxDivB1 = max(mhd.maxDivB(), abs(mhd.minDivB()))
print('max |div B| (1):', maxDivB1)
# Plot the final field configuration (and its divergence).
#plotField(x, B, 'B after div cleaning', 'B-after.png')
#plotField(x, [Bi.x for Bi in B.internalValues()], 'Bx after div cleaning', 'Bx-after.png')
#plotField(x, nodes.magneticDivergence(), 'div B after div cleaning', 'divB-after.png')
|
LLNLREPO_NAMEspheralPATH_START.@spheral_extracted@spheral-main@tests@functional@MHD@Dedner-divB@Dedner-divB.py@.PATH_END.py
|
{
"filename": "test_history.py",
"repo_name": "langchain-ai/langchain",
"repo_path": "langchain_extracted/langchain-master/libs/langchain/tests/unit_tests/schema/runnable/test_history.py",
"type": "Python"
}
|
from langchain.schema.runnable.history import __all__
EXPECTED_ALL = [
"RunnableWithMessageHistory",
"GetSessionHistoryCallable",
"MessagesOrDictWithMessages",
]
def test_all_imports() -> None:
assert set(__all__) == set(EXPECTED_ALL)
|
langchain-aiREPO_NAMElangchainPATH_START.@langchain_extracted@langchain-master@libs@langchain@tests@unit_tests@schema@runnable@test_history.py@.PATH_END.py
|
{
"filename": "test_acq_peakd_both.py",
"repo_name": "spacetelescope/calcos",
"repo_path": "calcos_extracted/calcos-master/tests/test_acq_peakd_both.py",
"type": "Python"
}
|
"""Tests for COS/BOTH ACQ/PEAKD."""
import pytest
import calcos
from helpers import BaseCOS
# TODO: Mark this as slow when there are faster tests added for CI tests
# so that this only runs in nightly tests.
@pytest.mark.slow
class TestBOTHACQPEAKD(BaseCOS):
detector = 'fuv'
def test_both_acq_peakd(self):
"""
FUV COS regression test
"""
files_to_download = ['ld7y02rrq_rawacq.fits',
'ld7y02rrq_spt.fits']
# Prepare input files.
self.get_input_files(files_to_download)
input_file = 'ld7y02rrq_rawacq.fits'
# Run CALCOS
calcos.calcos(input_file)
# No need to compare results as this test doesn't
# produce any products. We are just testing that the
# code runs to completion
|
spacetelescopeREPO_NAMEcalcosPATH_START.@calcos_extracted@calcos-master@tests@test_acq_peakd_both.py@.PATH_END.py
|
{
"filename": "fft.py",
"repo_name": "jax-ml/jax",
"repo_path": "jax_extracted/jax-main/jax/_src/scipy/fft.py",
"type": "Python"
}
|
# Copyright 2021 The JAX Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
from collections.abc import Sequence
from functools import partial
import math
from jax import lax
import jax.numpy as jnp
from jax._src.util import canonicalize_axis
from jax._src.numpy.util import promote_dtypes_complex, promote_dtypes_inexact
from jax._src.typing import Array
def _W4(N: int, k: Array) -> Array:
N_arr, k = promote_dtypes_complex(N, k)
return jnp.exp(-.5j * jnp.pi * k / N_arr)
def _dct_interleave(x: Array, axis: int) -> Array:
v0 = lax.slice_in_dim(x, None, None, 2, axis)
v1 = lax.rev(lax.slice_in_dim(x, 1, None, 2, axis), (axis,))
return lax.concatenate([v0, v1], axis)
def _dct_ortho_norm(out: Array, axis: int) -> Array:
factor = lax.concatenate([lax.full((1,), 4, out.dtype), lax.full((out.shape[axis] - 1,), 2, out.dtype)], 0)
factor = lax.expand_dims(factor, [a for a in range(out.ndim) if a != axis])
return out / lax.sqrt(factor * out.shape[axis])
# Implementation based on
# John Makhoul: A Fast Cosine Transform in One and Two Dimensions (1980)
def dct(x: Array, type: int = 2, n: int | None = None,
axis: int = -1, norm: str | None = None) -> Array:
"""Computes the discrete cosine transform of the input
JAX implementation of :func:`scipy.fft.dct`.
Args:
x: array
type: integer, default = 2. Currently only type 2 is supported.
n: integer, default = x.shape[axis]. The length of the transform.
If larger than ``x.shape[axis]``, the input will be zero-padded, if
smaller, the input will be truncated.
axis: integer, default=-1. The axis along which the dct will be performed.
norm: string. The normalization mode: one of ``[None, "backward", "ortho"]``.
The default is ``None``, which is equivalent to ``"backward"``.
Returns:
array containing the discrete cosine transform of x
See Also:
- :func:`jax.scipy.fft.dctn`: multidimensional DCT
- :func:`jax.scipy.fft.idct`: inverse DCT
- :func:`jax.scipy.fft.idctn`: multidimensional inverse DCT
Examples:
>>> x = jax.random.normal(jax.random.key(0), (3, 3))
>>> with jnp.printoptions(precision=2, suppress=True):
... print(jax.scipy.fft.dct(x))
[[-0.58 -0.33 -1.08]
[-0.88 -1.01 -1.79]
[-1.06 -2.43 1.24]]
When ``n`` smaller than ``x.shape[axis]``
>>> with jnp.printoptions(precision=2, suppress=True):
... print(jax.scipy.fft.dct(x, n=2))
[[-0.22 -0.9 ]
[-0.57 -1.68]
[-2.52 -0.11]]
When ``n`` smaller than ``x.shape[axis]`` and ``axis=0``
>>> with jnp.printoptions(precision=2, suppress=True):
... print(jax.scipy.fft.dct(x, n=2, axis=0))
[[-2.22 1.43 -0.67]
[ 0.52 -0.26 -0.04]]
When ``n`` larger than ``x.shape[axis]`` and ``axis=1``
>>> with jnp.printoptions(precision=2, suppress=True):
... print(jax.scipy.fft.dct(x, n=4, axis=1))
[[-0.58 -0.35 -0.64 -1.11]
[-0.88 -0.9 -1.46 -1.68]
[-1.06 -2.25 -1.15 1.93]]
"""
if type != 2:
raise NotImplementedError('Only DCT type 2 is implemented.')
if norm is not None and norm not in ['backward', 'ortho']:
raise ValueError(f"jax.scipy.fft.dct: {norm=!r} is not implemented")
axis = canonicalize_axis(axis, x.ndim)
if n is not None:
x = lax.pad(x, jnp.array(0, x.dtype),
[(0, n - x.shape[axis] if a == axis else 0, 0)
for a in range(x.ndim)])
N = x.shape[axis]
v = _dct_interleave(x, axis)
V = jnp.fft.fft(v, axis=axis)
k = lax.expand_dims(jnp.arange(N, dtype=V.real.dtype), [a for a in range(x.ndim) if a != axis])
out = V * _W4(N, k)
out = 2 * out.real
if norm == 'ortho':
out = _dct_ortho_norm(out, axis)
return out
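As a cross-check on the Makhoul FFT-based construction above, the unnormalized (``norm=None``) DCT-II can be evaluated directly from its definition. A stdlib-only reference sketch (not part of the JAX API):

```python
import math

def naive_dct2(x):
    # Direct O(N^2) DCT-II, matching dct(..., norm=None) up to float error:
    #   X[k] = 2 * sum_n x[n] * cos(pi * k * (2n + 1) / (2N))
    N = len(x)
    return [2.0 * sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                      for n in range(N))
            for k in range(N)]
```

For a constant input all energy lands in bin 0, e.g. `naive_dct2([1.0] * 4)` gives `[8, 0, 0, 0]` up to rounding.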
def _dct2(x: Array, axes: Sequence[int], norm: str | None) -> Array:
axis1, axis2 = map(partial(canonicalize_axis, num_dims=x.ndim), axes)
N1, N2 = x.shape[axis1], x.shape[axis2]
v = _dct_interleave(_dct_interleave(x, axis1), axis2)
V = jnp.fft.fftn(v, axes=axes)
k1 = lax.expand_dims(jnp.arange(N1, dtype=V.dtype),
[a for a in range(x.ndim) if a != axis1])
k2 = lax.expand_dims(jnp.arange(N2, dtype=V.dtype),
[a for a in range(x.ndim) if a != axis2])
out = _W4(N1, k1) * (_W4(N2, k2) * V + _W4(N2, -k2) * jnp.roll(jnp.flip(V, axis=axis2), shift=1, axis=axis2))
out = 2 * out.real
if norm == 'ortho':
return _dct_ortho_norm(_dct_ortho_norm(out, axis1), axis2)
return out
def dctn(x: Array, type: int = 2,
s: Sequence[int] | None=None,
axes: Sequence[int] | None = None,
norm: str | None = None) -> Array:
"""Computes the multidimensional discrete cosine transform of the input
JAX implementation of :func:`scipy.fft.dctn`.
Args:
x: array
type: integer, default = 2. Currently only type 2 is supported.
s: integer or sequence of integers. Specifies the shape of the result. If not
specified, it will default to the shape of ``x`` along the specified ``axes``.
axes: integer or sequence of integers. Specifies the axes along which the
transform will be computed.
norm: string. The normalization mode: one of ``[None, "backward", "ortho"]``.
The default is ``None``, which is equivalent to ``"backward"``.
Returns:
array containing the discrete cosine transform of x
See Also:
- :func:`jax.scipy.fft.dct`: one-dimensional DCT
- :func:`jax.scipy.fft.idct`: one-dimensional inverse DCT
- :func:`jax.scipy.fft.idctn`: multidimensional inverse DCT
Examples:
``jax.scipy.fft.dctn`` computes the transform along both the axes by default
when ``axes`` argument is ``None``.
>>> x = jax.random.normal(jax.random.key(0), (3, 3))
>>> with jnp.printoptions(precision=2, suppress=True):
... print(jax.scipy.fft.dctn(x))
[[-5.04 -7.54 -3.26]
[ 0.83 3.64 -4.03]
[ 0.12 -0.73 3.74]]
When ``s=[2]``, dimension of the transform along ``axis 0`` will be ``2``
and dimension along ``axis 1`` will be same as that of input.
>>> with jnp.printoptions(precision=2, suppress=True):
... print(jax.scipy.fft.dctn(x, s=[2]))
[[-2.92 -2.68 -5.74]
[ 0.42 0.97 1. ]]
When ``s=[2]`` and ``axes=[1]``, dimension of the transform along ``axis 1`` will
be ``2`` and dimension along ``axis 0`` will be same as that of input.
Also when ``axes=[1]``, transform will be computed only along ``axis 1``.
>>> with jnp.printoptions(precision=2, suppress=True):
... print(jax.scipy.fft.dctn(x, s=[2], axes=[1]))
[[-0.22 -0.9 ]
[-0.57 -1.68]
[-2.52 -0.11]]
When ``s=[2, 4]``, shape of the transform will be ``(2, 4)``.
>>> with jnp.printoptions(precision=2, suppress=True):
... print(jax.scipy.fft.dctn(x, s=[2, 4]))
[[-2.92 -2.49 -4.21 -5.57]
[ 0.42 0.79 1.16 0.8 ]]
"""
if type != 2:
raise NotImplementedError('Only DCT type 2 is implemented.')
if norm is not None and norm not in ['backward', 'ortho']:
raise ValueError(f"jax.scipy.fft.dctn: {norm=!r} is not implemented")
if axes is None:
axes = range(x.ndim)
if len(axes) == 1:
return dct(x, n=s[0] if s is not None else None, axis=axes[0], norm=norm)
if s is not None:
ns = dict(zip(axes, s))
pads = [(0, ns[a] - x.shape[a] if a in ns else 0, 0) for a in range(x.ndim)]
x = lax.pad(x, jnp.array(0, x.dtype), pads)
if len(axes) == 2:
return _dct2(x, axes=axes, norm=norm)
# compose high-D DCTs from 2D and 1D DCTs:
for axes_block in [axes[i:i+2] for i in range(0, len(axes), 2)]:
x = dctn(x, axes=axes_block, norm=norm)
return x
def idct(x: Array, type: int = 2, n: int | None = None,
axis: int = -1, norm: str | None = None) -> Array:
"""Computes the inverse discrete cosine transform of the input
JAX implementation of :func:`scipy.fft.idct`.
Args:
x: array
type: integer, default = 2. Currently only type 2 is supported.
n: integer, default = x.shape[axis]. The length of the transform.
If larger than ``x.shape[axis]``, the input will be zero-padded, if
smaller, the input will be truncated.
axis: integer, default=-1. The axis along which the dct will be performed.
norm: string. The normalization mode: one of ``[None, "backward", "ortho"]``.
The default is ``None``, which is equivalent to ``"backward"``.
Returns:
array containing the inverse discrete cosine transform of x
See Also:
- :func:`jax.scipy.fft.dct`: DCT
- :func:`jax.scipy.fft.dctn`: multidimensional DCT
- :func:`jax.scipy.fft.idctn`: multidimensional inverse DCT
Examples:
>>> x = jax.random.normal(jax.random.key(0), (3, 3))
>>> with jnp.printoptions(precision=2, suppress=True):
... print(jax.scipy.fft.idct(x))
[[-0.02 -0. -0.17]
[-0.02 -0.07 -0.28]
[-0.16 -0.36 0.18]]
When ``n`` smaller than ``x.shape[axis]``
>>> with jnp.printoptions(precision=2, suppress=True):
... print(jax.scipy.fft.idct(x, n=2))
[[ 0. -0.19]
[-0.03 -0.34]
[-0.38 0.04]]
When ``n`` smaller than ``x.shape[axis]`` and ``axis=0``
>>> with jnp.printoptions(precision=2, suppress=True):
... print(jax.scipy.fft.idct(x, n=2, axis=0))
[[-0.35 0.23 -0.1 ]
[ 0.17 -0.09 0.01]]
When ``n`` larger than ``x.shape[axis]`` and ``axis=0``
>>> with jnp.printoptions(precision=2, suppress=True):
... print(jax.scipy.fft.idct(x, n=4, axis=0))
[[-0.34 0.03 0.07]
[ 0. 0.18 -0.17]
[ 0.14 0.09 -0.14]
[ 0. -0.18 0.14]]
``jax.scipy.fft.idct`` can be used to reconstruct ``x`` from the result
of ``jax.scipy.fft.dct``
>>> x_dct = jax.scipy.fft.dct(x)
>>> jnp.allclose(x, jax.scipy.fft.idct(x_dct))
Array(True, dtype=bool)
"""
if type != 2:
raise NotImplementedError('Only DCT type 2 is implemented.')
if norm is not None and norm not in ['backward', 'ortho']:
raise ValueError(f"jax.scipy.fft.idct: {norm=!r} is not implemented")
axis = canonicalize_axis(axis, x.ndim)
if n is not None:
x = lax.pad(x, jnp.array(0, x.dtype),
[(0, n - x.shape[axis] if a == axis else 0, 0)
for a in range(x.ndim)])
N = x.shape[axis]
x, = promote_dtypes_inexact(x)
if norm is None or norm == 'backward':
x = _dct_ortho_norm(x, axis)
x = _dct_ortho_norm(x, axis)
k = lax.expand_dims(jnp.arange(N, dtype=x.dtype), [a for a in range(x.ndim) if a != axis])
# everything is complex from here...
w4 = _W4(N,k)
x = x.astype(w4.dtype)
x = x / (_W4(N, k))
x = x * 2 * N
x = jnp.fft.ifft(x, axis=axis)
# convert back to reals..
out = _dct_deinterleave(x.real, axis)
return out
def idctn(x: Array, type: int = 2,
s: Sequence[int] | None=None,
axes: Sequence[int] | None = None,
norm: str | None = None) -> Array:
"""Computes the multidimensional inverse discrete cosine transform of the input
JAX implementation of :func:`scipy.fft.idctn`.
Args:
x: array
type: integer, default = 2. Currently only type 2 is supported.
s: integer or sequence of integers. Specifies the shape of the result. If not
specified, it will default to the shape of ``x`` along the specified ``axes``.
axes: integer or sequence of integers. Specifies the axes along which the
transform will be computed.
norm: string. The normalization mode: one of ``[None, "backward", "ortho"]``.
The default is ``None``, which is equivalent to ``"backward"``.
Returns:
array containing the inverse discrete cosine transform of x
See Also:
- :func:`jax.scipy.fft.dct`: one-dimensional DCT
- :func:`jax.scipy.fft.dctn`: multidimensional DCT
- :func:`jax.scipy.fft.idct`: one-dimensional inverse DCT
Examples:
``jax.scipy.fft.idctn`` computes the transform along both the axes by default
when ``axes`` argument is ``None``.
>>> x = jax.random.normal(jax.random.key(0), (3, 3))
>>> with jnp.printoptions(precision=2, suppress=True):
... print(jax.scipy.fft.idctn(x))
[[-0.03 -0.08 -0.08]
[ 0.05 0.12 -0.09]
[-0.02 -0.04 0.08]]
When ``s=[2]``, dimension of the transform along ``axis 0`` will be ``2``
and dimension along ``axis 1`` will be the same as that of input.
>>> with jnp.printoptions(precision=2, suppress=True):
... print(jax.scipy.fft.idctn(x, s=[2]))
[[-0.01 -0.03 -0.14]
[ 0. 0.03 0.06]]
When ``s=[2]`` and ``axes=[1]``, dimension of the transform along ``axis 1`` will
be ``2`` and dimension along ``axis 0`` will be same as that of input.
Also when ``axes=[1]``, transform will be computed only along ``axis 1``.
>>> with jnp.printoptions(precision=2, suppress=True):
... print(jax.scipy.fft.idctn(x, s=[2], axes=[1]))
[[ 0. -0.19]
[-0.03 -0.34]
[-0.38 0.04]]
When ``s=[2, 4]``, shape of the transform will be ``(2, 4)``
>>> with jnp.printoptions(precision=2, suppress=True):
... print(jax.scipy.fft.idctn(x, s=[2, 4]))
[[-0.01 -0.01 -0.05 -0.11]
[ 0. 0.01 0.03 0.04]]
``jax.scipy.fft.idctn`` can be used to reconstruct ``x`` from the result
of ``jax.scipy.fft.dctn``
>>> x_dctn = jax.scipy.fft.dctn(x)
>>> jnp.allclose(x, jax.scipy.fft.idctn(x_dctn))
Array(True, dtype=bool)
"""
if type != 2:
raise NotImplementedError('Only DCT type 2 is implemented.')
if norm is not None and norm not in ['backward', 'ortho']:
raise ValueError(f"jax.scipy.fft.idctn: {norm=!r} is not implemented")
if axes is None:
axes = range(x.ndim)
if len(axes) == 1:
return idct(x, n=s[0] if s is not None else None, axis=axes[0], norm=norm)
if s is not None:
ns = dict(zip(axes, s))
pads = [(0, ns[a] - x.shape[a] if a in ns else 0, 0) for a in range(x.ndim)]
x = lax.pad(x, jnp.array(0, x.dtype), pads)
# compose high-D DCTs from 1D DCTs:
for axis in axes:
x = idct(x, axis=axis, norm=norm)
return x
def _dct_deinterleave(x: Array, axis: int) -> Array:
empty_slice = slice(None, None, None)
ix0 = tuple(
slice(None, math.ceil(x.shape[axis]/2), 1) if i == axis else empty_slice
for i in range(len(x.shape)))
ix1 = tuple(
slice(math.ceil(x.shape[axis]/2), None, 1) if i == axis else empty_slice
for i in range(len(x.shape)))
v0 = x[ix0]
v1 = lax.rev(x[ix1], (axis,))
out = jnp.zeros(x.shape, dtype=x.dtype)
evens = tuple(
slice(None, None, 2) if i == axis else empty_slice for i in range(len(x.shape)))
odds = tuple(
slice(1, None, 2) if i == axis else empty_slice for i in range(len(x.shape)))
out = out.at[evens].set(v0)
out = out.at[odds].set(v1)
return out
|
jax-mlREPO_NAMEjaxPATH_START.@jax_extracted@jax-main@jax@_src@scipy@fft.py@.PATH_END.py
|
{
"filename": "feature_request.md",
"repo_name": "kyleaoman/martini",
"repo_path": "martini_extracted/martini-main/.github/ISSUE_TEMPLATE/feature_request.md",
"type": "Markdown"
}
|
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
|
kyleaomanREPO_NAMEmartiniPATH_START.@martini_extracted@martini-main@.github@ISSUE_TEMPLATE@feature_request.md@.PATH_END.py
|
{
"filename": "core.py",
"repo_name": "adrn/gala",
"repo_path": "gala_extracted/gala-main/gala/dynamics/mockstream/core.py",
"type": "Python"
}
|
# Third-party
import astropy.units as u
import numpy as np
# Project
from ...io import quantity_to_hdf5, quantity_from_hdf5
from .. import PhaseSpacePosition
__all__ = ['MockStream']
class MockStream(PhaseSpacePosition):
@u.quantity_input(release_time=u.Myr)
def __init__(self, pos, vel=None, frame=None,
release_time=None, lead_trail=None):
super().__init__(pos=pos, vel=vel, frame=frame)
if release_time is not None:
release_time = u.Quantity(release_time)
if len(release_time) != self.pos.shape[0]:
raise ValueError('shape mismatch: input release time array '
'must have the same shape as the input '
'phase-space data, minus the component '
'dimension. expected {}, got {}'
.format(self.pos.shape[0],
len(release_time)))
self.release_time = release_time
if lead_trail is not None:
lead_trail = np.array(lead_trail)
if len(lead_trail) != self.pos.shape[0]:
raise ValueError('shape mismatch: input leading/trailing array '
'must have the same shape as the input '
'phase-space data, minus the component '
'dimension. expected {}, got {}'
.format(self.pos.shape[0],
len(lead_trail)))
self.lead_trail = lead_trail
# ------------------------------------------------------------------------
# Input / output
#
def to_hdf5(self, f):
"""
Serialize this object to an HDF5 file.
Requires ``h5py``.
Parameters
----------
f : str, :class:`h5py.File`
Either the filename or an open HDF5 file.
"""
f = super().to_hdf5(f)
# if self.potential is not None:
# import yaml
# from ..potential.potential.io import to_dict
# f['potential'] = yaml.dump(to_dict(self.potential)).encode('utf-8')
        if self.release_time is not None:
quantity_to_hdf5(f, 'release_time', self.release_time)
if self.lead_trail is not None:
f['lead_trail'] = self.lead_trail.astype('S1') # TODO HACK
return f
@classmethod
def from_hdf5(cls, f):
"""
Load an object from an HDF5 file.
Requires ``h5py``.
Parameters
----------
f : str, :class:`h5py.File`
Either the filename or an open HDF5 file.
"""
# TODO: this is duplicated code from PhaseSpacePosition
if isinstance(f, str):
import h5py
f = h5py.File(f, mode='r')
obj = PhaseSpacePosition.from_hdf5(f)
if 'release_time' in f:
t = quantity_from_hdf5(f['release_time'])
else:
t = None
if 'lead_trail' in f:
lt = f['lead_trail'][:]
else:
lt = None
return cls(pos=obj.pos, vel=obj.vel,
release_time=t, lead_trail=lt,
frame=obj.frame)
|
adrnREPO_NAMEgalaPATH_START.@gala_extracted@gala-main@gala@dynamics@mockstream@core.py@.PATH_END.py
|
{
"filename": "__init__.py",
"repo_name": "catboost/catboost",
"repo_path": "catboost_extracted/catboost-master/contrib/python/plotly/py3/plotly/validators/scattergl/error_x/__init__.py",
"type": "Python"
}
|
import sys
from typing import TYPE_CHECKING
if sys.version_info < (3, 7) or TYPE_CHECKING:
from ._width import WidthValidator
from ._visible import VisibleValidator
from ._valueminus import ValueminusValidator
from ._value import ValueValidator
from ._type import TypeValidator
from ._tracerefminus import TracerefminusValidator
from ._traceref import TracerefValidator
from ._thickness import ThicknessValidator
from ._symmetric import SymmetricValidator
from ._copy_ystyle import Copy_YstyleValidator
from ._color import ColorValidator
from ._arraysrc import ArraysrcValidator
from ._arrayminussrc import ArrayminussrcValidator
from ._arrayminus import ArrayminusValidator
from ._array import ArrayValidator
else:
from _plotly_utils.importers import relative_import
__all__, __getattr__, __dir__ = relative_import(
__name__,
[],
[
"._width.WidthValidator",
"._visible.VisibleValidator",
"._valueminus.ValueminusValidator",
"._value.ValueValidator",
"._type.TypeValidator",
"._tracerefminus.TracerefminusValidator",
"._traceref.TracerefValidator",
"._thickness.ThicknessValidator",
"._symmetric.SymmetricValidator",
"._copy_ystyle.Copy_YstyleValidator",
"._color.ColorValidator",
"._arraysrc.ArraysrcValidator",
"._arrayminussrc.ArrayminussrcValidator",
"._arrayminus.ArrayminusValidator",
"._array.ArrayValidator",
],
)
|
catboostREPO_NAMEcatboostPATH_START.@catboost_extracted@catboost-master@contrib@python@plotly@py3@plotly@validators@scattergl@error_x@__init__.py@.PATH_END.py
|
{
"filename": "groups.py",
"repo_name": "spacetelescope/PyFITS",
"repo_path": "PyFITS_extracted/PyFITS-master/pyfits/hdu/groups.py",
"type": "Python"
}
|
import sys
import numpy as np
from ..column import Column, ColDefs, FITS2NUMPY
from ..fitsrec import FITS_rec, FITS_record
from ..util import lazyproperty, _is_int, _is_pseudo_unsigned, _unsigned_zero
from .base import DTYPE2BITPIX
from .image import PrimaryHDU
from .table import _TableLikeHDU
class Group(FITS_record):
"""
One group of the random group data.
"""
def __init__(self, input, row=0, start=None, end=None, step=None,
base=None):
super(Group, self).__init__(input, row, start, end, step, base)
@property
def parnames(self):
return self.array.parnames
@property
def data(self):
# The last column in the coldefs is the data portion of the group
return self.field(self.array._coldefs.names[-1])
@lazyproperty
def _unique(self):
return _par_indices(self.parnames)
def par(self, parname):
"""
Get the group parameter value.
"""
if _is_int(parname):
result = self.array[self.row][parname]
else:
indx = self._unique[parname.upper()]
if len(indx) == 1:
result = self.array[self.row][indx[0]]
# if more than one group parameter has the same name
else:
result = self.array[self.row][indx[0]].astype('f8')
for i in indx[1:]:
result += self.array[self.row][i]
return result
def setpar(self, parname, value):
"""
Set the group parameter value.
"""
# TODO: It would be nice if, instead of requiring a multi-part value to
# be an array, there were an *option* to automatically split the value
# into multiple columns if it doesn't already fit in the array data
# type.
if _is_int(parname):
self.array[self.row][parname] = value
else:
indx = self._unique[parname.upper()]
if len(indx) == 1:
self.array[self.row][indx[0]] = value
# if more than one group parameter has the same name, the
# value must be a list (or tuple) containing arrays
else:
if isinstance(value, (list, tuple)) and \
len(indx) == len(value):
for i in range(len(indx)):
self.array[self.row][indx[i]] = value[i]
else:
raise ValueError('Parameter value must be a sequence '
'with %d arrays/numbers.' % len(indx))
class GroupData(FITS_rec):
"""
Random groups data object.
Allows structured access to FITS Group data in a manner analogous
to tables.
"""
_record_type = Group
def __new__(cls, input=None, bitpix=None, pardata=None, parnames=[],
bscale=None, bzero=None, parbscales=None, parbzeros=None):
"""
Parameters
----------
input : array or FITS_rec instance
input data, either the group data itself (a
`numpy.ndarray`) or a record array (`FITS_rec`) which will
contain both group parameter info and the data. The rest
of the arguments are used only for the first case.
bitpix : int
data type as expressed in FITS ``BITPIX`` value (8, 16, 32,
64, -32, or -64)
pardata : sequence of arrays
parameter data, as a list of (numeric) arrays.
parnames : sequence of str
list of parameter names.
bscale : int
``BSCALE`` of the data
bzero : int
``BZERO`` of the data
parbscales : sequence of int
list of bscales for the parameters
parbzeros : sequence of int
list of bzeros for the parameters
"""
if not isinstance(input, FITS_rec):
if pardata is None:
npars = 0
else:
npars = len(pardata)
if parbscales is None:
parbscales = [None] * npars
if parbzeros is None:
parbzeros = [None] * npars
if parnames is None:
parnames = ['PAR%d' % (idx + 1) for idx in range(npars)]
if len(parnames) != npars:
raise ValueError('The number of parameter data arrays does '
'not match the number of parameters.')
unique_parnames = _unique_parnames(parnames + ['DATA'])
if bitpix is None:
bitpix = DTYPE2BITPIX[input.dtype.name]
fits_fmt = GroupsHDU._bitpix2tform[bitpix] # -32 -> 'E'
format = FITS2NUMPY[fits_fmt] # 'E' -> 'f4'
data_fmt = '%s%s' % (str(input.shape[1:]), format)
formats = ','.join(([format] * npars) + [data_fmt])
gcount = input.shape[0]
cols = [Column(name=unique_parnames[idx], format=fits_fmt,
bscale=parbscales[idx], bzero=parbzeros[idx])
for idx in range(npars)]
cols.append(Column(name=unique_parnames[-1], format=fits_fmt,
bscale=bscale, bzero=bzero))
coldefs = ColDefs(cols)
self = FITS_rec.__new__(cls,
np.rec.array(None,
formats=formats,
names=coldefs.names,
shape=gcount))
# By default the data field will just be 'DATA', but it may be
# uniquified if 'DATA' is already used by one of the group names
self._data_field = unique_parnames[-1]
self._coldefs = coldefs
self.parnames = parnames
for idx, name in enumerate(unique_parnames[:-1]):
column = coldefs[idx]
# Note: _get_scale_factors is used here and in other cases
# below to determine whether the column has non-default
# scale/zero factors.
# TODO: Find a better way to do this than using this interface
scale, zero = self._get_scale_factors(column)[3:5]
if scale or zero:
self._cache_field(name, pardata[idx])
else:
np.rec.recarray.field(self, idx)[:] = pardata[idx]
column = coldefs[self._data_field]
scale, zero = self._get_scale_factors(column)[3:5]
if scale or zero:
self._cache_field(self._data_field, input)
else:
np.rec.recarray.field(self, npars)[:] = input
else:
self = FITS_rec.__new__(cls, input)
self.parnames = None
return self
def __array_finalize__(self, obj):
super(GroupData, self).__array_finalize__(obj)
if isinstance(obj, GroupData):
self.parnames = obj.parnames
elif isinstance(obj, FITS_rec):
self.parnames = obj._coldefs.names
def __getitem__(self, key):
out = super(GroupData, self).__getitem__(key)
if isinstance(out, GroupData):
out.parnames = self.parnames
return out
@property
def data(self):
"""
The raw group data represented as a multi-dimensional `numpy.ndarray`
array.
"""
# The last column in the coldefs is the data portion of the group
return self.field(self._coldefs.names[-1])
@lazyproperty
def _unique(self):
return _par_indices(self.parnames)
def par(self, parname):
"""
Get the group parameter values.
"""
if _is_int(parname):
result = self.field(parname)
else:
indx = self._unique[parname.upper()]
if len(indx) == 1:
result = self.field(indx[0])
# if more than one group parameter has the same name
else:
result = self.field(indx[0]).astype('f8')
for i in indx[1:]:
result += self.field(i)
return result
class GroupsHDU(PrimaryHDU, _TableLikeHDU):
"""
FITS Random Groups HDU class.
See the :ref:`random-groups` section in the PyFITS documentation for more
details on working with this type of HDU.
"""
_bitpix2tform = {8: 'B', 16: 'I', 32: 'J', 64: 'K', -32: 'E', -64: 'D'}
_data_type = GroupData
_data_field = 'DATA'
"""
The name of the table record array field that will contain the group data
for each group; 'DATA' by default, but may be preceded by any number of
underscores if 'DATA' is already a parameter name
"""
def __init__(self, data=None, header=None):
super(GroupsHDU, self).__init__(data=data, header=header)
# Update the axes; GROUPS HDUs should always have at least one axis
if len(self._axes) <= 0:
self._axes = [0]
self._header['NAXIS'] = 1
self._header.set('NAXIS1', 0, after='NAXIS')
@classmethod
def match_header(cls, header):
keyword = header.cards[0].keyword
return (keyword == 'SIMPLE' and 'GROUPS' in header and
header['GROUPS'] == True)
@lazyproperty
def data(self):
"""
The data of a random group FITS file will be like a binary table's
data.
"""
data = self._get_tbdata()
data._coldefs = self.columns
data.parnames = self.parnames
del self.columns
return data
@lazyproperty
def parnames(self):
"""The names of the group parameters as described by the header."""
pcount = self._header['PCOUNT']
# The FITS standard doesn't really say what to do if a parname is
# missing, so for now just assume that won't happen
return [self._header['PTYPE' + str(idx + 1)] for idx in range(pcount)]
@lazyproperty
def columns(self):
if self._has_data and hasattr(self.data, '_coldefs'):
return self.data._coldefs
format = self._bitpix2tform[self._header['BITPIX']]
pcount = self._header['PCOUNT']
parnames = []
bscales = []
bzeros = []
for idx in range(pcount):
bscales.append(self._header.get('PSCAL' + str(idx + 1), None))
bzeros.append(self._header.get('PZERO' + str(idx + 1), None))
parnames.append(self._header['PTYPE' + str(idx + 1)])
formats = [format] * len(parnames)
dim = [None] * len(parnames)
# Now create columns from collected parameters, but first add the DATA
# column too, to contain the group data.
parnames.append('DATA')
bscales.append(self._header.get('BSCALE'))
bzeros.append(self._header.get('BZERO'))
data_shape = self.shape[:-1]
formats.append(str(int(np.prod(data_shape))) + format)
dim.append(data_shape)
parnames = _unique_parnames(parnames)
self._data_field = parnames[-1]
cols = [Column(name=name, format=fmt, bscale=bscale, bzero=bzero,
dim=dim)
for name, fmt, bscale, bzero, dim in
zip(parnames, formats, bscales, bzeros, dim)]
coldefs = ColDefs(cols)
return coldefs
@property
def _nrows(self):
if not self._data_loaded:
# The number of 'groups' equates to the number of rows in the table
# representation of the data
return self._header.get('GCOUNT', 0)
else:
return len(self.data)
@lazyproperty
def _theap(self):
# Only really a lazyproperty for symmetry with _TableBaseHDU
return 0
@property
def size(self):
"""
Returns the size (in bytes) of the HDU's data part.
"""
size = 0
naxis = self._header.get('NAXIS', 0)
# for random group image, NAXIS1 should be 0, so we skip NAXIS1.
if naxis > 1:
size = 1
for idx in range(1, naxis):
size = size * self._header['NAXIS' + str(idx + 1)]
bitpix = self._header['BITPIX']
gcount = self._header.get('GCOUNT', 1)
pcount = self._header.get('PCOUNT', 0)
size = abs(bitpix) * gcount * (pcount + size) // 8
return size
def update_header(self):
old_naxis = self._header.get('NAXIS', 0)
if self._data_loaded:
if isinstance(self.data, GroupData):
self._axes = list(self.data.data.shape)[1:]
self._axes.reverse()
self._axes = [0] + self._axes
field0 = self.data.dtype.names[0]
field0_code = self.data.dtype.fields[field0][0].name
elif self.data is None:
self._axes = [0]
field0_code = 'uint8' # For lack of a better default
else:
raise ValueError('incorrect array type')
self._header['BITPIX'] = DTYPE2BITPIX[field0_code]
self._header['NAXIS'] = len(self._axes)
# add NAXISi if it does not exist
for idx, axis in enumerate(self._axes):
if (idx == 0):
after = 'NAXIS'
else:
after = 'NAXIS' + str(idx)
self._header.set('NAXIS' + str(idx + 1), axis, after=after)
# delete extra NAXISi's
for idx in range(len(self._axes) + 1, old_naxis + 1):
try:
del self._header['NAXIS' + str(idx)]
except KeyError:
pass
if self._has_data and isinstance(self.data, GroupData):
self._header.set('GROUPS', True,
after='NAXIS' + str(len(self._axes)))
self._header.set('PCOUNT', len(self.data.parnames), after='GROUPS')
self._header.set('GCOUNT', len(self.data), after='PCOUNT')
column = self.data._coldefs[self._data_field]
scale, zero = self.data._get_scale_factors(column)[3:5]
if scale:
self._header.set('BSCALE', column.bscale)
if zero:
self._header.set('BZERO', column.bzero)
for idx, name in enumerate(self.data.parnames):
self._header.set('PTYPE' + str(idx + 1), name)
column = self.data._coldefs[idx]
scale, zero = self.data._get_scale_factors(column)[3:5]
if scale:
self._header.set('PSCAL' + str(idx + 1), column.bscale)
if zero:
self._header.set('PZERO' + str(idx + 1), column.bzero)
# Update the position of the EXTEND keyword if it already exists
if 'EXTEND' in self._header:
if len(self._axes):
after = 'NAXIS' + str(len(self._axes))
else:
after = 'NAXIS'
self._header.set('EXTEND', after=after)
def _writedata_internal(self, fileobj):
"""
Basically copy/pasted from `_ImageBaseHDU._writedata_internal()`, but
we have to get the data's byte order a different way...
TODO: Might be nice to store some indication of the data's byte order
as an attribute or function so that we don't have to do this.
"""
size = 0
if self.data is not None:
self.data._scale_back()
# Based on the system type, determine the byteorders that
# would need to be swapped to get to big-endian output
if sys.byteorder == 'little':
swap_types = ('<', '=')
else:
swap_types = ('<',)
# deal with unsigned integer 16, 32 and 64 data
if _is_pseudo_unsigned(self.data.dtype):
# Convert the unsigned array to signed
output = np.array(
self.data - _unsigned_zero(self.data.dtype),
dtype='>i%d' % self.data.dtype.itemsize)
should_swap = False
else:
output = self.data
fname = self.data.dtype.names[0]
byteorder = self.data.dtype.fields[fname][0].str[0]
should_swap = (byteorder in swap_types)
if not fileobj.simulateonly:
if should_swap:
output.byteswap(True)
try:
fileobj.writearray(output)
finally:
output.byteswap(True)
else:
fileobj.writearray(output)
size += output.size * output.itemsize
return size
def _verify(self, option='warn'):
errs = super(GroupsHDU, self)._verify(option=option)
# Verify locations and values of mandatory keywords.
self.req_cards('NAXIS', 2,
lambda v: (_is_int(v) and v >= 1 and v <= 999), 1,
option, errs)
self.req_cards('NAXIS1', 3, lambda v: (_is_int(v) and v == 0), 0,
option, errs)
after = self._header['NAXIS'] + 3
pos = lambda x: x >= after
self.req_cards('GCOUNT', pos, _is_int, 1, option, errs)
self.req_cards('PCOUNT', pos, _is_int, 0, option, errs)
self.req_cards('GROUPS', pos, lambda v: (v == True), True, option,
errs)
return errs
def _calculate_datasum(self, blocking):
"""
Calculate the value for the ``DATASUM`` card in the HDU.
"""
if self._has_data:
# We have the data to be used.
# Check the byte order of the data. If it is little endian we
# must swap it before calculating the datasum.
# TODO: Maybe check this on a per-field basis instead of assuming
# that all fields have the same byte order?
byteorder = \
self.data.dtype.fields[self.data.dtype.names[0]][0].str[0]
if byteorder != '>':
byteswapped = True
d = self.data.byteswap(True)
d.dtype = d.dtype.newbyteorder('>')
else:
byteswapped = False
d = self.data
byte_data = d.view(type=np.ndarray, dtype=np.ubyte)
cs = self._compute_checksum(byte_data, blocking=blocking)
# If the data was byteswapped in this method then return it to
# its original little-endian order.
if byteswapped:
d.byteswap(True)
d.dtype = d.dtype.newbyteorder('<')
return cs
else:
# This is the case where the data has not been read from the file
# yet. We can handle that in a generic manner so we do it in the
# base class. The other possibility is that there is no data at
# all. This can also be handled in a generic manner.
return super(GroupsHDU, self)._calculate_datasum(blocking=blocking)
def _summary(self):
summary = super(GroupsHDU, self)._summary()
name, classname, length, shape, format, gcount = summary
# Drop the first axis from the shape
if shape:
shape = shape[1:]
if shape and all(shape):
# Update the format
format = self.columns[0].dtype.name
# Update the GCOUNT report
gcount = '%d Groups %d Parameters' % (self._gcount, self._pcount)
return (name, classname, length, shape, format, gcount)
def _par_indices(names):
"""
Given a list of objects, returns a mapping of objects in that list to the
index or indices at which that object was found in the list.
"""
unique = {}
for idx, name in enumerate(names):
# Case insensitive
name = name.upper()
if name in unique:
unique[name].append(idx)
else:
unique[name] = [idx]
return unique
def _unique_parnames(names):
"""
Given a list of parnames, including possible duplicates, returns a new list
of parnames with duplicates prepended by one or more underscores to make
them unique. This is also case insensitive.
"""
upper_names = set()
unique_names = []
for name in names:
name_upper = name.upper()
while name_upper in upper_names:
name = '_' + name
name_upper = '_' + name_upper
unique_names.append(name)
upper_names.add(name_upper)
return unique_names
|
spacetelescopeREPO_NAMEPyFITSPATH_START.@PyFITS_extracted@PyFITS-master@pyfits@hdu@groups.py@.PATH_END.py
|
{
"filename": "README.md",
"repo_name": "zachetienne/nrpytutorial",
"repo_path": "nrpytutorial_extracted/nrpytutorial-master/README.md",
"type": "Markdown"
}
|
# NRPy+, SENRv2, and the NRPy+ Jupyter Tutorial
[](https://github.com/zachetienne/nrpytutorial/actions/workflows/github-actions-ubuntu20.yml)
[](https://github.com/zachetienne/nrpytutorial/actions/workflows/github-actions-ubuntu22.yml)
[](https://github.com/zachetienne/nrpytutorial/actions/workflows/github-actions-MacOS12.yml)
[](https://github.com/zachetienne/nrpytutorial/actions/workflows/github-actions-windows2022.yml)
[](https://mybinder.org/v2/gh/zachetienne/nrpytutorial/master?filepath=NRPyPlus_Tutorial.ipynb)
This repository houses
* [The newest version of NRPy+: Python-Based Code Generation for Numerical Relativity... and Beyond](https://arxiv.org/abs/1712.07658),
* The second version of SENR, the Simple, Efficient Numerical Relativity code (see the "Colliding Black Holes" Start-to-Finish tutorial notebook), and
* The NRPy+ Jupyter Tutorial: An Introduction to Python-Based Code Generation for Numerical Relativity... and Beyond!
To explore the NRPy+ tutorial without downloading, check out the [NRPy+ tutorial mybinder](https://mybinder.org/v2/gh/zachetienne/nrpytutorial/master?filepath=NRPyPlus_Tutorial.ipynb).
If you would like to explore the NRPy+ tutorial on your local computer, you'll need to install Python, Jupyter, Sympy, and Matplotlib. Once they are installed, [you may find this useful](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/execute.html)
In certain circumstances, developers may wish to execute one of these Jupyter notebooks from the command line, for example when the notebook constructs an [Einstein Toolkit](https://einsteintoolkit.org) thorn. In that case, the following command should be useful:
`jupyter nbconvert --to notebook --inplace --execute --ExecutePreprocessor.timeout=-1 [Jupyter notebook file]`
Alternatively one can simply use the script:
`./run_Jupyter_notebook.sh [Jupyter notebook file]`
|
zachetienneREPO_NAMEnrpytutorialPATH_START.@nrpytutorial_extracted@nrpytutorial-master@README.md@.PATH_END.py
|