| code (string) | signature (string) | docstring (string) | loss_without_docstring (float64) | loss_with_docstring (float64) | factor (float64) |
|---|---|---|---|---|---|
# Extended filename used for output images
extfnm = fnm + '_files'
# Directory into which output images are written
extpth = os.path.join(pth, extfnm)
# Make output image directory if it doesn't exist
mkdir(extpth)
# Iterate over output images in resources dict
for r in res['output... | def write_notebook_rst(txt, res, fnm, pth) | Write the converted notebook text `txt` and resources `res` to
filename `fnm` in directory `pth`. | 3.327784 | 3.316674 | 1.00335 |
# Read the notebook file
ntbk = nbformat.read(npth, nbformat.NO_CONVERT)
# Convert notebook object to rst document at rpth
notebook_object_to_rst(ntbk, rpth, rdir, cr) | def notebook_to_rst(npth, rpth, rdir, cr=None) | Convert notebook at `npth` to rst document at `rpth`, in directory
`rdir`. Parameter `cr` is a CrossReferenceLookup object. | 4.28669 | 4.469654 | 0.959065 |
# Parent directory of file rpth
rdir = os.path.dirname(rpth)
# File basename
rb = os.path.basename(os.path.splitext(rpth)[0])
# Pre-process notebook prior to conversion to rst
if cr is not None:
preprocess_notebook(ntbk, cr)
# Convert notebook to rst
rex = RSTExporter()
... | def notebook_object_to_rst(ntbk, rpth, cr=None) | Convert notebook object `ntbk` to rst document at `rpth` (the
containing directory is derived from `rpth`). Parameter `cr` is a
CrossReferenceLookup object. | 4.182382 | 4.127025 | 1.013413 |
# Read entire text of script at spth
with open(spth) as f:
stxt = f.read()
# Process script text
stxt = preprocess_script_string(stxt)
# Convert script text to notebook object
nbs = script_string_to_notebook_object(stxt)
# Read notebook file npth
nbn = nbformat.read(npth, ... | def script_and_notebook_to_rst(spth, npth, rpth) | Convert a script and the corresponding executed notebook to rst.
The script is converted to notebook format *without* replacement
of sphinx cross-references with links to online docs, and the
resulting markdown cells are inserted into the executed notebook,
which is then converted to rst. | 3.795858 | 3.728009 | 1.0182 |
# Ensure that output directory exists
mkdir(rpth)
# Iterate over index files
for fp in glob(os.path.join(spth, '*.rst')) + \
glob(os.path.join(spth, '*', '*.rst')):
# Index basename
b = os.path.basename(fp)
# Index dirname
dn = os.path.dirname(fp)
... | def make_example_scripts_docs(spth, npth, rpth) | Generate rst docs from example scripts. Arguments `spth`, `npth`,
and `rpth` are the top-level scripts directory, the top-level
notebooks directory, and the top-level output directory within the
docs respectively. | 3.176129 | 3.157211 | 1.005992 |
# An initial '.' indicates a partial name
if name[0] == '.':
# Find matches for the partial name in the string
# containing all full names for this role
ptrn = r'(?<= )[^,]*' + name + r'(?=,)'
ml = re.findall(ptrn, self.rolnam[role])
... | def get_full_name(self, role, name) | If ``name`` is already the full name of an object, return
``name``. Otherwise, if ``name`` is a partial object name,
look up the full name and return it. | 4.388003 | 4.367621 | 1.004667 |
# Expand partial names to full names
name = self.get_full_name(role, name)
# Look up domain corresponding to role
dom = IntersphinxInventory.roledomain[role]
# Get the inventory entry tuple corresponding to the name
# of the referenced type
itpl = self.i... | def get_docs_url(self, role, name) | Get a url for the online docs corresponding to a sphinx cross
reference :role:`name`. | 11.182654 | 10.650919 | 1.049924 |
n = len(self.baseurl)
return url[0:n] == self.baseurl | def matching_base_url(self, url) | Return True if the initial part of `url` matches the base url
passed to the initialiser of this object, and False otherwise. | 7.895651 | 4.918129 | 1.605417 |
# Raise an exception if the initial part of url does not match
# the base url for this object
n = len(self.baseurl)
if url[0:n] != self.baseurl:
raise KeyError('base of url %s does not match base url %s' %
(url, self.baseurl))
# Th... | def get_sphinx_ref(self, url, label=None) | Get an internal sphinx cross reference corresponding to `url`
into the online docs, associated with a link with label `label`
(if not None). | 6.29922 | 6.282485 | 1.002664 |
# Initialise dicts
revinv = {}
rolnam = {}
# Iterate over domain keys in inventory dict
for d in inv:
# Since keys seem to be duplicated, ignore those not
# starting with 'py:'
if d[0:3] == 'py:' and d in IntersphinxInventory.domainro... | def inventory_maps(inv) | Construct dicts facilitating information lookup in an
inventory dict. A reversed dict allows lookup of a tuple
specifying the sphinx cross-reference role and the name of the
referenced type from the intersphinx inventory url postfix
string. A role-specific name lookup string allows the s... | 8.273963 | 5.19421 | 1.592921 |
if role == 'cite':
# If the cross-reference is a citation, make sure that
# the cite key is in the sphinx environment bibtex cache.
# If it is, construct the url from the cite key, otherwise
# raise an exception
if name not in self.env.bibtex... | def get_docs_url(self, role, name) | Get the online docs url for sphinx cross-reference :role:`name`. | 4.617155 | 4.379669 | 1.054225 |
if role == 'cite':
# Get the string used as the citation label in the text
try:
cstr = self.env.bibtex_cache.get_label_from_key(name)
except Exception:
raise KeyError('cite key %s not found' % name, 'cite', 0)
# The link l... | def get_docs_label(self, role, name) | Get an appropriate label to use in a link to the online docs. | 5.039724 | 4.921941 | 1.02393 |
# A url is assumed to correspond to a citation if it contains
# 'zreferences.html#'
if 'zreferences.html#' in url:
key = url.partition('zreferences.html#')[2]
ref = ':cite:`%s`' % key
else:
# If the url does not correspond to a citation, try ... | def get_sphinx_ref(self, url, label=None) | Get an internal sphinx cross reference corresponding to `url`
into the online docs, associated with a link with label `label`
(if not None). | 5.598146 | 5.571361 | 1.004808 |
# Find sphinx cross-references
mi = re.finditer(r':([^:]+):`([^`]+)`', txt)
if mi:
# Iterate over match objects in iterator returned by re.finditer
for mo in mi:
# Initialize link label and url for substitution
lbl = None
... | def substitute_ref_with_url(self, txt) | In the string `txt`, replace sphinx references with
corresponding links to online docs. | 3.98022 | 3.763788 | 1.057504 |
# Find links
mi = re.finditer(r'\[([^\]]+|\[[^\]]+\])\]\(([^\)]+)\)', txt)
if mi:
# Iterate over match objects in iterator returned by
# re.finditer
for mo in mi:
# Get components of current match: full matching text,
... | def substitute_url_with_ref(self, txt) | In the string `txt`, replace links to online docs with
corresponding sphinx cross-references. | 4.647452 | 4.390683 | 1.058481 |
# Extract method selection argument or set default
if 'method' in kwargs:
method = kwargs['method']
del kwargs['method']
else:
method = 'cns'
# Assign base class depending on method selection argument
if method == 'ism':
base = ConvCnstrMOD_IterSM
elif meth... | def ConvCnstrMOD(*args, **kwargs) | A wrapper function that dynamically defines a class derived from
one of the implementations of the Convolutional Constrained MOD
problems, and returns an object instantiated with the provided
parameters. The wrapper is designed to allow the appropriate
object to be created by calling this function using... | 3.844175 | 2.80593 | 1.370018 |
# Assign base class depending on method selection argument
if method == 'ism':
base = ConvCnstrMOD_IterSM.Options
elif method == 'cg':
base = ConvCnstrMOD_CG.Options
elif method == 'cns':
base = ConvCnstrMOD_Consensus.Options
else:
raise ValueError('Unknown Conv... | def ConvCnstrMODOptions(opt=None, method='cns') | A wrapper function that dynamically defines a class derived from
the Options class associated with one of the implementations of
the Convolutional Constrained MOD problem, and returns an object
instantiated with the provided parameters. The wrapper is designed
to allow the appropriate object to be creat... | 4.286292 | 4.048053 | 1.058853 |
if self.opt['Y0'] is None:
return np.zeros(ushape, dtype=self.dtype)
else:
# If initial Y is non-zero, initial U is chosen so that
# the relevant dual optimality criterion (see (3.10) in
# boyd-2010-distributed) is satisfied.
return s... | def uinit(self, ushape) | Return initialiser for working variable U | 9.347448 | 8.536878 | 1.094949 |
D = self.Y
if crop:
D = cr.bcrop(D, self.cri.dsz, self.cri.dimN)
return D | def getdict(self, crop=True) | Get final dictionary. If ``crop`` is ``True``, apply
:func:`.cnvrep.bcrop` to returned array. | 14.269937 | 12.435812 | 1.147487 |
self.Y = self.Pcn(self.AX + self.U) | def ystep(self) | Minimise Augmented Lagrangian with respect to
:math:`\mathbf{y}`. | 72.078835 | 48.815643 | 1.476552 |
return self.Xf if self.opt['fEvalX'] else \
sl.rfftn(self.Y, None, self.cri.axisN) | def obfn_fvarf(self) | Variable to be evaluated in computing data fidelity term,
depending on 'fEvalX' option value. | 23.949556 | 12.820977 | 1.867998 |
dfd = self.obfn_dfd()
cns = self.obfn_cns()
return (dfd, cns) | def eval_objfn(self) | Compute components of objective function as well as total
contribution to objective function. | 10.39892 | 7.81406 | 1.330796 |
return np.linalg.norm(self.Pcn(self.obfn_gvar()) - self.obfn_gvar()) | def obfn_cns(self) | Compute constraint violation measure :math:`\| P(\mathbf{y}) -
\mathbf{y}\|_2`. | 16.607697 | 10.950132 | 1.516666 |
if D is None:
Df = self.Xf
else:
Df = sl.rfftn(D, None, self.cri.axisN)
Sf = np.sum(self.Zf * Df, axis=self.cri.axisM)
return sl.irfftn(Sf, self.cri.Nv, self.cri.axisN) | def reconstruct(self, D=None) | Reconstruct representation. | 3.721742 | 3.610311 | 1.030865 |
self.cgit = None
self.YU[:] = self.Y - self.U
b = self.ZSf + self.rho*sl.rfftn(self.YU, None, self.cri.axisN)
self.Xf[:], cgit = sl.solvemdbi_cg(self.Zf, self.rho, b,
self.cri.axisM, self.cri.axisK,
                                       ... | def xstep(self) | Minimise Augmented Lagrangian with respect to :math:`\mathbf{x}`. | 8.578735 | 7.131407 | 1.202951 |
if self.opt['Y0'] is None:
return np.zeros(ushape, dtype=self.dtype)
else:
# If initial Y is non-zero, initial U is chosen so that
# the relevant dual optimality criterion (see (3.10) in
# boyd-2010-distributed) is satisfied.
return n... | def uinit(self, ushape) | Return initialiser for working variable U. | 8.846716 | 8.071373 | 1.096061 |
# This test reflects empirical evidence that two slightly
# different implementations are faster for single or
# multi-channel data. This kludge is intended to be temporary.
if self.cri.Cd > 1:
for i in range(self.Nb):
self.xistep(i)
else:
... | def xstep(self) | Minimise Augmented Lagrangian with respect to block vector
:math:`\mathbf{x} = \left( \begin{array}{ccc} \mathbf{x}_0^T &
\mathbf{x}_1^T & \ldots \end{array} \right)^T\;`. | 4.98174 | 4.794158 | 1.039127 |
self.YU[:] = self.Y - self.U[..., i]
b = np.take(self.ZSf, [i], axis=self.cri.axisK) + \
self.rho*sl.rfftn(self.YU, None, self.cri.axisN)
self.Xf[..., i] = sl.solvedbi_sm(np.take(
self.Zf, [i], axis=self.cri.axisK),
        sel... | def xistep(self, i) | Minimise Augmented Lagrangian with respect to :math:`\mathbf{x}`
component :math:`\mathbf{x}_i`. | 6.631153 | 6.316051 | 1.049889 |
Y = self.obfn_gvar()
return np.linalg.norm(self.Pcn(Y) - Y) | def obfn_cns(self) | Compute constraint violation measure :math:`\| P(\mathbf{y})
- \mathbf{y}\|_2`. | 22.186373 | 16.525465 | 1.342557 |
# If the dictionary has a single channel but the input (and
# therefore also the coefficient map array) has multiple
# channels, the channel index and multiple image index have
# the same behaviour in the dictionary update equation: the
# simplest way to handle this is ... | def setcoef(self, Z) | Set coefficient array. | 8.908943 | 8.448788 | 1.054464 |
# Compute X D - S
Ryf = self.eval_Rf(self.Yf)
gradf = sl.inner(np.conj(self.Zf), Ryf, axis=self.cri.axisK)
# Multiple channel signal, single channel dictionary
if self.cri.C > 1 and self.cri.Cd == 1:
gradf = np.sum(gradf, axis=self.cri.axisC, keepdims=True... | def eval_grad(self) | Compute gradient in Fourier domain. | 8.620804 | 7.638571 | 1.128589 |
diff = self.Xf - self.Yfprv
return sl.rfl2norm2(diff, self.X.shape, axis=self.cri.axisN) | def rsdl(self) | Compute fixed point residual in Fourier domain. | 33.370571 | 18.908241 | 1.764869 |
Ef = self.eval_Rf(self.Xf)
return sl.rfl2norm2(Ef, self.S.shape, axis=self.cri.axisN) / 2.0 | def obfn_dfd(self) | Compute data fidelity term :math:`(1/2) \| \sum_m
\mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \|_2^2`. | 25.578798 | 21.841705 | 1.171099 |
return np.linalg.norm(self.Pcn(self.X) - self.X) | def obfn_cns(self) | Compute constraint violation measure :math:`\|
P(\mathbf{y}) - \mathbf{y}\|_2`. | 20.5343 | 13.969906 | 1.469895 |
if Xf is None:
    Xf = self.Xf
Rf = self.eval_Rf(Xf)
return 0.5 * np.linalg.norm(Rf.flatten(), 2)**2 | def obfn_f(self, Xf=None) | Compute data fidelity term :math:`(1/2) \| \sum_m
\mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \|_2^2`.
This is used for backtracking. Since the backtracking is
computed in the DFT, it is important to preserve the
DFT scaling. | 5.155694 | 5.134714 | 1.004086 |
Ef = self.eval_Rf(self.Xf)
E = sl.irfftn(Ef, self.cri.Nv, self.cri.axisN)
return (np.linalg.norm(self.W * E)**2) / 2.0 | def obfn_dfd(self) | Compute data fidelity term :math:`(1/2) \sum_k \| W (\sum_m
\mathbf{d}_m * \mathbf{x}_{k,m} - \mathbf{s}_k) \|_2^2` | 11.29002 | 10.965078 | 1.029634 |
if Xf is None:
Xf = self.Xf
Rf = self.eval_Rf(Xf)
R = sl.irfftn(Rf, self.cri.Nv, self.cri.axisN)
WRf = sl.rfftn(self.W * R, self.cri.Nv, self.cri.axisN)
return 0.5 * np.linalg.norm(WRf.flatten(), 2)**2 | def obfn_f(self, Xf=None) | Compute data fidelity term :math:`(1/2) \sum_k \| W (\sum_m
\mathbf{d}_m * \mathbf{x}_{k,m} - \mathbf{s}_k) \|_2^2`.
This is used for backtracking. Since the backtracking is
computed in the DFT, it is important to preserve the
DFT scaling. | 4.466607 | 4.165499 | 1.072286 |
clsmod = {'admm': admm_cbpdn.ConvBPDN,
'fista': fista_cbpdn.ConvBPDN}
if label in clsmod:
return clsmod[label]
else:
raise ValueError('Unknown ConvBPDN solver method %s' % label) | def cbpdn_class_label_lookup(label) | Get a CBPDN class from a label string. | 4.692989 | 4.366402 | 1.074796 |
dflt = copy.deepcopy(cbpdn_class_label_lookup(method).Options.defaults)
if method == 'admm':
dflt.update({'MaxMainIter': 1, 'AutoRho':
{'Period': 10, 'AutoScaling': False,
'RsdlRatio': 10.0, 'Scaling': 2.0,
'RsdlTarget': 1.0}})
e... | def ConvBPDNOptionsDefaults(method='admm') | Get defaults dict for the ConvBPDN class specified by the ``method``
parameter. | 5.04611 | 4.964925 | 1.016352 |
# Assign base class depending on method selection argument
base = cbpdn_class_label_lookup(method).Options
# Nested class with dynamically determined inheritance
class ConvBPDNOptions(base):
def __init__(self, opt):
super(ConvBPDNOptions, self).__init__(opt)
# Allow pickl... | def ConvBPDNOptions(opt=None, method='admm') | A wrapper function that dynamically defines a class derived from
the Options class associated with one of the implementations of
the Convolutional BPDN problem, and returns an object
instantiated with the provided parameters. The wrapper is designed
to allow the appropriate object to be created by calli... | 7.792745 | 6.993777 | 1.11424 |
# Extract method selection argument or set default
method = kwargs.pop('method', 'admm')
# Assign base class depending on method selection argument
base = cbpdn_class_label_lookup(method)
# Nested class with dynamically determined inheritance
class ConvBPDN(base):
def __init__(se... | def ConvBPDN(*args, **kwargs) | A wrapper function that dynamically defines a class derived from
one of the implementations of the Convolutional Constrained MOD
problems, and returns an object instantiated with the provided
parameters. The wrapper is designed to allow the appropriate
object to be created by calling this function using... | 5.938691 | 5.218736 | 1.137956 |
clsmod = {'ism': admm_ccmod.ConvCnstrMOD_IterSM,
'cg': admm_ccmod.ConvCnstrMOD_CG,
'cns': admm_ccmod.ConvCnstrMOD_Consensus,
'fista': fista_ccmod.ConvCnstrMOD}
if label in clsmod:
return clsmod[label]
else:
raise ValueError('Unknown ConvCnstrMO... | def ccmod_class_label_lookup(label) | Get a CCMOD class from a label string. | 5.044937 | 5.372272 | 0.93907 |
dflt = copy.deepcopy(ccmod_class_label_lookup(method).Options.defaults)
if method == 'fista':
dflt.update({'MaxMainIter': 1, 'BackTrack':
{'gamma_u': 1.2, 'MaxIter': 50}})
else:
dflt.update({'MaxMainIter': 1, 'AutoRho':
{'Period': 10, 'AutoScal... | def ConvCnstrMODOptionsDefaults(method='fista') | Get defaults dict for the ConvCnstrMOD class specified by the
``method`` parameter. | 5.393535 | 5.10053 | 1.057446 |
# Assign base class depending on method selection argument
base = ccmod_class_label_lookup(method).Options
# Nested class with dynamically determined inheritance
class ConvCnstrMODOptions(base):
def __init__(self, opt):
super(ConvCnstrMODOptions, self).__init__(opt)
# All... | def ConvCnstrMODOptions(opt=None, method='fista') | A wrapper function that dynamically defines a class derived from
the Options class associated with one of the implementations of
the Convolutional Constrained MOD problem, and returns an object
instantiated with the provided parameters. The wrapper is designed
to allow the appropriate object to be creat... | 6.609409 | 6.12456 | 1.079165 |
# Extract method selection argument or set default
method = kwargs.pop('method', 'fista')
# Assign base class depending on method selection argument
base = ccmod_class_label_lookup(method)
# Nested class with dynamically determined inheritance
class ConvCnstrMOD(base):
def __init... | def ConvCnstrMOD(*args, **kwargs) | A wrapper function that dynamically defines a class derived from
one of the implementations of the Convolutional Constrained MOD
problems, and returns an object instantiated with the provided
parameters. The wrapper is designed to allow the appropriate
object to be created by calling this function using... | 5.706865 | 5.095749 | 1.119927 |
if self.opt['AccurateDFid']:
if self.dmethod == 'fista':
D = self.dstep.getdict(crop=False)
else:
D = self.dstep.var_y()
if self.xmethod == 'fista':
X = self.xstep.getcoef()
else:
X = self.x... | def evaluate(self) | Evaluate functional value of previous iteration. | 4.466753 | 4.215888 | 1.059505 |
if self.opt['AccurateDFid']:
D = self.dstep.var_y()
X = self.xstep.var_y()
S = self.xstep.S
dfd = 0.5*np.linalg.norm((D.dot(X) - S))**2
rl1 = np.sum(np.abs(X))
return dict(DFid=dfd, RegL1=rl1, ObjFun=dfd+self.xstep.lmbda*rl1)
... | def evaluate(self) | Evaluate functional value of previous iteration | 7.305434 | 6.369809 | 1.146884 |
return not os.path.exists(pth1) or not os.path.exists(pth2) or \
    os.stat(pth1).st_mtime > os.stat(pth2).st_mtime | def is_newer_than(pth1, pth2) | Return True if either file `pth1` or file `pth2` does not exist, or if
`pth1` has been modified more recently than `pth2`. | 2.534363 | 2.315168 | 1.094678 |
sz = int(np.prod(shape))
csz = sz * np.dtype(dtype).itemsize
raw = mp.RawArray('c', csz)
return np.frombuffer(raw, dtype=dtype, count=sz).reshape(shape) | def mpraw_as_np(shape, dtype) | Construct a numpy array of the specified shape and dtype for which the
underlying storage is a multiprocessing RawArray in shared memory.
Parameters
----------
shape : tuple
Shape of numpy array
dtype : data-type
Data type of array
Returns
-------
arr : ndarray
Numpy ... | 3.46387 | 3.730947 | 0.928416 |
return np.ascontiguousarray(np.swapaxes(x[np.newaxis, ...], 0, axis+1)) | def swap_axis_to_0(x, axis) | Insert a new singleton axis at position 0 and swap it with the
specified axis. The resulting array has an additional dimension,
with ``axis`` + 1 (which was ``axis`` before the insertion of the
new axis) of ``x`` at position 0, and a singleton axis at position
``axis`` + 1.
Parameters
---------... | 4.615553 | 7.016592 | 0.657806 |
globals()[mpv] = mpraw_as_np(npv.shape, npv.dtype)
globals()[mpv][:] = npv | def init_mpraw(mpv, npv) | Set a global variable as a multiprocessing RawArray in shared
memory with a numpy array wrapper and initialise its value.
Parameters
----------
mpv : string
Name of global variable to set
npv : ndarray
Numpy array to use as initialiser for global variable value | 5.73545 | 5.529434 | 1.037258 |
global mp_DSf
# Set working dictionary for cbpdn step and compute DFT of dictionary
# D and of D^T S
mp_Df[:] = sl.rfftn(mp_D_Y, mp_cri.Nv, mp_cri.axisN)
if mp_cri.Cd == 1:
mp_DSf[:] = np.conj(mp_Df) * mp_Sf
else:
mp_DSf[:] = sl.inner(np.conj(mp_Df[np.newaxis, ...]), mp_Sf,... | def cbpdn_setdict() | Set the dictionary for the cbpdn stage. There are no parameters
or return values because all inputs and outputs are from and to
global variables. | 8.878844 | 8.866396 | 1.001404 |
YU = mp_Z_Y[k] - mp_Z_U[k]
b = mp_DSf[k] + mp_xrho * sl.rfftn(YU, None, mp_cri.axisN)
if mp_cri.Cd == 1:
Xf = sl.solvedbi_sm(mp_Df, mp_xrho, b, axis=mp_cri.axisM)
else:
Xf = sl.solvemdbi_ism(mp_Df, mp_xrho, b, mp_cri.axisM, mp_cri.axisC)
mp_Z_X[k] = sl.irfftn(Xf, mp_cri.Nv, mp_... | def cbpdn_xstep(k) | Do the X step of the cbpdn stage. The only parameter is the slice
index `k` and there are no return values; all inputs and outputs are
from and to global variables. | 5.904515 | 5.868493 | 1.006138 |
mp_Z_X[k] = mp_xrlx * mp_Z_X[k] + (1 - mp_xrlx) * mp_Z_Y[k] | def cbpdn_relax(k) | Do relaxation for the cbpdn stage. The only parameter is the slice
index `k` and there are no return values; all inputs and outputs are
from and to global variables. | 7.49721 | 7.376577 | 1.016353 |
AXU = mp_Z_X[k] + mp_Z_U[k]
mp_Z_Y[k] = sp.prox_l1(AXU, (mp_lmbda/mp_xrho)) | def cbpdn_ystep(k) | Do the Y step of the cbpdn stage. The only parameter is the slice
index `k` and there are no return values; all inputs and outputs are
from and to global variables. | 13.108836 | 11.797667 | 1.111138 |
# Set working coefficient maps for ccmod step and compute DFT of
# coefficient maps Z and Z^T S
mp_Zf[k] = sl.rfftn(mp_Z_Y[k], mp_cri.Nv, mp_cri.axisN)
mp_ZSf[k] = np.conj(mp_Zf[k]) * mp_Sf[k] | def ccmod_setcoef(k) | Set the coefficient maps for the ccmod stage. The only parameter is
the slice index `k` and there are no return values; all inputs and
outputs are from and to global variables. | 13.060806 | 12.280555 | 1.063536 |
YU = mp_D_Y - mp_D_U[k]
b = mp_ZSf[k] + mp_drho * sl.rfftn(YU, None, mp_cri.axisN)
Xf = sl.solvedbi_sm(mp_Zf[k], mp_drho, b, axis=mp_cri.axisM)
mp_D_X[k] = sl.irfftn(Xf, mp_cri.Nv, mp_cri.axisN) | def ccmod_xstep(k) | Do the X step of the ccmod stage. The only parameter is the slice
index `k` and there are no return values; all inputs and outputs are
from and to global variables. | 8.761149 | 8.650258 | 1.012819 |
mAXU = np.mean(mp_D_X + mp_D_U, axis=0)
mp_D_Y[:] = mp_dprox(mAXU) | def ccmod_ystep() | Do the Y step of the ccmod stage. There are no parameters
or return values because all inputs and outputs are from and to
global variables. | 16.151426 | 14.369185 | 1.124032 |
cbpdn_xstep(k)
if mp_xrlx != 1.0:
cbpdn_relax(k)
cbpdn_ystep(k)
cbpdn_ustep(k)
ccmod_setcoef(k)
ccmod_xstep(k)
if mp_drlx != 1.0:
ccmod_relax(k) | def step_group(k) | Do a single iteration over cbpdn and ccmod steps that can be
performed independently for each slice `k` of the input data set. | 6.401753 | 5.005707 | 1.278891 |
YU0 = mp_Z_Y0[k] + mp_S[k] - mp_Z_U0[k]
YU1 = mp_Z_Y1[k] - mp_Z_U1[k]
if mp_cri.Cd == 1:
b = np.conj(mp_Df) * sl.rfftn(YU0, None, mp_cri.axisN) + \
sl.rfftn(YU1, None, mp_cri.axisN)
Xf = sl.solvedbi_sm(mp_Df, 1.0, b, axis=mp_cri.axisM)
else:
b = sl.inner(np.conj... | def cbpdnmd_xstep(k) | Do the X step of the cbpdn stage. The only parameter is the slice
index `k` and there are no return values; all inputs and outputs are
from and to global variables. | 3.324509 | 3.271733 | 1.016131 |
mp_Z_X[k] = mp_xrlx * mp_Z_X[k] + (1 - mp_xrlx) * mp_Z_Y1[k]
mp_DX[k] = mp_xrlx * mp_DX[k] + (1 - mp_xrlx) * (mp_Z_Y0[k] + mp_S[k]) | def cbpdnmd_relax(k) | Do relaxation for the cbpdn stage. The only parameter is the slice
index `k` and there are no return values; all inputs and outputs are
from and to global variables. | 4.202227 | 4.150481 | 1.012467 |
if mp_W.shape[0] > 1:
W = mp_W[k]
else:
W = mp_W
AXU0 = mp_DX[k] - mp_S[k] + mp_Z_U0[k]
AXU1 = mp_Z_X[k] + mp_Z_U1[k]
mp_Z_Y0[k] = mp_xrho*AXU0 / (W**2 + mp_xrho)
mp_Z_Y1[k] = sp.prox_l1(AXU1, (mp_lmbda/mp_xrho)) | def cbpdnmd_ystep(k) | Do the Y step of the cbpdn stage. The only parameter is the slice
index `k` and there are no return values; all inputs and outputs are
from and to global variables. | 4.984846 | 5.002069 | 0.996557 |
mp_Z_U0[k] += mp_DX[k] - mp_Z_Y0[k] - mp_S[k]
mp_Z_U1[k] += mp_Z_X[k] - mp_Z_Y1[k] | def cbpdnmd_ustep(k) | Do the U step of the cbpdn stage. The only parameter is the slice
index `k` and there are no return values; all inputs and outputs are
from and to global variables. | 5.516077 | 5.345995 | 1.031815 |
# Set working coefficient maps for ccmod step and compute DFT of
# coefficient maps Z
mp_Zf[k] = sl.rfftn(mp_Z_Y1[k], mp_cri.Nv, mp_cri.axisN) | def ccmodmd_setcoef(k) | Set the coefficient maps for the ccmod stage. The only parameter is
the slice index `k` and there are no return values; all inputs and
outputs are from and to global variables. | 27.22682 | 24.516476 | 1.110552 |
YU0 = mp_D_Y0 - mp_D_U0[k]
YU1 = mp_D_Y1[k] + mp_S[k] - mp_D_U1[k]
b = sl.rfftn(YU0, None, mp_cri.axisN) + \
np.conj(mp_Zf[k]) * sl.rfftn(YU1, None, mp_cri.axisN)
Xf = sl.solvedbi_sm(mp_Zf[k], 1.0, b, axis=mp_cri.axisM)
mp_D_X[k] = sl.irfftn(Xf, mp_cri.Nv, mp_cri.axisN)
mp_DX[k] = sl... | def ccmodmd_xstep(k) | Do the X step of the ccmod stage. The only parameter is the slice
index `k` and there are no return values; all inputs and outputs are
from and to global variables. | 4.273917 | 4.201213 | 1.017305 |
mp_D_X[k] = mp_drlx * mp_D_X[k] + (1 - mp_drlx) * mp_D_Y0
mp_DX[k] = mp_drlx * mp_DX[k] + (1 - mp_drlx) * (mp_D_Y1[k] + mp_S[k]) | def ccmodmd_relax(k) | Do relaxation for the ccmod stage. The only parameter is the slice
index `k` and there are no return values; all inputs and outputs are
from and to global variables. | 4.116242 | 4.072294 | 1.010792 |
mAXU = np.mean(mp_D_X + mp_D_U0, axis=0)
mp_D_Y0[:] = mp_dprox(mAXU)
AXU1 = mp_DX - mp_S + mp_D_U1
mp_D_Y1[:] = mp_drho*AXU1 / (mp_W**2 + mp_drho) | def ccmodmd_ystep() | Do the Y step of the ccmod stage. There are no parameters
or return values because all inputs and outputs are from and to
global variables. | 10.880256 | 9.987291 | 1.08941 |
cbpdnmd_xstep(k)
if mp_xrlx != 1.0:
cbpdnmd_relax(k)
cbpdnmd_ystep(k)
cbpdnmd_ustep(k)
ccmodmd_setcoef(k)
ccmodmd_xstep(k)
if mp_drlx != 1.0:
ccmodmd_relax(k) | def md_step_group(k) | Do a single iteration over cbpdn and ccmod steps that can be
performed independently for each slice `k` of the input data set. | 6.150316 | 5.120872 | 1.201029 |
# If the nproc parameter of __init__ is zero, just iterate
# over the K consensus instances instead of using
# multiprocessing to do the computations in parallel. This is
# useful for debugging and timing comparisons.
if self.nproc == 0:
for k in range(self.... | def step(self) | Do a single iteration over all cbpdn and ccmod steps. Those that
are not coupled on the K axis are performed in parallel. | 11.461105 | 7.917269 | 1.447608 |
# Construct tuple of status display column titles and set status
# display strings
hdrtxt = ['Itn', 'Fnc', 'DFid', u('Regℓ1')]
hdrstr, fmtstr, nsep = common.solve_status_str(
hdrtxt, fwdth0=type(self).fwiter, fprec=type(self).fpothr)
# Print header and sepa... | def solve(self) | Start (or re-start) optimisation. This method implements the
framework for the alternation between `X` and `D` updates in a
dictionary learning algorithm.
If option ``Verbose`` is ``True``, the progress of the
optimisation is displayed at every iteration. At termination
of this ... | 5.215775 | 4.518962 | 1.154198 |
global mp_Z_Y
return np.swapaxes(mp_Z_Y, 0, self.xstep.cri.axisK+1)[0] | def getcoef(self) | Get final coefficient map array. | 30.981756 | 24.452286 | 1.267029 |
X = mp_Z_Y
Xf = mp_Zf
Df = mp_Df
Sf = mp_Sf
Ef = sl.inner(Df[np.newaxis, ...], Xf,
axis=self.xstep.cri.axisM+1) - Sf
Ef = np.swapaxes(Ef, 0, self.xstep.cri.axisK+1)[0]
dfd = sl.rfl2norm2(Ef, self.xstep.S.shape,
... | def evaluate(self) | Evaluate functional value of previous iteration. | 7.2521 | 6.858596 | 1.057374 |
# If the nproc parameter of __init__ is zero, just iterate
# over the K consensus instances instead of using
# multiprocessing to do the computations in parallel. This is
# useful for debugging and timing comparisons.
if self.nproc == 0:
for k in range(self.... | def step(self) | Do a single iteration over all cbpdn and ccmod steps. Those that
are not coupled on the K axis are performed in parallel. | 12.190627 | 8.724504 | 1.397286 |
global mp_D_Y0
D = mp_D_Y0
if crop:
D = cr.bcrop(D, self.dstep.cri.dsz, self.dstep.cri.dimN)
return D | def getdict(self, crop=True) | Get final dictionary. If ``crop`` is ``True``, apply
:func:`.cnvrep.bcrop` to returned array. | 16.059628 | 12.853119 | 1.249473 |
global mp_Z_Y1
return np.swapaxes(mp_Z_Y1, 0, self.xstep.cri.axisK+1)[0] | def getcoef(self) | Get final coefficient map array. | 29.463621 | 23.799263 | 1.238006 |
if self.opt['AccurateDFid']:
DX = self.reconstruct()
W = self.dstep.W
S = self.dstep.S
else:
W = mp_W
S = mp_S
Xf = mp_Zf
Df = mp_Df
DX = sl.irfftn(sl.inner(
Df[np.newaxis, ...], Xf,... | def evaluate(self) | Evaluate functional value of previous iteration. | 8.116821 | 7.466968 | 1.08703 |
if self.opt['AutoRho', 'Enabled']:
tau = self.rho_tau
mu = self.rho_mu
xi = self.rho_xi
if k != 0 and cp.mod(k + 1, self.opt['AutoRho', 'Period']) == 0:
if self.opt['AutoRho', 'AutoScaling']:
if s == 0.0 or r == 0.0:
rhomlt = tau
... | def _update_rho(self, k, r, s) | Patched version of :func:`sporco.admm.admm.ADMM.update_rho`. | 4.428314 | 4.151277 | 1.066735 |
self.D = np.asarray(D, dtype=self.dtype) | def setdict(self, D) | Set dictionary array. | 7.052281 | 4.696606 | 1.50157 |
# Compute D^T(D Y - S)
return self.D.T.dot(self.D.dot(self.Y) - self.S) | def eval_grad(self) | Compute gradient in spatial domain for variable Y. | 6.762136 | 5.570749 | 1.213865 |
return np.asarray(sp.prox_l1(V, (self.lmbda / self.L) * self.wl1),
dtype=self.dtype) | def eval_proxop(self, V) | Compute proximal operator of :math:`g`. | 9.144237 | 8.256569 | 1.10751 |
return np.linalg.norm((self.X - self.Yprv).ravel()) | def rsdl(self) | Compute fixed point residual. | 23.807873 | 11.803642 | 2.016994 |
dfd = self.obfn_f()
reg = self.obfn_reg()
obj = dfd + reg[0]
return (obj, dfd) + reg[1:] | def eval_objfn(self) | Compute components of objective function as well as total
contribution to objective function. | 7.992234 | 6.703826 | 1.19219 |
if X is None:
    X = self.X
return 0.5 * np.linalg.norm((self.D.dot(X) - self.S).ravel())**2 | def obfn_f(self, X=None) | Compute data fidelity term :math:`(1/2) \| D \mathbf{x} -
\mathbf{s} \|_2^2`. | 5.135604 | 3.706573 | 1.38554 |
if X is None:
X = self.X
return self.D.dot(X) | def reconstruct(self, X=None) | Reconstruct representation. | 4.728995 | 4.186249 | 1.12965 |
if D is not None:
self.D = np.asarray(D, dtype=self.dtype)
self.Df = sl.rfftn(self.D, self.cri.Nv, self.cri.axisN) | def setdict(self, D=None) | Set dictionary array. | 4.981694 | 4.857226 | 1.025625 |
# Compute D X - S
Ryf = self.eval_Rf(self.Yf)
# Compute D^H Ryf
gradf = np.conj(self.Df) * Ryf
# Multiple channel signal, multiple channel dictionary
if self.cri.Cd > 1:
gradf = np.sum(gradf, axis=self.cri.axisC, keepdims=True)
return gradf | def eval_grad(self) | Compute gradient in Fourier domain. | 11.308169 | 9.464366 | 1.194815 |
return sl.inner(self.Df, Vf, axis=self.cri.axisM) - self.Sf | def eval_Rf(self, Vf) | Evaluate smooth term in Vf. | 24.099583 | 20.431479 | 1.179532 |
return sp.prox_l1(V, (self.lmbda / self.L) * self.wl1) | def eval_proxop(self, V) | Compute proximal operator of :math:`g`. | 12.882721 | 11.722901 | 1.098936 |
dfd = self.obfn_dfd()
reg = self.obfn_reg()
obj = dfd + reg[0]
return (obj, dfd) + reg[1:] | def eval_objfn(self) | Compute components of objective function as well as total
contribution to objective function. | 7.674099 | 6.096067 | 1.258861 |
if X is None:
X = self.X
Xf = sl.rfftn(X, None, self.cri.axisN)
Sf = np.sum(self.Df * Xf, axis=self.cri.axisM)
return sl.irfftn(Sf, self.cri.Nv, self.cri.axisN) | def reconstruct(self, X=None) | Reconstruct representation. | 3.678732 | 3.496037 | 1.052258 |
# Compute D X - S
self.Ryf[:] = self.eval_Rf(self.Yf)
# Map to spatial domain to multiply by mask
Ry = sl.irfftn(self.Ryf, self.cri.Nv, self.cri.axisN)
# Multiply by mask
self.WRy[:] = (self.W**2) * Ry
# Map back to frequency domain
WRyf = sl.rf... | def eval_grad(self) | Compute gradient in Fourier domain. | 6.267098 | 5.748644 | 1.090187 |
return D.reshape(D.shape[0:dimN] + (Cd,) + (1,) + (M,)) | def stdformD(D, Cd, M, dimN=2) | Reshape dictionary array (`D` in :mod:`.admm.cbpdn` module, `X` in
:mod:`.admm.ccmod` module) to internal standard form.
Parameters
----------
D : array_like
Dictionary array
Cd : int
Size of dictionary channel index
M : int
Number of filters in dictionary
dimN : int, opti... | 6.362753 | 7.093311 | 0.897007 |
r
# Number of dimensions in input array `S`
sdim = cri.dimN + cri.dimC + cri.dimK
if W.ndim < sdim:
if W.size == 1:
# Weight array is a scalar
shpW = (1,) * (cri.dimN + 3)
else:
# Invalid weight array shape
raise ValueError('weight array ... | def l1Wshape(W, cri) | r"""Get appropriate internal shape (see
:class:`CSC_ConvRepIndexing`) for an :math:`\ell_1` norm weight
array `W`, as in option ``L1Weight`` in
:class:`.admm.cbpdn.ConvBPDN.Options` and related options classes.
The external shape of `W` depends on the external shape of input
data array `S` and the s... | 3.388158 | 3.039612 | 1.114668 |
# Number of axes in W available for C and/or K axes
ckdim = W.ndim - cri.dimN
if ckdim >= 2:
# Both C and K axes are present in W
shpW = W.shape + (1,) if ckdim == 2 else W.shape
elif ckdim == 1:
# Exactly one of C or K axes is present in W
if cri.C == 1 and cri.K >... | def mskWshape(W, cri) | Get appropriate internal shape (see
:class:`CSC_ConvRepIndexing` and :class:`CDU_ConvRepIndexing`) for
data fidelity term mask array `W`. The external shape of `W`
depends on the external shape of input data array `S`. The
simplest criterion for ensuring that the external `W` is
compatible with `S`... | 3.054755 | 2.822952 | 1.082114 |
vz = v.copy()
if isinstance(dsz[0], tuple):
# Multi-scale dictionary specification
axisN = tuple(range(0, dimN))
m0 = 0 # Initial index of current block of equi-sized filters
# Iterate over distinct filter sizes
for mb in range(0, len(dsz)):
# Determine... | def zeromean(v, dsz, dimN=2) | Subtract mean value from each filter in the input array v. The
`dsz` parameter specifies the support sizes of each filter using the
same format as the `dsz` parameter of :func:`bcrop`. Support sizes
must be taken into account to ensure that the mean values are
computed over the correct number of samples... | 2.925233 | 2.752953 | 1.06258 |
r
axisN = tuple(range(0, dimN))
vn = np.sqrt(np.sum(v**2, axisN, keepdims=True))
vn[vn == 0] = 1.0
return np.asarray(v / vn, dtype=v.dtype) | def normalise(v, dimN=2) | r"""Normalise vectors, corresponding to slices along specified number
of initial spatial dimensions of an array, to have unit
:math:`\ell_2` norm. The remaining axes enumerate the distinct
vectors to be normalised.
Parameters
----------
v : array_like
Array with components to be normalise... | 3.829268 | 4.18968 | 0.913976 |
vp = np.zeros(Nv + v.shape[len(Nv):], dtype=v.dtype)
axnslc = tuple([slice(0, x) for x in v.shape])
vp[axnslc] = v
return vp | def zpad(v, Nv) | Zero-pad initial axes of array to specified size. Padding is
applied to the right, top, etc. of the array indices.
Parameters
----------
v : array_like
Array to be padded
Nv : tuple
Sizes to which each of initial indices should be padded
Returns
-------
vp : ndarray
P... | 4.871636 | 4.618695 | 1.054765 |
if crp:
def zpadfn(x):
return x
else:
def zpadfn(x):
return zpad(x, Nv)
if zm:
def zmeanfn(x):
return zeromean(x, dsz, dimN)
else:
def zmeanfn(x):
return x
return normalise(zmeanfn(zpadfn(bcrop(x, dsz, dimN))), d... | def Pcn(x, dsz, Nv, dimN=2, dimC=1, crp=False, zm=False) | Constraint set projection for convolutional dictionary update
problem.
Parameters
----------
x : array_like
Input array
dsz : tuple
Filter support size(s), specified using the same format as the `dsz`
parameter of :func:`bcrop`
Nv : tuple
Sizes of problem spatial indice... | 3.358868 | 3.775939 | 0.889545 |
fncdict = {(False, False): _Pcn,
(False, True): _Pcn_zm,
(True, False): _Pcn_crp,
(True, True): _Pcn_zm_crp}
fnc = fncdict[(crp, zm)]
return functools.partial(fnc, dsz=dsz, Nv=Nv, dimN=dimN, dimC=dimC) | def getPcn(dsz, Nv, dimN=2, dimC=1, crp=False, zm=False) | Construct the constraint set projection function for convolutional
dictionary update problem.
Parameters
----------
dsz : tuple
Filter support size(s), specified using the same format as the `dsz`
parameter of :func:`bcrop`
Nv : tuple
Sizes of problem spatial indices
dimN : in... | 2.343852 | 2.847099 | 0.823242 |
return normalise(zpad(bcrop(x, dsz, dimN), Nv), dimN + dimC) | def _Pcn(x, dsz, Nv, dimN=2, dimC=1) | Projection onto dictionary update constraint set: support
projection and normalisation. The result has the full spatial
dimensions of the input.
Parameters
----------
x : array_like
Input array
dsz : tuple
Filter support size(s), specified using the same format as the
`dsz` ... | 12.526287 | 15.383444 | 0.814271 |