<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
- [Do we have a process defined here?](#do-we-have-a-process-defined-here)
- [How does the selection process for cherry-picking work?](#how-does-the-selection-process-for-cherry-picking-work)
- [What's the release manager's role ?](#whats-the-release-managers-role-)
- [Is this process following the ASF rules?](#is-this-process-following-the-asf-rules)
- [What's the role of individual maintainers?](#whats-the-role-of-individual-maintainers)
- [When proposed PRs are rejected?](#when-proposed-prs-are-rejected)
- [Why are provider changes not cherry-picked?](#why-are-provider-changes-not-cherry-picked)
- [What's the purpose of patch releases?](#whats-the-purpose-of-patch-releases)
- [Do we intentionally skip over some changes when releasing a patch release?](#do-we-intentionally-skip-over-some-changes-when-releasing-a-patch-release)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
This page describes the context and the process we follow when we create a patch release.
# Do we have a process defined here?
We have a well-defined and well-tested process that we follow.
We have followed it since the beginning of Airflow 2.0 - a similar process, with some
variations, was used for the 1.10 series - and it has been refined and improved over time
by those who volunteered their time in the release process (a lot of ad-hoc discussion has
traditionally happened in the #release-management Slack channel), and we continue to do so.
The succinct form of it is described in a prominent place in our most important
[README](../README.md#what-goes-into-the-next-release).
# How does the selection process for cherry-picking work?
In short (and this is the most important thing that every maintainer should be aware of):
**maintainers who think that an issue should be included should mark it with the next patch milestone**
It's up to the individual maintainers who want certain changes included to take care of it
and mark the issues they think are bug fixes to go into the next release.
This is the only thing a maintainer has to do to get a proposed PR considered for
the next patch release. Sometimes - if controversial - maintainers discuss the proposals in
the #release-management channel in Slack, sometimes in #contributors or in the PR itself -
especially if the release manager decides not to include it and changes the milestone (and explains why).
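As a rough illustration (the data shape and version strings here are hypothetical - this is not the real GitHub API payload or any actual Airflow tooling), the selection step boils down to filtering merged PRs by the milestone that maintainers set:

```python
# Hypothetical sketch: a PR is a candidate for the next patch release
# only if a maintainer deliberately marked it with that milestone.
def candidates_for_patch_release(prs, milestone):
    """Return the PRs that maintainers marked with the given patch milestone."""
    return [pr for pr in prs if pr.get("milestone") == milestone]

prs = [
    {"number": 101, "title": "Fix scheduler crash", "milestone": "2.7.2"},
    {"number": 102, "title": "Add shiny new operator", "milestone": "2.8.0"},
    {"number": 103, "title": "Fix typo in docs", "milestone": "2.7.2"},
]
print([pr["number"] for pr in candidates_for_patch_release(prs, "2.7.2")])
# → [101, 103]
```

Everything else (assessing cherry-pickability, resolving disagreements) is a human step, not something encoded in tooling.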
# What's the release manager's role?
The release manager's job is purely mechanical (as also mandated by the Apache Software Foundation release
manager role description): to assess the cherry-pickability of those changes. The release manager -
at their sole discretion and individual decision - can reject some of the changes that other maintainers think
should be included. But the release manager does not, on their own, make proposals on what should be included.
This is the only place in the whole ASF process setup where a single person has such power to
make individual decisions, and the main reason for it is to make the release process as smooth as possible.
# Is this process following the ASF rules?
We think so. The release manager's role is nicely described in
[Release manager chapter of release publishing ASF infra docs](https://infra.apache.org/release-publishing.html#releasemanager).
A far more complete description of the whole process can be found in the
[Release management policy](https://www.apache.org/legal/release-policy.html#management) - which also mentions
that it's the PMC members' responsibility (and particularly the PMC chair's) to adhere to the process.
# What's the role of individual maintainers?
The role of maintainers (collectively) is to propose things for the next release.
In our case this happens by setting the milestone on a PR.
# When are proposed PRs rejected?
There are various reasons to reject a proposed change - for example when it is too complex to cherry-pick,
or when the release manager assesses that it's a new feature, not a bugfix. Essentially (according to
[SemVer](https://semver.org/)), when it comes to user-facing changes, a patch release should contain only
bug fixes. It may also contain docs changes if they fix or improve existing docs (not document new features),
as well as environment/build script changes (non-user-facing changes), as those are pretty much always needed
to keep things building nicely - these are usually skipped from the changelog as non-user-facing.
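A minimal sketch of this classification, assuming hypothetical change-type labels (the real decision is of course a human judgment call by the release manager, not a lookup table):

```python
# Hypothetical change-type labels; only "bugfix" is a user-facing change
# that SemVer allows into a patch release.
def patch_release_decision(change_type):
    if change_type == "bugfix":
        return ("include", "listed in changelog")
    if change_type in ("doc-fix", "build-script"):
        # Non-user-facing changes: included, but skipped from the changelog.
        return ("include", "skipped from changelog")
    return ("reject", "waits for the next minor release")

print(patch_release_decision("bugfix"))
print(patch_release_decision("new-feature"))
```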
# Why are provider changes not cherry-picked?
In our case, basically none of the provider changes are cherry-picked - unless they are needed to
make the builds work well (which sometimes happens). Providers are ALWAYS released from the latest `main` code,
not from the `v2-*-stable` branch. In fact, all the tests and CI checks for providers are skipped in the
non-main (`v2-*`) branches. So yes - not seeing provider changes cherry-picked is absolutely expected.
# What's the purpose of patch releases?
The purpose of a patch release is as described in SemVer - to give users a bug-fix-only release that has no
new features. Of course, it's sometimes debatable whether changes are features or bug fixes, but we usually use
the #release-management channel on Airflow's Slack to quickly chat about it, and eventually the release manager
always makes a comment in the PR when the milestone is changed and explains the reasoning.
Sometimes we also include technically breaking changes in a patch release (for example, when we fix a security
issue, it is often done in a "breaking" way).
We have to remember that SemVer is a statement of intention and not a technical definition of breaking vs.
not breaking. As [Hyrum's Law](https://www.hyrumslaw.com/) correctly states: "With a sufficient number
of users of an API, it does not matter what you promise in the contract: all observable behaviors of
your system will be depended on by somebody.". Our intention is to keep the patch releases following
SemVer intentions as much as possible, but we also have to be pragmatic and sometimes we have to break the
[Public Interface of Airflow](https://airflow.apache.org/docs/apache-airflow/stable/public-airflow-interface.html)
to fix things. Those should be rare and deliberate decisions, and we should always try to avoid them,
but sometimes they are needed either to protect our users or to make the code maintainable, when we assess
that the likelihood of breaking our users' workflows is low.
# Do we intentionally skip over some changes when releasing a patch release?
Skipping is not intentional, because we never "skip" things when cherry-picking. It's the **reverse** -
maintainers who think that certain bug fixes (or internal changes, or sometimes even feature changes
that we really classify as "bugfixes") should be included must intentionally mark the PRs they want with the
next `patch` milestone. So there is no skipping: if a maintainer did not deliberately mark a PR with the
upcoming milestone, it will just not be included (not skipped). By default, all the changes merged to `main`
are included in the next `minor` release. See the [README](../README.md#what-goes-into-the-next-release) for
a slightly more detailed description of the transition period when we branch off and start working on a
new `minor` release.
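The default routing described above can be sketched as follows (the version numbers and the milestone field are illustrative assumptions, not real tooling):

```python
def target_release(pr, current="2.7.1"):
    """Route a merged PR: next patch only if deliberately milestoned, else next minor."""
    major, minor, patch = (int(part) for part in current.split("."))
    next_patch = "%d.%d.%d" % (major, minor, patch + 1)
    if pr.get("milestone") == next_patch:
        return next_patch                   # a maintainer deliberately marked it
    return "%d.%d.0" % (major, minor + 1)   # default: everything goes to next minor

print(target_release({"milestone": "2.7.2"}))  # → 2.7.2
print(target_release({}))                      # → 2.8.0
```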
import logging
import re

from autotest.client.shared import error
from autotest.client import utils
from autotest.client.virt import virt_test_utils, virt_utils, aexpect


def run_ethtool(test, params, env):
    """
    Test offload functions of ethernet device using ethtool

    1) Log into a guest.
    2) Initialize the callback of sub functions.
    3) Enable/disable sub function of NIC.
    4) Execute callback function.
    5) Check the return value.
    6) Restore original configuration.

    @param test: KVM test object.
    @param params: Dictionary with the test parameters.
    @param env: Dictionary with test environment.

    @todo: Not all guests have ethtool installed, so find a way to get it
           installed using yum/apt-get/whatever
    """
    def ethtool_get(f_type):
        feature_pattern = {
            'tx': 'tx.*checksumming',
            'rx': 'rx.*checksumming',
            'sg': 'scatter.*gather',
            'tso': 'tcp.*segmentation.*offload',
            'gso': 'generic.*segmentation.*offload',
            'gro': 'generic.*receive.*offload',
            'lro': 'large.*receive.*offload',
        }
        o = session.cmd("ethtool -k %s" % ethname)
        try:
            result = re.findall("%s: (.*)" % feature_pattern.get(f_type), o)[0]
            logging.debug("(%s) %s: %s", ethname, f_type, result)
            return result
        except IndexError:
            logging.debug("(%s) %s: failed to get status", ethname, f_type)

    def ethtool_set(f_type, status):
        """
        Set ethernet device offload status

        @param f_type: Offload type name
        @param status: New status will be changed to
        """
        logging.info("(%s) %s: set status %s", ethname, f_type, status)
        if status not in ["off", "on"]:
            return False
        cmd = "ethtool -K %s %s %s" % (ethname, f_type, status)
        if ethtool_get(f_type) != status:
            try:
                session.cmd(cmd)
                return True
            except aexpect.ShellCmdError as e:
                logging.error(e)
                return False
        if ethtool_get(f_type) != status:
            logging.error("(%s) %s: set status %s failed", ethname, f_type,
                          status)
            return False
        return True

    def ethtool_save_params():
        logging.info("Saving ethtool configuration")
        for i in supported_features:
            feature_status[i] = ethtool_get(i)

    def ethtool_restore_params():
        logging.info("Restoring ethtool configuration")
        for i in supported_features:
            ethtool_set(i, feature_status[i])

    def compare_md5sum(name):
        logging.info("Comparing md5sum of the files on guest and host")
        host_result = utils.hash_file(name, method="md5")
        try:
            o = session.cmd_output("md5sum %s" % name)
            guest_result = re.findall(r"\w+", o)[0]
        except IndexError:
            logging.error("Could not get file md5sum in guest")
            return False
        logging.debug("md5sum: guest(%s), host(%s)", guest_result, host_result)
        return guest_result == host_result

    def transfer_file(src):
        """
        Transfer file by scp, use tcpdump to capture packets, then check the
        return string.

        @param src: Source host of transfer file
        @return: Tuple (status, error msg/tcpdump result)
        """
        session2.cmd_output("rm -rf %s" % filename)
        dd_cmd = ("dd if=/dev/urandom of=%s bs=1M count=%s" %
                  (filename, params.get("filesize")))
        failure = (False, "Failed to create file using: %s" % dd_cmd)
        logging.info("Creating file in %s, cmd: %s", src, dd_cmd)
        tcpdump_cmd = "tcpdump -lep -s 0 tcp -vv port ssh"
        if src == "guest":
            tcpdump_cmd += " and src %s" % guest_ip
            copy_files_func = vm.copy_files_from
            try:
                session.cmd_output(dd_cmd, timeout=360)
            except aexpect.ShellCmdError as e:
                return failure
        else:
            tcpdump_cmd += " and dst %s" % guest_ip
            copy_files_func = vm.copy_files_to
            try:
                utils.system(dd_cmd)
            except error.CmdError as e:
                return failure
        # only capture the new tcp port after offload setup
        original_tcp_ports = re.findall(r"tcp.*:(\d+).*%s" % guest_ip,
                                        utils.system_output("/bin/netstat -nap"))
        for i in original_tcp_ports:
            tcpdump_cmd += " and not port %s" % i
        logging.debug("Listening traffic using command: %s", tcpdump_cmd)
        session2.sendline(tcpdump_cmd)
        if not virt_utils.wait_for(
                lambda: session.cmd_status("pgrep tcpdump") == 0, 30):
            return (False, "Tcpdump process wasn't launched")
        logging.info("Transferring file %s from %s", filename, src)
        try:
            copy_files_func(filename, filename)
        except virt_utils.SCPError as e:
            return (False, "File transfer failed (%s)" % e)
        session.cmd("killall tcpdump")
        try:
            tcpdump_string = session2.read_up_to_prompt(timeout=60)
        except aexpect.ExpectError:
            return (False, "Failed to read tcpdump's output")
        if not compare_md5sum(filename):
            return (False, "Failure, md5sum mismatch")
        return (True, tcpdump_string)

    def tx_callback(status="on"):
        s, o = transfer_file("guest")
        if not s:
            logging.error(o)
            return False
        return True

    def rx_callback(status="on"):
        s, o = transfer_file("host")
        if not s:
            logging.error(o)
            return False
        return True

    def so_callback(status="on"):
        s, o = transfer_file("guest")
        if not s:
            logging.error(o)
            return False
        logging.info("Check if contained large frame")
        # MTU: default IPv4 MTU is 1500 Bytes, ethernet header is 14 Bytes
        return (status == "on") ^ (len([i for i in re.findall(
            r"length (\d*):", o) if int(i) > mtu]) == 0)

    def ro_callback(status="on"):
        s, o = transfer_file("host")
        if not s:
            logging.error(o)
            return False
        return True

    vm = env.get_vm(params["main_vm"])
    vm.verify_alive()
    session = vm.wait_for_login(timeout=int(params.get("login_timeout", 360)))
    # Let's just error the test if we identify that there's no ethtool installed
    session.cmd("ethtool -h")
    session2 = vm.wait_for_login(timeout=int(params.get("login_timeout", 360)))
    mtu = 1514
    feature_status = {}
    filename = "/tmp/ethtool.dd"
    guest_ip = vm.get_address()
    ethname = virt_test_utils.get_linux_ifname(session, vm.get_mac_address(0))
    supported_features = params.get("supported_features")
    if supported_features:
        supported_features = supported_features.split()
    else:
        raise error.TestError("No supported features set on the parameters")

    test_matrix = {
        # type: (callback, (dependencies), (excludes))
        "tx": (tx_callback, (), ()),
        "rx": (rx_callback, (), ()),
        "sg": (tx_callback, ("tx",), ()),
        "tso": (so_callback, ("tx", "sg",), ("gso",)),
        "gso": (so_callback, (), ("tso",)),
        "gro": (ro_callback, ("rx",), ("lro",)),
        "lro": (rx_callback, (), ("gro",)),
    }
    ethtool_save_params()
    failed_tests = []
    try:
        for f_type in supported_features:
            callback = test_matrix[f_type][0]
            for i in test_matrix[f_type][2]:
                if not ethtool_set(i, "off"):
                    e_msg = "Failed to disable %s" % i
                    logging.error(e_msg)
                    failed_tests.append(e_msg)
            for i in [f for f in test_matrix[f_type][1]] + [f_type]:
                if not ethtool_set(i, "on"):
                    e_msg = "Failed to enable %s" % i
                    logging.error(e_msg)
                    failed_tests.append(e_msg)
            if not callback(status="on"):
                e_msg = "Callback failed after enabling %s" % f_type
                logging.error(e_msg)
                failed_tests.append(e_msg)
            if not ethtool_set(f_type, "off"):
                e_msg = "Failed to disable %s" % f_type
                logging.error(e_msg)
                failed_tests.append(e_msg)
            if not callback(status="off"):
                e_msg = "Callback failed after disabling %s" % f_type
                logging.error(e_msg)
                failed_tests.append(e_msg)
        if failed_tests:
            raise error.TestFail("Failed tests: %s" % failed_tests)
    finally:
        ethtool_restore_params()
        session.close()
        session2.close()
# vi: ts=4 expandtab
#
# Copyright (C) 2009-2011 Canonical Ltd.
# Copyright (C) 2012, 2013 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser <scott.moser@canonical.com>
# Author: Juerg Haefliger <juerg.haefliger@hp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os

from cloudinit.settings import PER_ALWAYS
from cloudinit import util

frequency = PER_ALWAYS


def handle(name, cfg, cloud, log, _args):
    if "bootcmd" not in cfg:
        log.debug(("Skipping module named %s,"
                   " no 'bootcmd' key in configuration"), name)
        return

    with util.ExtendedTemporaryFile(suffix=".sh") as tmpf:
        try:
            content = util.shellify(cfg["bootcmd"])
            tmpf.write(util.encode_text(content))
            tmpf.flush()
        except Exception:
            util.logexc(log, "Failed to shellify bootcmd")
            raise

        try:
            env = os.environ.copy()
            iid = cloud.get_instance_id()
            if iid:
                env['INSTANCE_ID'] = str(iid)
            cmd = ['/bin/sh', tmpf.name]
            util.subp(cmd, env=env, capture=False)
        except Exception:
            util.logexc(log, "Failed to run bootcmd module %s", name)
            raise
"""Test NotebookApp"""
import logging
import os
from tempfile import NamedTemporaryFile
import nose.tools as nt
from traitlets.tests.utils import check_help_all_output
from jupyter_core.application import NoStart
from ipython_genutils.tempdir import TemporaryDirectory
from traitlets import TraitError
from notebook import notebookapp
NotebookApp = notebookapp.NotebookApp

def test_help_output():
    """ipython notebook --help-all works"""
    check_help_all_output('notebook')


def test_server_info_file():
    td = TemporaryDirectory()
    nbapp = NotebookApp(runtime_dir=td.name, log=logging.getLogger())

    def get_servers():
        return list(notebookapp.list_running_servers(nbapp.runtime_dir))

    nbapp.initialize(argv=[])
    nbapp.write_server_info_file()
    servers = get_servers()
    nt.assert_equal(len(servers), 1)
    nt.assert_equal(servers[0]['port'], nbapp.port)
    nt.assert_equal(servers[0]['url'], nbapp.connection_url)
    nbapp.remove_server_info_file()
    nt.assert_equal(get_servers(), [])
    # The ENOENT error should be silenced.
    nbapp.remove_server_info_file()


def test_nb_dir():
    with TemporaryDirectory() as td:
        app = NotebookApp(notebook_dir=td)
        nt.assert_equal(app.notebook_dir, td)


def test_no_create_nb_dir():
    with TemporaryDirectory() as td:
        nbdir = os.path.join(td, 'notebooks')
        app = NotebookApp()
        with nt.assert_raises(TraitError):
            app.notebook_dir = nbdir


def test_missing_nb_dir():
    with TemporaryDirectory() as td:
        nbdir = os.path.join(td, 'notebook', 'dir', 'is', 'missing')
        app = NotebookApp()
        with nt.assert_raises(TraitError):
            app.notebook_dir = nbdir


def test_invalid_nb_dir():
    with NamedTemporaryFile() as tf:
        app = NotebookApp()
        with nt.assert_raises(TraitError):
            app.notebook_dir = tf


def test_generate_config():
    with TemporaryDirectory() as td:
        app = NotebookApp(config_dir=td)
        app.initialize(['--generate-config'])
        with nt.assert_raises(NoStart):
            app.start()
        assert os.path.exists(os.path.join(td, 'jupyter_notebook_config.py'))
from __future__ import print_function, division
import matplotlib
import logging
from sys import stdout
matplotlib.use('Agg')  # Must be before importing matplotlib.pyplot or pylab!
from neuralnilm import (Net, RealApplianceSource,
                        BLSTMLayer, DimshuffleLayer,
                        BidirectionalRecurrentLayer)
from neuralnilm.source import (standardise, discretize, fdiff, power_and_fdiff,
                               RandomSegments, RandomSegmentsInMemory,
                               SameLocation)
from neuralnilm.experiment import run_experiment, init_experiment
from neuralnilm.net import TrainingError
from neuralnilm.layers import (MixtureDensityLayer, DeConv1DLayer,
                               SharedWeightsDenseLayer)
from neuralnilm.objectives import (scaled_cost, mdn_nll,
                                   scaled_cost_ignore_inactive, ignore_inactive,
                                   scaled_cost3)
from neuralnilm.plot import MDNPlotter, CentralOutputPlotter, Plotter
from neuralnilm.updates import clipped_nesterov_momentum
from neuralnilm.disaggregate import disaggregate
from lasagne.nonlinearities import sigmoid, rectify, tanh, identity
from lasagne.objectives import mse, binary_crossentropy
from lasagne.init import Uniform, Normal, Identity
from lasagne.layers import (LSTMLayer, DenseLayer, Conv1DLayer,
                            ReshapeLayer, FeaturePoolLayer, RecurrentLayer,
                            DropoutLayer)
from lasagne.layers.batch_norm import BatchNormLayer
from lasagne.updates import nesterov_momentum, momentum
from functools import partial
import os
import __main__
from copy import deepcopy
from math import sqrt
import numpy as np
import theano.tensor as T
import gc

"""
447: first attempt at disaggregation
"""

NAME = os.path.splitext(os.path.split(__main__.__file__)[1])[0]
# PATH = "/homes/dk3810/workspace/python/neuralnilm/figures"
PATH = "/data/dk3810/figures"
SAVE_PLOT_INTERVAL = 1000

N_SEQ_PER_BATCH = 64

source_dict = dict(
    filename='/data/dk3810/ukdale.h5',
    window=("2013-03-18", None),
    train_buildings=[1],
    validation_buildings=[1],
    n_seq_per_batch=N_SEQ_PER_BATCH,
    standardise_input=True,
    standardise_targets=True,
    independently_center_inputs=True,
    ignore_incomplete=True,
    offset_probability=0.5,
    ignore_offset_activations=True
)

net_dict = dict(
    save_plot_interval=SAVE_PLOT_INTERVAL,
    # loss_function=partial(ignore_inactive, loss_func=mdn_nll, seq_length=SEQ_LENGTH),
    # loss_function=lambda x, t: mdn_nll(x, t).mean(),
    # loss_function=lambda x, t: (mse(x, t) * MASK).mean(),
    loss_function=lambda x, t: mse(x, t).mean(),
    # loss_function=lambda x, t: binary_crossentropy(x, t).mean(),
    # loss_function=partial(scaled_cost, loss_func=mse),
    # loss_function=ignore_inactive,
    # loss_function=partial(scaled_cost3, ignore_inactive=False),
    # updates_func=momentum,
    updates_func=clipped_nesterov_momentum,
    updates_kwargs={'clip_range': (0, 10)},
    learning_rate=1e-2,
    learning_rate_changes_by_iteration={
        1000: 1e-3,
        5000: 1e-4
    },
    do_save_activations=True,
    auto_reshape=False,
    # plotter=CentralOutputPlotter
    plotter=Plotter(n_seq_to_plot=32)
)


def exp_a(name, target_appliance, seq_length):
    global source
    source_dict_copy = deepcopy(source_dict)
    source_dict_copy.update(dict(
        target_appliance=target_appliance,
        logger=logging.getLogger(name),
        seq_length=seq_length
    ))
    source = SameLocation(**source_dict_copy)
    net_dict_copy = deepcopy(net_dict)
    net_dict_copy.update(dict(
        experiment_name=name,
        source=source
    ))
    NUM_FILTERS = 4
    net_dict_copy['layers_config'] = [
        {
            'type': DimshuffleLayer,
            'pattern': (0, 2, 1)  # (batch, features, time)
        },
        {
            'label': 'conv0',
            'type': Conv1DLayer,  # convolve over the time axis
            'num_filters': NUM_FILTERS,
            'filter_length': 4,
            'stride': 1,
            'nonlinearity': None,
            'border_mode': 'valid'
        },
        {
            'type': DimshuffleLayer,
            'pattern': (0, 2, 1)  # back to (batch, time, features)
        },
        {
            'label': 'dense0',
            'type': DenseLayer,
            'num_units': (seq_length - 3),
            'nonlinearity': rectify
        },
        {
            'type': DropoutLayer,
            'p': 0.5
        },
        {
            'label': 'dense2',
            'type': DenseLayer,
            'num_units': 16,
            'nonlinearity': rectify
        },
        {
            'type': DropoutLayer,
            'p': 0.5
        },
        {
            'type': DenseLayer,
            'num_units': (seq_length - 3) * NUM_FILTERS,
            'nonlinearity': rectify
        },
        {
            'type': ReshapeLayer,
            'shape': (N_SEQ_PER_BATCH, seq_length - 3, NUM_FILTERS)
        },
        {
            'type': DimshuffleLayer,
            'pattern': (0, 2, 1)  # (batch, features, time)
        },
        {
            'type': DeConv1DLayer,
            'num_output_channels': 1,
            'filter_length': 4,
            'stride': 1,
            'nonlinearity': None,
            'border_mode': 'full'
        },
        {
            'type': DimshuffleLayer,
            'pattern': (0, 2, 1)  # back to (batch, time, features)
        }
    ]
    net = Net(**net_dict_copy)
    return net


def main():
    APPLIANCES = [
        ('a', 'fridge freezer', 800),
        ('b', 'coffee maker', 512),
        ('c', 'dish washer', 2000),
        ('d', 'hair dryer', 256),
        ('e', 'kettle', 256),
        ('f', 'oven', 2000),
        ('g', 'toaster', 256),
        ('h', 'light', 2000),
        ('i', 'washer dryer', 2000)
    ]
    for experiment, appliance, seq_length in APPLIANCES[:1]:
        full_exp_name = NAME + experiment
        func_call = init_experiment(PATH, 'a', full_exp_name)
        func_call = func_call[:-1] + ", '{}', {})".format(appliance, seq_length)
        logger = logging.getLogger(full_exp_name)
        try:
            net = eval(func_call)
            run_experiment(net, epochs=20000)
        except KeyboardInterrupt:
            logger.info("KeyboardInterrupt")
            break
        except Exception as exception:
            logger.exception("Exception")
            # raise
        else:
            del net.source
            del net
            gc.collect()
        finally:
            logging.shutdown()


if __name__ == "__main__":
    main()


"""
Emacs variables
Local Variables:
compile-command: "cp /home/jack/workspace/python/neuralnilm/scripts/e464.py /mnt/sshfs/imperial/workspace/python/neuralnilm/scripts/"
End:
"""
#!/usr/bin/python
# -*- coding: utf8 -*-
import re
from Page import *

class NounIterator(object):
    def __init__(self, start=None):
        self.start = start
        self.idx = 0
        self.started = False

    def filtered(self, n):
        if self.started:
            return False
        # print "%s %s" % (n, self.start)
        if self.start is None or n == self.start:
            self.started = True
            return False
        return True

    def __iter__(self):
        return self

    def next(self):
        n = self._next()
        while self.filtered(n):
            n = self._next()
        return n


class NounPageIterator(NounIterator):
    def _next(self):
        while self.idx >= len(self.it.nouns):
            self.it = self.it.next()
            self.idx = 0
        nouns = self.it.nouns
        n = nouns[self.idx]
        self.idx += 1
        return n


class GermanNounPageIterator(NounPageIterator):
    URL = "http://de.wiktionary.org/w/api.php?action=query&list=categorymembers&cmtitle=Kategorie:Substantiv_(Deutsch)&format=json"

    def __iter__(self):
        self.it = NounListPage(self.URL)
        return NounPageIterator.__iter__(self)

    def next(self):
        n = NounPageIterator.next(self)
        return GermanNounPage.from_noun(n)


class GermanNounPageRawIterator(NounPageIterator):
    URL = "http://de.wiktionary.org/w/api.php?action=query&list=categorymembers&cmtitle=%s&format=json"

    def __init__(self, cat, start=None):
        # @cat = ex: Kategorie:Substantiv_(Deutsch)
        self.url = self.URL % cat
        NounPageIterator.__init__(self, start)

    def __iter__(self):
        self.it = NounListPage(self.url)
        return NounPageIterator.__iter__(self)


class EnglishNounPageIterator(NounPageIterator):
    URL = "http://en.wiktionary.org/w/api.php?action=query&list=categorymembers&cmtitle=Category:English_adjectives&format=json"

    def __iter__(self):
        self.it = EnglishNounListPage(self.URL)
        return NounPageIterator.__iter__(self)

    def next(self):
        n = NounPageIterator.next(self)
        return EnglishNounPage.from_noun(n)


class NounFileIterator(NounIterator):
    def __init__(self, fp, start=None):
        self.fp = fp
        NounIterator.__init__(self, start)

    def __iter__(self):
        NounIterator.__iter__(self)
        self.it = iter(self.fp)
        return self

    def _next(self):
        return self.it.next().strip().decode("utf-8")


class NounFileNameIterator(NounFileIterator):
    def __init__(self, f, start=None):
        self.fp = open(f, "r")
        NounFileIterator.__init__(self, self.fp, start)


class GermanNounFileNameIterator(NounFileNameIterator):
    def next(self):
        n = NounFileNameIterator.next(self)
        return GermanNounPage.from_noun(n)


class FreqNounIterator(NounPageIterator):
    URL = "http://fr.wiktionary.org/w/api.php?action=query&list=categorymembers&cmtitle=%s&format=json"

    def __init__(self, title):
        "ex: Wiktionnaire:Listes_de_fréquence/wortschatz-de-1-2000"
        self.page = FrenchNounPage.from_noun(title)

    def __iter__(self):
        self.it = iter(self.page.wikicode().splitlines())
        self.remain = None
        return self

    def next(self):
        if self.remain is not None:
            r = self.remain
            self.remain = None
            return r
        while True:
            line = self.it.next()
            s = re.search("[*] [0-9]+[.] [[]{2}(.+?)[]]{2}", line)
            if s is not None:
                g = s.group(1)
                g = g.split("|")
                if len(g) > 1:
                    self.remain = g[1]
                return g[0]


class CategoryRawIterator(NounPageIterator):
    URL = u"http://%s.wiktionary.org/w/api.php?action=query&list=categorymembers&cmtitle=%s:%s&format=json&cmlimit=500"

    def __init__(self, dom, catword, cat, start):
        self.url = self.URL % (dom, catword, cat)
        NounPageIterator.__init__(self, start)

    def __iter__(self):
        self.it = NounListPage(self.url)
        return NounPageIterator.__iter__(self)


class FrenchCategoryRawIterator(CategoryRawIterator):
    def __init__(self, cat, start=None):
        # @cat = ex: "Noms communs en français"
        return CategoryRawIterator.__init__(self, u"fr", u"Catégorie", cat, start)


class SpanishCategoryRawIterator(CategoryRawIterator):
    def __init__(self, cat, start=None):
        # @cat = ex: "Noms communs en français"
        return CategoryRawIterator.__init__(self, u"es", u"Categoría", cat, start)


class RecentChangesIterator(NounPageIterator):
    def __init__(self, rcstart=None, until=None, domain="fr"):
        NounPageIterator.__init__(self)
        self.rcstart = rcstart
        self.until = until
        self.domain = domain

    def __iter__(self):
        self.it = RecentChangesPage(rcstart=self.rcstart, until=self.until,
                                    domain=self.domain)
        return NounPageIterator.__iter__(self)


class SearchIterator(NounPageIterator):
    def __init__(self, search):
        NounPageIterator.__init__(self)
        self.search = search

    def __iter__(self):
        self.it = SearchPage(self.search)
        return NounPageIterator.__iter__(self)
import argparse
import sys
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
import numpy as np
import json
parser = argparse.ArgumentParser(description="Does some awesome things.")
parser.add_argument('message', type=str, help="pass a message into the script")
args = parser.parse_args(sys.argv[1:])
data = []
New_data = []
dt = []

with open(args.message) as json_file:
    data = json.load(json_file)


def graph(grid, d_tiempo):
    plt.switch_backend('TkAgg')  # default on my system
    f = plt.figure(num=args.message, figsize=(20, 15))
    mng = plt._pylab_helpers.Gcf.figs.get(f.number, None)
    print(New_data)
    mng = plt.get_current_fig_manager()
    mng.resize(*mng.window.maxsize())
    plt.title(args.message)
    if grid == 1:
        tempo = d_tiempo
        tempo_init = tempo[0]
        tempo_end = tempo[-1]
    gs1 = GridSpec(4, 1)
    gs1.update(left=0.05, right=0.95, wspace=0.5, hspace=0.3, bottom=0.08)

    ax1 = plt.subplot(gs1[0, :])
    ax1.grid()
    ax1.set_ylabel('Pitch', fontsize=8)
    if grid == 1:
        L1 = ax1.plot(d_tiempo, New_data['pitch'])
    else:
        L1 = ax1.plot(d_tiempo, data['pitch'])

    ax2 = plt.subplot(gs1[1, :])
    ax2.grid()
    ax2.set_ylabel('Roll', fontsize=8)
    if grid == 1:
        L1 = ax2.plot(d_tiempo, New_data['roll'])
    else:
        L1 = ax2.plot(d_tiempo, data['roll'])

    ax3 = plt.subplot(gs1[2, :])
    ax3.grid()
    ax3.set_ylabel('Yaw', fontsize=8)
    if grid == 1:
        L1 = ax3.plot(d_tiempo, New_data['yaw'])
    else:
        L1 = ax3.plot(d_tiempo, data['yaw'])

    ax4 = plt.subplot(gs1[3, :])
    ax4.grid()
    ax4.set_ylabel('Time', fontsize=8)
    if grid == 1:
        L1 = ax4.plot(d_tiempo, New_data['ledblue'])
        L2 = ax4.plot(d_tiempo, New_data['ledred'])
    else:
        L1 = ax4.plot(d_tiempo, data['ledblue'])
        L2 = ax4.plot(d_tiempo, data['ledred'])

    plt.show()


def find_nearest(array, values):
    idx = np.abs(np.subtract.outer(array, values)).argmin(0)
    return idx


def corte(init_cut, end_cut, a, b, c, d, e, f, g, h, i):
    # Slice every channel down to the [init_cut:end_cut] window.
    a = a[init_cut:end_cut]
    b = b[init_cut:end_cut]
    c = c[init_cut:end_cut]
    d = d[init_cut:end_cut]
    e = e[init_cut:end_cut]
    f = f[init_cut:end_cut]
    g = g[init_cut:end_cut]
    h = h[init_cut:end_cut]
    i = i[init_cut:end_cut]
    datos = {'roll': a, 'pitch': b, 'yaw': c, 'X': d, 'Y': e, 'Z': f,
             'time': g, 'ledblue': h, 'ledred': i}
    return datos


def reset_tempo(var_in, var_out):
    uni = var_in[0]
    for t in range(0, len(var_in)):
        var_out.append(round((var_in[t] - uni), 3))
    return var_out


graph(0, data['time'])
init_cut = float(input("initial time: "))
init_cuty = find_nearest(data['time'], init_cut)
end_cut = float(input("final time: "))
end_cuty = find_nearest(data['time'], end_cut)
New_data = corte(init_cuty, end_cuty, data['pitch'], data['roll'], data['yaw'],
                 data['X'], data['Y'], data['Z'], data['time'],
                 data['ledblue'], data['ledred'])
data = []
print(data)
data = New_data
print(data)
dt = reset_tempo(New_data['time'], dt)
graph(0, dt)
from etnaviv.parse_rng import parse_rng_file, format_path, BitSet, Domain, Stripe, Register, Array, BaseType

class DomainVisitor(object):
    '''
    Walk a rnndb domain, visit all registers recursively.
    '''
    def __init__(self):
        pass

    def visit(self, node):
        if isinstance(node, Domain):
            self.visit_domain(node)
        elif isinstance(node, Stripe):
            self.visit_stripe(node)
        elif isinstance(node, Array):
            self.visit_array(node)
        elif isinstance(node, Register):
            self.visit_register(node)
        else:
            raise ValueError('DomainVisitor: unknown node type %s' % node.__class__.__name__)

    def visit_domain(self, node):
        for child in node.contents:
            self.visit(child)

    def visit_stripe(self, node):
        for child in node.contents:
            self.visit(child)

    def visit_register(self, node):
        for child in node.contents:
            self.visit(child)

    def visit_array(self, node):
        for child in node.contents:
            self.visit(child)
"""A script to train the DNC on implemented tasks.
You can start training the DNC model on any implemented task by executing:
> python -m src.tasks.train --task=<task_name>
TO SUPPORT NEW TASKS:
1) Import necessary code for task (follow necessary requirements listed below).
2) Create new section in flags and define any valid flags for the task.
3) In the "get_task" method, append a command-line name for the task to the end
of the list "valid_tasks".
4) Append a lambda function to the end of the "instantiate_task" list that
returns an instantiated object of the task using all FLAGS defined in step 2.
REQUIREMENTS FOR ALL TASKS:
* The task's class must be a sub-class of snt.AbstractModule implementing
methods `_build(self)`, `cost(output, task_state)`,
`to_string(output, task_state, model_state)`, and `process_output(output, task_state, model_state)`.
* The `_build(self)` method must return a collections.namedtuple,
`task_state`, containing at least fields 'input'. Other fields are
allowed to be used internally in the other methods. For example, the
'target' field would likely be needed for supervised learning tasks to
calculate the cost.
* The `cost(output, task_state)` method must return the losses for the
model to be used in `tf.gradients(losses, trainable_variables)`.
* The `to_string(output, task_state, model_state)` method must return a
string. This string will be logged to the console every time a report
comes up during training time. Preferably, this string provides an
example input/output to show what the DNC model is doing.
* The `process_output(output, task_state, model_state)` method returns
the output back if no processing is needed. This method processes the
output passed to `to_string(output, task_state)`, but not to
`cost(output, task_state)`. If the output needs to be processed in
`cost(output, task_output)`, then that method needs to call it itself.
This provides ability to transform the data before
`to_string(output, task_state)` converts it to a human readable
representation. For example, if the model outputs logits, but you need
probabilities (repeat copy task), then do that here.
* The task's class has public property `output_size`. This property must be
an integer representing the size of the output expected from the DNC model
for each iteration of this task.
"""
from .. dnc.dnc import DNC
from . dna_sequencing.dna_sequencing import DNASequencing
from . repeat_copy.repeat_copy import RepeatCopy
import sonnet as snt
import tensorflow as tf
FLAGS = tf.flags.FLAGS
# DNC parameters
tf.flags.DEFINE_integer("memory_size", 16, "The number of memory slots.")
tf.flags.DEFINE_integer("word_size", 16, "The width of each memory slot.")
tf.flags.DEFINE_integer("num_read_heads", 1,
"The number of memory read heads.")
tf.flags.DEFINE_integer("hidden_size", 64,
"The size of LSTM hidden layer in the controller.")
tf.flags.DEFINE_string("controller", "lstm", "The type of controller to use "
"(options: [lstm, ff]).")
# Task parameters
tf.flags.DEFINE_integer("batch_size", 16, "The batch size used in training.")
tf.flags.DEFINE_string("task", "repeat_copy", "The task to train the DNC on.")
# RepeatCopy task parameters (used only if using the RepeatCopy task)
tf.flags.DEFINE_integer("num_bits", 4,
"Dimensionality of each vector to copy.")
tf.flags.DEFINE_integer("min_length", 1,
"Lower limit on number of vectors in the observation "
"pattern to copy.")
tf.flags.DEFINE_integer("max_length", 2,
"Upper limit on number of vectors in the observation "
"pattern to copy.")
tf.flags.DEFINE_integer("min_repeats", 1,
"Lower limit on number of copy repeats.")
tf.flags.DEFINE_integer("max_repeats", 2,
"Upper limit on number of copy repeats.")
# Training parameters
tf.flags.DEFINE_integer("num_training_iterations", 1000,
"Number of iterations to train for.")
tf.flags.DEFINE_integer("report_interval", 100,
"Iterations between reports (samples, valid loss).")
tf.flags.DEFINE_string("checkpoint_dir", "~/tmp/dnc", "Checkpoint directory.")
tf.flags.DEFINE_string("checkpoint_basename", "model.ckpt",
"Base name for the checkpoint files")
tf.flags.DEFINE_integer("checkpoint_interval", -1,
"Checkpointing step interval (-1 means never).")
tf.flags.DEFINE_float("gpu_usage", 0.2,
"The percent of gpu memory to use for each process.")
tf.flags.DEFINE_boolean("test", False,
"Whether this is testing the model or not.")
# Optimizer parameters
tf.flags.DEFINE_float("max_grad_norm", 50, "Gradient clipping norm limit.")
tf.flags.DEFINE_float("learning_rate", 1e-4, "Optimizer learning rate.")
tf.flags.DEFINE_float("optimizer_epsilon", 1e-10,
"Epsilon used for RMSProp optimizer.")
def get_task(task_name):
"""Instantiate a task with all valid flags that provides training data."""
valid_tasks = ["repeat_copy", "dna_sequencing"]
instantiate_task = [
lambda: RepeatCopy(
num_bits=FLAGS.num_bits,
batch_size=FLAGS.batch_size,
min_length=FLAGS.min_length,
max_length=FLAGS.max_length,
min_repeats=FLAGS.min_repeats,
max_repeats=FLAGS.max_repeats),
lambda: DNASequencing(
batch_size=FLAGS.batch_size),
]
return instantiate_task[valid_tasks.index(task_name)]()
def run_model(input, output_size):
"""Run the model on the given input and returns size output_size."""
dnc_cell = DNC(output_size,
memory_size=FLAGS.memory_size,
word_size=FLAGS.word_size,
num_read_heads=FLAGS.num_read_heads,
hidden_size=FLAGS.hidden_size)
if FLAGS.test and FLAGS.task == "repeat_copy":
prev_state = dnc_cell.initial_state(1, dtype=input.dtype)
else:
prev_state = dnc_cell.initial_state(FLAGS.batch_size,
dtype=input.dtype)
if FLAGS.test and FLAGS.task == "repeat_copy":
model_state = {
'rw': prev_state.tape_head.read_weights,
'ww': prev_state.tape_head.write_weights,
'fg': prev_state.tape_head.free_gate,
'ag': prev_state.tape_head.alloc_gate,
}
output = None
model_state_t = prev_state
for time_index in range(13):
output_t, model_state_t = tf.nn.dynamic_rnn(
cell=dnc_cell,
inputs=tf.expand_dims(input[time_index, :, :], 0),
time_major=True,
initial_state=model_state_t)
if output is None:
output = output_t
else:
output = tf.concat([output, output_t], 0)
model_state['rw'] = tf.concat(
[model_state['rw'], model_state_t.tape_head.read_weights], 0)
model_state['ww'] = tf.concat(
[model_state['ww'], model_state_t.tape_head.write_weights], 0)
model_state['fg'] = tf.concat(
[model_state['fg'], model_state_t.tape_head.free_gate], 0)
model_state['ag'] = tf.concat(
[model_state['ag'], model_state_t.tape_head.alloc_gate], 0)
else:
output, model_state = tf.nn.dynamic_rnn(
cell=dnc_cell,
inputs=input,
time_major=True,
initial_state=prev_state)
return output, model_state
def run_lstm_baseline(input, output_size):
"""Run a basic LSTM basline model on given input."""
lstm = snt.LSTM(hidden_size=output_size)
initial_state = lstm.initial_state(FLAGS.batch_size, dtype=input.dtype)
output, model_state = tf.nn.dynamic_rnn(
cell=lstm,
inputs=input,
time_major=True,
initial_state=initial_state)
return output, model_state
def get_config():
"""Return configuration for a tf.Session using a fraction of GPU memory."""
config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = FLAGS.gpu_usage
    return config
def train():
"""Train the DNC and periodically report the loss."""
task = get_task(FLAGS.task)
task_state = task()
output, model_state = run_model(task_state.input, task.output_size)
output_processed = task.process_output(output, task_state, model_state)
# responsibility of task.cost to process output if desired
train_loss = task.cost(output, task_state)
trainable_variables = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(
tf.gradients(train_loss, trainable_variables), FLAGS.max_grad_norm)
global_step = tf.Variable(0, trainable=False, name='global_step')
optimizer = tf.train.RMSPropOptimizer(
FLAGS.learning_rate, epsilon=FLAGS.optimizer_epsilon)
train_step = optimizer.apply_gradients(
zip(grads, trainable_variables), global_step=global_step)
saver = tf.train.Saver()
if FLAGS.checkpoint_interval > 0:
hooks = [
tf.train.CheckpointSaverHook(
checkpoint_dir=FLAGS.checkpoint_dir,
checkpoint_basename=FLAGS.checkpoint_basename,
save_steps=FLAGS.checkpoint_interval,
saver=saver)
]
else:
hooks = []
# Training time
with tf.train.SingularMonitoredSession(
hooks=hooks, config=get_config(), checkpoint_dir=FLAGS.checkpoint_dir,
) as sess:
start_iteration = sess.run(global_step)
total_loss = 0
for train_iteration in range(start_iteration,
FLAGS.num_training_iterations):
if FLAGS.test:
loss = sess.run(train_loss)
else:
_, loss = sess.run([train_step, train_loss])
total_loss += loss
# report periodically
if (train_iteration + 1) % FLAGS.report_interval == 0:
task_state_eval, output_eval, model_state_eval = sess.run(
[task_state, output_processed, model_state])
report_string = task.to_string(
output_eval, task_state_eval, model_state_eval,
verbose=FLAGS.test)
if not FLAGS.test:
tf.logging.info(
"Train Iteration %d: Avg training loss: %f.\n",
train_iteration, total_loss / FLAGS.report_interval)
# reset total_loss to report the interval's loss only
total_loss = 0
if report_string != "":
tf.logging.info(report_string)
return task
def main(unused):
"""Main method for this app."""
tf.logging.set_verbosity(3) # Print INFO log messages.
train()
if __name__ == "__main__":
tf.app.run() | unknown | codeparrot/codeparrot-clean | ||
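The module docstring above spells out the interface every task must satisfy (`_build`, `cost`, `to_string`, `process_output`, and an `output_size` property, with `_build` returning a namedtuple holding at least `input`). A minimal framework-free sketch of that shape (assumption: no Sonnet/TensorFlow; `EchoTask` and its fields are hypothetical names for illustration only):

```python
import collections

# Task state namedtuple: must carry at least `input`; `target` is used by cost().
TaskState = collections.namedtuple("TaskState", ["input", "target"])

class EchoTask:
    """Toy task: the model should reproduce its input unchanged."""

    def __init__(self, size):
        self._size = size

    @property
    def output_size(self):
        # Size of the output expected from the model on each iteration.
        return self._size

    def _build(self):
        data = list(range(self._size))
        return TaskState(input=data, target=data)

    def cost(self, output, task_state):
        # Sum of absolute errors against the target.
        return sum(abs(o - t) for o, t in zip(output, task_state.target))

    def process_output(self, output, task_state, model_state=None):
        # No post-processing needed for this toy task; return output as-is.
        return output

    def to_string(self, output, task_state, model_state=None):
        return "in=%s out=%s" % (task_state.input, output)

task = EchoTask(3)
state = task._build()
print(task.cost(state.input, state))  # perfect echo -> 0
```

A real task would replace the plain lists with tensors and put the loss in a form usable by `tf.gradients`, but the method/property contract is the same.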
/*
* Copyright 2012-present the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.boot.logging.logback;
import java.util.function.Supplier;
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.Appender;
import ch.qos.logback.core.pattern.Converter;
import ch.qos.logback.core.spi.LifeCycle;
import ch.qos.logback.core.status.InfoStatus;
import ch.qos.logback.core.status.Status;
import org.jspecify.annotations.Nullable;
/**
* Custom {@link LogbackConfigurator} used to add {@link Status Statuses} when Logback
* debugging is enabled.
*
* @author Andy Wilkinson
*/
class DebugLogbackConfigurator extends LogbackConfigurator {
DebugLogbackConfigurator(LoggerContext context) {
super(context);
}
@Override
<T extends Converter<?>> void conversionRule(String conversionWord, Class<T> converterClass,
Supplier<T> converterSupplier) {
info("Adding conversion rule of type '" + converterClass.getName() + "' for word '" + conversionWord + "'");
super.conversionRule(conversionWord, converterClass, converterSupplier);
}
@Override
void appender(String name, Appender<?> appender) {
info("Adding appender '" + appender + "' named '" + name + "'");
super.appender(name, appender);
}
@Override
void logger(String name, @Nullable Level level, boolean additive, @Nullable Appender<ILoggingEvent> appender) {
info("Configuring logger '" + name + "' with level '" + level + "'. Additive: " + additive);
if (appender != null) {
info("Adding appender '" + appender + "' to logger '" + name + "'");
}
super.logger(name, level, additive, appender);
}
@Override
void start(LifeCycle lifeCycle) {
info("Starting '" + lifeCycle + "'");
super.start(lifeCycle);
}
private void info(String message) {
getContext().getStatusManager().add(new InfoStatus(message, this));
}
} | java | github | https://github.com/spring-projects/spring-boot | core/spring-boot/src/main/java/org/springframework/boot/logging/logback/DebugLogbackConfigurator.java |
// We need this feature as it changes `dylib` linking behavior and allows us to link to `rustc_driver`.
#![feature(rustc_private)]
// Several crates are depended upon but unused so that they are present in the sysroot
#![expect(unused_crate_dependencies)]
use std::process::ExitCode;
// A note about jemalloc: rustc uses jemalloc when built for CI and
// distribution. The obvious way to do this is with the `#[global_allocator]`
// mechanism. However, for complicated reasons (see
// https://github.com/rust-lang/rust/pull/81782#issuecomment-784438001 for some
// details) that mechanism doesn't work here. Also, we'd like to use a
// consistent allocator across the rustc <-> llvm boundary, and
// `#[global_allocator]` wouldn't provide that.
//
// Instead, we use a lower-level mechanism, namely the
// `"override_allocator_on_supported_platforms"` Cargo feature of jemalloc-sys.
//
// This makes jemalloc-sys override the libc/system allocator's implementation
// of `malloc`, `free`, etc. This means that Rust's `System` allocator, which
// calls `libc::malloc()` et al., is actually calling into jemalloc.
//
// A consequence of not using `GlobalAlloc` (and the `tikv-jemallocator` crate
// provides an impl of that trait, which is called `Jemalloc`) is that we
// cannot use the sized deallocation APIs (`sdallocx`) that jemalloc provides.
// It's unclear how much performance is lost because of this.
//
// NOTE: Even though Cargo passes `--extern` with `tikv_jemalloc_sys`, we still need to `use` the
// crate for the compiler to see the `#[used]`, see https://github.com/rust-lang/rust/issues/64402.
// This is similarly required if we used a crate with `#[global_allocator]`.
//
// NOTE: if you are reading this comment because you want to set a custom `global_allocator` for
// benchmarking, consider using the benchmarks in the `rustc-perf` collector suite instead:
// https://github.com/rust-lang/rustc-perf/blob/master/collector/README.md#profiling
//
// NOTE: if you are reading this comment because you want to replace jemalloc with another allocator
// to compare their performance, see
// https://github.com/rust-lang/rust/commit/b90cfc887c31c3e7a9e6d462e2464db1fe506175#diff-43914724af6e464c1da2171e4a9b6c7e607d5bc1203fa95c0ab85be4122605ef
// for an example of how to do so.
#[cfg(feature = "jemalloc")]
use tikv_jemalloc_sys as _;
fn main() -> ExitCode {
rustc_driver::main()
} | rust | github | https://github.com/rust-lang/rust | compiler/rustc/src/main.rs |
//===--- DumpAST.cpp ---------------------------------------------*- C++-*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
// Defines a few tweaks that expose AST and related information.
// Some of these are fairly clang-specific and hidden (e.g. textual AST dumps).
// Others are more generally useful (class layout) and are exposed by default.
//===----------------------------------------------------------------------===//
#include "XRefs.h"
#include "refactor/Tweak.h"
#include "clang/AST/ASTTypeTraits.h"
#include "clang/AST/Type.h"
#include "llvm/Support/FormatVariadic.h"
#include "llvm/Support/ScopedPrinter.h"
#include "llvm/Support/raw_ostream.h"
#include <optional>
namespace clang {
namespace clangd {
namespace {
/// Dumps the AST of the selected node.
/// Input:
/// fcall("foo");
/// ^^^^^
/// Message:
/// CallExpr
/// |-DeclRefExpr fcall
/// `-StringLiteral "foo"
class DumpAST : public Tweak {
public:
const char *id() const final;
bool prepare(const Selection &Inputs) override {
for (auto *N = Inputs.ASTSelection.commonAncestor(); N && !Node;
N = N->Parent)
if (dumpable(N->ASTNode))
Node = N->ASTNode;
return Node.has_value();
}
Expected<Effect> apply(const Selection &Inputs) override;
std::string title() const override {
return std::string(
llvm::formatv("Dump {0} AST", Node->getNodeKind().asStringRef()));
}
llvm::StringLiteral kind() const override { return CodeAction::INFO_KIND; }
bool hidden() const override { return true; }
private:
static bool dumpable(const DynTypedNode &N) {
// Sadly not all node types can be dumped, and there's no API to check.
// See DynTypedNode::dump().
return N.get<Decl>() || N.get<Stmt>() || N.get<Type>();
}
std::optional<DynTypedNode> Node;
};
REGISTER_TWEAK(DumpAST)
llvm::Expected<Tweak::Effect> DumpAST::apply(const Selection &Inputs) {
std::string Str;
llvm::raw_string_ostream OS(Str);
Node->dump(OS, Inputs.AST->getASTContext());
return Effect::showMessage(std::move(OS.str()));
}
/// Dumps the SelectionTree.
/// Input:
/// int fcall(int);
/// void foo() {
/// fcall(2 + 2);
/// ^^^^^
/// }
/// Message:
/// TranslationUnitDecl
/// FunctionDecl void foo()
/// CompoundStmt {}
/// .CallExpr fcall(2 + 2)
/// ImplicitCastExpr fcall
/// .DeclRefExpr fcall
/// BinaryOperator 2 + 2
/// *IntegerLiteral 2
class ShowSelectionTree : public Tweak {
public:
const char *id() const final;
bool prepare(const Selection &Inputs) override { return true; }
Expected<Effect> apply(const Selection &Inputs) override {
return Effect::showMessage(llvm::to_string(Inputs.ASTSelection));
}
std::string title() const override { return "Show selection tree"; }
llvm::StringLiteral kind() const override { return CodeAction::INFO_KIND; }
bool hidden() const override { return true; }
};
REGISTER_TWEAK(ShowSelectionTree)
/// Dumps the symbol under the cursor.
/// Inputs:
/// void foo();
/// ^^^
/// Message:
/// foo -
/// {"containerName":null,"id":"CA2EBE44A1D76D2A","name":"foo","usr":"c:@F@foo#"}
class DumpSymbol : public Tweak {
const char *id() const final;
bool prepare(const Selection &Inputs) override { return true; }
Expected<Effect> apply(const Selection &Inputs) override {
std::string Storage;
llvm::raw_string_ostream Out(Storage);
for (auto &Sym : getSymbolInfo(
*Inputs.AST, sourceLocToPosition(Inputs.AST->getSourceManager(),
Inputs.Cursor)))
Out << Sym;
return Effect::showMessage(Out.str());
}
std::string title() const override { return "Dump symbol under the cursor"; }
llvm::StringLiteral kind() const override { return CodeAction::INFO_KIND; }
bool hidden() const override { return true; }
};
REGISTER_TWEAK(DumpSymbol)
/// Shows the layout of the RecordDecl under the cursor.
/// Input:
/// struct X { int foo; };
/// ^^^^^^^^
/// Message:
/// 0 | struct X
/// 0 | int foo
/// | [sizeof=4, dsize=4, align=4,
/// | nvsize=4, nvalign=4]
class DumpRecordLayout : public Tweak {
public:
const char *id() const final;
bool prepare(const Selection &Inputs) override {
if (auto *Node = Inputs.ASTSelection.commonAncestor())
if (auto *D = Node->ASTNode.get<Decl>())
Record = dyn_cast<RecordDecl>(D);
return Record && Record->isThisDeclarationADefinition() &&
!Record->isDependentType();
}
Expected<Effect> apply(const Selection &Inputs) override {
std::string Str;
llvm::raw_string_ostream OS(Str);
Inputs.AST->getASTContext().DumpRecordLayout(Record, OS);
return Effect::showMessage(std::move(OS.str()));
}
std::string title() const override {
return std::string(llvm::formatv(
"Show {0} layout",
TypeWithKeyword::getTagTypeKindName(Record->getTagKind())));
}
llvm::StringLiteral kind() const override { return CodeAction::INFO_KIND; }
// FIXME: this is interesting to most users. However:
// - triggering is too broad (e.g. triggers on comments within a class)
// - showMessage has inconsistent UX (e.g. newlines are stripped in VSCode)
// - the output itself is a bit hard to decipher.
bool hidden() const override { return true; }
private:
const RecordDecl *Record = nullptr;
};
REGISTER_TWEAK(DumpRecordLayout)
} // namespace
} // namespace clangd
} // namespace clang | cpp | github | https://github.com/llvm/llvm-project | clang-tools-extra/clangd/refactor/tweaks/DumpAST.cpp |
<?php
namespace Illuminate\Tests\Database;
use Illuminate\Database\Connection;
use Illuminate\Database\Schema\Grammars\MariaDbGrammar;
use Illuminate\Database\Schema\MariaDbBuilder;
use Mockery as m;
use PHPUnit\Framework\TestCase;
class DatabaseMariaDbBuilderTest extends TestCase
{
public function testCreateDatabase()
{
$connection = m::mock(Connection::class);
$grammar = new MariaDbGrammar($connection);
$connection->shouldReceive('getConfig')->once()->with('charset')->andReturn('utf8mb4');
$connection->shouldReceive('getConfig')->once()->with('collation')->andReturn('utf8mb4_unicode_ci');
$connection->shouldReceive('getSchemaGrammar')->once()->andReturn($grammar);
$connection->shouldReceive('statement')->once()->with(
'create database `my_temporary_database` default character set `utf8mb4` default collate `utf8mb4_unicode_ci`'
)->andReturn(true);
$builder = new MariaDbBuilder($connection);
$builder->createDatabase('my_temporary_database');
}
public function testDropDatabaseIfExists()
{
$connection = m::mock(Connection::class);
$grammar = new MariaDbGrammar($connection);
$connection->shouldReceive('getSchemaGrammar')->once()->andReturn($grammar);
$connection->shouldReceive('statement')->once()->with(
'drop database if exists `my_database_a`'
)->andReturn(true);
$builder = new MariaDbBuilder($connection);
$builder->dropDatabaseIfExists('my_database_a');
}
} | php | github | https://github.com/laravel/framework | tests/Database/DatabaseMariaDbBuilderTest.php |
/*
* Copyright 2002-present the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cache.interceptor;
import java.util.Collections;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicLong;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.CacheConfig;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.CachePut;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.CachingConfigurer;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.support.SimpleCacheManager;
import org.springframework.cache.support.SimpleValueWrapper;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatExceptionOfType;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.BDDMockito.given;
import static org.mockito.BDDMockito.willReturn;
import static org.mockito.BDDMockito.willThrow;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
/**
* @author Stephane Nicoll
*/
class CacheErrorHandlerTests {
private AnnotationConfigApplicationContext context;
private Cache cache;
private CacheInterceptor cacheInterceptor;
private CacheErrorHandler errorHandler;
private SimpleService simpleService;
@BeforeEach
void setup() {
this.context = new AnnotationConfigApplicationContext(Config.class);
this.cache = context.getBean("mockCache", Cache.class);
this.cacheInterceptor = context.getBean(CacheInterceptor.class);
this.errorHandler = context.getBean(CacheErrorHandler.class);
this.simpleService = context.getBean(SimpleService.class);
}
@AfterEach
void closeContext() {
this.context.close();
}
@Test
void getFail() {
UnsupportedOperationException exception = new UnsupportedOperationException("Test exception on get");
willThrow(exception).given(this.cache).get(0L);
Object result = this.simpleService.get(0L);
verify(this.errorHandler).handleCacheGetError(exception, this.cache, 0L);
verify(this.cache).get(0L);
verify(this.cache).put(0L, result); // result of the invocation
}
@Test
@SuppressWarnings("unchecked")
public void getSyncFail() {
UnsupportedOperationException exception = new UnsupportedOperationException("Test exception on get");
willThrow(exception).given(this.cache).get(eq(0L), any(Callable.class));
Object result = this.simpleService.getSync(0L);
assertThat(result).isEqualTo(0L);
verify(this.errorHandler).handleCacheGetError(exception, this.cache, 0L);
verify(this.cache).get(eq(0L), any(Callable.class));
}
@Test
public void getCompletableFutureFail() {
UnsupportedOperationException exception = new UnsupportedOperationException("Test exception on get");
willThrow(exception).given(this.cache).retrieve(eq(0L));
Object result = this.simpleService.getFuture(0L).join();
assertThat(result).isEqualTo(0L);
verify(this.errorHandler).handleCacheGetError(exception, this.cache, 0L);
verify(this.cache).retrieve(eq(0L));
}
@Test
public void getMonoFail() {
UnsupportedOperationException exception = new UnsupportedOperationException("Test exception on get");
willThrow(exception).given(this.cache).retrieve(eq(0L));
Object result = this.simpleService.getMono(0L).block();
assertThat(result).isEqualTo(0L);
verify(this.errorHandler).handleCacheGetError(exception, this.cache, 0L);
verify(this.cache).retrieve(eq(0L));
}
@Test
public void getFluxFail() {
UnsupportedOperationException exception = new UnsupportedOperationException("Test exception on get");
willThrow(exception).given(this.cache).retrieve(eq(0L));
Object result = this.simpleService.getFlux(0L).blockLast();
assertThat(result).isEqualTo(0L);
verify(this.errorHandler).handleCacheGetError(exception, this.cache, 0L);
verify(this.cache).retrieve(eq(0L));
}
@Test
void getAndPutFail() {
UnsupportedOperationException exception = new UnsupportedOperationException("Test exception on get");
willThrow(exception).given(this.cache).get(0L);
willThrow(exception).given(this.cache).put(0L, 0L); // Update of the cache will fail as well
Object counter = this.simpleService.get(0L);
willReturn(new SimpleValueWrapper(2L)).given(this.cache).get(0L);
Object counter2 = this.simpleService.get(0L);
Object counter3 = this.simpleService.get(0L);
assertThat(counter2).isNotSameAs(counter);
assertThat(counter3).isEqualTo(counter2);
}
@Test
void getFailProperException() {
UnsupportedOperationException exception = new UnsupportedOperationException("Test exception on get");
willThrow(exception).given(this.cache).get(0L);
this.cacheInterceptor.setErrorHandler(new SimpleCacheErrorHandler());
assertThatExceptionOfType(UnsupportedOperationException.class)
.isThrownBy(() -> this.simpleService.get(0L))
.withMessage("Test exception on get");
}
@Test
void putFail() {
UnsupportedOperationException exception = new UnsupportedOperationException("Test exception on put");
willThrow(exception).given(this.cache).put(0L, 0L);
this.simpleService.put(0L);
verify(this.errorHandler).handleCachePutError(exception, cache, 0L, 0L);
}
@Test
void putFailProperException() {
UnsupportedOperationException exception = new UnsupportedOperationException("Test exception on put");
willThrow(exception).given(this.cache).put(0L, 0L);
this.cacheInterceptor.setErrorHandler(new SimpleCacheErrorHandler());
assertThatExceptionOfType(UnsupportedOperationException.class)
.isThrownBy(() -> this.simpleService.put(0L))
.withMessage("Test exception on put");
}
@Test
void evictFail() {
UnsupportedOperationException exception = new UnsupportedOperationException("Test exception on evict");
willThrow(exception).given(this.cache).evict(0L);
this.simpleService.evict(0L);
verify(this.errorHandler).handleCacheEvictError(exception, cache, 0L);
}
@Test
void evictFailProperException() {
UnsupportedOperationException exception = new UnsupportedOperationException("Test exception on evict");
willThrow(exception).given(this.cache).evict(0L);
this.cacheInterceptor.setErrorHandler(new SimpleCacheErrorHandler());
assertThatExceptionOfType(UnsupportedOperationException.class)
.isThrownBy(() -> this.simpleService.evict(0L))
.withMessage("Test exception on evict");
}
@Test
void clearFail() {
UnsupportedOperationException exception = new UnsupportedOperationException("Test exception on evict");
willThrow(exception).given(this.cache).clear();
this.simpleService.clear();
verify(this.errorHandler).handleCacheClearError(exception, cache);
}
@Test
void clearFailProperException() {
UnsupportedOperationException exception = new UnsupportedOperationException("Test exception on clear");
willThrow(exception).given(this.cache).clear();
this.cacheInterceptor.setErrorHandler(new SimpleCacheErrorHandler());
assertThatExceptionOfType(UnsupportedOperationException.class)
.isThrownBy(() -> this.simpleService.clear())
.withMessage("Test exception on clear");
}
@Configuration
@EnableCaching
static class Config implements CachingConfigurer {
@Bean
@Override
public CacheErrorHandler errorHandler() {
return mock();
}
@Bean
public SimpleService simpleService() {
return new SimpleService();
}
@Override
@Bean
public CacheManager cacheManager() {
SimpleCacheManager cacheManager = new SimpleCacheManager();
cacheManager.setCaches(Collections.singletonList(mockCache()));
return cacheManager;
}
@Bean
public Cache mockCache() {
Cache cache = mock();
given(cache.getName()).willReturn("test");
return cache;
}
}
@CacheConfig("test")
public static class SimpleService {
private AtomicLong counter = new AtomicLong();
@Cacheable
public Object get(long id) {
return this.counter.getAndIncrement();
}
@Cacheable(sync = true)
public Object getSync(long id) {
return this.counter.getAndIncrement();
}
@Cacheable
public CompletableFuture<Long> getFuture(long id) {
return CompletableFuture.completedFuture(this.counter.getAndIncrement());
}
@Cacheable
public Mono<Long> getMono(long id) {
return Mono.just(this.counter.getAndIncrement());
}
@Cacheable
public Flux<Long> getFlux(long id) {
return Flux.just(this.counter.getAndIncrement(), 0L);
}
@CachePut
public Object put(long id) {
return this.counter.getAndIncrement();
}
@CacheEvict
public void evict(long id) {
}
@CacheEvict(allEntries = true)
public void clear() {
}
}
} | java | github | https://github.com/spring-projects/spring-framework | spring-context/src/test/java/org/springframework/cache/interceptor/CacheErrorHandlerTests.java |
# Copyright 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import abc
from neutron.api import extensions
from neutron.api.v2 import attributes as attr
from neutron.api.v2 import resource_helper
from neutron.common import exceptions as qexception
from neutron.plugins.common import constants
from neutron.services.service_base import ServicePluginBase
import six
supported_tunnel_types = ['fullmesh', 'Customized']
supported_tunnel_backup = ['frr', 'Secondary']
supported_qos = ['Gold', 'Silver', 'Bronze']
positive_int = (0, attr.UNLIMITED)
network_type = ['L2', 'L3']
class MPLSVPNNotFound(qexception.NotFound):
message = _("MPLSVPN %(mplsvpn_id)s could not be found")
class DuplicateMPLSVPNForTenant(qexception.InvalidInput):
message = (_("MPLSVPN service %(mplsvpn_id)s already exists "
"for tenant %(tenant_id)s"))
class AttachmentCircuitNotFound(qexception.NotFound):
message = (_("AttachmentCircuit %(attachmentcircuit_id)s could "
"not be found"))
class DuplicateAttachmentCircuitForTenant(qexception.InvalidInput):
message = (_("Attachment circuit %(attachmentcircuit_id)s already "
"exists for tenant %(tenant_id)s"))
class ProviderEdgeNotFound(qexception.NotFound):
message = _("ProviderEdge %(provideredge_id)s could not be found")
RESOURCE_ATTRIBUTE_MAP = {
'mplsvpns': {
'id': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid': None},
'is_visible': True,
'primary_key': True},
'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'is_visible': True},
'vpn_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'is_visible': True},
'name': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'is_visible': True, 'default': ''},
'tunnel_options': {'allow_post': True, 'allow_put': False,
'convert_to': attr.convert_none_to_empty_dict,
'default': {},
'validate': {'type:dict_or_empty': {
'tunnel_type': {'type:values':
supported_tunnel_types,
'default': 'fullmesh',
'required': False},
'tunnel_backup': {'type:values':
supported_tunnel_backup,
'default': 'frr',
'required': False},
'qos': {'type:values': supported_qos,
'default': 'Gold',
'required': False},
'bandwidth': {'type:range': positive_int,
'default': '10',
'required': False}}},
'is_visible': True},
'status': {'allow_post': False, 'allow_put': False,
'is_visible': True},
'attachment_circuits': {'allow_post': True, 'allow_put': True,
'convert_to': attr.convert_none_to_empty_list,
'validate': {'type:uuid_list': None},
'default': None,
'is_visible': True}
},
'attachment_circuits': {
'id': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid': None},
'is_visible': True,
'primary_key': True},
'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'is_visible': True},
'name': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'is_visible': True, 'default': ''},
'network_type': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'is_visible': True, 'default': 'L2'},
'provider_edge_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:uuid': None},
'is_visible': True},
'networks': {'allow_post': True, 'allow_put': True,
'convert_to': attr.convert_none_to_empty_list,
'validate': {'type:uuid_list': None},
'default': None,
'is_visible': True}
},
'provider_edges': {
'id': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid': None},
'is_visible': True,
'primary_key': True},
'name': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'is_visible': True, 'default': ''},
'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'is_visible': True}
}
}
class Mplsvpn(extensions.ExtensionDescriptor):
@classmethod
def get_name(cls):
return "MPLS VPN service"
@classmethod
def get_alias(cls):
return "mplsvpn"
@classmethod
def get_description(cls):
return "Extension for MPLS VPN service"
@classmethod
def get_namespace(cls):
return "https://wiki.openstack.org/Neutron/MPLSVPN"
@classmethod
def get_updated(cls):
return "2014-03-17T10:00:00-00:00"
@classmethod
def get_resources(cls):
special_mappings = {}
plural_mappings = resource_helper.build_plural_mappings(
special_mappings, RESOURCE_ATTRIBUTE_MAP)
attr.PLURALS.update(plural_mappings)
return resource_helper.build_resource_info(plural_mappings,
RESOURCE_ATTRIBUTE_MAP,
constants.MPLSVPN,
register_quota=True,
translate_name=True)
@classmethod
def get_plugin_interface(cls):
return MPLSVPNPluginBase
def update_attributes_map(self, attributes):
super(Mplsvpn, self).update_attributes_map(
attributes, extension_attrs_map=RESOURCE_ATTRIBUTE_MAP)
def get_extended_resources(self, version):
if version == "2.0":
return RESOURCE_ATTRIBUTE_MAP
else:
return {}
@six.add_metaclass(abc.ABCMeta)
class MPLSVPNPluginBase(ServicePluginBase):
def get_plugin_name(self):
return constants.MPLSVPN
def get_plugin_type(self):
return constants.MPLSVPN
def get_plugin_description(self):
return 'MPLS VPN service plugin'
@abc.abstractmethod
def get_mplsvpns(self, context, filters=None, fields=None):
pass
@abc.abstractmethod
def get_mplsvpn(self, context, mplsvpn_id, fields=None):
pass
@abc.abstractmethod
def create_mplsvpn(self, context, mplsvpn):
pass
@abc.abstractmethod
def update_mplsvpn(self, context, mplsvpn_id, mplsvpn):
pass
@abc.abstractmethod
def delete_mplsvpn(self, context, mplsvpn_id):
pass
@abc.abstractmethod
def get_attachment_circuits(self, context, filters=None, fields=None):
pass
@abc.abstractmethod
def get_attachment_circuit(self, context, attachmentcircuit_id,
fields=None):
pass
@abc.abstractmethod
def create_attachment_circuit(self, context, attachmentcircuit):
pass
@abc.abstractmethod
def update_attachment_circuit(self, context, attachmentcircuit_id,
attachmentcircuit):
pass
@abc.abstractmethod
def delete_attachment_circuit(self, context, attachmentcircuit_id):
pass
@abc.abstractmethod
def get_provider_edges(self, context, filters=None, fields=None):
pass
@abc.abstractmethod
def get_provider_edge(self, context, provideredge_id, fields=None):
pass
@abc.abstractmethod
def create_provider_edge(self, context, provideredge):
pass
@abc.abstractmethod
def delete_provider_edge(self, context, provideredge_id):
pass | unknown | codeparrot/codeparrot-clean | ||
"""Keras implementation of SSD."""
import keras.backend as K
from keras.layers import Activation
from keras.layers import Conv2D
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import GlobalAveragePooling2D
from keras.layers import Input
from keras.layers import MaxPooling2D
from keras.layers import Reshape
from keras.layers import ZeroPadding2D
from keras.layers import concatenate
from keras.models import Model
from ssd_layers import Normalize
from ssd_layers import PriorBox
def SSD300(input_shape, num_classes=21):
"""SSD300 architecture.
# Arguments
input_shape: Shape of the input image,
expected to be either (300, 300, 3) or (3, 300, 300) (not tested).
num_classes: Number of classes including background.
# References
https://arxiv.org/abs/1512.02325
"""
input_layer = Input(shape=input_shape)
# Block 1
conv1_1 = Conv2D(64, (3, 3),
name='conv1_1',
padding='same',
activation='relu')(input_layer)
conv1_2 = Conv2D(64, (3, 3),
name='conv1_2',
padding='same',
activation='relu')(conv1_1)
pool1 = MaxPooling2D(name='pool1',
pool_size=(2, 2),
strides=(2, 2),
padding='same', )(conv1_2)
# Block 2
conv2_1 = Conv2D(128, (3, 3),
name='conv2_1',
padding='same',
activation='relu')(pool1)
conv2_2 = Conv2D(128, (3, 3),
name='conv2_2',
padding='same',
activation='relu')(conv2_1)
pool2 = MaxPooling2D(name='pool2',
pool_size=(2, 2),
strides=(2, 2),
padding='same')(conv2_2)
# Block 3
conv3_1 = Conv2D(256, (3, 3),
name='conv3_1',
padding='same',
activation='relu')(pool2)
conv3_2 = Conv2D(256, (3, 3),
name='conv3_2',
padding='same',
activation='relu')(conv3_1)
conv3_3 = Conv2D(256, (3, 3),
name='conv3_3',
padding='same',
activation='relu')(conv3_2)
pool3 = MaxPooling2D(name='pool3',
pool_size=(2, 2),
strides=(2, 2),
padding='same')(conv3_3)
# Block 4
conv4_1 = Conv2D(512, (3, 3),
name='conv4_1',
padding='same',
activation='relu')(pool3)
conv4_2 = Conv2D(512, (3, 3),
name='conv4_2',
padding='same',
activation='relu')(conv4_1)
conv4_3 = Conv2D(512, (3, 3),
name='conv4_3',
padding='same',
activation='relu')(conv4_2)
pool4 = MaxPooling2D(name='pool4',
pool_size=(2, 2),
strides=(2, 2),
padding='same')(conv4_3)
# Block 5
conv5_1 = Conv2D(512, (3, 3),
name='conv5_1',
padding='same',
activation='relu')(pool4)
conv5_2 = Conv2D(512, (3, 3),
name='conv5_2',
padding='same',
activation='relu')(conv5_1)
conv5_3 = Conv2D(512, (3, 3),
name='conv5_3',
padding='same',
activation='relu')(conv5_2)
pool5 = MaxPooling2D(name='pool5',
pool_size=(3, 3),
strides=(1, 1),
padding='same')(conv5_3)
# FC6
fc6 = Conv2D(1024, (3, 3),
name='fc6',
dilation_rate=(6, 6),
padding='same',
activation='relu'
)(pool5)
# x = Dropout(0.5, name='drop6')(x)
# FC7
fc7 = Conv2D(1024, (1, 1),
name='fc7',
padding='same',
activation='relu'
)(fc6)
# x = Dropout(0.5, name='drop7')(x)
# Block 6
conv6_1 = Conv2D(256, (1, 1),
name='conv6_1',
padding='same',
activation='relu')(fc7)
conv6_2 = Conv2D(512, (3, 3),
name='conv6_2',
strides=(2, 2),
padding='same',
activation='relu')(conv6_1)
# Block 7
conv7_1 = Conv2D(128, (1, 1),
name='conv7_1',
padding='same',
activation='relu')(conv6_2)
conv7_1z = ZeroPadding2D(name='conv7_1z')(conv7_1)
conv7_2 = Conv2D(256, (3, 3),
name='conv7_2',
padding='valid',
strides=(2, 2),
activation='relu')(conv7_1z)
# Block 8
conv8_1 = Conv2D(128, (1, 1),
name='conv8_1',
padding='same',
activation='relu')(conv7_2)
conv8_2 = Conv2D(256, (3, 3),
name='conv8_2',
padding='same',
strides=(2, 2),
activation='relu')(conv8_1)
# Last Pool
pool6 = GlobalAveragePooling2D(name='pool6')(conv8_2)
# Prediction from conv4_3
num_priors = 3
img_size = (input_shape[1], input_shape[0])
name = 'conv4_3_norm_mbox_conf'
if num_classes != 21:
name += '_{}'.format(num_classes)
conv4_3_norm = Normalize(20, name='conv4_3_norm')(conv4_3)
conv4_3_norm_mbox_loc = Conv2D(num_priors * 4, (3, 3),
name='conv4_3_norm_mbox_loc',
padding='same')(conv4_3_norm)
conv4_3_norm_mbox_loc_flat = Flatten(name='conv4_3_norm_mbox_loc_flat')(conv4_3_norm_mbox_loc)
conv4_3_norm_mbox_conf = Conv2D(num_priors * num_classes, (3, 3),
name=name,
padding='same')(conv4_3_norm)
conv4_3_norm_mbox_conf_flat = Flatten(name='conv4_3_norm_mbox_conf_flat')(conv4_3_norm_mbox_conf)
conv4_3_norm_mbox_priorbox = PriorBox(img_size, 30.0,
name='conv4_3_norm_mbox_priorbox',
aspect_ratios=[2],
variances=[0.1, 0.1, 0.2, 0.2])(conv4_3_norm)
# Prediction from fc7
num_priors = 6
name = 'fc7_mbox_conf'
if num_classes != 21:
name += '_{}'.format(num_classes)
fc7_mbox_conf = Conv2D(num_priors * num_classes, (3, 3),
padding='same',
name=name)(fc7)
fc7_mbox_conf_flat = Flatten(name='fc7_mbox_conf_flat')(fc7_mbox_conf)
fc7_mbox_loc = Conv2D(num_priors * 4, (3, 3),
name='fc7_mbox_loc',
padding='same')(fc7)
fc7_mbox_loc_flat = Flatten(name='fc7_mbox_loc_flat')(fc7_mbox_loc)
fc7_mbox_priorbox = PriorBox(img_size, 60.0,
name='fc7_mbox_priorbox',
max_size=114.0,
aspect_ratios=[2, 3],
variances=[0.1, 0.1, 0.2, 0.2]
)(fc7)
# Prediction from conv6_2
num_priors = 6
name = 'conv6_2_mbox_conf'
if num_classes != 21:
name += '_{}'.format(num_classes)
conv6_2_mbox_conf = Conv2D(num_priors * num_classes, (3, 3),
padding='same',
name=name)(conv6_2)
conv6_2_mbox_conf_flat = Flatten(name='conv6_2_mbox_conf_flat')(conv6_2_mbox_conf)
conv6_2_mbox_loc = Conv2D(num_priors * 4, (3, 3),
name='conv6_2_mbox_loc',
padding='same')(conv6_2)
conv6_2_mbox_loc_flat = Flatten(name='conv6_2_mbox_loc_flat')(conv6_2_mbox_loc)
conv6_2_mbox_priorbox = PriorBox(img_size, 114.0,
max_size=168.0,
aspect_ratios=[2, 3],
variances=[0.1, 0.1, 0.2, 0.2],
name='conv6_2_mbox_priorbox')(conv6_2)
# Prediction from conv7_2
num_priors = 6
name = 'conv7_2_mbox_conf'
if num_classes != 21:
name += '_{}'.format(num_classes)
conv7_2_mbox_conf = Conv2D(num_priors * num_classes, (3, 3),
padding='same',
name=name)(conv7_2)
conv7_2_mbox_conf_flat = Flatten(name='conv7_2_mbox_conf_flat')(conv7_2_mbox_conf)
conv7_2_mbox_loc = Conv2D(num_priors * 4, (3, 3),
padding='same',
name='conv7_2_mbox_loc')(conv7_2)
conv7_2_mbox_loc_flat = Flatten(name='conv7_2_mbox_loc_flat')(conv7_2_mbox_loc)
conv7_2_mbox_priorbox = PriorBox(img_size, 168.0,
max_size=222.0,
aspect_ratios=[2, 3],
variances=[0.1, 0.1, 0.2, 0.2],
name='conv7_2_mbox_priorbox')(conv7_2)
# Prediction from conv8_2
num_priors = 6
name = 'conv8_2_mbox_conf'
if num_classes != 21:
name += '_{}'.format(num_classes)
conv8_2_mbox_conf = Conv2D(num_priors * num_classes, (3, 3),
padding='same',
name=name)(conv8_2)
conv8_2_mbox_conf_flat = Flatten(name='conv8_2_mbox_conf_flat')(conv8_2_mbox_conf)
conv8_2_mbox_loc = Conv2D(num_priors * 4, (3, 3),
padding='same',
name='conv8_2_mbox_loc')(conv8_2)
conv8_2_mbox_loc_flat = Flatten(name='conv8_2_mbox_loc_flat')(conv8_2_mbox_loc)
conv8_2_mbox_priorbox = PriorBox(img_size, 222.0,
max_size=276.0,
aspect_ratios=[2, 3],
variances=[0.1, 0.1, 0.2, 0.2],
name='conv8_2_mbox_priorbox')(conv8_2)
# Prediction from pool6
num_priors = 6
name = 'pool6_mbox_conf_flat'
if num_classes != 21:
name += '_{}'.format(num_classes)
if K.image_dim_ordering() == 'tf':
target_shape = (1, 1, 256)
else:
target_shape = (256, 1, 1)
pool6_mbox_loc_flat = Dense(num_priors * 4, name='pool6_mbox_loc_flat')(pool6)
pool6_mbox_conf_flat = Dense(num_priors * num_classes, name=name)(pool6)
pool6_reshaped = Reshape(target_shape,
name='pool6_reshaped')(pool6)
pool6_mbox_priorbox = PriorBox(img_size, 276.0, max_size=330.0, aspect_ratios=[2, 3],
variances=[0.1, 0.1, 0.2, 0.2],
name='pool6_mbox_priorbox')(pool6_reshaped)
# Gather all predictions
mbox_loc = concatenate([conv4_3_norm_mbox_loc_flat,
fc7_mbox_loc_flat,
conv6_2_mbox_loc_flat,
conv7_2_mbox_loc_flat,
conv8_2_mbox_loc_flat,
pool6_mbox_loc_flat],
axis=1,
name='mbox_loc')
mbox_conf = concatenate([conv4_3_norm_mbox_conf_flat,
fc7_mbox_conf_flat,
conv6_2_mbox_conf_flat,
conv7_2_mbox_conf_flat,
conv8_2_mbox_conf_flat,
pool6_mbox_conf_flat],
axis=1,
name='mbox_conf')
mbox_priorbox = concatenate([conv4_3_norm_mbox_priorbox,
fc7_mbox_priorbox,
conv6_2_mbox_priorbox,
conv7_2_mbox_priorbox,
conv8_2_mbox_priorbox,
pool6_mbox_priorbox],
axis=1,
name='mbox_priorbox')
if hasattr(mbox_loc, '_keras_shape'):
num_boxes = mbox_loc._keras_shape[-1] // 4
else:
# Tensors expose no `int_shape` attribute (K.int_shape is a function), so
# the original `elif hasattr(mbox_loc, 'int_shape')` check never matched
# and num_boxes could be left undefined; fall back unconditionally.
num_boxes = K.int_shape(mbox_loc)[-1] // 4
mbox_loc = Reshape((num_boxes, 4),
name='mbox_loc_final')(mbox_loc)
mbox_conf = Reshape((num_boxes, num_classes),
name='mbox_conf_logits')(mbox_conf)
mbox_conf = Activation('softmax',
name='mbox_conf_final')(mbox_conf)
predictions = concatenate([mbox_loc,
mbox_conf,
mbox_priorbox],
axis=2,
name='predictions')
model = Model(inputs=input_layer, outputs=predictions)
return model | unknown | codeparrot/codeparrot-clean | ||
from django.db import migrations, models
Q = models.Q
def set_dept_code(apps, schema_editor):
Event = apps.get_model("proposal", "Event")
for event in Event.objects.all():
if "Zoning Board" in event.title:
event.dept = "zba"
elif "Planning Board" in event.title:
event.dept = "pb"
else:
continue
others = Event.objects.filter(Q(pk__gt=event.pk, date=event.date,
region_name=event.region_name) &
(Q(dept=event.dept) |
Q(title=event.title)))
if others.exists():
for other in others:
for p in event.proposals.all():
other.proposals.add(p)
event.agenda_url = event.agenda_url or other.agenda_url
event.minutes = event.minutes or other.minutes
others.delete()
def do_nothing(_, __):
pass
class Migration(migrations.Migration):
dependencies = [
('proposal', '0029_attribute_admin_properties'),
]
operations = [
migrations.AddField(
model_name='event',
name='dept',
field=models.CharField(null=True, help_text='Department code, used to ensure event uniqueness', max_length=20),
),
migrations.AlterField(
model_name='event',
name='title',
field=models.CharField(max_length=256),
),
migrations.RunPython(set_dept_code, do_nothing)
] | unknown | codeparrot/codeparrot-clean | ||
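The merge step in `set_dept_code` above — the surviving event absorbs each duplicate's proposals and keeps the first non-empty `agenda_url`/`minutes` — can be sketched without Django. The dicts below are hypothetical stand-ins for `Event` model instances, not actual ORM objects:

```python
# Standalone sketch of the duplicate-merge strategy from set_dept_code.
# The dicts are illustrative stand-ins for Event rows.
def merge_events(survivor, duplicates):
    """Fold each duplicate's data into the surviving event."""
    for dup in duplicates:
        # Union the many-to-many proposal sets.
        survivor["proposals"] |= dup["proposals"]
        # Keep the first truthy value, mirroring `a = a or b` in the migration.
        survivor["agenda_url"] = survivor["agenda_url"] or dup["agenda_url"]
        survivor["minutes"] = survivor["minutes"] or dup["minutes"]
    return survivor

event = {"proposals": {1, 2}, "agenda_url": "", "minutes": None}
dup = {"proposals": {2, 3}, "agenda_url": "http://example.com/agenda", "minutes": None}
merged = merge_events(event, [dup])
print(merged["proposals"])   # {1, 2, 3}
print(merged["agenda_url"])  # http://example.com/agenda
```

The `or`-based fallback means an empty string or `None` in the survivor is replaced, while any existing value is kept, exactly as in the migration's `event.agenda_url = event.agenda_url or other.agenda_url`.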
"""Shared UI utility function."""
import hangups
def get_conv_name(conv, truncate=False, show_unread=False):
"""Return a readable name for a conversation.
If the conversation has a custom name, use the custom name. Otherwise, for
one-to-one conversations, the name is the full name of the other user. For
group conversations, the name is a comma-separated list of first names. If
the group conversation is empty, the name is "Empty Conversation".
If truncate is true, only show up to two names in a group conversation.
If show_unread is True, if there are unread chat messages, show the number
of unread chat messages in parentheses after the conversation name.
"""
num_unread = len([conv_event for conv_event in conv.unread_events if
isinstance(conv_event, hangups.ChatMessageEvent) and
not conv.get_user(conv_event.user_id).is_self])
if show_unread and num_unread > 0:
postfix = ' ({})'.format(num_unread)
else:
postfix = ''
if conv.name is not None:
return conv.name + postfix
else:
participants = sorted(
(user for user in conv.users if not user.is_self),
key=lambda user: user.id_
)
names = [user.first_name for user in participants]
if len(participants) == 0:
return "Empty Conversation" + postfix
if len(participants) == 1:
return participants[0].full_name + postfix
elif truncate and len(participants) > 2:
return (', '.join(names[:2] + ['+{}'.format(len(names) - 2)]) +
postfix)
else:
return ', '.join(names) + postfix | unknown | codeparrot/codeparrot-clean | ||
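The naming rules in `get_conv_name`'s docstring can be exercised without a hangups connection. `FakeUser` and `name_for` below are illustrative stand-ins, not part of hangups; they reproduce only the participant-naming branches (custom names and unread counts are omitted):

```python
# Hypothetical stand-ins demonstrating get_conv_name's participant rules:
# one participant -> full name; truncated groups -> two first names plus "+N".
class FakeUser:
    def __init__(self, id_, first_name, full_name):
        self.id_ = id_
        self.first_name = first_name
        self.full_name = full_name

def name_for(participants, truncate=False):
    # Sort by user id, as get_conv_name does, so the name is stable.
    participants = sorted(participants, key=lambda user: user.id_)
    names = [user.first_name for user in participants]
    if len(participants) == 0:
        return "Empty Conversation"
    if len(participants) == 1:
        return participants[0].full_name
    if truncate and len(participants) > 2:
        return ', '.join(names[:2] + ['+{}'.format(len(names) - 2)])
    return ', '.join(names)

alice = FakeUser(1, 'Alice', 'Alice Jones')
bob = FakeUser(2, 'Bob', 'Bob Smith')
carol = FakeUser(3, 'Carol', 'Carol Lee')
print(name_for([alice]))                            # Alice Jones
print(name_for([alice, bob, carol], truncate=True)) # Alice, Bob, +1
```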
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright (c) 2011 Citrix Systems, Inc.
# Copyright 2011 OpenStack LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""VIF drivers for VMWare."""
from nova import exception
from nova import flags
from nova import log as logging
from nova.virt import vif
from nova.virt.vmwareapi import network_utils
LOG = logging.getLogger(__name__)
FLAGS = flags.FLAGS
FLAGS.set_default('vmwareapi_vlan_interface', 'vmnic0')
class VMWareVlanBridgeDriver(vif.VIFDriver):
"""VIF Driver to setup bridge/VLAN networking using VMWare API."""
def plug(self, instance, network, mapping):
"""Plug the VIF to specified instance using information passed.
Currently we are plugging the VIF(s) during instance creation itself.
We can use this method when we add support to add additional NIC to
an existing instance."""
pass
def ensure_vlan_bridge(self, session, network):
"""Create a vlan and bridge unless they already exist."""
vlan_num = network['vlan']
bridge = network['bridge']
vlan_interface = FLAGS.vmwareapi_vlan_interface
# Check if the vlan_interface physical network adapter exists on the
# host.
if not network_utils.check_if_vlan_interface_exists(session,
vlan_interface):
raise exception.NetworkAdapterNotFound(adapter=vlan_interface)
# Get the vSwitch associated with the Physical Adapter
vswitch_associated = network_utils.get_vswitch_for_vlan_interface(
session, vlan_interface)
if vswitch_associated is None:
raise exception.SwitchNotFoundForNetworkAdapter(
adapter=vlan_interface)
# Check whether the bridge already exists and retrieve the ref of the
# network whose name_label is "bridge"
network_ref = network_utils.get_network_with_the_name(session, bridge)
if network_ref is None:
# Create a port group on the vSwitch associated with the
# vlan_interface corresponding physical network adapter on the ESX
# host.
network_utils.create_port_group(session, bridge,
vswitch_associated, vlan_num)
else:
# Get the vlan id and vswitch corresponding to the port group
_get_pg_info = network_utils.get_vlanid_and_vswitch_for_portgroup
pg_vlanid, pg_vswitch = _get_pg_info(session, bridge)
# Check if the vswitch associated is proper
if pg_vswitch != vswitch_associated:
raise exception.InvalidVLANPortGroup(
bridge=bridge, expected=vswitch_associated,
actual=pg_vswitch)
# Check if the vlan id is proper for the port group
if pg_vlanid != vlan_num:
raise exception.InvalidVLANTag(bridge=bridge, tag=vlan_num,
pgroup=pg_vlanid)
def unplug(self, instance, network, mapping):
"""Cleanup operations like deleting port group if no instance
is associated with it."""
pass | unknown | codeparrot/codeparrot-clean | ||
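The control flow of `ensure_vlan_bridge` is an idempotent "ensure" pattern: look the resource up, create it only when missing, and validate its attributes when it already exists. The framework-free sketch below (no VMware API; `existing` is a hypothetical in-memory registry standing in for the ESX host's port groups) isolates that shape:

```python
# Minimal sketch of the look-up / create-if-missing / validate-if-present
# pattern used by ensure_vlan_bridge above.
def ensure_port_group(existing, bridge, vswitch, vlan):
    pg = existing.get(bridge)
    if pg is None:
        # Missing: create it with the expected attributes.
        existing[bridge] = {"vswitch": vswitch, "vlan": vlan}
        return "created"
    # Present: validate rather than re-create, raising on mismatches,
    # analogous to InvalidVLANPortGroup / InvalidVLANTag in the driver.
    if pg["vswitch"] != vswitch:
        raise ValueError("port group %s is on vSwitch %s, expected %s"
                         % (bridge, pg["vswitch"], vswitch))
    if pg["vlan"] != vlan:
        raise ValueError("port group %s has VLAN %s, expected %s"
                         % (bridge, pg["vlan"], vlan))
    return "exists"

registry = {}
print(ensure_port_group(registry, "br100", "vSwitch0", 100))  # created
print(ensure_port_group(registry, "br100", "vSwitch0", 100))  # exists
```

Running the ensure twice is safe, which is the point: callers never need to know whether the port group existed beforehand.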
<div class="transfer-state-container">
<div class="header">
<h2>
<mat-icon>swap_horiz</mat-icon>
Transfer State
</h2>
</div>
@if (isLoading()) {
<div class="loading">
<mat-icon class="spinning">hourglass_empty</mat-icon>
Loading transfer state...
</div>
} @else if (error()) {
<div class="error-card">
<div class="card-header">
<mat-icon>error</mat-icon>
<h3>No Transfer State Found</h3>
</div>
<div class="card-content">
<p>{{ error() }}</p>
<p>
Transfer state is used in Server-Side Rendering (SSR) to pass data from the server to the
client. If you're expecting transfer state data, make sure:
</p>
<ul>
<li>The application is using SSR</li>
<li>Transfer state is being used in your components</li>
<li>You're inspecting the initial page load (not after client-side navigation)</li>
</ul>
</div>
</div>
} @else if (!hasData()) {
<div class="empty-card">
<div class="card-header">
<mat-icon>info</mat-icon>
<h3>Transfer State is Empty</h3>
</div>
<div class="card-content">
<p>No transfer state data found on this page.</p>
<p>This could be normal if the page doesn't use SSR or doesn't transfer any state.</p>
</div>
</div>
} @else {
<div class="transfer-state-content">
<div class="summary">
<div class="summary-card">
<div class="summary-stats">
<div class="stat">
<span class="stat-value">{{ transferStateItems().length }}</span>
<span class="stat-label">Keys</span>
</div>
<div class="stat">
<span class="stat-value">{{ totalSize() }}</span>
<span class="stat-label">Total Size</span>
</div>
</div>
</div>
</div>
<div class="table-container">
<table mat-table [dataSource]="transferStateItems()" class="transfer-state-table">
<!-- Key Column -->
<ng-container matColumnDef="key">
<th mat-header-cell *matHeaderCellDef>Key</th>
<td mat-cell *matCellDef="let item" class="key-cell">
<code>{{ item.key }}</code>
</td>
</ng-container>
<!-- Type Column -->
<ng-container matColumnDef="type">
<th mat-header-cell *matHeaderCellDef>Type</th>
<td mat-cell *matCellDef="let item">
<span class="type-badge type-{{ item.type }}">{{ item.type }}</span>
</td>
</ng-container>
<!-- Size Column -->
<ng-container matColumnDef="size">
<th mat-header-cell *matHeaderCellDef>
Size
<mat-icon class="info-icon" matTooltip="Uncompressed size">help_outline</mat-icon>
</th>
<td mat-cell *matCellDef="let item">{{ item.size }}</td>
</ng-container>
<!-- Value Column -->
<ng-container matColumnDef="value">
<th mat-header-cell *matHeaderCellDef>Value</th>
<td mat-cell *matCellDef="let item" class="value-cell">
<div class="value-container">
<pre
#valuePreview
class="value-preview"
[class.is-expanded]="item.isExpanded"
><code>{{ getFormattedValue(item.value) }}</code></pre>
<div class="action-buttons">
@if (isValueLong(valuePreview, item.isExpanded)) {
<button
ng-button
btnType="icon"
size="compact"
class="expand-button"
(click)="toggleExpanded(item)"
[matTooltip]="item.isExpanded ? 'Collapse value' : 'Expand value'"
>
<mat-icon>{{ item.isExpanded ? 'expand_less' : 'expand_more' }}</mat-icon>
</button>
}
<button
ng-button
btnType="icon"
size="compact"
class="copy-button"
(click)="copyToClipboard(item)"
[matTooltip]="item.isCopied ? 'Copied!' : 'Copy value to clipboard'"
>
<mat-icon>{{ item.isCopied ? 'check' : 'content_copy' }}</mat-icon>
</button>
</div>
</div>
</td>
</ng-container>
<tr mat-header-row *matHeaderRowDef="displayedColumns"></tr>
<tr mat-row *matRowDef="let row; columns: displayedColumns"></tr>
</table>
</div>
</div>
}
</div> | html | github | https://github.com/angular/angular | devtools/projects/ng-devtools/src/lib/devtools-tabs/transfer-state/transfer-state.component.html |
//// [tests/cases/compiler/aliasWithInterfaceExportAssignmentUsedInVarInitializer.ts] ////
//// [aliasWithInterfaceExportAssignmentUsedInVarInitializer_0.ts]
interface c {
q3: number;
}
export = c;
//// [aliasWithInterfaceExportAssignmentUsedInVarInitializer_1.ts]
import moduleA = require("./aliasWithInterfaceExportAssignmentUsedInVarInitializer_0");
var d = b.q3;
//// [aliasWithInterfaceExportAssignmentUsedInVarInitializer_0.js]
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
//// [aliasWithInterfaceExportAssignmentUsedInVarInitializer_1.js]
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
var d = b.q3; | javascript | github | https://github.com/microsoft/TypeScript | tests/baselines/reference/aliasWithInterfaceExportAssignmentUsedInVarInitializer.js |
# Copyright 2018 VEXXHOST, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
ALPHA = 'alpha'
ARMV6 = 'armv6'
ARMV7 = 'armv7l'
ARMV7B = 'armv7b'
AARCH64 = 'aarch64'
CRIS = 'cris'
I686 = 'i686'
IA64 = 'ia64'
LM32 = 'lm32'
M68K = 'm68k'
MICROBLAZE = 'microblaze'
MICROBLAZEEL = 'microblazeel'
MIPS = 'mips'
MIPSEL = 'mipsel'
MIPS64 = 'mips64'
MIPS64EL = 'mips64el'
OPENRISC = 'openrisc'
PARISC = 'parisc'
PARISC64 = 'parisc64'
PPC = 'ppc'
PPCLE = 'ppcle'
PPC64 = 'ppc64'
PPC64LE = 'ppc64le'
PPCEMB = 'ppcemb'
S390 = 's390'
S390X = 's390x'
SH4 = 'sh4'
SH4EB = 'sh4eb'
SPARC = 'sparc'
SPARC64 = 'sparc64'
UNICORE32 = 'unicore32'
X86_64 = 'x86_64'
XTENSA = 'xtensa'
XTENSAEB = 'xtensaeb'
ALL = (
ALPHA, ARMV6, ARMV7, ARMV7B,
AARCH64, CRIS, I686, IA64, LM32,
M68K, MICROBLAZE, MICROBLAZEEL, MIPS, MIPSEL,
MIPS64, MIPS64EL, OPENRISC, PARISC, PARISC64,
PPC, PPCLE, PPC64, PPC64LE, PPCEMB,
S390, S390X, SH4, SH4EB, SPARC,
SPARC64, UNICORE32, X86_64, XTENSA, XTENSAEB,
) | unknown | codeparrot/codeparrot-clean | ||
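With every architecture name collected in `ALL`, a small validation helper is a natural companion. `canonicalize` below is a hypothetical sketch (similar in spirit to helpers found elsewhere in nova, but not part of this file); `ALL` is abridged here so the sketch is self-contained:

```python
# Hypothetical validation helper built on the ALL tuple above (abridged):
# normalizes case and rejects names that are not known architectures.
ALL = ("alpha", "armv7l", "aarch64", "ppc64le", "s390x", "x86_64")

def canonicalize(name):
    """Return the canonical (lower-case) form of an architecture name."""
    canon = name.lower()
    if canon not in ALL:
        raise ValueError("unknown architecture: %r" % name)
    return canon

print(canonicalize("X86_64"))  # x86_64
```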
import os, sys, mock, gc
from os.path import normpath
import mock_urwid
from libmproxy import console
from libmproxy.console import common
import tutils
class TestConsoleState:
def test_flow(self):
"""
normal flow:
connect -> request -> response
"""
c = console.ConsoleState()
f = self._add_request(c)
assert f in c.flows
assert c.get_focus() == (f, 0)
def test_focus(self):
"""
normal flow:
connect -> request -> response
"""
c = console.ConsoleState()
f = self._add_request(c)
assert c.get_focus() == (f, 0)
assert c.get_from_pos(0) == (f, 0)
assert c.get_from_pos(1) == (None, None)
assert c.get_next(0) == (None, None)
f2 = self._add_request(c)
assert c.get_focus() == (f, 0)
assert c.get_next(0) == (f2, 1)
assert c.get_prev(1) == (f, 0)
assert c.get_next(1) == (None, None)
c.set_focus(0)
assert c.get_focus() == (f, 0)
c.set_focus(-1)
assert c.get_focus() == (f, 0)
c.set_focus(2)
assert c.get_focus() == (f2, 1)
c.delete_flow(f2)
assert c.get_focus() == (f, 0)
c.delete_flow(f)
assert c.get_focus() == (None, None)
def _add_request(self, state):
f = tutils.tflow()
return state.add_flow(f)
def _add_response(self, state):
f = self._add_request(state)
f.response = tutils.tresp()
state.update_flow(f)
def test_add_response(self):
c = console.ConsoleState()
f = self._add_request(c)
f.response = tutils.tresp()
c.focus = None
c.update_flow(f)
def test_focus_view(self):
c = console.ConsoleState()
self._add_request(c)
self._add_response(c)
self._add_request(c)
self._add_response(c)
self._add_request(c)
self._add_response(c)
assert not c.set_limit("~s")
assert len(c.view) == 3
assert c.focus == 0
def test_settings(self):
c = console.ConsoleState()
f = self._add_request(c)
c.add_flow_setting(f, "foo", "bar")
assert c.get_flow_setting(f, "foo") == "bar"
assert c.get_flow_setting(f, "oink") == None
assert c.get_flow_setting(f, "oink", "foo") == "foo"
assert len(c.flowsettings) == 1
c.delete_flow(f)
del f
gc.collect()
assert len(c.flowsettings) == 0
def test_format_keyvals():
assert common.format_keyvals(
[
("aa", "bb"),
None,
("cc", "dd"),
(None, "dd"),
(None, "dd"),
]
)
class TestPathCompleter:
def test_lookup_construction(self):
c = console._PathCompleter()
cd = tutils.test_data.path("completion")
ca = os.path.join(cd, "a")
assert c.complete(ca).endswith(normpath("/completion/aaa"))
assert c.complete(ca).endswith(normpath("/completion/aab"))
c.reset()
ca = os.path.join(cd, "aaa")
assert c.complete(ca).endswith(normpath("/completion/aaa"))
assert c.complete(ca).endswith(normpath("/completion/aaa"))
c.reset()
assert c.complete(cd).endswith(normpath("/completion/aaa"))
def test_completion(self):
c = console._PathCompleter(True)
c.reset()
c.lookup = [
("a", "x/a"),
("aa", "x/aa"),
]
assert c.complete("a") == "a"
assert c.final == "x/a"
assert c.complete("a") == "aa"
assert c.complete("a") == "a"
c = console._PathCompleter(True)
r = c.complete("l")
assert c.final.endswith(r)
c.reset()
assert c.complete("/nonexistent") == "/nonexistent"
assert c.final == "/nonexistent"
c.reset()
assert c.complete("~") != "~"
c.reset()
s = "thisisatotallynonexistantpathforsure"
assert c.complete(s) == s
assert c.final == s
def test_options():
assert console.Options(kill=True) | unknown | codeparrot/codeparrot-clean | ||
# Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import unittest
from telemetry import decorators
from telemetry.core import util
from telemetry.unittest import run_tests
class MockPossibleBrowser(object):
def __init__(self, browser_type, os_name, os_version_name,
supports_tab_control):
self.browser_type = browser_type
self.platform = MockPlatform(os_name, os_version_name)
self.supports_tab_control = supports_tab_control
class MockPlatform(object):
def __init__(self, os_name, os_version_name):
self.os_name = os_name
self.os_version_name = os_version_name
def GetOSName(self):
return self.os_name
def GetOSVersionName(self):
return self.os_version_name
class RunTestsUnitTest(unittest.TestCase):
def setUp(self):
self.suite = unittest.TestSuite()
self.suite.addTests(run_tests.Discover(
util.GetTelemetryDir(), util.GetTelemetryDir(), 'disabled_cases.py'))
def _GetEnabledTests(self, browser_type, os_name, os_version_name,
supports_tab_control):
# pylint: disable=W0212
def MockPredicate(test):
method = getattr(test, test._testMethodName)
return decorators.IsEnabled(method, MockPossibleBrowser(
browser_type, os_name, os_version_name, supports_tab_control))
enabled_tests = set()
for i in run_tests.FilterSuite(self.suite, MockPredicate)._tests:
for j in i:
for k in j:
enabled_tests.add(k._testMethodName)
return enabled_tests
def testSystemMacMavericks(self):
self.assertEquals(
set(['testAllEnabled',
'testMacOnly',
'testMavericksOnly',
'testNoChromeOS',
'testNoWinLinux',
'testSystemOnly',
'testHasTabs']),
self._GetEnabledTests('system', 'mac', 'mavericks', True))
def testSystemMacLion(self):
self.assertEquals(
set(['testAllEnabled',
'testMacOnly',
'testNoChromeOS',
'testNoMavericks',
'testNoWinLinux',
'testSystemOnly',
'testHasTabs']),
self._GetEnabledTests('system', 'mac', 'lion', True))
def testCrosGuestChromeOS(self):
self.assertEquals(
set(['testAllEnabled',
'testChromeOSOnly',
'testNoMac',
'testNoMavericks',
'testNoSystem',
'testNoWinLinux',
'testHasTabs']),
self._GetEnabledTests('cros-guest', 'chromeos', '', True))
def testCanaryWindowsWin7(self):
self.assertEquals(
set(['testAllEnabled',
'testNoChromeOS',
'testNoMac',
'testNoMavericks',
'testNoSystem',
'testWinOrLinuxOnly',
'testHasTabs']),
self._GetEnabledTests('canary', 'win', 'win7', True))
def testDoesntHaveTabs(self):
self.assertEquals(
set(['testAllEnabled',
'testNoChromeOS',
'testNoMac',
'testNoMavericks',
'testNoSystem',
'testWinOrLinuxOnly']),
self._GetEnabledTests('canary', 'win', 'win7', False))
| unknown | codeparrot/codeparrot-clean | ||
import datetime
from email import utils
import test.support
import time
import unittest
from test.support import cpython_only
from test.support.import_helper import ensure_lazy_imports
class TestImportTime(unittest.TestCase):
@cpython_only
def test_lazy_import(self):
ensure_lazy_imports("email.utils", {"random", "socket"})
class DateTimeTests(unittest.TestCase):
datestring = 'Sun, 23 Sep 2001 20:10:55'
dateargs = (2001, 9, 23, 20, 10, 55)
offsetstring = ' -0700'
utcoffset = datetime.timedelta(hours=-7)
tz = datetime.timezone(utcoffset)
naive_dt = datetime.datetime(*dateargs)
aware_dt = datetime.datetime(*dateargs, tzinfo=tz)
def test_naive_datetime(self):
self.assertEqual(utils.format_datetime(self.naive_dt),
self.datestring + ' -0000')
def test_aware_datetime(self):
self.assertEqual(utils.format_datetime(self.aware_dt),
self.datestring + self.offsetstring)
def test_usegmt(self):
utc_dt = datetime.datetime(*self.dateargs,
tzinfo=datetime.timezone.utc)
self.assertEqual(utils.format_datetime(utc_dt, usegmt=True),
self.datestring + ' GMT')
def test_usegmt_with_naive_datetime_raises(self):
with self.assertRaises(ValueError):
utils.format_datetime(self.naive_dt, usegmt=True)
def test_usegmt_with_non_utc_datetime_raises(self):
with self.assertRaises(ValueError):
utils.format_datetime(self.aware_dt, usegmt=True)
def test_parsedate_to_datetime(self):
self.assertEqual(
utils.parsedate_to_datetime(self.datestring + self.offsetstring),
self.aware_dt)
def test_parsedate_to_datetime_naive(self):
self.assertEqual(
utils.parsedate_to_datetime(self.datestring + ' -0000'),
self.naive_dt)
def test_parsedate_to_datetime_with_invalid_raises_valueerror(self):
# See also test_parsedate_returns_None_for_invalid_strings in test_email.
invalid_dates = [
'',
' ',
'0',
'A Complete Waste of Time',
'Wed, 3 Apr 2002 12.34.56.78+0800',
'Tue, 06 Jun 2017 27:39:33 +0600',
'Tue, 06 Jun 2017 07:39:33 +2600',
'Tue, 06 Jun 2017 27:39:33',
'17 June , 2022',
'Friday, -Nov-82 16:14:55 EST',
'Friday, Nov--82 16:14:55 EST',
'Friday, 19-Nov- 16:14:55 EST',
]
for dtstr in invalid_dates:
with self.subTest(dtstr=dtstr):
self.assertRaises(ValueError, utils.parsedate_to_datetime, dtstr)
class LocaltimeTests(unittest.TestCase):
def test_localtime_is_tz_aware_daylight_true(self):
test.support.patch(self, time, 'daylight', True)
t = utils.localtime()
self.assertIsNotNone(t.tzinfo)
def test_localtime_is_tz_aware_daylight_false(self):
test.support.patch(self, time, 'daylight', False)
t = utils.localtime()
self.assertIsNotNone(t.tzinfo)
def test_localtime_daylight_true_dst_false(self):
test.support.patch(self, time, 'daylight', True)
t0 = datetime.datetime(2012, 3, 12, 1, 1)
t1 = utils.localtime(t0)
t2 = utils.localtime(t1)
self.assertEqual(t1, t2)
def test_localtime_daylight_false_dst_false(self):
test.support.patch(self, time, 'daylight', False)
t0 = datetime.datetime(2012, 3, 12, 1, 1)
t1 = utils.localtime(t0)
t2 = utils.localtime(t1)
self.assertEqual(t1, t2)
@test.support.run_with_tz('Europe/Minsk')
def test_localtime_daylight_true_dst_true(self):
test.support.patch(self, time, 'daylight', True)
t0 = datetime.datetime(2012, 3, 12, 1, 1)
t1 = utils.localtime(t0)
t2 = utils.localtime(t1)
self.assertEqual(t1, t2)
@test.support.run_with_tz('Europe/Minsk')
def test_localtime_daylight_false_dst_true(self):
test.support.patch(self, time, 'daylight', False)
t0 = datetime.datetime(2012, 3, 12, 1, 1)
t1 = utils.localtime(t0)
t2 = utils.localtime(t1)
self.assertEqual(t1, t2)
@test.support.run_with_tz('EST+05EDT,M3.2.0,M11.1.0')
def test_localtime_epoch_utc_daylight_true(self):
test.support.patch(self, time, 'daylight', True)
t0 = datetime.datetime(1990, 1, 1, tzinfo = datetime.timezone.utc)
t1 = utils.localtime(t0)
t2 = t0 - datetime.timedelta(hours=5)
t2 = t2.replace(tzinfo = datetime.timezone(datetime.timedelta(hours=-5)))
self.assertEqual(t1, t2)
@test.support.run_with_tz('EST+05EDT,M3.2.0,M11.1.0')
def test_localtime_epoch_utc_daylight_false(self):
test.support.patch(self, time, 'daylight', False)
t0 = datetime.datetime(1990, 1, 1, tzinfo = datetime.timezone.utc)
t1 = utils.localtime(t0)
t2 = t0 - datetime.timedelta(hours=5)
t2 = t2.replace(tzinfo = datetime.timezone(datetime.timedelta(hours=-5)))
self.assertEqual(t1, t2)
def test_localtime_epoch_notz_daylight_true(self):
test.support.patch(self, time, 'daylight', True)
t0 = datetime.datetime(1990, 1, 1)
t1 = utils.localtime(t0)
t2 = utils.localtime(t0.replace(tzinfo=None))
self.assertEqual(t1, t2)
def test_localtime_epoch_notz_daylight_false(self):
test.support.patch(self, time, 'daylight', False)
t0 = datetime.datetime(1990, 1, 1)
t1 = utils.localtime(t0)
t2 = utils.localtime(t0.replace(tzinfo=None))
self.assertEqual(t1, t2)
@test.support.run_with_tz('Europe/Kyiv')
def test_variable_tzname(self):
t0 = datetime.datetime(1984, 1, 1, tzinfo=datetime.timezone.utc)
t1 = utils.localtime(t0)
if t1.tzname() in ('Europe', 'UTC'):
self.skipTest("Can't find a Kyiv timezone database")
self.assertEqual(t1.tzname(), 'MSK')
t0 = datetime.datetime(1994, 1, 1, tzinfo=datetime.timezone.utc)
t1 = utils.localtime(t0)
self.assertEqual(t1.tzname(), 'EET')
# Issue #24836: The timezone files are out of date (pre 2011k)
# on Mac OS X Snow Leopard.
@test.support.requires_mac_ver(10, 7)
class FormatDateTests(unittest.TestCase):
@test.support.run_with_tz('Europe/Minsk')
def test_formatdate(self):
timeval = time.mktime((2011, 12, 1, 18, 0, 0, 4, 335, 0))
string = utils.formatdate(timeval, localtime=False, usegmt=False)
self.assertEqual(string, 'Thu, 01 Dec 2011 15:00:00 -0000')
string = utils.formatdate(timeval, localtime=False, usegmt=True)
self.assertEqual(string, 'Thu, 01 Dec 2011 15:00:00 GMT')
@test.support.run_with_tz('Europe/Minsk')
def test_formatdate_with_localtime(self):
timeval = time.mktime((2011, 1, 1, 18, 0, 0, 6, 1, 0))
string = utils.formatdate(timeval, localtime=True)
self.assertEqual(string, 'Sat, 01 Jan 2011 18:00:00 +0200')
# Minsk moved from +0200 (with DST) to +0300 (without DST) in 2011
timeval = time.mktime((2011, 12, 1, 18, 0, 0, 4, 335, 0))
string = utils.formatdate(timeval, localtime=True)
self.assertEqual(string, 'Thu, 01 Dec 2011 18:00:00 +0300')
if __name__ == '__main__':
unittest.main()
| python | github | https://github.com/python/cpython | Lib/test/test_email/test_utils.py |
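
The format/parse round trip exercised by `DateTimeTests` can be reproduced standalone with the stdlib; the timestamp below reuses the same fixture values as the tests above:

```python
import datetime
from email import utils

# Same fixture as DateTimeTests above: 2001-09-23 20:10:55 at UTC-7.
tz = datetime.timezone(datetime.timedelta(hours=-7))
aware = datetime.datetime(2001, 9, 23, 20, 10, 55, tzinfo=tz)

# format_datetime renders an RFC 2822 date string, and
# parsedate_to_datetime inverts it exactly for aware datetimes.
formatted = utils.format_datetime(aware)
roundtrip = utils.parsedate_to_datetime(formatted)
print(formatted)  # Sun, 23 Sep 2001 20:10:55 -0700
```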
from BinaryTreeExceptions import *
class Node(object):
def __init__(self, element, parent=None):
self.element = element
self.parent = parent
self.left = None
self.right = None
def __contains__(self, other):
if other == self.element:
return True
elif other < self.element and self.left is not None:
return other in self.left
elif other > self.element and self.right is not None:
return other in self.right
else:
return False
def isRoot(self):
return self.parent is None
def isLeaf(self):
return self.left is None and self.right is None
def isInnerNode(self):
return not self.isRoot() and not self.isLeaf()
def height(self, height=-1):
height += 1
if self.left is None:
yield height
else:
yield max(self.left.height(height))
if self.right is None:
yield height
else:
yield max(self.right.height(height))
def levelOf(self, element, level=0):
if element == self.element:
return level
elif element < self.element and self.left is not None:
return self.left.levelOf(element, level+1)
elif element > self.element and self.right is not None:
return self.right.levelOf(element, level+1)
else:
raise NoSuchElement(element)
def isAncestorOf(self, targetAncestor, targetDescendant, ancestorSeen=False):
if targetAncestor == self.element:
ancestorSeen = True
if targetDescendant == self.element:
#===================================================================
# This covers the case of checking duplicate values within the tree
# which can result in similar values having an ancestor/descendant
# relationship. But ONLY if there actually are duplicates.
# If the programmer does isAncestorOf(5,5) and there is only one five,
# then the answer is - no, five is not its own ancestor/descendant.
#===================================================================
if targetAncestor == targetDescendant:
if self.right is not None:
return self.right.element == targetDescendant
else:
return False
else:
return ancestorSeen
elif targetDescendant < self.element and self.left is not None:
return self.left.isAncestorOf(targetAncestor, targetDescendant, ancestorSeen)
elif targetDescendant > self.element and self.right is not None:
return self.right.isAncestorOf(targetAncestor, targetDescendant, ancestorSeen)
else:
raise NoSuchElement(targetDescendant)
def descendants(self, element):
'''
@return tuple of the target element's descendants.
'''
if element == self.element:
return tuple([descendant for descendant in self._buildDescendants()])
elif element < self.element and self.left is not None:
return self.left.descendants(element)
elif element > self.element and self.right is not None:
return self.right.descendants(element)
else:
raise NoSuchElement(element)
def _buildDescendants(self):
'''
@return Generator of this node's descendants traversed in order.
'''
if self.left:
for element in self.left.inOrder():
yield element
if self.right:
for element in self.right.inOrder():
yield element
def ancestors(self, element):
'''
@param element: Target element whose ancestors we want.
@return Generator of the target element's ancestors, from the root down.
'''
if element == self.element:
pass
elif element < self.element and self.left is not None:
yield self.element
for ancestor in self.left.ancestors(element):
yield ancestor
elif element > self.element and self.right is not None:
yield self.element
for ancestor in self.right.ancestors(element):
yield ancestor
else:
raise NoSuchElement(element)
def min(self):
if self.left is not None:
return self.left.min()
else:
return self.element
def max(self):
if self.right is not None:
return self.right.max()
else:
return self.element
def insert(self, element):
'''
@param element: Element to insert into the tree.
'''
if element >= self.element:
if self.right is not None:
self.right.insert(element)
else:
self.right = Node(element, parent=self)
elif element < self.element:
if self.left is not None:
self.left.insert(element)
else:
self.left = Node(element, parent=self)
def attach(self, root):
'''
@param root: Root node of the subtree to attach to this tree.
'''
if root.element >= self.element:
if self.right is None:
self.right = root
else:
self.right.attach(root)
elif root.element < self.element:
if self.left is None:
self.left = root
else:
self.left.attach(root)
def _detach(self):
if self.parent.left == self:
self.parent.left = None
elif self.parent.right == self:
self.parent.right = None
return self
def detachAt(self, element):
if element == self.element:
return self._detach()
elif self.left and element < self.element:
return self.left.detachAt(element)
elif self.right and element > self.element:
return self.right.detachAt(element)
else:
raise NoSuchElement(element)
def inOrder(self):
if self.left:
for element in self.left.inOrder():
yield element
yield self.element
if self.right:
for element in self.right.inOrder():
yield element
def preOrder(self):
yield self.element
if self.left:
for element in self.left.preOrder():
yield element
if self.right:
for element in self.right.preOrder():
yield element
def postOrder(self):
if self.left:
for element in self.left.postOrder():
yield element
if self.right:
for element in self.right.postOrder():
yield element
yield self.element
| unknown | codeparrot/codeparrot-clean | ||
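
The insert/in-order pattern of the `Node` class can be sketched standalone (a trimmed-down node rather than the class itself, so it runs without `BinaryTreeExceptions`): an in-order traversal of a binary search tree yields its elements in sorted order.

```python
class _Node:
    # Trimmed-down sketch of Node: only insert and inOrder.
    def __init__(self, element):
        self.element = element
        self.left = None
        self.right = None

    def insert(self, element):
        # Duplicates go right, matching Node.insert above.
        side = 'right' if element >= self.element else 'left'
        child = getattr(self, side)
        if child is None:
            setattr(self, side, _Node(element))
        else:
            child.insert(element)

    def inOrder(self):
        if self.left:
            yield from self.left.inOrder()
        yield self.element
        if self.right:
            yield from self.right.inOrder()

root = _Node(5)
for value in (3, 8, 1, 4):
    root.insert(value)
print(list(root.inOrder()))  # [1, 3, 4, 5, 8]
```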
<?php
namespace Illuminate\View\Compilers\Concerns;
use Illuminate\Support\Str;
trait CompilesConditionals
{
/**
* Identifier for the first case in the switch statement.
*
* @var bool
*/
protected $firstCaseInSwitch = true;
/**
* Compile the if-auth statements into valid PHP.
*
* @param string|null $guard
* @return string
*/
protected function compileAuth($guard = null)
{
$guard = is_null($guard) ? '()' : $guard;
return "<?php if(auth()->guard{$guard}->check()): ?>";
}
/**
* Compile the else-auth statements into valid PHP.
*
* @param string|null $guard
* @return string
*/
protected function compileElseAuth($guard = null)
{
$guard = is_null($guard) ? '()' : $guard;
return "<?php elseif(auth()->guard{$guard}->check()): ?>";
}
/**
* Compile the end-auth statements into valid PHP.
*
* @return string
*/
protected function compileEndAuth()
{
return '<?php endif; ?>';
}
/**
* Compile the env statements into valid PHP.
*
* @param string $environments
* @return string
*/
protected function compileEnv($environments)
{
return "<?php if(app()->environment{$environments}): ?>";
}
/**
* Compile the end-env statements into valid PHP.
*
* @return string
*/
protected function compileEndEnv()
{
return '<?php endif; ?>';
}
/**
* Compile the production statements into valid PHP.
*
* @return string
*/
protected function compileProduction()
{
return "<?php if(app()->environment('production')): ?>";
}
/**
* Compile the end-production statements into valid PHP.
*
* @return string
*/
protected function compileEndProduction()
{
return '<?php endif; ?>';
}
/**
* Compile the if-guest statements into valid PHP.
*
* @param string|null $guard
* @return string
*/
protected function compileGuest($guard = null)
{
$guard = is_null($guard) ? '()' : $guard;
return "<?php if(auth()->guard{$guard}->guest()): ?>";
}
/**
* Compile the else-guest statements into valid PHP.
*
* @param string|null $guard
* @return string
*/
protected function compileElseGuest($guard = null)
{
$guard = is_null($guard) ? '()' : $guard;
return "<?php elseif(auth()->guard{$guard}->guest()): ?>";
}
/**
* Compile the end-guest statements into valid PHP.
*
* @return string
*/
protected function compileEndGuest()
{
return '<?php endif; ?>';
}
/**
* Compile the has-section statements into valid PHP.
*
* @param string $expression
* @return string
*/
protected function compileHasSection($expression)
{
return "<?php if (! empty(trim(\$__env->yieldContent{$expression}))): ?>";
}
/**
* Compile the has-stack statements into valid PHP.
*
* @param string $expression
* @return string
*/
protected function compileHasStack($expression)
{
return "<?php if (! \$__env->isStackEmpty{$expression}): ?>";
}
/**
* Compile the section-missing statements into valid PHP.
*
* @param string $expression
* @return string
*/
protected function compileSectionMissing($expression)
{
return "<?php if (empty(trim(\$__env->yieldContent{$expression}))): ?>";
}
/**
* Compile the if statements into valid PHP.
*
* @param string $expression
* @return string
*/
protected function compileIf($expression)
{
return "<?php if{$expression}: ?>";
}
/**
* Compile the unless statements into valid PHP.
*
* @param string $expression
* @return string
*/
protected function compileUnless($expression)
{
return "<?php if (! {$expression}): ?>";
}
/**
* Compile the else-if statements into valid PHP.
*
* @param string $expression
* @return string
*/
protected function compileElseif($expression)
{
return "<?php elseif{$expression}: ?>";
}
/**
* Compile the else statements into valid PHP.
*
* @return string
*/
protected function compileElse()
{
return '<?php else: ?>';
}
/**
* Compile the end-if statements into valid PHP.
*
* @return string
*/
protected function compileEndif()
{
return '<?php endif; ?>';
}
/**
* Compile the end-unless statements into valid PHP.
*
* @return string
*/
protected function compileEndunless()
{
return '<?php endif; ?>';
}
/**
* Compile the if-isset statements into valid PHP.
*
* @param string $expression
* @return string
*/
protected function compileIsset($expression)
{
return "<?php if(isset{$expression}): ?>";
}
/**
* Compile the end-isset statements into valid PHP.
*
* @return string
*/
protected function compileEndIsset()
{
return '<?php endif; ?>';
}
/**
* Compile the switch statements into valid PHP.
*
* @param string $expression
* @return string
*/
protected function compileSwitch($expression)
{
$this->firstCaseInSwitch = true;
return "<?php switch{$expression}:";
}
/**
* Compile the case statements into valid PHP.
*
* @param string $expression
* @return string
*/
protected function compileCase($expression)
{
if ($this->firstCaseInSwitch) {
$this->firstCaseInSwitch = false;
return "case {$expression}: ?>";
}
return "<?php case {$expression}: ?>";
}
/**
* Compile the default statements in switch case into valid PHP.
*
* @return string
*/
protected function compileDefault()
{
return '<?php default: ?>';
}
/**
* Compile the end switch statements into valid PHP.
*
* @return string
*/
protected function compileEndSwitch()
{
return '<?php endswitch; ?>';
}
/**
* Compile a once block into valid PHP.
*
* @param string|null $id
* @return string
*/
protected function compileOnce($id = null)
{
$id = $id ? $this->stripParentheses($id) : "'".(string) Str::uuid()."'";
return '<?php if (! $__env->hasRenderedOnce('.$id.')): $__env->markAsRenderedOnce('.$id.'); ?>';
}
/**
* Compile an end-once block into valid PHP.
*
* @return string
*/
public function compileEndOnce()
{
return '<?php endif; ?>';
}
/**
* Compile a boolean value into a raw true / false value for embedding into HTML attributes or JavaScript.
*
* @param bool $condition
* @return string
*/
protected function compileBool($condition)
{
return "<?php echo ($condition ? 'true' : 'false'); ?>";
}
/**
* Compile a checked block into valid PHP.
*
* @param string $condition
* @return string
*/
protected function compileChecked($condition)
{
return "<?php if{$condition}: echo 'checked'; endif; ?>";
}
/**
* Compile a disabled block into valid PHP.
*
* @param string $condition
* @return string
*/
protected function compileDisabled($condition)
{
return "<?php if{$condition}: echo 'disabled'; endif; ?>";
}
/**
* Compile a required block into valid PHP.
*
* @param string $condition
* @return string
*/
protected function compileRequired($condition)
{
return "<?php if{$condition}: echo 'required'; endif; ?>";
}
/**
* Compile a readonly block into valid PHP.
*
* @param string $condition
* @return string
*/
protected function compileReadonly($condition)
{
return "<?php if{$condition}: echo 'readonly'; endif; ?>";
}
/**
* Compile a selected block into valid PHP.
*
* @param string $condition
* @return string
*/
protected function compileSelected($condition)
{
return "<?php if{$condition}: echo 'selected'; endif; ?>";
}
/**
* Compile the push statements into valid PHP.
*
* @param string $expression
* @return string
*/
protected function compilePushIf($expression)
{
$parts = explode(',', $this->stripParentheses($expression));
if (count($parts) > 2) {
$last = array_pop($parts);
$parts = [
implode(',', $parts),
trim($last),
];
}
return "<?php if({$parts[0]}): \$__env->startPush({$parts[1]}); ?>";
}
/**
* Compile the else-if push statements into valid PHP.
*
* @param string $expression
* @return string
*/
protected function compileElsePushIf($expression)
{
$parts = explode(',', $this->stripParentheses($expression), 2);
return "<?php \$__env->stopPush(); elseif({$parts[0]}): \$__env->startPush({$parts[1]}); ?>";
}
/**
* Compile the else push statements into valid PHP.
*
* @param string $expression
* @return string
*/
protected function compileElsePush($expression)
{
return "<?php \$__env->stopPush(); else: \$__env->startPush{$expression}; ?>";
}
/**
* Compile the end-push statements into valid PHP.
*
* @return string
*/
protected function compileEndPushIf()
{
return '<?php $__env->stopPush(); endif; ?>';
}
}
| php | github | https://github.com/laravel/framework | src/Illuminate/View/Compilers/Concerns/CompilesConditionals.php |
# swift_build_support/products/ninja.py -------------------------*- python -*-
#
# This source file is part of the Swift.org open source project
#
# Copyright (c) 2014 - 2017 Apple Inc. and the Swift project authors
# Licensed under Apache License v2.0 with Runtime Library Exception
#
# See https://swift.org/LICENSE.txt for license information
# See https://swift.org/CONTRIBUTORS.txt for the list of Swift project authors
#
# ----------------------------------------------------------------------------
"""
Ninja build
"""
# ----------------------------------------------------------------------------
import os.path
import platform
import sys
from . import product
from .. import cache_util
from .. import shell
class Ninja(product.Product):
@cache_util.reify
def ninja_bin_path(self):
return os.path.join(self.build_dir, 'ninja')
def do_build(self):
if os.path.exists(self.ninja_bin_path):
return
env = None
if platform.system() == "Darwin":
from .. import xcrun
sysroot = xcrun.sdk_path("macosx")
osx_version_min = self.args.darwin_deployment_version_osx
assert sysroot is not None
env = {
"CXX": self.toolchain.cxx,
"CFLAGS": (
"-isysroot {sysroot} -mmacosx-version-min={osx_version}"
).format(sysroot=sysroot, osx_version=osx_version_min),
"LDFLAGS": (
"-mmacosx-version-min={osx_version}"
).format(osx_version=osx_version_min),
}
elif self.toolchain.cxx:
env = {
"CXX": self.toolchain.cxx,
}
# Ninja can only be built in-tree. Copy the source tree to the build
# directory.
shell.rmtree(self.build_dir)
shell.copytree(self.source_dir, self.build_dir)
with shell.pushd(self.build_dir):
shell.call([sys.executable, 'configure.py', '--bootstrap'],
env=env)
| unknown | codeparrot/codeparrot-clean | ||
import re
from django.utils.cache import patch_vary_headers
from django.utils.deprecation import MiddlewareMixin
from django.utils.text import compress_sequence, compress_string
re_accepts_gzip = re.compile(r'\bgzip\b')
class GZipMiddleware(MiddlewareMixin):
"""
This middleware compresses content if the browser allows gzip compression.
It sets the Vary header accordingly, so that caches will base their storage
on the Accept-Encoding header.
"""
def process_response(self, request, response):
# It's not worth attempting to compress really short responses.
if not response.streaming and len(response.content) < 200:
return response
# Avoid gzipping if we've already got a content-encoding.
if response.has_header('Content-Encoding'):
return response
patch_vary_headers(response, ('Accept-Encoding',))
ae = request.META.get('HTTP_ACCEPT_ENCODING', '')
if not re_accepts_gzip.search(ae):
return response
if response.streaming:
# Delete the `Content-Length` header for streaming content, because
# we won't know the compressed size until we stream it.
response.streaming_content = compress_sequence(response.streaming_content)
del response['Content-Length']
else:
# Return the compressed content only if it's actually shorter.
compressed_content = compress_string(response.content)
if len(compressed_content) >= len(response.content):
return response
response.content = compressed_content
response['Content-Length'] = str(len(response.content))
# If there is a strong ETag, make it weak to fulfill the requirements
# of RFC 7232 section-2.1 while also allowing conditional request
# matches on ETags.
etag = response.get('ETag')
if etag and etag.startswith('"'):
response['ETag'] = 'W/' + etag
response['Content-Encoding'] = 'gzip'
return response
| unknown | codeparrot/codeparrot-clean | ||
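
The middleware's "keep the compressed body only if it is actually shorter" rule can be sketched with the stdlib `gzip` module (standing in here for Django's `compress_string` helper):

```python
import gzip

# 600 bytes of repetitive content compresses well; very short or
# high-entropy payloads may not, which is why the middleware compares sizes.
content = b"hello world " * 50
compressed = gzip.compress(content)

use_compressed = len(compressed) < len(content)
body = compressed if use_compressed else content
print(len(content), len(compressed), use_compressed)
```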
# Copyright (c) 2013-2016 Hewlett Packard Enterprise Development LP
#
# Redistribution and use of this software in source and binary forms,
# with or without modification, are permitted provided that the following
# conditions are met:
#
# Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
import sys
from requestbuilder import Arg
import requestbuilder.auth.aws
import requestbuilder.service
import requestbuilder.request
from euca2ools.commands import Euca2ools
from euca2ools.exceptions import AWSError
from euca2ools.util import strip_response_metadata, add_fake_region_name
class ELB(requestbuilder.service.BaseService):
NAME = 'elasticloadbalancing'
DESCRIPTION = 'Load balancing service'
API_VERSION = '2012-06-01'
REGION_ENVVAR = ('EUCA_DEFAULT_REGION', 'AWS_DEFAULT_REGION')
URL_ENVVAR = 'AWS_ELB_URL'
ARGS = [Arg('-U', '--url', metavar='URL',
help='load balancing service endpoint URL')]
def configure(self):
requestbuilder.service.BaseService.configure(self)
add_fake_region_name(self)
# pylint: disable=no-self-use
def handle_http_error(self, response):
raise AWSError(response)
# pylint: enable=no-self-use
class ELBRequest(requestbuilder.request.AWSQueryRequest):
SUITE = Euca2ools
SERVICE_CLASS = ELB
AUTH_CLASS = requestbuilder.auth.aws.HmacV4Auth
METHOD = 'POST'
def parse_response(self, response):
response_dict = requestbuilder.request.AWSQueryRequest.parse_response(
self, response)
return strip_response_metadata(response_dict)
| unknown | codeparrot/codeparrot-clean | ||
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
import hashlib
import json
import os
import traceback
import urlparse
from abc import ABCMeta, abstractmethod
from ..testrunner import Stop
here = os.path.split(__file__)[0]
def executor_kwargs(test_type, server_config, cache_manager, **kwargs):
timeout_multiplier = kwargs["timeout_multiplier"]
if timeout_multiplier is None:
timeout_multiplier = 1
executor_kwargs = {"server_config": server_config,
"timeout_multiplier": timeout_multiplier,
"debug_info": kwargs["debug_info"]}
if test_type == "reftest":
executor_kwargs["screenshot_cache"] = cache_manager.dict()
return executor_kwargs
def strip_server(url):
"""Remove the scheme and netloc from a url, leaving only the path and any query
or fragment.
url - the url to strip
e.g. http://example.org:8000/tests?id=1#2 becomes /tests?id=1#2"""
url_parts = list(urlparse.urlsplit(url))
url_parts[0] = ""
url_parts[1] = ""
return urlparse.urlunsplit(url_parts)
class TestharnessResultConverter(object):
harness_codes = {0: "OK",
1: "ERROR",
2: "TIMEOUT"}
test_codes = {0: "PASS",
1: "FAIL",
2: "TIMEOUT",
3: "NOTRUN"}
def __call__(self, test, result):
"""Convert a JSON result into a (TestResult, [SubtestResult]) tuple"""
result_url, status, message, stack, subtest_results = result
assert result_url == test.url, ("Got results from %s, expected %s" %
(result_url, test.url))
harness_result = test.result_cls(self.harness_codes[status], message)
return (harness_result,
[test.subtest_result_cls(name, self.test_codes[status], message, stack)
for name, status, message, stack in subtest_results])
testharness_result_converter = TestharnessResultConverter()
def reftest_result_converter(self, test, result):
return (test.result_cls(result["status"], result["message"],
extra=result.get("extra")), [])
class ExecutorException(Exception):
def __init__(self, status, message):
self.status = status
self.message = message
class TestExecutor(object):
__metaclass__ = ABCMeta
test_type = None
convert_result = None
def __init__(self, browser, server_config, timeout_multiplier=1,
debug_info=None):
"""Abstract Base class for object that actually executes the tests in a
specific browser. Typically there will be a different TestExecutor
subclass for each test type and method of executing tests.
:param browser: ExecutorBrowser instance providing properties of the
browser that will be tested.
:param server_config: Dictionary of wptserve server configuration of the
form stored in TestEnvironment.external_config
:param timeout_multiplier: Multiplier relative to base timeout to use
when setting test timeout.
"""
self.runner = None
self.browser = browser
self.server_config = server_config
self.timeout_multiplier = timeout_multiplier
self.debug_info = debug_info
self.last_environment = {"protocol": "http",
"prefs": {}}
self.protocol = None # This must be set in subclasses
@property
def logger(self):
"""StructuredLogger for this executor"""
if self.runner is not None:
return self.runner.logger
def setup(self, runner):
"""Run steps needed before tests can be started e.g. connecting to
browser instance
:param runner: TestRunner instance that is going to run the tests"""
self.runner = runner
self.protocol.setup(runner)
def teardown(self):
"""Run cleanup steps after tests have finished"""
self.protocol.teardown()
def run_test(self, test):
"""Run a particular test.
:param test: The test to run"""
if test.environment != self.last_environment:
self.on_environment_change(test.environment)
try:
result = self.do_test(test)
except Exception as e:
result = self.result_from_exception(test, e)
if result is Stop:
return result
if result[0].status == "ERROR":
self.logger.debug(result[0].message)
self.last_environment = test.environment
self.runner.send_message("test_ended", test, result)
def server_url(self, protocol):
return "%s://%s:%s" % (protocol,
self.server_config["host"],
self.server_config["ports"][protocol][0])
def test_url(self, test):
return urlparse.urljoin(self.server_url(test.environment["protocol"]), test.url)
@abstractmethod
def do_test(self, test):
"""Test-type and protocol specific implementation of running a
specific test.
:param test: The test to run."""
pass
def on_environment_change(self, new_environment):
pass
def result_from_exception(self, test, e):
if hasattr(e, "status") and e.status in test.result_cls.statuses:
status = e.status
else:
status = "ERROR"
message = unicode(getattr(e, "message", ""))
if message:
message += "\n"
message += traceback.format_exc(e)
return test.result_cls(status, message), []
class TestharnessExecutor(TestExecutor):
convert_result = testharness_result_converter
class RefTestExecutor(TestExecutor):
convert_result = reftest_result_converter
def __init__(self, browser, server_config, timeout_multiplier=1, screenshot_cache=None,
debug_info=None):
TestExecutor.__init__(self, browser, server_config,
timeout_multiplier=timeout_multiplier,
debug_info=debug_info)
self.screenshot_cache = screenshot_cache
class RefTestImplementation(object):
def __init__(self, executor):
self.timeout_multiplier = executor.timeout_multiplier
self.executor = executor
# Cache of url:(screenshot hash, screenshot). Typically the
# screenshot is None, but we set this value if a test fails
# and the screenshot was taken from the cache so that we may
# retrieve the screenshot from the cache directly in the future
self.screenshot_cache = self.executor.screenshot_cache
self.message = None
@property
def logger(self):
return self.executor.logger
def get_hash(self, test):
timeout = test.timeout * self.timeout_multiplier
if test.url not in self.screenshot_cache:
success, data = self.executor.screenshot(test)
if not success:
return False, data
screenshot = data
hash_value = hashlib.sha1(screenshot).hexdigest()
self.screenshot_cache[test.url] = (hash_value, None)
rv = True, (hash_value, screenshot)
else:
rv = True, self.screenshot_cache[test.url]
self.message.append("%s %s" % (test.url, rv[1][0]))
return rv
def is_pass(self, lhs_hash, rhs_hash, relation):
assert relation in ("==", "!=")
self.message.append("Testing %s %s %s" % (lhs_hash, relation, rhs_hash))
return ((relation == "==" and lhs_hash == rhs_hash) or
(relation == "!=" and lhs_hash != rhs_hash))
def run_test(self, test):
self.message = []
# Depth-first search of reference tree, with the goal
# of reaching a leaf node with only pass results
stack = list(((test, item[0]), item[1]) for item in reversed(test.references))
while stack:
hashes = [None, None]
screenshots = [None, None]
nodes, relation = stack.pop()
for i, node in enumerate(nodes):
success, data = self.get_hash(node)
if success is False:
return {"status": data[0], "message": data[1]}
hashes[i], screenshots[i] = data
if self.is_pass(hashes[0], hashes[1], relation):
if nodes[1].references:
stack.extend(list(((nodes[1], item[0]), item[1]) for item in reversed(nodes[1].references)))
else:
# We passed
return {"status": "PASS", "message": None}
# We failed, so construct a failure message
for i, (node, screenshot) in enumerate(zip(nodes, screenshots)):
if screenshot is None:
success, screenshot = self.retake_screenshot(node)
if success:
screenshots[i] = screenshot
log_data = [{"url": nodes[0].url, "screenshot": screenshots[0]}, relation,
{"url": nodes[1].url, "screenshot": screenshots[1]}]
return {"status": "FAIL",
"message": "\n".join(self.message),
"extra": {"reftest_screenshots": log_data}}
def retake_screenshot(self, node):
success, data = self.executor.screenshot(node)
if not success:
return False, data
hash_val, _ = self.screenshot_cache[node.url]
self.screenshot_cache[node.url] = hash_val, data
return True, data
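RefTestImplementation above memoizes one screenshot hash per URL so each page in a reference chain is rendered at most once per run. A minimal standalone sketch of that url -> (hash, screenshot) cache pattern; the names here are illustrative, not part of wptrunner:

```python
import hashlib

class HashCacheSketch:
    """Illustrative stand-in for RefTestImplementation's screenshot cache."""

    def __init__(self):
        # url -> (sha1 hex digest, screenshot bytes or None), mirroring the
        # comment in RefTestImplementation.__init__ above.
        self.cache = {}

    def get_hash(self, url, take_screenshot):
        # Only render the page the first time a URL is requested; later
        # lookups are answered from the cache.
        if url not in self.cache:
            screenshot = take_screenshot(url)
            self.cache[url] = (hashlib.sha1(screenshot).hexdigest(), None)
        return self.cache[url][0]
```

As in run_test(), equal hashes for the "==" relation (or differing hashes for "!=") mean a pair passes without the raw screenshots ever being kept.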
class Protocol(object):
def __init__(self, executor, browser):
self.executor = executor
self.browser = browser
@property
def logger(self):
return self.executor.logger
def setup(self, runner):
pass
def teardown(self):
pass
def wait(self):
pass
from gaphas.item import Line
class LineInstance:
"""
Class representation of the ASCEND 'ARE_THE_SAME' relationship
signified by a ConnectorLine on the BlockCanvas.
There will be a reference to a LineInstance stored within a
LineItem object.
The intention is that it should be possible to encapsulate all the
non-graphical information about a canvas model using a number of
BlockInstance and LineInstance objects, and no Gaphas objects. It
should be possible to pickle this collection of objects and have everything
that is necessary for passing the model to ASCEND -- this provides the
clean separation of ASCEND-related and Gaphas-related code, hopefully.
"""
def __init__(self):
self.connectedblocks = []
def __str__(self):
"""
Representation of this line requires the name object from the
BlockInstance as well as name associated with the connected Port
(still a problem).
"""
try:
# fromblock/toblock are attached to this instance by the canvas code;
# fall back to an empty string while the line is not fully connected.
return ("\t%s, %s ARE_THE_SAME;\n" % (self.fromblock.name, self.toblock.name))
except AttributeError:
return ""
# Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import argparse
import lib2to3.refactor
from webkitpy.common.system.systemhost import SystemHost
from webkitpy.thirdparty import autopep8
def parse_args(args=None):
parser = argparse.ArgumentParser()
parser.add_argument('--chromium', action='store_const', dest='style', const='chromium', default='blink',
help="Format according to Chromium's Python coding styles instead of Blink's.")
parser.add_argument('--no-backups', action='store_false', default=True, dest='backup',
help='Do not back up files before overwriting them.')
parser.add_argument('-j', '--jobs', metavar='n', type=int, default=0,
help='Number of parallel jobs; match CPU count if less than 1.')
parser.add_argument('files', nargs='*', default=['-'],
help="files to format or '-' for standard in")
parser.add_argument('--double-quote-strings', action='store_const', dest='quoting', const='double', default='single',
help='Rewrite string literals to use double quotes instead of single quotes.')
parser.add_argument('--no-autopep8', action='store_true',
help='Skip the autopep8 code-formatting step.')
parser.add_argument('--leave-strings-alone', action='store_true',
help='Do not reformat string literals to use a consistent quote style.')
return parser.parse_args(args=args)
def main(host=None, args=None):
options = parse_args(args)
if options.no_autopep8:
options.style = None
if options.leave_strings_alone:
options.quoting = None
autopep8_options = _autopep8_options_for_style(options.style)
fixers = _fixers_for_quoting(options.quoting)
if options.files == ['-']:
host = host or SystemHost()
host.print_(reformat_source(host.stdin.read(), autopep8_options, fixers, '<stdin>'), end='')
return
# We create the arglist before checking if we need to create a Host, because a
# real host is non-picklable and can't be passed to host.executive.map().
arglist = [(host, name, autopep8_options, fixers, options.backup) for name in options.files]
host = host or SystemHost()
host.executive.map(_reformat_thunk, arglist, processes=options.jobs)
def _autopep8_options_for_style(style):
return {
None: [],
'blink': autopep8.parse_args(['--aggressive',
'--max-line-length', '132',
'--indent-size', '4',
'']),
'chromium': autopep8.parse_args(['--aggressive',
'--max-line-length', '80',
'--indent-size', '2',
'']),
}.get(style)
def _fixers_for_quoting(quoting):
return {
None: [],
'double': ['webkitpy.formatter.fix_double_quote_strings'],
'single': ['webkitpy.formatter.fix_single_quote_strings'],
}.get(quoting)
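Both helpers above use the same dict-dispatch idiom: a literal dict maps each recognized option to its value, and `.get()` quietly returns None for anything unrecognized instead of raising. A standalone sketch of the pattern with illustrative values (not the actual webkitpy option lists):

```python
def options_for_style(style):
    # Known styles map to argument lists; .get() yields None rather than
    # raising KeyError for unrecognized styles, so callers must handle None.
    return {
        None: [],
        'blink': ['--max-line-length', '132'],
        'chromium': ['--max-line-length', '80'],
    }.get(style)
```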
def _reformat_thunk(args):
reformat_file(*args)
def reformat_file(host, name, autopep8_options, fixers, should_backup_file):
host = host or SystemHost()
source = host.filesystem.read_text_file(name)
dest = reformat_source(source, autopep8_options, fixers, name)
if dest != source:
if should_backup_file:
host.filesystem.write_text_file(name + '.bak', source)
host.filesystem.write_text_file(name, dest)
def reformat_source(source, autopep8_options, fixers, name):
tmp_str = source
if autopep8_options:
tmp_str = autopep8.fix_code(tmp_str, autopep8_options)
if fixers:
tool = lib2to3.refactor.RefactoringTool(fixer_names=fixers,
explicit=fixers)
tmp_str = unicode(tool.refactor_string(tmp_str, name=name))
return tmp_str
# -*- coding: utf-8 -*-
# Generated by Django 1.10.1 on 2016-10-30 23:47
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='TblAcademicoSemestre',
fields=[
('codigo', models.AutoField(primary_key=True, serialize=False, verbose_name='C\xf3digo')),
('descricao', models.CharField(max_length=100, unique=True, verbose_name='Descri\xe7\xe3o')),
('data', models.DateField(auto_now_add=True, verbose_name='Data de cadastro')),
('hora', models.TimeField(auto_now_add=True, verbose_name='Hora de cadastro')),
('ativo', models.BooleanField(choices=[(True, 'Sim'), (False, 'N\xe3o')], verbose_name='Ativo')),
],
options={
'ordering': ['codigo'],
'db_table': 'tbl_academico_semestre',
},
),
]
# (c) 2014, Matt Martz <matt@sivel.net>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
class ModuleDocFragment(object):
# Standard files documentation fragment
DOCUMENTATION = """
options:
mode:
required: false
default: null
description:
- mode the file or directory should be. For those used to I(/usr/bin/chmod) remember that modes are actually octal numbers (like 0644). Leaving off the leading zero will likely have unexpected results. As of version 1.8, the mode may be specified as a symbolic mode (for example, C(u+rwx) or C(u=rw,g=r,o=r)).
owner:
required: false
default: null
description:
- name of the user that should own the file/directory, as would be fed to I(chown)
group:
required: false
default: null
description:
- name of the group that should own the file/directory, as would be fed to I(chown)
seuser:
required: false
default: null
description:
- user part of SELinux file context. Will default to system policy, if
applicable. If set to C(_default), it will use the C(user) portion of the
policy if available
serole:
required: false
default: null
description:
- role part of SELinux file context, C(_default) feature works as for I(seuser).
setype:
required: false
default: null
description:
- type part of SELinux file context, C(_default) feature works as for I(seuser).
selevel:
required: false
default: "s0"
description:
- level part of the SELinux file context. This is the MLS/MCS attribute,
sometimes known as the C(range). C(_default) feature works as for
I(seuser).
unsafe_writes:
description:
- Normally this module uses atomic operations to prevent data corruption or inconsistent reads from the target files,
but some systems are configured, or simply broken, in ways that prevent this. One example is docker mounted files,
which cannot be updated atomically and can only be written in an unsafe manner.
- This boolean option allows ansible to fall back to unsafe methods of updating files for those cases in which you do
not have any other choice. Be aware that this is subject to race conditions and can lead to data corruption.
required: false
default: false
version_added: "2.2"
"""
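The warning in the mode option above ("leaving off the leading zero will likely have unexpected results") comes down to octal versus decimal parsing. A quick sketch of the pitfall, separate from the documentation fragment itself:

```python
# 0644 read as octal is rw-r--r--; the same digits read as decimal are a
# different number entirely, hence the unexpected permissions.
assert 0o644 == 420                 # rw-r--r-- as a plain integer
assert int("0644", 8) == 0o644      # chmod-style octal parsing
assert oct(644) == '0o1204'         # decimal 644 is --w----r-- plus sticky
```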
"""
Tests for Management commands of comprehensive theming.
"""
from django.test import TestCase
from django.core.management import call_command, CommandError
from openedx.core.djangoapps.theming.helpers import get_themes
from openedx.core.djangoapps.theming.management.commands.compile_sass import Command
class TestUpdateAssets(TestCase):
"""
Tests for the compile_sass management command.
"""
def setUp(self):
super(TestUpdateAssets, self).setUp()
self.themes = get_themes()
def test_errors_for_invalid_arguments(self):
"""
Test that compile_sass rejects invalid theme arguments.
"""
# make sure error is raised for invalid theme list
with self.assertRaises(CommandError):
call_command("compile_sass", themes=["all", "test-theme"])
# make sure error is raised for invalid theme list
with self.assertRaises(CommandError):
call_command("compile_sass", themes=["no", "test-theme"])
# make sure error is raised for invalid theme list
with self.assertRaises(CommandError):
call_command("compile_sass", themes=["all", "no"])
# make sure error is raised for invalid theme list
with self.assertRaises(CommandError):
call_command("compile_sass", themes=["test-theme", "non-existing-theme"])
def test_parse_arguments(self):
"""
Test the parse_arguments method of the compile_sass command.
"""
# make sure compile_sass picks all themes when called with 'themes=all' option
parsed_args = Command.parse_arguments(themes=["all"])
self.assertItemsEqual(parsed_args[2], get_themes())
# make sure compile_sass picks no themes when called with 'themes=no' option
parsed_args = Command.parse_arguments(themes=["no"])
self.assertItemsEqual(parsed_args[2], [])
# make sure compile_sass picks only specified themes
parsed_args = Command.parse_arguments(themes=["test-theme"])
self.assertItemsEqual(parsed_args[2], [theme for theme in get_themes() if theme.theme_dir_name == "test-theme"])
<!---
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-->
# Apache Hadoop Changelog
## Release 0.14.3 - 2007-10-19
### BUG FIXES:
| JIRA | Summary | Priority | Component | Reporter | Contributor |
|:---- |:---- | :--- |:---- |:---- |:---- |
| [HADOOP-2036](https://issues.apache.org/jira/browse/HADOOP-2036) | NPE in JvmMetrics.doThreadUpdates | Blocker | metrics | Koji Noguchi | Nigel Daley |
| [HADOOP-2053](https://issues.apache.org/jira/browse/HADOOP-2053) | OutOfMemoryError : Java heap space errors in hadoop 0.14 | Blocker | . | Lohit Vijayarenu | Arun C Murthy |
| [HADOOP-2043](https://issues.apache.org/jira/browse/HADOOP-2043) | 0.14.2 release compiled with Java 1.6 instead of Java 1.5 | Blocker | build | Doug Cutting | Doug Cutting |
| [HADOOP-2072](https://issues.apache.org/jira/browse/HADOOP-2072) | RawLocalFileStatus is causing Path problems | Major | fs | Dennis Kubes | |
import inspect
import types
import unittest
import contextlib
from test.support.import_helper import import_module
from test.support import gc_collect, requires_working_socket
asyncio = import_module("asyncio")
requires_working_socket(module=True)
_no_default = object()
class AwaitException(Exception):
pass
@types.coroutine
def awaitable(*, throw=False):
if throw:
yield ('throw',)
else:
yield ('result',)
def run_until_complete(coro):
exc = False
while True:
try:
if exc:
exc = False
fut = coro.throw(AwaitException)
else:
fut = coro.send(None)
except StopIteration as ex:
return ex.args[0]
if fut == ('throw',):
exc = True
def to_list(gen):
async def iterate():
res = []
async for i in gen:
res.append(i)
return res
return run_until_complete(iterate())
def py_anext(iterator, default=_no_default):
"""Pure-Python implementation of anext() for testing purposes.
Closely matches the builtin anext() C implementation.
Can be used to compare the built-in implementation of the inner
coroutines machinery to C-implementation of __anext__() and send()
or throw() on the returned generator.
"""
try:
__anext__ = type(iterator).__anext__
except AttributeError:
raise TypeError(f'{iterator!r} is not an async iterator')
if default is _no_default:
return __anext__(iterator)
async def anext_impl():
try:
# The C code is way more low-level than this, as it implements
# all methods of the iterator protocol. In this implementation
# we're relying on higher-level coroutine concepts, but that's
# exactly what we want -- crosstest pure-Python high-level
# implementation and low-level C anext() iterators.
return await __anext__(iterator)
except StopAsyncIteration:
return default
return anext_impl()
class AsyncGenSyntaxTest(unittest.TestCase):
def test_async_gen_syntax_01(self):
code = '''async def foo():
await abc
yield from 123
'''
with self.assertRaisesRegex(SyntaxError, 'yield from.*inside async'):
exec(code, {}, {})
def test_async_gen_syntax_02(self):
code = '''async def foo():
yield from 123
'''
with self.assertRaisesRegex(SyntaxError, 'yield from.*inside async'):
exec(code, {}, {})
def test_async_gen_syntax_03(self):
code = '''async def foo():
await abc
yield
return 123
'''
with self.assertRaisesRegex(SyntaxError, 'return.*value.*async gen'):
exec(code, {}, {})
def test_async_gen_syntax_04(self):
code = '''async def foo():
yield
return 123
'''
with self.assertRaisesRegex(SyntaxError, 'return.*value.*async gen'):
exec(code, {}, {})
def test_async_gen_syntax_05(self):
code = '''async def foo():
if 0:
yield
return 12
'''
with self.assertRaisesRegex(SyntaxError, 'return.*value.*async gen'):
exec(code, {}, {})
class AsyncGenTest(unittest.TestCase):
def compare_generators(self, sync_gen, async_gen):
def sync_iterate(g):
res = []
while True:
try:
res.append(g.__next__())
except StopIteration:
res.append('STOP')
break
except Exception as ex:
res.append(str(type(ex)))
return res
def async_iterate(g):
res = []
while True:
an = g.__anext__()
try:
while True:
try:
an.__next__()
except StopIteration as ex:
if ex.args:
res.append(ex.args[0])
break
else:
res.append('EMPTY StopIteration')
break
except StopAsyncIteration:
raise
except Exception as ex:
res.append(str(type(ex)))
break
except StopAsyncIteration:
res.append('STOP')
break
return res
sync_gen_result = sync_iterate(sync_gen)
async_gen_result = async_iterate(async_gen)
self.assertEqual(sync_gen_result, async_gen_result)
return async_gen_result
def test_async_gen_iteration_01(self):
async def gen():
await awaitable()
a = yield 123
self.assertIs(a, None)
await awaitable()
yield 456
await awaitable()
yield 789
self.assertEqual(to_list(gen()), [123, 456, 789])
def test_async_gen_iteration_02(self):
async def gen():
await awaitable()
yield 123
await awaitable()
g = gen()
ai = g.__aiter__()
an = ai.__anext__()
self.assertEqual(an.__next__(), ('result',))
try:
an.__next__()
except StopIteration as ex:
self.assertEqual(ex.args[0], 123)
else:
self.fail('StopIteration was not raised')
an = ai.__anext__()
self.assertEqual(an.__next__(), ('result',))
try:
an.__next__()
except StopAsyncIteration as ex:
self.assertFalse(ex.args)
else:
self.fail('StopAsyncIteration was not raised')
def test_async_gen_exception_03(self):
async def gen():
await awaitable()
yield 123
await awaitable(throw=True)
yield 456
with self.assertRaises(AwaitException):
to_list(gen())
def test_async_gen_exception_04(self):
async def gen():
await awaitable()
yield 123
1 / 0
g = gen()
ai = g.__aiter__()
an = ai.__anext__()
self.assertEqual(an.__next__(), ('result',))
try:
an.__next__()
except StopIteration as ex:
self.assertEqual(ex.args[0], 123)
else:
self.fail('StopIteration was not raised')
with self.assertRaises(ZeroDivisionError):
ai.__anext__().__next__()
def test_async_gen_exception_05(self):
async def gen():
yield 123
raise StopAsyncIteration
with self.assertRaisesRegex(RuntimeError,
'async generator.*StopAsyncIteration'):
to_list(gen())
def test_async_gen_exception_06(self):
async def gen():
yield 123
raise StopIteration
with self.assertRaisesRegex(RuntimeError,
'async generator.*StopIteration'):
to_list(gen())
def test_async_gen_exception_07(self):
def sync_gen():
try:
yield 1
1 / 0
finally:
yield 2
yield 3
yield 100
async def async_gen():
try:
yield 1
1 / 0
finally:
yield 2
yield 3
yield 100
self.compare_generators(sync_gen(), async_gen())
def test_async_gen_exception_08(self):
def sync_gen():
try:
yield 1
finally:
yield 2
1 / 0
yield 3
yield 100
async def async_gen():
try:
yield 1
await awaitable()
finally:
await awaitable()
yield 2
1 / 0
yield 3
yield 100
self.compare_generators(sync_gen(), async_gen())
def test_async_gen_exception_09(self):
def sync_gen():
try:
yield 1
1 / 0
finally:
yield 2
yield 3
yield 100
async def async_gen():
try:
await awaitable()
yield 1
1 / 0
finally:
yield 2
await awaitable()
yield 3
yield 100
self.compare_generators(sync_gen(), async_gen())
def test_async_gen_exception_10(self):
async def gen():
yield 123
with self.assertRaisesRegex(TypeError,
"non-None value .* async generator"):
gen().__anext__().send(100)
def test_async_gen_exception_11(self):
def sync_gen():
yield 10
yield 20
def sync_gen_wrapper():
yield 1
sg = sync_gen()
sg.send(None)
try:
sg.throw(GeneratorExit())
except GeneratorExit:
yield 2
yield 3
async def async_gen():
yield 10
yield 20
async def async_gen_wrapper():
yield 1
asg = async_gen()
await asg.asend(None)
try:
await asg.athrow(GeneratorExit())
except GeneratorExit:
yield 2
yield 3
self.compare_generators(sync_gen_wrapper(), async_gen_wrapper())
def test_async_gen_exception_12(self):
async def gen():
with self.assertWarnsRegex(RuntimeWarning,
f"coroutine method 'asend' of '{gen.__qualname__}' "
f"was never awaited"):
await anext(me)
yield 123
me = gen()
ai = me.__aiter__()
an = ai.__anext__()
with self.assertRaisesRegex(RuntimeError,
r'anext\(\): asynchronous generator is already running'):
an.__next__()
with self.assertRaisesRegex(RuntimeError,
r"cannot reuse already awaited __anext__\(\)/asend\(\)"):
an.send(None)
def test_async_gen_asend_throw_concurrent_with_send(self):
import types
@types.coroutine
def _async_yield(v):
return (yield v)
class MyExc(Exception):
pass
async def agenfn():
while True:
try:
await _async_yield(None)
except MyExc:
pass
return
yield
agen = agenfn()
gen = agen.asend(None)
gen.send(None)
gen2 = agen.asend(None)
with self.assertRaisesRegex(RuntimeError,
r'anext\(\): asynchronous generator is already running'):
gen2.throw(MyExc)
with self.assertRaisesRegex(RuntimeError,
r"cannot reuse already awaited __anext__\(\)/asend\(\)"):
gen2.send(None)
def test_async_gen_athrow_throw_concurrent_with_send(self):
import types
@types.coroutine
def _async_yield(v):
return (yield v)
class MyExc(Exception):
pass
async def agenfn():
while True:
try:
await _async_yield(None)
except MyExc:
pass
return
yield
agen = agenfn()
gen = agen.asend(None)
gen.send(None)
gen2 = agen.athrow(MyExc)
with self.assertRaisesRegex(RuntimeError,
r'athrow\(\): asynchronous generator is already running'):
gen2.throw(MyExc)
with self.assertRaisesRegex(RuntimeError,
r"cannot reuse already awaited aclose\(\)/athrow\(\)"):
gen2.send(None)
def test_async_gen_asend_throw_concurrent_with_throw(self):
import types
@types.coroutine
def _async_yield(v):
return (yield v)
class MyExc(Exception):
pass
async def agenfn():
try:
yield
except MyExc:
pass
while True:
try:
await _async_yield(None)
except MyExc:
pass
agen = agenfn()
with self.assertRaises(StopIteration):
agen.asend(None).send(None)
gen = agen.athrow(MyExc)
gen.throw(MyExc)
gen2 = agen.asend(MyExc)
with self.assertRaisesRegex(RuntimeError,
r'anext\(\): asynchronous generator is already running'):
gen2.throw(MyExc)
with self.assertRaisesRegex(RuntimeError,
r"cannot reuse already awaited __anext__\(\)/asend\(\)"):
gen2.send(None)
def test_async_gen_athrow_throw_concurrent_with_throw(self):
import types
@types.coroutine
def _async_yield(v):
return (yield v)
class MyExc(Exception):
pass
async def agenfn():
try:
yield
except MyExc:
pass
while True:
try:
await _async_yield(None)
except MyExc:
pass
agen = agenfn()
with self.assertRaises(StopIteration):
agen.asend(None).send(None)
gen = agen.athrow(MyExc)
gen.throw(MyExc)
gen2 = agen.athrow(None)
with self.assertRaisesRegex(RuntimeError,
r'athrow\(\): asynchronous generator is already running'):
gen2.throw(MyExc)
with self.assertRaisesRegex(RuntimeError,
r"cannot reuse already awaited aclose\(\)/athrow\(\)"):
gen2.send(None)
def test_async_gen_3_arg_deprecation_warning(self):
async def gen():
yield 123
with self.assertWarns(DeprecationWarning):
x = gen().athrow(GeneratorExit, GeneratorExit(), None)
with self.assertRaises(GeneratorExit):
x.send(None)
del x
gc_collect()
def test_async_gen_api_01(self):
async def gen():
yield 123
g = gen()
self.assertEqual(g.__name__, 'gen')
g.__name__ = '123'
self.assertEqual(g.__name__, '123')
self.assertIn('.gen', g.__qualname__)
g.__qualname__ = '123'
self.assertEqual(g.__qualname__, '123')
self.assertIsNone(g.ag_await)
self.assertIsInstance(g.ag_frame, types.FrameType)
self.assertFalse(g.ag_running)
self.assertIsInstance(g.ag_code, types.CodeType)
aclose = g.aclose()
self.assertTrue(inspect.isawaitable(aclose))
aclose.close()
def test_async_gen_asend_close_runtime_error(self):
import types
@types.coroutine
def _async_yield(v):
return (yield v)
async def agenfn():
try:
await _async_yield(None)
except GeneratorExit:
await _async_yield(None)
return
yield
agen = agenfn()
gen = agen.asend(None)
gen.send(None)
with self.assertRaisesRegex(RuntimeError, "coroutine ignored GeneratorExit"):
gen.close()
def test_async_gen_athrow_close_runtime_error(self):
import types
@types.coroutine
def _async_yield(v):
return (yield v)
class MyExc(Exception):
pass
async def agenfn():
try:
yield
except MyExc:
try:
await _async_yield(None)
except GeneratorExit:
await _async_yield(None)
agen = agenfn()
with self.assertRaises(StopIteration):
agen.asend(None).send(None)
gen = agen.athrow(MyExc)
gen.send(None)
with self.assertRaisesRegex(RuntimeError, "coroutine ignored GeneratorExit"):
gen.close()
class AsyncGenAsyncioTest(unittest.TestCase):
def setUp(self):
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(None)
def tearDown(self):
self.loop.close()
self.loop = None
asyncio.events._set_event_loop_policy(None)
def check_async_iterator_anext(self, ait_class):
with self.subTest(anext="pure-Python"):
self._check_async_iterator_anext(ait_class, py_anext)
with self.subTest(anext="builtin"):
self._check_async_iterator_anext(ait_class, anext)
def _check_async_iterator_anext(self, ait_class, anext):
g = ait_class()
async def consume():
results = []
results.append(await anext(g))
results.append(await anext(g))
results.append(await anext(g, 'buckle my shoe'))
return results
res = self.loop.run_until_complete(consume())
self.assertEqual(res, [1, 2, 'buckle my shoe'])
with self.assertRaises(StopAsyncIteration):
self.loop.run_until_complete(consume())
async def test_2():
g1 = ait_class()
self.assertEqual(await anext(g1), 1)
self.assertEqual(await anext(g1), 2)
with self.assertRaises(StopAsyncIteration):
await anext(g1)
with self.assertRaises(StopAsyncIteration):
await anext(g1)
g2 = ait_class()
self.assertEqual(await anext(g2, "default"), 1)
self.assertEqual(await anext(g2, "default"), 2)
self.assertEqual(await anext(g2, "default"), "default")
self.assertEqual(await anext(g2, "default"), "default")
return "completed"
result = self.loop.run_until_complete(test_2())
self.assertEqual(result, "completed")
def test_send():
p = ait_class()
obj = anext(p, "completed")
with self.assertRaises(StopIteration):
with contextlib.closing(obj.__await__()) as g:
g.send(None)
test_send()
async def test_throw():
p = ait_class()
obj = anext(p, "completed")
self.assertRaises(SyntaxError, obj.throw, SyntaxError)
return "completed"
result = self.loop.run_until_complete(test_throw())
self.assertEqual(result, "completed")
def test_async_generator_anext(self):
async def agen():
yield 1
yield 2
self.check_async_iterator_anext(agen)
def test_python_async_iterator_anext(self):
class MyAsyncIter:
"""Asynchronously yield 1, then 2."""
def __init__(self):
self.yielded = 0
def __aiter__(self):
return self
async def __anext__(self):
if self.yielded >= 2:
raise StopAsyncIteration()
else:
self.yielded += 1
return self.yielded
self.check_async_iterator_anext(MyAsyncIter)
def test_python_async_iterator_types_coroutine_anext(self):
import types
class MyAsyncIterWithTypesCoro:
"""Asynchronously yield 1, then 2."""
def __init__(self):
self.yielded = 0
def __aiter__(self):
return self
@types.coroutine
def __anext__(self):
if False:
yield "this is a generator-based coroutine"
if self.yielded >= 2:
raise StopAsyncIteration()
else:
self.yielded += 1
return self.yielded
self.check_async_iterator_anext(MyAsyncIterWithTypesCoro)
def test_async_gen_aiter(self):
async def gen():
yield 1
yield 2
g = gen()
async def consume():
return [i async for i in aiter(g)]
res = self.loop.run_until_complete(consume())
self.assertEqual(res, [1, 2])
def test_async_gen_aiter_class(self):
results = []
class Gen:
async def __aiter__(self):
yield 1
yield 2
g = Gen()
async def consume():
ait = aiter(g)
while True:
try:
results.append(await anext(ait))
except StopAsyncIteration:
break
self.loop.run_until_complete(consume())
self.assertEqual(results, [1, 2])
def test_aiter_idempotent(self):
async def gen():
yield 1
applied_once = aiter(gen())
applied_twice = aiter(applied_once)
self.assertIs(applied_once, applied_twice)
def test_anext_bad_args(self):
async def gen():
yield 1
async def call_with_too_few_args():
await anext()
async def call_with_too_many_args():
await anext(gen(), 1, 3)
async def call_with_wrong_type_args():
await anext(1, gen())
async def call_with_kwarg():
await anext(aiterator=gen())
with self.assertRaises(TypeError):
self.loop.run_until_complete(call_with_too_few_args())
with self.assertRaises(TypeError):
self.loop.run_until_complete(call_with_too_many_args())
with self.assertRaises(TypeError):
self.loop.run_until_complete(call_with_wrong_type_args())
with self.assertRaises(TypeError):
self.loop.run_until_complete(call_with_kwarg())
def test_anext_bad_await(self):
async def bad_awaitable():
class BadAwaitable:
def __await__(self):
return 42
class MyAsyncIter:
def __aiter__(self):
return self
def __anext__(self):
return BadAwaitable()
regex = r"__await__.*iterator"
awaitable = anext(MyAsyncIter(), "default")
with self.assertRaisesRegex(TypeError, regex):
await awaitable
awaitable = anext(MyAsyncIter())
with self.assertRaisesRegex(TypeError, regex):
await awaitable
return "completed"
result = self.loop.run_until_complete(bad_awaitable())
self.assertEqual(result, "completed")
async def check_anext_returning_iterator(self, aiter_class):
awaitable = anext(aiter_class(), "default")
with self.assertRaises(TypeError):
await awaitable
awaitable = anext(aiter_class())
with self.assertRaises(TypeError):
await awaitable
return "completed"
def test_anext_return_iterator(self):
class WithIterAnext:
def __aiter__(self):
return self
def __anext__(self):
return iter("abc")
result = self.loop.run_until_complete(self.check_anext_returning_iterator(WithIterAnext))
self.assertEqual(result, "completed")
def test_anext_return_generator(self):
class WithGenAnext:
def __aiter__(self):
return self
def __anext__(self):
yield
result = self.loop.run_until_complete(self.check_anext_returning_iterator(WithGenAnext))
self.assertEqual(result, "completed")
def test_anext_await_raises(self):
class RaisingAwaitable:
def __await__(self):
raise ZeroDivisionError()
yield
class WithRaisingAwaitableAnext:
def __aiter__(self):
return self
def __anext__(self):
return RaisingAwaitable()
async def do_test():
awaitable = anext(WithRaisingAwaitableAnext())
with self.assertRaises(ZeroDivisionError):
await awaitable
awaitable = anext(WithRaisingAwaitableAnext(), "default")
with self.assertRaises(ZeroDivisionError):
await awaitable
return "completed"
result = self.loop.run_until_complete(do_test())
self.assertEqual(result, "completed")
def test_anext_iter(self):
@types.coroutine
def _async_yield(v):
return (yield v)
class MyError(Exception):
pass
async def agenfn():
try:
await _async_yield(1)
except MyError:
await _async_yield(2)
return
yield
def test1(anext):
agen = agenfn()
with contextlib.closing(anext(agen, "default").__await__()) as g:
self.assertEqual(g.send(None), 1)
self.assertEqual(g.throw(MyError()), 2)
try:
g.send(None)
except StopIteration as e:
err = e
else:
self.fail('StopIteration was not raised')
self.assertEqual(err.value, "default")
def test2(anext):
agen = agenfn()
with contextlib.closing(anext(agen, "default").__await__()) as g:
self.assertEqual(g.send(None), 1)
self.assertEqual(g.throw(MyError()), 2)
with self.assertRaises(MyError):
g.throw(MyError())
def test3(anext):
agen = agenfn()
with contextlib.closing(anext(agen, "default").__await__()) as g:
self.assertEqual(g.send(None), 1)
g.close()
with self.assertRaisesRegex(RuntimeError, 'cannot reuse'):
self.assertEqual(g.send(None), 1)
def test4(anext):
@types.coroutine
def _async_yield(v):
yield v * 10
return (yield (v * 10 + 1))
async def agenfn():
try:
await _async_yield(1)
except MyError:
await _async_yield(2)
return
yield
agen = agenfn()
with contextlib.closing(anext(agen, "default").__await__()) as g:
self.assertEqual(g.send(None), 10)
self.assertEqual(g.throw(MyError()), 20)
with self.assertRaisesRegex(MyError, 'val'):
g.throw(MyError('val'))
def test5(anext):
@types.coroutine
def _async_yield(v):
yield v * 10
return (yield (v * 10 + 1))
async def agenfn():
try:
await _async_yield(1)
except MyError:
return
yield 'aaa'
agen = agenfn()
with contextlib.closing(anext(agen, "default").__await__()) as g:
self.assertEqual(g.send(None), 10)
with self.assertRaisesRegex(StopIteration, 'default'):
g.throw(MyError())
def test6(anext):
@types.coroutine
def _async_yield(v):
yield v * 10
return (yield (v * 10 + 1))
async def agenfn():
await _async_yield(1)
yield 'aaa'
agen = agenfn()
with contextlib.closing(anext(agen, "default").__await__()) as g:
with self.assertRaises(MyError):
g.throw(MyError())
def run_test(test):
with self.subTest('pure-Python anext()'):
test(py_anext)
with self.subTest('builtin anext()'):
test(anext)
run_test(test1)
run_test(test2)
run_test(test3)
run_test(test4)
run_test(test5)
run_test(test6)
def test_aiter_bad_args(self):
async def gen():
yield 1
async def call_with_too_few_args():
await aiter()
async def call_with_too_many_args():
await aiter(gen(), 1)
async def call_with_wrong_type_arg():
await aiter(1)
with self.assertRaises(TypeError):
self.loop.run_until_complete(call_with_too_few_args())
with self.assertRaises(TypeError):
self.loop.run_until_complete(call_with_too_many_args())
with self.assertRaises(TypeError):
self.loop.run_until_complete(call_with_wrong_type_arg())
async def to_list(self, gen):
res = []
async for i in gen:
res.append(i)
return res
def test_async_gen_asyncio_01(self):
async def gen():
yield 1
await asyncio.sleep(0.01)
yield 2
await asyncio.sleep(0.01)
return
yield 3
res = self.loop.run_until_complete(self.to_list(gen()))
self.assertEqual(res, [1, 2])
def test_async_gen_asyncio_02(self):
async def gen():
yield 1
await asyncio.sleep(0.01)
yield 2
1 / 0
yield 3
with self.assertRaises(ZeroDivisionError):
self.loop.run_until_complete(self.to_list(gen()))
def test_async_gen_asyncio_03(self):
loop = self.loop
class Gen:
async def __aiter__(self):
yield 1
await asyncio.sleep(0.01)
yield 2
res = loop.run_until_complete(self.to_list(Gen()))
self.assertEqual(res, [1, 2])
def test_async_gen_asyncio_anext_04(self):
async def foo():
yield 1
await asyncio.sleep(0.01)
try:
yield 2
yield 3
except ZeroDivisionError:
yield 1000
await asyncio.sleep(0.01)
yield 4
async def run1():
it = foo().__aiter__()
self.assertEqual(await it.__anext__(), 1)
self.assertEqual(await it.__anext__(), 2)
self.assertEqual(await it.__anext__(), 3)
self.assertEqual(await it.__anext__(), 4)
with self.assertRaises(StopAsyncIteration):
await it.__anext__()
with self.assertRaises(StopAsyncIteration):
await it.__anext__()
async def run2():
it = foo().__aiter__()
self.assertEqual(await it.__anext__(), 1)
self.assertEqual(await it.__anext__(), 2)
try:
it.__anext__().throw(ZeroDivisionError)
except StopIteration as ex:
self.assertEqual(ex.args[0], 1000)
else:
self.fail('StopIteration was not raised')
self.assertEqual(await it.__anext__(), 4)
with self.assertRaises(StopAsyncIteration):
await it.__anext__()
self.loop.run_until_complete(run1())
self.loop.run_until_complete(run2())
def test_async_gen_asyncio_anext_05(self):
async def foo():
v = yield 1
v = yield v
yield v * 100
async def run():
it = foo().__aiter__()
try:
it.__anext__().send(None)
except StopIteration as ex:
self.assertEqual(ex.args[0], 1)
else:
self.fail('StopIteration was not raised')
try:
it.__anext__().send(10)
except StopIteration as ex:
self.assertEqual(ex.args[0], 10)
else:
self.fail('StopIteration was not raised')
try:
it.__anext__().send(12)
except StopIteration as ex:
self.assertEqual(ex.args[0], 1200)
else:
self.fail('StopIteration was not raised')
with self.assertRaises(StopAsyncIteration):
await it.__anext__()
self.loop.run_until_complete(run())
def test_async_gen_asyncio_anext_06(self):
DONE = 0
# test synchronous generators
def foo():
try:
yield
except:
pass
g = foo()
g.send(None)
with self.assertRaises(StopIteration):
g.send(None)
# now with asynchronous generators
async def gen():
nonlocal DONE
try:
yield
except:
pass
DONE = 1
async def run():
nonlocal DONE
g = gen()
await g.asend(None)
with self.assertRaises(StopAsyncIteration):
await g.asend(None)
DONE += 10
self.loop.run_until_complete(run())
self.assertEqual(DONE, 11)
def test_async_gen_asyncio_anext_tuple(self):
async def foo():
try:
yield (1,)
except ZeroDivisionError:
yield (2,)
async def run():
it = foo().__aiter__()
self.assertEqual(await it.__anext__(), (1,))
with self.assertRaises(StopIteration) as cm:
it.__anext__().throw(ZeroDivisionError)
self.assertEqual(cm.exception.args[0], (2,))
with self.assertRaises(StopAsyncIteration):
await it.__anext__()
self.loop.run_until_complete(run())
def test_async_gen_asyncio_anext_tuple_no_exceptions(self):
# StopAsyncIteration exceptions should be cleared.
# See: https://github.com/python/cpython/issues/128078.
async def foo():
if False:
yield (1, 2)
async def run():
it = foo().__aiter__()
with self.assertRaises(StopAsyncIteration):
await it.__anext__()
res = await anext(it, ('a', 'b'))
self.assertTupleEqual(res, ('a', 'b'))
self.loop.run_until_complete(run())
def test_sync_anext_raises_exception(self):
# See: https://github.com/python/cpython/issues/131670
msg = 'custom'
for exc_type in [
StopAsyncIteration,
StopIteration,
ValueError,
Exception,
]:
exc = exc_type(msg)
with self.subTest(exc=exc):
class A:
def __anext__(self):
raise exc
with self.assertRaisesRegex(exc_type, msg):
anext(A())
with self.assertRaisesRegex(exc_type, msg):
anext(A(), 1)
def test_async_gen_asyncio_anext_stopiteration(self):
async def foo():
try:
yield StopIteration(1)
except ZeroDivisionError:
yield StopIteration(3)
async def run():
it = foo().__aiter__()
v = await it.__anext__()
self.assertIsInstance(v, StopIteration)
self.assertEqual(v.value, 1)
with self.assertRaises(StopIteration) as cm:
it.__anext__().throw(ZeroDivisionError)
v = cm.exception.args[0]
self.assertIsInstance(v, StopIteration)
self.assertEqual(v.value, 3)
with self.assertRaises(StopAsyncIteration):
await it.__anext__()
self.loop.run_until_complete(run())
def test_async_gen_asyncio_aclose_06(self):
async def foo():
try:
yield 1
1 / 0
finally:
await asyncio.sleep(0.01)
yield 12
async def run():
gen = foo()
it = gen.__aiter__()
await it.__anext__()
await gen.aclose()
with self.assertRaisesRegex(
RuntimeError,
"async generator ignored GeneratorExit"):
self.loop.run_until_complete(run())
def test_async_gen_asyncio_aclose_07(self):
DONE = 0
async def foo():
nonlocal DONE
try:
yield 1
1 / 0
finally:
await asyncio.sleep(0.01)
await asyncio.sleep(0.01)
DONE += 1
DONE += 1000
async def run():
gen = foo()
it = gen.__aiter__()
await it.__anext__()
await gen.aclose()
self.loop.run_until_complete(run())
self.assertEqual(DONE, 1)
def test_async_gen_asyncio_aclose_08(self):
DONE = 0
fut = asyncio.Future(loop=self.loop)
async def foo():
nonlocal DONE
try:
yield 1
await fut
DONE += 1000
yield 2
finally:
await asyncio.sleep(0.01)
await asyncio.sleep(0.01)
DONE += 1
DONE += 1000
async def run():
gen = foo()
it = gen.__aiter__()
self.assertEqual(await it.__anext__(), 1)
await gen.aclose()
self.loop.run_until_complete(run())
self.assertEqual(DONE, 1)
# Silence ResourceWarnings
fut.cancel()
self.loop.run_until_complete(asyncio.sleep(0.01))
def test_async_gen_asyncio_gc_aclose_09(self):
DONE = 0
async def gen():
nonlocal DONE
try:
while True:
yield 1
finally:
await asyncio.sleep(0)
DONE = 1
async def run():
g = gen()
await g.__anext__()
await g.__anext__()
del g
gc_collect() # For PyPy or other GCs.
# Starts running the aclose task
await asyncio.sleep(0)
# For asyncio.sleep(0) in finally block
await asyncio.sleep(0)
self.loop.run_until_complete(run())
self.assertEqual(DONE, 1)
def test_async_gen_asyncio_aclose_10(self):
DONE = 0
# test synchronous generators
def foo():
try:
yield
except:
pass
g = foo()
g.send(None)
g.close()
# now with asynchronous generators
async def gen():
nonlocal DONE
try:
yield
except:
pass
DONE = 1
async def run():
nonlocal DONE
g = gen()
await g.asend(None)
await g.aclose()
DONE += 10
self.loop.run_until_complete(run())
self.assertEqual(DONE, 11)
def test_async_gen_asyncio_aclose_11(self):
DONE = 0
# test synchronous generators
def foo():
try:
yield
except:
pass
yield
g = foo()
g.send(None)
with self.assertRaisesRegex(RuntimeError, 'ignored GeneratorExit'):
g.close()
# now with asynchronous generators
async def gen():
nonlocal DONE
try:
yield
except:
pass
yield
DONE += 1
async def run():
nonlocal DONE
g = gen()
await g.asend(None)
with self.assertRaisesRegex(RuntimeError, 'ignored GeneratorExit'):
await g.aclose()
DONE += 10
self.loop.run_until_complete(run())
self.assertEqual(DONE, 10)
def test_async_gen_asyncio_aclose_12(self):
DONE = 0
async def target():
await asyncio.sleep(0.01)
1 / 0
async def foo():
nonlocal DONE
task = asyncio.create_task(target())
try:
yield 1
finally:
try:
await task
except ZeroDivisionError:
DONE = 1
async def run():
gen = foo()
it = gen.__aiter__()
await it.__anext__()
await gen.aclose()
self.loop.run_until_complete(run())
self.assertEqual(DONE, 1)
def test_async_gen_asyncio_asend_01(self):
DONE = 0
# Sanity check:
def sgen():
v = yield 1
yield v * 2
sg = sgen()
v = sg.send(None)
self.assertEqual(v, 1)
v = sg.send(100)
self.assertEqual(v, 200)
async def gen():
nonlocal DONE
try:
await asyncio.sleep(0.01)
v = yield 1
await asyncio.sleep(0.01)
yield v * 2
await asyncio.sleep(0.01)
return
finally:
await asyncio.sleep(0.01)
await asyncio.sleep(0.01)
DONE = 1
async def run():
g = gen()
v = await g.asend(None)
self.assertEqual(v, 1)
v = await g.asend(100)
self.assertEqual(v, 200)
with self.assertRaises(StopAsyncIteration):
await g.asend(None)
self.loop.run_until_complete(run())
self.assertEqual(DONE, 1)
def test_async_gen_asyncio_asend_02(self):
DONE = 0
async def sleep_n_crash(delay):
await asyncio.sleep(delay)
1 / 0
async def gen():
nonlocal DONE
try:
await asyncio.sleep(0.01)
v = yield 1
await sleep_n_crash(0.01)
DONE += 1000
yield v * 2
finally:
await asyncio.sleep(0.01)
await asyncio.sleep(0.01)
DONE = 1
async def run():
g = gen()
v = await g.asend(None)
self.assertEqual(v, 1)
await g.asend(100)
with self.assertRaises(ZeroDivisionError):
self.loop.run_until_complete(run())
self.assertEqual(DONE, 1)
def test_async_gen_asyncio_asend_03(self):
DONE = 0
async def sleep_n_crash(delay):
fut = asyncio.ensure_future(asyncio.sleep(delay),
loop=self.loop)
self.loop.call_later(delay / 2, lambda: fut.cancel())
return await fut
async def gen():
nonlocal DONE
try:
await asyncio.sleep(0.01)
v = yield 1
await sleep_n_crash(0.01)
DONE += 1000
yield v * 2
finally:
await asyncio.sleep(0.01)
await asyncio.sleep(0.01)
DONE = 1
async def run():
g = gen()
v = await g.asend(None)
self.assertEqual(v, 1)
await g.asend(100)
with self.assertRaises(asyncio.CancelledError):
self.loop.run_until_complete(run())
self.assertEqual(DONE, 1)
def test_async_gen_asyncio_athrow_01(self):
DONE = 0
class FooEr(Exception):
pass
# Sanity check:
def sgen():
try:
v = yield 1
except FooEr:
v = 1000
yield v * 2
sg = sgen()
v = sg.send(None)
self.assertEqual(v, 1)
v = sg.throw(FooEr)
self.assertEqual(v, 2000)
with self.assertRaises(StopIteration):
sg.send(None)
async def gen():
nonlocal DONE
try:
await asyncio.sleep(0.01)
try:
v = yield 1
except FooEr:
v = 1000
await asyncio.sleep(0.01)
yield v * 2
await asyncio.sleep(0.01)
# return
finally:
await asyncio.sleep(0.01)
await asyncio.sleep(0.01)
DONE = 1
async def run():
g = gen()
v = await g.asend(None)
self.assertEqual(v, 1)
v = await g.athrow(FooEr)
self.assertEqual(v, 2000)
with self.assertRaises(StopAsyncIteration):
await g.asend(None)
self.loop.run_until_complete(run())
self.assertEqual(DONE, 1)
def test_async_gen_asyncio_athrow_02(self):
DONE = 0
class FooEr(Exception):
pass
async def sleep_n_crash(delay):
fut = asyncio.ensure_future(asyncio.sleep(delay),
loop=self.loop)
self.loop.call_later(delay / 2, lambda: fut.cancel())
return await fut
async def gen():
nonlocal DONE
try:
await asyncio.sleep(0.01)
try:
v = yield 1
except FooEr:
await sleep_n_crash(0.01)
yield v * 2
await asyncio.sleep(0.01)
# return
finally:
await asyncio.sleep(0.01)
await asyncio.sleep(0.01)
DONE = 1
async def run():
g = gen()
v = await g.asend(None)
self.assertEqual(v, 1)
try:
await g.athrow(FooEr)
except asyncio.CancelledError:
self.assertEqual(DONE, 1)
raise
else:
self.fail('CancelledError was not raised')
with self.assertRaises(asyncio.CancelledError):
self.loop.run_until_complete(run())
self.assertEqual(DONE, 1)
def test_async_gen_asyncio_athrow_03(self):
DONE = 0
# test synchronous generators
def foo():
try:
yield
except:
pass
g = foo()
g.send(None)
with self.assertRaises(StopIteration):
g.throw(ValueError)
# now with asynchronous generators
async def gen():
nonlocal DONE
try:
yield
except:
pass
DONE = 1
async def run():
nonlocal DONE
g = gen()
await g.asend(None)
with self.assertRaises(StopAsyncIteration):
await g.athrow(ValueError)
DONE += 10
self.loop.run_until_complete(run())
self.assertEqual(DONE, 11)
def test_async_gen_asyncio_athrow_tuple(self):
async def gen():
try:
yield 1
except ZeroDivisionError:
yield (2,)
async def run():
g = gen()
v = await g.asend(None)
self.assertEqual(v, 1)
v = await g.athrow(ZeroDivisionError)
self.assertEqual(v, (2,))
with self.assertRaises(StopAsyncIteration):
await g.asend(None)
self.loop.run_until_complete(run())
def test_async_gen_asyncio_athrow_stopiteration(self):
async def gen():
try:
yield 1
except ZeroDivisionError:
yield StopIteration(2)
async def run():
g = gen()
v = await g.asend(None)
self.assertEqual(v, 1)
v = await g.athrow(ZeroDivisionError)
self.assertIsInstance(v, StopIteration)
self.assertEqual(v.value, 2)
with self.assertRaises(StopAsyncIteration):
await g.asend(None)
self.loop.run_until_complete(run())
def test_async_gen_asyncio_shutdown_01(self):
finalized = 0
async def waiter(timeout):
nonlocal finalized
try:
await asyncio.sleep(timeout)
yield 1
finally:
await asyncio.sleep(0)
finalized += 1
async def wait():
async for _ in waiter(1):
pass
t1 = self.loop.create_task(wait())
t2 = self.loop.create_task(wait())
self.loop.run_until_complete(asyncio.sleep(0.1))
# Silence warnings
t1.cancel()
t2.cancel()
with self.assertRaises(asyncio.CancelledError):
self.loop.run_until_complete(t1)
with self.assertRaises(asyncio.CancelledError):
self.loop.run_until_complete(t2)
self.loop.run_until_complete(self.loop.shutdown_asyncgens())
self.assertEqual(finalized, 2)
def test_async_gen_asyncio_shutdown_02(self):
messages = []
def exception_handler(loop, context):
messages.append(context)
async def async_iterate():
yield 1
yield 2
it = async_iterate()
async def main():
loop = asyncio.get_running_loop()
loop.set_exception_handler(exception_handler)
async for i in it:
break
asyncio.run(main())
self.assertEqual(messages, [])
def test_async_gen_asyncio_shutdown_exception_01(self):
messages = []
def exception_handler(loop, context):
messages.append(context)
async def async_iterate():
try:
yield 1
yield 2
finally:
1/0
it = async_iterate()
async def main():
loop = asyncio.get_running_loop()
loop.set_exception_handler(exception_handler)
async for i in it:
break
asyncio.run(main())
message, = messages
self.assertEqual(message['asyncgen'], it)
self.assertIsInstance(message['exception'], ZeroDivisionError)
self.assertIn('an error occurred during closing of asynchronous generator',
message['message'])
def test_async_gen_asyncio_shutdown_exception_02(self):
messages = []
def exception_handler(loop, context):
messages.append(context)
async def async_iterate():
try:
yield 1
yield 2
finally:
1/0
async def main():
loop = asyncio.get_running_loop()
loop.set_exception_handler(exception_handler)
async for i in async_iterate():
break
gc_collect()
asyncio.run(main())
message, = messages
self.assertIsInstance(message['exception'], ZeroDivisionError)
self.assertIn('unhandled exception during asyncio.run() shutdown',
message['message'])
del message, messages
gc_collect()
def test_async_gen_expression_01(self):
async def arange(n):
for i in range(n):
await asyncio.sleep(0.01)
yield i
def make_arange(n):
# This syntax is legal starting with Python 3.7
return (i * 2 async for i in arange(n))
async def run():
return [i async for i in make_arange(10)]
res = self.loop.run_until_complete(run())
self.assertEqual(res, [i * 2 for i in range(10)])
def test_async_gen_expression_02(self):
async def wrap(n):
await asyncio.sleep(0.01)
return n
def make_arange(n):
# This syntax is legal starting with Python 3.7
return (i * 2 for i in range(n) if await wrap(i))
async def run():
return [i async for i in make_arange(10)]
res = self.loop.run_until_complete(run())
self.assertEqual(res, [i * 2 for i in range(1, 10)])
def test_asyncgen_nonstarted_hooks_are_cancellable(self):
# See https://bugs.python.org/issue38013
messages = []
def exception_handler(loop, context):
messages.append(context)
async def async_iterate():
yield 1
yield 2
async def main():
loop = asyncio.get_running_loop()
loop.set_exception_handler(exception_handler)
async for i in async_iterate():
break
asyncio.run(main())
self.assertEqual([], messages)
gc_collect()
def test_async_gen_await_same_anext_coro_twice(self):
async def async_iterate():
yield 1
yield 2
async def run():
it = async_iterate()
nxt = it.__anext__()
await nxt
with self.assertRaisesRegex(
RuntimeError,
r"cannot reuse already awaited __anext__\(\)/asend\(\)"
):
await nxt
await it.aclose() # prevent unfinished iterator warning
self.loop.run_until_complete(run())
def test_async_gen_await_same_aclose_coro_twice(self):
async def async_iterate():
yield 1
yield 2
async def run():
it = async_iterate()
nxt = it.aclose()
await nxt
with self.assertRaisesRegex(
RuntimeError,
r"cannot reuse already awaited aclose\(\)/athrow\(\)"
):
await nxt
self.loop.run_until_complete(run())
def test_async_gen_throw_same_aclose_coro_twice(self):
async def async_iterate():
yield 1
yield 2
it = async_iterate()
nxt = it.aclose()
with self.assertRaises(StopIteration):
nxt.throw(GeneratorExit)
with self.assertRaisesRegex(
RuntimeError,
r"cannot reuse already awaited aclose\(\)/athrow\(\)"
):
nxt.throw(GeneratorExit)
def test_async_gen_throw_custom_same_aclose_coro_twice(self):
async def async_iterate():
yield 1
yield 2
it = async_iterate()
class MyException(Exception):
pass
nxt = it.aclose()
with self.assertRaises(MyException):
nxt.throw(MyException)
with self.assertRaisesRegex(
RuntimeError,
r"cannot reuse already awaited aclose\(\)/athrow\(\)"
):
nxt.throw(MyException)
def test_async_gen_throw_custom_same_athrow_coro_twice(self):
async def async_iterate():
yield 1
yield 2
it = async_iterate()
class MyException(Exception):
pass
nxt = it.athrow(MyException)
with self.assertRaises(MyException):
nxt.throw(MyException)
with self.assertRaisesRegex(
RuntimeError,
r"cannot reuse already awaited aclose\(\)/athrow\(\)"
):
nxt.throw(MyException)
def test_async_gen_aclose_twice_with_different_coros(self):
# Regression test for https://bugs.python.org/issue39606
async def async_iterate():
yield 1
yield 2
async def run():
it = async_iterate()
await it.aclose()
await it.aclose()
self.loop.run_until_complete(run())
def test_async_gen_aclose_after_exhaustion(self):
# Regression test for https://bugs.python.org/issue39606
async def async_iterate():
yield 1
yield 2
async def run():
it = async_iterate()
async for _ in it:
pass
await it.aclose()
self.loop.run_until_complete(run())
def test_async_gen_aclose_compatible_with_get_stack(self):
async def async_generator():
yield object()
async def run():
ag = async_generator()
asyncio.create_task(ag.aclose())
tasks = asyncio.all_tasks()
for task in tasks:
# No AttributeError raised
task.get_stack()
self.loop.run_until_complete(run())
class TestUnawaitedWarnings(unittest.TestCase):
def test_asend(self):
async def gen():
yield 1
# gh-113753: asend objects allocated from a free-list should warn.
# Ensure there is a finalized 'asend' object ready to be reused.
try:
g = gen()
g.asend(None).send(None)
except StopIteration:
pass
msg = f"coroutine method 'asend' of '{gen.__qualname__}' was never awaited"
with self.assertWarnsRegex(RuntimeWarning, msg):
g = gen()
g.asend(None)
gc_collect()
def test_athrow(self):
async def gen():
yield 1
msg = f"coroutine method 'athrow' of '{gen.__qualname__}' was never awaited"
with self.assertWarnsRegex(RuntimeWarning, msg):
g = gen()
g.athrow(RuntimeError)
gc_collect()
def test_athrow_throws_immediately(self):
async def gen():
yield 1
g = gen()
msg = "athrow expected at least 1 argument, got 0"
with self.assertRaisesRegex(TypeError, msg):
g.athrow()
def test_aclose(self):
async def gen():
yield 1
msg = f"coroutine method 'aclose' of '{gen.__qualname__}' was never awaited"
with self.assertWarnsRegex(RuntimeWarning, msg):
g = gen()
g.aclose()
gc_collect()
def test_aclose_throw(self):
async def gen():
return
yield
class MyException(Exception):
pass
g = gen()
with self.assertRaises(MyException):
g.aclose().throw(MyException)
del g
gc_collect() # does not warn unawaited
def test_asend_send_already_running(self):
@types.coroutine
def _async_yield(v):
return (yield v)
async def agenfn():
while True:
await _async_yield(1)
return
yield
agen = agenfn()
gen = agen.asend(None)
gen.send(None)
gen2 = agen.asend(None)
with self.assertRaisesRegex(RuntimeError,
r'anext\(\): asynchronous generator is already running'):
gen2.send(None)
del gen2
gc_collect() # does not warn unawaited
def test_athrow_send_already_running(self):
@types.coroutine
def _async_yield(v):
return (yield v)
async def agenfn():
while True:
await _async_yield(1)
return
yield
agen = agenfn()
gen = agen.asend(None)
gen.send(None)
gen2 = agen.athrow(Exception)
with self.assertRaisesRegex(RuntimeError,
r'athrow\(\): asynchronous generator is already running'):
gen2.send(None)
del gen2
gc_collect() # does not warn unawaited
if __name__ == "__main__":
    unittest.main()
# source: python/cpython, Lib/test/test_asyncgen.py
//===--- Replacement.h - Migrator Replacements ------------------*- C++ -*-===//
//
// This source file is part of the Swift.org open source project
//
// Copyright (c) 2014 - 2017 Apple Inc. and the Swift project authors
// Licensed under Apache License v2.0 with Runtime Library Exception
//
// See https://swift.org/LICENSE.txt for license information
// See https://swift.org/CONTRIBUTORS.txt for the list of Swift project authors
//
//===----------------------------------------------------------------------===//
#ifndef SWIFT_MIGRATOR_REPLACEMENT_H
#define SWIFT_MIGRATOR_REPLACEMENT_H

#include <cstddef>
#include <string>
namespace swift {
namespace migrator {
struct Replacement {
size_t Offset;
size_t Remove;
std::string Text;
bool isRemove() const {
return Remove > 0;
}
bool isInsert() const { return Remove == 0 && !Text.empty(); }
bool isReplace() const { return Remove > 0 && !Text.empty(); }
size_t endOffset() const {
if (isInsert()) {
return Offset + Text.size();
} else {
return Offset + Remove;
}
}
bool operator<(const Replacement &Other) const {
return Offset < Other.Offset;
}
bool operator==(const Replacement &Other) const {
return Offset == Other.Offset && Remove == Other.Remove &&
Text == Other.Text;
}
};
} // end namespace migrator
} // end namespace swift
#endif // SWIFT_MIGRATOR_REPLACEMENT_H
// source: apple/swift, include/swift/Migrator/Replacement.h
<?php
namespace Illuminate\Queue;
use Carbon\Carbon;
use Closure;
use DateTimeInterface;
use Illuminate\Bus\UniqueLock;
use Illuminate\Container\Container;
use Illuminate\Contracts\Cache\Repository as Cache;
use Illuminate\Contracts\Encryption\Encrypter;
use Illuminate\Contracts\Queue\ShouldBeEncrypted;
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueueAfterCommit;
use Illuminate\Queue\Events\JobQueued;
use Illuminate\Queue\Events\JobQueueing;
use Illuminate\Support\Collection;
use Illuminate\Support\InteractsWithTime;
use Illuminate\Support\Str;
use RuntimeException;
use Throwable;
abstract class Queue
{
use InteractsWithTime;
/**
* The IoC container instance.
*
* @var \Illuminate\Container\Container
*/
protected $container;
/**
* The connection name for the queue.
*
* @var string
*/
protected $connectionName;
/**
* The original configuration for the queue.
*
* @var array
*/
protected $config;
/**
* Indicates that jobs should be dispatched after all database transactions have committed.
*
* @var bool
*/
protected $dispatchAfterCommit;
/**
* The create payload callbacks.
*
* @var callable[]
*/
protected static $createPayloadCallbacks = [];
/**
* Push a new job onto the queue.
*
* @param string $queue
* @param string $job
* @param mixed $data
* @return mixed
*/
public function pushOn($queue, $job, $data = '')
{
return $this->push($job, $data, $queue);
}
/**
* Push a new job onto a specific queue after (n) seconds.
*
* @param string $queue
* @param \DateTimeInterface|\DateInterval|int $delay
* @param string $job
* @param mixed $data
* @return mixed
*/
public function laterOn($queue, $delay, $job, $data = '')
{
return $this->later($delay, $job, $data, $queue);
}
/**
* Push an array of jobs onto the queue.
*
* @param array $jobs
* @param mixed $data
* @param string|null $queue
* @return void
*/
public function bulk($jobs, $data = '', $queue = null)
{
foreach ((array) $jobs as $job) {
$this->push($job, $data, $queue);
}
}
/**
* Create a payload string from the given job and data.
*
* @param \Closure|string|object $job
* @param string $queue
* @param mixed $data
* @param \DateTimeInterface|\DateInterval|int|null $delay
* @return string
*
* @throws \Illuminate\Queue\InvalidPayloadException
*/
protected function createPayload($job, $queue, $data = '', $delay = null)
{
if ($job instanceof Closure) {
$job = CallQueuedClosure::create($job);
}
$value = $this->createPayloadArray($job, $queue, $data);
$value['delay'] = isset($delay)
? $this->secondsUntil($delay)
: null;
$payload = json_encode($value, \JSON_UNESCAPED_UNICODE);
if (json_last_error() !== JSON_ERROR_NONE) {
throw new InvalidPayloadException(
'Unable to JSON encode payload. Error ('.json_last_error().'): '.json_last_error_msg(), $value
);
}
return $payload;
}
/**
* Create a payload array from the given job and data.
*
* @param string|object $job
* @param string $queue
* @param mixed $data
* @return array
*/
protected function createPayloadArray($job, $queue, $data = '')
{
return is_object($job)
? $this->createObjectPayload($job, $queue)
: $this->createStringPayload($job, $queue, $data);
}
/**
* Create a payload for an object-based queue handler.
*
* @param object $job
* @param string $queue
* @return array
*/
protected function createObjectPayload($job, $queue)
{
$payload = $this->withCreatePayloadHooks($queue, [
'uuid' => (string) Str::uuid(),
'displayName' => $this->getDisplayName($job),
'job' => 'Illuminate\Queue\CallQueuedHandler@call',
'maxTries' => $this->getJobTries($job),
'maxExceptions' => $job->maxExceptions ?? null,
'failOnTimeout' => $job->failOnTimeout ?? false,
'backoff' => $this->getJobBackoff($job),
'timeout' => $job->timeout ?? null,
'retryUntil' => $this->getJobExpiration($job),
'data' => [
'commandName' => $job,
'command' => $job,
'batchId' => $job->batchId ?? null,
],
'createdAt' => Carbon::now()->getTimestamp(),
]);
try {
$command = $this->jobShouldBeEncrypted($job) && $this->container->bound(Encrypter::class)
? $this->container[Encrypter::class]->encrypt(serialize(clone $job))
: serialize(clone $job);
} catch (Throwable $e) {
throw new RuntimeException(
sprintf('Failed to serialize job of type [%s]: %s', get_class($job), $e->getMessage()),
0,
$e
);
}
return array_merge($payload, [
'data' => array_merge($payload['data'], [
'commandName' => get_class($job),
'command' => $command,
]),
]);
}
/**
* Get the display name for the given job.
*
* @param object $job
* @return string
*/
protected function getDisplayName($job)
{
return method_exists($job, 'displayName')
? $job->displayName()
: get_class($job);
}
/**
* Get the maximum number of attempts for an object-based queue handler.
*
* @param mixed $job
* @return mixed
*/
public function getJobTries($job)
{
if (! method_exists($job, 'tries') && ! isset($job->tries)) {
return;
}
if (is_null($tries = $job->tries ?? $job->tries())) {
return;
}
return $tries;
}
/**
* Get the backoff for an object-based queue handler.
*
* @param mixed $job
* @return mixed
*/
public function getJobBackoff($job)
{
if (! method_exists($job, 'backoff') && ! isset($job->backoff)) {
return;
}
if (is_null($backoff = $job->backoff ?? $job->backoff())) {
return;
}
return Collection::wrap($backoff)
->map(fn ($backoff) => $backoff instanceof DateTimeInterface ? $this->secondsUntil($backoff) : $backoff)
->implode(',');
}
/**
* Get the expiration timestamp for an object-based queue handler.
*
* @param mixed $job
* @return mixed
*/
public function getJobExpiration($job)
{
if (! method_exists($job, 'retryUntil') && ! isset($job->retryUntil)) {
return;
}
$expiration = $job->retryUntil ?? $job->retryUntil();
return $expiration instanceof DateTimeInterface
? $expiration->getTimestamp()
: $expiration;
}
/**
* Determine if the job should be encrypted.
*
* @param object $job
* @return bool
*/
protected function jobShouldBeEncrypted($job)
{
if ($job instanceof ShouldBeEncrypted) {
return true;
}
return isset($job->shouldBeEncrypted) && $job->shouldBeEncrypted;
}
/**
* Create a typical, string based queue payload array.
*
* @param string $job
* @param string $queue
* @param mixed $data
* @return array
*/
protected function createStringPayload($job, $queue, $data)
{
return $this->withCreatePayloadHooks($queue, [
'uuid' => (string) Str::uuid(),
'displayName' => is_string($job) ? explode('@', $job)[0] : null,
'job' => $job,
'maxTries' => null,
'maxExceptions' => null,
'failOnTimeout' => false,
'backoff' => null,
'timeout' => null,
'data' => $data,
'createdAt' => Carbon::now()->getTimestamp(),
]);
}
/**
* Register a callback to be executed when creating job payloads.
*
* @param callable|null $callback
* @return void
*/
public static function createPayloadUsing($callback)
{
if (is_null($callback)) {
static::$createPayloadCallbacks = [];
} else {
static::$createPayloadCallbacks[] = $callback;
}
}
/**
* Create the given payload using any registered payload hooks.
*
* @param string $queue
* @param array $payload
* @return array
*/
protected function withCreatePayloadHooks($queue, array $payload)
{
if (! empty(static::$createPayloadCallbacks)) {
foreach (static::$createPayloadCallbacks as $callback) {
$payload = array_merge($payload, $callback($this->getConnectionName(), $queue, $payload));
}
}
return $payload;
}
/**
* Enqueue a job using the given callback.
*
* @param \Closure|string|object $job
* @param string $payload
* @param string|null $queue
* @param \DateTimeInterface|\DateInterval|int|null $delay
* @param callable $callback
* @return mixed
*/
protected function enqueueUsing($job, $payload, $queue, $delay, $callback)
{
if ($this->shouldDispatchAfterCommit($job) &&
$this->container->bound('db.transactions')) {
if ($job instanceof ShouldBeUnique) {
$this->container->make('db.transactions')->addCallbackForRollback(
function () use ($job) {
(new UniqueLock($this->container->make(Cache::class)))->release($job);
}
);
}
return $this->container->make('db.transactions')->addCallback(
function () use ($queue, $job, $payload, $delay, $callback) {
$this->raiseJobQueueingEvent($queue, $job, $payload, $delay);
return tap($callback($payload, $queue, $delay), function ($jobId) use ($queue, $job, $payload, $delay) {
$this->raiseJobQueuedEvent($queue, $jobId, $job, $payload, $delay);
});
}
);
}
$this->raiseJobQueueingEvent($queue, $job, $payload, $delay);
return tap($callback($payload, $queue, $delay), function ($jobId) use ($queue, $job, $payload, $delay) {
$this->raiseJobQueuedEvent($queue, $jobId, $job, $payload, $delay);
});
}
/**
* Determine if the job should be dispatched after all database transactions have committed.
*
* @param \Closure|string|object $job
* @return bool
*/
protected function shouldDispatchAfterCommit($job)
{
if ($job instanceof ShouldQueueAfterCommit) {
return ! (isset($job->afterCommit) && $job->afterCommit === false);
}
if (! $job instanceof Closure && is_object($job) && isset($job->afterCommit)) {
return $job->afterCommit;
}
return $this->dispatchAfterCommit ?? false;
}
/**
* Raise the job queueing event.
*
* @param string $queue
* @param \Closure|string|object $job
* @param string $payload
* @param \DateTimeInterface|\DateInterval|int|null $delay
* @return void
*/
protected function raiseJobQueueingEvent($queue, $job, $payload, $delay)
{
if ($this->container->bound('events')) {
$delay = ! is_null($delay) ? $this->secondsUntil($delay) : $delay;
$this->container['events']->dispatch(new JobQueueing($this->connectionName, $queue, $job, $payload, $delay));
}
}
/**
* Raise the job queued event.
*
* @param string|null $queue
* @param string|int|null $jobId
* @param \Closure|string|object $job
* @param string $payload
* @param \DateTimeInterface|\DateInterval|int|null $delay
* @return void
*/
protected function raiseJobQueuedEvent($queue, $jobId, $job, $payload, $delay)
{
if ($this->container->bound('events')) {
$delay = ! is_null($delay) ? $this->secondsUntil($delay) : $delay;
$this->container['events']->dispatch(new JobQueued($this->connectionName, $queue, $jobId, $job, $payload, $delay));
}
}
/**
* Get the connection name for the queue.
*
* @return string
*/
public function getConnectionName()
{
return $this->connectionName;
}
/**
* Set the connection name for the queue.
*
* @param string $name
* @return $this
*/
public function setConnectionName($name)
{
$this->connectionName = $name;
return $this;
}
/**
* Get the queue configuration array.
*
* @return array
*/
public function getConfig()
{
return $this->config;
}
/**
* Set the queue configuration array.
*
* @param array $config
* @return $this
*/
public function setConfig(array $config)
{
$this->config = $config;
return $this;
}
/**
* Get the container instance being used by the connection.
*
* @return \Illuminate\Container\Container
*/
public function getContainer()
{
return $this->container;
}
/**
* Set the IoC container instance.
*
* @param \Illuminate\Container\Container $container
* @return void
*/
public function setContainer(Container $container)
{
$this->container = $container;
}
} | php | github | https://github.com/laravel/framework | src/Illuminate/Queue/Queue.php |
from django.test import TestCase
from models import Album, Image
class AlbumModelTests(TestCase):
def setUp(self):
a, created = Album.objects.get_or_create(title="lion", public=True)
# a.save()
b, created = Album.objects.get_or_create(title="cat", public=True)
# b.save()
    def test_getAlbumByID(self):
        # look up the primary key instead of hard-coding it; auto-generated
        # ids are not stable across test databases (the old asserts claimed
        # the 'cat' album was both id 2 and id 4)
        album_id = Album.objects.get(title='cat').id
        album = Album.objects.get(id=album_id)
        print album.__dict__
        self.assertEqual(album.title, 'cat')
    def test_getAlbumByTitle(self):
        album = Album.objects.get(title='cat')
        print album.__dict__
        self.assertEqual(album.title, 'cat')
        albums = Album.objects.all()
        self.assertEqual(len(albums), 2)
# class ImageModelTests(TestCase):
#
# def test_getImagesByAlbum(self):
#
# album = Album.objects.get(title=category_name_slug)
# context_dict['category_name'] = category.name
#
# # Retrieve all of the associated pages.
# # Note that filter returns >= 1 model instance.
# pages = Page.objects.filter(category=category)
#
#
# images =
# car = Car(make='Acura',model='TL', kilometers=74673, year=2012, color='White', engine_size=3.7, drivetrain='AWD', transmition='MA')
# car.save()
#
# self.assertEqual(car.drivetrain, 'AWD')
# self.assertEqual(car.transmition, 'MA') | unknown | codeparrot/codeparrot-clean | ||
import asyncio
import hastebin
import discord
from discord.ext import commands
from clembot.core.logs import init_loggers
from itertools import chain, cycle
import random
import re
class TextUtil:
# Convert an arbitrary string into something which
# is acceptable as a Discord channel name.
@staticmethod
def sanitize(name):
# Remove all characters other than alphanumerics,
# dashes, underscores, and spaces
ret = re.sub(r"[^a-zA-Z0-9 _\-]", "", name)
# Replace spaces with dashes
ret = ret.replace(" ", "-")
return ret
class EmbedUtil:
def __init__(self):
return
@staticmethod
async def message(channel, description, title=None, footer=None, user=None):
try:
error_message = "The output contains more than 2000 characters."
user_mention = ""
if user:
user_mention = f"Beep Beep! **{user.display_name}** "
            if len(description) >= 2000:
                # the over-length embed was previously built and discarded; send it instead
                error_embed = discord.Embed(description=error_message, colour=discord.Color.red())
                return await channel.send(embed=error_embed)
message_embed = discord.Embed(description=f"{user_mention}{description}", colour=discord.Colour.green(), title=title)
if footer:
message_embed.set_footer(text=footer)
return await channel.send(embed=message_embed)
except Exception as error:
print(error)
@staticmethod
async def error(channel, description, user=None):
color = discord.Colour.red()
user_mention = ""
if user:
user_mention = f"Beep Beep! **{user.display_name}** "
error_embed = discord.Embed(description=f"{user_mention}{description}", colour=color)
return await channel.send(embed=error_embed)
class RemoveComma(commands.Converter):
async def convert(self, ctx, argument):
return argument.replace(",", " ").strip()
class HandleAngularBrackets(commands.Converter):
async def convert(self, ctx, argument):
if argument.__contains__("<") or argument.__contains__(">"):
await Utilities._send_error_message(ctx.channel,f"Beep Beep! **{ctx.message.author.display_name}**, **< >** just represents the placeholder. You can provide the values directly!")
return argument.replace("<", "").replace(">", "").strip()
class Utilities(commands.Cog):
logger = init_loggers()
def __init__(self):
return
numbers = {"0": ":zero:", "1": ":one:", "2": ":two:", "3": ":three:", "4": ":four:", "5": ":five:", "6": ":six:", "7": ":seven:", "8": ":eight:", "9": ":nine:"}
def trim_to(self, text, length, delimiter=","):
if len(text) == 0:
return "None"
if text and delimiter:
return text[:text.rfind(delimiter, 0, length)] + " ** and more.**" if len(text) > length else text
return text
    def emojify_numbers(self, number):
        number_emoji = ""
        # reversing the digit string twice is a no-op, so walk the digits in order
        for digit in str(number):
            emoji = self.numbers.get(digit)
            if not emoji:
                emoji = ":regional_indicator_" + digit.lower() + ":"
            number_emoji = number_emoji + emoji
        return number_emoji
def _normalize(self, emoji):
initial_emoji = emoji
if isinstance(emoji, discord.Reaction):
emoji = emoji.emoji
if isinstance(emoji, discord.Emoji):
emoji = ':%s:%s' % (emoji.name, emoji.id)
elif isinstance(emoji, discord.PartialEmoji):
emoji = emoji._as_reaction()
elif isinstance(emoji, str):
pass
if emoji.count(':') == 1 and not emoji.startswith(':'):
emoji = f":{emoji}"
if emoji.__contains__(">") and emoji.__contains__("<"):
emoji = emoji.replace('<','').replace('>','')
return emoji
def _demojify(self, emoji):
# convert emoji to id
if isinstance(emoji, discord.Reaction):
emoji = emoji.emoji.id
if isinstance(emoji, discord.Emoji):
emoji = emoji.id
elif isinstance(emoji, discord.PartialEmoji):
emoji = emoji.id if emoji.id else emoji.name
elif isinstance(emoji, str):
pass
return emoji
def _emojify(self, emoji):
if emoji.__contains__(">") and emoji.__contains__("<"):
emoji = emoji.replace('<', '').replace('>', '')
return emoji
@classmethod
def _uuid(cls, id):
try:
return '%x' % (hash(id) % 10 ** 8)
except Exception as error:
print(error)
return id
@classmethod
async def _send_error_message(cls, channel, description, user=None):
color = discord.Colour.red()
user_mention = ""
if user:
user_mention = f"Beep Beep! **{user.display_name}** "
error_embed = discord.Embed(description=f"{user_mention}{description}", colour=color)
return await channel.send(embed=error_embed)
@classmethod
async def message(cls, destination, description, user=None):
color = discord.Colour.green()
user_mention = ""
if user:
user_mention = f"Beep Beep! **{user.display_name}** "
error_embed = discord.Embed(description=f"{user_mention}{description}", colour=color)
return await destination.send(embed=error_embed)
@classmethod
async def message_as_text(cls, channel, description):
return await channel.send(description)
@classmethod
async def error(cls, channel, description, user=None):
color = discord.Colour.red()
user_mention = ""
if user:
user_mention = f"Beep Beep! **{user.display_name}** "
error_message = f"{user_mention}{description}"
error_embed = discord.Embed(description=f"{error_message}", colour=color)
cls.logger.error(error_message)
return await channel.send(embed=error_embed)
async def _send_message(self, channel, description, title=None, footer=None, user=None):
try:
error_message = "The output contains more than 2000 characters."
user_mention = ""
if user:
user_mention = f"Beep Beep! **{user.display_name}** "
            if len(description) >= 2000:
                # report the over-length output; the old code built this embed,
                # discarded it, and referenced `color` before it was assigned
                error_embed = discord.Embed(description=error_message, colour=discord.Colour.red())
                return await channel.send(embed=error_embed)
            color = discord.Colour.green()
message_embed = discord.Embed(description=f"{user_mention}{description}", colour=color, title=title)
if footer:
message_embed.set_footer(text=footer)
return await channel.send(embed=message_embed)
except Exception as error:
print(error)
@classmethod
async def _send_embed(cls, channel, description=None, title=None, additional_fields={}, footer=None):
embed = discord.Embed(description=description, colour=discord.Colour.gold(), title=title)
for label, value in additional_fields.items():
if value:
embed.add_field(name="**{0}**".format(label), value=value, inline=False)
if footer:
embed.set_footer(text=footer)
try:
return await channel.send(embed=embed)
except Exception as error:
return await channel.send(error)
@commands.command(name="export")
async def _export(self, ctx):
return await self._send_message(ctx.channel, "Beep Beep! **{}**, This feature is under-development!".format(ctx.message.author.display_name))
print("_export() called!")
raid_dict = ctx.bot.guild_dict[ctx.guild.id]['raidchannel_dict'][ctx.channel.id]
channel_mentions = ctx.message.raw_channel_mentions
if len(channel_mentions) < 1:
await self._send_error_message(ctx.channel, "Beep Beep! **{}**, Please provide the channel reference to export the details!".format(ctx.message.author.display_name))
@commands.command(name="clean_content")
async def _clean_content(self, message):
message_content = {}
content_without_mentions = message.content
for mention in message.mentions:
mention_text = mention.mention.replace("!", "")
content_without_mentions = content_without_mentions.replace("<@!", "<@").replace(mention_text, '')
# remove extra spaces
message_content['content_without_mentions'] = re.sub(' +', ' ', content_without_mentions)
return message_content
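The mention-stripping in `_clean_content` above can be checked in isolation; a small sketch restating the same replace-then-collapse steps (the message text and id are made up for illustration):

```python
import re

# Normalise nickname mentions (<@!id>) to plain mentions (<@id>), drop the
# mention text, then collapse the leftover runs of spaces -- the same steps
# _clean_content performs on a real discord.Message.
content = "hello <@!12345> there   friend"
content = content.replace("<@!", "<@").replace("<@12345>", "")
content = re.sub(' +', ' ', content)
print(content)  # -> hello there friend
```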
@classmethod
def get_help_embed(self, description, usage, available_value_title, available_values, mode="message"):
if mode == "message":
color = discord.Colour.green()
else:
color = discord.Colour.red()
help_embed = discord.Embed( description="**{0}**".format(description), colour=color)
help_embed.add_field(name="**Usage :**", value = "**{0}**".format(usage))
help_embed.add_field(name="**{0} :**".format(available_value_title), value=_("**{0}**".format(", ".join(available_values))), inline=False)
return help_embed
@classmethod
async def _send_error_message_and_cleanup(self, channel, message, user):
log_message = await self._send_error_message(channel, message, user=user)
await asyncio.sleep(8)
await log_message.delete()
@classmethod
async def get_image_embed(cls, channel, image_url):
embed = discord.Embed(colour=channel.guild.me.colour)
embed.set_thumbnail(url=image_url)
return await channel.send(embed=embed)
async def ask(message, destination, user_list=None, *, react_list=['✅', '❎']):
        if user_list and not isinstance(user_list, list):
            # `type(x) != __builtins__.list` is unreliable (__builtins__ may be
            # a module or a dict depending on import context)
            user_list = [user_list]
        def check(reaction, user):
            if user_list:
                return (user.id in user_list) and (reaction.message.id == message.id) and (reaction.emoji in react_list)
            return (user.id != message.guild.me.id) and (reaction.message.id == message.id) and (reaction.emoji in react_list)
for r in react_list:
await asyncio.sleep(0.25)
await message.add_reaction(r)
try:
reaction, user = await Clembot.wait_for('reaction_add', check=check, timeout=60)
return reaction, user
except asyncio.TimeoutError:
await message.clear_reactions()
return
@classmethod
async def ask_confirmation(cls, ctx, message, rusure_message, yes_message, no_message, timed_out_message):
author = message.author
channel = message.channel
reaction_list = ['✅', '❎']
# reaction_list = ['❔', '✅', '❎']
rusure = await ctx.channel.send(f"Beep Beep! {rusure_message}")
await rusure.add_reaction( "✅") # checkmark
await rusure.add_reaction( "❎") # cross
def check(react, user):
if user.id != author.id:
return False
return True
# res = await Clembot.wait_for_reaction(reaction_list, message=rusure, check=check, timeout=60)
try:
reaction, user = await ctx.bot.wait_for('reaction_add', check=check, timeout=10)
except asyncio.TimeoutError:
await rusure.delete()
confirmation = await channel.send(_("Beep Beep! {message}".format(message=timed_out_message)))
await asyncio.sleep(1)
await confirmation.delete()
return False
if reaction.emoji == "❎":
await rusure.delete()
confirmation = await channel.send(_("Beep Beep! {message}".format(message=no_message)))
await asyncio.sleep(1)
await confirmation.delete()
return False
elif reaction.emoji == "✅":
await rusure.delete()
confirmation = await channel.send(_("Beep Beep! {message}".format(message=yes_message)))
await asyncio.sleep(1)
await confirmation.delete()
return True
@classmethod
async def send_to_hastebin(cls, destination, whatever):
whatever = whatever.encode('ascii', errors='replace').decode('utf-8')
await Utilities.message(destination, hastebin.post(whatever))
# class SnakeDraft:
#
# @staticmethod
# def draft(cls, n, new_i = None):
# if new_i:
# i = new_i
# while True:
# for i in range(1, n + 1):
# yield i
# for i in range(n, 0, -1):
# yield i
def draft(n, new_i = None):
if new_i:
i = new_i
while True:
for i in range(1, n + 1):
yield i
for i in range(n, 0, -1):
yield i
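The `draft` generator above yields the snake ordering used for drafts; a self-contained restatement for a quick sanity check:

```python
def snake(n):
    # same shape as draft() above: count up 1..n, then back down n..1, forever
    while True:
        for i in range(1, n + 1):
            yield i
        for i in range(n, 0, -1):
            yield i

turn = snake(3)
first_cycle = [next(turn) for _ in range(6)]
print(first_cycle)  # a 3-player snake goes 1, 2, 3 then 3, 2, 1
```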
def draft_next(size_of_team, players_already_drafted, current_player_index):
direction = 0
next_index = players_already_drafted % size_of_team
if (players_already_drafted % (size_of_team * 2)) > size_of_team - 1:
direction = 1
next_index = size_of_team - next_index - 1
print(f"({size_of_team}, {players_already_drafted}, {current_player_index}) {direction} => {next_index}")
return next_index
def get_next(team_size):
all_players = ['P1', 'P2', 'P3', 'P4', 'P5', 'P6', 'P7', 'P8', 'P1', 'P2', 'P3', 'P4', 'P5', 'P6', 'P7', 'P8', 'P1', 'P2', 'P3', 'P4', 'P5', 'P6', 'P7', 'P8']
players = all_players[:team_size]
turn = draft(players.__len__())
text = ""
for i in range(1, players.__len__() * 6 + 1):
index = next(turn)
draft_index = draft_next(players.__len__(), i - 1, index - 1)
print(f"({i}) \t- {index - 1} => {draft_index} {index - 1 == draft_index}")
return 1
def slot(n, x): # 0.03638 sec for 100,000x
    # the previous hard-coded table only covered n <= 12 and raised
    # IndexError for larger team sizes; build the 1..n..1 snake cycle for any n
    number_list = list(range(1, n + 1)) + list(range(n, 0, -1))
    return number_list[x % (n * 2)]
def test_slot():
print(slot(18,46))
def test_shuffle():
my_list = [1, 2, 3, 4, 5]
list_copy = list(my_list)
random.shuffle(list_copy)
random.shuffle(list_copy)
print(my_list)
print(list_copy)
class ListIterator:
_index = -1
_item_list = []
__list_size = 0
def __init__(self, item_list, current_index=-1):
self._item_list = item_list
self.__list_size = len(self._item_list)
self._index = current_index if current_index == -1 else current_index - 1
def current(self):
return self._item_list[self._index]
def next(self):
self._index += 1
if self._index >= self.__list_size:
self._index = 0
return self._item_list[self._index]
def prev(self):
        self._index -= 1
if self._index < 0:
self._index = len(self._item_list) - 1
return self._item_list[self._index]
def test_cycle():
my_list = [1, 2, 3, 4, 5]
pool = cycle(my_list)
for i in range(1, 40):
print(f"{i} -> {pool.__next__()}")
def test_myclass():
my_list = [1, 2, 3, 4, 5]
random.shuffle(my_list)
print(my_list)
pool = ListIterator(my_list)
for i in range(1, 40):
print(f"{i} -> {pool.next()}")
if i % 5 == 0:
print(f"{i} -> {pool.prev()}")
random.shuffle(my_list)
print(my_list)
pool = ListIterator(my_list, 3)
for i in range(1, 40):
print(f"{i} -> {pool.next()}")
if i % 5 == 0:
print(f"{i} -> {pool.prev()}")
def setup(bot):
bot.add_cog(Utilities())
def main():
get_next(3)
print(f"[utilities.py] main() finished.")
#main() | unknown | codeparrot/codeparrot-clean | ||
"""
Django settings for firstblog1 project.
For more information on this file, see
https://docs.djangoproject.com/en/1.6/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.6/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.6/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '#kwgaqht6xtl@yhpniw_6jeqlpjs$%qhjxz2x_3#st=*9n&5_f'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
TEMPLATE_DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'blog1',
    # the admin is already enabled above; listing it twice is an error
    # uncomment the next line to enable admin documentation
'django.contrib.admindocs',
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
ROOT_URLCONF = 'firstblog1.urls'
TEMPLATE_DIRS = (
    "blog1/templates",
)
WSGI_APPLICATION = 'firstblog1.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.6/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'firstblog1', #Name of your Database
'USER': 'root', #Not used with sqlite3
'PASSWORD': '12345', #Password for your MySQL,not used with sqlite3
'HOST': '', #set to empty string for localhost.Not used with sqlite3
'PORT': '', #set to empty string for default.Not used with sqlite3
}
}
# Internationalization
# https://docs.djangoproject.com/en/1.6/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.6/howto/static-files/
STATIC_URL = '/static/' | unknown | codeparrot/codeparrot-clean | ||
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
*This model was released on 2021-10-15 and added to Hugging Face Transformers on 2021-12-07.*
# mLUKE
<div class="flex flex-wrap space-x-1">
<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
</div>
## Overview
The mLUKE model was proposed in [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://huggingface.co/papers/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. It's a multilingual extension
of the [LUKE model](https://huggingface.co/papers/2010.01057) trained on the basis of XLM-RoBERTa.
It is based on XLM-RoBERTa and adds entity embeddings, which helps improve performance on various downstream tasks
involving reasoning about entities, such as named entity recognition, extractive question answering, relation
classification, and cloze-style knowledge completion.
The abstract from the paper is the following:
*Recent studies have shown that multilingual pretrained language models can be effectively improved with cross-lingual
alignment information from Wikipedia entities. However, existing methods only exploit entity information in pretraining
and do not explicitly use entities in downstream tasks. In this study, we explore the effectiveness of leveraging
entity representations for downstream cross-lingual tasks. We train a multilingual language model with 24 languages
with entity representations and show the model consistently outperforms word-based pretrained models in various
cross-lingual transfer tasks. We also analyze the model and the key insight is that incorporating entity
representations into the input allows us to extract more language-agnostic features. We also evaluate the model with a
multilingual cloze prompt task with the mLAMA dataset. We show that entity-based prompt elicits correct factual
knowledge more likely than using only word representations.*
This model was contributed by [ryo0634](https://huggingface.co/ryo0634). The original code can be found [here](https://github.com/studio-ousia/luke).
## Usage tips
One can directly plug in the weights of mLUKE into a LUKE model, like so:
```python
from transformers import LukeModel
model = LukeModel.from_pretrained("studio-ousia/mluke-base")
```
Note that mLUKE has its own tokenizer, [`MLukeTokenizer`]. You can initialize it as follows:
```python
from transformers import MLukeTokenizer
tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base")
```
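Entity-aware calls to LUKE-style tokenizers take character-level entity spans alongside the text. The sentence below is made up for illustration; the snippet only shows the input format and does not load the model:

```python
text = "mLUKE was trained on Wikipedia entities across 24 languages."
entities = ["mLUKE", "Wikipedia"]

# Each entity span is a (start, end) character offset into the text.
entity_spans = [(text.index(e), text.index(e) + len(e)) for e in entities]

# Every span slices back out exactly the mention it points at.
for (start, end), entity in zip(entity_spans, entities):
    assert text[start:end] == entity
```

These spans would then be passed as `tokenizer(text, entity_spans=entity_spans)`, mirroring the LUKE documentation linked above.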
<Tip>
As mLUKE's architecture is equivalent to that of LUKE, one can refer to [LUKE's documentation page](luke) for all
tips, code examples and notebooks.
</Tip>
## MLukeTokenizer
[[autodoc]] MLukeTokenizer
- __call__
- save_vocabulary | unknown | github | https://github.com/huggingface/transformers | docs/source/en/model_doc/mluke.md |
from __future__ import absolute_import
from __future__ import print_function
# This file is part of BurnMan - a thermoelastic and thermodynamic toolkit for the Earth and Planetary Sciences
# Copyright (C) 2012 - 2015 by the BurnMan team, released under the GNU
# GPL v2 or later.
import argparse
import os.path
import sys
if not os.path.exists('burnman') and os.path.exists('../burnman'):
sys.path.insert(1, os.path.abspath('..'))
from burnman.perplex import create_perplex_table
parser = argparse.ArgumentParser(description='Call werami to create a burnman-readable tab file.')
parser.add_argument('--werami_path', metavar='path', type=str, nargs='+', required=True,
help='The path to werami')
parser.add_argument('--project', metavar='project name', type=str, nargs='+', required=True,
help='The name of the project file (without the suffix)')
parser.add_argument('--outfile', metavar='output file', type=str, nargs='+', required=True,
help='The name of the output table file')
parser.add_argument('--n_pressures', type=int, nargs=1, required=True,
help='The number of pressure steps in the grid')
parser.add_argument('--n_temperatures', type=int, nargs=1, required=True,
                    help='The number of temperature steps in the grid')
parser.add_argument('--pressure_range', type=float, nargs=2,
help='Minimum and maximum values of pressure (Pa; optional)')
parser.add_argument('--temperature_range', type=float, nargs=2,
help='Minimum and maximum values of temperature (K; optional)')
args = parser.parse_args()
# argparse assigns None to optional arguments that were not supplied, so
# pressure_range and temperature_range need no hasattr() fallback here.
create_perplex_table(args.werami_path, args.project[0], args.outfile[0], args.n_pressures[0], args.n_temperatures[0], args.pressure_range, args.temperature_range) | unknown | codeparrot/codeparrot-clean | ||
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import threading
import contextlib
from socorro.external.hbase import hbase_client
from configman.config_manager import RequiredConfig
from configman import Namespace
class HBaseSingleConnectionContext(RequiredConfig):
"""a configman compliant class for setup of HBase connections
DO NOT SHARE HBASE CONNECTIONS BETWEEN THREADS
"""
#--------------------------------------------------------------------------
# configman parameter definition section
# here we're setting up the minimal parameters required for connecting
required_config = Namespace()
required_config.add_option(
'number_of_retries',
doc='Max. number of retries when fetching from hbaseClient',
default=0,
reference_value_from='resource.hbase'
)
required_config.add_option(
'hbase_host',
doc='Host to HBase server',
default='localhost',
reference_value_from='resource.hbase',
)
required_config.add_option(
'hbase_port',
doc='Port to HBase server',
default=9090,
reference_value_from='resource.hbase',
)
required_config.add_option(
'hbase_timeout',
doc='timeout in milliseconds for an HBase connection',
default=5000,
reference_value_from='resource.hbase',
)
required_config.add_option(
'temporary_file_system_storage_path',
        doc='a local filesystem path where dumps are stored temporarily '
            'during processing',
default='/home/socorro/temp',
reference_value_from='resource.hbase',
)
required_config.add_option(
'dump_file_suffix',
doc='the suffix used to identify a dump file (for use in temp files)',
default='.dump',
reference_value_from='resource.hbase',
)
#--------------------------------------------------------------------------
def __init__(self, config, local_config=None):
"""Initialize the parts needed to start making database connections
parameters:
config - the complete config for the app. If a real app, this
would be where a logger or other resources could be
found.
local_config - this is the namespace within the complete config
where the actual database parameters are found"""
super(HBaseSingleConnectionContext, self).__init__()
self.config = config
if local_config is None:
local_config = config
dummy_connection = hbase_client.HBaseConnectionForCrashReports(
local_config.hbase_host,
local_config.hbase_port,
local_config.hbase_timeout,
logger=self.config.logger
)
dummy_connection.close()
self.operational_exceptions = \
dummy_connection.hbaseThriftExceptions
self.operational_exceptions += \
(hbase_client.NoConnectionException,)
self.conditional_exceptions = ()
#--------------------------------------------------------------------------
def connection(self, name_unused=None):
"""return a new database connection
parameters:
name_unused - optional named connections. Used by the
derived class
"""
#self.config.logger.debug('creating new HBase connection')
return hbase_client.HBaseConnectionForCrashReports(
self.config.hbase_host,
self.config.hbase_port,
self.config.hbase_timeout,
logger=self.config.logger
)
#--------------------------------------------------------------------------
@contextlib.contextmanager
def __call__(self, name=None):
"""returns a database connection wrapped in a contextmanager.
The context manager will assure that the connection is closed but will
not try to commit or rollback lingering transactions.
parameters:
name - an optional name for the database connection"""
conn = self.connection(name)
try:
#self.config.logger.debug('connection HBase acquired')
yield conn
finally:
self.close_connection(conn)
#--------------------------------------------------------------------------
def close_connection(self, connection, force=False):
"""close the connection passed in.
This function exists to allow derived classes to override the closing
behavior.
parameters:
connection - the database connection object
force - unused boolean to force closure; used in derived classes
"""
#self.config.logger.debug('connection HBase closed')
connection.close()
#--------------------------------------------------------------------------
def close(self):
"""close any pooled or cached connections. Since this base class
object does no caching, there is no implementation required. Derived
classes may implement it."""
pass
#--------------------------------------------------------------------------
def is_operational_exception(self, msg):
"""return True if a conditional exception is actually an operational
error. Return False if it's a genuine error that should probably be
raised and propagate up.
Some conditional exceptions might be actually be some form of
operational exception "labelled" wrong by the psycopg2 code error
handler.
"""
return False
#--------------------------------------------------------------------------
def force_reconnect(self):
pass
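The `__call__` context manager above follows the standard `contextlib` shape: yield the connection, close it in `finally`. A stripped-down, self-contained sketch of that pattern (the class and connection names here are illustrative stand-ins, not Socorro's API):

```python
import contextlib

class SketchConnection(object):
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

class SketchConnectionContext(object):
    @contextlib.contextmanager
    def __call__(self):
        conn = SketchConnection()
        try:
            yield conn       # hand the live connection to the caller
        finally:
            conn.close()     # always runs, even if the with-body raises

ctx = SketchConnectionContext()
with ctx() as conn:
    assert conn.closed is False
assert conn.closed is True   # closed as soon as the with-block exits
```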
#==============================================================================
class HBaseConnectionContextPooled(HBaseSingleConnectionContext):
"""a configman compliant class that pools HBase database connections"""
#--------------------------------------------------------------------------
def __init__(self, config, local_config=None):
super(HBaseConnectionContextPooled, self).__init__(config,
local_config)
#self.config.logger.debug("HBaseConnectionContextPooled - "
# "setting up connection pool")
self.pool = {}
#--------------------------------------------------------------------------
def connection(self, name=None):
"""return a named connection.
This function will return a named connection by either finding one
in its pool by the name or creating a new one. If no name is given,
it will use the name of the current executing thread as the name of
the connection.
parameters:
name - a name as a string
"""
if not name:
name = self.config.executor_identity()
if name in self.pool:
#self.config.logger.debug('connection: %s', name)
return self.pool[name]
self.pool[name] = \
super(HBaseConnectionContextPooled, self).connection(name)
return self.pool[name]
#--------------------------------------------------------------------------
def close_connection(self, connection, force=False):
"""overriding the baseclass function, this routine will decline to
close a connection at the end of a transaction context. This allows
for reuse of connections."""
if force:
try:
(super(HBaseConnectionContextPooled, self)
.close_connection(connection, force))
except self.operational_exceptions:
self.config.logger.error('HBaseConnectionContextPooled - '
'failed closing')
            # only drop the entry that actually matches; the old code deleted
            # whatever `name` the loop last visited when no match was found
            for name, conn in list(self.pool.iteritems()):
                if conn is connection:
                    del self.pool[name]
                    break
#--------------------------------------------------------------------------
def close(self):
"""close all pooled connections"""
self.config.logger.debug("HBasePooled - "
"shutting down connection pool")
for name, conn in self.pool.iteritems():
conn.close()
self.config.logger.debug("HBasePooled - connection %s closed"
% name)
#--------------------------------------------------------------------------
def force_reconnect(self):
pass | unknown | codeparrot/codeparrot-clean | ||
/*
* jdmerge.h
*
* This file was part of the Independent JPEG Group's software:
* Copyright (C) 1994-1996, Thomas G. Lane.
* libjpeg-turbo Modifications:
* Copyright (C) 2020, 2022, D. R. Commander.
* For conditions of distribution and use, see the accompanying README.ijg
* file.
*/
#define JPEG_INTERNALS
#include "jpeglib.h"
#include "jsamplecomp.h"
#ifdef UPSAMPLE_MERGING_SUPPORTED
/* Private subobject */
typedef struct {
struct jpeg_upsampler pub; /* public fields */
/* Pointer to routine to do actual upsampling/conversion of one row group */
void (*upmethod) (j_decompress_ptr cinfo, _JSAMPIMAGE input_buf,
JDIMENSION in_row_group_ctr, _JSAMPARRAY output_buf);
/* Private state for YCC->RGB conversion */
int *Cr_r_tab; /* => table for Cr to R conversion */
int *Cb_b_tab; /* => table for Cb to B conversion */
JLONG *Cr_g_tab; /* => table for Cr to G conversion */
JLONG *Cb_g_tab; /* => table for Cb to G conversion */
/* For 2:1 vertical sampling, we produce two output rows at a time.
* We need a "spare" row buffer to hold the second output row if the
* application provides just a one-row buffer; we also use the spare
* to discard the dummy last row if the image height is odd.
*/
_JSAMPROW spare_row;
boolean spare_full; /* T if spare buffer is occupied */
JDIMENSION out_row_width; /* samples per output row */
JDIMENSION rows_to_go; /* counts rows remaining in image */
} my_merged_upsampler;
typedef my_merged_upsampler *my_merged_upsample_ptr;
#endif /* UPSAMPLE_MERGING_SUPPORTED */
/* source: github.com/opencv/opencv, 3rdparty/libjpeg-turbo/src/jdmerge.h */
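The `Cr_r_tab`/`Cb_b_tab`/`*_g_tab` fields in the struct above hold precomputed fixed-point lookup tables for YCbCr-to-RGB conversion. A minimal Python sketch of how such tables are built and used (the constants follow the usual JFIF YCbCr equations; the names and the 16-bit scale factor are illustrative, not necessarily libjpeg-turbo's exact layout):

```python
SCALEBITS = 16
ONE_HALF = 1 << (SCALEBITS - 1)

def _fix(x):
    # Round a float constant to SCALEBITS of fixed-point precision.
    return int(x * (1 << SCALEBITS) + 0.5)

# Red and blue depend on a single chroma channel, so the rounding
# constant can be folded into the table entries up front.
cr_r_tab = [(_fix(1.40200) * (i - 128) + ONE_HALF) >> SCALEBITS for i in range(256)]
cb_b_tab = [(_fix(1.77200) * (i - 128) + ONE_HALF) >> SCALEBITS for i in range(256)]
# Green combines both chroma channels, so the tables stay in fixed
# point and the shift happens after the two contributions are summed.
cr_g_tab = [_fix(-0.71414) * (i - 128) for i in range(256)]
cb_g_tab = [_fix(-0.34414) * (i - 128) + ONE_HALF for i in range(256)]

def ycc_to_rgb(y, cb, cr):
    clamp = lambda v: max(0, min(255, v))
    r = clamp(y + cr_r_tab[cr])
    g = clamp(y + ((cb_g_tab[cb] + cr_g_tab[cr]) >> SCALEBITS))
    b = clamp(y + cb_b_tab[cb])
    return (r, g, b)
```

Baking the multiplications into 256-entry tables turns the per-pixel conversion into lookups and adds, which is the point of the `*_tab` members.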
"""
Starts a service to scan in intervals for new devices.
Will emit EVENT_PLATFORM_DISCOVERED whenever a new service has been discovered.
It knows which components handle certain service types and makes sure
they are loaded before EVENT_PLATFORM_DISCOVERED is fired.
"""
import json
from datetime import timedelta
import logging
import voluptuous as vol
from homeassistant import config_entries
from homeassistant.core import callback
from homeassistant.const import EVENT_HOMEASSISTANT_START
import homeassistant.helpers.config_validation as cv
from homeassistant.helpers.event import async_track_point_in_utc_time
from homeassistant.helpers.discovery import async_load_platform, async_discover
import homeassistant.util.dt as dt_util
DOMAIN = 'discovery'
SCAN_INTERVAL = timedelta(seconds=300)
SERVICE_APPLE_TV = 'apple_tv'
SERVICE_DAIKIN = 'daikin'
SERVICE_DECONZ = 'deconz'
SERVICE_DLNA_DMR = 'dlna_dmr'
SERVICE_ENIGMA2 = 'enigma2'
SERVICE_FREEBOX = 'freebox'
SERVICE_HASS_IOS_APP = 'hass_ios'
SERVICE_HASSIO = 'hassio'
SERVICE_HEOS = 'heos'
SERVICE_IGD = 'igd'
SERVICE_KONNECTED = 'konnected'
SERVICE_MOBILE_APP = 'hass_mobile_app'
SERVICE_NETGEAR = 'netgear_router'
SERVICE_OCTOPRINT = 'octoprint'
SERVICE_ROKU = 'roku'
SERVICE_SABNZBD = 'sabnzbd'
SERVICE_SAMSUNG_PRINTER = 'samsung_printer'
SERVICE_TELLDUSLIVE = 'tellstick'
SERVICE_YEELIGHT = 'yeelight'
SERVICE_WEMO = 'belkin_wemo'
SERVICE_WINK = 'wink'
SERVICE_XIAOMI_GW = 'xiaomi_gw'
CONFIG_ENTRY_HANDLERS = {
SERVICE_DAIKIN: 'daikin',
SERVICE_DECONZ: 'deconz',
'google_cast': 'cast',
SERVICE_HEOS: 'heos',
SERVICE_TELLDUSLIVE: 'tellduslive',
'sonos': 'sonos',
SERVICE_IGD: 'upnp',
}
SERVICE_HANDLERS = {
SERVICE_MOBILE_APP: ('mobile_app', None),
SERVICE_HASS_IOS_APP: ('ios', None),
SERVICE_NETGEAR: ('device_tracker', None),
SERVICE_WEMO: ('wemo', None),
SERVICE_HASSIO: ('hassio', None),
SERVICE_APPLE_TV: ('apple_tv', None),
SERVICE_ENIGMA2: ('media_player', 'enigma2'),
SERVICE_ROKU: ('roku', None),
SERVICE_WINK: ('wink', None),
SERVICE_XIAOMI_GW: ('xiaomi_aqara', None),
SERVICE_SABNZBD: ('sabnzbd', None),
SERVICE_SAMSUNG_PRINTER: ('sensor', 'syncthru'),
SERVICE_KONNECTED: ('konnected', None),
SERVICE_OCTOPRINT: ('octoprint', None),
SERVICE_FREEBOX: ('freebox', None),
SERVICE_YEELIGHT: ('yeelight', None),
'panasonic_viera': ('media_player', 'panasonic_viera'),
'plex_mediaserver': ('media_player', 'plex'),
'yamaha': ('media_player', 'yamaha'),
'logitech_mediaserver': ('media_player', 'squeezebox'),
'directv': ('media_player', 'directv'),
'denonavr': ('media_player', 'denonavr'),
'samsung_tv': ('media_player', 'samsungtv'),
'frontier_silicon': ('media_player', 'frontier_silicon'),
'openhome': ('media_player', 'openhome'),
'harmony': ('remote', 'harmony'),
'bose_soundtouch': ('media_player', 'soundtouch'),
'bluesound': ('media_player', 'bluesound'),
'songpal': ('media_player', 'songpal'),
'kodi': ('media_player', 'kodi'),
'volumio': ('media_player', 'volumio'),
'lg_smart_device': ('media_player', 'lg_soundbar'),
'nanoleaf_aurora': ('light', 'nanoleaf'),
}
OPTIONAL_SERVICE_HANDLERS = {
SERVICE_DLNA_DMR: ('media_player', 'dlna_dmr'),
}
MIGRATED_SERVICE_HANDLERS = {
'axis': None,
'esphome': None,
'ikea_tradfri': None,
'homekit': None,
'philips_hue': None
}
DEFAULT_ENABLED = list(CONFIG_ENTRY_HANDLERS) + list(SERVICE_HANDLERS) + \
list(MIGRATED_SERVICE_HANDLERS)
DEFAULT_DISABLED = list(OPTIONAL_SERVICE_HANDLERS) + \
list(MIGRATED_SERVICE_HANDLERS)
CONF_IGNORE = 'ignore'
CONF_ENABLE = 'enable'
CONFIG_SCHEMA = vol.Schema({
vol.Optional(DOMAIN): vol.Schema({
vol.Optional(CONF_IGNORE, default=[]):
vol.All(cv.ensure_list, [vol.In(DEFAULT_ENABLED)]),
vol.Optional(CONF_ENABLE, default=[]):
vol.All(cv.ensure_list, [
vol.In(DEFAULT_DISABLED + DEFAULT_ENABLED)]),
}),
}, extra=vol.ALLOW_EXTRA)
async def async_setup(hass, config):
"""Start a discovery service."""
from netdisco.discovery import NetworkDiscovery
logger = logging.getLogger(__name__)
netdisco = NetworkDiscovery()
already_discovered = set()
# Disable zeroconf logging, it spams
logging.getLogger('zeroconf').setLevel(logging.CRITICAL)
if DOMAIN in config:
# Platforms ignored by config
ignored_platforms = config[DOMAIN][CONF_IGNORE]
# Optional platforms enabled by config
enabled_platforms = config[DOMAIN][CONF_ENABLE]
else:
ignored_platforms = []
enabled_platforms = []
for platform in enabled_platforms:
if platform in DEFAULT_ENABLED:
logger.warning(
"Please remove %s from your discovery.enable configuration "
"as it is now enabled by default",
platform,
)
async def new_service_found(service, info):
"""Handle a new service if one is found."""
if service in MIGRATED_SERVICE_HANDLERS:
return
if service in ignored_platforms:
logger.info("Ignoring service: %s %s", service, info)
return
discovery_hash = json.dumps([service, info], sort_keys=True)
if discovery_hash in already_discovered:
logger.debug("Already discovered service %s %s.", service, info)
return
already_discovered.add(discovery_hash)
if service in CONFIG_ENTRY_HANDLERS:
await hass.config_entries.flow.async_init(
CONFIG_ENTRY_HANDLERS[service],
context={'source': config_entries.SOURCE_DISCOVERY},
data=info
)
return
comp_plat = SERVICE_HANDLERS.get(service)
if not comp_plat and service in enabled_platforms:
comp_plat = OPTIONAL_SERVICE_HANDLERS[service]
# We do not know how to handle this service.
if not comp_plat:
logger.info("Unknown service discovered: %s %s", service, info)
return
logger.info("Found new service: %s %s", service, info)
component, platform = comp_plat
if platform is None:
await async_discover(hass, service, info, component, config)
else:
await async_load_platform(
hass, component, platform, info, config)
async def scan_devices(now):
"""Scan for devices."""
try:
results = await hass.async_add_job(_discover, netdisco)
for result in results:
hass.async_create_task(new_service_found(*result))
except OSError:
logger.error("Network is unreachable")
async_track_point_in_utc_time(
hass, scan_devices, dt_util.utcnow() + SCAN_INTERVAL)
@callback
def schedule_first(event):
"""Schedule the first discovery when Home Assistant starts up."""
async_track_point_in_utc_time(hass, scan_devices, dt_util.utcnow())
hass.bus.async_listen_once(EVENT_HOMEASSISTANT_START, schedule_first)
return True
def _discover(netdisco):
"""Discover devices."""
results = []
try:
netdisco.scan()
for disc in netdisco.discover():
for service in netdisco.get_info(disc):
results.append((disc, service))
finally:
netdisco.stop()
return results
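`new_service_found` above deduplicates discoveries by serializing `[service, info]` with `sort_keys=True` and keeping the resulting strings in a set. A small standalone sketch of that fingerprinting (the `discovery_hash` helper name is illustrative):

```python
import json

def discovery_hash(service, info):
    # sort_keys=True canonicalizes dict key order, so two info dicts
    # with the same contents always produce the same fingerprint,
    # regardless of how the discovery backend ordered the keys.
    return json.dumps([service, info], sort_keys=True)
```

Because the fingerprint is a plain string, membership checks against the `already_discovered` set stay cheap and order-insensitive.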
#!/usr/bin/python
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import os.path
import re
import shutil
import sys
import tempfile
import time
import urlparse
import browserprocess
class LaunchFailure(Exception):
pass
def GetPlatform():
if sys.platform == 'darwin':
platform = 'mac'
elif sys.platform.startswith('linux'):
platform = 'linux'
elif sys.platform in ('cygwin', 'win32'):
platform = 'windows'
else:
raise LaunchFailure('Unknown platform: %s' % sys.platform)
return platform
PLATFORM = GetPlatform()
def SelectRunCommand():
# The subprocess module added support for .kill in Python 2.6
assert (sys.version_info[0] >= 3 or (sys.version_info[0] == 2 and
sys.version_info[1] >= 6))
if PLATFORM == 'linux':
return browserprocess.RunCommandInProcessGroup
else:
return browserprocess.RunCommandWithSubprocess
RunCommand = SelectRunCommand()
def RemoveDirectory(path):
retry = 5
sleep_time = 0.25
while True:
try:
shutil.rmtree(path)
except Exception:
# Windows processes sometime hang onto files too long
if retry > 0:
retry -= 1
time.sleep(sleep_time)
sleep_time *= 2
else:
# No luck - don't mask the error
raise
else:
# succeeded
break
# On Windows, subprocess seems to have an issue with file names that
# contain spaces.
def EscapeSpaces(path):
if PLATFORM == 'windows' and ' ' in path:
return '"%s"' % path
return path
def MakeEnv(options):
env = dict(os.environ)
# Enable PPAPI Dev interfaces for testing.
env['NACL_ENABLE_PPAPI_DEV'] = str(options.enable_ppapi_dev)
if options.debug:
env['NACL_PLUGIN_DEBUG'] = '1'
# env['NACL_SRPC_DEBUG'] = '1'
return env
class BrowserLauncher(object):
WAIT_TIME = 20
WAIT_STEPS = 80
SLEEP_TIME = float(WAIT_TIME) / WAIT_STEPS
def __init__(self, options):
self.options = options
self.profile = None
self.binary = None
self.tool_log_dir = None
def KnownPath(self):
raise NotImplementedError
def BinaryName(self):
raise NotImplementedError
def CreateProfile(self):
raise NotImplementedError
def MakeCmd(self, url, host, port):
raise NotImplementedError
def CreateToolLogDir(self):
self.tool_log_dir = tempfile.mkdtemp(prefix='vglogs_')
return self.tool_log_dir
def FindBinary(self):
if self.options.browser_path:
return self.options.browser_path
else:
path = self.KnownPath()
if path is None or not os.path.exists(path):
raise LaunchFailure('Cannot find the browser directory')
binary = os.path.join(path, self.BinaryName())
if not os.path.exists(binary):
raise LaunchFailure('Cannot find the browser binary')
return binary
def WaitForProcessDeath(self):
self.browser_process.Wait(self.WAIT_STEPS, self.SLEEP_TIME)
def Cleanup(self):
self.browser_process.Kill()
RemoveDirectory(self.profile)
if self.tool_log_dir is not None:
RemoveDirectory(self.tool_log_dir)
def MakeProfileDirectory(self):
self.profile = tempfile.mkdtemp(prefix='browserprofile_')
return self.profile
def SetStandardStream(self, env, var_name, redirect_file, is_output):
if redirect_file is None:
return
file_prefix = 'file:'
dev_prefix = 'dev:'
debug_warning = 'DEBUG_ONLY:'
# logic must match src/trusted/service_runtime/nacl_resource.*
# resource specification notation. file: is the default
# interpretation, so we must have an exhaustive list of
# alternative schemes accepted. if we remove the file-is-default
# interpretation, replace with
# is_file = redirect_file.startswith(file_prefix)
# and remove the list of non-file schemes.
is_file = (not (redirect_file.startswith(dev_prefix) or
redirect_file.startswith(debug_warning + dev_prefix)))
if is_file:
if redirect_file.startswith(file_prefix):
bare_file = redirect_file[len(file_prefix):]
else:
bare_file = redirect_file
# why always abspath? does chrome chdir or might it in the
# future? this means we do not test/use the relative path case.
redirect_file = file_prefix + os.path.abspath(bare_file)
else:
bare_file = None # ensure error if used without checking is_file
env[var_name] = redirect_file
if is_output:
# sel_ldr appends program output to the file so we need to clear it
# in order to get the stable result.
if is_file:
if os.path.exists(bare_file):
os.remove(bare_file)
parent_dir = os.path.dirname(bare_file)
# parent directory may not exist.
if not os.path.exists(parent_dir):
os.makedirs(parent_dir)
def Launch(self, cmd, env):
browser_path = cmd[0]
if not os.path.exists(browser_path):
raise LaunchFailure('Browser does not exist %r' % browser_path)
if not os.access(browser_path, os.X_OK):
raise LaunchFailure('Browser cannot be executed %r (Is this binary on an '
'NFS volume?)' % browser_path)
if self.options.sel_ldr:
env['NACL_SEL_LDR'] = self.options.sel_ldr
if self.options.sel_ldr_bootstrap:
env['NACL_SEL_LDR_BOOTSTRAP'] = self.options.sel_ldr_bootstrap
if self.options.irt_library:
env['NACL_IRT_LIBRARY'] = self.options.irt_library
self.SetStandardStream(env, 'NACL_EXE_STDIN',
self.options.nacl_exe_stdin, False)
self.SetStandardStream(env, 'NACL_EXE_STDOUT',
self.options.nacl_exe_stdout, True)
self.SetStandardStream(env, 'NACL_EXE_STDERR',
self.options.nacl_exe_stderr, True)
print 'ENV:', ' '.join(['='.join(pair) for pair in env.iteritems()])
print 'LAUNCHING: %s' % ' '.join(cmd)
sys.stdout.flush()
self.browser_process = RunCommand(cmd, env=env)
def IsRunning(self):
return self.browser_process.IsRunning()
def GetReturnCode(self):
return self.browser_process.GetReturnCode()
def Run(self, url, host, port):
self.binary = EscapeSpaces(self.FindBinary())
self.profile = self.CreateProfile()
if self.options.tool is not None:
self.tool_log_dir = self.CreateToolLogDir()
cmd = self.MakeCmd(url, host, port)
self.Launch(cmd, MakeEnv(self.options))
def EnsureDirectory(path):
if not os.path.exists(path):
os.makedirs(path)
def EnsureDirectoryForFile(path):
EnsureDirectory(os.path.dirname(path))
class ChromeLauncher(BrowserLauncher):
def KnownPath(self):
if PLATFORM == 'linux':
# TODO(ncbray): look in path?
return '/opt/google/chrome'
elif PLATFORM == 'mac':
return '/Applications/Google Chrome.app/Contents/MacOS'
else:
homedir = os.path.expanduser('~')
path = os.path.join(homedir, r'AppData\Local\Google\Chrome\Application')
return path
def BinaryName(self):
if PLATFORM == 'mac':
return 'Google Chrome'
elif PLATFORM == 'windows':
return 'chrome.exe'
else:
return 'chrome'
def MakeEmptyJSONFile(self, path):
EnsureDirectoryForFile(path)
f = open(path, 'w')
f.write('{}')
f.close()
def CreateProfile(self):
profile = self.MakeProfileDirectory()
# Squelch warnings by creating bogus files.
self.MakeEmptyJSONFile(os.path.join(profile, 'Default', 'Preferences'))
self.MakeEmptyJSONFile(os.path.join(profile, 'Local State'))
return profile
def NetLogName(self):
return os.path.join(self.profile, 'netlog.json')
def MakeCmd(self, url, host, port):
cmd = [self.binary,
# --enable-logging enables stderr output from Chromium subprocesses
# on Windows (see
# https://code.google.com/p/chromium/issues/detail?id=171836)
'--enable-logging',
'--disable-web-resources',
'--disable-preconnect',
# This is speculative, sync should not occur with a clean profile.
'--disable-sync',
# This prevents Chrome from making "hidden" network requests at
# startup. These requests could be a source of non-determinism,
# and they also add noise to the netlogs.
'--dns-prefetch-disable',
'--no-first-run',
'--no-default-browser-check',
'--log-level=1',
'--safebrowsing-disable-auto-update',
'--disable-default-apps',
# Suppress metrics reporting. This prevents misconfigured bots,
# people testing at their desktop, etc from poisoning the UMA data.
'--metrics-recording-only',
# Chrome explicitly blacklists some ports as "unsafe" because
# certain protocols use them. Chrome gives an error like this:
# Error 312 (net::ERR_UNSAFE_PORT): Unknown error
# Unfortunately, the browser tester can randomly choose a
# blacklisted port. To work around this, the tester whitelists
# whatever port it is using.
'--explicitly-allowed-ports=%d' % port,
'--user-data-dir=%s' % self.profile]
# Log network requests to assist debugging.
cmd.append('--log-net-log=%s' % self.NetLogName())
if PLATFORM == 'linux':
# Explicitly run with mesa on linux. The test infrastructure doesn't have
# sufficient native GL contexts to run these tests.
cmd.append('--use-gl=osmesa')
if self.options.ppapi_plugin is None:
cmd.append('--enable-nacl')
disable_sandbox = False
# Chrome processes can't access files within the sandbox
disable_sandbox |= self.options.nacl_exe_stdin is not None
disable_sandbox |= self.options.nacl_exe_stdout is not None
disable_sandbox |= self.options.nacl_exe_stderr is not None
if disable_sandbox:
cmd.append('--no-sandbox')
else:
cmd.append('--register-pepper-plugins=%s;%s'
% (self.options.ppapi_plugin,
self.options.ppapi_plugin_mimetype))
cmd.append('--no-sandbox')
if self.options.browser_extensions:
cmd.append('--load-extension=%s' %
','.join(self.options.browser_extensions))
cmd.append('--enable-experimental-extension-apis')
if self.options.enable_crash_reporter:
cmd.append('--enable-crash-reporter-for-testing')
if self.options.tool == 'memcheck':
cmd = ['src/third_party/valgrind/memcheck.sh',
'-v',
'--xml=yes',
'--leak-check=no',
'--gen-suppressions=all',
'--num-callers=30',
'--trace-children=yes',
'--nacl-file=%s' % (self.options.files[0],),
'--suppressions=' +
'../tools/valgrind/memcheck/suppressions.txt',
'--xml-file=%s/xml.%%p' % (self.tool_log_dir,),
'--log-file=%s/log.%%p' % (self.tool_log_dir,)] + cmd
elif self.options.tool == 'tsan':
cmd = ['src/third_party/valgrind/tsan.sh',
'-v',
'--num-callers=30',
'--trace-children=yes',
'--nacl-file=%s' % (self.options.files[0],),
'--ignore=../tools/valgrind/tsan/ignores.txt',
'--suppressions=../tools/valgrind/tsan/suppressions.txt',
'--log-file=%s/log.%%p' % (self.tool_log_dir,)] + cmd
elif self.options.tool is not None:
raise LaunchFailure('Invalid tool name "%s"' % (self.options.tool,))
if self.options.enable_sockets:
cmd.append('--allow-nacl-socket-api=%s' % host)
cmd.extend(self.options.browser_flags)
cmd.append(url)
return cmd
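`RemoveDirectory` above works around Windows processes holding files open by retrying `shutil.rmtree` with exponential backoff. The same pattern as a standalone sketch (the function name and default retry parameters are illustrative):

```python
import shutil
import time

def remove_directory(path, retries=5, sleep_time=0.25):
    # Retry rmtree a few times, doubling the sleep between attempts;
    # re-raise the last error once the retries are exhausted so the
    # failure is not masked.
    while True:
        try:
            shutil.rmtree(path)
            return
        except Exception:
            if retries <= 0:
                raise
            retries -= 1
            time.sleep(sleep_time)
            sleep_time *= 2
```

The backoff gives a process that briefly holds a file (an antivirus scanner, an exiting browser) time to release it before the deletion is declared a failure.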
#define TORCH_ASSERT_ONLY_METHOD_OPERATORS
#include <ATen/Context.h>
#include <ATen/TensorGeometry.h>
#include <ATen/TensorUtils.h>
#include <ATen/core/Tensor.h>
#include <ATen/cuda/CUDAConfig.h>
#include <ATen/cuda/EmptyTensor.h>
#include <ATen/native/ConvUtils.h>
#if AT_CUDNN_ENABLED()
#include <ATen/native/cudnn/ConvShared.h>
#ifndef AT_PER_OPERATOR_HEADERS
#include <ATen/Functions.h>
#include <ATen/NativeFunctions.h>
#else
#include <ATen/ops/cudnn_convolution_add_relu_native.h>
#include <ATen/ops/cudnn_convolution_native.h>
#include <ATen/ops/cudnn_convolution_relu_native.h>
#include <ATen/ops/cudnn_convolution_transpose_native.h>
#include <ATen/ops/empty.h>
#include <ATen/ops/empty_like.h>
#include <ATen/ops/zeros.h>
#include <ATen/ops/zeros_like.h>
#endif
// NOTE [cuDNN API version]
//
// ConvPlaceholders.cpp contains placeholder implementation of cudnn
// convolution when cudnn is not enabled. These operators only raise
// errors, and do no real computation. These operators are implemented
// using existing operators.
//
// cuDNN v7 and v8 have different APIs. ConvShared.{cpp, h} contains
// code shared by v7 and v8. Conv_v7.cpp contains implementation of
// convolution using cuDNN v7 API. Conv_v8.cpp contains implementation
// with v8 API.
//
// NOTE [ Convolution design ]
//
// cuDNN convolutions do not handle bias. Bias is handled outside.
//
// The general strategy:
//
// - cudnn_convolution (Tensor)
// Entry points for clients
//
// - cudnn_convolution_forward (TensorArg)
// Entry point, which may be reused between regular
// convolution and transposed convolution.
//
// - raw_cudnn_convolution_forward_out (Tensor)
// Function that has different implementation on Conv_v7.cpp
// and Conv_v8.cpp
//
// The raw API invokes cuDNN directly and is implemented differently
// on cuDNN v7 and cuDNN v8.
//
// There are a few reasons this should never be directly exposed
// via ATen:
//
// - It takes output as a parameter (this should be computed!)
// - It doesn't do input checking
// - It doesn't resize output (it is assumed to be correctly sized)
//
// Where does argument checking happen? Here's the division of
// responsibility:
// - Things that happen in at::Tensor
// - TensorArg allocation
// - Things that happen in TensorArg
// - Check arguments (type, GPU, shape)
namespace at::native {
// ---------------------------------------------------------------------
//
// ConvolutionParams
//
// ---------------------------------------------------------------------
std::ostream& operator<<(std::ostream& out, const ConvolutionParams& params) {
out << "ConvolutionParams \n"
<< " memory_format = " << params.memory_format << '\n'
<< " data_type = " << cudnnTypeToString(params.dataType) << '\n'
<< " padding = " << ArrayRef<int>{params.padding} << '\n'
<< " stride = " << ArrayRef<int>{params.stride} << '\n'
<< " dilation = " << ArrayRef<int>{params.dilation} << '\n'
<< " groups = " << params.groups << '\n'
<< " deterministic = " << (params.deterministic ? "true" : "false")
<< '\n'
<< " allow_tf32 = " << (params.allow_tf32 ? "true" : "false") << '\n';
return out;
}
// NB: This can't be a constructor, because then ConvolutionParams
// would not be a POD anymore.
// TODO: Use TensorGeometry here instead of the entire Tensor, which we
// don't actually need. (OTOH: We can always pass in
// grad_input/grad_output, so this is not very pressing)
void setConvolutionParams(
ConvolutionParams* params,
const at::Tensor& input,
const at::Tensor& weight,
IntArrayRef padding,
IntArrayRef stride,
IntArrayRef dilation,
int64_t groups,
bool deterministic,
bool allow_tf32,
at::MemoryFormat memory_format) {
cudnnDataType_t dataType = getCudnnDataType(input);
memset(params, 0, sizeof(ConvolutionParams));
params->device_id = at::cuda::current_device();
params->dataType = dataType;
// ASSERT(weight.dim() == input.dim())
params->input_dim = input.dim();
params->memory_format = memory_format;
for (int i = 0; i != params->input_dim; ++i) {
params->input_size[i] = static_cast<int>(input.sizes()[i]);
params->weight_size[i] = static_cast<int>(weight.sizes()[i]);
}
// ASSERT(padding.size() == stride.size())
// ASSERT(padding.size() == dilation.size())
for (size_t i = 0; i != padding.size(); ++i) {
params->padding[i] = padding[i];
params->stride[i] = stride[i];
params->dilation[i] = dilation[i];
}
// In principle, we shouldn't parametrize by groups for legacy
// CuDNN, but it doesn't seem worth the effort to actually do this.
params->groups = groups;
params->deterministic = deterministic;
params->allow_tf32 = allow_tf32;
}
std::string repro_from_args(const ConvolutionParams& params) {
auto pybool = [](bool b) -> const char* { return b ? "True" : "False"; };
std::string partial_dtype;
switch (params.dataType) {
case CUDNN_DATA_FLOAT:
partial_dtype = "float";
break;
case CUDNN_DATA_DOUBLE:
partial_dtype = "double";
break;
case CUDNN_DATA_HALF:
partial_dtype = "half";
break;
default:
partial_dtype = "unsupported";
}
const std::string full_dtype = "torch." + partial_dtype;
const int out_channels = params.weight_size[0];
const int in_channels = params.weight_size[1] * params.groups;
const size_t dim = params.input_dim;
const std::string channels_last_xd =
dim == 4 ? "channels_last" : "channels_last_3d";
const std::string to_channels_last =
((params.memory_format == at::MemoryFormat::ChannelsLast) ||
(params.memory_format == at::MemoryFormat::ChannelsLast3d))
? ".to(memory_format=torch." + channels_last_xd + ")"
: "";
std::ostringstream ss;
ss << "You can try to repro this exception using the following code snippet. ";
ss << "If that doesn't trigger the error, please include your original repro script when reporting this issue.\n\n";
ss << "import torch\n";
ss << "torch.backends.cuda.matmul.allow_tf32 = "
<< pybool(
at::globalContext().float32Precision(
at::Float32Backend::CUDA, at::Float32Op::MATMUL) ==
at::Float32Precision::TF32)
<< '\n';
ss << "torch.backends.cudnn.benchmark = "
<< pybool(at::globalContext().benchmarkCuDNN()) << '\n';
ss << "torch.backends.cudnn.deterministic = " << pybool(params.deterministic)
<< '\n';
ss << "torch.backends.cudnn.allow_tf32 = " << pybool(params.allow_tf32)
<< '\n';
ss << "data = torch.randn(" << ArrayRef<int>(params.input_size, dim)
<< ", dtype=" << full_dtype << ", ";
ss << "device='cuda', requires_grad=True)" << to_channels_last << '\n';
ss << "net = torch.nn.Conv" << dim - 2 << "d(" << in_channels << ", "
<< out_channels << ", ";
ss << "kernel_size=" << ArrayRef<int>(¶ms.weight_size[2], dim - 2)
<< ", ";
ss << "padding=" << ArrayRef<int>(params.padding, dim - 2) << ", ";
ss << "stride=" << ArrayRef<int>(params.stride, dim - 2) << ", ";
ss << "dilation=" << ArrayRef<int>(params.dilation, dim - 2) << ", ";
ss << "groups=" << params.groups << ")\n";
ss << "net = net.cuda()." << partial_dtype << "()" << to_channels_last
<< '\n';
ss << "out = net(data)\n";
ss << "out.backward(torch.randn_like(out))\n";
ss << "torch.cuda.synchronize()\n\n";
return ss.str();
}
// ---------------------------------------------------------------------
//
// Convolution forward / Transposed convolution backward
//
// ---------------------------------------------------------------------
void cudnn_convolution_forward_out(
TensorArg& output,
CheckedFrom c,
const TensorArg& input,
const TensorArg& weight,
IntArrayRef padding,
IntArrayRef stride,
IntArrayRef dilation,
int64_t groups,
bool benchmark,
bool deterministic,
bool allow_tf32) {
checkAllSameType(c, {input, weight});
checkAllSameGPU(c, {input, weight});
auto memory_format = output->suggest_memory_format();
convolution_shape_check(
c, input, weight, output, padding, stride, dilation, groups);
Tensor weight_contig = weight->contiguous(memory_format);
Tensor input_contig = input->contiguous(memory_format);
raw_cudnn_convolution_forward_out(
*output,
input_contig,
weight_contig,
padding,
stride,
dilation,
groups,
benchmark,
deterministic,
allow_tf32);
}
Tensor cudnn_convolution(
const Tensor& input_t,
const Tensor& weight_t,
IntArrayRef padding,
IntArrayRef stride,
IntArrayRef dilation,
int64_t groups,
bool benchmark,
bool deterministic,
bool allow_tf32) {
TensorArg input{input_t, "input", 1}, weight{weight_t, "weight", 2};
CheckedFrom c = "cudnn_convolution";
auto memory_format = cudnn_conv_suggest_memory_format(input_t, weight_t);
Tensor output_t = at::detail::empty_cuda(
conv_output_size(
input_t.sizes(), weight_t.sizes(), padding, stride, dilation),
input->options().memory_format(memory_format));
if (output_t.numel() == 0) {
return output_t;
}
// Avoid ambiguity of "output" when this is being used as backwards
TensorArg output{output_t, "result", 0};
cudnn_convolution_forward_out(
output,
c,
input,
weight,
padding,
stride,
dilation,
groups,
benchmark,
deterministic,
allow_tf32);
return *output;
}
at::Tensor& cudnn_convolution_out(
const Tensor& input_t,
const Tensor& weight_t,
IntArrayRef padding,
IntArrayRef stride,
IntArrayRef dilation,
int64_t groups,
bool benchmark,
bool deterministic,
bool allow_tf32,
Tensor& output_t) {
TensorArg input{input_t, "input", 1}, weight{weight_t, "weight", 2};
CheckedFrom c = "cudnn_convolution";
if (output_t.numel() == 0) {
return output_t;
}
TensorArg output{output_t, "result", 0};
cudnn_convolution_forward_out(
output,
c,
input,
weight,
padding,
stride,
dilation,
groups,
benchmark,
deterministic,
allow_tf32);
return output_t;
}
// NB: output_padding not needed here, as there is no ambiguity to resolve
Tensor cudnn_convolution_transpose_backward_input(
const Tensor& grad_output_t,
const Tensor& weight_t,
IntArrayRef padding,
IntArrayRef stride,
IntArrayRef dilation,
int64_t groups,
bool benchmark,
bool deterministic,
bool allow_tf32) {
TensorArg grad_output{grad_output_t, "grad_output", 1},
weight{weight_t, "weight", 2};
auto memory_format =
cudnn_conv_suggest_memory_format(grad_output_t, weight_t);
Tensor output_t = at::detail::empty_cuda(
conv_output_size(
grad_output_t.sizes(), weight_t.sizes(), padding, stride, dilation),
grad_output_t.options().memory_format(memory_format));
if (output_t.numel() == 0) {
return output_t;
}
TensorArg output{output_t, "result", 0};
cudnn_convolution_forward_out(
output,
"cudnn_convolution_transpose_backward_input",
grad_output,
weight,
padding,
stride,
dilation,
groups,
benchmark,
deterministic,
allow_tf32);
return *output;
}
// ---------------------------------------------------------------------
//
// Convolution backward / Transposed convolution forward
//
// ---------------------------------------------------------------------
// NOTE [ Backward vs transpose convolutions ]
//
// Backward and transpose are algorithmically equivalent, but they
// compute their geometry differently. In a backwards, you knew what
// the original size of the input tensor was, so you can cache that
// geometry and fill it directly. In transposed convolution, it is
// more conventional to not explicitly specify the output (previously
// input) size, and compute it. This, however, leaves a degree of
// freedom; this degree of freedom is resolved using the
// output_padding parameter. Both of these interfaces are equivalent,
// but they are differently convenient depending on the use case.
Tensor cudnn_convolution_backward_input(
CheckedFrom c,
IntArrayRef input_size,
const TensorArg& grad_output,
const TensorArg& weight,
IntArrayRef padding,
IntArrayRef stride,
IntArrayRef dilation,
int64_t groups,
bool benchmark,
bool deterministic,
bool allow_tf32) {
checkAllSameType(c, {grad_output, weight});
checkAllSameGPU(c, {grad_output, weight});
auto memory_format = cudnn_conv_suggest_memory_format(*grad_output, *weight);
Tensor grad_input_t = at::detail::empty_cuda(
input_size, grad_output->options().memory_format(memory_format));
// Avoid "grad_input" when this is being used as transposed convolution
TensorArg grad_input{grad_input_t, "result", 0};
convolution_shape_check(
c, grad_input, weight, grad_output, padding, stride, dilation, groups);
Tensor weight_contig = weight->contiguous(memory_format);
Tensor grad_output_contig = grad_output->contiguous(memory_format);
raw_cudnn_convolution_backward_input_out(
*grad_input,
grad_output_contig,
weight_contig,
padding,
stride,
dilation,
groups,
benchmark,
deterministic,
allow_tf32);
return *grad_input;
}
Tensor cudnn_convolution_transpose_forward(
CheckedFrom c,
const TensorArg& grad_output,
const TensorArg& weight,
IntArrayRef padding,
IntArrayRef output_padding,
IntArrayRef stride,
IntArrayRef dilation,
int64_t groups,
bool benchmark,
bool deterministic,
bool allow_tf32) {
auto input_size = conv_input_size(
grad_output->sizes(),
weight->sizes(),
padding,
output_padding,
stride,
dilation,
groups);
return cudnn_convolution_backward_input(
c,
input_size,
grad_output,
weight,
padding,
stride,
dilation,
groups,
benchmark,
deterministic,
allow_tf32);
}
Tensor cudnn_convolution_backward_input(
IntArrayRef input_size,
const Tensor& grad_output_t,
const Tensor& weight_t,
IntArrayRef padding,
IntArrayRef stride,
IntArrayRef dilation,
int64_t groups,
bool benchmark,
bool deterministic,
bool allow_tf32) {
TensorArg grad_output{grad_output_t, "grad_output", 1},
weight{weight_t, "weight", 2};
return cudnn_convolution_backward_input(
"cudnn_convolution_backward_input",
input_size,
grad_output,
weight,
padding,
stride,
dilation,
groups,
benchmark,
deterministic,
allow_tf32);
}
Tensor cudnn_convolution_transpose(
const Tensor& input_t,
const Tensor& weight_t,
IntArrayRef padding,
IntArrayRef output_padding,
IntArrayRef stride,
IntArrayRef dilation,
int64_t groups,
bool benchmark,
bool deterministic,
bool allow_tf32) {
TensorArg input{input_t, "input", 1}, weight{weight_t, "weight", 2};
CheckedFrom c = "cudnn_convolution_transpose";
auto output_t = cudnn_convolution_transpose_forward(
c,
input,
weight,
padding,
output_padding,
stride,
dilation,
groups,
benchmark,
deterministic,
allow_tf32);
return output_t;
}
// ---------------------------------------------------------------------
//
// Convolution backward (weight)
//
// ---------------------------------------------------------------------
Tensor cudnn_convolution_backward_weight(
CheckedFrom c,
IntArrayRef weight_size,
const Tensor& grad_output_t,
const Tensor& input_t,
IntArrayRef padding,
IntArrayRef stride,
IntArrayRef dilation,
int64_t groups,
bool benchmark,
bool deterministic,
bool allow_tf32) {
auto layout = cudnn_conv_suggest_memory_format(input_t, grad_output_t);
Tensor grad_output_contig_t = grad_output_t.contiguous(layout);
TensorArg grad_output_contig{grad_output_contig_t, "grad_output", 1};
Tensor input_contig_t = input_t.contiguous(layout);
TensorArg input{input_contig_t, "input", 2};
checkAllSameType(c, {grad_output_contig, input});
checkAllSameGPU(c, {grad_output_contig, input});
auto grad_weight_t =
at::empty(weight_size, grad_output_contig->options(), layout);
// For uniformity with everything else, although it seems grad_weight
// would be unambiguous too.
TensorArg grad_weight{grad_weight_t, "result", 0};
convolution_shape_check(
c,
input,
grad_weight,
grad_output_contig,
padding,
stride,
dilation,
groups);
raw_cudnn_convolution_backward_weight_out(
*grad_weight,
*grad_output_contig,
*input,
padding,
stride,
dilation,
groups,
benchmark,
deterministic,
allow_tf32);
return grad_weight_t;
}
Tensor cudnn_convolution_backward_weight(
IntArrayRef weight_size,
const Tensor& grad_output_t,
const Tensor& input_t,
IntArrayRef padding,
IntArrayRef stride,
IntArrayRef dilation,
int64_t groups,
bool benchmark,
bool deterministic,
bool allow_tf32) {
return cudnn_convolution_backward_weight(
"cudnn_convolution_backward_weight",
weight_size,
grad_output_t,
input_t,
padding,
stride,
dilation,
groups,
benchmark,
deterministic,
allow_tf32);
}
std::tuple<at::Tensor, at::Tensor> cudnn_convolution_backward(
const at::Tensor& input,
const at::Tensor& grad_output_t,
const at::Tensor& weight,
IntArrayRef padding,
IntArrayRef stride,
IntArrayRef dilation,
int64_t groups,
bool benchmark,
bool deterministic,
bool allow_tf32,
std::array<bool, 2> output_mask) {
Tensor grad_output = grad_output_t.to(input.suggest_memory_format());
Tensor grad_input, grad_weight;
if (input.numel() == 0) {
if (output_mask[0]) {
grad_input = at::empty_like(input, LEGACY_CONTIGUOUS_MEMORY_FORMAT);
}
if (output_mask[1]) {
grad_weight = at::zeros_like(weight, LEGACY_CONTIGUOUS_MEMORY_FORMAT);
}
} else {
if (output_mask[0]) {
grad_input = cudnn_convolution_backward_input(
input.sizes(),
grad_output,
weight,
padding,
stride,
dilation,
groups,
benchmark,
deterministic,
allow_tf32);
}
if (output_mask[1]) {
grad_weight = cudnn_convolution_backward_weight(
weight.sizes(),
grad_output,
input,
padding,
stride,
dilation,
groups,
benchmark,
deterministic,
allow_tf32);
}
}
return std::tuple<Tensor, Tensor>{grad_input, grad_weight};
}
Tensor cudnn_convolution_transpose_backward_weight(
IntArrayRef weight_size,
const Tensor& grad_output_t,
const Tensor& input_t,
IntArrayRef padding,
IntArrayRef stride,
IntArrayRef dilation,
int64_t groups,
bool benchmark,
bool deterministic,
bool allow_tf32) {
return cudnn_convolution_backward_weight(
"cudnn_convolution_backward_weight",
weight_size,
input_t,
grad_output_t,
padding,
stride,
dilation,
groups,
benchmark,
deterministic,
allow_tf32);
}
std::tuple<at::Tensor, at::Tensor> cudnn_convolution_transpose_backward(
const at::Tensor& input,
const at::Tensor& grad_output_t,
const at::Tensor& weight,
IntArrayRef padding,
IntArrayRef output_padding,
IntArrayRef stride,
IntArrayRef dilation,
int64_t groups,
bool benchmark,
bool deterministic,
bool allow_tf32,
std::array<bool, 2> output_mask) {
Tensor grad_output = grad_output_t.contiguous(input.suggest_memory_format());
Tensor grad_input, grad_weight;
if (output_mask[0]) {
grad_input = cudnn_convolution_transpose_backward_input(
grad_output,
weight,
padding,
stride,
dilation,
groups,
benchmark,
deterministic,
allow_tf32);
}
if (output_mask[1]) {
grad_weight = cudnn_convolution_transpose_backward_weight(
weight.sizes(),
grad_output,
input,
padding,
stride,
dilation,
groups,
benchmark,
deterministic,
allow_tf32);
}
return std::tuple<Tensor, Tensor>{grad_input, grad_weight};
}
Tensor cudnn_convolution_relu(
const Tensor& input_t,
const Tensor& weight_t,
const std::optional<Tensor>& bias_t,
IntArrayRef stride,
IntArrayRef padding,
IntArrayRef dilation,
int64_t groups) {
auto memory_format = cudnn_conv_suggest_memory_format(input_t, weight_t);
const Tensor input = input_t.contiguous(memory_format);
const Tensor weight = weight_t.contiguous(memory_format);
// FuseFrozenConvAddRelu performs some tensor shape checking
Tensor output_t = at::detail::empty_cuda(
conv_output_size(
input.sizes(), weight.sizes(), padding, stride, dilation),
input.options().memory_format(memory_format));
if (output_t.numel() == 0) {
return output_t;
}
auto& ctx = at::globalContext();
bool benchmark = ctx.benchmarkCuDNN();
bool allow_tf32 = ctx.allowTF32CuDNN(at::Float32Op::CONV);
auto _bias = bias_t.has_value()
? bias_t.value()
: at::zeros(
{output_t.size(1)},
optTypeMetaToScalarType(output_t.options().dtype_opt()),
output_t.options().layout_opt(),
output_t.options().device_opt(),
output_t.options().pinned_memory_opt());
raw_cudnn_convolution_add_relu_out(
output_t,
input,
weight,
output_t, // use output_t as z to satisfy CUDNN API
0, // alpha
_bias,
stride,
padding,
dilation,
groups,
benchmark, // benchmark
false, // deterministic
allow_tf32 // allow_tf32
);
return output_t;
}
Tensor cudnn_convolution_add_relu(
const Tensor& input_t,
const Tensor& weight_t,
const Tensor& z_t,
const std::optional<Scalar>& alpha,
const std::optional<Tensor>& bias_t,
IntArrayRef stride,
IntArrayRef padding,
IntArrayRef dilation,
int64_t groups) {
auto memory_format = cudnn_conv_suggest_memory_format(input_t, weight_t);
const Tensor input = input_t.contiguous(memory_format);
const Tensor weight = weight_t.contiguous(memory_format);
Tensor z = z_t;
if (z.suggest_memory_format() != memory_format) {
z = z.to(memory_format);
}
z = z.contiguous(memory_format);
// FuseFrozenConvAddRelu performs some tensor shape checking
Tensor output_t = at::detail::empty_cuda(
conv_output_size(
input.sizes(), weight.sizes(), padding, stride, dilation),
input.options().memory_format(memory_format));
if (output_t.numel() == 0) {
return output_t;
}
auto& ctx = at::globalContext();
bool allow_tf32 = ctx.allowTF32CuDNN(at::Float32Op::CONV);
bool benchmark = ctx.benchmarkCuDNN();
auto _alpha = alpha.has_value() ? alpha.value().to<float>() : 1.0;
auto _bias = bias_t.has_value()
? bias_t.value()
: at::zeros(
{output_t.size(1)},
optTypeMetaToScalarType(output_t.options().dtype_opt()),
output_t.options().layout_opt(),
output_t.options().device_opt(),
output_t.options().pinned_memory_opt());
raw_cudnn_convolution_add_relu_out(
output_t,
input,
weight,
z,
_alpha,
_bias,
stride,
padding,
dilation,
groups,
benchmark,
false, // deterministic
allow_tf32 // allow_tf32
);
return output_t;
}
REGISTER_CUDA_DISPATCH(
cudnn_convolution_backward_stub,
&cudnn_convolution_backward)
REGISTER_CUDA_DISPATCH(
cudnn_convolution_transpose_backward_stub,
&cudnn_convolution_transpose_backward)
} // namespace at::native
#endif // AT_CUDNN_ENABLED | cpp | github | https://github.com/pytorch/pytorch | aten/src/ATen/native/cudnn/ConvShared.cpp |
from __future__ import unicode_literals
from django.core.exceptions import ImproperlyConfigured
from django.test import TestCase, override_settings
from django.views.generic.base import View
from django.utils.encoding import force_str
from .models import Author, Artist
class ListViewTests(TestCase):
fixtures = ['generic-views-test-data.json']
urls = 'generic_views.urls'
def test_items(self):
res = self.client.get('/list/dict/')
self.assertEqual(res.status_code, 200)
self.assertTemplateUsed(res, 'generic_views/list.html')
self.assertEqual(res.context['object_list'][0]['first'], 'John')
def test_queryset(self):
res = self.client.get('/list/authors/')
self.assertEqual(res.status_code, 200)
self.assertTemplateUsed(res, 'generic_views/author_list.html')
self.assertEqual(list(res.context['object_list']), list(Author.objects.all()))
self.assertIsInstance(res.context['view'], View)
self.assertIs(res.context['author_list'], res.context['object_list'])
self.assertIsNone(res.context['paginator'])
self.assertIsNone(res.context['page_obj'])
self.assertFalse(res.context['is_paginated'])
def test_paginated_queryset(self):
self._make_authors(100)
res = self.client.get('/list/authors/paginated/')
self.assertEqual(res.status_code, 200)
self.assertTemplateUsed(res, 'generic_views/author_list.html')
self.assertEqual(len(res.context['object_list']), 30)
self.assertIs(res.context['author_list'], res.context['object_list'])
self.assertTrue(res.context['is_paginated'])
self.assertEqual(res.context['page_obj'].number, 1)
self.assertEqual(res.context['paginator'].num_pages, 4)
self.assertEqual(res.context['author_list'][0].name, 'Author 00')
self.assertEqual(list(res.context['author_list'])[-1].name, 'Author 29')
def test_paginated_queryset_shortdata(self):
# Test that the paginated view also handles short datasets (a single page, with is_paginated False).
res = self.client.get('/list/authors/paginated/')
self.assertEqual(res.status_code, 200)
self.assertTemplateUsed(res, 'generic_views/author_list.html')
self.assertEqual(list(res.context['object_list']), list(Author.objects.all()))
self.assertIs(res.context['author_list'], res.context['object_list'])
self.assertEqual(res.context['page_obj'].number, 1)
self.assertEqual(res.context['paginator'].num_pages, 1)
self.assertFalse(res.context['is_paginated'])
def test_paginated_get_page_by_query_string(self):
self._make_authors(100)
res = self.client.get('/list/authors/paginated/', {'page': '2'})
self.assertEqual(res.status_code, 200)
self.assertTemplateUsed(res, 'generic_views/author_list.html')
self.assertEqual(len(res.context['object_list']), 30)
self.assertIs(res.context['author_list'], res.context['object_list'])
self.assertEqual(res.context['author_list'][0].name, 'Author 30')
self.assertEqual(res.context['page_obj'].number, 2)
def test_paginated_get_last_page_by_query_string(self):
self._make_authors(100)
res = self.client.get('/list/authors/paginated/', {'page': 'last'})
self.assertEqual(res.status_code, 200)
self.assertEqual(len(res.context['object_list']), 10)
self.assertIs(res.context['author_list'], res.context['object_list'])
self.assertEqual(res.context['author_list'][0].name, 'Author 90')
self.assertEqual(res.context['page_obj'].number, 4)
def test_paginated_get_page_by_urlvar(self):
self._make_authors(100)
res = self.client.get('/list/authors/paginated/3/')
self.assertEqual(res.status_code, 200)
self.assertTemplateUsed(res, 'generic_views/author_list.html')
self.assertEqual(len(res.context['object_list']), 30)
self.assertIs(res.context['author_list'], res.context['object_list'])
self.assertEqual(res.context['author_list'][0].name, 'Author 60')
self.assertEqual(res.context['page_obj'].number, 3)
def test_paginated_page_out_of_range(self):
self._make_authors(100)
res = self.client.get('/list/authors/paginated/42/')
self.assertEqual(res.status_code, 404)
def test_paginated_invalid_page(self):
self._make_authors(100)
res = self.client.get('/list/authors/paginated/?page=frog')
self.assertEqual(res.status_code, 404)
def test_paginated_custom_paginator_class(self):
self._make_authors(7)
res = self.client.get('/list/authors/paginated/custom_class/')
self.assertEqual(res.status_code, 200)
self.assertEqual(res.context['paginator'].num_pages, 1)
# Custom pagination allows for 2 orphans on a page size of 5
self.assertEqual(len(res.context['object_list']), 7)
def test_paginated_custom_page_kwarg(self):
self._make_authors(100)
res = self.client.get('/list/authors/paginated/custom_page_kwarg/', {'pagina': '2'})
self.assertEqual(res.status_code, 200)
self.assertTemplateUsed(res, 'generic_views/author_list.html')
self.assertEqual(len(res.context['object_list']), 30)
self.assertIs(res.context['author_list'], res.context['object_list'])
self.assertEqual(res.context['author_list'][0].name, 'Author 30')
self.assertEqual(res.context['page_obj'].number, 2)
def test_paginated_custom_paginator_constructor(self):
self._make_authors(7)
res = self.client.get('/list/authors/paginated/custom_constructor/')
self.assertEqual(res.status_code, 200)
# Custom pagination allows for 2 orphans on a page size of 5
self.assertEqual(len(res.context['object_list']), 7)
def test_paginated_orphaned_queryset(self):
self._make_authors(92)
res = self.client.get('/list/authors/paginated-orphaned/')
self.assertEqual(res.status_code, 200)
self.assertEqual(res.context['page_obj'].number, 1)
res = self.client.get(
'/list/authors/paginated-orphaned/', {'page': 'last'})
self.assertEqual(res.status_code, 200)
self.assertEqual(res.context['page_obj'].number, 3)
res = self.client.get(
'/list/authors/paginated-orphaned/', {'page': '3'})
self.assertEqual(res.status_code, 200)
self.assertEqual(res.context['page_obj'].number, 3)
res = self.client.get(
'/list/authors/paginated-orphaned/', {'page': '4'})
self.assertEqual(res.status_code, 404)
def test_paginated_non_queryset(self):
res = self.client.get('/list/dict/paginated/')
self.assertEqual(res.status_code, 200)
self.assertEqual(len(res.context['object_list']), 1)
def test_verbose_name(self):
res = self.client.get('/list/artists/')
self.assertEqual(res.status_code, 200)
self.assertTemplateUsed(res, 'generic_views/list.html')
self.assertEqual(list(res.context['object_list']), list(Artist.objects.all()))
self.assertIs(res.context['artist_list'], res.context['object_list'])
self.assertIsNone(res.context['paginator'])
self.assertIsNone(res.context['page_obj'])
self.assertFalse(res.context['is_paginated'])
def test_allow_empty_false(self):
res = self.client.get('/list/authors/notempty/')
self.assertEqual(res.status_code, 200)
Author.objects.all().delete()
res = self.client.get('/list/authors/notempty/')
self.assertEqual(res.status_code, 404)
def test_template_name(self):
res = self.client.get('/list/authors/template_name/')
self.assertEqual(res.status_code, 200)
self.assertEqual(list(res.context['object_list']), list(Author.objects.all()))
self.assertIs(res.context['author_list'], res.context['object_list'])
self.assertTemplateUsed(res, 'generic_views/list.html')
def test_template_name_suffix(self):
res = self.client.get('/list/authors/template_name_suffix/')
self.assertEqual(res.status_code, 200)
self.assertEqual(list(res.context['object_list']), list(Author.objects.all()))
self.assertIs(res.context['author_list'], res.context['object_list'])
self.assertTemplateUsed(res, 'generic_views/author_objects.html')
def test_context_object_name(self):
res = self.client.get('/list/authors/context_object_name/')
self.assertEqual(res.status_code, 200)
self.assertEqual(list(res.context['object_list']), list(Author.objects.all()))
self.assertNotIn('authors', res.context)
self.assertIs(res.context['author_list'], res.context['object_list'])
self.assertTemplateUsed(res, 'generic_views/author_list.html')
def test_duplicate_context_object_name(self):
res = self.client.get('/list/authors/dupe_context_object_name/')
self.assertEqual(res.status_code, 200)
self.assertEqual(list(res.context['object_list']), list(Author.objects.all()))
self.assertNotIn('authors', res.context)
self.assertNotIn('author_list', res.context)
self.assertTemplateUsed(res, 'generic_views/author_list.html')
def test_missing_items(self):
self.assertRaises(ImproperlyConfigured, self.client.get, '/list/authors/invalid/')
def test_paginated_list_view_does_not_load_entire_table(self):
# Regression test for #17535
self._make_authors(3)
# 1 query for authors
with self.assertNumQueries(1):
self.client.get('/list/authors/notempty/')
# same as above + 1 query to test if authors exist + 1 query for pagination
with self.assertNumQueries(3):
self.client.get('/list/authors/notempty/paginated/')
@override_settings(DEBUG=True)
def test_paginated_list_view_returns_useful_message_on_invalid_page(self):
# test for #19240
# tests that source exception's message is included in page
self._make_authors(1)
res = self.client.get('/list/authors/paginated/2/')
self.assertEqual(res.status_code, 404)
self.assertEqual(force_str(res.context.get('reason')),
"Invalid page (2): That page contains no results")
def _make_authors(self, n):
Author.objects.all().delete()
for i in range(n):
Author.objects.create(name='Author %02i' % i, slug='a%s' % i) | unknown | codeparrot/codeparrot-clean | ||
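The orphan arithmetic exercised by test_paginated_custom_paginator_class and test_paginated_orphaned_queryset can be sketched as follows. This is a minimal sketch of Django Paginator-style num_pages logic, not the tests' actual paginator; the per_page values of 5 and 30 are inferred from the assertions above.

```python
from math import ceil

def num_pages(count, per_page, orphans=0):
    # Trailing items, up to `orphans`, are folded into the previous page
    # instead of forming a short final page (mirrors Django's Paginator).
    hits = max(1, count - orphans)
    return ceil(hits / per_page)

# 100 authors at 30 per page -> 4 pages, matching test_paginated_queryset.
# 7 authors at 5 per page with 2 orphans -> 1 page,
# matching test_paginated_custom_paginator_class.
```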
global:
scrape_interval: 15s | unknown | github | https://github.com/prometheus/prometheus | config/testdata/agent_mode.without_remote_writes.yml |
# Copyright 2015 NEC Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
from tempest_lib.common import rest_client
from tempest_lib import exceptions as lib_exc
from neutron.tests.tempest.common import service_client
from neutron.tests.tempest import exceptions
class V3TokenClientJSON(rest_client.RestClient):
def __init__(self, auth_url, disable_ssl_certificate_validation=None,
ca_certs=None, trace_requests=None):
dscv = disable_ssl_certificate_validation
super(V3TokenClientJSON, self).__init__(
None, None, None, disable_ssl_certificate_validation=dscv,
ca_certs=ca_certs, trace_requests=trace_requests)
if not auth_url:
raise exceptions.InvalidConfiguration('you must specify a v3 uri '
'if using the v3 identity '
'api')
if 'auth/tokens' not in auth_url:
auth_url = auth_url.rstrip('/') + '/auth/tokens'
self.auth_url = auth_url
def auth(self, user_id=None, username=None, password=None, project_id=None,
project_name=None, user_domain_id=None, user_domain_name=None,
project_domain_id=None, project_domain_name=None, domain_id=None,
domain_name=None, token=None):
"""
:param user_id: user id
:param username: user name
:param user_domain_id: the user domain id
:param user_domain_name: the user domain name
:param project_domain_id: the project domain id
:param project_domain_name: the project domain name
:param domain_id: a domain id to scope to
:param domain_name: a domain name to scope to
:param project_id: a project id to scope to
:param project_name: a project name to scope to
:param token: a token to re-scope.
Accepts different combinations of credentials.
Sample valid combinations:
- token
- token, project_name, project_domain_id
- user_id, password
- username, password, user_domain_id
- username, password, project_name, user_domain_id, project_domain_id
Validation is left to the server side.
"""
creds = {
'auth': {
'identity': {
'methods': [],
}
}
}
id_obj = creds['auth']['identity']
if token:
id_obj['methods'].append('token')
id_obj['token'] = {
'id': token
}
if (user_id or username) and password:
id_obj['methods'].append('password')
id_obj['password'] = {
'user': {
'password': password,
}
}
if user_id:
id_obj['password']['user']['id'] = user_id
else:
id_obj['password']['user']['name'] = username
_domain = None
if user_domain_id is not None:
_domain = dict(id=user_domain_id)
elif user_domain_name is not None:
_domain = dict(name=user_domain_name)
if _domain:
id_obj['password']['user']['domain'] = _domain
if (project_id or project_name):
_project = dict()
if project_id:
_project['id'] = project_id
elif project_name:
_project['name'] = project_name
if project_domain_id is not None:
_project['domain'] = {'id': project_domain_id}
elif project_domain_name is not None:
_project['domain'] = {'name': project_domain_name}
creds['auth']['scope'] = dict(project=_project)
elif domain_id:
creds['auth']['scope'] = dict(domain={'id': domain_id})
elif domain_name:
creds['auth']['scope'] = dict(domain={'name': domain_name})
body = json.dumps(creds)
resp, body = self.post(self.auth_url, body=body)
self.expected_success(201, resp.status)
return service_client.ResponseBody(resp, body)
def request(self, method, url, extra_headers=False, headers=None,
body=None):
"""A simple HTTP request interface."""
if headers is None:
# Always accept 'json', even for the XML token client, because an
# XML response is not easily converted to the corresponding JSON one.
headers = self.get_headers(accept_type="json")
elif extra_headers:
try:
headers.update(self.get_headers(accept_type="json"))
except (ValueError, TypeError):
headers = self.get_headers(accept_type="json")
resp, resp_body = self.raw_request(url, method,
headers=headers, body=body)
self._log_request(method, url, resp)
if resp.status in [401, 403]:
resp_body = json.loads(resp_body)
raise lib_exc.Unauthorized(resp_body['error']['message'])
elif resp.status not in [200, 201, 204]:
raise exceptions.IdentityError(
'Unexpected status code {0}'.format(resp.status))
return resp, json.loads(resp_body)
def get_token(self, **kwargs):
"""
Returns (token id, token data) for supplied credentials
"""
auth_data = kwargs.pop('auth_data', False)
if not (kwargs.get('user_domain_id') or
kwargs.get('user_domain_name')):
kwargs['user_domain_name'] = 'Default'
if not (kwargs.get('project_domain_id') or
kwargs.get('project_domain_name')):
kwargs['project_domain_name'] = 'Default'
body = self.auth(**kwargs)
token = body.response.get('x-subject-token')
if auth_data:
return token, body['token']
else:
return token | unknown | codeparrot/codeparrot-clean | ||
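The nested credentials body that V3TokenClientJSON.auth() assembles for the username/password method can be sketched standalone. The builder below is a hypothetical helper (not part of this client) covering only the username + user_domain_name + optional project scope path; field names follow the Keystone v3 authentication API.

```python
def build_v3_password_creds(username, password, user_domain_name,
                            project_name=None, project_domain_name=None):
    # Identity section: the 'password' method with a named user and domain.
    creds = {'auth': {'identity': {
        'methods': ['password'],
        'password': {'user': {
            'name': username,
            'password': password,
            'domain': {'name': user_domain_name},
        }},
    }}}
    # Optional project scope, mirroring the scope block built in auth().
    if project_name:
        project = {'name': project_name}
        if project_domain_name:
            project['domain'] = {'name': project_domain_name}
        creds['auth']['scope'] = {'project': project}
    return creds
```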
import jax.numpy as np
from jax.scipy.special import erf, gammaln
from jax import jit, partial, jacrev, random, vmap, grad
from jax.scipy.linalg import cholesky, cho_factor, cho_solve
from utils import inv, softplus, sigmoid, logphi, gaussian_moment_match, softplus_inv, gauss_hermite, \
ensure_positive_precision
pi = 3.141592653589793
def gaussian_first_derivative_wrt_mean(f, m, C, w):
invC = inv(C)
return invC @ (f - m) * w
def gaussian_second_derivative_wrt_mean(f, m, C, w):
invC = inv(C)
return (invC @ (f - m) @ (f - m).T @ invC - invC) * w
class Likelihood(object):
"""
The likelihood model class, p(yₙ|fₙ). Each likelihood implements its own parameter update methods:
Moment matching is used for EP
Statistical linearisation is used for SLEP / UKS / GHKS
Analytical linearisation is used for EEP / EKS
Variational expectation is used for VI
If no custom parameter update method is provided, cubature is used (Gauss-Hermite by default).
The requirement for all inference methods to work is the implementation of the following methods:
evaluate_likelihood(), which simply evaluates the likelihood given the latent function
evaluate_log_likelihood()
conditional_moments(), which return E[y|f] and Cov[y|f]
"""
def __init__(self, hyp=None):
"""
:param hyp: (hyper)parameters of the likelihood model
"""
hyp = [] if hyp is None else hyp
self.hyp = softplus_inv(np.array(hyp))
def evaluate_likelihood(self, y, f, hyp=None):
raise NotImplementedError('direct evaluation of this likelihood is not implemented')
def evaluate_log_likelihood(self, y, f, hyp=None):
raise NotImplementedError('direct evaluation of this log-likelihood is not implemented')
def conditional_moments(self, f, hyp=None):
raise NotImplementedError('conditional moments of this likelihood are not implemented')
@partial(jit, static_argnums=(0, 6))
def moment_match_cubature(self, y, cav_mean, cav_cov, hyp=None, power=1.0, cubature_func=None):
"""
TODO: N.B. THIS VERSION IS SUPERSEDED BY THE FUNCTION BELOW. HOWEVER THIS ONE MAY BE MORE STABLE.
Perform moment matching via cubature.
Moment matching involves computing the log partition function, logZₙ, and its derivatives w.r.t. the cavity mean
logZₙ = log ∫ pᵃ(yₙ|fₙ) 𝓝(fₙ|mₙ,vₙ) dfₙ
with EP power a.
:param y: observed data (yₙ) [scalar]
:param cav_mean: cavity mean (mₙ) [scalar]
:param cav_cov: cavity covariance (cₙ) [scalar]
:param hyp: likelihood hyperparameter [scalar]
:param power: EP power / fraction (a) [scalar]
:param cubature_func: the function to compute sigma points and weights to use during cubature
:return:
lZ: the log partition function, logZₙ [scalar]
dlZ: first derivative of logZₙ w.r.t. mₙ (if derivatives=True) [scalar]
d2lZ: second derivative of logZₙ w.r.t. mₙ (if derivatives=True) [scalar]
"""
if cubature_func is None:
x, w = gauss_hermite(cav_mean.shape[0], 20) # Gauss-Hermite sigma points and weights
else:
x, w = cubature_func(cav_mean.shape[0])
cav_cho, low = cho_factor(cav_cov)
# fsigᵢ=xᵢ√cₙ + mₙ: scale locations according to cavity dist.
sigma_points = cav_cho @ np.atleast_2d(x) + cav_mean
# pre-compute wᵢ pᵃ(yₙ|xᵢ√cₙ + mₙ)
weighted_likelihood_eval = w * self.evaluate_likelihood(y, sigma_points, hyp) ** power
# a different approach, based on the log-likelihood, which can be more stable:
# ll = self.evaluate_log_likelihood(y, sigma_points)
# lmax = np.max(ll)
# weighted_likelihood_eval = np.exp(lmax * power) * w * np.exp(power * (ll - lmax))
# Compute partition function via cubature:
# Zₙ = ∫ pᵃ(yₙ|fₙ) 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ pᵃ(yₙ|fsigᵢ)
Z = np.sum(
weighted_likelihood_eval, axis=-1
)
lZ = np.log(Z)
Zinv = 1.0 / Z
# Compute derivative of partition function via cubature:
# dZₙ/dmₙ = ∫ (fₙ-mₙ) vₙ⁻¹ pᵃ(yₙ|fₙ) 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ (fₙ-mₙ) vₙ⁻¹ pᵃ(yₙ|fsigᵢ)
covinv_f_m = cho_solve((cav_cho, low), sigma_points - cav_mean)
dZ = np.sum(
# (sigma_points - cav_mean) / cav_cov
covinv_f_m
* weighted_likelihood_eval,
axis=-1
)
# dlogZₙ/dmₙ = (dZₙ/dmₙ) / Zₙ
dlZ = Zinv * dZ
# Compute second derivative of partition function via cubature:
# d²Zₙ/dmₙ² = ∫ [(fₙ-mₙ)² vₙ⁻² - vₙ⁻¹] pᵃ(yₙ|fₙ) 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ [(fₙ-mₙ)² vₙ⁻² - vₙ⁻¹] pᵃ(yₙ|fsigᵢ)
d2Z = np.sum(
((sigma_points - cav_mean) ** 2 / cav_cov ** 2 - 1.0 / cav_cov)
* weighted_likelihood_eval
)
# d²logZₙ/dmₙ² = d[(dZₙ/dmₙ) / Zₙ]/dmₙ
# = (d²Zₙ/dmₙ² * Zₙ - (dZₙ/dmₙ)²) / Zₙ²
# = d²Zₙ/dmₙ² / Zₙ - (dlogZₙ/dmₙ)²
d2lZ = -dlZ @ dlZ.T + Zinv * d2Z
id2lZ = inv(ensure_positive_precision(-d2lZ) - 1e-10 * np.eye(d2lZ.shape[0]))
site_mean = cav_mean + id2lZ @ dlZ # approx. likelihood (site) mean (see Rasmussen & Williams p75)
site_cov = power * (-cav_cov + id2lZ) # approx. likelihood (site) variance
return lZ, site_mean, site_cov
@partial(jit, static_argnums=(0, 6))
def moment_match_cubature(self, y, cav_mean, cav_cov, hyp=None, power=1.0, cubature_func=None):
"""
TODO: N.B. THIS VERSION ALLOWS MULTI-DIMENSIONAL MOMENT MATCHING, BUT CAN BE UNSTABLE
Perform moment matching via cubature.
Moment matching involves computing the log partition function, logZₙ, and its derivatives w.r.t. the cavity mean
logZₙ = log ∫ pᵃ(yₙ|fₙ) 𝓝(fₙ|mₙ,vₙ) dfₙ
with EP power a.
:param y: observed data (yₙ) [scalar]
:param cav_mean: cavity mean (mₙ) [scalar]
:param cav_cov: cavity covariance (cₙ) [scalar]
:param hyp: likelihood hyperparameter [scalar]
:param power: EP power / fraction (a) [scalar]
:param cubature_func: the function to compute sigma points and weights to use during cubature
:return:
lZ: the log partition function, logZₙ [scalar]
dlZ: first derivative of logZₙ w.r.t. mₙ (if derivatives=True) [scalar]
d2lZ: second derivative of logZₙ w.r.t. mₙ (if derivatives=True) [scalar]
"""
if cubature_func is None:
x, w = gauss_hermite(cav_mean.shape[0], 20) # Gauss-Hermite sigma points and weights
else:
x, w = cubature_func(cav_mean.shape[0])
cav_cho, low = cho_factor(cav_cov)
# fsigᵢ=xᵢ√cₙ + mₙ: scale locations according to cavity dist.
sigma_points = cav_cho @ np.atleast_2d(x) + cav_mean
# pre-compute wᵢ pᵃ(yₙ|xᵢ√cₙ + mₙ)
weighted_likelihood_eval = w * self.evaluate_likelihood(y, sigma_points, hyp) ** power
# Compute partition function via cubature:
# Zₙ = ∫ pᵃ(yₙ|fₙ) 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ pᵃ(yₙ|fsigᵢ)
Z = np.sum(
weighted_likelihood_eval, axis=-1
)
lZ = np.log(np.maximum(Z, 1e-8))
Zinv = 1.0 / np.maximum(Z, 1e-8)
# Compute derivative of partition function via cubature:
# dZₙ/dmₙ = ∫ (fₙ-mₙ) vₙ⁻¹ pᵃ(yₙ|fₙ) 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ (fₙ-mₙ) vₙ⁻¹ pᵃ(yₙ|fsigᵢ)
d1 = vmap(
gaussian_first_derivative_wrt_mean, (1, None, None, 1)
)(sigma_points[..., None], cav_mean, cav_cov, weighted_likelihood_eval)
dZ = np.sum(d1, axis=0)
# dlogZₙ/dmₙ = (dZₙ/dmₙ) / Zₙ
dlZ = Zinv * dZ
# Compute second derivative of partition function via cubature:
# d²Zₙ/dmₙ² = ∫ [(fₙ-mₙ)² vₙ⁻² - vₙ⁻¹] pᵃ(yₙ|fₙ) 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ [(fₙ-mₙ)² vₙ⁻² - vₙ⁻¹] pᵃ(yₙ|fsigᵢ)
d2 = vmap(
gaussian_second_derivative_wrt_mean, (1, None, None, 1)
)(sigma_points[..., None], cav_mean, cav_cov, weighted_likelihood_eval)
d2Z = np.sum(d2, axis=0)
# d²logZₙ/dmₙ² = d[(dZₙ/dmₙ) / Zₙ]/dmₙ
# = (d²Zₙ/dmₙ² * Zₙ - (dZₙ/dmₙ)²) / Zₙ²
# = d²Zₙ/dmₙ² / Zₙ - (dlogZₙ/dmₙ)²
d2lZ = -dlZ @ dlZ.T + Zinv * d2Z
id2lZ = inv(ensure_positive_precision(-d2lZ) - 1e-10 * np.eye(d2lZ.shape[0]))
site_mean = cav_mean + id2lZ @ dlZ # approx. likelihood (site) mean (see Rasmussen & Williams p75)
site_cov = power * (-cav_cov + id2lZ) # approx. likelihood (site) variance
return lZ, site_mean, site_cov
@partial(jit, static_argnums=(0, 6))
def moment_match(self, y, m, v, hyp=None, power=1.0, cubature_func=None):
"""
If no custom moment matching method is provided, we use cubature.
"""
return self.moment_match_cubature(y, m, v, hyp, power, cubature_func)
@staticmethod
def link_fn(latent_mean):
return latent_mean
def sample(self, f, rng_key=123):
lik_expectation, lik_variance = self.conditional_moments(f)
lik_std = cholesky(np.diag(np.expand_dims(lik_variance, 0)))
return lik_expectation + lik_std * random.normal(random.PRNGKey(rng_key), shape=f.shape)
@partial(jit, static_argnums=(0, 4))
def statistical_linear_regression_cubature(self, cav_mean, cav_cov, hyp=None, cubature_func=None):
"""
Perform statistical linear regression (SLR) using cubature.
We aim to find a likelihood approximation p(yₙ|fₙ) ≈ 𝓝(yₙ|Afₙ+b,Ω+Var[yₙ|fₙ]).
TODO: this currently assumes an additive noise model (ok for our current applications), make more general
"""
if cubature_func is None:
x, w = gauss_hermite(cav_mean.shape[0], 20) # Gauss-Hermite sigma points and weights
else:
x, w = cubature_func(cav_mean.shape[0])
# fsigᵢ=xᵢ√(vₙ) + mₙ: scale locations according to cavity dist.
sigma_points = cholesky(cav_cov) @ np.atleast_2d(x) + cav_mean
lik_expectation, lik_covariance = self.conditional_moments(sigma_points, hyp)
# Compute zₙ via cubature:
# zₙ = ∫ E[yₙ|fₙ] 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ E[yₙ|fsigᵢ]
mu = np.sum(
w * lik_expectation, axis=-1
)[:, None]
# Compute variance S via cubature:
# S = ∫ [(E[yₙ|fₙ]-zₙ) (E[yₙ|fₙ]-zₙ)' + Cov[yₙ|fₙ]] 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ [(E[yₙ|fsigᵢ]-zₙ) (E[yₙ|fsigᵢ]-zₙ)' + Cov[yₙ|fₙ]]
# TODO: allow for multi-dim cubature
S = np.sum(
w * ((lik_expectation - mu) * (lik_expectation - mu) + lik_covariance), axis=-1
)[:, None]
# Compute cross covariance C via cubature:
# C = ∫ (fₙ-mₙ) (E[yₙ|fₙ]-zₙ)' 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ (fsigᵢ -mₙ) (E[yₙ|fsigᵢ]-zₙ)'
C = np.sum(
w * (sigma_points - cav_mean) * (lik_expectation - mu), axis=-1
)[:, None]
# Compute derivative of z via cubature:
# omega = ∫ E[yₙ|fₙ] vₙ⁻¹ (fₙ-mₙ) 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ E[yₙ|fsigᵢ] vₙ⁻¹ (fsigᵢ-mₙ)
omega = np.sum(
w * lik_expectation * (inv(cav_cov) @ (sigma_points - cav_mean)), axis=-1
)[None, :]
return mu, S, C, omega
@partial(jit, static_argnums=(0, 4))
def statistical_linear_regression(self, m, v, hyp=None, cubature_func=None):
"""
If no custom SLR method is provided, we use cubature.
"""
return self.statistical_linear_regression_cubature(m, v, hyp, cubature_func)
@partial(jit, static_argnums=0)
def observation_model(self, f, sigma, hyp=None):
"""
The implicit observation model is:
h(fₙ,σₙ) = E[yₙ|fₙ] + √Cov[yₙ|fₙ] σₙ
"""
conditional_expectation, conditional_covariance = self.conditional_moments(f, hyp)
obs_model = conditional_expectation + cholesky(conditional_covariance) @ sigma
return np.squeeze(obs_model)
@partial(jit, static_argnums=0)
def analytical_linearisation(self, m, sigma=None, hyp=None):
"""
Compute the Jacobian of the state space observation model w.r.t. the
function fₙ and the noise term σₙ.
The implicit observation model is:
h(fₙ,σₙ) = E[yₙ|fₙ] + √Cov[yₙ|fₙ] σₙ
The Jacobians are evaluated at the means, fₙ=m, σₙ=0, to be used during
Extended Kalman filtering and Extended EP.
"""
sigma = np.array([[0.0]]) if sigma is None else sigma
Jf, Jsigma = jacrev(self.observation_model, argnums=(0, 1))(m, sigma, hyp)
return np.atleast_2d(np.squeeze(Jf)), np.atleast_2d(np.squeeze(Jsigma))
@partial(jit, static_argnums=(0, 5))
def variational_expectation_cubature(self, y, post_mean, post_cov, hyp=None, cubature_func=None):
"""
Computes the "variational expectation" via cubature, i.e. the
expected log-likelihood, and its derivatives w.r.t. the posterior mean
E[log p(yₙ|fₙ)] = ∫ log p(yₙ|fₙ) 𝓝(fₙ|mₙ,vₙ) dfₙ
:param y: observed data (yₙ) [scalar]
:param post_mean: posterior mean (mₙ) [scalar]
:param post_cov: posterior variance (vₙ) [scalar]
:param hyp: likelihood hyperparameter [scalar]
:param cubature_func: the function to compute sigma points and weights to use during cubature
:return:
exp_log_lik: the expected log likelihood, E[log p(yₙ|fₙ)] [scalar]
dE_dm: derivative of E[log p(yₙ|fₙ)] w.r.t. mₙ [scalar]
dE_dv: derivative of E[log p(yₙ|fₙ)] w.r.t. vₙ [scalar]
"""
if cubature_func is None:
x, w = gauss_hermite(post_mean.shape[0], 20) # Gauss-Hermite sigma points and weights
else:
x, w = cubature_func(post_mean.shape[0])
        # fsigᵢ=xᵢ√(vₙ) + mₙ: scale locations according to the posterior
        sigma_points = cholesky(post_cov) @ np.atleast_2d(x) + post_mean
        # pre-compute wᵢ log p(yₙ|fsigᵢ)
weighted_log_likelihood_eval = w * self.evaluate_log_likelihood(y, sigma_points, hyp)
# Compute expected log likelihood via cubature:
# E[log p(yₙ|fₙ)] = ∫ log p(yₙ|fₙ) 𝓝(fₙ|mₙ,vₙ) dfₙ
        # ≈ ∑ᵢ wᵢ log p(yₙ|fsigᵢ)
exp_log_lik = np.sum(
weighted_log_likelihood_eval
)
# Compute first derivative via cubature:
# dE[log p(yₙ|fₙ)]/dmₙ = ∫ (fₙ-mₙ) vₙ⁻¹ log p(yₙ|fₙ) 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ (fₙ-mₙ) vₙ⁻¹ log p(yₙ|fsigᵢ)
invv = np.diag(post_cov)[:, None] ** -1
dE_dm = np.sum(
invv * (sigma_points - post_mean)
* weighted_log_likelihood_eval, axis=-1
)[:, None]
# Compute second derivative via cubature (deriv. w.r.t. var = 0.5 * 2nd deriv. w.r.t. mean):
# dE[log p(yₙ|fₙ)]/dvₙ = ∫ [(fₙ-mₙ)² vₙ⁻² - vₙ⁻¹]/2 log p(yₙ|fₙ) 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ [(fₙ-mₙ)² vₙ⁻² - vₙ⁻¹]/2 log p(yₙ|fsigᵢ)
dE_dv = np.sum(
(0.5 * (invv ** 2 * (sigma_points - post_mean) ** 2) - 0.5 * invv)
* weighted_log_likelihood_eval, axis=-1
)
dE_dv = np.diag(dE_dv)
return exp_log_lik, dE_dm, dE_dv
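For the Gaussian likelihood the expected log-likelihood has a closed form, which makes a convenient standalone check of the cubature rule above (plain NumPy; variable names are illustrative): E[log 𝓝(y|f,s)] under f ~ 𝓝(m,v) equals -½log(2πs) - ((y-m)² + v)/(2s).

```python
import numpy as np

def gh_expectation(fun, mean, var, order=20):
    # Gauss-Hermite nodes/weights for ∫ exp(-x²) g(x) dx, rescaled so the
    # weight function becomes the Gaussian N(mean, var).
    x, w = np.polynomial.hermite.hermgauss(order)
    f = mean + np.sqrt(2.0 * var) * x
    return np.sum(w * fun(f)) / np.sqrt(np.pi)

y, m, v, s = 0.3, -0.1, 0.5, 0.2
log_lik = lambda f: -0.5 * np.log(2 * np.pi * s) - 0.5 * (y - f) ** 2 / s
approx = gh_expectation(log_lik, m, v)
exact = -0.5 * np.log(2 * np.pi * s) - 0.5 * ((y - m) ** 2 + v) / s
```

The integrand is quadratic in f, so a 20-point rule is exact up to floating-point error.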
@partial(jit, static_argnums=(0, 5))
def variational_expectation(self, y, m, v, hyp=None, cubature_func=None):
"""
If no custom variational expectation method is provided, we use cubature.
"""
return self.variational_expectation_cubature(y, m, v, hyp, cubature_func)
class Gaussian(Likelihood):
"""
The Gaussian likelihood:
p(yₙ|fₙ) = 𝓝(yₙ|fₙ,σ²)
"""
def __init__(self, variance=0.1):
"""
:param variance: The observation noise variance, σ²
"""
super().__init__(hyp=variance)
self.name = 'Gaussian'
@property
def variance(self):
return softplus(self.hyp)
@partial(jit, static_argnums=0)
def evaluate_likelihood(self, y, f, hyp=None):
"""
Evaluate the Gaussian function 𝓝(yₙ|fₙ,σ²).
Can be used to evaluate Q cubature points.
:param y: observed data yₙ [scalar]
:param f: mean, i.e. the latent function value fₙ [Q, 1]
:param hyp: likelihood variance σ² [scalar]
:return:
𝓝(yₙ|fₙ,σ²), where σ² is the observation noise [Q, 1]
"""
hyp = softplus(self.hyp) if hyp is None else hyp
return (2 * pi * hyp) ** -0.5 * np.exp(-0.5 * (y - f) ** 2 / hyp)
@partial(jit, static_argnums=0)
def evaluate_log_likelihood(self, y, f, hyp=None):
"""
Evaluate the log-Gaussian function log𝓝(yₙ|fₙ,σ²).
Can be used to evaluate Q cubature points.
:param y: observed data yₙ [scalar]
:param f: mean, i.e. the latent function value fₙ [Q, 1]
:param hyp: likelihood variance σ² [scalar]
:return:
log𝓝(yₙ|fₙ,σ²), where σ² is the observation noise [Q, 1]
"""
hyp = softplus(self.hyp) if hyp is None else hyp
return -0.5 * np.log(2 * pi * hyp) - 0.5 * (y - f) ** 2 / hyp
@partial(jit, static_argnums=0)
def conditional_moments(self, f, hyp=None):
"""
The first two conditional moments of a Gaussian are the mean and variance:
E[y|f] = f
Var[y|f] = σ²
"""
hyp = softplus(self.hyp) if hyp is None else hyp
return f, hyp.reshape(-1, 1)
@partial(jit, static_argnums=(0, 6))
def moment_match(self, y, cav_mean, cav_cov, hyp=None, power=1.0, cubature_func=None):
"""
Closed form Gaussian moment matching.
Calculates the log partition function of the EP tilted distribution:
logZₙ = log ∫ 𝓝ᵃ(yₙ|fₙ,σ²) 𝓝(fₙ|mₙ,vₙ) dfₙ = E[𝓝(yₙ|fₙ,σ²)]
and its derivatives w.r.t. mₙ, which are required for moment matching.
:param y: observed data (yₙ) [scalar]
:param cav_mean: cavity mean (mₙ) [scalar]
:param cav_cov: cavity variance (vₙ) [scalar]
:param hyp: observation noise variance (σ²) [scalar]
        :param power: EP power / fraction (a) - not required for the Gaussian likelihood, where the moments are available in closed form [scalar]
:param cubature_func: not used
:return:
            lZ: the log partition function, logZₙ [scalar]
            dlZ: first derivative of logZₙ w.r.t. mₙ [scalar]
            d2lZ: second derivative of logZₙ w.r.t. mₙ [scalar]
"""
hyp = softplus(self.hyp) if hyp is None else hyp
return gaussian_moment_match(y, cav_mean, cav_cov, hyp)
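A minimal NumPy sketch of the closed form used here (gaussian_moment_match itself is defined elsewhere in this codebase; the formulas below are the standard ones): logZₙ = log 𝓝(yₙ|mₙ, vₙ+σ²), with dlZ = (yₙ-mₙ)/(vₙ+σ²) and d2lZ = -1/(vₙ+σ²). The first derivative is cross-checked by finite differences.

```python
import numpy as np

def gaussian_log_partition(y, m, v, s):
    # logZ = log ∫ N(y|f,s) N(f|m,v) df = log N(y|m, v+s)
    c = v + s
    lZ = -0.5 * np.log(2 * np.pi * c) - 0.5 * (y - m) ** 2 / c
    dlZ = (y - m) / c      # d logZ / dm
    d2lZ = -1.0 / c        # d² logZ / dm²
    return lZ, dlZ, d2lZ

y, m, v, s = 0.7, 0.2, 0.4, 0.1
lZ, dlZ, d2lZ = gaussian_log_partition(y, m, v, s)
eps = 1e-6
dlZ_fd = (gaussian_log_partition(y, m + eps, v, s)[0]
          - gaussian_log_partition(y, m - eps, v, s)[0]) / (2 * eps)
```

logZ is quadratic in m, so the central difference matches the analytic derivative essentially exactly.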
class Bernoulli(Likelihood):
"""
Bernoulli likelihood is p(yₙ|fₙ) = Pʸ(1-P)⁽¹⁻ʸ⁾, where P = E[yₙ=1|fₙ].
Link function maps latent GP to [0,1].
The Probit link function, i.e. the Error Function Likelihood:
i.e. the Gaussian (Normal) cumulative density function:
P = E[yₙ=1|fₙ] = Φ(fₙ)
= ∫ 𝓝(x|0,1) dx, where the integral is over (-∞, fₙ],
    The Normal CDF is calculated using the error function:
= (1 + erf(fₙ / √2)) / 2
for erf(z) = (2/√π) ∫ exp(-x²) dx, where the integral is over [0, z]
The logit link function:
        P = E[yₙ=1|fₙ] = 1 / (1 + exp(-fₙ))
"""
def __init__(self, link):
super().__init__(hyp=None)
if link == 'logit':
self.link_fn = lambda f: 1 / (1 + np.exp(-f))
self.dlink_fn = lambda f: np.exp(f) / (1 + np.exp(f)) ** 2
self.link = link
elif link == 'probit':
jitter = 1e-10
self.link_fn = lambda f: 0.5 * (1.0 + erf(f / np.sqrt(2.0))) * (1 - 2 * jitter) + jitter
self.dlink_fn = lambda f: grad(self.link_fn)(np.squeeze(f)).reshape(-1, 1)
self.link = link
else:
raise NotImplementedError('link function not implemented')
self.name = 'Bernoulli'
@partial(jit, static_argnums=0)
def evaluate_likelihood(self, y, f, hyp=None):
"""
:param y: observed data yₙ ϵ {-1, +1} [scalar]
:param f: latent function value fₙ ϵ ℝ
:param hyp: dummy input, Probit/Logit has no hyperparameters
:return:
p(yₙ|fₙ) = Pʸ(1-P)⁽¹⁻ʸ⁾
"""
return np.where(np.equal(y, 1), self.link_fn(f), 1 - self.link_fn(f))
@partial(jit, static_argnums=0)
def evaluate_log_likelihood(self, y, f, hyp=None):
"""
:param y: observed data yₙ ϵ {-1, +1} [scalar]
:param f: latent function value fₙ ϵ ℝ
:param hyp: dummy input, Probit has no hyperparameters
:return:
log p(yₙ|fₙ)
"""
return np.log(self.evaluate_likelihood(y, f))
@partial(jit, static_argnums=0)
def conditional_moments(self, f, hyp=None):
"""
The first two conditional moments of a Probit likelihood are:
E[yₙ|fₙ] = Φ(fₙ)
Var[yₙ|fₙ] = Φ(fₙ) (1 - Φ(fₙ))
"""
return self.link_fn(f), self.link_fn(f)-(self.link_fn(f)**2)
@partial(jit, static_argnums=(0, 5, 6))
def moment_match(self, y, m, v, hyp=None, power=1.0, cubature_func=None):
"""
Probit likelihood moment matching.
Calculates the log partition function of the EP tilted distribution:
logZₙ = log ∫ Φᵃ(yₙfₙ) 𝓝(fₙ|mₙ,vₙ) dfₙ
and its derivatives w.r.t. mₙ, which are required for moment matching.
If the EP fraction a = 1, we get
= log Φ(yₙzₙ), where zₙ = mₙ / √(1 + vₙ) [see Rasmussen & Williams p74]
otherwise we must use cubature to compute the log partition and its derivatives.
:param y: observed data (yₙ) [scalar]
:param m: cavity mean (mₙ) [scalar]
:param v: cavity variance (vₙ) [scalar]
:param hyp: dummy variable (Probit has no hyperparameters)
:param power: EP power / fraction (a) [scalar]
:param cubature_func: function returning the sigma points and weights for cubature
:return:
            lZ: the log partition function, logZₙ [scalar]
            dlZ: first derivative of logZₙ w.r.t. mₙ [scalar]
            d2lZ: second derivative of logZₙ w.r.t. mₙ [scalar]
"""
        y = np.sign(y)  # normalise labels: positives map to +1, negatives to -1 (zeros are handled below)
if power == 1 and self.link == 'probit': # if a = 1, we can calculate the moments in closed form
y = np.sign(y - 0.01) # set zeros to -1 for closed form probit calc
z = m / np.sqrt(1.0 + v)
z = z * y # zₙ = yₙmₙ / √(1 + vₙ)
# logZₙ = log ∫ Φ(yₙfₙ) 𝓝(fₙ|mₙ,vₙ) dfₙ
# = log Φ(yₙmₙ/√(1 + vₙ)) [see Rasmussen & Williams p74]
lZ, dlp = logphi(z)
# dlogZₙ/dmₙ = yₙ dlogΦ(zₙ)/dmₙ / √(1 + vₙ)
dlZ = y * dlp / np.sqrt(1.0 + v) # first derivative w.r.t mₙ
# d²logZₙ/dmₙ² = -dlogΦ(zₙ)/dmₙ (zₙ + dlogΦ(zₙ)/dmₙ) / √(1 + vₙ)
d2lZ = -dlp * (z + dlp) / (1.0 + v) # second derivative w.r.t mₙ
site_mean = m - dlZ / d2lZ # approx. likelihood (site) mean (see Rasmussen & Williams p75)
site_var = - (v + 1 / d2lZ) # approx. likelihood (site) variance
return lZ, site_mean, site_var
else:
            # otherwise, compute the moments via cubature
return self.moment_match_cubature(y, m, v, None, power, cubature_func)
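The a = 1 branch can be reproduced standalone. This sketch (plain math/NumPy, independent of the class) evaluates logZₙ = log Φ(yₙmₙ/√(1+vₙ)) and cross-checks it against Gauss-Hermite cubature of ∫ Φ(yₙfₙ) 𝓝(fₙ|mₙ,vₙ) dfₙ:

```python
import math
import numpy as np

def phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

y, m, v = 1.0, 0.4, 0.6
# Closed form (Rasmussen & Williams, p. 74)
lZ_closed = math.log(phi(y * m / math.sqrt(1.0 + v)))
# Cubature cross-check
x, w = np.polynomial.hermite.hermgauss(50)
f = m + math.sqrt(2.0 * v) * x
Z = sum(wi * phi(y * fi) for wi, fi in zip(w, f)) / math.sqrt(math.pi)
```

With 50 Gauss-Hermite points the cubature estimate agrees with the closed form to high precision.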
@partial(jit, static_argnums=0)
def analytical_linearisation(self, m, sigma=None, hyp=None):
"""
Compute the Jacobian of the state space observation model w.r.t. the
function fₙ and the noise term σₙ.
"""
Jf = self.dlink_fn(m) + (
0.5 * (self.link_fn(m) * (1 - self.link_fn(m))) ** -0.5
* self.dlink_fn(m) * (1 - 2 * self.link_fn(m))
) * sigma
Jsigma = (self.link_fn(m) * (1 - self.link_fn(m))) ** 0.5
return Jf, Jsigma
class Probit(Bernoulli):
"""
The probit likelihood = Bernoulli likelihood with probit link.
"""
def __init__(self):
super().__init__(link='probit')
class Erf(Probit):
"""
The error function likelihood = probit = Bernoulli likelihood with probit link.
"""
pass
class Logit(Bernoulli):
"""
The logit likelihood = Bernoulli likelihood with logit link.
"""
def __init__(self):
super().__init__(link='logit')
class Logistic(Logit):
"""
The logistic likelihood = logit = Bernoulli likelihood with logit link.
"""
pass
class Poisson(Likelihood):
"""
The Poisson likelihood:
p(yₙ|fₙ) = Poisson(fₙ) = μʸ exp(-μ) / yₙ!
where μ = g(fₙ) = mean = variance is the Poisson intensity.
yₙ is non-negative integer count data.
    No closed form moment matching is available, so we default to cubature.
    Letting Zy = gamma(yₙ+1) = yₙ!, we get log p(yₙ|fₙ) = yₙ log(g(fₙ)) - g(fₙ) - log(Zy)
    The larger the intensity μ, the more closely the likelihood resembles a Gaussian,
    since skewness = 1/√μ and excess kurtosis = 1/μ.
    Two possible link functions:
        'exp':      link(fₙ) = exp(fₙ), giving p(yₙ|fₙ) = exp(fₙyₙ - exp(fₙ)) / Zy.
        'logistic': link(fₙ) = log(1+exp(fₙ)), giving p(yₙ|fₙ) = logʸ(1+exp(fₙ)) / ((1+exp(fₙ)) Zy).
"""
def __init__(self, link='exp'):
"""
:param link: link function, either 'exp' or 'logistic'
"""
super().__init__(hyp=None)
if link == 'exp':
self.link_fn = lambda mu: np.exp(mu)
self.dlink_fn = lambda mu: np.exp(mu)
elif link == 'logistic':
self.link_fn = lambda mu: softplus(mu)
self.dlink_fn = lambda mu: sigmoid(mu)
else:
raise NotImplementedError('link function not implemented')
self.name = 'Poisson'
@partial(jit, static_argnums=0)
def evaluate_likelihood(self, y, f, hyp=None):
"""
Evaluate the Poisson likelihood:
p(yₙ|fₙ) = Poisson(fₙ) = μʸ exp(-μ) / yₙ!
for μ = g(fₙ), where g() is the link function (exponential or logistic).
We use the gamma function to evaluate yₙ! = gamma(yₙ + 1).
Can be used to evaluate Q cubature points when performing moment matching.
:param y: observed data (yₙ) [scalar]
:param f: latent function value (fₙ) [Q, 1]
:param hyp: dummy variable (Poisson has no hyperparameters)
:return:
Poisson(fₙ) = μʸ exp(-μ) / yₙ! [Q, 1]
"""
mu = self.link_fn(f)
return mu**y * np.exp(-mu) / np.exp(gammaln(y + 1))
@partial(jit, static_argnums=0)
def evaluate_log_likelihood(self, y, f, hyp=None):
"""
Evaluate the Poisson log-likelihood:
log p(yₙ|fₙ) = log Poisson(fₙ) = log(μʸ exp(-μ) / yₙ!)
for μ = g(fₙ), where g() is the link function (exponential or logistic).
We use the gamma function to evaluate yₙ! = gamma(yₙ + 1).
Can be used to evaluate Q cubature points when performing moment matching.
:param y: observed data (yₙ) [scalar]
:param f: latent function value (fₙ) [Q, 1]
:param hyp: dummy variable (Poisson has no hyperparameters)
:return:
log Poisson(fₙ) = log(μʸ exp(-μ) / yₙ!) [Q, 1]
"""
mu = self.link_fn(f)
return y * np.log(mu) - mu - gammaln(y + 1)
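A standalone check of the lgamma form (stdlib math module; the values are illustrative): for a small count, the log-likelihood computed with math.lgamma agrees with direct evaluation via the factorial.

```python
import math

def poisson_log_lik(y, f):
    # Exponential link: mu = exp(f); log p(y|f) = y*log(mu) - mu - log(y!)
    mu = math.exp(f)
    return y * math.log(mu) - mu - math.lgamma(y + 1)

y, f = 3, 0.5
# Direct evaluation of log(mu^y * exp(-mu) / y!) for comparison
direct = math.log(math.exp(f) ** y * math.exp(-math.exp(f)) / math.factorial(y))
```

Using gammaln/lgamma avoids overflow of y! for large counts while giving the same value.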
@partial(jit, static_argnums=0)
def observation_model(self, f, sigma, hyp=None):
"""
TODO: sort out broadcasting so we don't need this additional function (only difference is the transpose)
The implicit observation model is:
h(fₙ,rₙ) = E[yₙ|fₙ] + √Cov[yₙ|fₙ] σₙ
"""
conditional_expectation, conditional_covariance = self.conditional_moments(f, hyp)
obs_model = conditional_expectation + cholesky(conditional_covariance.T) @ sigma
return np.squeeze(obs_model)
@partial(jit, static_argnums=0)
def conditional_moments(self, f, hyp=None):
"""
The first two conditional moments of a Poisson distribution are equal to the intensity:
E[yₙ|fₙ] = link(fₙ)
Var[yₙ|fₙ] = link(fₙ)
"""
# return self.link_fn(f), self.link_fn(f)
return self.link_fn(f), vmap(np.diag, 1, 2)(self.link_fn(f))
@partial(jit, static_argnums=0)
def analytical_linearisation(self, m, sigma=None, hyp=None):
"""
Compute the Jacobian of the state space observation model w.r.t. the
function fₙ and the noise term σₙ.
"""
Jf = np.diag(np.squeeze(self.link_fn(m) + 0.5 * self.link_fn(m) ** -0.5 * self.dlink_fn(m) * sigma, axis=-1))
Jsigma = np.diag(np.squeeze(self.link_fn(m) ** 0.5, axis=-1))
return Jf, Jsigma
class HeteroscedasticNoise(Likelihood):
"""
The Heteroscedastic Noise likelihood:
p(y|f1,f2) = N(y|f1,link(f2)^2)
"""
def __init__(self, link='softplus'):
"""
:param link: link function, either 'exp' or 'softplus' (note that the link is modified with an offset)
"""
super().__init__(hyp=None)
if link == 'exp':
self.link_fn = lambda mu: np.exp(mu - 0.5)
self.dlink_fn = lambda mu: np.exp(mu - 0.5)
elif link == 'softplus':
self.link_fn = lambda mu: softplus(mu - 0.5) + 1e-10
self.dlink_fn = lambda mu: sigmoid(mu - 0.5)
else:
raise NotImplementedError('link function not implemented')
self.name = 'Heteroscedastic Noise'
@partial(jit, static_argnums=0)
def evaluate_likelihood(self, y, f, hyp=None):
"""
Evaluate the likelihood
"""
mu, var = self.conditional_moments(f)
return (2 * pi * var) ** -0.5 * np.exp(-0.5 * (y - mu) ** 2 / var)
@partial(jit, static_argnums=0)
def evaluate_log_likelihood(self, y, f, hyp=None):
"""
Evaluate the log-likelihood
"""
mu, var = self.conditional_moments(f)
return -0.5 * np.log(2 * pi * var) - 0.5 * (y - mu) ** 2 / var
@partial(jit, static_argnums=0)
def conditional_moments(self, f, hyp=None):
"""
"""
return f[0][None, ...], self.link_fn(f[1][None, ...]) ** 2
@partial(jit, static_argnums=(0, 6))
def moment_match(self, y, cav_mean, cav_cov, hyp=None, power=1.0, cubature_func=None):
"""
"""
if cubature_func is None:
x, w = gauss_hermite(1, 20) # Gauss-Hermite sigma points and weights
else:
x, w = cubature_func(1)
# sigma_points = np.sqrt(2) * np.sqrt(v) * x + m # scale locations according to cavity dist.
sigma_points = np.sqrt(cav_cov[1, 1]) * x + cav_mean[1] # fsigᵢ=xᵢ√cₙ + mₙ: scale locations according to cavity
f2 = self.link_fn(sigma_points) ** 2. / power
obs_var = f2 + cav_cov[0, 0]
const = power ** -0.5 * (2 * pi * self.link_fn(sigma_points) ** 2.) ** (0.5 - 0.5 * power)
normpdf = const * (2 * pi * obs_var) ** -0.5 * np.exp(-0.5 * (y - cav_mean[0, 0]) ** 2 / obs_var)
Z = np.sum(w * normpdf)
Zinv = 1. / np.maximum(Z, 1e-8)
lZ = np.log(np.maximum(Z, 1e-8))
dZ_integrand1 = (y - cav_mean[0, 0]) / obs_var * normpdf
dlZ1 = Zinv * np.sum(w * dZ_integrand1)
dZ_integrand2 = (sigma_points - cav_mean[1, 0]) / cav_cov[1, 1] * normpdf
dlZ2 = Zinv * np.sum(w * dZ_integrand2)
d2Z_integrand1 = (-(f2 + cav_cov[0, 0]) ** -1 + ((y - cav_mean[0, 0]) / obs_var) ** 2) * normpdf
d2lZ1 = -dlZ1 ** 2 + Zinv * np.sum(w * d2Z_integrand1)
d2Z_integrand2 = (-cav_cov[1, 1] ** -1 + ((sigma_points - cav_mean[1, 0]) / cav_cov[1, 1]) ** 2) * normpdf
d2lZ2 = -dlZ2 ** 2 + Zinv * np.sum(w * d2Z_integrand2)
dlZ = np.block([[dlZ1],
[dlZ2]])
d2lZ = np.block([[d2lZ1, 0],
[0., d2lZ2]])
id2lZ = inv(ensure_positive_precision(-d2lZ) - 1e-10 * np.eye(d2lZ.shape[0]))
site_mean = cav_mean + id2lZ @ dlZ # approx. likelihood (site) mean (see Rasmussen & Williams p75)
site_cov = power * (-cav_cov + id2lZ) # approx. likelihood (site) variance
return lZ, site_mean, site_cov
@partial(jit, static_argnums=0)
def log_expected_likelihood(self, y, x, w, cav_mean, cav_var, power):
sigma_points = np.sqrt(cav_var[1]) * x + cav_mean[1]
f2 = self.link_fn(sigma_points) ** 2. / power
obs_var = f2 + cav_var[0]
const = power ** -0.5 * (2 * pi * self.link_fn(sigma_points) ** 2.) ** (0.5 - 0.5 * power)
normpdf = const * (2 * pi * obs_var) ** -0.5 * np.exp(-0.5 * (y - cav_mean[0]) ** 2 / obs_var)
Z = np.sum(w * normpdf)
lZ = np.log(Z + 1e-8)
return lZ
@partial(jit, static_argnums=0)
def dlZ_dm(self, y, x, w, cav_mean, cav_var, power):
return jacrev(self.log_expected_likelihood, argnums=3)(y, x, w, cav_mean, cav_var, power)
@partial(jit, static_argnums=(0, 6))
def moment_match_unstable(self, y, cav_mean, cav_cov, hyp=None, power=1.0, cubature_func=None):
"""
TODO: Attempt to compute full site covariance, including cross terms. However, this makes things unstable.
"""
if cubature_func is None:
x, w = gauss_hermite(1, 20) # Gauss-Hermite sigma points and weights
else:
x, w = cubature_func(1)
lZ = self.log_expected_likelihood(y, x, w, np.squeeze(cav_mean), np.squeeze(np.diag(cav_cov)), power)
dlZ = self.dlZ_dm(y, x, w, np.squeeze(cav_mean), np.squeeze(np.diag(cav_cov)), power)[:, None]
d2lZ = jacrev(self.dlZ_dm, argnums=3)(y, x, w, np.squeeze(cav_mean), np.squeeze(np.diag(cav_cov)), power)
# d2lZ = np.diag(np.diag(d2lZ)) # discard cross terms
id2lZ = inv(ensure_positive_precision(-d2lZ) - 1e-10 * np.eye(d2lZ.shape[0]))
site_mean = cav_mean + id2lZ @ dlZ # approx. likelihood (site) mean (see Rasmussen & Williams p75)
site_cov = power * (-cav_cov + id2lZ) # approx. likelihood (site) variance
return lZ, site_mean, site_cov
@partial(jit, static_argnums=(0, 5))
def variational_expectation(self, y, m, v, hyp=None, cubature_func=None):
"""
"""
if cubature_func is None:
x, w = gauss_hermite(1, 20) # Gauss-Hermite sigma points and weights
else:
x, w = cubature_func(1)
m0, m1, v0, v1 = m[0, 0], m[1, 0], v[0, 0], v[1, 1]
        sigma_points = np.sqrt(v1) * x + m1  # fsigᵢ=xᵢ√(vₙ) + mₙ: scale locations according to the posterior
        # pre-compute wᵢ log p(yₙ|fsigᵢ)
var = self.link_fn(sigma_points) ** 2
log_lik = np.log(var) + var ** -1 * ((y - m0) ** 2 + v0)
weighted_log_likelihood_eval = w * log_lik
# Compute expected log likelihood via cubature:
# E[log p(yₙ|fₙ)] = ∫ log p(yₙ|fₙ) 𝓝(fₙ|mₙ,vₙ) dfₙ
        # ≈ ∑ᵢ wᵢ log p(yₙ|fsigᵢ)
exp_log_lik = -0.5 * np.log(2 * pi) - 0.5 * np.sum(
weighted_log_likelihood_eval
)
# Compute first derivative via cubature:
dE_dm1 = np.sum(
(var ** -1 * (y - m0 + v0)) * w
)
# dE[log p(yₙ|fₙ)]/dmₙ = ∫ (fₙ-mₙ) vₙ⁻¹ log p(yₙ|fₙ) 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ (fₙ-mₙ) vₙ⁻¹ log p(yₙ|fsigᵢ)
dE_dm2 = - 0.5 * np.sum(
weighted_log_likelihood_eval * v1 ** -1 * (sigma_points - m1)
)
# Compute derivative w.r.t. variance:
dE_dv1 = -0.5 * np.sum(
var ** -1 * w
)
# dE[log p(yₙ|fₙ)]/dvₙ = ∫ [(fₙ-mₙ)² vₙ⁻² - vₙ⁻¹]/2 log p(yₙ|fₙ) 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ [(fₙ-mₙ)² vₙ⁻² - vₙ⁻¹]/2 log p(yₙ|fsigᵢ)
dE_dv2 = -0.25 * np.sum(
(v1 ** -2 * (sigma_points - m1) ** 2 - v1 ** -1)
* weighted_log_likelihood_eval
)
dE_dm = np.block([[dE_dm1],
[dE_dm2]])
dE_dv = np.block([[dE_dv1, 0],
[0., dE_dv2]])
return exp_log_lik, dE_dm, dE_dv
@partial(jit, static_argnums=(0, 4))
def statistical_linear_regression(self, cav_mean, cav_cov, hyp=None, cubature_func=None):
"""
Perform statistical linear regression (SLR) using cubature.
We aim to find a likelihood approximation p(yₙ|fₙ) ≈ 𝓝(yₙ|Afₙ+b,Ω+Var[yₙ|fₙ]).
"""
if cubature_func is None:
x, w = gauss_hermite(cav_mean.shape[0], 20) # Gauss-Hermite sigma points and weights
else:
x, w = cubature_func(cav_mean.shape[0])
m0, m1, v0, v1 = cav_mean[0, 0], cav_mean[1, 0], cav_cov[0, 0], cav_cov[1, 1]
# fsigᵢ=xᵢ√(vₙ) + mₙ: scale locations according to cavity dist.
sigma_points = cholesky(cav_cov) @ x + cav_mean
var = self.link_fn(sigma_points[1]) ** 2
# Compute zₙ via cubature:
# zₙ = ∫ E[yₙ|fₙ] 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ E[yₙ|fsigᵢ]
mu = m0.reshape(1, 1)
# Compute variance S via cubature:
# S = ∫ [(E[yₙ|fₙ]-zₙ) (E[yₙ|fₙ]-zₙ)' + Cov[yₙ|fₙ]] 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ [(E[yₙ|fsigᵢ]-zₙ) (E[yₙ|fsigᵢ]-zₙ)' + Cov[yₙ|fₙ]]
S = v0 + np.sum(
w * var
)
S = S.reshape(1, 1)
# Compute cross covariance C via cubature:
# C = ∫ (fₙ-mₙ) (E[yₙ|fₙ]-zₙ)' 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ (fsigᵢ -mₙ) (E[yₙ|fsigᵢ]-zₙ)'
C = np.sum(
w * (sigma_points - cav_mean) * (sigma_points[0] - m0), axis=-1
).reshape(2, 1)
# Compute derivative of z via cubature:
# omega = ∫ E[yₙ|fₙ] vₙ⁻¹ (fₙ-mₙ) 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ E[yₙ|fsigᵢ] vₙ⁻¹ (fsigᵢ-mₙ)
omega = np.block([[1., 0.]])
return mu, S, C, omega
@partial(jit, static_argnums=0)
def analytical_linearisation(self, m, sigma=None, hyp=None):
"""
Compute the Jacobian of the state space observation model w.r.t. the
function fₙ and the noise term σₙ.
"""
return np.block([[np.array(1.0), self.dlink_fn(m[1]) * sigma]]), self.link_fn(np.array([m[1]]))
class AudioAmplitudeDemodulation(Likelihood):
"""
The Audio Amplitude Demodulation likelihood
"""
def __init__(self, variance=0.1):
"""
param hyp: observation noise
"""
super().__init__(hyp=variance)
self.name = 'Audio Amplitude Demodulation'
self.link_fn = lambda f: softplus(f)
self.dlink_fn = lambda f: sigmoid(f) # derivative of the link function
@property
def variance(self):
return softplus(self.hyp)
@partial(jit, static_argnums=0)
def evaluate_likelihood(self, y, f, hyp=None):
"""
Evaluate the likelihood
"""
mu, var = self.conditional_moments(f, hyp)
return (2 * pi * var) ** -0.5 * np.exp(-0.5 * (y - mu) ** 2 / var)
@partial(jit, static_argnums=0)
def evaluate_log_likelihood(self, y, f, hyp=None):
"""
Evaluate the log-likelihood
"""
mu, var = self.conditional_moments(f, hyp)
return -0.5 * np.log(2 * pi * var) - 0.5 * (y - mu) ** 2 / var
@partial(jit, static_argnums=0)
def conditional_moments(self, f, hyp=None):
"""
"""
obs_noise_var = hyp if hyp is not None else self.hyp
num_components = int(f.shape[0] / 2)
subbands, modulators = f[:num_components], self.link_fn(f[num_components:])
return np.atleast_2d(np.sum(subbands * modulators, axis=0)), np.atleast_2d(obs_noise_var)
# return np.atleast_2d(modulators.T @ subbands), np.atleast_2d(obs_noise_var)
@partial(jit, static_argnums=(0, 6))
def moment_match(self, y, cav_mean, cav_cov, hyp=None, power=1.0, cubature_func=None):
"""
"""
num_components = int(cav_mean.shape[0] / 2)
if cubature_func is None:
x, w = gauss_hermite(num_components, 20) # Gauss-Hermite sigma points and weights
else:
x, w = cubature_func(num_components)
subband_mean, modulator_mean = cav_mean[:num_components], self.link_fn(cav_mean[num_components:])
subband_cov, modulator_cov = cav_cov[:num_components, :num_components], cav_cov[num_components:, num_components:]
sigma_points = cholesky(modulator_cov) @ x + modulator_mean
const = power ** -0.5 * (2 * pi * hyp) ** (0.5 - 0.5 * power)
mu = (self.link_fn(sigma_points).T @ subband_mean)[:, 0]
var = hyp / power + (self.link_fn(sigma_points).T ** 2 @ np.diag(subband_cov)[..., None])[:, 0]
normpdf = const * (2 * pi * var) ** -0.5 * np.exp(-0.5 * (y - mu) ** 2 / var)
Z = np.sum(w * normpdf)
Zinv = 1. / (Z + 1e-8)
lZ = np.log(Z + 1e-8)
dZ1 = np.sum(w * self.link_fn(sigma_points) * (y - mu) / var * normpdf, axis=-1)
dZ2 = np.sum(w * (sigma_points - modulator_mean) * np.diag(modulator_cov)[..., None] ** -1 * normpdf, axis=-1)
dlZ = Zinv * np.block([dZ1, dZ2])
d2Z1 = np.sum(w * self.link_fn(sigma_points) ** 2 * (
((y - mu) / var) ** 2
- var ** -1
) * normpdf, axis=-1)
d2Z2 = np.sum(w * (
((sigma_points - modulator_mean) * np.diag(modulator_cov)[..., None] ** -1) ** 2
- np.diag(modulator_cov)[..., None] ** -1
) * normpdf, axis=-1)
d2lZ = np.diag(-dlZ ** 2 + Zinv * np.block([d2Z1, d2Z2]))
id2lZ = inv(ensure_positive_precision(-d2lZ) - 1e-10 * np.eye(d2lZ.shape[0]))
site_mean = cav_mean + id2lZ @ dlZ[..., None] # approx. likelihood (site) mean (see Rasmussen & Williams p75)
site_cov = power * (-cav_cov + id2lZ) # approx. likelihood (site) variance
return lZ, site_mean, site_cov
@partial(jit, static_argnums=0)
def analytical_linearisation(self, m, sigma=None, hyp=None):
"""
"""
obs_noise_var = hyp if hyp is not None else self.hyp
num_components = int(m.shape[0] / 2)
subbands, modulators = m[:num_components], self.link_fn(m[num_components:])
Jf = np.block([[modulators], [subbands * self.dlink_fn(m[num_components:])]])
Jsigma = np.array([[np.sqrt(obs_noise_var)]])
return np.atleast_2d(Jf).T, np.atleast_2d(Jsigma).T
@partial(jit, static_argnums=(0, 4))
def statistical_linear_regression(self, cav_mean, cav_cov, hyp=None, cubature_func=None):
"""
This gives the same result as above - delete
"""
num_components = int(cav_mean.shape[0] / 2)
if cubature_func is None:
x, w = gauss_hermite(num_components, 20) # Gauss-Hermite sigma points and weights
else:
x, w = cubature_func(num_components)
subband_mean, modulator_mean = cav_mean[:num_components], self.link_fn(cav_mean[num_components:])
subband_cov, modulator_cov = cav_cov[:num_components, :num_components], cav_cov[num_components:,
num_components:]
sigma_points = cholesky(modulator_cov) @ x + modulator_mean
lik_expectation, lik_covariance = (self.link_fn(sigma_points).T @ subband_mean).T, hyp
# Compute zₙ via cubature:
# muₙ = ∫ E[yₙ|fₙ] 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ E[yₙ|fsigᵢ]
mu = np.sum(
w * lik_expectation, axis=-1
)[:, None]
# Compute variance S via cubature:
# S = ∫ [(E[yₙ|fₙ]-zₙ) (E[yₙ|fₙ]-zₙ)' + Cov[yₙ|fₙ]] 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ [(E[yₙ|fsigᵢ]-zₙ) (E[yₙ|fsigᵢ]-zₙ)' + Cov[yₙ|fₙ]]
S = np.sum(
w * ((lik_expectation - mu) * (lik_expectation - mu) + lik_covariance), axis=-1
)[:, None]
# Compute cross covariance C via cubature:
# C = ∫ (fₙ-mₙ) (E[yₙ|fₙ]-zₙ)' 𝓝(fₙ|mₙ,vₙ) dfₙ
# ≈ ∑ᵢ wᵢ (fsigᵢ -mₙ) (E[yₙ|fsigᵢ]-zₙ)'
C = np.sum(
w * np.block([[self.link_fn(sigma_points) * np.diag(subband_cov)[..., None]],
[sigma_points - modulator_mean]]) * (lik_expectation - mu), axis=-1
)[:, None]
# Compute derivative of mu via cubature:
omega = np.sum(
w * np.block([[self.link_fn(sigma_points)],
[np.diag(modulator_cov)[..., None] ** -1 * (sigma_points - modulator_mean) * lik_expectation]]), axis=-1
)[None, :]
return mu, S, C, omega
@partial(jit, static_argnums=(0, 5))
def variational_expectation(self, y, post_mean, post_cov, hyp=None, cubature_func=None):
"""
"""
num_components = int(post_mean.shape[0] / 2)
if cubature_func is None:
x, w = gauss_hermite(num_components, 20) # Gauss-Hermite sigma points and weights
else:
x, w = cubature_func(num_components)
subband_mean, modulator_mean = post_mean[:num_components], self.link_fn(post_mean[num_components:])
subband_cov, modulator_cov = post_cov[:num_components, :num_components], post_cov[num_components:,
num_components:]
sigma_points = cholesky(modulator_cov) @ x + modulator_mean
modulator_var = np.diag(subband_cov)[..., None]
mu = (self.link_fn(sigma_points).T @ subband_mean)[:, 0]
lognormpdf = -0.5 * np.log(2 * pi * hyp) - 0.5 * (y - mu) ** 2 / hyp
const = -0.5 / hyp * (self.link_fn(sigma_points).T ** 2 @ modulator_var)[:, 0]
exp_log_lik = np.sum(w * (lognormpdf + const))
dE1 = np.sum(w * self.link_fn(sigma_points) * (y - mu) / hyp, axis=-1)
dE2 = np.sum(w * (sigma_points - modulator_mean) * modulator_var ** -1
* (lognormpdf + const), axis=-1)
dE_dm = np.block([dE1, dE2])[..., None]
d2E1 = np.sum(w * - 0.5 * self.link_fn(sigma_points) ** 2 / hyp, axis=-1)
d2E2 = np.sum(w * 0.5 * (
((sigma_points - modulator_mean) * modulator_var ** -1) ** 2
- modulator_var ** -1
) * (lognormpdf + const), axis=-1)
dE_dv = np.diag(np.block([d2E1, d2E2]))
return exp_log_lik, dE_dm, dE_dv
@partial(jit, static_argnums=0)
def analytical_linearisation(self, m, sigma=None, hyp=None):
"""
Compute the Jacobian of the state space observation model w.r.t. the
function fₙ and the noise term σₙ.
"""
num_components = int(m.shape[0] / 2)
Jf = np.block([[self.link_fn(m[num_components:])], [m[:num_components] * self.dlink_fn(m[num_components:])]]).T
Jsigma = np.array([[hyp ** 0.5]])
        return Jf, Jsigma
# Copyright 2011 VMware, Inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import inspect
import os
import random
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_log import log as logging
from oslo_messaging import server as rpc_server
from oslo_service import loopingcall
from oslo_service import service as common_service
from oslo_utils import excutils
from oslo_utils import importutils
from neutron.common import config
from neutron.common import rpc as n_rpc
from neutron import context
from neutron.db import api as session
from neutron.i18n import _LE, _LI
from neutron import manager
from neutron import wsgi
service_opts = [
cfg.IntOpt('periodic_interval',
default=40,
help=_('Seconds between running periodic tasks')),
cfg.IntOpt('api_workers',
help=_('Number of separate API worker processes for service. '
'If not specified, the default is equal to the number '
'of CPUs available for best performance.')),
cfg.IntOpt('rpc_workers',
default=0,
help=_('Number of RPC worker processes for service')),
cfg.IntOpt('periodic_fuzzy_delay',
default=5,
help=_('Range of seconds to randomly delay when starting the '
'periodic task scheduler to reduce stampeding. '
'(Disable by setting to 0)')),
]
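The periodic_fuzzy_delay option translates into a one-off random initial delay before the periodic task scheduler starts. A minimal sketch of that behaviour (the helper name is illustrative, not part of Neutron):

```python
import random

def initial_fuzzy_delay(periodic_fuzzy_delay):
    # Stagger the first run of the periodic task scheduler across workers
    # to reduce stampeding; a value of 0 disables the fuzz entirely.
    if periodic_fuzzy_delay:
        return random.randint(0, periodic_fuzzy_delay)
    return 0
```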
CONF = cfg.CONF
CONF.register_opts(service_opts)
LOG = logging.getLogger(__name__)
class WsgiService(object):
"""Base class for WSGI based services.
For each api you define, you must also define these flags:
:<api>_listen: The address on which to listen
:<api>_listen_port: The port on which to listen
"""
def __init__(self, app_name):
self.app_name = app_name
self.wsgi_app = None
def start(self):
self.wsgi_app = _run_wsgi(self.app_name)
def wait(self):
self.wsgi_app.wait()
class NeutronApiService(WsgiService):
"""Class for neutron-api service."""
@classmethod
def create(cls, app_name='neutron'):
# Setup logging early, supplying both the CLI options and the
# configuration mapping from the config file
# We only update the conf dict for the verbose and debug
# flags. Everything else must be set up in the conf file...
# Log the options used when starting if we're in debug mode...
config.setup_logging()
service = cls(app_name)
return service
def serve_wsgi(cls):
try:
service = cls.create()
service.start()
except Exception:
with excutils.save_and_reraise_exception():
LOG.exception(_LE('Unrecoverable error: please check log '
'for details.'))
return service
class RpcWorker(common_service.ServiceBase):
"""Wraps a worker to be handled by ProcessLauncher"""
def __init__(self, plugin):
self._plugin = plugin
self._servers = []
def start(self):
self._servers = self._plugin.start_rpc_listeners()
def wait(self):
try:
self._wait()
except Exception:
LOG.exception(_LE('done with wait'))
raise
def _wait(self):
LOG.debug('calling RpcWorker wait()')
for server in self._servers:
if isinstance(server, rpc_server.MessageHandlingServer):
LOG.debug('calling wait on %s', server)
server.wait()
else:
LOG.debug('NOT calling wait on %s', server)
LOG.debug('returning from RpcWorker wait()')
def stop(self):
LOG.debug('calling RpcWorker stop()')
for server in self._servers:
if isinstance(server, rpc_server.MessageHandlingServer):
LOG.debug('calling stop on %s', server)
server.stop()
@staticmethod
def reset():
config.reset_service()
def serve_rpc():
plugin = manager.NeutronManager.get_plugin()
# If 0 < rpc_workers then start_rpc_listeners would be called in a
# subprocess and we cannot simply catch the NotImplementedError. It is
# simpler to check this up front by testing whether the plugin supports
# multiple RPC workers.
if not plugin.rpc_workers_supported():
LOG.debug("Active plugin doesn't implement start_rpc_listeners")
if 0 < cfg.CONF.rpc_workers:
LOG.error(_LE("'rpc_workers = %d' ignored because "
"start_rpc_listeners is not implemented."),
cfg.CONF.rpc_workers)
raise NotImplementedError()
try:
rpc = RpcWorker(plugin)
if cfg.CONF.rpc_workers < 1:
LOG.debug('starting rpc directly, workers=%s',
cfg.CONF.rpc_workers)
rpc.start()
return rpc
else:
# dispose the whole pool before os.fork, otherwise there will
# be shared DB connections in child processes which may cause
# DB errors.
LOG.debug('using launcher for rpc, workers=%s',
cfg.CONF.rpc_workers)
session.dispose()
launcher = common_service.ProcessLauncher(cfg.CONF,
wait_interval=1.0)
launcher.launch_service(rpc, workers=cfg.CONF.rpc_workers)
return launcher
except Exception:
with excutils.save_and_reraise_exception():
LOG.exception(_LE('Unrecoverable error: please check log for '
'details.'))
def _get_api_workers():
workers = cfg.CONF.api_workers
if workers is None:
workers = processutils.get_worker_count()
return workers
def _run_wsgi(app_name):
app = config.load_paste_app(app_name)
if not app:
LOG.error(_LE('No known API applications configured.'))
return
server = wsgi.Server("Neutron")
server.start(app, cfg.CONF.bind_port, cfg.CONF.bind_host,
workers=_get_api_workers())
LOG.info(_LI("Neutron service started, listening on %(host)s:%(port)s"),
{'host': cfg.CONF.bind_host, 'port': cfg.CONF.bind_port})
return server
class Service(n_rpc.Service):
"""Service object for binaries running on hosts.
A service takes a manager and enables rpc by listening to queues based
on topic. It also periodically runs tasks on the manager.
"""
def __init__(self, host, binary, topic, manager, report_interval=None,
periodic_interval=None, periodic_fuzzy_delay=None,
*args, **kwargs):
self.binary = binary
self.manager_class_name = manager
manager_class = importutils.import_class(self.manager_class_name)
self.manager = manager_class(host=host, *args, **kwargs)
self.report_interval = report_interval
self.periodic_interval = periodic_interval
self.periodic_fuzzy_delay = periodic_fuzzy_delay
self.saved_args, self.saved_kwargs = args, kwargs
self.timers = []
super(Service, self).__init__(host, topic, manager=self.manager)
def start(self):
self.manager.init_host()
super(Service, self).start()
if self.report_interval:
pulse = loopingcall.FixedIntervalLoopingCall(self.report_state)
pulse.start(interval=self.report_interval,
initial_delay=self.report_interval)
self.timers.append(pulse)
if self.periodic_interval:
if self.periodic_fuzzy_delay:
initial_delay = random.randint(0, self.periodic_fuzzy_delay)
else:
initial_delay = None
periodic = loopingcall.FixedIntervalLoopingCall(
self.periodic_tasks)
periodic.start(interval=self.periodic_interval,
initial_delay=initial_delay)
self.timers.append(periodic)
self.manager.after_start()
def __getattr__(self, key):
manager = self.__dict__.get('manager', None)
return getattr(manager, key)
@classmethod
def create(cls, host=None, binary=None, topic=None, manager=None,
report_interval=None, periodic_interval=None,
periodic_fuzzy_delay=None):
"""Instantiates class and passes back application object.
:param host: defaults to CONF.host
:param binary: defaults to basename of executable
:param topic: defaults to bin_name - 'neutron-' part
:param manager: defaults to CONF.<topic>_manager
:param report_interval: defaults to CONF.report_interval
:param periodic_interval: defaults to CONF.periodic_interval
:param periodic_fuzzy_delay: defaults to CONF.periodic_fuzzy_delay
"""
if not host:
host = CONF.host
if not binary:
binary = os.path.basename(inspect.stack()[-1][1])
if not topic:
topic = binary.rpartition('neutron-')[2]
topic = topic.replace("-", "_")
if not manager:
manager = CONF.get('%s_manager' % topic, None)
if report_interval is None:
report_interval = CONF.report_interval
if periodic_interval is None:
periodic_interval = CONF.periodic_interval
if periodic_fuzzy_delay is None:
periodic_fuzzy_delay = CONF.periodic_fuzzy_delay
service_obj = cls(host, binary, topic, manager,
report_interval=report_interval,
periodic_interval=periodic_interval,
periodic_fuzzy_delay=periodic_fuzzy_delay)
return service_obj
def kill(self):
"""Destroy the service object."""
self.stop()
def stop(self):
super(Service, self).stop()
for x in self.timers:
try:
x.stop()
except Exception:
LOG.exception(_LE("Exception occurred while stopping timer"))
self.timers = []
def wait(self):
super(Service, self).wait()
for x in self.timers:
try:
x.wait()
except Exception:
LOG.exception(_LE("Exception occurred while waiting for timer"))
def reset(self):
config.reset_service()
def periodic_tasks(self, raise_on_error=False):
"""Tasks to be run at a periodic interval."""
ctxt = context.get_admin_context()
self.manager.periodic_tasks(ctxt, raise_on_error=raise_on_error)
def report_state(self):
"""Update the state of this service."""
# Todo(gongysh) report state to neutron server
pass | unknown | codeparrot/codeparrot-clean | ||
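The `serve_rpc()` function above picks between two strategies: with fewer than one configured worker the `RpcWorker` is started in-process, otherwise the database pool is disposed and a `ProcessLauncher` forks the configured number of workers. The sketch below isolates just that dispatch decision; it is not Neutron code, and `dispatch`, `start_inline`, and `launch` are hypothetical stand-ins for `RpcWorker.start()` and `ProcessLauncher.launch_service()`.

```python
def dispatch(rpc_workers, start_inline, launch):
    """Return the result of the chosen strategy.

    Mirrors serve_rpc()'s branch: fewer than one worker runs the RPC
    service directly in this process; otherwise a launcher spawns
    `rpc_workers` child processes.
    """
    if rpc_workers < 1:
        return start_inline()
    return launch(rpc_workers)

# Example: 0 workers runs inline, 4 workers goes through the launcher.
assert dispatch(0, lambda: "inline", lambda n: "launcher:%d" % n) == "inline"
assert dispatch(4, lambda: "inline", lambda n: "launcher:%d" % n) == "launcher:4"
```

The in-process path returns the worker itself, while the launcher path returns the launcher object, matching the two `return` statements in `serve_rpc()`.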
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
test_limits
----------------------------------
Functional tests for `shade` limits method
"""
from shade.tests.functional import base
class TestUsage(base.BaseFunctionalTestCase):
def test_get_our_compute_limits(self):
'''Test quotas functionality'''
limits = self.user_cloud.get_compute_limits()
self.assertIsNotNone(limits)
self.assertTrue(hasattr(limits, 'max_server_meta'))
# Test normalize limits
self.assertFalse(hasattr(limits, 'maxImageMeta'))
def test_get_other_compute_limits(self):
'''Test quotas functionality'''
limits = self.operator_cloud.get_compute_limits('demo')
self.assertIsNotNone(limits)
self.assertTrue(hasattr(limits, 'max_server_meta'))
# Test normalize limits
self.assertFalse(hasattr(limits, 'maxImageMeta'))
def test_get_our_volume_limits(self):
'''Test quotas functionality'''
limits = self.user_cloud.get_volume_limits()
self.assertIsNotNone(limits)
self.assertFalse(hasattr(limits, 'maxTotalVolumes'))
def test_get_other_volume_limits(self):
'''Test quotas functionality'''
limits = self.operator_cloud.get_volume_limits('demo')
self.assertFalse(hasattr(limits, 'maxTotalVolumes')) | unknown | codeparrot/codeparrot-clean | ||
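The limits tests above assert that camelCase attribute names returned by the compute and volume APIs (e.g. `maxImageMeta`, `maxTotalVolumes`) are normalized into snake_case (`max_server_meta`). The helper below is an illustrative sketch of what that normalization achieves, not shade's actual implementation.

```python
import re

def normalize_key(key):
    """Convert a camelCase limit name such as 'maxServerMeta' into the
    snake_case form ('max_server_meta') the tests above expect."""
    # Insert an underscore before every uppercase letter (except at the
    # start of the string), then lowercase the whole name.
    return re.sub(r'(?<!^)(?=[A-Z])', '_', key).lower()

assert normalize_key('maxServerMeta') == 'max_server_meta'
assert normalize_key('maxImageMeta') == 'max_image_meta'
```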
# coding: utf-8
from fabric.api import env, local, cd, run, sudo
from fabric.operations import put
env.use_ssh_config = True
env.keepalive = 60
def tarball():
"""Create tarball for june."""
local('make static')
local('python setup.py sdist --formats=gztar', capture=False)
def upload():
"""Upload tarball to the server."""
dist = local('python setup.py --fullname', capture=True).strip()
run('rm -f ~/tmp/june.tar.gz')
sudo('rm -fr ~/tmp/june')
run('mkdir -p ~/tmp/june')
put('dist/%s.tar.gz' % dist, '~/tmp/june.tar.gz')
with cd('~/tmp/june'):
run('tar xzf ~/tmp/june.tar.gz')
def install():
"""Install june package."""
dist = local('python setup.py --fullname', capture=True).strip()
with cd('~/tmp/june/%s' % dist):
sudo('/srv/venv/june/bin/pip install .')
def restart():
"""Restart remote server"""
run('supervisorctl pid june | xargs kill -HUP')
def bootstrap():
dist = local('python setup.py --fullname', capture=True).strip()
tarball()
upload()
# prepare
sudo('mkdir -p /var/log/june')
sudo('mkdir -p /srv/venv')
sudo('virtualenv /srv/venv/june')
sudo('mkdir -p /www/june/public/static')
install()
# install patch
sudo('/srv/venv/june/bin/pip install MySQL-python redis')
with cd('~/tmp/june/%s' % dist):
sudo('mv etc/nginx.conf /etc/nginx/conf.d/june.conf')
sudo('mv etc/supervisord.conf /etc/supervisor/conf.d/june.conf')
sudo('rm -fr /www/june/etc')
sudo('mv etc /www/june/')
sudo('rm -fr /www/june/public')
sudo('mv june/public /www/june/')
sudo('mv wsgi.py /www/june/') | unknown | codeparrot/codeparrot-clean | ||
#!/usr/bin/env python
#
# __COPYRIGHT__
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
__revision__ = "__FILE__ __REVISION__ __DATE__ __DEVELOPER__"
"""
Verify that we re-run LaTeX after running BibTeX in response to
changes in a .bib file.
Thanks to Rob Managan for the patch that fixed this, and to Joel B. Mohler
for code clean up and packaging the test case.
"""
import TestSCons
test = TestSCons.TestSCons()
pdflatex = test.where_is('pdflatex')
if not pdflatex:
test.skip_test("Could not find pdflatex; skipping test(s).\n")
test.write(['SConstruct'], """\
import os
env = Environment(tools=['pdftex', 'tex'])
env.PDF( 'bibtest.tex' )
""")
test.write(['bibtest.tex'], r"""
\documentclass{article}
\begin{document}
Learn about cool math in \cite{koblitz:elliptic_curves}.
\bibliographystyle{alpha}
\bibliography{sources}
\end{document}
""")
sources_bib_content = r"""
@book{koblitz:elliptic_curves,
author = "Neal Koblitz",
title = "Elliptic Curves and Modular Forms",
year = "%s",
publisher = "Springer-Verlag New York Inc."
}
"""
test.write('sources.bib', sources_bib_content % '1981')
test.run()
pdf_output_1 = test.read('bibtest.pdf')
test.write('sources.bib', sources_bib_content % '1982')
test.run()
pdf_output_2 = test.read('bibtest.pdf')
# If the PDF file is the same as it was previously, then it didn't
# pick up the change from 1981 to 1982, so fail.
test.fail_test(pdf_output_1 == pdf_output_2)
# Double-check: clean everything and rebuild from scratch, which
# should force the PDF file to be the 1982 version.
test.run(arguments = '-c')
test.run()
pdf_output_3 = test.read('bibtest.pdf')
# If the PDF file is now different than the second run, modulo the
# creation timestamp and the ID and some other PDF garp, then something
# else odd has happened, so fail.
pdf_output_2 = test.normalize_pdf(pdf_output_2)
pdf_output_3 = test.normalize_pdf(pdf_output_3)
if pdf_output_2 != pdf_output_3:
import sys
test.write('bibtest.normalized.2.pdf', pdf_output_2)
test.write('bibtest.normalized.3.pdf', pdf_output_3)
sys.stdout.write("***** 2 and 3 are different!\n")
sys.stdout.write(test.diff_substr(pdf_output_2, pdf_output_3, 80, 80) + '\n')
sys.stdout.write("Output from run 2:\n")
sys.stdout.write(test.stdout(-2) + '\n')
sys.stdout.write("Output from run 3:\n")
sys.stdout.write(test.stdout() + '\n')
sys.stdout.flush()
test.fail_test()
test.pass_test()
# Local Variables:
# tab-width:4
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=4 shiftwidth=4: | unknown | codeparrot/codeparrot-clean | ||
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.kafka.common.utils;
import java.io.BufferedInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
/**
* ChunkedBytesStream is a copy of {@link BufferedInputStream} with the following differences:
 * - Unlike {@link java.io.BufferedInputStream#skip(long)}, this class can be configured not to push skip() to the
 * underlying input stream. We may want to avoid pushing skip() down because its implementation may be inefficient,
 * e.g. ZstdInputStream allocates a new buffer from the buffer pool on every skip call.
* - Unlike {@link java.io.BufferedInputStream}, which allocates an intermediate buffer, this uses a buffer supplier to
* create the intermediate buffer.
* - Unlike {@link java.io.BufferedInputStream}, this implementation does not support {@link InputStream#mark(int)} and
* {@link InputStream#markSupported()} will return false.
* <p>
* Note that:
* - this class is not thread safe and shouldn't be used in scenarios where multiple threads access this.
* - the implementation of this class is performance sensitive. Minor changes such as usage of ByteBuffer instead of byte[]
* can significantly impact performance, hence, proceed with caution.
*/
public class ChunkedBytesStream extends FilterInputStream {
/**
* Supplies the ByteBuffer which is used as intermediate buffer to store the chunk of output data.
*/
private final BufferSupplier bufferSupplier;
/**
* Intermediate buffer to store the chunk of output data. The ChunkedBytesStream is considered closed if
* this buffer is null.
*/
private byte[] intermediateBuf;
/**
* The index one greater than the index of the last valid byte in
* the buffer.
* This value is always in the range <code>0</code> through <code>intermediateBuf.length</code>;
* elements <code>intermediateBuf[0]</code> through <code>intermediateBuf[count-1]
* </code>contain buffered input data obtained
* from the underlying input stream.
*/
protected int count = 0;
/**
* The current position in the buffer. This is the index of the next
* character to be read from the <code>buf</code> array.
* <p>
* This value is always in the range <code>0</code>
* through <code>count</code>. If it is less
* than <code>count</code>, then <code>intermediateBuf[pos]</code>
* is the next byte to be supplied as input;
* if it is equal to <code>count</code>, then
* the next <code>read</code> or <code>skip</code>
* operation will require more bytes to be
* read from the contained input stream.
*/
protected int pos = 0;
/**
* Reference for the intermediate buffer. This reference is only kept for releasing the buffer from the
* buffer supplier.
*/
private final ByteBuffer intermediateBufRef;
/**
* If this flag is true, we will delegate the responsibility of skipping to the
* sourceStream. This is an alternative to reading the data from source stream, storing in an intermediate buffer and
* skipping the values using this implementation.
*/
private final boolean delegateSkipToSourceStream;
public ChunkedBytesStream(InputStream in, BufferSupplier bufferSupplier, int intermediateBufSize, boolean delegateSkipToSourceStream) {
super(in);
this.bufferSupplier = bufferSupplier;
intermediateBufRef = bufferSupplier.get(intermediateBufSize);
if (!intermediateBufRef.hasArray() || (intermediateBufRef.arrayOffset() != 0)) {
throw new IllegalArgumentException("provided ByteBuffer lacks array or has non-zero arrayOffset");
}
intermediateBuf = intermediateBufRef.array();
this.delegateSkipToSourceStream = delegateSkipToSourceStream;
}
/**
* Check to make sure that buffer has not been nulled out due to
* close; if not return it;
*/
private byte[] getBufIfOpen() throws IOException {
byte[] buffer = intermediateBuf;
if (buffer == null)
throw new IOException("Stream closed");
return buffer;
}
/**
* See the general contract of the <code>read</code>
* method of <code>InputStream</code>.
*
* @return the next byte of data, or <code>-1</code> if the end of the
* stream is reached.
* @throws IOException if this input stream has been closed by
* invoking its {@link #close()} method,
* or an I/O error occurs.
* @see BufferedInputStream#read()
*/
@Override
public int read() throws IOException {
if (pos >= count) {
fill();
if (pos >= count)
return -1;
}
return getBufIfOpen()[pos++] & 0xff;
}
/**
* Check to make sure that underlying input stream has not been
* nulled out due to close; if not return it;
*/
InputStream getInIfOpen() throws IOException {
InputStream input = in;
if (input == null)
throw new IOException("Stream closed");
return input;
}
/**
* Fills the intermediate buffer with more data. The amount of new data read is equal to the remaining empty space
* in the buffer. For optimal performance, read as much data as possible in this call.
 * This method also assumes that all buffered data has already been consumed, i.e. pos >= count.
*/
int fill() throws IOException {
byte[] buffer = getBufIfOpen();
pos = 0;
count = pos;
int n = getInIfOpen().read(buffer, pos, buffer.length - pos);
if (n > 0)
count = n + pos;
return n;
}
@Override
public void close() throws IOException {
byte[] mybuf = intermediateBuf;
intermediateBuf = null;
InputStream input = in;
in = null;
if (mybuf != null)
bufferSupplier.release(intermediateBufRef);
if (input != null)
input.close();
}
/**
* Reads bytes from this byte-input stream into the specified byte array,
* starting at the given offset.
*
* <p> This method implements the general contract of the corresponding
* <code>{@link InputStream#read(byte[], int, int) read}</code> method of
* the <code>{@link InputStream}</code> class. As an additional
* convenience, it attempts to read as many bytes as possible by repeatedly
* invoking the <code>read</code> method of the underlying stream. This
* iterated <code>read</code> continues until one of the following
* conditions becomes true: <ul>
*
* <li> The specified number of bytes have been read,
*
* <li> The <code>read</code> method of the underlying stream returns
* <code>-1</code>, indicating end-of-file, or
*
* <li> The <code>available</code> method of the underlying stream
* returns zero, indicating that further input requests would block.
*
* </ul> If the first <code>read</code> on the underlying stream returns
* <code>-1</code> to indicate end-of-file then this method returns
* <code>-1</code>. Otherwise this method returns the number of bytes
* actually read.
*
* <p> Subclasses of this class are encouraged, but not required, to
* attempt to read as many bytes as possible in the same fashion.
*
* @param b destination buffer.
* @param off offset at which to start storing bytes.
* @param len maximum number of bytes to read.
* @return the number of bytes read, or <code>-1</code> if the end of
* the stream has been reached.
* @throws IOException if this input stream has been closed by
* invoking its {@link #close()} method,
* or an I/O error occurs.
* @see BufferedInputStream#read(byte[], int, int)
*/
public int read(byte[] b, int off, int len) throws IOException {
getBufIfOpen(); // Check for closed stream
if ((off | len | (off + len) | (b.length - (off + len))) < 0) {
throw new IndexOutOfBoundsException();
} else if (len == 0) {
return 0;
}
int n = 0;
for (; ; ) {
int nread = read1(b, off + n, len - n);
if (nread <= 0)
return (n == 0) ? nread : n;
n += nread;
if (n >= len)
return n;
// if not closed but no bytes available, return
InputStream input = in;
if (input != null && input.available() <= 0)
return n;
}
}
/**
* Read characters into a portion of an array, reading from the underlying
* stream at most once if necessary.
* <p>
* Note - Implementation copied from {@link BufferedInputStream}. Slight modification done to remove
* the mark position.
*/
private int read1(byte[] b, int off, int len) throws IOException {
int avail = count - pos;
if (avail <= 0) {
/* If the requested length is at least as large as the buffer, and
if there is no mark/reset activity, do not bother to copy the
bytes into the local buffer. In this way buffered streams will
cascade harmlessly. */
if (len >= getBufIfOpen().length) {
return getInIfOpen().read(b, off, len);
}
fill();
avail = count - pos;
if (avail <= 0) return -1;
}
int cnt = Math.min(avail, len);
System.arraycopy(getBufIfOpen(), pos, b, off, cnt);
pos += cnt;
return cnt;
}
/**
 * Skips over and discards exactly {@code toSkip} bytes of data from this input stream.
 * If {@code toSkip} is zero or negative, no bytes are skipped.
* <p>
* This method blocks until the requested number of bytes has been skipped, end of file is reached, or an
* exception is thrown.
 * <p> If end of stream is reached before the stream is at the desired position, then the number of bytes skipped
 * up to that point is returned.
* <p> If an I/O error occurs, then the input stream may be in an inconsistent state. It is strongly recommended that the
* stream be promptly closed if an I/O error occurs.
* <p>
* This method first skips and discards bytes in the intermediate buffer.
* After that, depending on the value of {@link #delegateSkipToSourceStream}, it either delegates the skipping of
* bytes to the sourceStream or it reads the data from input stream in chunks, copies the data into intermediate
* buffer and skips it.
* <p>
* Starting JDK 12, a new method was introduced in InputStream, skipNBytes which has a similar behaviour as
* this method.
*
* @param toSkip the number of bytes to be skipped.
* @return the actual number of bytes skipped which might be zero.
* @throws IOException if this input stream has been closed by invoking its {@link #close()} method,
* {@code in.skip(n)} throws an IOException, or an I/O error occurs.
*/
@Override
public long skip(long toSkip) throws IOException {
getBufIfOpen(); // Check for closed stream
if (toSkip <= 0) {
return 0;
}
long remaining = toSkip;
// Skip bytes stored in intermediate buffer first
int avail = count - pos;
int bytesSkipped = (int) Math.min(avail, remaining);
pos += bytesSkipped;
remaining -= bytesSkipped;
while (remaining > 0) {
if (delegateSkipToSourceStream) {
// Delegate skipping the remaining bytes to sourceStream's skip().
long delegateBytesSkipped = getInIfOpen().skip(remaining);
if (delegateBytesSkipped == 0) {
// read one byte to check for EOS
if (read() == -1) {
break;
}
// one byte read so decrement number to skip
remaining--;
} else if (delegateBytesSkipped > remaining || delegateBytesSkipped < 0) { // skipped negative or too many bytes
throw new IOException("Unable to skip exactly");
}
remaining -= delegateBytesSkipped;
} else {
// skip from intermediate buffer, filling it first (if required)
if (pos >= count) {
fill();
// if we don't have data in intermediate buffer after fill, then stop skipping
if (pos >= count)
break;
}
avail = count - pos;
bytesSkipped = (int) Math.min(avail, remaining);
pos += bytesSkipped;
remaining -= bytesSkipped;
}
}
return toSkip - remaining;
}
// visible for testing
public InputStream sourceStream() {
return in;
}
/**
* Returns an estimate of the number of bytes that can be read (or
* skipped over) from this input stream without blocking by the next
* invocation of a method for this input stream. The next invocation might be
* the same thread or another thread. A single read or skip of this
* many bytes will not block, but may read or skip fewer bytes.
* <p>
* This method returns the sum of the number of bytes remaining to be read in
* the buffer (<code>count - pos</code>) and the result of calling the
* {@link java.io.FilterInputStream#in in}.available().
*
* @return an estimate of the number of bytes that can be read (or skipped
* over) from this input stream without blocking.
* @throws IOException if this input stream has been closed by
* invoking its {@link #close()} method,
* or an I/O error occurs.
* @see BufferedInputStream#available()
*/
@Override
public synchronized int available() throws IOException {
int n = count - pos;
int avail = getInIfOpen().available();
return n > (Integer.MAX_VALUE - avail)
? Integer.MAX_VALUE
: n + avail;
}
} | java | github | https://github.com/apache/kafka | clients/src/main/java/org/apache/kafka/common/utils/ChunkedBytesStream.java |
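The non-delegating branch of `ChunkedBytesStream.skip()` above skips bytes by repeatedly filling a fixed-size intermediate buffer and discarding its contents until the target is reached or end of stream is hit. The Python sketch below illustrates that loop shape; it is a simplified stand-in, not the Kafka implementation, and `chunked_skip` is a hypothetical name.

```python
import io

def chunked_skip(stream, to_skip, chunk_size=4):
    """Skip up to to_skip bytes by reading through a fixed-size
    intermediate buffer; return the number of bytes actually skipped.

    If end of stream is reached early, the partial count is returned,
    matching the documented contract of ChunkedBytesStream.skip().
    """
    remaining = to_skip
    while remaining > 0:
        # Read at most one buffer's worth and discard it.
        chunk = stream.read(min(chunk_size, remaining))
        if not chunk:  # end of stream before the desired position
            break
        remaining -= len(chunk)
    return to_skip - remaining

src = io.BytesIO(b"abcdefghij")
assert chunked_skip(src, 7) == 7
assert src.read() == b"hij"          # position advanced past the skipped bytes
assert chunked_skip(io.BytesIO(b"ab"), 5) == 2  # EOS: partial skip reported
```

The Java version additionally drains any bytes already sitting in the intermediate buffer before entering the loop, and can instead delegate to the source stream's own `skip()` when `delegateSkipToSourceStream` is set.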
'''Test (selected) IDLE Edit menu items.
Edit modules have their own test files
'''
from test.support import requires
requires('gui')
import tkinter as tk
from tkinter import ttk
import unittest
from idlelib import pyshell
class PasteTest(unittest.TestCase):
'''Test pasting into widgets that allow pasting.
On X11, replacing selections requires tk fix.
'''
@classmethod
def setUpClass(cls):
cls.root = root = tk.Tk()
cls.root.withdraw()
pyshell.fix_x11_paste(root)
cls.text = tk.Text(root)
cls.entry = tk.Entry(root)
cls.tentry = ttk.Entry(root)
cls.spin = tk.Spinbox(root)
root.clipboard_clear()
root.clipboard_append('two')
@classmethod
def tearDownClass(cls):
del cls.text, cls.entry, cls.tentry
cls.root.clipboard_clear()
cls.root.update_idletasks()
cls.root.destroy()
del cls.root
def test_paste_text(self):
"Test pasting into text with and without a selection."
text = self.text
for tag, ans in ('', 'onetwo\n'), ('sel', 'two\n'):
with self.subTest(tag=tag, ans=ans):
text.delete('1.0', 'end')
text.insert('1.0', 'one', tag)
text.event_generate('<<Paste>>')
self.assertEqual(text.get('1.0', 'end'), ans)
def test_paste_entry(self):
"Test pasting into an entry with and without a selection."
# A generated <<Paste>> fails for a tk entry unless an empty selection
# range is set for the 'no selection' case. The live widget works fine.
for entry in self.entry, self.tentry:
for end, ans in (0, 'onetwo'), ('end', 'two'):
with self.subTest(entry=entry, end=end, ans=ans):
entry.delete(0, 'end')
entry.insert(0, 'one')
entry.select_range(0, end)
entry.event_generate('<<Paste>>')
self.assertEqual(entry.get(), ans)
def test_paste_spin(self):
"Test pasting into a spinbox with and without a selection."
# See note above for entry.
spin = self.spin
for end, ans in (0, 'onetwo'), ('end', 'two'):
with self.subTest(end=end, ans=ans):
spin.delete(0, 'end')
spin.insert(0, 'one')
spin.selection('range', 0, end) # see note
spin.event_generate('<<Paste>>')
self.assertEqual(spin.get(), ans)
if __name__ == '__main__':
unittest.main(verbosity=2) | python | github | https://github.com/python/cpython | Lib/idlelib/idle_test/test_editmenu.py |
# -*- coding: utf-8 -*-
###############################################################################
#
# GetWeather
# Retrieves the Yahoo Weather RSS Feed for any specified location by WOEID.
#
# Python versions 2.6, 2.7, 3.x
#
# Copyright 2014, Temboo Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
# either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
#
#
###############################################################################
from temboo.core.choreography import Choreography
from temboo.core.choreography import InputSet
from temboo.core.choreography import ResultSet
from temboo.core.choreography import ChoreographyExecution
import json
class GetWeather(Choreography):
def __init__(self, temboo_session):
"""
Create a new instance of the GetWeather Choreo. A TembooSession object, containing a valid
set of Temboo credentials, must be supplied.
"""
super(GetWeather, self).__init__(temboo_session, '/Library/Yahoo/Weather/GetWeather')
def new_input_set(self):
return GetWeatherInputSet()
def _make_result_set(self, result, path):
return GetWeatherResultSet(result, path)
def _make_execution(self, session, exec_id, path):
return GetWeatherChoreographyExecution(session, exec_id, path)
class GetWeatherInputSet(InputSet):
"""
An InputSet with methods appropriate for specifying the inputs to the GetWeather
Choreo. The InputSet object is used to specify input parameters when executing this Choreo.
"""
def set_Day(self, value):
"""
Set the value of the Day input for this Choreo. ((optional, integer) An index in the range 1 to 5 that corresponds to the forecast day you want to retrieve. Today corresponds to 1, tomorrow corresponds to 2, and so on. Defaults to 1.)
"""
super(GetWeatherInputSet, self)._set_input('Day', value)
def set_ResponseFormat(self, value):
"""
Set the value of the ResponseFormat input for this Choreo. ((optional, string) The format that the response should be in. Valid values are: xml (the default) and json.)
"""
super(GetWeatherInputSet, self)._set_input('ResponseFormat', value)
def set_Units(self, value):
"""
Set the value of the Units input for this Choreo. ((optional, string) The unit of temperature in the response. Acceptable inputs: f for Fahrenheit or c for Celsius. Defaults to f. When c is specified, all units measurements returned are changed to metric.)
"""
super(GetWeatherInputSet, self)._set_input('Units', value)
def set_WOEID(self, value):
"""
Set the value of the WOEID input for this Choreo. ((required, integer) Where On Earth ID for the desired location. This unique integer can be found by first running the GetWeatherByCoordinates Choreo.)
"""
super(GetWeatherInputSet, self)._set_input('WOEID', value)
class GetWeatherResultSet(ResultSet):
"""
A ResultSet with methods tailored to the values returned by the GetWeather Choreo.
The ResultSet object is used to retrieve the results of a Choreo execution.
"""
def getJSONFromString(self, str):
return json.loads(str)
def get_Response(self):
"""
Retrieve the value for the "Response" output from this Choreo execution. (The response from Yahoo Weather.)
"""
return self._output.get('Response', None)
def get_ConditionCode(self):
"""
Retrieve the value for the "ConditionCode" output from this Choreo execution. ((integer) A code representing the current condition.)
"""
return self._output.get('ConditionCode', None)
def get_ConditionText(self):
"""
Retrieve the value for the "ConditionText" output from this Choreo execution. ((string) The textual description for the current condition.)
"""
return self._output.get('ConditionText', None)
def get_ForecastCode(self):
"""
Retrieve the value for the "ForecastCode" output from this Choreo execution. ((integer) A code representing the forecast condition.)
"""
return self._output.get('ForecastCode', None)
def get_ForecastText(self):
"""
Retrieve the value for the "ForecastText" output from this Choreo execution. ((string) The textual description for the specified day's forecast condition.)
"""
return self._output.get('ForecastText', None)
def get_High(self):
"""
Retrieve the value for the "High" output from this Choreo execution. ((integer) The high temperature forecast for the specified day.)
"""
return self._output.get('High', None)
def get_Humidity(self):
"""
Retrieve the value for the "Humidity" output from this Choreo execution. ((decimal) The current measurement for atmospheric humidity.)
"""
return self._output.get('Humidity', None)
def get_Low(self):
"""
Retrieve the value for the "Low" output from this Choreo execution. ((integer) The low temperature forecast for the specified day.)
"""
return self._output.get('Low', None)
def get_Pressure(self):
"""
Retrieve the value for the "Pressure" output from this Choreo execution. ((decimal) The current measurement for atmospheric pressure.)
"""
return self._output.get('Pressure', None)
def get_Temperature(self):
"""
Retrieve the value for the "Temperature" output from this Choreo execution. ((integer) The current temperature.)
"""
return self._output.get('Temperature', None)
def get_Visibility(self):
"""
Retrieve the value for the "Visibility" output from this Choreo execution. ((decimal) The current measurement for visibility.)
"""
return self._output.get('Visibility', None)
class GetWeatherChoreographyExecution(ChoreographyExecution):
def _make_result_set(self, response, path):
        return GetWeatherResultSet(response, path)

| unknown | codeparrot/codeparrot-clean | |
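The getters above all follow one mechanical pattern: a thin, `None`-defaulting lookup into a shared output dictionary. A minimal standalone sketch of that accessor style (plain Python, not the Temboo classes):

```python
# Standalone sketch of the result-set accessor pattern: every getter is a
# dict lookup against self._output that falls back to None when the Choreo
# execution did not produce that output.
class ResultSet:
    def __init__(self, output):
        self._output = output

    def get_Temperature(self):
        return self._output.get('Temperature', None)

    def get_Humidity(self):
        return self._output.get('Humidity', None)

rs = ResultSet({'Temperature': 72})
print(rs.get_Temperature())  # 72
print(rs.get_Humidity())     # None
```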
export const a = "a";
export const b = "b";
export default "default";
export const usedExports = __webpack_exports_info__.usedExports;

| javascript | github | https://github.com/webpack/webpack | test/cases/chunks/inline-options/dir13/b.js
import time
import unittest
import threading
import synapse.async as s_async
import synapse.lib.scope as s_scope
from synapse.tests.common import *
class AsyncTests(SynTest):
def test_async_basics(self):
boss = s_async.Boss()
data = {}
def jobmeth(x, y=20):
return x + y
def jobdork(x, y=20):
raise Exception('hi')
def jobdone(job):
name = job[1].get('name')
data[name] = job
jid1 = s_async.jobid()
jid2 = s_async.jobid()
task1 = (jobmeth, (3,), {})
task2 = (jobdork, (3,), {})
job1 = boss.initJob(jid1, task=task1, name='job1', ondone=jobdone)
job2 = boss.initJob(jid2, task=task2, name='job2', ondone=jobdone)
self.assertEqual( job1[0], jid1 )
self.assertEqual( len(boss.jobs()), 2 )
boss._runJob(job1)
self.assertEqual( len(boss.jobs()), 1 )
boss._runJob(job2)
self.assertEqual( len(boss.jobs()), 0 )
ret1 = data.get('job1')
self.assertIsNotNone(ret1)
self.assertEqual( ret1[1]['ret'], 23 )
ret2 = data.get('job2')
self.assertIsNotNone(ret2)
self.assertEqual( ret2[1]['err'], 'Exception' )
boss.fini()
def test_async_pool_basics(self):
boss = s_async.Boss()
boss.runBossPool(3)
data = {}
def jobmeth(x, y=20):
return x + y
def jobdork(x, y=20):
raise Exception('hi')
def jobdone(job):
name = job[1].get('name')
data[name] = job
jid1 = s_async.jobid()
jid2 = s_async.jobid()
task1 = (jobmeth, (3,), {})
task2 = (jobdork, (3,), {})
job1 = boss.initJob(jid1, task=task1, name='job1', ondone=jobdone)
job2 = boss.initJob(jid2, task=task2, name='job2', ondone=jobdone)
self.assertEqual( job1[0], jid1 )
boss.wait(jid1, timeout=1)
boss.wait(jid2, timeout=1)
ret1 = data.get('job1')
self.assertIsNotNone(ret1)
self.assertEqual( ret1[1]['ret'], 23 )
ret2 = data.get('job2')
self.assertIsNotNone(ret2)
self.assertEqual( ret2[1]['err'], 'Exception' )
boss.fini()
def test_async_timeout(self):
boss = s_async.Boss()
def myjob():
time.sleep(0.2)
jid = s_async.jobid()
job = boss.initJob(jid, task=(myjob,(),{}), timeout=0.01)
boss.wait(jid)
self.assertEqual( job[1]['err'], 'HitMaxTime' )
boss.fini()
def test_async_ondone(self):
boss = s_async.Boss()
boss.runBossPool(3)
data = {}
evt = threading.Event()
def ondone(job):
data['job'] = job
evt.set()
def woot():
return 10
jid = s_async.jobid()
task = s_async.newtask(woot)
boss.initJob(jid, task=task, ondone=ondone)
self.assertTrue( evt.wait(timeout=1) )
job = data.get('job')
self.assertEqual( job[1].get('ret'), 10 )
boss.fini()
def test_async_wait_timeout(self):
def longtime():
time.sleep(0.1)
boss = s_async.Boss()
boss.runBossPool(1)
jid = s_async.jobid()
task = s_async.newtask(longtime)
boss.initJob(jid, task=task)
self.assertFalse( boss.wait(jid,timeout=0.01) )
self.assertTrue( boss.wait(jid,timeout=1) )
boss.fini()
def test_async_wait_syntimeout(self):
def longtime():
time.sleep(0.1)
boss = s_async.Boss()
boss.runBossPool(1)
jid = s_async.jobid()
task = s_async.newtask(longtime)
boss.initJob(jid, task=task)
with s_scope.enter({'syntimeout':0.01}):
self.assertFalse( boss.wait(jid) )
self.assertTrue( boss.wait(jid,timeout=1) )
boss.fini()
def test_async_sugar(self):
boss = s_async.Boss()
job = boss.initJob()
boss.done(job[0],5)
boss.wait(job[0])
self.assertEqual( job[1].get('ret'), 5 )
        boss.fini()

| unknown | codeparrot/codeparrot-clean | |
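The tests above repeatedly exercise one idea: run a task tuple on a worker, record either the return value or the exception name, then fire an `ondone` callback. A standalone sketch of that pattern (plain threading, not the synapse Boss API):

```python
# Hypothetical restatement of the job/ondone pattern the tests exercise:
# a task is a (func, args, kwargs) tuple; the worker stores 'ret' on
# success or the exception class name under 'err', then calls ondone.
import threading

def run_job(task, ondone):
    func, args, kwargs = task
    job = {}
    def worker():
        try:
            job['ret'] = func(*args, **kwargs)
        except Exception as e:
            job['err'] = type(e).__name__
        ondone(job)
    t = threading.Thread(target=worker)
    t.start()
    t.join()
    return job

done = []
run_job((lambda x, y=20: x + y, (3,), {}), done.append)
print(done[0])  # {'ret': 23}
```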
// Copyright The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package tombstones
import (
"math"
"math/rand"
"sync"
"testing"
"time"
"github.com/prometheus/common/promslog"
"github.com/stretchr/testify/require"
"go.uber.org/goleak"
"github.com/prometheus/prometheus/storage"
)
func TestMain(m *testing.M) {
goleak.VerifyTestMain(m)
}
func TestWriteAndReadbackTombstones(t *testing.T) {
tmpdir := t.TempDir()
ref := uint64(0)
stones := NewMemTombstones()
// Generate the tombstones.
for range 100 {
ref += uint64(rand.Int31n(10)) + 1
numRanges := rand.Intn(5) + 1
dranges := make(Intervals, 0, numRanges)
mint := rand.Int63n(time.Now().UnixNano())
for range numRanges {
dranges = dranges.Add(Interval{mint, mint + rand.Int63n(1000)})
mint += rand.Int63n(1000) + 1
}
stones.AddInterval(storage.SeriesRef(ref), dranges...)
}
_, err := WriteFile(promslog.NewNopLogger(), tmpdir, stones)
require.NoError(t, err)
restr, _, err := ReadTombstones(tmpdir)
require.NoError(t, err)
// Compare the two readers.
require.Equal(t, stones, restr)
}
func TestDeletingTombstones(t *testing.T) {
stones := NewMemTombstones()
ref := storage.SeriesRef(42)
mint := rand.Int63n(time.Now().UnixNano())
dranges := make(Intervals, 0, 1)
dranges = dranges.Add(Interval{mint, mint + rand.Int63n(1000)})
stones.AddInterval(ref, dranges...)
stones.AddInterval(storage.SeriesRef(43), dranges...)
intervals, err := stones.Get(ref)
require.NoError(t, err)
require.Equal(t, intervals, dranges)
stones.DeleteTombstones(map[storage.SeriesRef]struct{}{ref: {}})
intervals, err = stones.Get(ref)
require.NoError(t, err)
require.Empty(t, intervals)
}
func TestTombstonesGetWithCopy(t *testing.T) {
stones := NewMemTombstones()
stones.AddInterval(1, Intervals{{Mint: 1, Maxt: 2}, {Mint: 7, Maxt: 8}, {Mint: 11, Maxt: 12}}...)
intervals0, err := stones.Get(1)
require.NoError(t, err)
require.Equal(t, Intervals{{Mint: 1, Maxt: 2}, {Mint: 7, Maxt: 8}, {Mint: 11, Maxt: 12}}, intervals0)
intervals1 := intervals0.Add(Interval{Mint: 4, Maxt: 6})
require.Equal(t, Intervals{{Mint: 1, Maxt: 2}, {Mint: 4, Maxt: 8}, {Mint: 11, Maxt: 12}}, intervals0) // Original slice changed.
require.Equal(t, Intervals{{Mint: 1, Maxt: 2}, {Mint: 4, Maxt: 8}, {Mint: 11, Maxt: 12}}, intervals1)
intervals2, err := stones.Get(1)
require.NoError(t, err)
require.Equal(t, Intervals{{Mint: 1, Maxt: 2}, {Mint: 7, Maxt: 8}, {Mint: 11, Maxt: 12}}, intervals2)
}
func TestTruncateBefore(t *testing.T) {
cases := []struct {
before Intervals
beforeT int64
after Intervals
}{
{
before: Intervals{{1, 2}, {4, 10}, {12, 100}},
beforeT: 3,
after: Intervals{{4, 10}, {12, 100}},
},
{
before: Intervals{{1, 2}, {4, 10}, {12, 100}, {200, 1000}},
beforeT: 900,
after: Intervals{{200, 1000}},
},
{
before: Intervals{{1, 2}, {4, 10}, {12, 100}, {200, 1000}},
beforeT: 2000,
after: nil,
},
{
before: Intervals{{1, 2}, {4, 10}, {12, 100}, {200, 1000}},
beforeT: 0,
after: Intervals{{1, 2}, {4, 10}, {12, 100}, {200, 1000}},
},
}
for _, c := range cases {
ref := storage.SeriesRef(42)
stones := NewMemTombstones()
stones.AddInterval(ref, c.before...)
stones.TruncateBefore(c.beforeT)
ts, err := stones.Get(ref)
require.NoError(t, err)
require.Equal(t, c.after, ts)
}
}
func TestAddingNewIntervals(t *testing.T) {
cases := []struct {
exist Intervals
new Interval
exp Intervals
}{
{
new: Interval{1, 2},
exp: Intervals{{1, 2}},
},
{
exist: Intervals{{1, 2}},
new: Interval{1, 2},
exp: Intervals{{1, 2}},
},
{
exist: Intervals{{1, 4}, {6, 6}},
new: Interval{5, 6},
exp: Intervals{{1, 6}},
},
{
exist: Intervals{{1, 10}, {12, 20}, {25, 30}},
new: Interval{21, 25},
exp: Intervals{{1, 10}, {12, 30}},
},
{
exist: Intervals{{1, 10}, {12, 20}, {25, 30}},
new: Interval{22, 23},
exp: Intervals{{1, 10}, {12, 20}, {22, 23}, {25, 30}},
},
{
exist: Intervals{{1, 2}, {3, 5}, {7, 7}},
new: Interval{6, 7},
exp: Intervals{{1, 2}, {3, 7}},
},
{
exist: Intervals{{1, 10}, {12, 20}, {25, 30}},
new: Interval{18, 23},
exp: Intervals{{1, 10}, {12, 23}, {25, 30}},
},
{
exist: Intervals{{1, 10}, {12, 20}, {25, 30}},
new: Interval{9, 23},
exp: Intervals{{1, 23}, {25, 30}},
},
{
exist: Intervals{{1, 10}, {12, 20}, {25, 30}},
new: Interval{9, 230},
exp: Intervals{{1, 230}},
},
{
exist: Intervals{{5, 10}, {12, 20}, {25, 30}},
new: Interval{1, 4},
exp: Intervals{{1, 10}, {12, 20}, {25, 30}},
},
{
exist: Intervals{{5, 10}, {12, 20}, {25, 30}},
new: Interval{11, 14},
exp: Intervals{{5, 20}, {25, 30}},
},
{
exist: Intervals{{5, 10}, {12, 20}, {25, 30}},
new: Interval{1, 3},
exp: Intervals{{1, 3}, {5, 10}, {12, 20}, {25, 30}},
},
{
exist: Intervals{{5, 10}, {12, 20}, {25, 30}},
new: Interval{35, 40},
exp: Intervals{{5, 10}, {12, 20}, {25, 30}, {35, 40}},
},
{
new: Interval{math.MinInt64, 2},
exp: Intervals{{math.MinInt64, 2}},
},
{
exist: Intervals{{math.MinInt64, 2}},
new: Interval{9, math.MaxInt64},
exp: Intervals{{math.MinInt64, 2}, {9, math.MaxInt64}},
},
{
exist: Intervals{{9, math.MaxInt64}},
new: Interval{math.MinInt64, 2},
exp: Intervals{{math.MinInt64, 2}, {9, math.MaxInt64}},
},
{
exist: Intervals{{9, math.MaxInt64}},
new: Interval{math.MinInt64, 10},
exp: Intervals{{math.MinInt64, math.MaxInt64}},
},
{
exist: Intervals{{9, 10}},
new: Interval{math.MinInt64, 7},
exp: Intervals{{math.MinInt64, 7}, {9, 10}},
},
{
exist: Intervals{{9, 10}},
new: Interval{12, math.MaxInt64},
exp: Intervals{{9, 10}, {12, math.MaxInt64}},
},
{
exist: Intervals{{9, 10}},
new: Interval{math.MinInt64, 8},
exp: Intervals{{math.MinInt64, 10}},
},
{
exist: Intervals{{9, 10}},
new: Interval{11, math.MaxInt64},
exp: Intervals{{9, math.MaxInt64}},
},
}
for _, c := range cases {
t.Run("", func(t *testing.T) {
require.Equal(t, c.exp, c.exist.Add(c.new))
})
}
}
// TestMemTombstonesConcurrency to make sure they are safe to access from different goroutines.
func TestMemTombstonesConcurrency(t *testing.T) {
tomb := NewMemTombstones()
totalRuns := 100
var wg sync.WaitGroup
wg.Add(2)
go func() {
for x := range totalRuns {
tomb.AddInterval(storage.SeriesRef(x), Interval{int64(x), int64(x)})
}
wg.Done()
}()
go func() {
for x := range totalRuns {
_, err := tomb.Get(storage.SeriesRef(x))
require.NoError(t, err)
}
wg.Done()
}()
wg.Wait()
}

| go | github | https://github.com/prometheus/prometheus | tsdb/tombstones/tombstones_test.go
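The `TestAddingNewIntervals` cases above encode an inclusive interval-merge rule: ranges that touch or overlap coalesce, so adding `{21, 25}` to `{{1, 10}, {12, 20}, {25, 30}}` bridges the last two ranges. A hypothetical Python restatement of that rule (not the Prometheus implementation):

```python
# Sketch of inclusive interval merging: an existing range is absorbed into
# the new one whenever they overlap or are adjacent (b + 1 >= lo), and the
# new range is emitted in sorted position once it can no longer grow.
def add_interval(intervals, new):
    lo, hi = new
    out, placed = [], False
    for a, b in intervals:
        if b + 1 < lo:            # existing range ends strictly before new
            out.append((a, b))
        elif hi + 1 < a:          # existing range starts strictly after new
            if not placed:
                out.append((lo, hi))
                placed = True
            out.append((a, b))
        else:                     # touching or overlapping: absorb it
            lo, hi = min(lo, a), max(hi, b)
    if not placed:
        out.append((lo, hi))
    return out

print(add_interval([(1, 10), (12, 20), (25, 30)], (21, 25)))
# [(1, 10), (12, 30)]
```

This mirrors the Go test expectations above; the `math.MinInt64`/`math.MaxInt64` cases additionally require overflow-safe arithmetic in Go, which Python's unbounded integers sidestep.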
# Copyright (C) 2018-present MongoDB, Inc.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the Server Side Public License, version 1,
# as published by MongoDB, Inc.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# Server Side Public License for more details.
#
# You should have received a copy of the Server Side Public License
# along with this program. If not, see
# <http://www.mongodb.com/licensing/server-side-public-license>.
#
# As a special exception, the copyright holders give permission to link the
# code of portions of this program with the OpenSSL library under certain
# conditions as described in each individual source file and distribute
# linked combinations including the program with the OpenSSL library. You
# must comply with the Server Side Public License in all respects for
# all of the code used other than as permitted herein. If you modify file(s)
# with this exception, you may extend this exception to your version of the
# file(s), but you are not obligated to do so. If you do not wish to do so,
# delete this exception statement from your version. If you delete this
# exception statement from all source files in the program, then also delete
# it in the license file.
#
"""A support module to load idl modules for testing."""
import os
import sys
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
import idl.ast # noqa: F401
import idl.binder # noqa: F401
import idl.compiler # noqa: F401
import idl.errors # noqa: F401
import idl.generator # noqa: F401
import idl.parser # noqa: F401
import idl.syntax  # noqa: F401

| python | github | https://github.com/mongodb/mongo | buildscripts/idl/tests/context.py
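The `sys.path.insert(0, ...)` line in `context.py` above is a common test-bootstrap trick: put the parent of the test directory first on the import path so the package under test shadows any installed copy. A standalone illustration (the `__file__` fallback is an addition for contexts where it is undefined):

```python
# Compute the parent of the current file's directory and put it first on
# sys.path, so a sibling package (here, the `idl` package in the original)
# resolves before any installed version.
import os
import sys

here = os.path.dirname(os.path.abspath(__file__)) if '__file__' in globals() else os.getcwd()
parent = os.path.abspath(os.path.join(here, '..'))
sys.path.insert(0, parent)
print(sys.path[0] == parent)  # True
```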
# Lispy: Scheme Interpreter in Python
# (c) Peter Norvig, 2010-16; See http://norvig.com/lispy.html
from __future__ import division
import math
import operator as op
# Types
Symbol = str # A Lisp Symbol is implemented as a Python str
List = list # A Lisp List is implemented as a Python list
Number = (int, float) # A Lisp Number is implemented as a Python int or float
# Parsing: parse, tokenize, and read_from_tokens
def parse(program):
"Read a Scheme expression from a string."
return read_from_tokens(tokenize(program))
def tokenize(s):
"Convert a string into a list of tokens."
return s.replace('(', ' ( ').replace(')', ' ) ').split()
def read_from_tokens(tokens):
"Read an expression from a sequence of tokens."
if len(tokens) == 0:
raise SyntaxError('unexpected EOF while reading')
token = tokens.pop(0)
if '(' == token:
L = []
while tokens[0] != ')':
L.append(read_from_tokens(tokens))
tokens.pop(0) # pop off ')'
return L
elif ')' == token:
raise SyntaxError('unexpected )')
else:
return atom(token)
def atom(token):
"Numbers become numbers; every other token is a symbol."
try:
return int(token)
except ValueError:
try:
return float(token)
except ValueError:
return Symbol(token)
# Environments
def standard_env():
"An environment with some Scheme standard procedures."
env = Env()
env.update(vars(math)) # sin, cos, sqrt, pi, ...
env.update({
'+': op.add, '-': op.sub, '*': op.mul, '/': op.truediv,
'>': op.gt, '<': op.lt, '>=': op.ge, '<=': op.le, '=': op.eq,
'abs': abs,
'append': op.add,
'apply': lambda proc, args: proc(*args),  # the 'apply' builtin is Python 2 only
'begin': lambda *x: x[-1],
'car': lambda x: x[0],
'cdr': lambda x: x[1:],
'cons': lambda x, y: [x] + y,
'eq?': op.is_,
'equal?': op.eq,
'length': len,
'list': lambda *x: list(x),
'list?': lambda x: isinstance(x, list),
'map': lambda f, seq: list(map(f, seq)),  # force a list; map is lazy in Python 3
'max': max,
'min': min,
'not': op.not_,
'null?': lambda x: x == [],
'number?': lambda x: isinstance(x, Number),
'procedure?': callable,
'round': round,
'symbol?': lambda x: isinstance(x, Symbol),
})
return env
class Env(dict):
"An environment: a dict of {'var':val} pairs, with an outer Env."
def __init__(self, parms=(), args=(), outer=None):
self.update(zip(parms, args))
self.outer = outer
def find(self, var):
"Find the innermost Env where var appears."
return self if (var in self) else self.outer.find(var)
global_env = standard_env()
# Interaction: A REPL
def repl(prompt='lis.py> '):
    "A prompt-read-eval-print loop."
    while True:
        try:
            line = raw_input(prompt)   # Python 2
        except NameError:
            line = input(prompt)       # Python 3
        val = eval(parse(line))
        if val is not None:
            print(lispstr(val))
def lispstr(exp):
"Convert a Python object back into a Lisp-readable string."
if isinstance(exp, list):
return '(' + ' '.join(map(lispstr, exp)) + ')'
else:
return str(exp)
# Procedures
class Procedure(object):
"A user-defined Scheme procedure."
def __init__(self, parms, body, env):
self.parms, self.body, self.env = parms, body, env
def __call__(self, *args):
return eval(self.body, Env(self.parms, args, self.env))
# eval
def eval(x, env=global_env):
"Evaluate an expression in an environment."
if isinstance(x, Symbol): # variable reference
return env.find(x)[x]
elif not isinstance(x, List): # constant literal
return x
elif x[0] == 'quote': # (quote exp)
(_, exp) = x
return exp
elif x[0] == 'if': # (if test conseq alt)
(_, test, conseq, alt) = x
exp = (conseq if eval(test, env) else alt)
return eval(exp, env)
elif x[0] == 'define': # (define var exp)
(_, var, exp) = x
env[var] = eval(exp, env)
elif x[0] == 'set!': # (set! var exp)
(_, var, exp) = x
env.find(var)[var] = eval(exp, env)
elif x[0] == 'lambda': # (lambda (var...) body)
(_, parms, body) = x
return Procedure(parms, body, env)
else: # (proc arg...)
proc = eval(x[0], env)
args = [eval(exp, env) for exp in x[1:]]
        return proc(*args)

| unknown | codeparrot/codeparrot-clean | |
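The front half of the interpreter above — `tokenize` plus `read_from_tokens` — turns a string of Scheme into nested Python lists. Restated standalone for illustration (condensed, with `atom` folded in):

```python
# The reader pipeline: pad parens with spaces, split on whitespace, then
# recursively build nested lists, converting numeric tokens as we go.
def tokenize(s):
    return s.replace('(', ' ( ').replace(')', ' ) ').split()

def atom(token):
    try:
        return int(token)
    except ValueError:
        try:
            return float(token)
        except ValueError:
            return token          # a symbol

def read(tokens):
    token = tokens.pop(0)
    if token == '(':
        lst = []
        while tokens[0] != ')':
            lst.append(read(tokens))
        tokens.pop(0)             # discard the closing ')'
        return lst
    return atom(token)

print(read(tokenize('(+ 1 (* 2 3))')))  # ['+', 1, ['*', 2, 3]]
```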
# -*- coding: utf-8 -*-
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import proto # type: ignore
from google.ads.googleads.v8.enums.types import response_content_type as gage_response_content_type
from google.ads.googleads.v8.resources.types import customer_negative_criterion as gagr_customer_negative_criterion
from google.rpc import status_pb2 # type: ignore
__protobuf__ = proto.module(
package='google.ads.googleads.v8.services',
marshal='google.ads.googleads.v8',
manifest={
'GetCustomerNegativeCriterionRequest',
'MutateCustomerNegativeCriteriaRequest',
'CustomerNegativeCriterionOperation',
'MutateCustomerNegativeCriteriaResponse',
'MutateCustomerNegativeCriteriaResult',
},
)
class GetCustomerNegativeCriterionRequest(proto.Message):
r"""Request message for
[CustomerNegativeCriterionService.GetCustomerNegativeCriterion][google.ads.googleads.v8.services.CustomerNegativeCriterionService.GetCustomerNegativeCriterion].
Attributes:
resource_name (str):
Required. The resource name of the criterion
to fetch.
"""
resource_name = proto.Field(
proto.STRING,
number=1,
)
class MutateCustomerNegativeCriteriaRequest(proto.Message):
r"""Request message for
[CustomerNegativeCriterionService.MutateCustomerNegativeCriteria][google.ads.googleads.v8.services.CustomerNegativeCriterionService.MutateCustomerNegativeCriteria].
Attributes:
customer_id (str):
Required. The ID of the customer whose
criteria are being modified.
operations (Sequence[google.ads.googleads.v8.services.types.CustomerNegativeCriterionOperation]):
Required. The list of operations to perform
on individual criteria.
partial_failure (bool):
If true, successful operations will be
carried out and invalid operations will return
errors. If false, all operations will be carried
out in one transaction if and only if they are
all valid. Default is false.
validate_only (bool):
If true, the request is validated but not
executed. Only errors are returned, not results.
response_content_type (google.ads.googleads.v8.enums.types.ResponseContentTypeEnum.ResponseContentType):
The response content type setting. Determines
whether the mutable resource or just the
resource name should be returned post mutation.
"""
customer_id = proto.Field(
proto.STRING,
number=1,
)
operations = proto.RepeatedField(
proto.MESSAGE,
number=2,
message='CustomerNegativeCriterionOperation',
)
partial_failure = proto.Field(
proto.BOOL,
number=3,
)
validate_only = proto.Field(
proto.BOOL,
number=4,
)
response_content_type = proto.Field(
proto.ENUM,
number=5,
enum=gage_response_content_type.ResponseContentTypeEnum.ResponseContentType,
)
class CustomerNegativeCriterionOperation(proto.Message):
r"""A single operation (create or remove) on a customer level
negative criterion.
Attributes:
create (google.ads.googleads.v8.resources.types.CustomerNegativeCriterion):
Create operation: No resource name is
expected for the new criterion.
remove (str):
Remove operation: A resource name for the removed criterion
is expected, in this format:
``customers/{customer_id}/customerNegativeCriteria/{criterion_id}``
"""
create = proto.Field(
proto.MESSAGE,
number=1,
oneof='operation',
message=gagr_customer_negative_criterion.CustomerNegativeCriterion,
)
remove = proto.Field(
proto.STRING,
number=2,
oneof='operation',
)
class MutateCustomerNegativeCriteriaResponse(proto.Message):
r"""Response message for customer negative criterion mutate.
Attributes:
partial_failure_error (google.rpc.status_pb2.Status):
Errors that pertain to operation failures in the partial
failure mode. Returned only when partial_failure = true and
all errors occur inside the operations. If any errors occur
outside the operations (e.g. auth errors), we return an RPC
level error.
results (Sequence[google.ads.googleads.v8.services.types.MutateCustomerNegativeCriteriaResult]):
All results for the mutate.
"""
partial_failure_error = proto.Field(
proto.MESSAGE,
number=3,
message=status_pb2.Status,
)
results = proto.RepeatedField(
proto.MESSAGE,
number=2,
message='MutateCustomerNegativeCriteriaResult',
)
class MutateCustomerNegativeCriteriaResult(proto.Message):
r"""The result for the criterion mutate.
Attributes:
resource_name (str):
Returned for successful operations.
customer_negative_criterion (google.ads.googleads.v8.resources.types.CustomerNegativeCriterion):
The mutated criterion with only mutable fields after mutate.
The field will only be returned when response_content_type
is set to "MUTABLE_RESOURCE".
"""
resource_name = proto.Field(
proto.STRING,
number=1,
)
customer_negative_criterion = proto.Field(
proto.MESSAGE,
number=2,
message=gagr_customer_negative_criterion.CustomerNegativeCriterion,
)
__all__ = tuple(sorted(__protobuf__.manifest))

| unknown | codeparrot/codeparrot-clean | |
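The `create`/`remove` fields of `CustomerNegativeCriterionOperation` above share a `oneof='operation'` group, meaning exactly one of them may be set per operation. A hypothetical, library-free sketch of that contract (plain dicts, not proto-plus):

```python
# Sketch of the oneof contract: building an operation requires setting
# exactly one of create/remove; setting both or neither is an error.
def make_operation(create=None, remove=None):
    if (create is None) == (remove is None):
        raise ValueError('set exactly one of create or remove')
    return {'create': create} if create is not None else {'remove': remove}

# A remove operation names an existing criterion; the resource-name format
# below follows the docstring above (IDs are illustrative).
op = make_operation(remove='customers/123/customerNegativeCriteria/456')
print(op)  # {'remove': 'customers/123/customerNegativeCriteria/456'}
```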
<!---
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-->
# Apache Hadoop Changelog
## Release 0.22.1 - Unreleased (as of 2018-09-01)
### INCOMPATIBLE CHANGES:
| JIRA | Summary | Priority | Component | Reporter | Contributor |
|:---- |:---- | :--- |:---- |:---- |:---- |
| [HADOOP-6453](https://issues.apache.org/jira/browse/HADOOP-6453) | Hadoop wrapper script shouldn't ignore an existing JAVA\_LIBRARY\_PATH | Minor | scripts | Chad Metcalf | |
### NEW FEATURES:
| JIRA | Summary | Priority | Component | Reporter | Contributor |
|:---- |:---- | :--- |:---- |:---- |:---- |
| [HADOOP-7119](https://issues.apache.org/jira/browse/HADOOP-7119) | add Kerberos HTTP SPNEGO authentication support to Hadoop JT/NN/DN/TT web-consoles | Major | security | Alejandro Abdelnur | Alejandro Abdelnur |
| [HADOOP-7937](https://issues.apache.org/jira/browse/HADOOP-7937) | Forward port SequenceFile#syncFs and friends from Hadoop 1.x | Major | io | Eli Collins | Tom White |
| [MAPREDUCE-3837](https://issues.apache.org/jira/browse/MAPREDUCE-3837) | Job tracker is not able to recover job in case of crash and after that no user can submit job. | Major | . | Mayank Bansal | Mayank Bansal |
### IMPROVEMENTS:
| JIRA | Summary | Priority | Component | Reporter | Contributor |
|:---- |:---- | :--- |:---- |:---- |:---- |
| [HADOOP-6995](https://issues.apache.org/jira/browse/HADOOP-6995) | Allow wildcards to be used in ProxyUsers configurations | Minor | security | Todd Lipcon | Todd Lipcon |
| [HDFS-1601](https://issues.apache.org/jira/browse/HDFS-1601) | Pipeline ACKs are sent as lots of tiny TCP packets | Major | datanode | Todd Lipcon | Todd Lipcon |
| [HADOOP-7272](https://issues.apache.org/jira/browse/HADOOP-7272) | Remove unnecessary security related info logs | Major | ipc, security | Suresh Srinivas | Suresh Srinivas |
| [HDFS-2246](https://issues.apache.org/jira/browse/HDFS-2246) | Shortcut a local client reads to a Datanodes files directly | Major | . | Sanjay Radia | Jitendra Nath Pandey |
| [HADOOP-7338](https://issues.apache.org/jira/browse/HADOOP-7338) | LocalDirAllocator improvements for MR-2178 | Major | . | Todd Lipcon | Benoy Antony |
| [MAPREDUCE-4403](https://issues.apache.org/jira/browse/MAPREDUCE-4403) | Adding test case for resubmission of jobs in TestRecoveryManager | Minor | jobtracker | Mayank Bansal | Mayank Bansal |
| [MAPREDUCE-4405](https://issues.apache.org/jira/browse/MAPREDUCE-4405) | Adding test case for HierarchicalQueue in TestJobQueueClient | Minor | client | Mayank Bansal | Mayank Bansal |
| [MAPREDUCE-4349](https://issues.apache.org/jira/browse/MAPREDUCE-4349) | Distributed Cache gives inconsistent result if cache Archive files get deleted from task tracker | Minor | . | Mayank Bansal | Mayank Bansal |
| [MAPREDUCE-2353](https://issues.apache.org/jira/browse/MAPREDUCE-2353) | Make the MR changes to reflect the API changes in SecureIO library | Major | security, task, tasktracker | Devaraj Das | Benoy Antony |
| [MAPREDUCE-1521](https://issues.apache.org/jira/browse/MAPREDUCE-1521) | Protection against incorrectly configured reduces | Major | jobtracker | Arun C Murthy | Mahadev konar |
### BUG FIXES:
| JIRA | Summary | Priority | Component | Reporter | Contributor |
|:---- |:---- | :--- |:---- |:---- |:---- |
| [MAPREDUCE-2420](https://issues.apache.org/jira/browse/MAPREDUCE-2420) | JobTracker should be able to renew delegation token over HTTP | Major | . | Boris Shkolnik | Boris Shkolnik |
| [MAPREDUCE-2452](https://issues.apache.org/jira/browse/MAPREDUCE-2452) | Delegation token cancellation shouldn't hold global JobTracker lock | Major | jobtracker | Devaraj Das | Devaraj Das |
| [HADOOP-7621](https://issues.apache.org/jira/browse/HADOOP-7621) | alfredo config should be in a file not readable by users | Critical | security | Alejandro Abdelnur | Aaron T. Myers |
| [HDFS-2698](https://issues.apache.org/jira/browse/HDFS-2698) | BackupNode is downloading image from NameNode for every checkpoint | Major | namenode | Konstantin Shvachko | Konstantin Shvachko |
| [HDFS-1910](https://issues.apache.org/jira/browse/HDFS-1910) | when dfs.name.dir and dfs.name.edits.dir are same fsimage will be saved twice every time | Minor | namenode | Gokul | |
| [MAPREDUCE-3593](https://issues.apache.org/jira/browse/MAPREDUCE-3593) | MAPREDUCE Impersonation is not working in 22 | Major | job submission | Mayank Bansal | Mayank Bansal |
| [MAPREDUCE-3725](https://issues.apache.org/jira/browse/MAPREDUCE-3725) | Hadoop 22 hadoop job -list returns user name as NULL | Major | client | Mayank Bansal | Mayank Bansal |
| [HDFS-2718](https://issues.apache.org/jira/browse/HDFS-2718) | Optimize OP\_ADD in edits loading | Major | namenode | Konstantin Shvachko | Konstantin Shvachko |
| [HDFS-2877](https://issues.apache.org/jira/browse/HDFS-2877) | If locking of a storage dir fails, it will remove the other NN's lock file on exit | Major | namenode | Todd Lipcon | Todd Lipcon |
| [HADOOP-7680](https://issues.apache.org/jira/browse/HADOOP-7680) | TestHardLink fails on Mac OS X, when gnu stat is in path | Major | . | Milind Bhandarkar | Milind Bhandarkar |
| [HDFS-2991](https://issues.apache.org/jira/browse/HDFS-2991) | failure to load edits: ClassCastException | Blocker | namenode | Todd Lipcon | Todd Lipcon |
| [MAPREDUCE-4164](https://issues.apache.org/jira/browse/MAPREDUCE-4164) | Hadoop 22 Exception thrown after task completion causes its reexecution | Major | tasktracker | Mayank Bansal | Mayank Bansal |
| [MAPREDUCE-3863](https://issues.apache.org/jira/browse/MAPREDUCE-3863) | 0.22 branch mvn deploy is not publishing hadoop-streaming JAR | Critical | build | Alejandro Abdelnur | Benoy Antony |
| [HDFS-3368](https://issues.apache.org/jira/browse/HDFS-3368) | Missing blocks due to bad DataNodes coming up and down. | Major | namenode | Konstantin Shvachko | Konstantin Shvachko |
| [MAPREDUCE-2178](https://issues.apache.org/jira/browse/MAPREDUCE-2178) | Race condition in LinuxTaskController permissions handling | Major | security, task-controller | Todd Lipcon | Benoy Antony |
| [MAPREDUCE-4314](https://issues.apache.org/jira/browse/MAPREDUCE-4314) | Synchronization in JvmManager for 0.22 branch | Major | tasktracker | Konstantin Shvachko | Benoy Antony |
| [MAPREDUCE-2377](https://issues.apache.org/jira/browse/MAPREDUCE-2377) | task-controller fails to parse configuration if it doesn't end in \\n | Major | task-controller | Todd Lipcon | Benoy Antony |
| [MAPREDUCE-4318](https://issues.apache.org/jira/browse/MAPREDUCE-4318) | TestRecoveryManager should not use raw and deprecated configuration parameters. | Major | test | Konstantin Shvachko | Benoy Antony |
| [MAPREDUCE-4360](https://issues.apache.org/jira/browse/MAPREDUCE-4360) | Capacity Scheduler Hierarchical leaf queue does not honor the max capacity of container queue | Major | . | Mayank Bansal | Mayank Bansal |
| [HDFS-1584](https://issues.apache.org/jira/browse/HDFS-1584) | Need to check TGT and renew if needed when fetching delegation tokens using HFTP | Major | security | Kan Zhang | Benoy Antony |
| [HDFS-3402](https://issues.apache.org/jira/browse/HDFS-3402) | Fix hdfs scripts for secure datanodes | Minor | scripts, security | Benoy Antony | Benoy Antony |
| [HADOOP-7115](https://issues.apache.org/jira/browse/HADOOP-7115) | Add a cache for getpwuid\_r and getpwgid\_r calls | Major | . | Arun C Murthy | Alejandro Abdelnur |
| [MAPREDUCE-4404](https://issues.apache.org/jira/browse/MAPREDUCE-4404) | Adding Test case for TestMRJobClient to verify the user name | Minor | client | Mayank Bansal | Mayank Bansal |
| [MAPREDUCE-5706](https://issues.apache.org/jira/browse/MAPREDUCE-5706) | toBeDeleted parent directories aren't being cleaned up | Major | security | Robert Kanter | Robert Kanter |
### SUB-TASKS:
| JIRA | Summary | Priority | Component | Reporter | Contributor |
|:---- |:---- | :--- |:---- |:---- |:---- |
| [HDFS-2886](https://issues.apache.org/jira/browse/HDFS-2886) | CreateEditLogs should generate a realistic edit log. | Major | namenode | Konstantin Shvachko | Konstantin Shvachko |
| [HADOOP-8381](https://issues.apache.org/jira/browse/HADOOP-8381) | Substitute \_HOST with hostname for HTTP principals | Minor | security | Benoy Antony | Benoy Antony |
| [HADOOP-8383](https://issues.apache.org/jira/browse/HADOOP-8383) | TestKerberosAuthenticator fails | Minor | security | Benoy Antony | Benoy Antony |
### OTHER:
| JIRA | Summary | Priority | Component | Reporter | Contributor |
|:---- |:---- | :--- |:---- |:---- |:---- |
| [MAPREDUCE-4240](https://issues.apache.org/jira/browse/MAPREDUCE-4240) | Revert MAPREDUCE-2767 | Minor | security | Benoy Antony | Benoy Antony |
| [MAPREDUCE-4243](https://issues.apache.org/jira/browse/MAPREDUCE-4243) | Modify mapreduce build to include task-controller | Minor | build | Benoy Antony | Benoy Antony |
| [MAPREDUCE-4244](https://issues.apache.org/jira/browse/MAPREDUCE-4244) | Fix an issue related to do with setting of correct groups for tasks | Minor | security | Benoy Antony | Benoy Antony |
| [MAPREDUCE-4246](https://issues.apache.org/jira/browse/MAPREDUCE-4246) | Failure in deleting user directories in Secure hadoop | Major | security | Benoy Antony | Benoy Antony |
| [MAPREDUCE-4247](https://issues.apache.org/jira/browse/MAPREDUCE-4247) | TestTaskTrackerLocalization fails | Minor | security | Benoy Antony | Benoy Antony |
| [MAPREDUCE-4248](https://issues.apache.org/jira/browse/MAPREDUCE-4248) | TestRecoveryManager fails | Minor | security | Benoy Antony | Benoy Antony |
| [MAPREDUCE-4249](https://issues.apache.org/jira/browse/MAPREDUCE-4249) | Fix failures in streaming test TestFileArgs | Minor | security | Benoy Antony | Benoy Antony |
| [HADOOP-8357](https://issues.apache.org/jira/browse/HADOOP-8357) | Restore security in Hadoop 0.22 branch | Major | security | Konstantin Shvachko | Benoy Antony |

| unknown | github | https://github.com/apache/hadoop | hadoop-common-project/hadoop-common/src/site/markdown/release/0.22.1/CHANGELOG.0.22.1.md
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for initializers."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import importlib
import math
import numpy as np
from tensorflow.python.eager import backprop
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
from tensorflow.python.framework import test_util
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import gradients_impl
from tensorflow.python.ops import nn_ops
from tensorflow.python.ops import variables
from tensorflow.python.ops.distributions import kullback_leibler
from tensorflow.python.ops.distributions import normal as normal_lib
from tensorflow.python.platform import test
from tensorflow.python.platform import tf_logging
def try_import(name): # pylint: disable=invalid-name
module = None
try:
module = importlib.import_module(name)
except ImportError as e:
tf_logging.warning("Could not import %s: %s" % (name, str(e)))
return module
stats = try_import("scipy.stats")
class NormalTest(test.TestCase):
def setUp(self):
self._rng = np.random.RandomState(123)
def assertAllFinite(self, tensor):
is_finite = np.isfinite(self.evaluate(tensor))
all_true = np.ones_like(is_finite, dtype=bool)
self.assertAllEqual(all_true, is_finite)
def _testParamShapes(self, sample_shape, expected):
with self.test_session():
param_shapes = normal_lib.Normal.param_shapes(sample_shape)
mu_shape, sigma_shape = param_shapes["loc"], param_shapes["scale"]
self.assertAllEqual(expected, self.evaluate(mu_shape))
self.assertAllEqual(expected, self.evaluate(sigma_shape))
mu = array_ops.zeros(mu_shape)
sigma = array_ops.ones(sigma_shape)
self.assertAllEqual(
expected,
self.evaluate(array_ops.shape(normal_lib.Normal(mu, sigma).sample())))
def _testParamStaticShapes(self, sample_shape, expected):
param_shapes = normal_lib.Normal.param_static_shapes(sample_shape)
mu_shape, sigma_shape = param_shapes["loc"], param_shapes["scale"]
self.assertEqual(expected, mu_shape)
self.assertEqual(expected, sigma_shape)
@test_util.run_in_graph_and_eager_modes
def testParamShapes(self):
sample_shape = [10, 3, 4]
self._testParamShapes(sample_shape, sample_shape)
self._testParamShapes(constant_op.constant(sample_shape), sample_shape)
@test_util.run_in_graph_and_eager_modes
def testParamStaticShapes(self):
sample_shape = [10, 3, 4]
self._testParamStaticShapes(sample_shape, sample_shape)
self._testParamStaticShapes(
tensor_shape.TensorShape(sample_shape), sample_shape)
@test_util.run_in_graph_and_eager_modes
def testNormalWithSoftplusScale(self):
with self.test_session():
mu = array_ops.zeros((10, 3))
rho = array_ops.ones((10, 3)) * -2.
normal = normal_lib.NormalWithSoftplusScale(loc=mu, scale=rho)
self.assertAllEqual(self.evaluate(mu), self.evaluate(normal.loc))
self.assertAllEqual(
self.evaluate(nn_ops.softplus(rho)), self.evaluate(normal.scale))
@test_util.run_in_graph_and_eager_modes
def testNormalLogPDF(self):
with self.test_session():
batch_size = 6
mu = constant_op.constant([3.0] * batch_size)
sigma = constant_op.constant([math.sqrt(10.0)] * batch_size)
x = np.array([-2.5, 2.5, 4.0, 0.0, -1.0, 2.0], dtype=np.float32)
normal = normal_lib.Normal(loc=mu, scale=sigma)
log_pdf = normal.log_prob(x)
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()), log_pdf.get_shape())
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()),
self.evaluate(log_pdf).shape)
self.assertAllEqual(normal.batch_shape, log_pdf.get_shape())
self.assertAllEqual(normal.batch_shape, self.evaluate(log_pdf).shape)
pdf = normal.prob(x)
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()), pdf.get_shape())
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()),
self.evaluate(pdf).shape)
self.assertAllEqual(normal.batch_shape, pdf.get_shape())
self.assertAllEqual(normal.batch_shape, self.evaluate(pdf).shape)
if not stats:
return
expected_log_pdf = stats.norm(self.evaluate(mu),
self.evaluate(sigma)).logpdf(x)
self.assertAllClose(expected_log_pdf, self.evaluate(log_pdf))
self.assertAllClose(np.exp(expected_log_pdf), self.evaluate(pdf))
@test_util.run_in_graph_and_eager_modes
def testNormalLogPDFMultidimensional(self):
with self.test_session():
batch_size = 6
mu = constant_op.constant([[3.0, -3.0]] * batch_size)
sigma = constant_op.constant([[math.sqrt(10.0), math.sqrt(15.0)]] *
batch_size)
x = np.array([[-2.5, 2.5, 4.0, 0.0, -1.0, 2.0]], dtype=np.float32).T
normal = normal_lib.Normal(loc=mu, scale=sigma)
log_pdf = normal.log_prob(x)
log_pdf_values = self.evaluate(log_pdf)
self.assertEqual(log_pdf.get_shape(), (6, 2))
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()), log_pdf.get_shape())
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()),
self.evaluate(log_pdf).shape)
self.assertAllEqual(normal.batch_shape, log_pdf.get_shape())
self.assertAllEqual(normal.batch_shape, self.evaluate(log_pdf).shape)
pdf = normal.prob(x)
pdf_values = self.evaluate(pdf)
self.assertEqual(pdf.get_shape(), (6, 2))
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()), pdf.get_shape())
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()), pdf_values.shape)
self.assertAllEqual(normal.batch_shape, pdf.get_shape())
self.assertAllEqual(normal.batch_shape, pdf_values.shape)
if not stats:
return
expected_log_pdf = stats.norm(self.evaluate(mu),
self.evaluate(sigma)).logpdf(x)
self.assertAllClose(expected_log_pdf, log_pdf_values)
self.assertAllClose(np.exp(expected_log_pdf), pdf_values)
@test_util.run_in_graph_and_eager_modes
def testNormalCDF(self):
with self.test_session():
batch_size = 50
mu = self._rng.randn(batch_size)
sigma = self._rng.rand(batch_size) + 1.0
x = np.linspace(-8.0, 8.0, batch_size).astype(np.float64)
normal = normal_lib.Normal(loc=mu, scale=sigma)
cdf = normal.cdf(x)
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()), cdf.get_shape())
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()),
self.evaluate(cdf).shape)
self.assertAllEqual(normal.batch_shape, cdf.get_shape())
self.assertAllEqual(normal.batch_shape, self.evaluate(cdf).shape)
if not stats:
return
expected_cdf = stats.norm(mu, sigma).cdf(x)
self.assertAllClose(expected_cdf, self.evaluate(cdf), atol=0)
@test_util.run_in_graph_and_eager_modes
def testNormalSurvivalFunction(self):
with self.test_session():
batch_size = 50
mu = self._rng.randn(batch_size)
sigma = self._rng.rand(batch_size) + 1.0
x = np.linspace(-8.0, 8.0, batch_size).astype(np.float64)
normal = normal_lib.Normal(loc=mu, scale=sigma)
sf = normal.survival_function(x)
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()), sf.get_shape())
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()),
self.evaluate(sf).shape)
self.assertAllEqual(normal.batch_shape, sf.get_shape())
self.assertAllEqual(normal.batch_shape, self.evaluate(sf).shape)
if not stats:
return
expected_sf = stats.norm(mu, sigma).sf(x)
self.assertAllClose(expected_sf, self.evaluate(sf), atol=0)
@test_util.run_in_graph_and_eager_modes
def testNormalLogCDF(self):
with self.test_session():
batch_size = 50
mu = self._rng.randn(batch_size)
sigma = self._rng.rand(batch_size) + 1.0
x = np.linspace(-100.0, 10.0, batch_size).astype(np.float64)
normal = normal_lib.Normal(loc=mu, scale=sigma)
cdf = normal.log_cdf(x)
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()), cdf.get_shape())
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()),
self.evaluate(cdf).shape)
self.assertAllEqual(normal.batch_shape, cdf.get_shape())
self.assertAllEqual(normal.batch_shape, self.evaluate(cdf).shape)
if not stats:
return
expected_cdf = stats.norm(mu, sigma).logcdf(x)
self.assertAllClose(expected_cdf, self.evaluate(cdf), atol=0, rtol=1e-3)
def testFiniteGradientAtDifficultPoints(self):
for dtype in [np.float32, np.float64]:
g = ops.Graph()
with g.as_default():
mu = variables.Variable(dtype(0.0))
sigma = variables.Variable(dtype(1.0))
dist = normal_lib.Normal(loc=mu, scale=sigma)
x = np.array([-100., -20., -5., 0., 5., 20., 100.]).astype(dtype)
for func in [
dist.cdf, dist.log_cdf, dist.survival_function,
dist.log_survival_function, dist.log_prob, dist.prob
]:
value = func(x)
grads = gradients_impl.gradients(value, [mu, sigma])
with self.test_session(graph=g):
variables.global_variables_initializer().run()
self.assertAllFinite(value)
self.assertAllFinite(grads[0])
self.assertAllFinite(grads[1])
@test_util.run_in_graph_and_eager_modes
def testNormalLogSurvivalFunction(self):
with self.test_session():
batch_size = 50
mu = self._rng.randn(batch_size)
sigma = self._rng.rand(batch_size) + 1.0
x = np.linspace(-10.0, 100.0, batch_size).astype(np.float64)
normal = normal_lib.Normal(loc=mu, scale=sigma)
sf = normal.log_survival_function(x)
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()), sf.get_shape())
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()),
self.evaluate(sf).shape)
self.assertAllEqual(normal.batch_shape, sf.get_shape())
self.assertAllEqual(normal.batch_shape, self.evaluate(sf).shape)
if not stats:
return
expected_sf = stats.norm(mu, sigma).logsf(x)
self.assertAllClose(expected_sf, self.evaluate(sf), atol=0, rtol=1e-5)
@test_util.run_in_graph_and_eager_modes
def testNormalEntropyWithScalarInputs(self):
# Scipy.stats.norm cannot deal with the shapes in the other test.
with self.test_session():
mu_v = 2.34
sigma_v = 4.56
normal = normal_lib.Normal(loc=mu_v, scale=sigma_v)
entropy = normal.entropy()
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()), entropy.get_shape())
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()),
self.evaluate(entropy).shape)
self.assertAllEqual(normal.batch_shape, entropy.get_shape())
self.assertAllEqual(normal.batch_shape, self.evaluate(entropy).shape)
# scipy.stats.norm cannot deal with these shapes.
if not stats:
return
expected_entropy = stats.norm(mu_v, sigma_v).entropy()
self.assertAllClose(expected_entropy, self.evaluate(entropy))
@test_util.run_in_graph_and_eager_modes
def testNormalEntropy(self):
with self.test_session():
mu_v = np.array([1.0, 1.0, 1.0])
sigma_v = np.array([[1.0, 2.0, 3.0]]).T
normal = normal_lib.Normal(loc=mu_v, scale=sigma_v)
# scipy.stats.norm cannot deal with these shapes.
sigma_broadcast = mu_v * sigma_v
expected_entropy = 0.5 * np.log(2 * np.pi * np.exp(1) * sigma_broadcast**2)
entropy = normal.entropy()
np.testing.assert_allclose(expected_entropy, self.evaluate(entropy))
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()), entropy.get_shape())
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()),
self.evaluate(entropy).shape)
self.assertAllEqual(normal.batch_shape, entropy.get_shape())
self.assertAllEqual(normal.batch_shape, self.evaluate(entropy).shape)
@test_util.run_in_graph_and_eager_modes
def testNormalMeanAndMode(self):
with self.test_session():
# Mu will be broadcast to [7, 7, 7].
mu = [7.]
sigma = [11., 12., 13.]
normal = normal_lib.Normal(loc=mu, scale=sigma)
self.assertAllEqual((3,), normal.mean().get_shape())
self.assertAllEqual([7., 7, 7], self.evaluate(normal.mean()))
self.assertAllEqual((3,), normal.mode().get_shape())
self.assertAllEqual([7., 7, 7], self.evaluate(normal.mode()))
@test_util.run_in_graph_and_eager_modes
def testNormalQuantile(self):
with self.test_session():
batch_size = 52
mu = self._rng.randn(batch_size)
sigma = self._rng.rand(batch_size) + 1.0
p = np.linspace(0., 1.0, batch_size - 2).astype(np.float64)
# Quantile performs a piecewise rational approximation, so we add some
# special input values to make sure we hit all the pieces.
p = np.hstack((p, np.exp(-33), 1. - np.exp(-33)))
normal = normal_lib.Normal(loc=mu, scale=sigma)
x = normal.quantile(p)
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()), x.get_shape())
self.assertAllEqual(
self.evaluate(normal.batch_shape_tensor()),
self.evaluate(x).shape)
self.assertAllEqual(normal.batch_shape, x.get_shape())
self.assertAllEqual(normal.batch_shape, self.evaluate(x).shape)
if not stats:
return
expected_x = stats.norm(mu, sigma).ppf(p)
self.assertAllClose(expected_x, self.evaluate(x), atol=0.)
def _baseQuantileFiniteGradientAtDifficultPoints(self, dtype):
g = ops.Graph()
with g.as_default():
mu = variables.Variable(dtype(0.0))
sigma = variables.Variable(dtype(1.0))
dist = normal_lib.Normal(loc=mu, scale=sigma)
p = variables.Variable(
np.array([0.,
np.exp(-32.), np.exp(-2.),
1. - np.exp(-2.), 1. - np.exp(-32.),
1.]).astype(dtype))
value = dist.quantile(p)
grads = gradients_impl.gradients(value, [mu, p])
with self.test_session(graph=g):
variables.global_variables_initializer().run()
self.assertAllFinite(grads[0])
self.assertAllFinite(grads[1])
def testQuantileFiniteGradientAtDifficultPointsFloat32(self):
self._baseQuantileFiniteGradientAtDifficultPoints(np.float32)
def testQuantileFiniteGradientAtDifficultPointsFloat64(self):
self._baseQuantileFiniteGradientAtDifficultPoints(np.float64)
@test_util.run_in_graph_and_eager_modes
def testNormalVariance(self):
with self.test_session():
# sigma will be broadcast to [7, 7, 7]
mu = [1., 2., 3.]
sigma = [7.]
normal = normal_lib.Normal(loc=mu, scale=sigma)
self.assertAllEqual((3,), normal.variance().get_shape())
self.assertAllEqual([49., 49, 49], self.evaluate(normal.variance()))
@test_util.run_in_graph_and_eager_modes
def testNormalStandardDeviation(self):
with self.test_session():
# sigma will be broadcast to [7, 7, 7]
mu = [1., 2., 3.]
sigma = [7.]
normal = normal_lib.Normal(loc=mu, scale=sigma)
self.assertAllEqual((3,), normal.stddev().get_shape())
self.assertAllEqual([7., 7, 7], self.evaluate(normal.stddev()))
@test_util.run_in_graph_and_eager_modes
def testNormalSample(self):
with self.test_session():
mu = constant_op.constant(3.0)
sigma = constant_op.constant(math.sqrt(3.0))
mu_v = 3.0
sigma_v = np.sqrt(3.0)
n = constant_op.constant(100000)
normal = normal_lib.Normal(loc=mu, scale=sigma)
samples = normal.sample(n)
sample_values = self.evaluate(samples)
# Note that the standard error for the sample mean is ~ sigma / sqrt(n).
# The sample variance similarly is dependent on sigma and n.
# Thus, the tolerances below are very sensitive to number of samples
# as well as the variances chosen.
self.assertEqual(sample_values.shape, (100000,))
self.assertAllClose(sample_values.mean(), mu_v, atol=1e-1)
self.assertAllClose(sample_values.std(), sigma_v, atol=1e-1)
expected_samples_shape = tensor_shape.TensorShape(
[self.evaluate(n)]).concatenate(
tensor_shape.TensorShape(
self.evaluate(normal.batch_shape_tensor())))
self.assertAllEqual(expected_samples_shape, samples.get_shape())
self.assertAllEqual(expected_samples_shape, sample_values.shape)
expected_samples_shape = (
tensor_shape.TensorShape([self.evaluate(n)]).concatenate(
normal.batch_shape))
self.assertAllEqual(expected_samples_shape, samples.get_shape())
self.assertAllEqual(expected_samples_shape, sample_values.shape)
def testNormalFullyReparameterized(self):
mu = constant_op.constant(4.0)
sigma = constant_op.constant(3.0)
with backprop.GradientTape() as tape:
tape.watch(mu)
tape.watch(sigma)
normal = normal_lib.Normal(loc=mu, scale=sigma)
samples = normal.sample(100)
grad_mu, grad_sigma = tape.gradient(samples, [mu, sigma])
self.assertIsNotNone(grad_mu)
self.assertIsNotNone(grad_sigma)
@test_util.run_in_graph_and_eager_modes
def testNormalSampleMultiDimensional(self):
with self.test_session():
batch_size = 2
mu = constant_op.constant([[3.0, -3.0]] * batch_size)
sigma = constant_op.constant([[math.sqrt(2.0), math.sqrt(3.0)]] *
batch_size)
mu_v = [3.0, -3.0]
sigma_v = [np.sqrt(2.0), np.sqrt(3.0)]
n = constant_op.constant(100000)
normal = normal_lib.Normal(loc=mu, scale=sigma)
samples = normal.sample(n)
sample_values = self.evaluate(samples)
# Note that the standard error for the sample mean is ~ sigma / sqrt(n).
# The sample variance similarly is dependent on sigma and n.
# Thus, the tolerances below are very sensitive to number of samples
# as well as the variances chosen.
self.assertEqual(samples.get_shape(), (100000, batch_size, 2))
self.assertAllClose(sample_values[:, 0, 0].mean(), mu_v[0], atol=1e-1)
self.assertAllClose(sample_values[:, 0, 0].std(), sigma_v[0], atol=1e-1)
self.assertAllClose(sample_values[:, 0, 1].mean(), mu_v[1], atol=1e-1)
self.assertAllClose(sample_values[:, 0, 1].std(), sigma_v[1], atol=1e-1)
expected_samples_shape = tensor_shape.TensorShape(
[self.evaluate(n)]).concatenate(
tensor_shape.TensorShape(
self.evaluate(normal.batch_shape_tensor())))
self.assertAllEqual(expected_samples_shape, samples.get_shape())
self.assertAllEqual(expected_samples_shape, sample_values.shape)
expected_samples_shape = (
tensor_shape.TensorShape([self.evaluate(n)]).concatenate(
normal.batch_shape))
self.assertAllEqual(expected_samples_shape, samples.get_shape())
self.assertAllEqual(expected_samples_shape, sample_values.shape)
@test_util.run_in_graph_and_eager_modes
def testNegativeSigmaFails(self):
with self.test_session():
with self.assertRaisesOpError("Condition x > 0 did not hold"):
normal = normal_lib.Normal(
loc=[1.], scale=[-5.], validate_args=True, name="G")
self.evaluate(normal.mean())
@test_util.run_in_graph_and_eager_modes
def testNormalShape(self):
with self.test_session():
mu = constant_op.constant([-3.0] * 5)
sigma = constant_op.constant(11.0)
normal = normal_lib.Normal(loc=mu, scale=sigma)
self.assertEqual(self.evaluate(normal.batch_shape_tensor()), [5])
self.assertEqual(normal.batch_shape, tensor_shape.TensorShape([5]))
self.assertAllEqual(self.evaluate(normal.event_shape_tensor()), [])
self.assertEqual(normal.event_shape, tensor_shape.TensorShape([]))
def testNormalShapeWithPlaceholders(self):
mu = array_ops.placeholder(dtype=dtypes.float32)
sigma = array_ops.placeholder(dtype=dtypes.float32)
normal = normal_lib.Normal(loc=mu, scale=sigma)
with self.test_session() as sess:
# With placeholder inputs, batch_shape should be an unknown TensorShape.
self.assertEqual(normal.batch_shape, tensor_shape.TensorShape(None))
self.assertEqual(normal.event_shape, ())
self.assertAllEqual(self.evaluate(normal.event_shape_tensor()), [])
self.assertAllEqual(
sess.run(normal.batch_shape_tensor(),
feed_dict={mu: 5.0,
sigma: [1.0, 2.0]}), [2])
@test_util.run_in_graph_and_eager_modes
def testNormalNormalKL(self):
batch_size = 6
mu_a = np.array([3.0] * batch_size)
sigma_a = np.array([1.0, 2.0, 3.0, 1.5, 2.5, 3.5])
mu_b = np.array([-3.0] * batch_size)
sigma_b = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
n_a = normal_lib.Normal(loc=mu_a, scale=sigma_a)
n_b = normal_lib.Normal(loc=mu_b, scale=sigma_b)
kl = kullback_leibler.kl_divergence(n_a, n_b)
kl_val = self.evaluate(kl)
kl_expected = ((mu_a - mu_b)**2 / (2 * sigma_b**2) + 0.5 * (
(sigma_a**2 / sigma_b**2) - 1 - 2 * np.log(sigma_a / sigma_b)))
self.assertEqual(kl.get_shape(), (batch_size,))
self.assertAllClose(kl_val, kl_expected)
if __name__ == "__main__":
test.main()
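The `testNormalNormalKL` check above compares TensorFlow's `kl_divergence` against a closed-form expression. As a standalone sanity check (a hedged sketch with plain `math`, not part of the test suite), the same formula can be verified on scalar inputs:

```python
import math

def normal_kl(mu_a, sigma_a, mu_b, sigma_b):
    # Closed-form KL(N(mu_a, sigma_a) || N(mu_b, sigma_b)); term-by-term
    # the same expression as kl_expected in testNormalNormalKL above.
    return ((mu_a - mu_b) ** 2 / (2 * sigma_b ** 2)
            + 0.5 * (sigma_a ** 2 / sigma_b ** 2
                     - 1 - 2 * math.log(sigma_a / sigma_b)))

# KL of a distribution with itself is zero, and KL(N(0,1) || N(1,1)) = 0.5.
assert normal_kl(3.0, 2.0, 3.0, 2.0) == 0.0
assert abs(normal_kl(0.0, 1.0, 1.0, 1.0) - 0.5) < 1e-12
```

The two assertions cover the two pieces of the formula: the mean-shift term (`KL(N(0,1) || N(1,1)) = 0.5`) and the variance-ratio term (zero when both distributions coincide).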
# Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# pylint: disable=W0401,W0614
from telemetry.page.actions.all_page_actions import *
from telemetry.page import page as page_module
from telemetry.page import page_set as page_set_module
class DomPage(page_module.Page):
def __init__(self, url, page_set):
super(DomPage, self).__init__(url=url, page_set=page_set)
class DomPageSet(page_set_module.PageSet):
""" DOM page_cycler benchmark """
def __init__(self):
super(DomPageSet, self).__init__(
# pylint: disable=C0301
serving_dirs=set(['../../../../data/page_cycler/dom']))
urls_list = [
'file://../../../../data/page_cycler/dom/HTMLDocument_write/',
'file://../../../../data/page_cycler/dom/Document_getElementById/',
'file://../../../../data/page_cycler/dom/DOMWindow_document/',
'file://../../../../data/page_cycler/dom/DOMWindow_window/',
'file://../../../../data/page_cycler/dom/Element_getAttribute/',
'file://../../../../data/page_cycler/dom/HTMLCollection_length/',
'file://../../../../data/page_cycler/dom/HTMLElement_className/',
'file://../../../../data/page_cycler/dom/HTMLElement_id/',
'file://../../../../data/page_cycler/dom/NodeList_length/'
]
for url in urls_list:
self.AddPage(DomPage(url, self)) | unknown | codeparrot/codeparrot-clean | ||
from datetime import date
from django.conf import settings
from django.utils import six
from django.utils.crypto import constant_time_compare, salted_hmac
from django.utils.http import base36_to_int, int_to_base36
class PasswordResetTokenGenerator(object):
"""
Strategy object used to generate and check tokens for the password
reset mechanism.
"""
def make_token(self, user):
"""
Returns a token that can be used once to do a password reset
for the given user.
"""
return self._make_token_with_timestamp(user, self._num_days(self._today()))
def check_token(self, user, token):
"""
Check that a password reset token is correct for a given user.
"""
# Parse the token
try:
ts_b36, hash = token.split("-")
except ValueError:
return False
try:
ts = base36_to_int(ts_b36)
except ValueError:
return False
# Check that the timestamp/uid has not been tampered with
if not constant_time_compare(self._make_token_with_timestamp(user, ts), token):
return False
# Check the timestamp is within limit
if (self._num_days(self._today()) - ts) > settings.PASSWORD_RESET_TIMEOUT_DAYS:
return False
return True
def _make_token_with_timestamp(self, user, timestamp):
# timestamp is number of days since 2001-1-1. Converted to
# base 36, this gives us a 3 digit string until about 2121
ts_b36 = int_to_base36(timestamp)
# By hashing on the internal state of the user and using state
# that is sure to change (the password salt will change as soon as
# the password is set, at least for current Django auth, and
# last_login will also change), we produce a hash that will be
# invalid as soon as it is used.
# We limit the hash to 20 chars to keep URL short
key_salt = "django.contrib.auth.tokens.PasswordResetTokenGenerator"
# Ensure results are consistent across DB backends
login_timestamp = '' if user.last_login is None else user.last_login.replace(microsecond=0, tzinfo=None)
value = (six.text_type(user.pk) + user.password +
six.text_type(login_timestamp) + six.text_type(timestamp))
hash = salted_hmac(key_salt, value).hexdigest()[::2]
return "%s-%s" % (ts_b36, hash)
def _num_days(self, dt):
return (dt - date(2001, 1, 1)).days
def _today(self):
# Used for mocking in tests
return date.today()
default_token_generator = PasswordResetTokenGenerator()
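The token format above is `<base36 day count>-<truncated keyed hash>`. A minimal self-contained sketch of that scheme (hypothetical names; real Django uses `salted_hmac` over per-user state such as the password hash and last login, not a plain string):

```python
import hashlib
import hmac

DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

def int_to_base36(n):
    # Minimal stand-in for django.utils.http.int_to_base36.
    out = ""
    while True:
        n, r = divmod(n, 36)
        out = DIGITS[r] + out
        if n == 0:
            return out

def make_token(secret, user_state, days):
    # Token = base36 day count + truncated keyed hash over user state,
    # mirroring _make_token_with_timestamp above.
    mac = hmac.new(secret, (user_state + str(days)).encode(), hashlib.sha1)
    return "%s-%s" % (int_to_base36(days), mac.hexdigest()[::2])

def check_token(secret, user_state, token, today_days, max_age_days):
    try:
        ts_b36, _ = token.split("-")
        ts = int(ts_b36, 36)
    except ValueError:
        return False
    # Recompute and compare in constant time, then enforce the age limit.
    if not hmac.compare_digest(make_token(secret, user_state, ts), token):
        return False
    return (today_days - ts) <= max_age_days

token = make_token(b"secret", "pk=1|pwhash", 7300)
assert check_token(b"secret", "pk=1|pwhash", token, 7301, 3)
assert not check_token(b"secret", "pk=1|pwhash", token, 7310, 3)   # expired
assert not check_token(b"secret", "pk=1|CHANGED", token, 7301, 3)  # state changed
```

Because the hash covers the user state, the token self-invalidates once that state changes, exactly the property the comments in `_make_token_with_timestamp` describe.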
#!/usr/bin/env python
#
# Copyright 2015 clowwindy
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import absolute_import, division, print_function, \
with_statement
import string
import struct
import hashlib
__all__ = ['ciphers']
cached_tables = {}
if hasattr(string, 'maketrans'):
maketrans = string.maketrans
translate = string.translate
else:
maketrans = bytes.maketrans
translate = bytes.translate
def get_table(key):
m = hashlib.md5()
m.update(key)
s = m.digest()
a, b = struct.unpack('<QQ', s)
table = maketrans(b'', b'')
table = [table[i: i + 1] for i in range(len(table))]
for i in range(1, 1024):
table.sort(key=lambda x: int(a % (ord(x) + i)))
return table
def init_table(key):
if key not in cached_tables:
encrypt_table = b''.join(get_table(key))
decrypt_table = maketrans(encrypt_table, maketrans(b'', b''))
cached_tables[key] = [encrypt_table, decrypt_table]
return cached_tables[key]
class TableCipher(object):
def __init__(self, cipher_name, key, iv, op):
self._encrypt_table, self._decrypt_table = init_table(key)
self._op = op
def update(self, data):
if self._op:
return translate(data, self._encrypt_table)
else:
return translate(data, self._decrypt_table)
class NoneCipher(object):
def __init__(self, cipher_name, key, iv, op):
pass
def update(self, data):
return data
ciphers = {
'none': (16, 0, NoneCipher),
'table': (16, 0, TableCipher)
}
def test_table_result():
from shadowsocks.common import ord
target1 = [
[60, 53, 84, 138, 217, 94, 88, 23, 39, 242, 219, 35, 12, 157, 165, 181,
255, 143, 83, 247, 162, 16, 31, 209, 190, 171, 115, 65, 38, 41, 21,
245, 236, 46, 121, 62, 166, 233, 44, 154, 153, 145, 230, 49, 128, 216,
173, 29, 241, 119, 64, 229, 194, 103, 131, 110, 26, 197, 218, 59, 204,
56, 27, 34, 141, 221, 149, 239, 192, 195, 24, 155, 170, 183, 11, 254,
213, 37, 137, 226, 75, 203, 55, 19, 72, 248, 22, 129, 33, 175, 178,
10, 198, 71, 77, 36, 113, 167, 48, 2, 117, 140, 142, 66, 199, 232,
243, 32, 123, 54, 51, 82, 57, 177, 87, 251, 150, 196, 133, 5, 253,
130, 8, 184, 14, 152, 231, 3, 186, 159, 76, 89, 228, 205, 156, 96,
163, 146, 18, 91, 132, 85, 80, 109, 172, 176, 105, 13, 50, 235, 127,
0, 189, 95, 98, 136, 250, 200, 108, 179, 211, 214, 106, 168, 78, 79,
74, 210, 30, 73, 201, 151, 208, 114, 101, 174, 92, 52, 120, 240, 15,
169, 220, 182, 81, 224, 43, 185, 40, 99, 180, 17, 212, 158, 42, 90, 9,
191, 45, 6, 25, 4, 222, 67, 126, 1, 116, 124, 206, 69, 61, 7, 68, 97,
202, 63, 244, 20, 28, 58, 93, 134, 104, 144, 227, 147, 102, 118, 135,
148, 47, 238, 86, 112, 122, 70, 107, 215, 100, 139, 223, 225, 164,
237, 111, 125, 207, 160, 187, 246, 234, 161, 188, 193, 249, 252],
[151, 205, 99, 127, 201, 119, 199, 211, 122, 196, 91, 74, 12, 147, 124,
180, 21, 191, 138, 83, 217, 30, 86, 7, 70, 200, 56, 62, 218, 47, 168,
22, 107, 88, 63, 11, 95, 77, 28, 8, 188, 29, 194, 186, 38, 198, 33,
230, 98, 43, 148, 110, 177, 1, 109, 82, 61, 112, 219, 59, 0, 210, 35,
215, 50, 27, 103, 203, 212, 209, 235, 93, 84, 169, 166, 80, 130, 94,
164, 165, 142, 184, 111, 18, 2, 141, 232, 114, 6, 131, 195, 139, 176,
220, 5, 153, 135, 213, 154, 189, 238, 174, 226, 53, 222, 146, 162,
236, 158, 143, 55, 244, 233, 96, 173, 26, 206, 100, 227, 49, 178, 34,
234, 108, 207, 245, 204, 150, 44, 87, 121, 54, 140, 118, 221, 228,
155, 78, 3, 239, 101, 64, 102, 17, 223, 41, 137, 225, 229, 66, 116,
171, 125, 40, 39, 71, 134, 13, 193, 129, 247, 251, 20, 136, 242, 14,
36, 97, 163, 181, 72, 25, 144, 46, 175, 89, 145, 113, 90, 159, 190,
15, 183, 73, 123, 187, 128, 248, 252, 152, 24, 197, 68, 253, 52, 69,
117, 57, 92, 104, 157, 170, 214, 81, 60, 133, 208, 246, 172, 23, 167,
160, 192, 76, 161, 237, 45, 4, 58, 10, 182, 65, 202, 240, 185, 241,
79, 224, 132, 51, 42, 126, 105, 37, 250, 149, 32, 243, 231, 67, 179,
48, 9, 106, 216, 31, 249, 19, 85, 254, 156, 115, 255, 120, 75, 16]]
target2 = [
[124, 30, 170, 247, 27, 127, 224, 59, 13, 22, 196, 76, 72, 154, 32,
209, 4, 2, 131, 62, 101, 51, 230, 9, 166, 11, 99, 80, 208, 112, 36,
248, 81, 102, 130, 88, 218, 38, 168, 15, 241, 228, 167, 117, 158, 41,
10, 180, 194, 50, 204, 243, 246, 251, 29, 198, 219, 210, 195, 21, 54,
91, 203, 221, 70, 57, 183, 17, 147, 49, 133, 65, 77, 55, 202, 122,
162, 169, 188, 200, 190, 125, 63, 244, 96, 31, 107, 106, 74, 143, 116,
148, 78, 46, 1, 137, 150, 110, 181, 56, 95, 139, 58, 3, 231, 66, 165,
142, 242, 43, 192, 157, 89, 175, 109, 220, 128, 0, 178, 42, 255, 20,
214, 185, 83, 160, 253, 7, 23, 92, 111, 153, 26, 226, 33, 176, 144,
18, 216, 212, 28, 151, 71, 206, 222, 182, 8, 174, 205, 201, 152, 240,
155, 108, 223, 104, 239, 98, 164, 211, 184, 34, 193, 14, 114, 187, 40,
254, 12, 67, 93, 217, 6, 94, 16, 19, 82, 86, 245, 24, 197, 134, 132,
138, 229, 121, 5, 235, 238, 85, 47, 103, 113, 179, 69, 250, 45, 135,
156, 25, 61, 75, 44, 146, 189, 84, 207, 172, 119, 53, 123, 186, 120,
171, 68, 227, 145, 136, 100, 90, 48, 79, 159, 149, 39, 213, 236, 126,
52, 60, 225, 199, 105, 73, 233, 252, 118, 215, 35, 115, 64, 37, 97,
129, 161, 177, 87, 237, 141, 173, 191, 163, 140, 234, 232, 249],
[117, 94, 17, 103, 16, 186, 172, 127, 146, 23, 46, 25, 168, 8, 163, 39,
174, 67, 137, 175, 121, 59, 9, 128, 179, 199, 132, 4, 140, 54, 1, 85,
14, 134, 161, 238, 30, 241, 37, 224, 166, 45, 119, 109, 202, 196, 93,
190, 220, 69, 49, 21, 228, 209, 60, 73, 99, 65, 102, 7, 229, 200, 19,
82, 240, 71, 105, 169, 214, 194, 64, 142, 12, 233, 88, 201, 11, 72,
92, 221, 27, 32, 176, 124, 205, 189, 177, 246, 35, 112, 219, 61, 129,
170, 173, 100, 84, 242, 157, 26, 218, 20, 33, 191, 155, 232, 87, 86,
153, 114, 97, 130, 29, 192, 164, 239, 90, 43, 236, 208, 212, 185, 75,
210, 0, 81, 227, 5, 116, 243, 34, 18, 182, 70, 181, 197, 217, 95, 183,
101, 252, 248, 107, 89, 136, 216, 203, 68, 91, 223, 96, 141, 150, 131,
13, 152, 198, 111, 44, 222, 125, 244, 76, 251, 158, 106, 24, 42, 38,
77, 2, 213, 207, 249, 147, 113, 135, 245, 118, 193, 47, 98, 145, 66,
160, 123, 211, 165, 78, 204, 80, 250, 110, 162, 48, 58, 10, 180, 55,
231, 79, 149, 74, 62, 50, 148, 143, 206, 28, 15, 57, 159, 139, 225,
122, 237, 138, 171, 36, 56, 115, 63, 144, 154, 6, 230, 133, 215, 41,
184, 22, 104, 254, 234, 253, 187, 226, 247, 188, 156, 151, 40, 108,
51, 83, 178, 52, 3, 31, 255, 195, 53, 235, 126, 167, 120]]
encrypt_table = b''.join(get_table(b'foobar!'))
decrypt_table = maketrans(encrypt_table, maketrans(b'', b''))
for i in range(0, 256):
assert (target1[0][i] == ord(encrypt_table[i]))
assert (target1[1][i] == ord(decrypt_table[i]))
encrypt_table = b''.join(get_table(b'barfoo!'))
decrypt_table = maketrans(encrypt_table, maketrans(b'', b''))
for i in range(0, 256):
assert (target2[0][i] == ord(encrypt_table[i]))
assert (target2[1][i] == ord(decrypt_table[i]))
def test_encryption():
from shadowsocks.crypto import util
cipher = TableCipher('table', b'test', b'', 1)
decipher = TableCipher('table', b'test', b'', 0)
util.run_cipher(cipher, decipher)
if __name__ == '__main__':
test_table_result()
test_encryption()
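The table cipher above is just a key-derived byte permutation applied with `translate`. A condensed Python 3 sketch of the same construction (same MD5 seed and repeated-sort shuffle as `get_table`, but operating on ints instead of one-byte strings) shows the encrypt/decrypt tables really are inverses:

```python
import hashlib
import struct

def build_tables(key):
    # Derive a 64-bit value from md5(key) and shuffle the 256 byte values
    # with the same repeated-sort trick as get_table above.
    a, _ = struct.unpack('<QQ', hashlib.md5(key).digest())
    table = list(range(256))
    for i in range(1, 1024):
        table.sort(key=lambda x: a % (x + i))
    encrypt = bytes(table)  # translate table: plaintext byte b -> encrypt[b]
    decrypt = bytes.maketrans(encrypt, bytes(range(256)))  # inverse mapping
    return encrypt, decrypt

encrypt, decrypt = build_tables(b'foobar!')
assert encrypt != bytes(range(256))  # the permutation is not the identity
data = b'any payload at all'
assert data.translate(encrypt).translate(decrypt) == data
```

`bytes.maketrans(encrypt, identity)` builds the inverse because it maps `encrypt[i]` back to `i` for every position, which is why `decrypt_table` in `init_table` undoes `encrypt_table`.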
# pandas/tests/io/formats/style/test_to_typst.py (pandas-dev/pandas)
from textwrap import dedent

import pytest

from pandas import (
    DataFrame,
    Series,
)

pytest.importorskip("jinja2")
from pandas.io.formats.style import Styler


@pytest.fixture
def df():
    return DataFrame(
        {"A": [0, 1], "B": [-0.61, -1.22], "C": Series(["ab", "cd"], dtype=object)}
    )


@pytest.fixture
def styler(df):
    return Styler(df, uuid_len=0, precision=2)


def test_basic_table(styler):
    result = styler.to_typst()
    expected = dedent(
        """\
        #table(
          columns: 4,
          [], [A], [B], [C],
          [0], [0], [-0.61], [ab],
          [1], [1], [-1.22], [cd],
        )"""
    )
    assert result == expected


def test_concat(styler):
    result = styler.concat(styler.data.agg(["sum"]).style).to_typst()
    expected = dedent(
        """\
        #table(
          columns: 4,
          [], [A], [B], [C],
          [0], [0], [-0.61], [ab],
          [1], [1], [-1.22], [cd],
          [sum], [1], [-1.830000], [abcd],
        )"""
    )
    assert result == expected


def test_concat_recursion(styler):
    df = styler.data
    styler1 = styler
    styler2 = Styler(df.agg(["sum"]), uuid_len=0, precision=3)
    styler3 = Styler(df.agg(["sum"]), uuid_len=0, precision=4)
    result = styler1.concat(styler2.concat(styler3)).to_typst()
    expected = dedent(
        """\
        #table(
          columns: 4,
          [], [A], [B], [C],
          [0], [0], [-0.61], [ab],
          [1], [1], [-1.22], [cd],
          [sum], [1], [-1.830], [abcd],
          [sum], [1], [-1.8300], [abcd],
        )"""
    )
    assert result == expected


def test_concat_chain(styler):
    df = styler.data
    styler1 = styler
    styler2 = Styler(df.agg(["sum"]), uuid_len=0, precision=3)
    styler3 = Styler(df.agg(["sum"]), uuid_len=0, precision=4)
    result = styler1.concat(styler2).concat(styler3).to_typst()
    expected = dedent(
        """\
        #table(
          columns: 4,
          [], [A], [B], [C],
          [0], [0], [-0.61], [ab],
          [1], [1], [-1.22], [cd],
          [sum], [1], [-1.830], [abcd],
          [sum], [1], [-1.8300], [abcd],
        )"""
    )
    assert result == expected
# -*- coding: utf-8 -*-
# Part of Odoo. See LICENSE file for full copyright and licensing details.

import psycopg2

from odoo.models import BaseModel
from odoo.tests.common import TransactionCase
from odoo.tools import mute_logger
import odoo.osv.expression as expression


class TestExpression(TransactionCase):

    def test_00_in_not_in_m2m(self):
        # Create 4 partners with no category, or one or two categories (out of two categories).
        categories = self.env['res.partner.category']
        cat_a = categories.create({'name': 'test_expression_category_A'})
        cat_b = categories.create({'name': 'test_expression_category_B'})

        partners = self.env['res.partner']
        a = partners.create({'name': 'test_expression_partner_A', 'category_id': [(6, 0, [cat_a.id])]})
        b = partners.create({'name': 'test_expression_partner_B', 'category_id': [(6, 0, [cat_b.id])]})
        ab = partners.create({'name': 'test_expression_partner_AB', 'category_id': [(6, 0, [cat_a.id, cat_b.id])]})
        c = partners.create({'name': 'test_expression_partner_C'})

        # The tests.
        # On a one2many or many2many field, `in` should be read `contains`
        # (and `not in` should be read `doesn't contain`).
        with_a = partners.search([('category_id', 'in', [cat_a.id])])
        self.assertEqual(a + ab, with_a, "Search for category_id in cat_a failed.")
        with_b = partners.search([('category_id', 'in', [cat_b.id])])
        self.assertEqual(b + ab, with_b, "Search for category_id in cat_b failed.")

        # Partners with the category A or the category B.
        with_a_or_b = partners.search([('category_id', 'in', [cat_a.id, cat_b.id])])
        self.assertEqual(a + b + ab, with_a_or_b, "Search for category_id contains cat_a or cat_b failed.")

        # Show that `contains list` is really `contains element or contains element`.
        with_a_or_with_b = partners.search(['|', ('category_id', 'in', [cat_a.id]), ('category_id', 'in', [cat_b.id])])
        self.assertEqual(a + b + ab, with_a_or_with_b, "Search for category_id contains cat_a or contains cat_b failed.")

        # If we change the OR into an AND...
        with_a_and_b = partners.search([('category_id', 'in', [cat_a.id]), ('category_id', 'in', [cat_b.id])])
        self.assertEqual(ab, with_a_and_b, "Search for category_id contains cat_a and cat_b failed.")

        # Partners without category A and without category B.
        without_a_or_b = partners.search([('category_id', 'not in', [cat_a.id, cat_b.id])])
        self.assertFalse(without_a_or_b & (a + b + ab), "Search for category_id doesn't contain cat_a or cat_b failed (1).")
        self.assertTrue(c in without_a_or_b, "Search for category_id doesn't contain cat_a or cat_b failed (2).")

        # Show that `doesn't contain list` is really `doesn't contain element and doesn't contain element`.
        without_a_and_without_b = partners.search([('category_id', 'not in', [cat_a.id]), ('category_id', 'not in', [cat_b.id])])
        self.assertFalse(without_a_and_without_b & (a + b + ab), "Search for category_id doesn't contain cat_a and cat_b failed (1).")
        self.assertTrue(c in without_a_and_without_b, "Search for category_id doesn't contain cat_a and cat_b failed (2).")

        # We can exclude any partner containing the category A.
        without_a = partners.search([('category_id', 'not in', [cat_a.id])])
        self.assertTrue(a not in without_a, "Search for category_id doesn't contain cat_a failed (1).")
        self.assertTrue(ab not in without_a, "Search for category_id doesn't contain cat_a failed (2).")
        self.assertLessEqual(b + c, without_a, "Search for category_id doesn't contain cat_a failed (3).")

        # (Obviously we can do the same for category B.)
        without_b = partners.search([('category_id', 'not in', [cat_b.id])])
        self.assertTrue(b not in without_b, "Search for category_id doesn't contain cat_b failed (1).")
        self.assertTrue(ab not in without_b, "Search for category_id doesn't contain cat_b failed (2).")
        self.assertLessEqual(a + c, without_b, "Search for category_id doesn't contain cat_b failed (3).")
    def test_05_not_str_m2m(self):
        partners = self.env['res.partner']
        categories = self.env['res.partner.category']

        cids = {}
        for name in 'A B AB'.split():
            cids[name] = categories.create({'name': name}).id

        partners_config = {
            '0': [],
            'a': [cids['A']],
            'b': [cids['B']],
            'ab': [cids['AB']],
            'a b': [cids['A'], cids['B']],
            'b ab': [cids['B'], cids['AB']],
        }
        pids = {}
        for name, cat_ids in partners_config.iteritems():
            pids[name] = partners.create({'name': name, 'category_id': [(6, 0, cat_ids)]}).id

        base_domain = [('id', 'in', pids.values())]

        def test(op, value, expected):
            found_ids = partners.search(base_domain + [('category_id', op, value)]).ids
            expected_ids = [pids[name] for name in expected]
            self.assertItemsEqual(found_ids, expected_ids, '%s %r should return %r' % (op, value, expected))

        test('=', 'A', ['a', 'a b'])
        test('!=', 'B', ['0', 'a', 'ab'])
        test('like', 'A', ['a', 'ab', 'a b', 'b ab'])
        test('not ilike', 'B', ['0', 'a'])
        test('not like', 'AB', ['0', 'a', 'b', 'a b'])
    def test_10_hierarchy_in_m2m(self):
        Partner = self.env['res.partner']
        Category = self.env['res.partner.category']

        # search through m2m relation
        partners = Partner.search([('category_id', 'child_of', self.ref('base.res_partner_category_0'))])
        self.assertTrue(partners)

        # setup test partner categories
        categ_root = Category.create({'name': 'Root category'})
        categ_0 = Category.create({'name': 'Parent category', 'parent_id': categ_root.id})
        categ_1 = Category.create({'name': 'Child1', 'parent_id': categ_0.id})

        # test hierarchical search in m2m with child id (list of ids)
        cats = Category.search([('id', 'child_of', categ_root.ids)])
        self.assertEqual(len(cats), 3)
        # test hierarchical search in m2m with child id (single id)
        cats = Category.search([('id', 'child_of', categ_root.id)])
        self.assertEqual(len(cats), 3)
        # test hierarchical search in m2m with child ids
        cats = Category.search([('id', 'child_of', (categ_0 + categ_1).ids)])
        self.assertEqual(len(cats), 2)
        # test hierarchical search in m2m with child ids
        cats = Category.search([('id', 'child_of', categ_0.ids)])
        self.assertEqual(len(cats), 2)
        # test hierarchical search in m2m with child ids
        cats = Category.search([('id', 'child_of', categ_1.ids)])
        self.assertEqual(len(cats), 1)

        # test hierarchical search in m2m with parent id (list of ids)
        cats = Category.search([('id', 'parent_of', categ_1.ids)])
        self.assertEqual(len(cats), 3)
        # test hierarchical search in m2m with parent id (single id)
        cats = Category.search([('id', 'parent_of', categ_1.id)])
        self.assertEqual(len(cats), 3)
        # test hierarchical search in m2m with parent ids
        cats = Category.search([('id', 'parent_of', (categ_root + categ_0).ids)])
        self.assertEqual(len(cats), 2)
        # test hierarchical search in m2m with parent ids
        cats = Category.search([('id', 'parent_of', categ_0.ids)])
        self.assertEqual(len(cats), 2)
        # test hierarchical search in m2m with parent ids
        cats = Category.search([('id', 'parent_of', categ_root.ids)])
        self.assertEqual(len(cats), 1)
    def test_10_equivalent_id(self):
        # equivalent queries
        Currency = self.env['res.currency']
        non_currency_id = max(Currency.search([]).ids) + 1003
        res_0 = Currency.search([])
        res_1 = Currency.search([('name', 'not like', 'probably_unexisting_name')])
        self.assertEqual(res_0, res_1)
        res_2 = Currency.search([('id', 'not in', [non_currency_id])])
        self.assertEqual(res_0, res_2)
        res_3 = Currency.search([('id', 'not in', [])])
        self.assertEqual(res_0, res_3)
        res_4 = Currency.search([('id', '!=', False)])
        self.assertEqual(res_0, res_4)

        # equivalent queries, integer and string
        Partner = self.env['res.partner']
        all_partners = Partner.search([])
        self.assertTrue(len(all_partners) > 1)
        one = all_partners[0]
        others = all_partners[1:]

        res_1 = Partner.search([('id', '=', one.id)])
        self.assertEqual(one, res_1)
        # Partner.search([('id', '!=', others)])  # not permitted
        res_2 = Partner.search([('id', 'not in', others.ids)])
        self.assertEqual(one, res_2)
        res_3 = Partner.search(['!', ('id', '!=', one.id)])
        self.assertEqual(one, res_3)
        res_4 = Partner.search(['!', ('id', 'in', others.ids)])
        self.assertEqual(one, res_4)
        # res_5 = Partner.search([('id', 'in', one)])  # TODO make it permitted, just like for child_of
        # self.assertEqual(one, res_5)
        res_6 = Partner.search([('id', 'in', [one.id])])
        self.assertEqual(one, res_6)
        res_7 = Partner.search([('name', '=', one.name)])
        self.assertEqual(one, res_7)
        res_8 = Partner.search([('name', 'in', [one.name])])
        # res_9 = Partner.search([('name', 'in', one.name)])  # TODO
    def test_15_m2o(self):
        Partner = self.env['res.partner']

        # testing equality with name
        partners = Partner.search([('parent_id', '=', 'Agrolait')])
        self.assertTrue(partners)

        # testing the in operator with name
        partners = Partner.search([('parent_id', 'in', 'Agrolait')])
        self.assertTrue(partners)

        # testing the in operator with a list of names
        partners = Partner.search([('parent_id', 'in', ['Agrolait', 'ASUStek'])])
        self.assertTrue(partners)

        # check if many2one works with empty search list
        partners = Partner.search([('company_id', 'in', [])])
        self.assertFalse(partners)

        # create new company with partners, and partners with no company
        company2 = self.env['res.company'].create({'name': 'Acme 2'})
        for i in xrange(4):
            Partner.create({'name': 'P of Acme %s' % i, 'company_id': company2.id})
        for i in xrange(4):
            Partner.create({'name': 'P of All %s' % i, 'company_id': False})

        # check if many2one works with negative empty list
        all_partners = Partner.search([])
        res_partners = Partner.search(['|', ('company_id', 'not in', []), ('company_id', '=', False)])
        self.assertEqual(all_partners, res_partners, "not in [] fails")

        # check that many2one will pick the correct records with a list
        partners = Partner.search([('company_id', 'in', [False])])
        self.assertTrue(len(partners) >= 4, "We should have at least 4 partners with no company")

        # check that many2one will exclude the correct records with a list
        partners = Partner.search([('company_id', 'not in', [1])])
        self.assertTrue(len(partners) >= 4, "We should have at least 4 partners not related to company #1")

        # check that many2one will exclude the correct records with a list and False
        partners = Partner.search(['|', ('company_id', 'not in', [1]),
                                        ('company_id', '=', False)])
        self.assertTrue(len(partners) >= 8, "We should have at least 8 partners not related to company #1")

        # check that multi-level expressions also work
        partners = Partner.search([('company_id.partner_id', 'in', [])])
        self.assertFalse(partners)

        # check that multi-level expressions with negative op work
        all_partners = Partner.search([('company_id', '!=', False)])
        res_partners = Partner.search([('company_id.partner_id', 'not in', [])])
        self.assertEqual(all_partners, res_partners, "not in [] fails")
        # Test the '(not) like/in' behavior. res.partner and its parent_id
        # column are used because parent_id is a many2one, allowing to test the
        # Null value, and there are actually some null and non-null values in
        # the demo data.
        all_partners = Partner.search([])
        non_partner_id = max(all_partners.ids) + 1

        with_parent = all_partners.filtered(lambda p: p.parent_id)
        without_parent = all_partners.filtered(lambda p: not p.parent_id)
        with_website = all_partners.filtered(lambda p: p.website)

        # We treat null values differently than in SQL. For instance in SQL:
        #   SELECT id FROM res_partner WHERE parent_id NOT IN (0)
        # will return only the records with non-null parent_id.
        #   SELECT id FROM res_partner WHERE parent_id IN (0)
        # will, as expected, return nothing (our ids always begin at 1).
        # This means the union of those two results will give only some
        # records, but not all present in database.
        #
        # When using domains and the ORM's search method, we think it is
        # more intuitive that the union returns all the records, and that
        # a domain like ('parent_id', 'not in', [0]) will return all
        # the records. For instance, if you perform a search for the companies
        # that don't have OpenERP as a parent company, you expect to find,
        # among others, the companies that don't have a parent company.
        #
        # Likewise, it is intuitive that existing values be treated similarly
        # if we simply check that some existing value belongs to them.
        res_0 = Partner.search([('parent_id', 'not like', 'probably_unexisting_name')])  # get all rows, included null parent_id
        self.assertEqual(res_0, all_partners)
        res_1 = Partner.search([('parent_id', 'not in', [non_partner_id])])  # get all rows, included null parent_id
        self.assertEqual(res_1, all_partners)
        res_2 = Partner.search([('parent_id', '!=', False)])  # get rows with not null parent_id, deprecated syntax
        self.assertEqual(res_2, with_parent)
        res_3 = Partner.search([('parent_id', 'not in', [])])  # get all rows, included null parent_id
        self.assertEqual(res_3, all_partners)
        res_4 = Partner.search([('parent_id', 'not in', [False])])  # get rows with not null parent_id
        self.assertEqual(res_4, with_parent)
        res_4b = Partner.search([('parent_id', 'not ilike', '')])  # get only rows without parent
        self.assertEqual(res_4b, without_parent)

        # The results of these queries, when combined with queries 0..4, must
        # give the whole set of ids.
        res_5 = Partner.search([('parent_id', 'like', 'probably_unexisting_name')])
        self.assertFalse(res_5)
        res_6 = Partner.search([('parent_id', 'in', [non_partner_id])])
        self.assertFalse(res_6)
        res_7 = Partner.search([('parent_id', '=', False)])
        self.assertEqual(res_7, without_parent)
        res_8 = Partner.search([('parent_id', 'in', [])])
        self.assertFalse(res_8)
        res_9 = Partner.search([('parent_id', 'in', [False])])
        self.assertEqual(res_9, without_parent)
        res_9b = Partner.search([('parent_id', 'ilike', '')])  # get those with a parent
        self.assertEqual(res_9b, with_parent)

        # These queries must return exactly the same results as queries 0..4,
        # i.e. not ... in ... must be the same as ... not in ... .
        res_10 = Partner.search(['!', ('parent_id', 'like', 'probably_unexisting_name')])
        self.assertEqual(res_0, res_10)
        res_11 = Partner.search(['!', ('parent_id', 'in', [non_partner_id])])
        self.assertEqual(res_1, res_11)
        res_12 = Partner.search(['!', ('parent_id', '=', False)])
        self.assertEqual(res_2, res_12)
        res_13 = Partner.search(['!', ('parent_id', 'in', [])])
        self.assertEqual(res_3, res_13)
        res_14 = Partner.search(['!', ('parent_id', 'in', [False])])
        self.assertEqual(res_4, res_14)

        # Testing the many2one field is not enough, a regular char field is tested too
        res_15 = Partner.search([('website', 'in', [])])
        self.assertFalse(res_15)
        res_16 = Partner.search([('website', 'not in', [])])
        self.assertEqual(res_16, all_partners)
        res_17 = Partner.search([('website', '!=', False)])
        self.assertEqual(res_17, with_website)

        # check behavior for required many2one fields: currency_id is required
        companies = self.env['res.company'].search([])
        res_101 = companies.search([('currency_id', 'not ilike', '')])  # get no companies
        self.assertFalse(res_101)
        res_102 = companies.search([('currency_id', 'ilike', '')])  # get all companies
        self.assertEqual(res_102, companies)
    def test_in_operator(self):
        """ check that we can use the 'in' operator for plain fields """
        menus = self.env['ir.ui.menu'].search([('sequence', 'in', [1, 2, 10, 20])])
        self.assertTrue(menus)

    def test_15_o2m(self):
        Partner = self.env['res.partner']

        # test one2many operator with empty search list
        partners = Partner.search([('child_ids', 'in', [])])
        self.assertFalse(partners)

        # test one2many operator with False
        partners = Partner.search([('child_ids', '=', False)])
        for partner in partners:
            self.assertFalse(partner.child_ids)

        # verify domain evaluation for one2many != False and one2many == False
        categories = self.env['res.partner.category'].search([])
        parents = categories.search([('child_ids', '!=', False)])
        self.assertEqual(parents, categories.filtered(lambda c: c.child_ids))
        leafs = categories.search([('child_ids', '=', False)])
        self.assertEqual(leafs, categories.filtered(lambda c: not c.child_ids))

        # test many2many operator with empty search list
        partners = Partner.search([('category_id', 'in', [])])
        self.assertFalse(partners)

        # test many2many operator with False
        partners = Partner.search([('category_id', '=', False)])
        for partner in partners:
            self.assertFalse(partner.category_id)

        # filtering on nonexistent value across x2many should return nothing
        partners = Partner.search([('child_ids.city', '=', 'foo')])
        self.assertFalse(partners)
    def test_15_equivalent_one2many_1(self):
        Company = self.env['res.company']
        company3 = Company.create({'name': 'Acme 3'})
        company4 = Company.create({'name': 'Acme 4', 'parent_id': company3.id})

        # one2many towards same model
        res_1 = Company.search([('child_ids', 'in', company3.child_ids.ids)])  # any company having a child of company3 as child
        self.assertEqual(res_1, company3)
        res_2 = Company.search([('child_ids', 'in', company3.child_ids[0].ids)])  # any company having the first child of company3 as child
        self.assertEqual(res_2, company3)

        # child_of x returns x and its children (direct or not).
        expected = company3 + company4
        res_1 = Company.search([('id', 'child_of', [company3.id])])
        self.assertEqual(res_1, expected)
        res_2 = Company.search([('id', 'child_of', company3.id)])
        self.assertEqual(res_2, expected)
        res_3 = Company.search([('id', 'child_of', [company3.name])])
        self.assertEqual(res_3, expected)
        res_4 = Company.search([('id', 'child_of', company3.name)])
        self.assertEqual(res_4, expected)

        # parent_of x returns x and its parents (direct or not).
        expected = company3 + company4
        res_1 = Company.search([('id', 'parent_of', [company4.id])])
        self.assertEqual(res_1, expected)
        res_2 = Company.search([('id', 'parent_of', company4.id)])
        self.assertEqual(res_2, expected)
        res_3 = Company.search([('id', 'parent_of', [company4.name])])
        self.assertEqual(res_3, expected)
        res_4 = Company.search([('id', 'parent_of', company4.name)])
        self.assertEqual(res_4, expected)

        # try testing real subsets with IN/NOT IN
        Partner = self.env['res.partner']
        Users = self.env['res.users']
        p1, _ = Partner.name_create("Dédé Boitaclou")
        p2, _ = Partner.name_create("Raoulette Pizza O'poil")
        u1a = Users.create({'login': 'dbo', 'partner_id': p1}).id
        u1b = Users.create({'login': 'dbo2', 'partner_id': p1}).id
        u2 = Users.create({'login': 'rpo', 'partner_id': p2}).id
        self.assertEqual([p1], Partner.search([('user_ids', 'in', u1a)]).ids, "o2m IN accepts a single int on the right side")
        self.assertEqual([p1], Partner.search([('user_ids', '=', 'Dédé Boitaclou')]).ids, "o2m = matches by display name on the right side")
        self.assertEqual([], Partner.search([('user_ids', 'in', [10000])]).ids, "o2m IN matches none on the right side")
        self.assertEqual([p1, p2], Partner.search([('user_ids', 'in', [u1a, u2])]).ids, "o2m IN matches any on the right side")
        all_ids = Partner.search([]).ids
        self.assertEqual(set(all_ids) - set([p1]), set(Partner.search([('user_ids', 'not in', u1a)]).ids), "o2m NOT IN matches none on the right side")
        self.assertEqual(set(all_ids) - set([p1]), set(Partner.search([('user_ids', '!=', 'Dédé Boitaclou')]).ids), "o2m != excludes by display name on the right side")
        self.assertEqual(set(all_ids) - set([p1, p2]), set(Partner.search([('user_ids', 'not in', [u1b, u2])]).ids), "o2m NOT IN matches none on the right side")
    def test_15_equivalent_one2many_2(self):
        Currency = self.env['res.currency']
        CurrencyRate = self.env['res.currency.rate']

        # create a currency and a currency rate
        currency = Currency.create({'name': 'ZZZ', 'symbol': 'ZZZ', 'rounding': 1.0})
        currency_rate = CurrencyRate.create({'name': '2010-01-01', 'currency_id': currency.id, 'rate': 1.0})
        non_currency_id = currency_rate.id + 1000
        default_currency = Currency.browse(1)

        # search the currency via its rates one2many (the one2many must point back at the currency)
        currency_rate1 = CurrencyRate.search([('name', 'not like', 'probably_unexisting_name')])
        currency_rate2 = CurrencyRate.search([('id', 'not in', [non_currency_id])])
        self.assertEqual(currency_rate1, currency_rate2)
        currency_rate3 = CurrencyRate.search([('id', 'not in', [])])
        self.assertEqual(currency_rate1, currency_rate3)

        # one2many towards another model
        res_3 = Currency.search([('rate_ids', 'in', default_currency.rate_ids.ids)])  # currencies having a rate of main currency
        self.assertEqual(res_3, default_currency)
        res_4 = Currency.search([('rate_ids', 'in', default_currency.rate_ids[0].ids)])  # currencies having the first rate of main currency
        self.assertEqual(res_4, default_currency)
        res_5 = Currency.search([('rate_ids', 'in', default_currency.rate_ids[0].id)])  # currencies having the first rate of main currency
        self.assertEqual(res_5, default_currency)
        # res_6 = Currency.search([('rate_ids', 'in', [default_currency.rate_ids[0].name])])
        # res_7 = Currency.search([('rate_ids', '=', default_currency.rate_ids[0].name)])
        # res_8 = Currency.search([('rate_ids', 'like', default_currency.rate_ids[0].name)])
        res_9 = Currency.search([('rate_ids', 'like', 'probably_unexisting_name')])
        self.assertFalse(res_9)
        # Currency.search([('rate_ids', 'unexisting_op', 'probably_unexisting_name')])  # TODO expected exception

        # get the currencies referenced by some currency rates using a weird negative domain
        res_10 = Currency.search([('rate_ids', 'not like', 'probably_unexisting_name')])
        res_11 = Currency.search([('rate_ids', 'not in', [non_currency_id])])
        self.assertEqual(res_10, res_11)
        res_12 = Currency.search([('rate_ids', '!=', False)])
        self.assertEqual(res_10, res_12)
        res_13 = Currency.search([('rate_ids', 'not in', [])])
        self.assertEqual(res_10, res_13)
    def test_20_expression_parse(self):
        # TDE note: those tests have been added when refactoring the expression.parse() method.
        # They come in addition to the already existing tests; maybe some tests
        # will be a bit redundant.
        Users = self.env['res.users']

        # Create users
        a = Users.create({'name': 'test_A', 'login': 'test_A'})
        b1 = Users.create({'name': 'test_B', 'login': 'test_B'})
        b2 = Users.create({'name': 'test_B2', 'login': 'test_B2', 'parent_id': b1.partner_id.id})

        # Test1: simple inheritance
        users = Users.search([('name', 'like', 'test')])
        self.assertEqual(users, a + b1 + b2, 'searching through inheritance failed')
        users = Users.search([('name', '=', 'test_B')])
        self.assertEqual(users, b1, 'searching through inheritance failed')

        # Test2: inheritance + relational fields
        users = Users.search([('child_ids.name', 'like', 'test_B')])
        self.assertEqual(users, b1, 'searching through inheritance failed')

        # Special =? operator means "is equal if right is set, otherwise always True"
        users = Users.search([('name', 'like', 'test'), ('parent_id', '=?', False)])
        self.assertEqual(users, a + b1 + b2, '(x =? False) failed')
        users = Users.search([('name', 'like', 'test'), ('parent_id', '=?', b1.partner_id.id)])
        self.assertEqual(users, b2, '(x =? id) failed')
    def test_30_normalize_domain(self):
        norm_domain = domain = ['&', (1, '=', 1), ('a', '=', 'b')]
        self.assertEqual(norm_domain, expression.normalize_domain(domain), "Normalized domains should be left untouched")

        domain = [('x', 'in', ['y', 'z']), ('a.v', '=', 'e'), '|', '|', ('a', '=', 'b'), '!', ('c', '>', 'd'), ('e', '!=', 'f'), ('g', '=', 'h')]
        norm_domain = ['&', '&', '&'] + domain
        self.assertEqual(norm_domain, expression.normalize_domain(domain), "Non-normalized domains should be properly normalized")
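The normalization being tested makes the implicit top-level ANDs of a prefix-notation domain explicit, so the result parses as a single expression. A simplified standalone sketch of that algorithm (illustrative only, not Odoo's exact `expression.normalize_domain`):

```python
def normalize_domain(domain):
    # A domain is a prefix-notation expression: '&' and '|' each consume two
    # sub-expressions, '!' consumes one, and a (field, operator, value) leaf
    # counts as one sub-expression. A normalized domain is a single
    # expression, so we insert '&' for every implicit top-level AND.
    if not domain:
        return [(1, '=', 1)]
    result = []
    expected = 1  # number of sub-expressions still expected
    for token in domain:
        if expected == 0:
            # a new top-level term starts: make the implicit AND explicit
            result.insert(0, '&')
            expected = 1
        if isinstance(token, (list, tuple)):
            expected -= 1           # a leaf satisfies one expectation
        elif token in ('&', '|'):
            expected += 1           # consumes two, produces one: net +1
        # '!' consumes one and produces one: expected is unchanged
        result.append(token)
    return result
```

With two bare leaves this yields `['&', leaf1, leaf2]`, matching the behavior exercised by `test_30_normalize_domain` above.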
    def test_40_negating_long_expression(self):
        source = ['!', '&', ('user_id', '=', 4), ('partner_id', 'in', [1, 2])]
        expect = ['|', ('user_id', '!=', 4), ('partner_id', 'not in', [1, 2])]
        self.assertEqual(expression.distribute_not(source), expect,
                         "distribute_not on expression applied wrongly")

        pos_leaves = [[('a', 'in', [])], [('d', '!=', 3)]]
        neg_leaves = [[('a', 'not in', [])], [('d', '=', 3)]]

        source = expression.OR([expression.AND(pos_leaves)] * 1000)
        expect = source
        self.assertEqual(expression.distribute_not(source), expect,
                         "distribute_not on long expression without negation operator should not alter it")

        source = ['!'] + source
        expect = expression.AND([expression.OR(neg_leaves)] * 1000)
        self.assertEqual(expression.distribute_not(source), expect,
                         "distribute_not on long expression applied wrongly")
    def test_accent(self):
        if not self.registry.has_unaccent:
            return
        Company = self.env['res.company']
        helene = Company.create({'name': u'Hélène'})
        self.assertEqual(helene, Company.search([('name', 'ilike', 'Helene')]))
        self.assertEqual(helene, Company.search([('name', 'ilike', 'hélène')]))
        self.assertNotIn(helene, Company.search([('name', 'not ilike', 'Helene')]))
        self.assertNotIn(helene, Company.search([('name', 'not ilike', 'hélène')]))

    def test_like_wildcards(self):
        # check that =like/=ilike expressions are working on an untranslated field
        Partner = self.env['res.partner']
        partners = Partner.search([('name', '=like', 'A_U_TeK')])
        self.assertTrue(len(partners) == 1, "Must match one partner (ASUSTeK)")
        partners = Partner.search([('name', '=ilike', 'c%')])
        self.assertTrue(len(partners) >= 1, "Must match one partner (China Export)")

        # check that =like/=ilike expressions are working on a translated field
        Country = self.env['res.country']
        countries = Country.search([('name', '=like', 'Ind__')])
        self.assertTrue(len(countries) == 1, "Must match India only")
        countries = Country.search([('name', '=ilike', 'z%')])
        self.assertTrue(len(countries) == 3, "Must match only countries with names starting with Z (currently 3)")

    def test_translate_search(self):
        Country = self.env['res.country']
        belgium = self.env.ref('base.be')
        domains = [
            [('name', '=', 'Belgium')],
            [('name', 'ilike', 'Belgi')],
            [('name', 'in', ['Belgium', 'Care Bears'])],
        ]
        for domain in domains:
            countries = Country.search(domain)
            self.assertEqual(countries, belgium)
    def test_long_table_alias(self):
        # To test the 64 characters limit for table aliases in PostgreSQL
        self.patch_order('res.users', 'partner_id')
        self.patch_order('res.partner', 'commercial_partner_id,company_id,name')
        self.patch_order('res.company', 'parent_id')
        self.env['res.users'].search([('name', '=', 'test')])

    @mute_logger('odoo.sql_db')
    def test_invalid(self):
        """ verify that invalid expressions are refused, even for magic fields """
        Country = self.env['res.country']

        with self.assertRaises(ValueError):
            Country.search([('does_not_exist', '=', 'foo')])

        with self.assertRaises(ValueError):
            Country.search([('create_date', '>>', 'foo')])

        with self.assertRaises(psycopg2.DataError):
            Country.search([('create_date', '=', "1970-01-01'); --")])

    def test_active(self):
        # testing for many2many field with category vendor and active=False
        Partner = self.env['res.partner']
        vals = {
            'name': 'OpenERP Test',
            'active': False,
            'category_id': [(6, 0, [self.ref("base.res_partner_category_1")])],
            'child_ids': [(0, 0, {'name': 'address of OpenERP Test', 'country_id': self.ref("base.be")})],
        }
        Partner.create(vals)
        partner = Partner.search([('category_id', 'ilike', 'vendor'), ('active', '=', False)])
        self.assertTrue(partner, "Record not Found with category vendor and active False.")

        # testing for one2many field with country Belgium and active=False
        partner = Partner.search([('child_ids.country_id', '=', 'Belgium'), ('active', '=', False)])
        self.assertTrue(partner, "Record not Found with country Belgium and active False.")

    def test_lp1071710(self):
        """ Check that we can exclude translated fields (bug lp:1071710) """
        # first install french language
        self.env['ir.translation'].load_module_terms(['base'], ['fr_FR'])
        # actual test
        Country = self.env['res.country']
        be = self.env.ref('base.be')
        not_be = Country.with_context(lang='fr_FR').search([('name', '!=', 'Belgique')])
        self.assertNotIn(be, not_be)

        # indirect search via m2o
        Partner = self.env['res.partner']
        agrolait = Partner.search([('name', '=', 'Agrolait')])
        not_be = Partner.search([('country_id', '!=', 'Belgium')])
        self.assertNotIn(agrolait, not_be)

        not_be = Partner.with_context(lang='fr_FR').search([('country_id', '!=', 'Belgique')])
        self.assertNotIn(agrolait, not_be)

class TestAutoJoin(TransactionCase):

    def setUp(self):
        super(TestAutoJoin, self).setUp()

        # Mock BaseModel._where_calc(), to be able to proceed to some tests about generated expression
        self._reinit_mock()
        BaseModel_where_calc = BaseModel._where_calc

        def _where_calc(model, *args, **kwargs):
            """ Mock `_where_calc` to be able to test its results. Store them
                into some internal variable for later processing. """
            query = BaseModel_where_calc(model, *args, **kwargs)
            self.query_list.append(query)
            return query

        self.patch(BaseModel, '_where_calc', _where_calc)

    def _reinit_mock(self):
        self.query_list = []
    def test_auto_join(self):
        unaccent = expression.get_unaccent_wrapper(self.cr)

        # Get models
        partner_obj = self.env['res.partner']
        state_obj = self.env['res.country.state']
        bank_obj = self.env['res.partner.bank']

        # Get test columns
        def patch_auto_join(model, fname, value):
            self.patch(model._fields[fname], 'auto_join', value)

        def patch_domain(model, fname, value):
            self.patch(model._fields[fname], 'domain', value)

        # Get country/state data
        country_us = self.env['res.country'].search([('code', 'like', 'US')], limit=1)
        states = self.env['res.country.state'].search([('country_id', '=', country_us.id)], limit=2)

        # Create demo data: partners and bank object
        p_a = partner_obj.create({'name': 'test__A', 'state_id': states[0].id})
        p_b = partner_obj.create({'name': 'test__B', 'state_id': states[1].id})
        p_aa = partner_obj.create({'name': 'test__AA', 'parent_id': p_a.id, 'state_id': states[0].id})
        p_ab = partner_obj.create({'name': 'test__AB', 'parent_id': p_a.id, 'state_id': states[1].id})
        p_ba = partner_obj.create({'name': 'test__BA', 'parent_id': p_b.id, 'state_id': states[0].id})
        b_aa = bank_obj.create({'acc_number': '123', 'acc_type': 'bank', 'partner_id': p_aa.id})
        b_ab = bank_obj.create({'acc_number': '456', 'acc_type': 'bank', 'partner_id': p_ab.id})
        b_ba = bank_obj.create({'acc_number': '789', 'acc_type': 'bank', 'partner_id': p_ba.id})

        # --------------------------------------------------
        # Test1: basics about the attribute
        # --------------------------------------------------
        patch_auto_join(partner_obj, 'category_id', True)
        with self.assertRaises(NotImplementedError):
            partner_obj.search([('category_id.name', '=', 'foo')])

        # --------------------------------------------------
        # Test2: one2many
        # --------------------------------------------------
        name_test = '12'

        # Do: one2many without _auto_join
        self._reinit_mock()
        partners = partner_obj.search([('bank_ids.sanitized_acc_number', 'like', name_test)])
        # Test result
        self.assertEqual(partners, p_aa,
                         "_auto_join off: ('bank_ids.sanitized_acc_number', 'like', '..'): incorrect result")
        # Test produced queries
        self.assertEqual(len(self.query_list), 3,
                         "_auto_join off: ('bank_ids.sanitized_acc_number', 'like', '..') should produce 3 queries (1 in res_partner_bank, 2 on res_partner)")
        sql_query = self.query_list[0].get_sql()
        self.assertIn('res_partner_bank', sql_query[0],
                      "_auto_join off: ('bank_ids.sanitized_acc_number', 'like', '..') first query incorrect main table")
        expected = "%s::text like %s" % (unaccent('"res_partner_bank"."sanitized_acc_number"'), unaccent('%s'))
        self.assertIn(expected, sql_query[1],
                      "_auto_join off: ('bank_ids.sanitized_acc_number', 'like', '..') first query incorrect where condition")
        self.assertEqual(['%' + name_test + '%'], sql_query[2],
                         "_auto_join off: ('bank_ids.sanitized_acc_number', 'like', '..') first query incorrect parameter")
        sql_query = self.query_list[2].get_sql()
        self.assertIn('res_partner', sql_query[0],
                      "_auto_join off: ('bank_ids.sanitized_acc_number', 'like', '..') third query incorrect main table")
        self.assertIn('"res_partner"."id" in (%s)', sql_query[1],
                      "_auto_join off: ('bank_ids.sanitized_acc_number', 'like', '..') third query incorrect where condition")
        self.assertIn(p_aa.id, sql_query[2],
                      "_auto_join off: ('bank_ids.sanitized_acc_number', 'like', '..') third query incorrect parameter")

        # Do: cascaded one2many without _auto_join
        self._reinit_mock()
        partners = partner_obj.search([('child_ids.bank_ids.id', 'in', [b_aa.id, b_ba.id])])
        # Test result
        self.assertEqual(partners, p_a + p_b,
                         "_auto_join off: ('child_ids.bank_ids.id', 'in', [..]): incorrect result")
        # Test produced queries
        self.assertEqual(len(self.query_list), 5,
                         "_auto_join off: ('child_ids.bank_ids.id', 'in', [..]) should produce 5 queries (1 in res_partner_bank, 4 on res_partner)")

        # Do: one2many with _auto_join
        patch_auto_join(partner_obj, 'bank_ids', True)
        self._reinit_mock()
        partners = partner_obj.search([('bank_ids.sanitized_acc_number', 'like', name_test)])
        # Test result
        self.assertEqual(partners, p_aa,
                         "_auto_join on: ('bank_ids.sanitized_acc_number', 'like', '..') incorrect result")
        # Test produced queries
        self.assertEqual(len(self.query_list), 1,
                         "_auto_join on: ('bank_ids.sanitized_acc_number', 'like', '..') should produce 1 query")
        sql_query = self.query_list[0].get_sql()
        self.assertIn('"res_partner"', sql_query[0],
                      "_auto_join on: ('bank_ids.sanitized_acc_number', 'like', '..') query incorrect main table")
        self.assertIn('"res_partner_bank" as "res_partner__bank_ids"', sql_query[0],
                      "_auto_join on: ('bank_ids.sanitized_acc_number', 'like', '..') query incorrect join")
        expected = "%s::text like %s" % (unaccent('"res_partner__bank_ids"."sanitized_acc_number"'), unaccent('%s'))
        self.assertIn(expected, sql_query[1],
                      "_auto_join on: ('bank_ids.sanitized_acc_number', 'like', '..') query incorrect where condition")
        self.assertIn('"res_partner"."id"="res_partner__bank_ids"."partner_id"', sql_query[1],
                      "_auto_join on: ('bank_ids.sanitized_acc_number', 'like', '..') query incorrect join condition")
        self.assertIn('%' + name_test + '%', sql_query[2],
                      "_auto_join on: ('bank_ids.sanitized_acc_number', 'like', '..') query incorrect parameter")

        # Do: one2many with _auto_join, test final leaf is an id
        self._reinit_mock()
        bank_ids = [b_aa.id, b_ab.id]
        partners = partner_obj.search([('bank_ids.id', 'in', bank_ids)])
        # Test result
        self.assertEqual(partners, p_aa + p_ab,
"_auto_join on: ('bank_ids.id', 'in', [..]) incorrect result")
# Test produced queries
self.assertEqual(len(self.query_list), 1,
"_auto_join on: ('bank_ids.id', 'in', [..]) should produce 1 query")
sql_query = self.query_list[0].get_sql()
self.assertIn('"res_partner"', sql_query[0],
"_auto_join on: ('bank_ids.id', 'in', [..]) query incorrect main table")
self.assertIn('"res_partner__bank_ids"."id" in (%s,%s)', sql_query[1],
"_auto_join on: ('bank_ids.id', 'in', [..]) query incorrect where condition")
self.assertLessEqual(set(bank_ids), set(sql_query[2]),
"_auto_join on: ('bank_ids.id', 'in', [..]) query incorrect parameter")
# Do: 2 cascaded one2many with _auto_join, test final leaf is an id
patch_auto_join(partner_obj, 'child_ids', True)
self._reinit_mock()
bank_ids = [b_aa.id, b_ba.id]
partners = partner_obj.search([('child_ids.bank_ids.id', 'in', bank_ids)])
# Test result
self.assertEqual(partners, p_a + p_b,
"_auto_join on: ('child_ids.bank_ids.id', 'not in', [..]): incorrect result")
# Test produced queries
self.assertEqual(len(self.query_list), 1,
"_auto_join on: ('child_ids.bank_ids.id', 'in', [..]) should produce 1 query")
sql_query = self.query_list[0].get_sql()
self.assertIn('"res_partner"', sql_query[0],
"_auto_join on: ('child_ids.bank_ids.id', 'in', [..]) incorrect main table")
self.assertIn('"res_partner" as "res_partner__child_ids"', sql_query[0],
"_auto_join on: ('child_ids.bank_ids.id', 'in', [..]) query incorrect join")
self.assertIn('"res_partner_bank" as "res_partner__child_ids__bank_ids"', sql_query[0],
"_auto_join on: ('child_ids.bank_ids.id', 'in', [..]) query incorrect join")
self.assertIn('"res_partner__child_ids__bank_ids"."id" in (%s,%s)', sql_query[1],
"_auto_join on: ('child_ids.bank_ids.id', 'in', [..]) query incorrect where condition")
self.assertIn('"res_partner"."id"="res_partner__child_ids"."parent_id"', sql_query[1],
"_auto_join on: ('child_ids.bank_ids.id', 'in', [..]) query incorrect join condition")
self.assertIn('"res_partner__child_ids"."id"="res_partner__child_ids__bank_ids"."partner_id"', sql_query[1],
"_auto_join on: ('child_ids.bank_ids.id', 'in', [..]) query incorrect join condition")
self.assertLessEqual(set(bank_ids), set(sql_query[2][-2:]),
"_auto_join on: ('child_ids.bank_ids.id', 'in', [..]) query incorrect parameter")
# --------------------------------------------------
# Test3: many2one
# --------------------------------------------------
name_test = 'US'
# Do: many2one without _auto_join
self._reinit_mock()
partners = partner_obj.search([('state_id.country_id.code', 'like', name_test)])
# Test result: at least our added data + demo data
self.assertLessEqual(p_a + p_b + p_aa + p_ab + p_ba, partners,
"_auto_join off: ('state_id.country_id.code', 'like', '..') incorrect result")
# Test produced queries
self.assertEqual(len(self.query_list), 3,
"_auto_join off: ('state_id.country_id.code', 'like', '..') should produce 3 queries (1 on res_country, 1 on res_country_state, 1 on res_partner)")
# Do: many2one with 1 _auto_join on the first many2one
patch_auto_join(partner_obj, 'state_id', True)
self._reinit_mock()
partners = partner_obj.search([('state_id.country_id.code', 'like', name_test)])
# Test result: at least our added data + demo data
self.assertLessEqual(p_a + p_b + p_aa + p_ab + p_ba, partners,
"_auto_join on for state_id: ('state_id.country_id.code', 'like', '..') incorrect result")
# Test produced queries
self.assertEqual(len(self.query_list), 2,
"_auto_join on for state_id: ('state_id.country_id.code', 'like', '..') should produce 2 query")
sql_query = self.query_list[0].get_sql()
self.assertIn('"res_country"', sql_query[0],
"_auto_join on for state_id: ('state_id.country_id.code', 'like', '..') query 1 incorrect main table")
expected = "%s::text like %s" % (unaccent('"res_country"."code"'), unaccent('%s'))
self.assertIn(expected, sql_query[1],
"_auto_join on for state_id: ('state_id.country_id.code', 'like', '..') query 1 incorrect where condition")
self.assertEqual(['%' + name_test + '%'], sql_query[2],
"_auto_join on for state_id: ('state_id.country_id.code', 'like', '..') query 1 incorrect parameter")
sql_query = self.query_list[1].get_sql()
self.assertIn('"res_partner"', sql_query[0],
"_auto_join on for state_id: ('state_id.country_id.code', 'like', '..') query 2 incorrect main table")
self.assertIn('"res_country_state" as "res_partner__state_id"', sql_query[0],
"_auto_join on for state_id: ('state_id.country_id.code', 'like', '..') query 2 incorrect join")
self.assertIn('"res_partner__state_id"."country_id" in (%s)', sql_query[1],
"_auto_join on for state_id: ('state_id.country_id.code', 'like', '..') query 2 incorrect where condition")
self.assertIn('"res_partner"."state_id"="res_partner__state_id"."id"', sql_query[1],
"_auto_join on for state_id: ('state_id.country_id.code', 'like', '..') query 2 incorrect join condition")
# Do: many2one with 1 _auto_join on the second many2one
patch_auto_join(partner_obj, 'state_id', False)
patch_auto_join(state_obj, 'country_id', True)
self._reinit_mock()
partners = partner_obj.search([('state_id.country_id.code', 'like', name_test)])
# Test result: at least our added data + demo data
self.assertLessEqual(p_a + p_b + p_aa + p_ab + p_ba, partners,
"_auto_join on for country_id: ('state_id.country_id.code', 'like', '..') incorrect result")
# Test produced queries
self.assertEqual(len(self.query_list), 2,
"_auto_join on for country_id: ('state_id.country_id.code', 'like', '..') should produce 2 query")
# -- first query
sql_query = self.query_list[0].get_sql()
self.assertIn('"res_country_state"', sql_query[0],
"_auto_join on for country_id: ('state_id.country_id.code', 'like', '..') query 1 incorrect main table")
self.assertIn('"res_country" as "res_country_state__country_id"', sql_query[0],
"_auto_join on for country_id: ('state_id.country_id.code', 'like', '..') query 1 incorrect join")
expected = "%s::text like %s" % (unaccent('"res_country_state__country_id"."code"'), unaccent('%s'))
self.assertIn(expected, sql_query[1],
"_auto_join on for country_id: ('state_id.country_id.code', 'like', '..') query 1 incorrect where condition")
self.assertIn('"res_country_state"."country_id"="res_country_state__country_id"."id"', sql_query[1],
"_auto_join on for country_id: ('state_id.country_id.code', 'like', '..') query 1 incorrect join condition")
self.assertEqual(['%' + name_test + '%'], sql_query[2],
"_auto_join on for country_id: ('state_id.country_id.code', 'like', '..') query 1 incorrect parameter")
# -- second query
sql_query = self.query_list[1].get_sql()
self.assertIn('"res_partner"', sql_query[0],
"_auto_join on for country_id: ('state_id.country_id.code', 'like', '..') query 2 incorrect main table")
self.assertIn('"res_partner"."state_id" in', sql_query[1],
"_auto_join on for country_id: ('state_id.country_id.code', 'like', '..') query 2 incorrect where condition")
# Do: many2one with 2 _auto_join
patch_auto_join(partner_obj, 'state_id', True)
patch_auto_join(state_obj, 'country_id', True)
self._reinit_mock()
partners = partner_obj.search([('state_id.country_id.code', 'like', name_test)])
# Test result: at least our added data + demo data
self.assertLessEqual(p_a + p_b + p_aa + p_ab + p_ba, partners,
"_auto_join on: ('state_id.country_id.code', 'like', '..') incorrect result")
# Test produced queries
self.assertEqual(len(self.query_list), 1,
"_auto_join on: ('state_id.country_id.code', 'like', '..') should produce 1 query")
sql_query = self.query_list[0].get_sql()
self.assertIn('"res_partner"', sql_query[0],
"_auto_join on: ('state_id.country_id.code', 'like', '..') query incorrect main table")
self.assertIn('"res_country_state" as "res_partner__state_id"', sql_query[0],
"_auto_join on: ('state_id.country_id.code', 'like', '..') query incorrect join")
self.assertIn('"res_country" as "res_partner__state_id__country_id"', sql_query[0],
"_auto_join on: ('state_id.country_id.code', 'like', '..') query incorrect join")
expected = "%s::text like %s" % (unaccent('"res_partner__state_id__country_id"."code"'), unaccent('%s'))
self.assertIn(expected, sql_query[1],
"_auto_join on: ('state_id.country_id.code', 'like', '..') query incorrect where condition")
self.assertIn('"res_partner"."state_id"="res_partner__state_id"."id"', sql_query[1],
"_auto_join on: ('state_id.country_id.code', 'like', '..') query incorrect join condition")
self.assertIn('"res_partner__state_id"."country_id"="res_partner__state_id__country_id"."id"', sql_query[1],
"_auto_join on: ('state_id.country_id.code', 'like', '..') query incorrect join condition")
self.assertIn('%' + name_test + '%', sql_query[2],
"_auto_join on: ('state_id.country_id.code', 'like', '..') query incorrect parameter")
# --------------------------------------------------
# Test4: domain attribute on one2many fields
# --------------------------------------------------
patch_auto_join(partner_obj, 'child_ids', True)
patch_auto_join(partner_obj, 'bank_ids', True)
patch_domain(partner_obj, 'child_ids', lambda self: ['!', ('name', '=', self._name)])
patch_domain(partner_obj, 'bank_ids', [('sanitized_acc_number', 'like', '2')])
# Do: 2 cascaded one2many with _auto_join, test final leaf is an id
self._reinit_mock()
partners = partner_obj.search(['&', (1, '=', 1), ('child_ids.bank_ids.id', 'in', [b_aa.id, b_ba.id])])
# Test result: at least one of our added data
self.assertLessEqual(p_a, partners,
"_auto_join on one2many with domains incorrect result")
self.assertFalse((p_ab + p_ba) & partners,
"_auto_join on one2many with domains incorrect result")
# Test produced queries that domains effectively present
sql_query = self.query_list[0].get_sql()
expected = "%s::text like %s" % (unaccent('"res_partner__child_ids__bank_ids"."sanitized_acc_number"'), unaccent('%s'))
self.assertIn(expected, sql_query[1],
"_auto_join on one2many with domains incorrect result")
# TDE TODO: check first domain has a correct table name
self.assertIn('"res_partner__child_ids"."name" = %s', sql_query[1],
"_auto_join on one2many with domains incorrect result")
patch_domain(partner_obj, 'child_ids', lambda self: [('name', '=', '__%s' % self._name)])
self._reinit_mock()
partners = partner_obj.search(['&', (1, '=', 1), ('child_ids.bank_ids.id', 'in', [b_aa.id, b_ba.id])])
# Test result: no one
self.assertFalse(partners,
"_auto_join on one2many with domains incorrect result")
# ----------------------------------------
# Test5: result-based tests
# ----------------------------------------
patch_auto_join(partner_obj, 'bank_ids', False)
patch_auto_join(partner_obj, 'child_ids', False)
patch_auto_join(partner_obj, 'state_id', False)
patch_auto_join(partner_obj, 'parent_id', False)
patch_auto_join(state_obj, 'country_id', False)
patch_domain(partner_obj, 'child_ids', [])
patch_domain(partner_obj, 'bank_ids', [])
# Do: ('child_ids.state_id.country_id.code', 'like', '..') without _auto_join
self._reinit_mock()
partners = partner_obj.search([('child_ids.state_id.country_id.code', 'like', name_test)])
# Test result: at least our added data + demo data
self.assertLessEqual(p_a + p_b, partners,
"_auto_join off: ('child_ids.state_id.country_id.code', 'like', '..') incorrect result")
# Test produced queries
self.assertEqual(len(self.query_list), 5,
"_auto_join off: ('child_ids.state_id.country_id.code', 'like', '..') number of queries incorrect")
# Do: ('child_ids.state_id.country_id.code', 'like', '..') with _auto_join
patch_auto_join(partner_obj, 'child_ids', True)
patch_auto_join(partner_obj, 'state_id', True)
patch_auto_join(state_obj, 'country_id', True)
self._reinit_mock()
partners = partner_obj.search([('child_ids.state_id.country_id.code', 'like', name_test)])
# Test result: at least our added data + demo data
self.assertLessEqual(p_a + p_b, partners,
"_auto_join on: ('child_ids.state_id.country_id.code', 'like', '..') incorrect result")
# Test produced queries
self.assertEqual(len(self.query_list), 1,
"_auto_join on: ('child_ids.state_id.country_id.code', 'like', '..') number of queries incorrect") | unknown | codeparrot/codeparrot-clean | ||
# -*- coding: utf-8 -*-
##############################################################################
#
# This module uses OpenERP, Open Source Management Solution Framework.
# Copyright (C) 2012-Today Serpent Consulting Services
# (<http://www.serpentcs.com>)
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>
#
##############################################################################
from . import mass_editing_wizard
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for Softsign and SoftsignGrad."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
class SoftsignTest(tf.test.TestCase):
def _npSoftsign(self, np_features):
return np_features / (1 + np.abs(np_features))
def _testSoftsign(self, np_features, use_gpu=False):
np_softsign = self._npSoftsign(np_features)
with self.test_session(use_gpu=use_gpu):
softsign = tf.nn.softsign(np_features)
tf_softsign = softsign.eval()
self.assertAllClose(np_softsign, tf_softsign)
self.assertShapeEqual(np_softsign, softsign)
def testNumbers(self):
for t in [np.float32, np.float64]:
self._testSoftsign(
np.array([[-9, 7, -5, 3, -1], [1, -3, 5, -7, 9]]).astype(t),
use_gpu=False)
self._testSoftsign(
np.array([[-9, 7, -5, 3, -1], [1, -3, 5, -7, 9]]).astype(t),
use_gpu=True)
def testGradient(self):
with self.test_session():
x = tf.constant(
[-0.9, -0.7, -0.5, -0.3, -0.1, 0.1, 0.3, 0.5, 0.7, 0.9],
shape=[2, 5], name="x")
y = tf.nn.softsign(x, name="softsign")
x_init = np.asarray(
[[-0.9, -0.7, -0.5, -0.3, -0.1], [0.1, 0.3, 0.5, 0.7, 0.9]],
dtype=np.float32, order="F")
err = tf.test.compute_gradient_error(x,
[2, 5],
y,
[2, 5],
x_init_value=x_init)
print("softsign (float) gradient err = ", err)
self.assertLess(err, 1e-4)
if __name__ == "__main__":
tf.test.main()
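For reference, the function under test is easy to reproduce without TensorFlow. A self-contained sketch of softsign, softsign(x) = x / (1 + |x|), and its analytic derivative 1 / (1 + |x|)^2 (which is what SoftsignGrad computes), checked against a central finite difference; helper names here are illustrative, not TensorFlow API:

```python
def softsign(x: float) -> float:
    # softsign(x) = x / (1 + |x|)
    return x / (1 + abs(x))


def softsign_grad(x: float) -> float:
    # Analytic derivative: d/dx softsign(x) = 1 / (1 + |x|)**2
    return 1 / (1 + abs(x)) ** 2


# Sanity check the analytic gradient against a central finite difference.
eps = 1e-6
for x in (-9.0, -0.5, 0.0, 0.5, 9.0):
    numeric = (softsign(x + eps) - softsign(x - eps)) / (2 * eps)
    assert abs(numeric - softsign_grad(x)) < 1e-4
```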
import React, { FC } from "react";
// import { useToastContext } from "../../contexts/ToastContext";
import { formatSeries } from "../../lib/formatSeries";
import classes from "./SeriesName.module.css";
import { escapeString } from "../../lib/escapeString";
import { notifications } from "@mantine/notifications";
import {
maybeQuoteLabelName,
metricContainsExtendedCharset,
} from "../../promql/utils";
interface SeriesNameProps {
labels: { [key: string]: string } | null;
format: boolean;
}
const copyMatcher = (matcher: string) => {
if ("clipboard" in navigator) {
navigator.clipboard
.writeText(matcher)
.then(() =>
notifications.show({
title: "Copied matcher!",
message: `Label matcher ${matcher} copied to clipboard`,
})
)
.catch(() =>
notifications.show({
color: "red",
title: "Failed to copy matcher!",
message: "Label matcher could not be copied to clipboard.",
})
);
} else {
notifications.show({
color: "red",
title: "Failed to copy matcher!",
message:
"Clipboard API is not supported in this context (most likely due to non-HTTPS origin).",
});
}
};
const SeriesName: FC<SeriesNameProps> = ({ labels, format }) => {
const renderFormatted = (): React.ReactElement => {
const metricExtendedCharset =
labels && metricContainsExtendedCharset(labels.__name__ || "");
const labelNodes: React.ReactElement[] = [];
let first = true;
// If the metric name uses the extended new charset, we need to escape it,
// put it into the label matcher list, and make sure it's the first item.
if (metricExtendedCharset) {
labelNodes.push(
<span key="__name__">
<span className={classes.labelValue}>
"{escapeString(labels.__name__)}"
</span>
</span>
);
first = false;
}
for (const label in labels) {
if (label === "__name__") {
continue;
}
labelNodes.push(
<span key={label}>
{!first && ", "}
<span
className={classes.labelPair}
onDoubleClick={(e) => copyMatcher(e.currentTarget.innerText)}
title="Double click to copy label matcher"
>
<span className={classes.labelName}>
{maybeQuoteLabelName(label)}
</span>
=
<span className={classes.labelValue}>
"{escapeString(labels[label])}"
</span>
</span>
</span>
);
if (first) {
first = false;
}
}
return (
<span>
{!metricExtendedCharset && (
<span className={classes.metricName}>
{labels ? labels.__name__ : ""}
</span>
)}
{"{"}
{labelNodes}
{"}"}
</span>
);
};
if (labels === null) {
return <>scalar</>;
}
if (format) {
return renderFormatted();
}
// Return a simple text node. This is much faster to scroll through
// for longer lists (hundreds of items).
return <>{formatSeries(labels)}</>;
};
export default SeriesName;
// Source: github.com/prometheus/prometheus, web/ui/mantine-ui/src/pages/query/SeriesName.tsx
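The output shape produced by renderFormatted/formatSeries above is the standard PromQL-style series notation, `metric{label="value", ...}`. A rough Python sketch of that formatting (an illustration only, not the component's actual formatSeries code; it assumes a classic-charset metric name and sorts labels for determinism):

```python
def format_series(labels: dict) -> str:
    # Render {"__name__": "up", "job": "api"} as 'up{job="api"}'.
    def esc(v: str) -> str:
        # Escape backslashes, quotes, and newlines inside label values.
        return v.replace("\\", "\\\\").replace('"', '\\"').replace("\n", "\\n")

    name = labels.get("__name__", "")
    pairs = ", ".join(
        f'{k}="{esc(v)}"' for k, v in sorted(labels.items()) if k != "__name__"
    )
    return f"{name}{{{pairs}}}"
```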
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ===--- test_compare_perf_tests.py --------------------------------------===//
#
# This source file is part of the Swift.org open source project
#
# Copyright (c) 2014 - 2017 Apple Inc. and the Swift project authors
# Licensed under Apache License v2.0 with Runtime Library Exception
#
# See https://swift.org/LICENSE.txt for license information
# See https://swift.org/CONTRIBUTORS.txt for the list of Swift project authors
#
# ===---------------------------------------------------------------------===//
import json
import os
import shutil
import sys
import tempfile
import unittest
from compare_perf_tests import LogParser
from compare_perf_tests import PerformanceTestResult
from compare_perf_tests import ReportFormatter
from compare_perf_tests import ResultComparison
from compare_perf_tests import TestComparator
from compare_perf_tests import main
from compare_perf_tests import parse_args
from test_utils import captured_output
class TestPerformanceTestResult(unittest.TestCase):
def test_init(self):
header = "#,TEST,SAMPLES,MIN,MAX,MEAN,SD,MEDIAN"
log_line = "1,AngryPhonebook,20,10664,12933,11035,576,10884"
r = PerformanceTestResult.fromOldFormat(header, log_line)
self.assertEqual(r.test_num, 1)
self.assertEqual(r.name, "AngryPhonebook")
self.assertEqual(
(r.num_samples, r.min_value, r.max_value, r.mean, r.sd, r.median),
(20, 10664, 12933, 11035, 576, 10884),
)
self.assertEqual(r.samples, [])
header = "#,TEST,SAMPLES,MIN,MAX,MEAN,SD,MEDIAN,MAX_RSS"
log_line = "1,AngryPhonebook,1,12045,12045,12045,0,12045,10510336"
r = PerformanceTestResult.fromOldFormat(header, log_line)
self.assertEqual(r.max_rss, 10510336)
def test_init_quantiles(self):
header = "#,TEST,SAMPLES,MIN(μs),MEDIAN(μs),MAX(μs)"
log = "1,Ackermann,3,54383,54512,54601"
r = PerformanceTestResult.fromQuantileFormat(header, log)
self.assertEqual(r.test_num, 1)
self.assertEqual(r.name, "Ackermann")
self.assertEqual(
(r.num_samples, r.min_value, r.median, r.max_value),
(3, 54383, 54512, 54601)
)
self.assertAlmostEqual(r.mean, 54498.67, places=2)
self.assertAlmostEqual(r.sd, 109.61, places=2)
self.assertEqual(r.samples, [54383, 54512, 54601])
header = "#,TEST,SAMPLES,MIN(μs),MEDIAN(μs),MAX(μs),MAX_RSS(B)"
log = "1,Ackermann,3,54529,54760,55807,266240"
r = PerformanceTestResult.fromQuantileFormat(header, log)
self.assertEqual((len(r.samples), r.max_rss), (3, 266240))
header = "#,TEST,SAMPLES,MIN(μs),Q1(μs),Q2(μs),Q3(μs),MAX(μs)"
log = "1,Ackermann,5,54570,54593,54644,57212,58304"
r = PerformanceTestResult.fromQuantileFormat(header, log)
self.assertEqual(
(r.num_samples, r.min_value, r.median, r.max_value),
(5, 54570, 54644, 58304)
)
self.assertEqual((r.q1, r.q3), (54581.5, 57758))
self.assertEqual(len(r.samples), 5)
header = "#,TEST,SAMPLES,MIN(μs),Q1(μs),Q2(μs),Q3(μs),MAX(μs),MAX_RSS(B)"
log = "1,Ackermann,5,54686,54731,54774,55030,63466,270336"
r = PerformanceTestResult.fromQuantileFormat(header, log)
self.assertEqual(r.num_samples, 5)
self.assertEqual(len(r.samples), 5)
self.assertEqual(r.max_rss, 270336)
def test_init_delta_quantiles(self):
# 2-quantile from 2 samples in repeated min, when delta encoded,
# the difference is 0, which is omitted -- only separator remains
header = "#,TEST,SAMPLES,MIN(μs),𝚫MEDIAN,𝚫MAX"
log = "202,DropWhileArray,2,265,,22"
r = PerformanceTestResult.fromQuantileFormat(header, log)
self.assertEqual((r.num_samples, r.min_value, r.median, r.max_value),
(2, 265, 276, 287))
self.assertEqual(len(r.samples), 2)
self.assertEqual(r.num_samples, 2)
def test_init_oversampled_quantiles(self):
"""When num_samples is < quantile + 1, some of the measurements are
repeated in the report summary. Samples should contain only true
values, discarding the repeated artifacts from quantile estimation.
The test string is slightly massaged output of the following R script:
subsample <- function(x, q) {
quantile(1:x, probs=((0:(q-1))/(q-1)), type=1)}
tbl <- function(s) t(sapply(1:s, function(x) {
qs <- subsample(x, s); c(qs[1], diff(qs)) }))
sapply(c(3, 5, 11, 21), tbl)
TODO: Delete this test when we delete quantile support from the
benchmark harness. Reconstructing samples from quantiles as this code is
trying to do is not really statistically sound, which is why we're going
to delete most of this in favor of an architecture where the
lowest-level benchmarking logic reports samples, we store and pass
raw sample data around as much as possible, and summary statistics are
only computed as necessary for actual reporting (and then discarded,
since we can recompute anything we need if we always have the raw
samples available).
"""
def validatePTR(deq): # construct from delta encoded quantiles string
deq = deq.split(",")
num_samples = deq.count("1")
r = PerformanceTestResult(
["0", "B", str(num_samples)] + deq, quantiles=True, delta=True
)
self.assertEqual(len(r.samples), num_samples)
self.assertEqual(r.samples, list(range(1, num_samples + 1)))
delta_encoded_quantiles = """
1,,
1,,1
1,,,,
1,,,1,
1,,1,1,
1,,1,1,1
1,,,,,,,,,,
1,,,,,,1,,,,
1,,,,1,,,1,,,
1,,,1,,,1,,1,,
1,,,1,,1,,1,,1,
1,,1,,1,,1,1,,1,
1,,1,1,,1,1,,1,1,
1,,1,1,1,,1,1,1,1,
1,,1,1,1,1,1,1,1,1,
1,,1,1,1,1,1,1,1,1,1
1,,,,,,,,,,,,,,,,,,,,
1,,,,,,,,,,,1,,,,,,,,,
1,,,,,,,1,,,,,,,1,,,,,,
1,,,,,,1,,,,,1,,,,,1,,,,
1,,,,,1,,,,1,,,,1,,,,1,,,
1,,,,1,,,1,,,,1,,,1,,,1,,,
1,,,1,,,1,,,1,,,1,,,1,,,1,,
1,,,1,,,1,,1,,,1,,1,,,1,,1,,
1,,,1,,1,,1,,1,,,1,,1,,1,,1,,
1,,,1,,1,,1,,1,,1,,1,,1,,1,,1,
1,,1,,1,,1,,1,,1,1,,1,,1,,1,,1,
1,,1,,1,,1,1,,1,,1,1,,1,,1,1,,1,
1,,1,,1,1,,1,1,,1,1,,1,1,,1,1,,1,
1,,1,1,,1,1,,1,1,,1,1,1,,1,1,,1,1,
1,,1,1,,1,1,1,,1,1,1,,1,1,1,,1,1,1,
1,,1,1,1,,1,1,1,1,,1,1,1,1,,1,1,1,1,
1,,1,1,1,1,1,,1,1,1,1,1,1,,1,1,1,1,1,
1,,1,1,1,1,1,1,1,1,,1,1,1,1,1,1,1,1,1,
1,,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1"""
# Use an explicit loop: in Python 3 a bare map() is lazy, so the
# validations would otherwise never run.
for deq in delta_encoded_quantiles.split("\n")[1:]:
    validatePTR(deq)
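The R snippet in the docstring above uses R's type-1 quantile, i.e. the inverse empirical CDF. An equivalent Python sketch, handy for sanity-checking rows of the delta-encoded table (function names are illustrative, not part of the harness):

```python
import math


def type1_quantile(n: int, p: float) -> int:
    # R's quantile(1:n, probs=p, type=1): inverse empirical CDF over 1..n.
    if p <= 0:
        return 1
    return math.ceil(p * n)


def subsample(n: int, q: int) -> list:
    # q summary points (min .. max) drawn from samples 1..n, type-1 style;
    # mirrors subsample(x, q) in the docstring's R script.
    return [type1_quantile(n, k / (q - 1)) for k in range(q)]
```

For example, `subsample(2, 3)` gives `[1, 1, 2]`, whose delta encoding `1,,1` matches the second row of the 3-quantile block in the table.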
def test_init_meta(self):
header = (
"#,TEST,SAMPLES,MIN(μs),MAX(μs),MEAN(μs),SD(μs),"
+ "MEDIAN(μs),PAGES,ICS,YIELD"
)
log = "1,Ackermann,200,715,1281,726,47,715,7,29,15"
r = PerformanceTestResult.fromOldFormat(header, log)
self.assertEqual((r.test_num, r.name), (1, "Ackermann"))
self.assertEqual(
(r.num_samples, r.min_value, r.max_value, r.mean, r.sd, r.median),
(200, 715, 1281, 726, 47, 715),
)
self.assertEqual((r.mem_pages, r.involuntary_cs, r.yield_count), (7, 29, 15))
header = (
"#,TEST,SAMPLES,MIN(μs),MAX(μs),MEAN(μs),SD(μs),MEDIAN(μs),"
+ "MAX_RSS(B),PAGES,ICS,YIELD"
)
log = "1,Ackermann,200,715,1951,734,97,715,36864,9,50,15"
r = PerformanceTestResult.fromOldFormat(header, log)
self.assertEqual(
(r.num_samples, r.min_value, r.max_value, r.mean, r.sd, r.median),
(200, 715, 1951, 734, 97, 715),
)
self.assertEqual(
(r.mem_pages, r.involuntary_cs, r.yield_count, r.max_rss),
(9, 50, 15, 36864),
)
header = "#,TEST,SAMPLES,MIN(μs),MAX(μs),PAGES,ICS,YIELD"
log = "1,Ackermann,200,715,3548,8,31,15"
r = PerformanceTestResult.fromOldFormat(header, log)
self.assertEqual((r.num_samples, r.min_value, r.max_value), (200, 715, 3548))
self.assertEqual(r.samples, [])
self.assertEqual((r.mem_pages, r.involuntary_cs, r.yield_count), (8, 31, 15))
header = "#,TEST,SAMPLES,MIN(μs),MAX(μs),MAX_RSS(B),PAGES,ICS,YIELD"
log = "1,Ackermann,200,715,1259,32768,8,28,15"
r = PerformanceTestResult.fromOldFormat(header, log)
self.assertEqual((r.num_samples, r.min_value, r.max_value), (200, 715, 1259))
self.assertEqual(r.samples, [])
self.assertEqual(r.max_rss, 32768)
self.assertEqual((r.mem_pages, r.involuntary_cs, r.yield_count), (8, 28, 15))
def test_merge(self):
tests = [
"""{"number":1,"name":"AngryPhonebook",
"samples":[12045]}""",
"""{"number":1,"name":"AngryPhonebook",
"samples":[12325],"max_rss":10510336}""",
"""{"number":1,"name":"AngryPhonebook",
"samples":[11616],"max_rss":10502144}""",
"""{"number":1,"name":"AngryPhonebook",
"samples":[12270],"max_rss":10498048}"""
]
results = [PerformanceTestResult(text) for text in tests]
def as_tuple(r):
return (
r.num_samples,
r.min_value,
r.max_value,
round(r.mean, 2),
round(r.sd, 2),
r.median,
r.max_rss,
)
r = results[0]
self.assertEqual(as_tuple(r), (1, 12045, 12045, 12045, 0, 12045, None))
r.merge(results[1])
self.assertEqual(
as_tuple(r),
(2, 12045, 12325, 12185, 197.99, 12185, 10510336),
)
r.merge(results[2])
self.assertEqual(
as_tuple(r),
(3, 11616, 12325, 11995.33, 357.1, 12045, 10502144),
)
r.merge(results[3])
self.assertEqual(
as_tuple(r),
(4, 11616, 12325, 12064, 322.29, 12157.5, 10498048),
)
def test_legacy_merge(self):
header = """#,TEST,NUM_SAMPLES,MIN,MAX,MEAN,SD,MEDIAN, MAX_RSS"""
tests = [
"""1,AngryPhonebook,8,12045,12045,12045,0,12045""",
"""1,AngryPhonebook,8,12325,12325,12325,0,12325,10510336""",
"""1,AngryPhonebook,8,11616,11616,11616,0,11616,10502144""",
"""1,AngryPhonebook,8,12270,12270,12270,0,12270,10498048"""
]
results = [PerformanceTestResult.fromOldFormat(header, row) for row in tests]
def as_tuple(r):
return (
r.num_samples,
r.min_value,
r.max_value,
round(r.mean, 2),
round(r.sd, 2) if r.sd is not None else None,
r.median,
r.max_rss,
)
r = results[0]
self.assertEqual(as_tuple(r), (8, 12045, 12045, 12045, 0, 12045, None))
r.merge(results[1])
self.assertEqual(
as_tuple(r), # Note: SD, Median are lost
(16, 12045, 12325, 12185, None, None, 10510336),
)
r.merge(results[2])
self.assertEqual(
as_tuple(r),
(24, 11616, 12325, 11995.33, None, None, 10502144),
)
r.merge(results[3])
self.assertEqual(
as_tuple(r),
(32, 11616, 12325, 12064, None, None, 10498048),
)
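The legacy-merge expectations above pin down exactly what bookkeeping is possible when only summary statistics (no raw samples) are available: counts add, min/max combine, the mean is sample-count weighted, and SD/median are unrecoverable, hence the expected `None`. A hedged sketch of that arithmetic (not the tool's actual merge code):

```python
def merge_summaries(a: dict, b: dict) -> dict:
    # Combine two summary-only results: counts add, extremes combine,
    # the mean is weighted by sample count, SD/median are lost.
    n = a["n"] + b["n"]
    return {
        "n": n,
        "min": min(a["min"], b["min"]),
        "max": max(a["max"], b["max"]),
        "mean": (a["mean"] * a["n"] + b["mean"] * b["n"]) / n,
        "sd": None,      # not derivable from two means alone
        "median": None,  # not derivable without the samples
    }
```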
class TestResultComparison(unittest.TestCase):
def setUp(self):
self.r0 = PerformanceTestResult(
"""{"number":101,"name":"GlobalClass",
"samples":[0,0,0,0,0],"max_rss":10185728}"""
)
self.r01 = PerformanceTestResult(
"""{"number":101,"name":"GlobalClass",
"samples":[20,20,20],"max_rss":10185728}"""
)
self.r1 = PerformanceTestResult(
"""{"number":1,"name":"AngryPhonebook",
"samples":[12325],"max_rss":10510336}"""
)
self.r2 = PerformanceTestResult(
"""{"number":1,"name":"AngryPhonebook",
"samples":[11616],"max_rss":10502144}"""
)
self.r3 = PerformanceTestResult(
"""{"number":1,"name":"AngryPhonebook",
"samples":[11616,12326],"max_rss":10502144}"""
)
def test_init(self):
rc = ResultComparison(self.r1, self.r2)
self.assertEqual(rc.name, "AngryPhonebook")
self.assertAlmostEqual(rc.ratio, 12325.0 / 11616.0)
self.assertAlmostEqual(rc.delta, (((11616.0 / 12325.0) - 1) * 100), places=3)
# handle test results that sometimes change to zero, when compiler
# optimizes out the body of the incorrectly written test
rc = ResultComparison(self.r0, self.r0)
self.assertEqual(rc.name, "GlobalClass")
self.assertAlmostEqual(rc.ratio, 1)
self.assertAlmostEqual(rc.delta, 0, places=3)
rc = ResultComparison(self.r0, self.r01)
self.assertAlmostEqual(rc.ratio, 0, places=3)
self.assertAlmostEqual(rc.delta, 2000000, places=3)
rc = ResultComparison(self.r01, self.r0)
self.assertAlmostEqual(rc.ratio, 20001)
self.assertAlmostEqual(rc.delta, -99.995, places=3)
# disallow comparison of different test results
self.assertRaises(AssertionError, ResultComparison, self.r0, self.r1)
def test_values_is_dubious(self):
self.assertFalse(ResultComparison(self.r1, self.r2).is_dubious)
# new.min < old.min < new.max
self.assertTrue(ResultComparison(self.r1, self.r3).is_dubious)
# other way around: old.min < new.min < old.max
self.assertTrue(ResultComparison(self.r3, self.r1).is_dubious)
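The expected values in this class imply the comparison math below: the ratio is computed from the old and new minima with a small epsilon so zero-valued results remain comparable, the delta is the percentage change, and a comparison is dubious when the old and new sample ranges overlap. A sketch under those inferred assumptions (the 0.001 epsilon is deduced from the expected 20001 ratio, not taken from documentation):

```python
EPS = 0.001  # assumed guard against division by zero


def ratio(old_min: float, new_min: float) -> float:
    return (old_min + EPS) / (new_min + EPS)


def delta(old_min: float, new_min: float) -> float:
    # Percentage change from old to new.
    return ((new_min + EPS) / (old_min + EPS) - 1) * 100


def is_dubious(old_min, old_max, new_min, new_max) -> bool:
    # Dubious when the old and new sample ranges overlap.
    return new_min < old_min < new_max or old_min < new_min < old_max
```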
class FileSystemIntegration(unittest.TestCase):
def setUp(self):
# Create a temporary directory
self.test_dir = tempfile.mkdtemp()
def tearDown(self):
# Remove the directory after the test
shutil.rmtree(self.test_dir)
def write_temp_file(self, file_name, data):
temp_file_name = os.path.join(self.test_dir, file_name)
with open(temp_file_name, "w") as f:
for line in data:
f.write(line)
f.write('\n')
return temp_file_name
class OldAndNewLog(unittest.TestCase):
old_log_content = [
"""{"number":1,"name":"AngryPhonebook","""
+ """"samples":[10458,12714,11000],"max_rss":10204365}""",
"""{"number":2,"name":"AnyHashableWithAClass","""
+ """"samples":[247027,319065,259056,259056],"max_rss":10250445}""",
"""{"number":3,"name":"Array2D","""
+ """"samples":[335831,400221,346622,346622],"max_rss":28297216}""",
"""{"number":4,"name":"ArrayAppend","""
+ """"samples":[23641,29000,24990,24990],"max_rss":11149926}""",
"""{"number":34,"name":"BitCount","samples":[3,4,4,4],"max_rss":10192896}""",
"""{"number":35,"name":"ByteSwap","samples":[4,6,4,4],"max_rss":10185933}"""
]
new_log_content = [
"""{"number":265,"name":"TwoSum","samples":[5006,5679,5111,5111]}""",
"""{"number":35,"name":"ByteSwap","samples":[0,0,0,0,0]}""",
"""{"number":34,"name":"BitCount","samples":[9,9,9,9]}""",
"""{"number":4,"name":"ArrayAppend","samples":[20000,29000,24990,24990]}""",
"""{"number":3,"name":"Array2D","samples":[335831,400221,346622,346622]}""",
"""{"number":1,"name":"AngryPhonebook","samples":[10458,12714,11000,11000]}"""
]
def makeResult(json_text):
return PerformanceTestResult(json.loads(json_text))
old_results = dict(
[
(r.name, r) for r in map(makeResult, old_log_content)
]
)
new_results = dict(
[
(r.name, r) for r in map(makeResult, new_log_content)
]
)
def assert_report_contains(self, texts, report):
assert not isinstance(texts, str)
for text in texts:
self.assertIn(text, report)
class TestLogParser(unittest.TestCase):
def test_parse_results_csv(self):
"""Ignores unknown lines, extracts data from supported formats."""
log = """#,TEST,SAMPLES,MIN(us),MAX(us),MEAN(us),SD(us),MEDIAN(us)
7,Array.append.Array.Int?,20,10,10,10,0,10
21,Bridging.NSArray.as!.Array.NSString,20,11,11,11,0,11
42,Flatten.Array.Tuple4.lazy.for-in.Reserve,20,3,4,4,0,4
Total performance tests executed: 1
"""
parser = LogParser()
results = parser.parse_results(log.splitlines())
self.assertTrue(isinstance(results[0], PerformanceTestResult))
self.assertEqual(results[0].name, "Array.append.Array.Int?")
self.assertEqual(results[1].name, "Bridging.NSArray.as!.Array.NSString")
self.assertEqual(results[2].name, "Flatten.Array.Tuple4.lazy.for-in.Reserve")
def test_parse_results_tab_delimited(self):
log = "34\tBitCount\t20\t3\t4\t4\t0\t4"
parser = LogParser()
results = parser.parse_results(log.splitlines())
self.assertTrue(isinstance(results[0], PerformanceTestResult))
self.assertEqual(results[0].name, "BitCount")
def test_parse_results_formatted_text(self):
"""Parse format that Benchmark_Driver prints to console"""
log = """
# TEST SAMPLES MIN(μs) MAX(μs) MEAN(μs) SD(μs) MEDIAN(μs) MAX_RSS(B)
3 Array2D 20 2060 2188 2099 0 2099 20915200
Total performance tests executed: 1
"""
parser = LogParser()
results = parser.parse_results(log.splitlines()[1:]) # without 1st \n
self.assertTrue(isinstance(results[0], PerformanceTestResult))
r = results[0]
self.assertEqual(r.name, "Array2D")
self.assertEqual(r.max_rss, 20915200)
def test_parse_quantiles(self):
"""Gathers samples from reported quantiles. Handles optional memory."""
r = LogParser.results_from_string(
"""#,TEST,SAMPLES,QMIN(μs),MEDIAN(μs),MAX(μs)
1,Ackermann,3,54383,54512,54601"""
)["Ackermann"]
self.assertEqual(r.samples, [54383, 54512, 54601])
r = LogParser.results_from_string(
"""#,TEST,SAMPLES,QMIN(μs),MEDIAN(μs),MAX(μs),MAX_RSS(B)
1,Ackermann,3,54529,54760,55807,266240"""
)["Ackermann"]
self.assertEqual(r.samples, [54529, 54760, 55807])
self.assertEqual(r.max_rss, 266240)
def test_parse_delta_quantiles(self):
r = LogParser.results_from_string( # 2-quantile aka. median
"#,TEST,SAMPLES,QMIN(μs),𝚫MEDIAN,𝚫MAX\n0,B,1,101,,"
)["B"]
self.assertEqual(
(r.num_samples, r.min_value, r.median, r.max_value, len(r.samples)),
(1, 101, 101, 101, 1),
)
r = LogParser.results_from_string(
"#,TEST,SAMPLES,QMIN(μs),𝚫MEDIAN,𝚫MAX\n0,B,2,101,,1"
)["B"]
self.assertEqual(
(r.num_samples, r.min_value, r.median, r.max_value, len(r.samples)),
(2, 101, 101.5, 102, 2),
)
r = LogParser.results_from_string( # 20-quantiles aka. ventiles
"#,TEST,SAMPLES,QMIN(μs),𝚫V1,𝚫V2,𝚫V3,𝚫V4,𝚫V5,𝚫V6,𝚫V7,𝚫V8,"
+ "𝚫V9,𝚫VA,𝚫VB,𝚫VC,𝚫VD,𝚫VE,𝚫VF,𝚫VG,𝚫VH,𝚫VI,𝚫VJ,𝚫MAX\n"
+ "202,DropWhileArray,200,214,,,,,,,,,,,,1,,,,,,2,16,464"
)["DropWhileArray"]
self.assertEqual(
(r.num_samples, r.min_value, r.max_value, len(r.samples)),
(200, 214, 697, 0),
)
def test_parse_meta(self):
r = LogParser.results_from_string(
"#,TEST,SAMPLES,MIN(μs),MAX(μs),MEAN(μs),SD(μs),MEDIAN(μs),"
+ "PAGES,ICS,YIELD\n"
+ "0,B,1,2,2,2,0,2,7,29,15"
)["B"]
self.assertEqual(
(r.min_value, r.mem_pages, r.involuntary_cs, r.yield_count), (2, 7, 29, 15)
)
r = LogParser.results_from_string(
"#,TEST,SAMPLES,MIN(μs),MAX(μs),MEAN(μs),SD(μs),MEDIAN(μs),"
+ "MAX_RSS(B),PAGES,ICS,YIELD\n"
+ "0,B,1,3,3,3,0,3,36864,9,50,15"
)["B"]
self.assertEqual(
(r.min_value, r.mem_pages, r.involuntary_cs, r.yield_count, r.max_rss),
(3, 9, 50, 15, 36864),
)
r = LogParser.results_from_string(
"#,TEST,SAMPLES,QMIN(μs),MAX(μs),PAGES,ICS,YIELD\n" + "0,B,1,4,4,8,31,15"
)["B"]
self.assertEqual(
(r.min_value, r.mem_pages, r.involuntary_cs, r.yield_count), (4, 8, 31, 15)
)
r = LogParser.results_from_string(
"#,TEST,SAMPLES,QMIN(μs),MAX(μs),MAX_RSS(B),PAGES,ICS,YIELD\n"
+ "0,B,1,5,5,32768,8,28,15"
)["B"]
self.assertEqual(
(r.min_value, r.mem_pages, r.involuntary_cs, r.yield_count, r.max_rss),
(5, 8, 28, 15, 32768),
)
def test_results_from_merge(self):
"""Parsing concatenated log merges same PerformanceTestResults"""
concatenated_logs = """#,TEST,SAMPLES,MIN,MAX,MEAN,SD,MEDIAN
4,ArrayAppend,20,23641,29000,24990,0,24990
4,ArrayAppend,1,20000,20000,20000,0,20000"""
results = LogParser.results_from_string(concatenated_logs)
self.assertEqual(list(results.keys()), ["ArrayAppend"])
result = results["ArrayAppend"]
self.assertTrue(isinstance(result, PerformanceTestResult))
self.assertEqual(result.min_value, 20000)
self.assertEqual(result.max_value, 29000)
class TestTestComparator(OldAndNewLog):
def test_init(self):
def names(tests):
return [t.name for t in tests]
tc = TestComparator(self.old_results, self.new_results, 0.05)
self.assertEqual(names(tc.unchanged), ["AngryPhonebook", "Array2D"])
# self.assertEqual(names(tc.increased), ["ByteSwap", "ArrayAppend"])
self.assertEqual(names(tc.decreased), ["BitCount"])
self.assertEqual(names(tc.added), ["TwoSum"])
self.assertEqual(names(tc.removed), ["AnyHashableWithAClass"])
# other way around
tc = TestComparator(self.new_results, self.old_results, 0.05)
self.assertEqual(names(tc.unchanged), ["AngryPhonebook", "Array2D"])
self.assertEqual(names(tc.increased), ["BitCount"])
self.assertEqual(names(tc.decreased), ["ByteSwap", "ArrayAppend"])
self.assertEqual(names(tc.added), ["AnyHashableWithAClass"])
self.assertEqual(names(tc.removed), ["TwoSum"])
# delta_threshold determines the sorting into change groups;
# report only change above 100% (ByteSwap's runtime went to 0):
tc = TestComparator(self.old_results, self.new_results, 1)
self.assertEqual(
names(tc.unchanged),
["AngryPhonebook", "Array2D", "ArrayAppend", "BitCount"],
)
self.assertEqual(names(tc.increased), ["ByteSwap"])
self.assertEqual(tc.decreased, [])
class TestReportFormatter(OldAndNewLog):
def setUp(self):
super(TestReportFormatter, self).setUp()
self.tc = TestComparator(self.old_results, self.new_results, 0.05)
self.rf = ReportFormatter(self.tc, changes_only=False)
self.markdown = self.rf.markdown()
self.git = self.rf.git()
self.html = self.rf.html()
def assert_markdown_contains(self, texts):
self.assert_report_contains(texts, self.markdown)
def assert_git_contains(self, texts):
self.assert_report_contains(texts, self.git)
def assert_html_contains(self, texts):
self.assert_report_contains(texts, self.html)
def test_values(self):
self.assertEqual(
ReportFormatter.values(
PerformanceTestResult(
"""{"number":1,"name":"AngryPhonebook",
"samples":[10664,12933,11035,10884]}"""
)
),
("AngryPhonebook", "10664", "12933", "11379", "—"),
)
self.assertEqual(
ReportFormatter.values(
PerformanceTestResult(
"""{"number":1,"name":"AngryPhonebook",
"samples":[12045],"max_rss":10510336}"""
)
),
("AngryPhonebook", "12045", "12045", "12045", "10510336"),
)
r1 = PerformanceTestResult(
"""{"number":1,"name":"AngryPhonebook",
"samples":[12325],"max_rss":10510336}"""
)
r2 = PerformanceTestResult(
"""{"number":1,"name":"AngryPhonebook",
"samples":[11616],"max_rss":10510336}"""
)
self.assertEqual(
ReportFormatter.values(ResultComparison(r1, r2)),
("AngryPhonebook", "12325", "11616", "-5.8%", "1.06x"),
)
self.assertEqual(
ReportFormatter.values(ResultComparison(r2, r1)),
("AngryPhonebook", "11616", "12325", "+6.1%", "0.94x"),
)
r1 = PerformanceTestResult(
"""{"number":1,"name":"AngryPhonebook",
"samples":[12325],"max_rss":10510336}"""
)
r2 = PerformanceTestResult(
"""{"number":1,"name":"AngryPhonebook",
"samples":[11616,12326],"max_rss":10510336}"""
)
self.assertEqual(
ReportFormatter.values(ResultComparison(r1, r2))[4],
"1.06x (?)", # is_dubious
)
def test_justified_columns(self):
"""Table columns are all formatted with same width, defined by the
longest value.
"""
self.assert_markdown_contains(
[
"AnyHashableWithAClass | 247027 | 319065 | 271051 | 10250445",
"Array2D | 335831 | 335831 | +0.0% | 1.00x",
]
)
self.assert_git_contains(
[
"AnyHashableWithAClass 247027 319065 271051 10250445",
"Array2D 335831 335831 +0.0% 1.00x",
]
)
def test_column_headers(self):
"""Report contains table headers for ResultComparisons and changed
PerformanceTestResults.
"""
performance_test_result = self.tc.added[0]
self.assertEqual(
ReportFormatter.header_for(performance_test_result),
("TEST", "MIN", "MAX", "MEAN", "MAX_RSS"),
)
comparison_result = self.tc.increased[0]
self.assertEqual(
ReportFormatter.header_for(comparison_result),
("TEST", "OLD", "NEW", "DELTA", "RATIO"),
)
self.assert_markdown_contains(
[
"TEST | OLD | NEW | DELTA | RATIO",
":--- | ---: | ---: | ---: | ---: ",
"TEST | MIN | MAX | MEAN | MAX_RSS",
]
)
self.assert_git_contains(
[
"TEST OLD NEW DELTA RATIO",
"TEST MIN MAX MEAN MAX_RSS",
]
)
self.assert_html_contains(
[
"""
<th align='left'>OLD</th>
<th align='left'>NEW</th>
<th align='left'>DELTA</th>
<th align='left'>RATIO</th>""",
"""
<th align='left'>MIN</th>
<th align='left'>MAX</th>
<th align='left'>MEAN</th>
<th align='left'>MAX_RSS</th>""",
]
)
def test_emphasize_speedup(self):
"""Emphasize speedup values for regressions and improvements"""
# tests in No Changes don't have emphasized speedup
self.assert_markdown_contains(
[
"BitCount | 3 | 9 | +199.9% | **0.33x**",
"ByteSwap | 4 | 0 | -100.0% | **4001.00x**",
"AngryPhonebook | 10458 | 10458 | +0.0% | 1.00x ",
"ArrayAppend | 23641 | 20000 | -15.4% | **1.18x (?)**",
]
)
self.assert_git_contains(
[
"BitCount 3 9 +199.9% **0.33x**",
"ByteSwap 4 0 -100.0% **4001.00x**",
"AngryPhonebook 10458 10458 +0.0% 1.00x",
"ArrayAppend 23641 20000 -15.4% **1.18x (?)**",
]
)
self.assert_html_contains(
[
"""
<tr>
<td align='left'>BitCount</td>
<td align='left'>3</td>
<td align='left'>9</td>
<td align='left'>+199.9%</td>
<td align='left'><font color='red'>0.33x</font></td>
</tr>""",
"""
<tr>
<td align='left'>ByteSwap</td>
<td align='left'>4</td>
<td align='left'>0</td>
<td align='left'>-100.0%</td>
<td align='left'><font color='green'>4001.00x</font></td>
</tr>""",
"""
<tr>
<td align='left'>AngryPhonebook</td>
<td align='left'>10458</td>
<td align='left'>10458</td>
<td align='left'>+0.0%</td>
<td align='left'><font color='black'>1.00x</font></td>
</tr>""",
]
)
def test_sections(self):
"""Report is divided into sections with summaries."""
self.assert_markdown_contains(
[
"""<details open>
<summary>Regression (1)</summary>""",
"""<details >
<summary>Improvement (2)</summary>""",
"""<details >
<summary>No Changes (2)</summary>""",
"""<details open>
<summary>Added (1)</summary>""",
"""<details open>
<summary>Removed (1)</summary>""",
]
)
self.assert_git_contains(
[
"Regression (1): \n",
"Improvement (2): \n",
"No Changes (2): \n",
"Added (1): \n",
"Removed (1): \n",
]
)
self.assert_html_contains(
[
"<th align='left'>Regression (1)</th>",
"<th align='left'>Improvement (2)</th>",
"<th align='left'>No Changes (2)</th>",
"<th align='left'>Added (1)</th>",
"<th align='left'>Removed (1)</th>",
]
)
def test_report_only_changes(self):
"""Leave out tests without significant change."""
rf = ReportFormatter(self.tc, changes_only=True)
markdown, git, html = rf.markdown(), rf.git(), rf.html()
self.assertNotIn("No Changes", markdown)
self.assertNotIn("AngryPhonebook", markdown)
self.assertNotIn("No Changes", git)
self.assertNotIn("AngryPhonebook", git)
self.assertNotIn("No Changes", html)
self.assertNotIn("AngryPhonebook", html)
def test_single_table_report(self):
"""Single table report has inline headers and no elaborate sections."""
self.tc.removed = [] # test handling empty section
rf = ReportFormatter(self.tc, changes_only=True, single_table=True)
markdown = rf.markdown()
self.assertNotIn("<details", markdown) # no sections
self.assertNotIn("\n\n", markdown) # table must not be broken
self.assertNotIn("Removed", markdown)
self.assert_report_contains(
[
"\n**Regression** ",
"| **OLD**",
"| **NEW**",
"| **DELTA**",
"| **RATIO**",
"\n**Added** ",
"| **MIN**",
"| **MAX**",
"| **MEAN**",
"| **MAX_RSS**",
],
markdown,
)
# Single delimiter row:
self.assertIn("\n:---", markdown) # first column is left aligned
self.assertEqual(markdown.count("| ---:"), 4) # other, right aligned
# Separator before every inline header (new section):
self.assertEqual(markdown.count(" | | | | "), 2)
git = rf.git()
self.assertNotIn("): \n", git) # no sections
self.assertNotIn("REMOVED", git)
self.assert_report_contains(
[
"\nREGRESSION ",
" OLD ",
" NEW ",
" DELTA ",
" RATIO ",
"\n\nADDED ",
" MIN ",
" MAX ",
" MEAN ",
" MAX_RSS ",
],
git,
)
# Separator before every inline header (new section):
self.assertEqual(git.count("\n\n"), 2)
class Test_parse_args(unittest.TestCase):
required = ["--old-file", "old.log", "--new-file", "new.log"]
def test_required_input_arguments(self):
with captured_output() as (_, err):
self.assertRaises(SystemExit, parse_args, [])
self.assertIn("usage: compare_perf_tests.py", err.getvalue())
args = parse_args(self.required)
self.assertEqual(args.old_file, "old.log")
self.assertEqual(args.new_file, "new.log")
def test_format_argument(self):
self.assertEqual(parse_args(self.required).format, "markdown")
self.assertEqual(
parse_args(self.required + ["--format", "markdown"]).format, "markdown"
)
self.assertEqual(parse_args(self.required + ["--format", "git"]).format, "git")
self.assertEqual(
parse_args(self.required + ["--format", "html"]).format, "html"
)
with captured_output() as (_, err):
self.assertRaises(
SystemExit, parse_args, self.required + ["--format", "bogus"]
)
self.assertIn(
"error: argument --format: invalid choice: 'bogus' "
"(choose from ",
err.getvalue(),
)
def test_delta_threshold_argument(self):
# default value
args = parse_args(self.required)
self.assertEqual(args.delta_threshold, 0.05)
# float parsing
args = parse_args(self.required + ["--delta-threshold", "0.1"])
self.assertEqual(args.delta_threshold, 0.1)
args = parse_args(self.required + ["--delta-threshold", "1"])
self.assertEqual(args.delta_threshold, 1.0)
args = parse_args(self.required + ["--delta-threshold", ".2"])
self.assertEqual(args.delta_threshold, 0.2)
with captured_output() as (_, err):
self.assertRaises(
SystemExit, parse_args, self.required + ["--delta-threshold", "2,2"]
)
self.assertIn(
" error: argument --delta-threshold: invalid float " "value: '2,2'",
err.getvalue(),
)
def test_output_argument(self):
self.assertEqual(parse_args(self.required).output, None)
self.assertEqual(
parse_args(self.required + ["--output", "report.log"]).output, "report.log"
)
def test_changes_only_argument(self):
self.assertFalse(parse_args(self.required).changes_only)
self.assertTrue(parse_args(self.required + ["--changes-only"]).changes_only)
class Test_compare_perf_tests_main(OldAndNewLog, FileSystemIntegration):
"""Integration test that invokes the whole comparison script."""
markdown = [
"<summary>Regression (1)</summary>",
"TEST | OLD | NEW | DELTA | RATIO",
"BitCount | 3 | 9 | +199.9% | **0.33x**",
]
git = [
"Regression (1):",
"TEST OLD NEW DELTA RATIO",
"BitCount 3 9 +199.9% **0.33x**",
]
html = ["<html>", "<td align='left'>BitCount</td>"]
def setUp(self):
super(Test_compare_perf_tests_main, self).setUp()
self.old_log = self.write_temp_file("old.log", self.old_log_content)
self.new_log = self.write_temp_file("new.log", self.new_log_content)
def execute_main_with_format(self, report_format, test_output=False):
report_file = os.path.join(self.test_dir, "report.log")
args = [
"compare_perf_tests.py",
"--old-file",
self.old_log,
"--new-file",
self.new_log,
"--format",
report_format,
]
sys.argv = args if not test_output else args + ["--output", report_file]
with captured_output() as (out, _):
main()
report_out = out.getvalue()
if test_output:
with open(report_file, "r") as f:
report = f.read()
# because print adds newline, add one here, too:
report_file = report + "\n"
else:
report_file = None
return report_out, report_file
def test_markdown(self):
"""Writes Markdown formatted report to stdout"""
report_out, _ = self.execute_main_with_format("markdown")
self.assert_report_contains(self.markdown, report_out)
def test_markdown_output(self):
"""Writes Markdown formatted report to stdout and `--output` file."""
report_out, report_file = self.execute_main_with_format(
"markdown", test_output=True
)
self.assertEqual(report_out, report_file)
self.assert_report_contains(self.markdown, report_file)
def test_git(self):
"""Writes Git formatted report to stdout."""
report_out, _ = self.execute_main_with_format("git")
self.assert_report_contains(self.git, report_out)
def test_git_output(self):
"""Writes Git formatted report to stdout and `--output` file."""
report_out, report_file = self.execute_main_with_format("git", test_output=True)
self.assertEqual(report_out, report_file)
self.assert_report_contains(self.git, report_file)
def test_html(self):
"""Writes HTML formatted report to stdout."""
report_out, _ = self.execute_main_with_format("html")
self.assert_report_contains(self.html, report_out)
def test_html_output(self):
"""Writes HTML formatted report to stdout and `--output` file."""
report_out, report_file = self.execute_main_with_format(
"html", test_output=True
)
self.assertEqual(report_out, report_file)
self.assert_report_contains(self.html, report_file)
if __name__ == "__main__":
unittest.main() | python | github | https://github.com/apple/swift | benchmark/scripts/test_compare_perf_tests.py |
import sys
from os import walk
from os.path import isdir, join, abspath, dirname
import pep8
import time
htmlmode = False
pep8_ignores = (
'E125', # continuation line does not
# distinguish itself from next logical line
'E126', # continuation line over-indented for hanging indent
'E127', # continuation line over-indented for visual indent
'E128') # continuation line under-indented for visual indent
class KivyStyleChecker(pep8.Checker):
def __init__(self, filename):
pep8.Checker.__init__(self, filename, ignore=pep8_ignores)
def report_error(self, line_number, offset, text, check):
if htmlmode is False:
return pep8.Checker.report_error(self,
line_number, offset, text, check)
# html generation
print('<tr><td>{0}</td><td>{1}</td></tr>'.format(line_number, text))
if __name__ == '__main__':
def usage():
print('Usage: python pep8kivy.py [-html] <file_or_folder_to_check>*')
print('Folders will be checked recursively.')
sys.exit(1)
if len(sys.argv) < 2:
usage()
if sys.argv[1] == '-html':
if len(sys.argv) < 3:
usage()
else:
htmlmode = True
targets = sys.argv[-1].split()
elif len(sys.argv) == 2:
targets = [sys.argv[-1]]
else:
targets = sys.argv[-1].split()
def check(fn):
try:
checker = KivyStyleChecker(fn)
except IOError:
# File couldn't be opened, so was deleted apparently.
# Don't check deleted files.
return 0
return checker.check_all()
errors = 0
exclude_dirs = ['/lib', '/coverage', '/pep8', '/doc']
exclude_files = ['kivy/gesture.py', 'osx/build.py', 'win32/build.py',
'kivy/tools/stub-gl-debug.py',
'kivy/modules/webdebugger.py']
for target in targets:
if isdir(target):
if htmlmode:
path = join(dirname(abspath(__file__)), 'pep8base.html')
print(open(path, 'r').read())
print('''<p>Generated: %s</p><table>''' % (time.strftime('%c')))
for dirpath, dirnames, filenames in walk(target):
cont = False
for pat in exclude_dirs:
if pat in dirpath:
cont = True
break
if cont:
continue
for filename in filenames:
if not filename.endswith('.py'):
continue
cont = False
complete_filename = join(dirpath, filename)
for pat in exclude_files:
if complete_filename.endswith(pat):
cont = True
if cont:
continue
if htmlmode:
print('<tr><th colspan="2">%s</th></tr>' % complete_filename)
errors += check(complete_filename)
if htmlmode:
print('</div></div></table></body></html>')
else:
# Got a single file to check
for pat in exclude_dirs + exclude_files:
if pat in target:
break
else:
if target.endswith('.py'):
errors += check(target)
# If errors is 0 we return with 0. That's just fine.
sys.exit(errors) | unknown | codeparrot/codeparrot-clean | ||
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron._i18n import _
class _FeatureFlag(object):
def is_compatible(self, value):
if value == self.requires:
return True
if value and self.supports:
return True
return False
def __init__(self, supports, requires):
self.supports = supports
self.requires = requires
if requires and not supports:
raise RuntimeError(_("A driver can't require a feature and not "
"support it."))
UNSUPPORTED = _FeatureFlag(supports=False, requires=False)
OPTIONAL = _FeatureFlag(supports=True, requires=False)
MANDATORY = _FeatureFlag(supports=True, requires=True)
class L3ServiceProvider(object):
"""Base class for L3 service provider drivers.
On __init__ this will be given a handle to the l3 plugin. It is then the
responsibility of the driver to subscribe to the events it is interested
in (e.g. router_create, router_update, router_delete, etc).
The 'ha' and 'distributed' attributes below are used to determine if a
router request with the 'ha' or 'distributed' attribute can be supported
by this particular driver. These attributes must be present.
The 'use_integrated_agent_scheduler' flag indicates whether or not routers
which belong to the driver should be automatically scheduled using the L3
agent scheduler integrated into Neutron.
"""
ha_support = UNSUPPORTED
distributed_support = UNSUPPORTED
use_integrated_agent_scheduler = False
def __init__(self, l3plugin):
self.l3plugin = l3plugin | unknown | codeparrot/codeparrot-clean | ||
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
MoEDAL and CERN@school - Skimming Panoptes classifications.
See the README.md file and the GitHub wiki for more information.
http://cernatschool.web.cern.ch
"""
# Import the code needed to manage files.
import os, glob
#...for parsing the arguments.
import argparse
#...for the logging.
import logging as lg
#...for the CSV file processing.
import csv
#...for the JSON data handling.
import json
#...for the time (being).
import time, calendar
if __name__ == "__main__":
print("*")
print("*================================================*")
print("* CERN@school - Panoptes classification skimming *")
print("*================================================*")
# Get the datafile path from the command line.
parser = argparse.ArgumentParser()
parser.add_argument("inputPath", help="Path to the input dataset.")
parser.add_argument("outputPath", help="The path for the output files.")
parser.add_argument("workflowVersion", help="The workflow version.")
parser.add_argument("-v", "--verbose", help="Increase output verbosity", action="store_true")
args = parser.parse_args()
## The path to the data file.
datapath = args.inputPath
# Check if the input file exists. If it doesn't, quit.
if not os.path.exists(datapath):
raise IOError("* ERROR: '%s' input file does not exist!" % (datapath))
## The output path.
outputpath = args.outputPath
# Check if the output directory exists. If it doesn't, quit.
if not os.path.isdir(outputpath):
raise IOError("* ERROR: '%s' output directory does not exist!" % (outputpath))
## The workflow version.
#
# FIXME: validation on the workflow version string.
workflow_version = args.workflowVersion
# Set the logging level.
if args.verbose:
level=lg.DEBUG
else:
level=lg.INFO
# Configure the logging.
lg.basicConfig(filename=os.path.join(outputpath, 'log_skim-classifications.log'), filemode='w', level=level)
print("*")
print("* Input path : '%s'" % (datapath))
print("* Output path : '%s'" % (outputpath))
print("* Workflow version : '%s'" % (workflow_version))
print("*")
lg.info(" *================================================*")
lg.info(" * CERN@school - Panoptes classification skimming *")
lg.info(" *================================================*")
lg.info(" *")
lg.info(" * Input path : '%s'" % (datapath))
lg.info(" * Output path : '%s'" % (outputpath))
lg.info(" * Workflow version : '%s'" % (workflow_version))
lg.info(" *")
## The headers.
headers = []
## The annotations {"UserID_SubjectID":annotation}.
anno_dict = {}
## A dictionary of logged on users.
logged_on_users = {}
## A dictionary of non-logged on users.
non_logged_on_users = {}
## A dictionary of subjects {subject_id:num_classifications}.
subject_dict = {}
## A dictionary of subjects (logged-on users).
#
# { subject_id:number_of_classifications }
logged_on_subject_dict = {}
## A dictionary of subjects (non-logged-on users).
#
# { subject_id:number_of_classifications }
non_logged_on_subject_dict = {}
## The total number of classifications (as a check).
total_classifications_check = 0
# Read in the file.
with open(datapath, "r") as df:
## The CSV file reader.
reader = csv.reader(df)
# Loop over the rows of the CSV file via the reader.
for i, row in enumerate(reader):
if i == 0:
# Extract the header information.
headers = row
else:
# Extract the data.
# Check if the workflow is the correct version.
#
## The workflow version.
workflow_v = row[5]
#
#lg.info(" * Workflow version: %s" % (workflow_version))
#
if workflow_v != workflow_version:
continue
# The User's ID.
user_id = ""
## The time stamp the classification was created at (string).
time_stamp_string = row[6]
## The UNIX time stamp the classification was created at (seconds).
time_stamp_sec = calendar.timegm(time.strptime(time_stamp_string, "%Y-%m-%d %H:%M:%S %Z"))
## The subject ID [image number]_[row]_[col].
subject_id = list(json.loads(row[11]).values())[0]["id"]
#
#lg.info(" *--> Subject ID: %s" % (subject_id))
if subject_id not in subject_dict:
subject_dict[subject_id] = 1
else:
subject_dict[subject_id] += 1
# Was the user logged in?
if row[1] != "":
# For logged in users, use the User ID and the UNIX timestamp.
user_id = row[1] + ":%d" % (time_stamp_sec)
# Add the user to the logged-on user list.
if user_id not in logged_on_users:
logged_on_users[user_id] = []
# Add to the logged-on user subject classification count.
if subject_id not in logged_on_subject_dict:
logged_on_subject_dict[subject_id] = 1
else:
logged_on_subject_dict[subject_id] += 1
else:
# For non-logged in users, use the User IP hash and UNIX timestamp.
user_id = row[2] + ":%d" % (time_stamp_sec)
# Add the user to the non-logged-on user list.
if user_id not in non_logged_on_users:
non_logged_on_users[user_id] = []
# Add to the non-logged-on user subject classification count.
if subject_id not in non_logged_on_subject_dict:
non_logged_on_subject_dict[subject_id] = 1
else:
non_logged_on_subject_dict[subject_id] += 1
## The annotation ID (should be unique!).
anno_id = "%s-%s" % (user_id, subject_id)
#
#lg.info(" *--> Annotation ID: %s" % (anno_id))
## The annotation.
anno = row[10]
# Add the annotation to the dictionary.
if anno_id in anno_dict:
lg.error(" * ERROR!")
lg.error(" * User ID: %s" % (user_id))
lg.error(" * Subject: %s" % (subject_id))
raise IOError("* ERROR: The same user has classified the same subject twice!")
else:
anno_dict[anno_id] = anno
lg.info(" *")
## The header information.
lg.info(" *--------------------")
lg.info(" * HEADER INFORMATION ")
lg.info(" *--------------------")
for i, field_name in enumerate(headers):
lg.info(" * %02d: '%s'" % (i, field_name))
lg.info(" *")
#=========================================================================
# Write out the new annotation file.
#=========================================================================
#
# We want to write out a new, skimmed set of annotations ready to be
# processed by other scripts in this repo.
## The annotation file string (for writing).
annos = ""
# Loop over the annotations and write them to the output string.
for anno_id, anno in anno_dict.items():
annos += "%s,%s\n" % (anno_id, anno)
## The skimmed CSV filename.
skimmed_csv_filename = os.path.join(outputpath, "annotations.csv")
#
with open(skimmed_csv_filename, "w") as sf:
sf.write(annos)
#=========================================================================
# User summary information.
#=========================================================================
#
# We also want to know a bit about the users who made the classifications.
## The number of unique logged-on users.
num_logged_on_users = len(logged_on_users)
## The number of unique non-logged-on users.
num_non_logged_on_users = len(non_logged_on_users)
lg.info(" *")
lg.info(" *------------------")
lg.info(" * USER INFORMATION ")
lg.info(" *------------------")
lg.info(" *")
lg.info(" * Number of unique logged-on users: % 6d" % (num_logged_on_users))
lg.info(" * Number of unique non-logged-on users: % 6d" % (num_non_logged_on_users))
lg.info(" *")
## The number of subjects classified.
num_subjects = len(subject_dict)
lg.info(" *----------------------------")
lg.info(" * CLASSIFICATION INFORMATION ")
lg.info(" *----------------------------")
lg.info(" *")
lg.info(" * Number of subjects classified: % 6d" % (num_subjects))
lg.info(" *")
## A count of the number of classifications.
total_classifications = 0
## A count of the classifications by logged-on users.
tot_logged = 0
## A count of the classifications by non-logged-on users.
tot_non_logged = 0
lg.info(" * Number of classifications per subject:")
lg.info(" *")
# Produce an ordered CSV file of the number of classifications
# per subject (all users, logged-on users, non-logged-on users).
## The string for the CSV file content (headers provided).
subjects_vs_classifications = "subject_id,total,by_logged_on_users,by_non_logged_on_users\n"
for i, sub_id in enumerate(sorted(subject_dict.keys())):
total_classifications += subject_dict[sub_id]
if sub_id in logged_on_subject_dict:
l_s = logged_on_subject_dict[sub_id]
tot_logged += l_s
else:
l_s = 0
if sub_id in non_logged_on_subject_dict:
n_s = non_logged_on_subject_dict[sub_id]
tot_non_logged += n_s
else:
n_s = 0
lg.info(" *--> %s: % 6d (% 3d + %3d)" % (sub_id, subject_dict[sub_id], l_s, n_s))
subjects_vs_classifications += "%s,%d,%d,%d" % (sub_id, subject_dict[sub_id], l_s, n_s)
if i < len(subject_dict) - 1:
subjects_vs_classifications += "\n"
#
subjects_vs_classifications_filename = os.path.join(outputpath, "subjects.csv")
#
with open(subjects_vs_classifications_filename, "w") as sf:
sf.write(subjects_vs_classifications)
lg.info(" *")
lg.info(" * Total (count) : % 6d" % (total_classifications))
lg.info(" * Total (from annotations) : % 6d" % (len(anno_dict)))
lg.info(" * Total (logged-on, non-logged-on) : % 6d (%d + %d)" % (tot_logged + tot_non_logged, tot_logged, tot_non_logged))
lg.info(" *")
lg.info(" * For the skimmed annotations, see : '%s'" % (skimmed_csv_filename))
lg.info(" * For the classifications per subject, see : '%s'" % (subjects_vs_classifications_filename))
lg.info(" *") | unknown | codeparrot/codeparrot-clean | ||
//// [tests/cases/compiler/allowSyntheticDefaultImports7.ts] ////
//// [b.d.ts]
export function foo();
export function bar();
//// [a.ts]
import { default as Foo } from "./b";
Foo.bar();
Foo.foo();
//// [a.js]
System.register(["./b"], function (exports_1, context_1) {
"use strict";
var b_1;
var __moduleName = context_1 && context_1.id;
return {
setters: [
function (b_1_1) {
b_1 = b_1_1;
}
],
execute: function () {
b_1.default.bar();
b_1.default.foo();
}
};
}); | javascript | github | https://github.com/microsoft/TypeScript | tests/baselines/reference/allowSyntheticDefaultImports7.js |
import * as evaluator from "../../_namespaces/evaluator.js";
import * as ts from "../../_namespaces/ts.js";
describe("unittests:: evaluation:: forAwaitOfEvaluation", () => {
it("sync (es5)", async () => {
const result = evaluator.evaluateTypeScript(
`
let i = 0;
const iterator: IterableIterator<any> = {
[Symbol.iterator]() { return this; },
next() {
switch (i++) {
case 0: return { value: 1, done: false };
case 1: return { value: Promise.resolve(2), done: false };
case 2: return { value: new Promise<number>(resolve => setTimeout(resolve, 100, 3)), done: false };
default: return { value: undefined, done: true };
}
}
};
export const output: any[] = [];
export async function main() {
for await (const item of iterator) {
output.push(item);
}
}`,
{ downlevelIteration: true },
);
await result.main();
assert.strictEqual(result.output[0], 1);
assert.strictEqual(result.output[1], 2);
assert.strictEqual(result.output[2], 3);
});
it("sync (es2015)", async () => {
const result = evaluator.evaluateTypeScript(
`
let i = 0;
const iterator: IterableIterator<any> = {
[Symbol.iterator]() { return this; },
next() {
switch (i++) {
case 0: return { value: 1, done: false };
case 1: return { value: Promise.resolve(2), done: false };
case 2: return { value: new Promise<number>(resolve => setTimeout(resolve, 100, 3)), done: false };
default: return { value: undefined, done: true };
}
}
};
export const output: any[] = [];
export async function main() {
for await (const item of iterator) {
output.push(item);
}
}`,
{ target: ts.ScriptTarget.ES2015 },
);
await result.main();
assert.strictEqual(result.output[0], 1);
assert.strictEqual(result.output[1], 2);
assert.strictEqual(result.output[2], 3);
});
it("async (es5)", async () => {
const result = evaluator.evaluateTypeScript(
`
let i = 0;
const iterator = {
[Symbol.asyncIterator](): AsyncIterableIterator<any> { return this; },
async next() {
switch (i++) {
case 0: return { value: 1, done: false };
case 1: return { value: Promise.resolve(2), done: false };
case 2: return { value: new Promise<number>(resolve => setTimeout(resolve, 100, 3)), done: false };
default: return { value: undefined, done: true };
}
}
};
export const output: any[] = [];
export async function main() {
for await (const item of iterator) {
output.push(item);
}
}`,
{ downlevelIteration: true },
);
await result.main();
assert.strictEqual(result.output[0], 1);
assert.instanceOf(result.output[1], Promise);
assert.instanceOf(result.output[2], Promise);
});
it("async (es2015)", async () => {
const result = evaluator.evaluateTypeScript(
`
let i = 0;
const iterator = {
[Symbol.asyncIterator](): AsyncIterableIterator<any> { return this; },
async next() {
switch (i++) {
case 0: return { value: 1, done: false };
case 1: return { value: Promise.resolve(2), done: false };
case 2: return { value: new Promise<number>(resolve => setTimeout(resolve, 100, 3)), done: false };
default: return { value: undefined, done: true };
}
}
};
export const output: any[] = [];
export async function main() {
for await (const item of iterator) {
output.push(item);
}
}`,
{ target: ts.ScriptTarget.ES2015 },
);
await result.main();
assert.strictEqual(result.output[0], 1);
assert.instanceOf(result.output[1], Promise);
assert.instanceOf(result.output[2], Promise);
});
it("call return when user code return (es2015)", async () => {
const result = evaluator.evaluateTypeScript(
`
let returnCalled = false;
async function f() {
const iterator = {
[Symbol.asyncIterator](): AsyncIterableIterator<any> { return this; },
async next() {
return { value: undefined, done: false };
},
async return() {
returnCalled = true;
}
};
for await (const item of iterator) {
return;
}
}
export async function main() {
try { await f(); } catch { }
return returnCalled;
}
`,
{ target: ts.ScriptTarget.ES2015 },
);
assert.isTrue(await result.main());
});
it("call return when user code break (es2015)", async () => {
const result = evaluator.evaluateTypeScript(
`
let returnCalled = false;
async function f() {
const iterator = {
[Symbol.asyncIterator](): AsyncIterableIterator<any> { return this; },
async next() {
return { value: undefined, done: false };
},
async return() {
returnCalled = true;
}
};
for await (const item of iterator) {
break;
}
}
export async function main() {
try { await f(); } catch { }
return returnCalled;
}
`,
{ target: ts.ScriptTarget.ES2015 },
);
assert.isTrue(await result.main());
});
it("call return when user code throws (es2015)", async () => {
const result = evaluator.evaluateTypeScript(
`
let returnCalled = false;
async function f() {
const iterator = {
[Symbol.asyncIterator](): AsyncIterableIterator<any> { return this; },
async next() {
return { value: undefined, done: false };
},
async return() {
returnCalled = true;
}
};
for await (const item of iterator) {
throw new Error();
}
}
export async function main() {
try { await f(); } catch { }
return returnCalled;
}
`,
{ target: ts.ScriptTarget.ES2015 },
);
assert.isTrue(await result.main());
});
it("don't call return when non-user code throws (es2015)", async () => {
const result = evaluator.evaluateTypeScript(
`
let returnCalled = false;
async function f() {
let i = 0;
const iterator = {
[Symbol.asyncIterator](): AsyncIterableIterator<any> { return this; },
async next() {
i++;
if (i < 2) return { value: undefined, done: false };
throw new Error();
},
async return() {
returnCalled = true;
}
};
for await (const item of iterator) {
}
}
export async function main() {
try { await f(); } catch { }
return returnCalled;
}
`,
{ target: ts.ScriptTarget.ES2015 },
);
assert.isFalse(await result.main());
});
it("don't call return when user code continue (es2015)", async () => {
const result = evaluator.evaluateTypeScript(
`
let returnCalled = false;
async function f() {
let i = 0;
const iterator = {
[Symbol.asyncIterator](): AsyncIterableIterator<any> { return this; },
async next() {
i++;
if (i < 2) return { value: undefined, done: false };
throw new Error();
},
async return() {
returnCalled = true;
}
};
for await (const item of iterator) {
continue;
}
}
export async function main() {
try { await f(); } catch { }
return returnCalled;
}
`,
{ target: ts.ScriptTarget.ES2015 },
);
assert.isFalse(await result.main());
});
it("don't call return when user code continue to local label (es2015)", async () => {
const result = evaluator.evaluateTypeScript(
`
let returnCalled = false;
async function f() {
let i = 0;
const iterator = {
[Symbol.asyncIterator](): AsyncIterableIterator<any> { return this; },
async next() {
i++;
if (i < 2) return { value: undefined, done: false };
throw new Error();
},
async return() {
returnCalled = true;
}
};
outerLoop:
for (const outerItem of [1, 2, 3]) {
innerLoop:
for await (const item of iterator) {
continue innerLoop;
}
}
}
export async function main() {
try { await f(); } catch { }
return returnCalled;
}
`,
{ target: ts.ScriptTarget.ES2015 },
);
assert.isFalse(await result.main());
});
it("call return when user code continue to non-local label (es2015)", async () => {
const result = evaluator.evaluateTypeScript(
`
let returnCalled = false;
async function f() {
let i = 0;
const iterator = {
[Symbol.asyncIterator](): AsyncIterableIterator<any> { return this; },
async next() {
i++;
if (i < 2) return { value: undefined, done: false };
return { value: undefined, done: true };
},
async return() {
returnCalled = true;
}
};
outerLoop:
for (const outerItem of [1, 2, 3]) {
innerLoop:
for await (const item of iterator) {
continue outerLoop;
}
}
}
export async function main() {
try { await f(); } catch { }
return returnCalled;
}
`,
{ target: ts.ScriptTarget.ES2015 },
);
assert.isTrue(await result.main());
});
}); | typescript | github | https://github.com/microsoft/TypeScript | src/testRunner/unittests/evaluation/forAwaitOf.ts |
"""cx_Freeze extension
Extends:
- the 'build' command with the 'exe-command' option to allow using a
different command from 'build_exe' to build executables from Python scripts.
- the 'install' command with the 'skip-sub-commands' option to allow not
running a set of sub commands, e.g.:
install --skip-sub-commands=install_lib,install_scripts,install_data
- the 'bdist_msi' command to handle LaunchAfterInstall and a clean uninstall.
"""
import distutils.command.build
import sys
import os
from cx_Freeze.dist import build as cx_build
from cx_Freeze.dist import install as cx_install
from cx_Freeze.dist import setup as cx_setup
from cx_Freeze.dist import _AddCommandClass
class build(cx_build):
cx_build.user_options.append(
('exe-command=', None, "Python script executables command"))
def initialize_options(self):
cx_build.initialize_options(self)
self.exe_command = 'build_exe'
def get_sub_commands(self):
subCommands = distutils.command.build.build.get_sub_commands(self)
if self.distribution.executables:
subCommands.append(self.exe_command)
return subCommands
class install(cx_install):
cx_install.user_options.append(
('skip-sub-commands=', None,
"sub commands to ignore when running command"))
def initialize_options(self):
cx_install.initialize_options(self)
self.skip_sub_commands = None
def get_sub_commands(self):
subCommands = cx_install.get_sub_commands(self)
if self.skip_sub_commands:
skip_sub_commands = self.skip_sub_commands.split(',')
for cmd in skip_sub_commands:
if cmd in subCommands:
subCommands.remove(cmd)
return subCommands
if sys.platform == 'win32':
# PyDialog and get_platform are used below; import their modules explicitly.
import distutils.command.bdist_msi
import distutils.util
from cx_Freeze.windist import bdist_msi as cx_bdist_msi
class bdist_msi(cx_bdist_msi):
attribs = None
def finalize_options(self):
self.distribution.get_name()
if self.initial_target_dir is None:
if distutils.util.get_platform() == "win-amd64":
programFilesFolder = "ProgramFiles64Folder"
else:
programFilesFolder = "ProgramFilesFolder"
self.initial_target_dir = r"[%s]\%s" % (programFilesFolder,
self.attribs.get_install_dir())
# Using old style class so can't use super
import cx_Freeze
cx_Freeze.windist.bdist_msi.finalize_options(self)
def get_executable(self):
return self.attribs.get_win_targetName()
def get_license(self):
return self.attribs.get_licence()
def add_licence_dialog(self):
import msilib
msilib.add_data(self.db, 'InstallUISequence',
[("LicenceDialog", None, 380)])
dialog = distutils.command.bdist_msi.PyDialog(self.db,
"LicenceDialog",
self.x, self.y, self.width, self.height, self.modal,
self.title, "Next", "Next", "Cancel")
dialog.text("LicenseTitle", 15, 10, 320, 20, 0x3, "License")
dialog.control("License", "ScrollableText",
15, 30, 340, 200, 0x7, None,
self.get_license(), None, None)
dialog.control("LicenseAccepted", "CheckBox",
15, 240, 320, 20, 0x3,
"LicenseAccepted",
"I've accepted this agreement", None, None)
button = dialog.cancel("Cancel", "Next")
button.event("EndDialog", "Exit")
button = dialog.next("Next", "Cancel", active=False)
button.condition("Enable", "LicenseAccepted")
button.condition("Disable", "not LicenseAccepted")
button.event("EndDialog", "Return")
def add_exit_dialog(self):
import msilib
if self.get_license() is not None:
self.add_licence_dialog()
dialog = distutils.command.bdist_msi.PyDialog(self.db,
"ExitDialog",
self.x, self.y, self.width, self.height, self.modal,
self.title, "Finish", "Finish", "Finish")
dialog.title("Completing the [ProductName]")
dialog.back("< Back", "Finish", active=False)
dialog.cancel("Cancel", "Back", active=False)
dialog.text("Description", 15, 235, 320, 20, 0x30003,
"Click the Finish button to exit the installer.")
button = dialog.next("Finish", "Cancel", name="Finish")
button.event("EndDialog", "Return")
msilib.add_data(self.db, "Property",
[("StartClient", "1")])
# Launch product checkbox
c = dialog.control("LaunchAfterInstall", "CheckBox",
15, 200, 320, 20, 0x3,
"StartClient", "Launch [ProductName]",
None, None)
c.condition("Hide", 'Progress1<>"Install"')
# Custom action type 82 = 18 (run an .exe installed with the product) + 64 (continue on failure)
msilib.add_data(self.db, "CustomAction", [("LaunchNuxeoDrive", 82,
"launcher.exe",
self.get_executable())])
button.event("DoAction", "LaunchNuxeoDrive",
'StartClient=1 and Progress1="Install"')
msilib.add_data(self.db, "CustomAction", [("NuxeoDriveCleanUp", 82,
self.get_executable(),
"uninstall")])
# Deferred action with NoImpersonate to run with the correct privileges
msilib.add_data(self.db, "CustomAction", [("NuxeoDriveFolderCleanUp", 3234,
"TARGETDIR",
"cmd.exe /C \"rmdir /S /Q appdata\"")])
msilib.add_data(self.db, "InstallExecuteSequence",
[("NuxeoDriveCleanUp",
'REMOVE="ALL" AND NOT UPGRADINGPRODUCTCODE',
1260)])
# After InstallInitialize
msilib.add_data(self.db, "InstallExecuteSequence",
[("NuxeoDriveFolderCleanUp",
'REMOVE="ALL" AND NOT UPGRADINGPRODUCTCODE',
1560)])
# Add product icon
icon_file = os.path.join(self.attribs.get_icons_home(), self.attribs.get_win_icon())
if os.path.exists(icon_file):
msilib.add_data(self.db, "Property", [("ARPPRODUCTICON", "InstallIcon")])
msilib.add_data(self.db, "Icon", [("InstallIcon", msilib.Binary(icon_file))])
# Allow to customize the MSI
if getattr(self.attribs, 'customize_msi', None) is not None:
self.attribs.customize_msi(self.db)
# Override cx_Freeze setup to override build and install commands.
def setup(**attrs):
commandClasses = attrs.setdefault("cmdclass", {})
_AddCommandClass(commandClasses, "build", build)
_AddCommandClass(commandClasses, "install", install)
if sys.platform == 'win32':
bdist_msi.attribs = attrs.get("attribs")
_AddCommandClass(commandClasses, "bdist_msi", bdist_msi)
cx_setup(**attrs) | unknown | codeparrot/codeparrot-clean | ||
# Typical run:
# C:\home\eric\wrk\scipy\weave\examples>python fibonacci.py
# Recursively computing the first 30 fibonacci numbers:
# speed in python: 4.31599998474
# speed in c: 0.0499999523163
# speed up: 86.32
# Looping to compute the first 30 fibonacci numbers:
# speed in python: 0.000520999908447
# speed in c: 5.00000715256e-005
# speed up: 10.42
# fib(30) 832040 832040 832040 832040
from __future__ import absolute_import, print_function
import sys
sys.path.insert(0,'..')
import ext_tools
def build_fibonacci():
""" Builds an extension module with fibonacci calculators.
"""
mod = ext_tools.ext_module('fibonacci_ext')
a = 1 # this is effectively a type declaration
# recursive fibonacci in C
fib_code = """
int fib1(int a)
{
if(a <= 2)
return 1;
else
return fib1(a-2) + fib1(a-1);
}
"""
ext_code = """
return_val = fib1(a);
"""
fib = ext_tools.ext_function('c_fib1',ext_code,['a'])
fib.customize.add_support_code(fib_code)
mod.add_function(fib)
# looping fibonacci in C
fib_code = """
int fib2( int a )
{
int last, next_to_last, result;
if( a <= 2 )
return 1;
last = next_to_last = 1;
for(int i = 2; i < a; i++ )
{
result = last + next_to_last;
next_to_last = last;
last = result;
}
return result;
}
"""
ext_code = """
return_val = fib2(a);
"""
fib = ext_tools.ext_function('c_fib2',ext_code,['a'])
fib.customize.add_support_code(fib_code)
mod.add_function(fib)
mod.compile()
try:
import fibonacci_ext
except ImportError:
build_fibonacci()
import fibonacci_ext
c_fib1 = fibonacci_ext.c_fib1
c_fib2 = fibonacci_ext.c_fib2
#################################################################
# This is where it might normally end, but we've added some timings
# below. Recursive solutions are much slower, and C is 10-50x faster
# than the equivalent Python for this simple little routine.
#
#################################################################
def py_fib1(a):
if a <= 2:
return 1
else:
return py_fib1(a-2) + py_fib1(a-1)
def py_fib2(a):
if a <= 2:
return 1
last = next_to_last = 1
for i in range(2,a):
result = last + next_to_last
next_to_last = last
last = result
return result
import time
def recurse_compare(n):
print('Recursively computing the first %d fibonacci numbers:' % n)
t1 = time.time()
for i in range(n):
py_fib1(i)
t2 = time.time()
py = t2 - t1
print(' speed in python:', t2 - t1)
# load into cache
c_fib1(i)
t1 = time.time()
for i in range(n):
c_fib1(i)
t2 = time.time()
print(' speed in c:',t2 - t1)
print(' speed up: %3.2f' % (py/(t2-t1)))
def loop_compare(m,n):
print('Looping to compute the first %d fibonacci numbers:' % n)
t1 = time.time()
for i in range(m):
for i in range(n):
py_fib2(i)
t2 = time.time()
py = (t2-t1)
print(' speed in python:', (t2 - t1)/m)
# load into cache
c_fib2(i)
t1 = time.time()
for i in range(m):
for i in range(n):
c_fib2(i)
t2 = time.time()
print(' speed in c:',(t2 - t1) / m)
print(' speed up: %3.2f' % (py/(t2-t1)))
if __name__ == "__main__":
n = 30
recurse_compare(n)
m = 1000
loop_compare(m,n)
print('fib(30)', c_fib1(30),py_fib1(30),c_fib2(30),py_fib2(30)) | unknown | codeparrot/codeparrot-clean | ||
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import re
from .saferscanner import SaferScanner
class LexingError(Exception):
@classmethod
def from_text(cls, rulestr, unmatched, msg='Lexing error'):
bad_char = len(rulestr) - len(unmatched)
linenum = rulestr[:bad_char].count('\n') + 1
charnum = len(rulestr[:bad_char].rsplit('\n', 1)[-1]) + 1
snippet_start = max(0, min(len(rulestr), bad_char - 10))
snippet_end = max(0, min(len(rulestr), bad_char + 10))
msg += " (Error at: '...%s...')" % (rulestr[snippet_start:snippet_end],)
raise cls(linenum, charnum, msg)
def __init__(self, linenum, charnum, msg='Lexing error'):
self.linenum = linenum
self.charnum = charnum
self.msg = msg
self.args = (linenum, charnum, msg)
def __str__(self):
return '%s at line %d, char %d' % (self.msg, self.linenum, self.charnum)
class Hint:
def __init__(self, text):
self.text = text
def __hash__(self):
return hash((id(self.__class__), self.text))
def __eq__(self, other):
return isinstance(other, self.__class__) and other.text == self.text
def __repr__(self):
return '%s(%r)' % (self.__class__, self.text)
def is_hint(x):
return isinstance(x, Hint)
class ParseContext:
"""
These are meant to be immutable, although it would be something of a
pain to enforce that in python.
"""
def __init__(self, ruleset, bindings, matched, remainder, productionname):
self.ruleset = ruleset
self.bindings = bindings
self.matched = matched
self.remainder = remainder
self.productionname = productionname
def get_production_by_name(self, name):
return self.ruleset[name]
def get_completer(self, symname):
return self.ruleset[(self.productionname, symname)]
def get_binding(self, name, default=None):
return self.bindings.get(name, default)
def with_binding(self, name, val):
newbinds = self.bindings.copy()
newbinds[name] = val
return self.__class__(self.ruleset, newbinds, self.matched,
self.remainder, self.productionname)
def with_match(self, num):
return self.__class__(self.ruleset, self.bindings,
self.matched + self.remainder[:num],
self.remainder[num:], self.productionname)
def with_production_named(self, newname):
return self.__class__(self.ruleset, self.bindings, self.matched,
self.remainder, newname)
def extract_orig(self, tokens=None):
if tokens is None:
tokens = self.matched
if not tokens:
return ''
orig = self.bindings.get('*SRC*', None)
if orig is None:
# pretty much just guess
return ' '.join([t[1] for t in tokens])
# low end of span for first token, to high end of span for last token
orig_text = orig[tokens[0][2][0]:tokens[-1][2][1]]
# Convert all unicode tokens to ascii, where possible. This
# helps avoid problems with performing unicode-incompatible
# operations on tokens (like .lower()). See CASSANDRA-9083
# for one example of this.
try:
orig_text = orig_text.encode('ascii')
except UnicodeEncodeError:
pass
return orig_text
def __repr__(self):
return '<%s matched=%r remainder=%r prodname=%r bindings=%r>' \
% (self.__class__.__name__, self.matched, self.remainder, self.productionname, self.bindings)
class matcher:
def __init__(self, arg):
self.arg = arg
def match(self, ctxt, completions):
raise NotImplementedError
def match_with_results(self, ctxt, completions):
matched_before = len(ctxt.matched)
newctxts = self.match(ctxt, completions)
return [(newctxt, newctxt.matched[matched_before:]) for newctxt in newctxts]
@staticmethod
def try_registered_completion(ctxt, symname, completions):
debugging = ctxt.get_binding('*DEBUG*', False)
if ctxt.remainder or completions is None:
return False
try:
completer = ctxt.get_completer(symname)
except KeyError:
return False
if debugging:
print "Trying completer %r with %r" % (completer, ctxt)
try:
new_compls = completer(ctxt)
except Exception:
if debugging:
import traceback
traceback.print_exc()
return False
if debugging:
print "got %r" % (new_compls,)
completions.update(new_compls)
return True
def __repr__(self):
return '%s(%r)' % (self.__class__.__name__, self.arg)
class choice(matcher):
def match(self, ctxt, completions):
foundctxts = []
for a in self.arg:
subctxts = a.match(ctxt, completions)
foundctxts.extend(subctxts)
return foundctxts
class one_or_none(matcher):
def match(self, ctxt, completions):
return [ctxt] + list(self.arg.match(ctxt, completions))
class repeat(matcher):
def match(self, ctxt, completions):
found = [ctxt]
ctxts = [ctxt]
while True:
new_ctxts = []
for c in ctxts:
new_ctxts.extend(self.arg.match(c, completions))
if not new_ctxts:
return found
found.extend(new_ctxts)
ctxts = new_ctxts
class rule_reference(matcher):
def match(self, ctxt, completions):
prevname = ctxt.productionname
try:
rule = ctxt.get_production_by_name(self.arg)
except KeyError:
raise ValueError("Can't look up production rule named %r" % (self.arg,))
output = rule.match(ctxt.with_production_named(self.arg), completions)
return [c.with_production_named(prevname) for c in output]
class rule_series(matcher):
def match(self, ctxt, completions):
ctxts = [ctxt]
for patpiece in self.arg:
new_ctxts = []
for c in ctxts:
new_ctxts.extend(patpiece.match(c, completions))
if not new_ctxts:
return ()
ctxts = new_ctxts
return ctxts
class named_symbol(matcher):
def __init__(self, name, arg):
matcher.__init__(self, arg)
self.name = name
def match(self, ctxt, completions):
pass_in_compls = completions
if self.try_registered_completion(ctxt, self.name, completions):
# don't collect other completions under this; use a dummy
pass_in_compls = set()
results = self.arg.match_with_results(ctxt, pass_in_compls)
return [c.with_binding(self.name, ctxt.extract_orig(matchtoks)) for (c, matchtoks) in results]
def __repr__(self):
return '%s(%r, %r)' % (self.__class__.__name__, self.name, self.arg)
class named_collector(named_symbol):
def match(self, ctxt, completions):
pass_in_compls = completions
if self.try_registered_completion(ctxt, self.name, completions):
# don't collect other completions under this; use a dummy
pass_in_compls = set()
output = []
for ctxt, matchtoks in self.arg.match_with_results(ctxt, pass_in_compls):
oldval = ctxt.get_binding(self.name, ())
output.append(ctxt.with_binding(self.name, oldval + (ctxt.extract_orig(matchtoks),)))
return output
class terminal_matcher(matcher):
def pattern(self):
raise NotImplementedError
class regex_rule(terminal_matcher):
def __init__(self, pat):
terminal_matcher.__init__(self, pat)
self.regex = pat
self.re = re.compile(pat + '$', re.I | re.S)
def match(self, ctxt, completions):
if ctxt.remainder:
if self.re.match(ctxt.remainder[0][1]):
return [ctxt.with_match(1)]
elif completions is not None:
completions.add(Hint('<%s>' % ctxt.productionname))
return []
def pattern(self):
return self.regex
class text_match(terminal_matcher):
alpha_re = re.compile(r'[a-zA-Z]')
def __init__(self, text):
try:
terminal_matcher.__init__(self, eval(text))
except SyntaxError:
print "bad syntax %r" % (text,)
def match(self, ctxt, completions):
if ctxt.remainder:
if self.arg.lower() == ctxt.remainder[0][1].lower():
return [ctxt.with_match(1)]
elif completions is not None:
completions.add(self.arg)
return []
def pattern(self):
# can't use (?i) here; Scanner component regex flags won't be applied
def ignorecaseify(matchobj):
c = matchobj.group(0)
return '[%s%s]' % (c.upper(), c.lower())
return self.alpha_re.sub(ignorecaseify, re.escape(self.arg))
class case_match(text_match):
def match(self, ctxt, completions):
if ctxt.remainder:
if self.arg == ctxt.remainder[0][1]:
return [ctxt.with_match(1)]
elif completions is not None:
completions.add(self.arg)
return []
def pattern(self):
return re.escape(self.arg)
class word_match(text_match):
def pattern(self):
return r'\b' + text_match.pattern(self) + r'\b'
class case_word_match(case_match):
def pattern(self):
return r'\b' + case_match.pattern(self) + r'\b'
class terminal_type_matcher(matcher):
def __init__(self, tokentype, submatcher):
matcher.__init__(self, tokentype)
self.tokentype = tokentype
self.submatcher = submatcher
def match(self, ctxt, completions):
if ctxt.remainder:
if ctxt.remainder[0][0] == self.tokentype:
return [ctxt.with_match(1)]
elif completions is not None:
self.submatcher.match(ctxt, completions)
return []
def __repr__(self):
return '%s(%r, %r)' % (self.__class__.__name__, self.tokentype, self.submatcher)
class ParsingRuleSet:
RuleSpecScanner = SaferScanner([
(r'::=', lambda s,t: t),
(r'\[[a-z0-9_]+\]=', lambda s,t: ('named_collector', t[1:-2])),
(r'[a-z0-9_]+=', lambda s,t: ('named_symbol', t[:-1])),
(r'/(\[\^?.[^]]*\]|[^/]|\\.)*/', lambda s,t: ('regex', t[1:-1].replace(r'\/', '/'))),
(r'"([^"]|\\.)*"', lambda s,t: ('litstring', t)),
(r'<[^>]*>', lambda s,t: ('reference', t[1:-1])),
(r'\bJUNK\b', lambda s,t: ('junk', t)),
(r'[@()|?*;]', lambda s,t: t),
(r'\s+', None),
(r'#[^\n]*', None),
], re.I | re.S)
def __init__(self):
self.ruleset = {}
self.scanner = None
self.terminals = []
@classmethod
def from_rule_defs(cls, rule_defs):
prs = cls()
prs.ruleset, prs.terminals = cls.parse_rules(rule_defs)
return prs
@classmethod
def parse_rules(cls, rulestr):
tokens, unmatched = cls.RuleSpecScanner.scan(rulestr)
if unmatched:
raise LexingError.from_text(rulestr, unmatched, msg="Syntax rules unparseable")
rules = {}
terminals = []
tokeniter = iter(tokens)
for t in tokeniter:
if isinstance(t, tuple) and t[0] in ('reference', 'junk'):
assign = tokeniter.next()
if assign != '::=':
raise ValueError('Unexpected token %r; expected "::="' % (assign,))
name = t[1]
production = cls.read_rule_tokens_until(';', tokeniter)
if isinstance(production, terminal_matcher):
terminals.append((name, production))
production = terminal_type_matcher(name, production)
rules[name] = production
else:
raise ValueError('Unexpected token %r; expected name' % (t,))
return rules, terminals
@staticmethod
def mkrule(pieces):
if isinstance(pieces, (tuple, list)):
if len(pieces) == 1:
return pieces[0]
return rule_series(pieces)
return pieces
@classmethod
def read_rule_tokens_until(cls, endtoks, tokeniter):
if isinstance(endtoks, basestring):
endtoks = (endtoks,)
counttarget = None
if isinstance(endtoks, int):
counttarget = endtoks
endtoks = ()
countsofar = 0
myrules = []
mybranches = [myrules]
for t in tokeniter:
countsofar += 1
if t in endtoks:
if len(mybranches) == 1:
return cls.mkrule(mybranches[0])
return choice(map(cls.mkrule, mybranches))
if isinstance(t, tuple):
if t[0] == 'reference':
t = rule_reference(t[1])
elif t[0] == 'litstring':
if t[1][1].isalnum() or t[1][1] == '_':
t = word_match(t[1])
else:
t = text_match(t[1])
elif t[0] == 'regex':
t = regex_rule(t[1])
elif t[0] == 'named_collector':
t = named_collector(t[1], cls.read_rule_tokens_until(1, tokeniter))
elif t[0] == 'named_symbol':
t = named_symbol(t[1], cls.read_rule_tokens_until(1, tokeniter))
elif t == '(':
t = cls.read_rule_tokens_until(')', tokeniter)
elif t == '?':
t = one_or_none(myrules.pop(-1))
elif t == '*':
t = repeat(myrules.pop(-1))
elif t == '@':
x = tokeniter.next()
if not isinstance(x, tuple) or x[0] != 'litstring':
raise ValueError("Unexpected token %r following '@'" % (x,))
t = case_match(x[1])
elif t == '|':
myrules = []
mybranches.append(myrules)
continue
else:
raise ValueError('Unparseable rule token %r after %r' % (t, myrules[-1]))
myrules.append(t)
if countsofar == counttarget:
if len(mybranches) == 1:
return cls.mkrule(mybranches[0])
return choice(map(cls.mkrule, mybranches))
raise ValueError('Unexpected end of rule tokens')
def append_rules(self, rulestr):
rules, terminals = self.parse_rules(rulestr)
self.ruleset.update(rules)
self.terminals.extend(terminals)
if terminals:
self.scanner = None # recreate it if/when necessary
def register_completer(self, func, rulename, symname):
self.ruleset[(rulename, symname)] = func
def make_lexer(self):
def make_handler(name):
if name == 'JUNK':
return None
return lambda s, t: (name, t, s.match.span())
regexes = [(p.pattern(), make_handler(name)) for (name, p) in self.terminals]
return SaferScanner(regexes, re.I | re.S).scan
def lex(self, text):
if self.scanner is None:
self.scanner = self.make_lexer()
tokens, unmatched = self.scanner(text)
if unmatched:
raise LexingError.from_text(text, unmatched, 'text could not be lexed')
return tokens
def parse(self, startsymbol, tokens, init_bindings=None):
if init_bindings is None:
init_bindings = {}
ctxt = ParseContext(self.ruleset, init_bindings, (), tuple(tokens), startsymbol)
pattern = self.ruleset[startsymbol]
return pattern.match(ctxt, None)
def whole_match(self, startsymbol, tokens, srcstr=None):
bindings = {}
if srcstr is not None:
bindings['*SRC*'] = srcstr
for c in self.parse(startsymbol, tokens, init_bindings=bindings):
if not c.remainder:
return c
def lex_and_parse(self, text, startsymbol='Start'):
return self.parse(startsymbol, self.lex(text), init_bindings={'*SRC*': text})
def lex_and_whole_match(self, text, startsymbol='Start'):
tokens = self.lex(text)
return self.whole_match(startsymbol, tokens, srcstr=text)
def complete(self, startsymbol, tokens, init_bindings=None):
if init_bindings is None:
init_bindings = {}
ctxt = ParseContext(self.ruleset, init_bindings, (), tuple(tokens), startsymbol)
pattern = self.ruleset[startsymbol]
if init_bindings.get('*DEBUG*', False):
completions = Debugotron(stream=sys.stderr)
else:
completions = set()
pattern.match(ctxt, completions)
return completions
import sys, traceback
class Debugotron(set):
depth = 10
def __init__(self, initializer=(), stream=sys.stdout):
set.__init__(self, initializer)
self.stream = stream
def add(self, item):
self._note_addition(item)
set.add(self, item)
def _note_addition(self, foo):
self.stream.write("\nitem %r added by:\n" % (foo,))
frame = sys._getframe().f_back.f_back
for i in range(self.depth):
name = frame.f_code.co_name
filename = frame.f_code.co_filename
lineno = frame.f_lineno
if 'self' in frame.f_locals:
clsobj = frame.f_locals['self']
line = '%s.%s() (%s:%d)' % (clsobj, name, filename, lineno)
else:
line = '%s (%s:%d)' % (name, filename, lineno)
self.stream.write(' - %s\n' % (line,))
if i == 0 and 'ctxt' in frame.f_locals:
self.stream.write(' - %s\n' % (frame.f_locals['ctxt'],))
frame = frame.f_back
def update(self, items):
if items:
self._note_addition(items)
set.update(self, items) | unknown | codeparrot/codeparrot-clean | ||
import { bench } from 'vitest'
import { segment } from './segment'
const values = [
['hover:focus:underline', ':'],
['var(--a, 0 0 1px rgb(0, 0, 0)), 0 0 1px rgb(0, 0, 0)', ','],
['var(--some-value,env(safe-area-inset-top,var(--some-other-value,env(safe-area-inset))))', ','],
]
bench('segment', () => {
for (let [value, sep] of values) {
segment(value, sep)
}
}) | typescript | github | https://github.com/tailwindlabs/tailwindcss | packages/tailwindcss/src/utils/segment.bench.ts |
prelude: |
class A0
def method_missing(m); m end
end
class A1
def method_missing(m, a) a; end
end
class S
def method_missing(m, *a) a; end
end
class B
def method_missing(m, kw: 1) kw end
end
class SB
def method_missing(m, *a, kw: 1) kw end
end
t0 = 0.times.to_a
t1 = 1.times.to_a
t10 = 10.times.to_a
t200 = 200.times.to_a
kw = {kw: 2}
a0 = A0.new
a1 = A1.new
s = S.new
b = B.new
sb = SB.new
benchmark:
method_missing_simple_0: |
a0.()
method_missing_simple_1: |
a1.x(1)
method_missing_simple_0_splat: |
a0.(*t0)
method_missing_simple_1_splat: |
a1.(*t1)
method_missing_no_splat: |
s.()
method_missing_0_splat: |
s.(*t0)
method_missing_1_splat: |
s.(*t1)
method_missing_10_splat: |
s.(*t10)
method_missing_200_splat: |
s.(*t200)
method_missing_kw: |
b.(kw: 1)
method_missing_no_kw: |
b.()
method_missing_kw_splat: |
b.(**kw)
method_missing_0_splat_kw: |
sb.(*t0, **kw)
method_missing_1_splat_kw: |
sb.(*t1, **kw)
method_missing_10_splat_kw: |
sb.(*t10, **kw)
method_missing_200_splat_kw: |
sb.(*t200, **kw)
loop_count: 1000000 | unknown | github | https://github.com/ruby/ruby | benchmark/vm_call_method_missing.yml |
/* Copyright 2017 - 2025 R. Thomas
* Copyright 2017 - 2025 Quarkslab
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef LIEF_DEX_DEOPT_TYPES_H
#define LIEF_DEX_DEOPT_TYPES_H
#include <cstdint>
#include <unordered_map>
namespace LIEF {
namespace DEX {
class Class;
class Method;
// Method Index: {dex_pc: index, ...}
using dex2dex_method_info_t = std::unordered_map<uint32_t, uint32_t>;
using dex2dex_class_info_t = std::unordered_map<Method*, dex2dex_method_info_t>;
using dex2dex_info_t = std::unordered_map<Class*, dex2dex_class_info_t>;
}
}
#endif | unknown | github | https://github.com/nodejs/node | deps/LIEF/include/LIEF/DEX/deopt.hpp |
#!/usr/bin/env bash
# Copyright 2023 The Cockroach Authors.
#
# Use of this software is governed by the CockroachDB Software License
# included in the /LICENSE file.
set -euxo pipefail
gcloud secrets versions access 2 --secret=engflow-mesolite-key > /home/agent/engflow.key
gcloud secrets versions access 2 --secret=engflow-mesolite-crt > /home/agent/engflow.crt | unknown | github | https://github.com/cockroachdb/cockroach | build/github/get-engflow-keys.sh |
# -*- coding: utf-8 -*-
"""
***************************************************************************
GridDataMetrics.py
---------------------
Date : October 2013
Copyright : (C) 2013 by Alexander Bruy
Email : alexander dot bruy at gmail dot com
***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************
"""
__author__ = 'Alexander Bruy'
__date__ = 'October 2013'
__copyright__ = '(C) 2013, Alexander Bruy'
# This will get replaced with a git SHA1 when you do a git archive
__revision__ = '$Format:%H$'
import os
from processing.algs.gdal.GdalAlgorithm import GdalAlgorithm
from processing.core.parameters import ParameterVector
from processing.core.parameters import ParameterTableField
from processing.core.parameters import ParameterSelection
from processing.core.parameters import ParameterNumber
from processing.core.outputs import OutputRaster
from processing.algs.gdal.GdalUtils import GdalUtils
class GridDataMetrics(GdalAlgorithm):
INPUT = 'INPUT'
Z_FIELD = 'Z_FIELD'
METRIC = 'METRIC'
RADIUS_1 = 'RADIUS_1'
RADIUS_2 = 'RADIUS_2'
MIN_POINTS = 'MIN_POINTS'
ANGLE = 'ANGLE'
NODATA = 'NODATA'
OUTPUT = 'OUTPUT'
RTYPE = 'RTYPE'
TYPE = ['Byte', 'Int16', 'UInt16', 'UInt32', 'Int32', 'Float32', 'Float64']
DATA_METRICS = ['Minimum', 'Maximum', 'Range', 'Count', 'Average distance',
'Average distance between points']
def commandLineName(self):
return "gdalogr:griddatametrics"
def defineCharacteristics(self):
self.name = 'Grid (Data metrics)'
self.group = '[GDAL] Analysis'
self.addParameter(ParameterVector(self.INPUT,
self.tr('Input layer'), [ParameterVector.VECTOR_TYPE_POINT]))
self.addParameter(ParameterTableField(self.Z_FIELD,
self.tr('Z field'), self.INPUT,
ParameterTableField.DATA_TYPE_NUMBER, True))
self.addParameter(ParameterSelection(self.METRIC,
self.tr('Metrics'), self.DATA_METRICS, 0))
self.addParameter(ParameterNumber(self.RADIUS_1,
self.tr('Radius 1'), 0.0, 99999999.999999, 0.0))
self.addParameter(ParameterNumber(self.RADIUS_2,
self.tr('Radius 2'), 0.0, 99999999.999999, 0.0))
self.addParameter(ParameterNumber(self.MIN_POINTS,
self.tr('Min points'), 0.0, 99999999.999999, 0.0))
self.addParameter(ParameterNumber(self.ANGLE,
self.tr('Angle'), 0.0, 359.0, 0.0))
self.addParameter(ParameterNumber(self.NODATA,
self.tr('Nodata'), 0.0, 99999999.999999, 0.0))
self.addParameter(ParameterSelection(self.RTYPE,
self.tr('Output raster type'), self.TYPE, 5))
self.addOutput(OutputRaster(self.OUTPUT, self.tr('Output file')))
def processAlgorithm(self, progress):
arguments = ['-l']
arguments.append(
os.path.basename(os.path.splitext(
unicode(self.getParameterValue(self.INPUT)))[0]))
fieldName = self.getParameterValue(self.Z_FIELD)
if fieldName is not None and fieldName != '':
arguments.append('-zfield')
arguments.append(fieldName)
metric = self.getParameterValue(self.METRIC)
if metric == 0:
params = 'minimum'
elif metric == 1:
params = 'maximum'
elif metric == 2:
params = 'range'
elif metric == 3:
params = 'count'
elif metric == 4:
params = 'average_distance'
elif metric == 5:
params = 'average_distance_pts'
params += ':radius1=%s' % self.getParameterValue(self.RADIUS_1)
params += ':radius2=%s' % self.getParameterValue(self.RADIUS_2)
params += ':angle=%s' % self.getParameterValue(self.ANGLE)
params += ':min_points=%s' % self.getParameterValue(self.MIN_POINTS)
params += ':nodata=%s' % self.getParameterValue(self.NODATA)
arguments.append('-a')
arguments.append(params)
arguments.append('-ot')
arguments.append(self.TYPE[self.getParameterValue(self.RTYPE)])
arguments.append(unicode(self.getParameterValue(self.INPUT)))
arguments.append(unicode(self.getOutputValue(self.OUTPUT)))
GdalUtils.runGdal(['gdal_grid',
GdalUtils.escapeAndJoin(arguments)], progress) | unknown | codeparrot/codeparrot-clean | ||
import {identity} from 'shared-runtime';
function Foo() {
const CONSTANT = 1;
const x = {
foo() {
return identity(CONSTANT);
},
};
return x.foo();
}
export const FIXTURE_ENTRYPOINT = {
fn: Foo,
params: [{}],
}; | javascript | github | https://github.com/facebook/react | compiler/packages/babel-plugin-react-compiler/src/__tests__/fixtures/compiler/constant-prop-to-object-method.js |
import util from 'util';
import {Readable} from 'stream';
import utils from "../utils.js";
import readBlob from "./readBlob.js";
import platform from "../platform/index.js";
const BOUNDARY_ALPHABET = platform.ALPHABET.ALPHA_DIGIT + '-_';
const textEncoder = typeof TextEncoder === 'function' ? new TextEncoder() : new util.TextEncoder();
const CRLF = '\r\n';
const CRLF_BYTES = textEncoder.encode(CRLF);
const CRLF_BYTES_COUNT = 2;
class FormDataPart {
constructor(name, value) {
const {escapeName} = this.constructor;
const isStringValue = utils.isString(value);
let headers = `Content-Disposition: form-data; name="${escapeName(name)}"${
!isStringValue && value.name ? `; filename="${escapeName(value.name)}"` : ''
}${CRLF}`;
if (isStringValue) {
value = textEncoder.encode(String(value).replace(/\r?\n|\r\n?/g, CRLF));
} else {
headers += `Content-Type: ${value.type || "application/octet-stream"}${CRLF}`
}
this.headers = textEncoder.encode(headers + CRLF);
this.contentLength = isStringValue ? value.byteLength : value.size;
this.size = this.headers.byteLength + this.contentLength + CRLF_BYTES_COUNT;
this.name = name;
this.value = value;
}
async *encode(){
yield this.headers;
const {value} = this;
if(utils.isTypedArray(value)) {
yield value;
} else {
yield* readBlob(value);
}
yield CRLF_BYTES;
}
static escapeName(name) {
return String(name).replace(/[\r\n"]/g, (match) => ({
'\r' : '%0D',
'\n' : '%0A',
'"' : '%22',
}[match]));
}
}
const formDataToStream = (form, headersHandler, options) => {
const {
tag = 'form-data-boundary',
size = 25,
boundary = tag + '-' + platform.generateString(size, BOUNDARY_ALPHABET)
} = options || {};
if(!utils.isFormData(form)) {
throw TypeError('FormData instance required');
}
if (boundary.length < 1 || boundary.length > 70) {
    throw Error('boundary must be 1-70 characters long')
}
const boundaryBytes = textEncoder.encode('--' + boundary + CRLF);
const footerBytes = textEncoder.encode('--' + boundary + '--' + CRLF);
let contentLength = footerBytes.byteLength;
const parts = Array.from(form.entries()).map(([name, value]) => {
const part = new FormDataPart(name, value);
contentLength += part.size;
return part;
});
contentLength += boundaryBytes.byteLength * parts.length;
contentLength = utils.toFiniteNumber(contentLength);
const computedHeaders = {
'Content-Type': `multipart/form-data; boundary=${boundary}`
}
if (Number.isFinite(contentLength)) {
computedHeaders['Content-Length'] = contentLength;
}
headersHandler && headersHandler(computedHeaders);
return Readable.from((async function *() {
for(const part of parts) {
yield boundaryBytes;
yield* part.encode();
}
yield footerBytes;
})());
};
export default formDataToStream; | javascript | github | https://github.com/axios/axios | lib/helpers/formDataToStream.js |
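`formDataToStream` above serializes each part as a boundary line, a header block, a blank line, the payload, and a trailing CRLF, then appends a `--boundary--` footer. The layout can be sketched in a few lines (a simplified text-only version; real bodies are bytes and carry more headers):

```python
CRLF = "\r\n"

def encode_multipart(fields, boundary):
    # Text-only sketch of the multipart/form-data layout emitted by
    # formDataToStream: per part a "--boundary" line, a
    # Content-Disposition header, a blank line, the value, then CRLF;
    # finally the closing "--boundary--" footer.
    out = []
    for name, value in fields:
        out.append("--" + boundary + CRLF)
        out.append('Content-Disposition: form-data; name="%s"' % name + CRLF + CRLF)
        out.append(value + CRLF)
    out.append("--" + boundary + "--" + CRLF)
    return "".join(out)

body = encode_multipart([("a", "1"), ("b", "2")], "xyz")
print(body)
```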
from django.conf import settings
from django.core.urlresolvers import reverse
from django.db import models
from django.db.models import F, Max, Min
from django.db.transaction import atomic
from django.contrib.contenttypes.models import ContentType
from django_comments.models import Comment
def max_thread_level_for_content_type(content_type):
app_model = "%s.%s" % (content_type.app_label, content_type.model)
if app_model in settings.COMMENTS_MAX_THREAD_LEVEL_BY_APP_MODEL:
return settings.COMMENTS_MAX_THREAD_LEVEL_BY_APP_MODEL[app_model]
else:
return settings.COMMENTS_MAX_THREAD_LEVEL
class MaxThreadLevelExceededException(Exception):
def __init__(self, content_type=None):
self.max_by_app = max_thread_level_for_content_type(content_type)
def __str__(self):
        return "Cannot post comments over the thread level {}".format(self.max_by_app)
class ThreadedCommentManager(models.Manager):
def for_app_models(self, *args):
"""Return ThreadedComments for pairs "app.model" given in args"""
content_types = []
for app_model in args:
app, model = app_model.split(".")
content_types.append(ContentType.objects.get(app_label=app,
model=model))
return self.for_content_types(content_types)
def for_content_types(self, content_types):
qs = self.get_queryset().filter(content_type__in=content_types).reverse()
return qs
class ThreadedComment(Comment):
title = models.CharField(max_length=255, blank=True)
thread_id = models.IntegerField(default=0, db_index=True)
parent_id = models.IntegerField(default=0)
level = models.SmallIntegerField(default=0)
order = models.IntegerField(default=1, db_index=True)
objects = ThreadedCommentManager()
class Meta:
ordering = ('-thread_id', 'order')
verbose_name = 'comment'
def save(self, *args, **kwargs):
is_new = self.pk is None
super().save(*args, **kwargs)
if is_new:
if not self.parent_id:
self.parent_id = self.id
self.thread_id = self.id
else:
if max_thread_level_for_content_type(self.content_type):
with atomic():
self._calculate_thread_data()
else:
raise MaxThreadLevelExceededException(self.content_type)
kwargs["force_insert"] = False
super().save(*args, **kwargs)
def get_absolute_url(self):
if self.content_type.name == 'project':
project = ThreadedComment.get_project_for_comment(self)
return reverse('projects:comments-detail', kwargs={
'pk': project.pk,
'slug': project.slug,
'comment_pk': self.pk
})
return super().get_absolute_url()
def _calculate_thread_data(self):
# Implements the following approach:
# http://www.sqlteam.com/article/sql-for-threaded-discussion-forums
parent = ThreadedComment.objects.get(pk=self.parent_id)
if parent.level == max_thread_level_for_content_type(self.content_type):
raise MaxThreadLevelExceededException(self.content_type)
self.thread_id = parent.thread_id
self.level = parent.level + 1
qc_eq_thread = ThreadedComment.objects.filter(thread_id=parent.thread_id)
qc_ge_level = qc_eq_thread.filter(level__lte=parent.level,
order__gt=parent.order)
if qc_ge_level.count():
min_order = qc_ge_level.aggregate(Min('order'))['order__min']
ThreadedComment.objects.filter(thread_id=parent.thread_id, order__gte=min_order).update(order=F('order')+1)
self.order = min_order
else:
max_order = qc_eq_thread.aggregate(Max('order'))['order__max']
self.order = max_order + 1
def allow_thread(self):
if self.level < max_thread_level_for_content_type(self.content_type):
return True
else:
return False
@classmethod
def get_project_for_comment(cls, instance):
content_type = instance.content_type
# Comment on a project
if content_type.model == 'project':
return instance.content_object
# Comment on a document/biblio
if hasattr(instance.content_object, 'project'):
return instance.content_object.project
# Replies
return ThreadedComment.get_project_for_comment(instance.content_object)
def get_edit_url(self):
project = ThreadedComment.get_project_for_comment(self)
return reverse('projects:comments-edit', kwargs={
'pk': project.pk,
'slug': project.slug,
'comment_pk': self.pk
})
def get_delete_url(self):
project = ThreadedComment.get_project_for_comment(self)
return reverse('projects:comments-delete', kwargs={
'pk': project.pk,
'slug': project.slug,
'comment_pk': self.pk
})
@property
def display_type(self):
if self.is_reply():
return 'Reply'
return 'Comment'
def is_reply(self):
if self.content_type.name.lower() == 'comment':
return True
return False | unknown | codeparrot/codeparrot-clean | ||
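`_calculate_thread_data` above implements the threaded-forum ordering scheme linked in the code: a reply is slotted immediately before the next comment at or above its parent's level, shifting later `order` values up by one. A toy in-memory version (hypothetical helper, same shifting logic as the `F('order') + 1` update):

```python
def insert_reply(comments, parent_idx):
    # comments: one thread, as dicts with 'order' and 'level', kept
    # sorted by 'order'. Mirrors ThreadedComment._calculate_thread_data.
    parent = comments[parent_idx]
    later_peers = [c["order"] for c in comments
                   if c["order"] > parent["order"] and c["level"] <= parent["level"]]
    if later_peers:
        new_order = min(later_peers)
        for c in comments:          # make room for the reply
            if c["order"] >= new_order:
                c["order"] += 1
    else:
        new_order = max(c["order"] for c in comments) + 1
    comments.append({"order": new_order, "level": parent["level"] + 1})
    comments.sort(key=lambda c: c["order"])
    return comments

thread = [{"order": 1, "level": 0}, {"order": 2, "level": 1}, {"order": 3, "level": 0}]
insert_reply(thread, 0)  # reply to the first root comment
print([(c["order"], c["level"]) for c in thread])
# -> [(1, 0), (2, 1), (3, 1), (4, 0)]
```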
######################################################################
# Copyright (C) 2011,2012,2014 Jaakko Luttinen
#
# This file is licensed under Version 3.0 of the GNU General Public
# License. See LICENSE for a text of the license.
######################################################################
######################################################################
# This file is part of BayesPy.
#
# BayesPy is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License version 3 as
# published by the Free Software Foundation.
#
# BayesPy is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with BayesPy. If not, see <http://www.gnu.org/licenses/>.
######################################################################
"""
Module for the categorical distribution node.
"""
import numpy as np
#from .expfamily import ExponentialFamily
#from .expfamily import ExponentialFamilyDistribution
from .expfamily import useconstructor
from .multinomial import (MultinomialMoments,
MultinomialDistribution,
Multinomial)
#from .dirichlet import Dirichlet, DirichletMoments
from .node import ensureparents
from bayespy.utils import random
from bayespy.utils import utils
class CategoricalMoments(MultinomialMoments):
"""
Class for the moments of categorical variables.
"""
def __init__(self, categories):
"""
Create moments object for categorical variables
"""
self.D = categories
super().__init__()
def compute_fixed_moments(self, x):
"""
Compute the moments for a fixed value
"""
# Check that x is valid
x = np.asanyarray(x)
if not utils.isinteger(x):
raise ValueError("Values must be integers")
if np.any(x < 0) or np.any(x >= self.D):
raise ValueError("Invalid category index")
u0 = np.zeros((np.size(x), self.D))
u0[[np.arange(np.size(x)), np.ravel(x)]] = 1
u0 = np.reshape(u0, np.shape(x) + (self.D,))
return [u0]
def compute_dims_from_values(self, x):
"""
Return the shape of the moments for a fixed value.
The observations are scalar.
"""
return ( (self.D,), )
class CategoricalDistribution(MultinomialDistribution):
"""
Class for the VMP formulas of categorical variables.
"""
def __init__(self, categories):
"""
Create VMP formula node for a categorical variable
`categories` is the total number of categories.
"""
if not isinstance(categories, int):
raise ValueError("Number of categories must be integer")
if categories < 0:
            raise ValueError("Number of categories must be non-negative")
self.D = categories
super().__init__(1)
def compute_message_to_parent(self, parent, index, u, u_p):
"""
Compute the message to a parent node.
"""
return super().compute_message_to_parent(parent, index, u, u_p)
def compute_phi_from_parents(self, u_p, mask=True):
"""
Compute the natural parameter vector given parent moments.
"""
return super().compute_phi_from_parents(u_p, mask=mask)
def compute_moments_and_cgf(self, phi, mask=True):
"""
Compute the moments and :math:`g(\phi)`.
"""
return super().compute_moments_and_cgf(phi, mask=mask)
def compute_cgf_from_parents(self, u_p):
"""
Compute :math:`\mathrm{E}_{q(p)}[g(p)]`
"""
return super().compute_cgf_from_parents(u_p)
def compute_fixed_moments_and_f(self, x, mask=True):
"""
Compute the moments and :math:`f(x)` for a fixed value.
"""
# Check the validity of x
x = np.asanyarray(x)
if not utils.isinteger(x):
raise ValueError("Values must be integers")
if np.any(x < 0) or np.any(x >= self.D):
raise ValueError("Invalid category index")
# Form a binary matrix with only one non-zero (1) in the last axis
u0 = np.zeros((np.size(x), self.D))
u0[[np.arange(np.size(x)), np.ravel(x)]] = 1
u0 = np.reshape(u0, np.shape(x) + (self.D,))
u = [u0]
# f(x) is zero
f = 0
return (u, f)
class Categorical(Multinomial):
"""
Node for categorical random variables.
"""
@classmethod
@ensureparents
def _constructor(cls, p, **kwargs):
"""
Constructs distribution and moments objects.
This method is called if useconstructor decorator is used for __init__.
        Because the distribution and moments objects depend on the number of
categories, that is, they depend on the parent node, this method can be
used to construct those objects.
"""
# Get the number of categories
D = p.dims[0][0]
parents = [p]
moments = CategoricalMoments(D)
distribution = CategoricalDistribution(D)
return (parents,
kwargs,
( (D,), ),
cls._total_plates(kwargs.get('plates'),
distribution.plates_from_parent(0, p.plates)),
distribution,
moments,
cls._parent_moments)
def random(self):
"""
Draw a random sample from the distribution.
"""
logp = self.phi[0]
logp -= np.amax(logp, axis=-1, keepdims=True)
p = np.exp(logp)
return random.categorical(p, size=self.plates)
def show(self):
"""
Print the distribution using standard parameterization.
"""
p = self.u[0]
print("%s ~ Categorical(p)" % self.name)
print(" p = ")
print(p) | unknown | codeparrot/codeparrot-clean | ||
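`CategoricalMoments.compute_fixed_moments` above turns integer category indices into one-hot vectors, the sufficient statistics of a categorical variable. The core trick, isolated (same fancy-indexing idiom as the original; the function name is just for illustration):

```python
import numpy as np

def one_hot(x, D):
    # Build a (..., D) array with a single 1 per entry of x, exactly as
    # compute_fixed_moments does with integer-array indexing.
    x = np.asanyarray(x)
    u0 = np.zeros((x.size, D))
    u0[np.arange(x.size), np.ravel(x)] = 1
    return u0.reshape(np.shape(x) + (D,))

print(one_hot([0, 2, 1], 3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```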
# Welcome to \Rails
## What's \Rails?
\Rails is a web application framework that includes everything needed to
create database-backed web applications according to the
[Model-View-Controller (MVC)](https://en.wikipedia.org/wiki/Model-view-controller)
pattern.
Understanding the MVC pattern is key to understanding \Rails. MVC divides your
application into three layers: Model, View, and Controller, each with a specific responsibility.
## Model layer
The _**Model layer**_ represents the domain model (such as Account, Product,
Person, Post, etc.) and encapsulates the business logic specific to
your application. In \Rails, database-backed model classes are derived from
`ActiveRecord::Base`. [Active Record](files/activerecord/README.rdoc) allows you to present the data from
database rows as objects and embellish these data objects with business logic
methods.
Although most \Rails models are backed by a database, models can also be ordinary
Ruby classes, or Ruby classes that implement a set of interfaces as provided by
the [Active Model](files/activemodel/README.rdoc) module.
## View layer
The _**View layer**_ is composed of "templates" that are responsible for providing
appropriate representations of your application's resources. Templates can
come in a variety of formats, but most view templates are HTML with embedded
Ruby code (\ERB files). Views are typically rendered to generate a controller response
or to generate the body of an email. In \Rails, View generation is handled by [Action View](files/actionview/README.rdoc).
## Controller layer
The _**Controller layer**_ is responsible for handling incoming HTTP requests and
providing a suitable response. Usually, this means returning HTML, but \Rails controllers
can also generate XML, JSON, PDFs, mobile-specific views, and more. Controllers load and
manipulate models, and render view templates in order to generate the appropriate HTTP response.
In \Rails, incoming requests are routed by Action Dispatch to an appropriate controller, and
controller classes are derived from `ActionController::Base`. Action Dispatch and Action Controller
are bundled together in [Action Pack](files/actionpack/README.rdoc).
## Frameworks and libraries
[Active Record](files/activerecord/README.rdoc), [Active Model](files/activemodel/README.rdoc), [Action Pack](files/actionpack/README.rdoc), and [Action View](files/actionview/README.rdoc) can each be used independently outside \Rails.
In addition to that, \Rails also comes with:
- [Action Mailer](files/actionmailer/README.rdoc), a library to generate and send emails
- [Action Mailbox](files/actionmailbox/README.md), a library to receive emails within a \Rails application
- [Active Job](files/activejob/README.md), a framework for declaring jobs and making them run on a variety of queuing backends
- [Action Cable](files/actioncable/README.md), a framework to integrate WebSockets with a \Rails application
- [Active Storage](files/activestorage/README.md), a library to attach cloud and local files to \Rails applications
- [Action Text](files/actiontext/README.md), a library to handle rich text content
- [Active Support](files/activesupport/README.rdoc), a collection of utility classes and standard library extensions that are useful for \Rails, and may also be used independently outside \Rails
## Getting Started
1. Install \Rails at the command prompt if you haven't yet:
$ gem install rails
2. At the command prompt, create a new \Rails application:
$ rails new myapp
where "myapp" is the application name.
3. Change directory to `myapp` and start the web server:
$ cd myapp
$ bin/rails server
Run with `--help` or `-h` for options.
4. Go to `http://localhost:3000` and you'll see the \Rails bootscreen with your
\Rails and Ruby versions.
5. Follow the guidelines to start developing your application. You may find the
following resources handy:
* [Getting Started with Rails](https://guides.rubyonrails.org/getting_started.html)
* [Ruby on Rails Guides](https://guides.rubyonrails.org)
* [The API Documentation](https://api.rubyonrails.org)
## Contributing
We encourage you to contribute to Ruby on \Rails! Please check out the
[Contributing to Ruby on Rails guide](https://edgeguides.rubyonrails.org/contributing_to_ruby_on_rails.html) for guidelines about how to proceed. [Join us!](https://contributors.rubyonrails.org)
Trying to report a possible security vulnerability in \Rails? Please
check out our [security policy](https://rubyonrails.org/security) for
guidelines about how to proceed.
Everyone interacting in \Rails and its sub-projects' codebases, issue trackers, chat rooms, and mailing lists is expected to follow the \Rails [code of conduct](https://rubyonrails.org/conduct).
## License
Ruby on \Rails is released under the [MIT License](https://opensource.org/licenses/MIT). | unknown | github | https://github.com/rails/rails | railties/RDOC_MAIN.md |
# (originally from https://github.com/tensorflow/models/tree/master/research/differential_privacy,
# possibly with some small edits by @corcra)
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Defines Accountant class for keeping track of privacy spending.
A privacy accountant keeps track of privacy spendings. It has methods
accumulate_privacy_spending and get_privacy_spent. Here we only define
AmortizedAccountant which tracks the privacy spending in the amortized
way. It uses privacy amplication via sampling to compute the privacy
spending for each batch and strong composition (specialized for Gaussian
noise) for accumulate the privacy spending.
"""
from __future__ import division
import abc
import collections
import math
import sys
import numpy
import tensorflow as tf
from differential_privacy.dp_sgd.dp_optimizer import utils
EpsDelta = collections.namedtuple("EpsDelta", ["spent_eps", "spent_delta"])
import pdb
# TODO(liqzhang) To ensure the same API for AmortizedAccountant and
# MomentsAccountant, we pass the union of arguments to both, so we
# have unused_sigma for AmortizedAccountant and unused_eps_delta for
# MomentsAccountant. Consider revising the API to avoid the unused
# arguments. It would be good to use @abc.abstractmethod, etc, to
# define the common interface as a base class.
class AmortizedAccountant(object):
"""Keep track of privacy spending in an amortized way.
AmortizedAccountant accumulates the privacy spending by assuming
all the examples are processed uniformly at random so the spending is
amortized among all the examples. And we assume that we use Gaussian noise
so the accumulation is on eps^2 and delta, using advanced composition.
"""
def __init__(self, total_examples):
"""Initialization. Currently only support amortized tracking.
Args:
total_examples: total number of examples.
"""
assert total_examples > 0
self._total_examples = total_examples
self._eps_squared_sum = tf.Variable(tf.zeros([1]), trainable=False,
name="eps_squared_sum")
self._delta_sum = tf.Variable(tf.zeros([1]), trainable=False,
name="delta_sum")
def accumulate_privacy_spending(self, eps_delta, unused_sigma,
num_examples):
"""Accumulate the privacy spending.
Currently only support approximate privacy. Here we assume we use Gaussian
noise on randomly sampled batch so we get better composition: 1. the per
    batch privacy is computed using privacy amplification via sampling bound;
2. the composition is done using the composition with Gaussian noise.
TODO(liqzhang) Add a link to a document that describes the bounds used.
Args:
eps_delta: EpsDelta pair which can be tensors.
unused_sigma: the noise sigma. Unused for this accountant.
num_examples: the number of examples involved.
Returns:
a TensorFlow operation for updating the privacy spending.
"""
eps, delta = eps_delta
with tf.control_dependencies(
[tf.Assert(tf.greater(delta, 0),
["delta needs to be greater than 0"])]):
amortize_ratio = (tf.cast(num_examples, tf.float32) * 1.0 /
self._total_examples)
# Use privacy amplification via sampling bound.
# See Lemma 2.2 in http://arxiv.org/pdf/1405.7085v2.pdf
# TODO(liqzhang) Add a link to a document with formal statement
# and proof.
amortize_eps = tf.reshape(tf.log(1.0 + amortize_ratio * (
tf.exp(eps) - 1.0)), [1])
amortize_delta = tf.reshape(amortize_ratio * delta, [1])
return tf.group(*[tf.assign_add(self._eps_squared_sum,
tf.square(amortize_eps)),
tf.assign_add(self._delta_sum, amortize_delta)])
def get_privacy_spent(self, sess, target_eps=None):
"""Report the spending so far.
Args:
sess: the session to run the tensor.
target_eps: the target epsilon. Unused.
Returns:
the list containing a single EpsDelta, with values as Python floats (as
opposed to numpy.float64). This is to be consistent with
MomentAccountant which can return a list of (eps, delta) pair.
"""
# pylint: disable=unused-argument
unused_target_eps = target_eps
eps_squared_sum, delta_sum = sess.run([self._eps_squared_sum,
self._delta_sum])
return [EpsDelta(math.sqrt(eps_squared_sum), float(delta_sum))]
class MomentsAccountant(object):
"""Privacy accountant which keeps track of moments of privacy loss.
Note: The constructor of this class creates tf.Variables that must
be initialized with tf.global_variables_initializer() or similar calls.
MomentsAccountant accumulates the high moments of the privacy loss. It
  requires a method for computing differential moments of the noise (See
below for the definition). So every specific accountant should subclass
this class by implementing _differential_moments method.
Denote by X_i the random variable of privacy loss at the i-th step.
Consider two databases D, D' which differ by one item. X_i takes value
log Pr[M(D')==x]/Pr[M(D)==x] with probability Pr[M(D)==x].
In MomentsAccountant, we keep track of y_i(L) = log E[exp(L X_i)] for some
large enough L. To compute the final privacy spending, we apply Chernoff
bound (assuming the random noise added at each step is independent) to
bound the total privacy loss Z = sum X_i as follows:
Pr[Z > e] = Pr[exp(L Z) > exp(L e)]
< E[exp(L Z)] / exp(L e)
= Prod_i E[exp(L X_i)] / exp(L e)
= exp(sum_i log E[exp(L X_i)]) / exp(L e)
= exp(sum_i y_i(L) - L e)
Hence the mechanism is (e, d)-differentially private for
d = exp(sum_i y_i(L) - L e).
We require d < 1, i.e. e > sum_i y_i(L) / L. We maintain y_i(L) for several
  L to compute the best d for any given e (normally should be the lowest L
  such that 2 * sum_i y_i(L) / L < e).
We further assume that at each step, the mechanism operates on a random
sample with sampling probability q = batch_size / total_examples. Then
E[exp(L X)] = E[(Pr[M(D)==x / Pr[M(D')==x])^L]
By distinguishing two cases of whether D < D' or D' < D, we have
that
E[exp(L X)] <= max (I1, I2)
where
I1 = (1-q) E ((1-q) + q P(X+1) / P(X))^L + q E ((1-q) + q P(X) / P(X-1))^L
I2 = E (P(X) / ((1-q) + q P(X+1)))^L
In order to compute I1 and I2, one can consider to
1. use an asymptotic bound, which recovers the advance composition theorem;
2. use the closed formula (like GaussianMomentsAccountant);
3. use numerical integration or random sample estimation.
  Depending on the distribution, we can often obtain a tighter estimation on
the moments and hence a more accurate estimation of the privacy loss than
obtained using generic composition theorems.
"""
__metaclass__ = abc.ABCMeta
def __init__(self, total_examples, moment_orders=32):
"""Initialize a MomentsAccountant.
Args:
total_examples: total number of examples.
moment_orders: the order of moments to keep.
"""
assert total_examples > 0
self._total_examples = total_examples
self._moment_orders = (moment_orders
if isinstance(moment_orders, (list, tuple))
else range(1, moment_orders + 1))
self._max_moment_order = max(self._moment_orders)
assert self._max_moment_order < 100, "The moment order is too large."
self._log_moments = [tf.Variable(numpy.float64(0.0),
trainable=False,
name=("log_moments-%d" % moment_order))
for moment_order in self._moment_orders]
@abc.abstractmethod
def _compute_log_moment(self, sigma, q, moment_order):
"""Compute high moment of privacy loss.
Args:
sigma: the noise sigma, in the multiples of the sensitivity.
q: the sampling ratio.
moment_order: the order of moment.
Returns:
log E[exp(moment_order * X)]
"""
pass
def accumulate_privacy_spending(self, unused_eps_delta,
sigma, num_examples):
"""Accumulate privacy spending.
In particular, accounts for privacy spending when we assume there
are num_examples, and we are releasing the vector
(sum_{i=1}^{num_examples} x_i) + Normal(0, stddev=l2norm_bound*sigma)
where l2norm_bound is the maximum l2_norm of each example x_i, and
the num_examples have been randomly selected out of a pool of
self.total_examples.
Args:
unused_eps_delta: EpsDelta pair which can be tensors. Unused
in this accountant.
sigma: the noise sigma, in the multiples of the sensitivity (that is,
if the l2norm sensitivity is k, then the caller must have added
Gaussian noise with stddev=k*sigma to the result of the query).
num_examples: the number of examples involved.
Returns:
a TensorFlow operation for updating the privacy spending.
"""
q = tf.cast(num_examples, tf.float64) * 1.0 / self._total_examples
moments_accum_ops = []
for i in range(len(self._log_moments)):
moment = self._compute_log_moment(sigma, q, self._moment_orders[i])
moments_accum_ops.append(tf.assign_add(self._log_moments[i], moment))
return tf.group(*moments_accum_ops)
def _compute_delta(self, log_moments, eps):
"""Compute delta for given log_moments and eps.
Args:
log_moments: the log moments of privacy loss, in the form of pairs
of (moment_order, log_moment)
eps: the target epsilon.
Returns:
delta
"""
min_delta = 1.0
for moment_order, log_moment in log_moments:
if math.isinf(log_moment) or math.isnan(log_moment):
      sys.stderr.write("The %d-th order is inf or NaN\n" % moment_order)
continue
if log_moment < moment_order * eps:
min_delta = min(min_delta,
math.exp(log_moment - moment_order * eps))
return min_delta
def _compute_eps(self, log_moments, delta):
min_eps = float("inf")
for moment_order, log_moment in log_moments:
if math.isinf(log_moment) or math.isnan(log_moment):
      sys.stderr.write("The %d-th order is inf or NaN\n" % moment_order)
continue
min_eps = min(min_eps, (log_moment - math.log(delta)) / moment_order)
return min_eps
def get_privacy_spent(self, sess, target_eps=None, target_deltas=None):
"""Compute privacy spending in (e, d)-DP form for a single or list of eps.
Args:
sess: the session to run the tensor.
target_eps: a list of target epsilon's for which we would like to
compute corresponding delta value.
target_deltas: a list of target deltas for which we would like to
compute the corresponding eps value. Caller must specify
either target_eps or target_delta.
Returns:
A list of EpsDelta pairs.
"""
assert (target_eps is None) ^ (target_deltas is None)
eps_deltas = []
log_moments = sess.run(self._log_moments)
log_moments_with_order = numpy.array(list(zip(self._moment_orders, log_moments)))
if target_eps is not None:
for eps in target_eps:
delta = self._compute_delta(log_moments_with_order, eps)
eps_deltas.append(EpsDelta(eps, delta))
else:
assert target_deltas
for delta in target_deltas:
eps_deltas.append(
EpsDelta(self._compute_eps(log_moments_with_order, delta), delta))
return eps_deltas
class GaussianMomentsAccountant(MomentsAccountant):
"""MomentsAccountant which assumes Gaussian noise.
GaussianMomentsAccountant assumes the noise added is centered Gaussian
noise N(0, sigma^2 I). In this case, we can compute the differential moments
accurately using a formula.
As an asymptotic bound, for Gaussian noise with variance sigma^2, we can show
that for L < sigma^2 and q L < sigma,
log E[exp(L X)] = O(q^2 L^2 / sigma^2).
Using this we derive that for training T epochs, with batch ratio q,
the Gaussian mechanism with variance sigma^2 (with q < 1/sigma) is (e, d)
private for d = exp(T/q q^2 L^2 / sigma^2 - L e). Setting L = sigma^2,
Tq = e/2, the mechanism is (e, exp(-e sigma^2/2))-DP. Equivalently, the
mechanism is (e, d)-DP if sigma = sqrt{2 log(1/d)}/e, q < 1/sigma,
and T < e/(2q). This bound is better than the bound obtained using general
composition theorems, by an Omega(sqrt{log k}) factor on epsilon, if we run
k steps. Since we use a direct estimate, the obtained privacy bound has a
tight constant.
For GaussianMomentsAccountant, it suffices to compute I1, as I1 >= I2, which
reduces to computing E[(P(x+s)/P(x+s-1) - 1)^i] for s = 0 and 1. In the
companion gaussian_moments.py file, we supply procedures for computing both
I1 and I2 (the computation of I2 goes through a multi-precision integration
package). It can be verified that I1 >= I2 for the wide range of parameters
we have tried, though at the moment we are unable to prove this claim.
We recommend that when using this accountant, users independently verify
using gaussian_moments.py that for their parameters, I1 is indeed larger
than I2. This can be done by following the instructions in
gaussian_moments.py.
"""
def __init__(self, total_examples, moment_orders=32):
"""Initialization.
Args:
total_examples: total number of examples.
moment_orders: the order of moments to keep.
"""
    # Name the class explicitly: super(self.__class__, ...) recurses
    # infinitely if this class is ever subclassed.
    super(GaussianMomentsAccountant, self).__init__(total_examples, moment_orders)
self._binomial_table = utils.GenerateBinomialTable(self._max_moment_order)
def _differential_moments(self, sigma, s, t):
"""Compute 0 to t-th differential moments for Gaussian variable.
E[(P(x+s)/P(x+s-1)-1)^t]
= sum_{i=0}^t (t choose i) (-1)^{t-i} E[(P(x+s)/P(x+s-1))^i]
= sum_{i=0}^t (t choose i) (-1)^{t-i} E[exp(-i*(2*x+2*s-1)/(2*sigma^2))]
= sum_{i=0}^t (t choose i) (-1)^{t-i} exp(i(i+1-2*s)/(2 sigma^2))
Args:
sigma: the noise sigma, in the multiples of the sensitivity.
s: the shift.
t: 0 to t-th moment.
Returns:
0 to t-th moment as a tensor of shape [t+1].
"""
assert t <= self._max_moment_order, ("The order of %d is out "
"of the upper bound %d."
% (t, self._max_moment_order))
binomial = tf.slice(self._binomial_table, [0, 0],
[t + 1, t + 1])
signs = numpy.zeros((t + 1, t + 1), dtype=numpy.float64)
for i in range(t + 1):
for j in range(t + 1):
signs[i, j] = 1.0 - 2 * ((i - j) % 2)
exponents = tf.constant([j * (j + 1.0 - 2.0 * s) / (2.0 * sigma * sigma)
for j in range(t + 1)], dtype=tf.float64)
# x[i, j] = binomial[i, j] * signs[i, j] = (i choose j) * (-1)^{i-j}
x = tf.multiply(binomial, signs)
# y[i, j] = x[i, j] * exp(exponents[j])
    #         = (i choose j) * (-1)^{i-j} * exp(j(j+1-2s)/(2 sigma^2))
# Note: this computation is done by broadcasting pointwise multiplication
# between [t+1, t+1] tensor and [t+1] tensor.
y = tf.multiply(x, tf.exp(exponents))
# z[i] = sum_j y[i, j]
    #      = sum_j (i choose j) * (-1)^{i-j} * exp(j(j+1-2s)/(2 sigma^2))
z = tf.reduce_sum(y, 1)
return z
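The closed-form sum computed above with a precomputed binomial table can be sanity-checked in plain Python. The following is an illustrative sketch of the same formula only, not the TensorFlow implementation; `math.comb` (Python 3.8+) stands in for `utils.GenerateBinomialTable`:

```python
import math

def differential_moments(sigma, s, t):
    # Closed form from the docstring:
    # E[(P(x+s)/P(x+s-1) - 1)^t]
    #   = sum_{i=0}^t (t choose i) (-1)^{t-i} exp(i (i + 1 - 2 s) / (2 sigma^2))
    return sum(math.comb(t, i) * (-1) ** (t - i)
               * math.exp(i * (i + 1 - 2 * s) / (2.0 * sigma * sigma))
               for i in range(t + 1))

# t = 0: the zeroth moment is exactly 1.
m0 = differential_moments(4.0, 0.0, 0)
# t = 1, s = 0: the sum collapses to exp(1 / sigma^2) - 1.
m1 = differential_moments(4.0, 0.0, 1)
```

The two spot checks follow directly from the formula, which makes this a cheap cross-check against the tensor version for small orders.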
def _compute_log_moment(self, sigma, q, moment_order):
"""Compute high moment of privacy loss.
Args:
sigma: the noise sigma, in the multiples of the sensitivity.
q: the sampling ratio.
moment_order: the order of moment.
Returns:
log E[exp(moment_order * X)]
"""
assert moment_order <= self._max_moment_order, ("The order of %d is out "
"of the upper bound %d."
% (moment_order,
self._max_moment_order))
binomial_table = tf.slice(self._binomial_table, [moment_order, 0],
[1, moment_order + 1])
# qs = [1 q q^2 ... q^L] = exp([0 1 2 ... L] * log(q))
qs = tf.exp(tf.constant([i * 1.0 for i in range(moment_order + 1)],
dtype=tf.float64) * tf.cast(
tf.log(q), dtype=tf.float64))
moments0 = self._differential_moments(sigma, 0.0, moment_order)
term0 = tf.reduce_sum(binomial_table * qs * moments0)
moments1 = self._differential_moments(sigma, 1.0, moment_order)
term1 = tf.reduce_sum(binomial_table * qs * moments1)
return tf.squeeze(tf.log(tf.cast(q * term0 + (1.0 - q) * term1,
tf.float64)))
class DummyAccountant(object):
"""An accountant that does no accounting."""
def accumulate_privacy_spending(self, *unused_args):
return tf.no_op()
def get_privacy_spent(self, unused_sess, **unused_kwargs):
    return [EpsDelta(numpy.inf, 1.0)]
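Outside a TensorFlow session, the eps/delta conversions above are ordinary floating-point math. This standalone sketch mirrors the `_compute_delta` / `_compute_eps` logic on hypothetical accumulated log moments (the values are illustrative, not from a real training run):

```python
import math

def compute_delta(log_moments, eps):
    # delta = min over moment orders of exp(log_moment - order * eps), capped at 1.
    min_delta = 1.0
    for order, log_moment in log_moments:
        if math.isinf(log_moment) or math.isnan(log_moment):
            continue
        if log_moment < order * eps:
            min_delta = min(min_delta, math.exp(log_moment - order * eps))
    return min_delta

def compute_eps(log_moments, delta):
    # eps = min over moment orders of (log_moment - log(delta)) / order.
    return min((log_moment - math.log(delta)) / order
               for order, log_moment in log_moments
               if not (math.isinf(log_moment) or math.isnan(log_moment)))

# Hypothetical (order, log E[exp(order * X)]) pairs -- illustrative only.
log_moments = [(1, 0.5), (2, 1.2), (4, 3.0)]
delta = compute_delta(log_moments, eps=2.0)
eps = compute_eps(log_moments, delta=1e-5)
```

Taking the minimum over orders is what makes maintaining several moments worthwhile: different orders give the tightest bound in different (eps, delta) regimes.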
#!/usr/bin/env python
import datetime
import os
import sys
def extract_datetime_from_line(line, year):
# Expected format: I0210 13:39:22.381027 25210 solver.cpp:204] Iteration 100, lr = 0.00992565
line = line.strip().split()
month = int(line[0][1:3])
day = int(line[0][3:])
timestamp = line[1]
pos = timestamp.rfind('.')
ts = [int(x) for x in timestamp[:pos].split(':')]
hour = ts[0]
minute = ts[1]
second = ts[2]
microsecond = int(timestamp[pos + 1:])
dt = datetime.datetime(year, month, day, hour, minute, second, microsecond)
return dt
def get_log_created_year(input_file):
"""Get year from log file system timestamp
"""
log_created_time = os.path.getctime(input_file)
log_created_year = datetime.datetime.fromtimestamp(log_created_time).year
return log_created_year
def get_start_time(line_iterable, year):
"""Find start time from group of lines
"""
start_datetime = None
for line in line_iterable:
line = line.strip()
if line.find('Solving') != -1:
start_datetime = extract_datetime_from_line(line, year)
break
return start_datetime
def extract_seconds(input_file, output_file):
with open(input_file, 'r') as f:
lines = f.readlines()
log_created_year = get_log_created_year(input_file)
start_datetime = get_start_time(lines, log_created_year)
assert start_datetime, 'Start time not found'
    with open(output_file, 'w') as out:
        for line in lines:
            line = line.strip()
            if 'Iteration' in line:
                dt = extract_datetime_from_line(line, log_created_year)
                elapsed_seconds = (dt - start_datetime).total_seconds()
                out.write('%f\n' % elapsed_seconds)
if __name__ == '__main__':
if len(sys.argv) < 3:
print('Usage: ./extract_seconds input_file output_file')
exit(1)
    extract_seconds(sys.argv[1], sys.argv[2])
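The timestamp parsing above can be exercised in isolation. This standalone sketch (same glog line format, hypothetical log line) mirrors `extract_datetime_from_line`:

```python
import datetime

def parse_glog_line(line, year):
    # Format: "I0210 13:39:22.381027 25210 solver.cpp:204] Iteration 100, ..."
    # Field 0 carries the severity letter, then MMDD; field 1 is HH:MM:SS.ffffff.
    fields = line.strip().split()
    month, day = int(fields[0][1:3]), int(fields[0][3:])
    time_part, frac = fields[1].split('.')
    hour, minute, second = (int(x) for x in time_part.split(':'))
    return datetime.datetime(year, month, day, hour, minute, second, int(frac))

dt = parse_glog_line(
    "I0210 13:39:22.381027 25210 solver.cpp:204] Iteration 100, lr = 0.00992565",
    2015)
# dt -> datetime.datetime(2015, 2, 10, 13, 39, 22, 381027)
```

The year must be supplied externally (here from the file's creation time) because glog lines do not record it.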
#ifndef RBIMPL_INTERN_FILE_H /*-*-C++-*-vi:se ft=cpp:*/
#define RBIMPL_INTERN_FILE_H
/**
* @file
* @author Ruby developers <ruby-core@ruby-lang.org>
* @copyright This file is a part of the programming language Ruby.
* Permission is hereby granted, to either redistribute and/or
* modify this file, provided that the conditions mentioned in the
* file COPYING are met. Consult the file for details.
* @warning Symbols prefixed with either `RBIMPL` or `rbimpl` are
* implementation details. Don't take them as canon. They could
* rapidly appear then vanish. The name (path) of this header file
* is also an implementation detail. Do not expect it to persist
* at the place it is now. Developers are free to move it anywhere
* anytime at will.
* @note To ruby-core: remember that this header can be possibly
* recursively included from extension libraries written in C++.
* Do not expect for instance `__VA_ARGS__` is always available.
* We assume C99 for ruby itself but we don't assume languages of
* extension libraries. They could be written in C++98.
* @brief Public APIs related to ::rb_cFile.
*/
#include "ruby/internal/attr/nonnull.h"
#include "ruby/internal/attr/pure.h"
#include "ruby/internal/dllexport.h"
#include "ruby/internal/value.h"
#if !defined RUBY_EXPORT && !defined RUBY_NO_OLD_COMPATIBILITY
# include "ruby/backward.h"
#endif
RBIMPL_SYMBOL_EXPORT_BEGIN()
/* file.c */
RBIMPL_ATTR_NONNULL(())
/**
* Identical to rb_file_expand_path(), except how arguments are passed.
*
* @param[in] argc Number of objects of `argv`.
* @param[in] argv Filename, and base directory, in that order.
* @exception rb_eArgError Wrong `argc`.
* @exception rb_eTypeError Non-string passed.
* @exception rb_eEncCompatError No conversion from arguments to a path.
* @return Expanded path.
*
* @internal
*
* It seems nobody actually uses this function right now. Maybe delete it?
*/
VALUE rb_file_s_expand_path(int argc, const VALUE *argv);
/**
* Identical to rb_file_absolute_path(), except it additionally understands
* `~`. If a given pathname starts with `~someone/`, that part expands to the
 * user's home directory (or that of the current process' owner in case of `~/`).
*
* @param[in] fname Relative file name.
* @param[in] dname Lookup base directory name, or in case
* ::RUBY_Qnil is passed the process' current
* working directory is assumed.
* @exception rb_eArgError Home directory is not absolute.
* @exception rb_eTypeError Non-string passed.
* @exception rb_eEncCompatError No conversion from arguments to a path.
* @return Expanded path.
*/
VALUE rb_file_expand_path(VALUE fname, VALUE dname);
RBIMPL_ATTR_NONNULL(())
/**
* Identical to rb_file_absolute_path(), except how arguments are passed.
*
* @param[in] argc Number of objects of `argv`.
* @param[in] argv Filename, and base directory, in that order.
* @exception rb_eArgError Wrong `argc`.
* @exception rb_eTypeError Non-string passed.
* @exception rb_eEncCompatError No conversion from arguments to a path.
* @return Expanded path.
*
* @internal
*
* It seems nobody actually uses this function right now. Maybe delete it?
*/
VALUE rb_file_s_absolute_path(int argc, const VALUE *argv);
/**
* Maps a relative path to its absolute representation. Relative paths are
* referenced from the passed directory name, or from the process' current
* working directory in case ::RUBY_Qnil is passed.
*
* @param[in] fname Relative file name.
* @param[in] dname Lookup base directory name, or in case
* ::RUBY_Qnil is passed the process' current
* working directory is assumed.
* @exception rb_eArgError Strings contain NUL bytes.
* @exception rb_eTypeError Non-string passed.
* @exception rb_eEncCompatError No conversion from arguments to a path.
* @return Expanded path.
*/
VALUE rb_file_absolute_path(VALUE fname, VALUE dname);
/**
* Strips a file path's last component (and trailing separators if any). This
 * function is relatively simple on POSIX environments: it splits the input on
 * `/`, strips the last component, and joins the rest if anything remains;
 * otherwise the return value is `"."`. On Windows, however, this function is
 * considerably more complicated; we have to take UNC paths, drive letters,
 * etc. into account. So for instance `"C:foo"`'s dirname is `"C:."`.
*
* @param[in] fname File name to strip.
* @exception rb_eTypeError `fname` is not a String.
* @exception rb_eArgError `fname` contains NUL bytes.
* @exception rb_eEncCompatError `fname`'s encoding is not path-compat.
* @return A dirname of `fname`.
* @note This is a "pure" operation; it computes the return value solely
* from the passed object and never does any file IO.
*/
VALUE rb_file_dirname(VALUE fname);
RBIMPL_ATTR_NONNULL(())
/**
* Resolves a feature's path. This function takes for instance `"json"` and
* `[".so", ".rb"]`, and iterates over the `$LOAD_PATH` to see if there is
* either `json.so` or `json.rb` in the directory.
*
 * This is not everything that `require` does, but at least `require` is built
* on top of it.
*
* @param[in,out] feature File to search, and return buffer.
* @param[in] exts List of file extensions.
* @exception rb_eTypeError `feature` is not a String.
* @exception rb_eArgError `feature` contains NUL bytes.
* @exception rb_eEncCompatError `feature`'s encoding is not path-compat.
* @retval 0 Not found
* @retval otherwise Found index in `ext`, plus one.
* @post `*feature` is a resolved path.
*/
int rb_find_file_ext(VALUE *feature, const char *const *exts);
/**
 * Identical to rb_find_file_ext(), except it takes a feature name and its
 * extension at once, e.g. `"json.rb"`. This difference is much like how
* `require` and `load` are different.
*
* @param[in] path A path relative to `$LOAD_PATH`.
* @exception rb_eTypeError `path` is not a String.
* @exception rb_eArgError `path` contains NUL bytes.
* @exception rb_eEncCompatError `path`'s encoding is not path-compat.
* @return Expanded path.
*/
VALUE rb_find_file(VALUE path);
/**
* Queries if the given path is either a directory, or a symlink that
* (potentially recursively) points to such thing.
*
* @param[in] _ Ignored (why...?)
* @param[in] path String, or IO. In case of IO it issues
* `fstat(2)` instead of `stat(2)`.
* @exception rb_eFrozenError `path` is a frozen IO (why...?)
* @exception rb_eTypeError `path` is neither String nor IO.
* @exception rb_eArgError `path` contains NUL bytes.
* @exception rb_eEncCompatError `path`'s encoding is not path-compat.
* @retval RUBY_Qtrue `path` is a directory.
* @retval RUBY_Qfalse Otherwise.
*/
VALUE rb_file_directory_p(VALUE _, VALUE path);
/**
* Converts a string into an "OS Path" encoding, if any. In most operating
 * systems there is no such thing as a per-OS default encoding of filenames;
 * for them this function is a no-op. However, most notably on macOS, pathnames
 * are UTF-8 encoded. This function converts the given string into that encoding.
*
* @param[in] path An instance of ::rb_cString.
* @exception rb_eEncCompatError `path`'s encoding is not path-compat.
* @return `path`'s contents converted to the OS' path encoding.
*/
VALUE rb_str_encode_ospath(VALUE path);
RBIMPL_ATTR_NONNULL(())
RBIMPL_ATTR_PURE()
/**
* Queries if the given path is an absolute path. On POSIX environments it is
* as easy as `path[0] == '/'`. However on Windows, drive letters and UNC
* paths are also taken into account.
*
* @param[in] path A possibly relative path string.
* @retval 1 `path` is absolute.
* @retval 0 `path` is relative.
*/
int rb_is_absolute_path(const char *path);
/**
* Queries the file size of the given file. Because this function calls
* `fstat(2)` internally, it is a failure to pass a closed file to this
* function.
*
* This function flushes the passed file's buffer if any. Can take time.
*
* @param[in] file A file object.
* @exception rb_eFrozenError `file` is frozen.
* @exception rb_eIOError `file` is closed.
* @exception rb_eSystemCallError Permission denied etc.
* @exception rb_eNoMethodError The given non-file object doesn't respond
* to `#size`.
* @return The size of the passed file.
* @note Passing a non-regular file such as a UNIX domain socket to this
* function is not a failure. But the return value is
* unpredictable. POSIX's `<sys/stat.h>` states that "the use of
* this field is unspecified" then.
*/
rb_off_t rb_file_size(VALUE file);
RBIMPL_SYMBOL_EXPORT_END()
#endif /* RBIMPL_INTERN_FILE_H */
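The POSIX branch of `rb_file_dirname` documented above can be sketched in a few lines. This is an illustrative Python approximation of the described algorithm, not the actual C implementation; the Windows drive-letter and UNC handling is deliberately omitted:

```python
def posix_dirname(path):
    # Sketch of the documented POSIX behavior: drop trailing separators,
    # split off the last component, and fall back to "." or "/".
    path = path.rstrip('/') or '/'
    if '/' not in path:
        return '.'
    head = path.rsplit('/', 1)[0]
    return head or '/'

posix_dirname('a/b/c')   # 'a/b'
posix_dirname('foo')     # '.'
posix_dirname('/usr/')   # '/'
```

As the header notes, this is a pure string operation: no `stat(2)` or other file IO is involved, so the result is the same whether or not the path exists.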
import os
import dj_database_url
### Basic config
BASE = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..'))
DEBUG = TEMPLATE_DEBUG = True
SITE_ID = 1
SECRET_KEY = 'its-a-secret-to-everybody'
# Until Sentry works on Py3, do errors the old-fashioned way.
ADMINS = []
# General project information
# These are available in the template as SITE_INFO.<title>
SITE_VARIABLES = {
'site_name': 'Python.org',
'site_descript': 'The official home of the Python Programming Language',
}
### Databases
DATABASES = {
'default': dj_database_url.config(default='postgres:///python.org')
}
### Locale settings
TIME_ZONE = 'UTC'
LANGUAGE_CODE = 'en-us'
USE_I18N = True
USE_L10N = True
USE_TZ = True
DATE_FORMAT = 'Y-m-j'
### Files (media and static)
MEDIA_ROOT = os.path.join(BASE, 'media')
MEDIA_URL = '/m/'
# Absolute path to the directory static files should be collected to.
# Don't put anything in this directory yourself; store your static files
# in apps' "static/" subdirectories and in STATICFILES_DIRS.
# Example: "/var/www/example.com/static/"
STATIC_ROOT = os.path.join(BASE, 'static-root')
STATIC_URL = '/static/'
STATICFILES_DIRS = [
os.path.join(BASE, 'static'),
]
STATICFILES_STORAGE = 'pipeline.storage.PipelineStorage'
### Authentication
AUTHENTICATION_BACKENDS = (
# Needed to login by username in Django admin, regardless of `allauth`
"django.contrib.auth.backends.ModelBackend",
# `allauth` specific authentication methods, such as login by e-mail
"allauth.account.auth_backends.AuthenticationBackend",
)
LOGIN_REDIRECT_URL = 'home'
ACCOUNT_LOGOUT_REDIRECT_URL = 'home'
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_UNIQUE_EMAIL = True
ACCOUNT_EMAIL_VERIFICATION = 'mandatory'
SOCIALACCOUNT_EMAIL_REQUIRED = True
SOCIALACCOUNT_EMAIL_VERIFICATION = True
SOCIALACCOUNT_QUERY_EMAIL = True
### Templates
TEMPLATE_DIRS = [
os.path.join(BASE, 'templates')
]
TEMPLATE_CONTEXT_PROCESSORS = [
"django.contrib.auth.context_processors.auth",
"django.core.context_processors.debug",
"django.core.context_processors.i18n",
"django.core.context_processors.media",
"django.core.context_processors.static",
"django.core.context_processors.tz",
"django.core.context_processors.request",
"allauth.account.context_processors.account",
"allauth.socialaccount.context_processors.socialaccount",
"django.contrib.messages.context_processors.messages",
"pydotorg.context_processors.site_info",
"pydotorg.context_processors.url_name",
]
### URLs, WSGI, middleware, etc.
ROOT_URLCONF = 'pydotorg.urls'
MIDDLEWARE_CLASSES = (
'pydotorg.middleware.AdminNoCaching',
'django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'pages.middleware.PageFallbackMiddleware',
'django.contrib.redirects.middleware.RedirectFallbackMiddleware',
)
AUTH_USER_MODEL = 'users.User'
WSGI_APPLICATION = 'pydotorg.wsgi.application'
### Apps
INSTALLED_APPS = [
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.redirects',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.comments',
'django.contrib.admin',
'django.contrib.admindocs',
'django_comments_xtd',
'jsonfield',
'pipeline',
'sitetree',
'timedelta',
'imagekit',
'haystack',
'honeypot',
'users',
'boxes',
'cms',
'companies',
'feedbacks',
'community',
'jobs',
'pages',
'sponsors',
'successstories',
'events',
'minutes',
'peps',
'blogs',
'downloads',
'codesamples',
'allauth',
'allauth.account',
'allauth.socialaccount',
#'allauth.socialaccount.providers.facebook',
#'allauth.socialaccount.providers.github',
#'allauth.socialaccount.providers.openid',
#'allauth.socialaccount.providers.twitter',
# Tastypie needs the `users` app to be already loaded.
'tastypie',
]
# Fixtures
FIXTURE_DIRS = (
os.path.join(BASE, 'fixtures'),
)
### Testing
SKIP_NETWORK_TESTS = True
### Logging
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'filters': {
'require_debug_false': {
'()': 'django.utils.log.RequireDebugFalse'
}
},
'handlers': {
'mail_admins': {
'level': 'ERROR',
'filters': ['require_debug_false'],
'class': 'django.utils.log.AdminEmailHandler'
}
},
'loggers': {
'django.request': {
'handlers': ['mail_admins'],
'level': 'ERROR',
'propagate': True,
},
}
}
### Comments
COMMENTS_APP = 'django_comments_xtd'
COMMENTS_XTD_MAX_THREAD_LEVEL = 0
COMMENTS_XTD_FORM_CLASS = "jobs.forms.JobCommentForm"
### Honeypot
HONEYPOT_FIELD_NAME = 'email_body_text'
HONEYPOT_VALUE = 'write your message'
### Blog Feed URL
PYTHON_BLOG_FEED_URL = "http://feeds.feedburner.com/PythonInsider"
PYTHON_BLOG_URL = "http://blog.python.org"
### Registration mailing lists
MAILING_LIST_PSF_MEMBERS = "psf-members-announce-request@python.org"
### PEP Repo Location
PEP_REPO_PATH = ''
### Fastly ###
FASTLY_API_KEY = False # Set to Fastly API key in production to allow pages to
# be purged on save
# Jobs
JOB_THRESHOLD_DAYS = 90
### Pipeline
from .pipeline import (
PIPELINE_CSS, PIPELINE_JS,
PIPELINE_COMPILERS,
PIPELINE_SASS_BINARY, PIPELINE_SASS_ARGUMENTS,
PIPELINE_CSS_COMPRESSOR, PIPELINE_JS_COMPRESSOR,
)
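`dj_database_url.config` turns a database URL (the `DATABASE_URL` environment variable, or the `postgres:///python.org` default used above) into a Django `DATABASES` entry. A rough standalone sketch of that URL-to-dict mapping, assuming a `postgres://` scheme — the real library supports many more engines, query options, and defaults:

```python
from urllib.parse import urlparse

def parse_db_url(url):
    # Rough approximation of the mapping; engine name and handling of
    # missing components are simplified relative to dj_database_url.
    parts = urlparse(url)
    return {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': parts.path.lstrip('/'),
        'USER': parts.username or '',
        'PASSWORD': parts.password or '',
        'HOST': parts.hostname or '',
        'PORT': parts.port or '',
    }

# Hypothetical URL; the 'postgres:///python.org' default above would yield an
# empty HOST (local socket) and NAME 'python.org'.
cfg = parse_db_url('postgres://user:secret@db.example.com:5432/python_org')
```

Keeping the full connection spec in a single URL is what lets the same settings file run unchanged across environments.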