<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def wrap_node(self, node, options):
'''\
celery registers tasks by decorating them, and so do we, so the user
can pass a celery task and we'll wrap our code with theirs in a nice
package celery can execute.
'''
if 'celery_task' in options:
return options['celery_task'](node)
return self.celery_task(node) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def checkpoint(key=0, unpickler=pickle.load, pickler=pickle.dump, work_dir=gettempdir(), refresh=False):
""" A utility decorator to save intermediate results of a function. It is the caller's responsibility to specify a key naming scheme such that the output of each function call with different arguments is stored in a separate file. :param key: The key to store the computed intermediate output of the decorated function. if key is a string, it is used directly as the name. if key is a string.Template object, you can specify your file-naming convention using the standard string.Template conventions. Since string.Template uses named substitutions, it can handle only keyword arguments. Therfore, in addition to the standard Template conventions, an additional feature is provided to help with non-keyword arguments. For instance if you have a function definition as f(m, n, arg3='myarg3',arg4='myarg4'). Say you want your key to be: n followed by an _ followed by 'text' followed by arg3 followed by a . followed by arg4. Let n = 3, arg3='out', arg4='txt', then you are interested in getting '3_textout.txt'. This is written as key=Template('{1}_text$arg3.$arg4') The filename is first generated by substituting the kwargs, i.e key_id.substitute(kwargs), this would give the string '{1}_textout.txt' as output. This is further processed by a call to format with args as the argument, where the second argument is picked (since counting starts from 0), and we get 3_textout.txt. if key is a callable function, it is called with the same arguments as that of the function, in a special format. is an iterable containing the un-named arguments of the function, and kwarg is a dictionary containing the keyword arguments. For instance, the above example can be written as: key = lambda arg, kwarg: '%d_text%s.%s'.format(arg[1], kwarg['arg3'], kwarg['arg4']) Or one can define a function that takes the same arguments: def key_namer(args, kwargs):
return '%d_text%s.%s'.format(arg[1], kwarg['arg3'], kwarg['arg4']) This way you can do complex argument processing and name generation. :param pickler: The function that loads the saved object and returns. This should ideally be of the same format as the one that is computed. However, in certain cases, it is enough as long as it provides the information necessary for the caller, even if it is not exactly same as the object returned by the function. :param unpickler: The function that saves the computed object into a file. :param work_dir: The location where the checkpoint files are stored. :param do_refresh: If enabled, this will not skip, effectively disabling the decoration @checkpoint. REFRESHING: One of the intended ways to use the refresh feature is as follows: Say you are checkpointing a function f1, f2; have a file or a place where you define refresh variables: defs.py: ------- REFRESH_f1 = True REFRESH_f2 = os.environ['F2_REFRESH'] # can set this externally code.py: ------- your code. your code. This way, you have control on what to refresh without modifying the code, by setting the defs either via input or by modifying defs.py. """ |
def decorator(func):
def wrapped(*args, **kwargs):
# If first arg is a string, use it directly.
if isinstance(key, str):
save_file = os.path.join(work_dir, key)
elif isinstance(key, Template):
save_file = os.path.join(work_dir, key.substitute(kwargs))
save_file = save_file.format(*args)
elif isinstance(key, types.FunctionType):
save_file = os.path.join(work_dir, key(args, kwargs))
        else:  # key is an integer index into the positional arguments
            logging.warning('Using the %s-th argument as the key.', key)
            save_file = os.path.join(work_dir, '{0}')
            save_file = save_file.format(args[key])
logging.info('checkpoint@ %s' % save_file)
# cache_file doesn't exist, run the function and save output in checkpoint.
if isinstance(refresh, types.FunctionType):
do_refresh = refresh()
else:
do_refresh = refresh
        if do_refresh or not os.path.exists(save_file):  # otherwise compute it, save it, and return it
# If the program fails, don't checkpoint.
try:
out = func(*args, **kwargs)
except: # a blank raise re-raises the last exception.
raise
else: # If the program is successful, then go ahead and call the save function.
with open(save_file, 'wb') as f:
pickler(out, f)
return out
# Otherwise, load the checkpoint file and send it.
else:
logging.info("Checkpoint exists. Loading from: %s" % save_file)
with open(save_file, 'rb') as f:
return unpickler(f)
# Todo: Sending options to load/save functions.
return wrapped
return decorator |
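As a minimal, self-contained sketch of the caching pattern this decorator implements (simplified to string keys only; the file name `demo_checkpoint.pkl` and the helper names are just for the demo):

```python
import os
import pickle
import tempfile

def checkpoint(key, work_dir=tempfile.gettempdir()):
    """Cache a function's return value in a pickle file named by `key`."""
    def decorator(func):
        def wrapped(*args, **kwargs):
            save_file = os.path.join(work_dir, key)
            if os.path.exists(save_file):       # checkpoint hit: load and return
                with open(save_file, 'rb') as f:
                    return pickle.load(f)
            out = func(*args, **kwargs)         # checkpoint miss: compute and save
            with open(save_file, 'wb') as f:
                pickle.dump(out, f)
            return out
        return wrapped
    return decorator

# Start from a clean slate so the demo is deterministic.
cache_path = os.path.join(tempfile.gettempdir(), 'demo_checkpoint.pkl')
if os.path.exists(cache_path):
    os.remove(cache_path)

calls = []

@checkpoint(key='demo_checkpoint.pkl')
def expensive(x):
    calls.append(x)  # record real invocations
    return x * x

first = expensive(7)   # computed and written to the checkpoint file
second = expensive(7)  # loaded from the checkpoint file, not recomputed
```

The second call returns the same value but never re-enters the function body, which is the point of checkpointing.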
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run():
"""Display the arguments as a braille graph on standard output.""" |
# We override the program name to reflect that this script must be run with
# the python executable.
parser = argparse.ArgumentParser(
prog='python -m braillegraph',
description='Print a braille bar graph of the given integers.'
)
# This flag sets the end string that we'll print. If we pass end=None to
# print(), it will use its default. If we pass end='', it will suppress the
# newline character.
parser.add_argument('-n', '--no-newline', action='store_const',
dest='end', const='', default=None,
help='do not print the trailing newline character')
# Add subparsers for the directions
subparsers = parser.add_subparsers(title='directions')
horizontal_parser = subparsers.add_parser('horizontal',
help='a horizontal graph')
horizontal_parser.set_defaults(
func=lambda args: horizontal_graph(args.integers)
)
horizontal_parser.add_argument('integers', metavar='N', type=int,
nargs='+', help='an integer')
vertical_parser = subparsers.add_parser('vertical',
help='a vertical graph')
vertical_parser.set_defaults(
func=lambda args: vertical_graph(args.integers, sep=args.sep)
)
vertical_parser.add_argument('integers', metavar='N', type=int, nargs='+',
help='an integer')
# The separator for groups of bars (i.e., "lines"). If we pass None,
# vertical_parser will use its default.
vertical_parser.add_argument('-s', '--sep', action='store', default=None,
help='separator for groups of bars')
args = parser.parse_args()
print(args.func(args), end=args.end) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _rnd_date(start, end):
"""Internal random date generator. """ |
return date.fromordinal(random.randint(start.toordinal(), end.toordinal())) |
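A standalone sketch of the same ordinal trick, showing that the result always lands inside the requested bounds (the seed is only there to make the example deterministic):

```python
import random
from datetime import date

def rnd_date(start, end):
    """Pick a uniformly random date between start and end (inclusive)."""
    return date.fromordinal(random.randint(start.toordinal(), end.toordinal()))

random.seed(0)  # deterministic for the example
d = rnd_date(date(2020, 1, 1), date(2020, 12, 31))
```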
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rnd_date_list_high_performance(size, start=date(1970, 1, 1), end=None, **kwargs):
""" Generate mass random date. :param size: int, number of :param start: date similar object, int / str / date / datetime :param end: date similar object, int / str / date / datetime, default today's date :param kwargs: args placeholder :return: list of datetime.date """ |
if end is None:
end = date.today()
start_days = to_ordinal(parser.parse_datetime(start))
end_days = to_ordinal(parser.parse_datetime(end))
_assert_correct_start_end(start_days, end_days)
if has_np: # pragma: no cover
return [
from_ordinal(days)
for days in np.random.randint(start_days, end_days, size)
]
else:
return [
from_ordinal(random.randint(start_days, end_days))
for _ in range(size)
] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def day_interval(year, month, day, milliseconds=False, return_string=False):
""" Return a start datetime and end datetime of a day. :param milliseconds: Minimum time resolution. :param return_string: If you want string instead of datetime, set True Usage Example:: datetime(2014, 6, 17, 0, 0, 0) datetime(2014, 6, 17, 23, 59, 59) """ |
if milliseconds: # pragma: no cover
delta = timedelta(milliseconds=1)
else:
delta = timedelta(seconds=1)
start = datetime(year, month, day)
end = datetime(year, month, day) + timedelta(days=1) - delta
if not return_string:
return start, end
else:
return str(start), str(end) |
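A self-contained sketch of the interval arithmetic above (second resolution only), which makes the off-by-one-delta endpoint explicit:

```python
from datetime import datetime, timedelta

def day_interval(year, month, day, milliseconds=False):
    """Start/end datetimes of a day at second or millisecond resolution."""
    delta = timedelta(milliseconds=1) if milliseconds else timedelta(seconds=1)
    start = datetime(year, month, day)
    end = start + timedelta(days=1) - delta  # last representable instant of the day
    return start, end

start, end = day_interval(2014, 6, 17)
```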
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def month_interval(year, month, milliseconds=False, return_string=False):
""" Return a start datetime and end datetime of a month. :param milliseconds: Minimum time resolution. :param return_string: If you want string instead of datetime, set True Usage Example:: datetime(2000, 2, 1, 0, 0, 0) datetime(2000, 2, 29, 23, 59, 59) """ |
if milliseconds: # pragma: no cover
delta = timedelta(milliseconds=1)
else:
delta = timedelta(seconds=1)
if month == 12:
start = datetime(year, month, 1)
end = datetime(year + 1, 1, 1) - delta
else:
start = datetime(year, month, 1)
end = datetime(year, month + 1, 1) - delta
if not return_string:
return start, end
else:
return str(start), str(end) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def year_interval(year, milliseconds=False, return_string=False):
""" Return a start datetime and end datetime of a year. :param milliseconds: Minimum time resolution. :param return_string: If you want string instead of datetime, set True Usage Example:: datetime(2007, 1, 1, 0, 0, 0) datetime(2007, 12, 31, 23, 59, 59) """ |
if milliseconds: # pragma: no cover
delta = timedelta(milliseconds=1)
else:
delta = timedelta(seconds=1)
start = datetime(year, 1, 1)
end = datetime(year + 1, 1, 1) - delta
if not return_string:
return start, end
else:
return str(start), str(end) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_milestone(self, title):
""" given the title as str, looks for an existing milestone or create a new one, and return the object """ |
if not title:
return GithubObject.NotSet
if not hasattr(self, '_milestones'):
self._milestones = {m.title: m for m in self.repo.get_milestones()}
milestone = self._milestones.get(title)
if not milestone:
milestone = self.repo.create_milestone(title=title)
return milestone |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_assignee(self, login):
""" given the user login, looks for a user in assignee list of the repo and return it if was found. """ |
if not login:
return GithubObject.NotSet
if not hasattr(self, '_assignees'):
self._assignees = {c.login: c for c in self.repo.get_assignees()}
if login not in self._assignees:
# warning
print("{} doesn't belong to this repo. This issue won't be assigned.".format(login))
return self._assignees.get(login) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sender(self, issues):
""" push a list of issues to github """ |
for issue in issues:
state = self.get_state(issue.state)
if issue.number:
try:
gh_issue = self.repo.get_issue(issue.number)
original_state = gh_issue.state
if original_state == state:
action = 'Updated'
elif original_state == 'closed':
action = 'Reopened'
else:
action = 'Closed'
gh_issue.edit(title=issue.title,
body=issue.body,
labels=issue.labels,
milestone=self.get_milestone(issue.milestone),
assignee=self.get_assignee(issue.assignee),
state=self.get_state(issue.state)
)
print('{} #{}: {}'.format(action, gh_issue.number, gh_issue.title))
except GithubException:
print('Not found #{}: {} (ignored)'.format(issue.number, issue.title))
continue
else:
gh_issue = self.repo.create_issue(title=issue.title,
body=issue.body,
labels=issue.labels,
milestone=self.get_milestone(issue.milestone),
assignee=self.get_assignee(issue.assignee))
print('Created #{}: {}'.format(gh_issue.number, gh_issue.title)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def wrap_node(self, node, options):
'''
we have the option to construct nodes here, so we can use different
queues for nodes without having to have different queue objects.
'''
job_kwargs = {
'queue': options.get('queue', 'default'),
'connection': options.get('connection', self.redis_connection),
'timeout': options.get('timeout', None),
'result_ttl': options.get('result_ttl', 500),
}
return job(**job_kwargs)(node) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_albaran_automatic(pk, list_lines):
""" creamos de forma automatica el albaran """ |
line_bd = SalesLineAlbaran.objects.filter(line_order__pk__in=list_lines).values_list('line_order__pk')
if line_bd.count() == 0 or len(list_lines) != len(line_bd[0]):
# solo aquellas lineas de pedidos que no estan ya albarandas
if line_bd.count() != 0:
for x in line_bd[0]:
list_lines.pop(list_lines.index(x))
GenLineProduct.create_albaran_from_order(pk, list_lines) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_invoice_from_albaran(pk, list_lines):
""" la pk y list_lines son de albaranes, necesitamos la info de las lineas de pedidos """ |
context = {}
if list_lines:
new_list_lines = [x[0] for x in SalesLineAlbaran.objects.values_list('line_order__pk').filter(
pk__in=[int(x) for x in list_lines]
).exclude(invoiced=True)]
if new_list_lines:
lo = SalesLineOrder.objects.values_list('order__pk').filter(pk__in=new_list_lines)[:1]
if lo and lo[0] and lo[0][0]:
new_pk = lo[0][0]
context = GenLineProduct.create_invoice_from_order(new_pk, new_list_lines)
if 'error' not in context or not context['error']:
SalesLineAlbaran.objects.filter(
pk__in=[int(x) for x in list_lines]
).exclude(invoiced=True).update(invoiced=True)
return context
else:
error = _('Pedido no encontrado')
else:
error = _('Lineas no relacionadas con pedido')
else:
error = _('Lineas no seleccionadas')
context['error'] = error
return context |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_invoice_from_ticket(pk, list_lines):
""" la pk y list_lines son de ticket, necesitamos la info de las lineas de pedidos """ |
context = {}
if list_lines:
new_list_lines = [x[0] for x in SalesLineTicket.objects.values_list('line_order__pk').filter(pk__in=[int(x) for x in list_lines])]
if new_list_lines:
lo = SalesLineOrder.objects.values_list('order__pk').filter(pk__in=new_list_lines)[:1]
if lo and lo[0] and lo[0][0]:
new_pk = lo[0][0]
return GenLineProduct.create_invoice_from_order(new_pk, new_list_lines)
else:
error = _('Pedido no encontrado')
else:
error = _('Lineas no relacionadas con pedido')
else:
error = _('Lineas no seleccionadas')
context['error'] = error
return context |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_values(in_values):
""" Check if values need to be converted before they get mogrify'd """ |
out_values = []
for value in in_values:
# if isinstance(value, (dict, list)):
# out_values.append(json.dumps(value))
# else:
out_values.append(value)
return tuple(out_values) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clone(srcpath, destpath, vcs=None):
"""Clone an existing repository. :param str srcpath: Path to an existing repository :param str destpath: Desired path of new repository :param str vcs: Either ``git``, ``hg``, or ``svn`` :returns VCSRepo: The newly cloned repository If ``vcs`` is not given, then the repository type is discovered from ``srcpath`` via :func:`probe`. """ |
vcs = vcs or probe(srcpath)
cls = _get_repo_class(vcs)
return cls.clone(srcpath, destpath) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def probe(path):
"""Probe a repository for its type. :param str path: The path of the repository :raises UnknownVCSType: if the repository type couldn't be inferred :returns str: either ``git``, ``hg``, or ``svn`` This function employs some heuristics to guess the type of the repository. """ |
import os
from .common import UnknownVCSType
if os.path.isdir(os.path.join(path, '.git')):
return 'git'
elif os.path.isdir(os.path.join(path, '.hg')):
return 'hg'
elif (
os.path.isfile(os.path.join(path, 'config')) and
os.path.isdir(os.path.join(path, 'objects')) and
os.path.isdir(os.path.join(path, 'refs')) and
os.path.isdir(os.path.join(path, 'branches'))
):
return 'git'
elif (
os.path.isfile(os.path.join(path, 'format')) and
os.path.isdir(os.path.join(path, 'conf')) and
os.path.isdir(os.path.join(path, 'db')) and
os.path.isdir(os.path.join(path, 'locks'))
):
return 'svn'
else:
raise UnknownVCSType(path) |
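A simplified, self-contained version of the heuristic (work-tree markers only, without the bare-repo checks; the `ValueError` stands in for `UnknownVCSType`), exercised against a throwaway directory:

```python
import os
import tempfile

def probe(path):
    """Guess a repository's type from its on-disk layout (simplified)."""
    if os.path.isdir(os.path.join(path, '.git')):
        return 'git'
    if os.path.isdir(os.path.join(path, '.hg')):
        return 'hg'
    raise ValueError('unknown VCS type: {}'.format(path))

with tempfile.TemporaryDirectory() as tmp:
    os.mkdir(os.path.join(tmp, '.git'))  # fake the git work-tree marker
    kind = probe(tmp)
```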
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def open(path, vcs=None):
"""Open an existing repository :param str path: The path of the repository :param vcs: If specified, assume the given repository type to avoid auto-detection. Either ``git``, ``hg``, or ``svn``. :raises UnknownVCSType: if the repository type couldn't be inferred If ``vcs`` is not specified, it is inferred via :func:`probe`. """ |
import os
assert os.path.isdir(path), path + ' is not a directory'
vcs = vcs or probe(path)
cls = _get_repo_class(vcs)
return cls(path) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_attributes(self, attributes, extra=None):
"""Check if attributes given to the constructor can be used to instanciate a valid node.""" |
extra = extra or ()
unknown_keys = set(attributes) - set(self._possible_attributes) - set(extra)
if unknown_keys:
logger.warning('%s got unknown attributes: %s' %
(self.__class__.__name__, unknown_keys)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def main(args=None):
""" Entry point for the tag CLI. Isolated as a method so that the CLI can be called by other Python code (e.g. for testing), in which case the arguments are passed to the function. If no arguments are passed to the function, parse them from the command line. """ |
if args is None:
args = tag.cli.parser().parse_args()
assert args.cmd in mains
mainmethod = mains[args.cmd]
mainmethod(args) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _build_request(request):
"""Build message to transfer over the socket from a request.""" |
msg = bytes([request['cmd']])
if 'dest' in request:
msg += bytes([request['dest']])
else:
msg += b'\0'
if 'sha' in request:
msg += request['sha']
    else:
        msg += b'0' * 64
logging.debug("Request (%d): %s", len(msg), msg)
return msg |
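A standalone sketch of the fixed wire format the function produces: 1 command byte, 1 destination byte (NUL when absent), then a 64-byte sha placeholder, 66 bytes in total:

```python
def build_request(request):
    """Serialize a request dict into the 66-byte wire format:
    1 command byte, 1 destination byte, then a 64-byte sha."""
    msg = bytes([request['cmd']])
    msg += bytes([request['dest']]) if 'dest' in request else b'\0'
    msg += request.get('sha', b'0' * 64)  # '0'-filled when no sha is given
    return msg

msg = build_request({'cmd': 3, 'sha': b'a' * 64})
```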
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def main():
"""Show example using the API.""" |
__async__ = True
logging.basicConfig(format="%(levelname)-10s %(message)s",
level=logging.DEBUG)
if len(sys.argv) != 2:
logging.error("Must specify configuration file")
sys.exit()
config = configparser.ConfigParser()
config.read(sys.argv[1])
password = config.get('default', 'password')
if __async__:
client = Client(config.get('default', 'host'),
config.getint('default', 'port'), password, _callback)
else:
client = Client(config.get('default', 'host'),
config.getint('default', 'port'),
password)
status = client.messages()
msg = status[0]
print(msg)
print(client.mp3(msg['sha'].encode('utf-8')))
    # Keep the main thread alive for the async callback without busy-waiting.
    while True:
        time.sleep(1)
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start(self):
"""Start thread.""" |
if not self._thread:
logging.info("Starting asterisk mbox thread")
# Ensure signal queue is empty
try:
while True:
self.signal.get(False)
except queue.Empty:
pass
self._thread = threading.Thread(target=self._loop)
        self._thread.daemon = True  # setDaemon() is deprecated
self._thread.start() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def stop(self):
"""Stop thread.""" |
if self._thread:
self.signal.put("Stop")
self._thread.join()
        if self._soc:
            self._soc.shutdown(socket.SHUT_RDWR)  # shutdown() requires a 'how' argument
            self._soc.close()
self._thread = None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _recv_msg(self):
"""Read a message from the server.""" |
command = ord(recv_blocking(self._soc, 1))
        msglen = int.from_bytes(recv_blocking(self._soc, 4), 'big')
msg = recv_blocking(self._soc, msglen)
return command, msg |
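The framing this method parses (1 command byte plus a 4-byte big-endian length prefix) can be sketched end-to-end without a socket; `frame`/`unframe` are illustrative names, not part of the original API:

```python
def frame(command, payload):
    """Encode: 1 command byte + 4-byte big-endian length + payload."""
    return bytes([command]) + len(payload).to_bytes(4, 'big') + payload

def unframe(data):
    """Decode a frame produced by frame(); mirrors the parsing in _recv_msg."""
    command = data[0]
    msglen = int.from_bytes(data[1:5], 'big')
    return command, data[5:5 + msglen]

cmd_id, payload = unframe(frame(7, b'hello'))
```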
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _loop(self):
"""Handle data.""" |
request = {}
connected = False
while True:
timeout = None
sockets = [self.request_queue, self.signal]
if not connected:
try:
self._clear_request(request)
self._connect()
self._soc.send(_build_request(
{'cmd': cmd.CMD_MESSAGE_LIST}))
self._soc.send(_build_request(
{'cmd': cmd.CMD_MESSAGE_CDR_AVAILABLE}))
connected = True
except ConnectionRefusedError:
timeout = 5.0
if connected:
sockets.append(self._soc)
readable, _writable, _errored = select.select(
sockets, [], [], timeout)
if self.signal in readable:
break
if self._soc in readable:
# We have incoming data
try:
command, msg = self._recv_msg()
self._handle_msg(command, msg, request)
except (RuntimeError, ConnectionResetError):
logging.warning("Lost connection")
connected = False
self._clear_request(request)
if self.request_queue in readable:
request = self.request_queue.get()
self.request_queue.task_done()
if not connected:
self._clear_request(request)
else:
if (request['cmd'] == cmd.CMD_MESSAGE_LIST and
self._status and
(not self._callback or 'sync' in request)):
self.result_queue.put(
[cmd.CMD_MESSAGE_LIST, self._status])
request = {}
else:
self._soc.send(_build_request(request)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def mp3(self, sha, **kwargs):
"""Get raw MP3 of a message.""" |
return self._queue_msg({'cmd': cmd.CMD_MESSAGE_MP3,
'sha': _get_bytes(sha)}, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self, sha, **kwargs):
"""Delete a message.""" |
return self._queue_msg({'cmd': cmd.CMD_MESSAGE_DELETE,
'sha': _get_bytes(sha)}, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_cdr(self, start=0, count=-1, **kwargs):
"""Request range of CDR messages""" |
sha = encode_to_sha("{:d},{:d}".format(start, count))
return self._queue_msg({'cmd': cmd.CMD_MESSAGE_CDR,
'sha': sha}, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def path(self) -> Path:
    """A Path for this name object, joining field names from
    `self.get_path_pattern_list` with this object's name.""" |
args = list(self._iter_translated_field_names(self.get_path_pattern_list()))
args.append(self.get_name())
return Path(*args) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fold(self, predicate):
"""Takes a predicate and applies it to each node starting from the leaves and making the return value propagate.""" |
childs = {x:y.fold(predicate) for (x,y) in self._attributes.items()
if isinstance(y, SerializableTypedAttributesHolder)}
return predicate(self, childs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def the_one(cls):
"""Get the single global HelpUrlExpert object.""" |
if cls.THE_ONE is None:
cls.THE_ONE = cls(settings.HELP_TOKENS_INI_FILE)
return cls.THE_ONE |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_config_value(self, section_name, option, default_option="default"):
""" Read a value from the configuration, with a default. Args: section_name (str):
name of the section in the configuration from which the option should be found. option (str):
name of the configuration option. default_option (str):
name of the default configuration option whose value should be returned if the requested option is not found. Returns: str: the value from the ini file. """ |
if self.config is None:
self.config = configparser.ConfigParser()
self.config.read(self.ini_file_name)
if option:
try:
return self.config.get(section_name, option)
except configparser.NoOptionError:
log.debug(
"Didn't find a configuration option for '%s' section and '%s' option",
section_name, option,
)
return self.config.get(section_name, default_option) |
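The fallback pattern above can be demonstrated with a tiny in-memory config; the section and option names here are made up for the demo, not taken from the real ini file:

```python
import configparser

# Hypothetical ini content for illustration only.
ini = """
[pages]
default = index
course = course/index
"""

config = configparser.ConfigParser()
config.read_string(ini)

def get_value(section, option, default_option="default"):
    """Return the option's value, falling back to the section's default."""
    try:
        return config.get(section, option)
    except configparser.NoOptionError:
        return config.get(section, default_option)

hit = get_value("pages", "course")    # option present
miss = get_value("pages", "missing")  # falls back to 'default'
```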
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def url_for_token(self, token):
"""Find the full URL for a help token.""" |
book_url = self.get_config_value("pages", token)
book, _, url_tail = book_url.partition(':')
book_base = settings.HELP_TOKENS_BOOKS[book]
url = book_base
lang = getattr(settings, "HELP_TOKENS_LANGUAGE_CODE", None)
if lang is not None:
lang = self.get_config_value("locales", lang)
url += "/" + lang
version = getattr(settings, "HELP_TOKENS_VERSION", None)
if version is not None:
url += "/" + version
url += "/" + url_tail
return url |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def multi_load_data_custom(Channel, TraceTitle, RunNos, directoryPath='.', calcPSD=True, NPerSegmentPSD=1000000):
""" Lets you load multiple datasets named with the LeCroy's custom naming scheme at once. Parameters Channel : int The channel you want to load TraceTitle : string The custom trace title of the files. RunNos : sequence Sequence of run numbers you want to load RepeatNos : sequence Sequence of repeat numbers you want to load directoryPath : string, optional The path to the directory housing the data The default is the current directory Returns ------- Data : list A list containing the DataObjects that were loaded. """ |
matching_files = search_data_custom(Channel, TraceTitle, RunNos, directoryPath)
cpu_count = _cpu_count()
workerPool = _Pool(cpu_count)
load_data_partial = _partial(load_data, calcPSD=calcPSD, NPerSegmentPSD=NPerSegmentPSD)
data = workerPool.map(load_data_partial, matching_files)
    workerPool.close()
    workerPool.join()
return data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def search_data_custom(Channel, TraceTitle, RunNos, directoryPath='.'):
""" Lets you create a list with full file paths of the files named with the LeCroy's custom naming scheme. Parameters Channel : int The channel you want to load TraceTitle : string The custom trace title of the files. RunNos : sequence Sequence of run numbers you want to load RepeatNos : sequence Sequence of repeat numbers you want to load directoryPath : string, optional The path to the directory housing the data The default is the current directory Returns ------- Paths : list A list containing the full file paths of the files you were looking for. """ |
files = glob('{}/*'.format(directoryPath))
files_CorrectChannel = []
for file_ in files:
if 'C{}'.format(Channel) in file_:
files_CorrectChannel.append(file_)
files_CorrectRunNo = []
for RunNo in RunNos:
files_match = _fnmatch.filter(
files_CorrectChannel, '*C{}'.format(Channel)+TraceTitle+str(RunNo).zfill(5)+'.*')
for file_ in files_match:
files_CorrectRunNo.append(file_)
print("loading the following files: {}".format(files_CorrectRunNo))
paths = files_CorrectRunNo
return paths |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calc_temp(Data_ref, Data):
""" Calculates the temperature of a data set relative to a reference. The reference is assumed to be at 300K. Parameters Data_ref : DataObject Reference data set, assumed to be 300K Data : DataObject Data object to have the temperature calculated for Returns ------- T : uncertainties.ufloat The temperature of the data set """ |
T = 300 * ((Data.A * Data_ref.Gamma) / (Data_ref.A * Data.Gamma))
Data.T = T
return T |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fit_curvefit(p0, datax, datay, function, **kwargs):
""" Fits the data to a function using scipy.optimise.curve_fit Parameters p0 : array_like initial parameters to use for fitting datax : array_like x data to use for fitting datay : array_like y data to use for fitting function : function funcion to be fit to the data kwargs keyword arguments to be passed to scipy.optimise.curve_fit Returns ------- pfit_curvefit : array Optimal values for the parameters so that the sum of the squared residuals of ydata is minimized perr_curvefit : array One standard deviation errors in the optimal values for the parameters """ |
pfit, pcov = \
_curve_fit(function, datax, datay, p0=p0,
epsfcn=0.0001, **kwargs)
error = []
for i in range(len(pfit)):
try:
error.append(_np.absolute(pcov[i][i])**0.5)
        except Exception:  # pcov entry may be missing or non-numeric if the fit failed
error.append(_np.NaN)
pfit_curvefit = pfit
perr_curvefit = _np.array(error)
return pfit_curvefit, perr_curvefit |
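The error-extraction step above can be shown in isolation: one-standard-deviation errors are the square roots of the covariance matrix diagonal, with NaN where the entry is unavailable. `errors_from_covariance` is a hypothetical helper name, using plain lists in place of the ndarray returned by the fit.

```python
import math

def errors_from_covariance(pcov):
    """One-standard-deviation parameter errors: square roots of the
    diagonal of the covariance matrix, NaN where unavailable."""
    errors = []
    for i in range(len(pcov)):
        try:
            errors.append(abs(pcov[i][i]) ** 0.5)
        except (TypeError, IndexError):  # pcov missing or malformed
            errors.append(math.nan)
    return errors

print(errors_from_covariance([[4.0, 0.1], [0.1, 9.0]]))  # [2.0, 3.0]
```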
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def moving_average(array, n=3):
""" Calculates the moving average of an array. Parameters array : array The array to have the moving average taken of n : int The number of points of moving average to take Returns ------- MovingAverageArray : array The n-point moving average of the input array """ |
ret = _np.cumsum(array, dtype=float)
ret[n:] = ret[n:] - ret[:-n]
return ret[n - 1:] / n |
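The cumulative-sum trick above can be mirrored in pure Python: the n-point average at position i is the difference of two running sums divided by n.

```python
def moving_average_py(values, n=3):
    """n-point moving average via a running cumulative sum,
    mirroring the cumsum trick in moving_average above."""
    cumsum = [0.0]
    for v in values:
        cumsum.append(cumsum[-1] + v)
    return [(cumsum[i + n] - cumsum[i]) / n for i in range(len(values) - n + 1)]

print(moving_average_py([1, 2, 3, 4, 5], n=3))  # [2.0, 3.0, 4.0]
```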
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fit_autocorrelation(autocorrelation, time, GammaGuess, TrapFreqGuess=None, method='energy', MakeFig=True, show_fig=True):
""" Fits exponential relaxation theory to data. Parameters autocorrelation : array array containing autocorrelation to be fitted time : array array containing the time of each point the autocorrelation was evaluated GammaGuess : float The approximate Big Gamma (in radians) to use initially TrapFreqGuess : float The approximate trapping frequency to use initially in Hz. method : string, optional To choose which autocorrelation fit is needed. 'position' : equation 4.20 from Tongcang Li's 2013 thesis (DOI: 10.1007/978-1-4614-6031-2) 'energy' : proper exponential energy correlation decay (DOI: 10.1103/PhysRevE.94.062151) MakeFig : bool, optional Whether to construct and return the figure object showing the fitting. defaults to True show_fig : bool, optional Whether to show the figure object when it has been created. defaults to True Returns ------- ParamsFit - Fitted parameters: 'variance'-method : [Gamma] 'position'-method : [Gamma, AngularTrappingFrequency] ParamsFitErr - Error in fitted parameters: 'varaince'-method : [GammaErr] 'position'-method : [GammaErr, AngularTrappingFrequencyErr] fig : matplotlib.figure.Figure object figure object containing the plot ax : matplotlib.axes.Axes object axes with the data plotted of the: - initial data - final fit """ |
datax = time
datay = autocorrelation
method = method.lower()
if method == 'energy':
p0 = _np.array([GammaGuess])
Params_Fit, Params_Fit_Err = fit_curvefit(p0,
datax,
datay,
_energy_autocorrelation_fitting_eqn)
autocorrelation_fit = _energy_autocorrelation_fitting_eqn(_np.arange(0,datax[-1],1e-7),
Params_Fit[0])
elif method == 'position':
AngTrapFreqGuess = 2 * _np.pi * TrapFreqGuess
p0 = _np.array([GammaGuess, AngTrapFreqGuess])
Params_Fit, Params_Fit_Err = fit_curvefit(p0,
datax,
datay,
_position_autocorrelation_fitting_eqn)
        autocorrelation_fit = _position_autocorrelation_fitting_eqn(_np.arange(0,datax[-1],1e-7),
                                                                    Params_Fit[0],
                                                                    Params_Fit[1])
    else:
        raise ValueError("method must be 'energy' or 'position'")
if MakeFig == True:
fig = _plt.figure(figsize=properties["default_fig_size"])
ax = fig.add_subplot(111)
ax.plot(datax*1e6, datay,
'.', color="darkblue", label="Autocorrelation Data", alpha=0.5)
ax.plot(_np.arange(0,datax[-1],1e-7)*1e6, autocorrelation_fit,
color="red", label="fit")
ax.set_xlim([0,
30e6/Params_Fit[0]/(2*_np.pi)])
legend = ax.legend(loc="best", frameon = 1)
frame = legend.get_frame()
frame.set_facecolor('white')
frame.set_edgecolor('white')
ax.set_xlabel("time (us)")
ax.set_ylabel(r"$\left | \frac{\langle x(t)x(t+\tau) \rangle}{\langle x(t)x(t) \rangle} \right |$")
if show_fig == True:
_plt.show()
return Params_Fit, Params_Fit_Err, fig, ax
else:
return Params_Fit, Params_Fit_Err, None, None |
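The private fitting equations (`_energy_autocorrelation_fitting_eqn` and friends) are defined elsewhere in the library; the essential shape fitted by the 'energy' method is a single exponential decay in the damping rate. This is a sketch of that model only, not the library's exact equation, which may carry additional factors.

```python
import math

def energy_autocorrelation_model(t, Gamma):
    """Exponential energy-correlation decay exp(-Gamma * t): the
    essential form behind the 'energy' fitting method above."""
    return math.exp(-Gamma * t)

print(energy_autocorrelation_model(0.0, 5000.0))       # 1.0 at zero lag
print(energy_autocorrelation_model(1 / 5000.0, 5000.0))  # ~0.3679, 1/e after one damping time
```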
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def IFFT_filter(Signal, SampleFreq, lowerFreq, upperFreq, PyCUDA = False):
""" Filters data using fft -> zeroing out fft bins -> ifft Parameters Signal : ndarray Signal to be filtered SampleFreq : float Sample frequency of signal lowerFreq : float Lower frequency of bandpass to allow through filter upperFreq : float Upper frequency of bandpass to allow through filter PyCUDA : bool, optional If True, uses PyCUDA to accelerate the FFT and IFFT via using your NVIDIA-GPU If False, performs FFT and IFFT with conventional scipy.fftpack Returns ------- FilteredData : ndarray Array containing the filtered data """ |
if PyCUDA==True:
Signalfft=calc_fft_with_PyCUDA(Signal)
else:
print("starting fft")
Signalfft = scipy.fftpack.fft(Signal)
print("starting freq calc")
freqs = _np.fft.fftfreq(len(Signal)) * SampleFreq
print("starting bin zeroing")
Signalfft[_np.where(freqs < lowerFreq)] = 0
Signalfft[_np.where(freqs > upperFreq)] = 0
if PyCUDA==True:
FilteredSignal = 2 * calc_ifft_with_PyCUDA(Signalfft)
else:
print("starting ifft")
FilteredSignal = 2 * scipy.fftpack.ifft(Signalfft)
print("done")
return _np.real(FilteredSignal) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calc_fft_with_PyCUDA(Signal):
""" Calculates the FFT of the passed signal by using the scikit-cuda libary which relies on PyCUDA Parameters Signal : ndarray Signal to be transformed into Fourier space Returns ------- Signalfft : ndarray Array containing the signal's FFT """ |
print("starting fft")
Signal = Signal.astype(_np.float32)
Signal_gpu = _gpuarray.to_gpu(Signal)
Signalfft_gpu = _gpuarray.empty(len(Signal)//2+1,_np.complex64)
plan = _Plan(Signal.shape,_np.float32,_np.complex64)
_fft(Signal_gpu, Signalfft_gpu, plan)
    Signalfft = Signalfft_gpu.get() # only len(Signal)//2+1 bins long
Signalfft = _np.hstack((Signalfft,_np.conj(_np.flipud(Signalfft[1:len(Signal)//2]))))
print("fft done")
return Signalfft |
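The `hstack` at the end reconstructs a full spectrum from the half-spectrum CUDA returns, using the Hermitian symmetry of a real signal's FFT. The same reconstruction can be checked against a naive pure-Python DFT (`naive_dft` and `full_spectrum_from_half` are illustrative names):

```python
import cmath

def naive_dft(x):
    """Textbook O(N^2) DFT, used here only as a reference."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def full_spectrum_from_half(half, N):
    """Rebuild a real signal's full spectrum from its first N//2+1 bins
    via Hermitian symmetry, as the hstack above does."""
    tail = [h.conjugate() for h in reversed(half[1:N // 2])]
    return list(half) + tail

x = [1.0, 2.0, 3.0, 4.0]
full = naive_dft(x)
rebuilt = full_spectrum_from_half(full[:len(x) // 2 + 1], len(x))
print(all(abs(a - b) < 1e-9 for a, b in zip(full, rebuilt)))  # True
```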
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calc_ifft_with_PyCUDA(Signalfft):
""" Calculates the inverse-FFT of the passed FFT-signal by using the scikit-cuda libary which relies on PyCUDA Parameters Signalfft : ndarray FFT-Signal to be transformed into Real space Returns ------- Signal : ndarray Array containing the ifft signal """ |
print("starting ifft")
Signalfft = Signalfft.astype(_np.complex64)
Signalfft_gpu = _gpuarray.to_gpu(Signalfft[0:len(Signalfft)//2+1])
Signal_gpu = _gpuarray.empty(len(Signalfft),_np.float32)
plan = _Plan(len(Signalfft),_np.complex64,_np.float32)
_ifft(Signalfft_gpu, Signal_gpu, plan)
Signal = Signal_gpu.get()/(2*len(Signalfft)) #normalising as CUDA IFFT is un-normalised
print("ifft done")
return Signal |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_butterworth_b_a(lowcut, highcut, SampleFreq, order=5, btype='band'):
""" Generates the b and a coefficients for a butterworth IIR filter. Parameters lowcut : float frequency of lower bandpass limit highcut : float frequency of higher bandpass limit SampleFreq : float Sample frequency of filter order : int, optional order of IIR filter. Is 5 by default btype : string, optional type of filter to make e.g. (band, low, high) Returns ------- b : ndarray coefficients multiplying the current and past inputs (feedforward coefficients) a : ndarray coefficients multiplying the past outputs (feedback coefficients) """ |
nyq = 0.5 * SampleFreq
low = lowcut / nyq
high = highcut / nyq
if btype.lower() == 'band':
b, a = scipy.signal.butter(order, [low, high], btype = btype)
elif btype.lower() == 'low':
b, a = scipy.signal.butter(order, low, btype = btype)
elif btype.lower() == 'high':
b, a = scipy.signal.butter(order, high, btype = btype)
else:
raise ValueError('Filter type unknown')
return b, a |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_butterworth_bandpass_b_a(CenterFreq, bandwidth, SampleFreq, order=5, btype='band'):
""" Generates the b and a coefficients for a butterworth bandpass IIR filter. Parameters CenterFreq : float central frequency of bandpass bandwidth : float width of the bandpass from centre to edge SampleFreq : float Sample frequency of filter order : int, optional order of IIR filter. Is 5 by default btype : string, optional type of filter to make e.g. (band, low, high) Returns ------- b : ndarray coefficients multiplying the current and past inputs (feedforward coefficients) a : ndarray coefficients multiplying the past outputs (feedback coefficients) """ |
lowcut = CenterFreq-bandwidth/2
highcut = CenterFreq+bandwidth/2
b, a = make_butterworth_b_a(lowcut, highcut, SampleFreq, order, btype)
return b, a |
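The edge computation chained through the two functions above reduces to: centre plus/minus half-bandwidth, normalised by the Nyquist frequency. A minimal sketch (`bandpass_edges` is a hypothetical name):

```python
def bandpass_edges(center_freq, bandwidth, sample_freq):
    """Normalised Butterworth band edges, as computed above:
    centre +/- half-bandwidth, divided by the Nyquist frequency."""
    nyq = 0.5 * sample_freq
    return (center_freq - bandwidth / 2) / nyq, (center_freq + bandwidth / 2) / nyq

low, high = bandpass_edges(100e3, 20e3, 1e6)
print(low, high)  # 0.18 0.22
```

These normalised values are what `scipy.signal.butter` expects when no explicit sample rate is supplied.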
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_freq_response(a, b, show_fig=True, SampleFreq=(2 * pi), NumOfFreqs=500, whole=False):
""" This function takes an array of coefficients and finds the frequency response of the filter using scipy.signal.freqz. show_fig sets if the response should be plotted Parameters b : array_like Coefficients multiplying the x values (inputs of the filter) a : array_like Coefficients multiplying the y values (outputs of the filter) show_fig : bool, optional Verbosity of function (i.e. whether to plot frequency and phase response or whether to just return the values.) Options (Default is 1):
False - Do not plot anything, just return values True - Plot Frequency and Phase response and return values SampleFreq : float, optional Sample frequency (in Hz) to simulate (used to convert frequency range to normalised frequency range) NumOfFreqs : int, optional Number of frequencies to use to simulate the frequency and phase response of the filter. Default is 500. Whole : bool, optional Sets whether to plot the whole response (0 to sample freq) or just to plot 0 to Nyquist (SampleFreq/2):
False - (default) plot 0 to Nyquist (SampleFreq/2) True - plot the whole response (0 to sample freq) Returns ------- freqList : ndarray Array containing the frequencies at which the gain is calculated GainArray : ndarray Array containing the gain in dB of the filter when simulated (20*log_10(A_out/A_in)) PhaseDiffArray : ndarray Array containing the phase response of the filter - phase difference between the input signal and output signal at different frequencies """ |
w, h = scipy.signal.freqz(b=b, a=a, worN=NumOfFreqs, whole=whole)
freqList = w / (pi) * SampleFreq / 2.0
GainArray = 20 * _np.log10(_np.abs(h))
PhaseDiffArray = _np.unwrap(_np.arctan2(_np.imag(h), _np.real(h)))
fig1 = _plt.figure()
ax1 = fig1.add_subplot(111)
ax1.plot(freqList, GainArray, '-', label="Specified Filter")
ax1.set_title("Frequency Response")
if SampleFreq == 2 * pi:
ax1.set_xlabel(("$\Omega$ - Normalized frequency "
"($\pi$=Nyquist Frequency)"))
else:
ax1.set_xlabel("frequency (Hz)")
ax1.set_ylabel("Gain (dB)")
ax1.set_xlim([0, SampleFreq / 2.0])
if show_fig == True:
_plt.show()
fig2 = _plt.figure()
ax2 = fig2.add_subplot(111)
ax2.plot(freqList, PhaseDiffArray, '-', label="Specified Filter")
ax2.set_title("Phase Response")
if SampleFreq == 2 * pi:
ax2.set_xlabel(("$\Omega$ - Normalized frequency "
"($\pi$=Nyquist Frequency)"))
else:
ax2.set_xlabel("frequency (Hz)")
ax2.set_ylabel("Phase Difference")
ax2.set_xlim([0, SampleFreq / 2.0])
if show_fig == True:
_plt.show()
return freqList, GainArray, PhaseDiffArray, fig1, ax1, fig2, ax2 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def multi_plot_PSD(DataArray, xlim=[0, 500], units="kHz", LabelArray=[], ColorArray=[], alphaArray=[], show_fig=True):
""" plot the pulse spectral density for multiple data sets on the same axes. Parameters DataArray : array-like array of DataObject instances for which to plot the PSDs xlim : array-like, optional 2 element array specifying the lower and upper x limit for which to plot the Power Spectral Density units : string units to use for the x axis LabelArray : array-like, optional array of labels for each data-set to be plotted ColorArray : array-like, optional array of colors for each data-set to be plotted show_fig : bool, optional If True runs plt.show() before returning figure if False it just returns the figure object. (the default is True, it shows the figure) Returns ------- fig : matplotlib.figure.Figure object The figure object created ax : matplotlib.axes.Axes object The axes object created """ |
unit_prefix = units[:-2] # removed the last 2 chars
if LabelArray == []:
LabelArray = ["DataSet {}".format(i)
for i in _np.arange(0, len(DataArray), 1)]
    if ColorArray == []:
        ColorArray = [None] * len(DataArray)
    if alphaArray == []:
        alphaArray = [None] * len(DataArray)
fig = _plt.figure(figsize=properties['default_fig_size'])
ax = fig.add_subplot(111)
for i, data in enumerate(DataArray):
ax.semilogy(unit_conversion(data.freqs, unit_prefix), data.PSD, label=LabelArray[i], color=ColorArray[i], alpha=alphaArray[i])
ax.set_xlabel("Frequency ({})".format(units))
ax.set_xlim(xlim)
ax.grid(which="major")
legend = ax.legend(loc="best", frameon = 1)
frame = legend.get_frame()
frame.set_facecolor('white')
frame.set_edgecolor('white')
ax.set_ylabel("PSD ($v^2/Hz$)")
_plt.title('filedir=%s' % (DataArray[0].filedir))
if show_fig == True:
_plt.show()
return fig, ax |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def multi_plot_time(DataArray, SubSampleN=1, units='s', xlim=None, ylim=None, LabelArray=[], show_fig=True):
""" plot the time trace for multiple data sets on the same axes. Parameters DataArray : array-like array of DataObject instances for which to plot the PSDs SubSampleN : int, optional Number of intervals between points to remove (to sub-sample data so that you effectively have lower sample rate to make plotting easier and quicker. xlim : array-like, optional 2 element array specifying the lower and upper x limit for which to plot the time signal LabelArray : array-like, optional array of labels for each data-set to be plotted show_fig : bool, optional If True runs plt.show() before returning figure if False it just returns the figure object. (the default is True, it shows the figure) Returns ------- fig : matplotlib.figure.Figure object The figure object created ax : matplotlib.axes.Axes object The axes object created """ |
unit_prefix = units[:-1] # removed the last char
if LabelArray == []:
LabelArray = ["DataSet {}".format(i)
for i in _np.arange(0, len(DataArray), 1)]
fig = _plt.figure(figsize=properties['default_fig_size'])
ax = fig.add_subplot(111)
for i, data in enumerate(DataArray):
ax.plot(unit_conversion(data.time.get_array()[::SubSampleN], unit_prefix), data.voltage[::SubSampleN],
alpha=0.8, label=LabelArray[i])
ax.set_xlabel("time (s)")
if xlim != None:
ax.set_xlim(xlim)
if ylim != None:
ax.set_ylim(ylim)
ax.grid(which="major")
legend = ax.legend(loc="best", frameon = 1)
frame = legend.get_frame()
frame.set_facecolor('white')
frame.set_edgecolor('white')
ax.set_ylabel("voltage (V)")
if show_fig == True:
_plt.show()
return fig, ax |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def multi_subplots_time(DataArray, SubSampleN=1, units='s', xlim=None, ylim=None, LabelArray=[], show_fig=True):
""" plot the time trace on multiple axes Parameters DataArray : array-like array of DataObject instances for which to plot the PSDs SubSampleN : int, optional Number of intervals between points to remove (to sub-sample data so that you effectively have lower sample rate to make plotting easier and quicker. xlim : array-like, optional 2 element array specifying the lower and upper x limit for which to plot the time signal LabelArray : array-like, optional array of labels for each data-set to be plotted show_fig : bool, optional If True runs plt.show() before returning figure if False it just returns the figure object. (the default is True, it shows the figure) Returns ------- fig : matplotlib.figure.Figure object The figure object created axs : list of matplotlib.axes.Axes objects The list of axes object created """ |
unit_prefix = units[:-1] # removed the last char
NumDataSets = len(DataArray)
if LabelArray == []:
LabelArray = ["DataSet {}".format(i)
for i in _np.arange(0, len(DataArray), 1)]
fig, axs = _plt.subplots(NumDataSets, 1)
for i, data in enumerate(DataArray):
axs[i].plot(unit_conversion(data.time.get_array()[::SubSampleN], unit_prefix), data.voltage[::SubSampleN],
alpha=0.8, label=LabelArray[i])
axs[i].set_xlabel("time ({})".format(units))
axs[i].grid(which="major")
axs[i].legend(loc="best")
axs[i].set_ylabel("voltage (V)")
        if xlim is not None:
            axs[i].set_xlim(xlim)
        if ylim is not None:
            axs[i].set_ylim(ylim)
if show_fig == True:
_plt.show()
return fig, axs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calc_autocorrelation(Signal, FFT=False, PyCUDA=False):
""" Calculates the autocorrelation from a given Signal via using Parameters Signal : array-like Array containing the signal to have the autocorrelation calculated for FFT : optional, bool Uses FFT to accelerate autocorrelation calculation, but assumes certain certain periodicity on the signal to autocorrelate. Zero-padding is added to account for this periodicity assumption. PyCUDA : bool, optional If True, uses PyCUDA to accelerate the FFT and IFFT via using your NVIDIA-GPU If False, performs FFT and IFFT with conventional scipy.fftpack Returns ------- Autocorrelation : ndarray Array containing the value of the autocorrelation evaluated at the corresponding amount of shifted array-index. """ |
if FFT==True:
Signal_padded = scipy.fftpack.ifftshift((Signal-_np.average(Signal))/_np.std(Signal))
n, = Signal_padded.shape
Signal_padded = _np.r_[Signal_padded[:n//2], _np.zeros_like(Signal_padded), Signal_padded[n//2:]]
if PyCUDA==True:
f = calc_fft_with_PyCUDA(Signal_padded)
else:
f = scipy.fftpack.fft(Signal_padded)
p = _np.absolute(f)**2
if PyCUDA==True:
autocorr = calc_ifft_with_PyCUDA(p)
else:
autocorr = scipy.fftpack.ifft(p)
return _np.real(autocorr)[:n//2]/(_np.arange(n//2)[::-1]+n//2)
else:
Signal = Signal - _np.mean(Signal)
autocorr = scipy.signal.correlate(Signal, Signal, mode='full')
return autocorr[autocorr.size//2:]/autocorr[autocorr.size//2] |
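The direct (non-FFT) branch above can be written out in miniature: subtract the mean, correlate the signal with itself at each lag, and normalise so the zero-lag value is 1.

```python
def autocorrelation_direct(signal):
    """Mean-subtracted autocorrelation, normalised so the zero-lag
    value is 1 -- the direct (non-FFT) branch above in miniature."""
    n = len(signal)
    mean = sum(signal) / n
    d = [v - mean for v in signal]
    r = [sum(d[i] * d[i + k] for i in range(n - k)) for k in range(n)]
    return [rk / r[0] for rk in r]

print(autocorrelation_direct([1.0, 2.0, 3.0, 4.0]))  # [1.0, 0.25, -0.3, -0.45]
```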
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _GetRealImagArray(Array):
""" Returns the real and imaginary components of each element in an array and returns them in 2 resulting arrays. Parameters Array : ndarray Input array Returns ------- RealArray : ndarray The real components of the input array ImagArray : ndarray The imaginary components of the input array """ |
ImagArray = _np.array([num.imag for num in Array])
RealArray = _np.array([num.real for num in Array])
return RealArray, ImagArray |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _GetComplexConjugateArray(Array):
""" Calculates the complex conjugate of each element in an array and returns the resulting array. Parameters Array : ndarray Input array Returns ------- ConjArray : ndarray The complex conjugate of the input array. """ |
ConjArray = _np.array([num.conj() for num in Array])
return ConjArray |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fm_discriminator(Signal):
""" Calculates the digital FM discriminator from a real-valued time signal. Parameters Signal : array-like A real-valued time signal Returns ------- fmDiscriminator : array-like The digital FM discriminator of the argument signal """ |
S_analytic = _hilbert(Signal)
S_analytic_star = _GetComplexConjugateArray(S_analytic)
S_analytic_hat = S_analytic[1:] * S_analytic_star[:-1]
R, I = _GetRealImagArray(S_analytic_hat)
fmDiscriminator = _np.arctan2(I, R)
return fmDiscriminator |
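The core of the discriminator is the phase increment between consecutive analytic-signal samples, arg(s[n+1] * conj(s[n])). Given a pure complex tone this recovers the angular frequency per sample directly (`discriminate` is an illustrative name; `fm_discriminator` above first builds the analytic signal with a Hilbert transform):

```python
import cmath, math

def discriminate(analytic):
    """Phase increment between consecutive analytic-signal samples:
    arg(s[n+1] * conj(s[n])), as in fm_discriminator above."""
    return [math.atan2((b * a.conjugate()).imag, (b * a.conjugate()).real)
            for a, b in zip(analytic, analytic[1:])]

# A pure tone of angular frequency 0.3 rad/sample:
tone = [cmath.exp(1j * 0.3 * n) for n in range(6)]
print(discriminate(tone))  # five values, each ~0.3
```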
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_collisions(Signal, tolerance=50):
""" Finds collision events in the signal from the shift in phase of the signal. Parameters Signal : array_like Array containing the values of the signal of interest containing a single frequency. tolerance : float Percentage tolerance, if the value of the FM Discriminator varies from the mean by this percentage it is counted as being during a collision event (or the aftermath of an event). Returns ------- Collisions : ndarray Array of booleans, true if during a collision event, false otherwise. """ |
fmd = fm_discriminator(Signal)
mean_fmd = _np.mean(fmd)
Collisions = [_is_this_a_collision(
[value, mean_fmd, tolerance]) for value in fmd]
return Collisions |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def count_collisions(Collisions):
""" Counts the number of unique collisions and gets the collision index. Parameters Collisions : array_like Array of booleans, containing true if during a collision event, false otherwise. Returns ------- CollisionCount : int Number of unique collisions CollisionIndicies : list Indicies of collision occurance """ |
CollisionCount = 0
CollisionIndicies = []
lastval = True
for i, val in enumerate(Collisions):
if val == True and lastval == False:
CollisionIndicies.append(i)
CollisionCount += 1
lastval = val
return CollisionCount, CollisionIndicies |
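The loop above counts False-to-True transitions (rising edges). A compact restatement, keeping the same `lastval = True` initialisation so a trace that starts mid-collision is not counted as a new event:

```python
def count_rising_edges(flags):
    """Count False->True transitions, as count_collisions does."""
    count, indices, last = 0, [], True
    for i, val in enumerate(flags):
        if val and not last:
            indices.append(i)
            count += 1
        last = val
    return count, indices

print(count_rising_edges([True, False, True, True, False, True]))  # (2, [2, 5])
```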
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def steady_state_potential(xdata,HistBins=100):
""" Calculates the steady state potential. Used in fit_radius_from_potentials. Parameters xdata : ndarray Position data for a degree of freedom HistBins : int Number of bins to use for histogram of xdata. Number of position points at which the potential is calculated. Returns ------- position : ndarray positions at which potential has been calculated potential : ndarray value of potential at the positions above """ |
import numpy as _np
    pops, bins = _np.histogram(xdata, HistBins)
    bins = bins[0:-1] + _np.mean(_np.diff(bins))/2  # shift left edges to bin centres
    # normalise pops
    pops = pops/float(_np.sum(pops))
return bins,-_np.log(pops) |
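The `-log(pops)` step is a Boltzmann inversion: in thermal equilibrium the occupation probability of a position bin is proportional to exp(-U/kT), so the potential in units of kT is minus the log of the normalised counts. A stdlib-only sketch (`boltzmann_potential` is a hypothetical name, and it assumes every bin has nonzero counts):

```python
import math

def boltzmann_potential(counts):
    """Potential in units of k_B*T from occupation statistics:
    U = -ln(p), the Boltzmann inversion used above."""
    total = float(sum(counts))
    return [-math.log(c / total) for c in counts]

# The most-populated bin sits at the potential minimum:
U = boltzmann_potential([10, 60, 25, 5])
print(U.index(min(U)))  # 1
```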
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calc_z0_and_conv_factor_from_ratio_of_harmonics(z, z2, NA=0.999):
""" Calculates the Conversion Factor and physical amplitude of motion in nms by comparison of the ratio of the heights of the z signal and second harmonic of z. Parameters z : ndarray array containing z signal in volts z2 : ndarray array containing second harmonic of z signal in volts NA : float NA of mirror used in experiment Returns ------- z0 : float Physical average amplitude of motion in nms ConvFactor : float Conversion Factor between volts and nms """ |
V1 = calc_mean_amp(z)
V2 = calc_mean_amp(z2)
ratio = V2/V1
beta = 4*ratio
laserWavelength = 1550e-9 # in m
k0 = (2*pi)/(laserWavelength)
WaistSize = laserWavelength/(pi*NA)
Zr = pi*WaistSize**2/laserWavelength
z0 = beta/(k0 - 1/Zr)
ConvFactor = V1/z0
return z0, ConvFactor |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calc_mass_from_z0(z0, w0):
""" Calculates the mass of the particle using the equipartition from the angular frequency of the z signal and the average amplitude of the z signal in nms. Parameters z0 : float Physical average amplitude of motion in nms w0 : float Angular Frequency of z motion Returns ------- mass : float mass in kgs """ |
T0 = 300
mFromEquipartition = Boltzmann*T0/(w0**2 * z0**2)
return mFromEquipartition |
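The equipartition estimate above, made self-contained with the Boltzmann constant written out and T0 exposed as a parameter (`mass_from_equipartition` is an illustrative name; z0 in metres, w0 in rad/s):

```python
import math

BOLTZMANN = 1.380649e-23  # J/K

def mass_from_equipartition(z0, w0, T0=300.0):
    """Equipartition mass estimate m = k_B*T0 / (w0^2 * z0^2),
    as in calc_mass_from_z0 above (which hard-codes T0 = 300 K)."""
    return BOLTZMANN * T0 / (w0 ** 2 * z0 ** 2)

# A ~50 nm amplitude at a 60 kHz trap gives a mass of order 1e-17 kg:
m = mass_from_equipartition(50e-9, 2 * math.pi * 60e3)
print(m)
```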
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calc_mass_from_fit_and_conv_factor(A, Damping, ConvFactor):
""" Calculates mass from the A parameter from fitting, the damping from fitting in angular units and the Conversion factor calculated from comparing the ratio of the z signal and first harmonic of z. Parameters A : float A factor calculated from fitting Damping : float damping in radians/second calcualted from fitting ConvFactor : float conversion factor between volts and nms Returns ------- mass : float mass in kgs """ |
T0 = 300
mFromA = 2*Boltzmann*T0/(pi*A) * ConvFactor**2 * Damping
return mFromA |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def unit_conversion(array, unit_prefix, current_prefix=""):
""" Converts an array or value to of a certain unit scale to another unit scale. Accepted units are: E - exa - 1e18 P - peta - 1e15 T - tera - 1e12 G - giga - 1e9 M - mega - 1e6 k - kilo - 1e3 m - milli - 1e-3 u - micro - 1e-6 n - nano - 1e-9 p - pico - 1e-12 f - femto - 1e-15 a - atto - 1e-18 Parameters array : ndarray Array to be converted unit_prefix : string desired unit (metric) prefix (e.g. nm would be n, ms would be m) current_prefix : optional, string current prefix of units of data (assumed to be in SI units by default (e.g. m or s) Returns ------- converted_array : ndarray Array multiplied such as to be in the units specified """ |
UnitDict = {
'E': 1e18,
'P': 1e15,
'T': 1e12,
'G': 1e9,
'M': 1e6,
'k': 1e3,
'': 1,
'm': 1e-3,
'u': 1e-6,
'n': 1e-9,
'p': 1e-12,
'f': 1e-15,
'a': 1e-18,
}
try:
Desired_units = UnitDict[unit_prefix]
except KeyError:
raise ValueError("You entered {} for the unit_prefix, this is not a valid prefix".format(unit_prefix))
try:
Current_units = UnitDict[current_prefix]
except KeyError:
raise ValueError("You entered {} for the current_prefix, this is not a valid prefix".format(current_prefix))
conversion_multiplication = Current_units/Desired_units
converted_array = array*conversion_multiplication
return converted_array |
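The conversion is a single multiplication by the ratio of the two prefix multipliers. A scalar sketch with a trimmed prefix table (`convert_units` and `PREFIX` are hypothetical names):

```python
PREFIX = {'': 1, 'k': 1e3, 'M': 1e6, 'm': 1e-3, 'u': 1e-6, 'n': 1e-9}

def convert_units(value, unit_prefix, current_prefix=''):
    """Scale a value between metric prefixes by the ratio of the two
    multipliers, exactly as unit_conversion above does."""
    return value * PREFIX[current_prefix] / PREFIX[unit_prefix]

print(convert_units(0.001, 'm'))     # 1.0   (0.001 s -> 1 ms)
print(convert_units(2.5, 'u', 'm'))  # ~2500.0 (2.5 ms -> 2500 us)
```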
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_wigner(z, freq, sample_freq, histbins=200, show_plot=False):
""" Calculates an approximation to the wigner quasi-probability distribution by splitting the z position array into slices of the length of one period of the motion. This slice is then associated with phase from -180 to 180 degrees. These slices are then histogramed in order to get a distribution of counts of where the particle is observed at each phase. The 2d array containing the counts varying with position and phase is then passed through the inverse radon transformation using the Simultaneous Algebraic Reconstruction Technique approximation from the scikit-image package. Parameters z : ndarray trace of z motion freq : float frequency of motion sample_freq : float sample frequency of the z array histbins : int, optional (default=200) number of bins to use in histogramming data for each phase show_plot : bool, optional (default=False) Whether or not to plot the phase distribution Returns ------- iradon_output : ndarray 2d array of size (histbins x histbins) bin_centres : ndarray positions of the bin centres """ |
phase, phase_slices = extract_slices(z, freq, sample_freq, show_plot=False)
counts_array, bin_edges = histogram_phase(phase_slices, phase, histbins, show_plot=show_plot)
diff = bin_edges[1] - bin_edges[0]
bin_centres = bin_edges[:-1] + diff
iradon_output = _iradon_sart(counts_array, theta=phase)
#_plt.imshow(iradon_output, extent=[bin_centres[0], bin_centres[-1], bin_centres[0], bin_centres[-1]])
#_plt.show()
return iradon_output, bin_centres |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot_wigner3d(iradon_output, bin_centres, bin_centre_units="", cmap=_cm.cubehelix_r, view=(10, -45), figsize=(10, 10)):
""" Plots the wigner space representation as a 3D surface plot. Parameters iradon_output : ndarray 2d array of size (histbins x histbins) bin_centres : ndarray positions of the bin centres bin_centre_units : string, optional (default="") Units in which the bin_centres are given cmap : matplotlib.cm.cmap, optional (default=cm.cubehelix_r) color map to use for Wigner view : tuple, optional (default=(10, -45)) view angle for 3d wigner plot figsize : tuple, optional (default=(10, 10)) tuple defining size of figure created Returns ------- fig : matplotlib.figure.Figure object figure showing the wigner function ax : matplotlib.axes.Axes object axes containing the object """ |
fig = _plt.figure(figsize=figsize)
ax = fig.add_subplot(111, projection='3d')
resid1 = iradon_output.sum(axis=0)
resid2 = iradon_output.sum(axis=1)
x = bin_centres # replace with x
y = bin_centres # replace with p (xdot/omega)
xpos, ypos = _np.meshgrid(x, y)
X = xpos
Y = ypos
Z = iradon_output
ax.set_xlabel("x ({})".format(bin_centre_units))
ax.set_xlabel("y ({})".format(bin_centre_units))
ax.scatter(_np.min(X)*_np.ones_like(y), y, resid2/_np.max(resid2)*_np.max(Z), alpha=0.7)
ax.scatter(x, _np.max(Y)*_np.ones_like(x), resid1/_np.max(resid1)*_np.max(Z), alpha=0.7)
# Plot the surface.
surf = ax.plot_surface(X, Y, Z, cmap=cmap,
linewidth=0, antialiased=False)
# Customize the z axis.
#ax.set_zlim(-1.01, 1.01)
#ax.zaxis.set_major_locator(LinearLocator(10))
#ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
# Add a color bar which maps values to colors.
fig.colorbar(surf, shrink=0.5, aspect=5)
ax.view_init(view[0], view[1])
return fig, ax |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot_wigner2d(iradon_output, bin_centres, cmap=_cm.cubehelix_r, figsize=(6, 6)):
""" Plots the wigner space representation as a 2D heatmap. Parameters iradon_output : ndarray 2d array of size (histbins x histbins) bin_centres : ndarray positions of the bin centres cmap : matplotlib.cm.cmap, optional (default=cm.cubehelix_r) color map to use for Wigner figsize : tuple, optional (default=(6, 6)) tuple defining size of figure created Returns ------- fig : matplotlib.figure.Figure object figure showing the wigner function ax : matplotlib.axes.Axes object axes containing the object """ |
xx, yy = _np.meshgrid(bin_centres, bin_centres)
resid1 = iradon_output.sum(axis=0)
resid2 = iradon_output.sum(axis=1)
    wigner_marginal_separation = 0.001
    left, width = 0.2, 0.65-0.1 # left = left side of hexbin and hist_x
    bottom, height = 0.1, 0.65-0.1 # bottom = bottom of hexbin and hist_y
    bottom_h = height + bottom + wigner_marginal_separation
    left_h = width + left + wigner_marginal_separation
cbar_pos = [0.03, bottom, 0.05, 0.02+width]
rect_wigner = [left, bottom, width, height]
rect_histx = [left, bottom_h, width, 0.2]
rect_histy = [left_h, bottom, 0.2, height]
# start with a rectangular Figure
fig = _plt.figure(figsize=figsize)
axWigner = _plt.axes(rect_wigner)
axHistx = _plt.axes(rect_histx)
axHisty = _plt.axes(rect_histy)
pcol = axWigner.pcolor(xx, yy, iradon_output, cmap=cmap)
binwidth = bin_centres[1] - bin_centres[0]
axHistx.bar(bin_centres, resid2, binwidth)
axHisty.barh(bin_centres, resid1, binwidth)
_plt.setp(axHistx.get_xticklabels(), visible=False) # hide x tick labels while keeping gridlines
_plt.setp(axHisty.get_yticklabels(), visible=False) # hide y tick labels while keeping gridlines
for tick in axHisty.get_xticklabels():
tick.set_rotation(-90)
cbaraxes = fig.add_axes(cbar_pos) # This is the position for the colorbar
#cbar = _plt.colorbar(axp, cax = cbaraxes)
cbar = fig.colorbar(pcol, cax = cbaraxes, drawedges=False) #, orientation="horizontal"
cbar.solids.set_edgecolor("face")
cbar.solids.set_rasterized(True)
cbar.ax.set_yticklabels(cbar.ax.yaxis.get_ticklabels(), y=0, rotation=45)
#cbar.set_label(cbarlabel, labelpad=-25, y=1.05, rotation=0)
plotlimits = _np.max(_np.abs(bin_centres))
axWigner.axis((-plotlimits, plotlimits, -plotlimits, plotlimits))
axHistx.set_xlim(axWigner.get_xlim())
axHisty.set_ylim(axWigner.get_ylim())
return fig, axWigner, axHistx, axHisty, cbar |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_time_data(self, timeStart=None, timeEnd=None):
""" Gets the time and voltage data. Parameters timeStart : float, optional The time get data from. By default it uses the first time point timeEnd : float, optional The time to finish getting data from. By default it uses the last time point Returns ------- time : ndarray array containing the value of time (in seconds) at which the voltage is sampled voltage : ndarray array containing the sampled voltages """ |
if timeStart is None:
timeStart = self.timeStart
if timeEnd is None:
timeEnd = self.timeEnd
time = self.time.get_array()
StartIndex = _np.where(time == take_closest(time, timeStart))[0][0]
EndIndex = _np.where(time == take_closest(time, timeEnd))[0][0]
if EndIndex == len(time) - 1:
EndIndex = EndIndex + 1 # so that it does not remove the last element
return time[StartIndex:EndIndex], self.voltage[StartIndex:EndIndex] |
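`get_time_data` (and the plotting methods below) lean on a `take_closest` helper that is not defined in this excerpt. A minimal sketch of such a helper, using `bisect` on the sorted time array, is given here; the library's actual implementation may differ:

```python
from bisect import bisect_left

def take_closest(sorted_values, target):
    """Return the element of a sorted sequence closest to target.

    Sketch of the take_closest helper assumed by get_time_data.
    """
    i = bisect_left(sorted_values, target)
    if i == 0:
        return sorted_values[0]
    if i == len(sorted_values):
        return sorted_values[-1]
    before, after = sorted_values[i - 1], sorted_values[i]
    return after if after - target < target - before else before
```

Looking up `np.where(time == take_closest(time, t))` then recovers the integer index of that closest sample.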
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot_time_data(self, timeStart=None, timeEnd=None, units='s', show_fig=True):
""" plot time data against voltage data. Parameters timeStart : float, optional The time to start plotting from. By default it uses the first time point timeEnd : float, optional The time to finish plotting at. By default it uses the last time point units : string, optional units of time to plot on the x axis - defaults to s show_fig : bool, optional If True runs plt.show() before returning figure if False it just returns the figure object. (the default is True, it shows the figure) Returns ------- fig : matplotlib.figure.Figure object The figure object created ax : matplotlib.axes.Axes object The subplot object created """ |
unit_prefix = units[:-1] # removed the last char
if timeStart is None:
timeStart = self.timeStart
if timeEnd is None:
timeEnd = self.timeEnd
time = self.time.get_array()
StartIndex = _np.where(time == take_closest(time, timeStart))[0][0]
EndIndex = _np.where(time == take_closest(time, timeEnd))[0][0]
fig = _plt.figure(figsize=properties['default_fig_size'])
ax = fig.add_subplot(111)
ax.plot(unit_conversion(time[StartIndex:EndIndex], unit_prefix),
self.voltage[StartIndex:EndIndex])
ax.set_xlabel("time ({})".format(units))
ax.set_ylabel("voltage (V)")
ax.set_xlim([timeStart, timeEnd])
if show_fig:
_plt.show()
return fig, ax |
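`plot_time_data` strips the trailing 's' from `units` and hands the remaining SI prefix to a `unit_conversion` helper that is also not shown here. A plausible sketch, under the assumption that it simply rescales by a prefix factor:

```python
import numpy as np

# Assumed SI prefix factors; the library's unit_conversion may support more.
_SI_PREFIXES = {'': 1.0, 'k': 1e3, 'M': 1e6, 'G': 1e9,
                'm': 1e-3, 'u': 1e-6, 'n': 1e-9, 'p': 1e-12}

def unit_conversion(values, prefix):
    """Express base-unit values in the prefixed unit, e.g. seconds -> ms."""
    return np.asarray(values) / _SI_PREFIXES[prefix]
```

With this reading, `unit_conversion(time, 'm')` turns seconds into milliseconds for the x axis.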
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot_PSD(self, xlim=None, units="kHz", show_fig=True, timeStart=None, timeEnd=None, *args, **kwargs):
""" plot the pulse spectral density. Parameters xlim : array_like, optional The x limits of the plotted PSD [LowerLimit, UpperLimit] Default value is [0, SampleFreq/2] units : string, optional Units of frequency to plot on the x axis - defaults to kHz show_fig : bool, optional If True runs plt.show() before returning figure if False it just returns the figure object. (the default is True, it shows the figure) Returns ------- fig : matplotlib.figure.Figure object The figure object created ax : matplotlib.axes.Axes object The subplot object created """ |
# self.get_PSD()
if timeStart is None and timeEnd is None:
freqs = self.freqs
PSD = self.PSD
else:
freqs, PSD = self.get_PSD(timeStart=timeStart, timeEnd=timeEnd)
unit_prefix = units[:-2]
if xlim is None:
xlim = [0, unit_conversion(self.SampleFreq/2, unit_prefix)]
fig = _plt.figure(figsize=properties['default_fig_size'])
ax = fig.add_subplot(111)
ax.semilogy(unit_conversion(freqs, unit_prefix), PSD, *args, **kwargs)
ax.set_xlabel("Frequency ({})".format(units))
ax.set_xlim(xlim)
ax.grid(which="major")
ax.set_ylabel("$S_{xx}$ ($V^2/Hz$)")
if show_fig:
_plt.show()
return fig, ax |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calc_area_under_PSD(self, lowerFreq, upperFreq):
""" Sums the area under the PSD from lowerFreq to upperFreq. Parameters lowerFreq : float The lower limit of frequency to sum from upperFreq : float The upper limit of frequency to sum to Returns ------- AreaUnderPSD : float The area under the PSD from lowerFreq to upperFreq """ |
Freq_startAreaPSD = take_closest(self.freqs, lowerFreq)
index_startAreaPSD = int(_np.where(self.freqs == Freq_startAreaPSD)[0][0])
Freq_endAreaPSD = take_closest(self.freqs, upperFreq)
index_endAreaPSD = int(_np.where(self.freqs == Freq_endAreaPSD)[0][0])
AreaUnderPSD = sum(self.PSD[index_startAreaPSD: index_endAreaPSD])
return AreaUnderPSD |
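Note that the raw bin sum above has units of V²/Hz times a bin count, not V²; multiplying by the frequency spacing gives a physical area. A standalone variant using the trapezoid rule (a sketch, not the library's method):

```python
import numpy as np

def area_under_psd(freqs, psd, lower, upper):
    """Integrate a PSD between two frequencies with the trapezoid rule.

    Unlike a raw bin sum, this accounts for the frequency spacing,
    so the result has units of V^2.
    """
    freqs = np.asarray(freqs, dtype=float)
    psd = np.asarray(psd, dtype=float)
    mask = (freqs >= lower) & (freqs <= upper)
    f, p = freqs[mask], psd[mask]
    # trapezoid rule over the selected bins
    return float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(f)))
```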
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_fit_auto(self, CentralFreq, MaxWidth=15000, MinWidth=500, WidthIntervals=500, MakeFig=True, show_fig=True, silent=False):
""" Tries a range of regions to search for peaks and runs the one with the least error and returns the parameters with the least errors. Parameters CentralFreq : float The central frequency to use for the fittings. MaxWidth : float, optional The maximum bandwidth to use for the fitting of the peaks. MinWidth : float, optional The minimum bandwidth to use for the fitting of the peaks. WidthIntervals : float, optional The intervals to use in going between the MaxWidth and MinWidth. show_fig : bool, optional Whether to plot and show the final (best) fitting or not. Returns ------- OmegaTrap : ufloat Trapping frequency A : ufloat A parameter Gamma : ufloat Gamma, the damping parameter fig : matplotlib.figure.Figure object The figure object created showing the PSD of the data with the fit ax : matplotlib.axes.Axes object The axes object created showing the PSD of the data with the fit """ |
MinTotalSumSquaredError = _np.inf
for Width in _np.arange(MaxWidth, MinWidth - WidthIntervals, -WidthIntervals):
try:
OmegaTrap, A, Gamma,_ , _ \
= self.get_fit_from_peak(
CentralFreq - Width / 2,
CentralFreq + Width / 2,
silent=True,
MakeFig=False,
show_fig=False)
except RuntimeError:
_warnings.warn("Couldn't find good fit with width {}".format(
Width), RuntimeWarning)
val = _uncertainties.ufloat(_np.nan, _np.nan)
OmegaTrap = val
A = val
Gamma = val
TotalSumSquaredError = (
A.std_dev / A.n)**2 + (Gamma.std_dev / Gamma.n)**2 + (OmegaTrap.std_dev / OmegaTrap.n)**2
#print("totalError: {}".format(TotalSumSquaredError))
if TotalSumSquaredError < MinTotalSumSquaredError:
MinTotalSumSquaredError = TotalSumSquaredError
BestWidth = Width
if not silent:
print("found best")
try:
OmegaTrap, A, Gamma, fig, ax \
= self.get_fit_from_peak(CentralFreq - BestWidth / 2,
CentralFreq + BestWidth / 2,
MakeFig=MakeFig,
show_fig=show_fig,
silent=silent)
except UnboundLocalError:
raise ValueError("A best width was not found, try increasing the number of widths tried by either decreasing WidthIntervals or MinWidth or increasing MaxWidth")
OmegaTrap = self.OmegaTrap
A = self.A
Gamma = self.Gamma
self.FTrap = OmegaTrap/(2*pi)
return OmegaTrap, A, Gamma, fig, ax |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calc_gamma_from_variance_autocorrelation_fit(self, NumberOfOscillations, GammaGuess=None, silent=False, MakeFig=True, show_fig=True):
""" Calculates the total damping, i.e. Gamma, by splitting the time trace into chunks of NumberOfOscillations oscillations and calculated the variance of each of these chunks. This array of varainces is then used for the autocorrleation. The autocorrelation is fitted with an exponential relaxation function and the function returns the parameters with errors. Parameters NumberOfOscillations : int The number of oscillations each chunk of the timetrace used to calculate the variance should contain. GammaGuess : float, optional Inital guess for BigGamma (in radians) Silent : bool, optional Whether it prints the values fitted or is silent. MakeFig : bool, optional Whether to construct and return the figure object showing the fitting. defaults to True show_fig : bool, optional Whether to show the figure object when it has been created. defaults to True Returns ------- Gamma : ufloat Big Gamma, the total damping in radians fig : matplotlib.figure.Figure object The figure object created showing the autocorrelation of the data with the fit ax : matplotlib.axes.Axes object The axes object created showing the autocorrelation of the data with the fit """ |
try:
SplittedArraySize = int(self.SampleFreq/self.FTrap.n) * NumberOfOscillations
except AttributeError: # self.FTrap only exists after a spectrum fit
raise ValueError('You forgot to do the spectrum fit to specify self.FTrap exactly.')
VoltageArraySize = len(self.voltage)
SnippetsVariances = _np.var(self.voltage[:VoltageArraySize-_np.mod(VoltageArraySize,SplittedArraySize)].reshape(-1,SplittedArraySize),axis=1)
autocorrelation = calc_autocorrelation(SnippetsVariances)
time = _np.array(range(len(autocorrelation))) * SplittedArraySize / self.SampleFreq
if GammaGuess is None:
Gamma_Initial = (time[4]-time[0])/(autocorrelation[0]-autocorrelation[4])
else:
Gamma_Initial = GammaGuess
if MakeFig:
Params, ParamsErr, fig, ax = fit_autocorrelation(
autocorrelation, time, Gamma_Initial, MakeFig=MakeFig, show_fig=show_fig)
else:
Params, ParamsErr, _ , _ = fit_autocorrelation(
autocorrelation, time, Gamma_Initial, MakeFig=MakeFig, show_fig=show_fig)
if not silent:
print("\n")
print(
"Big Gamma: {} +- {}% ".format(Params[0], ParamsErr[0] / Params[0] * 100))
Gamma = _uncertainties.ufloat(Params[0], ParamsErr[0])
if MakeFig:
return Gamma, fig, ax
else:
return Gamma, None, None |
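Both damping estimates above feed a `calc_autocorrelation` helper that is defined elsewhere in the library. A common normalized implementation, given here as an assumed sketch (the library's version may normalize differently):

```python
import numpy as np

def calc_autocorrelation(signal):
    """Normalized autocorrelation of a 1D signal.

    Sketch of the helper assumed by the Gamma fits above: subtract the
    mean, correlate the signal with itself, keep non-negative lags, and
    normalize so the zero-lag value is 1.
    """
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()
    corr = np.correlate(signal, signal, mode='full')[len(signal) - 1:]
    return corr / corr[0]
```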
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calc_gamma_from_energy_autocorrelation_fit(self, GammaGuess=None, silent=False, MakeFig=True, show_fig=True):
""" Calculates the total damping, i.e. Gamma, by calculating the energy each point in time. This energy array is then used for the autocorrleation. The autocorrelation is fitted with an exponential relaxation function and the function returns the parameters with errors. Parameters GammaGuess : float, optional Inital guess for BigGamma (in radians) silent : bool, optional Whether it prints the values fitted or is silent. MakeFig : bool, optional Whether to construct and return the figure object showing the fitting. defaults to True show_fig : bool, optional Whether to show the figure object when it has been created. defaults to True Returns ------- Gamma : ufloat Big Gamma, the total damping in radians fig : matplotlib.figure.Figure object The figure object created showing the autocorrelation of the data with the fit ax : matplotlib.axes.Axes object The axes object created showing the autocorrelation of the data with the fit """ |
autocorrelation = calc_autocorrelation(self.voltage[:-1]**2*self.OmegaTrap.n**2+(_np.diff(self.voltage)*self.SampleFreq)**2)
time = self.time.get_array()[:len(autocorrelation)]
if GammaGuess is None:
Gamma_Initial = (time[4]-time[0])/(autocorrelation[0]-autocorrelation[4])
else:
Gamma_Initial = GammaGuess
if MakeFig:
Params, ParamsErr, fig, ax = fit_autocorrelation(
autocorrelation, time, Gamma_Initial, MakeFig=MakeFig, show_fig=show_fig)
else:
Params, ParamsErr, _ , _ = fit_autocorrelation(
autocorrelation, time, Gamma_Initial, MakeFig=MakeFig, show_fig=show_fig)
if not silent:
print("\n")
print(
"Big Gamma: {} +- {}% ".format(Params[0], ParamsErr[0] / Params[0] * 100))
Gamma = _uncertainties.ufloat(Params[0], ParamsErr[0])
if MakeFig:
return Gamma, fig, ax
else:
return Gamma, None, None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def extract_parameters(self, P_mbar, P_Error, method="chang"):
""" Extracts the Radius, mass and Conversion factor for a particle. Parameters P_mbar : float The pressure in mbar when the data was taken. P_Error : float The error in the pressure value (as a decimal e.g. 15% = 0.15) Returns ------- Radius : uncertainties.ufloat The radius of the particle in m Mass : uncertainties.ufloat The mass of the particle in kg ConvFactor : uncertainties.ufloat The conversion factor between volts/m """ |
[R, M, ConvFactor], [RErr, MErr, ConvFactorErr] = \
extract_parameters(P_mbar, P_Error,
self.A.n, self.A.std_dev,
self.Gamma.n, self.Gamma.std_dev,
method = method)
self.Radius = _uncertainties.ufloat(R, RErr)
self.Mass = _uncertainties.ufloat(M, MErr)
self.ConvFactor = _uncertainties.ufloat(ConvFactor, ConvFactorErr)
return self.Radius, self.Mass, self.ConvFactor |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_value(self, ColumnName, RunNo):
""" Retreives the value of the collumn named ColumnName associated with a particular run number. Parameters ColumnName : string The name of the desired org-mode table's collumn RunNo : int The run number for which to retreive the pressure value Returns ------- Value : float The value for the column's name and associated run number """ |
Value = float(self.ORGTableData[self.ORGTableData.RunNo == '{}'.format(
RunNo)][ColumnName])
return Value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def steady_state_potential(xdata,HistBins=100):
""" Calculates the steady state potential. Parameters xdata : ndarray Position data for a degree of freedom HistBins : int Number of bins to use for histogram of xdata. Number of position points at which the potential is calculated. Returns ------- position : ndarray positions at which potential has been calculated potential : ndarray value of potential at the positions above """ |
import numpy as np
pops, bins = np.histogram(xdata, HistBins) # one histogram call returns both counts and edges
bins = bins[0:-1]
bins = bins + np.mean(np.diff(bins)) / 2 # shift left edges to bin centres
#normalise pops
pops=pops/float(np.sum(pops))
return bins,-np.log(pops) |
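For a harmonic trap the steady-state position distribution is Gaussian, so the recovered potential -log p(x) should come out approximately quadratic. A quick self-contained check, using a local copy of the function (an illustration, not the library's test):

```python
import numpy as np

def steady_state_potential(xdata, HistBins=100):
    pops, bins = np.histogram(xdata, HistBins)
    bins = bins[:-1] + np.mean(np.diff(bins)) / 2  # bin centres
    pops = pops / float(np.sum(pops))
    return bins, -np.log(pops)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 200_000)  # harmonic trap positions, sigma = 1

position, potential = steady_state_potential(x)
centre = np.abs(position) < 1.5    # restrict to well-populated bins
coeffs = np.polyfit(position[centre], potential[centre], 2)
# leading coefficient should be close to 1/(2 sigma^2) = 0.5
```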
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def finished(finished_status, update_interval, status_key, edit_at_key):
""" Create dict query for pymongo that getting all finished task. :param finished_status: int, status code that greater or equal than this will be considered as finished. :param update_interval: int, the record will be updated every x seconds. :param status_key: status code field key, support dot notation. :param edit_at_key: edit_at time field key, support dot notation. :return: dict, a pymongo filter. **中文文档** 状态码大于某个值, 并且, 更新时间在最近一段时间以内. """ |
return {
status_key: {"$gte": finished_status},
edit_at_key: {
"$gte": x_seconds_before_now(update_interval),
},
} |
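`x_seconds_before_now` is assumed to return a datetime that many seconds in the past; under that assumption the generated filter can be exercised standalone (a sketch with a stand-in helper, not the library's code):

```python
from datetime import datetime, timedelta

def x_seconds_before_now(seconds):
    # Stand-in for the helper assumed above: naive UTC "now" minus `seconds`.
    return datetime.utcnow() - timedelta(seconds=seconds)

def finished(finished_status, update_interval, status_key, edit_at_key):
    return {
        status_key: {"$gte": finished_status},
        edit_at_key: {"$gte": x_seconds_before_now(update_interval)},
    }

query = finished(50, 3600, "status", "edit_at")
# usable directly as e.g. collection.find(query): documents whose status
# is >= 50 and which were updated within the last hour
```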
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def unfinished(finished_status, update_interval, status_key, edit_at_key):
""" Create dict query for pymongo that getting all unfinished task. :param finished_status: int, status code that less than this will be considered as unfinished. :param update_interval: int, the record will be updated every x seconds. :param status_key: status code field key, support dot notation. :param edit_at_key: edit_at time field key, support dot notation. :return: dict, a pymongo filter. **中文文档** 状态码小于某个值, 或者, 现在距离更新时间已经超过一定阈值. """ |
return {
"$or": [
{status_key: {"$lt": finished_status}},
{edit_at_key: {"$lt": x_seconds_before_now(update_interval)}},
]
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getCommandLine(self):
"""Insert the precursor and change directory commands """ |
commandLine = self.precursor + self.sep if self.precursor else ''
commandLine += self.cd + ' ' + self.path + self.sep if self.path else ''
commandLine += PosixCommand.getCommandLine(self)
return commandLine |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _policy_psets(policy_instances):
"""Find all permission sets making use of all of a list of policy_instances. The input is an array of policy instances. """ |
if len(policy_instances) == 0:
# Special case: find any permission sets that don't have
# associated policy instances.
return PermissionSet.objects.filter(policyinstance__isnull=True)
else:
return PermissionSet.objects.filter(
policyinstance__policy__in=policy_instances).distinct() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_permission_set_tree(user):
""" Helper to return cached permission set tree from user instance if set, else generates and returns analyzed permission set tree. Does not cache set automatically, that must be done explicitely. """ |
if hasattr(user, CACHED_PSET_PROPERTY_KEY):
return getattr(user, CACHED_PSET_PROPERTY_KEY)
if user.is_authenticated():
try:
return user.permissionset.first().tree()
except AttributeError:
raise ObjectDoesNotExist
return PermissionSet.objects.get(anonymous_user=True).tree() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ensure_permission_set_tree_cached(user):
""" Helper to cache permission set tree on user instance """ |
if hasattr(user, CACHED_PSET_PROPERTY_KEY):
return
try:
setattr(
user, CACHED_PSET_PROPERTY_KEY, _get_permission_set_tree(user))
except ObjectDoesNotExist: # No permission set
pass |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parsed(self):
"""Get the JSON dictionary object which represents the content. This property is cached and only parses the content once. """ |
if self._parsed is None: # parse once; an `is None` check also caches falsy payloads like {}
self._parsed = json.loads(self.content)
return self._parsed |
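A minimal sketch of this lazy-parse pattern, using a hypothetical holder class; note it guards with `is None`, since a plain truthiness check would re-parse falsy payloads such as `{}` or `[]` on every access:

```python
import json

class Response:
    """Hypothetical holder for a raw JSON payload."""
    def __init__(self, content):
        self.content = content
        self._parsed = None

    @property
    def parsed(self):
        # Parse once, then serve the cached object on later accesses.
        if self._parsed is None:
            self._parsed = json.loads(self.content)
        return self._parsed

r = Response('{"a": 1}')
first = r.parsed
assert r.parsed is first  # second access returns the cached dict
```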
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cleanup_logger(self):
"""Clean up logger to close out file handles. After this is called, writing to self.log will get logs ending up getting discarded. """ |
self.log_handler.close()
self.log.removeHandler(self.log_handler) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_configs(self, release):
""" Update the fedora-atomic.git repositories for a given release """ |
git_repo = release['git_repo']
git_cache = release['git_cache']
if not os.path.isdir(git_cache):
self.call(['git', 'clone', '--mirror', git_repo, git_cache])
else:
self.call(['git', 'fetch', '--all', '--prune'], cwd=git_cache)
git_dir = release['git_dir'] = os.path.join(release['tmp_dir'],
os.path.basename(git_repo))
self.call(['git', 'clone', '-b', release['git_branch'],
git_cache, git_dir])
if release['delete_repo_files']:
for repo_file in glob.glob(os.path.join(git_dir, '*.repo')):
self.log.info('Deleting %s' % repo_file)
os.unlink(repo_file) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def mock_cmd(self, release, *cmd, **kwargs):
"""Run a mock command in the chroot for a given release""" |
fmt = '{mock_cmd}'
if kwargs.get('new_chroot') is True:
fmt += ' --new-chroot'
fmt += ' --configdir={mock_dir}'
return self.call(fmt.format(**release).split() + list(cmd))
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_mock_config(self, release):
"""Dynamically generate our mock configuration""" |
mock_tmpl = pkg_resources.resource_string(__name__, 'templates/mock.mako')
mock_dir = release['mock_dir'] = os.path.join(release['tmp_dir'], 'mock')
mock_cfg = os.path.join(release['mock_dir'], release['mock'] + '.cfg')
os.mkdir(mock_dir)
for cfg in ('site-defaults.cfg', 'logging.ini'):
os.symlink('/etc/mock/%s' % cfg, os.path.join(mock_dir, cfg))
with open(mock_cfg, 'w') as cfg:
mock_out = Template(mock_tmpl).render(**release)
self.log.debug('Writing %s:\n%s', mock_cfg, mock_out)
cfg.write(mock_out) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def mock_chroot(self, release, cmd, **kwargs):
"""Run a commend in the mock container for a release""" |
return self.mock_cmd(release, '--chroot', cmd, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_repo_files(self, release):
"""Dynamically generate our yum repo configuration""" |
repo_tmpl = pkg_resources.resource_string(__name__, 'templates/repo.mako')
repo_file = os.path.join(release['git_dir'], '%s.repo' % release['repo'])
with open(repo_file, 'w') as repo:
repo_out = Template(repo_tmpl).render(**release)
self.log.debug('Writing repo file %s:\n%s', repo_file, repo_out)
repo.write(repo_out)
self.log.info('Wrote repo configuration to %s', repo_file) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ostree_init(self, release):
"""Initialize the OSTree for a release""" |
out = release['output_dir'].rstrip('/')
base = os.path.dirname(out)
if not os.path.isdir(base):
self.log.info('Creating %s', base)
os.makedirs(base, mode=0o755)
if not os.path.isdir(out):
self.mock_chroot(release, release['ostree_init']) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ostree_compose(self, release):
"""Compose the OSTree in the mock container""" |
start = datetime.utcnow()
treefile = os.path.join(release['git_dir'], 'treefile.json')
cmd = release['ostree_compose'] % treefile
with open(treefile, 'w') as tree:
json.dump(release['treefile'], tree)
# Only use new_chroot for the invocation, as --clean and --new-chroot are buggy together right now
out, err, rcode = self.mock_chroot(release, cmd, new_chroot=True)
ref = None
commitid = None
for line in out.split('\n'):
if ' => ' in line:
# This line is the: ref => commitid line
line = line.replace('\n', '')
ref, _, commitid = line.partition(' => ')
self.log.info('rpm-ostree compose complete (%s), ref %s, commitid %s',
datetime.utcnow() - start, ref, commitid)
return ref, commitid |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_ostree_summary(self, release):
"""Update the ostree summary file and return a path to it""" |
self.log.info('Updating the ostree summary for %s', release['name'])
self.mock_chroot(release, release['ostree_summary'])
return os.path.join(release['output_dir'], 'summary') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sync_in(self, release):
"""Sync the canonical repo to our local working directory""" |
tree = release['canonical_dir']
if os.path.exists(tree) and release.get('rsync_in_objs'):
out = release['output_dir']
if not os.path.isdir(out):
self.log.info('Creating %s', out)
os.makedirs(out)
self.call(release['rsync_in_objs'])
self.call(release['rsync_in_rest']) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sync_out(self, release):
"""Sync our tree to the canonical location""" |
if release.get('rsync_out_objs'):
tree = release['canonical_dir']
if not os.path.isdir(tree):
self.log.info('Creating %s', tree)
os.makedirs(tree)
self.call(release['rsync_out_objs'])
self.call(release['rsync_out_rest']) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def call(self, cmd, **kwargs):
"""A simple subprocess wrapper""" |
if isinstance(cmd, basestring):
cmd = cmd.split()
self.log.info('Running %s', cmd)
p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, **kwargs)
out, err = p.communicate()
if out:
self.log.info(out)
if err:
if p.returncode == 0:
self.log.info(err)
else:
self.log.error(err)
if p.returncode != 0:
self.log.error('returncode = %d' % p.returncode)
raise Exception('Command failed with returncode %d: %s' % (p.returncode, cmd))
return out, err, p.returncode |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def intersect(self, other):
""" Determine the interval of overlap between this range and another. :returns: a new Range object representing the overlapping interval, or `None` if the ranges do not overlap. """ |
if not self.overlap(other):
return None
newstart = max(self._start, other.start)
newend = min(self._end, other.end)
return Range(newstart, newend) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def overlap(self, other):
"""Determine whether this range overlaps with another.""" |
if self._start < other.end and self._end > other.start:
return True
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def contains(self, other):
"""Determine whether this range contains another.""" |
return self._start <= other.start and self._end >= other.end |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def transform(self, offset):
""" Shift this range by the specified offset. Note: the resulting range must be a valid interval. """ |
assert self._start + offset > 0, \
('offset {} invalid; resulting range [{}, {}) is '
'undefined'.format(offset, self._start+offset, self._end+offset))
self._start += offset
self._end += offset |
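The interval methods above compose as follows; a minimal half-open [start, end) `Range` carrying the same method bodies (a sketch; the real class likely has more validation):

```python
class Range:
    """Minimal half-open interval [start, end) mirroring the methods above."""
    def __init__(self, start, end):
        assert start < end
        self._start, self._end = start, end

    @property
    def start(self):
        return self._start

    @property
    def end(self):
        return self._end

    def overlap(self, other):
        return self._start < other.end and self._end > other.start

    def contains(self, other):
        return self._start <= other.start and self._end >= other.end

    def intersect(self, other):
        if not self.overlap(other):
            return None  # disjoint ranges have no intersection
        return Range(max(self._start, other.start), min(self._end, other.end))

a, b = Range(0, 10), Range(5, 15)
inter = a.intersect(b)  # the overlapping interval [5, 10)
```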
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run(cls, command, cwd=".", **kwargs):
""" Make a subprocess call, collect its output and returncode. Returns CommandResult instance as ValueObject. """ |
assert isinstance(command, six.string_types)
command_result = CommandResult()
command_result.command = command
use_shell = cls.USE_SHELL
if "shell" in kwargs:
use_shell = kwargs.pop("shell")
# -- BUILD COMMAND ARGS:
if six.PY2 and isinstance(command, six.text_type):
# -- PREPARE-FOR: shlex.split()
# In PY2, shlex.split() requires bytes string (non-unicode).
# In PY3, shlex.split() accepts unicode string.
command = codecs.encode(command, "utf-8")
cmdargs = shlex.split(command)
# -- TRANSFORM COMMAND (optional)
command0 = cmdargs[0]
real_command = cls.COMMAND_MAP.get(command0, None)
if real_command:
cmdargs0 = real_command.split()
cmdargs = cmdargs0 + cmdargs[1:]
preprocessors = cls.PREPROCESSOR_MAP.get(command0)
if preprocessors:
cmdargs = cls.preprocess_command(preprocessors, cmdargs, command, cwd)
# -- RUN COMMAND:
try:
process = subprocess.Popen(cmdargs,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
universal_newlines=True,
shell=use_shell,
cwd=cwd, **kwargs)
out, err = process.communicate()
if six.PY2: # py3: we get unicode strings, py2 not
default_encoding = 'UTF-8'
out = six.text_type(out, process.stdout.encoding or default_encoding)
err = six.text_type(err, process.stderr.encoding or default_encoding)
process.poll()
assert process.returncode is not None
command_result.stdout = out
command_result.stderr = err
command_result.returncode = process.returncode
if cls.DEBUG:
print("shell.cwd={0}".format(kwargs.get("cwd", None)))
print("shell.command: {0}".format(" ".join(cmdargs)))
print("shell.command.output:\n{0};".format(command_result.output))
except OSError as e:
command_result.stderr = u"OSError: %s" % e
command_result.returncode = e.errno
assert e.errno != 0
postprocessors = cls.POSTPROCESSOR_MAP.get(command0)
if postprocessors:
command_result = cls.postprocess_command(postprocessors, command_result)
return command_result |
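Stripped of the Python 2 branches and the command/preprocessor maps, the essence of this run wrapper is split, execute, collect. A Python 3 sketch of that core (not the class's full behavior):

```python
import shlex
import subprocess
import sys

def run(command, cwd="."):
    """Split a command string, execute it, and collect stdout/stderr/returncode."""
    args = shlex.split(command)
    proc = subprocess.run(args, cwd=cwd, capture_output=True, text=True)
    return proc.stdout, proc.stderr, proc.returncode

# Run the current interpreter so the example works regardless of PATH.
out, err, rc = run('{} -c "print(1 + 1)"'.format(shlex.quote(sys.executable)))
```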
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_field_template(self, bound_field, template_name=None):
""" Uses a special field template for widget with multiple inputs. It only applies if no other template than the default one has been defined. """ |
template_name = super().get_field_template(bound_field, template_name)
if (template_name == self.field_template and
isinstance(bound_field.field.widget, (
forms.RadioSelect, forms.CheckboxSelectMultiple))):
return 'tapeforms/fields/foundation_fieldset.html'
return template_name |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def printer(self):
"""Prints PDA state attributes""" |
print " ID " + repr(self.id)
if self.type == 0:
print " Tag: - "
print " Start State - "
elif self.type == 1:
print " Push " + repr(self.sym)
elif self.type == 2:
print " Pop State " + repr(self.sym)
elif self.type == 3:
print " Read State " + repr(self.sym)
elif self.type == 4:
print " Stop State " + repr(self.sym)
for j in self.trans:
if self.trans[j]:
for symbol in self.trans[j]:
print " On Symbol " + repr(symbol) + " Transition To State " + repr(j) |