_id | title | partition | text | language | meta_information
|---|---|---|---|---|---|
q6700 | PipAccelerator.initialize_directories | train | def initialize_directories(self):
"""Automatically create local directories required by pip-accel."""
q6701 | PipAccelerator.clean_source_index | train | def clean_source_index(self):
"""
Clean up broken symbolic links in the local source distribution index.
The purpose of this method requires some context to understand. Let me
preface this by stating that I realize I'm probably overcomplicating
things, but I like to preserve forward / backward compatibility when
possible and I don't feel like dropping everyone's locally cached
source distribution archives without a good reason to do so. With that
out of the way:
- Versions of pip-accel based on pip 1.4.x maintained a local source
distribution index based on a directory containing symbolic links
pointing directly into pip's download cache. When files were removed
from pip's download cache, broken symbolic links remained in
pip-accel's local source distribution index directory. This resulted
in very confusing error messages. To avoid this
:func:`clean_source_index()` cleaned up broken symbolic links
whenever pip-accel was about to invoke pip.
- More recent versions of pip (6.x) no longer support the same style of
download cache that contains source distribution archives that can be
re-used directly by pip-accel. To cope with the changes in pip 6.x
new versions of pip-accel tell pip to download source distribution
archives directly into the local source distribution index directory
maintained by pip-accel.
- It is very reasonable for users of pip-accel to have multiple
versions of pip-accel installed on their system (imagine a dozen
Python virtual environments that won't all be updated at the same
time; this is the situation I always find myself in :-). These
versions of pip-accel will be sharing the same local source
distribution index directory.
- All of this leads up to the local source distribution index directory
containing a mixture of symbolic links and regular files with no
obvious way to atomically and gracefully upgrade the local source
distribution index directory while avoiding fights between old and
new versions of pip-accel :-).
- I could of course switch to storing the new local source distribution
index in a differently named directory (avoiding potential conflicts
between multiple versions of pip-accel) but then I would have to
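The cleanup described above boils down to walking the index directory and unlinking every symbolic link whose target no longer exists. A minimal standalone sketch of that idea (the helper name is hypothetical, not pip-accel's actual implementation):

```python
import os

def clean_broken_symlinks(directory):
    """Remove symbolic links in `directory` whose targets no longer exist."""
    removed = []
    for entry in os.listdir(directory):
        path = os.path.join(directory, entry)
        # os.path.islink() is True for the link itself; os.path.exists()
        # follows the link, so it is False when the target is gone.
        if os.path.islink(path) and not os.path.exists(path):
            os.unlink(path)
            removed.append(entry)
    return removed
```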
q6702 | PipAccelerator.install_from_arguments | train | def install_from_arguments(self, arguments, **kw):
"""
Download, unpack, build and install the specified requirements.
This function is a simple wrapper for :func:`get_requirements()`,
:func:`install_requirements()` and :func:`cleanup_temporary_directories()`
that implements the default behavior of the pip accelerator. If you're
extending or embedding pip-accel you may want to call the underlying
methods instead.
If the requirement set includes wheels and ``setuptools >= 0.8`` is not
yet installed, it will be added to the requirement set and installed
together with the other requirement(s) in order to enable the usage of
distributions installed from wheels (their metadata is different).
:param arguments: The command line arguments to ``pip install ..`` (a
list of strings).
:param kw: Any keyword arguments are passed on to
:func:`install_requirements()`.
:returns: The result of :func:`install_requirements()`.
"""
try:
requirements = self.get_requirements(arguments, use_wheels=self.arguments_allow_wheels(arguments))
have_wheels =
q6703 | PipAccelerator.get_requirements | train | def get_requirements(self, arguments, max_retries=None, use_wheels=False):
"""
Use pip to download and unpack the requested source distribution archives.
:param arguments: The command line arguments to ``pip install ...`` (a
list of strings).
:param max_retries: The maximum number of times that pip will be asked
to download distribution archives (this helps to
deal with intermittent failures). If this is
:data:`None` then :attr:`~.Config.max_retries` is
used.
:param use_wheels: Whether pip and pip-accel are allowed to use wheels_
(:data:`False` by default for backwards compatibility
with callers that use pip-accel as a Python API).
.. warning:: Requirements which are already installed are not included
in the result. If this breaks your use case consider using
pip's ``--ignore-installed`` option.
"""
arguments = self.decorate_arguments(arguments)
# Demote hash sum mismatch log messages from CRITICAL to DEBUG (hiding
# implementation details from users unless they want to see them).
with DownloadLogFilter():
with SetupRequiresPatch(self.config, self.eggs_links):
# Use a new build directory for each run of get_requirements().
self.create_build_directory()
# Check whether -U or --upgrade was given.
if any(match_option(a, '-U', '--upgrade') for a in arguments):
logger.info("Checking index(es) for new version (-U or --upgrade was given) ..")
else:
# If -U or --upgrade wasn't given and all requirements can be
# satisfied using the archives in pip-accel's local source
# index we don't need pip to connect to PyPI looking for new
# versions (that will just slow us down).
try:
return self.unpack_source_dists(arguments, use_wheels=use_wheels)
q6704 | PipAccelerator.unpack_source_dists | train | def unpack_source_dists(self, arguments, use_wheels=False):
"""
Find and unpack local source distributions and discover their metadata.
:param arguments: The command line arguments to ``pip install ...`` (a
list of strings).
:param use_wheels: Whether pip and pip-accel are allowed to use wheels_
(:data:`False` by default for backwards compatibility
with callers that use pip-accel as a Python API).
:returns: A list of :class:`pip_accel.req.Requirement` objects.
:raises: Any exceptions raised by pip, for example
:exc:`pip.exceptions.DistributionNotFound` when not all
q6705 | PipAccelerator.download_source_dists | train | def download_source_dists(self, arguments, use_wheels=False):
"""
Download missing source distributions.
:param arguments: The command line arguments to ``pip install ...`` (a
list of strings).
:param use_wheels: Whether pip and pip-accel are allowed to use wheels_
(:data:`False` by default for backwards compatibility
with callers that use pip-accel as a Python API).
:raises: Any exceptions raised by pip.
q6706 | PipAccelerator.transform_pip_requirement_set | train | def transform_pip_requirement_set(self, requirement_set):
"""
Transform pip's requirement set into one that `pip-accel` can work with.
:param requirement_set: The :class:`pip.req.RequirementSet` object
reported by pip.
:returns: A list of :class:`pip_accel.req.Requirement` objects.
This function converts the :class:`pip.req.RequirementSet` object
reported by pip into a list of :class:`pip_accel.req.Requirement`
objects.
"""
filtered_requirements = []
for requirement in requirement_set.requirements.values():
# The `satisfied_by' property is set by pip when a requirement is
# already satisfied (i.e. a version of the package that satisfies
# the requirement is already installed) and -I, --ignore-installed
# is not used. We filter out these requirements because pip never
# unpacks distributions for these requirements, so pip-accel can't
# do anything useful with such requirements.
if requirement.satisfied_by:
continue
q6707 | PipAccelerator.clear_build_directory | train | def clear_build_directory(self):
"""Clear the build directory where pip unpacks the source distribution archives."""
stat = os.stat(self.build_directory)
q6708 | PipAccelerator.cleanup_temporary_directories | train | def cleanup_temporary_directories(self):
"""Delete the build directories and any temporary directories created by pip."""
while self.build_directories:
shutil.rmtree(self.build_directories.pop())
q6709 | DownloadLogFilter.filter | train | def filter(self, record):
"""Change the severity of selected log records."""
if isinstance(record.msg, basestring):
message = record.msg.lower()
if all(kw in message for kw in self.KEYWORDS):
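The `DownloadLogFilter` pattern above is a `logging.Filter` that rewrites the severity of matching records rather than dropping them. A self-contained sketch of the same idea (the keyword list here is illustrative, not pip-accel's actual `KEYWORDS`):

```python
import logging

class DemotingFilter(logging.Filter):
    """Demote records whose message contains all keywords to DEBUG."""

    KEYWORDS = ("hash", "mismatch")

    def filter(self, record):
        if isinstance(record.msg, str):
            message = record.msg.lower()
            if all(kw in message for kw in self.KEYWORDS):
                # Rewrite the record's severity in place; returning True
                # keeps the (now demoted) record in the stream.
                record.levelno = logging.DEBUG
                record.levelname = "DEBUG"
        return True
```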
q6710 | main | train | def main():
"""The command line interface for the ``pip-accel`` program."""
arguments = sys.argv[1:]
# If no arguments are given, the help text of pip-accel is printed.
if not arguments:
usage()
sys.exit(0)
# If no install subcommand is given we pass the command line straight
# to pip without any changes and exit immediately afterwards.
if 'install' not in arguments:
# This will not return.
os.execvp('pip', ['pip'] + arguments)
else:
arguments = [arg for arg in arguments if arg != 'install']
config = Config()
# Initialize logging output.
coloredlogs.install(
fmt=config.log_format,
level=config.log_verbosity,
)
# Adjust verbosity based on -v, -q, --verbose, --quiet options.
for argument in list(arguments):
if match_option(argument, '-v', '--verbose'):
coloredlogs.increase_verbosity()
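The dispatch logic in `main()` (pass non-`install` commands straight to pip, otherwise strip the subcommand and accelerate the rest) can be isolated into a small testable helper. This is a sketch of the control flow only, not pip-accel's actual code:

```python
def split_command(argv):
    """Classify a pip-accel command line.

    Returns ('passthrough', argv) when no 'install' subcommand is given,
    or ('accelerate', remaining_args) with the subcommand stripped.
    """
    if 'install' not in argv:
        return ('passthrough', argv)
    # Drop every occurrence of the 'install' subcommand itself;
    # all other arguments are forwarded to the accelerated path.
    return ('accelerate', [arg for arg in argv if arg != 'install'])
```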
q6711 | LocalCacheBackend.get | train | def get(self, filename):
"""
Check if a distribution archive exists in the local cache.
:param filename: The filename of the distribution archive (a string).
:returns: The pathname of a distribution archive on the local file
system or :data:`None`.
"""
pathname = os.path.join(self.config.binary_cache, filename)
q6712 | LocalCacheBackend.put | train | def put(self, filename, handle):
"""
Store a distribution archive in the local cache.
:param filename: The filename of the distribution archive (a string).
:param handle: A file-like object that provides access to the
distribution archive.
"""
file_in_cache = os.path.join(self.config.binary_cache, filename)
logger.debug("Storing distribution archive in local
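A filesystem cache backend of this shape reduces to two operations: `get()` checks for the file and returns its path or `None`, and `put()` streams the handle into the cache directory. A self-contained sketch (the temporary-file-then-rename detail is an assumption for safety, not the verified pip-accel behavior):

```python
import os
import shutil

class LocalCache:
    def __init__(self, directory):
        self.directory = directory
        os.makedirs(directory, exist_ok=True)

    def get(self, filename):
        """Return the cached file's pathname, or None if it is not cached."""
        pathname = os.path.join(self.directory, filename)
        return pathname if os.path.isfile(pathname) else None

    def put(self, filename, handle):
        """Copy a file-like object into the cache under `filename`."""
        pathname = os.path.join(self.directory, filename)
        # Write to a temporary name first so readers never observe a
        # partially written archive, then rename into place.
        tmp = pathname + '.tmp'
        with open(tmp, 'wb') as fh:
            shutil.copyfileobj(handle, fh)
        os.rename(tmp, pathname)
```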
q6713 | MultiEmailField.to_python | train | def to_python(self, value):
"Normalize data to a list of strings."
# Return None if no input was given.
if not value:
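For context, `to_python` on a multi-email field normalizes raw form input into a list of stripped addresses. A hedged sketch of that normalization, following the in-source comment that no input yields `None` (the comma delimiter is an assumption; the real field may split on newlines instead):

```python
def normalize_emails(value, delimiter=","):
    """Normalize delimiter-separated input into a list of addresses."""
    if not value:
        return None  # the field's comment documents: return None for no input
    if isinstance(value, (list, tuple)):
        return list(value)  # already a sequence of addresses
    return [chunk.strip() for chunk in value.split(delimiter) if chunk.strip()]
```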
q6714 | MultiEmailWidget.prep_value | train | def prep_value(self, value):
"""Prepare the value before the widget is rendered."""
if value in MULTI_EMAIL_FIELD_EMPTY_VALUES:
return ""
elif isinstance(value, six.string_types):
return
q6715 | tagged | train | def tagged(*tags: Tags) -> Callable:
"""
Decorator for adding a label to the process.
These labels are applied to any child Processes produced by the decorated event.
"""
global GREENSIM_TAG_ATTRIBUTE
def hook(event: Callable):
def wrapper(*args, **kwargs):
q6716 | select | train | def select(*signals: Signal, **kwargs) -> List[Signal]:
"""
Allows the current process to wait for multiple concurrent signals. Waits until one of the signals turns on, at
which point this signal is returned.
:param timeout:
If this parameter is not ``None``, it is taken as a delay at the end of which the process times out, and
stops waiting on the set of :py:class:`Signal`s. In such a situation, a :py:class:`Timeout` exception is raised
on the process.
"""
class CleanUp(Interrupt):
pass
timeout = kwargs.get("timeout", None)
if not isinstance(timeout, (float, int, type(None))):
q6717 | Simulator.add_in | train | def add_in(self, delay: float, fn_process: Callable, *args: Any, **kwargs: Any) -> 'Process':
"""
Adds a process to the simulation, which is made to start after the given delay in simulated time.
See method add() for more details.
"""
process = Process(self, fn_process, self._gr)
q6718 | Simulator.add_at | train | def add_at(self, moment: float, fn_process: Callable, *args: Any, **kwargs: Any) -> 'Process':
"""
Adds a process to the simulation, which is made to start at the given exact time on the simulated clock. Note
that times in the past when compared to the current moment
q6719 | Simulator.step | train | def step(self) -> None:
"""
Runs a single event of the simulation.
"""
event = heappop(self._events)
q6720 | Simulator.stop | train | def stop(self) -> None:
"""
Stops the running simulation once the current event is done executing.
"""
if self.is_running:
if _logger is not None:
q6721 | Simulator._clear | train | def _clear(self) -> None:
"""
Resets the internal state of the simulator, and sets the simulated clock back to 0.0. This discards all
outstanding events and tears down hanging process instances.
"""
for _, event, _, _ in self.events():
q6722 | Process.current | train | def current() -> 'Process':
"""
Returns the instance of the process that is executing at the current moment.
"""
curr = greenlet.getcurrent()
if not isinstance(curr, Process):
q6723 | Signal.turn_on | train | def turn_on(self) -> "Signal":
"""
Turns on the signal. If processes are waiting, they are all resumed. This may be invoked from any code.
Remark that while processes are simultaneously resumed in simulated time, they are effectively resumed in the
sequence corresponding to the queue discipline. Therefore, if one of the resumed processes turns the signal back
off, remaining resumed processes join back the queue. If the queue discipline is not monotonic (for instance,
q6724 | Signal.turn_off | train | def turn_off(self) -> "Signal":
"""
Turns off the signal. This may be invoked from any code.
"""
if _logger is not None:
q6725 | Signal.wait | train | def wait(self, timeout: Optional[float] = None) -> None:
"""
Makes the current process wait for the signal. If it is closed, it will join the signal's queue.
:param timeout:
If this parameter is not ``None``, it is taken as a delay at the end of which the process times out, and
stops waiting for the :py:class:`Signal`. In such a
q6726 | Resource.take | train | def take(self, num_instances: int = 1, timeout: Optional[float] = None) -> None:
"""
The current process reserves a certain number of instances. If there are not enough instances available, the
process is made to join a queue. When this method returns, the process holds the instances it has requested to
take.
:param num_instances:
Number of resource instances to take.
:param timeout:
If this parameter is not ``None``, it is taken as a delay at the end of which the process times out, and
leaves the queue forcibly. In such a situation, a :py:class:`Timeout` exception is raised on the process.
"""
if num_instances < 1:
raise ValueError(f"Process must request at least 1 instance; here requested {num_instances}.")
if num_instances > self.num_instances_total:
raise ValueError(
f"Process must request at most {self.num_instances_total} instances; here requested {num_instances}."
)
if _logger is not None:
self._log(INFO, "take", num_instances=num_instances, free=self.num_instances_free)
proc = Process.current()
q6727 | Resource.release | train | def release(self, num_instances: int = 1) -> None:
"""
The current process releases instances it has previously taken. It may thus release fewer than it has taken.
These released instances become free. If the total number of free instances then satisfy the request of the top
process of the waiting queue, it is popped off the queue and resumed.
"""
proc = Process.current()
error_format = "Process %s holds %s instances, but requests to release more (%s)"
if self._usage.get(proc, 0) > 0:
if num_instances > self._usage[proc]:
raise ValueError(
error_format % (proc.local.name, self._usage[proc], num_instances)
)
self._usage[proc] -= num_instances
self._num_instances_free += num_instances
if _logger is not None:
self._log(
INFO,
"release",
num_instances=num_instances,
keeping=self._usage[proc],
| python | {
"resource": ""
} |
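Stripped of the simulation machinery, `take`/`release` is per-process bookkeeping over a `_usage` map plus a free-instance counter. A plain-Python sketch of just that accounting (no queueing or timeouts, and a plain string stands in for a greensim `Process`):

```python
class ResourceAccounting:
    def __init__(self, num_instances_total):
        self.num_instances_total = num_instances_total
        self.num_instances_free = num_instances_total
        self._usage = {}  # process name -> instances currently held

    def take(self, proc, num_instances=1):
        if not 1 <= num_instances <= self.num_instances_total:
            raise ValueError("invalid instance count: %d" % num_instances)
        if num_instances > self.num_instances_free:
            # The real Resource would queue the process here instead.
            raise RuntimeError("would block: not enough free instances")
        self._usage[proc] = self._usage.get(proc, 0) + num_instances
        self.num_instances_free -= num_instances

    def release(self, proc, num_instances=1):
        held = self._usage.get(proc, 0)
        if num_instances > held:
            raise ValueError(
                "Process %s holds %s instances, but requests to release "
                "more (%s)" % (proc, held, num_instances))
        self._usage[proc] = held - num_instances
        self.num_instances_free += num_instances
```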
q6728 | capture_print | train | def capture_print(file_dest_maybe: Optional[IO] = None):
"""Progress capture that writes updated metrics to an interactive terminal."""
file_dest: IO = file_dest_maybe or sys.stderr
def _print_progress(progress_min: float, rt_remaining: float, _mc: MeasureComparison) -> None:
nonlocal file_dest
percent_progress = progress_min * 100.0
time_remaining,
q6729 | calcTm | train | def calcTm(seq, mv_conc=50, dv_conc=0, dntp_conc=0.8, dna_conc=50,
max_nn_length=60, tm_method='santalucia',
salt_corrections_method='santalucia'):
''' Return the tm of `seq` as a float.
'''
tm_meth = _tm_methods.get(tm_method)
if tm_meth is None:
raise ValueError('{} is not a valid tm calculation method'.format(
tm_method))
salt_meth = _salt_corrections_methods.get(salt_corrections_method)
if salt_meth is None:
raise ValueError('{} is not a valid salt correction method'.format(
salt_corrections_method))
# For whatever reason mv_conc and dna_conc have to be ints
args = [pjoin(PRIMER3_HOME,
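As a point of reference for what `calcTm` computes: the simplest melting-temperature estimate for short oligos is the Wallace rule, Tm = 2(A+T) + 4(G+C) in degrees C. `calcTm` itself uses far more accurate nearest-neighbor methods via the `oligotm` executable; the sketch below shows only the rough rule, for orientation:

```python
def wallace_tm(seq):
    """Rough Tm (deg C) for short oligos: 2*(A+T) + 4*(G+C) (Wallace rule)."""
    seq = seq.upper()
    at = seq.count('A') + seq.count('T')
    gc = seq.count('G') + seq.count('C')
    return 2 * at + 4 * gc
```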
q6730 | _parse_ntthal | train | def _parse_ntthal(ntthal_output):
''' Helper method that uses regex to parse ntthal output. '''
parsed_vals = re.search(_ntthal_re, ntthal_output)
return THERMORESULT(
True, # Structure found
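`_parse_ntthal` pulls the dS/dH/dG/Tm numbers out of ntthal's text report with a single regex. The real `_ntthal_re` is not shown above, so this sketch uses a hypothetical pattern against a made-up report line in the same spirit:

```python
import re
from collections import namedtuple

THERMORESULT = namedtuple('THERMORESULT', ['found', 'ds', 'dh', 'dg', 'tm'])

# Hypothetical pattern: the real ntthal output formatting may differ.
_ntthal_re = re.compile(
    r'dS\s*=\s*(\S+)\s+dH\s*=\s*(\S+)\s+dG\s*=\s*(\S+)\s+t\s*=\s*(\S+)')

def parse_ntthal(output):
    """Parse ntthal-style output into a THERMORESULT, or None."""
    match = re.search(_ntthal_re, output)
    if match is None:
        return None  # no structure / complex could be computed
    ds, dh, dg, tm = (float(v) for v in match.groups())
    return THERMORESULT(True, ds, dh, dg, tm)
```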
q6731 | calcThermo | train | def calcThermo(seq1, seq2, calc_type='ANY', mv_conc=50, dv_conc=0,
dntp_conc=0.8, dna_conc=50, temp_c=37, max_loop=30,
temp_only=False):
""" Main subprocess wrapper for calls to the ntthal executable.
Returns a named tuple with tm, ds, dh, and dg values or None if no
structure / complex could be computed.
"""
args = [pjoin(PRIMER3_HOME, 'ntthal'),
q6732 | calcHairpin | train | def calcHairpin(seq, mv_conc=50, dv_conc=0, dntp_conc=0.8, dna_conc=50,
temp_c=37, max_loop=30, temp_only=False):
''' Return a namedtuple of the dS, dH, dG, and Tm of any hairpin struct
present.
q6733 | calcHeterodimer | train | def calcHeterodimer(seq1, seq2, mv_conc=50, dv_conc=0, dntp_conc=0.8,
dna_conc=50, temp_c=37, max_loop=30, temp_only=False):
''' Return a tuple of the dS, dH, dG, and Tm of any predicted
q6734 | designPrimers | train | def designPrimers(p3_args, input_log=None, output_log=None, err_log=None):
''' Return the raw primer3_core output for the provided primer3 args.
Returns an ordered dict of the boulderIO-format primer3 output file
'''
sp = subprocess.Popen([pjoin(PRIMER3_HOME, 'primer3_core')],
stdout=subprocess.PIPE, stdin=subprocess.PIPE,
stderr=subprocess.STDOUT)
p3_args.setdefault('PRIMER_THERMODYNAMIC_PARAMETERS_PATH',
pjoin(PRIMER3_HOME, 'primer3_config/'))
in_str = _formatBoulderIO(p3_args)
if input_log:
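`_formatBoulderIO` serializes the args dict into Boulder-IO, the line-oriented `TAG=VALUE` format that `primer3_core` reads on stdin, with a record terminated by a line containing only `=`. A sketch of that serialization (key ordering and encoding details are assumptions):

```python
def format_boulder_io(args):
    """Serialize a dict to Boulder-IO: one TAG=VALUE line per key,
    followed by the record-separator line '='."""
    lines = ['%s=%s' % (tag, value) for tag, value in args.items()]
    return '\n'.join(lines) + '\n=\n'
```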
q6735 | makeExecutable | train | def makeExecutable(fp):
''' Adds the executable bit to the file at filepath `fp`
'''
mode = ((os.stat(fp).st_mode) | 0o555) & 0o7777
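The mode arithmetic in `makeExecutable` ORs in read+execute bits for user, group, and other (`0o555`) and then masks down to the permission bits (`0o7777`). Worked as a standalone function on a couple of common starting modes:

```python
def executable_mode(mode):
    """Add read+execute bits for everyone, keeping existing permission bits."""
    # e.g. 0o644 | 0o555 == 0o755; the 0o7777 mask drops file-type bits.
    return (mode | 0o555) & 0o7777
```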
q6736 | calcHairpin | train | def calcHairpin(seq, mv_conc=50.0, dv_conc=0.0, dntp_conc=0.8, dna_conc=50.0,
temp_c=37, max_loop=30):
''' Calculate the hairpin formation thermodynamics of a DNA sequence.
**Note that the maximum length of `seq` is 60 bp.** This is a cap suggested
by the Primer3 team as the longest reasonable sequence length for which
a two-state NN model produces reliable results (see primer3/src/libnano/thal.h:50).
Args:
seq (str): DNA sequence to analyze for hairpin formation
mv_conc (float/int, optional): Monovalent cation conc. (mM)
dv_conc (float/int, optional): Divalent cation conc. (mM)
dntp_conc (float/int, optional): dNTP conc. (mM)
dna_conc (float/int, optional): DNA conc. (nM)
q6737 | calcEndStability | train | def calcEndStability(seq1, seq2, mv_conc=50, dv_conc=0, dntp_conc=0.8,
dna_conc=50, temp_c=37, max_loop=30):
''' Calculate the 3' end stability of DNA sequence `seq1` against DNA
sequence `seq2`.
**Note that at least one of the two sequences must be <60 bp in length.**
This is a cap imposed by Primer3 as the longest reasonable sequence length
for which a two-state NN model produces reliable results (see
primer3/src/libnano/thal.h:50).
Args:
seq1 (str) : DNA sequence to analyze for 3' end
hybridization against the target
sequence
seq2 (str) : Target DNA sequence to analyze for
seq1 3' end hybridization
mv_conc (float/int,
q6738 | designPrimers | train | def designPrimers(seq_args, global_args=None, misprime_lib=None,
mishyb_lib=None, debug=False):
''' Run the Primer3 design process.
If the global args have been previously set (either by a previous
`designPrimers` call or by a `setGlobals` call), `designPrimers` may be
called with seqArgs alone (as a means of optimization).
Args:
seq_args (dict) : Primer3 sequence/design args as per
q6739 | GradeBook.unravel_sections | train | def unravel_sections(section_data):
"""Unravels section type dictionary into flat list of sections with
section type set as an attribute.
Args:
section_data(dict): Data return from py:method::get_sections
Returns:
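Unraveling is a standard flatten of a `{section_type: [sections]}` mapping, stamping each section dict with its type. A sketch consistent with the docstring (the attribute key name `sectionType` is an assumption):

```python
def unravel_sections(section_data):
    """Flatten {section_type: [sections]} into one list, recording the
    type on each section dict."""
    sections = []
    for section_type, type_sections in section_data.items():
        for section in type_sections:
            section['sectionType'] = section_type
            sections.append(section)
    return sections
```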
q6740 | GradeBook.unravel_staff | train | def unravel_staff(staff_data):
"""Unravels staff role dictionary into flat list of staff
members with ``role`` set as an attribute.
Args:
staff_data(dict): Data return from py:method::get_staff
Returns:
list: Flat list of staff members with ``role`` set to
role type (i.e. course_admin, instructor, TA, etc)
"""
staff_list = []
for role,
q6741 | GradeBook.get_gradebook_id | train | def get_gradebook_id(self, gbuuid):
"""Return gradebookid for a given gradebook uuid.
Args:
gbuuid (str): gradebook uuid, i.e. ``STELLAR:/project/gbngtest``
Raises:
PyLmodUnexpectedData: No gradebook id returned
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
str: value of gradebook id
"""
gradebook = self.get('gradebook', params={'uuid': gbuuid})
if 'data' not in gradebook:
q6742 | GradeBook.get_options | train | def get_options(self, gradebook_id):
"""Get options for gradebook.
Get options dictionary for a gradebook. Options include gradebook
attributes.
Args:
gradebook_id (str): unique identifier for gradebook, i.e. ``2314``
Returns:
An example return value is:
.. code-block:: python
{
u'data':
{
u'accessLevel': u'class',
u'archived': False,
u'calc_on_approved_only': False,
u'configured': None,
u'courseName': u'',
u'courseNumber': u'mitxdemosite',
u'deriveOverallGrades': False,
u'gradebookEwsEnabled': False,
u'gradebookId': 1293808,
u'gradebookName': u'Gradebook for mitxdemosite',
u'gradebookReadOnly': False,
u'gradebookVisibleToAdvisors': False,
u'graders_change_approved': False,
u'hideExcuseButtonInUI': False,
u'homeworkBetaEnabled': False,
u'membershipQualifier': u'/project/mitxdemosite',
u'membershipSource': u'stellar',
u'student_sees_actual_grades': True,
u'student_sees_category_info': True,
u'student_sees_comments': True,
u'student_sees_cumulative_score': True,
q6743 | GradeBook.get_assignments | train | def get_assignments(
self,
gradebook_id='',
simple=False,
max_points=True,
avg_stats=False,
grading_stats=False
):
"""Get assignments for a gradebook.
Return list of assignments for a given gradebook,
specified by a py:attribute::gradebook_id. You can control
if additional parameters are returned, but the response
time with py:attribute::avg_stats and py:attribute::grading_stats
enabled is significantly longer.
Args:
gradebook_id (str): unique identifier for gradebook, i.e. ``2314``
simple (bool): return just assignment names, default= ``False``
max_points (bool):
Max points is a property of the grading scheme for the
assignment rather than a property of the assignment itself,
default= ``True``
avg_stats (bool): return average grade, default= ``False``
grading_stats (bool):
return grading statistics, i.e. number of approved grades,
unapproved grades, etc., default= ``False``
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: list of assignment dictionaries
An example return value is:
.. code-block:: python
[
{
u'assignmentId': 2431240,
u'categoryId': 1293820,
u'description': u'',
u'dueDate': 1372392000000,
u'dueDateString': u'06-28-2013',
u'gradebookId': 1293808,
u'graderVisible': True,
u'gradingSchemeId': 2431243,
u'gradingSchemeType': u'NUMERIC',
u'isComposite': False,
u'isHomework': False,
u'maxPointsTotal': 10.0,
u'name': u'Homework 1',
u'shortName': u'HW1',
u'userDeleted': False,
u'weight': 1.0
},
{
u'assignmentId': 16708850,
u'categoryId': 1293820,
u'description': u'',
u'dueDate': 1383541200000,
u'dueDateString': u'11-04-2013',
q6744 | GradeBook.get_assignment_by_name | train | def get_assignment_by_name(self, assignment_name, assignments=None):
"""Get assignment by name.
Get an assignment by name. It works by retrieving all assignments
and returning the first assignment with a matching name. If the
optional parameter ``assignments`` is provided, it uses this
collection rather than retrieving all assignments from the service.
Args:
assignment_name (str): name of assignment
assignments (list): assignments to search, default: None
When ``assignments`` is unspecified, all assignments
are retrieved from the service.
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
tuple: tuple of assignment id and assignment dictionary
.. code-block:: python
(
16708850,
{
u'assignmentId': 16708850,
u'categoryId': 1293820,
u'description': u'',
u'dueDate': 1383541200000,
u'dueDateString': u'11-04-2013',
u'gradebookId': 1293808,
u'graderVisible': False,
u'gradingSchemeId': 16708851,
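`get_assignment_by_name` is a first-match scan over the retrieved assignment list, and the same pattern recurs in `get_section_by_name` and `get_student_by_email`. The generic shape of that lookup, as a sketch (the helper name is hypothetical):

```python
def first_match(items, key, value):
    """Return the first dict in `items` whose `key` equals `value`, else None."""
    # next() with a default avoids StopIteration when nothing matches.
    return next((item for item in items if item.get(key) == value), None)
```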
q6745 | GradeBook.create_assignment | train | def create_assignment( # pylint: disable=too-many-arguments
self,
name,
short_name,
weight,
max_points,
due_date_str,
gradebook_id='',
**kwargs
):
"""Create a new assignment.
Create a new assignment. By default, assignments are created
under the `Uncategorized` category.
Args:
name (str): descriptive assignment name,
i.e. ``new NUMERIC SIMPLE ASSIGNMENT``
short_name (str): short name of assignment, one word of
no more than 5 characters, i.e. ``SAnew``
weight (str): floating point value for weight, i.e. ``1.0``
max_points (str): floating point value for maximum point
total, i.e. ``100.0``
due_date_str (str): due date as string in ``mm-dd-yyyy``
format, i.e. ``08-21-2011``
gradebook_id (str): unique identifier for gradebook, i.e. ``2314``
kwargs (dict): dictionary containing additional parameters,
i.e. ``graderVisible``, ``totalAverage``, and ``categoryId``.
For example:
.. code-block:: python
{
u'graderVisible': True,
u'totalAverage': None
u'categoryId': 1007964,
}
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
dict: dictionary containing ``data``, ``status`` and ``message``
for example:
.. code-block:: python
{
u'data':
{
u'assignmentId': 18490492,
u'categoryId': 1293820,
u'description': u'',
q6746 | GradeBook.set_grade | train | def set_grade(
self,
assignment_id,
student_id,
grade_value,
gradebook_id='',
**kwargs
):
"""Set numerical grade for student and assignment.
Set a numerical grade for a student and assignment. Additional
options
for grade ``mode`` are: OVERALL_GRADE = ``1``, REGULAR_GRADE = ``2``
To set 'excused' as the grade, enter ``None`` for letter and
numeric grade values,
and pass ``x`` as the ``specialGradeValue``.
``ReturnAffectedValues`` flag determines whether or not to return
student cumulative points and
impacted assignment category grades (average and student grade).
Args:
assignment_id (str): numerical ID for assignment
student_id (str): numerical ID for student
grade_value (str): numerical grade value
gradebook_id (str): unique identifier for gradebook, i.e. ``2314``
kwargs (dict): dictionary of additional parameters
.. code-block:: python
{
u'letterGradeValue':None,
u'booleanGradeValue':None,
u'specialGradeValue':None,
u'mode':2,
u'isGradeApproved':False,
u'comment':None,
u'returnAffectedValues': True,
}
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
dict: dictionary containing response ``status`` and
q6747 | GradeBook.multi_grade | train | def multi_grade(self, grade_array, gradebook_id=''):
"""Set multiple grades for students.
Set multiple student grades for a gradebook. The grades are passed
as a list of dictionaries.
Each grade dictionary in ``grade_array`` must contain a
``studentId`` and a ``assignmentId``.
Options for grade mode are: OVERALL_GRADE = ``1``,
REGULAR_GRADE = ``2``
To set 'excused' as the grade, enter ``None`` for
``letterGradeValue`` and ``numericGradeValue``,
and pass ``x`` as the ``specialGradeValue``.
The ``ReturnAffectedValues`` flag determines whether to return
student cumulative points and impacted assignment category
grades (average and student grade)
.. code-block:: python
[
{
u'comment': None,
u'booleanGradeValue': None,
u'studentId': 1135,
u'assignmentId': 4522,
u'specialGradeValue': None,
u'returnAffectedValues': True,
u'letterGradeValue': None,
u'mode': 2,
u'numericGradeValue': 50,
u'isGradeApproved': False
},
{
u'comment': None,
u'booleanGradeValue': None,
u'studentId': 1135,
u'assignmentId': 4522,
u'specialGradeValue': u'x',
u'returnAffectedValues': True,
u'letterGradeValue': None,
u'mode': 2,
u'numericGradeValue': None,
q6748 | GradeBook.get_sections | train | def get_sections(self, gradebook_id='', simple=False):
"""Get the sections for a gradebook.
Return a dictionary of types of sections containing a list of that
type for a given gradebook. Specified by a gradebookid.
If simple=True, a list of dictionaries is provided for each
section regardless of type. The dictionary only contains one
key ``SectionName``.
Args:
gradebook_id (str): unique identifier for gradebook, i.e. ``2314``
simple (bool): return a list of section names only
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
dict: Dictionary of section types where each type has a
list of sections
An example return value is:
.. code-block:: python
{
u'recitation':
[
{
u'editable': False,
u'groupId': 1293925,
u'groupingScheme': u'Recitation',
u'members': None,
u'name': u'Unassigned',
u'shortName': u'DefaultGroupNoCollisionPlease1234',
u'staffs': None
},
{
u'editable': True,
u'groupId': 1327565,
q6749 | GradeBook.get_section_by_name | train | def get_section_by_name(self, section_name):
"""Get a section by its name.
Get a list of sections for a given gradebook,
specified by a gradebookid.
Args:
section_name (str): The section's name.
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
tuple: tuple of group id, and section dictionary
An example return value is:
.. code-block:: python
(
1327565,
{
u'editable': True,
u'groupId': 1327565,
q6750 | GradeBook.get_students | train | def get_students(
self,
gradebook_id='',
simple=False,
section_name='',
include_photo=False,
include_grade_info=False,
include_grade_history=False,
include_makeup_grades=False
):
"""Get students for a gradebook.
Get a list of students for a given gradebook,
specified by a gradebook id. Does not include grade data.
Args:
gradebook_id (str): unique identifier for gradebook, i.e. ``2314``
simple (bool):
if ``True``, just return dictionary with keys ``email``,
``name``, ``section``, default = ``False``
section_name (str): section name
include_photo (bool): include student photo, default= ``False``
include_grade_info (bool):
include student's grade info, default= ``False``
include_grade_history (bool):
include student's grade history, default= ``False``
include_makeup_grades (bool):
include student's makeup grades, default= ``False``
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: list of student dictionaries
.. code-block:: python
[{
u'accountEmail': u'stellar.test2@gmail.com',
u'displayName': u'Molly Parker',
u'photoUrl': None,
u'middleName': None,
u'section': u'Unassigned',
u'sectionId': 1293925,
u'editable': False,
u'overallGradeInformation': None,
u'studentId': 1145,
u'studentAssignmentInfo': None,
u'sortableName': u'Parker, Molly',
u'surname': u'Parker',
u'givenName': u'Molly',
u'nickName': u'Molly',
u'email': u'stellar.test2@gmail.com'
},]
"""
# These are parameters required for the remote API call, so
# there aren't too many arguments, or too many variables
# pylint: disable=too-many-arguments,too-many-locals
# Set params by arguments
params = dict(
includePhoto=json.dumps(include_photo),
includeGradeInfo=json.dumps(include_grade_info),
includeGradeHistory=json.dumps(include_grade_history),
| python | {
"resource": ""
} |
q6751 | GradeBook.get_student_by_email | train | def get_student_by_email(self, email, students=None):
"""Get a student based on an email address.
Calls ``self.get_students()`` to get list of all students,
if not passed as the ``students`` parameter.
Args:
email (str): student email
students (list): dictionary of students to search, default: None
When ``students`` is unspecified, all students in gradebook
are retrieved.
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
| python | {
"resource": ""
} |
q6752 | GradeBook.spreadsheet2gradebook | train | def spreadsheet2gradebook(
self,
csv_file,
email_field=None,
approve_grades=False,
use_max_points_column=False,
max_points_column=None,
normalize_column=None
):
"""Upload grade spreadsheet to gradebook.
Upload grades from CSV format spreadsheet file into the
Learning Modules gradebook. The spreadsheet must have a column
named ``External email`` which is used as the student's email
address (for looking up and matching studentId).
These columns are disregarded: ``ID``, ``Username``,
``Full Name``, ``edX email``, ``External email``,
as well as the strings passed in ``max_points_column``
and ``normalize_column``, if any.
All other columns are taken as assignments.
If ``email_field`` is specified, then that field name is taken as
the student's email.
.. code-block:: none
External email,AB Assignment 01,AB Assignment 02
jeannechiang@gmail.com,1.0,0.9
stellar.test2@gmail.com,0.2,0.4
stellar.test1@gmail.com,0.93,0.77
Args:
csv_file (str): filename of csv data, or readable file object
email_field (str): student's email
approve_grades (bool): Should grades be auto approved?
use_max_points_column (bool):
If ``True``, read the max points and normalize values
from the CSV and use the max points value in place of
the default if normalized is ``False``.
max_points_column (str): The name of the max_pts column. All
rows contain the same number, the max points for
the assignment.
normalize_column (str): The name of the normalize column which
indicates whether to use the max points value.
Raises:
PyLmodFailedAssignmentCreation: Failed to create assignment
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
tuple: tuple of dictionary containing | python | {
"resource": ""
} |
q6753 | GradeBook.get_staff | train | def get_staff(self, gradebook_id, simple=False):
"""Get staff list for gradebook.
Get staff list for the gradebook specified. Optionally, return
a less detailed list by specifying ``simple = True``.
If simple=True, return a list of dictionaries, one dictionary
for each member. The dictionary contains a member's ``email``,
``displayName``, and ``role``. Members with multiple roles will
appear in the list once for each role.
Args:
gradebook_id (str): unique identifier for gradebook, i.e. ``2314``
simple (bool): Return a staff list with less detail. Default
is ``False``.
Returns:
An example return value is:
.. code-block:: python
{
u'data': {
u'COURSE_ADMIN': [
{
u'accountEmail': u'benfranklin@mit.edu',
u'displayName': u'Benjamin Franklin',
u'editable': False,
u'email': u'benfranklin@mit.edu',
u'givenName': u'Benjamin',
u'middleName': None,
u'mitId': u'921344431',
u'nickName': u'Benjamin',
u'personId': 10710616,
u'sortableName': u'Franklin, Benjamin',
u'surname': u'Franklin',
u'year': None
},
],
u'COURSE_PROF': [
{
u'accountEmail': u'dduck@mit.edu',
u'displayName': u'Donald Duck',
u'editable': False,
u'email': u'dduck@mit.edu',
u'givenName': u'Donald',
u'middleName': None,
u'mitId': u'916144889',
u'nickName': u'Donald',
u'personId': 8117160,
u'sortableName': u'Duck, Donald',
u'surname': u'Duck',
u'year': None
},
| python | {
"resource": ""
} |
q6754 | Membership.get_group | train | def get_group(self, uuid=None):
"""Get group data based on uuid.
Args:
uuid (str): optional uuid. defaults to self.cuuid
Raises:
PyLmodUnexpectedData: No data was returned.
requests.RequestException: Exception connection error
Returns:
| python | {
"resource": ""
} |
q6755 | Membership.get_group_id | train | def get_group_id(self, uuid=None):
"""Get group id based on uuid.
Args:
uuid (str): optional uuid. defaults to self.cuuid
Raises:
PyLmodUnexpectedData: No group data was returned.
requests.RequestException: Exception connection error
Returns:
int: numeric group id
"""
group_data = self.get_group(uuid)
try:
return group_data['response']['docs'][0]['id']
| python | {
"resource": ""
} |
q6756 | Membership.get_membership | train | def get_membership(self, uuid=None):
"""Get membership data based on uuid.
Args:
uuid (str): optional uuid. defaults to self.cuuid
Raises:
PyLmodUnexpectedData: No data was returned.
| python | {
"resource": ""
} |
q6757 | Membership.email_has_role | train | def email_has_role(self, email, role_name, uuid=None):
"""Determine if an email is associated with a role.
Args:
email (str): user email
role_name (str): user role
uuid (str): optional uuid. defaults to self.cuuid
Raises:
PyLmodUnexpectedData: Unexpected data was returned.
requests.RequestException: Exception connection error
Returns:
bool: True if email has role_name, False otherwise
| python | {
"resource": ""
} |
q6758 | Membership.get_course_id | train | def get_course_id(self, course_uuid):
"""Get course id based on uuid.
Args:
course_uuid (str): course uuid, i.e. /project/mitxdemosite
Raises:
PyLmodUnexpectedData: No course data was returned.
requests.RequestException: Exception connection error
Returns:
int: numeric course id
"""
course_data = self.get(
'courseguide/course?uuid={uuid}'.format(
uuid=course_uuid or self.course_id
),
params=None
)
try:
return course_data['response']['docs'][0]['id']
except KeyError:
failure_message = ('KeyError in get_course_id - '
'got {0}'.format(course_data))
| python | {
"resource": ""
} |
q6759 | Membership.get_course_guide_staff | train | def get_course_guide_staff(self, course_id=''):
"""Get the staff roster for a course.
Get a list of staff members for a given course,
specified by a course id.
Args:
course_id (str): unique identifier for course, i.e. ``2314``
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: list of dictionaries containing staff data
An example return value is:
.. code-block:: python
[
{
u'displayName': u'Huey Duck',
u'role': u'TA',
| python | {
"resource": ""
} |
q6760 | Base._data_to_json | train | def _data_to_json(data):
"""Convert to json if it isn't already a string.
Args:
| python | {
"resource": ""
} |
q6761 | Base._url_format | train | def _url_format(self, service):
"""Generate URL from urlbase and service.
Args:
service (str): The endpoint service to use, i.e. gradebook
Returns:
str: URL to where the request should be made
| python | {
"resource": ""
} |
q6762 | Base.rest_action | train | def rest_action(self, func, url, **kwargs):
"""Routine to do low-level REST operation, with retry.
Args:
func (callable): API function to call
url (str): service URL endpoint
kwargs (dict): additional parameters
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: the json-encoded content of the response
"""
try:
response = func(url, timeout=self.TIMEOUT, **kwargs)
| python | {
"resource": ""
} |
q6763 | Base.get | train | def get(self, service, params=None):
"""Generic GET operation for retrieving data from Learning Modules API.
.. code-block:: python
gbk.get('students/{gradebookId}', params=params, gradebookId=gbid)
Args:
service (str): The endpoint service to use, i.e. gradebook
params (dict): additional parameters to add to the call
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
| python | {
"resource": ""
} |
q6764 | Base.post | train | def post(self, service, data):
"""Generic POST operation for sending data to Learning Modules API.
Data should be a JSON string or a dict. If it is not a string,
it is turned into a JSON string for the POST body.
Args:
service (str): The endpoint service to use, i.e. gradebook
data (json or dict): the data payload
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
list: the json-encoded content of | python | {
"resource": ""
} |
q6765 | Base.delete | train | def delete(self, service):
"""Generic DELETE operation for Learning Modules API.
Args:
service (str): The endpoint service to use, i.e. gradebook
Raises:
requests.RequestException: Exception connection error
ValueError: Unable to decode response content
Returns:
| python | {
"resource": ""
} |
q6766 | raw_connection_from | train | def raw_connection_from(engine_or_conn):
"""Extract a raw_connection and determine if it should be automatically closed.
Only connections opened by this package will be closed | python | {
"resource": ""
} |
q6767 | makedirs | train | def makedirs(name, mode=0o777, exist_ok=False):
"""cheapo replacement for py3 makedirs with support for exist_ok
"""
if os.path.exists(name):
if not exist_ok:
| python | {
"resource": ""
} |
q6768 | fetch_in_thread | train | def fetch_in_thread(sr, nsa):
"""fetch a sequence in a thread
"""
def fetch_seq(q, nsa):
pid, ppid = os.getpid(), os.getppid()
q.put((pid, ppid, sr[nsa])) | python | {
"resource": ""
} |
q6769 | Application.run | train | def run(self):
"""Run install process."""
try:
self.linux.verify_system_status()
except InstallSkipError:
Log.info('Install skipped.')
return
work_dir = tempfile.mkdtemp(suffix='-rpm-py-installer')
Log.info("Created working directory '{0}'".format(work_dir))
with Cmd.pushd(work_dir):
self.rpm_py.download_and_install()
| python | {
"resource": ""
} |
q6770 | RpmPy.download_and_install | train | def download_and_install(self):
"""Download and install RPM Python binding."""
if self.is_installed_from_bin:
try:
self.installer.install_from_rpm_py_package()
return
except RpmPyPackageNotFoundError as e:
Log.warn('RPM Py Package not found. reason: {0}'.format(e))
# Pass to try to install from the source.
pass
# Download and install from the source.
top_dir_name = self.downloader.download_and_expand()
rpm_py_dir | python | {
"resource": ""
} |
q6771 | RpmPyVersion.git_branch | train | def git_branch(self):
"""Git branch name."""
info = self.info
| python | {
"resource": ""
} |
q6772 | SetupPy.add_patchs_to_build_without_pkg_config | train | def add_patchs_to_build_without_pkg_config(self, lib_dir, include_dir):
"""Add patches to remove pkg-config command and rpm.pc part.
Replace with given library_path: lib_dir and include_path: include_dir
without rpm.pc file.
"""
additional_patches = [
{
'src': r"pkgconfig\('--libs-only-L'\)",
'dest': "['{0}']".format(lib_dir),
},
# Considering -libs-only-l and -libs-only-L
# https://github.com/rpm-software-management/rpm/pull/327
{
| python | {
"resource": ""
} |
q6773 | SetupPy.apply_and_save | train | def apply_and_save(self):
"""Apply replaced words and patches, and save setup.py file."""
patches = self.patches
content = None
with open(self.IN_PATH) as f_in:
# As setup.py.in file size is 2.4 KByte.
# it's fine to read entire content.
content = f_in.read()
# Replace words.
for key in self.replaced_word_dict:
content = content.replace(key, self.replaced_word_dict[key])
# Apply patches.
out_patches = []
for patch in patches:
pattern = re.compile(patch['src'], re.MULTILINE)
(content, subs_num) = re.subn(pattern, patch['dest'],
content)
if subs_num > 0:
patch['applied'] = True
| python | {
"resource": ""
} |
q6774 | Downloader.download_and_expand | train | def download_and_expand(self):
"""Download and expand RPM Python binding."""
top_dir_name = None
if self.git_branch:
# Download a source by git clone.
top_dir_name = self._download_and_expand_by_git()
else:
# Download a source from the archive URL.
# Downloading the compressed archive is better than "git clone",
# because it is faster.
# If download failed due to URL not found, try "git clone".
| python | {
"resource": ""
} |
q6775 | Installer._make_lib_file_symbolic_links | train | def _make_lib_file_symbolic_links(self):
"""Make symbolic links for lib files.
Make symbolic links from system library files or downloaded lib files
to downloaded source library files.
For example, case: Fedora x86_64
Make symbolic links
from
a. /usr/lib64/librpmio.so* (one of them)
b. /usr/lib64/librpm.so* (one of them)
c. If rpm-build-libs package is installed,
/usr/lib64/librpmbuild.so* (one of them)
otherwise, downloaded and extracted rpm-build-libs.
./usr/lib64/librpmbuild.so* (one of them)
d. If rpm-build-libs package is installed,
/usr/lib64/librpmsign.so* (one of them)
otherwise, downloaded and extracted rpm-build-libs.
./usr/lib64/librpmsign.so* (one of them)
to
a. rpm/rpmio/.libs/librpmio.so
b. rpm/lib/.libs/librpm.so
c. rpm/build/.libs/librpmbuild.so
d. rpm/sign/.libs/librpmsign.so
.
This is a status after running "make" on actual rpm build process.
"""
so_file_dict = {
'rpmio': {
'sym_src_dir': self.rpm.lib_dir,
'sym_dst_dir': 'rpmio/.libs',
'require': True,
},
'rpm': {
'sym_src_dir': self.rpm.lib_dir,
'sym_dst_dir': 'lib/.libs',
'require': True,
},
'rpmbuild': {
'sym_src_dir': self.rpm.lib_dir,
'sym_dst_dir': 'build/.libs',
'require': True,
},
'rpmsign': {
'sym_src_dir': self.rpm.lib_dir,
'sym_dst_dir': 'sign/.libs',
},
}
| python | {
"resource": ""
} |
q6776 | Installer._copy_each_include_files_to_include_dir | train | def _copy_each_include_files_to_include_dir(self):
"""Copy include header files for each directory to include directory.
Copy include header files
from
rpm/
rpmio/*.h
lib/*.h
build/*.h
sign/*.h
to
rpm/
include/
rpm/*.h
.
This is a status after running "make" on actual rpm build process.
"""
src_header_dirs = [
'rpmio',
'lib',
'build',
'sign',
]
with Cmd.pushd('..'):
src_include_dir = os.path.abspath('./include')
for header_dir in src_header_dirs:
if not os.path.isdir(header_dir):
message_format = "Skip not existing header directory '{0}'"
Log.debug(message_format.format(header_dir))
continue
header_files = Cmd.find(header_dir, '*.h')
for header_file in header_files:
pattern = '^{0}/'.format(header_dir)
| python | {
"resource": ""
} |
q6777 | Installer._rpm_py_has_popt_devel_dep | train | def _rpm_py_has_popt_devel_dep(self):
"""Check if the RPM Python binding has a depndency to popt-devel.
Search include header files in the source code to check it.
"""
found = False
with open('../include/rpm/rpmlib.h') as f_in:
| python | {
"resource": ""
} |
q6778 | FedoraInstaller.install_from_rpm_py_package | train | def install_from_rpm_py_package(self):
"""Run install from RPM Python binding RPM package."""
self._download_and_extract_rpm_py_package()
# Find ./usr/lib64/pythonN.N/site-packages/rpm directory.
# A binary built with the same Python version as the running
# Python is the target for a safe installation.
if self.rpm.has_set_up_py_in():
# If RPM has setup.py.in, this strict check is okay.
# Because we can still install from the source.
py_dir_name = 'python{0}.{1}'.format(
sys.version_info[0], sys.version_info[1])
else:
# If RPM does not have setup.py.in such as CentOS6,
# Only way to install is by different Python's RPM package.
py_dir_name = '*'
python_lib_dir_pattern = os.path.join(
'usr', '*', py_dir_name, 'site-packages')
rpm_dir_pattern = os.path.join(python_lib_dir_pattern, 'rpm')
downloaded_rpm_dirs = glob.glob(rpm_dir_pattern)
if not downloaded_rpm_dirs:
message = 'Directory with a pattern: {0} not found.'.format(
rpm_dir_pattern)
raise RpmPyPackageNotFoundError(message)
src_rpm_dir = downloaded_rpm_dirs[0]
# Remove existing rpm directories from the possible install locations.
for rpm_dir in self.python.python_lib_rpm_dirs:
if os.path.isdir(rpm_dir):
Log.debug("Remove existing rpm directory {0}".format(rpm_dir))
shutil.rmtree(rpm_dir)
dst_rpm_dir | python | {
"resource": ""
} |
q6779 | Linux.get_instance | train | def get_instance(cls, python, rpm_path, **kwargs):
"""Get OS object."""
linux = None
if Cmd.which('apt-get'):
linux = DebianLinux(python, rpm_path, **kwargs)
| python | {
"resource": ""
} |
q6780 | Python.python_lib_rpm_dirs | train | def python_lib_rpm_dirs(self):
"""Both arch and non-arch site-packages directories."""
libs = [self.python_lib_arch_dir, self.python_lib_non_arch_dir]
| python | {
"resource": ""
} |
q6781 | Rpm.version | train | def version(self):
"""RPM vesion string."""
stdout = Cmd.sh_e_out('{0} --version'.format(self.rpm_path))
| python | {
"resource": ""
} |
q6782 | Rpm.is_system_rpm | train | def is_system_rpm(self):
"""Check if the RPM is system RPM."""
sys_rpm_paths = [
'/usr/bin/rpm',
# On CentOS6, system RPM is installed in this directory.
'/bin/rpm',
]
matched = False | python | {
"resource": ""
} |
q6783 | Rpm.is_package_installed | train | def is_package_installed(self, package_name):
"""Check if the RPM package is installed."""
if not package_name:
raise ValueError('package_name required.')
installed = True
try:
Cmd.sh_e('{0} --query {1} | python | {
"resource": ""
} |
q6784 | FedoraRpm.is_downloadable | train | def is_downloadable(self):
"""Return if rpm is downloadable by the package command.
Check if dnf or yum plugin package exists.
"""
is_plugin_available = False
if self.is_dnf:
is_plugin_available = self.is_package_installed(
'dnf-plugins-core')
else:
""" yum environment.
Make sure
# yum -y --downloadonly --downloaddir=. install package_name
is only available for | python | {
"resource": ""
} |
q6785 | FedoraRpm.download | train | def download(self, package_name):
"""Download given package."""
if not package_name:
raise ValueError('package_name required.')
if self.is_dnf:
cmd = 'dnf download {0}.{1}'.format(package_name, self.arch)
else:
cmd = 'yumdownloader {0}.{1}'.format(package_name, self.arch)
try:
Cmd.sh_e(cmd, stdout=subprocess.PIPE)
except CmdError as e:
for out in (e.stdout, e.stderr):
for line in out.split('\n'):
if re.match(r'^No package | python | {
"resource": ""
} |
q6786 | FedoraRpm.extract | train | def extract(self, package_name):
"""Extract given package."""
for cmd in ['rpm2cpio', 'cpio']:
if not Cmd.which(cmd):
message = '{0} command not found. Install {0}.'.format(cmd)
raise InstallError(message)
pattern = '{0}*{1}.rpm'.format(package_name, self.arch)
| python | {
"resource": ""
} |
q6787 | Cmd.sh_e | train | def sh_e(cls, cmd, **kwargs):
"""Run the command. It behaves like "sh -e".
It raises InstallError if the command failed.
"""
Log.debug('CMD: {0}'.format(cmd))
cmd_kwargs = {
'shell': True,
}
cmd_kwargs.update(kwargs)
env = os.environ.copy()
# Better to parse English output
env['LC_ALL'] = 'en_US.utf-8'
if 'env' in kwargs:
env.update(kwargs['env'])
cmd_kwargs['env'] = env
# Capture stderr to show it on error message.
cmd_kwargs['stderr'] = subprocess.PIPE
proc = None
try:
proc = subprocess.Popen(cmd, **cmd_kwargs)
stdout, stderr = proc.communicate()
returncode = proc.returncode
message_format = (
'CMD Return Code: [{0}], Stdout: [{1}], Stderr: [{2}]'
)
Log.debug(message_format.format(returncode, stdout, stderr))
if stdout is not None:
stdout = stdout.decode('utf-8')
| python | {
"resource": ""
} |
q6788 | Cmd.sh_e_out | train | def sh_e_out(cls, cmd, **kwargs):
"""Run the command. and returns the stdout."""
cmd_kwargs = | python | {
"resource": ""
} |
q6789 | Cmd.cd | train | def cd(cls, directory):
"""Change directory. It behaves like "cd directory"."""
| python | {
"resource": ""
} |
q6790 | Cmd.pushd | train | def pushd(cls, new_dir):
"""Change directory, and back to previous directory.
It behaves like "pushd directory; something; popd".
"""
previous_dir = os.getcwd()
try:
new_ab_dir = None
| python | {
"resource": ""
} |
q6791 | Cmd.which | train | def which(cls, cmd):
"""Return an absolute path of the command.
It behaves like "which command".
"""
| python | {
"resource": ""
} |
q6792 | Cmd.curl_remote_name | train | def curl_remote_name(cls, file_url):
"""Download file_url, and save as a file name of the URL.
It behaves like "curl -O or --remote-name".
It raises HTTPError if the file_url not found.
"""
tar_gz_file_name = file_url.split('/')[-1]
if sys.version_info >= (3, 2):
from urllib.request import urlopen
from urllib.error import HTTPError
else:
from | python | {
"resource": ""
} |
q6793 | Cmd.tar_extract | train | def tar_extract(cls, tar_comp_file_path):
"""Extract tar.gz or tar bz2 file.
It behaves like
- tar xzf tar_gz_file_path
- tar xjf tar_bz2_file_path
It raises tarfile.ReadError if the file is broken.
"""
| python | {
"resource": ""
} |
q6794 | Cmd.find | train | def find(cls, searched_dir, pattern):
"""Find matched files.
It does not include symbolic links in the result.
"""
Log.debug('find {0} with pattern: {1}'.format(searched_dir, pattern))
matched_files = []
for root_dir, dir_names, file_names in os.walk(searched_dir,
followlinks=False):
for file_name in | python | {
"resource": ""
} |
q6795 | Utils.version_str2tuple | train | def version_str2tuple(cls, version_str):
"""Version info.
tuple object. ex. ('4', '14', '0', 'rc1')
"""
if not isinstance(version_str, str):
raise ValueError('version_str invalid instance.')
version_info_list = re.findall(r'[0-9a-zA-Z]+', version_str)
| python | {
"resource": ""
} |
q6796 | install_rpm_py | train | def install_rpm_py():
"""Install RPM Python binding."""
python_path = sys.executable
cmd = '{0} install.py'.format(python_path)
exit_status = os.system(cmd) | python | {
"resource": ""
} |
q6797 | _get_bgzip_version | train | def _get_bgzip_version(exe):
"""return bgzip version as string"""
p = subprocess.Popen([exe, "-h"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)
output = p.communicate()
version_line = output[0].splitlines()[1]
| python | {
"resource": ""
} |
q6798 | _find_bgzip | train | def _find_bgzip():
"""return path to bgzip if found and meets version requirements, else exception"""
missing_file_exception = OSError if six.PY2 else FileNotFoundError
min_bgzip_version = ".".join(map(str, min_bgzip_version_info))
exe = os.environ.get("SEQREPO_BGZIP_PATH", which("bgzip") or "/usr/bin/bgzip")
try:
bgzip_version = _get_bgzip_version(exe)
except AttributeError:
raise RuntimeError("Didn't find version string in bgzip executable ({exe})".format(exe=exe))
except missing_file_exception:
raise RuntimeError("{exe} doesn't exist; you need to install htslib (See https://github.com/biocommons/biocommons.seqrepo#requirements)".format(exe=exe))
except Exception:
| python | {
"resource": ""
} |
q6799 | SeqRepo.translate_alias | train | def translate_alias(self, alias, namespace=None, target_namespaces=None, translate_ncbi_namespace=None):
"""given an alias and optional namespace, return a list of all other
aliases for the same sequence
"""
if translate_ncbi_namespace is None:
translate_ncbi_namespace = self.translate_ncbi_namespace
seq_id = self._get_unique_seqid(alias=alias, namespace=namespace)
aliases = self.aliases.fetch_aliases(seq_id=seq_id,
| python | {
"resource": ""
} |