id (int32, 0–252k) | repo (string, 7–55) | path (string, 4–127) | func_name (string, 1–88) | original_string (string, 75–19.8k) | language (1 value) | code (string, 51–19.8k) | code_tokens (list) | docstring (string, 3–17.3k) | docstring_tokens (list) | sha (string, 40) | url (string, 87–242) |
|---|---|---|---|---|---|---|---|---|---|---|---|
224,900 | graphistry/pygraphistry | graphistry/pygraphistry.py | PyGraphistry.register | def register(key=None, server=None, protocol=None, api=None, certificate_validation=None, bolt=None):
"""API key registration and server selection
Changing the key affects all derived Plotter instances.
:param key: API key.
:type key: String.
:param server: URL of the visualization server.
:type server: Optional string.
:param protocol: Protocol used to contact visualization server
:type protocol: Optional string.
:returns: None.
:rtype: None.
**Example: Standard**
::
import graphistry
graphistry.register(key="my api key")
**Example: Developer**
::
import graphistry
graphistry.register('my api key', server='staging', protocol='https')
**Example: Through environment variable**
::
export GRAPHISTRY_API_KEY='my api key'
::
import graphistry
graphistry.register()
"""
PyGraphistry.api_key(key)
PyGraphistry.server(server)
PyGraphistry.api_version(api)
PyGraphistry.protocol(protocol)
PyGraphistry.certificate_validation(certificate_validation)
PyGraphistry.authenticate()
PyGraphistry.set_bolt_driver(bolt) | python | def register(key=None, server=None, protocol=None, api=None, certificate_validation=None, bolt=None):
PyGraphistry.api_key(key)
PyGraphistry.server(server)
PyGraphistry.api_version(api)
PyGraphistry.protocol(protocol)
PyGraphistry.certificate_validation(certificate_validation)
PyGraphistry.authenticate()
PyGraphistry.set_bolt_driver(bolt) | [
"def",
"register",
"(",
"key",
"=",
"None",
",",
"server",
"=",
"None",
",",
"protocol",
"=",
"None",
",",
"api",
"=",
"None",
",",
"certificate_validation",
"=",
"None",
",",
"bolt",
"=",
"None",
")",
":",
"PyGraphistry",
".",
"api_key",
"(",
"key",
... | API key registration and server selection
Changing the key affects all derived Plotter instances.
:param key: API key.
:type key: String.
:param server: URL of the visualization server.
:type server: Optional string.
:param protocol: Protocol used to contact visualization server
:type protocol: Optional string.
:returns: None.
:rtype: None.
**Example: Standard**
::
import graphistry
graphistry.register(key="my api key")
**Example: Developer**
::
import graphistry
graphistry.register('my api key', server='staging', protocol='https')
**Example: Through environment variable**
::
export GRAPHISTRY_API_KEY='my api key'
::
import graphistry
graphistry.register() | [
"API",
"key",
"registration",
"and",
"server",
"selection"
] | 3dfc50e60232c6f5fedd6e5fa9d3048b606944b8 | https://github.com/graphistry/pygraphistry/blob/3dfc50e60232c6f5fedd6e5fa9d3048b606944b8/graphistry/pygraphistry.py#L171-L212 |
224,901 | graphistry/pygraphistry | graphistry/pygraphistry.py | PyGraphistry.hypergraph | def hypergraph(raw_events, entity_types=None, opts={}, drop_na=True, drop_edge_attrs=False, verbose=True, direct=False):
"""Transform a dataframe into a hypergraph.
:param Dataframe raw_events: Dataframe to transform
:param List entity_types: Optional list of columns (strings) to turn into nodes, None signifies all
:param Dict opts: See below
:param bool drop_edge_attrs: Whether to drop each row's attributes from its edges; defaults to False (attributes are kept)
:param bool verbose: Whether to print size information
:param bool direct: Omit hypernode and instead strongly connect nodes in an event
Create a graph out of the dataframe, and return the graph components as dataframes,
and the renderable result Plotter. It reveals relationships between the rows and between column values.
This transform is useful for lists of events, samples, relationships, and other structured high-dimensional data.
The transform creates a node for every row, and turns a row's column entries into node attributes.
If direct=False (default), every unique value within a column is also turned into a node.
Edges are added to connect a row's nodes to each of its column nodes, or if direct=True, to one another.
Nodes are given the attribute 'type' corresponding to the originating column name, or in the case of a row, 'EventID'.
Consider a list of events. Each row represents a distinct event, and each column some metadata about an event.
If multiple events have common metadata, they will be transitively connected through those metadata values.
The layout algorithm will try to cluster the events together.
Conversely, if an event has unique metadata, the unique metadata will turn into nodes that only have connections to the event node, and the clustering algorithm will cause them to form a ring around the event node.
Best practice is to set EVENTID to a row's unique ID,
SKIP to all non-categorical columns (or entity_types to all categorical columns),
and CATEGORY to group columns with the same kinds of values.
The optional ``opts={...}`` configuration options are:
* 'EVENTID': Column name to inspect for a row ID. By default, uses the row index.
* 'CATEGORIES': Dictionary mapping a category name to inhabiting columns. E.g., {'IP': ['srcAddress', 'dstAddress']}. If the same IP appears in both columns, this makes the transform generate one node for it, instead of one for each column.
* 'DELIM': When creating node IDs, defines the separator used between the column name and node value
* 'SKIP': List of column names to not turn into nodes. For example, dates and numbers are often skipped.
* 'EDGES': For direct=True, instead of making all edges, pick column pairs. E.g., {'a': ['b', 'd'], 'd': ['d']} creates edges between columns a->b and a->d, and self-edges d->d.
:returns: {'entities': DF, 'events': DF, 'edges': DF, 'nodes': DF, 'graph': Plotter}
:rtype: Dictionary
**Example**
::
import graphistry
h = graphistry.hypergraph(my_df)
g = h['graph'].plot()
"""
from . import hyper
return hyper.Hypergraph().hypergraph(PyGraphistry, raw_events, entity_types, opts, drop_na, drop_edge_attrs, verbose, direct) | python | def hypergraph(raw_events, entity_types=None, opts={}, drop_na=True, drop_edge_attrs=False, verbose=True, direct=False):
from . import hyper
return hyper.Hypergraph().hypergraph(PyGraphistry, raw_events, entity_types, opts, drop_na, drop_edge_attrs, verbose, direct) | [
"def",
"hypergraph",
"(",
"raw_events",
",",
"entity_types",
"=",
"None",
",",
"opts",
"=",
"{",
"}",
",",
"drop_na",
"=",
"True",
",",
"drop_edge_attrs",
"=",
"False",
",",
"verbose",
"=",
"True",
",",
"direct",
"=",
"False",
")",
":",
"from",
".",
... | Transform a dataframe into a hypergraph.
:param Dataframe raw_events: Dataframe to transform
:param List entity_types: Optional list of columns (strings) to turn into nodes, None signifies all
:param Dict opts: See below
:param bool drop_edge_attrs: Whether to drop each row's attributes from its edges; defaults to False (attributes are kept)
:param bool verbose: Whether to print size information
:param bool direct: Omit hypernode and instead strongly connect nodes in an event
Create a graph out of the dataframe, and return the graph components as dataframes,
and the renderable result Plotter. It reveals relationships between the rows and between column values.
This transform is useful for lists of events, samples, relationships, and other structured high-dimensional data.
The transform creates a node for every row, and turns a row's column entries into node attributes.
If direct=False (default), every unique value within a column is also turned into a node.
Edges are added to connect a row's nodes to each of its column nodes, or if direct=True, to one another.
Nodes are given the attribute 'type' corresponding to the originating column name, or in the case of a row, 'EventID'.
Consider a list of events. Each row represents a distinct event, and each column some metadata about an event.
If multiple events have common metadata, they will be transitively connected through those metadata values.
The layout algorithm will try to cluster the events together.
Conversely, if an event has unique metadata, the unique metadata will turn into nodes that only have connections to the event node, and the clustering algorithm will cause them to form a ring around the event node.
Best practice is to set EVENTID to a row's unique ID,
SKIP to all non-categorical columns (or entity_types to all categorical columns),
and CATEGORY to group columns with the same kinds of values.
The optional ``opts={...}`` configuration options are:
* 'EVENTID': Column name to inspect for a row ID. By default, uses the row index.
* 'CATEGORIES': Dictionary mapping a category name to inhabiting columns. E.g., {'IP': ['srcAddress', 'dstAddress']}. If the same IP appears in both columns, this makes the transform generate one node for it, instead of one for each column.
* 'DELIM': When creating node IDs, defines the separator used between the column name and node value
* 'SKIP': List of column names to not turn into nodes. For example, dates and numbers are often skipped.
* 'EDGES': For direct=True, instead of making all edges, pick column pairs. E.g., {'a': ['b', 'd'], 'd': ['d']} creates edges between columns a->b and a->d, and self-edges d->d.
:returns: {'entities': DF, 'events': DF, 'edges': DF, 'nodes': DF, 'graph': Plotter}
:rtype: Dictionary
**Example**
::
import graphistry
h = graphistry.hypergraph(my_df)
g = h['graph'].plot() | [
"Transform",
"a",
"dataframe",
"into",
"a",
"hypergraph",
"."
] | 3dfc50e60232c6f5fedd6e5fa9d3048b606944b8 | https://github.com/graphistry/pygraphistry/blob/3dfc50e60232c6f5fedd6e5fa9d3048b606944b8/graphistry/pygraphistry.py#L216-L268 |
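The `opts` bullet list above can be made concrete with a small sketch. All column names here (`alert_id`, `srcAddress`, etc.) are hypothetical; the final call is commented out because it requires graphistry and a pandas DataFrame:

```python
# Hypothetical opts configuration for PyGraphistry.hypergraph.
opts = {
    'EVENTID': 'alert_id',                               # use this column as the row ID
    'CATEGORIES': {'IP': ['srcAddress', 'dstAddress']},  # merge both columns into one 'IP' node type
    'DELIM': '::',                                       # node IDs look like 'IP::10.0.0.1'
    'SKIP': ['timestamp', 'byte_count'],                 # non-categorical columns stay attributes only
    'EDGES': {'srcAddress': ['dstAddress']},             # for direct=True: only srcAddress->dstAddress edges
}
# h = graphistry.hypergraph(df, opts=opts, direct=True)  # needs graphistry + a DataFrame `df`
# g = h['graph'].plot()
```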
224,902 | graphistry/pygraphistry | graphistry/pygraphistry.py | PyGraphistry.bind | def bind(node=None, source=None, destination=None,
edge_title=None, edge_label=None, edge_color=None, edge_weight=None,
point_title=None, point_label=None, point_color=None, point_size=None):
"""Create a base plotter.
Typically called at start of a program. For parameters, see ``plotter.bind()`` .
:returns: Plotter.
:rtype: Plotter.
**Example**
::
import graphistry
g = graphistry.bind()
"""
from . import plotter
return plotter.Plotter().bind(source, destination, node, \
edge_title, edge_label, edge_color, edge_weight, \
point_title, point_label, point_color, point_size) | python | def bind(node=None, source=None, destination=None,
edge_title=None, edge_label=None, edge_color=None, edge_weight=None,
point_title=None, point_label=None, point_color=None, point_size=None):
from . import plotter
return plotter.Plotter().bind(source, destination, node, \
edge_title, edge_label, edge_color, edge_weight, \
point_title, point_label, point_color, point_size) | [
"def",
"bind",
"(",
"node",
"=",
"None",
",",
"source",
"=",
"None",
",",
"destination",
"=",
"None",
",",
"edge_title",
"=",
"None",
",",
"edge_label",
"=",
"None",
",",
"edge_color",
"=",
"None",
",",
"edge_weight",
"=",
"None",
",",
"point_title",
"... | Create a base plotter.
Typically called at start of a program. For parameters, see ``plotter.bind()`` .
:returns: Plotter.
:rtype: Plotter.
**Example**
::
import graphistry
g = graphistry.bind() | [
"Create",
"a",
"base",
"plotter",
"."
] | 3dfc50e60232c6f5fedd6e5fa9d3048b606944b8 | https://github.com/graphistry/pygraphistry/blob/3dfc50e60232c6f5fedd6e5fa9d3048b606944b8/graphistry/pygraphistry.py#L320-L344 |
224,903 | althonos/InstaLooter | instalooter/_uadetect.py | UserAgentRequestHandler.do_HEAD | def do_HEAD(self):
"""Serve a HEAD request."""
self.queue.put(self.headers.get("User-Agent"))
self.send_response(six.moves.BaseHTTPServer.HTTPStatus.OK)
self.send_header("Location", self.path)
self.end_headers() | python | def do_HEAD(self):
self.queue.put(self.headers.get("User-Agent"))
self.send_response(six.moves.BaseHTTPServer.HTTPStatus.OK)
self.send_header("Location", self.path)
self.end_headers() | [
"def",
"do_HEAD",
"(",
"self",
")",
":",
"self",
".",
"queue",
".",
"put",
"(",
"self",
".",
"headers",
".",
"get",
"(",
"\"User-Agent\"",
")",
")",
"self",
".",
"send_response",
"(",
"six",
".",
"moves",
".",
"BaseHTTPServer",
".",
"HTTPStatus",
".",
... | Serve a HEAD request. | [
"Serve",
"a",
"HEAD",
"request",
"."
] | e894d8da368dd57423dd0fda4ac479ea2ea0c3c1 | https://github.com/althonos/InstaLooter/blob/e894d8da368dd57423dd0fda4ac479ea2ea0c3c1/instalooter/_uadetect.py#L23-L28 |
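The original serves HEAD requests through `six.moves` for Python 2/3 compatibility; on Python 3 the same pattern needs only the stdlib. A runnable sketch of the capture-the-User-Agent idea (names and the single-request helper are illustrative):

```python
import queue
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

ua_queue = queue.Queue()

class UARequestHandler(BaseHTTPRequestHandler):
    def do_HEAD(self):
        # Same pattern as the original: record the client's User-Agent,
        # then answer 200 with a Location header echoing the path.
        ua_queue.put(self.headers.get("User-Agent"))
        self.send_response(200)
        self.send_header("Location", self.path)
        self.end_headers()

    def log_message(self, format, *args):
        pass  # keep stderr quiet during the demo

def serve_one_request():
    # Bind to an OS-chosen free port and serve exactly one request.
    server = HTTPServer(("127.0.0.1", 0), UARequestHandler)
    thread = threading.Thread(target=server.handle_request)
    thread.start()
    return server, thread
```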
224,904 | althonos/InstaLooter | instalooter/looters.py | ProfileLooter.pages | def pages(self):
# type: () -> ProfileIterator
"""Obtain an iterator over Instagram post pages.
Returns:
PageIterator: an iterator over the instagram post pages.
Raises:
ValueError: when the requested user does not exist.
RuntimeError: when the user is a private account
and there is no logged user (or the logged user
does not follow that account).
"""
if self._owner_id is None:
it = ProfileIterator.from_username(self._username, self.session)
self._owner_id = it.owner_id
return it
return ProfileIterator(self._owner_id, self.session, self.rhx) | python | def pages(self):
# type: () -> ProfileIterator
if self._owner_id is None:
it = ProfileIterator.from_username(self._username, self.session)
self._owner_id = it.owner_id
return it
return ProfileIterator(self._owner_id, self.session, self.rhx) | [
"def",
"pages",
"(",
"self",
")",
":",
"# type: () -> ProfileIterator",
"if",
"self",
".",
"_owner_id",
"is",
"None",
":",
"it",
"=",
"ProfileIterator",
".",
"from_username",
"(",
"self",
".",
"_username",
",",
"self",
".",
"session",
")",
"self",
".",
"_o... | Obtain an iterator over Instagram post pages.
Returns:
PageIterator: an iterator over the instagram post pages.
Raises:
ValueError: when the requested user does not exist.
RuntimeError: when the user is a private account
and there is no logged user (or the logged user
does not follow that account). | [
"Obtain",
"an",
"iterator",
"over",
"Instagram",
"post",
"pages",
"."
] | e894d8da368dd57423dd0fda4ac479ea2ea0c3c1 | https://github.com/althonos/InstaLooter/blob/e894d8da368dd57423dd0fda4ac479ea2ea0c3c1/instalooter/looters.py#L705-L723 |
224,905 | althonos/InstaLooter | instalooter/looters.py | PostLooter.medias | def medias(self, timeframe=None):
"""Return a generator that yields only the refered post.
Yields:
dict: a media dictionary obtained from the given post.
Raises:
StopIteration: if the post does not fit the timeframe.
"""
info = self.info
if timeframe is not None:
start, end = TimedMediasIterator.get_times(timeframe)
timestamp = info.get("taken_at_timestamp") or info["media"]
if not (start >= timestamp >= end):
return  # PEP 479: raising StopIteration inside a generator is a RuntimeError on Python 3.7+
yield info | python | def medias(self, timeframe=None):
info = self.info
if timeframe is not None:
start, end = TimedMediasIterator.get_times(timeframe)
timestamp = info.get("taken_at_timestamp") or info["media"]
if not (start >= timestamp >= end):
raise StopIteration
yield info | [
"def",
"medias",
"(",
"self",
",",
"timeframe",
"=",
"None",
")",
":",
"info",
"=",
"self",
".",
"info",
"if",
"timeframe",
"is",
"not",
"None",
":",
"start",
",",
"end",
"=",
"TimedMediasIterator",
".",
"get_times",
"(",
"timeframe",
")",
"timestamp",
... | Return a generator that yields only the referred post.
Yields:
dict: a media dictionary obtained from the given post.
Raises:
StopIteration: if the post does not fit the timeframe. | [
"Return",
"a",
"generator",
"that",
"yields",
"only",
"the",
"refered",
"post",
"."
] | e894d8da368dd57423dd0fda4ac479ea2ea0c3c1 | https://github.com/althonos/InstaLooter/blob/e894d8da368dd57423dd0fda4ac479ea2ea0c3c1/instalooter/looters.py#L810-L826 |
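On Python 3.7+, `raise StopIteration` inside a generator is converted to `RuntimeError` (PEP 479), so the modern form of a filter like `medias` ends with a bare `return`. A self-contained sketch with hypothetical timestamps, keeping the original's newest-first `start >= timestamp >= end` convention:

```python
def medias(info, timeframe=None):
    """Yield the single post dict, but only if it falls in the timeframe.

    `timeframe` is a (start, end) pair of timestamps with start >= end,
    mirroring the original's newest-first convention.
    """
    if timeframe is not None:
        start, end = timeframe
        timestamp = info.get("taken_at_timestamp")
        if not (start >= timestamp >= end):
            return  # PEP 479: end the generator; never raise StopIteration
    yield info
```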
224,906 | althonos/InstaLooter | instalooter/looters.py | PostLooter.download | def download(self,
destination, # type: Union[str, fs.base.FS]
condition=None, # type: Optional[Callable[[dict], bool]]
media_count=None, # type: Optional[int]
timeframe=None, # type: Optional[_Timeframe]
new_only=False, # type: bool
pgpbar_cls=None, # type: Optional[Type[ProgressBar]]
dlpbar_cls=None, # type: Optional[Type[ProgressBar]]
):
# type: (...) -> int
"""Download the refered post to the destination.
See `InstaLooter.download` for argument reference.
Note:
This function, opposed to other *looter* implementations, will
not spawn new threads, but simply use the main thread to download
the files.
Since a worker is in charge of downloading a *media* at a time
(and not a *file*), there would be no point in spawning more.
"""
destination, close_destination = self._init_destfs(destination)
queue = Queue() # type: Queue[Dict]
medias_queued = self._fill_media_queue(
queue, destination, iter(self.medias()), media_count,
new_only, condition)
queue.put(None)
worker = InstaDownloader(
queue=queue,
destination=destination,
namegen=self.namegen,
add_metadata=self.add_metadata,
dump_json=self.dump_json,
dump_only=self.dump_only,
pbar=None,
session=self.session)
worker.run()
return medias_queued | python | def download(self,
destination, # type: Union[str, fs.base.FS]
condition=None, # type: Optional[Callable[[dict], bool]]
media_count=None, # type: Optional[int]
timeframe=None, # type: Optional[_Timeframe]
new_only=False, # type: bool
pgpbar_cls=None, # type: Optional[Type[ProgressBar]]
dlpbar_cls=None, # type: Optional[Type[ProgressBar]]
):
# type: (...) -> int
destination, close_destination = self._init_destfs(destination)
queue = Queue() # type: Queue[Dict]
medias_queued = self._fill_media_queue(
queue, destination, iter(self.medias()), media_count,
new_only, condition)
queue.put(None)
worker = InstaDownloader(
queue=queue,
destination=destination,
namegen=self.namegen,
add_metadata=self.add_metadata,
dump_json=self.dump_json,
dump_only=self.dump_only,
pbar=None,
session=self.session)
worker.run()
return medias_queued | [
"def",
"download",
"(",
"self",
",",
"destination",
",",
"# type: Union[str, fs.base.FS]",
"condition",
"=",
"None",
",",
"# type: Optional[Callable[[dict], bool]]",
"media_count",
"=",
"None",
",",
"# type: Optional[int]",
"timeframe",
"=",
"None",
",",
"# type: Optional... | Download the refered post to the destination.
See `InstaLooter.download` for argument reference.
Note:
This function, opposed to other *looter* implementations, will
not spawn new threads, but simply use the main thread to download
the files.
Since a worker is in charge of downloading a *media* at a time
(and not a *file*), there would be no point in spawning more. | [
"Download",
"the",
"refered",
"post",
"to",
"the",
"destination",
"."
] | e894d8da368dd57423dd0fda4ac479ea2ea0c3c1 | https://github.com/althonos/InstaLooter/blob/e894d8da368dd57423dd0fda4ac479ea2ea0c3c1/instalooter/looters.py#L828-L870 |
224,907 | althonos/InstaLooter | instalooter/cli/logutils.py | warn_logging | def warn_logging(logger):
# type: (logging.Logger) -> Callable
"""Create a `showwarning` function that uses the given logger.
Arguments:
logger (~logging.Logger): the logger to use.
Returns:
function: a function that can be used as the `warnings.showwarning`
callback.
"""
def showwarning(message, category, filename, lineno, file=None, line=None):
logger.warning(message)
return showwarning | python | def warn_logging(logger):
# type: (logging.Logger) -> Callable
def showwarning(message, category, filename, lineno, file=None, line=None):
logger.warning(message)
return showwarning | [
"def",
"warn_logging",
"(",
"logger",
")",
":",
"# type: (logging.Logger) -> Callable",
"def",
"showwarning",
"(",
"message",
",",
"category",
",",
"filename",
",",
"lineno",
",",
"file",
"=",
"None",
",",
"line",
"=",
"None",
")",
":",
"logger",
".",
"warni... | Create a `showwarning` function that uses the given logger.
Arguments:
logger (~logging.Logger): the logger to use.
Returns:
function: a function that can be used as the `warnings.showwarning`
callback. | [
"Create",
"a",
"showwarning",
"function",
"that",
"uses",
"the",
"given",
"logger",
"."
] | e894d8da368dd57423dd0fda4ac479ea2ea0c3c1 | https://github.com/althonos/InstaLooter/blob/e894d8da368dd57423dd0fda4ac479ea2ea0c3c1/instalooter/cli/logutils.py#L16-L30 |
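A quick, stdlib-only check of this pattern (the handler and logger names are illustrative): install the logger-backed `showwarning`, emit a warning, and confirm it reaches the log.

```python
import logging
import warnings

def warn_logging(logger):
    # Build a warnings.showwarning replacement that logs instead of printing.
    def showwarning(message, category, filename, lineno, file=None, line=None):
        logger.warning(message)
    return showwarning

class ListHandler(logging.Handler):
    """Collect formatted log messages in a list for inspection."""
    def __init__(self):
        super().__init__()
        self.messages = []
    def emit(self, record):
        self.messages.append(record.getMessage())

logger = logging.getLogger("warn-demo")
handler = ListHandler()
logger.addHandler(handler)
logger.setLevel(logging.WARNING)

warnings.simplefilter("always")        # make sure the warning is not suppressed
previous = warnings.showwarning
warnings.showwarning = warn_logging(logger)
try:
    warnings.warn("disk almost full")
finally:
    warnings.showwarning = previous    # restore the global hook
```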
224,908 | althonos/InstaLooter | instalooter/cli/logutils.py | wrap_warnings | def wrap_warnings(logger):
"""Have the function patch `warnings.showwarning` with the given logger.
Arguments:
logger (~logging.logger): the logger to wrap warnings with when
the decorated function is called.
Returns:
`function`: a decorator function.
"""
def decorator(func):
@functools.wraps(func)
def new_func(*args, **kwargs):
showwarning = warnings.showwarning
warnings.showwarning = warn_logging(logger)
try:
return func(*args, **kwargs)
finally:
warnings.showwarning = showwarning
return new_func
return decorator | python | def wrap_warnings(logger):
def decorator(func):
@functools.wraps(func)
def new_func(*args, **kwargs):
showwarning = warnings.showwarning
warnings.showwarning = warn_logging(logger)
try:
return func(*args, **kwargs)
finally:
warnings.showwarning = showwarning
return new_func
return decorator | [
"def",
"wrap_warnings",
"(",
"logger",
")",
":",
"def",
"decorator",
"(",
"func",
")",
":",
"@",
"functools",
".",
"wraps",
"(",
"func",
")",
"def",
"new_func",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"showwarning",
"=",
"warnings",
".",... | Have the function patch `warnings.showwarning` with the given logger.
Arguments:
logger (~logging.logger): the logger to wrap warnings with when
the decorated function is called.
Returns:
`function`: a decorator function. | [
"Have",
"the",
"function",
"patch",
"warnings",
".",
"showwarning",
"with",
"the",
"given",
"logger",
"."
] | e894d8da368dd57423dd0fda4ac479ea2ea0c3c1 | https://github.com/althonos/InstaLooter/blob/e894d8da368dd57423dd0fda4ac479ea2ea0c3c1/instalooter/cli/logutils.py#L33-L54 |
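The decorator variant can be exercised the same way; the `try`/`finally` guarantees the previous `showwarning` hook is restored even if the wrapped function raises. A sketch with illustrative names:

```python
import functools
import logging
import warnings

captured = []

class _Collector(logging.Handler):
    def emit(self, record):
        captured.append(record.getMessage())

def wrap_warnings(logger):
    # Swap warnings.showwarning for the duration of each call to func.
    def decorator(func):
        @functools.wraps(func)
        def new_func(*args, **kwargs):
            previous = warnings.showwarning
            warnings.showwarning = lambda message, *a, **k: logger.warning(message)
            try:
                return func(*args, **kwargs)
            finally:
                warnings.showwarning = previous  # restored even on exceptions
        return new_func
    return decorator

logger = logging.getLogger("wrap-demo")
logger.addHandler(_Collector())
logger.setLevel(logging.WARNING)

@wrap_warnings(logger)
def noisy():
    warnings.warn("deprecated call")
    return 42

warnings.simplefilter("always")
result = noisy()
```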
224,909 | althonos/InstaLooter | instalooter/cli/time.py | date_from_isoformat | def date_from_isoformat(isoformat_date):
"""Convert an ISO-8601 date into a `datetime.date` object.
Argument:
isoformat_date (str): a date in ISO-8601 format (YYYY-MM-DD)
Returns:
~datetime.date: the object corresponding to the given ISO date.
Raises:
ValueError: when the date could not be converted successfully.
See Also:
`ISO-8601 specification <https://en.wikipedia.org/wiki/ISO_8601>`_.
"""
year, month, day = isoformat_date.split('-')
return datetime.date(int(year), int(month), int(day)) | python | def date_from_isoformat(isoformat_date):
year, month, day = isoformat_date.split('-')
return datetime.date(int(year), int(month), int(day)) | [
"def",
"date_from_isoformat",
"(",
"isoformat_date",
")",
":",
"year",
",",
"month",
",",
"day",
"=",
"isoformat_date",
".",
"split",
"(",
"'-'",
")",
"return",
"datetime",
".",
"date",
"(",
"int",
"(",
"year",
")",
",",
"int",
"(",
"month",
")",
",",
... | Convert an ISO-8601 date into a `datetime.date` object.
Argument:
isoformat_date (str): a date in ISO-8601 format (YYYY-MM-DD)
Returns:
~datetime.date: the object corresponding to the given ISO date.
Raises:
ValueError: when the date could not be converted successfully.
See Also:
`ISO-8601 specification <https://en.wikipedia.org/wiki/ISO_8601>`_. | [
"Convert",
"an",
"ISO",
"-",
"8601",
"date",
"into",
"a",
"datetime",
".",
"date",
"object",
"."
] | e894d8da368dd57423dd0fda4ac479ea2ea0c3c1 | https://github.com/althonos/InstaLooter/blob/e894d8da368dd57423dd0fda4ac479ea2ea0c3c1/instalooter/cli/time.py#L10-L26 |
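Since Python 3.7 the same conversion is a single stdlib call; a quick equivalence check against the manual split used above:

```python
import datetime

def date_from_isoformat(isoformat_date):
    # Manual split, as in the original (works on Python 2 and 3).
    year, month, day = isoformat_date.split('-')
    return datetime.date(int(year), int(month), int(day))

# Python 3.7+ offers the same parse built in:
# datetime.date.fromisoformat('2019-03-21')
```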
224,910 | althonos/InstaLooter | instalooter/cli/time.py | get_times_from_cli | def get_times_from_cli(cli_token):
"""Convert a CLI token to a datetime tuple.
Argument:
cli_token (str): an isoformat datetime token ([ISO date]:[ISO date])
or a special value among:
* thisday
* thisweek
* thismonth
* thisyear
Returns:
tuple: a datetime.date objects couple, where the first item is
the start of a time frame and the second item the end of the
time frame. Both elements can also be None, if no date was
provided.
Raises:
ValueError: when the CLI token is not in the right format
(no colon in the token, not one of the special values, dates
are not in proper ISO-8601 format.)
See Also:
`ISO-8601 specification <https://en.wikipedia.org/wiki/ISO_8601>`_.
"""
today = datetime.date.today()
if cli_token=="thisday":
return today, today
elif cli_token=="thisweek":
return today, today - dateutil.relativedelta.relativedelta(days=7)
elif cli_token=="thismonth":
return today, today - dateutil.relativedelta.relativedelta(months=1)
elif cli_token=="thisyear":
return today, today - dateutil.relativedelta.relativedelta(years=1)
else:
try:
start_date, stop_date = cli_token.split(':')
except ValueError:
raise ValueError("--time parameter must contain a colon (:)")
if not start_date and not stop_date: # ':', no start date, no stop date
return None, None
try:
start_date = date_from_isoformat(start_date) if start_date else None
stop_date = date_from_isoformat(stop_date) if stop_date else None
except ValueError:
raise ValueError("--time parameter was not provided ISO formatted dates")
return start_date, stop_date | python | def get_times_from_cli(cli_token):
today = datetime.date.today()
if cli_token=="thisday":
return today, today
elif cli_token=="thisweek":
return today, today - dateutil.relativedelta.relativedelta(days=7)
elif cli_token=="thismonth":
return today, today - dateutil.relativedelta.relativedelta(months=1)
elif cli_token=="thisyear":
return today, today - dateutil.relativedelta.relativedelta(years=1)
else:
try:
start_date, stop_date = cli_token.split(':')
except ValueError:
raise ValueError("--time parameter must contain a colon (:)")
if not start_date and not stop_date: # ':', no start date, no stop date
return None, None
try:
start_date = date_from_isoformat(start_date) if start_date else None
stop_date = date_from_isoformat(stop_date) if stop_date else None
except ValueError:
raise ValueError("--time parameter was not provided ISO formatted dates")
return start_date, stop_date | [
"def",
"get_times_from_cli",
"(",
"cli_token",
")",
":",
"today",
"=",
"datetime",
".",
"date",
".",
"today",
"(",
")",
"if",
"cli_token",
"==",
"\"thisday\"",
":",
"return",
"today",
",",
"today",
"elif",
"cli_token",
"==",
"\"thisweek\"",
":",
"return",
... | Convert a CLI token to a datetime tuple.
Argument:
cli_token (str): an isoformat datetime token ([ISO date]:[ISO date])
or a special value among:
* thisday
* thisweek
* thismonth
* thisyear
Returns:
tuple: a datetime.date objects couple, where the first item is
the start of a time frame and the second item the end of the
time frame. Both elements can also be None, if no date was
provided.
Raises:
ValueError: when the CLI token is not in the right format
(no colon in the token, not one of the special values, dates
are not in proper ISO-8601 format.)
See Also:
`ISO-8601 specification <https://en.wikipedia.org/wiki/ISO_8601>`_. | [
"Convert",
"a",
"CLI",
"token",
"to",
"a",
"datetime",
"tuple",
"."
] | e894d8da368dd57423dd0fda4ac479ea2ea0c3c1 | https://github.com/althonos/InstaLooter/blob/e894d8da368dd57423dd0fda4ac479ea2ea0c3c1/instalooter/cli/time.py#L29-L77 |
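The colon-splitting branch also accepts open-ended ranges (`':'`, `'2018-05-01:'`, `':2018-01-01'`). A trimmed, dependency-free sketch of just that branch (the dateutil-based `thisweek`/`thismonth` tokens are omitted; the function name is illustrative):

```python
import datetime

def parse_time_token(cli_token):
    # Handles only the '[ISO date]:[ISO date]' form; either side may be empty.
    try:
        start, stop = cli_token.split(':')
    except ValueError:
        raise ValueError("--time parameter must contain a colon (:)")
    parse = lambda s: datetime.date.fromisoformat(s) if s else None  # Python 3.7+
    try:
        return parse(start), parse(stop)
    except ValueError:
        raise ValueError("--time parameter did not contain ISO-formatted dates")
```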
224,911 | althonos/InstaLooter | instalooter/batch.py | BatchRunner.run_all | def run_all(self):
# type: () -> None
"""Run all the jobs specified in the configuration file.
"""
logger.debug("Creating batch session")
session = Session()
for section_id in self.parser.sections():
self.run_job(section_id, session=session) | python | def run_all(self):
# type: () -> None
logger.debug("Creating batch session")
session = Session()
for section_id in self.parser.sections():
self.run_job(section_id, session=session) | [
"def",
"run_all",
"(",
"self",
")",
":",
"# type: () -> None",
"logger",
".",
"debug",
"(",
"\"Creating batch session\"",
")",
"session",
"=",
"Session",
"(",
")",
"for",
"section_id",
"in",
"self",
".",
"parser",
".",
"sections",
"(",
")",
":",
"self",
".... | Run all the jobs specified in the configuration file. | [
"Run",
"all",
"the",
"jobs",
"specified",
"in",
"the",
"configuration",
"file",
"."
] | e894d8da368dd57423dd0fda4ac479ea2ea0c3c1 | https://github.com/althonos/InstaLooter/blob/e894d8da368dd57423dd0fda4ac479ea2ea0c3c1/instalooter/batch.py#L120-L128 |
224,912 | althonos/InstaLooter | instalooter/batch.py | BatchRunner.run_job | def run_job(self, section_id, session=None):
# type: (Text, Optional[Session]) -> None
"""Run a job as described in the section named ``section_id``.
Raises:
KeyError: when the section could not be found.
"""
if not self.parser.has_section(section_id):
raise KeyError('section not found: {}'.format(section_id))
session = session or Session()
for name, looter_cls in six.iteritems(self._CLS_MAP):
targets = self.get_targets(self._get(section_id, name))
quiet = self._getboolean(
section_id, "quiet", self.args.get("--quiet", False))
if targets:
logger.info("Launching {} job for section {}".format(name, section_id))
for target, directory in six.iteritems(targets):
try:
logger.info("Downloading {} to {}".format(target, directory))
looter = looter_cls(
target,
add_metadata=self._getboolean(section_id, 'add-metadata', False),
get_videos=self._getboolean(section_id, 'get-videos', False),
videos_only=self._getboolean(section_id, 'videos-only', False),
jobs=self._getint(section_id, 'jobs', 16),
template=self._get(section_id, 'template', '{id}'),
dump_json=self._getboolean(section_id, 'dump-json', False),
dump_only=self._getboolean(section_id, 'dump-only', False),
extended_dump=self._getboolean(section_id, 'extended-dump', False),
session=session)
if self.parser.has_option(section_id, 'username'):
looter.logout()
username = self._get(section_id, 'username')
password = self._get(section_id, 'password') or \
getpass.getpass('Password for "{}": '.format(username))
looter.login(username, password)
n = looter.download(
directory,
media_count=self._getint(section_id, 'num-to-dl'),
# FIXME: timeframe=self._get(section_id, 'timeframe'),
new_only=self._getboolean(section_id, 'new', False),
pgpbar_cls=None if quiet else TqdmProgressBar,
dlpbar_cls=None if quiet else TqdmProgressBar)
… | althonos/InstaLooter | instalooter/batch.py | run_job | python | e894d8da368dd57423dd0fda4ac479ea2ea0c3c1 | https://github.com/althonos/InstaLooter/blob/e894d8da368dd57423dd0fda4ac479ea2ea0c3c1/instalooter/batch.py#L130-L185

def run_job(self, section_id, session=None):
    # type: (Text, Optional[Session]) -> None
    """Run a job as described in the section named ``section_id``.

    Raises:
        KeyError: when the section could not be found.
    """
    if not self.parser.has_section(section_id):
        raise KeyError('section not found: {}'.format(section_id))
    session = session or Session()
    for name, looter_cls in six.iteritems(self._CLS_MAP):
        targets = self.get_targets(self._get(section_id, name))
        quiet = self._getboolean(
            section_id, "quiet", self.args.get("--quiet", False))
        if targets:
            logger.info("Launching {} job for section {}".format(name, section_id))
            for target, directory in six.iteritems(targets):
                try:
                    logger.info("Downloading {} to {}".format(target, directory))
                    looter = looter_cls(
                        target,
                        add_metadata=self._getboolean(section_id, 'add-metadata', False),
                        get_videos=self._getboolean(section_id, 'get-videos', False),
                        videos_only=self._getboolean(section_id, 'videos-only', False),
                        jobs=self._getint(section_id, 'jobs', 16),
                        template=self._get(section_id, 'template', '{id}'),
                        dump_json=self._getboolean(section_id, 'dump-json', False),
                        dump_only=self._getboolean(section_id, 'dump-only', False),
                        extended_dump=self._getboolean(section_id, 'extended-dump', False),
                        session=session)
                    if self.parser.has_option(section_id, 'username'):
                        looter.logout()
                        username = self._get(section_id, 'username')
                        password = self._get(section_id, 'password') or \
                            getpass.getpass('Password for "{}": '.format(username))
                        looter.login(username, password)
                    n = looter.download(
                        directory,
                        media_count=self._getint(section_id, 'num-to-dl'),
                        # FIXME: timeframe=self._get(section_id, 'timeframe'),
                        new_only=self._getboolean(section_id, 'new', False),
                        pgpbar_cls=None if quiet else TqdmProgressBar,
                        dlpbar_cls=None if quiet else TqdmProgressBar)
                    logger.success("Downloaded %i medias !", n)
                except Exception as exception:
                    logger.error(six.text_type(exception))
224913 | DataBiosphere/toil | docs/vendor/sphinxcontrib/fulltoc.py | html_page_context | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/docs/vendor/sphinxcontrib/fulltoc.py#L23-L49

def html_page_context(app, pagename, templatename, context, doctree):
    """Event handler for the html-page-context signal.

    Modifies the context directly.

    - Replaces the 'toc' value created by the HTML builder with one
      that shows all document titles and the local table of contents.
    - Sets display_toc to True so the table of contents is always
      displayed, even on empty pages.
    - Replaces the 'toctree' function with one that uses the entire
      document structure, ignores the maxdepth argument, and uses
      only prune and collapse.
    """
    rendered_toc = get_rendered_toctree(app.builder, pagename)
    context['toc'] = rendered_toc
    context['display_toc'] = True  # force toctree to display
    if "toctree" not in context:
        # json builder doesn't use toctree func, so nothing to replace
        return

    def make_toctree(collapse=True):
        return get_rendered_toctree(app.builder,
                                    pagename,
                                    prune=False,
                                    collapse=collapse,
                                    )
    context['toctree'] = make_toctree
224914 | DataBiosphere/toil | docs/vendor/sphinxcontrib/fulltoc.py | get_rendered_toctree | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/docs/vendor/sphinxcontrib/fulltoc.py#L52-L63

def get_rendered_toctree(builder, docname, prune=False, collapse=True):
    """Build the toctree relative to the named document,
    with the given parameters, and then return the rendered
    HTML fragment.
    """
    fulltoc = build_full_toctree(builder,
                                 docname,
                                 prune=prune,
                                 collapse=collapse,
                                 )
    rendered_toc = builder.render_partial(fulltoc)['fragment']
    return rendered_toc
224915 | DataBiosphere/toil | docs/vendor/sphinxcontrib/fulltoc.py | build_full_toctree | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/docs/vendor/sphinxcontrib/fulltoc.py#L66-L86

def build_full_toctree(builder, docname, prune, collapse):
    """Return a single toctree starting from docname containing all
    sub-document doctrees.
    """
    env = builder.env
    doctree = env.get_doctree(env.config.master_doc)
    toctrees = []
    for toctreenode in doctree.traverse(addnodes.toctree):
        toctree = env.resolve_toctree(docname, builder, toctreenode,
                                      collapse=collapse,
                                      prune=prune,
                                      )
        toctrees.append(toctree)
    if not toctrees:
        return None
    result = toctrees[0]
    for toctree in toctrees[1:]:
        if toctree:
            result.extend(toctree.children)
    env.resolve_references(result, docname, builder)
    return result
224916 | DataBiosphere/toil | src/toil/provisioners/aws/awsProvisioner.py | awsRetry | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/provisioners/aws/awsProvisioner.py#L75-L88

def awsRetry(f):
    """
    This decorator retries the wrapped function if aws throws unexpected errors.

    It should wrap any function that makes use of boto
    """
    @wraps(f)
    def wrapper(*args, **kwargs):
        for attempt in retry(delays=truncExpBackoff(),
                             timeout=300,
                             predicate=awsRetryPredicate):
            with attempt:
                return f(*args, **kwargs)
    return wrapper
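The `awsRetry` row above shows Toil's retry decorator only in outline, since `retry`, `truncExpBackoff`, and `awsRetryPredicate` live elsewhere in the repo. The following is a hypothetical, self-contained sketch of the same retry-decorator pattern: it uses a fixed retry count and a caller-supplied predicate instead of Toil's context-manager-based `retry()` with truncated exponential backoff, so all names here (`retrying`, `flaky`) are illustrative, not part of Toil.

```python
import functools
import time

def retrying(times=3, predicate=lambda e: True, delay=0.0):
    """Retry the wrapped callable while predicate(exception) is true."""
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            for attempt in range(times):
                try:
                    return f(*args, **kwargs)
                except Exception as e:
                    # Re-raise on the last attempt or on non-retryable errors.
                    if attempt == times - 1 or not predicate(e):
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator

calls = []

@retrying(times=3, predicate=lambda e: isinstance(e, ValueError))
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ValueError("transient")
    return "ok"
```

Calling `flaky()` fails twice with a retryable `ValueError` and succeeds on the third attempt, which is the behaviour the decorator pattern is after.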
224917 | DataBiosphere/toil | src/toil/provisioners/aws/awsProvisioner.py | AWSProvisioner._readClusterSettings | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/provisioners/aws/awsProvisioner.py#L107-L122

def _readClusterSettings(self):
    """
    Reads the cluster settings from the instance metadata, which assumes the instance
    is the leader.
    """
    instanceMetaData = get_instance_metadata()
    region = zoneToRegion(self._zone)
    conn = boto.ec2.connect_to_region(region)
    instance = conn.get_all_instances(instance_ids=[instanceMetaData["instance-id"]])[0].instances[0]
    self.clusterName = str(instance.tags["Name"])
    self._buildContext()
    self._subnetID = instance.subnet_id
    self._leaderPrivateIP = instanceMetaData['local-ipv4']  # this is PRIVATE IP
    self._keyName = list(instanceMetaData['public-keys'].keys())[0]
    self._tags = self.getLeader().tags
    self._masterPublicKey = self._setSSH()
224918 | DataBiosphere/toil | src/toil/provisioners/aws/awsProvisioner.py | AWSProvisioner.destroyCluster | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/provisioners/aws/awsProvisioner.py#L196-L236

def destroyCluster(self):
    """
    Terminate instances and delete the profile and security group.
    """
    assert self._ctx

    def expectedShutdownErrors(e):
        return e.status == 400 and 'dependent object' in e.body

    instances = self._getNodesInCluster(nodeType=None, both=True)
    spotIDs = self._getSpotRequestIDs()
    if spotIDs:
        self._ctx.ec2.cancel_spot_instance_requests(request_ids=spotIDs)
    instancesToTerminate = awsFilterImpairedNodes(instances, self._ctx.ec2)
    vpcId = None
    if instancesToTerminate:
        vpcId = instancesToTerminate[0].vpc_id
        self._deleteIAMProfiles(instances=instancesToTerminate)
        self._terminateInstances(instances=instancesToTerminate)
    if len(instances) == len(instancesToTerminate):
        logger.debug('Deleting security group...')
        removed = False
        for attempt in retry(timeout=300, predicate=expectedShutdownErrors):
            with attempt:
                for sg in self._ctx.ec2.get_all_security_groups():
                    if sg.name == self.clusterName and vpcId and sg.vpc_id == vpcId:
                        try:
                            self._ctx.ec2.delete_security_group(group_id=sg.id)
                            removed = True
                        except BotoServerError as e:
                            if e.error_code == 'InvalidGroup.NotFound':
                                pass
                            else:
                                raise
        if removed:
            logger.debug('... Succesfully deleted security group')
    else:
        assert len(instances) > len(instancesToTerminate)
        # the security group can't be deleted until all nodes are terminated
        logger.warning('The TOIL_AWS_NODE_DEBUG environment variable is set and some nodes '
                       'have failed health checks. As a result, the security group & IAM '
                       'roles will not be deleted.')
224919 | DataBiosphere/toil | src/toil/provisioners/aws/awsProvisioner.py | AWSProvisioner._waitForIP | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/provisioners/aws/awsProvisioner.py#L394-L406

def _waitForIP(cls, instance):
    """
    Wait until the instance has a public IP address assigned to it.

    :type instance: boto.ec2.instance.Instance
    """
    logger.debug('Waiting for ip...')
    while True:
        time.sleep(a_short_time)
        instance.update()
        if instance.ip_address or instance.public_dns_name or instance.private_ip_address:
            logger.debug('...got ip')
            break
224920 | DataBiosphere/toil | src/toil/lib/ec2.py | wait_instances_running | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/lib/ec2.py#L66-L99

def wait_instances_running(ec2, instances):
    """
    Wait until no instance in the given iterable is 'pending'. Yield every instance that
    entered the running state as soon as it does.

    :param boto.ec2.connection.EC2Connection ec2: the EC2 connection to use for making requests
    :param Iterator[Instance] instances: the instances to wait on
    :rtype: Iterator[Instance]
    """
    running_ids = set()
    other_ids = set()
    while True:
        pending_ids = set()
        for i in instances:
            if i.state == 'pending':
                pending_ids.add(i.id)
            elif i.state == 'running':
                assert i.id not in running_ids
                running_ids.add(i.id)
                yield i
            else:
                assert i.id not in other_ids
                other_ids.add(i.id)
                yield i
        log.info('%i instance(s) pending, %i running, %i other.',
                 *map(len, (pending_ids, running_ids, other_ids)))
        if not pending_ids:
            break
        seconds = max(a_short_time, min(len(pending_ids), 10 * a_short_time))
        log.info('Sleeping for %is', seconds)
        time.sleep(seconds)
        for attempt in retry_ec2():
            with attempt:
                instances = ec2.get_only_instances(list(pending_ids))
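Stripped of the EC2 specifics, `wait_instances_running` above is a poll-until-settled loop: yield items as soon as they leave a transient state, then re-fetch only the still-pending ones. The sketch below is a hypothetical, boto-free version of that pattern; `wait_until_settled`, the scripted `states` table, and `fetch` are all made up for illustration and stand in for the real EC2 calls.

```python
def wait_until_settled(items, fetch, transient='pending'):
    """Yield (name, state) for each item once it leaves the transient state,
    re-fetching the remaining transient items each round."""
    while True:
        pending = []
        for name, state in items:
            if state == transient:
                pending.append(name)
            else:
                yield name, state  # settled: report it immediately
        if not pending:
            break
        items = fetch(pending)  # poll only the still-pending items

# Scripted "API": each instance yields a sequence of observed states.
states = {'i-1': iter(['pending', 'running']), 'i-2': iter(['running'])}

def fetch(names):
    return [(n, next(states[n])) for n in names]

result = dict(wait_until_settled(fetch(['i-1', 'i-2']), fetch))
```

Here `i-2` is yielded on the first pass and `i-1` on the second, once its scripted state flips from 'pending' to 'running'.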
224921 | DataBiosphere/toil | src/toil/lib/ec2.py | wait_spot_requests_active | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/lib/ec2.py#L102-L179

def wait_spot_requests_active(ec2, requests, timeout=None, tentative=False):
    """
    Wait until no spot request in the given iterator is in the 'open' state or, optionally,
    a timeout occurs. Yield spot requests as soon as they leave the 'open' state.

    :param Iterator[SpotInstanceRequest] requests:
    :param float timeout: Maximum time in seconds to spend waiting or None to wait forever. If a
           timeout occurs, the remaining open requests will be cancelled.
    :param bool tentative: if True, give up on a spot request at the earliest indication of it
           not being fulfilled immediately
    :rtype: Iterator[list[SpotInstanceRequest]]
    """
    if timeout is not None:
        timeout = time.time() + timeout
    active_ids = set()
    other_ids = set()
    open_ids = None

    def cancel():
        log.warn('Cancelling remaining %i spot requests.', len(open_ids))
        ec2.cancel_spot_instance_requests(list(open_ids))

    def spot_request_not_found(e):
        error_code = 'InvalidSpotInstanceRequestID.NotFound'
        return isinstance(e, EC2ResponseError) and e.error_code == error_code

    try:
        while True:
            open_ids, eval_ids, fulfill_ids = set(), set(), set()
            batch = []
            for r in requests:
                if r.state == 'open':
                    open_ids.add(r.id)
                    if r.status.code == 'pending-evaluation':
                        eval_ids.add(r.id)
                    elif r.status.code == 'pending-fulfillment':
                        fulfill_ids.add(r.id)
                    else:
                        log.info(
                            'Request %s entered status %s indicating that it will not be '
                            'fulfilled anytime soon.', r.id, r.status.code)
                elif r.state == 'active':
                    assert r.id not in active_ids
                    active_ids.add(r.id)
                    batch.append(r)
                else:
                    assert r.id not in other_ids
                    other_ids.add(r.id)
                    batch.append(r)
            if batch:
                yield batch
            log.info('%i spot requests(s) are open (%i of which are pending evaluation and %i '
                     'are pending fulfillment), %i are active and %i are in another state.',
                     *map(len, (open_ids, eval_ids, fulfill_ids, active_ids, other_ids)))
            if not open_ids or tentative and not eval_ids and not fulfill_ids:
                break
            sleep_time = 2 * a_short_time
            if timeout is not None and time.time() + sleep_time >= timeout:
                log.warn('Timed out waiting for spot requests.')
                break
            log.info('Sleeping for %is', sleep_time)
            time.sleep(sleep_time)
            for attempt in retry_ec2(retry_while=spot_request_not_found):
                with attempt:
                    requests = ec2.get_all_spot_instance_requests(
                        list(open_ids))
    except BaseException:
        if open_ids:
            with panic(log):
                cancel()
        raise
    else:
        if open_ids:
            cancel()
224922 | DataBiosphere/toil | src/toil/lib/ec2.py | create_ondemand_instances | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/lib/ec2.py#L240-L255

def create_ondemand_instances(ec2, image_id, spec, num_instances=1):
    """
    Requests the RunInstances EC2 API call but accounts for the race between recently created
    instance profiles, IAM roles and an instance creation that refers to them.

    :rtype: list[Instance]
    """
    instance_type = spec['instance_type']
    log.info('Creating %s instance(s) ... ', instance_type)
    for attempt in retry_ec2(retry_for=a_long_time,
                             retry_while=inconsistencies_detected):
        with attempt:
            return ec2.run_instances(image_id,
                                     min_count=num_instances,
                                     max_count=num_instances,
                                     **spec).instances
224923 | DataBiosphere/toil | version_template.py | distVersion | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/version_template.py#L56-L70

def distVersion():
    """
    The distribution version identifying a published release on PyPI.
    """
    from pkg_resources import parse_version
    build_number = buildNumber()
    parsedBaseVersion = parse_version(baseVersion)
    if isinstance(parsedBaseVersion, tuple):
        raise RuntimeError("Setuptools version 8.0 or newer required. Update by running "
                           "'pip install setuptools --upgrade'")
    if build_number is not None and parsedBaseVersion.is_prerelease:
        return baseVersion + '.dev' + build_number
    else:
        return baseVersion
224924 | DataBiosphere/toil | src/toil/lib/throttle.py | GlobalThrottle.throttle | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/lib/throttle.py#L53-L68

def throttle(self, wait=True):
    """
    If the wait parameter is True, this method returns True after suspending the current
    thread as necessary to ensure that no less than the configured minimum interval passed
    since the most recent time an invocation of this method returned True in any thread.

    If the wait parameter is False, this method immediately returns True if at least the
    configured minimum interval has passed since the most recent time this method returned
    True in any thread, or False otherwise.
    """
    # I think there is a race in Thread.start(), hence the lock
    with self.thread_start_lock:
        if not self.thread_started:
            self.thread.start()
            self.thread_started = True
    return self.semaphore.acquire(blocking=wait)
224925 | DataBiosphere/toil | src/toil/lib/throttle.py | LocalThrottle.throttle | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/lib/throttle.py#L97-L118

def throttle(self, wait=True):
    """
    If the wait parameter is True, this method returns True after suspending the current
    thread as necessary to ensure that no less than the configured minimum interval has
    passed since the last invocation of this method in the current thread returned True.

    If the wait parameter is False, this method immediately returns True (if at least the
    configured minimum interval has passed since the last time this method returned True in
    the current thread) or False otherwise.
    """
    now = time.time()
    last_invocation = self.per_thread.last_invocation
    if last_invocation is not None:
        interval = now - last_invocation
        if interval < self.min_interval:
            if wait:
                remainder = self.min_interval - interval
                time.sleep(remainder)
            else:
                return False
    self.per_thread.last_invocation = now
    return True
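The per-thread throttling behaviour documented for `LocalThrottle.throttle` above can be sketched without the rest of the class. The following is a hypothetical, self-contained variant (`MiniThrottle` is an invented name, not Toil's class): it keeps the last-invocation timestamp in a `threading.local()` so each thread is rate-limited independently, and returns False instead of sleeping when `wait=False` and the minimum interval has not yet elapsed.

```python
import threading
import time

class MiniThrottle(object):
    def __init__(self, min_interval):
        self.min_interval = min_interval
        self.per_thread = threading.local()  # one timestamp per thread

    def throttle(self, wait=True):
        now = time.time()
        last = getattr(self.per_thread, 'last', None)
        if last is not None:
            interval = now - last
            if interval < self.min_interval:
                if wait:
                    # Sleep off the remainder of the minimum interval.
                    time.sleep(self.min_interval - interval)
                else:
                    return False
        self.per_thread.last = time.time()
        return True

throttle = MiniThrottle(min_interval=0.05)
first = throttle.throttle()             # no prior invocation in this thread
second = throttle.throttle(wait=False)  # called again immediately
```

The first call returns True because the thread has no recorded invocation; the immediate non-blocking second call returns False because the 50 ms interval has not elapsed.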
224,926 | DataBiosphere/toil | src/toil/batchSystems/parasol.py | ParasolBatchSystem._runParasol | def _runParasol(self, command, autoRetry=True):
"""
Issues a parasol command using popen to capture the output. If the command fails then it
will try pinging parasol until it gets a response. When it gets a response it will
recursively call the issue parasol command, repeating this pattern for a maximum of N
times. The final exit value will reflect this.
"""
command = list(concat(self.parasolCommand, command))
while True:
logger.debug('Running %r', command)
process = subprocess.Popen(command,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
bufsize=-1)
stdout, stderr = process.communicate()
status = process.wait()
for line in stderr.decode('utf-8').split('\n'):
if line: logger.warn(line)
if status == 0:
return 0, stdout.decode('utf-8').split('\n')
message = 'Command %r failed with exit status %i' % (command, status)
if autoRetry:
logger.warn(message)
else:
logger.error(message)
return status, None
            logger.warn('Waiting 10s before trying again')
time.sleep(10) | python | def _runParasol(self, command, autoRetry=True):
command = list(concat(self.parasolCommand, command))
while True:
logger.debug('Running %r', command)
process = subprocess.Popen(command,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
bufsize=-1)
stdout, stderr = process.communicate()
status = process.wait()
for line in stderr.decode('utf-8').split('\n'):
if line: logger.warn(line)
if status == 0:
return 0, stdout.decode('utf-8').split('\n')
message = 'Command %r failed with exit status %i' % (command, status)
if autoRetry:
logger.warn(message)
else:
logger.error(message)
return status, None
            logger.warn('Waiting 10s before trying again')
time.sleep(10) | [
"def",
"_runParasol",
"(",
"self",
",",
"command",
",",
"autoRetry",
"=",
"True",
")",
":",
"command",
"=",
"list",
"(",
"concat",
"(",
"self",
".",
"parasolCommand",
",",
"command",
")",
")",
"while",
"True",
":",
"logger",
".",
"debug",
"(",
"'Runnin... | Issues a parasol command using popen to capture the output. If the command fails then it
will try pinging parasol until it gets a response. When it gets a response it will
recursively call the issue parasol command, repeating this pattern for a maximum of N
times. The final exit value will reflect this. | [
"Issues",
"a",
"parasol",
"command",
"using",
"popen",
"to",
"capture",
"the",
"output",
".",
"If",
"the",
"command",
"fails",
"then",
"it",
"will",
"try",
"pinging",
"parasol",
"until",
"it",
"gets",
"a",
"response",
".",
"When",
"it",
"gets",
"a",
"res... | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/batchSystems/parasol.py#L104-L131 |
224,927 | DataBiosphere/toil | src/toil/batchSystems/parasol.py | ParasolBatchSystem.issueBatchJob | def issueBatchJob(self, jobNode):
"""
Issues parasol with job commands.
"""
self.checkResourceRequest(jobNode.memory, jobNode.cores, jobNode.disk)
MiB = 1 << 20
truncatedMemory = (old_div(jobNode.memory, MiB)) * MiB
# Look for a batch for jobs with these resource requirements, with
# the memory rounded down to the nearest megabyte. Rounding down
        # means the new job can't ever decrease the memory requirements
# of jobs already in the batch.
if len(self.resultsFiles) >= self.maxBatches:
raise RuntimeError( 'Number of batches reached limit of %i' % self.maxBatches)
try:
results = self.resultsFiles[(truncatedMemory, jobNode.cores)]
except KeyError:
results = getTempFile(rootDir=self.parasolResultsDir)
self.resultsFiles[(truncatedMemory, jobNode.cores)] = results
# Prefix the command with environment overrides, optionally looking them up from the
# current environment if the value is None
command = ' '.join(concat('env', self.__environment(), jobNode.command))
parasolCommand = ['-verbose',
'-ram=%i' % jobNode.memory,
'-cpu=%i' % jobNode.cores,
'-results=' + results,
'add', 'job', command]
# Deal with the cpus
self.usedCpus += jobNode.cores
while True: # Process finished results with no wait
try:
jobID = self.cpuUsageQueue.get_nowait()
except Empty:
break
if jobID in list(self.jobIDsToCpu.keys()):
self.usedCpus -= self.jobIDsToCpu.pop(jobID)
assert self.usedCpus >= 0
while self.usedCpus > self.maxCores: # If we are still waiting
jobID = self.cpuUsageQueue.get()
if jobID in list(self.jobIDsToCpu.keys()):
self.usedCpus -= self.jobIDsToCpu.pop(jobID)
assert self.usedCpus >= 0
# Now keep going
while True:
line = self._runParasol(parasolCommand)[1][0]
match = self.parasolOutputPattern.match(line)
if match is None:
# This is because parasol add job will return success, even if the job was not
# properly issued!
                logger.debug('We failed to properly add the job; we will try again in 5s.')
time.sleep(5)
else:
jobID = int(match.group(1))
self.jobIDsToCpu[jobID] = jobNode.cores
self.runningJobs.add(jobID)
logger.debug("Got the parasol job id: %s from line: %s" % (jobID, line))
return jobID | python | def issueBatchJob(self, jobNode):
self.checkResourceRequest(jobNode.memory, jobNode.cores, jobNode.disk)
MiB = 1 << 20
truncatedMemory = (old_div(jobNode.memory, MiB)) * MiB
# Look for a batch for jobs with these resource requirements, with
# the memory rounded down to the nearest megabyte. Rounding down
        # means the new job can't ever decrease the memory requirements
# of jobs already in the batch.
if len(self.resultsFiles) >= self.maxBatches:
raise RuntimeError( 'Number of batches reached limit of %i' % self.maxBatches)
try:
results = self.resultsFiles[(truncatedMemory, jobNode.cores)]
except KeyError:
results = getTempFile(rootDir=self.parasolResultsDir)
self.resultsFiles[(truncatedMemory, jobNode.cores)] = results
# Prefix the command with environment overrides, optionally looking them up from the
# current environment if the value is None
command = ' '.join(concat('env', self.__environment(), jobNode.command))
parasolCommand = ['-verbose',
'-ram=%i' % jobNode.memory,
'-cpu=%i' % jobNode.cores,
'-results=' + results,
'add', 'job', command]
# Deal with the cpus
self.usedCpus += jobNode.cores
while True: # Process finished results with no wait
try:
jobID = self.cpuUsageQueue.get_nowait()
except Empty:
break
if jobID in list(self.jobIDsToCpu.keys()):
self.usedCpus -= self.jobIDsToCpu.pop(jobID)
assert self.usedCpus >= 0
while self.usedCpus > self.maxCores: # If we are still waiting
jobID = self.cpuUsageQueue.get()
if jobID in list(self.jobIDsToCpu.keys()):
self.usedCpus -= self.jobIDsToCpu.pop(jobID)
assert self.usedCpus >= 0
# Now keep going
while True:
line = self._runParasol(parasolCommand)[1][0]
match = self.parasolOutputPattern.match(line)
if match is None:
# This is because parasol add job will return success, even if the job was not
# properly issued!
                logger.debug('We failed to properly add the job; we will try again in 5s.')
time.sleep(5)
else:
jobID = int(match.group(1))
self.jobIDsToCpu[jobID] = jobNode.cores
self.runningJobs.add(jobID)
logger.debug("Got the parasol job id: %s from line: %s" % (jobID, line))
return jobID | [
"def",
"issueBatchJob",
"(",
"self",
",",
"jobNode",
")",
":",
"self",
".",
"checkResourceRequest",
"(",
"jobNode",
".",
"memory",
",",
"jobNode",
".",
"cores",
",",
"jobNode",
".",
"disk",
")",
"MiB",
"=",
"1",
"<<",
"20",
"truncatedMemory",
"=",
"(",
... | Issues parasol with job commands. | [
"Issues",
"parasol",
"with",
"job",
"commands",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/batchSystems/parasol.py#L135-L192 |
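The batching key used by `issueBatchJob` — memory rounded down to the nearest mebibyte, paired with the core count — can be isolated into a tiny helper (`batch_key` is a hypothetical name):

```python
def batch_key(memory, cores):
    """Round memory down to the nearest MiB so that adding a job can never
    lower the memory requirement of jobs already batched under this key."""
    MiB = 1 << 20
    return (memory // MiB) * MiB, cores
```

Jobs whose requirements map to the same key share one results file.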
224,928 | DataBiosphere/toil | src/toil/batchSystems/parasol.py | ParasolBatchSystem.getJobIDsForResultsFile | def getJobIDsForResultsFile(self, resultsFile):
"""
Get all queued and running jobs for a results file.
"""
jobIDs = []
for line in self._runParasol(['-extended', 'list', 'jobs'])[1]:
fields = line.strip().split()
if len(fields) == 0 or fields[-1] != resultsFile:
continue
jobID = fields[0]
jobIDs.append(int(jobID))
return set(jobIDs) | python | def getJobIDsForResultsFile(self, resultsFile):
jobIDs = []
for line in self._runParasol(['-extended', 'list', 'jobs'])[1]:
fields = line.strip().split()
if len(fields) == 0 or fields[-1] != resultsFile:
continue
jobID = fields[0]
jobIDs.append(int(jobID))
return set(jobIDs) | [
"def",
"getJobIDsForResultsFile",
"(",
"self",
",",
"resultsFile",
")",
":",
"jobIDs",
"=",
"[",
"]",
"for",
"line",
"in",
"self",
".",
"_runParasol",
"(",
"[",
"'-extended'",
",",
"'list'",
",",
"'jobs'",
"]",
")",
"[",
"1",
"]",
":",
"fields",
"=",
... | Get all queued and running jobs for a results file. | [
"Get",
"all",
"queued",
"and",
"running",
"jobs",
"for",
"a",
"results",
"file",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/batchSystems/parasol.py#L226-L237 |
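The line-filtering logic can be factored into a pure function over the captured output of `parasol -extended list jobs`; the column layout assumed here (job ID first, results file last) is taken from the method in this record:

```python
def job_ids_for_results_file(listing_lines, results_file):
    """Collect the integer job IDs (first field) of listing rows whose
    last field names the given results file."""
    job_ids = set()
    for line in listing_lines:
        fields = line.strip().split()
        if fields and fields[-1] == results_file:
            job_ids.add(int(fields[0]))
    return job_ids
```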
224,929 | DataBiosphere/toil | src/toil/batchSystems/parasol.py | ParasolBatchSystem.getIssuedBatchJobIDs | def getIssuedBatchJobIDs(self):
"""
Gets the list of jobs issued to parasol in all results files, but not including jobs
created by other users.
"""
issuedJobs = set()
for resultsFile in itervalues(self.resultsFiles):
issuedJobs.update(self.getJobIDsForResultsFile(resultsFile))
return list(issuedJobs) | python | def getIssuedBatchJobIDs(self):
issuedJobs = set()
for resultsFile in itervalues(self.resultsFiles):
issuedJobs.update(self.getJobIDsForResultsFile(resultsFile))
return list(issuedJobs) | [
"def",
"getIssuedBatchJobIDs",
"(",
"self",
")",
":",
"issuedJobs",
"=",
"set",
"(",
")",
"for",
"resultsFile",
"in",
"itervalues",
"(",
"self",
".",
"resultsFiles",
")",
":",
"issuedJobs",
".",
"update",
"(",
"self",
".",
"getJobIDsForResultsFile",
"(",
"re... | Gets the list of jobs issued to parasol in all results files, but not including jobs
created by other users. | [
"Gets",
"the",
"list",
"of",
"jobs",
"issued",
"to",
"parasol",
"in",
"all",
"results",
"files",
"but",
"not",
"including",
"jobs",
"created",
"by",
"other",
"users",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/batchSystems/parasol.py#L239-L248 |
224,930 | DataBiosphere/toil | src/toil/batchSystems/parasol.py | ParasolBatchSystem.getRunningBatchJobIDs | def getRunningBatchJobIDs(self):
"""
Returns map of running jobIDs and the time they have been running.
"""
# Example lines..
# r 5410186 benedictpaten worker 1247029663 localhost
# r 5410324 benedictpaten worker 1247030076 localhost
runningJobs = {}
issuedJobs = self.getIssuedBatchJobIDs()
for line in self._runParasol(['pstat2'])[1]:
if line != '':
match = self.runningPattern.match(line)
if match is not None:
jobID = int(match.group(1))
startTime = int(match.group(2))
if jobID in issuedJobs: # It's one of our jobs
runningJobs[jobID] = time.time() - startTime
return runningJobs | python | def getRunningBatchJobIDs(self):
# Example lines..
# r 5410186 benedictpaten worker 1247029663 localhost
# r 5410324 benedictpaten worker 1247030076 localhost
runningJobs = {}
issuedJobs = self.getIssuedBatchJobIDs()
for line in self._runParasol(['pstat2'])[1]:
if line != '':
match = self.runningPattern.match(line)
if match is not None:
jobID = int(match.group(1))
startTime = int(match.group(2))
if jobID in issuedJobs: # It's one of our jobs
runningJobs[jobID] = time.time() - startTime
return runningJobs | [
"def",
"getRunningBatchJobIDs",
"(",
"self",
")",
":",
"# Example lines..",
"# r 5410186 benedictpaten worker 1247029663 localhost",
"# r 5410324 benedictpaten worker 1247030076 localhost",
"runningJobs",
"=",
"{",
"}",
"issuedJobs",
"=",
"self",
".",
"getIssuedBatchJobIDs",
"(",... | Returns map of running jobIDs and the time they have been running. | [
"Returns",
"map",
"of",
"running",
"jobIDs",
"and",
"the",
"time",
"they",
"have",
"been",
"running",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/batchSystems/parasol.py#L250-L267 |
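Parsing the `pstat2` output can be sketched with an explicit regular expression. The pattern below is an assumption reconstructed from the example lines in the docstring (status letter, job ID, user, program, start time, host); Toil's actual `runningPattern` may differ:

```python
import re

# Hypothetical pattern for lines such as:
#   r 5410186 benedictpaten worker 1247029663 localhost
RUNNING = re.compile(r'r\s+(\d+)\s+\S+\s+\S+\s+(\d+)\s+\S+')

def running_times(pstat2_lines, issued_ids, now):
    """Map each issued, running job ID to seconds elapsed since it started."""
    running = {}
    for line in pstat2_lines:
        match = RUNNING.match(line)
        if match is not None:
            job_id, start = int(match.group(1)), int(match.group(2))
            if job_id in issued_ids:  # ignore other users' jobs
                running[job_id] = now - start
    return running
```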
224,931 | DataBiosphere/toil | src/toil/batchSystems/parasol.py | ParasolBatchSystem.updatedJobWorker | def updatedJobWorker(self):
"""
We use the parasol results to update the status of jobs, adding them
to the list of updated jobs.
Results have the following structure.. (thanks Mark D!)
int status; /* Job status - wait() return format. 0 is good. */
char *host; /* Machine job ran on. */
char *jobId; /* Job queuing system job ID */
char *exe; /* Job executable file (no path) */
int usrTicks; /* 'User' CPU time in ticks. */
int sysTicks; /* 'System' CPU time in ticks. */
unsigned submitTime; /* Job submission time in seconds since 1/1/1970 */
unsigned startTime; /* Job start time in seconds since 1/1/1970 */
unsigned endTime; /* Job end time in seconds since 1/1/1970 */
char *user; /* User who ran job */
char *errFile; /* Location of stderr file on host */
Plus you finally have the command name.
"""
resultsFiles = set()
resultsFileHandles = []
try:
while self.running:
# Look for any new results files that have been created, and open them
newResultsFiles = set(os.listdir(self.parasolResultsDir)).difference(resultsFiles)
for newFile in newResultsFiles:
newFilePath = os.path.join(self.parasolResultsDir, newFile)
resultsFileHandles.append(open(newFilePath, 'r'))
resultsFiles.add(newFile)
for fileHandle in resultsFileHandles:
while self.running:
line = fileHandle.readline()
if not line:
break
assert line[-1] == '\n'
(status, host, jobId, exe, usrTicks, sysTicks, submitTime, startTime,
endTime, user, errFile, command) = line[:-1].split(None, 11)
status = int(status)
jobId = int(jobId)
if os.WIFEXITED(status):
status = os.WEXITSTATUS(status)
else:
status = -status
self.cpuUsageQueue.put(jobId)
startTime = int(startTime)
endTime = int(endTime)
if endTime == startTime:
                            # Both start and end times are integers, so to get sub-second
# accuracy we use the ticks reported by Parasol as an approximation.
# This isn't documented but what Parasol calls "ticks" is actually a
# hundredth of a second. Parasol does the unit conversion early on
# after a job finished. Search paraNode.c for ticksToHundreths. We
# also cheat a little by always reporting at least one hundredth of a
# second.
usrTicks = int(usrTicks)
sysTicks = int(sysTicks)
wallTime = float( max( 1, usrTicks + sysTicks) ) * 0.01
else:
wallTime = float(endTime - startTime)
self.updatedJobsQueue.put((jobId, status, wallTime))
time.sleep(1)
except:
logger.warn("Error occurred while parsing parasol results files.")
raise
finally:
for fileHandle in resultsFileHandles:
fileHandle.close() | python | def updatedJobWorker(self):
resultsFiles = set()
resultsFileHandles = []
try:
while self.running:
# Look for any new results files that have been created, and open them
newResultsFiles = set(os.listdir(self.parasolResultsDir)).difference(resultsFiles)
for newFile in newResultsFiles:
newFilePath = os.path.join(self.parasolResultsDir, newFile)
resultsFileHandles.append(open(newFilePath, 'r'))
resultsFiles.add(newFile)
for fileHandle in resultsFileHandles:
while self.running:
line = fileHandle.readline()
if not line:
break
assert line[-1] == '\n'
(status, host, jobId, exe, usrTicks, sysTicks, submitTime, startTime,
endTime, user, errFile, command) = line[:-1].split(None, 11)
status = int(status)
jobId = int(jobId)
if os.WIFEXITED(status):
status = os.WEXITSTATUS(status)
else:
status = -status
self.cpuUsageQueue.put(jobId)
startTime = int(startTime)
endTime = int(endTime)
if endTime == startTime:
                        # Both start and end times are integers, so to get sub-second
# accuracy we use the ticks reported by Parasol as an approximation.
# This isn't documented but what Parasol calls "ticks" is actually a
# hundredth of a second. Parasol does the unit conversion early on
# after a job finished. Search paraNode.c for ticksToHundreths. We
# also cheat a little by always reporting at least one hundredth of a
# second.
usrTicks = int(usrTicks)
sysTicks = int(sysTicks)
wallTime = float( max( 1, usrTicks + sysTicks) ) * 0.01
else:
wallTime = float(endTime - startTime)
self.updatedJobsQueue.put((jobId, status, wallTime))
time.sleep(1)
except:
logger.warn("Error occurred while parsing parasol results files.")
raise
finally:
for fileHandle in resultsFileHandles:
fileHandle.close() | [
"def",
"updatedJobWorker",
"(",
"self",
")",
":",
"resultsFiles",
"=",
"set",
"(",
")",
"resultsFileHandles",
"=",
"[",
"]",
"try",
":",
"while",
"self",
".",
"running",
":",
"# Look for any new results files that have been created, and open them",
"newResultsFiles",
... | We use the parasol results to update the status of jobs, adding them
to the list of updated jobs.
Results have the following structure.. (thanks Mark D!)
int status; /* Job status - wait() return format. 0 is good. */
char *host; /* Machine job ran on. */
char *jobId; /* Job queuing system job ID */
char *exe; /* Job executable file (no path) */
int usrTicks; /* 'User' CPU time in ticks. */
int sysTicks; /* 'System' CPU time in ticks. */
unsigned submitTime; /* Job submission time in seconds since 1/1/1970 */
unsigned startTime; /* Job start time in seconds since 1/1/1970 */
unsigned endTime; /* Job end time in seconds since 1/1/1970 */
char *user; /* User who ran job */
char *errFile; /* Location of stderr file on host */
Plus you finally have the command name. | [
"We",
"use",
"the",
"parasol",
"results",
"to",
"update",
"the",
"status",
"of",
"jobs",
"adding",
"them",
"to",
"the",
"list",
"of",
"updated",
"jobs",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/batchSystems/parasol.py#L283-L351 |
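The per-line parsing inside the worker loop can be pulled out as a pure function. This sketch keeps the 12-field split and the tick-based sub-second wall time, but omits the `os.WIFEXITED` decoding of the wait()-style status that the real loop performs:

```python
def parse_result_line(line):
    """Split one parasol results line into its 12 fields and derive a wall
    time; 'ticks' are hundredths of a second, counted as at least 1."""
    (status, host, job_id, exe, usr_ticks, sys_ticks, submit_time,
     start_time, end_time, user, err_file,
     command) = line.rstrip('\n').split(None, 11)
    start_time, end_time = int(start_time), int(end_time)
    if end_time == start_time:
        # sub-second job: approximate the duration from CPU ticks
        wall_time = max(1, int(usr_ticks) + int(sys_ticks)) * 0.01
    else:
        wall_time = float(end_time - start_time)
    return int(job_id), int(status), wall_time
```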
224,932 | DataBiosphere/toil | src/toil/wdl/wdl_functions.py | process_infile | def process_infile(f, fileStore):
"""
    Takes an array of files or a single file and imports it into the jobstore.
This returns a tuple or an array of tuples replacing all previous path
strings. Toil does not preserve a file's original name upon import and
so the tuple keeps track of this with the format: '(filepath, preserveThisFilename)'
:param f: String or an Array. The smallest element must be a string,
so: an array of strings, an array of arrays of strings... etc.
:param fileStore: The filestore object that is called to load files into the filestore.
:return: A tuple or an array of tuples.
"""
# check if this has already been processed
if isinstance(f, tuple):
return f
elif isinstance(f, list):
return process_array_infile(f, fileStore)
elif isinstance(f, basestring):
return process_single_infile(f, fileStore)
else:
        raise RuntimeError('Error processing file: {}'.format(str(f)))
# check if this has already been processed
if isinstance(f, tuple):
return f
elif isinstance(f, list):
return process_array_infile(f, fileStore)
elif isinstance(f, basestring):
return process_single_infile(f, fileStore)
else:
        raise RuntimeError('Error processing file: {}'.format(str(f)))
"def",
"process_infile",
"(",
"f",
",",
"fileStore",
")",
":",
"# check if this has already been processed",
"if",
"isinstance",
"(",
"f",
",",
"tuple",
")",
":",
"return",
"f",
"elif",
"isinstance",
"(",
"f",
",",
"list",
")",
":",
"return",
"process_array_in... | Takes an array of files or a single file and imports into the jobstore.
This returns a tuple or an array of tuples replacing all previous path
strings. Toil does not preserve a file's original name upon import and
so the tuple keeps track of this with the format: '(filepath, preserveThisFilename)'
:param f: String or an Array. The smallest element must be a string,
so: an array of strings, an array of arrays of strings... etc.
:param fileStore: The filestore object that is called to load files into the filestore.
:return: A tuple or an array of tuples. | [
"Takes",
"an",
"array",
"of",
"files",
"or",
"a",
"single",
"file",
"and",
"imports",
"into",
"the",
"jobstore",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/wdl/wdl_functions.py#L202-L223 |
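The recursive dispatch in `process_infile` can be sketched independently of Toil's filestore. The `import_file` callable below stands in for `fileStore.importFile` and its default value is purely illustrative; the sketch also checks Python 3's `str` where the original checks `basestring`:

```python
import os

def tag_infile(f, import_file=lambda path: 'jobstore://' + path):
    """Recursively replace path strings with (imported_path, original_name)
    tuples, leaving already-processed tuples untouched."""
    if isinstance(f, tuple):
        return f  # already imported
    if isinstance(f, list):
        return [tag_infile(x, import_file) for x in f]
    if isinstance(f, str):
        # keep the basename so the original filename can be restored later
        return (import_file(f), os.path.basename(f))
    raise RuntimeError('Error processing file: {}'.format(f))
```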
224,933 | DataBiosphere/toil | src/toil/provisioners/gceProvisioner.py | GCEProvisioner._readCredentials | def _readCredentials(self):
"""
Get the credentials from the file specified by GOOGLE_APPLICATION_CREDENTIALS.
"""
self._googleJson = os.getenv('GOOGLE_APPLICATION_CREDENTIALS')
if not self._googleJson:
raise RuntimeError('GOOGLE_APPLICATION_CREDENTIALS not set.')
try:
with open(self._googleJson) as jsonFile:
self.googleConnectionParams = json.loads(jsonFile.read())
except:
raise RuntimeError('GCEProvisioner: Could not parse the Google service account json file %s'
% self._googleJson)
self._projectId = self.googleConnectionParams['project_id']
self._clientEmail = self.googleConnectionParams['client_email']
self._credentialsPath = self._googleJson
self._masterPublicKey = None
self._gceDriver = self._getDriver() | python | def _readCredentials(self):
self._googleJson = os.getenv('GOOGLE_APPLICATION_CREDENTIALS')
if not self._googleJson:
raise RuntimeError('GOOGLE_APPLICATION_CREDENTIALS not set.')
try:
with open(self._googleJson) as jsonFile:
self.googleConnectionParams = json.loads(jsonFile.read())
except:
raise RuntimeError('GCEProvisioner: Could not parse the Google service account json file %s'
% self._googleJson)
self._projectId = self.googleConnectionParams['project_id']
self._clientEmail = self.googleConnectionParams['client_email']
self._credentialsPath = self._googleJson
self._masterPublicKey = None
self._gceDriver = self._getDriver() | [
"def",
"_readCredentials",
"(",
"self",
")",
":",
"self",
".",
"_googleJson",
"=",
"os",
".",
"getenv",
"(",
"'GOOGLE_APPLICATION_CREDENTIALS'",
")",
"if",
"not",
"self",
".",
"_googleJson",
":",
"raise",
"RuntimeError",
"(",
"'GOOGLE_APPLICATION_CREDENTIALS not set... | Get the credentials from the file specified by GOOGLE_APPLICATION_CREDENTIALS. | [
"Get",
"the",
"credentials",
"from",
"the",
"file",
"specified",
"by",
"GOOGLE_APPLICATION_CREDENTIALS",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/provisioners/gceProvisioner.py#L92-L110 |
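The credential-loading steps can be sketched as a standalone function that reads the same environment variable; the two-field return value is an illustrative simplification of the attributes the provisioner stores:

```python
import json
import os

def read_service_account(env_var='GOOGLE_APPLICATION_CREDENTIALS'):
    """Load a service-account JSON file named by the environment and pull
    out the project id and client email."""
    path = os.getenv(env_var)
    if not path:
        raise RuntimeError('%s not set.' % env_var)
    try:
        with open(path) as json_file:
            params = json.load(json_file)
    except ValueError:
        raise RuntimeError(
            'Could not parse the Google service account json file %s' % path)
    return params['project_id'], params['client_email']
```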
224,934 | DataBiosphere/toil | src/toil/provisioners/gceProvisioner.py | GCEProvisioner.destroyCluster | def destroyCluster(self):
"""
Try a few times to terminate all of the instances in the group.
"""
logger.debug("Destroying cluster %s" % self.clusterName)
instancesToTerminate = self._getNodesInCluster()
attempts = 0
while instancesToTerminate and attempts < 3:
self._terminateInstances(instances=instancesToTerminate)
instancesToTerminate = self._getNodesInCluster()
attempts += 1
# remove group
instanceGroup = self._gceDriver.ex_get_instancegroup(self.clusterName, zone=self._zone)
instanceGroup.destroy() | python | def destroyCluster(self):
logger.debug("Destroying cluster %s" % self.clusterName)
instancesToTerminate = self._getNodesInCluster()
attempts = 0
while instancesToTerminate and attempts < 3:
self._terminateInstances(instances=instancesToTerminate)
instancesToTerminate = self._getNodesInCluster()
attempts += 1
# remove group
instanceGroup = self._gceDriver.ex_get_instancegroup(self.clusterName, zone=self._zone)
instanceGroup.destroy() | [
"def",
"destroyCluster",
"(",
"self",
")",
":",
"logger",
".",
"debug",
"(",
"\"Destroying cluster %s\"",
"%",
"self",
".",
"clusterName",
")",
"instancesToTerminate",
"=",
"self",
".",
"_getNodesInCluster",
"(",
")",
"attempts",
"=",
"0",
"while",
"instancesToT... | Try a few times to terminate all of the instances in the group. | [
"Try",
"a",
"few",
"times",
"to",
"terminate",
"all",
"of",
"the",
"instances",
"in",
"the",
"group",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/provisioners/gceProvisioner.py#L202-L216 |
224,935 | DataBiosphere/toil | src/toil/provisioners/gceProvisioner.py | GCEProvisioner._injectWorkerFiles | def _injectWorkerFiles(self, node, botoExists):
"""
Set up the credentials on the worker.
"""
node.waitForNode('toil_worker', keyName=self._keyName)
node.copySshKeys(self._keyName)
node.injectFile(self._credentialsPath, GoogleJobStore.nodeServiceAccountJson, 'toil_worker')
if self._sseKey:
node.injectFile(self._sseKey, self._sseKey, 'toil_worker')
if botoExists:
node.injectFile(self._botoPath, self.NODE_BOTO_PATH, 'toil_worker') | python | def _injectWorkerFiles(self, node, botoExists):
node.waitForNode('toil_worker', keyName=self._keyName)
node.copySshKeys(self._keyName)
node.injectFile(self._credentialsPath, GoogleJobStore.nodeServiceAccountJson, 'toil_worker')
if self._sseKey:
node.injectFile(self._sseKey, self._sseKey, 'toil_worker')
if botoExists:
node.injectFile(self._botoPath, self.NODE_BOTO_PATH, 'toil_worker') | [
"def",
"_injectWorkerFiles",
"(",
"self",
",",
"node",
",",
"botoExists",
")",
":",
"node",
".",
"waitForNode",
"(",
"'toil_worker'",
",",
"keyName",
"=",
"self",
".",
"_keyName",
")",
"node",
".",
"copySshKeys",
"(",
"self",
".",
"_keyName",
")",
"node",
... | Set up the credentials on the worker. | [
"Set",
"up",
"the",
"credentials",
"on",
"the",
"worker",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/provisioners/gceProvisioner.py#L338-L348 |
224,936 | DataBiosphere/toil | src/toil/provisioners/gceProvisioner.py | GCEProvisioner._getDriver | def _getDriver(self):
""" Connect to GCE """
driverCls = get_driver(Provider.GCE)
return driverCls(self._clientEmail,
self._googleJson,
project=self._projectId,
datacenter=self._zone) | python | def _getDriver(self):
driverCls = get_driver(Provider.GCE)
return driverCls(self._clientEmail,
self._googleJson,
project=self._projectId,
datacenter=self._zone) | [
"def",
"_getDriver",
"(",
"self",
")",
":",
"driverCls",
"=",
"get_driver",
"(",
"Provider",
".",
"GCE",
")",
"return",
"driverCls",
"(",
"self",
".",
"_clientEmail",
",",
"self",
".",
"_googleJson",
",",
"project",
"=",
"self",
".",
"_projectId",
",",
"... | Connect to GCE | [
"Connect",
"to",
"GCE"
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/provisioners/gceProvisioner.py#L357-L363 |
224,937 | DataBiosphere/toil | src/toil/provisioners/gceProvisioner.py | GCEProvisioner.ex_create_multiple_nodes | def ex_create_multiple_nodes(
self, base_name, size, image, number, location=None,
ex_network='default', ex_subnetwork=None, ex_tags=None,
ex_metadata=None, ignore_errors=True, use_existing_disk=True,
poll_interval=2, external_ip='ephemeral',
ex_disk_type='pd-standard', ex_disk_auto_delete=True,
ex_service_accounts=None, timeout=DEFAULT_TASK_COMPLETION_TIMEOUT,
description=None, ex_can_ip_forward=None, ex_disks_gce_struct=None,
ex_nic_gce_struct=None, ex_on_host_maintenance=None,
ex_automatic_restart=None, ex_image_family=None,
ex_preemptible=None):
"""
Monkey patch to gce.py in libcloud to allow disk and images to be specified.
Also changed name to a uuid below.
The prefix 'wp' identifies preemptable nodes and 'wn' non-preemptable nodes.
"""
# if image and ex_disks_gce_struct:
# raise ValueError("Cannot specify both 'image' and "
# "'ex_disks_gce_struct'.")
driver = self._getDriver()
if image and ex_image_family:
raise ValueError("Cannot specify both 'image' and "
"'ex_image_family'")
location = location or driver.zone
if not hasattr(location, 'name'):
location = driver.ex_get_zone(location)
if not hasattr(size, 'name'):
size = driver.ex_get_size(size, location)
if not hasattr(ex_network, 'name'):
ex_network = driver.ex_get_network(ex_network)
if ex_subnetwork and not hasattr(ex_subnetwork, 'name'):
ex_subnetwork = \
driver.ex_get_subnetwork(ex_subnetwork,
region=driver._get_region_from_zone(
location))
if ex_image_family:
image = driver.ex_get_image_from_family(ex_image_family)
if image and not hasattr(image, 'name'):
image = driver.ex_get_image(image)
if not hasattr(ex_disk_type, 'name'):
ex_disk_type = driver.ex_get_disktype(ex_disk_type, zone=location)
node_attrs = {'size': size,
'image': image,
'location': location,
'network': ex_network,
'subnetwork': ex_subnetwork,
'tags': ex_tags,
'metadata': ex_metadata,
'ignore_errors': ignore_errors,
'use_existing_disk': use_existing_disk,
'external_ip': external_ip,
'ex_disk_type': ex_disk_type,
'ex_disk_auto_delete': ex_disk_auto_delete,
'ex_service_accounts': ex_service_accounts,
'description': description,
'ex_can_ip_forward': ex_can_ip_forward,
'ex_disks_gce_struct': ex_disks_gce_struct,
'ex_nic_gce_struct': ex_nic_gce_struct,
'ex_on_host_maintenance': ex_on_host_maintenance,
'ex_automatic_restart': ex_automatic_restart,
'ex_preemptible': ex_preemptible}
# List for holding the status information for disk/node creation.
status_list = []
for i in range(number):
name = 'wp' if ex_preemptible else 'wn'
name += str(uuid.uuid4()) #'%s-%03d' % (base_name, i)
status = {'name': name, 'node_response': None, 'node': None}
status_list.append(status)
start_time = time.time()
complete = False
while not complete:
if (time.time() - start_time >= timeout):
raise Exception("Timeout (%s sec) while waiting for multiple "
"instances")
complete = True
time.sleep(poll_interval)
for status in status_list:
# Create the node or check status if already in progress.
if not status['node']:
if not status['node_response']:
driver._multi_create_node(status, node_attrs)
else:
driver._multi_check_node(status, node_attrs)
# If any of the nodes have not been created (or failed) we are
# not done yet.
if not status['node']:
complete = False
# Return list of nodes
node_list = []
for status in status_list:
node_list.append(status['node'])
return node_list | python | def ex_create_multiple_nodes(
self, base_name, size, image, number, location=None,
ex_network='default', ex_subnetwork=None, ex_tags=None,
ex_metadata=None, ignore_errors=True, use_existing_disk=True,
poll_interval=2, external_ip='ephemeral',
ex_disk_type='pd-standard', ex_disk_auto_delete=True,
ex_service_accounts=None, timeout=DEFAULT_TASK_COMPLETION_TIMEOUT,
description=None, ex_can_ip_forward=None, ex_disks_gce_struct=None,
ex_nic_gce_struct=None, ex_on_host_maintenance=None,
ex_automatic_restart=None, ex_image_family=None,
ex_preemptible=None):
# if image and ex_disks_gce_struct:
# raise ValueError("Cannot specify both 'image' and "
# "'ex_disks_gce_struct'.")
driver = self._getDriver()
if image and ex_image_family:
raise ValueError("Cannot specify both 'image' and "
"'ex_image_family'")
location = location or driver.zone
if not hasattr(location, 'name'):
location = driver.ex_get_zone(location)
if not hasattr(size, 'name'):
size = driver.ex_get_size(size, location)
if not hasattr(ex_network, 'name'):
ex_network = driver.ex_get_network(ex_network)
if ex_subnetwork and not hasattr(ex_subnetwork, 'name'):
ex_subnetwork = \
driver.ex_get_subnetwork(ex_subnetwork,
region=driver._get_region_from_zone(
location))
if ex_image_family:
image = driver.ex_get_image_from_family(ex_image_family)
if image and not hasattr(image, 'name'):
image = driver.ex_get_image(image)
if not hasattr(ex_disk_type, 'name'):
ex_disk_type = driver.ex_get_disktype(ex_disk_type, zone=location)
node_attrs = {'size': size,
'image': image,
'location': location,
'network': ex_network,
'subnetwork': ex_subnetwork,
'tags': ex_tags,
'metadata': ex_metadata,
'ignore_errors': ignore_errors,
'use_existing_disk': use_existing_disk,
'external_ip': external_ip,
'ex_disk_type': ex_disk_type,
'ex_disk_auto_delete': ex_disk_auto_delete,
'ex_service_accounts': ex_service_accounts,
'description': description,
'ex_can_ip_forward': ex_can_ip_forward,
'ex_disks_gce_struct': ex_disks_gce_struct,
'ex_nic_gce_struct': ex_nic_gce_struct,
'ex_on_host_maintenance': ex_on_host_maintenance,
'ex_automatic_restart': ex_automatic_restart,
'ex_preemptible': ex_preemptible}
# List for holding the status information for disk/node creation.
status_list = []
for i in range(number):
name = 'wp' if ex_preemptible else 'wn'
name += str(uuid.uuid4()) #'%s-%03d' % (base_name, i)
status = {'name': name, 'node_response': None, 'node': None}
status_list.append(status)
start_time = time.time()
complete = False
while not complete:
if (time.time() - start_time >= timeout):
raise Exception("Timeout (%s sec) while waiting for multiple "
"instances")
complete = True
time.sleep(poll_interval)
for status in status_list:
# Create the node or check status if already in progress.
if not status['node']:
if not status['node_response']:
driver._multi_create_node(status, node_attrs)
else:
driver._multi_check_node(status, node_attrs)
# If any of the nodes have not been created (or failed) we are
# not done yet.
if not status['node']:
complete = False
# Return list of nodes
node_list = []
for status in status_list:
node_list.append(status['node'])
    return node_list | Monkey patch to gce.py in libcloud to allow disk and images to be specified.
Also changed name to a uuid below.
The prefix 'wp' identifies preemptable nodes and 'wn' non-preemptable nodes. | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/provisioners/gceProvisioner.py#L382-L479 |
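The create-poll-check loop above follows a generic pattern: start each task once, then poll every unfinished task until all complete or a deadline passes. A minimal standalone sketch (the task dicts and `check` callback here are illustrative stand-ins, not libcloud's API):

```python
import time

def wait_for_all(tasks, check, timeout=60.0, poll_interval=0.01):
    # Poll every unfinished task until all are done or the deadline passes.
    start = time.time()
    while not all(t['done'] for t in tasks):
        if time.time() - start >= timeout:
            raise Exception("Timeout (%s sec) while waiting for multiple instances" % timeout)
        time.sleep(poll_interval)
        for t in tasks:
            if not t['done']:
                check(t)
    return tasks

# Toy stand-in for _multi_check_node: a task completes after a set number of polls.
tasks = [{'done': False, 'polls_left': n} for n in (1, 2, 3)]

def check(t):
    t['polls_left'] -= 1
    t['done'] = t['polls_left'] <= 0

wait_for_all(tasks, check)
print(all(t['done'] for t in tasks))  # -> True
```

The per-iteration `complete` flag in the original plays the role of the `all(...)` test here.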
224,938 | DataBiosphere/toil | docs/conf.py | fetch_parent_dir | def fetch_parent_dir(filepath, n=1):
'''Returns a parent directory, n places above the input filepath.
Equivalent to something like: '/home/user/dir'.split('/')[-2] if n=2.
'''
filepath = os.path.realpath(filepath)
for i in range(n):
filepath = os.path.dirname(filepath)
    return os.path.basename(filepath) | python | Returns a parent directory, n places above the input filepath.
Equivalent to something like: '/home/user/dir'.split('/')[-2] if n=2. | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/docs/conf.py#L29-L37 |
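As a quick sanity check, the function can be exercised on a synthetic POSIX-style path (the path need not exist; `os.path.realpath` only normalizes it):

```python
import os

def fetch_parent_dir(filepath, n=1):
    # Walk n directory levels up from filepath, then return that directory's basename.
    filepath = os.path.realpath(filepath)
    for _ in range(n):
        filepath = os.path.dirname(filepath)
    return os.path.basename(filepath)

print(fetch_parent_dir('/home/user/dir/conf.py', n=1))  # -> 'dir'
print(fetch_parent_dir('/home/user/dir/conf.py', n=2))  # -> 'user'
```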
224,939 | DataBiosphere/toil | src/toil/utils/toilStatus.py | ToilStatus.print_dot_chart | def print_dot_chart(self):
"""Print a dot output graph representing the workflow."""
print("digraph toil_graph {")
print("# This graph was created from job-store: %s" % self.jobStoreName)
        # Make job IDs to node names map (node names must be valid dot identifiers)
        jobsToNodeNames = {job.jobStoreID: 'n%d' % i
                           for i, job in enumerate(self.jobsToReport)}
# Print the nodes
for job in set(self.jobsToReport):
print('%s [label="%s %s"];' % (
jobsToNodeNames[job.jobStoreID], job.jobName, job.jobStoreID))
# Print the edges
for job in set(self.jobsToReport):
for level, jobList in enumerate(job.stack):
for childJob in jobList:
# Check, b/c successor may be finished / not in the set of jobs
if childJob.jobStoreID in jobsToNodeNames:
print('%s -> %s [label="%i"];' % (
jobsToNodeNames[job.jobStoreID],
jobsToNodeNames[childJob.jobStoreID], level))
        print("}") | python | Print a dot output graph representing the workflow. | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/utils/toilStatus.py#L53-L75 |
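The node/edge emission pattern can be exercised standalone with a couple of hypothetical stand-in job records (a namedtuple here, not Toil's real job type):

```python
from collections import namedtuple

# Minimal stand-in for a job graph node: an ID, a display name, and successor levels.
Job = namedtuple('Job', 'jobStoreID jobName stack')

def dot_chart(jobs):
    # Assign dot-safe node names, then emit one node per job and one edge per successor.
    names = {job.jobStoreID: 'n%d' % i for i, job in enumerate(jobs)}
    lines = ['digraph toil_graph {']
    for job in jobs:
        lines.append('%s [label="%s %s"];' % (names[job.jobStoreID], job.jobName, job.jobStoreID))
    for job in jobs:
        for level, jobList in enumerate(job.stack):
            for child in jobList:
                if child.jobStoreID in names:
                    lines.append('%s -> %s [label="%i"];' % (
                        names[job.jobStoreID], names[child.jobStoreID], level))
    lines.append('}')
    return '\n'.join(lines)

child = Job('id-b', 'align', stack=[])
root = Job('id-a', 'download', stack=[[child]])
print(dot_chart([root, child]))
```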
224,940 | DataBiosphere/toil | src/toil/utils/toilStatus.py | ToilStatus.printJobLog | def printJobLog(self):
"""Takes a list of jobs, finds their log files, and prints them to the terminal."""
for job in self.jobsToReport:
if job.logJobStoreFileID is not None:
msg = "LOG_FILE_OF_JOB:%s LOG: =======>\n" % job
with job.getLogFileHandle(self.jobStore) as fH:
msg += fH.read()
msg += "<========="
else:
msg = "LOG_FILE_OF_JOB:%s LOG: Job has no log file" % job
            print(msg) | python | Takes a list of jobs, finds their log files, and prints them to the terminal. | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/utils/toilStatus.py#L77-L87 |
224,941 | DataBiosphere/toil | src/toil/utils/toilStatus.py | ToilStatus.printJobChildren | def printJobChildren(self):
"""Takes a list of jobs, and prints their successors."""
for job in self.jobsToReport:
children = "CHILDREN_OF_JOB:%s " % job
for level, jobList in enumerate(job.stack):
for childJob in jobList:
children += "\t(CHILD_JOB:%s,PRECEDENCE:%i)" % (childJob, level)
            print(children) | python | Takes a list of jobs, and prints their successors. | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/utils/toilStatus.py#L89-L96 |
224,942 | DataBiosphere/toil | src/toil/utils/toilStatus.py | ToilStatus.printAggregateJobStats | def printAggregateJobStats(self, properties, childNumber):
"""Prints a job's ID, log file, remaining tries, and other properties."""
for job in self.jobsToReport:
lf = lambda x: "%s:%s" % (x, str(x in properties))
print("\t".join(("JOB:%s" % job,
"LOG_FILE:%s" % job.logJobStoreFileID,
"TRYS_REMAINING:%i" % job.remainingRetryCount,
"CHILD_NUMBER:%s" % childNumber,
lf("READY_TO_RUN"), lf("IS_ZOMBIE"),
                             lf("HAS_SERVICES"), lf("IS_SERVICE")))) | python | Prints a job's ID, log file, remaining tries, and other properties. | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/utils/toilStatus.py#L98-L107 |
224,943 | DataBiosphere/toil | src/toil/utils/toilStatus.py | ToilStatus.report_on_jobs | def report_on_jobs(self):
"""
Gathers information about jobs such as its child jobs and status.
:returns jobStats: Pairings of a useful category and a list of jobs which fall into it.
:rtype dict:
"""
hasChildren = []
readyToRun = []
zombies = []
hasLogFile = []
hasServices = []
services = []
properties = set()
for job in self.jobsToReport:
if job.logJobStoreFileID is not None:
hasLogFile.append(job)
            childNumber = sum(len(jobList) for jobList in job.stack)
if childNumber > 0: # Total number of successors > 0
hasChildren.append(job)
properties.add("HAS_CHILDREN")
elif job.command is not None:
# Job has no children and a command to run. Indicates job could be run.
readyToRun.append(job)
properties.add("READY_TO_RUN")
else:
# Job has no successors and no command, so is a zombie job.
zombies.append(job)
properties.add("IS_ZOMBIE")
if job.services:
hasServices.append(job)
properties.add("HAS_SERVICES")
if job.startJobStoreID or job.terminateJobStoreID or job.errorJobStoreID:
# These attributes are only set in service jobs
services.append(job)
properties.add("IS_SERVICE")
jobStats = {'hasChildren': hasChildren,
'readyToRun': readyToRun,
'zombies': zombies,
'hasServices': hasServices,
'services': services,
'hasLogFile': hasLogFile,
'properties': properties,
'childNumber': childNumber}
        return jobStats | python | Gathers information about jobs such as its child jobs and status.
:returns jobStats: Pairings of a useful category and a list of jobs which fall into it.
:rtype dict: | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/utils/toilStatus.py#L109-L156 |
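The classification rules above (successors present -> HAS_CHILDREN, a pending command -> READY_TO_RUN, neither -> IS_ZOMBIE) can be sketched standalone; the `Job` class here is a hypothetical stand-in, not Toil's real job type:

```python
class Job:
    # Hypothetical stand-in for a Toil job graph node (not the real class).
    def __init__(self, name, stack=None, command=None):
        self.name = name
        self.stack = stack or []   # lists of successor jobs, grouped by level
        self.command = command

def classify(job):
    # Mirrors the decision chain in report_on_jobs().
    if sum(len(level) for level in job.stack) > 0:
        return 'HAS_CHILDREN'
    elif job.command is not None:
        return 'READY_TO_RUN'
    else:
        return 'IS_ZOMBIE'

child = Job('child', command='_toil_worker ...')
print(classify(Job('parent', stack=[[child]])))  # -> HAS_CHILDREN
print(classify(child))                           # -> READY_TO_RUN
print(classify(Job('finished')))                 # -> IS_ZOMBIE
```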
224,944 | DataBiosphere/toil | src/toil/utils/toilStatus.py | ToilStatus.getPIDStatus | def getPIDStatus(jobStoreName):
"""
Determine the status of a process with a particular pid.
Checks to see if a process exists or not.
:return: A string indicating the status of the PID of the workflow as stored in the jobstore.
:rtype: str
"""
try:
jobstore = Toil.resumeJobStore(jobStoreName)
except NoSuchJobStoreException:
return 'QUEUED'
except NoSuchFileException:
return 'QUEUED'
try:
with jobstore.readSharedFileStream('pid.log') as pidFile:
pid = int(pidFile.read())
try:
os.kill(pid, 0) # Does not kill process when 0 is passed.
except OSError: # Process not found, must be done.
return 'COMPLETED'
else:
return 'RUNNING'
except NoSuchFileException:
pass
    return 'QUEUED' | python | Determine the status of a process with a particular pid.
Checks to see if a process exists or not.
:return: A string indicating the status of the PID of the workflow as stored in the jobstore.
:rtype: str | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/utils/toilStatus.py#L159-L186 |
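The liveness test at the heart of getPIDStatus, sending signal 0, performs only existence and permission checks without delivering anything to the target. Isolated into a small POSIX-only helper:

```python
import os

def pid_is_running(pid):
    # Return True if a process with this PID exists (POSIX semantics of kill(pid, 0)).
    try:
        os.kill(pid, 0)  # signal 0: existence/permission check, nothing is delivered
    except ProcessLookupError:   # no such process
        return False
    except PermissionError:      # process exists but belongs to another user
        return True
    return True

print(pid_is_running(os.getpid()))  # -> True: the current process certainly exists
```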
224,945 | DataBiosphere/toil | src/toil/utils/toilStatus.py | ToilStatus.getStatus | def getStatus(jobStoreName):
"""
Determine the status of a workflow.
If the jobstore does not exist, this returns 'QUEUED', assuming it has not been created yet.
Checks for the existence of files created in the toil.Leader.run(). In toil.Leader.run(), if a workflow completes
with failed jobs, 'failed.log' is created, otherwise 'succeeded.log' is written. If neither of these exist,
the leader is still running jobs.
:return: A string indicating the status of the workflow. ['COMPLETED', 'RUNNING', 'ERROR', 'QUEUED']
:rtype: str
"""
try:
jobstore = Toil.resumeJobStore(jobStoreName)
except NoSuchJobStoreException:
return 'QUEUED'
except NoSuchFileException:
return 'QUEUED'
try:
with jobstore.readSharedFileStream('succeeded.log') as successful:
pass
return 'COMPLETED'
except NoSuchFileException:
try:
with jobstore.readSharedFileStream('failed.log') as failed:
pass
return 'ERROR'
except NoSuchFileException:
pass
    return 'RUNNING' | python | Determine the status of a workflow.
If the jobstore does not exist, this returns 'QUEUED', assuming it has not been created yet.
Checks for the existence of files created in the toil.Leader.run(). In toil.Leader.run(), if a workflow completes
with failed jobs, 'failed.log' is created, otherwise 'succeeded.log' is written. If neither of these exist,
the leader is still running jobs.
:return: A string indicating the status of the workflow. ['COMPLETED', 'RUNNING', 'ERROR', 'QUEUED']
:rtype: str | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/utils/toilStatus.py#L189-L220 |
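The marker-file convention getStatus relies on (presence of 'succeeded.log' vs 'failed.log') can be sketched against a plain directory; the directory-based store here is a stand-in for Toil's real job store backends:

```python
import os
import tempfile

def status_from_markers(store_dir):
    # Mirrors getStatus(): missing store -> QUEUED, success marker -> COMPLETED,
    # failure marker -> ERROR, otherwise the leader is presumed still running.
    if not os.path.isdir(store_dir):
        return 'QUEUED'
    if os.path.exists(os.path.join(store_dir, 'succeeded.log')):
        return 'COMPLETED'
    if os.path.exists(os.path.join(store_dir, 'failed.log')):
        return 'ERROR'
    return 'RUNNING'

store = tempfile.mkdtemp()
print(status_from_markers(os.path.join(store, 'missing')))  # -> QUEUED
print(status_from_markers(store))                           # -> RUNNING
open(os.path.join(store, 'failed.log'), 'w').close()
print(status_from_markers(store))                           # -> ERROR
```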
224,946 | DataBiosphere/toil | src/toil/utils/toilStatus.py | ToilStatus.fetchRootJob | def fetchRootJob(self):
"""
Fetches the root job from the jobStore that provides context for all other jobs.
Exactly the same as the jobStore.loadRootJob() function, but with a different
exit message if the root job is not found (indicating the workflow ran successfully
to completion and certain stats cannot be gathered from it meaningfully such
as which jobs are left to run).
:raises JobException: if the root job does not exist.
"""
try:
return self.jobStore.loadRootJob()
except JobException:
print('Root job is absent. The workflow may have completed successfully.', file=sys.stderr)
            raise | python | Fetches the root job from the jobStore that provides context for all other jobs.
Exactly the same as the jobStore.loadRootJob() function, but with a different
exit message if the root job is not found (indicating the workflow ran successfully
to completion and certain stats cannot be gathered from it meaningfully such
as which jobs are left to run).
:raises JobException: if the root job does not exist. | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/utils/toilStatus.py#L222-L237 |
224,947 | DataBiosphere/toil | src/toil/utils/toilStatus.py | ToilStatus.fetchUserJobs | def fetchUserJobs(self, jobs):
"""
Takes a user input array of jobs, verifies that they are in the jobStore
and returns the array of jobsToReport.
:param list jobs: A list of jobs to be verified.
:returns jobsToReport: A list of jobs which are verified to be in the jobStore.
"""
jobsToReport = []
for jobID in jobs:
try:
jobsToReport.append(self.jobStore.load(jobID))
except JobException:
print('The job %s could not be found.' % jobID, file=sys.stderr)
raise
        return jobsToReport | python | Takes a user input array of jobs, verifies that they are in the jobStore
and returns the array of jobsToReport.
:param list jobs: A list of jobs to be verified.
:returns jobsToReport: A list of jobs which are verified to be in the jobStore. | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/utils/toilStatus.py#L239-L254 |
224,948 | DataBiosphere/toil | src/toil/utils/toilStatus.py | ToilStatus.traverseJobGraph | def traverseJobGraph(self, rootJob, jobsToReport=None, foundJobStoreIDs=None):
"""
Find all current jobs in the jobStore and return them as an Array.
:param jobNode rootJob: The root job of the workflow.
:param list jobsToReport: A list of jobNodes to be added to and returned.
:param set foundJobStoreIDs: A set of jobStoreIDs used to keep track of jobStoreIDs encountered in traversal.
:returns jobsToReport: The list of jobs currently in the job graph.
"""
if jobsToReport is None:
jobsToReport = []
if foundJobStoreIDs is None:
foundJobStoreIDs = set()
if rootJob.jobStoreID in foundJobStoreIDs:
return jobsToReport
foundJobStoreIDs.add(rootJob.jobStoreID)
jobsToReport.append(rootJob)
# Traverse jobs in stack
for jobs in rootJob.stack:
for successorJobStoreID in [x.jobStoreID for x in jobs]:
if successorJobStoreID not in foundJobStoreIDs and self.jobStore.exists(successorJobStoreID):
self.traverseJobGraph(self.jobStore.load(successorJobStoreID), jobsToReport, foundJobStoreIDs)
# Traverse service jobs
for jobs in rootJob.services:
for serviceJobStoreID in [x.jobStoreID for x in jobs]:
if self.jobStore.exists(serviceJobStoreID):
if serviceJobStoreID in foundJobStoreIDs:
raise RuntimeError('Service job was unexpectedly found while traversing ')
foundJobStoreIDs.add(serviceJobStoreID)
jobsToReport.append(self.jobStore.load(serviceJobStoreID))
        return jobsToReport | python | Find all current jobs in the jobStore and return them as an Array.
:param jobNode rootJob: The root job of the workflow.
:param list jobsToReport: A list of jobNodes to be added to and returned.
:param set foundJobStoreIDs: A set of jobStoreIDs used to keep track of jobStoreIDs encountered in traversal.
:returns jobsToReport: The list of jobs currently in the job graph. | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/utils/toilStatus.py#L256-L291 |
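The traversal pattern above is a depth-first walk over successor lists with a visited set keyed on job-store IDs, so a successor shared by two parents is reported only once. A sketch against a hypothetical in-memory store (plain dicts stand in for the real job store):

```python
def traverse(store, root_id, found=None, out=None):
    # store: {job_id: {"stack": [[child_id, ...], ...]}} -- a toy job graph.
    if found is None:
        found, out = set(), []
    if root_id in found:
        return out
    found.add(root_id)
    out.append(root_id)
    for level in store[root_id]["stack"]:
        for child_id in level:
            # Skip already-visited IDs and IDs no longer present in the store.
            if child_id not in found and child_id in store:
                traverse(store, child_id, found, out)
    return out

store = {
    "root":   {"stack": [["a", "b"]]},
    "a":      {"stack": [["shared"]]},
    "b":      {"stack": [["shared"]]},
    "shared": {"stack": []},
}
print(traverse(store, "root"))  # -> ['root', 'a', 'shared', 'b']
```

Note that "shared" appears once even though both "a" and "b" list it as a successor.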
224,949 | DataBiosphere/toil | src/toil/__init__.py | lookupEnvVar | def lookupEnvVar(name, envName, defaultValue):
"""
Use this for looking up environment variables that control Toil and are important enough to
log the result of that lookup.
:param str name: the human readable name of the variable
:param str envName: the name of the environment variable to lookup
:param str defaultValue: the fall-back value
:return: the value of the environment variable or the default value if the variable is not set
:rtype: str
"""
try:
value = os.environ[envName]
except KeyError:
log.info('Using default %s of %s as %s is not set.', name, defaultValue, envName)
return defaultValue
else:
log.info('Overriding %s of %s with %s from %s.', name, defaultValue, value, envName)
        return value | python | Use this for looking up environment variables that control Toil and are important enough to
log the result of that lookup.
:param str name: the human readable name of the variable
:param str envName: the name of the environment variable to lookup
:param str defaultValue: the fall-back value
:return: the value of the environment variable or the default value if the variable is not set
:rtype: str | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/__init__.py#L236-L254 |
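A self-contained variant of the lookup (a module-level logger stands in for Toil's `log`; the `EXAMPLE_REGISTRY` variable name is illustrative only):

```python
import logging
import os

log = logging.getLogger(__name__)

def lookupEnvVar(name, envName, defaultValue):
    # Same logic as above: prefer the environment, fall back with a log line.
    try:
        value = os.environ[envName]
    except KeyError:
        log.info('Using default %s of %s as %s is not set.', name, defaultValue, envName)
        return defaultValue
    else:
        log.info('Overriding %s of %s with %s from %s.', name, defaultValue, value, envName)
        return value

os.environ.pop('EXAMPLE_REGISTRY', None)
print(lookupEnvVar('docker registry', 'EXAMPLE_REGISTRY', 'quay.io/ucsc_cgl'))  # -> default
os.environ['EXAMPLE_REGISTRY'] = 'example.registry.io'
print(lookupEnvVar('docker registry', 'EXAMPLE_REGISTRY', 'quay.io/ucsc_cgl'))  # -> override
```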
224,950 | DataBiosphere/toil | src/toil/__init__.py | checkDockerImageExists | def checkDockerImageExists(appliance):
"""
Attempts to check a url registryName for the existence of a docker image with a given tag.
:param str appliance: The url of a docker image's registry (with a tag) of the form:
'quay.io/<repo_path>:<tag>' or '<repo_path>:<tag>'.
Examples: 'quay.io/ucsc_cgl/toil:latest', 'ubuntu:latest', or
'broadinstitute/genomes-in-the-cloud:2.0.0'.
:return: Raises an exception if the docker image cannot be found or is invalid. Otherwise, it
will return the appliance string.
:rtype: str
"""
if currentCommit in appliance:
return appliance
registryName, imageName, tag = parseDockerAppliance(appliance)
if registryName == 'docker.io':
return requestCheckDockerIo(origAppliance=appliance, imageName=imageName, tag=tag)
else:
        return requestCheckRegularDocker(origAppliance=appliance, registryName=registryName, imageName=imageName, tag=tag) | python | Attempts to check a url registryName for the existence of a docker image with a given tag.
:param str appliance: The url of a docker image's registry (with a tag) of the form:
'quay.io/<repo_path>:<tag>' or '<repo_path>:<tag>'.
Examples: 'quay.io/ucsc_cgl/toil:latest', 'ubuntu:latest', or
'broadinstitute/genomes-in-the-cloud:2.0.0'.
:return: Raises an exception if the docker image cannot be found or is invalid. Otherwise, it
will return the appliance string.
:rtype: str | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/__init__.py#L257-L276 |
224,951 | DataBiosphere/toil | src/toil/__init__.py | parseDockerAppliance | def parseDockerAppliance(appliance):
"""
Takes string describing a docker image and returns the parsed
registry, image reference, and tag for that image.
Example: "quay.io/ucsc_cgl/toil:latest"
Should return: "quay.io", "ucsc_cgl/toil", "latest"
If a registry is not defined, the default is: "docker.io"
If a tag is not defined, the default is: "latest"
:param appliance: The full url of the docker image originally
specified by the user (or the default).
e.g. "quay.io/ucsc_cgl/toil:latest"
:return: registryName, imageName, tag
"""
appliance = appliance.lower()
# get the tag
if ':' in appliance:
tag = appliance.split(':')[-1]
appliance = appliance[:-(len(':' + tag))] # remove only the tag
else:
# default to 'latest' if no tag is specified
tag = 'latest'
# get the registry and image
registryName = 'docker.io' # default if not specified
imageName = appliance # will be true if not specified
if '/' in appliance and '.' in appliance.split('/')[0]:
registryName = appliance.split('/')[0]
imageName = appliance[len(registryName):]
registryName = registryName.strip('/')
imageName = imageName.strip('/')
    return registryName, imageName, tag | python | Takes string describing a docker image and returns the parsed
registry, image reference, and tag for that image.
Example: "quay.io/ucsc_cgl/toil:latest"
Should return: "quay.io", "ucsc_cgl/toil", "latest"
If a registry is not defined, the default is: "docker.io"
If a tag is not defined, the default is: "latest"
:param appliance: The full url of the docker image originally
specified by the user (or the default).
e.g. "quay.io/ucsc_cgl/toil:latest"
:return: registryName, imageName, tag | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/__init__.py#L279-L314 |
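Tracing the parser on the docstring's own examples (the function body is condensed from the row above so the example is self-contained):

```python
def parseDockerAppliance(appliance):
    # Split '<registry>/<image>:<tag>' into its three parts, with defaults
    # of 'docker.io' for the registry and 'latest' for the tag.
    appliance = appliance.lower()
    if ':' in appliance:
        tag = appliance.split(':')[-1]
        appliance = appliance[:-(len(':' + tag))]  # remove only the tag
    else:
        tag = 'latest'
    registryName = 'docker.io'
    imageName = appliance
    # A registry is present only if the first path component contains a dot.
    if '/' in appliance and '.' in appliance.split('/')[0]:
        registryName = appliance.split('/')[0]
        imageName = appliance[len(registryName):]
    return registryName.strip('/'), imageName.strip('/'), tag

print(parseDockerAppliance('quay.io/ucsc_cgl/toil:latest'))
# -> ('quay.io', 'ucsc_cgl/toil', 'latest')
print(parseDockerAppliance('ubuntu'))
# -> ('docker.io', 'ubuntu', 'latest')
print(parseDockerAppliance('broadinstitute/genomes-in-the-cloud:2.0.0'))
# -> ('docker.io', 'broadinstitute/genomes-in-the-cloud', '2.0.0')
```

The dot-in-first-component heuristic is what distinguishes a registry host like `quay.io` from a namespace like `broadinstitute`.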
224,952 | DataBiosphere/toil | src/toil/__init__.py | requestCheckRegularDocker | def requestCheckRegularDocker(origAppliance, registryName, imageName, tag):
"""
Checks to see if an image exists using the requests library.
URL is based on the docker v2 schema described here:
https://docs.docker.com/registry/spec/manifest-v2-2/
This has the following format:
https://{websitehostname}.io/v2/{repo}/manifests/{tag}
Does not work with the official (docker.io) site, because they require an OAuth token, so a
separate check is done for docker.io images.
:param str origAppliance: The full url of the docker image originally
specified by the user (or the default).
e.g. "quay.io/ucsc_cgl/toil:latest"
:param str registryName: The url of a docker image's registry. e.g. "quay.io"
:param str imageName: The image, including path and excluding the tag. e.g. "ucsc_cgl/toil"
:param str tag: The tag used at that docker image's registry. e.g. "latest"
:return: Return True if match found. Raise otherwise.
"""
ioURL = 'https://{webhost}/v2/{pathName}/manifests/{tag}' \
''.format(webhost=registryName, pathName=imageName, tag=tag)
response = requests.head(ioURL)
if not response.ok:
raise ApplianceImageNotFound(origAppliance, ioURL, response.status_code)
else:
        return origAppliance | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/__init__.py#L354-L381 |
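The check above hinges on the v2 manifest URL template given in its docstring. This offline sketch just builds that URL (no network call here; the record's own code then issues `requests.head` against it):

```python
def manifest_url(registry_name, image_name, tag):
    # v2 manifest endpoint, as formatted in requestCheckRegularDocker above
    return 'https://{webhost}/v2/{pathName}/manifests/{tag}'.format(
        webhost=registry_name, pathName=image_name, tag=tag)
```

For the docstring's example image this produces `https://quay.io/v2/ucsc_cgl/toil/manifests/latest`.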
224,953 | DataBiosphere/toil | src/toil/__init__.py | requestCheckDockerIo | def requestCheckDockerIo(origAppliance, imageName, tag):
"""
Checks docker.io to see if an image exists using the requests library.
URL is based on the docker v2 schema. Requires that an access token be fetched first.
:param str origAppliance: The full url of the docker image originally
specified by the user (or the default). e.g. "ubuntu:latest"
:param str imageName: The image, including path and excluding the tag. e.g. "ubuntu"
:param str tag: The tag used at that docker image's registry. e.g. "latest"
:return: Return True if match found. Raise otherwise.
"""
# only official images like 'busybox' or 'ubuntu'
if '/' not in imageName:
imageName = 'library/' + imageName
token_url = 'https://auth.docker.io/token?service=registry.docker.io&scope=repository:{repo}:pull'.format(repo=imageName)
requests_url = 'https://registry-1.docker.io/v2/{repo}/manifests/{tag}'.format(repo=imageName, tag=tag)
token = requests.get(token_url)
jsonToken = token.json()
bearer = jsonToken["token"]
response = requests.head(requests_url, headers={'Authorization': 'Bearer {}'.format(bearer)})
if not response.ok:
raise ApplianceImageNotFound(origAppliance, requests_url, response.status_code)
else:
        return origAppliance | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/__init__.py#L384-L410 |
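The one docker.io-specific wrinkle in the function above is the `library/` prefix for official single-name images. Isolated as a tiny helper:

```python
def official_repo_name(image_name):
    """docker.io stores official images like 'ubuntu' or 'busybox' under the
    implicit 'library/' namespace; user images already contain a '/'."""
    return image_name if '/' in image_name else 'library/' + image_name
```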
224,954 | DataBiosphere/toil | src/toil/jobGraph.py | JobGraph.restartCheckpoint | def restartCheckpoint(self, jobStore):
"""Restart a checkpoint after the total failure of jobs in its subtree.
Writes the changes to the jobStore immediately. All the
checkpoint's successors will be deleted, but its retry count
will *not* be decreased.
Returns a list with the IDs of any successors deleted.
"""
assert self.checkpoint is not None
successorsDeleted = []
if self.stack or self.services or self.command != None:
if self.command != None:
assert self.command == self.checkpoint
logger.debug("Checkpoint job already has command set to run")
else:
self.command = self.checkpoint
jobStore.update(self) # Update immediately to ensure that checkpoint
# is made before deleting any remaining successors
if self.stack or self.services:
# If the subtree of successors is not complete restart everything
logger.debug("Checkpoint job has unfinished successor jobs, deleting the jobs on the stack: %s, services: %s " %
(self.stack, self.services))
# Delete everything on the stack, as these represent successors to clean
# up as we restart the queue
def recursiveDelete(jobGraph2):
# Recursive walk the stack to delete all remaining jobs
for jobs in jobGraph2.stack + jobGraph2.services:
for jobNode in jobs:
if jobStore.exists(jobNode.jobStoreID):
recursiveDelete(jobStore.load(jobNode.jobStoreID))
else:
logger.debug("Job %s has already been deleted", jobNode)
if jobGraph2 != self:
logger.debug("Checkpoint is deleting old successor job: %s", jobGraph2.jobStoreID)
jobStore.delete(jobGraph2.jobStoreID)
successorsDeleted.append(jobGraph2.jobStoreID)
recursiveDelete(self)
self.stack = [ [], [] ] # Initialise the job to mimic the state of a job
# that has been previously serialised but which as yet has no successors
self.services = [] # Empty the services
# Update the jobStore to avoid doing this twice on failure and make this clean.
jobStore.update(self)
        return successorsDeleted | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/jobGraph.py#L123-L171 |
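The inner `recursiveDelete` above walks `stack + services`, each a list of lists of job nodes. A toy version of that traversal over plain dicts (the dict shape is an illustration, not Toil's JobGraph class), collecting successor IDs instead of deleting them:

```python
def collect_successors(job):
    """Recursively gather the IDs of every successor reachable through a job's
    stack/services lists, each structured as a list of phases (lists of jobs)."""
    ids = []
    for phase in job['stack'] + job['services']:
        for child in phase:
            ids.append(child['id'])
            ids.extend(collect_successors(child))
    return ids
```

On a chain root -> b -> c this returns `['b', 'c']`, mirroring how the checkpoint restart visits every remaining successor exactly once.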
224,955 | DataBiosphere/toil | src/toil/lib/context.py | Context.absolute_name | def absolute_name(self, name):
"""
Returns the absolute form of the specified resource name. If the specified name is
already absolute, that name will be returned unchanged, otherwise the given name will be
prefixed with the namespace this object was configured with.
Relative names starting with underscores are disallowed.
>>> ctx = Context( 'us-west-1b', namespace='/' )
>>> ctx.absolute_name('bar')
'/bar'
>>> ctx.absolute_name('/bar')
'/bar'
>>> ctx.absolute_name('')
'/'
>>> ctx.absolute_name('/')
'/'
>>> ctx.absolute_name('_bar') # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
InvalidPathError: Invalid path '/_bar'
>>> ctx.absolute_name('/_bar') # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
InvalidPathError: Invalid path '/_bar'
>>> ctx = Context( 'us-west-1b', namespace='/foo/' )
>>> ctx.absolute_name('bar')
'/foo/bar'
>>> ctx.absolute_name('bar/')
'/foo/bar/'
>>> ctx.absolute_name('bar1/bar2')
'/foo/bar1/bar2'
>>> ctx.absolute_name('/bar')
'/bar'
>>> ctx.absolute_name('')
'/foo/'
>>> ctx.absolute_name('/')
'/'
>>> ctx.absolute_name('_bar') # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
InvalidPathError: Invalid path '/foo/_bar'
>>> ctx.absolute_name('/_bar') # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
InvalidPathError: Invalid path '/_bar'
"""
if self.is_absolute_name(name):
result = name
else:
result = self.namespace + name
if not self.name_re.match(result):
raise self.InvalidPathError(result)
        return result | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/lib/context.py#L251-L305 |
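The doctests above pin down absolute_name's behavior precisely, though the record does not show `Context.name_re` itself. A standalone sketch with an assumed validation regex (components of lowercase letters, digits, and underscores, never starting with '_'):

```python
import re

# assumed shape of Context.name_re; the real pattern is not shown in this record
NAME_RE = re.compile(r'^/((?!_)[a-z0-9_]+/?)*$')

def absolute_name(namespace, name):
    """Prefix relative names with the namespace and reject any path whose
    component starts with '_', matching the doctest behavior above."""
    result = name if name.startswith('/') else namespace + name
    if not NAME_RE.match(result):
        raise ValueError("Invalid path %r" % result)
    return result
```

This reproduces the documented cases: `absolute_name('/foo/', 'bar')` gives `'/foo/bar'`, while `'_bar'` is rejected.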
224,956 | DataBiosphere/toil | src/toil/lib/context.py | Context.to_aws_name | def to_aws_name(self, name):
"""
        Returns a transliteration of the name that is safe to use for resource names on AWS. If the
        given name is relative, it is converted to its absolute form before the transliteration.
        The transliteration uses two consecutive '_' to encode a single '_' and a single '_' to
separate the name components. AWS-safe names are by definition absolute such that the
leading separator can be removed. This leads to fairly readable AWS-safe names,
especially for names in the root namespace, where the transliteration is the identity
function if the input does not contain any '_'.
This scheme only works if name components don't start with '_'. Without that condition,
'/_' would become '___' the inverse of which is '_/'.
>>> ctx = Context( 'us-west-1b', namespace='/' )
>>> ctx.to_aws_name( 'foo' )
'foo'
>>> ctx.from_aws_name( 'foo' )
'foo'
Illegal paths that would introduce ambiguity need to raise an exception
>>> ctx.to_aws_name('/_') # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
InvalidPathError: Invalid path '/_'
>>> ctx.to_aws_name('/_/') # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
InvalidPathError: Invalid path '/_/'
>>> ctx.from_aws_name('___') # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
InvalidPathError: Invalid path '/_/'
>>> ctx.to_aws_name( 'foo_bar')
'foo__bar'
>>> ctx.from_aws_name( 'foo__bar')
'foo_bar'
>>> ctx.to_aws_name( '/sub_ns/foo_bar')
'sub__ns_foo__bar'
>>> ctx.to_aws_name( 'sub_ns/foo_bar')
'sub__ns_foo__bar'
>>> ctx.from_aws_name( 'sub__ns_foo__bar' )
'sub_ns/foo_bar'
>>> ctx.to_aws_name( 'g_/' )
'g___'
>>> ctx.from_aws_name( 'g___' )
'g_/'
>>> ctx = Context( 'us-west-1b', namespace='/this_ns/' )
>>> ctx.to_aws_name( 'foo' )
'this__ns_foo'
>>> ctx.from_aws_name( 'this__ns_foo' )
'foo'
>>> ctx.to_aws_name( 'foo_bar')
'this__ns_foo__bar'
>>> ctx.from_aws_name( 'this__ns_foo__bar')
'foo_bar'
>>> ctx.to_aws_name( '/other_ns/foo_bar' )
'other__ns_foo__bar'
>>> ctx.from_aws_name( 'other__ns_foo__bar' )
'/other_ns/foo_bar'
>>> ctx.to_aws_name( 'other_ns/foo_bar' )
'this__ns_other__ns_foo__bar'
>>> ctx.from_aws_name( 'this__ns_other__ns_foo__bar' )
'other_ns/foo_bar'
>>> ctx.to_aws_name( '/this_ns/foo_bar' )
'this__ns_foo__bar'
>>> ctx.from_aws_name( 'this__ns_foo__bar' )
'foo_bar'
"""
name = self.absolute_name(name)
assert name.startswith('/')
        return name[1:].replace('_', '__').replace('/', '_') | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/lib/context.py#L307-L388 |
224,957 | DataBiosphere/toil | src/toil/resource.py | Resource.create | def create(cls, jobStore, leaderPath):
"""
Saves the content of the file or directory at the given path to the given job store
and returns a resource object representing that content for the purpose of obtaining it
again at a generic, public URL. This method should be invoked on the leader node.
:param toil.jobStores.abstractJobStore.AbstractJobStore jobStore:
:param str leaderPath:
:rtype: Resource
"""
pathHash = cls._pathHash(leaderPath)
contentHash = hashlib.md5()
# noinspection PyProtectedMember
with cls._load(leaderPath) as src:
with jobStore.writeSharedFileStream(sharedFileName=pathHash, isProtected=False) as dst:
userScript = src.read()
contentHash.update(userScript)
dst.write(userScript)
return cls(name=os.path.basename(leaderPath),
pathHash=pathHash,
url=jobStore.getSharedPublicUrl(sharedFileName=pathHash),
                   contentHash=contentHash.hexdigest()) | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/resource.py#L72-L95 |
224,958 | DataBiosphere/toil | src/toil/resource.py | Resource.prepareSystem | def prepareSystem(cls):
"""
Prepares this system for the downloading and lookup of resources. This method should only
be invoked on a worker node. It is idempotent but not thread-safe.
"""
try:
resourceRootDirPath = os.environ[cls.rootDirPathEnvName]
except KeyError:
# Create directory holding local copies of requested resources ...
resourceRootDirPath = mkdtemp()
# .. and register its location in an environment variable such that child processes
# can find it.
os.environ[cls.rootDirPathEnvName] = resourceRootDirPath
        assert os.path.isdir(resourceRootDirPath) | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/resource.py#L104-L117 |
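The idempotency of prepareSystem comes from registering the created directory in an environment variable and checking for it first on every call. A self-contained sketch of that pattern (the variable name here is a demo value, not Toil's `rootDirPathEnvName`):

```python
import os
import tempfile

def prepare_root(env_name='RESOURCE_ROOT_DEMO'):
    """Create the resource root once and advertise it through an environment
    variable so repeated calls (and child processes) reuse the same directory."""
    try:
        path = os.environ[env_name]
    except KeyError:
        path = tempfile.mkdtemp()
        os.environ[env_name] = path
    assert os.path.isdir(path)
    return path
```

Calling it twice returns the same path, which is exactly the idempotence the docstring claims.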
224,959 | DataBiosphere/toil | src/toil/resource.py | Resource.cleanSystem | def cleanSystem(cls):
"""
Removes all downloaded, localized resources
"""
resourceRootDirPath = os.environ[cls.rootDirPathEnvName]
os.environ.pop(cls.rootDirPathEnvName)
shutil.rmtree(resourceRootDirPath)
for k, v in list(os.environ.items()):
if k.startswith(cls.resourceEnvNamePrefix):
                os.environ.pop(k) | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/resource.py#L120-L129 |
224,960 | DataBiosphere/toil | src/toil/resource.py | Resource.lookup | def lookup(cls, leaderPath):
"""
Returns a resource object representing a resource created from a file or directory at the
given path on the leader. This method should be invoked on the worker. The given path
does not need to refer to an existing file or directory on the worker, it only identifies
the resource within an instance of toil. This method returns None if no resource for the
given path exists.
:rtype: Resource
"""
pathHash = cls._pathHash(leaderPath)
try:
path_key = cls.resourceEnvNamePrefix + pathHash
s = os.environ[path_key]
except KeyError:
log.warn("'%s' may exist, but is not yet referenced by the worker (KeyError from os.environ[]).", str(path_key))
return None
else:
self = cls.unpickle(s)
assert self.pathHash == pathHash
            return self | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/resource.py#L138-L158 |
224,961 | DataBiosphere/toil | src/toil/resource.py | Resource.localDirPath | def localDirPath(self):
"""
The path to the directory containing the resource on the worker.
"""
rootDirPath = os.environ[self.rootDirPathEnvName]
        return os.path.join(rootDirPath, self.contentHash) | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/resource.py#L192-L197 |
224,962 | DataBiosphere/toil | src/toil/resource.py | Resource._download | def _download(self, dstFile):
"""
Download this resource from its URL to the given file object.
:type dstFile: io.BytesIO|io.FileIO
"""
for attempt in retry(predicate=lambda e: isinstance(e, HTTPError) and e.code == 400):
with attempt:
with closing(urlopen(self.url)) as content:
buf = content.read()
contentHash = hashlib.md5(buf)
assert contentHash.hexdigest() == self.contentHash
                    dstFile.write(buf) | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/resource.py#L235-L247 |
224,963 | DataBiosphere/toil | src/toil/resource.py | ModuleDescriptor._check_conflict | def _check_conflict(cls, dirPath, name):
"""
Check whether the module of the given name conflicts with another module on the sys.path.
:param dirPath: the directory from which the module was originally loaded
:param name: the mpdule name
"""
old_sys_path = sys.path
try:
sys.path = [d for d in old_sys_path if os.path.realpath(d) != os.path.realpath(dirPath)]
try:
colliding_module = importlib.import_module(name)
except ImportError:
pass
else:
raise ResourceException(
"The user module '%s' collides with module '%s from '%s'." % (
name, colliding_module.__name__, colliding_module.__file__))
finally:
            sys.path = old_sys_path | python | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/resource.py#L422-L441 |
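The collision test above is: with the module's own directory filtered out of sys.path, the name must no longer be importable from anywhere else. A standalone sketch of that check (raising a plain RuntimeError instead of ResourceException):

```python
import importlib
import os
import sys

def check_conflict(dir_path, name):
    """Raise if `name` is still importable after removing `dir_path` from
    sys.path, i.e. a different module of the same name shadows the user's."""
    old_sys_path = sys.path
    try:
        sys.path = [d for d in old_sys_path
                    if os.path.realpath(d) != os.path.realpath(dir_path)]
        try:
            colliding = importlib.import_module(name)
        except ImportError:
            return  # nothing else provides this name: no conflict
        raise RuntimeError("module %r collides with %r" % (name, colliding.__name__))
    finally:
        sys.path = old_sys_path
```

A name like `json` always conflicts (the stdlib provides it), whereas a made-up name passes cleanly; sys.path is restored either way by the `finally` block.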
224,964 | DataBiosphere/toil | src/toil/resource.py | ModuleDescriptor._getResourceClass | def _getResourceClass(self):
"""
Return the concrete subclass of Resource that's appropriate for auto-deploying this module.
"""
if self.fromVirtualEnv:
subcls = VirtualEnvResource
elif os.path.isdir(self._resourcePath):
subcls = DirectoryResource
elif os.path.isfile(self._resourcePath):
subcls = FileResource
elif os.path.exists(self._resourcePath):
raise AssertionError("Neither a file or a directory: '%s'" % self._resourcePath)
else:
raise AssertionError("No such file or directory: '%s'" % self._resourcePath)
return subcls | python | def _getResourceClass(self):
if self.fromVirtualEnv:
subcls = VirtualEnvResource
elif os.path.isdir(self._resourcePath):
subcls = DirectoryResource
elif os.path.isfile(self._resourcePath):
subcls = FileResource
elif os.path.exists(self._resourcePath):
            raise AssertionError("Neither a file nor a directory: '%s'" % self._resourcePath)
else:
raise AssertionError("No such file or directory: '%s'" % self._resourcePath)
return subcls | [
"def",
"_getResourceClass",
"(",
"self",
")",
":",
"if",
"self",
".",
"fromVirtualEnv",
":",
"subcls",
"=",
"VirtualEnvResource",
"elif",
"os",
".",
"path",
".",
"isdir",
"(",
"self",
".",
"_resourcePath",
")",
":",
"subcls",
"=",
"DirectoryResource",
"elif"... | Return the concrete subclass of Resource that's appropriate for auto-deploying this module. | [
"Return",
"the",
"concrete",
"subclass",
"of",
"Resource",
"that",
"s",
"appropriate",
"for",
"auto",
"-",
"deploying",
"this",
"module",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/resource.py#L461-L475 |
224,965 | DataBiosphere/toil | src/toil/resource.py | ModuleDescriptor.localize | def localize(self):
"""
Check if this module was saved as a resource. If it was, return a new module descriptor
that points to a local copy of that resource. Should only be called on a worker node. On
the leader, this method returns this resource, i.e. self.
:rtype: toil.resource.Resource
"""
if not self._runningOnWorker():
log.warn('The localize() method should only be invoked on a worker.')
resource = Resource.lookup(self._resourcePath)
if resource is None:
return self
else:
def stash(tmpDirPath):
# Save the original dirPath such that we can restore it in globalize()
with open(os.path.join(tmpDirPath, '.stash'), 'w') as f:
f.write('1' if self.fromVirtualEnv else '0')
f.write(self.dirPath)
resource.download(callback=stash)
return self.__class__(dirPath=resource.localDirPath,
name=self.name,
fromVirtualEnv=self.fromVirtualEnv) | python | def localize(self):
if not self._runningOnWorker():
log.warn('The localize() method should only be invoked on a worker.')
resource = Resource.lookup(self._resourcePath)
if resource is None:
return self
else:
def stash(tmpDirPath):
# Save the original dirPath such that we can restore it in globalize()
with open(os.path.join(tmpDirPath, '.stash'), 'w') as f:
f.write('1' if self.fromVirtualEnv else '0')
f.write(self.dirPath)
resource.download(callback=stash)
return self.__class__(dirPath=resource.localDirPath,
name=self.name,
fromVirtualEnv=self.fromVirtualEnv) | [
"def",
"localize",
"(",
"self",
")",
":",
"if",
"not",
"self",
".",
"_runningOnWorker",
"(",
")",
":",
"log",
".",
"warn",
"(",
"'The localize() method should only be invoked on a worker.'",
")",
"resource",
"=",
"Resource",
".",
"lookup",
"(",
"self",
".",
"_... | Check if this module was saved as a resource. If it was, return a new module descriptor
that points to a local copy of that resource. Should only be called on a worker node. On
the leader, this method returns this resource, i.e. self.
:rtype: toil.resource.Resource | [
"Check",
"if",
"this",
"module",
"was",
"saved",
"as",
"a",
"resource",
".",
"If",
"it",
"was",
"return",
"a",
"new",
"module",
"descriptor",
"that",
"points",
"to",
"a",
"local",
"copy",
"of",
"that",
"resource",
".",
"Should",
"only",
"be",
"called",
... | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/resource.py#L477-L500 |
224,966 | DataBiosphere/toil | src/toil/resource.py | ModuleDescriptor._resourcePath | def _resourcePath(self):
"""
The path to the directory that should be used when shipping this module and its siblings
around as a resource.
"""
if self.fromVirtualEnv:
return self.dirPath
elif '.' in self.name:
return os.path.join(self.dirPath, self._rootPackage())
else:
initName = self._initModuleName(self.dirPath)
if initName:
raise ResourceException(
"Toil does not support loading a user script from a package directory. You "
"may want to remove %s from %s or invoke the user script as a module via "
"'PYTHONPATH=\"%s\" python -m %s.%s'." %
tuple(concat(initName, self.dirPath, os.path.split(self.dirPath), self.name)))
return self.dirPath | python | def _resourcePath(self):
if self.fromVirtualEnv:
return self.dirPath
elif '.' in self.name:
return os.path.join(self.dirPath, self._rootPackage())
else:
initName = self._initModuleName(self.dirPath)
if initName:
raise ResourceException(
"Toil does not support loading a user script from a package directory. You "
"may want to remove %s from %s or invoke the user script as a module via "
"'PYTHONPATH=\"%s\" python -m %s.%s'." %
tuple(concat(initName, self.dirPath, os.path.split(self.dirPath), self.name)))
return self.dirPath | [
"def",
"_resourcePath",
"(",
"self",
")",
":",
"if",
"self",
".",
"fromVirtualEnv",
":",
"return",
"self",
".",
"dirPath",
"elif",
"'.'",
"in",
"self",
".",
"name",
":",
"return",
"os",
".",
"path",
".",
"join",
"(",
"self",
".",
"dirPath",
",",
"sel... | The path to the directory that should be used when shipping this module and its siblings
around as a resource. | [
"The",
"path",
"to",
"the",
"directory",
"that",
"should",
"be",
"used",
"when",
"shipping",
"this",
"module",
"and",
"its",
"siblings",
"around",
"as",
"a",
"resource",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/resource.py#L543-L560 |
224,967 | DataBiosphere/toil | src/toil/utils/toilDebugFile.py | fetchJobStoreFiles | def fetchJobStoreFiles(jobStore, options):
"""
Takes a list of file names as glob patterns, searches for these within a
given directory, and attempts to take all of the files found and copy them
into options.localFilePath.
:param jobStore: A fileJobStore object.
:param options.fetch: List of file glob patterns to search
for in the jobStore and copy into options.localFilePath.
:param options.localFilePath: Local directory to copy files into.
:param options.jobStore: The path to the jobStore directory.
"""
for jobStoreFile in options.fetch:
jobStoreHits = recursiveGlob(directoryname=options.jobStore,
glob_pattern=jobStoreFile)
for jobStoreFileID in jobStoreHits:
logger.debug("Copying job store file: %s to %s",
jobStoreFileID,
options.localFilePath[0])
jobStore.readFile(jobStoreFileID,
os.path.join(options.localFilePath[0],
os.path.basename(jobStoreFileID)),
symlink=options.useSymlinks) | python | def fetchJobStoreFiles(jobStore, options):
for jobStoreFile in options.fetch:
jobStoreHits = recursiveGlob(directoryname=options.jobStore,
glob_pattern=jobStoreFile)
for jobStoreFileID in jobStoreHits:
logger.debug("Copying job store file: %s to %s",
jobStoreFileID,
options.localFilePath[0])
jobStore.readFile(jobStoreFileID,
os.path.join(options.localFilePath[0],
os.path.basename(jobStoreFileID)),
symlink=options.useSymlinks) | [
"def",
"fetchJobStoreFiles",
"(",
"jobStore",
",",
"options",
")",
":",
"for",
"jobStoreFile",
"in",
"options",
".",
"fetch",
":",
"jobStoreHits",
"=",
"recursiveGlob",
"(",
"directoryname",
"=",
"options",
".",
"jobStore",
",",
"glob_pattern",
"=",
"jobStoreFil... | Takes a list of file names as glob patterns, searches for these within a
given directory, and attempts to take all of the files found and copy them
into options.localFilePath.
:param jobStore: A fileJobStore object.
:param options.fetch: List of file glob patterns to search
for in the jobStore and copy into options.localFilePath.
:param options.localFilePath: Local directory to copy files into.
:param options.jobStore: The path to the jobStore directory. | [
"Takes",
"a",
"list",
"of",
"file",
"names",
"as",
"glob",
"patterns",
"searches",
"for",
"these",
"within",
"a",
"given",
"directory",
"and",
"attempts",
"to",
"take",
"all",
"of",
"the",
"files",
"found",
"and",
"copy",
"them",
"into",
"options",
".",
... | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/utils/toilDebugFile.py#L47-L69 |
224,968 | DataBiosphere/toil | src/toil/utils/toilDebugFile.py | printContentsOfJobStore | def printContentsOfJobStore(jobStorePath, nameOfJob=None):
"""
Fetch a list of all files contained in the jobStore directory input if
nameOfJob is not declared, otherwise it only prints out the names of files
for that specific job for which it can find a match. Also creates a logFile
containing this same record of job files in the working directory.
:param jobStorePath: Directory path to recursively look for files.
:param nameOfJob: Default is None, which prints out all files in the jobStore.
If specified, it will print all jobStore files that have been written to the
jobStore by that job.
"""
if nameOfJob:
glob = "*" + nameOfJob + "*"
logFile = nameOfJob + "_fileset.txt"
else:
glob = "*"
logFile = "jobstore_files.txt"
nameOfJob = ""
list_of_files = recursiveGlob(directoryname=jobStorePath, glob_pattern=glob)
if os.path.exists(logFile):
os.remove(logFile)
for gfile in sorted(list_of_files):
if not gfile.endswith('.new'):
logger.debug(nameOfJob + "File: %s", os.path.basename(gfile))
with open(logFile, "a+") as f:
f.write(os.path.basename(gfile))
f.write("\n") | python | def printContentsOfJobStore(jobStorePath, nameOfJob=None):
if nameOfJob:
glob = "*" + nameOfJob + "*"
logFile = nameOfJob + "_fileset.txt"
else:
glob = "*"
logFile = "jobstore_files.txt"
nameOfJob = ""
list_of_files = recursiveGlob(directoryname=jobStorePath, glob_pattern=glob)
if os.path.exists(logFile):
os.remove(logFile)
for gfile in sorted(list_of_files):
if not gfile.endswith('.new'):
logger.debug(nameOfJob + "File: %s", os.path.basename(gfile))
with open(logFile, "a+") as f:
f.write(os.path.basename(gfile))
f.write("\n") | [
"def",
"printContentsOfJobStore",
"(",
"jobStorePath",
",",
"nameOfJob",
"=",
"None",
")",
":",
"if",
"nameOfJob",
":",
"glob",
"=",
"\"*\"",
"+",
"nameOfJob",
"+",
"\"*\"",
"logFile",
"=",
"nameOfJob",
"+",
"\"_fileset.txt\"",
"else",
":",
"glob",
"=",
"\"*... | Fetch a list of all files contained in the jobStore directory input if
nameOfJob is not declared, otherwise it only prints out the names of files
for that specific job for which it can find a match. Also creates a logFile
containing this same record of job files in the working directory.
:param jobStorePath: Directory path to recursively look for files.
:param nameOfJob: Default is None, which prints out all files in the jobStore.
If specified, it will print all jobStore files that have been written to the
jobStore by that job. | [
"Fetch",
"a",
"list",
"of",
"all",
"files",
"contained",
"in",
"the",
"jobStore",
"directory",
"input",
"if",
"nameOfJob",
"is",
"not",
"declared",
"otherwise",
"it",
"only",
"prints",
"out",
"the",
"names",
"of",
"files",
"for",
"that",
"specific",
"job",
... | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/utils/toilDebugFile.py#L71-L100 |
224,969 | DataBiosphere/toil | src/toil/job.py | BaseJob.disk | def disk(self):
"""
The maximum number of bytes of disk the job will require to run.
"""
if self._disk is not None:
return self._disk
elif self._config is not None:
return self._config.defaultDisk
else:
raise AttributeError("Default value for 'disk' cannot be determined") | python | def disk(self):
if self._disk is not None:
return self._disk
elif self._config is not None:
return self._config.defaultDisk
else:
raise AttributeError("Default value for 'disk' cannot be determined") | [
"def",
"disk",
"(",
"self",
")",
":",
"if",
"self",
".",
"_disk",
"is",
"not",
"None",
":",
"return",
"self",
".",
"_disk",
"elif",
"self",
".",
"_config",
"is",
"not",
"None",
":",
"return",
"self",
".",
"_config",
".",
"defaultDisk",
"else",
":",
... | The maximum number of bytes of disk the job will require to run. | [
"The",
"maximum",
"number",
"of",
"bytes",
"of",
"disk",
"the",
"job",
"will",
"require",
"to",
"run",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L86-L95 |
224,970 | DataBiosphere/toil | src/toil/job.py | BaseJob.memory | def memory(self):
"""
The maximum number of bytes of memory the job will require to run.
"""
if self._memory is not None:
return self._memory
elif self._config is not None:
return self._config.defaultMemory
else:
raise AttributeError("Default value for 'memory' cannot be determined") | python | def memory(self):
if self._memory is not None:
return self._memory
elif self._config is not None:
return self._config.defaultMemory
else:
raise AttributeError("Default value for 'memory' cannot be determined") | [
"def",
"memory",
"(",
"self",
")",
":",
"if",
"self",
".",
"_memory",
"is",
"not",
"None",
":",
"return",
"self",
".",
"_memory",
"elif",
"self",
".",
"_config",
"is",
"not",
"None",
":",
"return",
"self",
".",
"_config",
".",
"defaultMemory",
"else",
... | The maximum number of bytes of memory the job will require to run. | [
"The",
"maximum",
"number",
"of",
"bytes",
"of",
"memory",
"the",
"job",
"will",
"require",
"to",
"run",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L98-L107 |
224,971 | DataBiosphere/toil | src/toil/job.py | BaseJob.cores | def cores(self):
"""
The number of CPU cores required.
"""
if self._cores is not None:
return self._cores
elif self._config is not None:
return self._config.defaultCores
else:
raise AttributeError("Default value for 'cores' cannot be determined") | python | def cores(self):
if self._cores is not None:
return self._cores
elif self._config is not None:
return self._config.defaultCores
else:
raise AttributeError("Default value for 'cores' cannot be determined") | [
"def",
"cores",
"(",
"self",
")",
":",
"if",
"self",
".",
"_cores",
"is",
"not",
"None",
":",
"return",
"self",
".",
"_cores",
"elif",
"self",
".",
"_config",
"is",
"not",
"None",
":",
"return",
"self",
".",
"_config",
".",
"defaultCores",
"else",
":... | The number of CPU cores required. | [
"The",
"number",
"of",
"CPU",
"cores",
"required",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L110-L119 |
224,972 | DataBiosphere/toil | src/toil/job.py | BaseJob.preemptable | def preemptable(self):
"""
Whether the job can be run on a preemptable node.
"""
if self._preemptable is not None:
return self._preemptable
elif self._config is not None:
return self._config.defaultPreemptable
else:
raise AttributeError("Default value for 'preemptable' cannot be determined") | python | def preemptable(self):
if self._preemptable is not None:
return self._preemptable
elif self._config is not None:
return self._config.defaultPreemptable
else:
raise AttributeError("Default value for 'preemptable' cannot be determined") | [
"def",
"preemptable",
"(",
"self",
")",
":",
"if",
"self",
".",
"_preemptable",
"is",
"not",
"None",
":",
"return",
"self",
".",
"_preemptable",
"elif",
"self",
".",
"_config",
"is",
"not",
"None",
":",
"return",
"self",
".",
"_config",
".",
"defaultPree... | Whether the job can be run on a preemptable node. | [
"Whether",
"the",
"job",
"can",
"be",
"run",
"on",
"a",
"preemptable",
"node",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L122-L131 |
224,973 | DataBiosphere/toil | src/toil/job.py | BaseJob._requirements | def _requirements(self):
"""
Gets a dictionary of all the object's resource requirements. Unset values are defaulted to None
"""
return {'memory': getattr(self, 'memory', None),
'cores': getattr(self, 'cores', None),
'disk': getattr(self, 'disk', None),
'preemptable': getattr(self, 'preemptable', None)} | python | def _requirements(self):
return {'memory': getattr(self, 'memory', None),
'cores': getattr(self, 'cores', None),
'disk': getattr(self, 'disk', None),
'preemptable': getattr(self, 'preemptable', None)} | [
"def",
"_requirements",
"(",
"self",
")",
":",
"return",
"{",
"'memory'",
":",
"getattr",
"(",
"self",
",",
"'memory'",
",",
"None",
")",
",",
"'cores'",
":",
"getattr",
"(",
"self",
",",
"'cores'",
",",
"None",
")",
",",
"'disk'",
":",
"getattr",
"(... | Gets a dictionary of all the object's resource requirements. Unset values are defaulted to None | [
"Gets",
"a",
"dictionary",
"of",
"all",
"the",
"object",
"s",
"resource",
"requirements",
".",
"Unset",
"values",
"are",
"defaulted",
"to",
"None"
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L134-L141 |
224,974 | DataBiosphere/toil | src/toil/job.py | BaseJob._parseResource | def _parseResource(name, value):
"""
Parse a Toil job's resource requirement value and apply resource-specific type checks. If the
value is a string, a binary or metric unit prefix in it will be evaluated and the
corresponding integral value will be returned.
:param str name: The name of the resource
:param None|str|float|int value: The resource value
:rtype: int|float|None
>>> Job._parseResource('cores', None)
>>> Job._parseResource('cores', 1), Job._parseResource('disk', 1), \
Job._parseResource('memory', 1)
(1, 1, 1)
>>> Job._parseResource('cores', '1G'), Job._parseResource('disk', '1G'), \
Job._parseResource('memory', '1G')
(1073741824, 1073741824, 1073741824)
>>> Job._parseResource('cores', 1.1)
1.1
>>> Job._parseResource('disk', 1.1) # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
TypeError: The 'disk' requirement does not accept values that are of <type 'float'>
>>> Job._parseResource('memory', object()) # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
TypeError: The 'memory' requirement does not accept values that are of ...
"""
assert name in ('memory', 'disk', 'cores')
if value is None:
return value
elif isinstance(value, (str, bytes)):
value = human2bytes(value)
if isinstance(value, int):
return value
elif isinstance(value, float) and name == 'cores':
return value
else:
raise TypeError("The '%s' requirement does not accept values that are of %s"
% (name, type(value))) | python | def _parseResource(name, value):
assert name in ('memory', 'disk', 'cores')
if value is None:
return value
elif isinstance(value, (str, bytes)):
value = human2bytes(value)
if isinstance(value, int):
return value
elif isinstance(value, float) and name == 'cores':
return value
else:
raise TypeError("The '%s' requirement does not accept values that are of %s"
% (name, type(value))) | [
"def",
"_parseResource",
"(",
"name",
",",
"value",
")",
":",
"assert",
"name",
"in",
"(",
"'memory'",
",",
"'disk'",
",",
"'cores'",
")",
"if",
"value",
"is",
"None",
":",
"return",
"value",
"elif",
"isinstance",
"(",
"value",
",",
"(",
"str",
",",
... | Parse a Toil job's resource requirement value and apply resource-specific type checks. If the
value is a string, a binary or metric unit prefix in it will be evaluated and the
corresponding integral value will be returned.
:param str name: The name of the resource
:param None|str|float|int value: The resource value
:rtype: int|float|None
>>> Job._parseResource('cores', None)
>>> Job._parseResource('cores', 1), Job._parseResource('disk', 1), \
Job._parseResource('memory', 1)
(1, 1, 1)
>>> Job._parseResource('cores', '1G'), Job._parseResource('disk', '1G'), \
Job._parseResource('memory', '1G')
(1073741824, 1073741824, 1073741824)
>>> Job._parseResource('cores', 1.1)
1.1
>>> Job._parseResource('disk', 1.1) # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
TypeError: The 'disk' requirement does not accept values that are of <type 'float'>
>>> Job._parseResource('memory', object()) # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
TypeError: The 'memory' requirement does not accept values that are of ... | [
"Parse",
"a",
"Toil",
"job",
"s",
"resource",
"requirement",
"value",
"and",
"apply",
"resource",
"-",
"specific",
"type",
"checks",
".",
"If",
"the",
"value",
"is",
"a",
"string",
"a",
"binary",
"or",
"metric",
"unit",
"prefix",
"in",
"it",
"will",
"be"... | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L144-L183 |
224,975 | DataBiosphere/toil | src/toil/job.py | Job.addFollowOn | def addFollowOn(self, followOnJob):
"""
        Adds a follow-on job; follow-on jobs will be run after the child jobs and \
their successors have been run.
:param toil.job.Job followOnJob:
:return: followOnJob
:rtype: toil.job.Job
"""
self._followOns.append(followOnJob)
followOnJob._addPredecessor(self)
return followOnJob | python | def addFollowOn(self, followOnJob):
self._followOns.append(followOnJob)
followOnJob._addPredecessor(self)
return followOnJob | [
"def",
"addFollowOn",
"(",
"self",
",",
"followOnJob",
")",
":",
"self",
".",
"_followOns",
".",
"append",
"(",
"followOnJob",
")",
"followOnJob",
".",
"_addPredecessor",
"(",
"self",
")",
"return",
"followOnJob"
] | Adds a follow-on job; follow-on jobs will be run after the child jobs and \
their successors have been run.
:param toil.job.Job followOnJob:
:return: followOnJob
:rtype: toil.job.Job | [
"Adds",
"a",
"follow",
"-",
"on",
"job",
"follow",
"-",
"on",
"jobs",
"will",
"be",
"run",
"after",
"the",
"child",
"jobs",
"and",
"\\",
"their",
"successors",
"have",
"been",
"run",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L349-L360 |
224,976 | DataBiosphere/toil | src/toil/job.py | Job.addService | def addService(self, service, parentService=None):
"""
Add a service.
The :func:`toil.job.Job.Service.start` method of the service will be called
after the run method has completed but before any successors are run.
The service's :func:`toil.job.Job.Service.stop` method will be called once
the successors of the job have been run.
Services allow things like databases and servers to be started and accessed
by jobs in a workflow.
:raises toil.job.JobException: If service has already been made the child of a job or another service.
:param toil.job.Job.Service service: Service to add.
:param toil.job.Job.Service parentService: Service that will be started before 'service' is
started. Allows trees of services to be established. parentService must be a service
of this job.
:return: a promise that will be replaced with the return value from
:func:`toil.job.Job.Service.start` of service in any successor of the job.
:rtype: toil.job.Promise
"""
if parentService is not None:
# Do check to ensure that parentService is a service of this job
def check(services):
for jS in services:
if jS.service == parentService or check(jS.service._childServices):
return True
return False
if not check(self._services):
raise JobException("Parent service is not a service of the given job")
return parentService._addChild(service)
else:
if service._hasParent:
raise JobException("The service already has a parent service")
service._hasParent = True
jobService = ServiceJob(service)
self._services.append(jobService)
return jobService.rv() | python | def addService(self, service, parentService=None):
if parentService is not None:
# Do check to ensure that parentService is a service of this job
def check(services):
for jS in services:
if jS.service == parentService or check(jS.service._childServices):
return True
return False
if not check(self._services):
raise JobException("Parent service is not a service of the given job")
return parentService._addChild(service)
else:
if service._hasParent:
raise JobException("The service already has a parent service")
service._hasParent = True
jobService = ServiceJob(service)
self._services.append(jobService)
return jobService.rv() | [
"def",
"addService",
"(",
"self",
",",
"service",
",",
"parentService",
"=",
"None",
")",
":",
"if",
"parentService",
"is",
"not",
"None",
":",
"# Do check to ensure that parentService is a service of this job",
"def",
"check",
"(",
"services",
")",
":",
"for",
"j... | Add a service.
The :func:`toil.job.Job.Service.start` method of the service will be called
after the run method has completed but before any successors are run.
The service's :func:`toil.job.Job.Service.stop` method will be called once
the successors of the job have been run.
Services allow things like databases and servers to be started and accessed
by jobs in a workflow.
:raises toil.job.JobException: If service has already been made the child of a job or another service.
:param toil.job.Job.Service service: Service to add.
:param toil.job.Job.Service parentService: Service that will be started before 'service' is
started. Allows trees of services to be established. parentService must be a service
of this job.
:return: a promise that will be replaced with the return value from
:func:`toil.job.Job.Service.start` of service in any successor of the job.
:rtype: toil.job.Promise | [
"Add",
"a",
"service",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L372-L409 |
224,977 | DataBiosphere/toil | src/toil/job.py | Job.addChildFn | def addChildFn(self, fn, *args, **kwargs):
"""
Adds a function as a child job.
:param fn: Function to be run as a child job with ``*args`` and ``**kwargs`` as \
arguments to this function. See toil.job.FunctionWrappingJob for reserved \
keyword arguments used to specify resource requirements.
:return: The new child job that wraps fn.
:rtype: toil.job.FunctionWrappingJob
"""
if PromisedRequirement.convertPromises(kwargs):
return self.addChild(PromisedRequirementFunctionWrappingJob.create(fn, *args, **kwargs))
else:
return self.addChild(FunctionWrappingJob(fn, *args, **kwargs)) | python | def addChildFn(self, fn, *args, **kwargs):
if PromisedRequirement.convertPromises(kwargs):
return self.addChild(PromisedRequirementFunctionWrappingJob.create(fn, *args, **kwargs))
else:
return self.addChild(FunctionWrappingJob(fn, *args, **kwargs)) | [
"def",
"addChildFn",
"(",
"self",
",",
"fn",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"if",
"PromisedRequirement",
".",
"convertPromises",
"(",
"kwargs",
")",
":",
"return",
"self",
".",
"addChild",
"(",
"PromisedRequirementFunctionWrappingJob",
... | Adds a function as a child job.
:param fn: Function to be run as a child job with ``*args`` and ``**kwargs`` as \
arguments to this function. See toil.job.FunctionWrappingJob for reserved \
keyword arguments used to specify resource requirements.
:return: The new child job that wraps fn.
:rtype: toil.job.FunctionWrappingJob | [
"Adds",
"a",
"function",
"as",
"a",
"child",
"job",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L413-L426 |
224,978 | DataBiosphere/toil | src/toil/job.py | Job.addFollowOnFn | def addFollowOnFn(self, fn, *args, **kwargs):
"""
Adds a function as a follow-on job.
:param fn: Function to be run as a follow-on job with ``*args`` and ``**kwargs`` as \
arguments to this function. See toil.job.FunctionWrappingJob for reserved \
keyword arguments used to specify resource requirements.
:return: The new follow-on job that wraps fn.
:rtype: toil.job.FunctionWrappingJob
"""
if PromisedRequirement.convertPromises(kwargs):
return self.addFollowOn(PromisedRequirementFunctionWrappingJob.create(fn, *args, **kwargs))
else:
return self.addFollowOn(FunctionWrappingJob(fn, *args, **kwargs)) | python | def addFollowOnFn(self, fn, *args, **kwargs):
if PromisedRequirement.convertPromises(kwargs):
return self.addFollowOn(PromisedRequirementFunctionWrappingJob.create(fn, *args, **kwargs))
else:
return self.addFollowOn(FunctionWrappingJob(fn, *args, **kwargs)) | [
"def",
"addFollowOnFn",
"(",
"self",
",",
"fn",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"if",
"PromisedRequirement",
".",
"convertPromises",
"(",
"kwargs",
")",
":",
"return",
"self",
".",
"addFollowOn",
"(",
"PromisedRequirementFunctionWrappingJo... | Adds a function as a follow-on job.
:param fn: Function to be run as a follow-on job with ``*args`` and ``**kwargs`` as \
arguments to this function. See toil.job.FunctionWrappingJob for reserved \
keyword arguments used to specify resource requirements.
:return: The new follow-on job that wraps fn.
:rtype: toil.job.FunctionWrappingJob | [
"Adds",
"a",
"function",
"as",
"a",
"follow",
"-",
"on",
"job",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L428-L441 |
224,979 | DataBiosphere/toil | src/toil/job.py | Job.checkNewCheckpointsAreLeafVertices | def checkNewCheckpointsAreLeafVertices(self):
"""
A checkpoint job is a job that is restarted if either it fails, or if any of \
its successors completely fails, exhausting their retries.
        A job is a leaf if it has no successors.
A checkpoint job must be a leaf when initially added to the job graph. When its \
run method is invoked it can then create direct successors. This restriction is made
to simplify implementation.
:raises toil.job.JobGraphDeadlockException: if there exists a job being added to the graph for which \
checkpoint=True and which is not a leaf.
"""
        roots = self.getRootJobs() # Root jobs of component; these are preexisting jobs in the graph
# All jobs in the component of the job graph containing self
jobs = set()
list(map(lambda x : x._dfs(jobs), roots))
# Check for each job for which checkpoint is true that it is a cut vertex or leaf
for y in [x for x in jobs if x.checkpoint]:
            if y not in roots: # The roots are the preexisting jobs
if not Job._isLeafVertex(y):
raise JobGraphDeadlockException("New checkpoint job %s is not a leaf in the job graph" % y) | python | def checkNewCheckpointsAreLeafVertices(self):
    roots = self.getRootJobs() # Root jobs of component; these are preexisting jobs in the graph
# All jobs in the component of the job graph containing self
jobs = set()
list(map(lambda x : x._dfs(jobs), roots))
# Check for each job for which checkpoint is true that it is a cut vertex or leaf
for y in [x for x in jobs if x.checkpoint]:
        if y not in roots: # The roots are the preexisting jobs
if not Job._isLeafVertex(y):
raise JobGraphDeadlockException("New checkpoint job %s is not a leaf in the job graph" % y) | [
"def",
"checkNewCheckpointsAreLeafVertices",
"(",
"self",
")",
":",
"roots",
"=",
"self",
".",
"getRootJobs",
"(",
")",
  "# Root jobs of component; these are preexisting jobs in the graph",
"# All jobs in the component of the job graph containing self",
"jobs",
"=",
"set",
"(",
... | A checkpoint job is a job that is restarted if either it fails, or if any of \
its successors completely fails, exhausting their retries.
A job is a leaf if it has no successors.
A checkpoint job must be a leaf when initially added to the job graph. When its \
run method is invoked it can then create direct successors. This restriction is made
to simplify implementation.
:raises toil.job.JobGraphDeadlockException: if there exists a job being added to the graph for which \
checkpoint=True and which is not a leaf. | [
"A",
"checkpoint",
"job",
"is",
"a",
"job",
"that",
"is",
"restarted",
"if",
"either",
"it",
"fails",
"or",
"if",
"any",
"of",
"\\",
"its",
"successors",
"completely",
"fails",
"exhausting",
"their",
"retries",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L672-L696 |
224,980 | DataBiosphere/toil | src/toil/job.py | Job._addPredecessor | def _addPredecessor(self, predecessorJob):
"""
Adds a predecessor job to the set of predecessor jobs. Raises a \
RuntimeError if the job is already a predecessor.
"""
if predecessorJob in self._directPredecessors:
raise RuntimeError("The given job is already a predecessor of this job")
self._directPredecessors.add(predecessorJob) | python | def _addPredecessor(self, predecessorJob):
if predecessorJob in self._directPredecessors:
raise RuntimeError("The given job is already a predecessor of this job")
self._directPredecessors.add(predecessorJob) | [
"def",
"_addPredecessor",
"(",
"self",
",",
"predecessorJob",
")",
":",
"if",
"predecessorJob",
"in",
"self",
".",
"_directPredecessors",
":",
"raise",
"RuntimeError",
"(",
"\"The given job is already a predecessor of this job\"",
")",
"self",
".",
"_directPredecessors",
... | Adds a predecessor job to the set of predecessor jobs. Raises a \
RuntimeError if the job is already a predecessor. | [
"Adds",
"a",
"predecessor",
"job",
"to",
"the",
"set",
"of",
"predecessor",
"jobs",
".",
"Raises",
"a",
"\\",
"RuntimeError",
"if",
"the",
"job",
"is",
"already",
"a",
"predecessor",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L864-L871 |
224,981 | DataBiosphere/toil | src/toil/job.py | Job._dfs | def _dfs(self, visited):
"""
Adds the job and all jobs reachable on a directed path from current node to the given set.
"""
if self not in visited:
visited.add(self)
for successor in self._children + self._followOns:
successor._dfs(visited) | python | def _dfs(self, visited):
if self not in visited:
visited.add(self)
for successor in self._children + self._followOns:
successor._dfs(visited) | [
"def",
"_dfs",
"(",
"self",
",",
"visited",
")",
":",
"if",
"self",
"not",
"in",
"visited",
":",
"visited",
".",
"add",
"(",
"self",
")",
"for",
"successor",
"in",
"self",
".",
"_children",
"+",
"self",
".",
"_followOns",
":",
"successor",
".",
"_dfs... | Adds the job and all jobs reachable on a directed path from current node to the given set. | [
"Adds",
"the",
"job",
"and",
"all",
"jobs",
"reachable",
"on",
"a",
"directed",
"path",
"from",
"current",
"node",
"to",
"the",
"given",
"set",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L1024-L1031 |
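The `_dfs` pattern above — each node adds itself to a shared set and recurses into its children and follow-ons — can be reproduced in a self-contained sketch (names here are illustrative, not Toil's):

```python
class Node:
    def __init__(self):
        self.children = []
        self.follow_ons = []

    def dfs(self, visited):
        # Add this node and everything reachable from it to the given set.
        if self not in visited:
            visited.add(self)
            for successor in self.children + self.follow_ons:
                successor.dfs(visited)

root, child, grandchild = Node(), Node(), Node()
root.children.append(child)
child.follow_ons.append(grandchild)

reachable = set()
root.dfs(reachable)
# reachable now holds root, child, and grandchild
```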
224,982 | DataBiosphere/toil | src/toil/job.py | Job._checkJobGraphAcylicDFS | def _checkJobGraphAcylicDFS(self, stack, visited, extraEdges):
"""
DFS traversal to detect cycles in augmented job graph.
"""
if self not in visited:
visited.add(self)
stack.append(self)
for successor in self._children + self._followOns + extraEdges[self]:
successor._checkJobGraphAcylicDFS(stack, visited, extraEdges)
assert stack.pop() == self
if self in stack:
stack.append(self)
raise JobGraphDeadlockException("A cycle of job dependencies has been detected '%s'" % stack) | python | def _checkJobGraphAcylicDFS(self, stack, visited, extraEdges):
if self not in visited:
visited.add(self)
stack.append(self)
for successor in self._children + self._followOns + extraEdges[self]:
successor._checkJobGraphAcylicDFS(stack, visited, extraEdges)
assert stack.pop() == self
if self in stack:
stack.append(self)
raise JobGraphDeadlockException("A cycle of job dependencies has been detected '%s'" % stack) | [
"def",
"_checkJobGraphAcylicDFS",
"(",
"self",
",",
"stack",
",",
"visited",
",",
"extraEdges",
")",
":",
"if",
"self",
"not",
"in",
"visited",
":",
"visited",
".",
"add",
"(",
"self",
")",
"stack",
".",
"append",
"(",
"self",
")",
"for",
"successor",
... | DFS traversal to detect cycles in augmented job graph. | [
"DFS",
"traversal",
"to",
"detect",
"cycles",
"in",
"augmented",
"job",
"graph",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L1033-L1045 |
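`_checkJobGraphAcylicDFS` detects cycles by keeping an explicit ancestor stack during the DFS: reaching a node that is already on the stack means a dependency cycle. A minimal standalone version of the same idea, over a plain successor map rather than Toil jobs:

```python
def find_cycle(node, successors, stack, visited):
    # DFS with an explicit ancestor stack; revisiting a node that is
    # still on the stack means a cycle of dependencies.
    if node not in visited:
        visited.add(node)
        stack.append(node)
        for nxt in successors.get(node, []):
            find_cycle(nxt, successors, stack, visited)
        assert stack.pop() == node
    elif node in stack:
        raise RuntimeError("cycle detected through %r" % node)

acyclic = {"a": ["b", "c"], "b": ["c"]}
find_cycle("a", acyclic, [], set())   # completes without error
```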
224,983 | DataBiosphere/toil | src/toil/job.py | Job._getImpliedEdges | def _getImpliedEdges(roots):
"""
Gets the set of implied edges. See Job.checkJobGraphAcylic
"""
#Get nodes in job graph
nodes = set()
for root in roots:
root._dfs(nodes)
##For each follow-on edge calculate the extra implied edges
#Adjacency list of implied edges, i.e. map of jobs to lists of jobs
#connected by an implied edge
extraEdges = dict([(n, []) for n in nodes])
for job in nodes:
if len(job._followOns) > 0:
#Get set of jobs connected by a directed path to job, starting
#with a child edge
reacheable = set()
for child in job._children:
child._dfs(reacheable)
#Now add extra edges
for descendant in reacheable:
extraEdges[descendant] += job._followOns[:]
return extraEdges | python | def _getImpliedEdges(roots):
#Get nodes in job graph
nodes = set()
for root in roots:
root._dfs(nodes)
##For each follow-on edge calculate the extra implied edges
#Adjacency list of implied edges, i.e. map of jobs to lists of jobs
#connected by an implied edge
extraEdges = dict([(n, []) for n in nodes])
for job in nodes:
if len(job._followOns) > 0:
#Get set of jobs connected by a directed path to job, starting
#with a child edge
reacheable = set()
for child in job._children:
child._dfs(reacheable)
#Now add extra edges
for descendant in reacheable:
extraEdges[descendant] += job._followOns[:]
return extraEdges | [
"def",
"_getImpliedEdges",
"(",
"roots",
")",
":",
"#Get nodes in job graph",
"nodes",
"=",
"set",
"(",
")",
"for",
"root",
"in",
"roots",
":",
"root",
".",
"_dfs",
"(",
"nodes",
")",
"##For each follow-on edge calculate the extra implied edges",
"#Adjacency list of i... | Gets the set of implied edges. See Job.checkJobGraphAcylic | [
"Gets",
"the",
"set",
"of",
"implied",
"edges",
".",
"See",
"Job",
".",
"checkJobGraphAcylic"
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L1048-L1071 |
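The implied-edge computation above encodes the follow-on semantics: a job's follow-ons may only run once the whole child subtree has finished, so every node reachable through a child gains an implied edge to those follow-ons. A sketch over plain dicts (iterative rather than recursive, but the same adjacency-list result):

```python
def implied_edges(nodes, children, follow_ons):
    # successors = children + follow-ons, mirroring Job._dfs
    succ = {n: children.get(n, []) + follow_ons.get(n, []) for n in nodes}
    extra = {n: [] for n in nodes}
    for job in nodes:
        if follow_ons.get(job):
            # Every node reachable via a child edge gets an implied
            # edge to this job's follow-ons.
            descendants = set()
            stack = list(children.get(job, []))
            while stack:
                d = stack.pop()
                if d not in descendants:
                    descendants.add(d)
                    stack.extend(succ[d])
            for d in descendants:
                extra[d] += follow_ons[job]
    return extra

nodes = ["root", "child", "follow"]
edges = implied_edges(nodes, {"root": ["child"]}, {"root": ["follow"]})
# edges["child"] == ["follow"]: the follow-on must wait for the child
```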
224,984 | DataBiosphere/toil | src/toil/job.py | Job._createEmptyJobGraphForJob | def _createEmptyJobGraphForJob(self, jobStore, command=None, predecessorNumber=0):
"""
Create an empty job for the job.
"""
# set _config to determine user determined default values for resource requirements
self._config = jobStore.config
return jobStore.create(JobNode.fromJob(self, command=command,
predecessorNumber=predecessorNumber)) | python | def _createEmptyJobGraphForJob(self, jobStore, command=None, predecessorNumber=0):
# set _config to determine user determined default values for resource requirements
self._config = jobStore.config
return jobStore.create(JobNode.fromJob(self, command=command,
predecessorNumber=predecessorNumber)) | [
"def",
"_createEmptyJobGraphForJob",
"(",
"self",
",",
"jobStore",
",",
"command",
"=",
"None",
",",
"predecessorNumber",
"=",
"0",
")",
":",
"# set _config to determine user determined default values for resource requirements",
"self",
".",
"_config",
"=",
"jobStore",
".... | Create an empty job for the job. | [
"Create",
"an",
"empty",
"job",
"for",
"the",
"job",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L1078-L1085 |
224,985 | DataBiosphere/toil | src/toil/job.py | Job._makeJobGraphs | def _makeJobGraphs(self, jobGraph, jobStore):
"""
Creates a jobGraph for each job in the job graph, recursively.
"""
jobsToJobGraphs = {self:jobGraph}
for successors in (self._followOns, self._children):
jobs = [successor._makeJobGraphs2(jobStore, jobsToJobGraphs) for successor in successors]
jobGraph.stack.append(jobs)
return jobsToJobGraphs | python | def _makeJobGraphs(self, jobGraph, jobStore):
jobsToJobGraphs = {self:jobGraph}
for successors in (self._followOns, self._children):
jobs = [successor._makeJobGraphs2(jobStore, jobsToJobGraphs) for successor in successors]
jobGraph.stack.append(jobs)
return jobsToJobGraphs | [
"def",
"_makeJobGraphs",
"(",
"self",
",",
"jobGraph",
",",
"jobStore",
")",
":",
"jobsToJobGraphs",
"=",
"{",
"self",
":",
"jobGraph",
"}",
"for",
"successors",
"in",
"(",
"self",
".",
"_followOns",
",",
"self",
".",
"_children",
")",
":",
"jobs",
"=",
... | Creates a jobGraph for each job in the job graph, recursively. | [
"Creates",
"a",
"jobGraph",
"for",
"each",
"job",
"in",
"the",
"job",
"graph",
"recursively",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L1087-L1095 |
224,986 | DataBiosphere/toil | src/toil/job.py | Job._serialiseJob | def _serialiseJob(self, jobStore, jobsToJobGraphs, rootJobGraph):
"""
Pickle a job and its jobGraph to disk.
"""
# Pickle the job so that its run method can be run at a later time.
# Drop out the children/followOns/predecessors/services - which are
# all recorded within the jobStore and do not need to be stored within
# the job
self._children, self._followOns, self._services = [], [], []
self._directPredecessors, self._promiseJobStore = set(), None
# The pickled job is "run" as the command of the job, see worker
# for the mechanism which unpickles the job and executes the Job.run
# method.
with jobStore.writeFileStream(rootJobGraph.jobStoreID) as (fileHandle, fileStoreID):
pickle.dump(self, fileHandle, pickle.HIGHEST_PROTOCOL)
# Note that getUserScript() may have been overridden. This is intended. If we used
# self.userModule directly, we'd be getting a reference to job.py if the job was
# specified as a function (as opposed to a class) since that is where FunctionWrappingJob
# is defined. What we really want is the module that was loaded as __main__,
# and FunctionWrappingJob overrides getUserScript() to give us just that. Only then can
# filter_main() in _unpickle( ) do its job of resolving any user-defined type or function.
userScript = self.getUserScript().globalize()
jobsToJobGraphs[self].command = ' '.join(('_toil', fileStoreID) + userScript.toCommand())
#Update the status of the jobGraph on disk
jobStore.update(jobsToJobGraphs[self]) | python | def _serialiseJob(self, jobStore, jobsToJobGraphs, rootJobGraph):
# Pickle the job so that its run method can be run at a later time.
# Drop out the children/followOns/predecessors/services - which are
# all recorded within the jobStore and do not need to be stored within
# the job
self._children, self._followOns, self._services = [], [], []
self._directPredecessors, self._promiseJobStore = set(), None
# The pickled job is "run" as the command of the job, see worker
# for the mechanism which unpickles the job and executes the Job.run
# method.
with jobStore.writeFileStream(rootJobGraph.jobStoreID) as (fileHandle, fileStoreID):
pickle.dump(self, fileHandle, pickle.HIGHEST_PROTOCOL)
# Note that getUserScript() may have been overridden. This is intended. If we used
# self.userModule directly, we'd be getting a reference to job.py if the job was
# specified as a function (as opposed to a class) since that is where FunctionWrappingJob
# is defined. What we really want is the module that was loaded as __main__,
# and FunctionWrappingJob overrides getUserScript() to give us just that. Only then can
# filter_main() in _unpickle( ) do its job of resolving any user-defined type or function.
userScript = self.getUserScript().globalize()
jobsToJobGraphs[self].command = ' '.join(('_toil', fileStoreID) + userScript.toCommand())
#Update the status of the jobGraph on disk
jobStore.update(jobsToJobGraphs[self]) | [
"def",
"_serialiseJob",
"(",
"self",
",",
"jobStore",
",",
"jobsToJobGraphs",
",",
"rootJobGraph",
")",
":",
"# Pickle the job so that its run method can be run at a later time.",
"# Drop out the children/followOns/predecessors/services - which are",
"# all recorded within the jobStore a... | Pickle a job and its jobGraph to disk. | [
"Pickle",
"a",
"job",
"and",
"its",
"jobGraph",
"to",
"disk",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L1136-L1160 |
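The serialisation step in `_serialiseJob` pickles the job into the job store and records a command string (`'_toil <fileStoreID> …'`) that a worker later parses to unpickle and run it. The round trip can be sketched with an in-memory store (the store and helper names here are hypothetical, not Toil's):

```python
import pickle

store = {}   # stands in for the job store's file streams

def write_pickled(obj):
    # Pickle the object into the store and return its file ID.
    file_id = "file-%d" % len(store)
    store[file_id] = pickle.dumps(obj, pickle.HIGHEST_PROTOCOL)
    return file_id

payload = {"job": "example"}
file_id = write_pickled(payload)
command = ' '.join(('_toil', file_id))   # analogous to jobGraph.command

# Later, a worker parses the command and unpickles the job:
reloaded = pickle.loads(store[command.split()[1]])
```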
224,987 | DataBiosphere/toil | src/toil/job.py | Job._serialiseServices | def _serialiseServices(self, jobStore, jobGraph, rootJobGraph):
"""
Serialises the services for a job.
"""
def processService(serviceJob, depth):
# Extend the depth of the services if necessary
if depth == len(jobGraph.services):
jobGraph.services.append([])
# Recursively call to process child services
for childServiceJob in serviceJob.service._childServices:
processService(childServiceJob, depth+1)
# Make a job wrapper
serviceJobGraph = serviceJob._createEmptyJobGraphForJob(jobStore, predecessorNumber=1)
# Create the start and terminate flags
serviceJobGraph.startJobStoreID = jobStore.getEmptyFileStoreID()
serviceJobGraph.terminateJobStoreID = jobStore.getEmptyFileStoreID()
serviceJobGraph.errorJobStoreID = jobStore.getEmptyFileStoreID()
assert jobStore.fileExists(serviceJobGraph.startJobStoreID)
assert jobStore.fileExists(serviceJobGraph.terminateJobStoreID)
assert jobStore.fileExists(serviceJobGraph.errorJobStoreID)
# Create the service job tuple
j = ServiceJobNode(jobStoreID=serviceJobGraph.jobStoreID,
memory=serviceJobGraph.memory, cores=serviceJobGraph.cores,
disk=serviceJobGraph.disk, preemptable=serviceJobGraph.preemptable,
startJobStoreID=serviceJobGraph.startJobStoreID,
terminateJobStoreID=serviceJobGraph.terminateJobStoreID,
errorJobStoreID=serviceJobGraph.errorJobStoreID,
jobName=serviceJobGraph.jobName, unitName=serviceJobGraph.unitName,
command=serviceJobGraph.command,
predecessorNumber=serviceJobGraph.predecessorNumber)
# Add the service job tuple to the list of services to run
jobGraph.services[depth].append(j)
# Break the links between the services to stop them being serialised together
#childServices = serviceJob.service._childServices
serviceJob.service._childServices = None
assert serviceJob._services == []
#service = serviceJob.service
# Pickle the job
serviceJob.pickledService = pickle.dumps(serviceJob.service, protocol=pickle.HIGHEST_PROTOCOL)
serviceJob.service = None
# Serialise the service job and job wrapper
serviceJob._serialiseJob(jobStore, { serviceJob:serviceJobGraph }, rootJobGraph)
# Restore values
#serviceJob.service = service
#serviceJob.service._childServices = childServices
for serviceJob in self._services:
processService(serviceJob, 0)
self._services = [] | python | def _serialiseServices(self, jobStore, jobGraph, rootJobGraph):
def processService(serviceJob, depth):
# Extend the depth of the services if necessary
if depth == len(jobGraph.services):
jobGraph.services.append([])
# Recursively call to process child services
for childServiceJob in serviceJob.service._childServices:
processService(childServiceJob, depth+1)
# Make a job wrapper
serviceJobGraph = serviceJob._createEmptyJobGraphForJob(jobStore, predecessorNumber=1)
# Create the start and terminate flags
serviceJobGraph.startJobStoreID = jobStore.getEmptyFileStoreID()
serviceJobGraph.terminateJobStoreID = jobStore.getEmptyFileStoreID()
serviceJobGraph.errorJobStoreID = jobStore.getEmptyFileStoreID()
assert jobStore.fileExists(serviceJobGraph.startJobStoreID)
assert jobStore.fileExists(serviceJobGraph.terminateJobStoreID)
assert jobStore.fileExists(serviceJobGraph.errorJobStoreID)
# Create the service job tuple
j = ServiceJobNode(jobStoreID=serviceJobGraph.jobStoreID,
memory=serviceJobGraph.memory, cores=serviceJobGraph.cores,
disk=serviceJobGraph.disk, preemptable=serviceJobGraph.preemptable,
startJobStoreID=serviceJobGraph.startJobStoreID,
terminateJobStoreID=serviceJobGraph.terminateJobStoreID,
errorJobStoreID=serviceJobGraph.errorJobStoreID,
jobName=serviceJobGraph.jobName, unitName=serviceJobGraph.unitName,
command=serviceJobGraph.command,
predecessorNumber=serviceJobGraph.predecessorNumber)
# Add the service job tuple to the list of services to run
jobGraph.services[depth].append(j)
# Break the links between the services to stop them being serialised together
#childServices = serviceJob.service._childServices
serviceJob.service._childServices = None
assert serviceJob._services == []
#service = serviceJob.service
# Pickle the job
serviceJob.pickledService = pickle.dumps(serviceJob.service, protocol=pickle.HIGHEST_PROTOCOL)
serviceJob.service = None
# Serialise the service job and job wrapper
serviceJob._serialiseJob(jobStore, { serviceJob:serviceJobGraph }, rootJobGraph)
# Restore values
#serviceJob.service = service
#serviceJob.service._childServices = childServices
for serviceJob in self._services:
processService(serviceJob, 0)
self._services = [] | [
"def",
"_serialiseServices",
"(",
"self",
",",
"jobStore",
",",
"jobGraph",
",",
"rootJobGraph",
")",
":",
"def",
"processService",
"(",
"serviceJob",
",",
"depth",
")",
":",
"# Extend the depth of the services if necessary",
"if",
"depth",
"==",
"len",
"(",
"jobG... | Serialises the services for a job. | [
"Serialises",
"the",
"services",
"for",
"a",
"job",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L1162-L1220 |
224,988 | DataBiosphere/toil | src/toil/job.py | Job._serialiseJobGraph | def _serialiseJobGraph(self, jobGraph, jobStore, returnValues, firstJob):
"""
Pickle the graph of jobs in the jobStore. The graph is not fully serialised \
until the jobGraph itself is written to disk, this is not performed by this \
function because of the need to coordinate this operation with other updates. \
"""
#Check if the job graph has created
#any cycles of dependencies or has multiple roots
self.checkJobGraphForDeadlocks()
#Create the jobGraphs for followOns/children
with jobStore.batch():
jobsToJobGraphs = self._makeJobGraphs(jobGraph, jobStore)
#Get an ordering on the jobs which we use for pickling the jobs in the
#correct order to ensure the promises are properly established
ordering = self.getTopologicalOrderingOfJobs()
assert len(ordering) == len(jobsToJobGraphs)
with jobStore.batch():
# Temporarily set the jobStore locators for the promise call back functions
for job in ordering:
job.prepareForPromiseRegistration(jobStore)
def setForServices(serviceJob):
serviceJob.prepareForPromiseRegistration(jobStore)
for childServiceJob in serviceJob.service._childServices:
setForServices(childServiceJob)
for serviceJob in job._services:
setForServices(serviceJob)
ordering.reverse()
assert self == ordering[-1]
if firstJob:
            #If this is the first job, we serialise all the jobs, including the root job
for job in ordering:
# Pickle the services for the job
job._serialiseServices(jobStore, jobsToJobGraphs[job], jobGraph)
# Now pickle the job
job._serialiseJob(jobStore, jobsToJobGraphs, jobGraph)
else:
#We store the return values at this point, because if a return value
#is a promise from another job, we need to register the promise
#before we serialise the other jobs
self._fulfillPromises(returnValues, jobStore)
#Pickle the non-root jobs
for job in ordering[:-1]:
# Pickle the services for the job
job._serialiseServices(jobStore, jobsToJobGraphs[job], jobGraph)
# Pickle the job itself
job._serialiseJob(jobStore, jobsToJobGraphs, jobGraph)
# Pickle any services for the job
self._serialiseServices(jobStore, jobGraph, jobGraph) | python | def _serialiseJobGraph(self, jobGraph, jobStore, returnValues, firstJob):
#Check if the job graph has created
#any cycles of dependencies or has multiple roots
self.checkJobGraphForDeadlocks()
#Create the jobGraphs for followOns/children
with jobStore.batch():
jobsToJobGraphs = self._makeJobGraphs(jobGraph, jobStore)
#Get an ordering on the jobs which we use for pickling the jobs in the
#correct order to ensure the promises are properly established
ordering = self.getTopologicalOrderingOfJobs()
assert len(ordering) == len(jobsToJobGraphs)
with jobStore.batch():
# Temporarily set the jobStore locators for the promise call back functions
for job in ordering:
job.prepareForPromiseRegistration(jobStore)
def setForServices(serviceJob):
serviceJob.prepareForPromiseRegistration(jobStore)
for childServiceJob in serviceJob.service._childServices:
setForServices(childServiceJob)
for serviceJob in job._services:
setForServices(serviceJob)
ordering.reverse()
assert self == ordering[-1]
if firstJob:
            #If this is the first job, we serialise all the jobs, including the root job
for job in ordering:
# Pickle the services for the job
job._serialiseServices(jobStore, jobsToJobGraphs[job], jobGraph)
# Now pickle the job
job._serialiseJob(jobStore, jobsToJobGraphs, jobGraph)
else:
#We store the return values at this point, because if a return value
#is a promise from another job, we need to register the promise
#before we serialise the other jobs
self._fulfillPromises(returnValues, jobStore)
#Pickle the non-root jobs
for job in ordering[:-1]:
# Pickle the services for the job
job._serialiseServices(jobStore, jobsToJobGraphs[job], jobGraph)
# Pickle the job itself
job._serialiseJob(jobStore, jobsToJobGraphs, jobGraph)
# Pickle any services for the job
self._serialiseServices(jobStore, jobGraph, jobGraph) | [
"def",
"_serialiseJobGraph",
"(",
"self",
",",
"jobGraph",
",",
"jobStore",
",",
"returnValues",
",",
"firstJob",
")",
":",
"#Check if the job graph has created",
"#any cycles of dependencies or has multiple roots",
"self",
".",
"checkJobGraphForDeadlocks",
"(",
")",
"#Crea... | Pickle the graph of jobs in the jobStore. The graph is not fully serialised \
until the jobGraph itself is written to disk, this is not performed by this \
function because of the need to coordinate this operation with other updates. \ | [
"Pickle",
"the",
"graph",
"of",
"jobs",
"in",
"the",
"jobStore",
".",
"The",
"graph",
"is",
"not",
"fully",
"serialised",
"\\",
"until",
"the",
"jobGraph",
"itself",
"is",
"written",
"to",
"disk",
"this",
"is",
"not",
"performed",
"by",
"this",
"\\",
"fu... | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L1222-L1272 |
224,989 | DataBiosphere/toil | src/toil/job.py | Job._serialiseFirstJob | def _serialiseFirstJob(self, jobStore):
"""
Serialises the root job. Returns the wrapping job.
:param toil.jobStores.abstractJobStore.AbstractJobStore jobStore:
"""
# Check if the workflow root is a checkpoint but not a leaf vertex.
# All other job vertices in the graph are checked by checkNewCheckpointsAreLeafVertices
if self.checkpoint and not Job._isLeafVertex(self):
raise JobGraphDeadlockException(
'New checkpoint job %s is not a leaf in the job graph' % self)
# Create first jobGraph
jobGraph = self._createEmptyJobGraphForJob(jobStore=jobStore, predecessorNumber=0)
# Write the graph of jobs to disk
self._serialiseJobGraph(jobGraph, jobStore, None, True)
jobStore.update(jobGraph)
# Store the name of the first job in a file in case of restart. Up to this point the
# root job is not recoverable. FIXME: "root job" or "first job", which one is it?
jobStore.setRootJob(jobGraph.jobStoreID)
return jobGraph | python | def _serialiseFirstJob(self, jobStore):
# Check if the workflow root is a checkpoint but not a leaf vertex.
# All other job vertices in the graph are checked by checkNewCheckpointsAreLeafVertices
if self.checkpoint and not Job._isLeafVertex(self):
raise JobGraphDeadlockException(
'New checkpoint job %s is not a leaf in the job graph' % self)
# Create first jobGraph
jobGraph = self._createEmptyJobGraphForJob(jobStore=jobStore, predecessorNumber=0)
# Write the graph of jobs to disk
self._serialiseJobGraph(jobGraph, jobStore, None, True)
jobStore.update(jobGraph)
# Store the name of the first job in a file in case of restart. Up to this point the
# root job is not recoverable. FIXME: "root job" or "first job", which one is it?
jobStore.setRootJob(jobGraph.jobStoreID)
return jobGraph | [
"def",
"_serialiseFirstJob",
"(",
"self",
",",
"jobStore",
")",
":",
"# Check if the workflow root is a checkpoint but not a leaf vertex.",
"# All other job vertices in the graph are checked by checkNewCheckpointsAreLeafVertices",
"if",
"self",
".",
"checkpoint",
"and",
"not",
"Job",... | Serialises the root job. Returns the wrapping job.
:param toil.jobStores.abstractJobStore.AbstractJobStore jobStore: | [
"Serialises",
"the",
"root",
"job",
".",
"Returns",
"the",
"wrapping",
"job",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L1274-L1295 |
224,990 | DataBiosphere/toil | src/toil/job.py | Job._serialiseExistingJob | def _serialiseExistingJob(self, jobGraph, jobStore, returnValues):
"""
Serialise an existing job.
"""
self._serialiseJobGraph(jobGraph, jobStore, returnValues, False)
#Drop the completed command, if not dropped already
jobGraph.command = None
#Merge any children (follow-ons) created in the initial serialisation
#with children (follow-ons) created in the subsequent scale-up.
assert len(jobGraph.stack) >= 4
combinedChildren = jobGraph.stack[-1] + jobGraph.stack[-3]
combinedFollowOns = jobGraph.stack[-2] + jobGraph.stack[-4]
jobGraph.stack = jobGraph.stack[:-4]
if len(combinedFollowOns) > 0:
jobGraph.stack.append(combinedFollowOns)
if len(combinedChildren) > 0:
jobGraph.stack.append(combinedChildren) | python | def _serialiseExistingJob(self, jobGraph, jobStore, returnValues):
self._serialiseJobGraph(jobGraph, jobStore, returnValues, False)
#Drop the completed command, if not dropped already
jobGraph.command = None
#Merge any children (follow-ons) created in the initial serialisation
#with children (follow-ons) created in the subsequent scale-up.
assert len(jobGraph.stack) >= 4
combinedChildren = jobGraph.stack[-1] + jobGraph.stack[-3]
combinedFollowOns = jobGraph.stack[-2] + jobGraph.stack[-4]
jobGraph.stack = jobGraph.stack[:-4]
if len(combinedFollowOns) > 0:
jobGraph.stack.append(combinedFollowOns)
if len(combinedChildren) > 0:
jobGraph.stack.append(combinedChildren) | [
"def",
"_serialiseExistingJob",
"(",
"self",
",",
"jobGraph",
",",
"jobStore",
",",
"returnValues",
")",
":",
"self",
".",
"_serialiseJobGraph",
"(",
"jobGraph",
",",
"jobStore",
",",
"returnValues",
",",
"False",
")",
"#Drop the completed command, if not dropped alre... | Serialise an existing job. | [
"Serialise",
"an",
"existing",
"job",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L1297-L1313 |
224,991 | DataBiosphere/toil | src/toil/job.py | Job._executor | def _executor(self, jobGraph, stats, fileStore):
"""
This is the core wrapping method for running the job within a worker. It sets up the stats
and logging before yielding. After completion of the body, the function will finish up the
stats and logging, and starts the async update process for the job.
"""
if stats is not None:
startTime = time.time()
startClock = getTotalCpuTime()
baseDir = os.getcwd()
yield
# If the job is not a checkpoint job, add the promise files to delete
# to the list of jobStoreFileIDs to delete
if not self.checkpoint:
for jobStoreFileID in Promise.filesToDelete:
fileStore.deleteGlobalFile(jobStoreFileID)
else:
# Else copy them to the job wrapper to delete later
jobGraph.checkpointFilesToDelete = list(Promise.filesToDelete)
Promise.filesToDelete.clear()
# Now indicate the asynchronous update of the job can happen
fileStore._updateJobWhenDone()
# Change dir back to cwd dir, if changed by job (this is a safety issue)
if os.getcwd() != baseDir:
os.chdir(baseDir)
# Finish up the stats
if stats is not None:
totalCpuTime, totalMemoryUsage = getTotalCpuTimeAndMemoryUsage()
stats.jobs.append(
Expando(
time=str(time.time() - startTime),
clock=str(totalCpuTime - startClock),
class_name=self._jobName(),
memory=str(totalMemoryUsage)
)
) | python | def _executor(self, jobGraph, stats, fileStore):
if stats is not None:
startTime = time.time()
startClock = getTotalCpuTime()
baseDir = os.getcwd()
yield
# If the job is not a checkpoint job, add the promise files to delete
# to the list of jobStoreFileIDs to delete
if not self.checkpoint:
for jobStoreFileID in Promise.filesToDelete:
fileStore.deleteGlobalFile(jobStoreFileID)
else:
# Else copy them to the job wrapper to delete later
jobGraph.checkpointFilesToDelete = list(Promise.filesToDelete)
Promise.filesToDelete.clear()
# Now indicate the asynchronous update of the job can happen
fileStore._updateJobWhenDone()
# Change dir back to cwd dir, if changed by job (this is a safety issue)
if os.getcwd() != baseDir:
os.chdir(baseDir)
# Finish up the stats
if stats is not None:
totalCpuTime, totalMemoryUsage = getTotalCpuTimeAndMemoryUsage()
stats.jobs.append(
Expando(
time=str(time.time() - startTime),
clock=str(totalCpuTime - startClock),
class_name=self._jobName(),
memory=str(totalMemoryUsage)
)
) | [
"def",
"_executor",
"(",
"self",
",",
"jobGraph",
",",
"stats",
",",
"fileStore",
")",
":",
"if",
"stats",
"is",
"not",
"None",
":",
"startTime",
"=",
"time",
".",
"time",
"(",
")",
"startClock",
"=",
"getTotalCpuTime",
"(",
")",
"baseDir",
"=",
"os",
... | This is the core wrapping method for running the job within a worker. It sets up the stats
and logging before yielding. After completion of the body, the function will finish up the
stats and logging, and starts the async update process for the job. | [
"This",
"is",
"the",
"core",
"wrapping",
"method",
"for",
"running",
"the",
"job",
"within",
"a",
"worker",
".",
"It",
"sets",
"up",
"the",
"stats",
"and",
"logging",
"before",
"yielding",
".",
"After",
"completion",
"of",
"the",
"body",
"the",
"function",... | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L1325-L1362 |
224,992 | DataBiosphere/toil | src/toil/job.py | Job._runner | def _runner(self, jobGraph, jobStore, fileStore):
"""
This method actually runs the job, and serialises the next jobs.
:param class jobGraph: Instance of a jobGraph object
:param class jobStore: Instance of the job store
        :param toil.fileStore.FileStore fileStore: Instance of a cached or uncached
            filestore
:return:
"""
# Make fileStore available as an attribute during run() ...
self._fileStore = fileStore
# ... but also pass it to run() as an argument for backwards compatibility.
returnValues = self._run(jobGraph, fileStore)
# Serialize the new jobs defined by the run method to the jobStore
self._serialiseExistingJob(jobGraph, jobStore, returnValues) | python | def _runner(self, jobGraph, jobStore, fileStore):
# Make fileStore available as an attribute during run() ...
self._fileStore = fileStore
# ... but also pass it to run() as an argument for backwards compatibility.
returnValues = self._run(jobGraph, fileStore)
# Serialize the new jobs defined by the run method to the jobStore
self._serialiseExistingJob(jobGraph, jobStore, returnValues) | [
"def",
"_runner",
"(",
"self",
",",
"jobGraph",
",",
"jobStore",
",",
"fileStore",
")",
":",
"# Make fileStore available as an attribute during run() ...",
"self",
".",
"_fileStore",
"=",
"fileStore",
"# ... but also pass it to run() as an argument for backwards compatibility.",
... | This method actually runs the job, and serialises the next jobs.
:param class jobGraph: Instance of a jobGraph object
:param class jobStore: Instance of the job store
:param toil.fileStore.FileStore fileStore: Instance of a cached or uncached
filestore
:return: | [
"This",
"method",
"actually",
"runs",
"the",
"job",
"and",
"serialises",
"the",
"next",
"jobs",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L1364-L1379 |
224,993 | DataBiosphere/toil | src/toil/job.py | PromisedRequirementFunctionWrappingJob.create | def create(cls, userFunction, *args, **kwargs):
"""
Creates an encapsulated Toil job function with unfulfilled promised resource
requirements. After the promises are fulfilled, a child job function is created
using updated resource values. The subgraph is encapsulated to ensure that this
child job function is run before other children in the workflow. Otherwise, a
different child may try to use an unresolved promise return value from the parent.
"""
return EncapsulatedJob(cls(userFunction, *args, **kwargs)) | python | def create(cls, userFunction, *args, **kwargs):
return EncapsulatedJob(cls(userFunction, *args, **kwargs)) | [
"def",
"create",
"(",
"cls",
",",
"userFunction",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"return",
"EncapsulatedJob",
"(",
"cls",
"(",
"userFunction",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
")"
] | Creates an encapsulated Toil job function with unfulfilled promised resource
requirements. After the promises are fulfilled, a child job function is created
using updated resource values. The subgraph is encapsulated to ensure that this
child job function is run before other children in the workflow. Otherwise, a
different child may try to use an unresolved promise return value from the parent. | [
"Creates",
"an",
"encapsulated",
"Toil",
"job",
"function",
"with",
"unfulfilled",
"promised",
"resource",
"requirements",
".",
"After",
"the",
"promises",
"are",
"fulfilled",
"a",
"child",
"job",
"function",
"is",
"created",
"using",
"updated",
"resource",
"value... | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L1523-L1531 |
224,994 | DataBiosphere/toil | src/toil/job.py | PromisedRequirement.getValue | def getValue(self):
"""
Returns PromisedRequirement value
"""
func = dill.loads(self._func)
return func(*self._args) | python | def getValue(self):
func = dill.loads(self._func)
return func(*self._args) | [
"def",
"getValue",
"(",
"self",
")",
":",
"func",
"=",
"dill",
".",
"loads",
"(",
"self",
".",
"_func",
")",
"return",
"func",
"(",
"*",
"self",
".",
"_args",
")"
] | Returns PromisedRequirement value | [
"Returns",
"PromisedRequirement",
"value"
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L1838-L1843 |
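`getValue` above is the evaluation half of a deferred call: the function is serialized at construction time and only deserialized and invoked when the value is finally needed. A stdlib-only sketch of the same idea (Toil uses `dill`, which can serialize more kinds of callables; plain `pickle` is used here so the sketch has no third-party dependency, and the class name is hypothetical):

```python
import pickle


# Module-level function, so pickle can serialize it by reference.
def scale_memory(base, factor):
    return base * factor


class DeferredRequirement:
    """Capture a function and its arguments now; evaluate them later."""

    def __init__(self, func, *args):
        self._func = pickle.dumps(func)
        self._args = list(args)

    def getValue(self):
        # Deserialize and call, mirroring getValue() in the row above.
        func = pickle.loads(self._func)
        return func(*self._args)


req = DeferredRequirement(scale_memory, 2 * 1024 ** 3, 2)
```

Serializing the function rather than holding a direct reference is what lets Toil ship the requirement across process and machine boundaries.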
224,995 | DataBiosphere/toil | src/toil/job.py | PromisedRequirement.convertPromises | def convertPromises(kwargs):
"""
Returns True if reserved resource keyword is a Promise or
PromisedRequirement instance. Converts Promise instance
to PromisedRequirement.
:param kwargs: function keyword arguments
:return: bool
"""
for r in ["disk", "memory", "cores"]:
if isinstance(kwargs.get(r), Promise):
kwargs[r] = PromisedRequirement(kwargs[r])
return True
elif isinstance(kwargs.get(r), PromisedRequirement):
return True
return False | python | def convertPromises(kwargs):
for r in ["disk", "memory", "cores"]:
if isinstance(kwargs.get(r), Promise):
kwargs[r] = PromisedRequirement(kwargs[r])
return True
elif isinstance(kwargs.get(r), PromisedRequirement):
return True
return False | [
"def",
"convertPromises",
"(",
"kwargs",
")",
":",
"for",
"r",
"in",
"[",
"\"disk\"",
",",
"\"memory\"",
",",
"\"cores\"",
"]",
":",
"if",
"isinstance",
"(",
"kwargs",
".",
"get",
"(",
"r",
")",
",",
"Promise",
")",
":",
"kwargs",
"[",
"r",
"]",
"=... | Returns True if reserved resource keyword is a Promise or
PromisedRequirement instance. Converts Promise instance
to PromisedRequirement.
:param kwargs: function keyword arguments
:return: bool | [
"Returns",
"True",
"if",
"reserved",
"resource",
"keyword",
"is",
"a",
"Promise",
"or",
"PromisedRequirement",
"instance",
".",
"Converts",
"Promise",
"instance",
"to",
"PromisedRequirement",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/job.py#L1846-L1861 |
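The `convertPromises` row above normalizes resource keywords: a bare `Promise` gets wrapped into a `PromisedRequirement`, and the caller learns whether any promised requirement is present. A self-contained sketch with stub stand-ins for the two Toil classes (note this version wraps *every* reserved keyword rather than returning on the first match, which the original does):

```python
class Promise:
    """Minimal stand-in for Toil's Promise."""
    pass


class PromisedRequirement:
    """Minimal stand-in: wraps a Promise for later resolution."""
    def __init__(self, promise):
        self.promise = promise


def convert_promises(kwargs):
    # Wrap any reserved resource keyword given as a bare Promise and
    # report whether any promised requirement was seen.
    found = False
    for r in ("disk", "memory", "cores"):
        value = kwargs.get(r)
        if isinstance(value, Promise):
            kwargs[r] = PromisedRequirement(value)
            found = True
        elif isinstance(value, PromisedRequirement):
            found = True
    return found
```

Mutating `kwargs` in place matches the original's design: the caller's argument dict is the single source of truth for resource requirements.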
224,996 | DataBiosphere/toil | src/toil/batchSystems/options.py | getPublicIP | def getPublicIP():
"""Get the IP that this machine uses to contact the internet.
If behind a NAT, this will still be this computer's IP, and not the router's."""
try:
# Try to get the internet-facing IP by attempting a connection
# to a non-existent server and reading what IP was used.
with closing(socket.socket(socket.AF_INET, socket.SOCK_DGRAM)) as sock:
# 203.0.113.0/24 is reserved as TEST-NET-3 by RFC 5737, so
# there is guaranteed to be no one listening on the other
# end (and we won't accidentally DOS anyone).
sock.connect(('203.0.113.1', 1))
ip = sock.getsockname()[0]
return ip
except:
# Something went terribly wrong. Just give loopback rather
# than killing everything, because this is often called just
# to provide a default argument
return '127.0.0.1' | python | def getPublicIP():
try:
# Try to get the internet-facing IP by attempting a connection
# to a non-existent server and reading what IP was used.
with closing(socket.socket(socket.AF_INET, socket.SOCK_DGRAM)) as sock:
# 203.0.113.0/24 is reserved as TEST-NET-3 by RFC 5737, so
# there is guaranteed to be no one listening on the other
# end (and we won't accidentally DOS anyone).
sock.connect(('203.0.113.1', 1))
ip = sock.getsockname()[0]
return ip
except:
# Something went terribly wrong. Just give loopback rather
# than killing everything, because this is often called just
# to provide a default argument
return '127.0.0.1' | [
"def",
"getPublicIP",
"(",
")",
":",
"try",
":",
"# Try to get the internet-facing IP by attempting a connection",
"# to a non-existent server and reading what IP was used.",
"with",
"closing",
"(",
"socket",
".",
"socket",
"(",
"socket",
".",
"AF_INET",
",",
"socket",
".",... | Get the IP that this machine uses to contact the internet.
If behind a NAT, this will still be this computer's IP, and not the router's. | [
"Get",
"the",
"IP",
"that",
"this",
"machine",
"uses",
"to",
"contact",
"the",
"internet",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/batchSystems/options.py#L22-L40 |
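The trick in `getPublicIP` above works because `connect()` on a UDP socket sends no packets: it only makes the kernel choose the route (and therefore the source address) it *would* use. A sketch of the same technique, with the bare `except:` narrowed to `OSError`:

```python
import socket
from contextlib import closing


def get_outbound_ip():
    """Return the local IP the kernel would use to reach the internet."""
    try:
        with closing(socket.socket(socket.AF_INET, socket.SOCK_DGRAM)) as sock:
            # 203.0.113.0/24 is TEST-NET-3 (RFC 5737): guaranteed unroutable
            # to a real host, and connect() on SOCK_DGRAM sends nothing anyway.
            sock.connect(('203.0.113.1', 1))
            return sock.getsockname()[0]
    except OSError:
        # No route at all (e.g. no network interface): fall back to loopback.
        return '127.0.0.1'
```

Note this returns the machine's own address even behind NAT, exactly as the docstring above warns; it is not the router's public address.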
224,997 | DataBiosphere/toil | src/toil/batchSystems/options.py | setDefaultOptions | def setDefaultOptions(config):
"""
Set default options for builtin batch systems. This is required if a Config
object is not constructed from an Options object.
"""
config.batchSystem = "singleMachine"
config.disableAutoDeployment = False
config.environment = {}
config.statePollingWait = None # if not set, will default to seconds in getWaitDuration()
config.maxLocalJobs = multiprocessing.cpu_count()
config.manualMemArgs = False
# single machine
config.scale = 1
config.linkImports = False
# mesos
config.mesosMasterAddress = '%s:5050' % getPublicIP()
# parasol
config.parasolCommand = 'parasol'
config.parasolMaxBatches = 10000 | python | def setDefaultOptions(config):
config.batchSystem = "singleMachine"
config.disableAutoDeployment = False
config.environment = {}
config.statePollingWait = None # if not set, will default to seconds in getWaitDuration()
config.maxLocalJobs = multiprocessing.cpu_count()
config.manualMemArgs = False
# single machine
config.scale = 1
config.linkImports = False
# mesos
config.mesosMasterAddress = '%s:5050' % getPublicIP()
# parasol
config.parasolCommand = 'parasol'
config.parasolMaxBatches = 10000 | [
"def",
"setDefaultOptions",
"(",
"config",
")",
":",
"config",
".",
"batchSystem",
"=",
"\"singleMachine\"",
"config",
".",
"disableAutoDeployment",
"=",
"False",
"config",
".",
"environment",
"=",
"{",
"}",
"config",
".",
"statePollingWait",
"=",
"None",
"# if ... | Set default options for builtin batch systems. This is required if a Config
object is not constructed from an Options object. | [
"Set",
"default",
"options",
"for",
"builtin",
"batch",
"systems",
".",
"This",
"is",
"required",
"if",
"a",
"Config",
"object",
"is",
"not",
"constructed",
"from",
"an",
"Options",
"object",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/batchSystems/options.py#L140-L161 |
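`setDefaultOptions` above assigns defaults imperatively onto a config object. A hypothetical dataclass equivalent (not Toil's actual `Config`) puts the same defaults on the type itself, so any construction path gets sane values:

```python
import multiprocessing
from dataclasses import dataclass, field


@dataclass
class BatchConfig:
    """Sketch: batch-system defaults as dataclass fields."""
    batchSystem: str = "singleMachine"
    disableAutoDeployment: bool = False
    # Mutable defaults need a factory so instances don't share one dict.
    environment: dict = field(default_factory=dict)
    maxLocalJobs: int = field(default_factory=multiprocessing.cpu_count)
    scale: float = 1
    parasolCommand: str = "parasol"
    parasolMaxBatches: int = 10000
```

The `default_factory` for `maxLocalJobs` mirrors the original's `multiprocessing.cpu_count()` call: the value is computed per instance at construction time, not once at import time.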
224,998 | DataBiosphere/toil | src/toil/jobStores/googleJobStore.py | googleRetry | def googleRetry(f):
"""
This decorator retries the wrapped function if google throws any angry service
errors.
It should wrap any function that makes use of the Google Client API
"""
@wraps(f)
def wrapper(*args, **kwargs):
for attempt in retry(delays=truncExpBackoff(),
timeout=300,
predicate=googleRetryPredicate):
with attempt:
return f(*args, **kwargs)
return wrapper | python | def googleRetry(f):
@wraps(f)
def wrapper(*args, **kwargs):
for attempt in retry(delays=truncExpBackoff(),
timeout=300,
predicate=googleRetryPredicate):
with attempt:
return f(*args, **kwargs)
return wrapper | [
"def",
"googleRetry",
"(",
"f",
")",
":",
"@",
"wraps",
"(",
"f",
")",
"def",
"wrapper",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"for",
"attempt",
"in",
"retry",
"(",
"delays",
"=",
"truncExpBackoff",
"(",
")",
",",
"timeout",
"=",
"... | This decorator retries the wrapped function if google throws any angry service
errors.
It should wrap any function that makes use of the Google Client API | [
"This",
"decorator",
"retries",
"the",
"wrapped",
"function",
"if",
"google",
"throws",
"any",
"angry",
"service",
"errors",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/jobStores/googleJobStore.py#L69-L83 |
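The `googleRetry` row above is a predicate-gated retry decorator: retry only the errors the predicate accepts, up to a time/attempt budget. A generic self-contained sketch of the same pattern (the names and the attempt-count budget are this sketch's, not Toil's `retry` helper):

```python
import functools
import time


def retry_on(predicate, attempts=3, delay=0.0):
    """Retry the wrapped function while `predicate` accepts the exception."""
    def decorator(f):
        @functools.wraps(f)  # preserve f's name and docstring, as in googleRetry
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return f(*args, **kwargs)
                except Exception as e:
                    # Re-raise on the final attempt or a non-retryable error.
                    if attempt == attempts - 1 or not predicate(e):
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator


calls = {"n": 0}


@retry_on(lambda e: isinstance(e, ConnectionError), attempts=3)
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"
```

The predicate is the important part: retrying everything masks real bugs, so the decorator should only swallow errors known to be transient (as `googleRetryPredicate` does for Google service errors).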
224,999 | DataBiosphere/toil | src/toil/jobStores/googleJobStore.py | GoogleJobStore._getBlobFromURL | def _getBlobFromURL(cls, url, exists=False):
"""
Gets the blob specified by the url.
caution: makes no api request. blob may not ACTUALLY exist
:param urlparse.ParseResult url: the URL
:param bool exists: if True, then syncs local blob object with cloud
and raises exceptions if it doesn't exist remotely
:return: the blob requested
:rtype: :class:`~google.cloud.storage.blob.Blob`
"""
bucketName = url.netloc
fileName = url.path
# remove leading '/', which can cause problems if fileName is a path
if fileName.startswith('/'):
fileName = fileName[1:]
storageClient = storage.Client()
bucket = storageClient.get_bucket(bucketName)
blob = bucket.blob(bytes(fileName))
if exists:
if not blob.exists():
raise NoSuchFileException
# sync with cloud so info like size is available
blob.reload()
return blob | python | def _getBlobFromURL(cls, url, exists=False):
bucketName = url.netloc
fileName = url.path
# remove leading '/', which can cause problems if fileName is a path
if fileName.startswith('/'):
fileName = fileName[1:]
storageClient = storage.Client()
bucket = storageClient.get_bucket(bucketName)
blob = bucket.blob(bytes(fileName))
if exists:
if not blob.exists():
raise NoSuchFileException
# sync with cloud so info like size is available
blob.reload()
return blob | [
"def",
"_getBlobFromURL",
"(",
"cls",
",",
"url",
",",
"exists",
"=",
"False",
")",
":",
"bucketName",
"=",
"url",
".",
"netloc",
"fileName",
"=",
"url",
".",
"path",
"# remove leading '/', which can cause problems if fileName is a path",
"if",
"fileName",
".",
"s... | Gets the blob specified by the url.
caution: makes no api request. blob may not ACTUALLY exist
:param urlparse.ParseResult url: the URL
:param bool exists: if True, then syncs local blob object with cloud
and raises exceptions if it doesn't exist remotely
:return: the blob requested
:rtype: :class:`~google.cloud.storage.blob.Blob` | [
"Gets",
"the",
"blob",
"specified",
"by",
"the",
"url",
"."
] | a8252277ff814e7bee0971139c2344f88e44b644 | https://github.com/DataBiosphere/toil/blob/a8252277ff814e7bee0971139c2344f88e44b644/src/toil/jobStores/googleJobStore.py#L303-L333 |
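The URL handling in `_getBlobFromURL` above reduces to: bucket name from the netloc, blob name from the path with its leading `/` removed (which would otherwise corrupt path-style blob names). That part is pure stdlib and can be sketched without the Google client:

```python
from urllib.parse import urlparse


def split_gs_url(url):
    """Split a gs://bucket/path URL into (bucket, blob_name)."""
    parsed = urlparse(url)
    blob_name = parsed.path
    # Remove the leading '/', which can cause problems if blob_name is a path.
    if blob_name.startswith('/'):
        blob_name = blob_name[1:]
    return parsed.netloc, blob_name


bucket, blob = split_gs_url('gs://my-bucket/some/dir/file.txt')
```

In the real function these two values then feed `storageClient.get_bucket(...)` and `bucket.blob(...)`; the sketch stops at the parsing step, which is where the subtle bug (a leading slash in the blob name) lives.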