| _id | title | partition | text | language | meta_information |
|---|---|---|---|---|---|
q265800 | create_evaluate_ops | test | def create_evaluate_ops(task_prefix,
data_format,
input_paths,
prediction_path,
metric_fn_and_keys,
validate_fn,
batch_prediction_job_id=None,
project_id=None,
region=None,
dataflow_options=None,
model_uri=None,
model_name=None,
version_name=None,
dag=None):
"""
Creates the Operators needed for model evaluation and returns them.
It gets predictions over the inputs via the Cloud ML Engine BatchPrediction API by
calling MLEngineBatchPredictionOperator, then summarizes and validates
the result via Cloud Dataflow using DataFlowPythonOperator.
For details and pricing about Batch prediction, please refer to the website
https://cloud.google.com/ml-engine/docs/how-tos/batch-predict
and for Cloud Dataflow, https://cloud.google.com/dataflow/docs/
It returns three chained operators for prediction, summary, and validation,
named as <prefix>-prediction, <prefix>-summary, and <prefix>-validation,
respectively.
(<prefix> should contain only alphanumeric characters or hyphen.)
The upstream and downstream can be set accordingly like:
pred, _, val = create_evaluate_ops(...)
pred.set_upstream(upstream_op)
...
downstream_op.set_upstream(val)
Callers will provide two python callables, metric_fn and validate_fn, in
order to customize the evaluation behavior as they wish.
- metric_fn receives a dictionary per instance derived from json in the
batch prediction result. The keys might vary depending on the model.
It should return a tuple of metrics.
- validate_fn receives a dictionary of the averaged metrics that metric_fn
generated over all instances.
The key/value of the dictionary matches to what's given by
metric_fn_and_keys arg.
The dictionary contains an additional metric, 'count' to represent the
total number of instances received for evaluation.
The function should raise an exception to mark the task as failed if the
validation result is not okay to proceed (i.e. to set the trained
version as default).
Typical examples are like this:
def get_metric_fn_and_keys():
import math # imports should be outside of the metric_fn below.
def error_and_squared_error(inst):
label = float(inst['input_label'])
classes = float(inst['classes']) # 0 or 1
err = abs(classes-label)
squared_err = math.pow(classes-label, 2)
return (err, squared_err) # returns a tuple.
return error_and_squared_error, ['err', 'mse'] # key order must match.
def validate_err_and_count(summary):
if summary['err'] > 0.2:
raise ValueError('Too high err>0.2; summary=%s' % summary)
if summary['mse'] > 0.05:
raise ValueError('Too high mse>0.05; summary=%s' % summary)
if summary['count'] < 1000:
raise ValueError('Too few instances<1000; summary=%s' % summary)
return summary
For the details on the other BatchPrediction-related arguments (project_id,
job_id, region, data_format, input_paths, prediction_path, model_uri),
please refer to MLEngineBatchPredictionOperator too.
:param task_prefix: a prefix for the tasks. Only alphanumeric characters and
hyphen are allowed (no underscores), since this will be used as dataflow
job name, which doesn't allow other characters.
:type task_prefix: str
:param data_format: either of 'TEXT', 'TF_RECORD', 'TF_RECORD_GZIP'
:type data_format: str
:param input_paths: a list of input paths to be sent to BatchPrediction.
:type input_paths: list[str]
:param prediction_path: GCS path to put the prediction results in.
:type prediction_path: str
:param metric_fn_and_keys: a tuple of metric_fn and metric_keys:
- metric_fn is a function that accepts a dictionary (for an instance),
and returns a tuple of metric(s) that it calculates.
- metric_keys is a list of strings to denote the key of each metric.
:type metric_fn_and_keys: tuple of a function and a list[str]
:param validate_fn: a function to validate whether the averaged metric(s) is
good enough to push the model.
:type validate_fn: function
:param batch_prediction_job_id: the id to use for the Cloud ML Batch
prediction job. Passed directly to the MLEngineBatchPredictionOperator as
the job_id argument.
:type batch_prediction_job_id: str
:param project_id: the Google Cloud Platform project id in which to execute
Cloud ML Batch Prediction and Dataflow jobs. If None, then the `dag`'s
`default_args['project_id']` will be used.
:type project_id: str
:param region: the Google Cloud Platform region in which to execute Cloud ML
Batch Prediction and Dataflow jobs. If None, then the `dag`'s
`default_args['region']` will be used.
:type region: str
:param dataflow_options: options to run Dataflow jobs. If None, then the
`dag`'s `default_args['dataflow_default_options']` will be used.
:type dataflow_options: dictionary
:param model_uri: GCS path of the model exported by Tensorflow using
tensorflow.estimator.export_savedmodel(). It cannot be used with
model_name or version_name below. See MLEngineBatchPredictionOperator for
more detail.
:type model_uri: str
:param model_name: Used to indicate a model to use for prediction. Can be
used in combination with version_name, but cannot be used together with
model_uri. See MLEngineBatchPredictionOperator for more detail. If None,
then the `dag`'s `default_args['model_name']` will be used.
:type model_name: str
:param version_name: Used to indicate a model version to use for prediction,
in combination with model_name. Cannot be used together with model_uri.
See MLEngineBatchPredictionOperator for more detail. If None, then the
`dag`'s `default_args['version_name']` will be used.
:type version_name: str
:param dag: The `DAG` to use for all Operators.
:type dag: | python | {
"resource": ""
} |
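
A minimal, standalone sketch of the metric_fn / validate_fn contract described in the docstring above, run against a handful of hypothetical prediction instances rather than a real BatchPrediction output; the instance keys, thresholds, and the count cutoff are illustrative assumptions, not what the summary Dataflow job actually produces.

```python
import math


def get_metric_fn_and_keys():
    # Per-instance metric function plus the metric key order it corresponds to.
    def error_and_squared_error(inst):
        label = float(inst['input_label'])
        classes = float(inst['classes'])  # 0 or 1
        err = abs(classes - label)
        squared_err = math.pow(classes - label, 2)
        return (err, squared_err)
    return error_and_squared_error, ['err', 'mse']


def validate_err_and_count(summary):
    # Raising an exception marks the validation task as failed in the real pipeline.
    if summary['err'] > 0.2:
        raise ValueError('Too high err>0.2; summary=%s' % summary)
    if summary['count'] < 3:
        raise ValueError('Too few instances; summary=%s' % summary)
    return summary


# Hypothetical instances standing in for rows of the batch prediction result.
instances = [
    {'input_label': 1.0, 'classes': 1.0},
    {'input_label': 0.0, 'classes': 0.0},
    {'input_label': 1.0, 'classes': 1.0},
]

metric_fn, keys = get_metric_fn_and_keys()
totals = [0.0] * len(keys)
for inst in instances:
    for i, value in enumerate(metric_fn(inst)):
        totals[i] += value

summary = {key: total / len(instances) for key, total in zip(keys, totals)}
summary['count'] = len(instances)  # the extra 'count' metric described above
print(validate_err_and_count(summary))  # {'err': 0.0, 'mse': 0.0, 'count': 3}
```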
q265801 | mkdirs | test | def mkdirs(path, mode):
"""
Creates the directory specified by path, creating intermediate directories
as necessary. If directory already exists, this is a no-op.
:param path: The directory to create
:type path: str
:param mode: The mode to give to the directory e.g. 0o755, ignores umask
:type mode: int
""" | python | {
"resource": ""
} |
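
A sketch of the described mkdirs behaviour using only the standard library, assuming a POSIX system; handling "ignores umask" by an explicit chmod of the leaf directory is an assumption, not necessarily how the original implements it.

```python
import os
import tempfile


def mkdirs(path, mode):
    """Create `path` and any missing parents; do nothing if it already exists."""
    os.makedirs(path, exist_ok=True)
    # os.makedirs applies the process umask to the final mode, so chmod the
    # leaf directory explicitly to honour the "ignores umask" contract.
    os.chmod(path, mode)


demo = os.path.join(tempfile.gettempdir(), 'mkdirs_demo', 'a', 'b')
mkdirs(demo, 0o755)
print(oct(os.stat(demo).st_mode & 0o777))  # 0o755
```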
q265802 | _convert_to_float_if_possible | test | def _convert_to_float_if_possible(s):
"""
A small helper function to convert a string to a numeric value
if appropriate
:param s: the string to be converted
:type | python | {
"resource": ""
} |
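
The text cell above is cut off; a plausible minimal implementation of the described helper, written here as an assumption rather than a copy of the original body:

```python
def _convert_to_float_if_possible(s):
    """Return float(s) when the string parses as a number, else return s unchanged."""
    try:
        return float(s)
    except (ValueError, TypeError):
        return s


print(_convert_to_float_if_possible('3.14'))  # 3.14
print(_convert_to_float_if_possible('n/a'))   # 'n/a'
```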
q265803 | make_aware | test | def make_aware(value, timezone=None):
"""
Make a naive datetime.datetime in a given time zone aware.
:param value: datetime
:param timezone: timezone
:return: localized datetime in settings.TIMEZONE or timezone
"""
if timezone is None:
timezone = TIMEZONE
# Check that we won't overwrite the timezone of an aware datetime.
if is_localized(value):
raise ValueError(
"make_aware expects a naive datetime, got %s" % value)
if hasattr(value, 'fold'):
# In case of python 3.6 we want to do the same that pendulum does for python3.5
# i.e in case we move clock back we want to schedule the run at the time of the second
# instance of the same clock time rather than the | python | {
"resource": ""
} |
q265804 | make_naive | test | def make_naive(value, timezone=None):
"""
Make an aware datetime.datetime naive in a given time zone.
:param value: datetime
:param timezone: timezone
:return: naive datetime
"""
if timezone is None:
timezone = TIMEZONE
# Emulate the behavior of astimezone() on Python < 3.6.
if is_naive(value):
raise ValueError("make_naive() cannot be applied to a naive datetime")
o = value.astimezone(timezone)
| python | {
"resource": ""
} |
q265805 | datetime | test | def datetime(*args, **kwargs):
"""
Wrapper around datetime.datetime that adds settings.TIMEZONE if tzinfo not specified
:return: datetime.datetime
"""
| python | {
"resource": ""
} |
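
A standalone sketch of the make_aware / make_naive / datetime wrapper trio, using the standard-library zoneinfo (Python 3.9+) in place of Airflow's settings.TIMEZONE; the UTC default and the replace-based localization (which skips the DST fold handling mentioned above) are assumptions for illustration.

```python
import datetime as dt
from zoneinfo import ZoneInfo

DEFAULT_TZ = ZoneInfo("UTC")  # stand-in for settings.TIMEZONE


def make_aware(value, timezone=None):
    timezone = timezone or DEFAULT_TZ
    if value.tzinfo is not None:
        raise ValueError("make_aware expects a naive datetime, got %s" % value)
    return value.replace(tzinfo=timezone)


def make_naive(value, timezone=None):
    timezone = timezone or DEFAULT_TZ
    if value.tzinfo is None:
        raise ValueError("make_naive() cannot be applied to a naive datetime")
    return value.astimezone(timezone).replace(tzinfo=None)


def datetime(*args, **kwargs):
    # Add the default timezone when the caller did not pass tzinfo.
    kwargs.setdefault("tzinfo", DEFAULT_TZ)
    return dt.datetime(*args, **kwargs)


aware = make_aware(dt.datetime(2023, 1, 1, 12, 0))
print(aware, make_naive(aware), datetime(2023, 1, 1))
```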
q265806 | DruidDbApiHook.get_conn | test | def get_conn(self):
"""
Establish a connection to druid broker.
"""
conn = self.get_connection(self.druid_broker_conn_id)
druid_broker_conn = connect(
host=conn.host,
port=conn.port,
path=conn.extra_dejson.get('endpoint', '/druid/v2/sql'),
| python | {
"resource": ""
} |
q265807 | HttpHook.get_conn | test | def get_conn(self, headers=None):
"""
Returns http session for use with requests
:param headers: additional headers to be passed through as a dictionary
:type headers: dict
"""
session = requests.Session()
if self.http_conn_id:
conn = self.get_connection(self.http_conn_id)
if "://" in conn.host:
self.base_url = conn.host
else:
| python | {
"resource": ""
} |
q265808 | HttpHook.run | test | def run(self, endpoint, data=None, headers=None, extra_options=None):
"""
Performs the request
:param endpoint: the endpoint to be called i.e. resource/v1/query?
:type endpoint: str
:param data: payload to be uploaded or request parameters
:type data: dict
:param headers: additional headers to be passed through as a dictionary
:type headers: dict
:param extra_options: additional options to be used when executing the request
i.e. {'check_response': False} to avoid checking and raising exceptions on non
2XX or 3XX status codes
:type extra_options: dict
"""
extra_options = extra_options or {}
session = self.get_conn(headers)
if self.base_url and not self.base_url.endswith('/') and \
| python | {
"resource": ""
} |
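
The run()/check_response()/run_and_check() flow sketched with plain requests, outside of Airflow; the base URL, endpoint, and the check_response option handling are placeholders and assumptions, not the hook's actual code.

```python
import requests


def run(base_url, endpoint, data=None, headers=None, extra_options=None):
    """Build, send and check a GET request, roughly mirroring the flow above."""
    extra_options = extra_options or {}
    session = requests.Session()
    if headers:
        session.headers.update(headers)
    if base_url and not base_url.endswith('/') and not endpoint.startswith('/'):
        url = base_url + '/' + endpoint
    else:
        url = base_url + endpoint
    req = requests.Request('GET', url, params=data)
    prepped = session.prepare_request(req)
    response = session.send(
        prepped,
        stream=extra_options.get('stream', False),
        verify=extra_options.get('verify', True),
        timeout=extra_options.get('timeout'),
    )
    if extra_options.get('check_response', True):
        response.raise_for_status()  # the real hook raises AirflowException here
    return response


# resp = run('https://example.com', 'api/v1/status', extra_options={'timeout': 10})
```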
q265809 | HttpHook.check_response | test | def check_response(self, response):
"""
Checks the status code and raises an AirflowException on status codes that
are neither 2XX nor 3XX
:param response: A requests response object
:type response: requests.response
"""
try:
response.raise_for_status()
| python | {
"resource": ""
} |
q265810 | HttpHook.run_and_check | test | def run_and_check(self, session, prepped_request, extra_options):
"""
Grabs extra options like timeout and actually runs the request,
checking for the result
:param session: the session to be used to execute the request
:type session: requests.Session
:param prepped_request: the prepared request generated in run()
:type prepped_request: session.prepare_request
:param extra_options: additional options to be used when executing the request
i.e. {'check_response': False} to avoid checking and raising exceptions on non 2XX
or 3XX status codes
:type extra_options: dict
"""
extra_options = extra_options or {}
try:
response = session.send(
prepped_request,
stream=extra_options.get("stream", False),
verify=extra_options.get("verify", True),
| python | {
"resource": ""
} |
q265811 | create_session | test | def create_session():
"""
Contextmanager that will create and teardown a session.
"""
session = settings.Session()
try:
yield session
| python | {
"resource": ""
} |
q265812 | provide_session | test | def provide_session(func):
"""
Function decorator that provides a session if it isn't provided.
If you want to reuse a session or run the function as part of a
database transaction, pass it to the function; if not, this wrapper
will create one and close it for you.
"""
@wraps(func)
def wrapper(*args, **kwargs):
| python | {
"resource": ""
} |
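
A dependency-free sketch of the two session utilities above; DummySession stands in for settings.Session, and the commit/rollback/close flow plus the keyword-only session check are assumptions based on the docstrings rather than the original bodies.

```python
from contextlib import contextmanager
from functools import wraps


class DummySession:
    """Stand-in for settings.Session()."""
    def commit(self):
        print("commit")

    def rollback(self):
        print("rollback")

    def close(self):
        print("close")


@contextmanager
def create_session():
    session = DummySession()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()


def provide_session(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        if kwargs.get("session") is not None:
            return func(*args, **kwargs)  # caller supplied a session; reuse it
        with create_session() as session:
            return func(*args, session=session, **kwargs)
    return wrapper


@provide_session
def count_rows(session=None):
    return 42


print(count_rows())                          # creates and closes its own session
print(count_rows(session=DummySession()))    # reuses the caller's session
```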
q265813 | resetdb | test | def resetdb():
"""
Clear out the database
"""
from airflow import models
# alembic adds significant import time, so we import it lazily
from alembic.migration import MigrationContext
log.info("Dropping tables that exist")
models.base.Base.metadata.drop_all(settings.engine)
mc = MigrationContext.configure(settings.engine)
| python | {
"resource": ""
} |
q265814 | PrestoHook._get_pretty_exception_message | test | def _get_pretty_exception_message(e):
"""
Parses some DatabaseError to provide a better error message
"""
if (hasattr(e, 'message') and
'errorName' in e.message and
'message' in e.message):
| python | {
"resource": ""
} |
q265815 | PrestoHook.get_records | test | def get_records(self, hql, parameters=None):
"""
Get a set of records from Presto
"""
try:
| python | {
"resource": ""
} |
q265816 | PrestoHook.get_pandas_df | test | def get_pandas_df(self, hql, parameters=None):
"""
Get a pandas dataframe from a sql query.
"""
import pandas
cursor = self.get_cursor()
try:
cursor.execute(self._strip_sql(hql), parameters)
data = cursor.fetchall()
except DatabaseError as e:
| python | {
"resource": ""
} |
q265817 | PrestoHook.run | test | def run(self, hql, parameters=None):
"""
Execute the statement against Presto. Can be used to create views.
"""
| python | {
"resource": ""
} |
q265818 | PrestoHook.insert_rows | test | def insert_rows(self, table, rows, target_fields=None):
"""
A generic way to insert a set of tuples into a table.
:param table: Name of the target table
:type table: str
:param rows: The rows to insert into the table
:type rows: iterable of tuples
| python | {
"resource": ""
} |
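
The generic tuple-insert pattern described above, sketched against an in-memory sqlite3 database rather than Presto; the statement building, column quoting and single-batch executemany are assumptions for illustration.

```python
import sqlite3


def insert_rows(conn, table, rows, target_fields=None):
    """Insert an iterable of tuples into `table`, optionally naming the columns."""
    rows = list(rows)
    if not rows:
        return
    placeholders = ", ".join("?" for _ in rows[0])
    fields = "({})".format(", ".join(target_fields)) if target_fields else ""
    sql = "INSERT INTO {} {} VALUES ({})".format(table, fields, placeholders)
    conn.executemany(sql, rows)
    conn.commit()


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
insert_rows(conn, "users", [(1, "ada"), (2, "grace")], target_fields=["id", "name"])
print(conn.execute("SELECT * FROM users").fetchall())  # [(1, 'ada'), (2, 'grace')]
```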
q265819 | AzureCosmosDBHook.get_conn | test | def get_conn(self):
"""
Return a cosmos db client.
"""
if self.cosmos_client is not None:
return self.cosmos_client
# Initialize the Python Azure Cosmos DB client
| python | {
"resource": ""
} |
q265820 | AzureCosmosDBHook.does_collection_exist | test | def does_collection_exist(self, collection_name, database_name=None):
"""
Checks if a collection exists in CosmosDB.
"""
if collection_name is None:
raise AirflowBadRequest("Collection name cannot be None.")
existing_container = list(self.get_conn().QueryContainers(
get_database_link(self.__get_database_name(database_name)), {
"query": "SELECT * FROM | python | {
"resource": ""
} |
q265821 | AzureCosmosDBHook.create_collection | test | def create_collection(self, collection_name, database_name=None):
"""
Creates a new collection in the CosmosDB database.
"""
if collection_name is None:
raise AirflowBadRequest("Collection name cannot be None.")
# We need to check to see if this container already exists so we don't try
# to create it twice
existing_container = list(self.get_conn().QueryContainers(
get_database_link(self.__get_database_name(database_name)), {
"query": "SELECT * FROM r WHERE r.id=@id",
"parameters": [
| python | {
"resource": ""
} |
q265822 | AzureCosmosDBHook.does_database_exist | test | def does_database_exist(self, database_name):
"""
Checks if a database exists in CosmosDB.
"""
if database_name is None:
raise AirflowBadRequest("Database name cannot be None.")
existing_database = list(self.get_conn().QueryDatabases({
"query": "SELECT * FROM r WHERE r.id=@id",
| python | {
"resource": ""
} |
q265823 | AzureCosmosDBHook.create_database | test | def create_database(self, database_name):
"""
Creates a new database in CosmosDB.
"""
if database_name is None:
raise AirflowBadRequest("Database name cannot be None.")
# We need to check to see if this database already exists so we don't try
# to create it twice
existing_database = list(self.get_conn().QueryDatabases({
"query": "SELECT * FROM r WHERE r.id=@id",
"parameters": [
| python | {
"resource": ""
} |
q265824 | AzureCosmosDBHook.delete_database | test | def delete_database(self, database_name):
"""
Deletes an existing database in CosmosDB.
"""
if database_name is None:
| python | {
"resource": ""
} |
q265825 | AzureCosmosDBHook.delete_collection | test | def delete_collection(self, collection_name, database_name=None):
"""
Deletes an existing collection in the CosmosDB database.
"""
if collection_name is None:
raise AirflowBadRequest("Collection name cannot be None.")
| python | {
"resource": ""
} |
q265826 | AzureCosmosDBHook.insert_documents | test | def insert_documents(self, documents, database_name=None, collection_name=None):
"""
Insert a list of new documents into an existing collection in the CosmosDB database.
"""
if documents is None:
raise AirflowBadRequest("You cannot insert empty documents")
created_documents = []
for single_document in documents:
created_documents.append(
self.get_conn().CreateItem(
| python | {
"resource": ""
} |
q265827 | AzureCosmosDBHook.delete_document | test | def delete_document(self, document_id, database_name=None, collection_name=None):
"""
Delete an existing document out of a collection in the CosmosDB database.
"""
if document_id is None:
raise AirflowBadRequest("Cannot delete a document without an id")
self.get_conn().DeleteItem(
| python | {
"resource": ""
} |
q265828 | AzureCosmosDBHook.get_document | test | def get_document(self, document_id, database_name=None, collection_name=None):
"""
Get a document from an existing collection in the CosmosDB database.
"""
if document_id is None:
raise AirflowBadRequest("Cannot get a document without | python | {
"resource": ""
} |
q265829 | AzureCosmosDBHook.get_documents | test | def get_documents(self, sql_string, database_name=None, collection_name=None, partition_key=None):
"""
Get a list of documents from an existing collection in the CosmosDB database via SQL query.
"""
if sql_string is None:
raise AirflowBadRequest("SQL query string cannot be None")
# Query them in SQL
query = {'query': sql_string}
try:
result_iterable = self.get_conn().QueryItems(
get_collection_link(
| python | {
"resource": ""
} |
q265830 | GcfHook.get_function | test | def get_function(self, name):
"""
Returns the Cloud Function with the given name.
:param name: Name of the function.
:type name: str
:return: A Cloud Functions object representing the function.
:rtype: dict
""" | python | {
"resource": ""
} |
q265831 | GcfHook.create_new_function | test | def create_new_function(self, location, body, project_id=None):
"""
Creates a new function in Cloud Function in the location specified in the body.
:param location: The location of the function.
:type location: str
:param body: The body required by the Cloud Functions insert API.
:type body: dict
:param project_id: Optional, Google Cloud Project project_id where the function belongs.
If set to None or missing, the default project_id from the GCP connection is used.
:type project_id: str
:return: None
"""
| python | {
"resource": ""
} |
q265832 | GcfHook.update_function | test | def update_function(self, name, body, update_mask):
"""
Updates Cloud Functions according to the specified update mask.
:param name: The name of the function.
:type name: str
:param body: The body required by the cloud function patch API.
:type body: dict
:param update_mask: The update mask | python | {
"resource": ""
} |
q265833 | GcfHook.upload_function_zip | test | def upload_function_zip(self, location, zip_path, project_id=None):
"""
Uploads zip file with sources.
:param location: The location where the function is created.
:type location: str
:param zip_path: The path of the valid .zip file to upload.
:type zip_path: str
:param project_id: Optional, Google Cloud Project project_id where the function belongs.
If set to None or missing, the default project_id from the GCP connection is used.
:type project_id: str
:return: The upload URL that was returned by generateUploadUrl method.
"""
response = self.get_conn().projects().locations().functions().generateUploadUrl(
parent=self._full_location(project_id, location)
).execute(num_retries=self.num_retries)
upload_url = response.get('uploadUrl')
with open(zip_path, 'rb') as fp:
| python | {
"resource": ""
} |
q265834 | GcfHook.delete_function | test | def delete_function(self, name):
"""
Deletes the specified Cloud Function.
:param name: The name of the function.
:type name: str
:return: None
| python | {
"resource": ""
} |
q265835 | BaseTIDep.get_dep_statuses | test | def get_dep_statuses(self, ti, session, dep_context=None):
"""
Wrapper around the private _get_dep_statuses method that contains some global
checks for all dependencies.
:param ti: the task instance to get the dependency status for
:type ti: airflow.models.TaskInstance
:param session: database session
:type session: sqlalchemy.orm.session.Session
:param dep_context: the context for which this dependency should be evaluated for
:type dep_context: DepContext
"""
# this avoids a circular dependency
from airflow.ti_deps.dep_context import DepContext
if dep_context is None:
dep_context = | python | {
"resource": ""
} |
q265836 | BaseTIDep.is_met | test | def is_met(self, ti, session, dep_context=None):
"""
Returns whether or not this dependency is met for a given task instance. A
dependency is considered met if all of the dependency statuses it reports are
passing.
:param ti: the task instance to see if this dependency is met for
:type ti: airflow.models.TaskInstance
:param session: database session
:type session: | python | {
"resource": ""
} |
q265837 | BaseTIDep.get_failure_reasons | test | def get_failure_reasons(self, ti, session, dep_context=None):
"""
Returns an iterable of strings that explain why this dependency wasn't met.
:param ti: the task instance to see if this dependency is met for
:type ti: airflow.models.TaskInstance
:param session: database session
:type session: sqlalchemy.orm.session.Session
:param dep_context: The context this dependency is being checked under that stores
| python | {
"resource": ""
} |
q265838 | _parse_s3_config | test | def _parse_s3_config(config_file_name, config_format='boto', profile=None):
"""
Parses a config file for s3 credentials. Can currently
parse boto, s3cmd.conf and AWS SDK config formats
:param config_file_name: path to the config file
:type config_file_name: str
:param config_format: config type. One of "boto", "s3cmd" or "aws".
Defaults to "boto"
:type config_format: str
:param profile: profile name in AWS type config file
:type profile: str
"""
config = configparser.ConfigParser()
if config.read(config_file_name): # pragma: no cover
sections = config.sections()
else:
raise AirflowException("Couldn't read {0}".format(config_file_name))
# Setting option names depending on file format
if config_format is None:
config_format = 'boto'
conf_format = config_format.lower()
if conf_format == 'boto': # pragma: no cover
if profile is not None and 'profile ' + profile in sections:
cred_section = 'profile ' + profile
else:
cred_section = 'Credentials'
elif conf_format == 'aws' and profile is not None:
cred_section = profile
else:
cred_section = 'default'
# Option names
| python | {
"resource": ""
} |
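
A trimmed sketch of the boto-format branch of the parser above, reading credentials from an in-memory config string; the option names aws_access_key_id / aws_secret_access_key are the conventional boto names and are assumed here, since the original option-name block is truncated.

```python
import configparser

BOTO_STYLE_CONFIG = """
[Credentials]
aws_access_key_id = AKIA_EXAMPLE
aws_secret_access_key = example-secret
"""


def parse_boto_config(text, profile=None):
    """Return (access_key, secret_key) from a boto-style config string."""
    config = configparser.ConfigParser()
    config.read_string(text)
    if profile is not None and 'profile ' + profile in config.sections():
        cred_section = 'profile ' + profile
    else:
        cred_section = 'Credentials'
    return (config.get(cred_section, 'aws_access_key_id'),
            config.get(cred_section, 'aws_secret_access_key'))


print(parse_boto_config(BOTO_STYLE_CONFIG))
```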
q265839 | AwsHook.get_credentials | test | def get_credentials(self, region_name=None):
"""Get the underlying `botocore.Credentials` object.
This contains the following authentication attributes: access_key, secret_key and token.
"""
session, _ = self._get_credentials(region_name)
# Credentials are refreshable, so accessing your access key and
| python | {
"resource": ""
} |
q265840 | VerticaHook.get_conn | test | def get_conn(self):
"""
Returns a Vertica connection object
"""
conn = self.get_connection(self.vertica_conn_id)
conn_config = {
"user": conn.login,
"password": conn.password or '',
"database": conn.schema,
"host": conn.host or 'localhost' | python | {
"resource": ""
} |
q265841 | StreamLogWriter.flush | test | def flush(self):
"""
Ensure all logging output has been flushed
"""
if len(self._buffer) > 0:
| python | {
"resource": ""
} |
q265842 | correct_maybe_zipped | test | def correct_maybe_zipped(fileloc):
"""
If the path contains a folder with a .zip suffix, then
the folder is treated as a zip archive and the path to the zip is returned.
"""
_, archive, filename = re.search(
| python | {
"resource": ""
} |
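
The regex in the snippet above is truncated; a hedged reimplementation of the documented behaviour, using a plain string split instead of the original pattern and assuming a POSIX path separator:

```python
import os


def correct_maybe_zipped(fileloc):
    """Return the path to the .zip archive when `fileloc` points inside one."""
    marker = '.zip' + os.sep
    if marker in fileloc:
        return fileloc.split(marker, 1)[0] + '.zip'
    return fileloc


print(correct_maybe_zipped('/dags/bundle.zip/my_dag.py'))  # /dags/bundle.zip
print(correct_maybe_zipped('/dags/my_dag.py'))             # unchanged
```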
q265843 | list_py_file_paths | test | def list_py_file_paths(directory, safe_mode=True,
include_examples=None):
"""
Traverse a directory and look for Python files.
:param directory: the directory to traverse
:type directory: unicode
:param safe_mode: whether to use a heuristic to determine whether a file
contains Airflow DAG definitions
:return: a list of paths to Python files in the specified directory
:rtype: list[unicode]
"""
if include_examples is None:
include_examples = conf.getboolean('core', 'LOAD_EXAMPLES')
file_paths = []
if directory is None:
return []
elif os.path.isfile(directory):
return [directory]
elif os.path.isdir(directory):
patterns_by_dir = {}
for root, dirs, files in os.walk(directory, followlinks=True):
patterns = patterns_by_dir.get(root, [])
ignore_file = os.path.join(root, '.airflowignore')
if os.path.isfile(ignore_file):
with open(ignore_file, 'r') as f:
# If we have new patterns create a copy so we don't change
# the previous list (which would affect other subdirs)
patterns += [re.compile(p) for p in f.read().split('\n') if p]
# If we can ignore any subdirs entirely we should - fewer paths
# to walk is better. We have to modify the ``dirs`` array in
# place for this to affect os.walk
dirs[:] = [
d
for d in dirs
if not any(p.search(os.path.join(root, d)) for p in patterns)
]
# We want patterns defined in a parent folder's .airflowignore to
# apply to subdirs too
for d in dirs:
patterns_by_dir[os.path.join(root, d)] = patterns
for f in files:
try:
file_path = os.path.join(root, f)
if not os.path.isfile(file_path): | python | {
"resource": ""
} |
q265844 | SimpleTaskInstance.construct_task_instance | test | def construct_task_instance(self, session=None, lock_for_update=False):
"""
Construct a TaskInstance from the database based on the primary key
:param session: DB session.
:param lock_for_update: if True, indicates that the database should
lock the TaskInstance (issuing a FOR UPDATE clause) until the
session is committed.
"""
TI = airflow.models.TaskInstance
qry = session.query(TI).filter(
TI.dag_id == self._dag_id,
| python | {
"resource": ""
} |
q265845 | DagFileProcessorAgent.start | test | def start(self):
"""
Launch DagFileProcessorManager processor and start DAG parsing loop in manager.
"""
self._process = self._launch_process(self._dag_directory,
self._file_paths,
self._max_runs, | python | {
"resource": ""
} |
q265846 | DagFileProcessorAgent.terminate | test | def terminate(self):
"""
Send termination signal to DAG parsing processor manager
and expect it to terminate all DAG file processors.
"""
| python | {
"resource": ""
} |
q265847 | DagFileProcessorManager._exit_gracefully | test | def _exit_gracefully(self, signum, frame):
"""
Helper method to clean up DAG file processors to avoid leaving orphan processes.
"""
self.log.info("Exiting gracefully upon receiving | python | {
"resource": ""
} |
q265848 | DagFileProcessorManager.start | test | def start(self):
"""
Use multiple processes to parse and generate tasks for the
DAGs in parallel. By processing them in separate processes,
we can get parallelism and isolation from potentially harmful
user code.
"""
self.log.info("Processing files using up to %s processes at a time ", self._parallelism)
self.log.info("Process each file at most once every %s seconds", self._file_process_interval)
self.log.info(
| python | {
"resource": ""
} |
q265849 | DagFileProcessorManager.start_in_async | test | def start_in_async(self):
"""
Parse DAG files repeatedly in a standalone loop.
"""
while True:
loop_start_time = time.time()
if self._signal_conn.poll():
agent_signal = self._signal_conn.recv()
if agent_signal == DagParsingSignal.TERMINATE_MANAGER:
self.terminate()
break
elif agent_signal == DagParsingSignal.END_MANAGER:
self.end()
sys.exit(os.EX_OK)
self._refresh_dag_dir()
simple_dags = self.heartbeat()
for simple_dag in simple_dags:
self._result_queue.put(simple_dag)
self._print_stat()
all_files_processed | python | {
"resource": ""
} |
q265850 | DagFileProcessorManager.start_in_sync | test | def start_in_sync(self):
"""
Parse DAG files in a loop controlled by DagParsingSignal.
The actual DAG parsing loop will run once upon receiving an
agent heartbeat message and will report done when it finishes the loop.
"""
while True:
agent_signal = self._signal_conn.recv()
if agent_signal == DagParsingSignal.TERMINATE_MANAGER:
self.terminate()
break
elif agent_signal == DagParsingSignal.END_MANAGER:
self.end()
sys.exit(os.EX_OK)
elif agent_signal == DagParsingSignal.AGENT_HEARTBEAT:
self._refresh_dag_dir()
simple_dags = self.heartbeat()
for simple_dag in simple_dags:
| python | {
"resource": ""
} |
q265851 | DagFileProcessorManager._refresh_dag_dir | test | def _refresh_dag_dir(self):
"""
Refresh file paths from dag dir if we haven't done it for too long.
"""
elapsed_time_since_refresh = (timezone.utcnow() -
self.last_dag_dir_refresh_time).total_seconds()
if elapsed_time_since_refresh > self.dag_dir_list_interval:
# Build up a list of Python files that could contain DAGs
| python | {
"resource": ""
} |
q265852 | DagFileProcessorManager._print_stat | test | def _print_stat(self):
"""
Occasionally print out stats about how fast the files are getting processed
"""
if ((timezone.utcnow() - self.last_stat_print_time).total_seconds() >
| python | {
"resource": ""
} |
q265853 | DagFileProcessorManager.clear_nonexistent_import_errors | test | def clear_nonexistent_import_errors(self, session):
"""
Clears import errors for files that no longer exist.
:param session: session for ORM operations
:type session: sqlalchemy.orm.session.Session
"""
query = session.query(errors.ImportError)
if self._file_paths:
| python | {
"resource": ""
} |
q265854 | DagFileProcessorManager._log_file_processing_stats | test | def _log_file_processing_stats(self, known_file_paths):
"""
Print out stats about how files are getting processed.
:param known_file_paths: a list of file paths that may contain Airflow
DAG definitions
:type known_file_paths: list[unicode]
:return: None
"""
# File Path: Path to the file containing the DAG definition
# PID: PID associated with the process that's processing the file. May
# be empty.
# Runtime: If the process is currently running, how long it's been
# running for in seconds.
# Last Runtime: If the process ran before, how long did it take to
# finish in seconds
# Last Run: When the file finished processing in the previous run.
headers = ["File Path",
"PID",
"Runtime",
"Last Runtime",
"Last Run"]
rows = []
for file_path in known_file_paths:
last_runtime = self.get_last_runtime(file_path)
file_name = os.path.basename(file_path)
file_name = os.path.splitext(file_name)[0].replace(os.sep, '.')
if last_runtime:
Stats.gauge(
'dag_processing.last_runtime.{}'.format(file_name),
last_runtime
)
processor_pid = self.get_pid(file_path)
processor_start_time = self.get_start_time(file_path)
runtime = ((timezone.utcnow() - processor_start_time).total_seconds()
if processor_start_time else None)
last_run = self.get_last_finish_time(file_path)
if last_run:
seconds_ago = (timezone.utcnow() - last_run).total_seconds()
Stats.gauge(
'dag_processing.last_run.seconds_ago.{}'.format(file_name),
seconds_ago
)
rows.append((file_path,
| python | {
"resource": ""
} |
q265855 | DagFileProcessorManager.set_file_paths | test | def set_file_paths(self, new_file_paths):
"""
Update this with a new set of paths to DAG definition files.
:param new_file_paths: list of paths to DAG definition files
:type new_file_paths: list[unicode]
:return: None
"""
self._file_paths = new_file_paths
self._file_path_queue = [x for x in self._file_path_queue
if x in new_file_paths]
# Stop processors that are working on deleted files
filtered_processors = {}
| python | {
"resource": ""
} |
q265856 | DagFileProcessorManager.wait_until_finished | test | def wait_until_finished(self):
"""
Sleeps until all the processors are done.
"""
for file_path, processor | python | {
"resource": ""
} |
q265857 | DagFileProcessorManager.heartbeat | test | def heartbeat(self):
"""
This should be periodically called by the manager loop. This method will
kick off new processes to process DAG definition files and read the
results from the finished processors.
:return: a list of SimpleDags that were produced by processors that
have finished since the last time this was called
:rtype: list[airflow.utils.dag_processing.SimpleDag]
"""
finished_processors = {}
""":type : dict[unicode, AbstractDagFileProcessor]"""
running_processors = {}
""":type : dict[unicode, AbstractDagFileProcessor]"""
for file_path, processor in self._processors.items():
if processor.done:
self.log.debug("Processor for %s finished", file_path)
now = timezone.utcnow()
finished_processors[file_path] = processor
self._last_runtime[file_path] = (now -
processor.start_time).total_seconds()
self._last_finish_time[file_path] = now
self._run_count[file_path] += 1
else:
running_processors[file_path] = processor
self._processors = running_processors
self.log.debug("%s/%s DAG parsing processes running",
len(self._processors), self._parallelism)
self.log.debug("%s file paths queued for processing",
len(self._file_path_queue))
# Collect all the DAGs that were found in the processed files
simple_dags = []
for file_path, processor in finished_processors.items():
if processor.result is None:
self.log.warning(
"Processor for %s exited with return code %s.",
processor.file_path, processor.exit_code
| python | {
"resource": ""
} |
q265858 | DagFileProcessorManager.end | test | def end(self):
"""
Kill all child processes on exit since we don't want to leave
them as orphaned.
"""
pids_to_kill = self.get_all_pids()
if len(pids_to_kill) > 0:
# First try SIGTERM
this_process = psutil.Process(os.getpid())
# Only check child processes to ensure that we don't have a case
# where we kill the wrong process because a child process died
# but the PID got reused.
child_processes = [x for x in this_process.children(recursive=True)
if x.is_running() and x.pid in pids_to_kill]
for child in child_processes:
self.log.info("Terminating child PID: %s", child.pid)
child.terminate()
# TODO: Remove magic number
timeout = 5
self.log.info("Waiting up to %s seconds for processes to exit...", timeout)
try:
psutil.wait_procs(
child_processes, timeout=timeout,
| python | {
"resource": ""
} |
q265859 | SSHHook.get_conn | test | def get_conn(self):
"""
Opens an SSH connection to the remote host.
:rtype: paramiko.client.SSHClient
"""
self.log.debug('Creating SSH client for conn_id: %s', self.ssh_conn_id)
client = paramiko.SSHClient()
if not self.allow_host_key_change:
self.log.warning('Remote Identification Change is not verified. '
'This wont protect against Man-In-The-Middle attacks')
client.load_system_host_keys()
if self.no_host_key_check:
self.log.warning('No Host Key Verification. This wont protect '
'against Man-In-The-Middle attacks')
# Default is RejectPolicy
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
if self.password and self.password.strip():
client.connect(hostname=self.remote_host,
username=self.username,
password=self.password,
key_filename=self.key_file,
| python | {
"resource": ""
} |
q265860 | GCPTransferServiceHook.create_transfer_job | test | def create_transfer_job(self, body):
"""
Creates a transfer job that runs periodically.
:param body: (Required) A request body, as described in
https://cloud.google.com/storage-transfer/docs/reference/rest/v1/transferJobs/patch#request-body
:type body: dict
:return: transfer job.
See:
| python | {
"resource": ""
} |
q265861 | GCPTransferServiceHook.get_transfer_job | test | def get_transfer_job(self, job_name, project_id=None):
"""
Gets the latest state of a long-running operation in Google Storage
Transfer Service.
:param job_name: (Required) Name of the job to be fetched
:type job_name: str
:param project_id: (Optional) the ID of the | python | {
"resource": ""
} |
q265862 | GCPTransferServiceHook.list_transfer_job | test | def list_transfer_job(self, filter):
"""
Lists long-running operations in Google Storage Transfer
Service that match the specified filter.
:param filter: (Required) A request filter, as described in
https://cloud.google.com/storage-transfer/docs/reference/rest/v1/transferJobs/list#body.QUERY_PARAMETERS.filter
:type filter: dict
:return: List of Transfer Jobs
:rtype: list[dict]
"""
conn = self.get_conn()
filter = self._inject_project_id(filter, FILTER, FILTER_PROJECT_ID)
request = conn.transferJobs().list(filter=json.dumps(filter))
| python | {
"resource": ""
} |
q265863 | GCPTransferServiceHook.update_transfer_job | test | def update_transfer_job(self, job_name, body):
"""
Updates a transfer job that runs periodically.
:param job_name: (Required) Name of the job to be updated
:type job_name: str
:param body: A request body, as described in
https://cloud.google.com/storage-transfer/docs/reference/rest/v1/transferJobs/patch#request-body
:type body: dict
:return: If successful, TransferJob.
:rtype: dict
| python | {
"resource": ""
} |
q265864 | GCPTransferServiceHook.delete_transfer_job | test | def delete_transfer_job(self, job_name, project_id):
"""
Deletes a transfer job. This is a soft delete. After a transfer job is
deleted, the job and all the transfer executions are subject to garbage
collection. Transfer jobs become eligible for garbage collection
30 days after soft delete.
:param job_name: (Required) Name of the job to be deleted
:type job_name: str
:param project_id: (Optional) the ID of the project that | python | {
"resource": ""
} |
q265865 | GCPTransferServiceHook.cancel_transfer_operation | test | def cancel_transfer_operation(self, operation_name):
"""
Cancels a transfer operation in Google Storage Transfer Service.
:param operation_name: Name | python | {
"resource": ""
} |
q265866 | GCPTransferServiceHook.pause_transfer_operation | test | def pause_transfer_operation(self, operation_name):
"""
Pauses a transfer operation in Google Storage Transfer Service.
:param operation_name: (Required) Name | python | {
"resource": ""
} |
q265867 | GCPTransferServiceHook.resume_transfer_operation | test | def resume_transfer_operation(self, operation_name):
"""
Resumes a transfer operation in Google Storage Transfer Service.
:param operation_name: (Required) Name | python | {
"resource": ""
} |
q265868 | GCPTransferServiceHook.wait_for_transfer_job | test | def wait_for_transfer_job(self, job, expected_statuses=(GcpTransferOperationStatus.SUCCESS,), timeout=60):
"""
Waits until the job reaches the expected state.
:param job: Transfer job
See:
https://cloud.google.com/storage-transfer/docs/reference/rest/v1/transferJobs#TransferJob
:type job: dict
:param expected_statuses: State that is expected
See:
https://cloud.google.com/storage-transfer/docs/reference/rest/v1/transferOperations#Status
:type expected_statuses: set[str]
:param timeout: time in which the operation must end, in seconds
:type timeout: int
:rtype: None
"""
while timeout > 0:
| python | {
"resource": ""
} |
q265869 | TaskReschedule.find_for_task_instance | test | def find_for_task_instance(task_instance, session):
"""
Returns all task reschedules for the task instance and try number,
in ascending order.
:param task_instance: the task instance to find task reschedules for
:type task_instance: airflow.models.TaskInstance
"""
TR = TaskReschedule
return (
session
.query(TR)
.filter(TR.dag_id == task_instance.dag_id,
| python | {
"resource": ""
} |
q265870 | Pool.open_slots | test | def open_slots(self, session):
"""
Returns the number of slots open at the moment
"""
from airflow.models.taskinstance import \
TaskInstance as TI # Avoid circular import
| python | {
"resource": ""
} |
q265871 | run_command | test | def run_command(command):
"""
Runs command and returns stdout
"""
process = subprocess.Popen(
shlex.split(command),
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
close_fds=True)
output, stderr = [stream.decode(sys.getdefaultencoding(), 'ignore')
for stream in process.communicate()]
if | python | {
"resource": ""
} |
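
A self-contained completion of the truncated helper above, assuming a POSIX system where `echo` is on the PATH; raising a plain RuntimeError (rather than Airflow's own exception type) on a non-zero exit code is an assumption about how the original reports failure.

```python
import shlex
import subprocess
import sys


def run_command(command):
    """Run `command` and return its stdout, raising on a non-zero exit code."""
    process = subprocess.Popen(
        shlex.split(command),
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        close_fds=True)
    output, stderr = [stream.decode(sys.getdefaultencoding(), 'ignore')
                      for stream in process.communicate()]
    if process.returncode != 0:
        raise RuntimeError(
            'Cannot execute {}. Exit code: {}. Stdout: {}. Stderr: {}'.format(
                command, process.returncode, output, stderr))
    return output


print(run_command('echo hello'))
```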
q265872 | AirflowConfigParser.remove_option | test | def remove_option(self, section, option, remove_default=True):
"""
Remove an option if it exists in config from a file or
default config. If both of config have the same option, this removes
the option in both configs unless remove_default=False.
"""
if super().has_option(section, option):
| python | {
"resource": ""
} |
q265873 | AirflowConfigParser.getsection | test | def getsection(self, section):
"""
Returns the section as a dict. Values are converted to int, float, bool
as required.
:param section: section from the config
:rtype: dict
"""
if (section not in self._sections and
section not in self.airflow_defaults._sections):
return None
_section = copy.deepcopy(self.airflow_defaults._sections[section])
if section in self._sections:
_section.update(copy.deepcopy(self._sections[section]))
section_prefix = 'AIRFLOW__{S}__'.format(S=section.upper())
for env_var in sorted(os.environ.keys()):
if env_var.startswith(section_prefix):
key = env_var.replace(section_prefix, '').lower()
| python | {
"resource": ""
} |
q265874 | DatastoreHook.allocate_ids | test | def allocate_ids(self, partial_keys):
"""
Allocate IDs for incomplete keys.
.. seealso::
https://cloud.google.com/datastore/docs/reference/rest/v1/projects/allocateIds
:param partial_keys: a list of partial keys.
:type partial_keys: list
:return: a list of full keys.
:rtype: list
"""
conn = self.get_conn()
resp = (conn
| python | {
"resource": ""
} |
q265875 | DatastoreHook.begin_transaction | test | def begin_transaction(self):
"""
Begins a new transaction.
.. seealso::
https://cloud.google.com/datastore/docs/reference/rest/v1/projects/beginTransaction
:return: a transaction handle.
| python | {
"resource": ""
} |
q265876 | DatastoreHook.commit | test | def commit(self, body):
"""
Commit a transaction, optionally creating, deleting or modifying some entities.
.. seealso::
https://cloud.google.com/datastore/docs/reference/rest/v1/projects/commit
:param body: the body of the | python | {
"resource": ""
} |
q265877 | DatastoreHook.lookup | test | def lookup(self, keys, read_consistency=None, transaction=None):
"""
Lookup some entities by key.
.. seealso::
https://cloud.google.com/datastore/docs/reference/rest/v1/projects/lookup
:param keys: the keys to lookup.
:type keys: list
:param read_consistency: the read consistency to use. default, strong or eventual.
Cannot be used with a transaction.
:type read_consistency: str
:param transaction: the transaction to use, if any.
:type transaction: str
:return: the response body of the lookup request.
:rtype: dict
| python | {
"resource": ""
} |
q265878 | DatastoreHook.rollback | test | def rollback(self, transaction):
"""
Roll back a transaction.
.. seealso::
https://cloud.google.com/datastore/docs/reference/rest/v1/projects/rollback
:param transaction: the transaction to roll back.
:type transaction: str
"""
| python | {
"resource": ""
} |
q265879 | DatastoreHook.run_query | test | def run_query(self, body):
"""
Run a query for entities.
.. seealso::
https://cloud.google.com/datastore/docs/reference/rest/v1/projects/runQuery
:param body: the body of the query request.
:type body: dict
:return: the batch of query results.
:rtype: dict
"""
conn = self.get_conn()
resp = (conn
| python | {
"resource": ""
} |
q265880 | DatastoreHook.get_operation | test | def get_operation(self, name):
"""
Gets the latest state of a long-running operation.
.. seealso::
https://cloud.google.com/datastore/docs/reference/data/rest/v1/projects.operations/get
:param name: the name of the | python | {
"resource": ""
} |
q265881 | DatastoreHook.delete_operation | test | def delete_operation(self, name):
"""
Deletes the long-running operation.
.. seealso::
https://cloud.google.com/datastore/docs/reference/data/rest/v1/projects.operations/delete
:param name: the name of the | python | {
"resource": ""
} |
q265882 | DatastoreHook.poll_operation_until_done | test | def poll_operation_until_done(self, name, polling_interval_in_seconds):
"""
Poll backup operation state until it's completed.
:param name: the name of the operation resource
:type name: str
:param polling_interval_in_seconds: The number of seconds to wait before calling another request.
:type polling_interval_in_seconds: int
:return: a resource operation instance.
:rtype: dict
"""
while True:
result = self.get_operation(name)
state = | python | {
"resource": ""
} |
q265883 | DatastoreHook.export_to_storage_bucket | test | def export_to_storage_bucket(self, bucket, namespace=None, entity_filter=None, labels=None):
"""
Export entities from Cloud Datastore to Cloud Storage for backup.
.. note::
Keep in mind that this requests the Admin API not the Data API.
.. seealso::
https://cloud.google.com/datastore/docs/reference/admin/rest/v1/projects/export
:param bucket: The name of the Cloud Storage bucket.
:type bucket: str
:param namespace: The Cloud Storage namespace path.
:type namespace: str
:param entity_filter: Description of what data from the project is included in the export.
:type entity_filter: dict
:param labels: Client-assigned labels.
:type labels: dict of str
:return: a resource operation instance.
:rtype: dict | python | {
"resource": ""
} |
q265884 | DatastoreHook.import_from_storage_bucket | test | def import_from_storage_bucket(self, bucket, file, namespace=None, entity_filter=None, labels=None):
"""
Import a backup from Cloud Storage to Cloud Datastore.
.. note::
Keep in mind that this requests the Admin API not the Data API.
.. seealso::
https://cloud.google.com/datastore/docs/reference/admin/rest/v1/projects/import
:param bucket: The name of the Cloud Storage bucket.
:type bucket: str
:param file: the metadata file written by the projects.export operation.
:type file: str
:param namespace: The Cloud Storage namespace path.
| python | {
"resource": ""
} |
q265885 | AwsSnsHook.publish_to_target | test | def publish_to_target(self, target_arn, message):
"""
Publish a message to a topic or an endpoint.
:param target_arn: either a TopicArn or an EndpointArn
:type target_arn: str
:param message: the default message | python | {
"resource": ""
} |
q265886 | get_hostname | test | def get_hostname():
"""
Fetch the hostname using the callable from the config or using
`socket.getfqdn` as a fallback.
"""
# First we attempt to fetch the callable path from the config.
try:
callable_path = conf.get('core', 'hostname_callable')
except AirflowConfigException:
| python | {
"resource": ""
} |
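
A sketch of the callable-path lookup with a socket.getfqdn fallback; the dotted `module:function` path format is an assumption modelled on common Airflow configuration style, and the config lookup itself is replaced by a plain argument.

```python
import importlib
import socket


def get_hostname(callable_path=None):
    """Resolve the hostname via a configured `module:function` path, else getfqdn."""
    if not callable_path:
        return socket.getfqdn()
    module_path, attr_name = callable_path.split(':')
    module = importlib.import_module(module_path)
    return getattr(module, attr_name)()


print(get_hostname())                      # falls back to socket.getfqdn()
print(get_hostname('socket:gethostname'))  # resolves and calls socket.gethostname
```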
q265887 | CloudNaturalLanguageHook.get_conn | test | def get_conn(self):
"""
Retrieves connection to Cloud Natural Language service.
:return: Cloud Natural Language service object
| python | {
"resource": ""
} |
q265888 | CloudNaturalLanguageHook.analyze_entities | test | def analyze_entities(self, document, encoding_type=None, retry=None, timeout=None, metadata=None):
"""
Finds named entities in the text along with entity types,
salience, mentions for each entity, and other properties.
:param document: Input document.
If a dict is provided, it must be of the same form as the protobuf message Document
:type document: dict or class google.cloud.language_v1.types.Document
:param encoding_type: The encoding type used by the API to calculate offsets.
:type encoding_type: google.cloud.language_v1.types.EncodingType
:param retry: A retry object used to retry requests. If None is specified, requests will not be
retried.
:type retry: google.api_core.retry.Retry
:param timeout: The amount of time, in seconds, to wait for the request to complete. | python | {
"resource": ""
} |
q265889 | CloudNaturalLanguageHook.annotate_text | test | def annotate_text(self, document, features, encoding_type=None, retry=None, timeout=None, metadata=None):
"""
A convenience method that provides all the features that analyzeSentiment,
analyzeEntities, and analyzeSyntax provide in one call.
:param document: Input document.
If a dict is provided, it must be of the same form as the protobuf message Document
:type document: dict or google.cloud.language_v1.types.Document
:param features: The enabled features.
If a dict is provided, it must be of the same form as the protobuf message Features
:type features: dict or google.cloud.language_v1.enums.Features
| python | {
"resource": ""
} |
q265890 | CloudNaturalLanguageHook.classify_text | test | def classify_text(self, document, retry=None, timeout=None, metadata=None):
"""
Classifies a document into categories.
:param document: Input document.
If a dict is provided, it must be of the same form as the protobuf message Document
:type document: dict or class google.cloud.language_v1.types.Document
:param retry: A retry object used to retry requests. If None is specified, requests will not be
retried.
:type retry: google.api_core.retry.Retry
:param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if
retry is specified, the timeout applies to each | python | {
"resource": ""
} |
q265891 | get_template_field | test | def get_template_field(env, fullname):
"""
Gets template fields for specific operator class.
:param fullname: Full path to operator class.
For example: ``airflow.contrib.operators.gcp_vision_operator.CloudVisionProductSetCreateOperator``
:return: List of template field
:rtype: list[str]
"""
modname, classname = fullname.rsplit(".", 1)
try:
with mock(env.config.autodoc_mock_imports): | python | {
"resource": ""
} |
q265892 | template_field_role | test | def template_field_role(app, typ, rawtext, text, lineno, inliner, options={}, content=[]):
"""
A role that allows you to include a list of template fields in the middle of the text. This is especially
useful when writing guides describing how to use the operator.
The result is a list of fields where each field is shown in a literal block.
Sample usage::
:template-fields:`airflow.contrib.operators.gcp_natural_language_operator.CloudLanguageAnalyzeSentimentOperator`
For further information look at:
* [Creating reStructuredText Interpreted Text Roles](http://docutils.sourceforge.net/docs/howto/rst-roles.html)
"""
text = utils.unescape(text)
try:
template_fields = get_template_field(app.env, text)
except RoleException as e:
| python | {
"resource": ""
} |
q265893 | dispose_orm | test | def dispose_orm():
""" Properly close pooled database connections """
log.debug("Disposing DB connection pool (PID %s)", os.getpid())
global engine
global Session
| python | {
"resource": ""
} |
q265894 | prepare_classpath | test | def prepare_classpath():
"""
Ensures that certain subfolders of AIRFLOW_HOME are on the classpath
"""
if DAGS_FOLDER not in sys.path:
sys.path.append(DAGS_FOLDER)
# Add ./config/ for loading custom log parsers etc, or
# airflow_local_settings etc.
| python | {
"resource": ""
} |
q265895 | CeleryQueueSensor._check_task_id | test | def _check_task_id(self, context):
"""
Gets the returned Celery result from the Airflow task
ID provided to the sensor, and returns True if the
celery result has been finished execution.
:param context: Airflow's execution context
:type context: dict
:return: True if | python | {
"resource": ""
} |
q265896 | detect_conf_var | test | def detect_conf_var():
"""Return true if the ticket cache contains "conf" information as is found
in ticket caches of Kerberos 1.8.1 or later. This is incompatible with the
Sun Java Krb5LoginModule in Java6, | python | {
"resource": ""
} |
q265897 | alchemy_to_dict | test | def alchemy_to_dict(obj):
"""
Transforms a SQLAlchemy model instance into a dictionary
"""
if not obj:
return None
d = {}
for c in obj.__table__.columns:
value = getattr(obj, c.name)
if | python | {
"resource": ""
} |
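
The truncated branch above typically special-cases datetime values; a hedged completion, exercised against a minimal SQLAlchemy (1.4+) model created here purely for illustration — the isoformat conversion is an assumption about what the missing `if` does.

```python
from datetime import datetime

from sqlalchemy import Column, DateTime, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class Task(Base):
    __tablename__ = 'task'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    execution_date = Column(DateTime)


def alchemy_to_dict(obj):
    """Transform a SQLAlchemy model instance into a plain dict."""
    if not obj:
        return None
    d = {}
    for c in obj.__table__.columns:
        value = getattr(obj, c.name)
        if isinstance(value, datetime):
            value = value.isoformat()  # assumed completion of the truncated branch
        d[c.name] = value
    return d


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(Task(id=1, name='demo', execution_date=datetime(2023, 1, 1)))
    session.commit()
    print(alchemy_to_dict(session.get(Task, 1)))
```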
q265898 | chunks | test | def chunks(items, chunk_size):
"""
Yield successive chunks of a given size from a list of items
"""
if chunk_size <= 0:
raise ValueError('Chunk size must be | python | {
"resource": ""
} |
q265899 | reduce_in_chunks | test | def reduce_in_chunks(fn, iterable, initializer, chunk_size=0):
"""
Reduce the given list of items by splitting it into chunks
of the given size and passing | python | {
"resource": ""
} |
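
Both helpers above are truncated; plausible minimal implementations consistent with the docstrings, with the error message and the empty-iterable guard written as assumptions:

```python
from functools import reduce


def chunks(items, chunk_size):
    """Yield successive chunks of `chunk_size` items from a list."""
    if chunk_size <= 0:
        raise ValueError('Chunk size must be a positive integer')
    for i in range(0, len(items), chunk_size):
        yield items[i:i + chunk_size]


def reduce_in_chunks(fn, iterable, initializer, chunk_size=0):
    """Reduce `iterable` chunk by chunk; chunk_size=0 treats it as one chunk."""
    if len(iterable) == 0:
        return initializer
    if chunk_size == 0:
        chunk_size = len(iterable)
    return reduce(fn, chunks(iterable, chunk_size), initializer)


print(list(chunks([1, 2, 3, 4, 5], 2)))  # [[1, 2], [3, 4], [5]]
print(reduce_in_chunks(lambda acc, c: acc + sum(c), [1, 2, 3, 4, 5], 0, 2))  # 15
```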
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.