<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def backlinks( self, page: 'WikipediaPage', **kwargs ) -> PagesDict: """ Returns backlinks from other pages with respect to parameters API Calls for parameters: - https://www.mediawiki.org/w/api.php?action=help&modules=query%2Bbacklinks - https://www.mediawiki.org/wiki/API:Backlinks :param page: :class:`WikipediaPage` :param kwargs: parameters used in API call :return: backlinks from other pages """ |
params = {
'action': 'query',
'list': 'backlinks',
'bltitle': page.title,
'bllimit': 500,
}
used_params = kwargs
used_params.update(params)
raw = self._query(
page,
used_params
)
self._common_attributes(raw['query'], page)
v = raw['query']
while 'continue' in raw:
params['blcontinue'] = raw['continue']['blcontinue']
raw = self._query(
page,
params
)
v['backlinks'] += raw['query']['backlinks']
return self._build_backlinks(v, page) |
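The loop above implements MediaWiki's standard `continue` pagination: re-issue the query with the returned `blcontinue` token until no `continue` key remains. As a rough, self-contained sketch (with a stub standing in for `self._query` and made-up page data), the accumulation pattern is:

```python
# Sketch of MediaWiki-style 'continue' pagination. fake_query is a stub
# standing in for the real API call; the page data is made up.
PAGES = [{'title': 'A'}, {'title': 'B'}, {'title': 'C'}]

def fake_query(params):
    """Return one result per call, plus a 'continue' token while more remain."""
    start = int(params.get('blcontinue', 0))
    raw = {'query': {'backlinks': PAGES[start:start + 1]}}
    if start + 1 < len(PAGES):
        raw['continue'] = {'blcontinue': str(start + 1)}
    return raw

def collect_backlinks(params):
    raw = fake_query(params)
    merged = raw['query']
    while 'continue' in raw:
        params['blcontinue'] = raw['continue']['blcontinue']
        raw = fake_query(params)
        merged['backlinks'] += raw['query']['backlinks']
    return merged['backlinks']

print(collect_backlinks({'action': 'query', 'list': 'backlinks'}))
```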
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def categorymembers( self, page: 'WikipediaPage', **kwargs ) -> PagesDict: """ Returns pages in given category with respect to parameters API Calls for parameters: - https://www.mediawiki.org/w/api.php?action=help&modules=query%2Bcategorymembers - https://www.mediawiki.org/wiki/API:Categorymembers :param page: :class:`WikipediaPage` :param kwargs: parameters used in API call :return: pages in given category """ |
params = {
'action': 'query',
'list': 'categorymembers',
'cmtitle': page.title,
'cmlimit': 500,
}
used_params = kwargs
used_params.update(params)
raw = self._query(
page,
used_params
)
self._common_attributes(raw['query'], page)
v = raw['query']
while 'continue' in raw:
params['cmcontinue'] = raw['continue']['cmcontinue']
raw = self._query(
page,
params
)
v['categorymembers'] += raw['query']['categorymembers']
return self._build_categorymembers(v, page) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def full_text(self, level: int = 1) -> str: """ Returns text of the current section as well as all its subsections. :param level: indentation level :return: text of the current section as well as all its subsections """ |
res = ""
if self.wiki.extract_format == ExtractFormat.WIKI:
res += self.title
elif self.wiki.extract_format == ExtractFormat.HTML:
res += "<h{}>{}</h{}>".format(level, self.title, level)
else:
raise NotImplementedError("Unknown ExtractFormat type")
res += "\n"
res += self._text
if len(self._text) > 0:
res += "\n\n"
for sec in self.sections:
res += sec.full_text(level + 1)
return res |
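The recursion above renders each section, then its subsections one heading level deeper. A minimal stand-in (hypothetical `Section` class, HTML output only) shows the shape:

```python
# Minimal stand-in for the recursive section rendering above.
# Section is a hypothetical class; headings deepen with each level.
class Section:
    def __init__(self, title, text, sections=()):
        self.title = title
        self._text = text
        self.sections = list(sections)

    def full_text(self, level=1):
        res = "<h{0}>{1}</h{0}>\n".format(level, self.title)
        res += self._text
        if self._text:
            res += "\n\n"
        for sec in self.sections:
            res += sec.full_text(level + 1)
        return res

doc = Section("History", "Early days.", [Section("Origins", "It began.")])
print(doc.full_text(level=2))
```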
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sections(self) -> List[WikipediaPageSection]: """ Returns all sections of the current page. :return: List of :class:`WikipediaPageSection` """ |
if not self._called['extracts']:
self._fetch('extracts')
return self._section |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def section_by_title( self, title: str, ) -> Optional[WikipediaPageSection]: """ Returns section of the current page with given `title`. :param title: section title :return: :class:`WikipediaPageSection` """ |
if not self._called['extracts']:
self._fetch('extracts')
return self._section_mapping.get(title) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def text(self) -> str: """ Returns text of the current page. :return: text of the current page """ |
txt = self.summary
if len(txt) > 0:
txt += "\n\n"
for sec in self.sections:
txt += sec.full_text(level=2)
return txt.strip() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def permits(self, identity, permission, context=None):
"""Check user permissions. Return True if the identity is allowed the permission in the current context, else return False. """ |
# pylint: disable=unused-argument
user = self.user_map.get(identity)
if not user:
return False
return permission in user.permissions |
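The `permits` check above can be exercised with a dict-backed policy. This is a sketch, not the real aiohttp_security class; the `User` type and user map are assumptions:

```python
import asyncio
from collections import namedtuple

# Hypothetical dict-backed authorization policy mirroring permits() above.
User = namedtuple('User', ['username', 'permissions'])

class DictAuthorizationPolicy:
    def __init__(self, user_map):
        self.user_map = user_map

    async def permits(self, identity, permission, context=None):
        user = self.user_map.get(identity)
        if not user:
            return False
        return permission in user.permissions

policy = DictAuthorizationPolicy({'alice': User('alice', {'read', 'write'})})
print(asyncio.run(policy.permits('alice', 'write')))  # True
print(asyncio.run(policy.permits('bob', 'read')))     # False: unknown identity
```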
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def remember(request, response, identity, **kwargs):
"""Remember identity into response. The action is performed by identity_policy.remember() Usually the identity is stored in user cookies somehow but may be pushed into custom header also. """ |
assert isinstance(identity, str), identity
assert identity
identity_policy = request.config_dict.get(IDENTITY_KEY)
if identity_policy is None:
text = ("Security subsystem is not initialized, "
"call aiohttp_security.setup(...) first")
# in order to see meaningful exception message both: on console
# output and rendered page we add same message to *reason* and
# *text* arguments.
raise web.HTTPInternalServerError(reason=text, text=text)
await identity_policy.remember(request, response, identity, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def forget(request, response):
"""Forget previously remembered identity. Usually it clears cookie or server-side storage to forget user session. """ |
identity_policy = request.config_dict.get(IDENTITY_KEY)
if identity_policy is None:
text = ("Security subsystem is not initialized, "
"call aiohttp_security.setup(...) first")
# in order to see meaningful exception message both: on console
# output and rendered page we add same message to *reason* and
# *text* arguments.
raise web.HTTPInternalServerError(reason=text, text=text)
await identity_policy.forget(request, response) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def is_anonymous(request):
"""Check if user is anonymous. User is considered anonymous if there is not identity in request. """ |
identity_policy = request.config_dict.get(IDENTITY_KEY)
if identity_policy is None:
return True
identity = await identity_policy.identify(request)
if identity is None:
return True
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def login_required(fn):
"""Decorator that restrict access only for authorized users. User is considered authorized if authorized_userid returns some value. """ |
@wraps(fn)
async def wrapped(*args, **kwargs):
request = args[-1]
if not isinstance(request, web.BaseRequest):
msg = ("Incorrect decorator usage. "
"Expecting `def handler(request)` "
"or `def handler(self, request)`.")
raise RuntimeError(msg)
await check_authorized(request)
return await fn(*args, **kwargs)
warnings.warn("login_required decorator is deprecated, "
"use check_authorized instead",
DeprecationWarning)
return wrapped |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def check_permission(request, permission, context=None):
"""Checker that passes only to authoraised users with given permission. If user is not authorized - raises HTTPUnauthorized, if user is authorized and does not have permission - raises HTTPForbidden. """ |
await check_authorized(request)
allowed = await permits(request, permission, context)
if not allowed:
raise web.HTTPForbidden() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def has_permission( permission, context=None, ):
"""Decorator that restricts access only for authorized users with correct permissions. If user is not authorized - raises HTTPUnauthorized, if user is authorized and does not have permission - raises HTTPForbidden. """ |
def wrapper(fn):
@wraps(fn)
async def wrapped(*args, **kwargs):
request = args[-1]
if not isinstance(request, web.BaseRequest):
msg = ("Incorrect decorator usage. "
"Expecting `def handler(request)` "
"or `def handler(self, request)`.")
raise RuntimeError(msg)
await check_permission(request, permission, context)
return await fn(*args, **kwargs)
return wrapped
warnings.warn("has_permission decorator is deprecated, "
"use check_permission instead",
DeprecationWarning)
return wrapper |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def normal_print(raw):
''' no colorful text, for output.'''
lines = raw.split('\n')
for line in lines:
if line:
print(line + '\n') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def delete_word(word):
'''delete the word or phrase from database.'''
conn = sqlite3.connect(os.path.join(DEFAULT_PATH, 'word.db'))
curs = conn.cursor()
# search first, using a parameterized query to avoid SQL injection
curs.execute('SELECT expl, pr FROM Word WHERE name = ?', (word,))
res = curs.fetchall()
if res:
try:
curs.execute('DELETE FROM Word WHERE name = ?', (word,))
except Exception as e:
print(e)
else:
print(colored('%s has been deleted from database' % word, 'green'))
conn.commit()
finally:
curs.close()
conn.close()
else:
print(colored('%s not exists in the database' % word, 'white', 'on_red')) |
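The select-then-delete flow above can be reproduced against an in-memory database. This sketch assumes the `Word` table schema (`name`, `expl`, `pr`) and uses parameterized queries throughout:

```python
import sqlite3

# In-memory sketch of the delete flow above; the Word table schema
# (name, expl, pr) is an assumption based on the queries in the source.
conn = sqlite3.connect(':memory:')
curs = conn.cursor()
curs.execute('CREATE TABLE Word (name TEXT, expl TEXT, pr INTEGER)')
curs.execute('INSERT INTO Word VALUES (?, ?, ?)', ('ephemeral', 'short-lived', 1))

def delete_word(word):
    """Delete word if present; report whether anything was removed."""
    curs.execute('SELECT expl, pr FROM Word WHERE name = ?', (word,))
    if curs.fetchall():
        curs.execute('DELETE FROM Word WHERE name = ?', (word,))
        conn.commit()
        return True
    return False

print(delete_word('ephemeral'))  # True: row existed and was removed
print(delete_word('ephemeral'))  # False: already gone
```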
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def count_word(arg):
'''count the number of words'''
conn = sqlite3.connect(os.path.join(DEFAULT_PATH, 'word.db'))
curs = conn.cursor()
if arg[0].isdigit():
if len(arg) == 1:
curs.execute('SELECT count(*) FROM Word WHERE pr == ?', (int(arg[0]),))
elif len(arg) == 2 and arg[1] == '+':
curs.execute('SELECT count(*) FROM Word WHERE pr >= ?', (int(arg[0]),))
elif len(arg) == 3 and arg[1] == '-':
curs.execute('SELECT count(*) FROM Word WHERE pr >= ? AND pr <= ?', (int(arg[0]), int(arg[2])))
elif arg[0].isalpha():
if arg == 'all':
curs.execute('SELECT count(*) FROM Word')
elif len(arg) == 1:
curs.execute('SELECT count(*) FROM Word WHERE aset == ?', (arg.upper(),))
res = curs.fetchall()
print(res[0][0])
curs.close()
conn.close() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def authentication_url(self):
"""Redirect your users to here to authenticate them.""" |
params = {
'client_id': self.client_id,
'response_type': self.type,
'redirect_uri': self.callback_url
}
return AUTHENTICATION_URL + "?" + urlencode(params) |
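Building the redirect URL above is plain query-string encoding. A standalone sketch (endpoint and credentials are made up):

```python
from urllib.parse import urlencode

# Sketch of OAuth-style authorization URL construction as in
# authentication_url above; the endpoint and values are hypothetical.
AUTHENTICATION_URL = 'https://example.com/oauth/authorize'

def authentication_url(client_id, response_type, callback_url):
    params = {
        'client_id': client_id,
        'response_type': response_type,
        'redirect_uri': callback_url,
    }
    # urlencode percent-escapes the redirect URI automatically.
    return AUTHENTICATION_URL + '?' + urlencode(params)

url = authentication_url('abc123', 'code', 'https://app.example.com/cb')
print(url)
```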
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup(self, paths=None):
# pylint: disable=arguments-differ """Sets up the _paths attribute. Args: paths: Comma-separated list of strings representing the paths to collect. """ |
if not paths:
self.state.add_error(
'No `paths` argument provided in recipe, bailing', critical=True)
else:
self._paths = [path.strip() for path in paths.strip().split(',')] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _create_hunt(self, name, args):
"""Create specified hunt. Args: name: string containing hunt name. args: proto (*FlowArgs) for type of hunt, as defined in GRR flow proto. Returns: The newly created GRR hunt object. Raises: ValueError: if approval is needed and approvers were not specified. """ |
runner_args = self.grr_api.types.CreateHuntRunnerArgs()
runner_args.description = self.reason
hunt = self.grr_api.CreateHunt(
flow_name=name, flow_args=args, hunt_runner_args=runner_args)
print('{0!s}: Hunt created'.format(hunt.hunt_id))
self._check_approval_wrapper(hunt, hunt.Start)
return hunt |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup(self, artifacts, use_tsk, reason, grr_server_url, grr_username, grr_password, approvers=None, verify=True):
"""Initializes a GRR Hunt artifact collector. Args: artifacts: str, comma-separated list of GRR-defined artifacts. use_tsk: toggle for use_tsk flag. reason: justification for GRR access. grr_server_url: GRR server URL. grr_username: GRR username. grr_password: GRR password. approvers: str, comma-separated list of GRR approval recipients. verify: boolean, whether to verify the GRR server's x509 certificate. """ |
super(GRRHuntArtifactCollector, self).setup(
reason, grr_server_url, grr_username, grr_password,
approvers=approvers, verify=verify)
self.artifacts = [item.strip() for item in artifacts.strip().split(',')]
if not artifacts:
self.state.add_error('No artifacts were specified.', critical=True)
self.use_tsk = use_tsk |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def process(self):
"""Construct and start new Artifact Collection hunt. Returns: The newly created GRR hunt object. Raises: RuntimeError: if no items specified for collection. """ |
print('Artifacts to be collected: {0!s}'.format(self.artifacts))
hunt_args = flows_pb2.ArtifactCollectorFlowArgs(
artifact_list=self.artifacts,
use_tsk=self.use_tsk,
ignore_interpolation_errors=True,
apply_parsers=False,)
return self._create_hunt('ArtifactCollectorFlow', hunt_args) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def collect_hunt_results(self, hunt):
"""Download current set of files in results. Args: hunt: The GRR hunt object to download files from. Returns: list: tuples containing: str: human-readable description of the source of the collection. For example, the name of the source host. str: path to the collected data. Raises: ValueError: if approval is needed and approvers were not specified. """ |
if not os.path.isdir(self.output_path):
os.makedirs(self.output_path)
output_file_path = os.path.join(
self.output_path, '.'.join((self.hunt_id, 'zip')))
if os.path.exists(output_file_path):
print('{0:s} already exists: Skipping'.format(output_file_path))
return None
self._check_approval_wrapper(
hunt, self._get_and_write_archive, hunt, output_file_path)
results = self._extract_hunt_results(output_file_path)
print('Wrote results of {0:s} to {1:s}'.format(
hunt.hunt_id, output_file_path))
return results |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_and_write_archive(self, hunt, output_file_path):
"""Gets and writes a hunt archive. Function is necessary for the _check_approval_wrapper to work. Args: hunt: The GRR hunt object. output_file_path: The output path where to write the Hunt Archive. """ |
hunt_archive = hunt.GetFilesArchive()
hunt_archive.WriteToFile(output_file_path) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_client_fqdn(self, client_info_contents):
"""Extracts a GRR client's FQDN from its client_info.yaml file. Args: client_info_contents: The contents of the client_info.yaml file. Returns: A (str, str) tuple representing client ID and client FQDN. """ |
yamldict = yaml.safe_load(client_info_contents)
fqdn = yamldict['system_info']['fqdn']
client_id = yamldict['client_id'].split('/')[1]
return client_id, fqdn |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _extract_hunt_results(self, output_file_path):
"""Open a hunt output archive and extract files. Args: output_file_path: The path where the hunt archive is downloaded to. Returns: list: tuples containing: str: The name of the client from where the files were downloaded. str: The directory where the files were downloaded to. """ |
# Extract items from archive by host for processing
collection_paths = []
client_ids = set()
client_id_to_fqdn = {}
hunt_dir = None
try:
with zipfile.ZipFile(output_file_path) as archive:
items = archive.infolist()
for f in items:
if not hunt_dir:
hunt_dir = f.filename.split('/')[0]
# If we're dealing with client_info.yaml, use it to build a client
# ID to FQDN correspondence table & skip extraction.
if f.filename.split('/')[-1] == 'client_info.yaml':
client_id, fqdn = self._get_client_fqdn(archive.read(f))
client_id_to_fqdn[client_id] = fqdn
continue
client_id = f.filename.split('/')[1]
if client_id.startswith('C.'):
if client_id not in client_ids:
client_directory = os.path.join(self.output_path,
hunt_dir, client_id)
collection_paths.append((client_id, client_directory))
client_ids.add(client_id)
try:
archive.extract(f, self.output_path)
except KeyError as exception:
print('Extraction error: {0!s}'.format(exception))
return []
except OSError as exception:
msg = 'Error manipulating file {0:s}: {1!s}'.format(
output_file_path, exception)
self.state.add_error(msg, critical=True)
return []
except zipfile.BadZipfile as exception:
msg = 'Bad zipfile {0:s}: {1!s}'.format(
output_file_path, exception)
self.state.add_error(msg, critical=True)
return []
try:
os.remove(output_file_path)
except OSError as exception:
print('Output path {0:s} could not be removed: {1!s}'.format(
output_file_path, exception))
# Translate GRR client IDs to FQDNs with the information retrieved
# earlier
fqdn_collection_paths = []
for client_id, path in collection_paths:
fqdn = client_id_to_fqdn.get(client_id, client_id)
fqdn_collection_paths.append((fqdn, path))
if not fqdn_collection_paths:
self.state.add_error('Nothing was extracted from the hunt archive',
critical=True)
return []
return fqdn_collection_paths |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_extra(cls, name=None):
"""Gets extra configuration parameters. These parameters should be loaded through load_extra or load_extra_data. Args: name: str, the name of the configuration data to load. Returns: A dictionary containing the requested configuration data. None if data was never loaded under that name. """ |
if not name:
return cls._extra_config
return cls._extra_config.get(name, None) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_extra(cls, filename):
"""Loads extra JSON configuration parameters from a file on the filesystem. Args: filename: str, the filename to open. Returns: bool: True if the extra configuration parameters were read. """ |
try:
with open(filename, 'rb') as configuration_file:
cls.load_extra_data(configuration_file.read())
sys.stderr.write("Config successfully loaded from {0:s}\n".format(
filename))
return True
except IOError:
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_extra_data(cls, data):
"""Loads extra JSON configuration parameters from a data buffer. The data buffer must represent a JSON object. Args: data: str, the buffer to load the JSON data from. """ |
try:
cls._extra_config.update(json.loads(data))
except ValueError as exception:
sys.stderr.write('Could not convert to JSON: {0!s}\n'.format(exception))
exit(-1) |
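The three config methods above form one class-level pattern: a shared `_extra_config` dict updated from JSON and read back by name. A reconstructed minimal version (class name and storage layout are assumptions based on the snippets):

```python
import json

# Minimal sketch of the class-level extra-config pattern above; the
# Config class name and _extra_config layout are reconstructed, not
# taken verbatim from the source project.
class Config:
    _extra_config = {}

    @classmethod
    def load_extra_data(cls, data):
        cls._extra_config.update(json.loads(data))

    @classmethod
    def get_extra(cls, name=None):
        if not name:
            return cls._extra_config
        return cls._extra_config.get(name, None)

Config.load_extra_data('{"grr": {"server": "https://grr.example.com"}}')
print(Config.get_extra('grr'))
```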
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_recipe(cls, recipe):
"""Registers a dftimewolf recipe. Args: recipe: imported python module representing the recipe. """ |
recipe_name = recipe.contents['name']
cls._recipe_classes[recipe_name] = (
recipe.contents, recipe.args, recipe.__doc__) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_client_by_hostname(self, hostname):
"""Search GRR by hostname and get the latest active client. Args: hostname: hostname to search for. Returns: GRR API Client object Raises: DFTimewolfError: if no client ID found for hostname. """ |
# Search for the hostname in GRR
print('Searching for client: {0:s}'.format(hostname))
try:
search_result = self.grr_api.SearchClients(hostname)
except grr_errors.UnknownError as exception:
self.state.add_error('Could not search for host {0:s}: {1!s}'.format(
hostname, exception
), critical=True)
return None
result = []
for client in search_result:
if hostname.lower() in client.data.os_info.fqdn.lower():
result.append((client.data.last_seen_at, client))
if not result:
self.state.add_error(
'Could not get client_id for {0:s}'.format(hostname), critical=True)
return None
last_seen, client = sorted(result, key=lambda x: x[0], reverse=True)[0]
# Remove microseconds and create datetime object
last_seen_datetime = datetime.datetime.utcfromtimestamp(
last_seen / 1000000)
# Timedelta between now and when the client was last seen, in minutes.
# First, count total seconds. This will return a float.
last_seen_seconds = (
datetime.datetime.utcnow() - last_seen_datetime).total_seconds()
last_seen_minutes = int(round(last_seen_seconds / 60))
print('Found active client: {0:s}'.format(client.client_id))
print('Client last seen: {0:s} ({1:d} minutes ago)'.format(
last_seen_datetime.strftime('%Y-%m-%dT%H:%M:%S+0000'),
last_seen_minutes))
return client |
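The last-seen arithmetic above relies on GRR reporting timestamps in microseconds since the epoch. A deterministic sketch with a fixed "now" (the timestamp value is illustrative):

```python
import datetime

# The last-seen arithmetic from _get_client_by_hostname: microsecond
# epoch timestamp to datetime, then "minutes ago" against a fixed now
# so the output is deterministic. The values are illustrative.
last_seen = 1_600_000_000_000_000  # microseconds since epoch
last_seen_datetime = datetime.datetime.utcfromtimestamp(last_seen / 1000000)
now = last_seen_datetime + datetime.timedelta(minutes=42, seconds=10)
last_seen_seconds = (now - last_seen_datetime).total_seconds()
last_seen_minutes = int(round(last_seen_seconds / 60))
print(last_seen_datetime.strftime('%Y-%m-%dT%H:%M:%S+0000'))
print(last_seen_minutes)
```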
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_clients(self, hosts):
"""Finds GRR clients given a list of hosts. Args: hosts: List of hostname FQDNs Returns: List of GRR client objects. """ |
# TODO(tomchop): Thread this
clients = []
for host in hosts:
clients.append(self._get_client_by_hostname(host))
return [client for client in clients if client is not None] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_client_by_id(self, client_id):
"""Get GRR client dictionary and make sure valid approvals exist. Args: client_id: GRR client ID. Returns: GRR API Client object """ |
client = self.grr_api.Client(client_id)
print('Checking for client approval')
self._check_approval_wrapper(client, client.ListFlows)
print('{0:s}: Client approval is valid'.format(client_id))
return client.Get() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _launch_flow(self, client, name, args):
"""Create specified flow, setting KeepAlive if requested. Args: client: GRR Client object on which to launch the flow. name: string containing flow name. args: proto (*FlowArgs) for type of flow, as defined in GRR flow proto. Returns: string containing ID of launched flow """ |
# Start the flow and get the flow ID
flow = self._check_approval_wrapper(
client, client.CreateFlow, name=name, args=args)
flow_id = flow.flow_id
print('{0:s}: Scheduled'.format(flow_id))
if self.keepalive:
keepalive_flow = client.CreateFlow(
name='KeepAlive', args=flows_pb2.KeepAliveArgs())
print('KeepAlive Flow:{0:s} scheduled'.format(keepalive_flow.flow_id))
return flow_id |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _await_flow(self, client, flow_id):
"""Awaits flow completion. Args: client: GRR Client object in which to await the flow. flow_id: string containing ID of flow to await. Raises: DFTimewolfError: if flow error encountered. """ |
# Wait for the flow to finish
print('{0:s}: Waiting to finish'.format(flow_id))
while True:
try:
status = client.Flow(flow_id).Get().data
except grr_errors.UnknownError:
msg = 'Unable to stat flow {0:s} for host {1:s}'.format(
flow_id, client.data.os_info.fqdn.lower())
self.state.add_error(msg)
raise DFTimewolfError(msg)
if status.state == flows_pb2.FlowContext.ERROR:
# TODO(jbn): If one artifact fails, what happens? Test.
message = status.context.backtrace
if 'ArtifactNotRegisteredError' in status.context.backtrace:
message = status.context.backtrace.split('\n')[-2]
raise DFTimewolfError(
'{0:s}: FAILED! Message from GRR:\n{1:s}'.format(
flow_id, message))
if status.state == flows_pb2.FlowContext.TERMINATED:
print('{0:s}: Complete'.format(flow_id))
break
time.sleep(self._CHECK_FLOW_INTERVAL_SEC) |
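Stripped of GRR specifics, `_await_flow` is a poll-until-terminal loop: fetch status, raise on ERROR, return on TERMINATED, otherwise sleep and retry. A generic sketch with a stubbed status source (state names are made up):

```python
# Generic sketch of the poll-until-terminated loop in _await_flow, with
# a stubbed status source instead of the GRR API. State names are made up.
ERROR, RUNNING, TERMINATED = 'ERROR', 'RUNNING', 'TERMINATED'
statuses = iter(['RUNNING', 'RUNNING', 'TERMINATED'])

def await_flow(get_status, sleep=lambda: None):
    while True:
        state = get_status()
        if state == ERROR:
            raise RuntimeError('flow failed')
        if state == TERMINATED:
            return 'complete'
        sleep()  # the real code sleeps _CHECK_FLOW_INTERVAL_SEC between polls

print(await_flow(lambda: next(statuses)))  # 'complete' after three polls
```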
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _download_files(self, client, flow_id):
"""Download files from the specified flow. Args: client: GRR Client object to which to download flow data from. flow_id: GRR flow ID. Returns: str: path of downloaded files. """ |
output_file_path = os.path.join(
self.output_path, '.'.join((flow_id, 'zip')))
if os.path.exists(output_file_path):
print('{0:s} already exists: Skipping'.format(output_file_path))
return None
flow = client.Flow(flow_id)
file_archive = flow.GetFilesArchive()
file_archive.WriteToFile(output_file_path)
# Unzip archive for processing and remove redundant zip
fqdn = client.data.os_info.fqdn.lower()
client_output_file = os.path.join(self.output_path, fqdn)
if not os.path.isdir(client_output_file):
os.makedirs(client_output_file)
with zipfile.ZipFile(output_file_path) as archive:
archive.extractall(path=client_output_file)
os.remove(output_file_path)
return client_output_file |
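The archive handling in `_download_files` is a write/extract/remove round-trip. The same shape, self-contained in a temporary directory (file names are made up):

```python
import os
import tempfile
import zipfile

# Round-trip sketch of the archive handling in _download_files: write a
# zip, extract it into a per-host directory, remove the archive. The
# flow ID, host name, and member path are made up.
with tempfile.TemporaryDirectory() as output_path:
    archive_path = os.path.join(output_path, 'F:12345.zip')
    with zipfile.ZipFile(archive_path, 'w') as archive:
        archive.writestr('etc/passwd', 'root:x:0:0')
    client_dir = os.path.join(output_path, 'host.example.com')
    os.makedirs(client_dir)
    with zipfile.ZipFile(archive_path) as archive:
        archive.extractall(path=client_dir)
    os.remove(archive_path)  # redundant once extracted
    extracted = os.path.join(client_dir, 'etc', 'passwd')
    ok = os.path.exists(extracted)
    print(ok)
```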
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup(self, hosts, artifacts, extra_artifacts, use_tsk, reason, grr_server_url, grr_username, grr_password, approvers=None, verify=True):
"""Initializes a GRR artifact collector. Args: hosts: Comma-separated list of hostnames to launch the flow on. artifacts: list of GRR-defined artifacts. extra_artifacts: list of GRR-defined artifacts to append. use_tsk: toggle for use_tsk flag on GRR flow. reason: justification for GRR access. grr_server_url: GRR server URL. grr_username: GRR username. grr_password: GRR password. approvers: list of GRR approval recipients. verify: boolean, whether to verify the GRR server's x509 certificate. """ |
super(GRRArtifactCollector, self).setup(
reason, grr_server_url, grr_username, grr_password, approvers=approvers,
verify=verify)
if artifacts is not None:
self.artifacts = [item.strip() for item in artifacts.strip().split(',')]
if extra_artifacts is not None:
self.extra_artifacts = [item.strip() for item
in extra_artifacts.strip().split(',')]
self.hostnames = [item.strip() for item in hosts.strip().split(',')]
self.use_tsk = use_tsk |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _process_thread(self, client):
"""Process a single GRR client. Args: client: a GRR client object. """ |
system_type = client.data.os_info.system
print('System type: {0:s}'.format(system_type))
# If the list is supplied by the user via a flag, honor that.
artifact_list = []
if self.artifacts:
print('Artifacts to be collected: {0!s}'.format(self.artifacts))
artifact_list = self.artifacts
else:
default_artifacts = self.artifact_registry.get(system_type, None)
if default_artifacts:
print('Collecting default artifacts for {0:s}: {1:s}'.format(
system_type, ', '.join(default_artifacts)))
artifact_list.extend(default_artifacts)
if self.extra_artifacts:
print('Throwing in an extra {0!s}'.format(self.extra_artifacts))
artifact_list.extend(self.extra_artifacts)
artifact_list = list(set(artifact_list))
if not artifact_list:
return
flow_args = flows_pb2.ArtifactCollectorFlowArgs(
artifact_list=artifact_list,
use_tsk=self.use_tsk,
ignore_interpolation_errors=True,
apply_parsers=False)
flow_id = self._launch_flow(client, 'ArtifactCollectorFlow', flow_args)
self._await_flow(client, flow_id)
collected_flow_data = self._download_files(client, flow_id)
if collected_flow_data:
print('{0!s}: Downloaded: {1:s}'.format(flow_id, collected_flow_data))
fqdn = client.data.os_info.fqdn.lower()
self.state.output.append((fqdn, collected_flow_data)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def process(self):
"""Collect the artifacts. Raises: DFTimewolfError: if no artifacts specified nor resolved by platform. """ |
threads = []
for client in self.find_clients(self.hostnames):
print(client)
thread = threading.Thread(target=self._process_thread, args=(client, ))
threads.append(thread)
thread.start()
for thread in threads:
thread.join() |
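The fan-out in `process` above is thread-per-client with a final join. The same shape with a toy worker in place of `_process_thread` (the lock is an addition, since concurrent appends to shared state are safer guarded):

```python
import threading

# Shape of the thread-per-client fan-out in process() above, with a toy
# worker standing in for _process_thread. The lock is an assumption
# added here to guard the shared results list.
results = []
lock = threading.Lock()

def process_thread(client):
    with lock:
        results.append(client.upper())

threads = []
for client in ['host-a', 'host-b', 'host-c']:
    thread = threading.Thread(target=process_thread, args=(client,))
    threads.append(thread)
    thread.start()
for thread in threads:
    thread.join()
print(sorted(results))  # ['HOST-A', 'HOST-B', 'HOST-C']
```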
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup(self, hosts, files, use_tsk, reason, grr_server_url, grr_username, grr_password, approvers=None, verify=True):
"""Initializes a GRR file collector. Args: hosts: Comma-separated list of hostnames to launch the flow on. files: list of file paths. use_tsk: toggle for use_tsk flag on GRR flow. reason: justification for GRR access. grr_server_url: GRR server URL. grr_username: GRR username. grr_password: GRR password. approvers: list of GRR approval recipients. verify: boolean, whether to verify the GRR server's x509 certificate. """ |
super(GRRFileCollector, self).setup(
reason, grr_server_url, grr_username, grr_password,
approvers=approvers, verify=verify)
if files is not None:
self.files = [item.strip() for item in files.strip().split(',')]
self.hostnames = [item.strip() for item in hosts.strip().split(',')]
self.use_tsk = use_tsk |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _process_thread(self, client):
"""Process a single client. Args: client: GRR client object to act on. """ |
file_list = self.files
if not file_list:
return
print('Filefinder to collect {0:d} items'.format(len(file_list)))
flow_action = flows_pb2.FileFinderAction(
action_type=flows_pb2.FileFinderAction.DOWNLOAD)
flow_args = flows_pb2.FileFinderArgs(
paths=file_list,
action=flow_action,)
flow_id = self._launch_flow(client, 'FileFinder', flow_args)
self._await_flow(client, flow_id)
collected_flow_data = self._download_files(client, flow_id)
if collected_flow_data:
print('{0!s}: Downloaded: {1:s}'.format(flow_id, collected_flow_data))
fqdn = client.data.os_info.fqdn.lower()
self.state.output.append((fqdn, collected_flow_data)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup(self, host, flow_id, reason, grr_server_url, grr_username, grr_password, approvers=None, verify=True):
"""Initializes a GRR flow collector. Args: host: hostname of machine. flow_id: ID of GRR flow to retrieve. reason: justification for GRR access. grr_server_url: GRR server URL. grr_username: GRR username. grr_password: GRR password. approvers: list of GRR approval recipients. verify: boolean, whether to verify the GRR server's x509 certificate. """ |
super(GRRFlowCollector, self).setup(
reason, grr_server_url, grr_username, grr_password,
approvers=approvers, verify=verify)
self.flow_id = flow_id
self.host = host |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def process(self):
"""Collect the results. Raises: DFTimewolfError: if no files specified """ |
client = self._get_client_by_hostname(self.host)
self._await_flow(client, self.flow_id)
collected_flow_data = self._download_files(client, self.flow_id)
if collected_flow_data:
print('{0:s}: Downloaded: {1:s}'.format(
self.flow_id, collected_flow_data))
fqdn = client.data.os_info.fqdn.lower()
self.state.output.append((fqdn, collected_flow_data)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def process(self):
"""Copy a disk to the analysis project.""" |
for disk in self.disks_to_copy:
print("Disk copy of {0:s} started...".format(disk.name))
snapshot = disk.snapshot()
new_disk = self.analysis_project.create_disk_from_snapshot(
snapshot, disk_name_prefix="incident" + self.incident_id)
self.analysis_vm.attach_disk(new_disk)
snapshot.delete()
print("Disk {0:s} successfully copied to {1:s}".format(
disk.name, new_disk.name))
self.state.output.append((self.analysis_vm.name, new_disk)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup(self, analysis_project_name, remote_project_name, incident_id, zone, boot_disk_size, cpu_cores, remote_instance_name=None, disk_names=None, all_disks=False, image_project="ubuntu-os-cloud", image_family="ubuntu-1604-lts"):
"""Sets up a Google cloud collector. This method creates and starts an analysis VM in the analysis project and selects disks to copy from the remote project. If disk_names is specified, it will copy the corresponding disks from the project, ignoring disks belonging to any specific instances. If remote_instance_name is specified, two behaviors are possible: - If no other parameters are specified, it will select the instance's boot disk - if all_disks is set to True, it will select all disks in the project that are attached to the instance disk_names takes precedence over instance_names Args: analysis_project_name: The name of the project that contains the analysis VM (string). remote_project_name: The name of the remote project where the disks must be copied from (string). incident_id: The incident ID on which the name of the analysis VM will be based (string). zone: The zone in which new resources should be created (string). boot_disk_size: The size of the analysis VM boot disk (in GB) (float). cpu_cores: The number of CPU cores to create the machine with. remote_instance_name: The name of the instance in the remote project containing the disks to be copied (string). disk_names: Comma separated string with disk names to copy (string). all_disks: Copy all disks attached to the source instance (bool). image_project: Name of the project where the analysis VM image is hosted. image_family: Name of the image to use to create the analysis VM. """ |
disk_names = disk_names.split(",") if disk_names else []
self.analysis_project = libcloudforensics.GoogleCloudProject(
analysis_project_name, default_zone=zone)
remote_project = libcloudforensics.GoogleCloudProject(
remote_project_name)
if not (remote_instance_name or disk_names):
self.state.add_error(
"You need to specify at least an instance name or disks to copy",
critical=True)
return
self.incident_id = incident_id
analysis_vm_name = "gcp-forensics-vm-{0:s}".format(incident_id)
print("Your analysis VM will be: {0:s}".format(analysis_vm_name))
print("Complimentary gcloud command:")
print("gcloud compute ssh --project {0:s} {1:s} --zone {2:s}".format(
analysis_project_name,
analysis_vm_name,
zone))
try:
# TODO: Make creating an analysis VM optional
# pylint: disable=too-many-function-args
self.analysis_vm, _ = libcloudforensics.start_analysis_vm(
self.analysis_project.project_id,
analysis_vm_name,
zone,
boot_disk_size,
int(cpu_cores),
attach_disk=None,
image_project=image_project,
image_family=image_family)
if disk_names:
for name in disk_names:
try:
self.disks_to_copy.append(remote_project.get_disk(name))
except RuntimeError:
self.state.add_error(
"Disk '{0:s}' was not found in project {1:s}".format(
name, remote_project_name),
critical=True)
break
elif remote_instance_name:
remote_instance = remote_project.get_instance(
remote_instance_name)
if all_disks:
self.disks_to_copy = [
remote_project.get_disk(disk_name)
for disk_name in remote_instance.list_disks()
]
else:
self.disks_to_copy = [remote_instance.get_boot_disk()]
if not self.disks_to_copy:
self.state.add_error("Could not find any disks to copy",
critical=True)
except AccessTokenRefreshError as err:
self.state.add_error("Something is wrong with your gcloud access token.")
self.state.add_error(err, critical=True)
except ApplicationDefaultCredentialsError as err:
self.state.add_error("Something is wrong with your Application Default "
"Credentials. Try running:\n"
" $ gcloud auth application-default login")
self.state.add_error(err, critical=True)
except HttpError as err:
if err.resp.status == 403:
self.state.add_error(
"Make sure you have the appropriate permissions on the project")
if err.resp.status == 404:
self.state.add_error(
"GCP resource not found. Maybe a typo in the project / instance / "
"disk name?")
self.state.add_error(err, critical=True) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup(self, timezone=None):  # pylint: disable=arguments-differ
"""Sets up the _timezone attribute. Args: timezone: Timezone name (optional) """ |
self._timezone = timezone
self._output_path = tempfile.mkdtemp() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def process(self):
"""Execute the Plaso process.""" |
for description, path in self.state.input:
log_file_path = os.path.join(self._output_path, 'plaso.log')
print('Log file: {0:s}'.format(log_file_path))
# Build the plaso command line.
cmd = ['log2timeline.py']
# Since we might be running alongside another Module, always disable
# the status view.
cmd.extend(['-q', '--status_view', 'none'])
if self._timezone:
cmd.extend(['-z', self._timezone])
# Analyze all available partitions.
cmd.extend(['--partition', 'all'])
# Setup logging.
cmd.extend(['--logfile', log_file_path])
# And now, the crux of the command.
# Generate a new storage file for each plaso run
plaso_storage_file_path = os.path.join(
self._output_path, '{0:s}.plaso'.format(uuid.uuid4().hex))
cmd.extend([plaso_storage_file_path, path])
# Run the l2t command
full_cmd = ' '.join(cmd)
print('Running external command: "{0:s}"'.format(full_cmd))
try:
l2t_proc = subprocess.Popen(
cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
_, error = l2t_proc.communicate()
l2t_status = l2t_proc.wait()
if l2t_status:
# self.console_out.StdErr(errors)
message = ('The log2timeline command {0:s} failed: {1:s}.'
' Check log file for details.').format(full_cmd, error)
self.state.add_error(message, critical=True)
self.state.output.append((description, plaso_storage_file_path))
except OSError as exception:
self.state.add_error(exception, critical=True)
# Catch all remaining errors since we want to gracefully report them
except Exception as exception: # pylint: disable=broad-except
self.state.add_error(exception, critical=True) |
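The Popen/communicate/wait pattern used to shell out to log2timeline.py can be exercised without Plaso installed; this sketch substitutes the Python interpreter for the external command so it runs anywhere:

```python
import subprocess
import sys

# Run a child process the same way the module runs log2timeline.py:
# capture stderr, then inspect the exit status. The current Python
# interpreter stands in for the external tool.
cmd = [sys.executable, '-c', 'import sys; sys.stderr.write("boom"); sys.exit(3)']
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
_, error = proc.communicate()   # stderr bytes, like `error` above
status = proc.wait()            # non-zero status signals failure
```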
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup(self, reason, grr_server_url, grr_username, grr_password, approvers=None, verify=True):
"""Initializes a GRR hunt result collector. Args: reason: justification for GRR access. grr_server_url: GRR server URL. grr_username: GRR username. grr_password: GRR password. approvers: list of GRR approval recipients. verify: boolean, whether to verify the GRR server's x509 certificate. """ |
grr_auth = (grr_username, grr_password)
self.approvers = []
if approvers:
self.approvers = [item.strip() for item in approvers.strip().split(',')]
self.grr_api = grr_api.InitHttp(api_endpoint=grr_server_url,
auth=grr_auth,
verify=verify)
self.output_path = tempfile.mkdtemp()
self.reason = reason |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_approval_wrapper(self, grr_object, grr_function, *args, **kwargs):
"""Wraps a call to GRR functions checking for approval. Args: grr_object: the GRR object to create the eventual approval on. grr_function: The GRR function requiring approval. *args: Positional arguments that are to be passed to `grr_function`. **kwargs: Keyword arguments that are to be passed to `grr_function`. Returns: The return value of the execution of grr_function(*args, **kwargs). """ |
approval_sent = False
while True:
try:
return grr_function(*args, **kwargs)
except grr_errors.AccessForbiddenError as exception:
print('No valid approval found: {0!s}'.format(exception))
# If approval was already sent, just wait a bit more.
if approval_sent:
print('Approval not yet granted, waiting {0:d}s'.format(
self._CHECK_APPROVAL_INTERVAL_SEC))
time.sleep(self._CHECK_APPROVAL_INTERVAL_SEC)
continue
# If no approvers were specified, abort.
if not self.approvers:
message = ('GRR needs approval but no approvers specified '
'(hint: use --approvers)')
self.state.add_error(message, critical=True)
return None
# Otherwise, send a request for approval
grr_object.CreateApproval(
reason=self.reason, notified_users=self.approvers)
approval_sent = True
print('{0!s}: approval request sent to: {1!s} (reason: {2:s})'.format(
grr_object, self.approvers, self.reason)) |
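Stripped of GRR specifics, the approval loop reduces to a retry wrapper: the first access failure triggers a single approval request, later failures just retry. This is a hypothetical, GRR-free sketch (the sleep and infinite loop are replaced by a bounded retry so it terminates):

```python
import itertools

class AccessForbidden(Exception):
    """Stand-in for grr_errors.AccessForbiddenError."""

def check_approval_wrapper(func, request_approval, max_attempts=10):
    """Retry func() until it stops raising AccessForbidden."""
    approval_sent = False
    for _ in range(max_attempts):
        try:
            return func()
        except AccessForbidden:
            if not approval_sent:
                # First failure: send exactly one approval request.
                request_approval()
                approval_sent = True
    return None

attempts = itertools.count()
approvals = []

def flaky():
    # Fails twice (approval not yet granted), then succeeds.
    if next(attempts) < 2:
        raise AccessForbidden()
    return 'flow-id-1234'

result = check_approval_wrapper(flaky, lambda: approvals.append('sent'))
```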
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _create_session(self, username, password):
"""Create HTTP session. Args: username (str):
Timesketch username password (str):
Timesketch password Returns: requests.Session: Session object. """ |
session = requests.Session()
session.verify = False  # Depending on the deployment, the SSL cert may not be verifiable
try:
response = session.get(self.host_url)
except requests.exceptions.ConnectionError:
return False
# Get the CSRF token from the response
soup = BeautifulSoup(response.text, 'html.parser')
csrf_token = soup.find('input', dict(name='csrf_token'))['value']
login_data = dict(username=username, password=password)
session.headers.update({
'x-csrftoken': csrf_token,
'referer': self.host_url
})
_ = session.post('{0:s}/login/'.format(self.host_url), data=login_data)
return session |
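The CSRF extraction above relies on BeautifulSoup; the same token can be pulled out with only the stdlib `html.parser`, as this sketch shows (the login page markup below is invented for illustration):

```python
from html.parser import HTMLParser

class CsrfTokenParser(HTMLParser):
    """Extract the value of <input name="csrf_token"> from a login page."""

    def __init__(self):
        super().__init__()
        self.token = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'input' and attrs.get('name') == 'csrf_token':
            self.token = attrs.get('value')

login_page = (
    '<form method="post">'
    '<input type="hidden" name="csrf_token" value="abc123">'
    '</form>'
)
parser = CsrfTokenParser()
parser.feed(login_page)
```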
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_sketch(self, name, description):
"""Create a new sketch with the specified name and description. Args: name (str):
Title of sketch description (str):
Description of sketch Returns: int: ID of created sketch """ |
resource_url = '{0:s}/sketches/'.format(self.api_base_url)
form_data = {'name': name, 'description': description}
response = self.session.post(resource_url, json=form_data)
response_dict = response.json()
sketch_id = response_dict['objects'][0]['id']
return sketch_id |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def upload_timeline(self, timeline_name, plaso_storage_path):
"""Create a timeline with the specified name from the given plaso file. Args: timeline_name (str):
Name of timeline plaso_storage_path (str):
Local path of plaso file to be uploaded Returns: int: ID of uploaded timeline Raises: RuntimeError: When the JSON response from Timesketch cannot be decoded. """ |
resource_url = '{0:s}/upload/'.format(self.api_base_url)
files = {'file': open(plaso_storage_path, 'rb')}
data = {'name': timeline_name}
response = self.session.post(resource_url, files=files, data=data)
try:
response_dict = response.json()
except ValueError:
raise RuntimeError(
'Could not decode JSON response from Timesketch'
' (Status {0:d}):\n{1:s}'.format(
response.status_code, response.content))
index_id = response_dict['objects'][0]['id']
return index_id |
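The decode-or-raise pattern in `upload_timeline` can be shown in isolation; this hypothetical helper takes a status code and a text body rather than a `requests.Response` object:

```python
import json

def decode_json_response(status_code, content):
    """Decode a JSON body, raising RuntimeError on garbage.

    Mirrors the error handling in upload_timeline, but operates on a
    plain str body instead of a requests.Response.
    """
    try:
        return json.loads(content)
    except ValueError:
        raise RuntimeError(
            'Could not decode JSON response from Timesketch'
            ' (Status {0:d}):\n{1:s}'.format(status_code, content))

ok = decode_json_response(200, '{"objects": [{"id": 7}]}')
try:
    decode_json_response(500, '<html>Internal Server Error</html>')
    failed = False
except RuntimeError:
    failed = True
```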
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def export_artifacts(self, processed_artifacts, sketch_id):
"""Upload provided artifacts to specified, or new if non-existent, sketch. Args: processed_artifacts: List of (timeline_name, artifact_path) tuples sketch_id: ID of sketch to append the timeline to Returns: int: ID of sketch. """ |
# Export processed timeline(s)
for timeline_name, artifact_path in processed_artifacts:
print('Uploading {0:s} to timeline {1:s}'.format(
artifact_path, timeline_name))
new_timeline_id = self.upload_timeline(timeline_name, artifact_path)
self.add_timeline_to_sketch(sketch_id, new_timeline_id)
return sketch_id |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_timeline_to_sketch(self, sketch_id, index_id):
"""Associate the specified timeline and sketch. Args: sketch_id (int):
ID of sketch index_id (int):
ID of timeline to add to sketch """ |
resource_url = '{0:s}/sketches/{1:d}/timelines/'.format(
self.api_base_url, sketch_id)
form_data = {'timeline': [index_id]}
self.session.post(resource_url, json=form_data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_sketch(self, sketch_id):
"""Get information on the specified sketch. Args: sketch_id (int):
ID of sketch Returns: dict: Dictionary of sketch information Raises: ValueError: Sketch is inaccessible """ |
resource_url = '{0:s}/sketches/{1:d}/'.format(self.api_base_url, sketch_id)
response = self.session.get(resource_url)
response_dict = response.json()
try:
response_dict['objects']
except KeyError:
raise ValueError('Sketch does not exist or you have no access')
return response_dict |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup(self, keywords=None):  # pylint: disable=arguments-differ
"""Sets up the _keywords attribute. Args: keywords: pipe-separated list of keywords to search """ |
self._keywords = keywords
self._output_path = tempfile.mkdtemp() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def process(self):
"""Execute the grep command""" |
for _, path in self.state.input:
log_file_path = os.path.join(self._output_path, 'grepper.log')
print('Log file: {0:s}'.format(log_file_path))
print('Walking through dir (absolute) = ' + os.path.abspath(path))
try:
for root, _, files in os.walk(path):
for filename in files:
found = set()
fullpath = '{0:s}/{1:s}'.format(os.path.abspath(root), filename)
if mimetypes.guess_type(filename)[0] == 'application/pdf':
found = self.grepPDF(fullpath)
else:
with open(fullpath, 'r') as fp:
for line in fp:
found.update(set(x.lower() for x in re.findall(
self._keywords, line, re.IGNORECASE)))
if [item for item in found if item]:
output = '{0:s}/{1:s}:{2:s}'.format(path, filename, ','.join(
filter(None, found)))
if self._final_output:
self._final_output += '\n' + output
else:
self._final_output = output
print(output)
except OSError as exception:
self.state.add_error(exception, critical=True)
return
# Catch all remaining errors since we want to gracefully report them
except Exception as exception: # pylint: disable=broad-except
self.state.add_error(exception, critical=True)
return |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def grepPDF(self, path):
""" Parse PDF files text content for keywords. Args: path: PDF file path. Returns: match: set of unique occurrences of every match. """ |
with open(path, 'rb') as pdf_file_obj:
match = set()
text = ''
pdf_reader = PyPDF2.PdfFileReader(pdf_file_obj)
pages = pdf_reader.numPages
for page in range(pages):
page_obj = pdf_reader.getPage(page)
text += '\n' + page_obj.extractText()
match.update(set(x.lower() for x in re.findall(
self._keywords, text, re.IGNORECASE)))
return match |
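The same `re.findall` idiom works on any text, not just PDF contents; `find_keywords` is a hypothetical helper mirroring the grepper's matching of a pipe-separated keyword pattern:

```python
import re

def find_keywords(keywords, text):
    """Return lowercased, unique matches of a pipe-separated keyword list.

    Same re.findall/IGNORECASE pattern the grepper applies to file contents.
    """
    return set(x.lower() for x in re.findall(keywords, text, re.IGNORECASE))

matches = find_keywords(
    'secret|password', 'The PASSWORD field holds a Secret password.')
```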
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_recipe(self, recipe):
"""Populates the internal module pool with modules declared in a recipe. Args: recipe: Dict, recipe declaring modules to load. """ |
self.recipe = recipe
for module_description in recipe['modules']:
# Combine CLI args with args from the recipe description
module_name = module_description['name']
module = self.config.get_module(module_name)(self)
self._module_pool[module_name] = module |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def store_container(self, container):
"""Thread-safe method to store data in the state's store. Args: container (containers.interface.AttributeContainer):
The data to store. """ |
with self._store_lock:
self.store.setdefault(container.CONTAINER_TYPE, []).append(container) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_containers(self, container_class):
"""Thread-safe method to retrieve data from the state's store. Args: container_class: AttributeContainer class used to filter data. Returns: A list of AttributeContainer objects of matching CONTAINER_TYPE. """ |
with self._store_lock:
return self.store.get(container_class.CONTAINER_TYPE, []) |
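Together, `store_container` and `get_containers` amount to a lock-protected dict of lists; this self-contained sketch exercises the same pattern from several threads (the class and keys are invented for illustration):

```python
import threading

class ContainerStore:
    """Minimal sketch of the state's lock-protected container store."""

    def __init__(self):
        self._store_lock = threading.Lock()
        self.store = {}

    def store_container(self, container_type, data):
        # setdefault creates the list on first use, as in the module above.
        with self._store_lock:
            self.store.setdefault(container_type, []).append(data)

    def get_containers(self, container_type):
        with self._store_lock:
            return self.store.get(container_type, [])

store = ContainerStore()
threads = [
    threading.Thread(target=store.store_container, args=('report', i))
    for i in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```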
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup_modules(self, args):
"""Performs setup tasks for each module in the module pool. Threads declared modules' setup() functions. Takes CLI arguments into account when replacing recipe parameters for each module. Args: args: Command line arguments that will be used to replace the parameters declared in the recipe. """ |
def _setup_module_thread(module_description):
"""Calls the module's setup() function and sets an Event object for it.
Args:
module_description (dict): Corresponding recipe module description.
"""
new_args = utils.import_args_from_dict(
module_description['args'], vars(args), self.config)
module = self._module_pool[module_description['name']]
try:
module.setup(**new_args)
except Exception as error: # pylint: disable=broad-except
self.add_error(
'An unknown error occurred: {0!s}\nFull traceback:\n{1:s}'.format(
error, traceback.format_exc()),
critical=True)
self.events[module_description['name']] = threading.Event()
self.cleanup()
threads = []
for module_description in self.recipe['modules']:
t = threading.Thread(
target=_setup_module_thread,
args=(module_description, )
)
threads.append(t)
t.start()
for t in threads:
t.join()
self.check_errors(is_global=True) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run_modules(self):
"""Performs the actual processing for each module in the module pool.""" |
def _run_module_thread(module_description):
"""Runs the module's process() function.
Waits for any blockers to have finished before running process(), then
sets an Event flag declaring the module has completed.
"""
for blocker in module_description['wants']:
self.events[blocker].wait()
module = self._module_pool[module_description['name']]
try:
module.process()
except DFTimewolfError as error:
self.add_error(error.message, critical=True)
except Exception as error: # pylint: disable=broad-except
self.add_error(
'An unknown error occurred: {0!s}\nFull traceback:\n{1:s}'.format(
error, traceback.format_exc()),
critical=True)
print('Module {0:s} completed'.format(module_description['name']))
self.events[module_description['name']].set()
self.cleanup()
threads = []
for module_description in self.recipe['modules']:
t = threading.Thread(
target=_run_module_thread,
args=(module_description, )
)
threads.append(t)
t.start()
for t in threads:
t.join()
self.check_errors(is_global=True) |
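The blocker mechanism boils down to `threading.Event`: a module waits on the events of its dependencies and sets its own when done. A minimal two-module sketch with invented module names:

```python
import threading

events = {'collector': threading.Event(), 'processor': threading.Event()}
order = []

def run_module(name, wants):
    # Block until every declared dependency has set its Event,
    # then flag our own completion -- same scheme as run_modules above.
    for blocker in wants:
        events[blocker].wait()
    order.append(name)
    events[name].set()

processor = threading.Thread(target=run_module, args=('processor', ['collector']))
collector = threading.Thread(target=run_module, args=('collector', []))
processor.start()  # started first, but must wait for the collector
collector.start()
processor.join()
collector.join()
```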
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_error(self, error, critical=False):
"""Adds an error to the state. Args: error: The text that will be added to the error list. critical: If set to True and the error is checked with check_errors, will dfTimewolf will abort. """ |
self.errors.append((error, critical)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cleanup(self):
"""Basic cleanup after modules. The state's output becomes the input for the next stage. Any errors are moved to the global_errors attribute so that they can be reported at a later stage. """ |
# Move any existing errors to global errors
self.global_errors.extend(self.errors)
self.errors = []
# Make the previous module's output available to the next module
self.input = self.output
self.output = [] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_errors(self, is_global=False):
"""Checks for errors and exits if any of them are critical. Args: is_global: If True, check the global_errors attribute. If false, check the error attribute. """ |
errors = self.global_errors if is_global else self.errors
if errors:
print('dfTimewolf encountered one or more errors:')
for error, critical in errors:
print('{0:s} {1:s}'.format('CRITICAL: ' if critical else '', error))
if critical:
print('Critical error found. Aborting.')
sys.exit(-1) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup(self, disk_name, project, turbinia_zone):
"""Sets up the object attributes. Args: disk_name (string):
Name of the disk to process project (string):
The project containing the disk to process turbinia_zone (string):
The zone containing the disk to process """ |
# TODO: Consider the case when multiple disks are provided by the previous
# module or by the CLI.
if project is None or turbinia_zone is None:
self.state.add_error(
'project or turbinia_zone is not specified, bailing out',
critical=True)
return
self.disk_name = disk_name
self.project = project
self.turbinia_zone = turbinia_zone
try:
turbinia_config.LoadConfig()
self.turbinia_region = turbinia_config.TURBINIA_REGION
self.instance = turbinia_config.PUBSUB_TOPIC
if turbinia_config.PROJECT != self.project:
self.state.add_error(
'Specified project {0:s} does not match Turbinia configured '
'project {1:s}. Use gcp_turbinia_import recipe to copy the disk '
'into the same project.'.format(
self.project, turbinia_config.PROJECT), critical=True)
return
self._output_path = tempfile.mkdtemp()
self.client = turbinia_client.TurbiniaClient()
except TurbiniaException as e:
self.state.add_error(e, critical=True)
return |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _print_task_data(self, task):
"""Pretty-prints task data. Args: task: Task dict generated by Turbinia. """ |
print(' {0:s} ({1:s})'.format(task['name'], task['id']))
paths = task.get('saved_paths', [])
if not paths:
return
for path in paths:
if path.endswith('worker-log.txt'):
continue
if path.endswith('{0:s}.log'.format(task.get('id'))):
continue
if path.startswith('/'):
continue
print(' ' + path) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def display_task_progress(self, instance, project, region, request_id=None, user=None, poll_interval=60):
"""Displays the overall progress of tasks in a Turbinia job. Args: instance (string): The name of the Turbinia instance. project (string): The project containing the disk to process. region (string): Region where Turbinia is configured. request_id (string): The request ID provided by Turbinia. user (string): The username to filter tasks by. poll_interval (int): The interval at which to poll for new results. """ |
total_completed = 0
while True:
task_results = self.client.get_task_data(
instance, project, region, request_id=request_id, user=user)
tasks = {task['id']: task for task in task_results}
completed_tasks = set()
pending_tasks = set()
for task in tasks.values():
if task.get('successful') is not None:
completed_tasks.add(task['id'])
else:
pending_tasks.add(task['id'])
if len(completed_tasks) > total_completed or not completed_tasks:
total_completed = len(completed_tasks)
print('Task status update (completed: {0:d} | pending: {1:d})'.format(
len(completed_tasks), len(pending_tasks)))
print('Completed tasks:')
for task_id in completed_tasks:
self._print_task_data(tasks[task_id])
print('Pending tasks:')
for task_id in pending_tasks:
self._print_task_data(tasks[task_id])
if len(completed_tasks) == len(task_results) and completed_tasks:
print('All {0:d} Tasks completed'.format(len(task_results)))
return
time.sleep(poll_interval) |
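The completed/pending split is worth isolating: a task counts as completed once its 'successful' key is set, whether True or False. A hypothetical standalone version:

```python
def partition_tasks(task_results):
    """Split Turbinia task dicts into completed and pending ID sets.

    A task is complete once 'successful' is set (True or False), matching
    the check in display_task_progress above.
    """
    completed, pending = set(), set()
    for task in task_results:
        if task.get('successful') is not None:
            completed.add(task['id'])
        else:
            pending.add(task['id'])
    return completed, pending

completed, pending = partition_tasks([
    {'id': 'a', 'successful': True},
    {'id': 'b', 'successful': False},  # failed, but no longer pending
    {'id': 'c', 'successful': None},
])
```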
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_help():
"""Generates help text with alphabetically sorted recipes.""" |
help_text = '\nAvailable recipes:\n\n'
recipes = config.Config.get_registered_recipes()
for contents, _, _ in sorted(recipes, key=lambda k: k[0]['name']):
help_text += ' {0:<35s}{1:s}\n'.format(
contents['name'], contents.get('short_description', 'No description'))
return help_text |
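The sorting and column alignment can be checked with fabricated recipe tuples (the names and descriptions below are invented):

```python
recipes = [
    ({'name': 'local_plaso', 'short_description': 'Processes a local disk image'}, None, None),
    ({'name': 'grr_artifacts', 'short_description': 'Fetches GRR artifacts'}, None, None),
    ({'name': 'aws_snapshot'}, None, None),  # no description provided
]

help_text = '\nAvailable recipes:\n\n'
for contents, _, _ in sorted(recipes, key=lambda k: k[0]['name']):
    # '{0:<35s}' left-aligns the recipe name in a 35-character column.
    help_text += ' {0:<35s}{1:s}\n'.format(
        contents['name'], contents.get('short_description', 'No description'))
```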
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def main():
"""Main function for DFTimewolf.""" |
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
description=generate_help())
subparsers = parser.add_subparsers()
for registered_recipe in config.Config.get_registered_recipes():
recipe, recipe_args, documentation = registered_recipe
subparser = subparsers.add_parser(
recipe['name'],
formatter_class=utils.DFTimewolfFormatterClass,
description='{0:s}'.format(documentation))
subparser.set_defaults(recipe=recipe)
for switch, help_text, default in recipe_args:
subparser.add_argument(switch, help=help_text, default=default)
# Override recipe defaults with those specified in Config
# so that they can in turn be overridden in the commandline
subparser.set_defaults(**config.Config.get_extra())
args = parser.parse_args()
recipe = args.recipe
state = DFTimewolfState(config.Config)
print('Loading recipes...')
state.load_recipe(recipe)
print('Loaded recipe {0:s} with {1:d} modules'.format(
recipe['name'], len(recipe['modules'])))
print('Setting up modules...')
state.setup_modules(args)
print('Modules successfully set up!')
print('Running modules...')
state.run_modules()
print('Recipe {0:s} executed successfully.'.format(recipe['name'])) |
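The subparser wiring is the interesting part of `main()`: `set_defaults(recipe=...)` attaches the recipe dict to the parsed namespace. A runnable sketch with a made-up recipe and switch:

```python
import argparse

# Each recipe becomes its own subcommand; set_defaults() stashes the
# recipe dict on the parsed namespace, exactly as main() does above.
recipes = {
    'local_plaso': {'name': 'local_plaso', 'modules': []},
}
parser = argparse.ArgumentParser(description='dfTimewolf sketch')
subparsers = parser.add_subparsers()
for name, recipe in recipes.items():
    subparser = subparsers.add_parser(name)
    subparser.set_defaults(recipe=recipe)
    subparser.add_argument('paths', help='Paths to process', default=None)

args = parser.parse_args(['local_plaso', '/tmp/image.dd'])
```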
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup(self, endpoint=None, username=None, password=None, incident_id=None, sketch_id=None):  # pylint: disable=arguments-differ
"""Setup a connection to a Timesketch server and create a sketch if needed. Args: endpoint: str, Timesketch endpoint (e.g. http://timesketch.com/). username: str, Username to authenticate against the Timesketch endpoint. password: str, Password to authenticate against the Timesketch endpoint. incident_id: str, Incident ID or reference. Used in sketch description. sketch_id: int, Sketch ID to add the resulting timeline to. If not provided, a new sketch is created. """ |
self.timesketch_api = timesketch_utils.TimesketchApiClient(
endpoint, username, password)
self.incident_id = incident_id
self.sketch_id = int(sketch_id) if sketch_id else None
# Check that we have a timesketch session
if not self.timesketch_api.session:
message = 'Could not connect to Timesketch server at ' + endpoint
self.state.add_error(message, critical=True)
return
if not self.sketch_id: # No sketch id is provided, create it
if incident_id:
sketch_name = 'Sketch for incident ID: ' + incident_id
else:
sketch_name = 'Untitled sketch'
sketch_description = 'Sketch generated by dfTimewolf'
self.sketch_id = self.timesketch_api.create_sketch(
sketch_name, sketch_description)
print('Sketch {0:d} created'.format(self.sketch_id)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def process(self):
"""Executes a Timesketch export.""" |
# This is not the best way of catching errors, but timesketch_utils will be
# deprecated soon.
# TODO(tomchop): Consider using the official Timesketch python API.
if not self.timesketch_api.session:
message = 'Could not connect to Timesketch server'
self.state.add_error(message, critical=True)
named_timelines = []
for description, path in self.state.input:
if not description:
description = 'untitled timeline for ' + path
named_timelines.append((description, path))
try:
self.timesketch_api.export_artifacts(named_timelines, self.sketch_id)
except RuntimeError as e:
self.state.add_error(
'Error occurred while working with Timesketch: {0:s}'.format(str(e)),
critical=True)
return
sketch_url = self.timesketch_api.get_sketch_url(self.sketch_id)
print('Your Timesketch URL is: {0:s}'.format(sketch_url))
self.state.output = sketch_url |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup(self, target_directory=None):  # pylint: disable=arguments-differ
"""Sets up the _target_directory attribute. Args: target_directory: Directory in which collected files will be dumped. """ |
self._target_directory = target_directory
if not target_directory:
self._target_directory = tempfile.mkdtemp()
elif not os.path.exists(target_directory):
try:
os.makedirs(target_directory)
except OSError as exception:
message = 'An unknown error occurred: {0!s}'.format(exception)
self.state.add_error(message, critical=True) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _copy_file_or_directory(self, source, destination_directory):
"""Recursively copies files from source to destination_directory. Args: source: source file or directory to copy into destination_directory destination_directory: destination directory in which to copy source """ |
if os.path.isdir(source):
for item in os.listdir(source):
full_source = os.path.join(source, item)
full_destination = os.path.join(destination_directory, item)
shutil.copytree(full_source, full_destination)
else:
shutil.copy2(source, destination_directory) |
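Note that the directory branch above calls `shutil.copytree()` on every child, which raises `NotADirectoryError` when a child is a plain file. A standalone sketch that guards against that case:

```python
import os
import shutil


def copy_file_or_directory(source, destination_directory):
    """Standalone sketch of the collector copy above. Unlike the
    original, plain files inside a directory are copied with copy2();
    copytree() only ever sees subdirectories."""
    if os.path.isdir(source):
        for item in os.listdir(source):
            full_source = os.path.join(source, item)
            full_destination = os.path.join(destination_directory, item)
            if os.path.isdir(full_source):
                shutil.copytree(full_source, full_destination)
            else:
                shutil.copy2(full_source, full_destination)
    else:
        shutil.copy2(source, destination_directory)
```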
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def import_args_from_dict(value, args, config):
"""Replaces some arguments by those specified by a key-value dictionary. This function will be recursively called on a dictionary looking for any value containing a "$" variable. If found, the value will be replaced by the attribute in "args" of the same name. It is used to load arguments from the CLI and any extra configuration parameters passed in recipes. Args: value: The value of a {key: value} dictionary. This is passed recursively and may change in nature: string, list, or dict. The top-level variable should be the dictionary that is supposed to be recursively traversed. args: A {key: value} dictionary used to do replacements. config: A dftimewolf.Config class containing configuration information Returns: The first caller of the function will receive a dictionary in which strings starting with "@" are replaced by the parameters in args. """ |
if isinstance(value, six.string_types):
for match in TOKEN_REGEX.finditer(str(value)):
token = match.group(1)
if token in args:
actual_param = args[token]
if isinstance(actual_param, six.string_types):
value = value.replace("@"+token, args[token])
else:
value = actual_param
elif isinstance(value, list):
return [import_args_from_dict(item, args, config) for item in value]
elif isinstance(value, dict):
return {
key: import_args_from_dict(val, args, config)
for key, val in value.items()
}
elif isinstance(value, tuple):
return tuple(import_args_from_dict(val, args, config) for val in value)
return value |
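The string branch of the recursion above can be illustrated in isolation. The `TOKEN_REGEX` pattern below is an assumption (in dftimewolf it is defined elsewhere in the module), and `substitute` is an illustrative name:

```python
import re

# Assumed token pattern: "@" followed by a word-character name.
TOKEN_REGEX = re.compile(r'@([\w]+)')


def substitute(value, args):
    """Minimal sketch of the @-token substitution above, strings only.
    A non-string replacement value replaces the whole string, as in
    import_args_from_dict()."""
    for match in TOKEN_REGEX.finditer(str(value)):
        token = match.group(1)
        if token in args:
            actual = args[token]
            if isinstance(actual, str):
                value = value.replace('@' + token, actual)
            else:
                value = actual
    return value
```

Tokens with no entry in `args` are left untouched, matching the original's behavior.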
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_version():
""" Simple function to extract the current version using regular expressions. """ |
reg = re.compile(r'__version__ = [\'"]([^\'"]*)[\'"]')
with open('requests_kerberos/__init__.py') as fd:
matches = list(filter(lambda x: x, map(reg.match, fd)))
if not matches:
raise RuntimeError(
'Could not find the version information for requests_kerberos'
)
return matches[0].group(1) |
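The same regex scan works over any iterable of lines; a sketch decoupled from the real `__init__.py` (function name illustrative):

```python
import re

VERSION_RE = re.compile(r'__version__ = [\'"]([^\'"]*)[\'"]')


def extract_version(lines):
    """Sketch of the version scan above: keep lines matching the
    __version__ assignment pattern and return the first capture."""
    matches = [m for m in (VERSION_RE.match(line) for line in lines) if m]
    if not matches:
        raise RuntimeError('Could not find the version information')
    return matches[0].group(1)
```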
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _negotiate_value(response):
"""Extracts the gssapi authentication token from the appropriate header""" |
if hasattr(_negotiate_value, 'regex'):
regex = _negotiate_value.regex
else:
# There's no need to re-compile this EVERY time it is called. Compile
# it once and you won't have the performance hit of the compilation.
regex = re.compile(r'(?:.*,)*\s*Negotiate\s*([^,]*),?', re.I)
_negotiate_value.regex = regex
authreq = response.headers.get('www-authenticate', None)
if authreq:
match_obj = regex.search(authreq)
if match_obj:
return match_obj.group(1)
return None |
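The header parsing above can be exercised against a plain dict of headers; a sketch with the compiled pattern hoisted to module level:

```python
import re

NEGOTIATE_RE = re.compile(r'(?:.*,)*\s*Negotiate\s*([^,]*),?', re.I)


def negotiate_value(headers):
    """Sketch of _negotiate_value() above: pull the gssapi token out of
    a www-authenticate header, or return None when it is absent."""
    authreq = headers.get('www-authenticate', None)
    if authreq:
        match_obj = NEGOTIATE_RE.search(authreq)
        if match_obj:
            return match_obj.group(1)
    return None
```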
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_request_header(self, response, host, is_preemptive=False):
""" Generates the GSSAPI authentication token with kerberos. If any GSSAPI step fails, raise KerberosExchangeError with failure detail. """ |
# Flags used by kerberos module.
gssflags = kerberos.GSS_C_MUTUAL_FLAG | kerberos.GSS_C_SEQUENCE_FLAG
if self.delegate:
gssflags |= kerberos.GSS_C_DELEG_FLAG
try:
kerb_stage = "authGSSClientInit()"
# contexts still need to be stored by host, but hostname_override
# allows use of an arbitrary hostname for the kerberos exchange
# (eg, in cases of aliased hosts, internal vs external, CNAMEs
# w/ name-based HTTP hosting)
kerb_host = self.hostname_override if self.hostname_override is not None else host
kerb_spn = "{0}@{1}".format(self.service, kerb_host)
result, self.context[host] = kerberos.authGSSClientInit(kerb_spn,
gssflags=gssflags, principal=self.principal)
if result < 1:
raise EnvironmentError(result, kerb_stage)
# if we have a previous response from the server, use it to continue
# the auth process, otherwise use an empty value
negotiate_resp_value = '' if is_preemptive else _negotiate_value(response)
kerb_stage = "authGSSClientStep()"
# If this is set pass along the struct to Kerberos
if self.cbt_struct:
result = kerberos.authGSSClientStep(self.context[host],
negotiate_resp_value,
channel_bindings=self.cbt_struct)
else:
result = kerberos.authGSSClientStep(self.context[host],
negotiate_resp_value)
if result < 0:
raise EnvironmentError(result, kerb_stage)
kerb_stage = "authGSSClientResponse()"
gss_response = kerberos.authGSSClientResponse(self.context[host])
return "Negotiate {0}".format(gss_response)
except kerberos.GSSError as error:
log.exception(
"generate_request_header(): {0} failed:".format(kerb_stage))
log.exception(error)
raise KerberosExchangeError("%s failed: %s" % (kerb_stage, str(error.args)))
except EnvironmentError as error:
# ensure we raised this for translation to KerberosExchangeError
# by comparing errno to result, re-raise if not
if error.errno != result:
raise
message = "{0} failed, result: {1}".format(kerb_stage, result)
log.error("generate_request_header(): {0}".format(message))
raise KerberosExchangeError(message) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def handle_other(self, response):
"""Handles all responses with the exception of 401s. This is necessary so that we can authenticate responses if requested""" |
log.debug("handle_other(): Handling: %d" % response.status_code)
if self.mutual_authentication in (REQUIRED, OPTIONAL) and not self.auth_done:
is_http_error = response.status_code >= 400
if _negotiate_value(response) is not None:
log.debug("handle_other(): Authenticating the server")
if not self.authenticate_server(response):
# Mutual authentication failure when mutual auth is wanted,
# raise an exception so the user doesn't use an untrusted
# response.
log.error("handle_other(): Mutual authentication failed")
raise MutualAuthenticationError("Unable to authenticate "
"{0}".format(response))
# Authentication successful
log.debug("handle_other(): returning {0}".format(response))
self.auth_done = True
return response
elif is_http_error or self.mutual_authentication == OPTIONAL:
if not response.ok:
log.error("handle_other(): Mutual authentication unavailable "
"on {0} response".format(response.status_code))
if(self.mutual_authentication == REQUIRED and
self.sanitize_mutual_error_response):
return SanitizedResponse(response)
else:
return response
else:
# Unable to attempt mutual authentication when mutual auth is
# required, raise an exception so the user doesn't use an
# untrusted response.
log.error("handle_other(): Mutual authentication failed")
raise MutualAuthenticationError("Unable to authenticate "
"{0}".format(response))
else:
log.debug("handle_other(): returning {0}".format(response))
return response |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def authenticate_server(self, response):
""" Uses GSSAPI to authenticate the server. Returns True on success, False on failure. """ |
log.debug("authenticate_server(): Authenticate header: {0}".format(
_negotiate_value(response)))
host = urlparse(response.url).hostname
try:
# If this is set pass along the struct to Kerberos
if self.cbt_struct:
result = kerberos.authGSSClientStep(self.context[host],
_negotiate_value(response),
channel_bindings=self.cbt_struct)
else:
result = kerberos.authGSSClientStep(self.context[host],
_negotiate_value(response))
except kerberos.GSSError:
log.exception("authenticate_server(): authGSSClientStep() failed:")
return False
if result < 1:
log.error("authenticate_server(): authGSSClientStep() failed: "
"{0}".format(result))
return False
log.debug("authenticate_server(): returning {0}".format(response))
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def handle_response(self, response, **kwargs):
"""Takes the given response and tries kerberos-auth, as needed.""" |
num_401s = kwargs.pop('num_401s', 0)
# Check if we have already tried to get the CBT data value
if not self.cbt_binding_tried and self.send_cbt:
# If we haven't tried, try getting it now
cbt_application_data = _get_channel_bindings_application_data(response)
if cbt_application_data:
# Only the latest version of pykerberos has this method available
try:
self.cbt_struct = kerberos.channelBindings(application_data=cbt_application_data)
except AttributeError:
# Using older version set to None
self.cbt_struct = None
# Regardless of the result, set tried to True so we don't waste time next time
self.cbt_binding_tried = True
if self.pos is not None:
# Rewind the file position indicator of the body to where
# it was to resend the request.
response.request.body.seek(self.pos)
if response.status_code == 401 and num_401s < 2:
# 401 Unauthorized. Handle it, and if it still comes back as 401,
# that means authentication failed.
_r = self.handle_401(response, **kwargs)
log.debug("handle_response(): returning %s", _r)
log.debug("handle_response() has seen %d 401 responses", num_401s)
num_401s += 1
return self.handle_response(_r, num_401s=num_401s, **kwargs)
elif response.status_code == 401 and num_401s >= 2:
# Still receiving 401 responses after attempting to handle them.
# Authentication has failed. Return the 401 response.
log.debug("handle_response(): returning 401 %s", response)
return response
else:
_r = self.handle_other(response)
log.debug("handle_response(): returning %s", _r)
return _r |
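The 401 bookkeeping above reduces to a bounded retry loop. A sketch on bare status codes, where the `reauth` callable stands in for `handle_401()` (names are illustrative, not the library's API):

```python
def retry_on_401(status, reauth, max_401s=2):
    """Sketch of the bounded retry above: re-handle a 401 at most
    max_401s times, then give up and return the last status."""
    num_401s = 0
    while status == 401 and num_401s < max_401s:
        status = reauth()
        num_401s += 1
    return status
```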
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _quote_query(query):
"""Turn a dictionary into a query string in a URL, with keys in alphabetical order.""" |
return "&".join("%s=%s" % (
k, urllib_quote(
unicode(query[k]).encode('utf-8'), safe='~'))
for k in sorted(query)) |
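The original relies on Py2 compat names (`unicode`, `urllib_quote`); a Python 3 rendering of the same canonicalization:

```python
from urllib.parse import quote


def quote_query(query):
    """Py3 sketch of _quote_query() above: keys in alphabetical order,
    values percent-encoded with only '~' left unescaped."""
    return '&'.join(
        '%s=%s' % (k, quote(str(query[k]).encode('utf-8'), safe='~'))
        for k in sorted(query))
```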
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def api_url(self, **kwargs):
"""The URL for making the given query against the API.""" |
query = {
'Operation': self.Operation,
'Service': "AWSECommerceService",
'Timestamp': time.strftime(
"%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
'Version': self.Version,
}
query.update(kwargs)
query['AWSAccessKeyId'] = self.AWSAccessKeyId
query['Timestamp'] = time.strftime("%Y-%m-%dT%H:%M:%SZ",
time.gmtime())
if self.AssociateTag:
query['AssociateTag'] = self.AssociateTag
service_domain = SERVICE_DOMAINS[self.Region][0]
quoted_strings = _quote_query(query)
data = "GET\n" + service_domain + "\n/onca/xml\n" + quoted_strings
# convert unicode to UTF8 bytes for hmac library
if type(self.AWSSecretAccessKey) is unicode:
self.AWSSecretAccessKey = self.AWSSecretAccessKey.encode('utf-8')
if type(data) is unicode:
data = data.encode('utf-8')
# calculate sha256 signature
digest = hmac.new(self.AWSSecretAccessKey, data, sha256).digest()
# base64 encode and urlencode
if sys.version_info[0] == 3:
signature = urllib.parse.quote(b64encode(digest))
else:
signature = urllib.quote(b64encode(digest))
return ("https://" + service_domain + "/onca/xml?" +
quoted_strings + "&Signature=%s" % signature) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def isSameTypeWith(self, other, matchTags=True, matchConstraints=True):
"""Examine |ASN.1| type for equality with other ASN.1 type. ASN.1 tags (:py:mod:`~pyasn1.type.tag`) and constraints (:py:mod:`~pyasn1.type.constraint`) are examined when carrying out ASN.1 types comparison. Python class inheritance relationship is NOT considered. Parameters other: a pyasn1 type object Class instance representing ASN.1 type. Returns ------- : :class:`bool` :class:`True` if *other* is |ASN.1| type, :class:`False` otherwise. """ |
return (self is other or
(not matchTags or self.tagSet == other.tagSet) and
(not matchConstraints or self.subtypeSpec == other.subtypeSpec)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def isSuperTypeOf(self, other, matchTags=True, matchConstraints=True):
"""Examine |ASN.1| type for subtype relationship with other ASN.1 type. ASN.1 tags (:py:mod:`~pyasn1.type.tag`) and constraints (:py:mod:`~pyasn1.type.constraint`) are examined when carrying out ASN.1 types comparison. Python class inheritance relationship is NOT considered. Parameters other: a pyasn1 type object Class instance representing ASN.1 type. Returns ------- : :class:`bool` :class:`True` if *other* is a subtype of |ASN.1| type, :class:`False` otherwise. """ |
return (not matchTags or
(self.tagSet.isSuperTagSetOf(other.tagSet)) and
(not matchConstraints or self.subtypeSpec.isSuperTypeOf(other.subtypeSpec))) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clone(self, value=noValue, **kwargs):
"""Create a modified version of |ASN.1| schema or value object. The `clone()` method accepts the same set arguments as |ASN.1| class takes on instantiation except that all arguments of the `clone()` method are optional. Whatever arguments are supplied, they are used to create a copy of `self` taking precedence over the ones used to instantiate `self`. Note ---- Due to the immutable nature of the |ASN.1| object, if no arguments are supplied, no new |ASN.1| object will be created and `self` will be returned instead. """ |
if value is noValue:
if not kwargs:
return self
value = self._value
initializers = self.readOnly.copy()
initializers.update(kwargs)
return self.__class__(value, **initializers) |
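The no-argument fast path and the merge of `readOnly` initializers with overrides can be sketched with a plain class (using `None` in place of pyasn1's `noValue` sentinel, so this simplified version cannot clone with a `None` value):

```python
class Immutable:
    """Sketch of the clone() pattern above: without overrides the
    object returns itself; otherwise a new instance is built from the
    stored read-only initializers merged with the overrides."""

    def __init__(self, value, **initializers):
        self._value = value
        self.read_only = dict(initializers)

    def clone(self, value=None, **kwargs):
        if value is None and not kwargs:
            return self
        if value is None:
            value = self._value
        initializers = self.read_only.copy()
        initializers.update(kwargs)
        return self.__class__(value, **initializers)
```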
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def subtype(self, value=noValue, **kwargs):
"""Create a specialization of |ASN.1| schema or value object. The subtype relationship between ASN.1 types has no correlation with subtype relationship between Python types. ASN.1 type is mainly identified by its tag(s) (:py:class:`~pyasn1.type.tag.TagSet`) and value range constraints (:py:class:`~pyasn1.type.constraint.ConstraintsIntersection`). These ASN.1 type properties are implemented as |ASN.1| attributes. The `subtype()` method accepts the same set arguments as |ASN.1| class takes on instantiation except that all parameters of the `subtype()` method are optional. With the exception of the arguments described below, the rest of supplied arguments they are used to create a copy of `self` taking precedence over the ones used to instantiate `self`. The following arguments to `subtype()` create a ASN.1 subtype out of |ASN.1| type: Other Parameters implicitTag: :py:class:`~pyasn1.type.tag.Tag` Implicitly apply given ASN.1 tag object to `self`'s :py:class:`~pyasn1.type.tag.TagSet`, then use the result as new object's ASN.1 tag(s). explicitTag: :py:class:`~pyasn1.type.tag.Tag` Explicitly apply given ASN.1 tag object to `self`'s :py:class:`~pyasn1.type.tag.TagSet`, then use the result as new object's ASN.1 tag(s). subtypeSpec: :py:class:`~pyasn1.type.constraint.ConstraintsIntersection` Add ASN.1 constraints object to one of the `self`'s, then use the result as new object's ASN.1 constraints. Returns ------- : new instance of |ASN.1| schema or value object Note ---- Due to the immutable nature of the |ASN.1| object, if no arguments are supplied, no new |ASN.1| object will be created and `self` will be returned instead. """ |
if value is noValue:
if not kwargs:
return self
value = self._value
initializers = self.readOnly.copy()
implicitTag = kwargs.pop('implicitTag', None)
if implicitTag is not None:
initializers['tagSet'] = self.tagSet.tagImplicitly(implicitTag)
explicitTag = kwargs.pop('explicitTag', None)
if explicitTag is not None:
initializers['tagSet'] = self.tagSet.tagExplicitly(explicitTag)
for arg, option in kwargs.items():
initializers[arg] += option
return self.__class__(value, **initializers) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clone(self, **kwargs):
"""Create a modified version of |ASN.1| schema object. The `clone()` method accepts the same set arguments as |ASN.1| class takes on instantiation except that all arguments of the `clone()` method are optional. Whatever arguments are supplied, they are used to create a copy of `self` taking precedence over the ones used to instantiate `self`. Possible values of `self` are never copied over thus `clone()` can only create a new schema object. Returns ------- : new instance of |ASN.1| type/value Note ---- Due to the mutable nature of the |ASN.1| object, even if no arguments are supplied, new |ASN.1| object will always be created as a shallow copy of `self`. """ |
cloneValueFlag = kwargs.pop('cloneValueFlag', False)
initializers = self.readOnly.copy()
initializers.update(kwargs)
clone = self.__class__(**initializers)
if cloneValueFlag:
self._cloneComponentValues(clone, cloneValueFlag)
return clone |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def subtype(self, **kwargs):
"""Create a specialization of |ASN.1| schema object. The `subtype()` method accepts the same set arguments as |ASN.1| class takes on instantiation except that all parameters of the `subtype()` method are optional. With the exception of the arguments described below, the rest of supplied arguments they are used to create a copy of `self` taking precedence over the ones used to instantiate `self`. The following arguments to `subtype()` create a ASN.1 subtype out of |ASN.1| type. Other Parameters implicitTag: :py:class:`~pyasn1.type.tag.Tag` Implicitly apply given ASN.1 tag object to `self`'s :py:class:`~pyasn1.type.tag.TagSet`, then use the result as new object's ASN.1 tag(s). explicitTag: :py:class:`~pyasn1.type.tag.Tag` Explicitly apply given ASN.1 tag object to `self`'s :py:class:`~pyasn1.type.tag.TagSet`, then use the result as new object's ASN.1 tag(s). subtypeSpec: :py:class:`~pyasn1.type.constraint.ConstraintsIntersection` Add ASN.1 constraints object to one of the `self`'s, then use the result as new object's ASN.1 constraints. Returns ------- : new instance of |ASN.1| type/value Note ---- Due to the immutable nature of the |ASN.1| object, if no arguments are supplied, no new |ASN.1| object will be created and `self` will be returned instead. """ |
initializers = self.readOnly.copy()
cloneValueFlag = kwargs.pop('cloneValueFlag', False)
implicitTag = kwargs.pop('implicitTag', None)
if implicitTag is not None:
initializers['tagSet'] = self.tagSet.tagImplicitly(implicitTag)
explicitTag = kwargs.pop('explicitTag', None)
if explicitTag is not None:
initializers['tagSet'] = self.tagSet.tagExplicitly(explicitTag)
for arg, option in kwargs.items():
initializers[arg] += option
clone = self.__class__(**initializers)
if cloneValueFlag:
self._cloneComponentValues(clone, cloneValueFlag)
return clone |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getTypeByPosition(self, idx):
"""Return ASN.1 type object by its position in fields set. Parameters idx: :py:class:`int` Field index Returns ------- : ASN.1 type Raises ------ : :class:`~pyasn1.error.PyAsn1Error` If given position is out of fields range """ |
try:
return self.__namedTypes[idx].asn1Object
except IndexError:
raise error.PyAsn1Error('Type position out of range') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getPositionByType(self, tagSet):
"""Return field position by its ASN.1 type. Parameters tagSet: :class:`~pysnmp.type.tag.TagSet` ASN.1 tag set distinguishing one ASN.1 type from others. Returns ------- : :py:class:`int` ASN.1 type position in fields set Raises ------ : :class:`~pyasn1.error.PyAsn1Error` If *tagSet* is not present or ASN.1 types are not unique within callee *NamedTypes* """ |
try:
return self.__tagToPosMap[tagSet]
except KeyError:
raise error.PyAsn1Error('Type %s not found' % (tagSet,)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getNameByPosition(self, idx):
"""Return field name by its position in fields set. Parameters idx: :py:class:`idx` Field index Returns ------- : :py:class:`str` Field name Raises ------ : :class:`~pyasn1.error.PyAsn1Error` If given field name is not present in callee *NamedTypes* """ |
try:
return self.__namedTypes[idx].name
except IndexError:
raise error.PyAsn1Error('Type position out of range') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getPositionByName(self, name):
"""Return field position by filed name. Parameters name: :py:class:`str` Field name Returns ------- : :py:class:`int` Field position in fields set Raises ------ : :class:`~pyasn1.error.PyAsn1Error` If *name* is not present or not unique within callee *NamedTypes* """ |
try:
return self.__nameToPosMap[name]
except KeyError:
raise error.PyAsn1Error('Name %s not found' % (name,)) |
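The constant-time lookups in the methods above rest on maps precomputed at construction time. A plain-Python sketch of that pattern (error types are illustrative, not pyasn1's):

```python
class NamedTypesIndex:
    """Sketch of the NamedTypes lookup tables: positions and names are
    precomputed into a dict and a list, so lookups are O(1) and misses
    are translated into a domain-specific error."""

    def __init__(self, names):
        self._names = list(names)
        self._name_to_pos = {n: i for i, n in enumerate(self._names)}

    def get_position_by_name(self, name):
        try:
            return self._name_to_pos[name]
        except KeyError:
            raise ValueError('Name %s not found' % (name,))

    def get_name_by_position(self, idx):
        try:
            return self._names[idx]
        except IndexError:
            raise ValueError('Type position out of range')
```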
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getTagMapNearPosition(self, idx):
"""Return ASN.1 types that are allowed at or past given field position. Some ASN.1 serialisation allow for skipping optional and defaulted fields. Some constructed ASN.1 types allow reordering of the fields. When recovering such objects it may be important to know which types can possibly be present at any given position in the field sets. Parameters idx: :py:class:`int` Field index Returns ------- : :class:`~pyasn1.type.tagmap.TagMap` Map if ASN.1 types allowed at given field position Raises ------ : :class:`~pyasn1.error.PyAsn1Error` If given position is out of fields range """ |
try:
return self.__ambiguousTypes[idx].tagMap
except KeyError:
raise error.PyAsn1Error('Type position out of range') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getPositionNearType(self, tagSet, idx):
"""Return the closest field position where given ASN.1 type is allowed. Some ASN.1 serialisation allow for skipping optional and defaulted fields. Some constructed ASN.1 types allow reordering of the fields. When recovering such objects it may be important to know at which field position, in field set, given *tagSet* is allowed at or past *idx* position. Parameters tagSet: :class:`~pyasn1.type.tag.TagSet` ASN.1 type which field position to look up idx: :py:class:`int` Field position at or past which to perform ASN.1 type look up Returns ------- : :py:class:`int` Field position in fields set Raises ------ : :class:`~pyasn1.error.PyAsn1Error` If *tagSet* is not present or not unique within callee *NamedTypes* or *idx* is out of fields range """ |
try:
return idx + self.__ambiguousTypes[idx].getPositionByType(tagSet)
except KeyError:
raise error.PyAsn1Error('Type position out of range') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def asBinary(self):
"""Get |ASN.1| value as a text string of bits. """ |
binString = binary.bin(self._value)[2:]
return '0' * (len(self._value) - len(binString)) + binString |
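Since `bin()` drops leading zeros, the method above pads back up to the declared bit length. A standalone sketch taking the bit length explicitly (there is no `SizedInteger` carrying it here):

```python
def as_binary(value, bit_length):
    """Sketch of the zero-padded rendering above: render the integer
    in base 2 and left-pad with '0' up to bit_length characters."""
    bin_string = bin(value)[2:]
    return '0' * (bit_length - len(bin_string)) + bin_string
```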
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fromOctetString(cls, value, internalFormat=False, prepend=None, padding=0):
"""Create a |ASN.1| object initialized from a string. Parameters value: :class:`str` (Py2) or :class:`bytes` (Py3) Text string like '\\\\x01\\\\xff' (Py2) or b'\\\\x01\\\\xff' (Py3) """ |
value = SizedInteger(integer.from_bytes(value) >> padding).setBitLength(len(value) * 8 - padding)
if prepend is not None:
value = SizedInteger(
(SizedInteger(prepend) << len(value)) | value
).setBitLength(len(prepend) + len(value))
if not internalFormat:
value = cls(value)
return value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fromBinaryString(value):
"""Create a |ASN.1| object initialized from a string of '0' and '1'. Parameters value: :class:`str` Text string like '1010111' """ |
bitNo = 8
byte = 0
r = []
for v in value:
if bitNo:
bitNo -= 1
else:
bitNo = 7
r.append(byte)
byte = 0
if v in ('0', '1'):
v = int(v)
else:
raise error.PyAsn1Error(
'Non-binary OCTET STRING initializer %s' % (v,)
)
byte |= v << bitNo
r.append(byte)
return octets.ints2octs(r) |
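The bit-packing loop above fills bytes MSB-first, flushing every 8 bits. A pure-Python sketch that returns `bytes` instead of `octets.ints2octs()`:

```python
def bits_to_bytes(value):
    """Sketch of fromBinaryString() above: each '0'/'1' character sets
    one bit, high bit first; a partial final byte is zero-padded."""
    bit_no = 8
    byte = 0
    out = []
    for v in value:
        if bit_no:
            bit_no -= 1
        else:
            bit_no = 7
            out.append(byte)
            byte = 0
        if v not in ('0', '1'):
            raise ValueError('Non-binary initializer %s' % (v,))
        byte |= int(v) << bit_no
    out.append(byte)
    return bytes(out)
```

Like the original, an empty input still yields one zero byte from the final flush.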
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def isPrefixOf(self, other):
"""Indicate if this |ASN.1| object is a prefix of other |ASN.1| object. Parameters other: |ASN.1| object |ASN.1| object Returns ------- : :class:`bool` :class:`True` if this |ASN.1| object is a parent (e.g. prefix) of the other |ASN.1| object or :class:`False` otherwise. """ |
l = len(self)
if l <= len(other):
if self._value[:l] == other[:l]:
return True
return False |
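The prefix test above compares only the first `len(self)` arcs; the same check on plain tuples of OID arcs:

```python
def is_prefix_of(first, second):
    """Sketch of isPrefixOf() above: first is a prefix of second when
    it is no longer than second and their leading arcs agree."""
    length = len(first)
    return length <= len(second) and tuple(first[:length]) == tuple(second[:length])
```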
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getComponentByPosition(self, idx, default=noValue, instantiate=True):
"""Return |ASN.1| type component value by position. Equivalent to Python sequence subscription operation (e.g. `[]`). Parameters idx : :class:`int` Component index (zero-based). Must either refer to an existing component or to N+1 component (if *componentType* is set). In the latter case a new component type gets instantiated and appended to the |ASN.1| sequence. Keyword Args default: :class:`object` If set and requested component is a schema object, return the `default` object instead of the requested component. instantiate: :class:`bool` If `True` (default), inner component will be automatically instantiated. If 'False' either existing component or the `noValue` object will be returned. Returns ------- : :py:class:`~pyasn1.type.base.PyAsn1Item` Instantiate |ASN.1| component type or return existing component value Examples -------- .. code-block:: python # can also be SetOf class MySequenceOf(SequenceOf):
componentType = OctetString() s = MySequenceOf() # returns component #0 with `.isValue` property False s.getComponentByPosition(0) # returns None s.getComponentByPosition(0, default=None) s.clear() # returns noValue s.getComponentByPosition(0, instantiate=False) # sets component #0 to OctetString() ASN.1 schema # object and returns it s.getComponentByPosition(0, instantiate=True) # sets component #0 to ASN.1 value object s.setComponentByPosition(0, 'ABCD') # returns OctetString('ABCD') value object s.getComponentByPosition(0, instantiate=False) s.clear() # returns noValue s.getComponentByPosition(0, instantiate=False) """ |
try:
componentValue = self._componentValues[idx]
except IndexError:
if not instantiate:
return default
self.setComponentByPosition(idx)
componentValue = self._componentValues[idx]
if default is noValue or componentValue.isValue:
return componentValue
else:
return default |