| text_prompt | code_prompt |
|---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _init_evals_bps(self, evals, breakpoints):
""" HTOps may depend on tf.Tensors that are not in eval. We need to have all inputs to HTOps ready upon evaluation. 1. all evals that were originally specified are added 2. each HTOp in the execution closure needs to be in eval (they won't be eval'ed automatically by Session.run) 3. if an input to an HTOp is a tf.Tensor (not a HT placeholder tensor), it needs to be in eval as well (it's not tensorflow so we'll have to manually evaluate it). Remember, we don't track Placeholders because we instead run the HTOps that generate their values. """ |
# If an eval or bp is the tf.Placeholder output of a tdb.PythonOp,
# replace it with its respective PythonOp node.
evals2=[op_store.get_op(t) if op_store.is_htop_out(t) else t for t in evals]
breakpoints2=[op_store.get_op(t) if op_store.is_htop_out(t) else t for t in breakpoints]
# compute execution order
self._exe_order=op_store.compute_exe_order(evals2) # list of nodes
# compute evaluation set
self._evalset=set([e.name for e in evals2])
for e in self._exe_order:
if isinstance(e,HTOp):
self._evalset.add(e.name)
for t in e.inputs:
if not op_store.is_htop_out(t):
self._evalset.add(t.name)
# compute breakpoint set
self._bpset=set([bp.name for bp in breakpoints2]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _eval(self, node):
""" node is a TensorFlow Op or Tensor from self._exe_order """ |
# if node.name == 'Momentum':
# pdb.set_trace()
if isinstance(node,HTOp):
# All Tensors MUST be in the cache.
feed_dict=dict((t,self._cache[t.name]) for t in node.inputs)
node.run(feed_dict) # this will populate self._cache on its own
else: # is a TensorFlow node
if isinstance(node,tf.Tensor):
result=self.session.run(node,self._cache)
self._cache[node.name]=result
else:
# is an operation
if node.type in ('Assign', 'AssignAdd', 'AssignSub'):
# special operation that takes in a tensor ref and mutates it
# unfortunately, we end up having to execute nearly the full graph?
# alternatively, find a way to pass the tensor_ref thru the feed_dict
# rather than the tensor values.
self.session.run(node,self._original_feed_dict) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def error_rate(predictions, labels):
"""Return the error rate based on dense predictions and 1-hot labels.""" |
return 100.0 - (
100.0 *
np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1)) /
predictions.shape[0]) |
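As a sanity check, the formula can be exercised on a small made-up batch (numpy only; the arrays below are illustrative, not from the original training code):

```python
import numpy as np

def error_rate(predictions, labels):
    """Return the error rate based on dense predictions and 1-hot labels."""
    return 100.0 - (
        100.0 *
        np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1)) /
        predictions.shape[0])

# 4 samples, 3 classes; the model gets 3 of 4 right -> 25% error.
predictions = np.array([[0.9, 0.05, 0.05],
                        [0.1, 0.8, 0.1],
                        [0.2, 0.2, 0.6],
                        [0.7, 0.2, 0.1]])   # argmax per row: 0, 1, 2, 0
labels = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 1],
                   [0, 0, 1]])              # argmax per row: 0, 1, 2, 2
print(error_rate(predictions, labels))      # 25.0
```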
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_node(name):
""" returns HTOp or tf graph element corresponding to requested node name """ |
if name in _ops:
return _ops[name]
else:
g=tf.get_default_graph()
return g.as_graph_element(name) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cache_values(self, results):
""" loads into DebugSession cache """ |
if results is None:
# self.fn was probably only used to compute side effects.
return
elif isinstance(results,np.ndarray):
# fn returns single np.ndarray.
# re-format it into a list
results=[results]
# check validity of fn output
elif isinstance(results,list):
if len(results) != len(self.outputs):
raise ValueError('Number of output tensors does not match number of outputs produced by function')
elif isinstance(results,np.number):
if len(self.outputs) != 1:
raise ValueError('Fn produces scalar but %d outputs expected' % (len(self.outputs)))
results=[results]
# assign each element in ndarrays to corresponding output tensor
for i,ndarray in enumerate(results):
self.session._cache_value(self.outputs[i], ndarray) |
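The branching above normalizes the function's return value into a list before caching. A standalone sketch of just that normalization (the `normalize_results` name is hypothetical, not part of tdb):

```python
import numpy as np

def normalize_results(results, num_outputs):
    # Mirror of cache_values' normalization as a free function: a single
    # ndarray or numpy scalar becomes a one-element list; a list must
    # match the expected number of outputs.
    if results is None:
        return None
    if isinstance(results, np.ndarray):
        return [results]
    if isinstance(results, np.number):
        if num_outputs != 1:
            raise ValueError('Fn produces scalar but %d outputs expected' % num_outputs)
        return [results]
    if isinstance(results, list):
        if len(results) != num_outputs:
            raise ValueError('Number of output tensors does not match '
                             'number of outputs produced by function')
        return results
    raise TypeError('unsupported result type: %r' % type(results))

out = normalize_results(np.zeros((2, 2)), 1)
print(len(out))  # 1
```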
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def debug(evals,feed_dict=None,breakpoints=None,break_immediately=False,session=None):
""" spawns a new debug session """ |
global _dbsession
_dbsession=debug_session.DebugSession(session)
return _dbsession.run(evals,feed_dict,breakpoints,break_immediately) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def connect():
""" establish connection to frontend notebook """ |
if not is_notebook():
print('Python session is not running in a Notebook Kernel')
return
global _comm
kernel=get_ipython().kernel
kernel.comm_manager.register_target('tdb',handle_comm_opened)
# initiate connection to frontend.
_comm=Comm(target_name='tdb',data={})
# bind recv handler
_comm.on_msg(None) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def send_action(action, params=None):
""" helper method for sending actions """ |
data={"msg_type":"action", "action":action}
if params is not None:
data['params']=params
_comm.send(data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def send_fig(fig,name):
""" sends figure to frontend """ |
imgdata = StringIO.StringIO()
fig.savefig(imgdata, format='png')
imgdata.seek(0) # rewind the data
uri = 'data:image/png;base64,' + urllib.quote(b64encode(imgdata.buf))
send_action("update_plot",params={"src":uri, "name":name}) |
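The `StringIO.StringIO` and `urllib.quote` calls above are Python 2 APIs. A Python 3 sketch of the same data-URI encoding (the helper name is made up; `fig.savefig(buf, format='png')` would be what fills the buffer):

```python
import base64
import io
import urllib.parse

def png_bytes_to_data_uri(png_bytes):
    # Python 3 equivalent of the buffer -> base64 -> quote sequence above.
    buf = io.BytesIO(png_bytes)   # fig.savefig(buf, format='png') would fill this
    encoded = base64.b64encode(buf.getvalue()).decode('ascii')
    return 'data:image/png;base64,' + urllib.parse.quote(encoded)

uri = png_bytes_to_data_uri(b'\x89PNG\r\n')
print(uri.startswith('data:image/png;base64,'))  # True
```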
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def create_engine(url, con=None, header=True, show_progress=5.0, clear_progress=True):
'''Create a handler for query engine based on a URL.
The following environment variables are used for default connection:
TD_API_KEY API key
TD_API_SERVER API server (default: api.treasuredata.com)
HTTP_PROXY HTTP proxy (optional)
Parameters
----------
url : string
Engine descriptor in the form "type://apikey@host/database?params..."
Use shorthand notation "type:database?params..." for the default connection.
con : Connection, optional
Handler returned by connect. If not given, default connection is used.
header : string or boolean, default True
Prepend comment strings, in the form "-- comment", as a header of queries.
Set False to disable header.
show_progress : double or boolean, default 5.0
Number of seconds to wait before printing progress.
Set False to disable progress entirely.
clear_progress : boolean, default True
If True, clear progress when query completed.
Returns
-------
QueryEngine
'''
url = urlparse(url)
engine_type = url.scheme if url.scheme else 'presto'
if con is None:
if url.netloc:
# create connection
apikey, host = url.netloc.split('@')
con = Connection(apikey=apikey, endpoint="https://{0}/".format(host))
else:
# default connection
con = Connection()
database = url.path[1:] if url.path.startswith('/') else url.path
params = {
'type': engine_type,
}
params.update(parse_qsl(url.query))
return QueryEngine(con, database, params,
header=header,
show_progress=show_progress,
clear_progress=clear_progress) |
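The descriptor parsing above is plain stdlib URL handling. A sketch of how a full engine URL decomposes (the API key, host, and database below are illustrative, not real credentials):

```python
from urllib.parse import urlparse, parse_qsl

url = urlparse('presto://MY_API_KEY@api.treasuredata.com/sample_datasets?priority=0')
engine_type = url.scheme if url.scheme else 'presto'
apikey, host = url.netloc.split('@')
database = url.path[1:] if url.path.startswith('/') else url.path
params = {'type': engine_type}
params.update(parse_qsl(url.query))
print(engine_type, apikey, host, database, params)
```

The shorthand form `presto:sample_datasets` parses with an empty `netloc`, which is what routes `create_engine` to the default connection branch.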
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def read_td_query(query, engine, index_col=None, parse_dates=None, distributed_join=False, params=None):
'''Read Treasure Data query into a DataFrame.
Returns a DataFrame corresponding to the result set of the query string.
Optionally provide an index_col parameter to use one of the columns as
the index, otherwise default integer index will be used.
Parameters
----------
query : string
Query string to be executed.
engine : QueryEngine
Handler returned by create_engine.
index_col : string, optional
Column name to use as index for the returned DataFrame object.
parse_dates : list or dict, optional
- List of column names to parse as dates
- Dict of {column_name: format string} where format string is strftime
compatible in case of parsing string times or is one of (D, s, ns, ms, us)
in case of parsing integer timestamps
distributed_join : boolean, default False
(Presto only) If True, distributed join is enabled. If False, broadcast join is used.
See https://prestodb.io/docs/current/release/release-0.77.html
params : dict, optional
Parameters to pass to execute method.
Available parameters:
- result_url (str): result output URL
- priority (int or str): priority (e.g. "NORMAL", "HIGH", etc.)
- retry_limit (int): retry limit
Returns
-------
DataFrame
'''
if params is None:
params = {}
# header
header = engine.create_header("read_td_query")
if engine.type == 'presto' and distributed_join is not None:
header += "-- set session distributed_join = '{0}'\n".format('true' if distributed_join else 'false')
# execute
r = engine.execute(header + query, **params)
return r.to_dataframe(index_col=index_col, parse_dates=parse_dates) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def read_td_job(job_id, engine, index_col=None, parse_dates=None):
'''Read Treasure Data job result into a DataFrame.
Returns a DataFrame corresponding to the result set of the job.
This method waits for job completion if the specified job is still running.
Optionally provide an index_col parameter to use one of the columns as
the index, otherwise default integer index will be used.
Parameters
----------
job_id : integer
Job ID.
engine : QueryEngine
Handler returned by create_engine.
index_col : string, optional
Column name to use as index for the returned DataFrame object.
parse_dates : list or dict, optional
- List of column names to parse as dates
- Dict of {column_name: format string} where format string is strftime
compatible in case of parsing string times or is one of (D, s, ns, ms, us)
in case of parsing integer timestamps
Returns
-------
DataFrame
'''
# get job
job = engine.connection.client.job(job_id)
# result
r = engine.get_result(job, wait=True)
return r.to_dataframe(index_col=index_col, parse_dates=parse_dates) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def read_td_table(table_name, engine, index_col=None, parse_dates=None, columns=None, time_range=None, limit=10000):
'''Read Treasure Data table into a DataFrame.
The number of returned rows is limited by "limit" (default 10,000).
Setting limit=None means all rows. Be careful when you set limit=None
because your table might be very large and the result does not fit into memory.
Parameters
----------
table_name : string
Name of Treasure Data table in database.
engine : QueryEngine
Handler returned by create_engine.
index_col : string, optional
Column name to use as index for the returned DataFrame object.
parse_dates : list or dict, optional
- List of column names to parse as dates
- Dict of {column_name: format string} where format string is strftime
compatible in case of parsing string times or is one of (D, s, ns, ms, us)
in case of parsing integer timestamps
columns : list, optional
List of column names to select from table.
time_range : tuple (start, end), optional
Limit time range to select. "start" and "end" are one of None, integers,
strings or datetime objects. "end" is exclusive, not included in the result.
limit : int, default 10,000
Maximum number of rows to select.
Returns
-------
DataFrame
'''
# header
query = engine.create_header("read_td_table('{0}')".format(table_name))
# SELECT
query += "SELECT {0}\n".format('*' if columns is None else ', '.join(columns))
# FROM
query += "FROM {0}\n".format(table_name)
# WHERE
if time_range is not None:
start, end = time_range
query += "WHERE td_time_range(time, {0}, {1})\n".format(_convert_time(start), _convert_time(end))
# LIMIT
if limit is not None:
query += "LIMIT {0}\n".format(limit)
# execute
r = engine.execute(query)
return r.to_dataframe(index_col=index_col, parse_dates=parse_dates) |
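The query assembly is ordinary string building. A standalone sketch (header creation and `_convert_time` are omitted, and the helper name is invented; numeric times are passed through as-is):

```python
def build_table_query(table_name, columns=None, time_range=None, limit=10000):
    # SELECT
    query = "SELECT {0}\n".format('*' if columns is None else ', '.join(columns))
    # FROM
    query += "FROM {0}\n".format(table_name)
    # WHERE (optional time-range filter)
    if time_range is not None:
        start, end = time_range
        query += "WHERE td_time_range(time, {0}, {1})\n".format(start, end)
    # LIMIT (limit=None selects all rows)
    if limit is not None:
        query += "LIMIT {0}\n".format(limit)
    return query

print(build_table_query('www_access', columns=['user', 'path'], limit=5))
```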
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def to_td(frame, name, con, if_exists='fail', time_col=None, time_index=None, index=True, index_label=None, chunksize=10000, date_format=None):
'''Write a DataFrame to a Treasure Data table.
This method converts the dataframe into a series of key-value pairs
and send them using the Treasure Data streaming API. The data is divided
into chunks of rows (default 10,000) and uploaded separately. If upload
failed, the client retries the process for a certain amount of time
(max_cumul_retry_delay; default 600 secs). This method may fail and
raise an exception when retries did not success, in which case the data
may be partially inserted. Use the bulk import utility if you cannot
accept partial inserts.
Parameters
----------
frame : DataFrame
DataFrame to be written.
name : string
Name of table to be written, in the form 'database.table'.
con : Connection
Connection to a Treasure Data account.
if_exists: {'fail', 'replace', 'append'}, default 'fail'
- fail: If table exists, do nothing.
- replace: If table exists, drop it, recreate it, and insert data.
- append: If table exists, insert data. Create if does not exist.
time_col : string, optional
Column name to use as "time" column for the table. Column type must be
integer (unixtime), datetime, or string. If None is given (default),
then the current time is used as time values.
time_index : int, optional
Level of index to use as "time" column for the table. Set 0 for a single index.
This parameter implies index=False.
index : boolean, default True
Write DataFrame index as a column.
index_label : string or sequence, default None
Column label for index column(s). If None is given (default) and index is True,
then the index names are used. A sequence should be given if the DataFrame uses
MultiIndex.
chunksize : int, default 10,000
Number of rows to be inserted in each chunk from the dataframe.
date_format : string, default None
Format string for datetime objects
'''
database, table = name.split('.')
uploader = StreamingUploader(con.client, database, table, show_progress=True, clear_progress=True)
uploader.message('Streaming import into: {0}.{1}'.format(database, table))
# check existence
if if_exists == 'fail':
try:
con.client.table(database, table)
except tdclient.api.NotFoundError:
uploader.message('creating new table...')
con.client.create_log_table(database, table)
else:
raise RuntimeError('table "%s" already exists' % name)
elif if_exists == 'replace':
try:
con.client.table(database, table)
except tdclient.api.NotFoundError:
pass
else:
uploader.message('deleting old table...')
con.client.delete_table(database, table)
uploader.message('creating new table...')
con.client.create_log_table(database, table)
elif if_exists == 'append':
try:
con.client.table(database, table)
except tdclient.api.NotFoundError:
uploader.message('creating new table...')
con.client.create_log_table(database, table)
else:
raise ValueError('invalid value for if_exists: %s' % if_exists)
# "time_index" implies "index=False"
if time_index:
index = None
# convert
frame = frame.copy()
frame = _convert_time_column(frame, time_col, time_index)
frame = _convert_index_column(frame, index, index_label)
frame = _convert_date_format(frame, date_format)
# upload
uploader.upload_frame(frame, chunksize)
uploader.wait_for_import(len(frame)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ensure_dir(path):
"""Ensure the directory containing path exists. Args: path(str):
path whose parent directory should exist """ |
dirpath = os.path.dirname(path)
if dirpath and not os.path.exists(dirpath):
os.makedirs(dirpath) |
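Note that the function creates the directory *containing* path (via `os.path.dirname`), not path itself. A quick self-contained check using a temporary directory:

```python
import os
import tempfile

def ensure_dir(path):
    # Create the parent directory of path if it does not exist yet.
    dirpath = os.path.dirname(path)
    if dirpath and not os.path.exists(dirpath):
        os.makedirs(dirpath)

with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, 'a', 'b', 'c.txt')
    ensure_dir(target)  # creates tmp/a/b, but not c.txt
    print(os.path.isdir(os.path.dirname(target)))  # True
```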
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def normalize_value(value):
"""Convert value to string and make it lower cased. """ |
cast = str
if six.PY2:
cast = unicode # noqa
return cast(value).lower() |
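On Python 3 the `six.PY2` branch is dead code, since `str` is already unicode; the whole helper reduces to:

```python
def normalize_value(value):
    # Python 3 reduction of the helper above: stringify, then lower-case.
    return str(value).lower()

print(normalize_value(True))    # true
print(normalize_value('MiXeD')) # mixed
```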
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def infer(data, row_limit, confidence, encoding, to_file):
"""Infer a schema from data. * data must be a local filepath * data must be CSV * the file encoding is assumed to be UTF-8 unless an encoding is passed with --encoding * the first line of data must be headers * these constraints are just for the CLI """ |
descriptor = tableschema.infer(data,
encoding=encoding,
limit=row_limit,
confidence=confidence)
if to_file:
with io.open(to_file, mode='w+t', encoding='utf-8') as dest:
dest.write(json.dumps(descriptor, ensure_ascii=False, indent=4))
click.echo(descriptor) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def validate(schema):
"""Validate that a supposed schema is in fact a Table Schema.""" |
try:
tableschema.validate(schema)
click.echo("Schema is valid")
sys.exit(0)
except tableschema.exceptions.ValidationError as exception:
click.echo("Schema is not valid")
click.echo(exception.errors)
sys.exit(1) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _CheckKeyPath(self, registry_key, search_depth):
"""Checks the key path find specification. Args: registry_key (WinRegistryKey):
Windows Registry key. search_depth (int):
number of key path segments to compare. Returns: bool: True if the Windows Registry key matches the find specification, False if not. """ |
if self._key_path_segments is None:
return False
if search_depth < 0 or search_depth > self._number_of_key_path_segments:
return False
# Note that the root has no entry in the key path segments and
# no name to match.
if search_depth == 0:
segment_name = ''
else:
segment_name = self._key_path_segments[search_depth - 1]
if self._is_regex:
if isinstance(segment_name, py2to3.STRING_TYPES):
# Allow '\n' to be matched by '.' and make '\w', '\W', '\b', '\B',
# '\d', '\D', '\s' and '\S' Unicode safe.
flags = re.DOTALL | re.IGNORECASE | re.UNICODE
try:
segment_name = r'^{0:s}$'.format(segment_name)
segment_name = re.compile(segment_name, flags=flags)
except sre_constants.error:
# TODO: set self._key_path_segments[search_depth - 1] to None ?
return False
self._key_path_segments[search_depth - 1] = segment_name
else:
segment_name = segment_name.lower()
self._key_path_segments[search_depth - 1] = segment_name
if search_depth > 0:
if self._is_regex:
# pylint: disable=no-member
if not segment_name.match(registry_key.name):
return False
elif segment_name != registry_key.name.lower():
return False
return True |
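The regex branch compiles each stored segment into an anchored, case-insensitive pattern, so a segment can only ever match a whole key name. A minimal illustration (the `Micro.*` segment is made up):

```python
import re

# Same flags as above: '.' matches '\n', matching is case-insensitive,
# and the character classes are Unicode safe.
flags = re.DOTALL | re.IGNORECASE | re.UNICODE
segment_name = r'^{0:s}$'.format('Micro.*')
pattern = re.compile(segment_name, flags=flags)

print(bool(pattern.match('Microsoft')))     # True
print(bool(pattern.match('NotMicrosoft')))  # False, anchored at the start
```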
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def Matches(self, registry_key, search_depth):
"""Determines if the Windows Registry key matches the find specification. Args: registry_key (WinRegistryKey):
Windows Registry key. search_depth (int):
number of key path segments to compare. Returns: tuple: contains: bool: True if the Windows Registry key matches the find specification, False otherwise. bool: True if the key path matches, False if not or None if no key path specified. """ |
if self._key_path_segments is None:
    key_path_match = None
else:
    key_path_match = self._CheckKeyPath(registry_key, search_depth)
    if not key_path_match:
        return False, key_path_match
    if search_depth != self._number_of_key_path_segments:
        return False, key_path_match
return True, key_path_match |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _FindInKey(self, registry_key, find_specs, search_depth):
"""Searches for matching keys within the Windows Registry key. Args: registry_key (WinRegistryKey):
Windows Registry key. find_specs (list[FindSpec]):
find specifications. search_depth (int):
number of key path segments to compare. Yields: str: key path of a matching Windows Registry key. """ |
sub_find_specs = []
for find_spec in find_specs:
match, key_path_match = find_spec.Matches(registry_key, search_depth)
if match:
yield registry_key.path
# pylint: disable=singleton-comparison
if key_path_match != False and not find_spec.AtMaximumDepth(search_depth):
sub_find_specs.append(find_spec)
if sub_find_specs:
search_depth += 1
for sub_registry_key in registry_key.GetSubkeys():
for matching_path in self._FindInKey(
sub_registry_key, sub_find_specs, search_depth):
yield matching_path |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def Find(self, find_specs=None):
"""Searches for matching keys within the Windows Registry. Args: find_specs (list[FindSpec]):
find specifications. where None will return all allocated Windows Registry keys. Yields: str: key path of a matching Windows Registry key. """ |
if not find_specs:
find_specs = [FindSpec()]
registry_key = self._win_registry.GetRootKey()
for matching_path in self._FindInKey(registry_key, find_specs, 0):
yield matching_path |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def RecurseKeys(self):
"""Recurses the Windows Registry keys starting with the root key. Yields: WinRegistryKey: Windows Registry key. """ |
root_key = self.GetRootKey()
if root_key:
for registry_key in root_key.RecurseKeys():
yield registry_key |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def SetKeyPathPrefix(self, key_path_prefix):
"""Sets the Window Registry key path prefix. Args: key_path_prefix (str):
Windows Registry key path prefix. """ |
self._key_path_prefix = key_path_prefix
self._key_path_prefix_length = len(key_path_prefix)
self._key_path_prefix_upper = key_path_prefix.upper() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def RecurseKeys(self):
"""Recurses the subkeys starting with the key. Yields: WinRegistryKey: Windows Registry key. """ |
yield self
for subkey in self.GetSubkeys():
for key in subkey.RecurseKeys():
yield key |
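The generator performs a pre-order, depth-first walk: each key is yielded before its subkeys. With a minimal stand-in class (`FakeKey` is invented for illustration, not part of dfwinreg) the visiting order is easy to see:

```python
class FakeKey(object):
    # Minimal stand-in to exercise the depth-first traversal above.
    def __init__(self, name, subkeys=()):
        self.name = name
        self._subkeys = list(subkeys)

    def GetSubkeys(self):
        return self._subkeys

    def RecurseKeys(self):
        yield self
        for subkey in self.GetSubkeys():
            for key in subkey.RecurseKeys():
                yield key

root = FakeKey('root', [FakeKey('a', [FakeKey('b')]), FakeKey('c')])
print([key.name for key in root.RecurseKeys()])  # ['root', 'a', 'b', 'c']
```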
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def DataIsInteger(self):
"""Determines, based on the data type, if the data is an integer. The data types considered integers are: REG_DWORD (REG_DWORD_LITTLE_ENDIAN), REG_DWORD_BIG_ENDIAN and REG_QWORD. Returns: bool: True if the data is an integer, False otherwise. """ |
return self.data_type in (
definitions.REG_DWORD, definitions.REG_DWORD_BIG_ENDIAN,
definitions.REG_QWORD) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def AddKeyByPath(self, key_path, registry_key):
"""Adds a Windows Registry key for a specific key path. Args: key_path (str):
Windows Registry key path to add the key. registry_key (WinRegistryKey):
Windows Registry key. Raises: KeyError: if the subkey already exists. ValueError: if the Windows Registry key cannot be added. """ |
if not key_path.startswith(definitions.KEY_PATH_SEPARATOR):
raise ValueError('Key path does not start with: {0:s}'.format(
definitions.KEY_PATH_SEPARATOR))
if not self._root_key:
self._root_key = FakeWinRegistryKey(self._key_path_prefix)
path_segments = key_paths.SplitKeyPath(key_path)
parent_key = self._root_key
for path_segment in path_segments:
try:
subkey = FakeWinRegistryKey(path_segment)
parent_key.AddSubkey(subkey)
except KeyError:
subkey = parent_key.GetSubkeyByName(path_segment)
parent_key = subkey
parent_key.AddSubkey(registry_key) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _BuildKeyHierarchy(self, subkeys, values):
"""Builds the Windows Registry key hierarchy. Args: subkeys (list[FakeWinRegistryKey]):
list of subkeys. values (list[FakeWinRegistryValue]):
list of values. """ |
if subkeys:
for registry_key in subkeys:
name = registry_key.name.upper()
if name in self._subkeys:
continue
self._subkeys[name] = registry_key
# pylint: disable=protected-access
registry_key._key_path = key_paths.JoinKeyPath([
self._key_path, registry_key.name])
if values:
for registry_value in values:
name = registry_value.name.upper()
if name in self._values:
continue
self._values[name] = registry_value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def AddValue(self, registry_value):
"""Adds a value. Args: registry_value (WinRegistryValue):
Windows Registry value. Raises: KeyError: if the value already exists. """ |
name = registry_value.name.upper()
if name in self._values:
raise KeyError(
'Value: {0:s} already exists.'.format(registry_value.name))
self._values[name] = registry_value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _GetCachedFileByPath(self, key_path_upper):
"""Retrieves a cached Windows Registry file for a key path. Args: key_path_upper (str):
Windows Registry key path, in upper case with a resolved root key alias. Returns: tuple: consist: str: key path prefix WinRegistryFile: corresponding Windows Registry file or None if not available. """ |
longest_key_path_prefix_upper = ''
longest_key_path_prefix_length = len(longest_key_path_prefix_upper)
for key_path_prefix_upper in self._registry_files:
if key_path_upper.startswith(key_path_prefix_upper):
key_path_prefix_length = len(key_path_prefix_upper)
if key_path_prefix_length > longest_key_path_prefix_length:
longest_key_path_prefix_upper = key_path_prefix_upper
longest_key_path_prefix_length = key_path_prefix_length
if not longest_key_path_prefix_upper:
return None, None
registry_file = self._registry_files.get(
longest_key_path_prefix_upper, None)
return longest_key_path_prefix_upper, registry_file |
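The cache lookup is a longest-prefix match over the mapped file prefixes. Reduced to plain strings (the registry-file values are dropped and the sample prefixes are made up):

```python
def longest_prefix_match(key_path_upper, prefixes):
    # Among all prefixes that key_path_upper starts with, keep the longest
    # (most specific) one, mirroring the scan above.
    best = ''
    for prefix in prefixes:
        if key_path_upper.startswith(prefix) and len(prefix) > len(best):
            best = prefix
    return best or None

prefixes = ['HKEY_LOCAL_MACHINE', 'HKEY_LOCAL_MACHINE\\SYSTEM']
print(longest_prefix_match('HKEY_LOCAL_MACHINE\\SYSTEM\\SELECT', prefixes))
# HKEY_LOCAL_MACHINE\SYSTEM
```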
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _GetCurrentControlSet(self, key_path_suffix):
"""Virtual key callback to determine the current control set. Args: key_path_suffix (str):
current control set Windows Registry key path suffix with leading path separator. Returns: WinRegistryKey: the current control set Windows Registry key or None if not available. """ |
select_key_path = 'HKEY_LOCAL_MACHINE\\System\\Select'
select_key = self.GetKeyByPath(select_key_path)
if not select_key:
return None
# To determine the current control set check:
# 1. The "Current" value.
# 2. The "Default" value.
# 3. The "LastKnownGood" value.
control_set = None
for value_name in ('Current', 'Default', 'LastKnownGood'):
value = select_key.GetValueByName(value_name)
if not value or not value.DataIsInteger():
continue
control_set = value.GetDataAsObject()
# If the control set is 0 then we need to check the other values.
if control_set > 0 and control_set <= 999:
break
if not control_set or control_set <= 0 or control_set > 999:
return None
control_set_path = 'HKEY_LOCAL_MACHINE\\System\\ControlSet{0:03d}'.format(
control_set)
key_path = ''.join([control_set_path, key_path_suffix])
return self.GetKeyByPath(key_path) |
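The `{0:03d}` format spec zero-pads the control set number to match the on-disk key naming (`ControlSet001`, `ControlSet002`, ...):

```python
for control_set in (1, 42):
    key_path = 'HKEY_LOCAL_MACHINE\\System\\ControlSet{0:03d}'.format(control_set)
    print(key_path)
# HKEY_LOCAL_MACHINE\System\ControlSet001
# HKEY_LOCAL_MACHINE\System\ControlSet042
```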
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _GetUsers(self, key_path_suffix):
"""Virtual key callback to determine the users sub keys. Args: key_path_suffix (str):
users Windows Registry key path suffix with leading path separator. Returns: WinRegistryKey: the users Windows Registry key or None if not available. """ |
user_key_name, _, key_path_suffix = key_path_suffix.partition(
definitions.KEY_PATH_SEPARATOR)
# HKEY_USERS\.DEFAULT is an alias for HKEY_USERS\S-1-5-18 which is
# the Local System account.
if user_key_name == '.DEFAULT':
search_key_name = 'S-1-5-18'
else:
search_key_name = user_key_name
user_profile_list_key = self.GetKeyByPath(self._USER_PROFILE_LIST_KEY_PATH)
if not user_profile_list_key:
return None
for user_profile_key in user_profile_list_key.GetSubkeys():
if search_key_name == user_profile_key.name:
profile_path_value = user_profile_key.GetValueByName('ProfileImagePath')
if not profile_path_value:
break
profile_path = profile_path_value.GetDataAsObject()
if not profile_path:
break
key_name_upper = user_profile_key.name.upper()
if key_name_upper.endswith('_CLASSES'):
profile_path = '\\'.join([
profile_path, 'AppData', 'Local', 'Microsoft', 'Windows',
'UsrClass.dat'])
else:
profile_path = '\\'.join([profile_path, 'NTUSER.DAT'])
profile_path_upper = profile_path.upper()
registry_file = self._GetCachedUserFileByPath(profile_path_upper)
if not registry_file:
break
key_path_prefix = definitions.KEY_PATH_SEPARATOR.join([
'HKEY_USERS', user_key_name])
key_path = ''.join([key_path_prefix, key_path_suffix])
registry_file.SetKeyPathPrefix(key_path_prefix)
return registry_file.GetKeyByPath(key_path)
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _GetFileByPath(self, key_path_upper):
"""Retrieves a Windows Registry file for a specific path. Args: key_path_upper (str):
Windows Registry key path, in upper case with a resolved root key alias. Returns: tuple: consists: str: upper case key path prefix WinRegistryFile: corresponding Windows Registry file or None if not available. """ |
# TODO: handle HKEY_USERS in both 9X and NT.
key_path_prefix, registry_file = self._GetCachedFileByPath(key_path_upper)
if not registry_file:
for mapping in self._GetFileMappingsByPath(key_path_upper):
try:
registry_file = self._OpenFile(mapping.windows_path)
except IOError:
registry_file = None
if not registry_file:
continue
if not key_path_prefix:
key_path_prefix = mapping.key_path_prefix
self.MapFile(key_path_prefix, registry_file)
key_path_prefix = key_path_prefix.upper()
break
return key_path_prefix, registry_file |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _GetFileMappingsByPath(self, key_path_upper):
"""Retrieves the Windows Registry file mappings for a specific path. Args: key_path_upper (str):
Windows Registry key path, in upper case with a resolved root key alias. Yields: WinRegistryFileMapping: Windows Registry file mapping. """ |
candidate_mappings = []
for mapping in self._REGISTRY_FILE_MAPPINGS_NT:
if key_path_upper.startswith(mapping.key_path_prefix.upper()):
candidate_mappings.append(mapping)
# Sort the candidate mappings by longest (most specific) match first.
candidate_mappings.sort(
key=lambda mapping: len(mapping.key_path_prefix), reverse=True)
for mapping in candidate_mappings:
yield mapping |
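The generator above yields candidates sorted so the most specific (longest) key path prefix is tried first. A self-contained sketch of that ordering, with a stand-in for `WinRegistryFileMapping` (only the `key_path_prefix` attribute matters here):

```python
from collections import namedtuple

# Stand-in for WinRegistryFileMapping; only key_path_prefix is used.
Mapping = namedtuple('Mapping', ['key_path_prefix'])

mappings = [
    Mapping('HKEY_LOCAL_MACHINE\\Software'),
    Mapping('HKEY_LOCAL_MACHINE\\Software\\Classes'),
    Mapping('HKEY_LOCAL_MACHINE'),
]

key_path_upper = 'HKEY_LOCAL_MACHINE\\SOFTWARE\\CLASSES\\EXEFILE'
candidates = [m for m in mappings
              if key_path_upper.startswith(m.key_path_prefix.upper())]
# Longest (most specific) prefix first, as in _GetFileMappingsByPath.
candidates.sort(key=lambda m: len(m.key_path_prefix), reverse=True)
print([m.key_path_prefix for m in candidates])
```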
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _OpenFile(self, path):
"""Opens a Windows Registry file. Args: path (str):
path of the Windows Registry file. Returns: WinRegistryFile: Windows Registry file or None if not available. """ |
if not self._registry_file_reader:
return None
return self._registry_file_reader.Open(
path, ascii_codepage=self._ascii_codepage) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def GetRegistryFileMapping(self, registry_file):
"""Determines the Registry file mapping based on the content of the file. Args: registry_file (WinRegistyFile):
Windows Registry file. Returns: str: key path prefix or an empty string. Raises: RuntimeError: if there are multiple matching mappings and the correct mapping cannot be resolved. """ |
if not registry_file:
return ''
candidate_mappings = []
for mapping in self._REGISTRY_FILE_MAPPINGS_NT:
if not mapping.unique_key_paths:
continue
# If all unique key paths are found consider the file to match.
match = True
for key_path in mapping.unique_key_paths:
registry_key = registry_file.GetKeyByPath(key_path)
if not registry_key:
match = False
if match:
candidate_mappings.append(mapping)
if not candidate_mappings:
return ''
if len(candidate_mappings) == 1:
return candidate_mappings[0].key_path_prefix
key_path_prefixes = frozenset([
mapping.key_path_prefix for mapping in candidate_mappings])
expected_key_path_prefixes = frozenset([
'HKEY_CURRENT_USER',
'HKEY_CURRENT_USER\\Software\\Classes'])
if key_path_prefixes == expected_key_path_prefixes:
return 'HKEY_CURRENT_USER'
raise RuntimeError('Unable to resolve Windows Registry file mapping.') |
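The disambiguation at the end of `GetRegistryFileMapping` compares the surviving prefixes as a frozenset: the one known-ambiguous pair (a user hive matching both `HKEY_CURRENT_USER` and its `Software\Classes` subtree) resolves to the user hive, anything else is an error. That decision logic can be exercised standalone with plain data:

```python
def resolve_mapping(candidate_prefixes):
    # Mirrors the ambiguity resolution above: zero candidates -> no mapping,
    # one candidate wins outright, and the known user-hive pair resolves to
    # HKEY_CURRENT_USER; anything else is unresolvable.
    if not candidate_prefixes:
        return ''
    if len(candidate_prefixes) == 1:
        return candidate_prefixes[0]
    expected = frozenset([
        'HKEY_CURRENT_USER',
        'HKEY_CURRENT_USER\\Software\\Classes'])
    if frozenset(candidate_prefixes) == expected:
        return 'HKEY_CURRENT_USER'
    raise RuntimeError('Unable to resolve Windows Registry file mapping.')

print(resolve_mapping(['HKEY_CURRENT_USER',
                       'HKEY_CURRENT_USER\\Software\\Classes']))
```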
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def GetRootKey(self):
"""Retrieves the Windows Registry root key. Returns: WinRegistryKey: Windows Registry root key. Raises: RuntimeError: if there are multiple matching mappings and the correct mapping cannot be resolved. """ |
root_registry_key = virtual.VirtualWinRegistryKey('')
for mapped_key in self._MAPPED_KEYS:
key_path_segments = key_paths.SplitKeyPath(mapped_key)
if not key_path_segments:
continue
registry_key = root_registry_key
for name in key_path_segments[:-1]:
sub_registry_key = registry_key.GetSubkeyByName(name)
if not sub_registry_key:
sub_registry_key = virtual.VirtualWinRegistryKey(name)
registry_key.AddSubkey(sub_registry_key)
registry_key = sub_registry_key
sub_registry_key = registry_key.GetSubkeyByName(key_path_segments[-1])
if (not sub_registry_key and
isinstance(registry_key, virtual.VirtualWinRegistryKey)):
sub_registry_key = virtual.VirtualWinRegistryKey(
key_path_segments[-1], registry=self)
registry_key.AddSubkey(sub_registry_key)
return root_registry_key |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def MapFile(self, key_path_prefix, registry_file):
"""Maps the Windows Registry file to a specific key path prefix. Args: key_path_prefix (str):
key path prefix. registry_file (WinRegistryFile):
Windows Registry file. """ |
self._registry_files[key_path_prefix.upper()] = registry_file
registry_file.SetKeyPathPrefix(key_path_prefix) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def GetRootKey(self):
"""Retrieves the root key. Returns: WinRegistryKey: Windows Registry root key or None if not available. """ |
regf_key = self._regf_file.get_root_key()
if not regf_key:
return None
return REGFWinRegistryKey(regf_key, key_path=self._key_path_prefix) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def Open(self, file_object):
"""Opens the Windows Registry file using a file-like object. Args: file_object (file):
file-like object. Returns: bool: True if successful or False if not. """ |
self._file_object = file_object
self._regf_file.open_file_object(self._file_object)
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def SplitKeyPath(key_path, path_separator=definitions.KEY_PATH_SEPARATOR):
"""Splits the key path into path segments. Args: key_path (str):
key path. path_separator (Optional[str]):
path separator. Returns: list[str]: key path segments without the root path segment, which is an empty string. """ |
# Split the path with the path separator and remove empty path segments.
return list(filter(None, key_path.split(path_separator))) |
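`filter(None, ...)` drops the empty segment produced by a leading root separator (and any doubled separators), so the returned list never contains the empty root segment. The behavior can be exercised standalone:

```python
# Standalone version of the split above; '\\' mirrors the usual
# Windows Registry key path separator.
KEY_PATH_SEPARATOR = '\\'

def split_key_path(key_path, path_separator=KEY_PATH_SEPARATOR):
    # Empty segments (leading root, doubled separators) are filtered out.
    return list(filter(None, key_path.split(path_separator)))

print(split_key_path('\\HKEY_LOCAL_MACHINE\\Software\\Classes'))
```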
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _GetKeyFromRegistry(self):
"""Determines the key from the Windows Registry.""" |
if not self._registry:
return
try:
self._registry_key = self._registry.GetKeyByPath(self._key_path)
except RuntimeError:
pass
if not self._registry_key:
return
for sub_registry_key in self._registry_key.GetSubkeys():
self.AddSubkey(sub_registry_key)
if self._key_path == 'HKEY_LOCAL_MACHINE\\System':
sub_registry_key = VirtualWinRegistryKey(
'CurrentControlSet', registry=self._registry)
self.AddSubkey(sub_registry_key)
self._registry = None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def GetValues(self):
"""Retrieves all values within the key. Returns: generator[WinRegistryValue]: Windows Registry value generator. """ |
if not self._registry_key and self._registry:
self._GetKeyFromRegistry()
if self._registry_key:
return self._registry_key.GetValues()
return iter([]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def perform(self):
"""This method converts payload into args and calls the ``perform`` method on the payload class. Before calling ``perform``, a ``before_perform`` class method is called, if it exists. It takes a dictionary as an argument; currently the only things stored on the dictionary are the args passed into ``perform`` and a timestamp of when the job was enqueued. Similarly, an ``after_perform`` class method is called after ``perform`` is finished. The metadata dictionary contains the same data, plus a timestamp of when the job was performed, a ``failed`` boolean value, and if it did fail, a ``retried`` boolean value. This method is called after retry, and is called regardless of whether an exception is ultimately thrown by the perform method. """ |
payload_class_str = self._payload["class"]
payload_class = self.safe_str_to_class(payload_class_str)
payload_class.resq = self.resq
args = self._payload.get("args")
metadata = dict(args=args)
if self.enqueue_timestamp:
metadata["enqueue_timestamp"] = self.enqueue_timestamp
before_perform = getattr(payload_class, "before_perform", None)
metadata["failed"] = False
metadata["perform_timestamp"] = time.time()
check_after = True
try:
if before_perform:
payload_class.before_perform(metadata)
return payload_class.perform(*args)
except Exception as e:
metadata["failed"] = True
metadata["exception"] = e
if not self.retry(payload_class, args):
metadata["retried"] = False
raise
else:
metadata["retried"] = True
logging.exception("Retry scheduled after error in %s", self._payload)
finally:
after_perform = getattr(payload_class, "after_perform", None)
if after_perform:
payload_class.after_perform(metadata)
delattr(payload_class,'resq') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fail(self, exception):
"""This method provides a way to fail a job and will use whatever failure backend you've provided. The default is the ``RedisBackend``. """ |
fail = failure.create(exception, self._queue, self._payload,
self._worker)
fail.save(self.resq)
return fail |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def retry(self, payload_class, args):
"""This method provides a way to retry a job after a failure. If the jobclass defined by the payload containes a ``retry_every`` attribute then pyres will attempt to retry the job until successful or until timeout defined by ``retry_timeout`` on the payload class. """ |
retry_every = getattr(payload_class, 'retry_every', None)
retry_timeout = getattr(payload_class, 'retry_timeout', 0)
if retry_every:
now = ResQ._current_time()
first_attempt = self._payload.get("first_attempt", now)
retry_until = first_attempt + timedelta(seconds=retry_timeout)
retry_at = now + timedelta(seconds=retry_every)
if retry_at < retry_until:
self.resq.enqueue_at(retry_at, payload_class, *args,
**{'first_attempt':first_attempt})
return True
return False |
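The retry window above compares two derived times: the next attempt (`now + retry_every`) must still fall before the deadline (`first_attempt + retry_timeout`). A stdlib sketch of the same arithmetic with concrete numbers (retry every 30 s, give up 300 s after the first attempt):

```python
from datetime import datetime, timedelta

retry_every = 30      # seconds between attempts
retry_timeout = 300   # give up this long after the first attempt

now = datetime(2024, 1, 1, 12, 0, 0)
first_attempt = now - timedelta(seconds=120)  # failing for two minutes so far

retry_until = first_attempt + timedelta(seconds=retry_timeout)
retry_at = now + timedelta(seconds=retry_every)

# Retry only while the next attempt still falls inside the window.
should_retry = retry_at < retry_until
print(should_retry)
```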
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reserve(cls, queues, res, worker=None, timeout=10):
"""Reserve a job on one of the queues. This marks this job so that other workers will not pick it up. """ |
if isinstance(queues, string_types):
queues = [queues]
queue, payload = res.pop(queues, timeout=timeout)
if payload:
return cls(queue, payload, res, worker) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def my_import(name):
"""Helper function for walking import calls when searching for classes by string names. """ |
mod = __import__(name)
components = name.split('.')
for comp in components[1:]:
mod = getattr(mod, comp)
return mod |
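`__import__('a.b.c')` returns the top-level package `a`, which is why the loop above walks the remaining dotted components with `getattr`. For example, against the standard library:

```python
def my_import(name):
    # __import__ returns the top-level package; descend to the leaf module.
    mod = __import__(name)
    for comp in name.split('.')[1:]:
        mod = getattr(mod, comp)
    return mod

mod = my_import('json.decoder')
print(mod.__name__)
```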
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def safe_str_to_class(s):
"""Helper function to map string class names to module classes.""" |
lst = s.split(".")
klass = lst[-1]
mod_list = lst[:-1]
module = ".".join(mod_list)
# ruby compatibility kludge: resque sends just a class name and
# not a module name so if I use resque to queue a ruby class
# called "Worker" then pyres will throw a "ValueError: Empty
# module name" exception. To avoid that, if there's no module in
# the json then we'll use the classname as a module name.
if not module:
module = klass
mod = my_import(module)
if hasattr(mod, klass):
return getattr(mod, klass)
else:
raise ImportError('%s not found in module %s' % (klass, module))
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def str_to_class(s):
"""Alternate helper function to map string class names to module classes.""" |
lst = s.split(".")
klass = lst[-1]
mod_list = lst[:-1]
module = ".".join(mod_list)
try:
mod = __import__(module)
if hasattr(mod, klass):
return getattr(mod, klass)
else:
return None
except ImportError:
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def info(self):
"""Returns a dictionary of the current status of the pending jobs, processed, no. of queues, no. of workers, no. of failed jobs. """ |
pending = 0
for q in self.queues():
pending += self.size(q)
return {
'pending' : pending,
'processed' : Stat('processed',self).get(),
'queues' : len(self.queues()),
'workers' : len(self.workers()),
#'working' : len(self.working()),
'failed' : Stat('failed',self).get(),
'servers' : ['%s:%s' % (self.host, self.port)]
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _shutdown_minions(self):
""" send the SIGNINT signal to each worker in the pool. """ |
setproctitle('pyres_manager: Waiting on children to shutdown.')
for minion in self._workers.values():
minion.terminate()
minion.join() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def work(self, interval=5):
"""Invoked by ``run`` method. ``work`` listens on a list of queues and sleeps for ``interval`` time. ``interval`` -- Number of seconds the worker will wait until processing the next job. Default is "5". Whenever a worker finds a job on the queue it first calls ``reserve`` on that job to make sure another worker won't run it, then *forks* itself to work on that job. """ |
self._setproctitle("Starting")
logger.info("starting")
self.startup()
while True:
if self._shutdown:
logger.info('shutdown scheduled')
break
self.register_worker()
job = self.reserve(interval)
if job:
self.fork_worker(job)
else:
if interval == 0:
break
#procline @paused ? "Paused" : "Waiting for #{@queues.join(',')}"
self._setproctitle("Waiting")
#time.sleep(interval)
self.unregister_worker() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fork_worker(self, job):
"""Invoked by ``work`` method. ``fork_worker`` does the actual forking to create the child process that will process the job. It's also responsible for monitoring the child process and handling hangs and crashes. Finally, the ``process`` method actually processes the job by eventually calling the Job instance's ``perform`` method. """ |
logger.debug('picked up job')
logger.debug('job details: %s' % job)
self.before_fork(job)
self.child = os.fork()
if self.child:
self._setproctitle("Forked %s at %s" %
(self.child,
datetime.datetime.now()))
logger.info('Forked %s at %s' % (self.child,
datetime.datetime.now()))
try:
start = datetime.datetime.now()
# waits for the result or times out
while True:
pid, status = os.waitpid(self.child, os.WNOHANG)
if pid != 0:
if os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0:
break
if os.WIFSTOPPED(status):
logger.warning("Process stopped by signal %d" % os.WSTOPSIG(status))
else:
if os.WIFSIGNALED(status):
raise CrashError("Unexpected exit by signal %d" % os.WTERMSIG(status))
raise CrashError("Unexpected exit status %d" % os.WEXITSTATUS(status))
time.sleep(0.5)
now = datetime.datetime.now()
if self.timeout and ((now - start).seconds > self.timeout):
os.kill(self.child, signal.SIGKILL)
os.waitpid(-1, os.WNOHANG)
raise TimeoutError("Timed out after %d seconds" % self.timeout)
except OSError as ose:
import errno
if ose.errno != errno.EINTR:
raise ose
except JobError:
self._handle_job_exception(job)
finally:
# If the child process' job called os._exit manually we need to
# finish the clean up here.
if self.job():
self.done_working(job)
logger.debug('done waiting')
else:
self._setproctitle("Processing %s since %s" %
(job,
datetime.datetime.now()))
logger.info('Processing %s since %s' %
(job, datetime.datetime.now()))
self.after_fork(job)
# re-seed the Python PRNG after forking, otherwise
# all job process will share the same sequence of
# random numbers
random.seed()
self.process(job)
os._exit(0)
self.child = None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def save(self, resq=None):
"""Saves the failed Job into a "failed" Redis queue preserving all its original enqueud info.""" |
if not resq:
resq = ResQ()
data = {
'failed_at' : datetime.datetime.now().strftime('%Y/%m/%d %H:%M:%S'),
'payload' : self._payload,
'exception' : self._exception.__class__.__name__,
'error' : self._parse_message(self._exception),
'backtrace' : self._parse_traceback(self._traceback),
'queue' : self._queue
}
if self._worker:
data['worker'] = self._worker
data = ResQ.encode(data)
resq.redis.rpush('resque:failed', data) |
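The saved record is just a JSON-encodable dict pushed onto the `resque:failed` list. A stdlib-only sketch of building one such record (field names follow the code above; the helper name and the use of `json`/`traceback` in place of the library's own encoder and parsers are assumptions):

```python
import datetime
import json
import traceback

def build_failure_record(exception, queue, payload, worker=None):
    # Hypothetical helper mirroring Failure.save's record layout.
    data = {
        'failed_at': datetime.datetime.now().strftime('%Y/%m/%d %H:%M:%S'),
        'payload': payload,
        'exception': exception.__class__.__name__,
        'error': str(exception),
        'backtrace': traceback.format_exc().splitlines(),
        'queue': queue,
    }
    if worker:
        data['worker'] = worker
    return json.dumps(data)

try:
    raise ValueError('boom')
except ValueError as e:
    record = build_failure_record(e, 'default', {'class': 'MyJob', 'args': []})
print(json.loads(record)['exception'])
```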
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_csr(A):
""" Convert A to CSR, if A is not a CSR or BSR matrix already. Parameters A : array, matrix, sparse matrix (n x n) matrix to convert to CSR Returns ------- A : csr_matrix, bsr_matrix If A is csr_matrix or bsr_matrix, then do nothing and return A. Else, convert A to CSR if possible and return. Examples -------- Implicit conversion of A to CSR in pyamg.blackbox.make_csr """ |
# Convert to CSR or BSR if necessary
if not (isspmatrix_csr(A) or isspmatrix_bsr(A)):
try:
A = csr_matrix(A)
print('Implicit conversion of A to CSR in pyamg.blackbox.make_csr')
except BaseException:
raise TypeError('Argument A must have type csr_matrix or\
bsr_matrix, or be convertible to csr_matrix')
if A.shape[0] != A.shape[1]:
raise TypeError('Argument A must be a square matrix')
A = A.asfptype()
return A |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def solver(A, config):
"""Generate an SA solver given matrix A and a configuration. Parameters A : array, matrix, csr_matrix, bsr_matrix Matrix to invert, CSR or BSR format preferred for efficiency config : dict A dictionary of solver configuration parameters that is used to generate a smoothed aggregation solver Returns ------- ml : smoothed_aggregation_solver smoothed aggregation hierarchy Notes ----- config must contain the following parameter entries for smoothed_aggregation_solver: symmetry, smooth, presmoother, postsmoother, B, strength, max_levels, max_coarse, coarse_solver, aggregate, keep Examples -------- """ |
# Convert A to acceptable format
A = make_csr(A)
# Generate smoothed aggregation solver
try:
return \
smoothed_aggregation_solver(A,
B=config['B'],
BH=config['BH'],
smooth=config['smooth'],
strength=config['strength'],
max_levels=config['max_levels'],
max_coarse=config['max_coarse'],
coarse_solver=config['coarse_solver'],
symmetry=config['symmetry'],
aggregate=config['aggregate'],
presmoother=config['presmoother'],
postsmoother=config['postsmoother'],
keep=config['keep'])
except BaseException:
raise TypeError('Failed generating smoothed_aggregation_solver') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def solve(A, b, x0=None, tol=1e-5, maxiter=400, return_solver=False, existing_solver=None, verb=True, residuals=None):
"""Solve Ax=b. Solve the arbitrary system Ax=b with the best out-of-the box choice for a solver. The matrix A can be non-Hermitian, indefinite, Hermitian smoothed_aggregation_solver(..) are used to invert A. Parameters A : array, matrix, csr_matrix, bsr_matrix Matrix to invert, CSR or BSR format preferred for efficiency b : array Right hand side. x0 : array Initial guess (default random vector) tol : float Stopping criteria: relative residual r[k]/r[0] tolerance maxiter : int Stopping criteria: maximum number of allowable iterations return_solver : bool True: return the solver generated existing_solver : smoothed_aggregation_solver If instance of a multilevel solver, then existing_solver is used to invert A, thus saving time on setup cost. verb : bool If True, print verbose output during runtime residuals : list List to contain residual norms at each iteration. The preconditioned norm is used, namely ||r||_M = (M r, r)^(1/2) = (r, r)^(1/2) Returns ------- x : array Solution to Ax = b ml : multilevel_solver Optional return of the multilevel structure used for the solve Notes ----- is easy and efficient. Set "return_solver=True", and the return value will be a tuple, (x,ml), where ml is the solver used to invert A, and x is the "existing_solver=ml". Examples -------- 6.28e-06 """ |
# Convert A to acceptable CSR/BSR format
A = make_csr(A)
# Generate solver if necessary
if existing_solver is None:
# Parameter dictionary for smoothed_aggregation_solver
config = solver_configuration(A, B=None, verb=verb)
# Generate solver
existing_solver = solver(A, config)
else:
if existing_solver.levels[0].A.shape[0] != A.shape[0]:
raise TypeError('Argument existing_solver must have level 0 matrix\
of same size as A')
# Krylov acceleration depends on symmetry of A
if existing_solver.levels[0].A.symmetry == 'hermitian':
accel = 'cg'
else:
accel = 'gmres'
# Initial guess
if x0 is None:
x0 = np.array(sp.rand(A.shape[0],), dtype=A.dtype)
# Callback function to print iteration number
if verb:
iteration = np.zeros((1,))
print(" maxiter = %d" % maxiter)
def callback(x, iteration):
iteration[0] = iteration[0] + 1
print(" iteration %d" % iteration[0])
def callback2(x):
return callback(x, iteration)
else:
callback2 = None
# Solve with accelerated Krylov method
x = existing_solver.solve(b, x0=x0, accel=accel, tol=tol, maxiter=maxiter,
callback=callback2, residuals=residuals)
if verb:
r0 = b - A * x0
rk = b - A * x
M = existing_solver.aspreconditioner()
nr0 = np.sqrt(np.inner(np.conjugate(M * r0), r0))
nrk = np.sqrt(np.inner(np.conjugate(M * rk), rk))
print(" Residuals ||r_k||_M, ||r_0||_M = %1.2e, %1.2e" % (nrk, nr0))
if np.abs(nr0) > 1e-15:
print(" Residual reduction ||r_k||_M/||r_0||_M = %1.2e"
% (nrk / nr0))
if return_solver:
return (x.reshape(b.shape), existing_solver)
else:
return x.reshape(b.shape) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_comments(fname, ch):
""" Find the comments for a function. fname: filename ch: CppHeaderParser parse tree The function must look like /* * comments * comments */ -or- /* * comments * comments */ -or- with // style comments Then, take off the first three spaces """ |
with open(fname, 'r') as inf:
fdata = inf.readlines()
comments = {}
for f in ch.functions:
lineno = f['line_number'] - 1 # zero based indexing
# set starting position
lineptr = lineno - 1
if f['template']:
lineptr -= 1
start = lineptr
# find the top of the comment block
while fdata[lineptr].startswith('//') or\
fdata[lineptr].startswith('/*') or\
fdata[lineptr].startswith(' *'):
lineptr -= 1
lineptr += 1
comment = fdata[lineptr:(start + 1)]
comment = [c[3:].rstrip() for c in comment]
comments[f['name']] = '\n'.join(comment).strip()
return comments |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fit_candidates(AggOp, B, tol=1e-10):
"""Fit near-nullspace candidates to form the tentative prolongator. Parameters AggOp : csr_matrix Describes the sparsity pattern of the tentative prolongator. Has dimension (#blocks, #aggregates) B : array The near-nullspace candidates stored in column-wise fashion. Has dimension (#blocks * blocksize, #candidates) tol : scalar Threshold for eliminating local basis functions. If after orthogonalization a local basis function Q[:, j] is small, i.e. ||Q[:, j]|| < tol, then Q[:, j] is set to zero. Returns ------- (Q, R) : (bsr_matrix, array) The tentative prolongator Q is a sparse block matrix with dimensions (#blocks * blocksize, #aggregates * #candidates) formed by dense blocks of size (blocksize, #candidates). The coarse level candidates are stored in R which has dimensions (#aggregates * #candidates, #candidates). See Also -------- amg_core.fit_candidates Notes ----- Assuming that each row of AggOp contains exactly one non-zero entry, i.e. all unknowns belong to an aggregate, then Q and R satisfy the relationship B = Q*R. In other words, the near-nullspace candidates are represented exactly by the tentative prolongator. If AggOp contains rows with no non-zero entries, then the range of the tentative prolongator will not include those degrees of freedom. This situation is illustrated in the examples below. References .. [1] Vanek, P. and Mandel, J. and Brezina, M., "Algebraic Multigrid by Smoothed Aggregation for Second and Fourth Order Elliptic Problems", Computing, vol. 56, no. 3, pp. 179--196, 1996. http://citeseer.ist.psu.edu/vanek96algebraic.html Examples -------- matrix([[ 0.70710678, 0. ], [ 0.70710678, 0. ], [ 0. , 0.70710678], [ 0. , 0.70710678]]) array([[ 1.41421356], [ 1.41421356]]) matrix([[ 0.70710678, -0.70710678, 0. , 0. ], [ 0.70710678, 0.70710678, 0. , 0. ], [ 0. , 0. , 0.70710678, -0.70710678], [ 0. , 0. , 0.70710678, 0.70710678]]) array([[ 1.41421356, 0.70710678], [ 0. , 0.70710678], [ 1.41421356, 3.53553391], [ 0. 
, 0.70710678]]) matrix([[ 0.70710678, 0. ], [ 0.70710678, 0. ], [ 0. , 0. ], [ 0. , 1. ]]) array([[ 1.41421356], [ 1. ]]) """ |
if not isspmatrix_csr(AggOp):
raise TypeError('expected csr_matrix for argument AggOp')
B = np.asarray(B)
if B.dtype not in ['float32', 'float64', 'complex64', 'complex128']:
B = np.asarray(B, dtype='float64')
if len(B.shape) != 2:
raise ValueError('expected 2d array for argument B')
if B.shape[0] % AggOp.shape[0] != 0:
raise ValueError('dimensions of AggOp %s and B %s are \
incompatible' % (AggOp.shape, B.shape))
N_fine, N_coarse = AggOp.shape
K1 = int(B.shape[0] / N_fine) # dof per supernode (e.g. 3 for 3d vectors)
K2 = B.shape[1] # candidates
# the first two dimensions of R and Qx are collapsed later
R = np.empty((N_coarse, K2, K2), dtype=B.dtype) # coarse candidates
Qx = np.empty((AggOp.nnz, K1, K2), dtype=B.dtype) # BSR data array
AggOp_csc = AggOp.tocsc()
fn = amg_core.fit_candidates
fn(N_fine, N_coarse, K1, K2,
AggOp_csc.indptr, AggOp_csc.indices, Qx.ravel(),
B.ravel(), R.ravel(), tol)
Q = bsr_matrix((Qx.swapaxes(1, 2).copy(), AggOp_csc.indices,
AggOp_csc.indptr), shape=(K2*N_coarse, K1*N_fine))
Q = Q.T.tobsr()
R = R.reshape(-1, K2)
return Q, R |
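For a single candidate (K2 = 1) the tentative prolongator reduces to normalizing B within each aggregate: R holds the per-aggregate norms and Q the normalized restriction of B, so B = Q*R exactly. A pure-Python sketch of that special case (the real routine handles multiple candidates via a per-aggregate QR and block storage; the helper below is only illustrative):

```python
import math

def fit_single_candidate(agg_of_dof, b):
    # agg_of_dof[i] = aggregate owning dof i; b = one near-nullspace vector.
    n_agg = max(agg_of_dof) + 1
    # r holds the per-aggregate norms, q the normalized restriction of b.
    r = [0.0] * n_agg
    for i, a in enumerate(agg_of_dof):
        r[a] += b[i] * b[i]
    r = [math.sqrt(v) for v in r]
    q = [b[i] / r[a] if r[a] > 0 else 0.0
         for i, a in enumerate(agg_of_dof)]
    return q, r

# Two aggregates of two dofs each, constant candidate b = ones;
# matches the first example in the docstring above.
q, r = fit_single_candidate([0, 0, 1, 1], [1.0, 1.0, 1.0, 1.0])
print(q, r)
```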
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _rand_sparse(m, n, density, format='csr'):
"""Construct base function for sprand, sprandn.""" |
nnz = max(min(int(m*n*density), m*n), 0)
row = np.random.randint(low=0, high=m, size=nnz)  # randint's high is exclusive
col = np.random.randint(low=0, high=n, size=nnz)
data = np.ones(nnz, dtype=float)
# duplicate (i,j) entries will be summed together
return sp.sparse.csr_matrix((data, (row, col)), shape=(m, n)) |
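The comment notes that duplicate (i, j) entries are summed when the triplets are assembled into a sparse matrix. That accumulation can be sketched without scipy using a plain dict keyed by (i, j) (the function name and seeding are illustrative):

```python
import random

def rand_sparse_dict(m, n, density, seed=0):
    # Accumulate duplicate (i, j) draws by summation, as scipy's
    # COO-to-CSR assembly does for repeated entries.
    rng = random.Random(seed)
    nnz = max(min(int(m * n * density), m * n), 0)
    entries = {}
    for _ in range(nnz):
        i, j = rng.randrange(m), rng.randrange(n)  # upper bound is exclusive
        entries[(i, j)] = entries.get((i, j), 0.0) + 1.0
    return entries

A = rand_sparse_dict(4, 4, 0.5)
print(len(A) <= 8)
```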
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sprand(m, n, density, format='csr'):
"""Return a random sparse matrix. Parameters m, n : int shape of the result density : float target a matrix with nnz(A) = m*n*density, 0<=density<=1 format : string sparse matrix format to return, e.g. 'csr', 'coo', etc. Return ------ A : sparse matrix m x n sparse matrix Examples -------- """ |
m, n = int(m), int(n)
# get sparsity pattern
A = _rand_sparse(m, n, density, format='csr')
# replace data with random values
A.data = sp.rand(A.nnz)
return A.asformat(format) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def linear_elasticity(grid, spacing=None, E=1e5, nu=0.3, format=None):
"""Linear elasticity problem discretizes with Q1 finite elements on a regular rectangular grid. Parameters grid : tuple length 2 tuple of grid sizes, e.g. (10, 10) spacing : tuple length 2 tuple of grid spacings, e.g. (1.0, 0.1) E : float Young's modulus nu : float Poisson's ratio format : string Format of the returned sparse matrix (eg. 'csr', 'bsr', etc.) Returns ------- A : csr_matrix FE Q1 stiffness matrix B : array rigid body modes See Also -------- linear_elasticity_p1 Notes ----- - only 2d for now Examples -------- References .. [1] J. Alberty, C. Carstensen, S. A. Funken, and R. KloseDOI "Matlab implementation of the finite element method in elasticity" Computing, Volume 69, Issue 3 (November 2002) Pages: 239 - 263 http://www.math.hu-berlin.de/~cc/ """ |
if len(grid) == 2:
return q12d(grid, spacing=spacing, E=E, nu=nu, format=format)
else:
raise NotImplementedError('no support for grid=%s' % str(grid))
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def q12d_local(vertices, lame, mu):
"""Local stiffness matrix for two dimensional elasticity on a square element. Parameters lame : Float Lame's first parameter mu : Float shear modulus See Also -------- linear_elasticity Notes ----- Vertices should be listed in counter-clockwise order:: [3]----[2] | | | | [0]----[1] Degrees of freedom are enumerated as follows:: [x=6,y=7]----[x=4,y=5] | | | | [x=0,y=1]----[x=2,y=3] """ |
M = lame + 2*mu # P-wave modulus
R_11 = np.matrix([[2, -2, -1, 1],
[-2, 2, 1, -1],
[-1, 1, 2, -2],
[1, -1, -2, 2]]) / 6.0
R_12 = np.matrix([[1, 1, -1, -1],
[-1, -1, 1, 1],
[-1, -1, 1, 1],
[1, 1, -1, -1]]) / 4.0
R_22 = np.matrix([[2, 1, -1, -2],
[1, 2, -2, -1],
[-1, -2, 2, 1],
[-2, -1, 1, 2]]) / 6.0
F = inv(np.vstack((vertices[1] - vertices[0], vertices[3] - vertices[0])))
K = np.zeros((8, 8)) # stiffness matrix
E = F.T * np.matrix([[M, 0], [0, mu]]) * F
K[0::2, 0::2] = E[0, 0] * R_11 + E[0, 1] * R_12 +\
E[1, 0] * R_12.T + E[1, 1] * R_22
E = F.T * np.matrix([[mu, 0], [0, M]]) * F
K[1::2, 1::2] = E[0, 0] * R_11 + E[0, 1] * R_12 +\
E[1, 0] * R_12.T + E[1, 1] * R_22
E = F.T * np.matrix([[0, mu], [lame, 0]]) * F
K[1::2, 0::2] = E[0, 0] * R_11 + E[0, 1] * R_12 +\
E[1, 0] * R_12.T + E[1, 1] * R_22
K[0::2, 1::2] = K[1::2, 0::2].T
K /= det(F)
return K |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def p12d_local(vertices, lame, mu):
"""Local stiffness matrix for P1 elements in 2d.""" |
assert(vertices.shape == (3, 2))
A = np.vstack((np.ones((1, 3)), vertices.T))
PhiGrad = inv(A)[:, 1:] # gradients of basis functions
R = np.zeros((3, 6))
R[[[0], [2]], [0, 2, 4]] = PhiGrad.T
R[[[2], [1]], [1, 3, 5]] = PhiGrad.T
C = mu*np.array([[2, 0, 0], [0, 2, 0], [0, 0, 1]]) +\
lame*np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
K = det(A)/2.0*np.dot(np.dot(R.T, C), R)
return K |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_basic_mesh(Verts, E2V=None, mesh_type='tri', pdata=None, pvdata=None, cdata=None, cvdata=None, fname='output.vtk'):
"""Write mesh file for basic types of elements. Parameters fname : {string} file to be written, e.g. 'mymesh.vtu' Verts : {array} coordinate array (N x D) E2V : {array} element index array (Nel x Nelnodes) mesh_type : {string} type of elements: tri, quad, tet, hex (all 3d) pdata : {array} scalar data on vertices (N x Nfields) pvdata : {array} vector data on vertices (3*Nfields x N) cdata : {array} scalar data on cells (Nfields x Nel) cvdata : {array} vector data on cells (3*Nfields x Nel) Returns ------- writes a .vtu file for use in Paraview Notes ----- The difference between write_basic_mesh and write_vtu is that write_vtu is more general and requires dictionaries of cell information. write_basic_mesh calls write_vtu Examples -------- pvdata=pvdata, cdata=cdata, cvdata=cvdata, fname='test.vtu') See Also -------- write_vtu """ |
if E2V is None:
mesh_type = 'vertex'
map_type_to_key = {'vertex': 1, 'tri': 5, 'quad': 9, 'tet': 10, 'hex': 12}
if mesh_type not in map_type_to_key:
raise ValueError('unknown mesh_type=%s' % mesh_type)
key = map_type_to_key[mesh_type]
if mesh_type == 'vertex':
uidx = np.arange(0, Verts.shape[0]).reshape((Verts.shape[0], 1))
E2V = {key: uidx}
else:
E2V = {key: E2V}
if cdata is not None:
cdata = {key: cdata}
if cvdata is not None:
cvdata = {key: cvdata}
write_vtu(Verts=Verts, Cells=E2V, pdata=pdata, pvdata=pvdata,
cdata=cdata, cvdata=cvdata, fname=fname) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_attributes(d, elm):
"""Set attributes from dictionary of values.""" |
for key in d:
elm.setAttribute(key, d[key]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def eliminate_local_candidates(x, AggOp, A, T, Ca=1.0, **kwargs):
"""Eliminate canidates locally. Helper function that determines where to eliminate candidates locally on a per aggregate basis. Parameters --------- x : array n x 1 vector of new candidate AggOp : CSR or CSC sparse matrix Aggregation operator for the level that x was generated for A : sparse matrix Operator for the level that x was generated for T : sparse matrix Tentative prolongation operator for the level that x was generated for Ca : scalar Constant threshold parameter to decide when to drop candidates Returns ------- Nothing, x is modified in place """ |
if not (isspmatrix_csr(AggOp) or isspmatrix_csc(AggOp)):
raise TypeError('AggOp must be a CSR or CSC matrix')
else:
AggOp = AggOp.tocsc()
ndof = max(x.shape)
nPDEs = int(ndof/AggOp.shape[0])
def aggregate_wise_inner_product(z, AggOp, nPDEs, ndof):
"""Inner products per aggregate.
Helper function that calculates <z, z>_i, i.e., the
inner product of z only over aggregate i
Returns a vector of length num_aggregates where entry i is <z, z>_i
"""
z = np.ravel(z)*np.ravel(z)
innerp = np.zeros((1, AggOp.shape[1]), dtype=z.dtype)
for j in range(nPDEs):
innerp += z[slice(j, ndof, nPDEs)].reshape(1, -1) * AggOp
return innerp.reshape(-1, 1)
def get_aggregate_weights(AggOp, A, z, nPDEs, ndof):
"""Weights per aggregate.
Calculate local aggregate quantities
Return a vector of length num_aggregates where entry i is
(card(agg_i)/A.shape[0]) ( <Az, z>/rho(A) )
"""
rho = approximate_spectral_radius(A)
zAz = np.dot(z.reshape(1, -1), A*z.reshape(-1, 1))
card = nPDEs*(AggOp.indptr[1:]-AggOp.indptr[:-1])
weights = (np.ravel(card)*zAz)/(A.shape[0]*rho)
return weights.reshape(-1, 1)
# Run test 1, which finds where x is small relative to its energy
weights = Ca*get_aggregate_weights(AggOp, A, x, nPDEs, ndof)
mask1 = aggregate_wise_inner_product(x, AggOp, nPDEs, ndof) <= weights
# Run test 2, which finds where x is already approximated
# accurately by the existing T
projected_x = x - T*(T.T*x)
mask2 = aggregate_wise_inner_product(projected_x,
AggOp, nPDEs, ndof) <= weights
# Combine masks and zero out corresponding aggregates in x
mask = np.ravel(mask1 + mask2).nonzero()[0]
if mask.shape[0] > 0:
mask = nPDEs*AggOp[:, mask].indices
for j in range(nPDEs):
x[mask+j] = 0.0 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def poisson(grid, spacing=None, dtype=float, format=None, type='FD'):
"""Return a sparse matrix for the N-dimensional Poisson problem. The matrix represents a finite Difference approximation to the Poisson problem on a regular n-dimensional grid with unit grid spacing and Dirichlet boundary conditions. Parameters grid : tuple of integers grid dimensions e.g. (100,100) Notes ----- The matrix is symmetric and positive definite (SPD). Examples -------- matrix([[ 2., -1., 0., 0.], [-1., 2., -1., 0.], [ 0., -1., 2., -1.], [ 0., 0., -1., 2.]]) matrix([[ 4., -1., 0., -1., 0., 0.], [-1., 4., -1., 0., -1., 0.], [ 0., -1., 4., 0., 0., -1.], [-1., 0., 0., 4., -1., 0.], [ 0., -1., 0., -1., 4., -1.], [ 0., 0., -1., 0., -1., 4.]]) """ |
grid = tuple(grid)
N = len(grid) # grid dimension
if N < 1 or min(grid) < 1:
raise ValueError('invalid grid shape: %s' % str(grid))
# create N-dimension Laplacian stencil
if type == 'FD':
stencil = np.zeros((3,) * N, dtype=dtype)
for i in range(N):
stencil[(1,)*i + (0,) + (1,)*(N-i-1)] = -1
stencil[(1,)*i + (2,) + (1,)*(N-i-1)] = -1
stencil[(1,)*N] = 2*N
elif type == 'FE':
    stencil = -np.ones((3,) * N, dtype=dtype)
    stencil[(1,)*N] = 3**N - 1
else:
    raise ValueError("unknown type: %s (expected 'FD' or 'FE')" % type)
return stencil_grid(stencil, grid, format=format) |
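`stencil_grid` is a pyamg helper; for a 1D grid the FD branch reduces to the tridiagonal (-1, 2, -1) matrix shown in the docstring, which can be assembled directly with scipy.sparse alone:

```python
import numpy as np
import scipy.sparse as sp

# 1D Poisson matrix for grid=(4,), assembled directly from the
# finite-difference stencil (-1, 2, -1) with unit spacing.
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(4, 4), format='csr')
expected = np.array([[ 2., -1.,  0.,  0.],
                     [-1.,  2., -1.,  0.],
                     [ 0., -1.,  2., -1.],
                     [ 0.,  0., -1.,  2.]])
```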
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def norm(x, pnorm='2'):
"""2-norm of a vector. Parameters x : array_like Vector of complex or real values pnorm : string '2' calculates the 2-norm 'inf' calculates the infinity-norm Returns ------- n : float 2-norm of a vector Notes ----- - currently 1+ order of magnitude faster than scipy.linalg.norm(x), which calls sqrt(numpy.sum(real((conjugate(x)*x)),axis=0)) resulting in an extra copy - only handles the 2-norm and infinity-norm for vectors See Also -------- scipy.linalg.norm : scipy general matrix or vector norm """ |
# TODO check dimensions of x
# TODO speedup complex case
x = np.ravel(x)
if pnorm == '2':
return np.sqrt(np.inner(x.conj(), x).real)
elif pnorm == 'inf':
return np.max(np.abs(x))
else:
raise ValueError('Only the 2-norm and infinity-norm are supported') |
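A quick standalone check of the two formulas used above, against `numpy.linalg.norm`:

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0])
two_norm = np.sqrt(np.inner(x.conj(), x).real)  # fast 2-norm, as above
inf_norm = np.max(np.abs(x))                    # infinity-norm, as above
```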
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def approximate_spectral_radius(A, tol=0.01, maxiter=15, restart=5, symmetric=None, initial_guess=None, return_vector=False):
"""Approximate the spectral radius of a matrix. Parameters A : {dense or sparse matrix} E.g. csr_matrix, csc_matrix, ndarray, etc. tol : {scalar} Relative tolerance of approximation, i.e., the error divided by the approximate spectral radius is compared to tol. maxiter : {integer} Maximum number of iterations to perform restart : {integer} Number of restarted Arnoldi processes. For example, a value of 0 will run Arnoldi once, for maxiter iterations, and a value of 1 will restart Arnoldi once, using the maximal eigenvector from the first Arnoldi process as the initial guess. symmetric : {boolean} True - if A is symmetric Lanczos iteration is used (more efficient) False - if A is non-symmetric Arnoldi iteration is used (less efficient) initial_guess : {array|None} If n x 1 array, then use as initial guess for Arnoldi/Lanczos. If None, then use a random initial guess. return_vector : {boolean} True - return an approximate dominant eigenvector, in addition to the spectral radius. False - Do not return the approximate dominant eigenvector Returns ------- An approximation to the spectral radius of A, and if return_vector=True, then also return the approximate dominant eigenvector Notes ----- The spectral radius is approximated by looking at the Ritz eigenvalues. Arnoldi iteration (or Lanczos) is used to project the matrix A onto a Krylov subspace: H = Q* A Q. The eigenvalues of H (i.e. the Ritz eigenvalues) should represent the eigenvalues of A in the sense that the minimum and maximum values are usually well matched (for the symmetric case it is true since the eigenvalues are real). References .. [1] Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. van der Vorst, editors. "Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide", SIAM, Philadelphia, 2000. Examples -------- 1.0 1.0 """ |
if not hasattr(A, 'rho') or return_vector:
    # TODO: more restarts can cause a nonsymmetric case to fail; investigate.
    # TODO: A.dtype=int could be converted to float instead of raising below.
# The use of the restart vector v0 requires that the full Krylov
# subspace V be stored. So, set symmetric to False.
symmetric = False
if maxiter < 1:
raise ValueError('expected maxiter > 0')
if restart < 0:
raise ValueError('expected restart >= 0')
if A.dtype == int:
raise ValueError('expected A to be float (complex or real)')
if A.shape[0] != A.shape[1]:
raise ValueError('expected square A')
if initial_guess is None:
v0 = sp.rand(A.shape[1], 1)
if A.dtype == complex:
v0 = v0 + 1.0j * sp.rand(A.shape[1], 1)
else:
if initial_guess.shape[0] != A.shape[0]:
raise ValueError('initial_guess and A must have same shape')
if (len(initial_guess.shape) > 1) and (initial_guess.shape[1] > 1):
raise ValueError('initial_guess must be an (n,1) or\
(n,) vector')
v0 = initial_guess.reshape(-1, 1)
v0 = np.array(v0, dtype=A.dtype)
for j in range(restart+1):
[evect, ev, H, V, breakdown_flag] =\
_approximate_eigenvalues(A, tol, maxiter,
symmetric, initial_guess=v0)
# Calculate error in dominant eigenvector
nvecs = ev.shape[0]
max_index = np.abs(ev).argmax()
error = H[nvecs, nvecs-1]*evect[-1, max_index]
# error is a fast way of calculating the following line
# error2 = ( A - ev[max_index]*sp.mat(
# sp.eye(A.shape[0],A.shape[1])) )*\
# ( sp.mat(sp.hstack(V[:-1]))*\
# evect[:,max_index].reshape(-1,1) )
# print str(error) + " " + str(sp.linalg.norm(e2))
if (np.abs(error)/np.abs(ev[max_index]) < tol) or\
breakdown_flag:
# halt if below relative tolerance
v0 = np.dot(np.hstack(V[:-1]),
evect[:, max_index].reshape(-1, 1))
break
else:
v0 = np.dot(np.hstack(V[:-1]),
evect[:, max_index].reshape(-1, 1))
# end j-loop
rho = np.abs(ev[max_index])
if sparse.isspmatrix(A):
A.rho = rho
if return_vector:
return (rho, v0)
else:
return rho
else:
return A.rho |
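A much-simplified sketch of the idea: plain power iteration (standing in for the restarted Arnoldi/Lanczos used above) recovers the spectral radius of a small symmetric model problem:

```python
import numpy as np

# Power iteration on a 1D Poisson matrix; the Rayleigh quotient of the
# converged vector approximates the spectral radius.
n = 20
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
rng = np.random.default_rng(0)
v = rng.standard_normal(n)
for _ in range(200):
    v = A @ v
    v /= np.linalg.norm(v)
rho_est = float(v @ (A @ v))              # Rayleigh quotient
rho_exact = 2 + 2*np.cos(np.pi/(n + 1))   # known largest eigenvalue
```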
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def condest(A, tol=0.1, maxiter=25, symmetric=False):
r"""Estimates the condition number of A. Parameters A : {dense or sparse matrix} tol : {float} Approximation tolerance, currently not used maxiter: {int} Max number of Arnoldi/Lanczos iterations symmetric : {bool} If symmetric use the far more efficient Lanczos algorithm, Else use Arnoldi Returns ------- Estimate of cond(A) with \|lambda_max\| / \|lambda_min\| through the use of Arnoldi or Lanczos iterations, depending on the symmetric flag Notes ----- The condition number measures how large of a change in the the problems solution is caused by a change in problem's input. Large condition numbers indicate that small perturbations and numerical errors are magnified greatly when solving the system. Examples -------- 2.0 """ |
[evect, ev, H, V, breakdown_flag] =\
_approximate_eigenvalues(A, tol, maxiter, symmetric)
return np.max([norm(x) for x in ev])/min([norm(x) for x in ev]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cond(A):
"""Return condition number of A. Parameters A : {dense or sparse matrix} Returns ------- 2-norm condition number through use of the SVD Use for small to moderate sized dense matrices. For large sparse matrices, use condest. Notes ----- The condition number measures how large of a change in the problems solution is caused by a change in problem's input. Large condition numbers indicate that small perturbations and numerical errors are magnified greatly when solving the system. Examples -------- 2.0 """ |
if A.shape[0] != A.shape[1]:
raise ValueError('expected square matrix')
if sparse.isspmatrix(A):
A = A.todense()
# 2-Norm Condition Number
from scipy.linalg import svd
U, Sigma, Vh = svd(A)
return np.max(Sigma)/min(Sigma) |
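The SVD-based computation is easy to reproduce standalone; numpy's `svd` stands in for scipy's here, and the diagonal example matches the docstring value of 2.0:

```python
import numpy as np

A = np.diag([2.0, 1.0])
sigma = np.linalg.svd(A, compute_uv=False)
kappa = sigma.max() / sigma.min()   # 2-norm condition number via SVD
```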
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ishermitian(A, fast_check=True, tol=1e-6, verbose=False):
r"""Return True if A is Hermitian to within tol. Parameters A : {dense or sparse matrix} fast_check : {bool} If True, use the heuristic < Ax, y> = < x, Ay> for random vectors x and y to check for conjugate symmetry. If False, compute A - A.H. tol : {float} Symmetry tolerance verbose: {bool} prints max( \|A - A.H\| ) if nonhermitian and fast_check=False abs( <Ax, y> - <x, Ay> ) if nonhermitian and fast_check=False Returns ------- True if hermitian False if nonhermitian Notes ----- This function applies a simple test of conjugate symmetry Examples -------- False True """ |
# convert to matrix type
if not sparse.isspmatrix(A):
A = np.asmatrix(A)
if fast_check:
x = sp.rand(A.shape[0], 1)
y = sp.rand(A.shape[0], 1)
if A.dtype == complex:
x = x + 1.0j*sp.rand(A.shape[0], 1)
y = y + 1.0j*sp.rand(A.shape[0], 1)
xAy = np.dot((A*x).conjugate().T, y)
xAty = np.dot(x.conjugate().T, A*y)
diff = float(np.abs(xAy - xAty) / np.sqrt(np.abs(xAy*xAty)))
else:
# compute the difference, A - A.H
if sparse.isspmatrix(A):
diff = np.ravel((A - A.H).data)
else:
diff = np.ravel(A - A.H)
if np.max(diff.shape) == 0:
diff = 0
else:
diff = np.max(np.abs(diff))
if diff < tol:
diff = 0
return True
else:
if verbose:
print(diff)
return False
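The fast heuristic can be sketched standalone; fixed probe vectors replace the random ones used above so the example is reproducible:

```python
import numpy as np

def hermitian_check(A, x, y, tol=1e-6):
    # The fast heuristic above: <Ax, y> == <x, Ay> when A is Hermitian.
    xAy = (A @ x).conj().T @ y
    xAty = x.conj().T @ (A @ y)
    rel = (np.abs(xAy - xAty) / np.sqrt(np.abs(xAy * xAty))).item()
    return rel < tol

x = np.array([[1.0], [2.0]])   # fixed probe vectors for reproducibility;
y = np.array([[3.0], [4.0]])   # the routine above draws them at random
S = np.array([[2.0, -1.0], [-1.0, 2.0]])   # symmetric
N = np.array([[0.0, 1.0], [0.0, 0.0]])     # not symmetric
```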
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pinv_array(a, cond=None):
"""Calculate the Moore-Penrose pseudo inverse of each block of the three dimensional array a. Parameters a : {dense array} Is of size (n, m, m) cond : {float} Used by gelss to filter numerically zeros singular values. If None, a suitable value is chosen for you. Returns ------- Nothing, a is modified in place so that a[k] holds the pseudoinverse of that block. Notes ----- By using lapack wrappers, this can be much faster for large n, than directly calling pinv2 Examples -------- """ |
n = a.shape[0]
m = a.shape[1]
if m == 1:
# Pseudo-inverse of 1 x 1 matrices is trivial
zero_entries = (a == 0.0).nonzero()[0]
a[zero_entries] = 1.0
a[:] = 1.0/a
a[zero_entries] = 0.0
del zero_entries
else:
# The block size is greater than 1
# Create necessary arrays and function pointers for calculating pinv
gelss, gelss_lwork = get_lapack_funcs(('gelss', 'gelss_lwork'),
(np.ones((1,), dtype=a.dtype)))
RHS = np.eye(m, dtype=a.dtype)
lwork = _compute_lwork(gelss_lwork, m, m, m)
# Choose tolerance for which singular values are zero in *gelss below
if cond is None:
t = a.dtype.char
eps = np.finfo(np.float).eps
feps = np.finfo(np.single).eps
geps = np.finfo(np.longfloat).eps
_array_precision = {'f': 0, 'd': 1, 'g': 2, 'F': 0, 'D': 1, 'G': 2}
cond = {0: feps*1e3, 1: eps*1e6, 2: geps*1e6}[_array_precision[t]]
# Invert each block of a
for kk in range(n):
gelssoutput = gelss(a[kk], RHS, cond=cond, lwork=lwork,
overwrite_a=True, overwrite_b=False)
a[kk] = gelssoutput[1] |
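A standalone sketch of the blockwise behavior, with numpy's `pinv` standing in for the LAPACK gelss path used above; note the singular block is handled gracefully:

```python
import numpy as np

# Blockwise pseudoinverse: each a[k] is replaced by pinv(a[k]).
a = np.array([[[2.0, 0.0], [0.0, 4.0]],
              [[1.0, 1.0], [1.0, 1.0]]])   # second block is singular
apinv = np.array([np.linalg.pinv(blk) for blk in a])
```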
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def distance_strength_of_connection(A, V, theta=2.0, relative_drop=True):
"""Distance based strength-of-connection. Parameters A : csr_matrix or bsr_matrix Square, sparse matrix in CSR or BSR format V : array Coordinates of the vertices of the graph of A relative_drop : bool If false, then a connection must be within a distance of theta from a point to be strongly connected. If true, then the closest connection is always strong, and other points must be within theta times the smallest distance to be strong Returns ------- C : csr_matrix C(i,j) = distance(point_i, point_j) Strength of connection matrix where strength values are distances, i.e. the smaller the value, the stronger the connection. Sparsity pattern of C is copied from A. Notes ----- - theta is a drop tolerance that is applied row-wise - If a BSR matrix given, then the return matrix is still CSR. The strength is given between super nodes based on the BSR block size. Examples -------- """ |
# Amalgamate for the supernode case
if sparse.isspmatrix_bsr(A):
sn = int(A.shape[0] / A.blocksize[0])
u = np.ones((A.data.shape[0],))
A = sparse.csr_matrix((u, A.indices, A.indptr), shape=(sn, sn))
if not sparse.isspmatrix_csr(A):
warn("Implicit conversion of A to csr", sparse.SparseEfficiencyWarning)
A = sparse.csr_matrix(A)
dim = V.shape[1]
# Create two arrays for differencing the different coordinates such
# that C(i,j) = distance(point_i, point_j)
cols = A.indices
rows = np.repeat(np.arange(A.shape[0]), A.indptr[1:] - A.indptr[0:-1])
# Insert difference for each coordinate into C
C = (V[rows, 0] - V[cols, 0])**2
for d in range(1, dim):
C += (V[rows, d] - V[cols, d])**2
C = np.sqrt(C)
C[C < 1e-6] = 1e-6
C = sparse.csr_matrix((C, A.indices.copy(), A.indptr.copy()),
shape=A.shape)
# Apply drop tolerance
if relative_drop is True:
if theta != np.inf:
amg_core.apply_distance_filter(C.shape[0], theta, C.indptr,
C.indices, C.data)
else:
amg_core.apply_absolute_distance_filter(C.shape[0], theta, C.indptr,
C.indices, C.data)
C.eliminate_zeros()
C = C + sparse.eye(C.shape[0], C.shape[1], format='csr')
# Standardized strength values require small values be weak and large
# values be strong. So, we invert the distances.
C.data = 1.0 / C.data
# Scale C by the largest magnitude entry in each row
C = scale_rows_by_largest_entry(C)
return C |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def classical_strength_of_connection(A, theta=0.0, norm='abs'):
"""Classical Strength Measure. Return a strength of connection matrix using the classical AMG measure An off-diagonal entry A[i,j] is a strong connection iff:: A[i,j] >= theta * max(|A[i,k]|), where k != i (norm='abs') -A[i,j] >= theta * max(-A[i,k]), where k != i (norm='min') Parameters A : csr_matrix or bsr_matrix Square, sparse matrix in CSR or BSR format theta : float Threshold parameter in [0,1]. norm: 'string' 'abs' : to use the absolute value, 'min' : to use the negative value (see above) Returns ------- S : csr_matrix Matrix graph defining strong connections. S[i,j]=1 if vertex i is strongly influenced by vertex j. See Also -------- symmetric_strength_of_connection : symmetric measure used in SA evolution_strength_of_connection : relaxation based strength measure Notes ----- - A symmetric A does not necessarily yield a symmetric strength matrix S - Calls C++ function classical_strength_of_connection - The version as implemented is designed form M-matrices. Trottenberg et al. use max A[i,k] over all negative entries, which is the same. A positive edge weight never indicates a strong connection. - See [2000BrHeMc]_ and [2001bTrOoSc]_ References .. [2000BrHeMc] Briggs, W. L., Henson, V. E., McCormick, S. F., "A multigrid tutorial", Second edition. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2000. xii+193 pp. .. [2001bTrOoSc] Trottenberg, U., Oosterlee, C. W., Schuller, A., "Multigrid", Academic Press, Inc., San Diego, CA, 2001. xvi+631 pp. Examples -------- """ |
if sparse.isspmatrix_bsr(A):
blocksize = A.blocksize[0]
else:
blocksize = 1
if not sparse.isspmatrix_csr(A):
warn("Implicit conversion of A to csr", sparse.SparseEfficiencyWarning)
A = sparse.csr_matrix(A)
if (theta < 0 or theta > 1):
raise ValueError('expected theta in [0,1]')
Sp = np.empty_like(A.indptr)
Sj = np.empty_like(A.indices)
Sx = np.empty_like(A.data)
if norm == 'abs':
amg_core.classical_strength_of_connection_abs(
A.shape[0], theta, A.indptr, A.indices, A.data, Sp, Sj, Sx)
elif norm == 'min':
amg_core.classical_strength_of_connection_min(
A.shape[0], theta, A.indptr, A.indices, A.data, Sp, Sj, Sx)
else:
raise ValueError('Unknown norm')
S = sparse.csr_matrix((Sx, Sj, Sp), shape=A.shape)
if blocksize > 1:
S = amalgamate(S, blocksize)
# Strength represents "distance", so take the magnitude
S.data = np.abs(S.data)
# Scale S by the largest magnitude entry in each row
S = scale_rows_by_largest_entry(S)
return S |
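The thresholding rule (norm='abs') can be restated densely without the amg_core kernel; on the 1D Laplacian below it marks exactly the off-diagonal -1 entries as strong:

```python
import numpy as np

def classical_strength_dense(A, theta):
    # Dense restatement of the norm='abs' rule above:
    # S[i, j] = 1 where |A[i, j]| >= theta * max_{k != i} |A[i, k]|
    n = A.shape[0]
    S = np.zeros_like(A)
    for i in range(n):
        offdiag = np.abs(A[i]).copy()
        offdiag[i] = 0.0
        m = offdiag.max()
        if m > 0:
            S[i] = (np.abs(A[i]) >= theta * m) & (np.arange(n) != i)
    return S

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
S = classical_strength_dense(A, theta=0.25)
```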
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def symmetric_strength_of_connection(A, theta=0):
"""Symmetric Strength Measure. Compute strength of connection matrix using the standard symmetric measure An off-diagonal connection A[i,j] is strong iff:: abs(A[i,j]) >= theta * sqrt( abs(A[i,i]) * abs(A[j,j]) ) Parameters A : csr_matrix Matrix graph defined in sparse format. Entry A[i,j] describes the strength of edge [i,j] theta : float Threshold parameter (positive). Returns ------- S : csr_matrix Matrix graph defining strong connections. S[i,j]=1 if vertex i is strongly influenced by vertex j. See Also -------- symmetric_strength_of_connection : symmetric measure used in SA evolution_strength_of_connection : relaxation based strength measure Notes ----- - For vector problems, standard strength measures may produce undesirable aggregates. A "block approach" from Vanek et al. is used to replace vertex comparisons with block-type comparisons. A connection between nodes i and j in the block case is strong if:: ||AB[i,j]|| >= theta * sqrt( ||AB[i,i]||*||AB[j,j]|| ) where AB[k,l] is the matrix block (degrees of freedom) associated with nodes k and l and ||.|| is a matrix norm, such a Frobenius. - See [1996bVaMaBr]_ for more details. References .. [1996bVaMaBr] Vanek, P. and Mandel, J. and Brezina, M., "Algebraic Multigrid by Smoothed Aggregation for Second and Fourth Order Elliptic Problems", Computing, vol. 56, no. 3, pp. 179--196, 1996. http://citeseer.ist.psu.edu/vanek96algebraic.html Examples -------- """ |
if theta < 0:
raise ValueError('expected a positive theta')
if sparse.isspmatrix_csr(A):
# if theta == 0:
# return A
Sp = np.empty_like(A.indptr)
Sj = np.empty_like(A.indices)
Sx = np.empty_like(A.data)
fn = amg_core.symmetric_strength_of_connection
fn(A.shape[0], theta, A.indptr, A.indices, A.data, Sp, Sj, Sx)
S = sparse.csr_matrix((Sx, Sj, Sp), shape=A.shape)
elif sparse.isspmatrix_bsr(A):
M, N = A.shape
R, C = A.blocksize
if R != C:
raise ValueError('matrix must have square blocks')
if theta == 0:
data = np.ones(len(A.indices), dtype=A.dtype)
S = sparse.csr_matrix((data, A.indices.copy(), A.indptr.copy()),
shape=(int(M / R), int(N / C)))
else:
# the strength of connection matrix is based on the
# Frobenius norms of the blocks
data = (np.conjugate(A.data) * A.data).reshape(-1, R * C)
data = data.sum(axis=1)
A = sparse.csr_matrix((data, A.indices, A.indptr),
shape=(int(M / R), int(N / C)))
return symmetric_strength_of_connection(A, theta)
else:
raise TypeError('expected csr_matrix or bsr_matrix')
# Strength represents "distance", so take the magnitude
S.data = np.abs(S.data)
# Scale S by the largest magnitude entry in each row
S = scale_rows_by_largest_entry(S)
return S |
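The scalar (CSR) rule can likewise be restated densely without the amg_core kernel; the weak -0.1 couplings below fall under the 0.25*sqrt(2*2) threshold:

```python
import numpy as np

def symmetric_strength_dense(A, theta):
    # Dense restatement of the rule above:
    # strong iff |A[i, j]| >= theta * sqrt(|A[i, i]| * |A[j, j]|)
    d = np.abs(np.diag(A))
    S = (np.abs(A) >= theta * np.sqrt(np.outer(d, d))).astype(float)
    np.fill_diagonal(S, 0.0)
    return S

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -0.1],
              [ 0.0, -0.1,  2.0]])
S = symmetric_strength_dense(A, theta=0.25)
```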
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def relaxation_vectors(A, R, k, alpha):
"""Generate test vectors by relaxing on Ax=0 for some random vectors x. Parameters A : csr_matrix Sparse NxN matrix alpha : scalar Weight for Jacobi R : integer Number of random vectors k : integer Number of relaxation passes Returns ------- x : array Dense array N x k array of relaxation vectors """ |
# random n x R block in column ordering
n = A.shape[0]
x = np.random.rand(n * R) - 0.5
x = np.reshape(x, (n, R), order='F')
# for i in range(R):
# x[:,i] = x[:,i] - np.mean(x[:,i])
b = np.zeros((n, 1))
for r in range(0, R):
jacobi(A, x[:, r], b, iterations=k, omega=alpha)
# x[:,r] = x[:,r]/norm(x[:,r])
return x |
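A standalone sketch of why relaxation on A x = 0 produces useful test vectors: weighted Jacobi quickly damps the high-energy components of a random start, so the residual shrinks while the smooth content survives:

```python
import numpy as np

# Weighted Jacobi sweeps on A x = 0 starting from a random vector.
n = 50
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Poisson
Dinv = 1.0 / np.diag(A)
rng = np.random.default_rng(1)
x = rng.random(n) - 0.5
r0 = np.linalg.norm(A @ x)
for _ in range(20):                 # k = 20 passes with omega = 0.5
    x -= 0.5 * Dinv * (A @ x)
r1 = np.linalg.norm(A @ x)          # much smaller than r0
```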
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def affinity_distance(A, alpha=0.5, R=5, k=20, epsilon=4.0):
"""Affinity Distance Strength Measure. Parameters A : csr_matrix Sparse NxN matrix alpha : scalar Weight for Jacobi R : integer Number of random vectors k : integer Number of relaxation passes epsilon : scalar Drop tolerance Returns ------- C : csr_matrix Sparse matrix of strength values References .. [LiBr] Oren E. Livne and Achi Brandt, "Lean Algebraic Multigrid (LAMG):
Fast Graph Laplacian Linear Solver" Notes ----- No unit testing yet. Does not handle BSR matrices yet. See [LiBr]_ for more details. """ |
if not sparse.isspmatrix_csr(A):
A = sparse.csr_matrix(A)
if alpha < 0:
raise ValueError('expected alpha>0')
if R <= 0 or not isinstance(R, int):
raise ValueError('expected integer R>0')
if k <= 0 or not isinstance(k, int):
raise ValueError('expected integer k>0')
if epsilon < 1:
raise ValueError('expected epsilon>1.0')
def distance(x):
(rows, cols) = A.nonzero()
return 1 - np.sum(x[rows] * x[cols], axis=1)**2 / \
(np.sum(x[rows]**2, axis=1) * np.sum(x[cols]**2, axis=1))
return distance_measure_common(A, distance, alpha, R, k, epsilon) |
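The per-edge affinity is a normalized squared inner product of the two nodes' rows of test-vector values: proportional rows give distance 0 (maximal affinity), orthogonal rows give 1. A minimal sketch:

```python
import numpy as np

def affinity_distance_pair(xi, xj):
    # d_ij = 1 - <x_i, x_j>^2 / (<x_i, x_i> <x_j, x_j>), as in the
    # distance(x) closure above, written for a single pair of nodes.
    return 1.0 - np.dot(xi, xj)**2 / (np.dot(xi, xi) * np.dot(xj, xj))

xi = np.array([1.0, 2.0, 3.0])
```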
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def algebraic_distance(A, alpha=0.5, R=5, k=20, epsilon=2.0, p=2):
"""Algebraic Distance Strength Measure. Parameters A : csr_matrix Sparse NxN matrix alpha : scalar Weight for Jacobi R : integer Number of random vectors k : integer Number of relaxation passes epsilon : scalar Drop tolerance p : scalar or inf p-norm of the measure Returns ------- C : csr_matrix Sparse matrix of strength values References .. [SaSaSc] Ilya Safro, Peter Sanders, and Christian Schulz, "Advanced Coarsening Schemes for Graph Partitioning" Notes ----- No unit testing yet. Does not handle BSR matrices yet. See [SaSaSc]_ for more details. """ |
if not sparse.isspmatrix_csr(A):
A = sparse.csr_matrix(A)
if alpha < 0:
raise ValueError('expected alpha>0')
if R <= 0 or not isinstance(R, int):
raise ValueError('expected integer R>0')
if k <= 0 or not isinstance(k, int):
raise ValueError('expected integer k>0')
if epsilon < 1:
raise ValueError('expected epsilon>1.0')
if p < 1:
raise ValueError('expected p>1 or equal to numpy.inf')
def distance(x):
(rows, cols) = A.nonzero()
if p != np.inf:
avg = np.sum(np.abs(x[rows] - x[cols])**p, axis=1) / R
return (avg)**(1.0 / p)
else:
return np.abs(x[rows] - x[cols]).max(axis=1)
return distance_measure_common(A, distance, alpha, R, k, epsilon) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def distance_measure_common(A, func, alpha, R, k, epsilon):
"""Create strength of connection matrixfrom a function applied to relaxation vectors.""" |
# create test vectors
x = relaxation_vectors(A, R, k, alpha)
# apply distance measure function to vectors
d = func(x)
# drop distances to self
(rows, cols) = A.nonzero()
weak = np.where(rows == cols)[0]
d[weak] = 0
C = sparse.csr_matrix((d, (rows, cols)), shape=A.shape)
C.eliminate_zeros()
# remove weak connections
# removes entry e from a row if e > theta * min of all entries in the row
amg_core.apply_distance_filter(C.shape[0], epsilon, C.indptr,
C.indices, C.data)
C.eliminate_zeros()
# Standardized strength values require small values be weak and large
# values be strong. So, we invert the distances.
C.data = 1.0 / C.data
# Put an identity on the diagonal
C = C + sparse.eye(C.shape[0], C.shape[1], format='csr')
# Scale C by the largest magnitude entry in each row
C = scale_rows_by_largest_entry(C)
return C |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def jacobi_prolongation_smoother(S, T, C, B, omega=4.0/3.0, degree=1, filter=False, weighting='diagonal'):
"""Jacobi prolongation smoother. Parameters S : csr_matrix, bsr_matrix Sparse NxN matrix used for smoothing. Typically, A. T : csr_matrix, bsr_matrix Tentative prolongator C : csr_matrix, bsr_matrix Strength-of-connection matrix B : array Near nullspace modes for the coarse grid such that T*B exactly reproduces the fine grid near nullspace modes omega : scalar Damping parameter filter : boolean If true, filter S before smoothing T. This option can greatly control complexity. weighting : string 'block', 'diagonal' or 'local' weighting for constructing the Jacobi D 'local' Uses a local row-wise weight based on the Gershgorin estimate. Avoids any potential under-damping due to inaccurate spectral radius estimates. 'block' uses a block diagonal inverse of A if A is BSR 'diagonal' uses classic Jacobi with D = diagonal(A) Returns ------- P : csr_matrix, bsr_matrix Smoothed (final) prolongator defined by P = (I - omega/rho(K) K) * T where K = diag(S)^-1 * S and rho(K) is an approximation to the spectral radius of K. Notes ----- If weighting is not 'local', then results using Jacobi prolongation smoother are not precisely reproducible due to a random initial guess used for the spectral radius approximation. For precise reproducibility, set numpy.random.seed(..) to the same value before each test. Examples -------- matrix([[ 1., 0.], [ 1., 0.], [ 1., 0.], [ 0., 1.], [ 0., 1.], [ 0., 1.]]) matrix([[ 0.64930164, 0. ], [ 1. , 0. ], [ 0.64930164, 0.35069836], [ 0.35069836, 0.64930164], [ 0. , 1. ], [ 0. , 0.64930164]]) """ |
# preprocess weighting
if weighting == 'block':
if sparse.isspmatrix_csr(S):
weighting = 'diagonal'
elif sparse.isspmatrix_bsr(S):
if S.blocksize[0] == 1:
weighting = 'diagonal'
if filter:
# Implement filtered prolongation smoothing for the general case by
# utilizing satisfy constraints
if sparse.isspmatrix_bsr(S):
numPDEs = S.blocksize[0]
else:
numPDEs = 1
# Create a filtered S with entries dropped that aren't in C
C = UnAmal(C, numPDEs, numPDEs)
S = S.multiply(C)
S.eliminate_zeros()
if weighting == 'diagonal':
# Use diagonal of S
D_inv = get_diagonal(S, inv=True)
D_inv_S = scale_rows(S, D_inv, copy=True)
D_inv_S = (omega/approximate_spectral_radius(D_inv_S))*D_inv_S
elif weighting == 'block':
# Use block diagonal of S
D_inv = get_block_diag(S, blocksize=S.blocksize[0], inv_flag=True)
D_inv = sparse.bsr_matrix((D_inv, np.arange(D_inv.shape[0]),
np.arange(D_inv.shape[0]+1)),
shape=S.shape)
D_inv_S = D_inv*S
D_inv_S = (omega/approximate_spectral_radius(D_inv_S))*D_inv_S
elif weighting == 'local':
# Use the Gershgorin estimate as each row's weight, instead of a global
# spectral radius estimate
D = np.abs(S)*np.ones((S.shape[0], 1), dtype=S.dtype)
D_inv = np.zeros_like(D)
D_inv[D != 0] = 1.0 / np.abs(D[D != 0])
D_inv_S = scale_rows(S, D_inv, copy=True)
D_inv_S = omega*D_inv_S
else:
raise ValueError('Incorrect weighting option')
if filter:
# Carry out Jacobi, but after calculating the prolongator update, U,
# apply satisfy constraints so that U*B = 0
P = T
for i in range(degree):
U = (D_inv_S*P).tobsr(blocksize=P.blocksize)
# Enforce U*B = 0 (1) Construct array of inv(Bi'Bi), where Bi is B
# restricted to row i's sparsity pattern in Sparsity Pattern. This
# array is used multiple times in Satisfy_Constraints(...).
BtBinv = compute_BtBinv(B, U)
# (2) Apply satisfy constraints
Satisfy_Constraints(U, B, BtBinv)
# Update P
P = P - U
else:
# Carry out Jacobi as normal
P = T
for i in range(degree):
P = P - (D_inv_S*P)
return P |
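A dense sketch of the unfiltered weighting='diagonal' path, using the exact spectral radius in place of the randomized estimate; it reproduces the docstring example values to about four digits:

```python
import numpy as np

n = 6
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson
T = np.zeros((n, 2))
T[:3, 0] = 1.0                                       # tentative prolongator
T[3:, 1] = 1.0
Dinv_A = A / np.diag(A)[:, None]
rho = np.abs(np.linalg.eigvals(Dinv_A)).max()        # exact rho(D^-1 A)
P = T - (4.0/3.0)/rho * (Dinv_A @ T)                 # one damped-Jacobi pass
```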
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def richardson_prolongation_smoother(S, T, omega=4.0/3.0, degree=1):
"""Richardson prolongation smoother. Parameters S : csr_matrix, bsr_matrix Sparse NxN matrix used for smoothing. Typically, A or the "filtered matrix" obtained from A by lumping weak connections onto the diagonal of A. T : csr_matrix, bsr_matrix Tentative prolongator omega : scalar Damping parameter Returns ------- P : csr_matrix, bsr_matrix Smoothed (final) prolongator defined by P = (I - omega/rho(S) S) * T where rho(S) is an approximation to the spectral radius of S. Notes ----- Results using Richardson prolongation smoother are not precisely reproducible due to a random initial guess used for the spectral radius approximation. For precise reproducibility, set numpy.random.seed(..) to the same value before each test. Examples -------- matrix([[ 1., 0.], [ 1., 0.], [ 1., 0.], [ 0., 1.], [ 0., 1.], [ 0., 1.]]) matrix([[ 0.64930164, 0. ], [ 1. , 0. ], [ 0.64930164, 0.35069836], [ 0.35069836, 0.64930164], [ 0. , 1. ], [ 0. , 0.64930164]]) """ |
weight = omega/approximate_spectral_radius(S)
P = T
for i in range(degree):
P = P - weight*(S*P)
return P |
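The update above is just `P <- P - (omega/rho(S)) * S*P`, repeated `degree` times. A minimal pure-Python dense sketch (the helper names `mat_mul` and `richardson_smooth` are illustrative, not pyamg API; `rho` is passed by hand in place of `approximate_spectral_radius`):

```python
import math

def mat_mul(A, B):
    """Multiply two dense matrices stored as lists of rows."""
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

def richardson_smooth(S, T, rho, omega=4.0/3.0, degree=1):
    """P <- P - (omega/rho) * S*P, applied `degree` times to P = T."""
    weight = omega / rho
    P = [row[:] for row in T]
    for _ in range(degree):
        SP = mat_mul(S, P)
        P = [[P[i][j] - weight * SP[i][j] for j in range(len(P[0]))]
             for i in range(len(P))]
    return P

# 1-D Poisson stencil on 3 points; the exact spectral radius of this
# matrix is 2 + sqrt(2), so no randomized estimate is needed here.
S = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
T = [[1.0], [1.0], [1.0]]
P = richardson_smooth(S, T, rho=2.0 + math.sqrt(2.0))
```

The interior row of the constant vector is already in the null space of S's row sums, so it is left untouched while the boundary rows are damped symmetrically.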
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def matrix_asformat(lvl, name, format, blocksize=None):
"""Set a matrix to a specific format. This routine looks for the matrix "name" in the specified format as a member of the level instance, lvl. For example, if name='A', format='bsr' and blocksize=(4,4), and if lvl.Absr44 exists with the correct blocksize, then lvl.Absr is returned. If the matrix doesn't already exist, lvl.name is converted to the desired format, and made a member of lvl. Only create such persistent copies of a matrix for routines such as presmoothing and postsmoothing, where the matrix conversion is done every cycle. Calling this function can _dramatically_ increase your memory costs. Be careful with it's usage. """ |
desired_matrix = name + format
M = getattr(lvl, name)
if format == 'bsr':
desired_matrix += str(blocksize[0])+str(blocksize[1])
if hasattr(lvl, desired_matrix):
# if lvl already contains lvl.name+format
pass
elif M.format == format and format != 'bsr':
# is base_matrix already in the correct format?
setattr(lvl, desired_matrix, M)
elif M.format == format and format == 'bsr':
# convert to bsr with the right blocksize
        # tobsr() will not do anything extra if this is unneeded
setattr(lvl, desired_matrix, M.tobsr(blocksize=blocksize))
else:
# convert
newM = getattr(M, 'to' + format)()
setattr(lvl, desired_matrix, newM)
return getattr(lvl, desired_matrix) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def regular_triangle_mesh(nx, ny):
"""Construct a regular triangular mesh in the unit square. Parameters nx : int Number of nodes in the x-direction ny : int Number of nodes in the y-direction Returns ------- Vert : array nx*ny x 2 vertex list E2V : array Nex x 3 element list Examples -------- """ |
nx, ny = int(nx), int(ny)
if nx < 2 or ny < 2:
raise ValueError('minimum mesh dimension is 2: %s' % ((nx, ny),))
Vert1 = np.tile(np.arange(0, nx-1), ny - 1) +\
np.repeat(np.arange(0, nx * (ny - 1), nx), nx - 1)
Vert3 = np.tile(np.arange(0, nx-1), ny - 1) +\
np.repeat(np.arange(0, nx * (ny - 1), nx), nx - 1) + nx
Vert2 = Vert3 + 1
Vert4 = Vert1 + 1
Verttmp = np.meshgrid(np.arange(0, nx, dtype='float'),
np.arange(0, ny, dtype='float'))
Verttmp = (Verttmp[0].ravel(), Verttmp[1].ravel())
Vert = np.vstack(Verttmp).transpose()
Vert[:, 0] = (1.0 / (nx - 1)) * Vert[:, 0]
Vert[:, 1] = (1.0 / (ny - 1)) * Vert[:, 1]
E2V1 = np.vstack((Vert1, Vert2, Vert3)).transpose()
E2V2 = np.vstack((Vert1, Vert4, Vert2)).transpose()
E2V = np.vstack((E2V1, E2V2))
return Vert, E2V |
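The vectorized index arithmetic above can be hard to follow; this is a pure-Python sketch of the same construction (the name `triangle_mesh` is illustrative). Vertex (i, j) gets index `j*nx + i`, and each grid cell is split into the same two triangles as the NumPy version:

```python
def triangle_mesh(nx, ny):
    """Regular triangular mesh on the unit square, plain-Python version."""
    if nx < 2 or ny < 2:
        raise ValueError('minimum mesh dimension is 2')
    # row-major vertex numbering: vertex (i, j) -> index j*nx + i
    verts = [(i / (nx - 1), j / (ny - 1))
             for j in range(ny) for i in range(nx)]
    elements = []
    for j in range(ny - 1):
        for i in range(nx - 1):
            v1 = j * nx + i      # lower-left corner of the cell
            v4 = v1 + 1          # lower-right
            v3 = v1 + nx         # upper-left
            v2 = v3 + 1          # upper-right
            elements.append((v1, v2, v3))   # upper-left triangle
            elements.append((v1, v4, v2))   # lower-right triangle
    return verts, elements

verts, elems = triangle_mesh(2, 2)
```

For the smallest 2x2 mesh this yields four corner vertices and the two triangles of the single cell.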
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_input(Verts=None, E2V=None, Agg=None, A=None, splitting=None, mesh_type=None):
"""Check input for local functions.""" |
if Verts is not None:
if not np.issubdtype(Verts.dtype, np.floating):
raise ValueError('Verts should be of type float')
if E2V is not None:
if not np.issubdtype(E2V.dtype, np.integer):
raise ValueError('E2V should be of type integer')
if E2V.min() != 0:
warnings.warn('element indices begin at %d' % E2V.min())
if Agg is not None:
if Agg.shape[1] > Agg.shape[0]:
raise ValueError('Agg should be of size Npts x Nagg')
if A is not None:
if Agg is not None:
if (A.shape[0] != A.shape[1]) or (A.shape[0] != Agg.shape[0]):
                raise ValueError('expected square matrix A '
                                 'and compatible with Agg')
else:
raise ValueError('problem with check_input')
if splitting is not None:
splitting = splitting.ravel()
if Verts is not None:
if (len(splitting) % Verts.shape[0]) != 0:
raise ValueError('splitting must be a multiple of N')
else:
raise ValueError('problem with check_input')
if mesh_type is not None:
valid_mesh_types = ('vertex', 'tri', 'quad', 'tet', 'hex')
if mesh_type not in valid_mesh_types:
raise ValueError('mesh_type should be %s' %
' or '.join(valid_mesh_types)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def MIS(G, weights, maxiter=None):
"""Compute a maximal independent set of a graph in parallel. Parameters G : csr_matrix Matrix graph, G[i,j] != 0 indicates an edge weights : ndarray Array of weights for each vertex in the graph G maxiter : int Maximum number of iterations (default: None) Returns ------- mis : array Array of length of G of zeros/ones indicating the independent set Examples -------- See Also -------- fn = amg_core.maximal_independent_set_parallel """ |
if not isspmatrix_csr(G):
raise TypeError('expected csr_matrix')
G = remove_diagonal(G)
mis = np.empty(G.shape[0], dtype='intc')
mis[:] = -1
fn = amg_core.maximal_independent_set_parallel
if maxiter is None:
fn(G.shape[0], G.indptr, G.indices, -1, 1, 0, mis, weights, -1)
else:
if maxiter < 0:
raise ValueError('maxiter must be >= 0')
fn(G.shape[0], G.indptr, G.indices, -1, 1, 0, mis, weights, maxiter)
return mis |
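The actual parallel kernel lives in `amg_core`; as a rough illustration of what an independent set built from vertex weights looks like, here is a hedged sequential greedy sketch (highest weight first). It is not the same algorithm, and the set it returns can differ from the parallel one:

```python
def greedy_mis(adj, weights):
    """adj: dict vertex -> neighbours; weights: dict vertex -> float."""
    in_set = {}
    # visit vertices in order of decreasing weight; admit a vertex only
    # if none of its neighbours has been admitted already
    for v in sorted(adj, key=lambda u: -weights[u]):
        in_set[v] = all(not in_set.get(u, False) for u in adj[v])
    return [v for v in adj if in_set[v]]

# path graph 0-1-2-3 with weights favouring the endpoints
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
w = {0: 0.9, 1: 0.2, 2: 0.3, 3: 0.8}
mis = greedy_mis(adj, w)
```

The result is independent (no edge inside the set) and maximal (every outside vertex has a neighbour in the set), which is what the parallel kernel also guarantees.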
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def preprocess(S, coloring_method=None):
"""Preprocess splitting functions. Parameters S : csr_matrix Strength of connection matrix method : string Algorithm used to compute the vertex coloring: * 'MIS' - Maximal Independent Set * 'JP' - Jones-Plassmann (parallel) * 'LDF' - Largest-Degree-First (parallel) Returns ------- weights: ndarray Weights from a graph coloring of G S : csr_matrix Strength matrix with ones T : csr_matrix transpose of S G : csr_matrix union of S and T Notes ----- Performs the following operations: - Checks input strength of connection matrix S - Replaces S.data with ones - Creates T = S.T in CSR format - Creates G = S union T in CSR format - Creates random weights - Augments weights with graph coloring (if use_color == True) """ |
if not isspmatrix_csr(S):
raise TypeError('expected csr_matrix')
if S.shape[0] != S.shape[1]:
raise ValueError('expected square matrix, shape=%s' % (S.shape,))
N = S.shape[0]
S = csr_matrix((np.ones(S.nnz, dtype='int8'), S.indices, S.indptr),
shape=(N, N))
T = S.T.tocsr() # transpose S for efficient column access
G = S + T # form graph (must be symmetric)
G.data[:] = 1
weights = np.ravel(T.sum(axis=1)) # initial weights
# weights -= T.diagonal() # discount self loops
if coloring_method is None:
weights = weights + sp.rand(len(weights))
else:
coloring = vertex_coloring(G, coloring_method)
num_colors = coloring.max() + 1
weights = weights + (sp.rand(len(weights)) + coloring)/num_colors
return (weights, G, S, T) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_example(name):
"""Load an example problem by name. Parameters name : string (e.g. 'airfoil') Name of the example to load Notes ----- Each example is stored in a dictionary with the following keys: - 'A' : sparse matrix - 'B' : near-nullspace candidates - 'vertices' : dense array of nodal coordinates - 'elements' : dense array of element indices Current example names are:%s Examples -------- """ |
if name not in example_names:
raise ValueError('no example with name (%s)' % name)
else:
return loadmat(os.path.join(example_dir, name + '.mat'),
struct_as_record=True) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def stencil_grid(S, grid, dtype=None, format=None):
"""Construct a sparse matrix form a local matrix stencil. Parameters S : ndarray matrix stencil stored in N-d array grid : tuple tuple containing the N grid dimensions dtype : data type of the result format : string sparse matrix format to return, e.g. "csr", "coo", etc. Returns ------- A : sparse matrix Sparse matrix which represents the operator given by applying stencil S at each vertex of a regular grid with given dimensions. Notes ----- The grid vertices are enumerated as arange(prod(grid)).reshape(grid). This implies that the last grid dimension cycles fastest, while the first dimension cycles slowest. For example, if grid=(2,3) then the grid vertices are ordered as (0,0), (0,1), (0,2), (1,0), (1,1), (1,2). This coincides with the ordering used by the NumPy functions ndenumerate() and mgrid(). Examples -------- matrix([[ 2., -1., 0., 0., 0.], [-1., 2., -1., 0., 0.], [ 0., -1., 2., -1., 0.], [ 0., 0., -1., 2., -1.], [ 0., 0., 0., -1., 2.]]) matrix([[ 4., -1., 0., -1., 0., 0., 0., 0., 0.], [-1., 4., -1., 0., -1., 0., 0., 0., 0.], [ 0., -1., 4., 0., 0., -1., 0., 0., 0.], [-1., 0., 0., 4., -1., 0., -1., 0., 0.], [ 0., -1., 0., -1., 4., -1., 0., -1., 0.], [ 0., 0., -1., 0., -1., 4., 0., 0., -1.], [ 0., 0., 0., -1., 0., 0., 4., -1., 0.], [ 0., 0., 0., 0., -1., 0., -1., 4., -1.], [ 0., 0., 0., 0., 0., -1., 0., -1., 4.]]) """ |
S = np.asarray(S, dtype=dtype)
grid = tuple(grid)
if not (np.asarray(S.shape) % 2 == 1).all():
raise ValueError('all stencil dimensions must be odd')
if len(grid) != np.ndim(S):
        raise ValueError('stencil dimension must equal number of grid '
                         'dimensions')
if min(grid) < 1:
raise ValueError('grid dimensions must be positive')
N_v = np.prod(grid) # number of vertices in the mesh
N_s = (S != 0).sum() # number of nonzero stencil entries
# diagonal offsets
diags = np.zeros(N_s, dtype=int)
# compute index offset of each dof within the stencil
strides = np.cumprod([1] + list(reversed(grid)))[:-1]
indices = tuple(i.copy() for i in S.nonzero())
for i, s in zip(indices, S.shape):
i -= s // 2
for stride, coords in zip(strides, reversed(indices)):
diags += stride * coords
data = S[S != 0].repeat(N_v).reshape(N_s, N_v)
indices = np.vstack(indices).T
# zero boundary connections
for index, diag in zip(indices, data):
diag = diag.reshape(grid)
for n, i in enumerate(index):
if i > 0:
s = [slice(None)] * len(grid)
s[n] = slice(0, i)
s = tuple(s)
diag[s] = 0
elif i < 0:
s = [slice(None)]*len(grid)
s[n] = slice(i, None)
s = tuple(s)
diag[s] = 0
# remove diagonals that lie outside matrix
mask = abs(diags) < N_v
if not mask.all():
diags = diags[mask]
data = data[mask]
# sum duplicate diagonals
if len(np.unique(diags)) != len(diags):
new_diags = np.unique(diags)
new_data = np.zeros((len(new_diags), data.shape[1]),
dtype=data.dtype)
for dia, dat in zip(diags, data):
n = np.searchsorted(new_diags, dia)
new_data[n, :] += dat
diags = new_diags
data = new_data
return sparse.dia_matrix((data, diags),
shape=(N_v, N_v)).asformat(format) |
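In one dimension the machinery above reduces to placing the stencil on every grid point and zeroing entries that fall off the domain. A minimal pure-Python dense sketch (the name `stencil_grid_1d` is illustrative; the real function returns a sparse DIA matrix for arbitrary dimension):

```python
def stencil_grid_1d(stencil, n):
    """Dense n x n operator from an odd-length 1-D stencil."""
    if len(stencil) % 2 != 1:
        raise ValueError('stencil length must be odd')
    half = len(stencil) // 2
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k, s in enumerate(stencil):
            j = i + (k - half)       # column hit by this stencil entry
            if 0 <= j < n:           # drop connections outside the grid
                A[i][j] = s
    return A

A = stencil_grid_1d([-1.0, 2.0, -1.0], 5)
```

This reproduces the 5x5 tridiagonal Poisson matrix shown in the docstring example.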
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _CRsweep(A, B, Findex, Cindex, nu, thetacr, method):
"""Perform CR sweeps on a target vector. Internal function called by CR. Performs habituated or concurrent relaxation sweeps on target vector. Stops when either (i) very fast convergence, CF < 0.1*thetacr, are observed, or at least a given number of sweeps have been performed and the relative change in CF < 0.1. Parameters A : csr_matrix B : array like Target near null space mode Findex : array like List of F indices in current splitting Cindex : array like List of C indices in current splitting nu : int minimum number of relaxation sweeps to do thetacr Desired convergence factor Returns ------- rho : float Convergence factor of last iteration e : array like Smoothed error vector """ |
n = A.shape[0] # problem size
numax = nu
z = np.zeros((n,))
e = deepcopy(B[:, 0])
e[Cindex] = 0.0
enorm = norm(e)
rhok = 1
it = 0
while True:
if method == 'habituated':
gauss_seidel(A, e, z, iterations=1)
e[Cindex] = 0.0
elif method == 'concurrent':
gauss_seidel_indexed(A, e, z, indices=Findex, iterations=1)
else:
raise NotImplementedError('method not recognized: need habituated '
'or concurrent')
enorm_old = enorm
enorm = norm(e)
rhok_old = rhok
rhok = enorm / enorm_old
it += 1
# criteria 1 -- fast convergence
if rhok < 0.1 * thetacr:
break
# criteria 2 -- at least nu iters, small relative change in CF (<0.1)
elif ((abs(rhok - rhok_old) / rhok) < 0.1) and (it >= nu):
break
return rhok, e |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def binormalize(A, tol=1e-5, maxiter=10):
"""Binormalize matrix A. Attempt to create unit l_1 norm rows. Parameters A : csr_matrix sparse matrix (n x n) tol : float tolerance x : array guess at the diagonal maxiter : int maximum number of iterations to try Returns ------- C : csr_matrix diagonally scaled A, C=DAD Notes ----- - Goal: Scale A so that l_1 norm of the rows are equal to 1: - B = DAD - want row sum of B = 1 - easily done with tol=0 if B=DA, but this is not symmetric - algorithm is O(N log (1.0/tol)) Examples -------- References .. [1] Livne, Golub, "Scaling by Binormalization" Tech Report SCCM-03-12, SCCM, Stanford, 2003 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.3.1679 """ |
if not isspmatrix(A):
raise TypeError('expecting sparse matrix A')
if A.dtype == complex:
raise NotImplementedError('complex A not implemented')
n = A.shape[0]
it = 0
x = np.ones((n, 1)).ravel()
# 1.
B = A.multiply(A).tocsc() # power(A,2) inconsistent in numpy, scipy.sparse
d = B.diagonal().ravel()
# 2.
beta = B * x
betabar = (1.0/n) * np.dot(x, beta)
stdev = rowsum_stdev(x, beta)
# 3
while stdev > tol and it < maxiter:
for i in range(0, n):
# solve equation x_i, keeping x_j's fixed
# see equation (12)
c2 = (n-1)*d[i]
c1 = (n-2)*(beta[i] - d[i]*x[i])
c0 = -d[i]*x[i]*x[i] + 2*beta[i]*x[i] - n*betabar
if (-c0 < 1e-14):
print('warning: A nearly un-binormalizable...')
return A
else:
# see equation (12)
xnew = (2*c0)/(-c1 - np.sqrt(c1*c1 - 4*c0*c2))
dx = xnew - x[i]
# here we assume input matrix is symmetric since we grab a row of B
# instead of a column
ii = B.indptr[i]
iii = B.indptr[i+1]
dot_Bcol = np.dot(x[B.indices[ii:iii]], B.data[ii:iii])
betabar = betabar + (1.0/n)*dx*(dot_Bcol + beta[i] + d[i]*dx)
beta[B.indices[ii:iii]] += dx*B.data[ii:iii]
x[i] = xnew
stdev = rowsum_stdev(x, beta)
it += 1
# rescale for unit 2-norm
d = np.sqrt(x)
D = spdiags(d.ravel(), [0], n, n)
C = D * A * D
C = C.tocsr()
beta = C.multiply(C).sum(axis=1)
scale = np.sqrt((1.0/n) * np.sum(beta))
return (1/scale)*C |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rowsum_stdev(x, beta):
r"""Compute row sum standard deviation. Compute for approximation x, the std dev of the row sums s(x) = ( 1/n \sum_k (x_k beta_k - betabar)^2 )^(1/2) with betabar = 1/n dot(beta,x) Parameters x : array beta : array Returns ------- s(x)/betabar : float Notes ----- equation (7) in Livne/Golub """ |
n = x.size
betabar = (1.0/n) * np.dot(x, beta)
stdev = np.sqrt((1.0/n) *
np.sum(np.power(np.multiply(x, beta) - betabar, 2)))
return stdev/betabar |
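A quick pure-Python check of equation (7) (the name `rowsum_stdev_py` is illustrative): when every product x_k beta_k equals betabar the deviation is zero, and an unbalanced pair gives the expected relative spread:

```python
import math

def rowsum_stdev_py(x, beta):
    """Relative std dev of the scaled row sums x_k * beta_k."""
    n = len(x)
    betabar = sum(xk * bk for xk, bk in zip(x, beta)) / n
    var = sum((xk * bk - betabar) ** 2 for xk, bk in zip(x, beta)) / n
    return math.sqrt(var) / betabar

s0 = rowsum_stdev_py([1.0, 1.0], [2.0, 2.0])  # balanced rows
s1 = rowsum_stdev_py([1.0, 1.0], [1.0, 3.0])  # products 1 and 3, mean 2
```

For the second case the per-row deviations are both 1, so stdev = 1 and the relative value is 1/2.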
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def mls_polynomial_coefficients(rho, degree):
"""Determine the coefficients for a MLS polynomial smoother. Parameters rho : float Spectral radius of the matrix in question degree : int Degree of polynomial coefficients to generate Returns ------- Tuple of arrays (coeffs,roots) containing the coefficients for the (symmetric) polynomial smoother and the roots of polynomial prolongation smoother. The coefficients of the polynomial are in descending order References .. [1] Parallel multigrid smoothing: polynomial versus Gauss--Seidel M. F. Adams, M. Brezina, J. J. Hu, and R. S. Tuminaro J. Comp. Phys., 188 (2003), pp. 593--610 Examples -------- [ 6.4 -48. 144. -220. 180. -75.8 14.5] [ 1.4472136 0.5527864] """ |
# std_roots = np.cos(np.pi * (np.arange(degree) + 0.5)/ degree)
# print std_roots
roots = rho/2.0 * \
(1.0 - np.cos(2*np.pi*(np.arange(degree, dtype='float64') + 1)/(2.0*degree+1.0)))
# print roots
roots = 1.0/roots
# S_coeffs = list(-np.poly(roots)[1:][::-1])
S = np.poly(roots)[::-1] # monomial coefficients of S error propagator
SSA_max = rho/((2.0*degree+1.0)**2) # upper bound spectral radius of S^2A
S_hat = np.polymul(S, S) # monomial coefficients of \hat{S} propagator
S_hat = np.hstack(((-1.0/SSA_max)*S_hat, [1]))
# coeff for combined error propagator \hat{S}S
coeffs = np.polymul(S_hat, S)
coeffs = -coeffs[:-1] # coeff for smoother
return (coeffs, roots) |
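The roots come from a closed-form Chebyshev-like formula: the reciprocals of rho/2 * (1 - cos(2 pi k / (2 degree + 1))) for k = 1..degree. A small pure-Python check (helper name `mls_roots` is illustrative) reproduces the two roots quoted in the docstring, which correspond to rho = 2, degree = 2:

```python
import math

def mls_roots(rho, degree):
    """Roots of the MLS prolongation smoother polynomial."""
    return [1.0 / (rho / 2.0 *
                   (1.0 - math.cos(2.0 * math.pi * k / (2.0 * degree + 1.0))))
            for k in range(1, degree + 1)]

roots = mls_roots(2.0, 2)   # expect roughly [1.4472136, 0.5527864]
```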
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def steepest_descent(A, b, x0=None, tol=1e-5, maxiter=None, xtype=None, M=None, callback=None, residuals=None):
"""Steepest descent algorithm. Solves the linear system Ax = b. Left preconditioning is supported. Parameters A : array, matrix, sparse matrix, LinearOperator n x n, linear system to solve b : array, matrix right hand side, shape is (n,) or (n,1) x0 : array, matrix initial guess, default is a vector of zeros tol : float relative convergence tolerance, i.e. tol is scaled by the preconditioner norm of r_0, or ||r_0||_M. maxiter : int maximum number of allowed iterations xtype : type dtype for the solution, default is automatic type detection M : array, matrix, sparse matrix, LinearOperator n x n, inverted preconditioner, i.e. solve M A x = M b. callback : function User-supplied function is called after each iteration as callback(xk), where xk is the current solution vector residuals : list residuals contains the residual norm history, including the initial residual. The preconditioner norm is used, instead of the Euclidean norm. Returns ------- (xNew, info) xNew : an updated guess to the solution of Ax = b info : halting status of cg == ======================================= 0 successful exit >0 convergence to tolerance not achieved, return iteration count instead. <0 numerical breakdown, or illegal input == ======================================= Notes ----- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy. The residual in the preconditioner norm is both used for halting and returned in the residuals list. Examples -------- 7.89436429704 References .. [1] Yousef Saad, "Iterative Methods for Sparse Linear Systems, Second Edition", SIAM, pp. 137--142, 2003 http://www-users.cs.umn.edu/~saad/books.html """ |
A, M, x, b, postprocess = make_system(A, M, x0, b)
# Ensure that warnings are always reissued from this function
import warnings
    warnings.filterwarnings('always',
                            module=r'pyamg\.krylov\._steepest_descent')
# determine maxiter
if maxiter is None:
maxiter = int(len(b))
elif maxiter < 1:
raise ValueError('Number of iterations must be positive')
# setup method
r = b - A*x
z = M*r
rz = np.inner(r.conjugate(), z)
# use preconditioner norm
normr = np.sqrt(rz)
if residuals is not None:
residuals[:] = [normr] # initial residual
# Check initial guess ( scaling by b, if b != 0,
# must account for case when norm(b) is very small)
normb = norm(b)
if normb == 0.0:
normb = 1.0
if normr < tol*normb:
return (postprocess(x), 0)
# Scale tol by ||r_0||_M
if normr != 0.0:
tol = tol*normr
# How often should r be recomputed
recompute_r = 50
iter = 0
while True:
iter = iter+1
q = A*z
zAz = np.inner(z.conjugate(), q) # check curvature of A
if zAz < 0.0:
            warn('\nIndefinite matrix detected in steepest descent, '
                 'aborting\n')
return (postprocess(x), -1)
alpha = rz / zAz # step size
x = x + alpha*z
        if np.mod(iter, recompute_r) and iter > 0:
            r = r - alpha*q
        else:
            r = b - A*x
z = M*r
rz = np.inner(r.conjugate(), z)
if rz < 0.0: # check curvature of M
            warn('\nIndefinite preconditioner detected in steepest descent, '
                 'aborting\n')
return (postprocess(x), -1)
normr = np.sqrt(rz) # use preconditioner norm
if residuals is not None:
residuals.append(normr)
if callback is not None:
callback(x)
if normr < tol:
return (postprocess(x), 0)
elif rz == 0.0:
# important to test after testing normr < tol. rz == 0.0 is an
# indicator of convergence when r = 0.0
            warn('\nSingular preconditioner detected in steepest descent, '
                 'ceasing iterations\n')
return (postprocess(x), -1)
if iter == maxiter:
return (postprocess(x), iter) |
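Stripped of preconditioning (M = I) and bookkeeping, the core iteration is just alpha = &lt;r, r&gt; / &lt;r, A r&gt; followed by x &lt;- x + alpha r. A minimal dense pure-Python sketch on a tiny SPD system (the name `sd_solve` is illustrative, not the pyamg API):

```python
def sd_solve(A, b, tol=1e-10, maxiter=200):
    """Unpreconditioned steepest descent on a dense SPD system."""
    n = len(b)
    x = [0.0] * n
    for _ in range(maxiter):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        r = [b[i] - Ax[i] for i in range(n)]          # residual
        rr = sum(ri * ri for ri in r)
        if rr ** 0.5 < tol:
            break
        Ar = [sum(A[i][j] * r[j] for j in range(n)) for i in range(n)]
        alpha = rr / sum(r[i] * Ar[i] for i in range(n))  # optimal step
        x = [x[i] + alpha * r[i] for i in range(n)]
    return x

A = [[2.0, 0.0], [0.0, 3.0]]
b = [2.0, 3.0]
x = sd_solve(A, b)   # exact solution is [1, 1]
```

On an SPD matrix the error contracts by at most (kappa - 1)/(kappa + 1) per sweep in the A-norm, so this tiny system (kappa = 1.5) converges quickly.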
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def demo():
"""Outline basic demo.""" |
A = poisson((100, 100), format='csr') # 2D FD Poisson problem
B = None # no near-null spaces guesses for SA
b = sp.rand(A.shape[0], 1) # a random right-hand side
# use AMG based on Smoothed Aggregation (SA) and display info
mls = smoothed_aggregation_solver(A, B=B)
print(mls)
# Solve Ax=b with no acceleration ('standalone' solver)
standalone_residuals = []
x = mls.solve(b, tol=1e-10, accel=None, residuals=standalone_residuals)
# Solve Ax=b with Conjugate Gradient (AMG as a preconditioner to CG)
accelerated_residuals = []
x = mls.solve(b, tol=1e-10, accel='cg', residuals=accelerated_residuals)
del x
# Compute relative residuals
standalone_residuals = \
np.array(standalone_residuals) / standalone_residuals[0]
accelerated_residuals = \
np.array(accelerated_residuals) / accelerated_residuals[0]
# Compute (geometric) convergence factors
factor1 = standalone_residuals[-1]**(1.0/len(standalone_residuals))
factor2 = accelerated_residuals[-1]**(1.0/len(accelerated_residuals))
print(" MG convergence factor: %g" % (factor1))
print("MG with CG acceleration convergence factor: %g" % (factor2))
# Plot convergence history
try:
import matplotlib.pyplot as plt
plt.figure()
plt.title('Convergence History')
plt.xlabel('Iteration')
plt.ylabel('Relative Residual')
plt.semilogy(standalone_residuals, label='Standalone',
linestyle='-', marker='o')
plt.semilogy(accelerated_residuals, label='Accelerated',
linestyle='-', marker='s')
plt.legend()
plt.show()
except ImportError:
        print("\n\nNote: matplotlib not available on your system.")
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def lloyd_aggregation(C, ratio=0.03, distance='unit', maxiter=10):
"""Aggregate nodes using Lloyd Clustering. Parameters C : csr_matrix strength of connection matrix ratio : scalar Fraction of the nodes which will be seeds. distance : ['unit','abs','inv',None] Distance assigned to each edge of the graph G used in Lloyd clustering For each nonzero value C[i,j]: ======= =========================== 'unit' G[i,j] = 1 'abs' G[i,j] = abs(C[i,j]) 'inv' G[i,j] = 1.0/abs(C[i,j]) 'same' G[i,j] = C[i,j] 'sub' G[i,j] = C[i,j] - min(C) ======= =========================== maxiter : int Maximum number of iterations to perform Returns ------- AggOp : csr_matrix aggregation operator which determines the sparsity pattern of the tentative prolongator seeds : array array of Cpts, i.e., Cpts[i] = root node of aggregate i See Also -------- amg_core.standard_aggregation Examples -------- matrix([[ 2., -1., 0., 0.], [-1., 2., -1., 0.], [ 0., -1., 2., -1.], [ 0., 0., -1., 2.]]) matrix([[1], [1], [1], [1]], dtype=int8) """ |
if ratio <= 0 or ratio > 1:
raise ValueError('ratio must be > 0.0 and <= 1.0')
if not (isspmatrix_csr(C) or isspmatrix_csc(C)):
raise TypeError('expected csr_matrix or csc_matrix')
if distance == 'unit':
data = np.ones_like(C.data).astype(float)
elif distance == 'abs':
data = abs(C.data)
elif distance == 'inv':
data = 1.0/abs(C.data)
    elif distance == 'same':
        data = C.data
    elif distance == 'sub':
        data = C.data - C.data.min()
else:
raise ValueError('unrecognized value distance=%s' % distance)
if C.dtype == complex:
data = np.real(data)
assert(data.min() >= 0)
G = C.__class__((data, C.indices, C.indptr), shape=C.shape)
num_seeds = int(min(max(ratio * G.shape[0], 1), G.shape[0]))
distances, clusters, seeds = lloyd_cluster(G, num_seeds, maxiter=maxiter)
row = (clusters >= 0).nonzero()[0]
col = clusters[row]
data = np.ones(len(row), dtype='int8')
AggOp = coo_matrix((data, (row, col)),
shape=(G.shape[0], num_seeds)).tocsr()
return AggOp, seeds |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cr(A, b, x0=None, tol=1e-5, maxiter=None, xtype=None, M=None, callback=None, residuals=None):
"""Conjugate Residual algorithm. Solves the linear system Ax = b. Left preconditioning is supported. The matrix A must be Hermitian symmetric (but not necessarily definite). Parameters A : array, matrix, sparse matrix, LinearOperator n x n, linear system to solve b : array, matrix right hand side, shape is (n,) or (n,1) x0 : array, matrix initial guess, default is a vector of zeros tol : float relative convergence tolerance, i.e. tol is scaled by the preconditioner norm of r_0, or ||r_0||_M. maxiter : int maximum number of allowed iterations xtype : type dtype for the solution, default is automatic type detection M : array, matrix, sparse matrix, LinearOperator n x n, inverted preconditioner, i.e. solve M A x = M b. callback : function User-supplied function is called after each iteration as callback(xk), where xk is the current solution vector residuals : list residuals contains the residual norm history, including the initial residual. The preconditioner norm is used, instead of the Euclidean norm. Returns ------- (xNew, info) xNew : an updated guess to the solution of Ax = b info : halting status of cr == ======================================= 0 successful exit >0 convergence to tolerance not achieved, return iteration count instead. <0 numerical breakdown, or illegal input == ======================================= Notes ----- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy. The 2-norm of the preconditioned residual is used both for halting and returned in the residuals list. Examples -------- 10.9370700187 References .. [1] Yousef Saad, "Iterative Methods for Sparse Linear Systems, Second Edition", SIAM, pp. 262-67, 2003 http://www-users.cs.umn.edu/~saad/books.html """ |
A, M, x, b, postprocess = make_system(A, M, x0, b)
# n = len(b)
# Ensure that warnings are always reissued from this function
import warnings
    warnings.filterwarnings('always', module=r'pyamg\.krylov\._cr')
# determine maxiter
if maxiter is None:
maxiter = int(1.3*len(b)) + 2
elif maxiter < 1:
raise ValueError('Number of iterations must be positive')
# choose tolerance for numerically zero values
# t = A.dtype.char
# eps = np.finfo(np.float).eps
# feps = np.finfo(np.single).eps
# geps = np.finfo(np.longfloat).eps
# _array_precision = {'f': 0, 'd': 1, 'g': 2, 'F': 0, 'D': 1, 'G': 2}
# numerically_zero = {0: feps*1e3, 1: eps*1e6,
# 2: geps*1e6}[_array_precision[t]]
# setup method
r = b - A*x
z = M*r
p = z.copy()
zz = np.inner(z.conjugate(), z)
# use preconditioner norm
normr = np.sqrt(zz)
if residuals is not None:
residuals[:] = [normr] # initial residual
# Check initial guess ( scaling by b, if b != 0,
# must account for case when norm(b) is very small)
normb = norm(b)
if normb == 0.0:
normb = 1.0
if normr < tol*normb:
return (postprocess(x), 0)
# Scale tol by ||r_0||_M
if normr != 0.0:
tol = tol*normr
# How often should r be recomputed
recompute_r = 8
iter = 0
Az = A*z
rAz = np.inner(r.conjugate(), Az)
Ap = A*p
while True:
rAz_old = rAz
alpha = rAz / np.inner(Ap.conjugate(), Ap) # 3
x += alpha * p # 4
if np.mod(iter, recompute_r) and iter > 0: # 5
r -= alpha * Ap
else:
r = b - A*x
z = M*r
Az = A*z
rAz = np.inner(r.conjugate(), Az)
beta = rAz/rAz_old # 6
p *= beta # 7
p += z
Ap *= beta # 8
Ap += Az
iter += 1
zz = np.inner(z.conjugate(), z)
normr = np.sqrt(zz) # use preconditioner norm
if residuals is not None:
residuals.append(normr)
if callback is not None:
callback(x)
if normr < tol:
return (postprocess(x), 0)
elif zz == 0.0:
# important to test after testing normr < tol. rz == 0.0 is an
# indicator of convergence when r = 0.0
            warn('\nSingular preconditioner detected in CR, '
                 'ceasing iterations\n')
return (postprocess(x), -1)
if iter == maxiter:
return (postprocess(x), iter) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def BSR_Get_Row(A, i):
"""Return row i in BSR matrix A. Only nonzero entries are returned Parameters A : bsr_matrix Input matrix i : int Row number Returns ------- z : array Actual nonzero values for row i colindx Array of column indices for the nonzeros of row i Examples -------- [4 5] """ |
blocksize = A.blocksize[0]
BlockIndx = int(i/blocksize)
rowstart = A.indptr[BlockIndx]
rowend = A.indptr[BlockIndx+1]
localRowIndx = i % blocksize
# Get z
indys = A.data[rowstart:rowend, localRowIndx, :].nonzero()
z = A.data[rowstart:rowend, localRowIndx, :][indys[0], indys[1]]
    colindx = np.zeros((1, len(z)), dtype=np.int32)
counter = 0
for j in range(rowstart, rowend):
coloffset = blocksize*A.indices[j]
indys = A.data[j, localRowIndx, :].nonzero()[0]
increment = indys.shape[0]
colindx[0, counter:(counter+increment)] = coloffset + indys
counter += increment
return np.mat(z).T, colindx[0, :] |
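The key index arithmetic in `BSR_Get_Row` is the block/local split: global row i of a BSR matrix with square blocks of size `blocksize` lives in block row `i // blocksize` at local offset `i % blocksize`, and block column c covers global columns c*blocksize through c*blocksize + blocksize - 1. A tiny pure-Python illustration (the helper name `bsr_row_location` is hypothetical):

```python
def bsr_row_location(i, blocksize):
    """Map a global row index to (block row, local row within block)."""
    return i // blocksize, i % blocksize

# global row 7 with 3x3 blocks sits in block row 2, local row 1
block_row, local_row = bsr_row_location(7, 3)
```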