Dataset schema: repo (string, 7-55 chars) | path (string, 4-127 chars) | func_name (string, 1-88 chars) | original_string (string, 75-19.8k chars) | language (1 class) | code (string, 75-19.8k chars) | code_tokens (list) | docstring (string, 3-17.3k chars) | docstring_tokens (list) | sha (string, 40 chars) | url (string, 87-242 chars) | partition (1 class)

In every record the code column repeats original_string verbatim, code_tokens is its token split, and docstring/docstring_tokens repeat the docstring embedded in the code; each record below therefore lists repo, path, func_name, language, the code, sha, url, and partition.

repo: Azure/azure-cosmos-table-python
path: azure-cosmosdb-table/azure/cosmosdb/table/tableservice.py
func_name: TableService.get_table_service_stats
language: python
code:
def get_table_service_stats(self, timeout=None):
'''
Retrieves statistics related to replication for the Table service. It is
only available when read-access geo-redundant replication is enabled for
the storage account.
With geo-redundant replication, Azure Storage maintains your data durably
in two locations. In both locations, Azure Storage constantly maintains
multiple healthy replicas of your data. The location where you read,
create, update, or delete data is the primary storage account location.
The primary location exists in the region you choose at the time you
create an account via the Azure classic portal, for
example, North Central US. The location to which your data is replicated
is the secondary location. The secondary location is automatically
determined based on the location of the primary; it is in a second data
center that resides in the same region as the primary location. Read-only
access is available from the secondary location, if read-access geo-redundant
replication is enabled for your storage account.
:param int timeout:
The timeout parameter is expressed in seconds.
:return: The table service stats.
:rtype: :class:`~azure.storage.common.models.ServiceStats`
'''
request = HTTPRequest()
request.method = 'GET'
request.host_locations = self._get_host_locations(primary=False, secondary=True)
request.path = '/'
request.query = {
'restype': 'service',
'comp': 'stats',
'timeout': _int_to_str(timeout),
}
return self._perform_request(request, _convert_xml_to_service_stats)
sha: a7b618f6bddc465c9fdf899ea2971dfe4d04fcf0
url: https://github.com/Azure/azure-cosmos-table-python/blob/a7b618f6bddc465c9fdf899ea2971dfe4d04fcf0/azure-cosmosdb-table/azure/cosmosdb/table/tableservice.py#L335-L369
partition: train
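
A minimal usage sketch (credentials are placeholders; the account must have RA-GRS enabled, and the geo_replication attribute follows azure-storage-common's ServiceStats model):

```python
from azure.cosmosdb.table import TableService

ts = TableService(account_name='myaccount', account_key='mykey')

# Stats are served from the secondary endpoint, hence the RA-GRS requirement.
stats = ts.get_table_service_stats(timeout=30)
print(stats.geo_replication.status)          # e.g. 'live'
print(stats.geo_replication.last_sync_time)  # last secondary sync time
```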

repo: Azure/azure-cosmos-table-python
path: azure-cosmosdb-table/azure/cosmosdb/table/tableservice.py
func_name: TableService.get_table_service_properties
language: python
code:
def get_table_service_properties(self, timeout=None):
'''
Gets the properties of a storage account's Table service, including
logging, analytics and CORS rules.
:param int timeout:
The server timeout, expressed in seconds.
:return: The table service properties.
:rtype: :class:`~azure.storage.common.models.ServiceProperties`
'''
request = HTTPRequest()
request.method = 'GET'
request.host_locations = self._get_host_locations(secondary=True)
request.path = '/'
request.query = {
'restype': 'service',
'comp': 'properties',
'timeout': _int_to_str(timeout),
}
return self._perform_request(request, _convert_xml_to_service_properties)
sha: a7b618f6bddc465c9fdf899ea2971dfe4d04fcf0
url: https://github.com/Azure/azure-cosmos-table-python/blob/a7b618f6bddc465c9fdf899ea2971dfe4d04fcf0/azure-cosmosdb-table/azure/cosmosdb/table/tableservice.py#L371-L391
partition: train
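
Reading the returned properties, reusing the hypothetical ts client from the previous sketch (attribute names follow azure-storage-common's ServiceProperties model):

```python
props = ts.get_table_service_properties()

# Analytics settings come back as model objects, not raw XML.
print(props.logging.read, props.logging.write, props.logging.delete)
print(props.hour_metrics.enabled, props.minute_metrics.enabled)
for rule in props.cors:
    print(rule.allowed_origins, rule.allowed_methods)
```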

repo: Azure/azure-cosmos-table-python
path: azure-cosmosdb-table/azure/cosmosdb/table/tableservice.py
func_name: TableService.delete_table
language: python
code:
def delete_table(self, table_name, fail_not_exist=False, timeout=None):
'''
Deletes the specified table and any data it contains.
When a table is successfully deleted, it is immediately marked for deletion
and is no longer accessible to clients. The table is later removed from
the Table service during garbage collection.
Note that deleting a table is likely to take at least 40 seconds to complete.
If an operation is attempted against the table while it is being deleted,
an :class:`AzureConflictHttpError` will be thrown.
:param str table_name:
The name of the table to delete.
:param bool fail_not_exist:
Specifies whether to throw an exception if the table doesn't exist.
:param int timeout:
The server timeout, expressed in seconds.
:return:
A boolean indicating whether the table was deleted. If fail_not_exist
was set to True, this will throw instead of returning False.
:rtype: bool
'''
_validate_not_none('table_name', table_name)
request = HTTPRequest()
request.method = 'DELETE'
request.host_locations = self._get_host_locations()
request.path = '/Tables(\'' + _to_str(table_name) + '\')'
request.query = {'timeout': _int_to_str(timeout)}
request.headers = {_DEFAULT_ACCEPT_HEADER[0]: _DEFAULT_ACCEPT_HEADER[1]}
if not fail_not_exist:
try:
self._perform_request(request)
return True
except AzureHttpError as ex:
_dont_fail_not_exist(ex)
return False
else:
self._perform_request(request)
return True
sha: a7b618f6bddc465c9fdf899ea2971dfe4d04fcf0
url: https://github.com/Azure/azure-cosmos-table-python/blob/a7b618f6bddc465c9fdf899ea2971dfe4d04fcf0/azure-cosmosdb-table/azure/cosmosdb/table/tableservice.py#L571-L611
partition: train
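
A sketch of both error-handling modes with the same hypothetical ts client, assuming a missing table surfaces as azure.common's AzureMissingResourceHttpError:

```python
from azure.common import AzureMissingResourceHttpError

# Default mode: a missing table is reported as False, not raised.
deleted = ts.delete_table('mytable')
print(deleted)

# Strict mode: a missing table raises instead of returning False.
try:
    ts.delete_table('mytable', fail_not_exist=True)
except AzureMissingResourceHttpError:
    print('table did not exist')
```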

repo: Azure/azure-cosmos-table-python
path: azure-cosmosdb-table/azure/cosmosdb/table/tableservice.py
func_name: TableService.query_entities
language: python
code:
def query_entities(self, table_name, filter=None, select=None, num_results=None,
marker=None, accept=TablePayloadFormat.JSON_MINIMAL_METADATA,
property_resolver=None, timeout=None):
'''
Returns a generator to list the entities in the table specified. The
generator will lazily follow the continuation tokens returned by the
service and stop when all entities have been returned or num_results is
reached.
If num_results is specified and the account has more than that number of
entities, the generator will have a populated next_marker field once it
finishes. This marker can be used to create a new generator if more
results are desired.
:param str table_name:
The name of the table to query.
:param str filter:
Returns only entities that satisfy the specified filter. Note that
no more than 15 discrete comparisons are permitted within a $filter
string. See http://msdn.microsoft.com/en-us/library/windowsazure/dd894031.aspx
for more information on constructing filters.
:param str select:
Returns only the desired properties of an entity from the set.
:param int num_results:
The maximum number of entities to return.
:param marker:
An opaque continuation object. This value can be retrieved from the
next_marker field of a previous generator object if num_results was
specified and that generator has finished enumerating results. If
specified, this generator will begin returning results from the point
where the previous generator stopped.
:type marker: obj
:param str accept:
Specifies the accepted content type of the response payload. See
:class:`~azure.storage.table.models.TablePayloadFormat` for possible
values.
:param property_resolver:
A function which given the partition key, row key, property name,
property value, and the property EdmType if returned by the service,
returns the EdmType of the property. Generally used if accept is set
to JSON_NO_METADATA.
:type property_resolver: func(pk, rk, prop_name, prop_value, service_edm_type)
:param int timeout:
The server timeout, expressed in seconds. This function may make multiple
calls to the service in which case the timeout value specified will be
applied to each individual call.
:return: A generator which produces :class:`~azure.storage.table.models.Entity` objects.
:rtype: :class:`~azure.storage.common.models.ListGenerator`
'''
operation_context = _OperationContext(location_lock=True)
if self.key_encryption_key is not None or self.key_resolver_function is not None:
# If query already requests all properties, no need to add the metadata columns
if select is not None and select != '*':
select += ',_ClientEncryptionMetadata1,_ClientEncryptionMetadata2'
args = (table_name,)
kwargs = {'filter': filter, 'select': select, 'max_results': num_results, 'marker': marker,
'accept': accept, 'property_resolver': property_resolver, 'timeout': timeout,
'_context': operation_context}
resp = self._query_entities(*args, **kwargs)
return ListGenerator(resp, self._query_entities, args, kwargs)
sha: a7b618f6bddc465c9fdf899ea2971dfe4d04fcf0
url: https://github.com/Azure/azure-cosmos-table-python/blob/a7b618f6bddc465c9fdf899ea2971dfe4d04fcf0/azure-cosmosdb-table/azure/cosmosdb/table/tableservice.py#L678-L740
partition: train
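
A paging sketch with the hypothetical ts client; the filter string and property names are illustrative:

```python
# The generator transparently follows continuation tokens until
# num_results entities have been produced.
entities = ts.query_entities(
    'mytable',
    filter="PartitionKey eq 'pk1' and age gt 30",
    select='age,married',
    num_results=100)
for entity in entities:
    print(entity['age'], entity['married'])

# Resume later from the point where this generator stopped.
if entities.next_marker:
    more = ts.query_entities('mytable', num_results=100,
                             marker=entities.next_marker)
```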

repo: Azure/azure-cosmos-table-python
path: azure-cosmosdb-table/azure/cosmosdb/table/tableservice.py
func_name: TableService.merge_entity
language: python
code:
def merge_entity(self, table_name, entity, if_match='*', timeout=None):
'''
Updates an existing entity by merging the entity's properties. Throws
if the entity does not exist.
This operation does not replace the existing entity as the update_entity
operation does. A property cannot be removed with merge_entity.
Any properties with null values are ignored. All other properties will be
updated or added.
:param str table_name:
The name of the table containing the entity to merge.
:param entity:
The entity to merge. Could be a dict or an entity object.
Must contain a PartitionKey and a RowKey.
:type entity: dict or :class:`~azure.storage.table.models.Entity`
:param str if_match:
The client may specify the ETag for the entity on the
request in order to compare to the ETag maintained by the service
for the purpose of optimistic concurrency. The merge operation
will be performed only if the ETag sent by the client matches the
value maintained by the server, indicating that the entity has
not been modified since it was retrieved by the client. To force
an unconditional merge, set If-Match to the wildcard character (*).
:param int timeout:
The server timeout, expressed in seconds.
:return: The etag of the entity.
:rtype: str
'''
_validate_not_none('table_name', table_name)
request = _merge_entity(entity, if_match, self.require_encryption,
self.key_encryption_key)
request.host_locations = self._get_host_locations()
request.query['timeout'] = _int_to_str(timeout)
request.path = _get_entity_path(table_name, entity['PartitionKey'], entity['RowKey'])
return self._perform_request(request, _extract_etag)
sha: a7b618f6bddc465c9fdf899ea2971dfe4d04fcf0
url: https://github.com/Azure/azure-cosmos-table-python/blob/a7b618f6bddc465c9fdf899ea2971dfe4d04fcf0/azure-cosmosdb-table/azure/cosmosdb/table/tableservice.py#L969-L1008
partition: train
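
A partial-update sketch with optimistic concurrency; a precondition failure is assumed to surface as azure.common's AzureHttpError:

```python
from azure.common import AzureHttpError

# Merge touches only the properties named here; others survive as-is.
etag = ts.merge_entity('mytable', {
    'PartitionKey': 'pk1', 'RowKey': 'rk1', 'age': 40})

# Conditional merge: succeeds only if the entity is unchanged since read.
try:
    ts.merge_entity('mytable',
                    {'PartitionKey': 'pk1', 'RowKey': 'rk1', 'age': 41},
                    if_match=etag)
except AzureHttpError:
    print('entity was modified since it was read')
```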

repo: Azure/azure-cosmos-table-python
path: azure-cosmosdb-table/samples/table/table_usage.py
func_name: TableSamples.create_entity_class
language: python
code:
def create_entity_class(self):
'''
Creates a class-based entity with fixed values, using all of the supported data types.
'''
entity = Entity()
# Partition key and row key must be strings and are required
entity.PartitionKey = 'pk{}'.format(str(uuid.uuid4()).replace('-', ''))
entity.RowKey = 'rk{}'.format(str(uuid.uuid4()).replace('-', ''))
# Some basic types are inferred
entity.age = 39 # EdmType.INT64
entity.large = 933311100 # EdmType.INT64
entity.sex = 'male' # EdmType.STRING
entity.married = True # EdmType.BOOLEAN
entity.ratio = 3.1 # EdmType.DOUBLE
entity.birthday = datetime(1970, 10, 4) # EdmType.DATETIME
# Binary, Int32 and GUID must be explicitly typed
entity.binary = EntityProperty(EdmType.BINARY, b'xyz')
entity.other = EntityProperty(EdmType.INT32, 20)
entity.clsid = EntityProperty(EdmType.GUID, 'c9da6455-213d-42c9-9a79-3e9149a57833')
return entity
sha: a7b618f6bddc465c9fdf899ea2971dfe4d04fcf0
url: https://github.com/Azure/azure-cosmos-table-python/blob/a7b618f6bddc465c9fdf899ea2971dfe4d04fcf0/azure-cosmosdb-table/samples/table/table_usage.py#L203-L225
partition: train
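
A round-trip sketch; samples stands in for a hypothetical TableSamples instance and ts for a TableService client:

```python
entity = samples.create_entity_class()
ts.insert_entity('mytable', entity)

# Entity supports both attribute and key access on the way back.
stored = ts.get_entity('mytable', entity.PartitionKey, entity.RowKey)
print(stored.age, stored.married)  # 39 True
```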

repo: Azure/azure-cosmos-table-python
path: azure-cosmosdb-table/samples/table/table_usage.py
func_name: TableSamples.create_entity_dict
language: python
code:
def create_entity_dict(self):
'''
Creates a dict-based entity with fixed values, using all of the supported data types.
'''
entity = {}
# Partition key and row key must be strings and are required
entity['PartitionKey'] = 'pk{}'.format(str(uuid.uuid4()).replace('-', ''))
entity['RowKey'] = 'rk{}'.format(str(uuid.uuid4()).replace('-', ''))
# Some basic types are inferred
entity['age'] = 39 # EdmType.INT64
entity['large'] = 933311100 # EdmType.INT64
entity['sex'] = 'male' # EdmType.STRING
entity['married'] = True # EdmType.BOOLEAN
entity['ratio'] = 3.1 # EdmType.DOUBLE
entity['birthday'] = datetime(1970, 10, 4) # EdmType.DATETIME
# Binary, Int32 and GUID must be explicitly typed
entity['binary'] = EntityProperty(EdmType.BINARY, b'xyz')
entity['other'] = EntityProperty(EdmType.INT32, 20)
entity['clsid'] = EntityProperty(EdmType.GUID, 'c9da6455-213d-42c9-9a79-3e9149a57833')
return entity
sha: a7b618f6bddc465c9fdf899ea2971dfe4d04fcf0
url: https://github.com/Azure/azure-cosmos-table-python/blob/a7b618f6bddc465c9fdf899ea2971dfe4d04fcf0/azure-cosmosdb-table/samples/table/table_usage.py#L227-L249
partition: train
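
The dict form is interchangeable with the class form; a brief sketch under the same assumptions:

```python
entity = samples.create_entity_dict()
ts.insert_or_replace_entity('mytable', entity)

stored = ts.get_entity('mytable', entity['PartitionKey'], entity['RowKey'])
print(stored['ratio'])  # 3.1
```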

repo: Azure/azure-cosmos-table-python
path: azure-cosmosdb-table/azure/cosmosdb/table/_serialization.py
func_name: _convert_batch_to_json
language: python
code:
def _convert_batch_to_json(batch_requests):
'''
Create the JSON batch body to send for an array of batch requests.
batch_requests:
an array of (row key, request) tuples
'''
batch_boundary = b'batch_' + _new_boundary()
changeset_boundary = b'changeset_' + _new_boundary()
body = [b'--' + batch_boundary + b'\n',
b'Content-Type: multipart/mixed; boundary=',
changeset_boundary + b'\n\n']
content_id = 1
# Adds each request body to the POST data.
for _, request in batch_requests:
body.append(b'--' + changeset_boundary + b'\n')
body.append(b'Content-Type: application/http\n')
body.append(b'Content-Transfer-Encoding: binary\n\n')
body.append(request.method.encode('utf-8'))
body.append(b' ')
body.append(request.path.encode('utf-8'))
body.append(b' HTTP/1.1\n')
body.append(b'Content-ID: ')
body.append(str(content_id).encode('utf-8') + b'\n')
content_id += 1
for name, value in request.headers.items():
if name in _SUB_HEADERS:
body.append(name.encode('utf-8') + b': ')
body.append(value.encode('utf-8') + b'\n')
# Add different headers for different request types.
if not request.method == 'DELETE':
body.append(b'Content-Length: ')
body.append(str(len(request.body)).encode('utf-8'))
body.append(b'\n\n')
body.append(request.body + b'\n')
body.append(b'\n')
body.append(b'--' + changeset_boundary + b'--' + b'\n')
body.append(b'--' + batch_boundary + b'--')
return b''.join(body), 'multipart/mixed; boundary=' + batch_boundary.decode('utf-8')
sha: a7b618f6bddc465c9fdf899ea2971dfe4d04fcf0
url: https://github.com/Azure/azure-cosmos-table-python/blob/a7b618f6bddc465c9fdf899ea2971dfe4d04fcf0/azure-cosmosdb-table/azure/cosmosdb/table/_serialization.py#L220-L266
partition: train
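
This serializer backs the SDK's batch support, so a usage sketch goes through the public batch API rather than hand-built HTTPRequest objects (assuming TableBatch is exported from azure.cosmosdb.table as in azure-storage-table):

```python
from azure.cosmosdb.table import TableBatch

batch = TableBatch()
batch.insert_entity({'PartitionKey': 'pk1', 'RowKey': 'rk1', 'age': 1})
batch.insert_entity({'PartitionKey': 'pk1', 'RowKey': 'rk2', 'age': 2})
ts.commit_batch('mytable', batch)

# Or let a context manager commit on exit.
with ts.batch('mytable') as b:
    b.insert_entity({'PartitionKey': 'pk1', 'RowKey': 'rk3', 'age': 3})
```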

repo: Azure/azure-cosmos-table-python
path: azure-cosmosdb-table/azure/cosmosdb/table/_encryption.py
func_name: _decrypt_entity
language: python
code:
def _decrypt_entity(entity, encrypted_properties_list, content_encryption_key, entityIV, isJavaV1):
'''
Decrypts the specified entity using AES256 in CBC mode with 128-bit padding. Unwraps the CEK
using either the specified KEK or the key returned by the key_resolver. Properties
specified in the encrypted_properties_list will be decrypted and decoded to utf-8 strings.
:param entity:
The entity being retrieved and decrypted. Could be a dict or an entity object.
:param list encrypted_properties_list:
The list of all the properties that are encrypted.
:param bytes[] content_encryption_key:
The key used internally to encrypt the entity. Extracted from the entity metadata.
:param bytes[] entityIV:
The initialization vector used to seed the encryption algorithm. Extracted from the
entity metadata.
:return: The decrypted entity
:rtype: Entity
'''
_validate_not_none('entity', entity)
decrypted_entity = deepcopy(entity)
try:
for property in entity.keys():
if property in encrypted_properties_list:
value = entity[property]
propertyIV = _generate_property_iv(entityIV,
entity['PartitionKey'], entity['RowKey'],
property, isJavaV1)
cipher = _generate_AES_CBC_cipher(content_encryption_key,
propertyIV)
# Decrypt the property.
decryptor = cipher.decryptor()
decrypted_data = (decryptor.update(value.value) + decryptor.finalize())
# Unpad the data.
unpadder = PKCS7(128).unpadder()
decrypted_data = (unpadder.update(decrypted_data) + unpadder.finalize())
decrypted_data = decrypted_data.decode('utf-8')
decrypted_entity[property] = decrypted_data
decrypted_entity.pop('_ClientEncryptionMetadata1')
decrypted_entity.pop('_ClientEncryptionMetadata2')
return decrypted_entity
except:
raise AzureException(_ERROR_DECRYPTION_FAILURE)
sha: a7b618f6bddc465c9fdf899ea2971dfe4d04fcf0
url: https://github.com/Azure/azure-cosmos-table-python/blob/a7b618f6bddc465c9fdf899ea2971dfe4d04fcf0/azure-cosmosdb-table/azure/cosmosdb/table/_encryption.py#L163-L212
partition: train
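
A hedged sketch of how this decryption path gets exercised: client-side encryption is enabled by attaching a key-encryption-key object to the client. KeyWrapper below is hypothetical; the four-method interface (wrap_key, unwrap_key, get_key_wrap_algorithm, get_kid) follows the Azure Storage client-side encryption convention:

```python
import os

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

class KeyWrapper:
    # Hypothetical local KEK; the SDK only needs these four methods.
    def __init__(self, kid, kek_bytes):
        self.kid = kid
        self.kek = kek_bytes

    def wrap_key(self, key, algorithm='A256KW'):
        return aes_key_wrap(self.kek, key, default_backend())

    def unwrap_key(self, key, algorithm):
        return aes_key_unwrap(self.kek, key, default_backend())

    def get_key_wrap_algorithm(self):
        return 'A256KW'

    def get_kid(self):
        return self.kid

ts.key_encryption_key = KeyWrapper('local:kek1', os.urandom(32))
ts.require_encryption = True

# Reads of encrypted entities are now decrypted transparently.
entity = ts.get_entity('mytable', 'pk1', 'rk1')
```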

repo: Azure/azure-cosmos-table-python
path: azure-cosmosdb-table/azure/cosmosdb/table/_encryption.py
func_name: _generate_property_iv
language: python
code:
def _generate_property_iv(entity_iv, pk, rk, property_name, isJavaV1):
'''
Uses the entity_iv, partition key, and row key to generate and return
the iv for the specified property.
'''
digest = Hash(SHA256(), default_backend())
if not isJavaV1:
digest.update(entity_iv +
(rk + pk + property_name).encode('utf-8'))
else:
digest.update(entity_iv +
(pk + rk + property_name).encode('utf-8'))
propertyIV = digest.finalize()
return propertyIV[:16]
sha: a7b618f6bddc465c9fdf899ea2971dfe4d04fcf0
url: https://github.com/Azure/azure-cosmos-table-python/blob/a7b618f6bddc465c9fdf899ea2971dfe4d04fcf0/azure-cosmosdb-table/azure/cosmosdb/table/_encryption.py#L287-L300
partition: train
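
For illustration, the same derivation (non-Java-V1 ordering) written against hashlib alone:

```python
import hashlib
import os

def property_iv(entity_iv, pk, rk, property_name):
    # SHA-256 over entity IV + row key + partition key + property name,
    # truncated to AES's 16-byte block size.
    data = entity_iv + (rk + pk + property_name).encode('utf-8')
    return hashlib.sha256(data).digest()[:16]

iv = property_iv(os.urandom(16), 'pk1', 'rk1', 'age')
assert len(iv) == 16
```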

repo: fuhrysteve/marshmallow-jsonschema
path: marshmallow_jsonschema/base.py
func_name: JSONSchema._get_default_mapping
language: python
code:
def _get_default_mapping(self, obj):
"""Return default mapping if there are no special needs."""
mapping = {v: k for k, v in obj.TYPE_MAPPING.items()}
mapping.update({
fields.Email: text_type,
fields.Dict: dict,
fields.Url: text_type,
fields.List: list,
fields.LocalDateTime: datetime.datetime,
fields.Nested: '_from_nested_schema',
})
return mapping
sha: 3e0891a79d586c49deb75188d9ee1728597d093b
url: https://github.com/fuhrysteve/marshmallow-jsonschema/blob/3e0891a79d586c49deb75188d9ee1728597d093b/marshmallow_jsonschema/base.py#L96-L107
partition: train
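
A usage sketch for the surrounding JSONSchema class; depending on the marshmallow version, dump() returns the dict directly or a MarshalResult whose .data attribute holds it:

```python
from marshmallow import Schema, fields
from marshmallow_jsonschema import JSONSchema

class UserSchema(Schema):
    name = fields.String(required=True)
    email = fields.Email()
    friends = fields.List(fields.String())

# Email maps to str and List to list via the mapping above; Nested
# fields route to the '_from_nested_schema' handler.
json_schema = JSONSchema().dump(UserSchema())
```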

repo: fuhrysteve/marshmallow-jsonschema
path: marshmallow_jsonschema/base.py
func_name: JSONSchema.get_properties
language: python
code:
def get_properties(self, obj):
"""Fill out properties field."""
properties = {}
for field_name, field in sorted(obj.fields.items()):
schema = self._get_schema_for_field(obj, field)
properties[field.name] = schema
return properties
sha: 3e0891a79d586c49deb75188d9ee1728597d093b
url: https://github.com/fuhrysteve/marshmallow-jsonschema/blob/3e0891a79d586c49deb75188d9ee1728597d093b/marshmallow_jsonschema/base.py#L109-L117
partition: train

repo: fuhrysteve/marshmallow-jsonschema
path: marshmallow_jsonschema/base.py
func_name: JSONSchema.get_required
language: python
code:
def get_required(self, obj):
"""Fill out required field."""
required = []
for field_name, field in sorted(obj.fields.items()):
if field.required:
required.append(field.name)
return required or missing
sha: 3e0891a79d586c49deb75188d9ee1728597d093b
url: https://github.com/fuhrysteve/marshmallow-jsonschema/blob/3e0891a79d586c49deb75188d9ee1728597d093b/marshmallow_jsonschema/base.py#L119-L127
partition: train
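
Continuing the UserSchema sketch: only name is required, and returning marshmallow's missing sentinel (rather than an empty list) keeps a bare required key out of the serialized output. Where the key lands varies by library version (newer releases nest schemas under definitions):

```python
result = JSONSchema().dump(UserSchema())
schema_dict = getattr(result, 'data', result)  # unwrap MarshalResult if present
print(schema_dict.get('required'))  # ['name']
```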

repo: fuhrysteve/marshmallow-jsonschema
path: marshmallow_jsonschema/base.py
func_name: JSONSchema._from_python_type
language: python
code:
def _from_python_type(self, obj, field, pytype):
"""Get schema definition from python type."""
json_schema = {
'title': field.attribute or field.name,
}
for key, val in TYPE_MAP[pytype].items():
json_schema[key] = val
if field.dump_only:
json_schema['readonly'] = True
if field.default is not missing:
json_schema['default'] = field.default
# NOTE: doubled up to maintain backwards compatibility
metadata = field.metadata.get('metadata', {})
metadata.update(field.metadata)
for md_key, md_val in metadata.items():
if md_key == 'metadata':
continue
json_schema[md_key] = md_val
if isinstance(field, fields.List):
json_schema['items'] = self._get_schema_for_field(
obj, field.container
)
return json_schema
3e0891a79d586c49deb75188d9ee1728597d093b
|
https://github.com/fuhrysteve/marshmallow-jsonschema/blob/3e0891a79d586c49deb75188d9ee1728597d093b/marshmallow_jsonschema/base.py#L129-L157
|
train
|
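A hedged sketch of what _from_python_type emits for a scalar field. The exact type/format keys come from TYPE_MAP, so the commented output is indicative rather than authoritative:

    from marshmallow import Schema, fields

    class ItemSchema(Schema):
        count = fields.Integer(default=1, metadata={'description': 'How many'})

    # For `count`, _from_python_type produces roughly:
    #   {'title': 'count', <TYPE_MAP entry for int>,
    #    'default': 1, 'description': 'How many'}
    # The "doubled up" metadata lookup accepts both metadata={...} and the
    # older metadata={'metadata': {...}} style, minus the 'metadata' key itself.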
fuhrysteve/marshmallow-jsonschema
|
marshmallow_jsonschema/base.py
|
JSONSchema._get_schema_for_field
|
def _get_schema_for_field(self, obj, field):
"""Get schema and validators for field."""
mapping = self._get_default_mapping(obj)
if hasattr(field, '_jsonschema_type_mapping'):
schema = field._jsonschema_type_mapping()
elif '_jsonschema_type_mapping' in field.metadata:
schema = field.metadata['_jsonschema_type_mapping']
elif field.__class__ in mapping:
pytype = mapping[field.__class__]
if isinstance(pytype, basestring):
schema = getattr(self, pytype)(obj, field)
else:
schema = self._from_python_type(
obj, field, pytype
)
else:
raise ValueError('unsupported field type %s' % field)
# Apply any and all validators that field may have
for validator in field.validators:
if validator.__class__ in FIELD_VALIDATORS:
schema = FIELD_VALIDATORS[validator.__class__](
schema, field, validator, obj
)
return schema
|
python
|
3e0891a79d586c49deb75188d9ee1728597d093b
|
https://github.com/fuhrysteve/marshmallow-jsonschema/blob/3e0891a79d586c49deb75188d9ee1728597d093b/marshmallow_jsonschema/base.py#L159-L183
|
train
|
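The dispatch order above (field hook, then field metadata, then TYPE_MAP) can be exercised with a hypothetical custom field; JSONSchema is assumed to be this package's public entry point:

    from marshmallow import Schema, fields
    from marshmallow_jsonschema import JSONSchema

    class HexColor(fields.Field):
        # Checked first, via hasattr(field, '_jsonschema_type_mapping').
        def _jsonschema_type_mapping(self):
            return {'type': 'string', 'pattern': '^#[0-9a-fA-F]{6}$'}

    class PaletteSchema(Schema):
        primary = HexColor()

    # Validators attached to the field are applied afterwards through
    # FIELD_VALIDATORS, regardless of which branch supplied the base schema.
    result = JSONSchema().dump(PaletteSchema())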
fuhrysteve/marshmallow-jsonschema
|
marshmallow_jsonschema/base.py
|
JSONSchema._from_nested_schema
|
def _from_nested_schema(self, obj, field):
"""Support nested field."""
if isinstance(field.nested, basestring):
nested = get_class(field.nested)
else:
nested = field.nested
name = nested.__name__
outer_name = obj.__class__.__name__
only = field.only
exclude = field.exclude
# If this is not a schema we've seen, and it's not this schema,
# put it in our list of schema defs
if name not in self._nested_schema_classes and name != outer_name:
wrapped_nested = self.__class__(nested=True)
wrapped_dumped = wrapped_nested.dump(
nested(only=only, exclude=exclude)
)
# Handle change in return value type between Marshmallow
# versions 2 and 3.
if marshmallow.__version__.split('.', 1)[0] >= '3':
self._nested_schema_classes[name] = wrapped_dumped
else:
self._nested_schema_classes[name] = wrapped_dumped.data
self._nested_schema_classes.update(
wrapped_nested._nested_schema_classes
)
# and the schema is just a reference to the def
schema = {
'type': 'object',
'$ref': '#/definitions/{}'.format(name)
}
# NOTE: doubled up to maintain backwards compatibility
metadata = field.metadata.get('metadata', {})
metadata.update(field.metadata)
for md_key, md_val in metadata.items():
if md_key == 'metadata':
continue
schema[md_key] = md_val
if field.many:
schema = {
'type': ["array"] if field.required else ['array', 'null'],
'items': schema,
}
return schema
|
python
|
3e0891a79d586c49deb75188d9ee1728597d093b
|
https://github.com/fuhrysteve/marshmallow-jsonschema/blob/3e0891a79d586c49deb75188d9ee1728597d093b/marshmallow_jsonschema/base.py#L185-L236
|
train
|
fuhrysteve/marshmallow-jsonschema
|
marshmallow_jsonschema/base.py
|
JSONSchema.wrap
|
def wrap(self, data):
"""Wrap this with the root schema definitions."""
if self.nested: # no need to wrap, will be in outer defs
return data
name = self.obj.__class__.__name__
self._nested_schema_classes[name] = data
root = {
'definitions': self._nested_schema_classes,
'$ref': '#/definitions/{name}'.format(name=name)
}
return root
|
python
|
3e0891a79d586c49deb75188d9ee1728597d093b
|
https://github.com/fuhrysteve/marshmallow-jsonschema/blob/3e0891a79d586c49deb75188d9ee1728597d093b/marshmallow_jsonschema/base.py#L244-L255
|
train
|
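Taken together, _from_nested_schema and wrap yield a root document of the shape below (schemas hypothetical; whether .dump() returns the dict directly or a .data wrapper follows the Marshmallow 2/3 handling noted in the code):

    from marshmallow import Schema, fields
    from marshmallow_jsonschema import JSONSchema

    class AddressSchema(Schema):
        city = fields.String()

    class PersonSchema(Schema):
        address = fields.Nested(AddressSchema)

    dumped = JSONSchema().dump(PersonSchema())
    # Roughly:
    # {'definitions': {'AddressSchema': {...}, 'PersonSchema': {...}},
    #  '$ref': '#/definitions/PersonSchema'}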
fuhrysteve/marshmallow-jsonschema
|
marshmallow_jsonschema/validation.py
|
handle_length
|
def handle_length(schema, field, validator, parent_schema):
"""Adds validation logic for ``marshmallow.validate.Length``, setting the
values appropriately for ``fields.List``, ``fields.Nested``, and
``fields.String``.
Args:
schema (dict): The original JSON schema we generated. This is what we
want to post-process.
field (fields.Field): The field that generated the original schema and
who this post-processor belongs to.
validator (marshmallow.validate.Length): The validator attached to the
passed in field.
parent_schema (marshmallow.Schema): The Schema instance that the field
belongs to.
Returns:
dict: A, possibly, new JSON Schema that has been post processed and
altered.
Raises:
ValueError: Raised if the `field` is something other than
`fields.List`, `fields.Nested`, or `fields.String`
"""
if isinstance(field, fields.String):
minKey = 'minLength'
maxKey = 'maxLength'
elif isinstance(field, (fields.List, fields.Nested)):
minKey = 'minItems'
maxKey = 'maxItems'
else:
raise ValueError("In order to set the Length validator for JSON "
"schema, the field must be either a List or a String")
if validator.min:
schema[minKey] = validator.min
if validator.max:
schema[maxKey] = validator.max
if validator.equal:
schema[minKey] = validator.equal
schema[maxKey] = validator.equal
return schema
|
python
|
3e0891a79d586c49deb75188d9ee1728597d093b
|
https://github.com/fuhrysteve/marshmallow-jsonschema/blob/3e0891a79d586c49deb75188d9ee1728597d093b/marshmallow_jsonschema/validation.py#L4-L47
|
train
|
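A short sketch of what handle_length contributes, following the branches above (minLength/maxLength for strings, minItems/maxItems for lists and nested fields):

    from marshmallow import Schema, fields, validate

    class NoteSchema(Schema):
        title = fields.String(validate=validate.Length(min=1, max=80))
        tags = fields.List(fields.String(), validate=validate.Length(max=5))

    # title -> {..., 'minLength': 1, 'maxLength': 80}
    # tags  -> {..., 'maxItems': 5}
    # validate.Length(equal=n) pins both bounds to n.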
fuhrysteve/marshmallow-jsonschema
|
marshmallow_jsonschema/validation.py
|
handle_one_of
|
def handle_one_of(schema, field, validator, parent_schema):
"""Adds the validation logic for ``marshmallow.validate.OneOf`` by setting
the JSONSchema `enum` property to the allowed choices in the validator.
Args:
schema (dict): The original JSON schema we generated. This is what we
want to post-process.
field (fields.Field): The field that generated the original schema and
who this post-processor belongs to.
validator (marshmallow.validate.OneOf): The validator attached to the
passed in field.
parent_schema (marshmallow.Schema): The Schema instance that the field
belongs to.
Returns:
dict: A, possibly, new JSON Schema that has been post processed and
altered.
"""
if validator.choices:
schema['enum'] = list(validator.choices)
schema['enumNames'] = list(validator.labels)
return schema
|
python
|
3e0891a79d586c49deb75188d9ee1728597d093b
|
https://github.com/fuhrysteve/marshmallow-jsonschema/blob/3e0891a79d586c49deb75188d9ee1728597d093b/marshmallow_jsonschema/validation.py#L50-L72
|
train
|
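handle_one_of copies the validator's choices and labels straight onto the schema; a brief sketch:

    from marshmallow import Schema, fields, validate

    class TicketSchema(Schema):
        status = fields.String(validate=validate.OneOf(
            ['open', 'closed'], labels=['Open', 'Closed']))

    # status -> {..., 'enum': ['open', 'closed'],
    #            'enumNames': ['Open', 'Closed']}
    # Note that 'enumNames' is a common UI-schema extension, not core
    # JSON Schema vocabulary.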
fuhrysteve/marshmallow-jsonschema
|
marshmallow_jsonschema/validation.py
|
handle_range
|
def handle_range(schema, field, validator, parent_schema):
"""Adds validation logic for ``marshmallow.validate.Range``, setting the
values appropriately ``fields.Number`` and it's subclasses.
Args:
schema (dict): The original JSON schema we generated. This is what we
want to post-process.
field (fields.Field): The field that generated the original schema and
who this post-processor belongs to.
    validator (marshmallow.validate.Range): The validator attached to the
passed in field.
parent_schema (marshmallow.Schema): The Schema instance that the field
belongs to.
Returns:
dict: A, possibly, new JSON Schema that has been post processed and
altered.
"""
if not isinstance(field, fields.Number):
return schema
if validator.min:
schema['minimum'] = validator.min
schema['exclusiveMinimum'] = True
else:
schema['minimum'] = 0
schema['exclusiveMinimum'] = False
if validator.max:
schema['maximum'] = validator.max
schema['exclusiveMaximum'] = True
return schema
|
python
|
3e0891a79d586c49deb75188d9ee1728597d093b
|
https://github.com/fuhrysteve/marshmallow-jsonschema/blob/3e0891a79d586c49deb75188d9ee1728597d093b/marshmallow_jsonschema/validation.py#L75-L107
|
train
|
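handle_range only touches Number fields; per the branches above, a supplied bound is marked exclusive and a missing minimum defaults to an inclusive 0:

    from marshmallow import Schema, fields, validate

    class GaugeSchema(Schema):
        percent = fields.Float(validate=validate.Range(min=0.5, max=99.5))

    # percent -> {..., 'minimum': 0.5, 'exclusiveMinimum': True,
    #             'maximum': 99.5, 'exclusiveMaximum': True}
    # With no min set, the handler writes minimum=0, exclusiveMinimum=False.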
mmp2/megaman
|
megaman/utils/eigendecomp.py
|
check_eigen_solver
|
def check_eigen_solver(eigen_solver, solver_kwds, size=None, nvec=None):
"""Check that the selected eigensolver is valid
Parameters
----------
eigen_solver : string
string value to validate
size, nvec : int (optional)
if both provided, use the specified problem size and number of vectors
to determine the optimal method to use with eigen_solver='auto'
Returns
-------
eigen_solver : string
The eigen solver. This only differs from the input if
eigen_solver == 'auto' and `size` is specified.
"""
if eigen_solver in BAD_EIGEN_SOLVERS:
raise ValueError(BAD_EIGEN_SOLVERS[eigen_solver])
elif eigen_solver not in EIGEN_SOLVERS:
raise ValueError("Unrecognized eigen_solver: '{0}'."
"Should be one of: {1}".format(eigen_solver,
EIGEN_SOLVERS))
if size is not None and nvec is not None:
# do some checks of the eigensolver
if eigen_solver == 'lobpcg' and size < 5 * nvec + 1:
warnings.warn("lobpcg does not perform well with small matrices or "
"with large numbers of vectors. Switching to 'dense'")
eigen_solver = 'dense'
solver_kwds = None
elif eigen_solver == 'auto':
if size > 200 and nvec < 10:
if PYAMG_LOADED:
eigen_solver = 'amg'
solver_kwds = None
else:
eigen_solver = 'arpack'
solver_kwds = None
else:
eigen_solver = 'dense'
solver_kwds = None
return eigen_solver, solver_kwds
|
python
|
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/utils/eigendecomp.py#L28-L72
|
train
|
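A usage sketch of the selection logic above; the inputs are chosen to hit the size/nvec heuristics, and the 'auto' result depends on whether pyamg imported:

    from megaman.utils.eigendecomp import check_eigen_solver

    # Large problem, few vectors: 'auto' resolves to 'amg' when pyamg is
    # available, otherwise 'arpack'; solver_kwds are reset to None either way.
    solver, kwds = check_eigen_solver('auto', {'tol': 1e-8},
                                      size=1000, nvec=3)

    # A lobpcg request with size < 5 * nvec + 1 is demoted to 'dense'
    # with a warning.
    solver, kwds = check_eigen_solver('lobpcg', None, size=10, nvec=4)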
mmp2/megaman
|
megaman/relaxation/precomputed.py
|
precompute_optimzation_Y
|
def precompute_optimzation_Y(laplacian_matrix, n_samples, relaxation_kwds):
"""compute Lk, neighbors and subset to index map for projected == False"""
relaxation_kwds.setdefault('presave',False)
relaxation_kwds.setdefault('presave_name','pre_comp_current.npy')
relaxation_kwds.setdefault('verbose',False)
if relaxation_kwds['verbose']:
print ('Making Lk and nbhds')
Lk_tensor, nbk, si_map = \
compute_Lk(laplacian_matrix, n_samples, relaxation_kwds['subset'])
if relaxation_kwds['presave']:
raise NotImplementedError('Not yet implemented presave')
return { 'Lk': Lk_tensor, 'nbk': nbk, 'si_map': si_map }
|
python
|
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/relaxation/precomputed.py#L8-L19
|
train
|
mmp2/megaman
|
megaman/relaxation/precomputed.py
|
compute_Lk
|
def compute_Lk(laplacian_matrix,n_samples,subset):
"""
Compute sparse L matrix, neighbors and subset to L matrix index map.
Returns
-------
Lk_tensor : array-like. Length = n
        each component corresponds to the sparse matrix of Lk, which is
generated by extracting the kth row of laplacian and removing zeros.
nbk : array-like. Length = n
        each component corresponds to the neighbor index of point k, which is
used in slicing the gradient, Y or S arrays.
si_map : dictionary.
subset index to Lk_tensor (or nbk) index mapping.
"""
Lk_tensor = []
nbk = []
row,column = laplacian_matrix.T.nonzero()
nnz_val = np.squeeze(np.asarray(laplacian_matrix.T[(row,column)]))
sorted_col_args = np.argsort(column)
sorted_col_vals = column[sorted_col_args]
breaks_row = np.diff(row).nonzero()[0]
breaks_col = np.diff(sorted_col_vals).nonzero()[0]
si_map = {}
for idx,k in enumerate(subset):
if k == 0:
nbk.append( column[:breaks_row[k]+1].T )
lk = nnz_val[np.sort(sorted_col_args[:breaks_col[k]+1])]
elif k == n_samples-1:
nbk.append( column[breaks_row[k-1]+1:].T )
lk = nnz_val[np.sort(sorted_col_args[breaks_col[k-1]+1:])]
else:
nbk.append( column[breaks_row[k-1]+1:breaks_row[k]+1].T )
lk = nnz_val[np.sort(
sorted_col_args[breaks_col[k-1]+1:breaks_col[k]+1])]
npair = nbk[idx].shape[0]
rk = (nbk[idx] == k).nonzero()[0]
Lk = sp.sparse.lil_matrix((npair,npair))
Lk.setdiag(lk)
Lk[:,rk] = -(lk.reshape(-1,1))
Lk[rk,:] = -(lk.reshape(1,-1))
Lk_tensor.append(sp.sparse.csr_matrix(Lk))
si_map[k] = idx
assert len(Lk_tensor) == subset.shape[0], \
'Size of Lk_tensor should be the same as subset.'
return Lk_tensor, nbk, si_map
|
python
|
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/relaxation/precomputed.py#L21-L71
|
train
|
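The indexing trick compute_Lk leans on, shown in isolation: np.diff on the sorted row (or column) indices marks where the owning point changes, so each point's nonzeros occupy a contiguous slice of the flat arrays. A standalone sketch:

    import numpy as np
    import scipy.sparse as sparse

    L = sparse.csr_matrix(np.array([[ 2., -1., -1.],
                                    [-1.,  2., -1.],
                                    [-1., -1.,  2.]]))
    row, col = L.T.nonzero()               # nonzeros in row-major order
    breaks = np.diff(row).nonzero()[0]     # last index of each row's run
    # Neighbors of point k = 1 sit in the slice between consecutive breaks:
    nbrs_of_1 = col[breaks[0] + 1:breaks[1] + 1]   # array([0, 1, 2])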
mmp2/megaman
|
megaman/relaxation/precomputed.py
|
precompute_optimzation_S
|
def precompute_optimzation_S(laplacian_matrix,n_samples,relaxation_kwds):
"""compute Rk, A, ATAinv, neighbors and pairs for projected mode"""
relaxation_kwds.setdefault('presave',False)
relaxation_kwds.setdefault('presave_name','pre_comp_current.npy')
relaxation_kwds.setdefault('verbose',False)
if relaxation_kwds['verbose']:
print ('Pre-computing quantities Y to S conversions')
print ('Making A and Pairs')
A, pairs = makeA(laplacian_matrix)
if relaxation_kwds['verbose']:
print ('Making Rk and nbhds')
Rk_tensor, nbk = compute_Rk(laplacian_matrix,A,n_samples)
# TODO: not quite sure what is ATAinv? why we need this?
ATAinv = np.linalg.pinv(A.T.dot(A).todense())
if relaxation_kwds['verbose']:
print ('Finish calculating pseudo inverse')
if relaxation_kwds['presave']:
raise NotImplementedError('Not yet implemented presave')
return { 'RK': Rk_tensor, 'nbk': nbk,
'ATAinv': ATAinv, 'pairs': pairs, 'A': A }
|
python
|
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/relaxation/precomputed.py#L73-L92
|
train
|
mmp2/megaman
|
megaman/relaxation/precomputed.py
|
compute_Rk
|
def compute_Rk(L,A,n_samples):
# TODO: need to inspect more into compute Rk.
"""
Compute sparse L matrix and neighbors.
Returns
-------
Rk_tensor : array-like. Length = n
        each component corresponds to the sparse matrix of Lk, which is
generated by extracting the kth row of laplacian and removing zeros.
nbk : array-like. Length = n
        each component corresponds to the neighbor index of point k, which is
used in slicing the gradient, Y or S arrays.
"""
laplacian_matrix = L.copy()
laplacian_matrix.setdiag(0)
laplacian_matrix.eliminate_zeros()
n = n_samples
Rk_tensor = []
nbk = []
row_A,column_A = A.T.nonzero()
row,column = laplacian_matrix.nonzero()
nnz_val = np.squeeze(np.asarray(laplacian_matrix.T[(row,column)]))
sorted_col_args = np.argsort(column)
sorted_col_vals = column[sorted_col_args]
breaks_row_A = np.diff(row_A).nonzero()[0]
breaks_col = np.diff(sorted_col_vals).nonzero()[0]
for k in range(n_samples):
if k == 0:
nbk.append( column_A[:breaks_row_A[k]+1].T )
Rk_tensor.append(
nnz_val[np.sort(sorted_col_args[:breaks_col[k]+1])])
elif k == n_samples-1:
nbk.append( column_A[breaks_row_A[k-1]+1:].T )
Rk_tensor.append(
nnz_val[np.sort(sorted_col_args[breaks_col[k-1]+1:])])
else:
nbk.append( column_A[breaks_row_A[k-1]+1:breaks_row_A[k]+1].T )
Rk_tensor.append(nnz_val[np.sort(
sorted_col_args[breaks_col[k-1]+1:breaks_col[k]+1])])
return Rk_tensor, nbk
|
python
|
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/relaxation/precomputed.py#L116-L161
|
train
|
mmp2/megaman
|
doc/sphinxext/numpy_ext/automodapi.py
|
_mod_info
|
def _mod_info(modname, toskip=[], onlylocals=True):
"""
    Determines whether a module name refers to a plain module or a package,
    and whether or not it contains classes or functions.
"""
hascls = hasfunc = False
for localnm, fqnm, obj in zip(*find_mod_objs(modname, onlylocals=onlylocals)):
if localnm not in toskip:
hascls = hascls or inspect.isclass(obj)
hasfunc = hasfunc or inspect.isroutine(obj)
if hascls and hasfunc:
break
# find_mod_objs has already imported modname
# TODO: There is probably a cleaner way to do this, though this is pretty
# reliable for all Python versions for most cases that we care about.
pkg = sys.modules[modname]
ispkg = (hasattr(pkg, '__file__') and isinstance(pkg.__file__, str) and
os.path.split(pkg.__file__)[1].startswith('__init__.py'))
return ispkg, hascls, hasfunc
|
python
|
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/doc/sphinxext/numpy_ext/automodapi.py#L328-L350
|
train
|
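The package test at the end of _mod_info, demonstrated standalone (json ships as a package, so its __file__ basename is __init__.py):

    import os
    import sys
    import json  # ensure the module is present in sys.modules

    pkg = sys.modules['json']
    ispkg = (hasattr(pkg, '__file__') and isinstance(pkg.__file__, str) and
             os.path.split(pkg.__file__)[1].startswith('__init__.py'))
    # True for packages like json; False for single-file modules.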
mmp2/megaman
|
megaman/geometry/affinity.py
|
compute_affinity_matrix
|
def compute_affinity_matrix(adjacency_matrix, method='auto', **kwargs):
"""Compute the affinity matrix with the given method"""
if method == 'auto':
method = 'gaussian'
return Affinity.init(method, **kwargs).affinity_matrix(adjacency_matrix)
|
python
|
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/geometry/affinity.py#L11-L15
|
train
|
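The 'gaussian' method this dispatches to is the usual heat kernel; a minimal standalone sketch (the radius name is an assumption here, and megaman's Affinity classes handle sparsity and bookkeeping that this omits):

    import numpy as np

    def gaussian_affinity(dist, radius=1.0):
        # Heat-kernel weights: W_ij = exp(-d_ij**2 / radius**2)
        return np.exp(-(np.asarray(dist) ** 2) / radius ** 2)

    dist = np.array([[0.0, 1.5], [1.5, 0.0]])
    W = gaussian_affinity(dist, radius=0.5)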
mmp2/megaman
|
megaman/embedding/locally_linear.py
|
barycenter_graph
|
def barycenter_graph(distance_matrix, X, reg=1e-3):
"""
Computes the barycenter weighted graph for points in X
Parameters
----------
distance_matrix: sparse Ndarray, (N_obs, N_obs) pairwise distance matrix.
X : Ndarray (N_obs, N_dim) observed data matrix.
reg : float, optional
Amount of regularization when solving the least-squares
problem. Only relevant if mode='barycenter'. If None, use the
default.
Returns
-------
W : sparse matrix in CSR format, shape = [n_samples, n_samples]
W[i, j] is assigned the weight of edge that connects i to j.
"""
(N, d_in) = X.shape
(rows, cols) = distance_matrix.nonzero()
W = sparse.lil_matrix((N, N)) # best for W[i, nbrs_i] = w/np.sum(w)
for i in range(N):
nbrs_i = cols[rows == i]
n_neighbors_i = len(nbrs_i)
v = np.ones(n_neighbors_i, dtype=X.dtype)
C = X[nbrs_i] - X[i]
G = np.dot(C, C.T)
trace = np.trace(G)
if trace > 0:
R = reg * trace
else:
R = reg
G.flat[::n_neighbors_i + 1] += R
w = solve(G, v, sym_pos = True)
W[i, nbrs_i] = w / np.sum(w)
return W
|
python
|
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/embedding/locally_linear.py#L22-L57
|
train
|
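The per-point solve inside the loop above, in isolation: form the local Gram matrix from centered neighbors, add reg times its trace to the diagonal, solve G w = 1, and normalize so the weights sum to one. A standalone numpy sketch with hypothetical neighbors:

    import numpy as np

    X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    i, nbrs_i, reg = 0, np.array([1, 2, 3]), 1e-3

    C = X[nbrs_i] - X[i]                 # neighbors centered on point i
    G = C @ C.T                          # local Gram matrix (singular here)
    G.flat[::len(nbrs_i) + 1] += reg * np.trace(G)   # ridge regularization
    w = np.linalg.solve(G, np.ones(len(nbrs_i)))
    w /= w.sum()                         # row i of the barycenter graph W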
mmp2/megaman
|
megaman/embedding/locally_linear.py
|
locally_linear_embedding
|
def locally_linear_embedding(geom, n_components, reg=1e-3,
eigen_solver='auto', random_state=None,
solver_kwds=None):
"""
Perform a Locally Linear Embedding analysis on the data.
Parameters
----------
geom : a Geometry object from megaman.geometry.geometry
n_components : integer
number of coordinates for the manifold.
reg : float
regularization constant, multiplies the trace of the local covariance
matrix of the distances.
eigen_solver : {'auto', 'dense', 'arpack', 'lobpcg', or 'amg'}
'auto' :
algorithm will attempt to choose the best method for input data
'dense' :
use standard dense matrix operations for the eigenvalue decomposition.
For this method, M must be an array or matrix type. This method should be avoided for large problems.
'arpack' :
        use Arnoldi iteration in shift-invert mode. For this method,
M may be a dense matrix, sparse matrix, or general linear operator.
Warning: ARPACK can be unstable for some problems. It is best to
try several random seeds in order to check results.
'lobpcg' :
Locally Optimal Block Preconditioned Conjugate Gradient Method.
A preconditioned eigensolver for large symmetric positive definite
(SPD) generalized eigenproblems.
'amg' :
AMG requires pyamg to be installed. It can be faster on very large,
sparse problems, but may also lead to instabilities.
random_state : numpy.RandomState or int, optional
The generator or seed used to determine the starting vector for arpack
iterations. Defaults to numpy.random.
solver_kwds : any additional keyword arguments to pass to the selected eigen_solver
Returns
-------
Y : array-like, shape [n_samples, n_components]
Embedding vectors.
squared_error : float
Reconstruction error for the embedding vectors. Equivalent to
``norm(Y - W Y, 'fro')**2``, where W are the reconstruction weights.
References
----------
.. [1] Roweis, S. & Saul, L. Nonlinear dimensionality reduction
by locally linear embedding. Science 290:2323 (2000).
"""
if geom.X is None:
raise ValueError("Must pass data matrix X to Geometry")
if geom.adjacency_matrix is None:
geom.compute_adjacency_matrix()
W = barycenter_graph(geom.adjacency_matrix, geom.X, reg=reg)
# we'll compute M = (I-W)'(I-W)
# depending on the solver, we'll do this differently
eigen_solver, solver_kwds = check_eigen_solver(eigen_solver, solver_kwds,
size=W.shape[0],
nvec=n_components + 1)
if eigen_solver != 'dense':
M = eye(*W.shape, format=W.format) - W
M = (M.T * M).tocsr()
else:
M = (W.T * W - W.T - W).toarray()
        M.flat[::M.shape[0] + 1] += 1  # M = (W - I)' (W - I): add I back onto the diagonal
return null_space(M, n_components, k_skip=1, eigen_solver=eigen_solver,
random_state=random_state)
|
python
|
def locally_linear_embedding(geom, n_components, reg=1e-3,
eigen_solver='auto', random_state=None,
solver_kwds=None):
"""
Perform a Locally Linear Embedding analysis on the data.
Parameters
----------
geom : a Geometry object from megaman.geometry.geometry
n_components : integer
number of coordinates for the manifold.
reg : float
regularization constant, multiplies the trace of the local covariance
matrix of the distances.
eigen_solver : {'auto', 'dense', 'arpack', 'lobpcg', or 'amg'}
'auto' :
algorithm will attempt to choose the best method for input data
'dense' :
use standard dense matrix operations for the eigenvalue decomposition.
For this method, M must be an array or matrix type. This method should be avoided for large problems.
'arpack' :
use arnoldi iteration in shift-invert mode. For this method,
M may be a dense matrix, sparse matrix, or general linear operator.
Warning: ARPACK can be unstable for some problems. It is best to
try several random seeds in order to check results.
'lobpcg' :
Locally Optimal Block Preconditioned Conjugate Gradient Method.
A preconditioned eigensolver for large symmetric positive definite
(SPD) generalized eigenproblems.
'amg' :
AMG requires pyamg to be installed. It can be faster on very large,
sparse problems, but may also lead to instabilities.
random_state : numpy.RandomState or int, optional
The generator or seed used to determine the starting vector for arpack
iterations. Defaults to numpy.random.
solver_kwds : any additional keyword arguments to pass to the selected eigen_solver
Returns
-------
Y : array-like, shape [n_samples, n_components]
Embedding vectors.
squared_error : float
Reconstruction error for the embedding vectors. Equivalent to
``norm(Y - W Y, 'fro')**2``, where W are the reconstruction weights.
References
----------
.. [1] Roweis, S. & Saul, L. Nonlinear dimensionality reduction
by locally linear embedding. Science 290:2323 (2000).
"""
if geom.X is None:
raise ValueError("Must pass data matrix X to Geometry")
if geom.adjacency_matrix is None:
geom.compute_adjacency_matrix()
W = barycenter_graph(geom.adjacency_matrix, geom.X, reg=reg)
# we'll compute M = (I-W)'(I-W)
# depending on the solver, we'll do this differently
eigen_solver, solver_kwds = check_eigen_solver(eigen_solver, solver_kwds,
size=W.shape[0],
nvec=n_components + 1)
if eigen_solver != 'dense':
M = eye(*W.shape, format=W.format) - W
M = (M.T * M).tocsr()
else:
M = (W.T * W - W.T - W).toarray()
        M.flat[::M.shape[0] + 1] += 1  # M = (W - I)' (W - I): add I back onto the diagonal
return null_space(M, n_components, k_skip=1, eigen_solver=eigen_solver,
random_state=random_state)
|
[
"def",
"locally_linear_embedding",
"(",
"geom",
",",
"n_components",
",",
"reg",
"=",
"1e-3",
",",
"eigen_solver",
"=",
"'auto'",
",",
"random_state",
"=",
"None",
",",
"solver_kwds",
"=",
"None",
")",
":",
"if",
"geom",
".",
"X",
"is",
"None",
":",
"raise",
"ValueError",
"(",
"\"Must pass data matrix X to Geometry\"",
")",
"if",
"geom",
".",
"adjacency_matrix",
"is",
"None",
":",
"geom",
".",
"compute_adjacency_matrix",
"(",
")",
"W",
"=",
"barycenter_graph",
"(",
"geom",
".",
"adjacency_matrix",
",",
"geom",
".",
"X",
",",
"reg",
"=",
"reg",
")",
"# we'll compute M = (I-W)'(I-W)",
"# depending on the solver, we'll do this differently",
"eigen_solver",
",",
"solver_kwds",
"=",
"check_eigen_solver",
"(",
"eigen_solver",
",",
"solver_kwds",
",",
"size",
"=",
"W",
".",
"shape",
"[",
"0",
"]",
",",
"nvec",
"=",
"n_components",
"+",
"1",
")",
"if",
"eigen_solver",
"!=",
"'dense'",
":",
"M",
"=",
"eye",
"(",
"*",
"W",
".",
"shape",
",",
"format",
"=",
"W",
".",
"format",
")",
"-",
"W",
"M",
"=",
"(",
"M",
".",
"T",
"*",
"M",
")",
".",
"tocsr",
"(",
")",
"else",
":",
"M",
"=",
"(",
"W",
".",
"T",
"*",
"W",
"-",
"W",
".",
"T",
"-",
"W",
")",
".",
"toarray",
"(",
")",
"M",
".",
"flat",
"[",
":",
":",
"M",
".",
"shape",
"[",
"0",
"]",
"+",
"1",
"]",
"+=",
"1",
"# W = W - I = W - I",
"return",
"null_space",
"(",
"M",
",",
"n_components",
",",
"k_skip",
"=",
"1",
",",
"eigen_solver",
"=",
"eigen_solver",
",",
"random_state",
"=",
"random_state",
")"
] |
Perform a Locally Linear Embedding analysis on the data.
Parameters
----------
geom : a Geometry object from megaman.geometry.geometry
n_components : integer
number of coordinates for the manifold.
reg : float
regularization constant, multiplies the trace of the local covariance
matrix of the distances.
eigen_solver : {'auto', 'dense', 'arpack', 'lobpcg', or 'amg'}
'auto' :
algorithm will attempt to choose the best method for input data
'dense' :
use standard dense matrix operations for the eigenvalue decomposition.
For this method, M must be an array or matrix type. This method should be avoided for large problems.
'arpack' :
use arnoldi iteration in shift-invert mode. For this method,
M may be a dense matrix, sparse matrix, or general linear operator.
Warning: ARPACK can be unstable for some problems. It is best to
try several random seeds in order to check results.
'lobpcg' :
Locally Optimal Block Preconditioned Conjugate Gradient Method.
A preconditioned eigensolver for large symmetric positive definite
(SPD) generalized eigenproblems.
'amg' :
AMG requires pyamg to be installed. It can be faster on very large,
sparse problems, but may also lead to instabilities.
random_state : numpy.RandomState or int, optional
The generator or seed used to determine the starting vector for arpack
iterations. Defaults to numpy.random.
solver_kwds : any additional keyword arguments to pass to the selected eigen_solver
Returns
-------
Y : array-like, shape [n_samples, n_components]
Embedding vectors.
squared_error : float
Reconstruction error for the embedding vectors. Equivalent to
``norm(Y - W Y, 'fro')**2``, where W are the reconstruction weights.
References
----------
.. [1] Roweis, S. & Saul, L. Nonlinear dimensionality reduction
by locally linear embedding. Science 290:2323 (2000).
|
[
"Perform",
"a",
"Locally",
"Linear",
"Embedding",
"analysis",
"on",
"the",
"data",
"."
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/embedding/locally_linear.py#L60-L128
|
train
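The eigen step in locally_linear_embedding reduces to the null space of M = (I - W)' (I - W). A self-contained sketch of the dense branch, assuming a toy row-stochastic W in place of the real barycenter weights and a dense eigh in place of null_space:

import numpy as np

# Toy row-stochastic weight matrix W, standing in for barycenter_graph output.
rng = np.random.default_rng(0)
W = rng.random((30, 30))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)          # rows sum to 1

# Dense branch above: M = (W - I)^T (W - I) = W^T W - W^T - W + I
M = W.T @ W - W.T - W
M.flat[::M.shape[0] + 1] += 1              # add the identity on the diagonal

# null_space(M, n_components, k_skip=1) keeps the eigenvectors of the
# smallest eigenvalues while skipping the first; done densely here:
vals, vecs = np.linalg.eigh(M)
Y = vecs[:, 1:3]                           # a 2-component embedding
print(Y.shape, vals[:3])                   # vals[0] ~ 0 for the constant vector

Because W is row-stochastic, the constant vector is an exact null vector of M, which is why null_space is called with k_skip=1.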
|
mmp2/megaman
|
megaman/utils/validation.py
|
_num_samples
|
def _num_samples(x):
"""Return number of samples in array-like x."""
if hasattr(x, 'fit'):
# Don't get num_samples from an ensembles length!
raise TypeError('Expected sequence or array-like, got '
'estimator %s' % x)
if not hasattr(x, '__len__') and not hasattr(x, 'shape'):
if hasattr(x, '__array__'):
x = np.asarray(x)
else:
raise TypeError("Expected sequence or array-like, got %s" %
type(x))
if hasattr(x, 'shape'):
if len(x.shape) == 0:
raise TypeError("Singleton array %r cannot be considered"
" a valid collection." % x)
return x.shape[0]
else:
return len(x)
|
python
|
def _num_samples(x):
"""Return number of samples in array-like x."""
if hasattr(x, 'fit'):
# Don't get num_samples from an ensembles length!
raise TypeError('Expected sequence or array-like, got '
'estimator %s' % x)
if not hasattr(x, '__len__') and not hasattr(x, 'shape'):
if hasattr(x, '__array__'):
x = np.asarray(x)
else:
raise TypeError("Expected sequence or array-like, got %s" %
type(x))
if hasattr(x, 'shape'):
if len(x.shape) == 0:
raise TypeError("Singleton array %r cannot be considered"
" a valid collection." % x)
return x.shape[0]
else:
return len(x)
|
[
"def",
"_num_samples",
"(",
"x",
")",
":",
"if",
"hasattr",
"(",
"x",
",",
"'fit'",
")",
":",
"# Don't get num_samples from an ensembles length!",
"raise",
"TypeError",
"(",
"'Expected sequence or array-like, got '",
"'estimator %s'",
"%",
"x",
")",
"if",
"not",
"hasattr",
"(",
"x",
",",
"'__len__'",
")",
"and",
"not",
"hasattr",
"(",
"x",
",",
"'shape'",
")",
":",
"if",
"hasattr",
"(",
"x",
",",
"'__array__'",
")",
":",
"x",
"=",
"np",
".",
"asarray",
"(",
"x",
")",
"else",
":",
"raise",
"TypeError",
"(",
"\"Expected sequence or array-like, got %s\"",
"%",
"type",
"(",
"x",
")",
")",
"if",
"hasattr",
"(",
"x",
",",
"'shape'",
")",
":",
"if",
"len",
"(",
"x",
".",
"shape",
")",
"==",
"0",
":",
"raise",
"TypeError",
"(",
"\"Singleton array %r cannot be considered\"",
"\" a valid collection.\"",
"%",
"x",
")",
"return",
"x",
".",
"shape",
"[",
"0",
"]",
"else",
":",
"return",
"len",
"(",
"x",
")"
] |
Return number of samples in array-like x.
|
[
"Return",
"number",
"of",
"samples",
"in",
"array",
"-",
"like",
"x",
"."
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/utils/validation.py#L68-L86
|
train
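A short usage sketch, assuming megaman is installed so the private helper can be imported from the module path given in this record:

import numpy as np
from megaman.utils.validation import _num_samples  # module path from this record

print(_num_samples([0, 1, 2]))         # 3: falls back to len()
print(_num_samples(np.zeros((5, 2))))  # 5: uses shape[0]
try:
    _num_samples(np.array(1.0))        # 0-d array: not a valid collection
except TypeError as err:
    print(err)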
|
mmp2/megaman
|
megaman/utils/spectral_clustering.py
|
spectral_clustering
|
def spectral_clustering(geom, K, eigen_solver = 'dense', random_state = None, solver_kwds = None,
renormalize = True, stabalize = True, additional_vectors = 0):
"""
    Spectral clustering to find K clusters using the eigenvectors of a
matrix which is derived from a set of similarities S.
Parameters
-----------
S: array-like,shape(n_sample,n_sample)
similarity matrix
K: integer
number of K clusters
eigen_solver : {'auto', 'dense', 'arpack', 'lobpcg', or 'amg'}
'auto' :
algorithm will attempt to choose the best method for input data
'dense' :
use standard dense matrix operations for the eigenvalue decomposition.
For this method, M must be an array or matrix type. This method should be avoided for large problems.
'arpack' :
use arnoldi iteration in shift-invert mode. For this method,
M may be a dense matrix, sparse matrix, or general linear operator.
Warning: ARPACK can be unstable for some problems. It is best to
try several random seeds in order to check results.
'lobpcg' :
Locally Optimal Block Preconditioned Conjugate Gradient Method.
A preconditioned eigensolver for large symmetric positive definite
(SPD) generalized eigenproblems.
'amg' :
AMG requires pyamg to be installed. It can be faster on very large,
sparse problems, but may also lead to instabilities.
random_state : numpy.RandomState or int, optional
The generator or seed used to determine the starting vector for arpack
iterations. Defaults to numpy.random.RandomState
solver_kwds : any additional keyword arguments to pass to the selected eigen_solver
renormalize : (bool) whether or not to set the rows of the eigenvectors to have norm 1
this can improve label quality
stabalize : (bool) whether or not to compute the (more stable) eigenvectors of L = D^-1/2*S*D^-1/2
instead of P = D^-1*S
    additional_vectors : (int) compute additional eigenvectors when computing the eigendecomposition.
    When eigen_solver = 'amg' or 'lobpcg', if a small number of eigenvalues is sought, the
    largest eigenvalue returned is often *not* equal to 1 (it should be). This can usually be fixed
    by requesting more than K eigenvalues until the first eigenvalue is close to 1; it is then
    omitted. The remaining K-1 eigenvectors should be informative.
Returns
-------
labels: array-like, shape (1,n_samples)
"""
# Step 1: get similarity matrix
if geom.affinity_matrix is None:
S = geom.compute_affinity_matrix()
else:
S = geom.affinity_matrix
# Check for stability method, symmetric solvers require this
if eigen_solver in ['lobpcg', 'amg']:
stabalize = True
if stabalize:
geom.laplacian_type = 'symmetricnormalized'
return_lapsym = True
else:
geom.laplacian_type = 'randomwalk'
return_lapsym = False
# Step 2: get the Laplacian matrix
P = geom.compute_laplacian_matrix(return_lapsym = return_lapsym)
    # by default the Laplacian is subtracted from the Identity matrix (this step may not be needed)
P += identity(P.shape[0])
# Step 3: Compute the top K eigenvectors and drop the first
if eigen_solver in ['auto', 'amg', 'lobpcg']:
n_components = 2*int(np.log(P.shape[0]))*K + 1
n_components += int(additional_vectors)
else:
n_components = K
n_components = min(n_components, P.shape[0])
(lambdas, eigen_vectors) = eigen_decomposition(P, n_components=n_components, eigen_solver=eigen_solver,
random_state=random_state, drop_first = True,
solver_kwds=solver_kwds)
# the first vector is usually uninformative
if eigen_solver in ['auto', 'lobpcg', 'amg']:
if np.abs(lambdas[0] - 1) > 1e-4:
warnings.warn("largest eigenvalue not equal to 1. Results may be poor. Try increasing additional_vectors parameter")
eigen_vectors = eigen_vectors[:, 1:K]
lambdas = lambdas[1:K]
# If stability method chosen, adjust eigenvectors
if stabalize:
w = np.array(geom.laplacian_weights)
eigen_vectors /= np.sqrt(w[:,np.newaxis])
eigen_vectors /= np.linalg.norm(eigen_vectors, axis = 0)
# If renormalize: set each data point to unit length
if renormalize:
norms = np.linalg.norm(eigen_vectors, axis=1)
eigen_vectors /= norms[:,np.newaxis]
# Step 4: run k-means clustering
labels = k_means_clustering(eigen_vectors,K)
return labels, eigen_vectors, P
|
python
|
def spectral_clustering(geom, K, eigen_solver = 'dense', random_state = None, solver_kwds = None,
renormalize = True, stabalize = True, additional_vectors = 0):
"""
    Spectral clustering to find K clusters using the eigenvectors of a
matrix which is derived from a set of similarities S.
Parameters
-----------
S: array-like,shape(n_sample,n_sample)
similarity matrix
K: integer
number of K clusters
eigen_solver : {'auto', 'dense', 'arpack', 'lobpcg', or 'amg'}
'auto' :
algorithm will attempt to choose the best method for input data
'dense' :
use standard dense matrix operations for the eigenvalue decomposition.
For this method, M must be an array or matrix type. This method should be avoided for large problems.
'arpack' :
use arnoldi iteration in shift-invert mode. For this method,
M may be a dense matrix, sparse matrix, or general linear operator.
Warning: ARPACK can be unstable for some problems. It is best to
try several random seeds in order to check results.
'lobpcg' :
Locally Optimal Block Preconditioned Conjugate Gradient Method.
A preconditioned eigensolver for large symmetric positive definite
(SPD) generalized eigenproblems.
'amg' :
AMG requires pyamg to be installed. It can be faster on very large,
sparse problems, but may also lead to instabilities.
random_state : numpy.RandomState or int, optional
The generator or seed used to determine the starting vector for arpack
iterations. Defaults to numpy.random.RandomState
solver_kwds : any additional keyword arguments to pass to the selected eigen_solver
renormalize : (bool) whether or not to set the rows of the eigenvectors to have norm 1
this can improve label quality
stabalize : (bool) whether or not to compute the (more stable) eigenvectors of L = D^-1/2*S*D^-1/2
instead of P = D^-1*S
    additional_vectors : (int) compute additional eigenvectors when computing the eigendecomposition.
    When eigen_solver = 'amg' or 'lobpcg', if a small number of eigenvalues is sought, the
    largest eigenvalue returned is often *not* equal to 1 (it should be). This can usually be fixed
    by requesting more than K eigenvalues until the first eigenvalue is close to 1; it is then
    omitted. The remaining K-1 eigenvectors should be informative.
Returns
-------
labels: array-like, shape (1,n_samples)
"""
# Step 1: get similarity matrix
if geom.affinity_matrix is None:
S = geom.compute_affinity_matrix()
else:
S = geom.affinity_matrix
# Check for stability method, symmetric solvers require this
if eigen_solver in ['lobpcg', 'amg']:
stabalize = True
if stabalize:
geom.laplacian_type = 'symmetricnormalized'
return_lapsym = True
else:
geom.laplacian_type = 'randomwalk'
return_lapsym = False
# Step 2: get the Laplacian matrix
P = geom.compute_laplacian_matrix(return_lapsym = return_lapsym)
    # by default the Laplacian is subtracted from the Identity matrix (this step may not be needed)
P += identity(P.shape[0])
# Step 3: Compute the top K eigenvectors and drop the first
if eigen_solver in ['auto', 'amg', 'lobpcg']:
n_components = 2*int(np.log(P.shape[0]))*K + 1
n_components += int(additional_vectors)
else:
n_components = K
n_components = min(n_components, P.shape[0])
(lambdas, eigen_vectors) = eigen_decomposition(P, n_components=n_components, eigen_solver=eigen_solver,
random_state=random_state, drop_first = True,
solver_kwds=solver_kwds)
# the first vector is usually uninformative
if eigen_solver in ['auto', 'lobpcg', 'amg']:
if np.abs(lambdas[0] - 1) > 1e-4:
warnings.warn("largest eigenvalue not equal to 1. Results may be poor. Try increasing additional_vectors parameter")
eigen_vectors = eigen_vectors[:, 1:K]
lambdas = lambdas[1:K]
# If stability method chosen, adjust eigenvectors
if stabalize:
w = np.array(geom.laplacian_weights)
eigen_vectors /= np.sqrt(w[:,np.newaxis])
eigen_vectors /= np.linalg.norm(eigen_vectors, axis = 0)
# If renormalize: set each data point to unit length
if renormalize:
norms = np.linalg.norm(eigen_vectors, axis=1)
eigen_vectors /= norms[:,np.newaxis]
# Step 4: run k-means clustering
labels = k_means_clustering(eigen_vectors,K)
return labels, eigen_vectors, P
|
[
"def",
"spectral_clustering",
"(",
"geom",
",",
"K",
",",
"eigen_solver",
"=",
"'dense'",
",",
"random_state",
"=",
"None",
",",
"solver_kwds",
"=",
"None",
",",
"renormalize",
"=",
"True",
",",
"stabalize",
"=",
"True",
",",
"additional_vectors",
"=",
"0",
")",
":",
"# Step 1: get similarity matrix",
"if",
"geom",
".",
"affinity_matrix",
"is",
"None",
":",
"S",
"=",
"geom",
".",
"compute_affinity_matrix",
"(",
")",
"else",
":",
"S",
"=",
"geom",
".",
"affinity_matrix",
"# Check for stability method, symmetric solvers require this",
"if",
"eigen_solver",
"in",
"[",
"'lobpcg'",
",",
"'amg'",
"]",
":",
"stabalize",
"=",
"True",
"if",
"stabalize",
":",
"geom",
".",
"laplacian_type",
"=",
"'symmetricnormalized'",
"return_lapsym",
"=",
"True",
"else",
":",
"geom",
".",
"laplacian_type",
"=",
"'randomwalk'",
"return_lapsym",
"=",
"False",
"# Step 2: get the Laplacian matrix",
"P",
"=",
"geom",
".",
"compute_laplacian_matrix",
"(",
"return_lapsym",
"=",
"return_lapsym",
")",
"# by default the Laplacian is subtracted from the Identify matrix (this step may not be needed)",
"P",
"+=",
"identity",
"(",
"P",
".",
"shape",
"[",
"0",
"]",
")",
"# Step 3: Compute the top K eigenvectors and drop the first ",
"if",
"eigen_solver",
"in",
"[",
"'auto'",
",",
"'amg'",
",",
"'lobpcg'",
"]",
":",
"n_components",
"=",
"2",
"*",
"int",
"(",
"np",
".",
"log",
"(",
"P",
".",
"shape",
"[",
"0",
"]",
")",
")",
"*",
"K",
"+",
"1",
"n_components",
"+=",
"int",
"(",
"additional_vectors",
")",
"else",
":",
"n_components",
"=",
"K",
"n_components",
"=",
"min",
"(",
"n_components",
",",
"P",
".",
"shape",
"[",
"0",
"]",
")",
"(",
"lambdas",
",",
"eigen_vectors",
")",
"=",
"eigen_decomposition",
"(",
"P",
",",
"n_components",
"=",
"n_components",
",",
"eigen_solver",
"=",
"eigen_solver",
",",
"random_state",
"=",
"random_state",
",",
"drop_first",
"=",
"True",
",",
"solver_kwds",
"=",
"solver_kwds",
")",
"# the first vector is usually uninformative ",
"if",
"eigen_solver",
"in",
"[",
"'auto'",
",",
"'lobpcg'",
",",
"'amg'",
"]",
":",
"if",
"np",
".",
"abs",
"(",
"lambdas",
"[",
"0",
"]",
"-",
"1",
")",
">",
"1e-4",
":",
"warnings",
".",
"warn",
"(",
"\"largest eigenvalue not equal to 1. Results may be poor. Try increasing additional_vectors parameter\"",
")",
"eigen_vectors",
"=",
"eigen_vectors",
"[",
":",
",",
"1",
":",
"K",
"]",
"lambdas",
"=",
"lambdas",
"[",
"1",
":",
"K",
"]",
"# If stability method chosen, adjust eigenvectors",
"if",
"stabalize",
":",
"w",
"=",
"np",
".",
"array",
"(",
"geom",
".",
"laplacian_weights",
")",
"eigen_vectors",
"/=",
"np",
".",
"sqrt",
"(",
"w",
"[",
":",
",",
"np",
".",
"newaxis",
"]",
")",
"eigen_vectors",
"/=",
"np",
".",
"linalg",
".",
"norm",
"(",
"eigen_vectors",
",",
"axis",
"=",
"0",
")",
"# If renormalize: set each data point to unit length",
"if",
"renormalize",
":",
"norms",
"=",
"np",
".",
"linalg",
".",
"norm",
"(",
"eigen_vectors",
",",
"axis",
"=",
"1",
")",
"eigen_vectors",
"/=",
"norms",
"[",
":",
",",
"np",
".",
"newaxis",
"]",
"# Step 4: run k-means clustering",
"labels",
"=",
"k_means_clustering",
"(",
"eigen_vectors",
",",
"K",
")",
"return",
"labels",
",",
"eigen_vectors",
",",
"P"
] |
Spectral clustering to find K clusters using the eigenvectors of a
matrix which is derived from a set of similarities S.
Parameters
-----------
S: array-like,shape(n_sample,n_sample)
similarity matrix
K: integer
number of K clusters
eigen_solver : {'auto', 'dense', 'arpack', 'lobpcg', or 'amg'}
'auto' :
algorithm will attempt to choose the best method for input data
'dense' :
use standard dense matrix operations for the eigenvalue decomposition.
For this method, M must be an array or matrix type. This method should be avoided for large problems.
'arpack' :
use arnoldi iteration in shift-invert mode. For this method,
M may be a dense matrix, sparse matrix, or general linear operator.
Warning: ARPACK can be unstable for some problems. It is best to
try several random seeds in order to check results.
'lobpcg' :
Locally Optimal Block Preconditioned Conjugate Gradient Method.
A preconditioned eigensolver for large symmetric positive definite
(SPD) generalized eigenproblems.
'amg' :
AMG requires pyamg to be installed. It can be faster on very large,
sparse problems, but may also lead to instabilities.
random_state : numpy.RandomState or int, optional
The generator or seed used to determine the starting vector for arpack
iterations. Defaults to numpy.random.RandomState
solver_kwds : any additional keyword arguments to pass to the selected eigen_solver
renormalize : (bool) whether or not to set the rows of the eigenvectors to have norm 1
this can improve label quality
stabalize : (bool) whether or not to compute the (more stable) eigenvectors of L = D^-1/2*S*D^-1/2
instead of P = D^-1*S
additional_vectors : (int) compute additional eigenvectors when computing the eigendecomposition.
When eigen_solver = 'amg' or 'lobpcg', if a small number of eigenvalues is sought, the
largest eigenvalue returned is often *not* equal to 1 (it should be). This can usually be fixed
by requesting more than K eigenvalues until the first eigenvalue is close to 1; it is then
omitted. The remaining K-1 eigenvectors should be informative.
Returns
-------
labels: array-like, shape (1,n_samples)
|
[
"Spectral",
"clustering",
"for",
"find",
"K",
"clusters",
"by",
"using",
"the",
"eigenvectors",
"of",
"a",
"matrix",
"which",
"is",
"derived",
"from",
"a",
"set",
"of",
"similarities",
"S",
"."
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/utils/spectral_clustering.py#L94-L193
|
train
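The stabilization branch swaps the random-walk operator P = D^-1 S for the symmetric L = D^-1/2 S D^-1/2, and the later division by sqrt(w) maps eigenvectors of L back to eigenvectors of P. A self-contained NumPy check of that identity on a toy affinity (no megaman calls):

import numpy as np

rng = np.random.default_rng(0)
S = rng.random((40, 40))
S = (S + S.T) / 2                       # symmetric toy affinity matrix
d = S.sum(axis=1)

# Random-walk operator P = D^-1 S vs. symmetric L = D^-1/2 S D^-1/2
P = S / d[:, None]
L = S / np.sqrt(d)[:, None] / np.sqrt(d)[None, :]

# L is symmetric, so eigh applies; its eigenvectors map to P's via D^-1/2
vals, V = np.linalg.eigh(L)
V_rw = V / np.sqrt(d)[:, None]          # the sqrt(w) adjustment in the code above

# Check: columns of V_rw are right eigenvectors of P
print(np.abs(P @ V_rw - V_rw * vals).max())   # ~1e-15

This is why the symmetric solvers lobpcg and amg force stabalize=True: they require the symmetric form, and the random-walk eigenvectors are recovered afterwards.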
|
mmp2/megaman
|
megaman/plotter/covar_plotter3.py
|
pathpatch_2d_to_3d
|
def pathpatch_2d_to_3d(pathpatch, z = 0, normal = 'z'):
"""
Transforms a 2D Patch to a 3D patch using the given normal vector.
    The patch is projected into the XY plane, rotated about the origin
and finally translated by z.
"""
if type(normal) is str: #Translate strings to normal vectors
index = "xyz".index(normal)
normal = np.roll((1.0,0,0), index)
normal /= np.linalg.norm(normal) #Make sure the vector is normalised
path = pathpatch.get_path() #Get the path and the associated transform
trans = pathpatch.get_patch_transform()
path = trans.transform_path(path) #Apply the transform
pathpatch.__class__ = art3d.PathPatch3D #Change the class
pathpatch._code3d = path.codes #Copy the codes
    pathpatch._facecolor3d = pathpatch.get_facecolor() #Get the face color
verts = path.vertices #Get the vertices in 2D
d = np.cross(normal, (0, 0, 1)) #Obtain the rotation vector
M = rotation_matrix(d) #Get the rotation matrix
pathpatch._segment3d = \
np.array([np.dot(M, (x, y, 0)) + (0, 0, z) for x, y in verts])
return pathpatch
|
python
|
def pathpatch_2d_to_3d(pathpatch, z = 0, normal = 'z'):
"""
Transforms a 2D Patch to a 3D patch using the given normal vector.
    The patch is projected into the XY plane, rotated about the origin
and finally translated by z.
"""
if type(normal) is str: #Translate strings to normal vectors
index = "xyz".index(normal)
normal = np.roll((1.0,0,0), index)
normal /= np.linalg.norm(normal) #Make sure the vector is normalised
path = pathpatch.get_path() #Get the path and the associated transform
trans = pathpatch.get_patch_transform()
path = trans.transform_path(path) #Apply the transform
pathpatch.__class__ = art3d.PathPatch3D #Change the class
pathpatch._code3d = path.codes #Copy the codes
    pathpatch._facecolor3d = pathpatch.get_facecolor() #Get the face color
verts = path.vertices #Get the vertices in 2D
d = np.cross(normal, (0, 0, 1)) #Obtain the rotation vector
M = rotation_matrix(d) #Get the rotation matrix
pathpatch._segment3d = \
np.array([np.dot(M, (x, y, 0)) + (0, 0, z) for x, y in verts])
return pathpatch
|
[
"def",
"pathpatch_2d_to_3d",
"(",
"pathpatch",
",",
"z",
"=",
"0",
",",
"normal",
"=",
"'z'",
")",
":",
"if",
"type",
"(",
"normal",
")",
"is",
"str",
":",
"#Translate strings to normal vectors",
"index",
"=",
"\"xyz\"",
".",
"index",
"(",
"normal",
")",
"normal",
"=",
"np",
".",
"roll",
"(",
"(",
"1.0",
",",
"0",
",",
"0",
")",
",",
"index",
")",
"normal",
"/=",
"np",
".",
"linalg",
".",
"norm",
"(",
"normal",
")",
"#Make sure the vector is normalised",
"path",
"=",
"pathpatch",
".",
"get_path",
"(",
")",
"#Get the path and the associated transform",
"trans",
"=",
"pathpatch",
".",
"get_patch_transform",
"(",
")",
"path",
"=",
"trans",
".",
"transform_path",
"(",
"path",
")",
"#Apply the transform",
"pathpatch",
".",
"__class__",
"=",
"art3d",
".",
"PathPatch3D",
"#Change the class",
"pathpatch",
".",
"_code3d",
"=",
"path",
".",
"codes",
"#Copy the codes",
"pathpatch",
".",
"_facecolor3d",
"=",
"pathpatch",
".",
"get_facecolor",
"#Get the face color",
"verts",
"=",
"path",
".",
"vertices",
"#Get the vertices in 2D",
"d",
"=",
"np",
".",
"cross",
"(",
"normal",
",",
"(",
"0",
",",
"0",
",",
"1",
")",
")",
"#Obtain the rotation vector",
"M",
"=",
"rotation_matrix",
"(",
"d",
")",
"#Get the rotation matrix",
"pathpatch",
".",
"_segment3d",
"=",
"np",
".",
"array",
"(",
"[",
"np",
".",
"dot",
"(",
"M",
",",
"(",
"x",
",",
"y",
",",
"0",
")",
")",
"+",
"(",
"0",
",",
"0",
",",
"z",
")",
"for",
"x",
",",
"y",
"in",
"verts",
"]",
")",
"return",
"pathpatch"
] |
Transforms a 2D Patch to a 3D patch using the given normal vector.
The patch is projected into the XY plane, rotated about the origin
and finally translated by z.
|
[
"Transforms",
"a",
"2D",
"Patch",
"to",
"a",
"3D",
"patch",
"using",
"the",
"given",
"normal",
"vector",
"."
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/plotter/covar_plotter3.py#L44-L73
|
train
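A usage sketch, assuming megaman and matplotlib are installed; note that a non-string normal is normalized in place, so pass a float array you do not mind modifying:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle
from megaman.plotter.covar_plotter3 import pathpatch_2d_to_3d  # path from this record

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
circle = Circle((0, 0), 1, facecolor='steelblue', alpha=0.5)
ax.add_patch(circle)                                  # the patch starts life in 2D
pathpatch_2d_to_3d(circle, z=0.5, normal=np.array([1.0, 1.0, 0.0]))  # lift into 3D
ax.set_xlim(-2, 2); ax.set_ylim(-2, 2); ax.set_zlim(-2, 2)
plt.show()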
|
mmp2/megaman
|
megaman/plotter/covar_plotter3.py
|
calc_2d_ellipse_properties
|
def calc_2d_ellipse_properties(cov,nstd=2):
"""Calculate the properties for 2d ellipse given the covariance matrix."""
def eigsorted(cov):
vals, vecs = np.linalg.eigh(cov)
order = vals.argsort()[::-1]
return vals[order], vecs[:,order]
vals, vecs = eigsorted(cov)
width, height = 2 * nstd * np.sqrt(vals[:2])
normal = vecs[:,2] if vecs[2,2] > 0 else -vecs[:,2]
d = np.cross(normal, (0, 0, 1))
M = rotation_matrix(d)
x_trans = np.dot(M,(1,0,0))
cos_val = np.dot(vecs[:,0],x_trans)/np.linalg.norm(vecs[:,0])/np.linalg.norm(x_trans)
theta = np.degrees(np.arccos(np.clip(cos_val, -1, 1))) # if you really want the angle
return { 'width': width, 'height': height, 'angle': theta }, normal
|
python
|
def calc_2d_ellipse_properties(cov,nstd=2):
"""Calculate the properties for 2d ellipse given the covariance matrix."""
def eigsorted(cov):
vals, vecs = np.linalg.eigh(cov)
order = vals.argsort()[::-1]
return vals[order], vecs[:,order]
vals, vecs = eigsorted(cov)
width, height = 2 * nstd * np.sqrt(vals[:2])
normal = vecs[:,2] if vecs[2,2] > 0 else -vecs[:,2]
d = np.cross(normal, (0, 0, 1))
M = rotation_matrix(d)
x_trans = np.dot(M,(1,0,0))
cos_val = np.dot(vecs[:,0],x_trans)/np.linalg.norm(vecs[:,0])/np.linalg.norm(x_trans)
theta = np.degrees(np.arccos(np.clip(cos_val, -1, 1))) # if you really want the angle
return { 'width': width, 'height': height, 'angle': theta }, normal
|
[
"def",
"calc_2d_ellipse_properties",
"(",
"cov",
",",
"nstd",
"=",
"2",
")",
":",
"def",
"eigsorted",
"(",
"cov",
")",
":",
"vals",
",",
"vecs",
"=",
"np",
".",
"linalg",
".",
"eigh",
"(",
"cov",
")",
"order",
"=",
"vals",
".",
"argsort",
"(",
")",
"[",
":",
":",
"-",
"1",
"]",
"return",
"vals",
"[",
"order",
"]",
",",
"vecs",
"[",
":",
",",
"order",
"]",
"vals",
",",
"vecs",
"=",
"eigsorted",
"(",
"cov",
")",
"width",
",",
"height",
"=",
"2",
"*",
"nstd",
"*",
"np",
".",
"sqrt",
"(",
"vals",
"[",
":",
"2",
"]",
")",
"normal",
"=",
"vecs",
"[",
":",
",",
"2",
"]",
"if",
"vecs",
"[",
"2",
",",
"2",
"]",
">",
"0",
"else",
"-",
"vecs",
"[",
":",
",",
"2",
"]",
"d",
"=",
"np",
".",
"cross",
"(",
"normal",
",",
"(",
"0",
",",
"0",
",",
"1",
")",
")",
"M",
"=",
"rotation_matrix",
"(",
"d",
")",
"x_trans",
"=",
"np",
".",
"dot",
"(",
"M",
",",
"(",
"1",
",",
"0",
",",
"0",
")",
")",
"cos_val",
"=",
"np",
".",
"dot",
"(",
"vecs",
"[",
":",
",",
"0",
"]",
",",
"x_trans",
")",
"/",
"np",
".",
"linalg",
".",
"norm",
"(",
"vecs",
"[",
":",
",",
"0",
"]",
")",
"/",
"np",
".",
"linalg",
".",
"norm",
"(",
"x_trans",
")",
"theta",
"=",
"np",
".",
"degrees",
"(",
"np",
".",
"arccos",
"(",
"np",
".",
"clip",
"(",
"cos_val",
",",
"-",
"1",
",",
"1",
")",
")",
")",
"# if you really want the angle",
"return",
"{",
"'width'",
":",
"width",
",",
"'height'",
":",
"height",
",",
"'angle'",
":",
"theta",
"}",
",",
"normal"
] |
Calculate the properties of a 2D ellipse given the covariance matrix.
|
[
"Calculate",
"the",
"properties",
"for",
"2d",
"ellipse",
"given",
"the",
"covariance",
"matrix",
"."
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/plotter/covar_plotter3.py#L101-L116
|
train
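On a diagonal covariance the answers are known in closed form, which makes the eigsorted/width/height logic easy to sanity-check. A self-contained sketch mirroring the computation above, without calling megaman:

import numpy as np

# Toy 3x3 covariance with a dominant plane: properties of the 2-sigma ellipse
cov = np.diag([4.0, 1.0, 0.04])
nstd = 2

vals, vecs = np.linalg.eigh(cov)
order = vals.argsort()[::-1]               # eigsorted: descending eigenvalues
vals, vecs = vals[order], vecs[:, order]

width, height = 2 * nstd * np.sqrt(vals[:2])
normal = vecs[:, 2] if vecs[2, 2] > 0 else -vecs[:, 2]
print(width, height)    # 8.0 and 4.0: each axis is 2*nstd*sqrt(eigenvalue)
print(normal)           # smallest-variance direction, oriented upward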
|
mmp2/megaman
|
megaman/plotter/covar_plotter3.py
|
rotation_matrix
|
def rotation_matrix(d):
"""
Calculates a rotation matrix given a vector d. The direction of d
corresponds to the rotation axis. The length of d corresponds to
the sin of the angle of rotation.
Variant of: http://mail.scipy.org/pipermail/numpy-discussion/2009-March/040806.html
"""
sin_angle = np.linalg.norm(d)
if sin_angle == 0:
return np.identity(3)
d /= sin_angle
eye = np.eye(3)
ddt = np.outer(d, d)
skew = np.array([[ 0, d[2], -d[1]],
[-d[2], 0, d[0]],
[ d[1], -d[0], 0]], dtype=np.float64)
M = ddt + np.sqrt(1 - sin_angle**2) * (eye - ddt) + sin_angle * skew
return M
|
python
|
def rotation_matrix(d):
"""
Calculates a rotation matrix given a vector d. The direction of d
corresponds to the rotation axis. The length of d corresponds to
the sin of the angle of rotation.
Variant of: http://mail.scipy.org/pipermail/numpy-discussion/2009-March/040806.html
"""
sin_angle = np.linalg.norm(d)
if sin_angle == 0:
return np.identity(3)
d /= sin_angle
eye = np.eye(3)
ddt = np.outer(d, d)
skew = np.array([[ 0, d[2], -d[1]],
[-d[2], 0, d[0]],
[ d[1], -d[0], 0]], dtype=np.float64)
M = ddt + np.sqrt(1 - sin_angle**2) * (eye - ddt) + sin_angle * skew
return M
|
[
"def",
"rotation_matrix",
"(",
"d",
")",
":",
"sin_angle",
"=",
"np",
".",
"linalg",
".",
"norm",
"(",
"d",
")",
"if",
"sin_angle",
"==",
"0",
":",
"return",
"np",
".",
"identity",
"(",
"3",
")",
"d",
"/=",
"sin_angle",
"eye",
"=",
"np",
".",
"eye",
"(",
"3",
")",
"ddt",
"=",
"np",
".",
"outer",
"(",
"d",
",",
"d",
")",
"skew",
"=",
"np",
".",
"array",
"(",
"[",
"[",
"0",
",",
"d",
"[",
"2",
"]",
",",
"-",
"d",
"[",
"1",
"]",
"]",
",",
"[",
"-",
"d",
"[",
"2",
"]",
",",
"0",
",",
"d",
"[",
"0",
"]",
"]",
",",
"[",
"d",
"[",
"1",
"]",
",",
"-",
"d",
"[",
"0",
"]",
",",
"0",
"]",
"]",
",",
"dtype",
"=",
"np",
".",
"float64",
")",
"M",
"=",
"ddt",
"+",
"np",
".",
"sqrt",
"(",
"1",
"-",
"sin_angle",
"**",
"2",
")",
"*",
"(",
"eye",
"-",
"ddt",
")",
"+",
"sin_angle",
"*",
"skew",
"return",
"M"
] |
Calculates a rotation matrix given a vector d. The direction of d
corresponds to the rotation axis. The length of d corresponds to
the sin of the angle of rotation.
Variant of: http://mail.scipy.org/pipermail/numpy-discussion/2009-March/040806.html
|
[
"Calculates",
"a",
"rotation",
"matrix",
"given",
"a",
"vector",
"d",
".",
"The",
"direction",
"of",
"d",
"corresponds",
"to",
"the",
"rotation",
"axis",
".",
"The",
"length",
"of",
"d",
"corresponds",
"to",
"the",
"sin",
"of",
"the",
"angle",
"of",
"rotation",
"."
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/plotter/covar_plotter3.py#L118-L140
|
train
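The construction is a Rodrigues-style formula in which the length of d encodes the sine of the rotation angle. A quick check, assuming megaman is importable from the module path in this record; the function normalizes its argument in place, hence the copy:

import numpy as np
from megaman.plotter.covar_plotter3 import rotation_matrix  # path from this record

axis = np.array([0.0, 1.0, 0.0])
angle = np.pi / 6
d = np.sin(angle) * axis           # |d| = sin(angle), direction = rotation axis
M = rotation_matrix(d.copy())      # copy: the function normalizes d in place

print(np.allclose(M @ M.T, np.eye(3)))        # True: M is orthogonal
print(np.isclose(np.linalg.det(M), 1.0))      # True: a proper rotation
z = np.array([0.0, 0.0, 1.0])
print(np.isclose((M @ z) @ z, np.cos(angle))) # True: z is tilted by the angle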
|
mmp2/megaman
|
megaman/plotter/covar_plotter3.py
|
create_ellipse
|
def create_ellipse(width,height,angle):
"""Create parametric ellipse from 200 points."""
angle = angle / 180.0 * np.pi
thetas = np.linspace(0,2*np.pi,200)
a = width / 2.0
b = height / 2.0
x = a*np.cos(thetas)*np.cos(angle) - b*np.sin(thetas)*np.sin(angle)
y = a*np.cos(thetas)*np.sin(angle) + b*np.sin(thetas)*np.cos(angle)
z = np.zeros(thetas.shape)
return np.vstack((x,y,z)).T
|
python
|
def create_ellipse(width,height,angle):
"""Create parametric ellipse from 200 points."""
angle = angle / 180.0 * np.pi
thetas = np.linspace(0,2*np.pi,200)
a = width / 2.0
b = height / 2.0
x = a*np.cos(thetas)*np.cos(angle) - b*np.sin(thetas)*np.sin(angle)
y = a*np.cos(thetas)*np.sin(angle) + b*np.sin(thetas)*np.cos(angle)
z = np.zeros(thetas.shape)
return np.vstack((x,y,z)).T
|
[
"def",
"create_ellipse",
"(",
"width",
",",
"height",
",",
"angle",
")",
":",
"angle",
"=",
"angle",
"/",
"180.0",
"*",
"np",
".",
"pi",
"thetas",
"=",
"np",
".",
"linspace",
"(",
"0",
",",
"2",
"*",
"np",
".",
"pi",
",",
"200",
")",
"a",
"=",
"width",
"/",
"2.0",
"b",
"=",
"height",
"/",
"2.0",
"x",
"=",
"a",
"*",
"np",
".",
"cos",
"(",
"thetas",
")",
"*",
"np",
".",
"cos",
"(",
"angle",
")",
"-",
"b",
"*",
"np",
".",
"sin",
"(",
"thetas",
")",
"*",
"np",
".",
"sin",
"(",
"angle",
")",
"y",
"=",
"a",
"*",
"np",
".",
"cos",
"(",
"thetas",
")",
"*",
"np",
".",
"sin",
"(",
"angle",
")",
"+",
"b",
"*",
"np",
".",
"sin",
"(",
"thetas",
")",
"*",
"np",
".",
"cos",
"(",
"angle",
")",
"z",
"=",
"np",
".",
"zeros",
"(",
"thetas",
".",
"shape",
")",
"return",
"np",
".",
"vstack",
"(",
"(",
"x",
",",
"y",
",",
"z",
")",
")",
".",
"T"
] |
Create a parametric ellipse sampled at 200 points.
|
[
"Create",
"parametric",
"ellipse",
"from",
"200",
"points",
"."
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/plotter/covar_plotter3.py#L142-L152
|
train
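The parametric form above is a rotation by angle of the axis-aligned curve (a cos t, b sin t), with a = width/2 and b = height/2. A self-contained check that the generated points satisfy the implicit ellipse equation once the rotation is undone:

import numpy as np

width, height, angle = 4.0, 2.0, np.deg2rad(30.0)
a, b = width / 2.0, height / 2.0
t = np.linspace(0, 2 * np.pi, 200)
x = a * np.cos(t) * np.cos(angle) - b * np.sin(t) * np.sin(angle)
y = a * np.cos(t) * np.sin(angle) + b * np.sin(t) * np.cos(angle)

# Undo the rotation; the points must satisfy (u/a)^2 + (v/b)^2 = 1
u = x * np.cos(angle) + y * np.sin(angle)
v = -x * np.sin(angle) + y * np.cos(angle)
print(np.allclose((u / a) ** 2 + (v / b) ** 2, 1.0))  # True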
|
mmp2/megaman
|
megaman/plotter/covar_plotter3.py
|
transform_to_3d
|
def transform_to_3d(points,normal,z=0):
"""Project points into 3d from 2d points."""
d = np.cross(normal, (0, 0, 1))
M = rotation_matrix(d)
transformed_points = M.dot(points.T).T + z
return transformed_points
|
python
|
def transform_to_3d(points,normal,z=0):
"""Project points into 3d from 2d points."""
d = np.cross(normal, (0, 0, 1))
M = rotation_matrix(d)
transformed_points = M.dot(points.T).T + z
return transformed_points
|
[
"def",
"transform_to_3d",
"(",
"points",
",",
"normal",
",",
"z",
"=",
"0",
")",
":",
"d",
"=",
"np",
".",
"cross",
"(",
"normal",
",",
"(",
"0",
",",
"0",
",",
"1",
")",
")",
"M",
"=",
"rotation_matrix",
"(",
"d",
")",
"transformed_points",
"=",
"M",
".",
"dot",
"(",
"points",
".",
"T",
")",
".",
"T",
"+",
"z",
"return",
"transformed_points"
] |
Project 2D points into 3D.
|
[
"Project",
"points",
"into",
"3d",
"from",
"2d",
"points",
"."
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/plotter/covar_plotter3.py#L154-L159
|
train
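A usage sketch combining the two helpers, assuming megaman is importable from the module path in these records. The check verifies that the transformed points all lie in the plane orthogonal to the requested normal; note from the code that a scalar z is added to every coordinate, not only along the normal:

import numpy as np
from megaman.plotter.covar_plotter3 import create_ellipse, transform_to_3d

pts2d = create_ellipse(width=4.0, height=2.0, angle=30.0)  # (200, 3), z = 0
normal = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
pts3d = transform_to_3d(pts2d, normal, z=2.0)

heights = pts3d @ normal                 # signed distance along the normal
print(np.allclose(heights, heights[0]))  # True: a planar ellipse in 3D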
|
mmp2/megaman
|
megaman/plotter/covar_plotter3.py
|
create_ellipse_mesh
|
def create_ellipse_mesh(points,**kwargs):
"""Visualize the ellipse by using the mesh of the points."""
import plotly.graph_objs as go
x,y,z = points.T
return (go.Mesh3d(x=x,y=y,z=z,**kwargs),
go.Scatter3d(x=x, y=y, z=z,
marker=dict(size=0.01),
line=dict(width=2,color='#000000'),
showlegend=False,
hoverinfo='none'
)
)
|
python
|
def create_ellipse_mesh(points,**kwargs):
"""Visualize the ellipse by using the mesh of the points."""
import plotly.graph_objs as go
x,y,z = points.T
return (go.Mesh3d(x=x,y=y,z=z,**kwargs),
go.Scatter3d(x=x, y=y, z=z,
marker=dict(size=0.01),
line=dict(width=2,color='#000000'),
showlegend=False,
hoverinfo='none'
)
)
|
[
"def",
"create_ellipse_mesh",
"(",
"points",
",",
"*",
"*",
"kwargs",
")",
":",
"import",
"plotly",
".",
"graph_objs",
"as",
"go",
"x",
",",
"y",
",",
"z",
"=",
"points",
".",
"T",
"return",
"(",
"go",
".",
"Mesh3d",
"(",
"x",
"=",
"x",
",",
"y",
"=",
"y",
",",
"z",
"=",
"z",
",",
"*",
"*",
"kwargs",
")",
",",
"go",
".",
"Scatter3d",
"(",
"x",
"=",
"x",
",",
"y",
"=",
"y",
",",
"z",
"=",
"z",
",",
"marker",
"=",
"dict",
"(",
"size",
"=",
"0.01",
")",
",",
"line",
"=",
"dict",
"(",
"width",
"=",
"2",
",",
"color",
"=",
"'#000000'",
")",
",",
"showlegend",
"=",
"False",
",",
"hoverinfo",
"=",
"'none'",
")",
")"
] |
Visualize the ellipse by using the mesh of the points.
|
[
"Visualize",
"the",
"ellipse",
"by",
"using",
"the",
"mesh",
"of",
"the",
"points",
"."
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/plotter/covar_plotter3.py#L166-L177
|
train
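A usage sketch, assuming megaman and plotly are installed; color and opacity are passed through **kwargs to go.Mesh3d:

import numpy as np
import plotly.graph_objs as go
from megaman.plotter.covar_plotter3 import (create_ellipse,
                                            create_ellipse_mesh,
                                            transform_to_3d)

pts = transform_to_3d(create_ellipse(4.0, 2.0, 30.0),
                      np.array([0.0, 0.0, 1.0]), z=1.0)
mesh, outline = create_ellipse_mesh(pts, color='lightblue', opacity=0.5)
fig = go.Figure(data=[mesh, outline])
fig.show()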
|
mmp2/megaman
|
megaman/embedding/ltsa.py
|
ltsa
|
def ltsa(geom, n_components, eigen_solver='auto',
random_state=None, solver_kwds=None):
"""
Perform a Local Tangent Space Alignment analysis on the data.
Parameters
----------
geom : a Geometry object from megaman.geometry.geometry
n_components : integer
number of coordinates for the manifold.
eigen_solver : {'auto', 'dense', 'arpack', 'lobpcg', or 'amg'}
'auto' :
algorithm will attempt to choose the best method for input data
'dense' :
use standard dense matrix operations for the eigenvalue decomposition.
For this method, M must be an array or matrix type. This method should be avoided for large problems.
'arpack' :
use arnoldi iteration in shift-invert mode. For this method,
M may be a dense matrix, sparse matrix, or general linear operator.
Warning: ARPACK can be unstable for some problems. It is best to
try several random seeds in order to check results.
'lobpcg' :
Locally Optimal Block Preconditioned Conjugate Gradient Method.
A preconditioned eigensolver for large symmetric positive definite
(SPD) generalized eigenproblems.
'amg' :
AMG requires pyamg to be installed. It can be faster on very large,
sparse problems, but may also lead to instabilities.
random_state : numpy.RandomState or int, optional
The generator or seed used to determine the starting vector for arpack
iterations. Defaults to numpy.random.
solver_kwds : any additional keyword arguments to pass to the selected eigen_solver
Returns
-------
embedding : array-like, shape [n_samples, n_components]
Embedding vectors.
squared_error : float
Reconstruction error for the embedding vectors. Equivalent to
``norm(Y - W Y, 'fro')**2``, where W are the reconstruction weights.
References
----------
* Zhang, Z. & Zha, H. Principal manifolds and nonlinear
dimensionality reduction via tangent space alignment.
Journal of Shanghai Univ. 8:406 (2004)
"""
if geom.X is None:
raise ValueError("Must pass data matrix X to Geometry")
(N, d_in) = geom.X.shape
if n_components > d_in:
raise ValueError("output dimension must be less than or equal "
"to input dimension")
# get the distance matrix and neighbors list
if geom.adjacency_matrix is None:
geom.compute_adjacency_matrix()
(rows, cols) = geom.adjacency_matrix.nonzero()
eigen_solver, solver_kwds = check_eigen_solver(eigen_solver, solver_kwds,
size=geom.adjacency_matrix.shape[0],
nvec=n_components + 1)
if eigen_solver != 'dense':
M = sparse.csr_matrix((N, N))
else:
M = np.zeros((N, N))
for i in range(N):
neighbors_i = cols[rows == i]
n_neighbors_i = len(neighbors_i)
use_svd = (n_neighbors_i > d_in)
Xi = geom.X[neighbors_i]
Xi -= Xi.mean(0)
# compute n_components largest eigenvalues of Xi * Xi^T
if use_svd:
v = svd(Xi, full_matrices=True)[0]
else:
Ci = np.dot(Xi, Xi.T)
v = eigh(Ci)[1][:, ::-1]
Gi = np.zeros((n_neighbors_i, n_components + 1))
Gi[:, 1:] = v[:, :n_components]
Gi[:, 0] = 1. / np.sqrt(n_neighbors_i)
GiGiT = np.dot(Gi, Gi.T)
nbrs_x, nbrs_y = np.meshgrid(neighbors_i, neighbors_i)
with warnings.catch_warnings():
# sparse will complain this is better with lil_matrix but it doesn't work
warnings.simplefilter("ignore")
M[nbrs_x, nbrs_y] -= GiGiT
M[neighbors_i, neighbors_i] += 1
return null_space(M, n_components, k_skip=1, eigen_solver=eigen_solver,
random_state=random_state,solver_kwds=solver_kwds)
|
python
|
def ltsa(geom, n_components, eigen_solver='auto',
random_state=None, solver_kwds=None):
"""
Perform a Local Tangent Space Alignment analysis on the data.
Parameters
----------
geom : a Geometry object from megaman.geometry.geometry
n_components : integer
number of coordinates for the manifold.
eigen_solver : {'auto', 'dense', 'arpack', 'lobpcg', or 'amg'}
'auto' :
algorithm will attempt to choose the best method for input data
'dense' :
use standard dense matrix operations for the eigenvalue decomposition.
For this method, M must be an array or matrix type. This method should be avoided for large problems.
'arpack' :
use arnoldi iteration in shift-invert mode. For this method,
M may be a dense matrix, sparse matrix, or general linear operator.
Warning: ARPACK can be unstable for some problems. It is best to
try several random seeds in order to check results.
'lobpcg' :
Locally Optimal Block Preconditioned Conjugate Gradient Method.
A preconditioned eigensolver for large symmetric positive definite
(SPD) generalized eigenproblems.
'amg' :
AMG requires pyamg to be installed. It can be faster on very large,
sparse problems, but may also lead to instabilities.
random_state : numpy.RandomState or int, optional
The generator or seed used to determine the starting vector for arpack
iterations. Defaults to numpy.random.
solver_kwds : any additional keyword arguments to pass to the selected eigen_solver
Returns
-------
embedding : array-like, shape [n_samples, n_components]
Embedding vectors.
squared_error : float
Reconstruction error for the embedding vectors. Equivalent to
``norm(Y - W Y, 'fro')**2``, where W are the reconstruction weights.
References
----------
* Zhang, Z. & Zha, H. Principal manifolds and nonlinear
dimensionality reduction via tangent space alignment.
Journal of Shanghai Univ. 8:406 (2004)
"""
if geom.X is None:
raise ValueError("Must pass data matrix X to Geometry")
(N, d_in) = geom.X.shape
if n_components > d_in:
raise ValueError("output dimension must be less than or equal "
"to input dimension")
# get the distance matrix and neighbors list
if geom.adjacency_matrix is None:
geom.compute_adjacency_matrix()
(rows, cols) = geom.adjacency_matrix.nonzero()
eigen_solver, solver_kwds = check_eigen_solver(eigen_solver, solver_kwds,
size=geom.adjacency_matrix.shape[0],
nvec=n_components + 1)
if eigen_solver != 'dense':
M = sparse.csr_matrix((N, N))
else:
M = np.zeros((N, N))
for i in range(N):
neighbors_i = cols[rows == i]
n_neighbors_i = len(neighbors_i)
use_svd = (n_neighbors_i > d_in)
Xi = geom.X[neighbors_i]
Xi -= Xi.mean(0)
# compute n_components largest eigenvalues of Xi * Xi^T
if use_svd:
v = svd(Xi, full_matrices=True)[0]
else:
Ci = np.dot(Xi, Xi.T)
v = eigh(Ci)[1][:, ::-1]
Gi = np.zeros((n_neighbors_i, n_components + 1))
Gi[:, 1:] = v[:, :n_components]
Gi[:, 0] = 1. / np.sqrt(n_neighbors_i)
GiGiT = np.dot(Gi, Gi.T)
nbrs_x, nbrs_y = np.meshgrid(neighbors_i, neighbors_i)
with warnings.catch_warnings():
# sparse will complain this is better with lil_matrix but it doesn't work
warnings.simplefilter("ignore")
M[nbrs_x, nbrs_y] -= GiGiT
M[neighbors_i, neighbors_i] += 1
return null_space(M, n_components, k_skip=1, eigen_solver=eigen_solver,
random_state=random_state,solver_kwds=solver_kwds)
|
[
"def",
"ltsa",
"(",
"geom",
",",
"n_components",
",",
"eigen_solver",
"=",
"'auto'",
",",
"random_state",
"=",
"None",
",",
"solver_kwds",
"=",
"None",
")",
":",
"if",
"geom",
".",
"X",
"is",
"None",
":",
"raise",
"ValueError",
"(",
"\"Must pass data matrix X to Geometry\"",
")",
"(",
"N",
",",
"d_in",
")",
"=",
"geom",
".",
"X",
".",
"shape",
"if",
"n_components",
">",
"d_in",
":",
"raise",
"ValueError",
"(",
"\"output dimension must be less than or equal \"",
"\"to input dimension\"",
")",
"# get the distance matrix and neighbors list",
"if",
"geom",
".",
"adjacency_matrix",
"is",
"None",
":",
"geom",
".",
"compute_adjacency_matrix",
"(",
")",
"(",
"rows",
",",
"cols",
")",
"=",
"geom",
".",
"adjacency_matrix",
".",
"nonzero",
"(",
")",
"eigen_solver",
",",
"solver_kwds",
"=",
"check_eigen_solver",
"(",
"eigen_solver",
",",
"solver_kwds",
",",
"size",
"=",
"geom",
".",
"adjacency_matrix",
".",
"shape",
"[",
"0",
"]",
",",
"nvec",
"=",
"n_components",
"+",
"1",
")",
"if",
"eigen_solver",
"!=",
"'dense'",
":",
"M",
"=",
"sparse",
".",
"csr_matrix",
"(",
"(",
"N",
",",
"N",
")",
")",
"else",
":",
"M",
"=",
"np",
".",
"zeros",
"(",
"(",
"N",
",",
"N",
")",
")",
"for",
"i",
"in",
"range",
"(",
"N",
")",
":",
"neighbors_i",
"=",
"cols",
"[",
"rows",
"==",
"i",
"]",
"n_neighbors_i",
"=",
"len",
"(",
"neighbors_i",
")",
"use_svd",
"=",
"(",
"n_neighbors_i",
">",
"d_in",
")",
"Xi",
"=",
"geom",
".",
"X",
"[",
"neighbors_i",
"]",
"Xi",
"-=",
"Xi",
".",
"mean",
"(",
"0",
")",
"# compute n_components largest eigenvalues of Xi * Xi^T",
"if",
"use_svd",
":",
"v",
"=",
"svd",
"(",
"Xi",
",",
"full_matrices",
"=",
"True",
")",
"[",
"0",
"]",
"else",
":",
"Ci",
"=",
"np",
".",
"dot",
"(",
"Xi",
",",
"Xi",
".",
"T",
")",
"v",
"=",
"eigh",
"(",
"Ci",
")",
"[",
"1",
"]",
"[",
":",
",",
":",
":",
"-",
"1",
"]",
"Gi",
"=",
"np",
".",
"zeros",
"(",
"(",
"n_neighbors_i",
",",
"n_components",
"+",
"1",
")",
")",
"Gi",
"[",
":",
",",
"1",
":",
"]",
"=",
"v",
"[",
":",
",",
":",
"n_components",
"]",
"Gi",
"[",
":",
",",
"0",
"]",
"=",
"1.",
"/",
"np",
".",
"sqrt",
"(",
"n_neighbors_i",
")",
"GiGiT",
"=",
"np",
".",
"dot",
"(",
"Gi",
",",
"Gi",
".",
"T",
")",
"nbrs_x",
",",
"nbrs_y",
"=",
"np",
".",
"meshgrid",
"(",
"neighbors_i",
",",
"neighbors_i",
")",
"with",
"warnings",
".",
"catch_warnings",
"(",
")",
":",
"# sparse will complain this is better with lil_matrix but it doesn't work",
"warnings",
".",
"simplefilter",
"(",
"\"ignore\"",
")",
"M",
"[",
"nbrs_x",
",",
"nbrs_y",
"]",
"-=",
"GiGiT",
"M",
"[",
"neighbors_i",
",",
"neighbors_i",
"]",
"+=",
"1",
"return",
"null_space",
"(",
"M",
",",
"n_components",
",",
"k_skip",
"=",
"1",
",",
"eigen_solver",
"=",
"eigen_solver",
",",
"random_state",
"=",
"random_state",
",",
"solver_kwds",
"=",
"solver_kwds",
")"
] |
Perform a Local Tangent Space Alignment analysis on the data.
Parameters
----------
geom : a Geometry object from megaman.geometry.geometry
n_components : integer
number of coordinates for the manifold.
eigen_solver : {'auto', 'dense', 'arpack', 'lobpcg', or 'amg'}
'auto' :
algorithm will attempt to choose the best method for input data
'dense' :
use standard dense matrix operations for the eigenvalue decomposition.
For this method, M must be an array or matrix type. This method should be avoided for large problems.
'arpack' :
use arnoldi iteration in shift-invert mode. For this method,
M may be a dense matrix, sparse matrix, or general linear operator.
Warning: ARPACK can be unstable for some problems. It is best to
try several random seeds in order to check results.
'lobpcg' :
Locally Optimal Block Preconditioned Conjugate Gradient Method.
A preconditioned eigensolver for large symmetric positive definite
(SPD) generalized eigenproblems.
'amg' :
AMG requires pyamg to be installed. It can be faster on very large,
sparse problems, but may also lead to instabilities.
random_state : numpy.RandomState or int, optional
The generator or seed used to determine the starting vector for arpack
iterations. Defaults to numpy.random.
solver_kwds : any additional keyword arguments to pass to the selected eigen_solver
Returns
-------
embedding : array-like, shape [n_samples, n_components]
Embedding vectors.
squared_error : float
Reconstruction error for the embedding vectors. Equivalent to
``norm(Y - W Y, 'fro')**2``, where W are the reconstruction weights.
References
----------
* Zhang, Z. & Zha, H. Principal manifolds and nonlinear
dimensionality reduction via tangent space alignment.
Journal of Shanghai Univ. 8:406 (2004)
|
[
"Perform",
"a",
"Local",
"Tangent",
"Space",
"Alignment",
"analysis",
"on",
"the",
"data",
"."
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/embedding/ltsa.py#L24-L111
|
train
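One neighborhood's contribution to the LTSA alignment matrix can be examined in isolation: center the neighborhood, take its top singular vectors as a tangent basis, append the constant column, and the update I - Gi Gi^T is an orthogonal projection. A self-contained NumPy sketch (no megaman calls):

import numpy as np

# One neighborhood's contribution to the LTSA alignment matrix, in isolation.
rng = np.random.default_rng(0)
n_components = 2
Xi = rng.normal(size=(10, 3))          # 10 neighbors in 3 dimensions
Xi = Xi - Xi.mean(0)                   # center the neighborhood

# Local tangent basis: top singular vectors of the centered neighborhood
v = np.linalg.svd(Xi, full_matrices=True)[0]
Gi = np.zeros((10, n_components + 1))
Gi[:, 1:] = v[:, :n_components]        # tangent coordinates
Gi[:, 0] = 1.0 / np.sqrt(10)           # constant column
GiGiT = Gi @ Gi.T

# The update M[nbrs, nbrs] += I - Gi Gi^T penalizes directions orthogonal
# to the local tangent space; it is idempotent (an orthogonal projection):
P = np.eye(10) - GiGiT
print(np.allclose(P @ P, P))           # True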
|
mmp2/megaman
|
megaman/relaxation/riemannian_relaxation.py
|
run_riemannian_relaxation
|
def run_riemannian_relaxation(laplacian, initial_guess,
intrinsic_dim, relaxation_kwds):
"""Helper function for creating a RiemannianRelaxation class."""
n, s = initial_guess.shape
relaxation_kwds = initialize_kwds(relaxation_kwds, n, s, intrinsic_dim)
if relaxation_kwds['save_init']:
directory = relaxation_kwds['backup_dir']
np.save(os.path.join(directory, 'Y0.npy'),initial_guess)
sp.io.mmwrite(os.path.join(directory, 'L_used.mtx'),
sp.sparse.csc_matrix(laplacian))
lossf = relaxation_kwds['lossf']
return RiemannianRelaxation.init(lossf, laplacian, initial_guess,
intrinsic_dim, relaxation_kwds)
|
python
|
def run_riemannian_relaxation(laplacian, initial_guess,
intrinsic_dim, relaxation_kwds):
"""Helper function for creating a RiemannianRelaxation class."""
n, s = initial_guess.shape
relaxation_kwds = initialize_kwds(relaxation_kwds, n, s, intrinsic_dim)
if relaxation_kwds['save_init']:
directory = relaxation_kwds['backup_dir']
np.save(os.path.join(directory, 'Y0.npy'),initial_guess)
sp.io.mmwrite(os.path.join(directory, 'L_used.mtx'),
sp.sparse.csc_matrix(laplacian))
lossf = relaxation_kwds['lossf']
return RiemannianRelaxation.init(lossf, laplacian, initial_guess,
intrinsic_dim, relaxation_kwds)
|
[
"def",
"run_riemannian_relaxation",
"(",
"laplacian",
",",
"initial_guess",
",",
"intrinsic_dim",
",",
"relaxation_kwds",
")",
":",
"n",
",",
"s",
"=",
"initial_guess",
".",
"shape",
"relaxation_kwds",
"=",
"initialize_kwds",
"(",
"relaxation_kwds",
",",
"n",
",",
"s",
",",
"intrinsic_dim",
")",
"if",
"relaxation_kwds",
"[",
"'save_init'",
"]",
":",
"directory",
"=",
"relaxation_kwds",
"[",
"'backup_dir'",
"]",
"np",
".",
"save",
"(",
"os",
".",
"path",
".",
"join",
"(",
"directory",
",",
"'Y0.npy'",
")",
",",
"initial_guess",
")",
"sp",
".",
"io",
".",
"mmwrite",
"(",
"os",
".",
"path",
".",
"join",
"(",
"directory",
",",
"'L_used.mtx'",
")",
",",
"sp",
".",
"sparse",
".",
"csc_matrix",
"(",
"laplacian",
")",
")",
"lossf",
"=",
"relaxation_kwds",
"[",
"'lossf'",
"]",
"return",
"RiemannianRelaxation",
".",
"init",
"(",
"lossf",
",",
"laplacian",
",",
"initial_guess",
",",
"intrinsic_dim",
",",
"relaxation_kwds",
")"
] |
Helper function for creating a RiemannianRelaxation instance.
|
[
"Helper",
"function",
"for",
"creating",
"a",
"RiemannianRelaxation",
"class",
"."
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/relaxation/riemannian_relaxation.py#L19-L32
|
train
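The save_init branch writes the initial embedding as a .npy file and the Laplacian in Matrix Market format. The I/O can be reproduced standalone with NumPy and SciPy; the toy matrices below are placeholders:

import os
import tempfile
import numpy as np
import scipy.io
import scipy.sparse

# What the save_init branch above writes out, reproduced standalone.
laplacian = scipy.sparse.random(50, 50, density=0.1, random_state=0)
initial_guess = np.random.default_rng(0).normal(size=(50, 2))

directory = tempfile.mkdtemp()
np.save(os.path.join(directory, 'Y0.npy'), initial_guess)
scipy.io.mmwrite(os.path.join(directory, 'L_used.mtx'),
                 scipy.sparse.csc_matrix(laplacian))
print(sorted(os.listdir(directory)))   # ['L_used.mtx', 'Y0.npy']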
|
mmp2/megaman
|
megaman/relaxation/riemannian_relaxation.py
|
RiemannianRelaxation.relax_isometry
|
def relax_isometry(self):
"""Main function for doing riemannian relaxation."""
for ii in range(self.relaxation_kwds['niter']):
self.H = self.compute_dual_rmetric()
self.loss = self.rieman_loss()
self.trace_var.update(ii,self.H,self.Y,self.eta,self.loss)
self.trace_var.print_report(ii)
self.trace_var.save_backup(ii)
self.compute_gradient()
self.make_optimization_step(first_iter=(ii == 0))
self.H = self.compute_dual_rmetric()
self.trace_var.update(-1,self.H,self.Y,self.eta,self.loss)
self.trace_var.print_report(ii)
tracevar_path = os.path.join(self.trace_var.backup_dir, 'results.pyc')
TracingVariable.save(self.trace_var,tracevar_path)
|
python
|
def relax_isometry(self):
"""Main function for doing riemannian relaxation."""
for ii in range(self.relaxation_kwds['niter']):
self.H = self.compute_dual_rmetric()
self.loss = self.rieman_loss()
self.trace_var.update(ii,self.H,self.Y,self.eta,self.loss)
self.trace_var.print_report(ii)
self.trace_var.save_backup(ii)
self.compute_gradient()
self.make_optimization_step(first_iter=(ii == 0))
self.H = self.compute_dual_rmetric()
self.trace_var.update(-1,self.H,self.Y,self.eta,self.loss)
self.trace_var.print_report(ii)
tracevar_path = os.path.join(self.trace_var.backup_dir, 'results.pyc')
TracingVariable.save(self.trace_var,tracevar_path)
|
[
"def",
"relax_isometry",
"(",
"self",
")",
":",
"for",
"ii",
"in",
"range",
"(",
"self",
".",
"relaxation_kwds",
"[",
"'niter'",
"]",
")",
":",
"self",
".",
"H",
"=",
"self",
".",
"compute_dual_rmetric",
"(",
")",
"self",
".",
"loss",
"=",
"self",
".",
"rieman_loss",
"(",
")",
"self",
".",
"trace_var",
".",
"update",
"(",
"ii",
",",
"self",
".",
"H",
",",
"self",
".",
"Y",
",",
"self",
".",
"eta",
",",
"self",
".",
"loss",
")",
"self",
".",
"trace_var",
".",
"print_report",
"(",
"ii",
")",
"self",
".",
"trace_var",
".",
"save_backup",
"(",
"ii",
")",
"self",
".",
"compute_gradient",
"(",
")",
"self",
".",
"make_optimization_step",
"(",
"first_iter",
"=",
"(",
"ii",
"==",
"0",
")",
")",
"self",
".",
"H",
"=",
"self",
".",
"compute_dual_rmetric",
"(",
")",
"self",
".",
"trace_var",
".",
"update",
"(",
"-",
"1",
",",
"self",
".",
"H",
",",
"self",
".",
"Y",
",",
"self",
".",
"eta",
",",
"self",
".",
"loss",
")",
"self",
".",
"trace_var",
".",
"print_report",
"(",
"ii",
")",
"tracevar_path",
"=",
"os",
".",
"path",
".",
"join",
"(",
"self",
".",
"trace_var",
".",
"backup_dir",
",",
"'results.pyc'",
")",
"TracingVariable",
".",
"save",
"(",
"self",
".",
"trace_var",
",",
"tracevar_path",
")"
] |
Main function for performing Riemannian relaxation.
|
[
"Main",
"function",
"for",
"doing",
"riemannian",
"relaxation",
"."
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/relaxation/riemannian_relaxation.py#L77-L96
|
train
|
mmp2/megaman
|
megaman/relaxation/riemannian_relaxation.py
|
RiemannianRelaxation.calc_loss
|
def calc_loss(self, embedding):
"""Helper function to calculate rieman loss given new embedding"""
Hnew = self.compute_dual_rmetric(Ynew=embedding)
return self.rieman_loss(Hnew=Hnew)
|
python
|
def calc_loss(self, embedding):
"""Helper function to calculate rieman loss given new embedding"""
Hnew = self.compute_dual_rmetric(Ynew=embedding)
return self.rieman_loss(Hnew=Hnew)
|
[
"def",
"calc_loss",
"(",
"self",
",",
"embedding",
")",
":",
"Hnew",
"=",
"self",
".",
"compute_dual_rmetric",
"(",
"Ynew",
"=",
"embedding",
")",
"return",
"self",
".",
"rieman_loss",
"(",
"Hnew",
"=",
"Hnew",
")"
] |
Helper function to calculate the Riemannian loss given a new embedding.
|
[
"Helper",
"function",
"to",
"calculate",
"rieman",
"loss",
"given",
"new",
"embedding"
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/relaxation/riemannian_relaxation.py#L98-L101
|
train
|
mmp2/megaman
|
megaman/relaxation/riemannian_relaxation.py
|
RiemannianRelaxation.compute_dual_rmetric
|
def compute_dual_rmetric(self,Ynew=None):
"""Helper function to calculate the """
usedY = self.Y if Ynew is None else Ynew
rieman_metric = RiemannMetric(usedY, self.laplacian_matrix)
return rieman_metric.get_dual_rmetric()
|
python
|
def compute_dual_rmetric(self,Ynew=None):
"""Helper function to calculate the """
usedY = self.Y if Ynew is None else Ynew
rieman_metric = RiemannMetric(usedY, self.laplacian_matrix)
return rieman_metric.get_dual_rmetric()
|
[
"def",
"compute_dual_rmetric",
"(",
"self",
",",
"Ynew",
"=",
"None",
")",
":",
"usedY",
"=",
"self",
".",
"Y",
"if",
"Ynew",
"is",
"None",
"else",
"Ynew",
"rieman_metric",
"=",
"RiemannMetric",
"(",
"usedY",
",",
"self",
".",
"laplacian_matrix",
")",
"return",
"rieman_metric",
".",
"get_dual_rmetric",
"(",
")"
] |
Helper function to calculate the dual Riemannian metric.
|
[
"Helper",
"function",
"to",
"calculate",
"the"
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/relaxation/riemannian_relaxation.py#L103-L107
|
train
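The dual Riemannian metric here appears to follow the Laplacian-based estimator of Perrault-Joncas and Meilă, h_ij(k) = 1/2 [L(y_i y_j) - y_i L y_j - y_j L y_i](k). A self-contained sketch under that assumption (the RiemannMetric class itself may differ in conventions); dual_rmetric is an illustrative name:

import numpy as np

def dual_rmetric(Y, L):
    """h[k, i, j] = 0.5 * (L(y_i*y_j) - y_i*L(y_j) - y_j*L(y_i)) at point k."""
    n, s = Y.shape
    H = np.empty((n, s, s))
    LY = L @ Y
    for i in range(s):
        for j in range(s):
            H[:, i, j] = 0.5 * (L @ (Y[:, i] * Y[:, j])
                                - Y[:, i] * LY[:, j]
                                - Y[:, j] * LY[:, i])
    return H

rng = np.random.default_rng(0)
Y = rng.normal(size=(30, 2))
L = rng.normal(size=(30, 30)); L = (L + L.T) / 2   # toy symmetric operator
H = dual_rmetric(Y, L)
print(H.shape)                               # (30, 2, 2): one dual metric per point
print(np.allclose(H, H.transpose(0, 2, 1)))  # symmetric at each point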
|
mmp2/megaman
|
doc/sphinxext/numpy_ext/automodsumm.py
|
automodsumm_to_autosummary_lines
|
def automodsumm_to_autosummary_lines(fn, app):
"""
Generates lines from a file with an "automodsumm" entry suitable for
feeding into "autosummary".
Searches the provided file for `automodsumm` directives and returns
a list of lines specifying the `autosummary` commands for the modules
requested. This does *not* return the whole file contents - just an
autosummary section in place of any :automodsumm: entries. Note that
any options given for `automodsumm` are also included in the
generated `autosummary` section.
Parameters
----------
fn : str
The name of the file to search for `automodsumm` entries.
app : sphinx.application.Application
The sphinx Application object
Return
------
lines : list of str
Lines for all `automodsumm` entries with the entries replaced by
`autosummary` and the module's members added.
"""
fullfn = os.path.join(app.builder.env.srcdir, fn)
with open(fullfn) as fr:
if 'astropy_helpers.sphinx.ext.automodapi' in app._extensions:
from astropy_helpers.sphinx.ext.automodapi import automodapi_replace
# Must do the automodapi on the source to get the automodsumm
# that might be in there
docname = os.path.splitext(fn)[0]
filestr = automodapi_replace(fr.read(), app, True, docname, False)
else:
filestr = fr.read()
spl = _automodsummrex.split(filestr)
#0th entry is the stuff before the first automodsumm line
indent1s = spl[1::5]
mods = spl[2::5]
opssecs = spl[3::5]
indent2s = spl[4::5]
remainders = spl[5::5]
# only grab automodsumm sections and convert them to autosummary with the
# entries for all the public objects
newlines = []
#loop over all automodsumms in this document
for i, (i1, i2, modnm, ops, rem) in enumerate(zip(indent1s, indent2s, mods,
opssecs, remainders)):
allindent = i1 + ('' if i2 is None else i2)
#filter out functions-only and classes-only options if present
oplines = ops.split('\n')
toskip = []
allowedpkgnms = []
funcsonly = clssonly = False
for i, ln in reversed(list(enumerate(oplines))):
if ':functions-only:' in ln:
funcsonly = True
del oplines[i]
if ':classes-only:' in ln:
clssonly = True
del oplines[i]
if ':skip:' in ln:
toskip.extend(_str_list_converter(ln.replace(':skip:', '')))
del oplines[i]
if ':allowed-package-names:' in ln:
allowedpkgnms.extend(_str_list_converter(ln.replace(':allowed-package-names:', '')))
del oplines[i]
if funcsonly and clssonly:
msg = ('Defined both functions-only and classes-only options. '
'Skipping this directive.')
lnnum = sum([spl[j].count('\n') for j in range(i * 5 + 1)])
app.warn('[automodsumm]' + msg, (fn, lnnum))
continue
# Use the currentmodule directive so we can just put the local names
# in the autosummary table. Note that this doesn't always seem to
# actually "take" in Sphinx's eyes, so in `Automodsumm.run`, we have to
# force it internally, as well.
newlines.extend([i1 + '.. currentmodule:: ' + modnm,
'',
'.. autosummary::'])
newlines.extend(oplines)
ols = True if len(allowedpkgnms) == 0 else allowedpkgnms
for nm, fqn, obj in zip(*find_mod_objs(modnm, onlylocals=ols)):
if nm in toskip:
continue
if funcsonly and not inspect.isroutine(obj):
continue
if clssonly and not inspect.isclass(obj):
continue
newlines.append(allindent + nm)
# add one newline at the end of the autosummary block
newlines.append('')
return newlines
|
python
|
def automodsumm_to_autosummary_lines(fn, app):
"""
Generates lines from a file with an "automodsumm" entry suitable for
feeding into "autosummary".
Searches the provided file for `automodsumm` directives and returns
a list of lines specifying the `autosummary` commands for the modules
requested. This does *not* return the whole file contents - just an
autosummary section in place of any :automodsumm: entries. Note that
any options given for `automodsumm` are also included in the
generated `autosummary` section.
Parameters
----------
fn : str
The name of the file to search for `automodsumm` entries.
app : sphinx.application.Application
The sphinx Application object
Return
------
lines : list of str
Lines for all `automodsumm` entries with the entries replaced by
`autosummary` and the module's members added.
"""
fullfn = os.path.join(app.builder.env.srcdir, fn)
with open(fullfn) as fr:
if 'astropy_helpers.sphinx.ext.automodapi' in app._extensions:
from astropy_helpers.sphinx.ext.automodapi import automodapi_replace
# Must do the automodapi on the source to get the automodsumm
# that might be in there
docname = os.path.splitext(fn)[0]
filestr = automodapi_replace(fr.read(), app, True, docname, False)
else:
filestr = fr.read()
spl = _automodsummrex.split(filestr)
#0th entry is the stuff before the first automodsumm line
indent1s = spl[1::5]
mods = spl[2::5]
opssecs = spl[3::5]
indent2s = spl[4::5]
remainders = spl[5::5]
# only grab automodsumm sections and convert them to autosummary with the
# entries for all the public objects
newlines = []
#loop over all automodsumms in this document
for i, (i1, i2, modnm, ops, rem) in enumerate(zip(indent1s, indent2s, mods,
opssecs, remainders)):
allindent = i1 + ('' if i2 is None else i2)
#filter out functions-only and classes-only options if present
oplines = ops.split('\n')
toskip = []
allowedpkgnms = []
funcsonly = clssonly = False
for i, ln in reversed(list(enumerate(oplines))):
if ':functions-only:' in ln:
funcsonly = True
del oplines[i]
if ':classes-only:' in ln:
clssonly = True
del oplines[i]
if ':skip:' in ln:
toskip.extend(_str_list_converter(ln.replace(':skip:', '')))
del oplines[i]
if ':allowed-package-names:' in ln:
allowedpkgnms.extend(_str_list_converter(ln.replace(':allowed-package-names:', '')))
del oplines[i]
if funcsonly and clssonly:
msg = ('Defined both functions-only and classes-only options. '
'Skipping this directive.')
lnnum = sum([spl[j].count('\n') for j in range(i * 5 + 1)])
app.warn('[automodsumm]' + msg, (fn, lnnum))
continue
# Use the currentmodule directive so we can just put the local names
# in the autosummary table. Note that this doesn't always seem to
# actually "take" in Sphinx's eyes, so in `Automodsumm.run`, we have to
# force it internally, as well.
newlines.extend([i1 + '.. currentmodule:: ' + modnm,
'',
'.. autosummary::'])
newlines.extend(oplines)
ols = True if len(allowedpkgnms) == 0 else allowedpkgnms
for nm, fqn, obj in zip(*find_mod_objs(modnm, onlylocals=ols)):
if nm in toskip:
continue
if funcsonly and not inspect.isroutine(obj):
continue
if clssonly and not inspect.isclass(obj):
continue
newlines.append(allindent + nm)
# add one newline at the end of the autosummary block
newlines.append('')
return newlines
|
[
"def",
"automodsumm_to_autosummary_lines",
"(",
"fn",
",",
"app",
")",
":",
"fullfn",
"=",
"os",
".",
"path",
".",
"join",
"(",
"app",
".",
"builder",
".",
"env",
".",
"srcdir",
",",
"fn",
")",
"with",
"open",
"(",
"fullfn",
")",
"as",
"fr",
":",
"if",
"'astropy_helpers.sphinx.ext.automodapi'",
"in",
"app",
".",
"_extensions",
":",
"from",
"astropy_helpers",
".",
"sphinx",
".",
"ext",
".",
"automodapi",
"import",
"automodapi_replace",
"# Must do the automodapi on the source to get the automodsumm",
"# that might be in there",
"docname",
"=",
"os",
".",
"path",
".",
"splitext",
"(",
"fn",
")",
"[",
"0",
"]",
"filestr",
"=",
"automodapi_replace",
"(",
"fr",
".",
"read",
"(",
")",
",",
"app",
",",
"True",
",",
"docname",
",",
"False",
")",
"else",
":",
"filestr",
"=",
"fr",
".",
"read",
"(",
")",
"spl",
"=",
"_automodsummrex",
".",
"split",
"(",
"filestr",
")",
"#0th entry is the stuff before the first automodsumm line",
"indent1s",
"=",
"spl",
"[",
"1",
":",
":",
"5",
"]",
"mods",
"=",
"spl",
"[",
"2",
":",
":",
"5",
"]",
"opssecs",
"=",
"spl",
"[",
"3",
":",
":",
"5",
"]",
"indent2s",
"=",
"spl",
"[",
"4",
":",
":",
"5",
"]",
"remainders",
"=",
"spl",
"[",
"5",
":",
":",
"5",
"]",
"# only grab automodsumm sections and convert them to autosummary with the",
"# entries for all the public objects",
"newlines",
"=",
"[",
"]",
"#loop over all automodsumms in this document",
"for",
"i",
",",
"(",
"i1",
",",
"i2",
",",
"modnm",
",",
"ops",
",",
"rem",
")",
"in",
"enumerate",
"(",
"zip",
"(",
"indent1s",
",",
"indent2s",
",",
"mods",
",",
"opssecs",
",",
"remainders",
")",
")",
":",
"allindent",
"=",
"i1",
"+",
"(",
"''",
"if",
"i2",
"is",
"None",
"else",
"i2",
")",
"#filter out functions-only and classes-only options if present",
"oplines",
"=",
"ops",
".",
"split",
"(",
"'\\n'",
")",
"toskip",
"=",
"[",
"]",
"allowedpkgnms",
"=",
"[",
"]",
"funcsonly",
"=",
"clssonly",
"=",
"False",
"for",
"i",
",",
"ln",
"in",
"reversed",
"(",
"list",
"(",
"enumerate",
"(",
"oplines",
")",
")",
")",
":",
"if",
"':functions-only:'",
"in",
"ln",
":",
"funcsonly",
"=",
"True",
"del",
"oplines",
"[",
"i",
"]",
"if",
"':classes-only:'",
"in",
"ln",
":",
"clssonly",
"=",
"True",
"del",
"oplines",
"[",
"i",
"]",
"if",
"':skip:'",
"in",
"ln",
":",
"toskip",
".",
"extend",
"(",
"_str_list_converter",
"(",
"ln",
".",
"replace",
"(",
"':skip:'",
",",
"''",
")",
")",
")",
"del",
"oplines",
"[",
"i",
"]",
"if",
"':allowed-package-names:'",
"in",
"ln",
":",
"allowedpkgnms",
".",
"extend",
"(",
"_str_list_converter",
"(",
"ln",
".",
"replace",
"(",
"':allowed-package-names:'",
",",
"''",
")",
")",
")",
"del",
"oplines",
"[",
"i",
"]",
"if",
"funcsonly",
"and",
"clssonly",
":",
"msg",
"=",
"(",
"'Defined both functions-only and classes-only options. '",
"'Skipping this directive.'",
")",
"lnnum",
"=",
"sum",
"(",
"[",
"spl",
"[",
"j",
"]",
".",
"count",
"(",
"'\\n'",
")",
"for",
"j",
"in",
"range",
"(",
"i",
"*",
"5",
"+",
"1",
")",
"]",
")",
"app",
".",
"warn",
"(",
"'[automodsumm]'",
"+",
"msg",
",",
"(",
"fn",
",",
"lnnum",
")",
")",
"continue",
"# Use the currentmodule directive so we can just put the local names",
"# in the autosummary table. Note that this doesn't always seem to",
"# actually \"take\" in Sphinx's eyes, so in `Automodsumm.run`, we have to",
"# force it internally, as well.",
"newlines",
".",
"extend",
"(",
"[",
"i1",
"+",
"'.. currentmodule:: '",
"+",
"modnm",
",",
"''",
",",
"'.. autosummary::'",
"]",
")",
"newlines",
".",
"extend",
"(",
"oplines",
")",
"ols",
"=",
"True",
"if",
"len",
"(",
"allowedpkgnms",
")",
"==",
"0",
"else",
"allowedpkgnms",
"for",
"nm",
",",
"fqn",
",",
"obj",
"in",
"zip",
"(",
"*",
"find_mod_objs",
"(",
"modnm",
",",
"onlylocals",
"=",
"ols",
")",
")",
":",
"if",
"nm",
"in",
"toskip",
":",
"continue",
"if",
"funcsonly",
"and",
"not",
"inspect",
".",
"isroutine",
"(",
"obj",
")",
":",
"continue",
"if",
"clssonly",
"and",
"not",
"inspect",
".",
"isclass",
"(",
"obj",
")",
":",
"continue",
"newlines",
".",
"append",
"(",
"allindent",
"+",
"nm",
")",
"# add one newline at the end of the autosummary block",
"newlines",
".",
"append",
"(",
"''",
")",
"return",
"newlines"
] |
Generates lines from a file with an "automodsumm" entry suitable for
feeding into "autosummary".
Searches the provided file for `automodsumm` directives and returns
a list of lines specifying the `autosummary` commands for the modules
requested. This does *not* return the whole file contents - just an
autosummary section in place of any :automodsumm: entries. Note that
any options given for `automodsumm` are also included in the
generated `autosummary` section.
Parameters
----------
fn : str
The name of the file to search for `automodsumm` entries.
app : sphinx.application.Application
The sphinx Application object
Return
------
lines : list of str
Lines for all `automodsumm` entries with the entries replaced by
`autosummary` and the module's members added.
|
[
"Generates",
"lines",
"from",
"a",
"file",
"with",
"an",
"automodsumm",
"entry",
"suitable",
"for",
"feeding",
"into",
"autosummary",
"."
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/doc/sphinxext/numpy_ext/automodsumm.py#L265-L369
|
train
|
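The strided slices spl[1::5] through spl[5::5] above work because _automodsummrex evidently has four capture groups: re.split then returns the leading text followed, per match, by the four group strings plus the text up to the next match, i.e. five entries per match. A self-contained illustration with a toy four-group pattern:

import re

rex = re.compile(r'(a)(b)(c)(d)')
spl = rex.split('XXabcdYYabcdZZ')
# spl == ['XX', 'a', 'b', 'c', 'd', 'YY', 'a', 'b', 'c', 'd', 'ZZ']
group1s = spl[1::5]      # ['a', 'a']   -- first capture group of every match
remainders = spl[5::5]   # ['YY', 'ZZ'] -- text following each match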
mmp2/megaman
|
megaman/geometry/adjacency.py
|
compute_adjacency_matrix
|
def compute_adjacency_matrix(X, method='auto', **kwargs):
"""Compute an adjacency matrix with the given method"""
if method == 'auto':
if X.shape[0] > 10000:
method = 'cyflann'
else:
method = 'kd_tree'
return Adjacency.init(method, **kwargs).adjacency_graph(X.astype('float'))
|
python
|
def compute_adjacency_matrix(X, method='auto', **kwargs):
"""Compute an adjacency matrix with the given method"""
if method == 'auto':
if X.shape[0] > 10000:
method = 'cyflann'
else:
method = 'kd_tree'
return Adjacency.init(method, **kwargs).adjacency_graph(X.astype('float'))
|
[
"def",
"compute_adjacency_matrix",
"(",
"X",
",",
"method",
"=",
"'auto'",
",",
"*",
"*",
"kwargs",
")",
":",
"if",
"method",
"==",
"'auto'",
":",
"if",
"X",
".",
"shape",
"[",
"0",
"]",
">",
"10000",
":",
"method",
"=",
"'cyflann'",
"else",
":",
"method",
"=",
"'kd_tree'",
"return",
"Adjacency",
".",
"init",
"(",
"method",
",",
"*",
"*",
"kwargs",
")",
".",
"adjacency_graph",
"(",
"X",
".",
"astype",
"(",
"'float'",
")",
")"
] |
Compute an adjacency matrix with the given method
|
[
"Compute",
"an",
"adjacency",
"matrix",
"with",
"the",
"given",
"method"
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/geometry/adjacency.py#L17-L24
|
train
|
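A hedged usage sketch for compute_adjacency_matrix, assuming megaman is installed; the radius keyword is an assumption inferred from Geometry.set_radius further down, which stores a 'radius' entry in adjacency_kwds:

import numpy as np
from megaman.geometry.adjacency import compute_adjacency_matrix

X = np.random.rand(200, 3)
# 200 samples is below the 10000 cutoff, so method='auto' resolves to 'kd_tree'
adj = compute_adjacency_matrix(X, method='auto', radius=0.5)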
mmp2/megaman
|
megaman/relaxation/utils.py
|
split_kwargs
|
def split_kwargs(relaxation_kwds):
"""Split relaxation keywords to keywords for optimizer and others"""
optimizer_keys_list = [
'step_method',
'linesearch',
'eta_max',
'eta',
'm',
'linesearch_first'
]
optimizer_kwargs = { k:relaxation_kwds.pop(k) for k in optimizer_keys_list if k in relaxation_kwds }
if 'm' in optimizer_kwargs:
optimizer_kwargs['momentum'] = optimizer_kwargs.pop('m')
return optimizer_kwargs, relaxation_kwds
|
python
|
def split_kwargs(relaxation_kwds):
"""Split relaxation keywords to keywords for optimizer and others"""
optimizer_keys_list = [
'step_method',
'linesearch',
'eta_max',
'eta',
'm',
'linesearch_first'
]
optimizer_kwargs = { k:relaxation_kwds.pop(k) for k in optimizer_keys_list if k in relaxation_kwds }
if 'm' in optimizer_kwargs:
optimizer_kwargs['momentum'] = optimizer_kwargs.pop('m')
return optimizer_kwargs, relaxation_kwds
|
[
"def",
"split_kwargs",
"(",
"relaxation_kwds",
")",
":",
"optimizer_keys_list",
"=",
"[",
"'step_method'",
",",
"'linesearch'",
",",
"'eta_max'",
",",
"'eta'",
",",
"'m'",
",",
"'linesearch_first'",
"]",
"optimizer_kwargs",
"=",
"{",
"k",
":",
"relaxation_kwds",
".",
"pop",
"(",
"k",
")",
"for",
"k",
"in",
"optimizer_keys_list",
"if",
"k",
"in",
"relaxation_kwds",
"}",
"if",
"'m'",
"in",
"optimizer_kwargs",
":",
"optimizer_kwargs",
"[",
"'momentum'",
"]",
"=",
"optimizer_kwargs",
".",
"pop",
"(",
"'m'",
")",
"return",
"optimizer_kwargs",
",",
"relaxation_kwds"
] |
Split relaxation keywords into keywords for the optimizer and others
|
[
"Split",
"relaxation",
"keywords",
"to",
"keywords",
"for",
"optimizer",
"and",
"others"
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/relaxation/utils.py#L10-L23
|
train
|
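Because split_kwargs pops from the dict it is given, it both returns the optimizer keywords and shrinks the original mapping in place; a short usage sketch, assuming megaman is installed:

from megaman.relaxation.utils import split_kwargs

kwds = {'step_method': 'momentum', 'm': 0.05, 'niter': 500}
opt_kwds, rest = split_kwargs(kwds)
# opt_kwds == {'step_method': 'momentum', 'momentum': 0.05}  ('m' is renamed)
# rest is the same dict object, now reduced to {'niter': 500} by the pop calls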
mmp2/megaman
|
megaman/relaxation/utils.py
|
initialize_kwds
|
def initialize_kwds(relaxation_kwds, n_samples, n_components, intrinsic_dim):
"""
Initialize relaxation keywords.
Parameters
----------
relaxation_kwds : dict
weights : numpy array, the weights
step_method : string { 'fixed', 'momentum' }
which optimizers to use
linesearch : bool
whether to do linesearch in search for eta in optimization
verbose : bool
whether to print reports to I/O when doing relaxation
niter : int
number of iterations to run.
niter_trace : int
number of iterations to be traced.
presave : bool
whether to store precomputed keywords to files or not.
sqrd : bool
whether to use squared norm in loss function. Default : True
alpha : float
shrinkage rate for previous gradient. Default : 0
projected : bool
whether or not to optimize via projected gradient descent on differences S
lossf : string { 'epsilon', 'rloss' }
which loss function to optimize.
Default : 'rloss' if n == d, otherwise 'epsilon'
subset : numpy array
Subset to do relaxation on.
sub_dir : string
sub_dir used to store the outputs.
backup_base_dir : string
base directory used to store outputs
Final path will be backup_base_dir/sub_dir
saveiter : int
save backup on every saveiter iterations
printiter : int
print report on every printiter iterations
save_init : bool
whether to save Y0 and L before running relaxation.
"""
new_relaxation_kwds = {
'weights': np.array([],dtype=np.float64),
'step_method': 'fixed',
'linesearch': True,
'verbose': False,
'niter': 2000,
'niter_trace': 0,
'presave': False,
'sqrd': True,
'alpha': 0,
'projected': False,
'lossf': 'epsilon' if n_components > intrinsic_dim else 'rloss',
'subset': np.arange(n_samples),
'sub_dir': current_time_str(),
'backup_base_dir': default_basedir,
'saveiter': 10,
'printiter': 1,
'save_init': False,
}
new_relaxation_kwds.update(relaxation_kwds)
backup_dir = os.path.join(new_relaxation_kwds['backup_base_dir'], new_relaxation_kwds['sub_dir'])
new_relaxation_kwds['backup_dir'] = backup_dir
create_output_dir(backup_dir)
new_relaxation_kwds = convert_to_int(new_relaxation_kwds)
if new_relaxation_kwds['weights'].shape[0] != 0:
weights = np.absolute(new_relaxation_kwds['weights']).astype(np.float64)
new_relaxation_kwds['weights'] = weights / np.sum(weights)
if new_relaxation_kwds['lossf'] == 'epsilon':
new_relaxation_kwds.setdefault('eps_orth', 0.1)
if n_components != intrinsic_dim and new_relaxation_kwds['lossf'] == 'rloss':
raise ValueError('loss function rloss is for n_components equal intrinsic_dim')
if n_components == intrinsic_dim and new_relaxation_kwds['lossf'] == 'epsilon':
raise ValueError('loss function epsilon is for n_components not equal intrinsic_dim')
if new_relaxation_kwds['projected'] and new_relaxation_kwds['subset'].shape[0] < n_samples:
raise ValueError('Projection derivative not working for subset methods.')
prefix = 'projected' if new_relaxation_kwds['projected'] else 'nonprojected'
new_relaxation_kwds['lossf'] = '{}_{}'.format(prefix,new_relaxation_kwds['lossf'])
step_method = new_relaxation_kwds['step_method']
if new_relaxation_kwds['linesearch'] == True:
new_relaxation_kwds.setdefault('linesearch_first', False)
init_eta_max = 2**11 if new_relaxation_kwds['projected'] else 2**4
new_relaxation_kwds.setdefault('eta_max',init_eta_max)
else:
new_relaxation_kwds.setdefault('eta', 1.0)
if step_method == 'momentum':
new_relaxation_kwds.setdefault('m', 0.05)
return new_relaxation_kwds
|
python
|
def initialize_kwds(relaxation_kwds, n_samples, n_components, intrinsic_dim):
"""
Initialize relaxation keywords.
Parameters
----------
relaxation_kwds : dict
weights : numpy array, the weights
step_method : string { 'fixed', 'momentum' }
which optimizers to use
linesearch : bool
whether to do linesearch in search for eta in optimization
verbose : bool
whether to print reports to I/O when doing relaxation
niter : int
number of iterations to run.
niter_trace : int
number of iterations to be traced.
presave : bool
whether to store precomputed keywords to files or not.
sqrd : bool
whether to use squared norm in loss function. Default : True
alpha : float
shrinkage rate for previous gradient. Default : 0
projected : bool
whether or not to optimize via projected gradient descent on differences S
lossf : string { 'epsilon', 'rloss' }
which loss function to optimize.
Default : 'rloss' if n == d, otherwise 'epsilon'
subset : numpy array
Subset to do relaxation on.
sub_dir : string
sub_dir used to store the outputs.
backup_base_dir : string
base directory used to store outputs
Final path will be backup_base_dir/sub_dir
saveiter : int
save backup on every saveiter iterations
printiter : int
print report on every printiter iterations
save_init : bool
whether to save Y0 and L before running relaxation.
"""
new_relaxation_kwds = {
'weights': np.array([],dtype=np.float64),
'step_method': 'fixed',
'linesearch': True,
'verbose': False,
'niter': 2000,
'niter_trace': 0,
'presave': False,
'sqrd': True,
'alpha': 0,
'projected': False,
'lossf': 'epsilon' if n_components > intrinsic_dim else 'rloss',
'subset': np.arange(n_samples),
'sub_dir': current_time_str(),
'backup_base_dir': default_basedir,
'saveiter': 10,
'printiter': 1,
'save_init': False,
}
new_relaxation_kwds.update(relaxation_kwds)
backup_dir = os.path.join(new_relaxation_kwds['backup_base_dir'], new_relaxation_kwds['sub_dir'])
new_relaxation_kwds['backup_dir'] = backup_dir
create_output_dir(backup_dir)
new_relaxation_kwds = convert_to_int(new_relaxation_kwds)
if new_relaxation_kwds['weights'].shape[0] != 0:
weights = np.absolute(new_relaxation_kwds['weights']).astype(np.float64)
new_relaxation_kwds['weights'] = weights / np.sum(weights)
if new_relaxation_kwds['lossf'] == 'epsilon':
new_relaxation_kwds.setdefault('eps_orth', 0.1)
if n_components != intrinsic_dim and new_relaxation_kwds['lossf'] == 'rloss':
raise ValueError('loss function rloss is for n_components equal intrinsic_dim')
if n_components == intrinsic_dim and new_relaxation_kwds['lossf'] == 'epsilon':
raise ValueError('loss function epsilon is for n_components not equal intrinsic_dim')
if new_relaxation_kwds['projected'] and new_relaxation_kwds['subset'].shape[0] < n_samples:
raise ValueError('Projection derivative not working for subset methods.')
prefix = 'projected' if new_relaxation_kwds['projected'] else 'nonprojected'
new_relaxation_kwds['lossf'] = '{}_{}'.format(prefix,new_relaxation_kwds['lossf'])
step_method = new_relaxation_kwds['step_method']
if new_relaxation_kwds['linesearch'] == True:
new_relaxation_kwds.setdefault('linesearch_first', False)
init_eta_max = 2**11 if new_relaxation_kwds['projected'] else 2**4
new_relaxation_kwds.setdefault('eta_max',init_eta_max)
else:
new_relaxation_kwds.setdefault('eta', 1.0)
if step_method == 'momentum':
new_relaxation_kwds.setdefault('m', 0.05)
return new_relaxation_kwds
|
[
"def",
"initialize_kwds",
"(",
"relaxation_kwds",
",",
"n_samples",
",",
"n_components",
",",
"intrinsic_dim",
")",
":",
"new_relaxation_kwds",
"=",
"{",
"'weights'",
":",
"np",
".",
"array",
"(",
"[",
"]",
",",
"dtype",
"=",
"np",
".",
"float64",
")",
",",
"'step_method'",
":",
"'fixed'",
",",
"'linesearch'",
":",
"True",
",",
"'verbose'",
":",
"False",
",",
"'niter'",
":",
"2000",
",",
"'niter_trace'",
":",
"0",
",",
"'presave'",
":",
"False",
",",
"'sqrd'",
":",
"True",
",",
"'alpha'",
":",
"0",
",",
"'projected'",
":",
"False",
",",
"'lossf'",
":",
"'epsilon'",
"if",
"n_components",
">",
"intrinsic_dim",
"else",
"'rloss'",
",",
"'subset'",
":",
"np",
".",
"arange",
"(",
"n_samples",
")",
",",
"'sub_dir'",
":",
"current_time_str",
"(",
")",
",",
"'backup_base_dir'",
":",
"default_basedir",
",",
"'saveiter'",
":",
"10",
",",
"'printiter'",
":",
"1",
",",
"'save_init'",
":",
"False",
",",
"}",
"new_relaxation_kwds",
".",
"update",
"(",
"relaxation_kwds",
")",
"backup_dir",
"=",
"os",
".",
"path",
".",
"join",
"(",
"new_relaxation_kwds",
"[",
"'backup_base_dir'",
"]",
",",
"new_relaxation_kwds",
"[",
"'sub_dir'",
"]",
")",
"new_relaxation_kwds",
"[",
"'backup_dir'",
"]",
"=",
"backup_dir",
"create_output_dir",
"(",
"backup_dir",
")",
"new_relaxation_kwds",
"=",
"convert_to_int",
"(",
"new_relaxation_kwds",
")",
"if",
"new_relaxation_kwds",
"[",
"'weights'",
"]",
".",
"shape",
"[",
"0",
"]",
"!=",
"0",
":",
"weights",
"=",
"np",
".",
"absolute",
"(",
"new_relaxation_kwds",
"[",
"'weights'",
"]",
")",
".",
"astype",
"(",
"np",
".",
"float64",
")",
"new_relaxation_kwds",
"[",
"'weights'",
"]",
"=",
"weights",
"/",
"np",
".",
"sum",
"(",
"weights",
")",
"if",
"new_relaxation_kwds",
"[",
"'lossf'",
"]",
"==",
"'epsilon'",
":",
"new_relaxation_kwds",
".",
"setdefault",
"(",
"'eps_orth'",
",",
"0.1",
")",
"if",
"n_components",
"!=",
"intrinsic_dim",
"and",
"new_relaxation_kwds",
"[",
"'lossf'",
"]",
"==",
"'rloss'",
":",
"raise",
"ValueError",
"(",
"'loss function rloss is for n_components equal intrinsic_dim'",
")",
"if",
"n_components",
"==",
"intrinsic_dim",
"and",
"new_relaxation_kwds",
"[",
"'lossf'",
"]",
"==",
"'epsilon'",
":",
"raise",
"ValueError",
"(",
"'loss function rloss is for n_components equal intrinsic_dim'",
")",
"if",
"new_relaxation_kwds",
"[",
"'projected'",
"]",
"and",
"new_relaxation_kwds",
"[",
"'subset'",
"]",
".",
"shape",
"[",
"0",
"]",
"<",
"n_samples",
":",
"raise",
"ValueError",
"(",
"'Projection derivative not working for subset methods.'",
")",
"prefix",
"=",
"'projected'",
"if",
"new_relaxation_kwds",
"[",
"'projected'",
"]",
"else",
"'nonprojected'",
"new_relaxation_kwds",
"[",
"'lossf'",
"]",
"=",
"'{}_{}'",
".",
"format",
"(",
"prefix",
",",
"new_relaxation_kwds",
"[",
"'lossf'",
"]",
")",
"step_method",
"=",
"new_relaxation_kwds",
"[",
"'step_method'",
"]",
"if",
"new_relaxation_kwds",
"[",
"'linesearch'",
"]",
"==",
"True",
":",
"new_relaxation_kwds",
".",
"setdefault",
"(",
"'linesearch_first'",
",",
"False",
")",
"init_eta_max",
"=",
"2",
"**",
"11",
"if",
"new_relaxation_kwds",
"[",
"'projected'",
"]",
"else",
"2",
"**",
"4",
"new_relaxation_kwds",
".",
"setdefault",
"(",
"'eta_max'",
",",
"init_eta_max",
")",
"else",
":",
"new_relaxation_kwds",
".",
"setdefault",
"(",
"'eta'",
",",
"1.0",
")",
"if",
"step_method",
"==",
"'momentum'",
":",
"new_relaxation_kwds",
".",
"setdefault",
"(",
"'m'",
",",
"0.05",
")",
"return",
"new_relaxation_kwds"
] |
Initialize relaxation keywords.
Parameters
----------
relaxation_kwds : dict
weights : numpy array, the weights
step_method : string { 'fixed', 'momentum' }
which optimizers to use
linesearch : bool
whether to do linesearch in search for eta in optimization
verbose : bool
whether to print reports to I/O when doing relaxation
niter : int
number of iterations to run.
niter_trace : int
number of iterations to be traced.
presave : bool
whether to store precomputed keywords to files or not.
sqrd : bool
whether to use squared norm in loss function. Default : True
alpha : float
shrinkage rate for previous gradient. Default : 0
projected : bool
whether or not to optimize via projected gradient descent on differences S
lossf : string { 'epsilon', 'rloss' }
which loss function to optimize.
Default : 'rloss' if n == d, otherwise 'epsilon'
subset : numpy array
Subset to do relaxation on.
sub_dir : string
sub_dir used to store the outputs.
backup_base_dir : string
base directory used to store outputs
Final path will be backup_base_dir/sub_dir
saveiter : int
save backup on every saveiter iterations
printiter : int
print report on every printiter iterations
save_init : bool
whether to save Y0 and L before running relaxation.
|
[
"Initialize",
"relaxation",
"keywords",
"."
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/relaxation/utils.py#L26-L127
|
train
|
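A minimal sketch of initialize_kwds with all defaults, assuming megaman is installed; note the call has a side effect, creating backup_dir on disk via create_output_dir. With n_components equal to intrinsic_dim the 'rloss' loss is selected and then prefixed:

from megaman.relaxation.utils import initialize_kwds

kwds = initialize_kwds({}, n_samples=100, n_components=2, intrinsic_dim=2)
assert kwds['lossf'] == 'nonprojected_rloss'  # 'rloss' prefixed with 'nonprojected'
assert kwds['eta_max'] == 2 ** 4              # linesearch default when projected=False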
mmp2/megaman
|
megaman/embedding/spectral_embedding.py
|
_graph_connected_component
|
def _graph_connected_component(graph, node_id):
"""
Find the largest graph connected component that contains one
given node
Parameters
----------
graph : array-like, shape: (n_samples, n_samples)
adjacency matrix of the graph, non-zero weight means an edge
between the nodes
node_id : int
The index of the query node of the graph
Returns
-------
connected_components : array-like, shape: (n_samples,)
An array of bool values indicating the indexes of the nodes
that belong to the largest connected component of the given
query node
"""
connected_components = np.zeros(shape=(graph.shape[0]), dtype=bool)
connected_components[node_id] = True
n_node = graph.shape[0]
for i in range(n_node):
last_num_component = connected_components.sum()
_, node_to_add = np.where(graph[connected_components] != 0)
connected_components[node_to_add] = True
if last_num_component >= connected_components.sum():
break
return connected_components
|
python
|
def _graph_connected_component(graph, node_id):
"""
Find the largest graph connected component that contains one
given node
Parameters
----------
graph : array-like, shape: (n_samples, n_samples)
adjacency matrix of the graph, non-zero weight means an edge
between the nodes
node_id : int
The index of the query node of the graph
Returns
-------
connected_components : array-like, shape: (n_samples,)
An array of bool values indicating the indexes of the nodes
that belong to the largest connected component of the given
query node
"""
connected_components = np.zeros(shape=(graph.shape[0]), dtype=bool)
connected_components[node_id] = True
n_node = graph.shape[0]
for i in range(n_node):
last_num_component = connected_components.sum()
_, node_to_add = np.where(graph[connected_components] != 0)
connected_components[node_to_add] = True
if last_num_component >= connected_components.sum():
break
return connected_components
|
[
"def",
"_graph_connected_component",
"(",
"graph",
",",
"node_id",
")",
":",
"connected_components",
"=",
"np",
".",
"zeros",
"(",
"shape",
"=",
"(",
"graph",
".",
"shape",
"[",
"0",
"]",
")",
",",
"dtype",
"=",
"np",
".",
"bool",
")",
"connected_components",
"[",
"node_id",
"]",
"=",
"True",
"n_node",
"=",
"graph",
".",
"shape",
"[",
"0",
"]",
"for",
"i",
"in",
"range",
"(",
"n_node",
")",
":",
"last_num_component",
"=",
"connected_components",
".",
"sum",
"(",
")",
"_",
",",
"node_to_add",
"=",
"np",
".",
"where",
"(",
"graph",
"[",
"connected_components",
"]",
"!=",
"0",
")",
"connected_components",
"[",
"node_to_add",
"]",
"=",
"True",
"if",
"last_num_component",
">=",
"connected_components",
".",
"sum",
"(",
")",
":",
"break",
"return",
"connected_components"
] |
Find the largest graph connected component that contains one
given node
Parameters
----------
graph : array-like, shape: (n_samples, n_samples)
adjacency matrix of the graph, non-zero weight means an edge
between the nodes
node_id : int
The index of the query node of the graph
Returns
-------
connected_components : array-like, shape: (n_samples,)
An array of bool values indicating the indexes of the nodes
that belong to the largest connected component of the given
query node
|
[
"Find",
"the",
"largest",
"graph",
"connected",
"components",
"the",
"contains",
"one",
"given",
"node"
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/embedding/spectral_embedding.py#L28-L58
|
train
|
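A small worked example of _graph_connected_component on a four-node graph with two components, {0, 1} and {2, 3}; the routine grows a frontier from node_id until the component stops expanding. Importing the underscore-prefixed helper directly is for illustration only, assuming megaman is installed:

import numpy as np
from megaman.embedding.spectral_embedding import _graph_connected_component

graph = np.array([[0, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0]])
print(_graph_connected_component(graph, 0))  # [ True  True False False]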
mmp2/megaman
|
megaman/embedding/spectral_embedding.py
|
SpectralEmbedding.predict
|
def predict(self, X_test, y=None):
"""
Predict embedding on new data X_test given the existing embedding on training data
Uses the Nystrom Extension to estimate the eigenvectors.
Currently only works with input_type data (i.e. not affinity or distance)
"""
if not hasattr(self, 'geom_'):
raise RuntimeError('the .fit() function must be called before the .predict() function')
if self.geom_.X is None:
raise NotImplementedError('method only implemented when X passed as data')
# Complete the adjacency matrix
adjacency_kwds = self.geom_.adjacency_kwds
if self.geom_.adjacency_method == 'cyflann':
if 'cyflann_kwds' in adjacency_kwds.keys():
cyflann_kwds = adjacency_kwds['cyflann_kwds']
else:
cyflann_kwds = {}
total_adjacency_matrix = complete_adjacency_matrix(self.geom_.adjacency_matrix,
self.geom_.X,
X_test,adjacency_kwds)
# Compute the affinity matrix, check method and kwds
if self.geom_.affinity_kwds is not None:
affinity_kwds = self.geom_.affinity_kwds
else:
affinity_kwds = {}
if self.geom_.affinity_method is not None:
affinity_method = self.geom_.affinity_method
else:
affinity_method = 'auto'
total_affinity_matrix = compute_affinity_matrix(total_adjacency_matrix, affinity_method,
**affinity_kwds)
# Compute the affinity matrix, check method and kwds
if self.geom_.laplacian_kwds is not None:
laplacian_kwds = self.geom_.laplacian_kwds
else:
laplacian_kwds = {}
if self.geom_.laplacian_method is not None:
laplacian_method = self.geom_.laplacian_method
else:
laplacian_method = 'auto'
total_laplacian_matrix = compute_laplacian_matrix(total_affinity_matrix, laplacian_method,
**laplacian_kwds)
# Take the columns of Laplacian and existing embedding and pass to Nystrom Extension
(n_sample_train) = self.geom_.adjacency_matrix.shape[0]
total_laplacian_matrix = total_laplacian_matrix.tocsr()
C = total_laplacian_matrix[:, :n_sample_train]
# warnings.warn(str(C.shape))
eigenvalues, eigenvectors = nystrom_extension(C, self.eigenvectors_, self.eigenvalues_)
# If diffusion maps compute diffusion time etc
if self.diffusion_maps:
embedding = compute_diffusion_maps(laplacian_method, eigenvectors, eigenvalues, self.diffusion_time)
else:
embedding = eigenvectors
(n_sample_test) = X_test.shape[0]
embedding_test = embedding[-n_sample_test:, :]
return embedding_test, embedding
|
python
|
def predict(self, X_test, y=None):
"""
Predict embedding on new data X_test given the existing embedding on training data
Uses the Nystrom Extension to estimate the eigenvectors.
Currently only works with input_type data (i.e. not affinity or distance)
"""
if not hasattr(self, 'geom_'):
raise RuntimeError('the .fit() function must be called before the .predict() function')
if self.geom_.X is None:
raise NotImplementedError('method only implemented when X passed as data')
# Complete the adjacency matrix
adjacency_kwds = self.geom_.adjacency_kwds
if self.geom_.adjacency_method == 'cyflann':
if 'cyflann_kwds' in adjacency_kwds.keys():
cyflann_kwds = adjacency_kwds['cyflann_kwds']
else:
cyflann_kwds = {}
total_adjacency_matrix = complete_adjacency_matrix(self.geom_.adjacency_matrix,
self.geom_.X,
X_test,adjacency_kwds)
# Compute the affinity matrix, check method and kwds
if self.geom_.affinity_kwds is not None:
affinity_kwds = self.geom_.affinity_kwds
else:
affinity_kwds = {}
if self.geom_.affinity_method is not None:
affinity_method = self.geom_.affinity_method
else:
affinity_method = 'auto'
total_affinity_matrix = compute_affinity_matrix(total_adjacency_matrix, affinity_method,
**affinity_kwds)
# Compute the affinity matrix, check method and kwds
if self.geom_.laplacian_kwds is not None:
laplacian_kwds = self.geom_.laplacian_kwds
else:
laplacian_kwds = {}
if self.geom_.laplacian_method is not None:
laplacian_method = self.geom_.laplacian_method
else:
laplacian_method = 'auto'
total_laplacian_matrix = compute_laplacian_matrix(total_affinity_matrix, laplacian_method,
**laplacian_kwds)
# Take the columns of Laplacian and existing embedding and pass to Nystrom Extension
(n_sample_train) = self.geom_.adjacency_matrix.shape[0]
total_laplacian_matrix = total_laplacian_matrix.tocsr()
C = total_laplacian_matrix[:, :n_sample_train]
# warnings.warn(str(C.shape))
eigenvalues, eigenvectors = nystrom_extension(C, self.eigenvectors_, self.eigenvalues_)
# If diffusion maps compute diffusion time etc
if self.diffusion_maps:
embedding = compute_diffusion_maps(laplacian_method, eigenvectors, eigenvalues, self.diffusion_time)
else:
embedding = eigenvectors
(n_sample_test) = X_test.shape[0]
embedding_test = embedding[-n_sample_test:, :]
return embedding_test, embedding
|
[
"def",
"predict",
"(",
"self",
",",
"X_test",
",",
"y",
"=",
"None",
")",
":",
"if",
"not",
"hasattr",
"(",
"self",
",",
"'geom_'",
")",
":",
"raise",
"RuntimeError",
"(",
"'the .fit() function must be called before the .predict() function'",
")",
"if",
"self",
".",
"geom_",
".",
"X",
"is",
"None",
":",
"raise",
"NotImplementedError",
"(",
"'method only implemented when X passed as data'",
")",
"# Complete the adjacency matrix",
"adjacency_kwds",
"=",
"self",
".",
"geom_",
".",
"adjacency_kwds",
"if",
"self",
".",
"geom_",
".",
"adjacency_method",
"==",
"'cyflann'",
":",
"if",
"'cyflann_kwds'",
"in",
"adjacency_kwds",
".",
"keys",
"(",
")",
":",
"cyflann_kwds",
"=",
"adjacency_kwds",
"[",
"'cyflann_kwds'",
"]",
"else",
":",
"cyflann_kwds",
"=",
"{",
"}",
"total_adjacency_matrix",
"=",
"complete_adjacency_matrix",
"(",
"self",
".",
"geom_",
".",
"adjacency_matrix",
",",
"self",
".",
"geom_",
".",
"X",
",",
"X_test",
",",
"adjacency_kwds",
")",
"# Compute the affinity matrix, check method and kwds",
"if",
"self",
".",
"geom_",
".",
"affinity_kwds",
"is",
"not",
"None",
":",
"affinity_kwds",
"=",
"self",
".",
"geom_",
".",
"affinity_kwds",
"else",
":",
"affinity_kwds",
"=",
"{",
"}",
"if",
"self",
".",
"geom_",
".",
"affinity_method",
"is",
"not",
"None",
":",
"affinity_method",
"=",
"self",
".",
"geom_",
".",
"affinity_method",
"else",
":",
"affinity_method",
"=",
"'auto'",
"total_affinity_matrix",
"=",
"compute_affinity_matrix",
"(",
"total_adjacency_matrix",
",",
"affinity_method",
",",
"*",
"*",
"affinity_kwds",
")",
"# Compute the affinity matrix, check method and kwds",
"if",
"self",
".",
"geom_",
".",
"laplacian_kwds",
"is",
"not",
"None",
":",
"laplacian_kwds",
"=",
"self",
".",
"geom_",
".",
"laplacian_kwds",
"else",
":",
"laplacian_kwds",
"=",
"{",
"}",
"if",
"self",
".",
"geom_",
".",
"laplacian_method",
"is",
"not",
"None",
":",
"laplacian_method",
"=",
"self",
".",
"geom_",
".",
"laplacian_method",
"else",
":",
"self",
".",
"laplacian_method",
"=",
"'auto'",
"total_laplacian_matrix",
"=",
"compute_laplacian_matrix",
"(",
"total_affinity_matrix",
",",
"laplacian_method",
",",
"*",
"*",
"laplacian_kwds",
")",
"# Take the columns of Laplacian and existing embedding and pass to Nystrom Extension",
"(",
"n_sample_train",
")",
"=",
"self",
".",
"geom_",
".",
"adjacency_matrix",
".",
"shape",
"[",
"0",
"]",
"total_laplacian_matrix",
"=",
"total_laplacian_matrix",
".",
"tocsr",
"(",
")",
"C",
"=",
"total_laplacian_matrix",
"[",
":",
",",
":",
"n_sample_train",
"]",
"# warnings.warn(str(C.shape))",
"eigenvalues",
",",
"eigenvectors",
"=",
"nystrom_extension",
"(",
"C",
",",
"self",
".",
"eigenvectors_",
",",
"self",
".",
"eigenvalues_",
")",
"# If diffusion maps compute diffusion time etc",
"if",
"self",
".",
"diffusion_maps",
":",
"embedding",
"=",
"compute_diffusion_maps",
"(",
"laplacian_method",
",",
"eigenvectors",
",",
"eigenvalues",
",",
"self",
".",
"diffusion_time",
")",
"else",
":",
"embedding",
"=",
"eigenvectors",
"(",
"n_sample_test",
")",
"=",
"X_test",
".",
"shape",
"[",
"0",
"]",
"embedding_test",
"=",
"embedding",
"[",
"-",
"n_sample_test",
":",
",",
":",
"]",
"return",
"embedding_test",
",",
"embedding"
] |
Predict embedding on new data X_test given the existing embedding on training data
Uses the Nystrom Extension to estimate the eigenvectors.
Currently only works with input_type data (i.e. not affinity or distance)
|
[
"Predict",
"embedding",
"on",
"new",
"data",
"X_test",
"given",
"the",
"existing",
"embedding",
"on",
"training",
"data"
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/embedding/spectral_embedding.py#L408-L465
|
train
|
mmp2/megaman
|
megaman/geometry/laplacian.py
|
compute_laplacian_matrix
|
def compute_laplacian_matrix(affinity_matrix, method='auto', **kwargs):
"""Compute the laplacian matrix with the given method"""
if method == 'auto':
method = 'geometric'
return Laplacian.init(method, **kwargs).laplacian_matrix(affinity_matrix)
|
python
|
def compute_laplacian_matrix(affinity_matrix, method='auto', **kwargs):
"""Compute the laplacian matrix with the given method"""
if method == 'auto':
method = 'geometric'
return Laplacian.init(method, **kwargs).laplacian_matrix(affinity_matrix)
|
[
"def",
"compute_laplacian_matrix",
"(",
"affinity_matrix",
",",
"method",
"=",
"'auto'",
",",
"*",
"*",
"kwargs",
")",
":",
"if",
"method",
"==",
"'auto'",
":",
"method",
"=",
"'geometric'",
"return",
"Laplacian",
".",
"init",
"(",
"method",
",",
"*",
"*",
"kwargs",
")",
".",
"laplacian_matrix",
"(",
"affinity_matrix",
")"
] |
Compute the laplacian matrix with the given method
|
[
"Compute",
"the",
"laplacian",
"matrix",
"with",
"the",
"given",
"method"
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/geometry/laplacian.py#L10-L14
|
train
|
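A hedged sketch chaining the three geometry helpers that appear in this dump, adjacency to affinity to Laplacian, assuming megaman is installed; the module path for compute_affinity_matrix and the radius keywords are assumptions, inferred from the imports and kwds handling shown elsewhere in this file set:

import numpy as np
from megaman.geometry.adjacency import compute_adjacency_matrix
from megaman.geometry.affinity import compute_affinity_matrix
from megaman.geometry.laplacian import compute_laplacian_matrix

X = np.random.rand(200, 3)
adj = compute_adjacency_matrix(X, method='kd_tree', radius=0.5)
aff = compute_affinity_matrix(adj, method='auto', radius=0.5)
lap = compute_laplacian_matrix(aff, method='geometric')  # 'auto' maps to this anyway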
mmp2/megaman
|
megaman/embedding/base.py
|
BaseEmbedding.fit_geometry
|
def fit_geometry(self, X=None, input_type='data'):
"""Inputs self.geom, and produces the fitted geometry self.geom_"""
if self.geom is None:
self.geom_ = Geometry()
elif isinstance(self.geom, Geometry):
self.geom_ = self.geom
else:
try:
kwds = dict(**self.geom)
except TypeError:
raise ValueError("geom must be a Geometry instance or "
"a mappable/dictionary")
self.geom_ = Geometry(**kwds)
if self.radius is not None:
self.geom_.set_radius(self.radius, override=False)
# if self.radius == 'auto':
# if X is not None and input_type != 'affinity':
# self.geom_.set_radius(self.estimate_radius(X, input_type),
# override=False)
# else:
# self.geom_.set_radius(self.radius,
# override=False)
if X is not None:
self.geom_.set_matrix(X, input_type)
return self
|
python
|
def fit_geometry(self, X=None, input_type='data'):
"""Inputs self.geom, and produces the fitted geometry self.geom_"""
if self.geom is None:
self.geom_ = Geometry()
elif isinstance(self.geom, Geometry):
self.geom_ = self.geom
else:
try:
kwds = dict(**self.geom)
except TypeError:
raise ValueError("geom must be a Geometry instance or "
"a mappable/dictionary")
self.geom_ = Geometry(**kwds)
if self.radius is not None:
self.geom_.set_radius(self.radius, override=False)
# if self.radius == 'auto':
# if X is not None and input_type != 'affinity':
# self.geom_.set_radius(self.estimate_radius(X, input_type),
# override=False)
# else:
# self.geom_.set_radius(self.radius,
# override=False)
if X is not None:
self.geom_.set_matrix(X, input_type)
return self
|
[
"def",
"fit_geometry",
"(",
"self",
",",
"X",
"=",
"None",
",",
"input_type",
"=",
"'data'",
")",
":",
"if",
"self",
".",
"geom",
"is",
"None",
":",
"self",
".",
"geom_",
"=",
"Geometry",
"(",
")",
"elif",
"isinstance",
"(",
"self",
".",
"geom",
",",
"Geometry",
")",
":",
"self",
".",
"geom_",
"=",
"self",
".",
"geom",
"else",
":",
"try",
":",
"kwds",
"=",
"dict",
"(",
"*",
"*",
"self",
".",
"geom",
")",
"except",
"TypeError",
":",
"raise",
"ValueError",
"(",
"\"geom must be a Geometry instance or \"",
"\"a mappable/dictionary\"",
")",
"self",
".",
"geom_",
"=",
"Geometry",
"(",
"*",
"*",
"kwds",
")",
"if",
"self",
".",
"radius",
"is",
"not",
"None",
":",
"self",
".",
"geom_",
".",
"set_radius",
"(",
"self",
".",
"radius",
",",
"override",
"=",
"False",
")",
"# if self.radius == 'auto':",
"# if X is not None and input_type != 'affinity':",
"# self.geom_.set_radius(self.estimate_radius(X, input_type),",
"# override=False)",
"# else:",
"# self.geom_.set_radius(self.radius,",
"# override=False)",
"if",
"X",
"is",
"not",
"None",
":",
"self",
".",
"geom_",
".",
"set_matrix",
"(",
"X",
",",
"input_type",
")",
"return",
"self"
] |
Inputs self.geom, and produces the fitted geometry self.geom_
|
[
"Inputs",
"self",
".",
"geom",
"and",
"produces",
"the",
"fitted",
"geometry",
"self",
".",
"geom_"
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/embedding/base.py#L87-L115
|
train
|
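The dict-versus-Geometry dispatch in fit_geometry hinges on dict(**obj), which raises TypeError for anything that is not a mapping; a self-contained illustration of just that validation step (as_kwds is a hypothetical helper mirroring the try/except above):

def as_kwds(geom):
    try:
        return dict(**geom)  # succeeds only for mappings; returns a copy
    except TypeError:
        raise ValueError("geom must be a Geometry instance or "
                         "a mappable/dictionary")

print(as_kwds({'adjacency_method': 'kd_tree'}))  # {'adjacency_method': 'kd_tree'}
# as_kwds(3.14) would raise the ValueError above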
mmp2/megaman
|
megaman/geometry/geometry.py
|
Geometry.set_radius
|
def set_radius(self, radius, override=True, X=None, n_components=2):
"""Set the radius for the adjacency and affinity computation
By default, this will override keyword arguments provided on
initialization.
Parameters
----------
radius : float
radius to set for adjacency and affinity.
override : bool (default: True)
if False, then only set radius if not already defined in
`adjacency_args` and `affinity_args`.
X : ndarray or sparse (optional)
if provided, estimate a suitable radius from this data.
n_components : int (default=2)
the number of components to use when estimating the radius
"""
if radius < 0:
raise ValueError("radius must be non-negative")
if override or ('radius' not in self.adjacency_kwds and
'n_neighbors' not in self.adjacency_kwds):
self.adjacency_kwds['radius'] = radius
if override or ('radius' not in self.affinity_kwds):
self.affinity_kwds['radius'] = radius
|
python
|
def set_radius(self, radius, override=True, X=None, n_components=2):
"""Set the radius for the adjacency and affinity computation
By default, this will override keyword arguments provided on
initialization.
Parameters
----------
radius : float
radius to set for adjacency and affinity.
override : bool (default: True)
if False, then only set radius if not already defined in
`adjacency_args` and `affinity_args`.
X : ndarray or sparse (optional)
if provided, estimate a suitable radius from this data.
n_components : int (default=2)
the number of components to use when estimating the radius
"""
if radius < 0:
raise ValueError("radius must be non-negative")
if override or ('radius' not in self.adjacency_kwds and
'n_neighbors' not in self.adjacency_kwds):
self.adjacency_kwds['radius'] = radius
if override or ('radius' not in self.affinity_kwds):
self.affinity_kwds['radius'] = radius
|
[
"def",
"set_radius",
"(",
"self",
",",
"radius",
",",
"override",
"=",
"True",
",",
"X",
"=",
"None",
",",
"n_components",
"=",
"2",
")",
":",
"if",
"radius",
"<",
"0",
":",
"raise",
"ValueError",
"(",
"\"radius must be non-negative\"",
")",
"if",
"override",
"or",
"(",
"'radius'",
"not",
"in",
"self",
".",
"adjacency_kwds",
"and",
"'n_neighbors'",
"not",
"in",
"self",
".",
"adjacency_kwds",
")",
":",
"self",
".",
"adjacency_kwds",
"[",
"'radius'",
"]",
"=",
"radius",
"if",
"override",
"or",
"(",
"'radius'",
"not",
"in",
"self",
".",
"affinity_kwds",
")",
":",
"self",
".",
"affinity_kwds",
"[",
"'radius'",
"]",
"=",
"radius"
] |
Set the radius for the adjacency and affinity computation
By default, this will override keyword arguments provided on
initialization.
Parameters
----------
radius : float
radius to set for adjacency and affinity.
override : bool (default: True)
if False, then only set radius if not already defined in
`adjacency_args` and `affinity_args`.
X : ndarray or sparse (optional)
if provided, estimate a suitable radius from this data.
n_components : int (default=2)
the number of components to use when estimating the radius
|
[
"Set",
"the",
"radius",
"for",
"the",
"adjacency",
"and",
"affinity",
"computation"
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/geometry/geometry.py#L114-L140
|
train
|
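Override semantics in a hedged sketch, assuming megaman is installed and that Geometry accepts adjacency_kwds as a constructor keyword (inferred from the attributes read above, not verified against the constructor signature). With override=False an existing 'radius' or 'n_neighbors' entry wins:

from megaman.geometry.geometry import Geometry

g = Geometry(adjacency_kwds={'n_neighbors': 10})
g.set_radius(0.5, override=False)  # adjacency_kwds untouched: n_neighbors wins
g.set_radius(0.5, override=True)   # now adjacency_kwds['radius'] == 0.5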
mmp2/megaman
|
megaman/geometry/rmetric.py
|
RiemannMetric.get_rmetric
|
def get_rmetric( self, mode_inv = 'svd', return_svd = False ):
"""
Compute the Riemannian Metric
"""
if self.H is None:
self.H, self.G, self.Hvv, self.Hsval = riemann_metric(self.Y, self.L, self.mdimG, invert_h = True, mode_inv = mode_inv)
if self.G is None:
self.G, self.Hvv, self.Hsvals, self.Gsvals = compute_G_from_H( self.H, mode_inv = self.mode_inv )
if mode_inv == 'svd' and return_svd:
return self.G, self.Hvv, self.Hsvals, self.Gsvals
else:
return self.G
|
python
|
def get_rmetric( self, mode_inv = 'svd', return_svd = False ):
"""
Compute the Riemannian Metric
"""
if self.H is None:
self.H, self.G, self.Hvv, self.Hsval = riemann_metric(self.Y, self.L, self.mdimG, invert_h = True, mode_inv = mode_inv)
if self.G is None:
self.G, self.Hvv, self.Hsvals, self.Gsvals = compute_G_from_H( self.H, mode_inv = self.mode_inv )
if mode_inv == 'svd' and return_svd:
return self.G, self.Hvv, self.Hsvals, self.Gsvals
else:
return self.G
|
[
"def",
"get_rmetric",
"(",
"self",
",",
"mode_inv",
"=",
"'svd'",
",",
"return_svd",
"=",
"False",
")",
":",
"if",
"self",
".",
"H",
"is",
"None",
":",
"self",
".",
"H",
",",
"self",
".",
"G",
",",
"self",
".",
"Hvv",
",",
"self",
".",
"Hsval",
"=",
"riemann_metric",
"(",
"self",
".",
"Y",
",",
"self",
".",
"L",
",",
"self",
".",
"mdimG",
",",
"invert_h",
"=",
"True",
",",
"mode_inv",
"=",
"mode_inv",
")",
"if",
"self",
".",
"G",
"is",
"None",
":",
"self",
".",
"G",
",",
"self",
".",
"Hvv",
",",
"self",
".",
"Hsvals",
",",
"self",
".",
"Gsvals",
"=",
"compute_G_from_H",
"(",
"self",
".",
"H",
",",
"mode_inv",
"=",
"self",
".",
"mode_inv",
")",
"if",
"mode_inv",
"is",
"'svd'",
"and",
"return_svd",
":",
"return",
"self",
".",
"G",
",",
"self",
".",
"Hvv",
",",
"self",
".",
"Hsvals",
",",
"self",
".",
"Gsvals",
"else",
":",
"return",
"self",
".",
"G"
] |
Compute the Riemannian Metric
|
[
"Compute",
"the",
"Reimannian",
"Metric"
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/geometry/rmetric.py#L270-L281
|
train
|
mmp2/megaman
|
megaman/relaxation/trace_variable.py
|
TracingVariable.report_and_save_keywords
|
def report_and_save_keywords(self,relaxation_kwds,precomputed_kwds):
"""Save relaxation keywords to .txt and .pyc file"""
report_name = os.path.join(self.backup_dir,'relaxation_keywords.txt')
pretty_relax_kwds = pprint.pformat(relaxation_kwds,indent=4)
with open(report_name,'w') as wf:
wf.write(pretty_relax_kwds)
wf.close()
origin_name = os.path.join(self.backup_dir,'relaxation_keywords.pyc')
with open(origin_name,'wb') as ro:
pickle.dump(relaxation_kwds,ro,protocol=pickle.HIGHEST_PROTOCOL)
ro.close()
if relaxation_kwds['presave']:
precomp_kwds_name = os.path.join(self.backup_dir,
'precomputed_keywords.pyc')
with open(precomp_kwds_name, 'wb') as po:
pickle.dump(precomputed_kwds, po,
protocol=pickle.HIGHEST_PROTOCOL)
po.close()
|
python
|
def report_and_save_keywords(self,relaxation_kwds,precomputed_kwds):
"""Save relaxation keywords to .txt and .pyc file"""
report_name = os.path.join(self.backup_dir,'relaxation_keywords.txt')
pretty_relax_kwds = pprint.pformat(relaxation_kwds,indent=4)
with open(report_name,'w') as wf:
wf.write(pretty_relax_kwds)
wf.close()
origin_name = os.path.join(self.backup_dir,'relaxation_keywords.pyc')
with open(origin_name,'wb') as ro:
pickle.dump(relaxation_kwds,ro,protocol=pickle.HIGHEST_PROTOCOL)
ro.close()
if relaxation_kwds['presave']:
precomp_kwds_name = os.path.join(self.backup_dir,
'precomputed_keywords.pyc')
with open(precomp_kwds_name, 'wb') as po:
pickle.dump(precomputed_kwds, po,
protocol=pickle.HIGHEST_PROTOCOL)
po.close()
|
[
"def",
"report_and_save_keywords",
"(",
"self",
",",
"relaxation_kwds",
",",
"precomputed_kwds",
")",
":",
"report_name",
"=",
"os",
".",
"path",
".",
"join",
"(",
"self",
".",
"backup_dir",
",",
"'relaxation_keywords.txt'",
")",
"pretty_relax_kwds",
"=",
"pprint",
".",
"pformat",
"(",
"relaxation_kwds",
",",
"indent",
"=",
"4",
")",
"with",
"open",
"(",
"report_name",
",",
"'w'",
")",
"as",
"wf",
":",
"wf",
".",
"write",
"(",
"pretty_relax_kwds",
")",
"wf",
".",
"close",
"(",
")",
"origin_name",
"=",
"os",
".",
"path",
".",
"join",
"(",
"self",
".",
"backup_dir",
",",
"'relaxation_keywords.pyc'",
")",
"with",
"open",
"(",
"origin_name",
",",
"'wb'",
")",
"as",
"ro",
":",
"pickle",
".",
"dump",
"(",
"relaxation_kwds",
",",
"ro",
",",
"protocol",
"=",
"pickle",
".",
"HIGHEST_PROTOCOL",
")",
"ro",
".",
"close",
"(",
")",
"if",
"relaxation_kwds",
"[",
"'presave'",
"]",
":",
"precomp_kwds_name",
"=",
"os",
".",
"path",
".",
"join",
"(",
"self",
".",
"backup_dir",
",",
"'precomputed_keywords.pyc'",
")",
"with",
"open",
"(",
"precomp_kwds_name",
",",
"'wb'",
")",
"as",
"po",
":",
"pickle",
".",
"dump",
"(",
"precomputed_kwds",
",",
"po",
",",
"protocol",
"=",
"pickle",
".",
"HIGHEST_PROTOCOL",
")",
"po",
".",
"close",
"(",
")"
] |
Save relaxation keywords to .txt and .pyc file
|
[
"Save",
"relaxation",
"keywords",
"to",
".",
"txt",
"and",
".",
"pyc",
"file"
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/relaxation/trace_variable.py#L36-L55
|
train
|
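The human-readable half of the report above is plain pprint.pformat; a self-contained illustration of what ends up in relaxation_keywords.txt:

import pprint

print(pprint.pformat({'niter': 2000, 'lossf': 'nonprojected_rloss'}, indent=4))
# keys come out sorted, and the layout wraps once the dict grows past one line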
mmp2/megaman
|
megaman/relaxation/trace_variable.py
|
TracingVariable.update
|
def update(self,iiter,H,Y,eta,loss):
"""Update the trace_var in new iteration"""
if iiter <= self.niter_trace+1:
self.H[iiter] = H
self.Y[iiter] = Y
elif iiter > self.niter - self.niter_trace + 1:
self.H[self.ltrace+iiter-self.niter-1] = H
self.Y[self.ltrace+iiter-self.niter-1] = Y
self.etas[iiter] = eta
self.loss[iiter] = loss
if self.loss[iiter] < self.lmin:
self.Yh = Y
self.lmin = self.loss[iiter]
self.miniter = iiter if not iiter == -1 else self.niter + 1
|
python
|
def update(self,iiter,H,Y,eta,loss):
"""Update the trace_var in new iteration"""
if iiter <= self.niter_trace+1:
self.H[iiter] = H
self.Y[iiter] = Y
elif iiter > self.niter - self.niter_trace + 1:
self.H[self.ltrace+iiter-self.niter-1] = H
self.Y[self.ltrace+iiter-self.niter-1] = Y
self.etas[iiter] = eta
self.loss[iiter] = loss
if self.loss[iiter] < self.lmin:
self.Yh = Y
self.lmin = self.loss[iiter]
self.miniter = iiter if not iiter == -1 else self.niter + 1
|
[
"def",
"update",
"(",
"self",
",",
"iiter",
",",
"H",
",",
"Y",
",",
"eta",
",",
"loss",
")",
":",
"if",
"iiter",
"<=",
"self",
".",
"niter_trace",
"+",
"1",
":",
"self",
".",
"H",
"[",
"iiter",
"]",
"=",
"H",
"self",
".",
"Y",
"[",
"iiter",
"]",
"=",
"Y",
"elif",
"iiter",
">",
"self",
".",
"niter",
"-",
"self",
".",
"niter_trace",
"+",
"1",
":",
"self",
".",
"H",
"[",
"self",
".",
"ltrace",
"+",
"iiter",
"-",
"self",
".",
"niter",
"-",
"1",
"]",
"=",
"H",
"self",
".",
"Y",
"[",
"self",
".",
"ltrace",
"+",
"iiter",
"-",
"self",
".",
"niter",
"-",
"1",
"]",
"=",
"Y",
"self",
".",
"etas",
"[",
"iiter",
"]",
"=",
"eta",
"self",
".",
"loss",
"[",
"iiter",
"]",
"=",
"loss",
"if",
"self",
".",
"loss",
"[",
"iiter",
"]",
"<",
"self",
".",
"lmin",
":",
"self",
".",
"Yh",
"=",
"Y",
"self",
".",
"lmin",
"=",
"self",
".",
"loss",
"[",
"iiter",
"]",
"self",
".",
"miniter",
"=",
"iiter",
"if",
"not",
"iiter",
"==",
"-",
"1",
"else",
"self",
".",
"niter",
"+",
"1"
] |
Update the trace_var in new iteration
|
[
"Update",
"the",
"trace_var",
"in",
"new",
"iteration"
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/relaxation/trace_variable.py#L57-L71
|
train
|
mmp2/megaman
|
megaman/relaxation/trace_variable.py
|
TracingVariable.save
|
def save(cls,instance,filename):
"""Class method save for saving TracingVariable."""
filename = cls.correct_file_extension(filename)
try:
with open(filename,'wb') as f:
pickle.dump(instance,f,protocol=pickle.HIGHEST_PROTOCOL)
except MemoryError as e:
print('{} occurred, will downsample the saved file by a factor of 20.'
.format(type(e).__name__))
copy_instance = instance.copy()
copy_instance.H = copy_instance.H[::20,:,:]
copy_instance.Y = copy_instance.Y[::20,:]
with open(filename,'wb') as f:
pickle.dump(copy_instance,f,protocol=pickle.HIGHEST_PROTOCOL)
|
python
|
def save(cls,instance,filename):
"""Class method save for saving TracingVariable."""
filename = cls.correct_file_extension(filename)
try:
with open(filename,'wb') as f:
pickle.dump(instance,f,protocol=pickle.HIGHEST_PROTOCOL)
except MemoryError as e:
print('{} occurred, will downsample the saved file by a factor of 20.'
.format(type(e).__name__))
copy_instance = instance.copy()
copy_instance.H = copy_instance.H[::20,:,:]
copy_instance.Y = copy_instance.Y[::20,:]
with open(filename,'wb') as f:
pickle.dump(copy_instance,f,protocol=pickle.HIGHEST_PROTOCOL)
|
[
"def",
"save",
"(",
"cls",
",",
"instance",
",",
"filename",
")",
":",
"filename",
"=",
"cls",
".",
"correct_file_extension",
"(",
"filename",
")",
"try",
":",
"with",
"open",
"(",
"filename",
",",
"'wb'",
")",
"as",
"f",
":",
"pickle",
".",
"dump",
"(",
"instance",
",",
"f",
",",
"protocol",
"=",
"pickle",
".",
"HIGHEST_PROTOCOL",
")",
"except",
"MemoryError",
"as",
"e",
":",
"print",
"(",
"'{} occurred, will downsampled the saved file by 20.'",
".",
"format",
"(",
"type",
"(",
"e",
")",
".",
"__name__",
")",
")",
"copy_instance",
"=",
"instance",
".",
"copy",
"(",
")",
"copy_instance",
".",
"H",
"=",
"copy_instance",
".",
"H",
"[",
":",
":",
"20",
",",
":",
",",
":",
"]",
"copy_instance",
".",
"Y",
"=",
"copy_instance",
".",
"Y",
"[",
":",
":",
"20",
",",
":",
"]",
"with",
"open",
"(",
"filename",
",",
"'wb'",
")",
"as",
"f",
":",
"pickle",
".",
"dump",
"(",
"copy_instance",
",",
"f",
",",
"protocol",
"=",
"pickle",
".",
"HIGHEST_PROTOCOL",
")"
] |
Class method save for saving TracingVariable.
|
[
"Class",
"method",
"save",
"for",
"saving",
"TracingVariable",
"."
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/relaxation/trace_variable.py#L93-L106
|
train
|
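The MemoryError fallback above keeps every twentieth traced iterate; a minimal self-contained illustration of that downsampling with plain numpy arrays:

import numpy as np

H = np.zeros((1000, 2, 2))   # stand-ins for the traced H and Y stacks
Y = np.zeros((1000, 2))
H_small = H[::20, :, :]      # shape (50, 2, 2)
Y_small = Y[::20, :]         # shape (50, 2)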
mmp2/megaman
|
megaman/relaxation/trace_variable.py
|
TracingVariable.load
|
def load(cls,filename):
"""Load from stored files"""
filename = cls.correct_file_extension(filename)
with open(filename,'rb') as f:
return pickle.load(f)
|
python
|
def load(cls,filename):
"""Load from stored files"""
filename = cls.correct_file_extension(filename)
with open(filename,'rb') as f:
return pickle.load(f)
|
[
"def",
"load",
"(",
"cls",
",",
"filename",
")",
":",
"filename",
"=",
"cls",
".",
"correct_file_extension",
"(",
"filename",
")",
"with",
"open",
"(",
"filename",
",",
"'rb'",
")",
"as",
"f",
":",
"return",
"pickle",
".",
"load",
"(",
"f",
")"
] |
Load from stored files
|
[
"Load",
"from",
"stored",
"files"
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/relaxation/trace_variable.py#L109-L113
|
train
|
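load mirrors save: both delegate to pickle, with correct_file_extension normalizing the suffix first. A self-contained illustration of the same round trip with a plain object:

import pickle

with open('results.pyc', 'wb') as f:
    pickle.dump({'loss': [1.0, 0.5]}, f, protocol=pickle.HIGHEST_PROTOCOL)
with open('results.pyc', 'rb') as f:
    restored = pickle.load(f)
assert restored == {'loss': [1.0, 0.5]}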
mmp2/megaman
|
doc/sphinxext/numpy_ext/utils.py
|
find_mod_objs
|
def find_mod_objs(modname, onlylocals=False):
""" Returns all the public attributes of a module referenced by name.
.. note::
        The returned list does *not* include subpackages or modules of
        `modname`, nor does it include private attributes (those that
        begin with '_' or are not in `__all__`).
Parameters
----------
modname : str
The name of the module to search.
onlylocals : bool
If True, only attributes that are either members of `modname` OR one of
its modules or subpackages will be included.
Returns
-------
localnames : list of str
A list of the names of the attributes as they are named in the
module `modname` .
fqnames : list of str
        A list of the fully qualified names of the attributes (e.g.,
``astropy.utils.misc.find_mod_objs``). For attributes that are
simple variables, this is based on the local name, but for
functions or classes it can be different if they are actually
defined elsewhere and just referenced in `modname`.
objs : list of objects
A list of the actual attributes themselves (in the same order as
the other arguments)
"""
__import__(modname)
mod = sys.modules[modname]
if hasattr(mod, '__all__'):
pkgitems = [(k, mod.__dict__[k]) for k in mod.__all__]
else:
pkgitems = [(k, mod.__dict__[k]) for k in dir(mod) if k[0] != '_']
# filter out modules and pull the names and objs out
ismodule = inspect.ismodule
localnames = [k for k, v in pkgitems if not ismodule(v)]
objs = [v for k, v in pkgitems if not ismodule(v)]
# fully qualified names can be determined from the object's module
fqnames = []
for obj, lnm in zip(objs, localnames):
if hasattr(obj, '__module__') and hasattr(obj, '__name__'):
fqnames.append(obj.__module__ + '.' + obj.__name__)
else:
fqnames.append(modname + '.' + lnm)
if onlylocals:
valids = [fqn.startswith(modname) for fqn in fqnames]
localnames = [e for i, e in enumerate(localnames) if valids[i]]
fqnames = [e for i, e in enumerate(fqnames) if valids[i]]
objs = [e for i, e in enumerate(objs) if valids[i]]
return localnames, fqnames, objs
|
python
|
def find_mod_objs(modname, onlylocals=False):
""" Returns all the public attributes of a module referenced by name.
.. note::
        The returned list does *not* include subpackages or modules of
        `modname`, nor does it include private attributes (those that
        begin with '_' or are not in `__all__`).
Parameters
----------
modname : str
The name of the module to search.
onlylocals : bool
If True, only attributes that are either members of `modname` OR one of
its modules or subpackages will be included.
Returns
-------
localnames : list of str
A list of the names of the attributes as they are named in the
module `modname` .
fqnames : list of str
        A list of the fully qualified names of the attributes (e.g.,
``astropy.utils.misc.find_mod_objs``). For attributes that are
simple variables, this is based on the local name, but for
functions or classes it can be different if they are actually
defined elsewhere and just referenced in `modname`.
objs : list of objects
A list of the actual attributes themselves (in the same order as
the other arguments)
"""
__import__(modname)
mod = sys.modules[modname]
if hasattr(mod, '__all__'):
pkgitems = [(k, mod.__dict__[k]) for k in mod.__all__]
else:
pkgitems = [(k, mod.__dict__[k]) for k in dir(mod) if k[0] != '_']
# filter out modules and pull the names and objs out
ismodule = inspect.ismodule
localnames = [k for k, v in pkgitems if not ismodule(v)]
objs = [v for k, v in pkgitems if not ismodule(v)]
# fully qualified names can be determined from the object's module
fqnames = []
for obj, lnm in zip(objs, localnames):
if hasattr(obj, '__module__') and hasattr(obj, '__name__'):
fqnames.append(obj.__module__ + '.' + obj.__name__)
else:
fqnames.append(modname + '.' + lnm)
if onlylocals:
valids = [fqn.startswith(modname) for fqn in fqnames]
localnames = [e for i, e in enumerate(localnames) if valids[i]]
fqnames = [e for i, e in enumerate(fqnames) if valids[i]]
objs = [e for i, e in enumerate(objs) if valids[i]]
return localnames, fqnames, objs
|
[
"def",
"find_mod_objs",
"(",
"modname",
",",
"onlylocals",
"=",
"False",
")",
":",
"__import__",
"(",
"modname",
")",
"mod",
"=",
"sys",
".",
"modules",
"[",
"modname",
"]",
"if",
"hasattr",
"(",
"mod",
",",
"'__all__'",
")",
":",
"pkgitems",
"=",
"[",
"(",
"k",
",",
"mod",
".",
"__dict__",
"[",
"k",
"]",
")",
"for",
"k",
"in",
"mod",
".",
"__all__",
"]",
"else",
":",
"pkgitems",
"=",
"[",
"(",
"k",
",",
"mod",
".",
"__dict__",
"[",
"k",
"]",
")",
"for",
"k",
"in",
"dir",
"(",
"mod",
")",
"if",
"k",
"[",
"0",
"]",
"!=",
"'_'",
"]",
"# filter out modules and pull the names and objs out",
"ismodule",
"=",
"inspect",
".",
"ismodule",
"localnames",
"=",
"[",
"k",
"for",
"k",
",",
"v",
"in",
"pkgitems",
"if",
"not",
"ismodule",
"(",
"v",
")",
"]",
"objs",
"=",
"[",
"v",
"for",
"k",
",",
"v",
"in",
"pkgitems",
"if",
"not",
"ismodule",
"(",
"v",
")",
"]",
"# fully qualified names can be determined from the object's module",
"fqnames",
"=",
"[",
"]",
"for",
"obj",
",",
"lnm",
"in",
"zip",
"(",
"objs",
",",
"localnames",
")",
":",
"if",
"hasattr",
"(",
"obj",
",",
"'__module__'",
")",
"and",
"hasattr",
"(",
"obj",
",",
"'__name__'",
")",
":",
"fqnames",
".",
"append",
"(",
"obj",
".",
"__module__",
"+",
"'.'",
"+",
"obj",
".",
"__name__",
")",
"else",
":",
"fqnames",
".",
"append",
"(",
"modname",
"+",
"'.'",
"+",
"lnm",
")",
"if",
"onlylocals",
":",
"valids",
"=",
"[",
"fqn",
".",
"startswith",
"(",
"modname",
")",
"for",
"fqn",
"in",
"fqnames",
"]",
"localnames",
"=",
"[",
"e",
"for",
"i",
",",
"e",
"in",
"enumerate",
"(",
"localnames",
")",
"if",
"valids",
"[",
"i",
"]",
"]",
"fqnames",
"=",
"[",
"e",
"for",
"i",
",",
"e",
"in",
"enumerate",
"(",
"fqnames",
")",
"if",
"valids",
"[",
"i",
"]",
"]",
"objs",
"=",
"[",
"e",
"for",
"i",
",",
"e",
"in",
"enumerate",
"(",
"objs",
")",
"if",
"valids",
"[",
"i",
"]",
"]",
"return",
"localnames",
",",
"fqnames",
",",
"objs"
] |
Returns all the public attributes of a module referenced by name.
.. note::
        The returned list does *not* include subpackages or modules of
        `modname`, nor does it include private attributes (those that
        begin with '_' or are not in `__all__`).
Parameters
----------
modname : str
The name of the module to search.
onlylocals : bool
If True, only attributes that are either members of `modname` OR one of
its modules or subpackages will be included.
Returns
-------
localnames : list of str
A list of the names of the attributes as they are named in the
module `modname` .
fqnames : list of str
        A list of the fully qualified names of the attributes (e.g.,
``astropy.utils.misc.find_mod_objs``). For attributes that are
simple variables, this is based on the local name, but for
functions or classes it can be different if they are actually
defined elsewhere and just referenced in `modname`.
objs : list of objects
A list of the actual attributes themselves (in the same order as
the other arguments)
|
[
"Returns",
"all",
"the",
"public",
"attributes",
"of",
"a",
"module",
"referenced",
"by",
"name",
"."
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/doc/sphinxext/numpy_ext/utils.py#L5-L65
|
train
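A quick usage sketch, assuming the function above together with its `import sys` and `import inspect` dependencies; it is run here against the standard-library json module so no megaman install is needed:

import sys
import inspect

localnames, fqnames, objs = find_mod_objs('json', onlylocals=False)
for lnm, fqn in zip(localnames, fqnames):
    # e.g. 'dumps -> json.dumps', 'JSONDecodeError -> json.decoder.JSONDecodeError'
    print(lnm, '->', fqn)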
|
mmp2/megaman
|
megaman/datasets/datasets.py
|
get_megaman_image
|
def get_megaman_image(factor=1):
"""Return an RGBA representation of the megaman icon"""
imfile = os.path.join(os.path.dirname(__file__), 'megaman.png')
data = ndimage.imread(imfile) / 255
if factor > 1:
data = data.repeat(factor, axis=0).repeat(factor, axis=1)
return data
|
python
|
def get_megaman_image(factor=1):
"""Return an RGBA representation of the megaman icon"""
imfile = os.path.join(os.path.dirname(__file__), 'megaman.png')
data = ndimage.imread(imfile) / 255
if factor > 1:
data = data.repeat(factor, axis=0).repeat(factor, axis=1)
return data
|
[
"def",
"get_megaman_image",
"(",
"factor",
"=",
"1",
")",
":",
"imfile",
"=",
"os",
".",
"path",
".",
"join",
"(",
"os",
".",
"path",
".",
"dirname",
"(",
"__file__",
")",
",",
"'megaman.png'",
")",
"data",
"=",
"ndimage",
".",
"imread",
"(",
"imfile",
")",
"/",
"255",
"if",
"factor",
">",
"1",
":",
"data",
"=",
"data",
".",
"repeat",
"(",
"factor",
",",
"axis",
"=",
"0",
")",
".",
"repeat",
"(",
"factor",
",",
"axis",
"=",
"1",
")",
"return",
"data"
] |
Return an RGBA representation of the megaman icon
|
[
"Return",
"an",
"RGBA",
"representation",
"of",
"the",
"megaman",
"icon"
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/datasets/datasets.py#L12-L18
|
train
|
mmp2/megaman
|
megaman/datasets/datasets.py
|
generate_megaman_data
|
def generate_megaman_data(sampling=2):
"""Generate 2D point data of the megaman image"""
data = get_megaman_image()
x = np.arange(sampling * data.shape[1]) / float(sampling)
y = np.arange(sampling * data.shape[0]) / float(sampling)
X, Y = map(np.ravel, np.meshgrid(x, y))
C = data[np.floor(Y.max() - Y).astype(int),
np.floor(X).astype(int)]
return np.vstack([X, Y]).T, C
|
python
|
def generate_megaman_data(sampling=2):
"""Generate 2D point data of the megaman image"""
data = get_megaman_image()
x = np.arange(sampling * data.shape[1]) / float(sampling)
y = np.arange(sampling * data.shape[0]) / float(sampling)
X, Y = map(np.ravel, np.meshgrid(x, y))
C = data[np.floor(Y.max() - Y).astype(int),
np.floor(X).astype(int)]
return np.vstack([X, Y]).T, C
|
[
"def",
"generate_megaman_data",
"(",
"sampling",
"=",
"2",
")",
":",
"data",
"=",
"get_megaman_image",
"(",
")",
"x",
"=",
"np",
".",
"arange",
"(",
"sampling",
"*",
"data",
".",
"shape",
"[",
"1",
"]",
")",
"/",
"float",
"(",
"sampling",
")",
"y",
"=",
"np",
".",
"arange",
"(",
"sampling",
"*",
"data",
".",
"shape",
"[",
"0",
"]",
")",
"/",
"float",
"(",
"sampling",
")",
"X",
",",
"Y",
"=",
"map",
"(",
"np",
".",
"ravel",
",",
"np",
".",
"meshgrid",
"(",
"x",
",",
"y",
")",
")",
"C",
"=",
"data",
"[",
"np",
".",
"floor",
"(",
"Y",
".",
"max",
"(",
")",
"-",
"Y",
")",
".",
"astype",
"(",
"int",
")",
",",
"np",
".",
"floor",
"(",
"X",
")",
".",
"astype",
"(",
"int",
")",
"]",
"return",
"np",
".",
"vstack",
"(",
"[",
"X",
",",
"Y",
"]",
")",
".",
"T",
",",
"C"
] |
Generate 2D point data of the megaman image
|
[
"Generate",
"2D",
"point",
"data",
"of",
"the",
"megaman",
"image"
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/datasets/datasets.py#L21-L29
|
train
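The meshgrid/ravel color-lookup trick is easier to see on a synthetic array; a minimal sketch with a fake 4x6 RGBA image standing in for megaman.png:

import numpy as np

data = np.random.rand(4, 6, 4)              # fake RGBA image
sampling = 2
x = np.arange(sampling * data.shape[1]) / float(sampling)
y = np.arange(sampling * data.shape[0]) / float(sampling)
X, Y = map(np.ravel, np.meshgrid(x, y))
# flip Y so row 0 of the image ends up at the top of the point cloud
C = data[np.floor(Y.max() - Y).astype(int), np.floor(X).astype(int)]
print(np.vstack([X, Y]).T.shape, C.shape)   # (96, 2) (96, 4)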
|
mmp2/megaman
|
megaman/datasets/datasets.py
|
_make_S_curve
|
def _make_S_curve(x, range=(-0.75, 0.75)):
"""Make a 2D S-curve from a 1D vector"""
assert x.ndim == 1
x = x - x.min()
theta = 2 * np.pi * (range[0] + (range[1] - range[0]) * x / x.max())
X = np.empty((x.shape[0], 2), dtype=float)
X[:, 0] = np.sign(theta) * (1 - np.cos(theta))
X[:, 1] = np.sin(theta)
X *= x.max() / (2 * np.pi * (range[1] - range[0]))
return X
|
python
|
def _make_S_curve(x, range=(-0.75, 0.75)):
"""Make a 2D S-curve from a 1D vector"""
assert x.ndim == 1
x = x - x.min()
theta = 2 * np.pi * (range[0] + (range[1] - range[0]) * x / x.max())
X = np.empty((x.shape[0], 2), dtype=float)
X[:, 0] = np.sign(theta) * (1 - np.cos(theta))
X[:, 1] = np.sin(theta)
X *= x.max() / (2 * np.pi * (range[1] - range[0]))
return X
|
[
"def",
"_make_S_curve",
"(",
"x",
",",
"range",
"=",
"(",
"-",
"0.75",
",",
"0.75",
")",
")",
":",
"assert",
"x",
".",
"ndim",
"==",
"1",
"x",
"=",
"x",
"-",
"x",
".",
"min",
"(",
")",
"theta",
"=",
"2",
"*",
"np",
".",
"pi",
"*",
"(",
"range",
"[",
"0",
"]",
"+",
"(",
"range",
"[",
"1",
"]",
"-",
"range",
"[",
"0",
"]",
")",
"*",
"x",
"/",
"x",
".",
"max",
"(",
")",
")",
"X",
"=",
"np",
".",
"empty",
"(",
"(",
"x",
".",
"shape",
"[",
"0",
"]",
",",
"2",
")",
",",
"dtype",
"=",
"float",
")",
"X",
"[",
":",
",",
"0",
"]",
"=",
"np",
".",
"sign",
"(",
"theta",
")",
"*",
"(",
"1",
"-",
"np",
".",
"cos",
"(",
"theta",
")",
")",
"X",
"[",
":",
",",
"1",
"]",
"=",
"np",
".",
"sin",
"(",
"theta",
")",
"X",
"*=",
"x",
".",
"max",
"(",
")",
"/",
"(",
"2",
"*",
"np",
".",
"pi",
"*",
"(",
"range",
"[",
"1",
"]",
"-",
"range",
"[",
"0",
"]",
")",
")",
"return",
"X"
] |
Make a 2D S-curve from a 1D vector
|
[
"Make",
"a",
"2D",
"S",
"-",
"curve",
"from",
"a",
"1D",
"vector"
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/datasets/datasets.py#L32-L41
|
train
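Usage sketch, assuming numpy is imported as np and the function above is in scope:

import numpy as np

x = np.linspace(0., 1., 100)
X = _make_S_curve(x)        # default range=(-0.75, 0.75)
print(X.shape)              # (100, 2)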
|
mmp2/megaman
|
megaman/datasets/datasets.py
|
generate_megaman_manifold
|
def generate_megaman_manifold(sampling=2, nfolds=2,
rotate=True, random_state=None):
"""Generate a manifold of the megaman data"""
X, c = generate_megaman_data(sampling)
for i in range(nfolds):
X = np.hstack([_make_S_curve(x) for x in X.T])
if rotate:
rand = check_random_state(random_state)
R = rand.randn(X.shape[1], X.shape[1])
U, s, VT = np.linalg.svd(R)
X = np.dot(X, U)
return X, c
|
python
|
def generate_megaman_manifold(sampling=2, nfolds=2,
rotate=True, random_state=None):
"""Generate a manifold of the megaman data"""
X, c = generate_megaman_data(sampling)
for i in range(nfolds):
X = np.hstack([_make_S_curve(x) for x in X.T])
if rotate:
rand = check_random_state(random_state)
R = rand.randn(X.shape[1], X.shape[1])
U, s, VT = np.linalg.svd(R)
X = np.dot(X, U)
return X, c
|
[
"def",
"generate_megaman_manifold",
"(",
"sampling",
"=",
"2",
",",
"nfolds",
"=",
"2",
",",
"rotate",
"=",
"True",
",",
"random_state",
"=",
"None",
")",
":",
"X",
",",
"c",
"=",
"generate_megaman_data",
"(",
"sampling",
")",
"for",
"i",
"in",
"range",
"(",
"nfolds",
")",
":",
"X",
"=",
"np",
".",
"hstack",
"(",
"[",
"_make_S_curve",
"(",
"x",
")",
"for",
"x",
"in",
"X",
".",
"T",
"]",
")",
"if",
"rotate",
":",
"rand",
"=",
"check_random_state",
"(",
"random_state",
")",
"R",
"=",
"rand",
".",
"randn",
"(",
"X",
".",
"shape",
"[",
"1",
"]",
",",
"X",
".",
"shape",
"[",
"1",
"]",
")",
"U",
",",
"s",
",",
"VT",
"=",
"np",
".",
"linalg",
".",
"svd",
"(",
"R",
")",
"X",
"=",
"np",
".",
"dot",
"(",
"X",
",",
"U",
")",
"return",
"X",
",",
"c"
] |
Generate a manifold of the megaman data
|
[
"Generate",
"a",
"manifold",
"of",
"the",
"megaman",
"data"
] |
faccaf267aad0a8b18ec8a705735fd9dd838ca1e
|
https://github.com/mmp2/megaman/blob/faccaf267aad0a8b18ec8a705735fd9dd838ca1e/megaman/datasets/datasets.py#L44-L57
|
train
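(generate_megaman_manifold needs the bundled megaman.png via generate_megaman_data, so no standalone sketch is given here; see the synthetic example after generate_megaman_data above.)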
|
presslabs/z3
|
z3/ssh_sync.py
|
snapshots_to_send
|
def snapshots_to_send(source_snaps, dest_snaps):
"""return pair of snapshots"""
if len(source_snaps) == 0:
raise AssertionError("No snapshots exist locally!")
if len(dest_snaps) == 0:
# nothing on the remote side, send everything
return None, source_snaps[-1]
last_remote = dest_snaps[-1]
for snap in reversed(source_snaps):
if snap == last_remote:
# found a common snapshot
return last_remote, source_snaps[-1]
# sys.stderr.write("source:'{}', dest:'{}'".format(source_snaps, dest_snaps))
raise AssertionError("Latest snapshot on destination doesn't exist on source!")
|
python
|
def snapshots_to_send(source_snaps, dest_snaps):
"""return pair of snapshots"""
if len(source_snaps) == 0:
raise AssertionError("No snapshots exist locally!")
if len(dest_snaps) == 0:
# nothing on the remote side, send everything
return None, source_snaps[-1]
last_remote = dest_snaps[-1]
for snap in reversed(source_snaps):
if snap == last_remote:
# found a common snapshot
return last_remote, source_snaps[-1]
# sys.stderr.write("source:'{}', dest:'{}'".format(source_snaps, dest_snaps))
raise AssertionError("Latest snapshot on destination doesn't exist on source!")
|
[
"def",
"snapshots_to_send",
"(",
"source_snaps",
",",
"dest_snaps",
")",
":",
"if",
"len",
"(",
"source_snaps",
")",
"==",
"0",
":",
"raise",
"AssertionError",
"(",
"\"No snapshots exist locally!\"",
")",
"if",
"len",
"(",
"dest_snaps",
")",
"==",
"0",
":",
"# nothing on the remote side, send everything",
"return",
"None",
",",
"source_snaps",
"[",
"-",
"1",
"]",
"last_remote",
"=",
"dest_snaps",
"[",
"-",
"1",
"]",
"for",
"snap",
"in",
"reversed",
"(",
"source_snaps",
")",
":",
"if",
"snap",
"==",
"last_remote",
":",
"# found a common snapshot",
"return",
"last_remote",
",",
"source_snaps",
"[",
"-",
"1",
"]",
"# sys.stderr.write(\"source:'{}', dest:'{}'\".format(source_snaps, dest_snaps))",
"raise",
"AssertionError",
"(",
"\"Latest snapshot on destination doesn't exist on source!\"",
")"
] |
return pair of snapshots
|
[
"return",
"pair",
"of",
"snapshots"
] |
965898cccddd351ce4c56402a215c3bda9f37b5e
|
https://github.com/presslabs/z3/blob/965898cccddd351ce4c56402a215c3bda9f37b5e/z3/ssh_sync.py#L25-L38
|
train
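Behaviour sketch with plain snapshot-name strings, ordered oldest to newest (the ordering z3 derives from zfs list):

source = ['snap-01', 'snap-02', 'snap-03']

print(snapshots_to_send(source, ['snap-01', 'snap-02']))
# ('snap-02', 'snap-03')  -> incremental send from the common snapshot

print(snapshots_to_send(source, []))
# (None, 'snap-03')       -> nothing on the remote, full send of the latest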
|
presslabs/z3
|
z3/pput.py
|
StreamHandler.get_chunk
|
def get_chunk(self):
"""Return complete chunks or None if EOF reached"""
while not self._eof_reached:
read = self.input_stream.read(self.chunk_size - len(self._partial_chunk))
if len(read) == 0:
self._eof_reached = True
self._partial_chunk += read
if len(self._partial_chunk) == self.chunk_size or self._eof_reached:
chunk = self._partial_chunk
self._partial_chunk = ""
return chunk
|
python
|
def get_chunk(self):
"""Return complete chunks or None if EOF reached"""
while not self._eof_reached:
read = self.input_stream.read(self.chunk_size - len(self._partial_chunk))
if len(read) == 0:
self._eof_reached = True
self._partial_chunk += read
if len(self._partial_chunk) == self.chunk_size or self._eof_reached:
chunk = self._partial_chunk
self._partial_chunk = ""
return chunk
|
[
"def",
"get_chunk",
"(",
"self",
")",
":",
"while",
"not",
"self",
".",
"_eof_reached",
":",
"read",
"=",
"self",
".",
"input_stream",
".",
"read",
"(",
"self",
".",
"chunk_size",
"-",
"len",
"(",
"self",
".",
"_partial_chunk",
")",
")",
"if",
"len",
"(",
"read",
")",
"==",
"0",
":",
"self",
".",
"_eof_reached",
"=",
"True",
"self",
".",
"_partial_chunk",
"+=",
"read",
"if",
"len",
"(",
"self",
".",
"_partial_chunk",
")",
"==",
"self",
".",
"chunk_size",
"or",
"self",
".",
"_eof_reached",
":",
"chunk",
"=",
"self",
".",
"_partial_chunk",
"self",
".",
"_partial_chunk",
"=",
"\"\"",
"return",
"chunk"
] |
Return complete chunks or None if EOF reached
|
[
"Return",
"complete",
"chunks",
"or",
"None",
"if",
"EOF",
"reached"
] |
965898cccddd351ce4c56402a215c3bda9f37b5e
|
https://github.com/presslabs/z3/blob/965898cccddd351ce4c56402a215c3bda9f37b5e/z3/pput.py#L76-L86
|
train
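A driving sketch for the chunking loop. The "" initial buffer is Python-2-style str, so a text stream via io.StringIO keeps the concatenation valid here (the real z3 reads the binary stdin of a pipe); Chunker is a hypothetical stand-in exposing only what get_chunk needs:

import io

class Chunker(object):
    def __init__(self, stream, chunk_size):
        self.input_stream = stream
        self.chunk_size = chunk_size
        self._eof_reached = False
        self._partial_chunk = ""
    get_chunk = get_chunk   # reuse the function defined above

c = Chunker(io.StringIO("abcdefgh"), chunk_size=3)
chunk = c.get_chunk()
while chunk:
    print(repr(chunk))      # 'abc', then 'def', then 'gh'
    chunk = c.get_chunk()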
|
presslabs/z3
|
z3/pput.py
|
UploadSupervisor._handle_result
|
def _handle_result(self):
"""Process one result. Block untill one is available
"""
result = self.inbox.get()
if result.success:
if self._verbosity >= VERB_PROGRESS:
sys.stderr.write("\nuploaded chunk {} \n".format(result.index))
self.results.append((result.index, result.md5))
self._pending_chunks -= 1
else:
raise result.traceback
|
python
|
def _handle_result(self):
"""Process one result. Block untill one is available
"""
result = self.inbox.get()
if result.success:
if self._verbosity >= VERB_PROGRESS:
sys.stderr.write("\nuploaded chunk {} \n".format(result.index))
self.results.append((result.index, result.md5))
self._pending_chunks -= 1
else:
raise result.traceback
|
[
"def",
"_handle_result",
"(",
"self",
")",
":",
"result",
"=",
"self",
".",
"inbox",
".",
"get",
"(",
")",
"if",
"result",
".",
"success",
":",
"if",
"self",
".",
"_verbosity",
">=",
"VERB_PROGRESS",
":",
"sys",
".",
"stderr",
".",
"write",
"(",
"\"\\nuploaded chunk {} \\n\"",
".",
"format",
"(",
"result",
".",
"index",
")",
")",
"self",
".",
"results",
".",
"append",
"(",
"(",
"result",
".",
"index",
",",
"result",
".",
"md5",
")",
")",
"self",
".",
"_pending_chunks",
"-=",
"1",
"else",
":",
"raise",
"result",
".",
"traceback"
] |
Process one result. Block until one is available
|
[
"Process",
"one",
"result",
".",
"Block",
"untill",
"one",
"is",
"available"
] |
965898cccddd351ce4c56402a215c3bda9f37b5e
|
https://github.com/presslabs/z3/blob/965898cccddd351ce4c56402a215c3bda9f37b5e/z3/pput.py#L201-L211
|
train
|
presslabs/z3
|
z3/pput.py
|
UploadSupervisor._send_chunk
|
def _send_chunk(self, index, chunk):
"""Send the current chunk to the workers for processing.
Called when the _partial_chunk is complete.
Blocks when the outbox is full.
"""
self._pending_chunks += 1
self.outbox.put((index, chunk))
|
python
|
def _send_chunk(self, index, chunk):
"""Send the current chunk to the workers for processing.
Called when the _partial_chunk is complete.
Blocks when the outbox is full.
"""
self._pending_chunks += 1
self.outbox.put((index, chunk))
|
[
"def",
"_send_chunk",
"(",
"self",
",",
"index",
",",
"chunk",
")",
":",
"self",
".",
"_pending_chunks",
"+=",
"1",
"self",
".",
"outbox",
".",
"put",
"(",
"(",
"index",
",",
"chunk",
")",
")"
] |
Send the current chunk to the workers for processing.
Called when the _partial_chunk is complete.
Blocks when the outbox is full.
|
[
"Send",
"the",
"current",
"chunk",
"to",
"the",
"workers",
"for",
"processing",
".",
"Called",
"when",
"the",
"_partial_chunk",
"is",
"complete",
"."
] |
965898cccddd351ce4c56402a215c3bda9f37b5e
|
https://github.com/presslabs/z3/blob/965898cccddd351ce4c56402a215c3bda9f37b5e/z3/pput.py#L220-L227
|
train
|
presslabs/z3
|
z3/config.py
|
OnionDict._get
|
def _get(self, key, section=None, default=_onion_dict_guard):
"""Try to get the key from each dict in turn.
If you specify the optional section it looks there first.
"""
if section is not None:
section_dict = self.__sections.get(section, {})
if key in section_dict:
return section_dict[key]
for d in self.__dictionaries:
if key in d:
return d[key]
if default is _onion_dict_guard:
raise KeyError(key)
else:
return default
|
python
|
def _get(self, key, section=None, default=_onion_dict_guard):
"""Try to get the key from each dict in turn.
If you specify the optional section it looks there first.
"""
if section is not None:
section_dict = self.__sections.get(section, {})
if key in section_dict:
return section_dict[key]
for d in self.__dictionaries:
if key in d:
return d[key]
if default is _onion_dict_guard:
raise KeyError(key)
else:
return default
|
[
"def",
"_get",
"(",
"self",
",",
"key",
",",
"section",
"=",
"None",
",",
"default",
"=",
"_onion_dict_guard",
")",
":",
"if",
"section",
"is",
"not",
"None",
":",
"section_dict",
"=",
"self",
".",
"__sections",
".",
"get",
"(",
"section",
",",
"{",
"}",
")",
"if",
"key",
"in",
"section_dict",
":",
"return",
"section_dict",
"[",
"key",
"]",
"for",
"d",
"in",
"self",
".",
"__dictionaries",
":",
"if",
"key",
"in",
"d",
":",
"return",
"d",
"[",
"key",
"]",
"if",
"default",
"is",
"_onion_dict_guard",
":",
"raise",
"KeyError",
"(",
"key",
")",
"else",
":",
"return",
"default"
] |
Try to get the key from each dict in turn.
If you specify the optional section it looks there first.
|
[
"Try",
"to",
"get",
"the",
"key",
"from",
"each",
"dict",
"in",
"turn",
".",
"If",
"you",
"specify",
"the",
"optional",
"section",
"it",
"looks",
"there",
"first",
"."
] |
965898cccddd351ce4c56402a215c3bda9f37b5e
|
https://github.com/presslabs/z3/blob/965898cccddd351ce4c56402a215c3bda9f37b5e/z3/config.py#L21-L35
|
train
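The lookup order is easier to test as a standalone function; a sketch restating the same logic with an explicit sentinel (the names here are illustrative, not the z3 API):

_guard = object()   # sentinel, mirroring _onion_dict_guard

def layered_get(dictionaries, key, section=None, sections=None, default=_guard):
    # a section-specific dict wins, then each layer in order
    section_dict = (sections or {}).get(section, {})
    if key in section_dict:
        return section_dict[key]
    for d in dictionaries:
        if key in d:
            return d[key]
    if default is _guard:
        raise KeyError(key)
    return default

layers = [{'a': 1}, {'a': 2, 'b': 3}]
print(layered_get(layers, 'a'))                          # 1
print(layered_get(layers, 'a', 's3', {'s3': {'a': 9}}))  # 9
print(layered_get(layers, 'zz', default=None))           # None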
|
presslabs/z3
|
z3/snap.py
|
ZFSSnapshotManager._parse_snapshots
|
def _parse_snapshots(self):
"""Returns all snapshots grouped by filesystem, a dict of OrderedDict's
The order of snapshots matters when determining parents for incremental send,
so it's preserved.
Data is indexed by filesystem then for each filesystem we have an OrderedDict
of snapshots.
"""
try:
snap = self._list_snapshots()
except OSError as err:
logging.error("unable to list local snapshots!")
return {}
vols = {}
for line in snap.splitlines():
if len(line) == 0:
continue
name, used, refer, mountpoint, written = line.split('\t')
vol_name, snap_name = name.split('@', 1)
snapshots = vols.setdefault(vol_name, OrderedDict())
snapshots[snap_name] = {
'name': name,
'used': used,
'refer': refer,
'mountpoint': mountpoint,
'written': written,
}
return vols
|
python
|
def _parse_snapshots(self):
"""Returns all snapshots grouped by filesystem, a dict of OrderedDict's
The order of snapshots matters when determining parents for incremental send,
so it's preserved.
Data is indexed by filesystem then for each filesystem we have an OrderedDict
of snapshots.
"""
try:
snap = self._list_snapshots()
except OSError as err:
logging.error("unable to list local snapshots!")
return {}
vols = {}
for line in snap.splitlines():
if len(line) == 0:
continue
name, used, refer, mountpoint, written = line.split('\t')
vol_name, snap_name = name.split('@', 1)
snapshots = vols.setdefault(vol_name, OrderedDict())
snapshots[snap_name] = {
'name': name,
'used': used,
'refer': refer,
'mountpoint': mountpoint,
'written': written,
}
return vols
|
[
"def",
"_parse_snapshots",
"(",
"self",
")",
":",
"try",
":",
"snap",
"=",
"self",
".",
"_list_snapshots",
"(",
")",
"except",
"OSError",
"as",
"err",
":",
"logging",
".",
"error",
"(",
"\"unable to list local snapshots!\"",
")",
"return",
"{",
"}",
"vols",
"=",
"{",
"}",
"for",
"line",
"in",
"snap",
".",
"splitlines",
"(",
")",
":",
"if",
"len",
"(",
"line",
")",
"==",
"0",
":",
"continue",
"name",
",",
"used",
",",
"refer",
",",
"mountpoint",
",",
"written",
"=",
"line",
".",
"split",
"(",
"'\\t'",
")",
"vol_name",
",",
"snap_name",
"=",
"name",
".",
"split",
"(",
"'@'",
",",
"1",
")",
"snapshots",
"=",
"vols",
".",
"setdefault",
"(",
"vol_name",
",",
"OrderedDict",
"(",
")",
")",
"snapshots",
"[",
"snap_name",
"]",
"=",
"{",
"'name'",
":",
"name",
",",
"'used'",
":",
"used",
",",
"'refer'",
":",
"refer",
",",
"'mountpoint'",
":",
"mountpoint",
",",
"'written'",
":",
"written",
",",
"}",
"return",
"vols"
] |
Returns all snapshots grouped by filesystem, a dict of OrderedDict's
The order of snapshots matters when determining parents for incremental send,
so it's preserved.
Data is indexed by filesystem then for each filesystem we have an OrderedDict
of snapshots.
|
[
"Returns",
"all",
"snapshots",
"grouped",
"by",
"filesystem",
"a",
"dict",
"of",
"OrderedDict",
"s",
"The",
"order",
"of",
"snapshots",
"matters",
"when",
"determining",
"parents",
"for",
"incremental",
"send",
"so",
"it",
"s",
"preserved",
".",
"Data",
"is",
"indexed",
"by",
"filesystem",
"then",
"for",
"each",
"filesystem",
"we",
"have",
"an",
"OrderedDict",
"of",
"snapshots",
"."
] |
965898cccddd351ce4c56402a215c3bda9f37b5e
|
https://github.com/presslabs/z3/blob/965898cccddd351ce4c56402a215c3bda9f37b5e/z3/snap.py#L176-L202
|
train
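A standalone sketch of the parsing step, driven by fake tab-separated zfs list output instead of the real _list_snapshots call:

from collections import OrderedDict

# columns: name<TAB>used<TAB>refer<TAB>mountpoint<TAB>written
fake_output = ("tank/data@daily-1\t10M\t1G\t-\t10M\n"
               "tank/data@daily-2\t5M\t1G\t-\t5M\n")

vols = {}
for line in fake_output.splitlines():
    if len(line) == 0:
        continue
    name, used, refer, mountpoint, written = line.split('\t')
    vol_name, snap_name = name.split('@', 1)
    vols.setdefault(vol_name, OrderedDict())[snap_name] = {
        'name': name, 'used': used, 'refer': refer,
        'mountpoint': mountpoint, 'written': written,
    }

print(list(vols['tank/data']))   # ['daily-1', 'daily-2'] (order preserved)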
|
presslabs/z3
|
z3/snap.py
|
PairManager._compress
|
def _compress(self, cmd):
"""Adds the appropriate command to compress the zfs stream"""
compressor = COMPRESSORS.get(self.compressor)
if compressor is None:
return cmd
compress_cmd = compressor['compress']
return "{} | {}".format(compress_cmd, cmd)
|
python
|
def _compress(self, cmd):
"""Adds the appropriate command to compress the zfs stream"""
compressor = COMPRESSORS.get(self.compressor)
if compressor is None:
return cmd
compress_cmd = compressor['compress']
return "{} | {}".format(compress_cmd, cmd)
|
[
"def",
"_compress",
"(",
"self",
",",
"cmd",
")",
":",
"compressor",
"=",
"COMPRESSORS",
".",
"get",
"(",
"self",
".",
"compressor",
")",
"if",
"compressor",
"is",
"None",
":",
"return",
"cmd",
"compress_cmd",
"=",
"compressor",
"[",
"'compress'",
"]",
"return",
"\"{} | {}\"",
".",
"format",
"(",
"compress_cmd",
",",
"cmd",
")"
] |
Adds the appropriate command to compress the zfs stream
|
[
"Adds",
"the",
"appropriate",
"command",
"to",
"compress",
"the",
"zfs",
"stream"
] |
965898cccddd351ce4c56402a215c3bda9f37b5e
|
https://github.com/presslabs/z3/blob/965898cccddd351ce4c56402a215c3bda9f37b5e/z3/snap.py#L311-L317
|
train
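How the pipeline string is built, as a free function with a hypothetical COMPRESSORS entry (the real table is defined elsewhere in z3/snap.py; the entry shape here is assumed):

# hypothetical table shape; z3 defines its own COMPRESSORS mapping
COMPRESSORS = {'pigz1': {'compress': 'pigz -1', 'decompress': 'pigz -d'}}

def compress_cmdline(compressor, cmd):
    entry = COMPRESSORS.get(compressor)
    if entry is None:
        return cmd                       # unknown/None compressor: no-op
    return "{} | {}".format(entry['compress'], cmd)

print(compress_cmdline('pigz1', "pput bucket/key"))   # pigz -1 | pput bucket/key
print(compress_cmdline(None, "pput bucket/key"))      # pput bucket/key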
|
presslabs/z3
|
z3/snap.py
|
PairManager._decompress
|
def _decompress(self, cmd, s3_snap):
"""Adds the appropriate command to decompress the zfs stream
This is determined from the metadata of the s3_snap.
"""
compressor = COMPRESSORS.get(s3_snap.compressor)
if compressor is None:
return cmd
decompress_cmd = compressor['decompress']
return "{} | {}".format(decompress_cmd, cmd)
|
python
|
def _decompress(self, cmd, s3_snap):
"""Adds the appropriate command to decompress the zfs stream
This is determined from the metadata of the s3_snap.
"""
compressor = COMPRESSORS.get(s3_snap.compressor)
if compressor is None:
return cmd
decompress_cmd = compressor['decompress']
return "{} | {}".format(decompress_cmd, cmd)
|
[
"def",
"_decompress",
"(",
"self",
",",
"cmd",
",",
"s3_snap",
")",
":",
"compressor",
"=",
"COMPRESSORS",
".",
"get",
"(",
"s3_snap",
".",
"compressor",
")",
"if",
"compressor",
"is",
"None",
":",
"return",
"cmd",
"decompress_cmd",
"=",
"compressor",
"[",
"'decompress'",
"]",
"return",
"\"{} | {}\"",
".",
"format",
"(",
"decompress_cmd",
",",
"cmd",
")"
] |
Adds the appropriate command to decompress the zfs stream
This is determined from the metadata of the s3_snap.
|
[
"Adds",
"the",
"appropriate",
"command",
"to",
"decompress",
"the",
"zfs",
"stream",
"This",
"is",
"determined",
"from",
"the",
"metadata",
"of",
"the",
"s3_snap",
"."
] |
965898cccddd351ce4c56402a215c3bda9f37b5e
|
https://github.com/presslabs/z3/blob/965898cccddd351ce4c56402a215c3bda9f37b5e/z3/snap.py#L319-L327
|
train
|
presslabs/z3
|
z3/snap.py
|
PairManager.backup_full
|
def backup_full(self, snap_name=None, dry_run=False):
"""Do a full backup of a snapshot. By default latest local snapshot"""
z_snap = self._snapshot_to_backup(snap_name)
estimated_size = self._parse_estimated_size(
self._cmd.shell(
"zfs send -nvP '{}'".format(z_snap.name),
capture=True))
self._cmd.pipe(
"zfs send '{}'".format(z_snap.name),
self._compress(
self._pput_cmd(
estimated=estimated_size,
s3_prefix=self.s3_manager.s3_prefix,
snap_name=z_snap.name)
),
dry_run=dry_run,
estimated_size=estimated_size,
)
return [{'snap_name': z_snap.name, 'size': estimated_size}]
|
python
|
def backup_full(self, snap_name=None, dry_run=False):
"""Do a full backup of a snapshot. By default latest local snapshot"""
z_snap = self._snapshot_to_backup(snap_name)
estimated_size = self._parse_estimated_size(
self._cmd.shell(
"zfs send -nvP '{}'".format(z_snap.name),
capture=True))
self._cmd.pipe(
"zfs send '{}'".format(z_snap.name),
self._compress(
self._pput_cmd(
estimated=estimated_size,
s3_prefix=self.s3_manager.s3_prefix,
snap_name=z_snap.name)
),
dry_run=dry_run,
estimated_size=estimated_size,
)
return [{'snap_name': z_snap.name, 'size': estimated_size}]
|
[
"def",
"backup_full",
"(",
"self",
",",
"snap_name",
"=",
"None",
",",
"dry_run",
"=",
"False",
")",
":",
"z_snap",
"=",
"self",
".",
"_snapshot_to_backup",
"(",
"snap_name",
")",
"estimated_size",
"=",
"self",
".",
"_parse_estimated_size",
"(",
"self",
".",
"_cmd",
".",
"shell",
"(",
"\"zfs send -nvP '{}'\"",
".",
"format",
"(",
"z_snap",
".",
"name",
")",
",",
"capture",
"=",
"True",
")",
")",
"self",
".",
"_cmd",
".",
"pipe",
"(",
"\"zfs send '{}'\"",
".",
"format",
"(",
"z_snap",
".",
"name",
")",
",",
"self",
".",
"_compress",
"(",
"self",
".",
"_pput_cmd",
"(",
"estimated",
"=",
"estimated_size",
",",
"s3_prefix",
"=",
"self",
".",
"s3_manager",
".",
"s3_prefix",
",",
"snap_name",
"=",
"z_snap",
".",
"name",
")",
")",
",",
"dry_run",
"=",
"dry_run",
",",
"estimated_size",
"=",
"estimated_size",
",",
")",
"return",
"[",
"{",
"'snap_name'",
":",
"z_snap",
".",
"name",
",",
"'size'",
":",
"estimated_size",
"}",
"]"
] |
Do a full backup of a snapshot. By default latest local snapshot
|
[
"Do",
"a",
"full",
"backup",
"of",
"a",
"snapshot",
".",
"By",
"default",
"latest",
"local",
"snapshot"
] |
965898cccddd351ce4c56402a215c3bda9f37b5e
|
https://github.com/presslabs/z3/blob/965898cccddd351ce4c56402a215c3bda9f37b5e/z3/snap.py#L341-L359
|
train
|
presslabs/z3
|
z3/snap.py
|
PairManager.backup_incremental
|
def backup_incremental(self, snap_name=None, dry_run=False):
"""Uploads named snapshot or latest, along with any other snapshots
required for an incremental backup.
"""
z_snap = self._snapshot_to_backup(snap_name)
to_upload = []
current = z_snap
uploaded_meta = []
while True:
s3_snap = self.s3_manager.get(current.name)
if s3_snap is not None:
if not s3_snap.is_healthy:
                # abort everything if we run into unhealthy snapshots
raise IntegrityError(
"Broken snapshot detected {}, reason: '{}'".format(
s3_snap.name, s3_snap.reason_broken
))
break
to_upload.append(current)
if current.parent is None:
break
current = current.parent
for z_snap in reversed(to_upload):
estimated_size = self._parse_estimated_size(
self._cmd.shell(
"zfs send -nvP -i '{}' '{}'".format(
z_snap.parent.name, z_snap.name),
capture=True))
self._cmd.pipe(
"zfs send -i '{}' '{}'".format(
z_snap.parent.name, z_snap.name),
self._compress(
self._pput_cmd(
estimated=estimated_size,
parent=z_snap.parent.name,
s3_prefix=self.s3_manager.s3_prefix,
snap_name=z_snap.name)
),
dry_run=dry_run,
estimated_size=estimated_size,
)
uploaded_meta.append({'snap_name': z_snap.name, 'size': estimated_size})
return uploaded_meta
|
python
|
def backup_incremental(self, snap_name=None, dry_run=False):
"""Uploads named snapshot or latest, along with any other snapshots
required for an incremental backup.
"""
z_snap = self._snapshot_to_backup(snap_name)
to_upload = []
current = z_snap
uploaded_meta = []
while True:
s3_snap = self.s3_manager.get(current.name)
if s3_snap is not None:
if not s3_snap.is_healthy:
                # abort everything if we run into unhealthy snapshots
raise IntegrityError(
"Broken snapshot detected {}, reason: '{}'".format(
s3_snap.name, s3_snap.reason_broken
))
break
to_upload.append(current)
if current.parent is None:
break
current = current.parent
for z_snap in reversed(to_upload):
estimated_size = self._parse_estimated_size(
self._cmd.shell(
"zfs send -nvP -i '{}' '{}'".format(
z_snap.parent.name, z_snap.name),
capture=True))
self._cmd.pipe(
"zfs send -i '{}' '{}'".format(
z_snap.parent.name, z_snap.name),
self._compress(
self._pput_cmd(
estimated=estimated_size,
parent=z_snap.parent.name,
s3_prefix=self.s3_manager.s3_prefix,
snap_name=z_snap.name)
),
dry_run=dry_run,
estimated_size=estimated_size,
)
uploaded_meta.append({'snap_name': z_snap.name, 'size': estimated_size})
return uploaded_meta
|
[
"def",
"backup_incremental",
"(",
"self",
",",
"snap_name",
"=",
"None",
",",
"dry_run",
"=",
"False",
")",
":",
"z_snap",
"=",
"self",
".",
"_snapshot_to_backup",
"(",
"snap_name",
")",
"to_upload",
"=",
"[",
"]",
"current",
"=",
"z_snap",
"uploaded_meta",
"=",
"[",
"]",
"while",
"True",
":",
"s3_snap",
"=",
"self",
".",
"s3_manager",
".",
"get",
"(",
"current",
".",
"name",
")",
"if",
"s3_snap",
"is",
"not",
"None",
":",
"if",
"not",
"s3_snap",
".",
"is_healthy",
":",
"# abort everything if we run in to unhealthy snapshots",
"raise",
"IntegrityError",
"(",
"\"Broken snapshot detected {}, reason: '{}'\"",
".",
"format",
"(",
"s3_snap",
".",
"name",
",",
"s3_snap",
".",
"reason_broken",
")",
")",
"break",
"to_upload",
".",
"append",
"(",
"current",
")",
"if",
"current",
".",
"parent",
"is",
"None",
":",
"break",
"current",
"=",
"current",
".",
"parent",
"for",
"z_snap",
"in",
"reversed",
"(",
"to_upload",
")",
":",
"estimated_size",
"=",
"self",
".",
"_parse_estimated_size",
"(",
"self",
".",
"_cmd",
".",
"shell",
"(",
"\"zfs send -nvP -i '{}' '{}'\"",
".",
"format",
"(",
"z_snap",
".",
"parent",
".",
"name",
",",
"z_snap",
".",
"name",
")",
",",
"capture",
"=",
"True",
")",
")",
"self",
".",
"_cmd",
".",
"pipe",
"(",
"\"zfs send -i '{}' '{}'\"",
".",
"format",
"(",
"z_snap",
".",
"parent",
".",
"name",
",",
"z_snap",
".",
"name",
")",
",",
"self",
".",
"_compress",
"(",
"self",
".",
"_pput_cmd",
"(",
"estimated",
"=",
"estimated_size",
",",
"parent",
"=",
"z_snap",
".",
"parent",
".",
"name",
",",
"s3_prefix",
"=",
"self",
".",
"s3_manager",
".",
"s3_prefix",
",",
"snap_name",
"=",
"z_snap",
".",
"name",
")",
")",
",",
"dry_run",
"=",
"dry_run",
",",
"estimated_size",
"=",
"estimated_size",
",",
")",
"uploaded_meta",
".",
"append",
"(",
"{",
"'snap_name'",
":",
"z_snap",
".",
"name",
",",
"'size'",
":",
"estimated_size",
"}",
")",
"return",
"uploaded_meta"
] |
Uploads named snapshot or latest, along with any other snapshots
required for an incremental backup.
|
[
"Uploads",
"named",
"snapshot",
"or",
"latest",
"along",
"with",
"any",
"other",
"snapshots",
"required",
"for",
"an",
"incremental",
"backup",
"."
] |
965898cccddd351ce4c56402a215c3bda9f37b5e
|
https://github.com/presslabs/z3/blob/965898cccddd351ce4c56402a215c3bda9f37b5e/z3/snap.py#L361-L403
|
train
|
pyannote/pyannote-metrics
|
pyannote/metrics/utils.py
|
UEMSupportMixin.extrude
|
def extrude(self, uem, reference, collar=0.0, skip_overlap=False):
"""Extrude reference boundary collars from uem
reference |----| |--------------| |-------------|
uem |---------------------| |-------------------------------|
extruded |--| |--| |---| |-----| |-| |-----| |-----------| |-----|
Parameters
----------
uem : Timeline
Evaluation map.
reference : Annotation
Reference annotation.
collar : float, optional
When provided, set the duration of collars centered around
reference segment boundaries that are extruded from both reference
and hypothesis. Defaults to 0. (i.e. no collar).
skip_overlap : bool, optional
Set to True to not evaluate overlap regions.
Defaults to False (i.e. keep overlap regions).
Returns
-------
extruded_uem : Timeline
"""
if collar == 0. and not skip_overlap:
return uem
collars, overlap_regions = [], []
# build list of collars if needed
if collar > 0.:
# iterate over all segments in reference
for segment in reference.itersegments():
# add collar centered on start time
t = segment.start
collars.append(Segment(t - .5 * collar, t + .5 * collar))
# add collar centered on end time
t = segment.end
collars.append(Segment(t - .5 * collar, t + .5 * collar))
# build list of overlap regions if needed
if skip_overlap:
# iterate over pair of intersecting segments
for (segment1, track1), (segment2, track2) in reference.co_iter(reference):
if segment1 == segment2 and track1 == track2:
continue
# add their intersection
overlap_regions.append(segment1 & segment2)
segments = collars + overlap_regions
return Timeline(segments=segments).support().gaps(support=uem)
|
python
|
def extrude(self, uem, reference, collar=0.0, skip_overlap=False):
"""Extrude reference boundary collars from uem
reference |----| |--------------| |-------------|
uem |---------------------| |-------------------------------|
extruded |--| |--| |---| |-----| |-| |-----| |-----------| |-----|
Parameters
----------
uem : Timeline
Evaluation map.
reference : Annotation
Reference annotation.
collar : float, optional
When provided, set the duration of collars centered around
reference segment boundaries that are extruded from both reference
and hypothesis. Defaults to 0. (i.e. no collar).
skip_overlap : bool, optional
Set to True to not evaluate overlap regions.
Defaults to False (i.e. keep overlap regions).
Returns
-------
extruded_uem : Timeline
"""
if collar == 0. and not skip_overlap:
return uem
collars, overlap_regions = [], []
# build list of collars if needed
if collar > 0.:
# iterate over all segments in reference
for segment in reference.itersegments():
# add collar centered on start time
t = segment.start
collars.append(Segment(t - .5 * collar, t + .5 * collar))
# add collar centered on end time
t = segment.end
collars.append(Segment(t - .5 * collar, t + .5 * collar))
# build list of overlap regions if needed
if skip_overlap:
# iterate over pair of intersecting segments
for (segment1, track1), (segment2, track2) in reference.co_iter(reference):
if segment1 == segment2 and track1 == track2:
continue
# add their intersection
overlap_regions.append(segment1 & segment2)
segments = collars + overlap_regions
return Timeline(segments=segments).support().gaps(support=uem)
|
[
"def",
"extrude",
"(",
"self",
",",
"uem",
",",
"reference",
",",
"collar",
"=",
"0.0",
",",
"skip_overlap",
"=",
"False",
")",
":",
"if",
"collar",
"==",
"0.",
"and",
"not",
"skip_overlap",
":",
"return",
"uem",
"collars",
",",
"overlap_regions",
"=",
"[",
"]",
",",
"[",
"]",
"# build list of collars if needed",
"if",
"collar",
">",
"0.",
":",
"# iterate over all segments in reference",
"for",
"segment",
"in",
"reference",
".",
"itersegments",
"(",
")",
":",
"# add collar centered on start time",
"t",
"=",
"segment",
".",
"start",
"collars",
".",
"append",
"(",
"Segment",
"(",
"t",
"-",
".5",
"*",
"collar",
",",
"t",
"+",
".5",
"*",
"collar",
")",
")",
"# add collar centered on end time",
"t",
"=",
"segment",
".",
"end",
"collars",
".",
"append",
"(",
"Segment",
"(",
"t",
"-",
".5",
"*",
"collar",
",",
"t",
"+",
".5",
"*",
"collar",
")",
")",
"# build list of overlap regions if needed",
"if",
"skip_overlap",
":",
"# iterate over pair of intersecting segments",
"for",
"(",
"segment1",
",",
"track1",
")",
",",
"(",
"segment2",
",",
"track2",
")",
"in",
"reference",
".",
"co_iter",
"(",
"reference",
")",
":",
"if",
"segment1",
"==",
"segment2",
"and",
"track1",
"==",
"track2",
":",
"continue",
"# add their intersection",
"overlap_regions",
".",
"append",
"(",
"segment1",
"&",
"segment2",
")",
"segments",
"=",
"collars",
"+",
"overlap_regions",
"return",
"Timeline",
"(",
"segments",
"=",
"segments",
")",
".",
"support",
"(",
")",
".",
"gaps",
"(",
"support",
"=",
"uem",
")"
] |
Extrude reference boundary collars from uem
reference |----| |--------------| |-------------|
uem |---------------------| |-------------------------------|
extruded |--| |--| |---| |-----| |-| |-----| |-----------| |-----|
Parameters
----------
uem : Timeline
Evaluation map.
reference : Annotation
Reference annotation.
collar : float, optional
When provided, set the duration of collars centered around
reference segment boundaries that are extruded from both reference
and hypothesis. Defaults to 0. (i.e. no collar).
skip_overlap : bool, optional
Set to True to not evaluate overlap regions.
Defaults to False (i.e. keep overlap regions).
Returns
-------
extruded_uem : Timeline
|
[
"Extrude",
"reference",
"boundary",
"collars",
"from",
"uem"
] |
b433fec3bd37ca36fe026a428cd72483d646871a
|
https://github.com/pyannote/pyannote-metrics/blob/b433fec3bd37ca36fe026a428cd72483d646871a/pyannote/metrics/utils.py#L38-L93
|
train
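A small sketch, assuming pyannote.core's Segment/Timeline/Annotation and instantiating the mixin directly:

from pyannote.core import Segment, Timeline, Annotation
from pyannote.metrics.utils import UEMSupportMixin

reference = Annotation()
reference[Segment(5, 10)] = 'A'
reference[Segment(20, 30)] = 'B'
uem = Timeline(segments=[Segment(0, 40)])

# removes a 2s collar (1s on each side) around 5, 10, 20 and 30 from the uem
extruded = UEMSupportMixin().extrude(uem, reference, collar=2.0)
print(extruded)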
|
pyannote/pyannote-metrics
|
pyannote/metrics/utils.py
|
UEMSupportMixin.common_timeline
|
def common_timeline(self, reference, hypothesis):
"""Return timeline common to both reference and hypothesis
reference |--------| |------------| |---------| |----|
hypothesis |--------------| |------| |----------------|
timeline |--|-----|----|---|-|------| |-|---------|----| |----|
Parameters
----------
reference : Annotation
hypothesis : Annotation
Returns
-------
timeline : Timeline
"""
timeline = reference.get_timeline(copy=True)
timeline.update(hypothesis.get_timeline(copy=False))
return timeline.segmentation()
|
python
|
def common_timeline(self, reference, hypothesis):
"""Return timeline common to both reference and hypothesis
reference |--------| |------------| |---------| |----|
hypothesis |--------------| |------| |----------------|
timeline |--|-----|----|---|-|------| |-|---------|----| |----|
Parameters
----------
reference : Annotation
hypothesis : Annotation
Returns
-------
timeline : Timeline
"""
timeline = reference.get_timeline(copy=True)
timeline.update(hypothesis.get_timeline(copy=False))
return timeline.segmentation()
|
[
"def",
"common_timeline",
"(",
"self",
",",
"reference",
",",
"hypothesis",
")",
":",
"timeline",
"=",
"reference",
".",
"get_timeline",
"(",
"copy",
"=",
"True",
")",
"timeline",
".",
"update",
"(",
"hypothesis",
".",
"get_timeline",
"(",
"copy",
"=",
"False",
")",
")",
"return",
"timeline",
".",
"segmentation",
"(",
")"
] |
Return timeline common to both reference and hypothesis
reference |--------| |------------| |---------| |----|
hypothesis |--------------| |------| |----------------|
timeline |--|-----|----|---|-|------| |-|---------|----| |----|
Parameters
----------
reference : Annotation
hypothesis : Annotation
Returns
-------
timeline : Timeline
|
[
"Return",
"timeline",
"common",
"to",
"both",
"reference",
"and",
"hypothesis"
] |
b433fec3bd37ca36fe026a428cd72483d646871a
|
https://github.com/pyannote/pyannote-metrics/blob/b433fec3bd37ca36fe026a428cd72483d646871a/pyannote/metrics/utils.py#L95-L113
|
train
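Usage sketch on two toy annotations (same pyannote.core assumptions as the extrude example above):

from pyannote.core import Segment, Annotation
from pyannote.metrics.utils import UEMSupportMixin

reference = Annotation()
reference[Segment(0, 10)] = 'A'
hypothesis = Annotation()
hypothesis[Segment(5, 15)] = 'a'

timeline = UEMSupportMixin().common_timeline(reference, hypothesis)
print(timeline)   # boundaries at 0, 5, 10 and 15: [0,5], [5,10], [10,15]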
|
pyannote/pyannote-metrics
|
pyannote/metrics/utils.py
|
UEMSupportMixin.project
|
def project(self, annotation, timeline):
"""Project annotation onto timeline segments
reference |__A__| |__B__|
|____C____|
timeline |---|---|---| |---|
projection |_A_|_A_|_C_| |_B_|
|_C_|
Parameters
----------
annotation : Annotation
timeline : Timeline
Returns
-------
projection : Annotation
"""
projection = annotation.empty()
timeline_ = annotation.get_timeline(copy=False)
for segment_, segment in timeline_.co_iter(timeline):
for track_ in annotation.get_tracks(segment_):
track = projection.new_track(segment, candidate=track_)
projection[segment, track] = annotation[segment_, track_]
return projection
|
python
|
def project(self, annotation, timeline):
"""Project annotation onto timeline segments
reference |__A__| |__B__|
|____C____|
timeline |---|---|---| |---|
projection |_A_|_A_|_C_| |_B_|
|_C_|
Parameters
----------
annotation : Annotation
timeline : Timeline
Returns
-------
projection : Annotation
"""
projection = annotation.empty()
timeline_ = annotation.get_timeline(copy=False)
for segment_, segment in timeline_.co_iter(timeline):
for track_ in annotation.get_tracks(segment_):
track = projection.new_track(segment, candidate=track_)
projection[segment, track] = annotation[segment_, track_]
return projection
|
[
"def",
"project",
"(",
"self",
",",
"annotation",
",",
"timeline",
")",
":",
"projection",
"=",
"annotation",
".",
"empty",
"(",
")",
"timeline_",
"=",
"annotation",
".",
"get_timeline",
"(",
"copy",
"=",
"False",
")",
"for",
"segment_",
",",
"segment",
"in",
"timeline_",
".",
"co_iter",
"(",
"timeline",
")",
":",
"for",
"track_",
"in",
"annotation",
".",
"get_tracks",
"(",
"segment_",
")",
":",
"track",
"=",
"projection",
".",
"new_track",
"(",
"segment",
",",
"candidate",
"=",
"track_",
")",
"projection",
"[",
"segment",
",",
"track",
"]",
"=",
"annotation",
"[",
"segment_",
",",
"track_",
"]",
"return",
"projection"
] |
Project annotation onto timeline segments
reference |__A__| |__B__|
|____C____|
timeline |---|---|---| |---|
projection |_A_|_A_|_C_| |_B_|
|_C_|
Parameters
----------
annotation : Annotation
timeline : Timeline
Returns
-------
projection : Annotation
|
[
"Project",
"annotation",
"onto",
"timeline",
"segments"
] |
b433fec3bd37ca36fe026a428cd72483d646871a
|
https://github.com/pyannote/pyannote-metrics/blob/b433fec3bd37ca36fe026a428cd72483d646871a/pyannote/metrics/utils.py#L115-L141
|
train
|
pyannote/pyannote-metrics
|
pyannote/metrics/utils.py
|
UEMSupportMixin.uemify
|
def uemify(self, reference, hypothesis, uem=None, collar=0.,
skip_overlap=False, returns_uem=False, returns_timeline=False):
"""Crop 'reference' and 'hypothesis' to 'uem' support
Parameters
----------
reference, hypothesis : Annotation
Reference and hypothesis annotations.
uem : Timeline, optional
Evaluation map.
collar : float, optional
When provided, set the duration of collars centered around
reference segment boundaries that are extruded from both reference
and hypothesis. Defaults to 0. (i.e. no collar).
skip_overlap : bool, optional
Set to True to not evaluate overlap regions.
Defaults to False (i.e. keep overlap regions).
returns_uem : bool, optional
Set to True to return extruded uem as well.
Defaults to False (i.e. only return reference and hypothesis)
returns_timeline : bool, optional
Set to True to oversegment reference and hypothesis so that they
share the same internal timeline.
Returns
-------
reference, hypothesis : Annotation
Extruded reference and hypothesis annotations
uem : Timeline
Extruded uem (returned only when 'returns_uem' is True)
    timeline : Timeline
Common timeline (returned only when 'returns_timeline' is True)
"""
# when uem is not provided, use the union of reference and hypothesis
# extents -- and warn the user about that.
if uem is None:
r_extent = reference.get_timeline().extent()
h_extent = hypothesis.get_timeline().extent()
extent = r_extent | h_extent
uem = Timeline(segments=[extent] if extent else [],
uri=reference.uri)
warnings.warn(
"'uem' was approximated by the union of 'reference' "
"and 'hypothesis' extents.")
# extrude collars (and overlap regions) from uem
uem = self.extrude(uem, reference, collar=collar,
skip_overlap=skip_overlap)
# extrude regions outside of uem
reference = reference.crop(uem, mode='intersection')
hypothesis = hypothesis.crop(uem, mode='intersection')
# project reference and hypothesis on common timeline
if returns_timeline:
timeline = self.common_timeline(reference, hypothesis)
reference = self.project(reference, timeline)
hypothesis = self.project(hypothesis, timeline)
result = (reference, hypothesis)
if returns_uem:
result += (uem, )
if returns_timeline:
result += (timeline, )
return result
|
python
|
def uemify(self, reference, hypothesis, uem=None, collar=0.,
skip_overlap=False, returns_uem=False, returns_timeline=False):
"""Crop 'reference' and 'hypothesis' to 'uem' support
Parameters
----------
reference, hypothesis : Annotation
Reference and hypothesis annotations.
uem : Timeline, optional
Evaluation map.
collar : float, optional
When provided, set the duration of collars centered around
reference segment boundaries that are extruded from both reference
and hypothesis. Defaults to 0. (i.e. no collar).
skip_overlap : bool, optional
Set to True to not evaluate overlap regions.
Defaults to False (i.e. keep overlap regions).
returns_uem : bool, optional
Set to True to return extruded uem as well.
Defaults to False (i.e. only return reference and hypothesis)
returns_timeline : bool, optional
Set to True to oversegment reference and hypothesis so that they
share the same internal timeline.
Returns
-------
reference, hypothesis : Annotation
Extruded reference and hypothesis annotations
uem : Timeline
Extruded uem (returned only when 'returns_uem' is True)
    timeline : Timeline
Common timeline (returned only when 'returns_timeline' is True)
"""
# when uem is not provided, use the union of reference and hypothesis
# extents -- and warn the user about that.
if uem is None:
r_extent = reference.get_timeline().extent()
h_extent = hypothesis.get_timeline().extent()
extent = r_extent | h_extent
uem = Timeline(segments=[extent] if extent else [],
uri=reference.uri)
warnings.warn(
"'uem' was approximated by the union of 'reference' "
"and 'hypothesis' extents.")
# extrude collars (and overlap regions) from uem
uem = self.extrude(uem, reference, collar=collar,
skip_overlap=skip_overlap)
# extrude regions outside of uem
reference = reference.crop(uem, mode='intersection')
hypothesis = hypothesis.crop(uem, mode='intersection')
# project reference and hypothesis on common timeline
if returns_timeline:
timeline = self.common_timeline(reference, hypothesis)
reference = self.project(reference, timeline)
hypothesis = self.project(hypothesis, timeline)
result = (reference, hypothesis)
if returns_uem:
result += (uem, )
if returns_timeline:
result += (timeline, )
return result
|
[
"def",
"uemify",
"(",
"self",
",",
"reference",
",",
"hypothesis",
",",
"uem",
"=",
"None",
",",
"collar",
"=",
"0.",
",",
"skip_overlap",
"=",
"False",
",",
"returns_uem",
"=",
"False",
",",
"returns_timeline",
"=",
"False",
")",
":",
"# when uem is not provided, use the union of reference and hypothesis",
"# extents -- and warn the user about that.",
"if",
"uem",
"is",
"None",
":",
"r_extent",
"=",
"reference",
".",
"get_timeline",
"(",
")",
".",
"extent",
"(",
")",
"h_extent",
"=",
"hypothesis",
".",
"get_timeline",
"(",
")",
".",
"extent",
"(",
")",
"extent",
"=",
"r_extent",
"|",
"h_extent",
"uem",
"=",
"Timeline",
"(",
"segments",
"=",
"[",
"extent",
"]",
"if",
"extent",
"else",
"[",
"]",
",",
"uri",
"=",
"reference",
".",
"uri",
")",
"warnings",
".",
"warn",
"(",
"\"'uem' was approximated by the union of 'reference' \"",
"\"and 'hypothesis' extents.\"",
")",
"# extrude collars (and overlap regions) from uem",
"uem",
"=",
"self",
".",
"extrude",
"(",
"uem",
",",
"reference",
",",
"collar",
"=",
"collar",
",",
"skip_overlap",
"=",
"skip_overlap",
")",
"# extrude regions outside of uem",
"reference",
"=",
"reference",
".",
"crop",
"(",
"uem",
",",
"mode",
"=",
"'intersection'",
")",
"hypothesis",
"=",
"hypothesis",
".",
"crop",
"(",
"uem",
",",
"mode",
"=",
"'intersection'",
")",
"# project reference and hypothesis on common timeline",
"if",
"returns_timeline",
":",
"timeline",
"=",
"self",
".",
"common_timeline",
"(",
"reference",
",",
"hypothesis",
")",
"reference",
"=",
"self",
".",
"project",
"(",
"reference",
",",
"timeline",
")",
"hypothesis",
"=",
"self",
".",
"project",
"(",
"hypothesis",
",",
"timeline",
")",
"result",
"=",
"(",
"reference",
",",
"hypothesis",
")",
"if",
"returns_uem",
":",
"result",
"+=",
"(",
"uem",
",",
")",
"if",
"returns_timeline",
":",
"result",
"+=",
"(",
"timeline",
",",
")",
"return",
"result"
] |
Crop 'reference' and 'hypothesis' to 'uem' support
Parameters
----------
reference, hypothesis : Annotation
Reference and hypothesis annotations.
uem : Timeline, optional
Evaluation map.
collar : float, optional
When provided, set the duration of collars centered around
reference segment boundaries that are extruded from both reference
and hypothesis. Defaults to 0. (i.e. no collar).
skip_overlap : bool, optional
Set to True to not evaluate overlap regions.
Defaults to False (i.e. keep overlap regions).
returns_uem : bool, optional
Set to True to return extruded uem as well.
Defaults to False (i.e. only return reference and hypothesis)
returns_timeline : bool, optional
Set to True to oversegment reference and hypothesis so that they
share the same internal timeline.
Returns
-------
reference, hypothesis : Annotation
Extruded reference and hypothesis annotations
uem : Timeline
Extruded uem (returned only when 'returns_uem' is True)
    timeline : Timeline
Common timeline (returned only when 'returns_timeline' is True)
|
[
"Crop",
"reference",
"and",
"hypothesis",
"to",
"uem",
"support"
] |
b433fec3bd37ca36fe026a428cd72483d646871a
|
https://github.com/pyannote/pyannote-metrics/blob/b433fec3bd37ca36fe026a428cd72483d646871a/pyannote/metrics/utils.py#L143-L210
|
train
|
pyannote/pyannote-metrics
|
scripts/pyannote-metrics.py
|
get_hypothesis
|
def get_hypothesis(hypotheses, current_file):
"""Get hypothesis for given file
Parameters
----------
hypotheses : `dict`
Speaker diarization hypothesis provided by `load_rttm`.
current_file : `dict`
File description as given by pyannote.database protocols.
Returns
-------
hypothesis : `pyannote.core.Annotation`
Hypothesis corresponding to `current_file`.
"""
uri = current_file['uri']
if uri in hypotheses:
return hypotheses[uri]
# if the exact 'uri' is not available in hypothesis,
# look for matching substring
tmp_uri = [u for u in hypotheses if u in uri]
# no matching speech turns. return empty annotation
if len(tmp_uri) == 0:
msg = f'Could not find hypothesis for file "{uri}"; assuming empty file.'
warnings.warn(msg)
return Annotation(uri=uri, modality='speaker')
# exactly one matching file. return it
if len(tmp_uri) == 1:
hypothesis = hypotheses[tmp_uri[0]]
hypothesis.uri = uri
return hypothesis
    # more than one matching file. error.
    msg = 'Found too many hypotheses matching file "{uri}" ({uris}).'
    raise ValueError(msg.format(uri=uri, uris=tmp_uri))
|
python
|
def get_hypothesis(hypotheses, current_file):
"""Get hypothesis for given file
Parameters
----------
hypotheses : `dict`
Speaker diarization hypothesis provided by `load_rttm`.
current_file : `dict`
File description as given by pyannote.database protocols.
Returns
-------
hypothesis : `pyannote.core.Annotation`
Hypothesis corresponding to `current_file`.
"""
uri = current_file['uri']
if uri in hypotheses:
return hypotheses[uri]
# if the exact 'uri' is not available in hypothesis,
# look for matching substring
tmp_uri = [u for u in hypotheses if u in uri]
# no matching speech turns. return empty annotation
if len(tmp_uri) == 0:
msg = f'Could not find hypothesis for file "{uri}"; assuming empty file.'
warnings.warn(msg)
return Annotation(uri=uri, modality='speaker')
# exactly one matching file. return it
if len(tmp_uri) == 1:
hypothesis = hypotheses[tmp_uri[0]]
hypothesis.uri = uri
return hypothesis
    # more than one matching file. error.
    msg = 'Found too many hypotheses matching file "{uri}" ({uris}).'
    raise ValueError(msg.format(uri=uri, uris=tmp_uri))
|
[
"def",
"get_hypothesis",
"(",
"hypotheses",
",",
"current_file",
")",
":",
"uri",
"=",
"current_file",
"[",
"'uri'",
"]",
"if",
"uri",
"in",
"hypotheses",
":",
"return",
"hypotheses",
"[",
"uri",
"]",
"# if the exact 'uri' is not available in hypothesis,",
"# look for matching substring",
"tmp_uri",
"=",
"[",
"u",
"for",
"u",
"in",
"hypotheses",
"if",
"u",
"in",
"uri",
"]",
"# no matching speech turns. return empty annotation",
"if",
"len",
"(",
"tmp_uri",
")",
"==",
"0",
":",
"msg",
"=",
"f'Could not find hypothesis for file \"{uri}\"; assuming empty file.'",
"warnings",
".",
"warn",
"(",
"msg",
")",
"return",
"Annotation",
"(",
"uri",
"=",
"uri",
",",
"modality",
"=",
"'speaker'",
")",
"# exactly one matching file. return it",
"if",
"len",
"(",
"tmp_uri",
")",
"==",
"1",
":",
"hypothesis",
"=",
"hypotheses",
"[",
"tmp_uri",
"[",
"0",
"]",
"]",
"hypothesis",
".",
"uri",
"=",
"uri",
"return",
"hypothesis",
"# more that one matching file. error.",
"msg",
"=",
"f'Found too many hypotheses matching file \"{uri}\" ({uris}).'",
"raise",
"ValueError",
"(",
"msg",
".",
"format",
"(",
"uri",
"=",
"uri",
",",
"uris",
"=",
"tmp_uri",
")",
")"
] |
Get hypothesis for given file
Parameters
----------
hypotheses : `dict`
Speaker diarization hypothesis provided by `load_rttm`.
current_file : `dict`
File description as given by pyannote.database protocols.
Returns
-------
hypothesis : `pyannote.core.Annotation`
Hypothesis corresponding to `current_file`.
|
[
"Get",
"hypothesis",
"for",
"given",
"file"
] |
b433fec3bd37ca36fe026a428cd72483d646871a
|
https://github.com/pyannote/pyannote-metrics/blob/b433fec3bd37ca36fe026a428cd72483d646871a/scripts/pyannote-metrics.py#L142-L181
|
train
|
pyannote/pyannote-metrics
|
scripts/pyannote-metrics.py
|
reindex
|
def reindex(report):
"""Reindex report so that 'TOTAL' is the last row"""
index = list(report.index)
i = index.index('TOTAL')
return report.reindex(index[:i] + index[i+1:] + ['TOTAL'])
|
python
|
def reindex(report):
"""Reindex report so that 'TOTAL' is the last row"""
index = list(report.index)
i = index.index('TOTAL')
return report.reindex(index[:i] + index[i+1:] + ['TOTAL'])
|
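For illustration, a toy pandas report (column name invented) showing the 'TOTAL' row being moved to the bottom:

import pandas as pd

report = pd.DataFrame({'diarization error rate': [0.12, 0.30, 0.18]},
                      index=['fileA', 'TOTAL', 'fileB'])
print(reindex(report).index.tolist())  # ['fileA', 'fileB', 'TOTAL']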
[
"def",
"reindex",
"(",
"report",
")",
":",
"index",
"=",
"list",
"(",
"report",
".",
"index",
")",
"i",
"=",
"index",
".",
"index",
"(",
"'TOTAL'",
")",
"return",
"report",
".",
"reindex",
"(",
"index",
"[",
":",
"i",
"]",
"+",
"index",
"[",
"i",
"+",
"1",
":",
"]",
"+",
"[",
"'TOTAL'",
"]",
")"
] |
Reindex report so that 'TOTAL' is the last row
|
[
"Reindex",
"report",
"so",
"that",
"TOTAL",
"is",
"the",
"last",
"row"
] |
b433fec3bd37ca36fe026a428cd72483d646871a
|
https://github.com/pyannote/pyannote-metrics/blob/b433fec3bd37ca36fe026a428cd72483d646871a/scripts/pyannote-metrics.py#L219-L223
|
train
|
pyannote/pyannote-metrics
|
pyannote/metrics/binary_classification.py
|
precision_recall_curve
|
def precision_recall_curve(y_true, scores, distances=False):
"""Precision-recall curve
Parameters
----------
y_true : (n_samples, ) array-like
Boolean reference.
scores : (n_samples, ) array-like
Predicted score.
distances : boolean, optional
        When True, indicates that `scores` are actually `distances`
Returns
-------
precision : numpy array
Precision
recall : numpy array
Recall
thresholds : numpy array
Corresponding thresholds
auc : float
Area under curve
"""
if distances:
scores = -scores
precision, recall, thresholds = sklearn.metrics.precision_recall_curve(
y_true, scores, pos_label=True)
if distances:
thresholds = -thresholds
auc = sklearn.metrics.auc(precision, recall, reorder=True)
return precision, recall, thresholds, auc
|
python
|
def precision_recall_curve(y_true, scores, distances=False):
"""Precision-recall curve
Parameters
----------
y_true : (n_samples, ) array-like
Boolean reference.
scores : (n_samples, ) array-like
Predicted score.
distances : boolean, optional
        When True, indicates that `scores` are actually `distances`
Returns
-------
precision : numpy array
Precision
recall : numpy array
Recall
thresholds : numpy array
Corresponding thresholds
auc : float
Area under curve
"""
if distances:
scores = -scores
precision, recall, thresholds = sklearn.metrics.precision_recall_curve(
y_true, scores, pos_label=True)
if distances:
thresholds = -thresholds
auc = sklearn.metrics.auc(precision, recall, reorder=True)
return precision, recall, thresholds, auc
|
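A hedged usage sketch with invented scores. Note the sign flip: with `distances=True`, smaller values mean "more likely positive", so scores are negated before thresholding and the thresholds negated back. Also note that the `reorder` keyword passed to `sklearn.metrics.auc` above was removed in scikit-learn 0.23, so this assumes an older scikit-learn matching the repository's era:

import numpy as np

y_true = np.array([True, True, False, True, False])
distances = np.array([0.1, 0.2, 0.8, 0.4, 0.9])  # small = likely positive
precision, recall, thresholds, auc = precision_recall_curve(
    y_true, distances, distances=True)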
[
"def",
"precision_recall_curve",
"(",
"y_true",
",",
"scores",
",",
"distances",
"=",
"False",
")",
":",
"if",
"distances",
":",
"scores",
"=",
"-",
"scores",
"precision",
",",
"recall",
",",
"thresholds",
"=",
"sklearn",
".",
"metrics",
".",
"precision_recall_curve",
"(",
"y_true",
",",
"scores",
",",
"pos_label",
"=",
"True",
")",
"if",
"distances",
":",
"thresholds",
"=",
"-",
"thresholds",
"auc",
"=",
"sklearn",
".",
"metrics",
".",
"auc",
"(",
"precision",
",",
"recall",
",",
"reorder",
"=",
"True",
")",
"return",
"precision",
",",
"recall",
",",
"thresholds",
",",
"auc"
] |
Precision-recall curve
Parameters
----------
y_true : (n_samples, ) array-like
Boolean reference.
scores : (n_samples, ) array-like
Predicted score.
distances : boolean, optional
    When True, indicates that `scores` are actually `distances`
Returns
-------
precision : numpy array
Precision
recall : numpy array
Recall
thresholds : numpy array
Corresponding thresholds
auc : float
Area under curve
|
[
"Precision",
"-",
"recall",
"curve"
] |
b433fec3bd37ca36fe026a428cd72483d646871a
|
https://github.com/pyannote/pyannote-metrics/blob/b433fec3bd37ca36fe026a428cd72483d646871a/pyannote/metrics/binary_classification.py#L81-L117
|
train
|
pyannote/pyannote-metrics
|
pyannote/metrics/errors/identification.py
|
IdentificationErrorAnalysis.difference
|
def difference(self, reference, hypothesis, uem=None, uemified=False):
"""Get error analysis as `Annotation`
Labels are (status, reference_label, hypothesis_label) tuples.
`status` is either 'correct', 'confusion', 'missed detection' or
'false alarm'.
`reference_label` is None in case of 'false alarm'.
`hypothesis_label` is None in case of 'missed detection'.
Parameters
----------
uemified : bool, optional
Returns "uemified" version of reference and hypothesis.
Defaults to False.
Returns
-------
errors : `Annotation`
"""
R, H, common_timeline = self.uemify(
reference, hypothesis, uem=uem,
collar=self.collar, skip_overlap=self.skip_overlap,
returns_timeline=True)
errors = Annotation(uri=reference.uri, modality=reference.modality)
# loop on all segments
for segment in common_timeline:
# list of labels in reference segment
rlabels = R.get_labels(segment, unique=False)
# list of labels in hypothesis segment
hlabels = H.get_labels(segment, unique=False)
_, details = self.matcher(rlabels, hlabels)
for r, h in details[MATCH_CORRECT]:
track = errors.new_track(segment, prefix=MATCH_CORRECT)
errors[segment, track] = (MATCH_CORRECT, r, h)
for r, h in details[MATCH_CONFUSION]:
track = errors.new_track(segment, prefix=MATCH_CONFUSION)
errors[segment, track] = (MATCH_CONFUSION, r, h)
for r in details[MATCH_MISSED_DETECTION]:
track = errors.new_track(segment,
prefix=MATCH_MISSED_DETECTION)
errors[segment, track] = (MATCH_MISSED_DETECTION, r, None)
for h in details[MATCH_FALSE_ALARM]:
track = errors.new_track(segment, prefix=MATCH_FALSE_ALARM)
errors[segment, track] = (MATCH_FALSE_ALARM, None, h)
if uemified:
return reference, hypothesis, errors
else:
return errors
|
python
|
def difference(self, reference, hypothesis, uem=None, uemified=False):
"""Get error analysis as `Annotation`
Labels are (status, reference_label, hypothesis_label) tuples.
`status` is either 'correct', 'confusion', 'missed detection' or
'false alarm'.
`reference_label` is None in case of 'false alarm'.
`hypothesis_label` is None in case of 'missed detection'.
Parameters
----------
uemified : bool, optional
Returns "uemified" version of reference and hypothesis.
Defaults to False.
Returns
-------
errors : `Annotation`
"""
R, H, common_timeline = self.uemify(
reference, hypothesis, uem=uem,
collar=self.collar, skip_overlap=self.skip_overlap,
returns_timeline=True)
errors = Annotation(uri=reference.uri, modality=reference.modality)
# loop on all segments
for segment in common_timeline:
# list of labels in reference segment
rlabels = R.get_labels(segment, unique=False)
# list of labels in hypothesis segment
hlabels = H.get_labels(segment, unique=False)
_, details = self.matcher(rlabels, hlabels)
for r, h in details[MATCH_CORRECT]:
track = errors.new_track(segment, prefix=MATCH_CORRECT)
errors[segment, track] = (MATCH_CORRECT, r, h)
for r, h in details[MATCH_CONFUSION]:
track = errors.new_track(segment, prefix=MATCH_CONFUSION)
errors[segment, track] = (MATCH_CONFUSION, r, h)
for r in details[MATCH_MISSED_DETECTION]:
track = errors.new_track(segment,
prefix=MATCH_MISSED_DETECTION)
errors[segment, track] = (MATCH_MISSED_DETECTION, r, None)
for h in details[MATCH_FALSE_ALARM]:
track = errors.new_track(segment, prefix=MATCH_FALSE_ALARM)
errors[segment, track] = (MATCH_FALSE_ALARM, None, h)
if uemified:
return reference, hypothesis, errors
else:
return errors
|
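A hypothetical end-to-end sketch (labels and segment boundaries invented, default construction assumed) of reading the error annotation this method returns:

from pyannote.core import Annotation, Segment
from pyannote.metrics.errors.identification import IdentificationErrorAnalysis

reference = Annotation(uri='toy', modality='speaker')
reference[Segment(0, 10)] = 'alice'

hypothesis = Annotation(uri='toy', modality='speaker')
hypothesis[Segment(0, 6)] = 'alice'
hypothesis[Segment(6, 10)] = 'bob'

errors = IdentificationErrorAnalysis().difference(reference, hypothesis)
for segment, track, label in errors.itertracks(yield_label=True):
    # label is a (status, reference_label, hypothesis_label) tuple,
    # e.g. ('confusion', 'alice', 'bob') on the [6, 10] span.
    print(segment, label)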
[
"def",
"difference",
"(",
"self",
",",
"reference",
",",
"hypothesis",
",",
"uem",
"=",
"None",
",",
"uemified",
"=",
"False",
")",
":",
"R",
",",
"H",
",",
"common_timeline",
"=",
"self",
".",
"uemify",
"(",
"reference",
",",
"hypothesis",
",",
"uem",
"=",
"uem",
",",
"collar",
"=",
"self",
".",
"collar",
",",
"skip_overlap",
"=",
"self",
".",
"skip_overlap",
",",
"returns_timeline",
"=",
"True",
")",
"errors",
"=",
"Annotation",
"(",
"uri",
"=",
"reference",
".",
"uri",
",",
"modality",
"=",
"reference",
".",
"modality",
")",
"# loop on all segments",
"for",
"segment",
"in",
"common_timeline",
":",
"# list of labels in reference segment",
"rlabels",
"=",
"R",
".",
"get_labels",
"(",
"segment",
",",
"unique",
"=",
"False",
")",
"# list of labels in hypothesis segment",
"hlabels",
"=",
"H",
".",
"get_labels",
"(",
"segment",
",",
"unique",
"=",
"False",
")",
"_",
",",
"details",
"=",
"self",
".",
"matcher",
"(",
"rlabels",
",",
"hlabels",
")",
"for",
"r",
",",
"h",
"in",
"details",
"[",
"MATCH_CORRECT",
"]",
":",
"track",
"=",
"errors",
".",
"new_track",
"(",
"segment",
",",
"prefix",
"=",
"MATCH_CORRECT",
")",
"errors",
"[",
"segment",
",",
"track",
"]",
"=",
"(",
"MATCH_CORRECT",
",",
"r",
",",
"h",
")",
"for",
"r",
",",
"h",
"in",
"details",
"[",
"MATCH_CONFUSION",
"]",
":",
"track",
"=",
"errors",
".",
"new_track",
"(",
"segment",
",",
"prefix",
"=",
"MATCH_CONFUSION",
")",
"errors",
"[",
"segment",
",",
"track",
"]",
"=",
"(",
"MATCH_CONFUSION",
",",
"r",
",",
"h",
")",
"for",
"r",
"in",
"details",
"[",
"MATCH_MISSED_DETECTION",
"]",
":",
"track",
"=",
"errors",
".",
"new_track",
"(",
"segment",
",",
"prefix",
"=",
"MATCH_MISSED_DETECTION",
")",
"errors",
"[",
"segment",
",",
"track",
"]",
"=",
"(",
"MATCH_MISSED_DETECTION",
",",
"r",
",",
"None",
")",
"for",
"h",
"in",
"details",
"[",
"MATCH_FALSE_ALARM",
"]",
":",
"track",
"=",
"errors",
".",
"new_track",
"(",
"segment",
",",
"prefix",
"=",
"MATCH_FALSE_ALARM",
")",
"errors",
"[",
"segment",
",",
"track",
"]",
"=",
"(",
"MATCH_FALSE_ALARM",
",",
"None",
",",
"h",
")",
"if",
"uemified",
":",
"return",
"reference",
",",
"hypothesis",
",",
"errors",
"else",
":",
"return",
"errors"
] |
Get error analysis as `Annotation`
Labels are (status, reference_label, hypothesis_label) tuples.
`status` is either 'correct', 'confusion', 'missed detection' or
'false alarm'.
`reference_label` is None in case of 'false alarm'.
`hypothesis_label` is None in case of 'missed detection'.
Parameters
----------
uemified : bool, optional
Returns "uemified" version of reference and hypothesis.
Defaults to False.
Returns
-------
errors : `Annotation`
|
[
"Get",
"error",
"analysis",
"as",
"Annotation"
] |
b433fec3bd37ca36fe026a428cd72483d646871a
|
https://github.com/pyannote/pyannote-metrics/blob/b433fec3bd37ca36fe026a428cd72483d646871a/pyannote/metrics/errors/identification.py#L75-L134
|
train
|
pyannote/pyannote-metrics
|
pyannote/metrics/base.py
|
BaseMetric.reset
|
def reset(self):
"""Reset accumulated components and metric values"""
if self.parallel:
from pyannote.metrics import manager_
self.accumulated_ = manager_.dict()
self.results_ = manager_.list()
self.uris_ = manager_.dict()
else:
self.accumulated_ = dict()
self.results_ = list()
self.uris_ = dict()
for value in self.components_:
self.accumulated_[value] = 0.
|
python
|
def reset(self):
"""Reset accumulated components and metric values"""
if self.parallel:
from pyannote.metrics import manager_
self.accumulated_ = manager_.dict()
self.results_ = manager_.list()
self.uris_ = manager_.dict()
else:
self.accumulated_ = dict()
self.results_ = list()
self.uris_ = dict()
for value in self.components_:
self.accumulated_[value] = 0.
|
[
"def",
"reset",
"(",
"self",
")",
":",
"if",
"self",
".",
"parallel",
":",
"from",
"pyannote",
".",
"metrics",
"import",
"manager_",
"self",
".",
"accumulated_",
"=",
"manager_",
".",
"dict",
"(",
")",
"self",
".",
"results_",
"=",
"manager_",
".",
"list",
"(",
")",
"self",
".",
"uris_",
"=",
"manager_",
".",
"dict",
"(",
")",
"else",
":",
"self",
".",
"accumulated_",
"=",
"dict",
"(",
")",
"self",
".",
"results_",
"=",
"list",
"(",
")",
"self",
".",
"uris_",
"=",
"dict",
"(",
")",
"for",
"value",
"in",
"self",
".",
"components_",
":",
"self",
".",
"accumulated_",
"[",
"value",
"]",
"=",
"0."
] |
Reset accumulated components and metric values
|
[
"Reset",
"accumulated",
"components",
"and",
"metric",
"values"
] |
b433fec3bd37ca36fe026a428cd72483d646871a
|
https://github.com/pyannote/pyannote-metrics/blob/b433fec3bd37ca36fe026a428cd72483d646871a/pyannote/metrics/base.py#L76-L88
|
train
|
pyannote/pyannote-metrics
|
pyannote/metrics/base.py
|
BaseMetric.confidence_interval
|
def confidence_interval(self, alpha=0.9):
"""Compute confidence interval on accumulated metric values
Parameters
----------
alpha : float, optional
Probability that the returned confidence interval contains
the true metric value.
Returns
-------
(center, (lower, upper))
with center the mean of the conditional pdf of the metric value
and (lower, upper) is a confidence interval centered on the median,
containing the estimate to a probability alpha.
See Also:
---------
scipy.stats.bayes_mvs
"""
m, _, _ = scipy.stats.bayes_mvs(
[r[self.metric_name_] for _, r in self.results_], alpha=alpha)
return m
|
python
|
def confidence_interval(self, alpha=0.9):
"""Compute confidence interval on accumulated metric values
Parameters
----------
alpha : float, optional
Probability that the returned confidence interval contains
the true metric value.
Returns
-------
(center, (lower, upper))
with center the mean of the conditional pdf of the metric value
and (lower, upper) is a confidence interval centered on the median,
containing the estimate to a probability alpha.
See Also:
---------
scipy.stats.bayes_mvs
"""
m, _, _ = scipy.stats.bayes_mvs(
[r[self.metric_name_] for _, r in self.results_], alpha=alpha)
return m
|
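The shape of what `scipy.stats.bayes_mvs` returns, on made-up per-file metric values:

import scipy.stats

values = [0.21, 0.25, 0.19, 0.30, 0.24]  # e.g. one metric value per file
mean_estimate, _, _ = scipy.stats.bayes_mvs(values, alpha=0.9)
center, (lower, upper) = mean_estimate  # mean of the pdf + credible interval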
[
"def",
"confidence_interval",
"(",
"self",
",",
"alpha",
"=",
"0.9",
")",
":",
"m",
",",
"_",
",",
"_",
"=",
"scipy",
".",
"stats",
".",
"bayes_mvs",
"(",
"[",
"r",
"[",
"self",
".",
"metric_name_",
"]",
"for",
"_",
",",
"r",
"in",
"self",
".",
"results_",
"]",
",",
"alpha",
"=",
"alpha",
")",
"return",
"m"
] |
Compute confidence interval on accumulated metric values
Parameters
----------
alpha : float, optional
Probability that the returned confidence interval contains
the true metric value.
Returns
-------
(center, (lower, upper))
with center the mean of the conditional pdf of the metric value
and (lower, upper) is a confidence interval centered on the median,
containing the estimate to a probability alpha.
See Also:
---------
scipy.stats.bayes_mvs
|
[
"Compute",
"confidence",
"interval",
"on",
"accumulated",
"metric",
"values"
] |
b433fec3bd37ca36fe026a428cd72483d646871a
|
https://github.com/pyannote/pyannote-metrics/blob/b433fec3bd37ca36fe026a428cd72483d646871a/pyannote/metrics/base.py#L296-L319
|
train
|
pyannote/pyannote-metrics
|
pyannote/metrics/base.py
|
Precision.compute_metric
|
def compute_metric(self, components):
"""Compute precision from `components`"""
numerator = components[PRECISION_RELEVANT_RETRIEVED]
denominator = components[PRECISION_RETRIEVED]
if denominator == 0.:
if numerator == 0:
return 1.
else:
            raise ValueError('Nothing was retrieved, yet relevant retrieved is non-zero.')
else:
return numerator/denominator
|
python
|
def compute_metric(self, components):
"""Compute precision from `components`"""
numerator = components[PRECISION_RELEVANT_RETRIEVED]
denominator = components[PRECISION_RETRIEVED]
if denominator == 0.:
if numerator == 0:
return 1.
else:
            raise ValueError('Nothing was retrieved, yet relevant retrieved is non-zero.')
else:
return numerator/denominator
|
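A small sketch of feeding components by hand, assuming the two keys are the module-level constants referenced above and that the class constructs with defaults:

from pyannote.metrics.base import (Precision, PRECISION_RETRIEVED,
                                   PRECISION_RELEVANT_RETRIEVED)

components = {PRECISION_RELEVANT_RETRIEVED: 8., PRECISION_RETRIEVED: 10.}
print(Precision().compute_metric(components))  # 0.8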
[
"def",
"compute_metric",
"(",
"self",
",",
"components",
")",
":",
"numerator",
"=",
"components",
"[",
"PRECISION_RELEVANT_RETRIEVED",
"]",
"denominator",
"=",
"components",
"[",
"PRECISION_RETRIEVED",
"]",
"if",
"denominator",
"==",
"0.",
":",
"if",
"numerator",
"==",
"0",
":",
"return",
"1.",
"else",
":",
"raise",
"ValueError",
"(",
"''",
")",
"else",
":",
"return",
"numerator",
"/",
"denominator"
] |
Compute precision from `components`
|
[
"Compute",
"precision",
"from",
"components"
] |
b433fec3bd37ca36fe026a428cd72483d646871a
|
https://github.com/pyannote/pyannote-metrics/blob/b433fec3bd37ca36fe026a428cd72483d646871a/pyannote/metrics/base.py#L347-L357
|
train
|
pyannote/pyannote-metrics
|
pyannote/metrics/base.py
|
Recall.compute_metric
|
def compute_metric(self, components):
"""Compute recall from `components`"""
numerator = components[RECALL_RELEVANT_RETRIEVED]
denominator = components[RECALL_RELEVANT]
if denominator == 0.:
if numerator == 0:
return 1.
else:
            raise ValueError('No relevant items, yet relevant retrieved is non-zero.')
else:
return numerator/denominator
|
python
|
def compute_metric(self, components):
"""Compute recall from `components`"""
numerator = components[RECALL_RELEVANT_RETRIEVED]
denominator = components[RECALL_RELEVANT]
if denominator == 0.:
if numerator == 0:
return 1.
else:
            raise ValueError('No relevant items, yet relevant retrieved is non-zero.')
else:
return numerator/denominator
|
[
"def",
"compute_metric",
"(",
"self",
",",
"components",
")",
":",
"numerator",
"=",
"components",
"[",
"RECALL_RELEVANT_RETRIEVED",
"]",
"denominator",
"=",
"components",
"[",
"RECALL_RELEVANT",
"]",
"if",
"denominator",
"==",
"0.",
":",
"if",
"numerator",
"==",
"0",
":",
"return",
"1.",
"else",
":",
"raise",
"ValueError",
"(",
"''",
")",
"else",
":",
"return",
"numerator",
"/",
"denominator"
] |
Compute recall from `components`
|
[
"Compute",
"recall",
"from",
"components"
] |
b433fec3bd37ca36fe026a428cd72483d646871a
|
https://github.com/pyannote/pyannote-metrics/blob/b433fec3bd37ca36fe026a428cd72483d646871a/pyannote/metrics/base.py#L384-L394
|
train
|
pyannote/pyannote-metrics
|
pyannote/metrics/diarization.py
|
DiarizationErrorRate.optimal_mapping
|
def optimal_mapping(self, reference, hypothesis, uem=None):
"""Optimal label mapping
Parameters
----------
reference : Annotation
hypothesis : Annotation
Reference and hypothesis diarization
uem : Timeline
Evaluation map
Returns
-------
mapping : dict
Mapping between hypothesis (key) and reference (value) labels
"""
# NOTE that this 'uemification' will not be called when
# 'optimal_mapping' is called from 'compute_components' as it
# has already been done in 'compute_components'
if uem:
reference, hypothesis = self.uemify(reference, hypothesis, uem=uem)
# call hungarian mapper
mapping = self.mapper_(hypothesis, reference)
return mapping
|
python
|
def optimal_mapping(self, reference, hypothesis, uem=None):
"""Optimal label mapping
Parameters
----------
reference : Annotation
hypothesis : Annotation
Reference and hypothesis diarization
uem : Timeline
Evaluation map
Returns
-------
mapping : dict
Mapping between hypothesis (key) and reference (value) labels
"""
# NOTE that this 'uemification' will not be called when
# 'optimal_mapping' is called from 'compute_components' as it
# has already been done in 'compute_components'
if uem:
reference, hypothesis = self.uemify(reference, hypothesis, uem=uem)
# call hungarian mapper
mapping = self.mapper_(hypothesis, reference)
return mapping
|
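The "hungarian mapper" comment refers to solving a linear assignment problem on the label co-occurrence (shared duration) matrix. A standalone sketch of that idea with invented numbers, not the actual HungarianMapper API:

import numpy as np
from scipy.optimize import linear_sum_assignment

# cooccurrence[i, j]: duration shared by hypothesis label i and
# reference label j.
cooccurrence = np.array([[5.0, 0.5],
                         [0.2, 4.0]])
rows, cols = linear_sum_assignment(-cooccurrence)  # maximize total overlap
mapping = dict(zip(rows, cols))  # hypothesis index -> reference index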
[
"def",
"optimal_mapping",
"(",
"self",
",",
"reference",
",",
"hypothesis",
",",
"uem",
"=",
"None",
")",
":",
"# NOTE that this 'uemification' will not be called when",
"# 'optimal_mapping' is called from 'compute_components' as it",
"# has already been done in 'compute_components'",
"if",
"uem",
":",
"reference",
",",
"hypothesis",
"=",
"self",
".",
"uemify",
"(",
"reference",
",",
"hypothesis",
",",
"uem",
"=",
"uem",
")",
"# call hungarian mapper",
"mapping",
"=",
"self",
".",
"mapper_",
"(",
"hypothesis",
",",
"reference",
")",
"return",
"mapping"
] |
Optimal label mapping
Parameters
----------
reference : Annotation
hypothesis : Annotation
Reference and hypothesis diarization
uem : Timeline
Evaluation map
Returns
-------
mapping : dict
Mapping between hypothesis (key) and reference (value) labels
|
[
"Optimal",
"label",
"mapping"
] |
b433fec3bd37ca36fe026a428cd72483d646871a
|
https://github.com/pyannote/pyannote-metrics/blob/b433fec3bd37ca36fe026a428cd72483d646871a/pyannote/metrics/diarization.py#L106-L131
|
train
|
pyannote/pyannote-metrics
|
pyannote/metrics/diarization.py
|
GreedyDiarizationErrorRate.greedy_mapping
|
def greedy_mapping(self, reference, hypothesis, uem=None):
"""Greedy label mapping
Parameters
----------
reference : Annotation
hypothesis : Annotation
Reference and hypothesis diarization
uem : Timeline
Evaluation map
Returns
-------
mapping : dict
Mapping between hypothesis (key) and reference (value) labels
"""
if uem:
reference, hypothesis = self.uemify(reference, hypothesis, uem=uem)
return self.mapper_(hypothesis, reference)
|
python
|
def greedy_mapping(self, reference, hypothesis, uem=None):
"""Greedy label mapping
Parameters
----------
reference : Annotation
hypothesis : Annotation
Reference and hypothesis diarization
uem : Timeline
Evaluation map
Returns
-------
mapping : dict
Mapping between hypothesis (key) and reference (value) labels
"""
if uem:
reference, hypothesis = self.uemify(reference, hypothesis, uem=uem)
return self.mapper_(hypothesis, reference)
|
[
"def",
"greedy_mapping",
"(",
"self",
",",
"reference",
",",
"hypothesis",
",",
"uem",
"=",
"None",
")",
":",
"if",
"uem",
":",
"reference",
",",
"hypothesis",
"=",
"self",
".",
"uemify",
"(",
"reference",
",",
"hypothesis",
",",
"uem",
"=",
"uem",
")",
"return",
"self",
".",
"mapper_",
"(",
"hypothesis",
",",
"reference",
")"
] |
Greedy label mapping
Parameters
----------
reference : Annotation
hypothesis : Annotation
Reference and hypothesis diarization
uem : Timeline
Evaluation map
Returns
-------
mapping : dict
Mapping between hypothesis (key) and reference (value) labels
|
[
"Greedy",
"label",
"mapping"
] |
b433fec3bd37ca36fe026a428cd72483d646871a
|
https://github.com/pyannote/pyannote-metrics/blob/b433fec3bd37ca36fe026a428cd72483d646871a/pyannote/metrics/diarization.py#L223-L241
|
train
|
brian-rose/climlab
|
climlab/radiation/radiation.py
|
default_absorbers
|
def default_absorbers(Tatm,
ozone_file = 'apeozone_cam3_5_54.nc',
verbose = True,):
'''Initialize a dictionary of well-mixed radiatively active gases
All values are volumetric mixing ratios.
Ozone is set to a climatology.
All other gases are assumed well-mixed:
- CO2
- CH4
- N2O
- O2
- CFC11
- CFC12
- CFC22
- CCL4
Specific values are based on the AquaPlanet Experiment protocols,
    except for O2 which is set to the realistic value 0.21
(affects the RRTMG scheme).
'''
absorber_vmr = {}
absorber_vmr['CO2'] = 348. / 1E6
absorber_vmr['CH4'] = 1650. / 1E9
absorber_vmr['N2O'] = 306. / 1E9
absorber_vmr['O2'] = 0.21
absorber_vmr['CFC11'] = 0.
absorber_vmr['CFC12'] = 0.
absorber_vmr['CFC22'] = 0.
absorber_vmr['CCL4'] = 0.
# Ozone: start with all zeros, interpolate to data if we can
xTatm = Tatm.to_xarray()
O3 = 0. * xTatm
if ozone_file is not None:
ozonefilepath = os.path.join(os.path.dirname(__file__), 'data', 'ozone', ozone_file)
remotepath_http = 'http://thredds.atmos.albany.edu:8080/thredds/fileServer/CLIMLAB/ozone/' + ozone_file
remotepath_opendap = 'http://thredds.atmos.albany.edu:8080/thredds/dodsC/CLIMLAB/ozone/' + ozone_file
ozonedata, path = load_data_source(local_path=ozonefilepath,
remote_source_list=[remotepath_http, remotepath_opendap],
open_method=xr.open_dataset,
remote_kwargs={'engine':'pydap'},
verbose=verbose,)
## zonal and time average
ozone_zon = ozonedata.OZONE.mean(dim=('time','lon')).transpose('lat','lev')
if ('lat' in xTatm.dims):
O3source = ozone_zon
else:
weight = np.cos(np.deg2rad(ozonedata.lat))
ozone_global = (ozone_zon * weight).mean(dim='lat') / weight.mean(dim='lat')
O3source = ozone_global
try:
O3 = O3source.interp_like(xTatm)
# There will be NaNs for gridpoints outside the ozone file domain
assert not np.any(np.isnan(O3))
except:
warnings.warn('Some grid points are beyond the bounds of the ozone file. Ozone values will be extrapolated.')
try:
# passing fill_value=None to the underlying scipy interpolator
# will result in extrapolation instead of NaNs
O3 = O3source.interp_like(xTatm, kwargs={'fill_value':None})
assert not np.any(np.isnan(O3))
except:
warnings.warn('Interpolation of ozone data failed. Setting O3 to zero instead.')
O3 = 0. * xTatm
absorber_vmr['O3'] = O3.values
return absorber_vmr
|
python
|
def default_absorbers(Tatm,
ozone_file = 'apeozone_cam3_5_54.nc',
verbose = True,):
'''Initialize a dictionary of well-mixed radiatively active gases
All values are volumetric mixing ratios.
Ozone is set to a climatology.
All other gases are assumed well-mixed:
- CO2
- CH4
- N2O
- O2
- CFC11
- CFC12
- CFC22
- CCL4
Specific values are based on the AquaPlanet Experiment protocols,
    except for O2 which is set to the realistic value 0.21
(affects the RRTMG scheme).
'''
absorber_vmr = {}
absorber_vmr['CO2'] = 348. / 1E6
absorber_vmr['CH4'] = 1650. / 1E9
absorber_vmr['N2O'] = 306. / 1E9
absorber_vmr['O2'] = 0.21
absorber_vmr['CFC11'] = 0.
absorber_vmr['CFC12'] = 0.
absorber_vmr['CFC22'] = 0.
absorber_vmr['CCL4'] = 0.
# Ozone: start with all zeros, interpolate to data if we can
xTatm = Tatm.to_xarray()
O3 = 0. * xTatm
if ozone_file is not None:
ozonefilepath = os.path.join(os.path.dirname(__file__), 'data', 'ozone', ozone_file)
remotepath_http = 'http://thredds.atmos.albany.edu:8080/thredds/fileServer/CLIMLAB/ozone/' + ozone_file
remotepath_opendap = 'http://thredds.atmos.albany.edu:8080/thredds/dodsC/CLIMLAB/ozone/' + ozone_file
ozonedata, path = load_data_source(local_path=ozonefilepath,
remote_source_list=[remotepath_http, remotepath_opendap],
open_method=xr.open_dataset,
remote_kwargs={'engine':'pydap'},
verbose=verbose,)
## zonal and time average
ozone_zon = ozonedata.OZONE.mean(dim=('time','lon')).transpose('lat','lev')
if ('lat' in xTatm.dims):
O3source = ozone_zon
else:
weight = np.cos(np.deg2rad(ozonedata.lat))
ozone_global = (ozone_zon * weight).mean(dim='lat') / weight.mean(dim='lat')
O3source = ozone_global
try:
O3 = O3source.interp_like(xTatm)
# There will be NaNs for gridpoints outside the ozone file domain
assert not np.any(np.isnan(O3))
except:
warnings.warn('Some grid points are beyond the bounds of the ozone file. Ozone values will be extrapolated.')
try:
# passing fill_value=None to the underlying scipy interpolator
# will result in extrapolation instead of NaNs
O3 = O3source.interp_like(xTatm, kwargs={'fill_value':None})
assert not np.any(np.isnan(O3))
except:
warnings.warn('Interpolation of ozone data failed. Setting O3 to zero instead.')
O3 = 0. * xTatm
absorber_vmr['O3'] = O3.values
return absorber_vmr
|
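The cosine-latitude weighting used above for the global-mean ozone profile, isolated as a standalone sketch on toy xarray data:

import numpy as np
import xarray as xr

lat = xr.DataArray(np.arange(-85., 86., 10.), dims='lat', name='lat')
field = 300. + 40. * np.cos(np.deg2rad(lat))  # toy zonal-mean field
weight = np.cos(np.deg2rad(lat))              # area weight on the sphere
global_mean = (field * weight).mean(dim='lat') / weight.mean(dim='lat')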
[
"def",
"default_absorbers",
"(",
"Tatm",
",",
"ozone_file",
"=",
"'apeozone_cam3_5_54.nc'",
",",
"verbose",
"=",
"True",
",",
")",
":",
"absorber_vmr",
"=",
"{",
"}",
"absorber_vmr",
"[",
"'CO2'",
"]",
"=",
"348.",
"/",
"1E6",
"absorber_vmr",
"[",
"'CH4'",
"]",
"=",
"1650.",
"/",
"1E9",
"absorber_vmr",
"[",
"'N2O'",
"]",
"=",
"306.",
"/",
"1E9",
"absorber_vmr",
"[",
"'O2'",
"]",
"=",
"0.21",
"absorber_vmr",
"[",
"'CFC11'",
"]",
"=",
"0.",
"absorber_vmr",
"[",
"'CFC12'",
"]",
"=",
"0.",
"absorber_vmr",
"[",
"'CFC22'",
"]",
"=",
"0.",
"absorber_vmr",
"[",
"'CCL4'",
"]",
"=",
"0.",
"# Ozone: start with all zeros, interpolate to data if we can",
"xTatm",
"=",
"Tatm",
".",
"to_xarray",
"(",
")",
"O3",
"=",
"0.",
"*",
"xTatm",
"if",
"ozone_file",
"is",
"not",
"None",
":",
"ozonefilepath",
"=",
"os",
".",
"path",
".",
"join",
"(",
"os",
".",
"path",
".",
"dirname",
"(",
"__file__",
")",
",",
"'data'",
",",
"'ozone'",
",",
"ozone_file",
")",
"remotepath_http",
"=",
"'http://thredds.atmos.albany.edu:8080/thredds/fileServer/CLIMLAB/ozone/'",
"+",
"ozone_file",
"remotepath_opendap",
"=",
"'http://thredds.atmos.albany.edu:8080/thredds/dodsC/CLIMLAB/ozone/'",
"+",
"ozone_file",
"ozonedata",
",",
"path",
"=",
"load_data_source",
"(",
"local_path",
"=",
"ozonefilepath",
",",
"remote_source_list",
"=",
"[",
"remotepath_http",
",",
"remotepath_opendap",
"]",
",",
"open_method",
"=",
"xr",
".",
"open_dataset",
",",
"remote_kwargs",
"=",
"{",
"'engine'",
":",
"'pydap'",
"}",
",",
"verbose",
"=",
"verbose",
",",
")",
"## zonal and time average",
"ozone_zon",
"=",
"ozonedata",
".",
"OZONE",
".",
"mean",
"(",
"dim",
"=",
"(",
"'time'",
",",
"'lon'",
")",
")",
".",
"transpose",
"(",
"'lat'",
",",
"'lev'",
")",
"if",
"(",
"'lat'",
"in",
"xTatm",
".",
"dims",
")",
":",
"O3source",
"=",
"ozone_zon",
"else",
":",
"weight",
"=",
"np",
".",
"cos",
"(",
"np",
".",
"deg2rad",
"(",
"ozonedata",
".",
"lat",
")",
")",
"ozone_global",
"=",
"(",
"ozone_zon",
"*",
"weight",
")",
".",
"mean",
"(",
"dim",
"=",
"'lat'",
")",
"/",
"weight",
".",
"mean",
"(",
"dim",
"=",
"'lat'",
")",
"O3source",
"=",
"ozone_global",
"try",
":",
"O3",
"=",
"O3source",
".",
"interp_like",
"(",
"xTatm",
")",
"# There will be NaNs for gridpoints outside the ozone file domain",
"assert",
"not",
"np",
".",
"any",
"(",
"np",
".",
"isnan",
"(",
"O3",
")",
")",
"except",
":",
"warnings",
".",
"warn",
"(",
"'Some grid points are beyond the bounds of the ozone file. Ozone values will be extrapolated.'",
")",
"try",
":",
"# passing fill_value=None to the underlying scipy interpolator",
"# will result in extrapolation instead of NaNs",
"O3",
"=",
"O3source",
".",
"interp_like",
"(",
"xTatm",
",",
"kwargs",
"=",
"{",
"'fill_value'",
":",
"None",
"}",
")",
"assert",
"not",
"np",
".",
"any",
"(",
"np",
".",
"isnan",
"(",
"O3",
")",
")",
"except",
":",
"warnings",
".",
"warn",
"(",
"'Interpolation of ozone data failed. Setting O3 to zero instead.'",
")",
"O3",
"=",
"0.",
"*",
"xTatm",
"absorber_vmr",
"[",
"'O3'",
"]",
"=",
"O3",
".",
"values",
"return",
"absorber_vmr"
] |
Initialize a dictionary of well-mixed radiatively active gases
All values are volumetric mixing ratios.
Ozone is set to a climatology.
All other gases are assumed well-mixed:
- CO2
- CH4
- N2O
- O2
- CFC11
- CFC12
- CFC22
- CCL4
Specific values are based on the AquaPlanet Experiment protocols,
except for O2 which is set to the realistic value 0.21
(affects the RRTMG scheme).
|
[
"Initialize",
"a",
"dictionary",
"of",
"well",
"-",
"mixed",
"radiatively",
"active",
"gases",
"All",
"values",
"are",
"volumetric",
"mixing",
"ratios",
"."
] |
eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6
|
https://github.com/brian-rose/climlab/blob/eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6/climlab/radiation/radiation.py#L98-L166
|
train
|
brian-rose/climlab
|
climlab/radiation/radiation.py
|
init_interface
|
def init_interface(field):
'''Return a Field object defined at the vertical interfaces of the input Field object.'''
interface_shape = np.array(field.shape); interface_shape[-1] += 1
interfaces = np.tile(False,len(interface_shape)); interfaces[-1] = True
interface_zero = Field(np.zeros(interface_shape), domain=field.domain, interfaces=interfaces)
return interface_zero
|
python
|
def init_interface(field):
'''Return a Field object defined at the vertical interfaces of the input Field object.'''
interface_shape = np.array(field.shape); interface_shape[-1] += 1
interfaces = np.tile(False,len(interface_shape)); interfaces[-1] = True
interface_zero = Field(np.zeros(interface_shape), domain=field.domain, interfaces=interfaces)
return interface_zero
|
[
"def",
"init_interface",
"(",
"field",
")",
":",
"interface_shape",
"=",
"np",
".",
"array",
"(",
"field",
".",
"shape",
")",
"interface_shape",
"[",
"-",
"1",
"]",
"+=",
"1",
"interfaces",
"=",
"np",
".",
"tile",
"(",
"False",
",",
"len",
"(",
"interface_shape",
")",
")",
"interfaces",
"[",
"-",
"1",
"]",
"=",
"True",
"interface_zero",
"=",
"Field",
"(",
"np",
".",
"zeros",
"(",
"interface_shape",
")",
",",
"domain",
"=",
"field",
".",
"domain",
",",
"interfaces",
"=",
"interfaces",
")",
"return",
"interface_zero"
] |
Return a Field object defined at the vertical interfaces of the input Field object.
|
[
"Return",
"a",
"Field",
"object",
"defined",
"at",
"the",
"vertical",
"interfaces",
"of",
"the",
"input",
"Field",
"object",
"."
] |
eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6
|
https://github.com/brian-rose/climlab/blob/eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6/climlab/radiation/radiation.py#L168-L173
|
train
|
brian-rose/climlab
|
climlab/convection/akmaev_adjustment.py
|
convective_adjustment_direct
|
def convective_adjustment_direct(p, T, c, lapserate=6.5):
"""Convective Adjustment to a specified lapse rate.
Input argument lapserate gives the lapse rate expressed in degrees K per km
(positive means temperature increasing downward).
Default lapse rate is 6.5 K / km.
Returns the adjusted Column temperature.
inputs:
p is pressure in hPa
T is temperature in K
    c is heat capacity in J / m**2 / K
Implements the conservative adjustment algorithm from Akmaev (1991) MWR
"""
# largely follows notation and algorithm in Akmaev (1991) MWR
alpha = const.Rd / const.g * lapserate / 1.E3 # same dimensions as lapserate
L = p.size
### now handles variable lapse rate
pextended = np.insert(p,0,const.ps) # prepend const.ps = 1000 hPa as ref pressure to compute potential temperature
Pi = np.cumprod((p / pextended[:-1])**alpha) # Akmaev's equation 14 recurrence formula
beta = 1./Pi
theta = T * beta
q = Pi * c
n_k = np.zeros(L, dtype=np.int8)
theta_k = np.zeros_like(p)
s_k = np.zeros_like(p)
t_k = np.zeros_like(p)
thetaadj = Akmaev_adjustment_multidim(theta, q, beta, n_k,
theta_k, s_k, t_k)
T = thetaadj * Pi
return T
|
python
|
def convective_adjustment_direct(p, T, c, lapserate=6.5):
"""Convective Adjustment to a specified lapse rate.
Input argument lapserate gives the lapse rate expressed in degrees K per km
(positive means temperature increasing downward).
Default lapse rate is 6.5 K / km.
Returns the adjusted Column temperature.
inputs:
p is pressure in hPa
T is temperature in K
    c is heat capacity in J / m**2 / K
Implements the conservative adjustment algorithm from Akmaev (1991) MWR
"""
# largely follows notation and algorithm in Akmaev (1991) MWR
alpha = const.Rd / const.g * lapserate / 1.E3 # same dimensions as lapserate
L = p.size
### now handles variable lapse rate
pextended = np.insert(p,0,const.ps) # prepend const.ps = 1000 hPa as ref pressure to compute potential temperature
Pi = np.cumprod((p / pextended[:-1])**alpha) # Akmaev's equation 14 recurrence formula
beta = 1./Pi
theta = T * beta
q = Pi * c
n_k = np.zeros(L, dtype=np.int8)
theta_k = np.zeros_like(p)
s_k = np.zeros_like(p)
t_k = np.zeros_like(p)
thetaadj = Akmaev_adjustment_multidim(theta, q, beta, n_k,
theta_k, s_k, t_k)
T = thetaadj * Pi
return T
|
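The cumprod recurrence above telescopes to a closed form, which is easy to verify numerically (Rd = 287 J/kg/K and g = 9.81 m/s**2 assumed for the constants):

import numpy as np

alpha = 287. / 9.81 * 6.5 / 1.E3         # ~0.19 for a 6.5 K/km lapse rate
p = np.array([200., 500., 850., 1000.])  # hPa
pextended = np.insert(p, 0, 1000.)       # prepend the 1000 hPa reference
Pi = np.cumprod((p / pextended[:-1])**alpha)
# The pressure ratios telescope, so Pi is simply (p / 1000)**alpha:
assert np.allclose(Pi, (p / 1000.)**alpha)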
[
"def",
"convective_adjustment_direct",
"(",
"p",
",",
"T",
",",
"c",
",",
"lapserate",
"=",
"6.5",
")",
":",
"# largely follows notation and algorithm in Akmaev (1991) MWR",
"alpha",
"=",
"const",
".",
"Rd",
"/",
"const",
".",
"g",
"*",
"lapserate",
"/",
"1.E3",
"# same dimensions as lapserate",
"L",
"=",
"p",
".",
"size",
"### now handles variable lapse rate",
"pextended",
"=",
"np",
".",
"insert",
"(",
"p",
",",
"0",
",",
"const",
".",
"ps",
")",
"# prepend const.ps = 1000 hPa as ref pressure to compute potential temperature",
"Pi",
"=",
"np",
".",
"cumprod",
"(",
"(",
"p",
"/",
"pextended",
"[",
":",
"-",
"1",
"]",
")",
"**",
"alpha",
")",
"# Akmaev's equation 14 recurrence formula",
"beta",
"=",
"1.",
"/",
"Pi",
"theta",
"=",
"T",
"*",
"beta",
"q",
"=",
"Pi",
"*",
"c",
"n_k",
"=",
"np",
".",
"zeros",
"(",
"L",
",",
"dtype",
"=",
"np",
".",
"int8",
")",
"theta_k",
"=",
"np",
".",
"zeros_like",
"(",
"p",
")",
"s_k",
"=",
"np",
".",
"zeros_like",
"(",
"p",
")",
"t_k",
"=",
"np",
".",
"zeros_like",
"(",
"p",
")",
"thetaadj",
"=",
"Akmaev_adjustment_multidim",
"(",
"theta",
",",
"q",
",",
"beta",
",",
"n_k",
",",
"theta_k",
",",
"s_k",
",",
"t_k",
")",
"T",
"=",
"thetaadj",
"*",
"Pi",
"return",
"T"
] |
Convective Adjustment to a specified lapse rate.
Input argument lapserate gives the lapse rate expressed in degrees K per km
(positive means temperature increasing downward).
Default lapse rate is 6.5 K / km.
Returns the adjusted Column temperature.
inputs:
p is pressure in hPa
T is temperature in K
c is heat capacity in J / m**2 / K
Implements the conservative adjustment algorithm from Akmaev (1991) MWR
|
[
"Convective",
"Adjustment",
"to",
"a",
"specified",
"lapse",
"rate",
"."
] |
eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6
|
https://github.com/brian-rose/climlab/blob/eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6/climlab/convection/akmaev_adjustment.py#L7-L39
|
train
|
brian-rose/climlab
|
climlab/convection/akmaev_adjustment.py
|
Akmaev_adjustment
|
def Akmaev_adjustment(theta, q, beta, n_k, theta_k, s_k, t_k):
'''Single column only.'''
L = q.size # number of vertical levels
# Akmaev step 1
k = 1
n_k[k-1] = 1
theta_k[k-1] = theta[k-1]
l = 2
while True:
# Akmaev step 2
n = 1
thistheta = theta[l-1]
while True:
# Akmaev step 3
if theta_k[k-1] <= thistheta:
# Akmaev step 6
k += 1
break # to step 7
else:
if n <= 1:
s = q[l-1]
t = s*thistheta
# Akmaev step 4
if n_k[k-1] <= 1:
# lower adjacent level is not an earlier-formed neutral layer
s_k[k-1] = q[l-n-1]
t_k[k-1] = s_k[k-1] * theta_k[k-1]
# Akmaev step 5
# join current and underlying layers
n += n_k[k-1]
s += s_k[k-1]
t += t_k[k-1]
s_k[k-1] = s
t_k[k-1] = t
thistheta = t/s
if k==1:
# joint neutral layer is the first one
break # to step 7
k -= 1
# back to step 3
# Akmaev step 7
if l == L: # the scan is over
break # to step 8
l += 1
n_k[k-1] = n
theta_k[k-1] = thistheta
# back to step 2
# update the potential temperatures
while True:
while True:
# Akmaev step 8
if n==1: # current model level was not included in any neutral layer
break # to step 11
while True:
# Akmaev step 9
theta[l-1] = thistheta
if n==1:
break
# Akmaev step 10
l -= 1
n -= 1
# back to step 9
# Akmaev step 11
if k==1:
break
k -= 1
l -= 1
n = n_k[k-1]
thistheta = theta_k[k-1]
# back to step 8
return theta
|
python
|
def Akmaev_adjustment(theta, q, beta, n_k, theta_k, s_k, t_k):
'''Single column only.'''
L = q.size # number of vertical levels
# Akmaev step 1
k = 1
n_k[k-1] = 1
theta_k[k-1] = theta[k-1]
l = 2
while True:
# Akmaev step 2
n = 1
thistheta = theta[l-1]
while True:
# Akmaev step 3
if theta_k[k-1] <= thistheta:
# Akmaev step 6
k += 1
break # to step 7
else:
if n <= 1:
s = q[l-1]
t = s*thistheta
# Akmaev step 4
if n_k[k-1] <= 1:
# lower adjacent level is not an earlier-formed neutral layer
s_k[k-1] = q[l-n-1]
t_k[k-1] = s_k[k-1] * theta_k[k-1]
# Akmaev step 5
# join current and underlying layers
n += n_k[k-1]
s += s_k[k-1]
t += t_k[k-1]
s_k[k-1] = s
t_k[k-1] = t
thistheta = t/s
if k==1:
# joint neutral layer is the first one
break # to step 7
k -= 1
# back to step 3
# Akmaev step 7
if l == L: # the scan is over
break # to step 8
l += 1
n_k[k-1] = n
theta_k[k-1] = thistheta
# back to step 2
# update the potential temperatures
while True:
while True:
# Akmaev step 8
if n==1: # current model level was not included in any neutral layer
break # to step 11
while True:
# Akmaev step 9
theta[l-1] = thistheta
if n==1:
break
# Akmaev step 10
l -= 1
n -= 1
# back to step 9
# Akmaev step 11
if k==1:
break
k -= 1
l -= 1
n = n_k[k-1]
thistheta = theta_k[k-1]
# back to step 8
return theta
|
[
"def",
"Akmaev_adjustment",
"(",
"theta",
",",
"q",
",",
"beta",
",",
"n_k",
",",
"theta_k",
",",
"s_k",
",",
"t_k",
")",
":",
"L",
"=",
"q",
".",
"size",
"# number of vertical levels",
"# Akmaev step 1",
"k",
"=",
"1",
"n_k",
"[",
"k",
"-",
"1",
"]",
"=",
"1",
"theta_k",
"[",
"k",
"-",
"1",
"]",
"=",
"theta",
"[",
"k",
"-",
"1",
"]",
"l",
"=",
"2",
"while",
"True",
":",
"# Akmaev step 2",
"n",
"=",
"1",
"thistheta",
"=",
"theta",
"[",
"l",
"-",
"1",
"]",
"while",
"True",
":",
"# Akmaev step 3",
"if",
"theta_k",
"[",
"k",
"-",
"1",
"]",
"<=",
"thistheta",
":",
"# Akmaev step 6",
"k",
"+=",
"1",
"break",
"# to step 7",
"else",
":",
"if",
"n",
"<=",
"1",
":",
"s",
"=",
"q",
"[",
"l",
"-",
"1",
"]",
"t",
"=",
"s",
"*",
"thistheta",
"# Akmaev step 4",
"if",
"n_k",
"[",
"k",
"-",
"1",
"]",
"<=",
"1",
":",
"# lower adjacent level is not an earlier-formed neutral layer",
"s_k",
"[",
"k",
"-",
"1",
"]",
"=",
"q",
"[",
"l",
"-",
"n",
"-",
"1",
"]",
"t_k",
"[",
"k",
"-",
"1",
"]",
"=",
"s_k",
"[",
"k",
"-",
"1",
"]",
"*",
"theta_k",
"[",
"k",
"-",
"1",
"]",
"# Akmaev step 5",
"# join current and underlying layers",
"n",
"+=",
"n_k",
"[",
"k",
"-",
"1",
"]",
"s",
"+=",
"s_k",
"[",
"k",
"-",
"1",
"]",
"t",
"+=",
"t_k",
"[",
"k",
"-",
"1",
"]",
"s_k",
"[",
"k",
"-",
"1",
"]",
"=",
"s",
"t_k",
"[",
"k",
"-",
"1",
"]",
"=",
"t",
"thistheta",
"=",
"t",
"/",
"s",
"if",
"k",
"==",
"1",
":",
"# joint neutral layer is the first one",
"break",
"# to step 7",
"k",
"-=",
"1",
"# back to step 3",
"# Akmaev step 7",
"if",
"l",
"==",
"L",
":",
"# the scan is over",
"break",
"# to step 8",
"l",
"+=",
"1",
"n_k",
"[",
"k",
"-",
"1",
"]",
"=",
"n",
"theta_k",
"[",
"k",
"-",
"1",
"]",
"=",
"thistheta",
"# back to step 2",
"# update the potential temperatures",
"while",
"True",
":",
"while",
"True",
":",
"# Akmaev step 8",
"if",
"n",
"==",
"1",
":",
"# current model level was not included in any neutral layer",
"break",
"# to step 11",
"while",
"True",
":",
"# Akmaev step 9",
"theta",
"[",
"l",
"-",
"1",
"]",
"=",
"thistheta",
"if",
"n",
"==",
"1",
":",
"break",
"# Akmaev step 10",
"l",
"-=",
"1",
"n",
"-=",
"1",
"# back to step 9",
"# Akmaev step 11",
"if",
"k",
"==",
"1",
":",
"break",
"k",
"-=",
"1",
"l",
"-=",
"1",
"n",
"=",
"n_k",
"[",
"k",
"-",
"1",
"]",
"thistheta",
"=",
"theta_k",
"[",
"k",
"-",
"1",
"]",
"# back to step 8",
"return",
"theta"
] |
Single column only.
|
[
"Single",
"column",
"only",
"."
] |
eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6
|
https://github.com/brian-rose/climlab/blob/eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6/climlab/convection/akmaev_adjustment.py#L58-L129
|
train
|
brian-rose/climlab
|
climlab/model/column.py
|
GreyRadiationModel.do_diagnostics
|
def do_diagnostics(self):
'''Set all the diagnostics from long and shortwave radiation.'''
self.OLR = self.subprocess['LW'].flux_to_space
self.LW_down_sfc = self.subprocess['LW'].flux_to_sfc
self.LW_up_sfc = self.subprocess['LW'].flux_from_sfc
self.LW_absorbed_sfc = self.LW_down_sfc - self.LW_up_sfc
self.LW_absorbed_atm = self.subprocess['LW'].absorbed
self.LW_emission = self.subprocess['LW'].emission
# contributions to OLR from surface and atm. levels
#self.diagnostics['OLR_sfc'] = self.flux['sfc2space']
#self.diagnostics['OLR_atm'] = self.flux['atm2space']
self.ASR = (self.subprocess['SW'].flux_from_space -
self.subprocess['SW'].flux_to_space)
#self.SW_absorbed_sfc = (self.subprocess['surface'].SW_from_atm -
# self.subprocess['surface'].SW_to_atm)
self.SW_absorbed_atm = self.subprocess['SW'].absorbed
self.SW_down_sfc = self.subprocess['SW'].flux_to_sfc
self.SW_up_sfc = self.subprocess['SW'].flux_from_sfc
self.SW_absorbed_sfc = self.SW_down_sfc - self.SW_up_sfc
self.SW_up_TOA = self.subprocess['SW'].flux_to_space
self.SW_down_TOA = self.subprocess['SW'].flux_from_space
self.planetary_albedo = (self.subprocess['SW'].flux_to_space /
self.subprocess['SW'].flux_from_space)
|
python
|
def do_diagnostics(self):
'''Set all the diagnostics from long and shortwave radiation.'''
self.OLR = self.subprocess['LW'].flux_to_space
self.LW_down_sfc = self.subprocess['LW'].flux_to_sfc
self.LW_up_sfc = self.subprocess['LW'].flux_from_sfc
self.LW_absorbed_sfc = self.LW_down_sfc - self.LW_up_sfc
self.LW_absorbed_atm = self.subprocess['LW'].absorbed
self.LW_emission = self.subprocess['LW'].emission
# contributions to OLR from surface and atm. levels
#self.diagnostics['OLR_sfc'] = self.flux['sfc2space']
#self.diagnostics['OLR_atm'] = self.flux['atm2space']
self.ASR = (self.subprocess['SW'].flux_from_space -
self.subprocess['SW'].flux_to_space)
#self.SW_absorbed_sfc = (self.subprocess['surface'].SW_from_atm -
# self.subprocess['surface'].SW_to_atm)
self.SW_absorbed_atm = self.subprocess['SW'].absorbed
self.SW_down_sfc = self.subprocess['SW'].flux_to_sfc
self.SW_up_sfc = self.subprocess['SW'].flux_from_sfc
self.SW_absorbed_sfc = self.SW_down_sfc - self.SW_up_sfc
self.SW_up_TOA = self.subprocess['SW'].flux_to_space
self.SW_down_TOA = self.subprocess['SW'].flux_from_space
self.planetary_albedo = (self.subprocess['SW'].flux_to_space /
self.subprocess['SW'].flux_from_space)
|
[
"def",
"do_diagnostics",
"(",
"self",
")",
":",
"self",
".",
"OLR",
"=",
"self",
".",
"subprocess",
"[",
"'LW'",
"]",
".",
"flux_to_space",
"self",
".",
"LW_down_sfc",
"=",
"self",
".",
"subprocess",
"[",
"'LW'",
"]",
".",
"flux_to_sfc",
"self",
".",
"LW_up_sfc",
"=",
"self",
".",
"subprocess",
"[",
"'LW'",
"]",
".",
"flux_from_sfc",
"self",
".",
"LW_absorbed_sfc",
"=",
"self",
".",
"LW_down_sfc",
"-",
"self",
".",
"LW_up_sfc",
"self",
".",
"LW_absorbed_atm",
"=",
"self",
".",
"subprocess",
"[",
"'LW'",
"]",
".",
"absorbed",
"self",
".",
"LW_emission",
"=",
"self",
".",
"subprocess",
"[",
"'LW'",
"]",
".",
"emission",
"# contributions to OLR from surface and atm. levels",
"#self.diagnostics['OLR_sfc'] = self.flux['sfc2space']",
"#self.diagnostics['OLR_atm'] = self.flux['atm2space']",
"self",
".",
"ASR",
"=",
"(",
"self",
".",
"subprocess",
"[",
"'SW'",
"]",
".",
"flux_from_space",
"-",
"self",
".",
"subprocess",
"[",
"'SW'",
"]",
".",
"flux_to_space",
")",
"#self.SW_absorbed_sfc = (self.subprocess['surface'].SW_from_atm -",
"# self.subprocess['surface'].SW_to_atm)",
"self",
".",
"SW_absorbed_atm",
"=",
"self",
".",
"subprocess",
"[",
"'SW'",
"]",
".",
"absorbed",
"self",
".",
"SW_down_sfc",
"=",
"self",
".",
"subprocess",
"[",
"'SW'",
"]",
".",
"flux_to_sfc",
"self",
".",
"SW_up_sfc",
"=",
"self",
".",
"subprocess",
"[",
"'SW'",
"]",
".",
"flux_from_sfc",
"self",
".",
"SW_absorbed_sfc",
"=",
"self",
".",
"SW_down_sfc",
"-",
"self",
".",
"SW_up_sfc",
"self",
".",
"SW_up_TOA",
"=",
"self",
".",
"subprocess",
"[",
"'SW'",
"]",
".",
"flux_to_space",
"self",
".",
"SW_down_TOA",
"=",
"self",
".",
"subprocess",
"[",
"'SW'",
"]",
".",
"flux_from_space",
"self",
".",
"planetary_albedo",
"=",
"(",
"self",
".",
"subprocess",
"[",
"'SW'",
"]",
".",
"flux_to_space",
"/",
"self",
".",
"subprocess",
"[",
"'SW'",
"]",
".",
"flux_from_space",
")"
] |
Set all the diagnostics from long and shortwave radiation.
|
[
"Set",
"all",
"the",
"diagnostics",
"from",
"long",
"and",
"shortwave",
"radiation",
"."
] |
eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6
|
https://github.com/brian-rose/climlab/blob/eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6/climlab/model/column.py#L119-L141
|
train
|
brian-rose/climlab
|
climlab/utils/thermo.py
|
clausius_clapeyron
|
def clausius_clapeyron(T):
"""Compute saturation vapor pressure as function of temperature T.
Input: T is temperature in Kelvin
Output: saturation vapor pressure in mb or hPa
    Formula from Rogers and Yau "A Short Course in Cloud Physics" (Pergamon Press), p. 16
    claimed to be accurate to within 0.1% between -30 degC and 35 degC
Based on the paper by Bolton (1980, Monthly Weather Review).
"""
Tcel = T - tempCtoK
es = 6.112 * exp(17.67*Tcel/(Tcel+243.5))
return es
|
python
|
def clausius_clapeyron(T):
"""Compute saturation vapor pressure as function of temperature T.
Input: T is temperature in Kelvin
Output: saturation vapor pressure in mb or hPa
    Formula from Rogers and Yau "A Short Course in Cloud Physics" (Pergamon Press), p. 16
    claimed to be accurate to within 0.1% between -30 degC and 35 degC
Based on the paper by Bolton (1980, Monthly Weather Review).
"""
Tcel = T - tempCtoK
es = 6.112 * exp(17.67*Tcel/(Tcel+243.5))
return es
|
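A quick sanity check of the Bolton formula at T = 300 K (tempCtoK = 273.15 assumed for the module constant):

from numpy import exp

T = 300.  # K, i.e. 26.85 degC
Tcel = T - 273.15
es = 6.112 * exp(17.67 * Tcel / (Tcel + 243.5))
print(round(es, 1))  # ~35.3 hPa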
[
"def",
"clausius_clapeyron",
"(",
"T",
")",
":",
"Tcel",
"=",
"T",
"-",
"tempCtoK",
"es",
"=",
"6.112",
"*",
"exp",
"(",
"17.67",
"*",
"Tcel",
"/",
"(",
"Tcel",
"+",
"243.5",
")",
")",
"return",
"es"
] |
Compute saturation vapor pressure as function of temperature T.
Input: T is temperature in Kelvin
Output: saturation vapor pressure in mb or hPa
Formula from Rogers and Yau "A Short Course in Cloud Physics" (Pergamon Press), p. 16
claimed to be accurate to within 0.1% between -30 degC and 35 degC
Based on the paper by Bolton (1980, Monthly Weather Review).
|
[
"Compute",
"saturation",
"vapor",
"pressure",
"as",
"function",
"of",
"temperature",
"T",
"."
] |
eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6
|
https://github.com/brian-rose/climlab/blob/eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6/climlab/utils/thermo.py#L41-L54
|
train
|
brian-rose/climlab
|
climlab/utils/thermo.py
|
qsat
|
def qsat(T,p):
"""Compute saturation specific humidity as function of temperature and pressure.
Input: T is temperature in Kelvin
p is pressure in hPa or mb
Output: saturation specific humidity (dimensionless).
"""
es = clausius_clapeyron(T)
q = eps * es / (p - (1 - eps) * es )
return q
|
python
|
def qsat(T,p):
"""Compute saturation specific humidity as function of temperature and pressure.
Input: T is temperature in Kelvin
p is pressure in hPa or mb
Output: saturation specific humidity (dimensionless).
"""
es = clausius_clapeyron(T)
q = eps * es / (p - (1 - eps) * es )
return q
|
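Continuing the same worked numbers at T = 300 K and p = 1000 hPa (eps = Rd/Rv ~ 0.622 assumed for the module constant):

eps = 0.622  # Rd/Rv
es = 35.3    # hPa, clausius_clapeyron(300.) from the check above
q = eps * es / (1000. - (1 - eps) * es)
print(round(q, 4))  # ~0.0223 (dimensionless)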
[
"def",
"qsat",
"(",
"T",
",",
"p",
")",
":",
"es",
"=",
"clausius_clapeyron",
"(",
"T",
")",
"q",
"=",
"eps",
"*",
"es",
"/",
"(",
"p",
"-",
"(",
"1",
"-",
"eps",
")",
"*",
"es",
")",
"return",
"q"
] |
Compute saturation specific humidity as function of temperature and pressure.
Input: T is temperature in Kelvin
p is pressure in hPa or mb
Output: saturation specific humidity (dimensionless).
|
[
"Compute",
"saturation",
"specific",
"humidity",
"as",
"function",
"of",
"temperature",
"and",
"pressure",
"."
] |
eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6
|
https://github.com/brian-rose/climlab/blob/eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6/climlab/utils/thermo.py#L56-L66
|
train
|
brian-rose/climlab
|
climlab/utils/thermo.py
|
pseudoadiabat
|
def pseudoadiabat(T,p):
"""Compute the local slope of the pseudoadiabat at given temperature and pressure
Inputs: p is pressure in hPa or mb
T is local temperature in Kelvin
Output: dT/dp, the rate of temperature change for pseudoadiabatic ascent
in units of K / hPa
The pseudoadiabat describes changes in temperature and pressure for an air
parcel at saturation assuming instantaneous rain-out of the super-saturated water
Formula consistent with eq. (2.33) from Raymond Pierrehumbert, "Principles of Planetary Climate"
which nominally accounts for non-dilute effects, but computes the derivative
dT/dpa, where pa is the partial pressure of the non-condensible gas.
Integrating the result dT/dp treating p as total pressure effectively makes the dilute assumption.
"""
esoverp = clausius_clapeyron(T) / p
Tcel = T - tempCtoK
L = (2.501 - 0.00237 * Tcel) * 1.E6 # Accurate form of latent heat of vaporization in J/kg
ratio = L / T / Rv
dTdp = (T / p * kappa * (1 + esoverp * ratio) /
(1 + kappa * (cpv / Rv + (ratio-1) * ratio) * esoverp))
return dTdp
|
python
|
def pseudoadiabat(T,p):
"""Compute the local slope of the pseudoadiabat at given temperature and pressure
Inputs: p is pressure in hPa or mb
T is local temperature in Kelvin
Output: dT/dp, the rate of temperature change for pseudoadiabatic ascent
in units of K / hPa
The pseudoadiabat describes changes in temperature and pressure for an air
parcel at saturation assuming instantaneous rain-out of the super-saturated water
Formula consistent with eq. (2.33) from Raymond Pierrehumbert, "Principles of Planetary Climate"
which nominally accounts for non-dilute effects, but computes the derivative
dT/dpa, where pa is the partial pressure of the non-condensible gas.
Integrating the result dT/dp treating p as total pressure effectively makes the dilute assumption.
"""
esoverp = clausius_clapeyron(T) / p
Tcel = T - tempCtoK
L = (2.501 - 0.00237 * Tcel) * 1.E6 # Accurate form of latent heat of vaporization in J/kg
ratio = L / T / Rv
dTdp = (T / p * kappa * (1 + esoverp * ratio) /
(1 + kappa * (cpv / Rv + (ratio-1) * ratio) * esoverp))
return dTdp
|
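A hedged integration sketch: stepping a saturated parcel from 400 hPa down to 1000 hPa with forward Euler (assumes `pseudoadiabat` and its module constants are in scope; the step count is for illustration only):

import numpy as np

p_levels = np.linspace(400., 1000., 61)  # hPa
T = 250.                                 # K, parcel temperature at 400 hPa
for p0, p1 in zip(p_levels[:-1], p_levels[1:]):
    T += pseudoadiabat(T, p0) * (p1 - p0)  # dT = (dT/dp) * dp
# T is now the parcel temperature at 1000 hPa along the pseudoadiabat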
[
"def",
"pseudoadiabat",
"(",
"T",
",",
"p",
")",
":",
"esoverp",
"=",
"clausius_clapeyron",
"(",
"T",
")",
"/",
"p",
"Tcel",
"=",
"T",
"-",
"tempCtoK",
"L",
"=",
"(",
"2.501",
"-",
"0.00237",
"*",
"Tcel",
")",
"*",
"1.E6",
"# Accurate form of latent heat of vaporization in J/kg",
"ratio",
"=",
"L",
"/",
"T",
"/",
"Rv",
"dTdp",
"=",
"(",
"T",
"/",
"p",
"*",
"kappa",
"*",
"(",
"1",
"+",
"esoverp",
"*",
"ratio",
")",
"/",
"(",
"1",
"+",
"kappa",
"*",
"(",
"cpv",
"/",
"Rv",
"+",
"(",
"ratio",
"-",
"1",
")",
"*",
"ratio",
")",
"*",
"esoverp",
")",
")",
"return",
"dTdp"
] |
Compute the local slope of the pseudoadiabat at given temperature and pressure
Inputs: p is pressure in hPa or mb
T is local temperature in Kelvin
Output: dT/dp, the rate of temperature change for pseudoadiabatic ascent
in units of K / hPa
The pseudoadiabat describes changes in temperature and pressure for an air
parcel at saturation assuming instantaneous rain-out of the super-saturated water
Formula consistent with eq. (2.33) from Raymond Pierrehumbert, "Principles of Planetary Climate"
which nominally accounts for non-dilute effects, but computes the derivative
dT/dpa, where pa is the partial pressure of the non-condensible gas.
Integrating the result dT/dp treating p as total pressure effectively makes the dilute assumption.
|
[
"Compute",
"the",
"local",
"slope",
"of",
"the",
"pseudoadiabat",
"at",
"given",
"temperature",
"and",
"pressure"
] |
eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6
|
https://github.com/brian-rose/climlab/blob/eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6/climlab/utils/thermo.py#L101-L124
|
train
|
brian-rose/climlab
|
climlab/dynamics/diffusion.py
|
_solve_implicit_banded
|
def _solve_implicit_banded(current, banded_matrix):
"""Uses a banded solver for matrix inversion of a tridiagonal matrix.
Converts the complete listed tridiagonal matrix *(nxn)* into a three row
matrix *(3xn)* and calls :py:func:`scipy.linalg.solve_banded()`.
:param array current: the current state of the variable for which
matrix inversion should be computed
:param array banded_matrix: complete diffusion matrix (*dimension: nxn*)
:returns: output of :py:func:`scipy.linalg.solve_banded()`
:rtype: array
"""
# can improve performance by storing the banded form once and not
# recalculating it...
# but whatever
J = banded_matrix.shape[0]
diag = np.zeros((3, J))
diag[1, :] = np.diag(banded_matrix, k=0)
diag[0, 1:] = np.diag(banded_matrix, k=1)
diag[2, :-1] = np.diag(banded_matrix, k=-1)
return solve_banded((1, 1), diag, current)
|
python
|
def _solve_implicit_banded(current, banded_matrix):
"""Uses a banded solver for matrix inversion of a tridiagonal matrix.
Converts the complete listed tridiagonal matrix *(nxn)* into a three row
matrix *(3xn)* and calls :py:func:`scipy.linalg.solve_banded()`.
:param array current: the current state of the variable for which
matrix inversion should be computed
:param array banded_matrix: complete diffusion matrix (*dimension: nxn*)
:returns: output of :py:func:`scipy.linalg.solve_banded()`
:rtype: array
"""
# can improve performance by storing the banded form once and not
# recalculating it...
# but whatever
J = banded_matrix.shape[0]
diag = np.zeros((3, J))
diag[1, :] = np.diag(banded_matrix, k=0)
diag[0, 1:] = np.diag(banded_matrix, k=1)
diag[2, :-1] = np.diag(banded_matrix, k=-1)
return solve_banded((1, 1), diag, current)
|
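A quick consistency check against a dense solve on a toy tridiagonal system (the function above is assumed in scope):

import numpy as np

n = 5
A = (np.diag(2. * np.ones(n)) +
     np.diag(-1. * np.ones(n - 1), k=1) +
     np.diag(-1. * np.ones(n - 1), k=-1))
b = np.arange(1., n + 1.)
x = _solve_implicit_banded(b, A)
assert np.allclose(x, np.linalg.solve(A, b))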
[
"def",
"_solve_implicit_banded",
"(",
"current",
",",
"banded_matrix",
")",
":",
"# can improve performance by storing the banded form once and not",
"# recalculating it...",
"# but whatever",
"J",
"=",
"banded_matrix",
".",
"shape",
"[",
"0",
"]",
"diag",
"=",
"np",
".",
"zeros",
"(",
"(",
"3",
",",
"J",
")",
")",
"diag",
"[",
"1",
",",
":",
"]",
"=",
"np",
".",
"diag",
"(",
"banded_matrix",
",",
"k",
"=",
"0",
")",
"diag",
"[",
"0",
",",
"1",
":",
"]",
"=",
"np",
".",
"diag",
"(",
"banded_matrix",
",",
"k",
"=",
"1",
")",
"diag",
"[",
"2",
",",
":",
"-",
"1",
"]",
"=",
"np",
".",
"diag",
"(",
"banded_matrix",
",",
"k",
"=",
"-",
"1",
")",
"return",
"solve_banded",
"(",
"(",
"1",
",",
"1",
")",
",",
"diag",
",",
"current",
")"
] |
Uses a banded solver for matrix inversion of a tridiagonal matrix.
Converts the complete listed tridiagonal matrix *(nxn)* into a three row
matrix *(3xn)* and calls :py:func:`scipy.linalg.solve_banded()`.
:param array current: the current state of the variable for which
matrix inversion should be computed
:param array banded_matrix: complete diffusion matrix (*dimension: nxn*)
:returns: output of :py:func:`scipy.linalg.solve_banded()`
:rtype: array
|
[
"Uses",
"a",
"banded",
"solver",
"for",
"matrix",
"inversion",
"of",
"a",
"tridiagonal",
"matrix",
"."
] |
eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6
|
https://github.com/brian-rose/climlab/blob/eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6/climlab/dynamics/diffusion.py#L360-L381
|
train
|
brian-rose/climlab
|
climlab/dynamics/diffusion.py
|
_guess_diffusion_axis
|
def _guess_diffusion_axis(process_or_domain):
"""Scans given process, domain or dictionary of domains for a diffusion axis
and returns appropriate name.
In case only one axis with length > 1 in the process or set of domains
exists, the name of that axis is returned. Otherwise an error is raised.
:param process_or_domain: input from which the diffusion axis should be guessed
:type process_or_domain: :class:`~climlab.process.process.Process`,
:class:`~climlab.domain.domain._Domain` or
:py:class:`dict` of domains
:raises: :exc:`ValueError` if more than one diffusion axis is possible.
:returns: name of the diffusion axis
:rtype: str
"""
axes = get_axes(process_or_domain)
diff_ax = {}
for axname, ax in axes.items():
if ax.num_points > 1:
diff_ax.update({axname: ax})
if len(list(diff_ax.keys())) == 1:
return list(diff_ax.keys())[0]
else:
raise ValueError('More than one possible diffusion axis.')
|
python
|
def _guess_diffusion_axis(process_or_domain):
"""Scans given process, domain or dictionary of domains for a diffusion axis
and returns the appropriate name.
In case only one axis with length > 1 in the process or set of domains
exists, the name of that axis is returned. Otherwise an error is raised.
:param process_or_domain: input from which the diffusion axis should be guessed
:type process_or_domain: :class:`~climlab.process.process.Process`,
:class:`~climlab.domain.domain._Domain` or
:py:class:`dict` of domains
:raises: :exc:`ValueError` if more than one diffusion axis is possible.
:returns: name of the diffusion axis
:rtype: str
"""
axes = get_axes(process_or_domain)
diff_ax = {}
for axname, ax in axes.items():
if ax.num_points > 1:
diff_ax.update({axname: ax})
if len(list(diff_ax.keys())) == 1:
return list(diff_ax.keys())[0]
else:
raise ValueError('More than one possible diffusion axis.')
|
[
"def",
"_guess_diffusion_axis",
"(",
"process_or_domain",
")",
":",
"axes",
"=",
"get_axes",
"(",
"process_or_domain",
")",
"diff_ax",
"=",
"{",
"}",
"for",
"axname",
",",
"ax",
"in",
"axes",
".",
"items",
"(",
")",
":",
"if",
"ax",
".",
"num_points",
">",
"1",
":",
"diff_ax",
".",
"update",
"(",
"{",
"axname",
":",
"ax",
"}",
")",
"if",
"len",
"(",
"list",
"(",
"diff_ax",
".",
"keys",
"(",
")",
")",
")",
"==",
"1",
":",
"return",
"list",
"(",
"diff_ax",
".",
"keys",
"(",
")",
")",
"[",
"0",
"]",
"else",
":",
"raise",
"ValueError",
"(",
"'More than one possible diffusion axis.'",
")"
] |
Scans given process, domain or dictionary of domains for a diffusion axis
and returns the appropriate name.
In case only one axis with length > 1 in the process or set of domains
exists, the name of that axis is returned. Otherwise an error is raised.
:param process_or_domain: input from which the diffusion axis should be guessed
:type process_or_domain: :class:`~climlab.process.process.Process`,
:class:`~climlab.domain.domain._Domain` or
:py:class:`dict` of domains
:raises: :exc:`ValueError` if more than one diffusion axis is possible.
:returns: name of the diffusion axis
:rtype: str
|
[
"Scans",
"given",
"process",
"domain",
"or",
"dictionary",
"of",
"domains",
"for",
"a",
"diffusion",
"axis",
"and",
"returns",
"appropriate",
"name",
"."
] |
eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6
|
https://github.com/brian-rose/climlab/blob/eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6/climlab/dynamics/diffusion.py#L384-L408
|
train
|
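The selection rule above is easy to isolate: keep every axis with more than one grid point and demand a unique survivor. A standalone sketch of that rule, using types.SimpleNamespace as a stand-in for climlab's axis objects (the axis names and sizes here are invented):

from types import SimpleNamespace

def guess_diffusion_axis(axes):
    # Keep only axes that actually have spatial extent.
    candidates = [name for name, ax in axes.items() if ax.num_points > 1]
    if len(candidates) == 1:
        return candidates[0]
    raise ValueError('More than one possible diffusion axis.')

axes = {'lat': SimpleNamespace(num_points=90),
        'depth': SimpleNamespace(num_points=1)}
print(guess_diffusion_axis(axes))   # -> 'lat'

Note that, like the original, this raises the same error when no axis qualifies at all, so the message is slightly misleading in that corner case.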
brian-rose/climlab
|
climlab/dynamics/diffusion.py
|
Diffusion._implicit_solver
|
def _implicit_solver(self):
"""Invertes and solves the matrix problem for diffusion matrix
and temperature T.
The method is called by the
:func:`~climlab.process.implicit.ImplicitProcess._compute()` function
of the :class:`~climlab.process.implicit.ImplicitProcess` class and
solves the matrix problem
.. math::
A \\cdot T_{\\textrm{new}} = T_{\\textrm{old}}
for diffusion matrix A and corresponding temperatures.
:math:`T_{\\textrm{old}}` is in this case the current state variable
which already has been adjusted by the explicit processes.
:math:`T_{\\textrm{new}}` is the new state of the variable. To
derive the temperature tendency of the diffusion process the adjustment
has to be calculated and multiplied by the timestep, which is done by
the :func:`~climlab.process.implicit.ImplicitProcess._compute()`
function of the :class:`~climlab.process.implicit.ImplicitProcess`
class.
This method calculates the matrix inversion for every state variable
and calls either :func:`_solve_implicit_banded()` or
:py:func:`numpy.linalg.solve()`, depending on the flag
``self.use_banded_solver``.
:ivar dict state: method uses current state variables
but does not modify them
:ivar bool use_banded_solver: input flag whether to use
:func:`_solve_implicit_banded()` or
:py:func:`numpy.linalg.solve()` to do
the matrix inversion
:ivar array _diffTriDiag: the diffusion matrix which is given
with the current state variable to
the method solving the matrix problem
"""
#if self.update_diffusivity:
# Time-stepping the diffusion is just inverting this matrix problem:
newstate = {}
for varname, value in self.state.items():
if self.use_banded_solver:
newvar = _solve_implicit_banded(value, self._diffTriDiag)
else:
newvar = np.linalg.solve(self._diffTriDiag, value)
newstate[varname] = newvar
return newstate
|
python
|
def _implicit_solver(self):
"""Invertes and solves the matrix problem for diffusion matrix
and temperature T.
The method is called by the
:func:`~climlab.process.implicit.ImplicitProcess._compute()` function
of the :class:`~climlab.process.implicit.ImplicitProcess` class and
solves the matrix problem
.. math::
A \\cdot T_{\\textrm{new}} = T_{\\textrm{old}}
for diffusion matrix A and corresponding temperatures.
:math:`T_{\\textrm{old}}` is in this case the current state variable
which already has been adjusted by the explicit processes.
:math:`T_{\\textrm{new}}` is the new state of the variable. To
derive the temperature tendency of the diffusion process the adjustment
has to be calculated and multiplied by the timestep, which is done by
the :func:`~climlab.process.implicit.ImplicitProcess._compute()`
function of the :class:`~climlab.process.implicit.ImplicitProcess`
class.
This method calculates the matrix inversion for every state variable
and calls either :func:`_solve_implicit_banded()` or
:py:func:`numpy.linalg.solve()`, depending on the flag
``self.use_banded_solver``.
:ivar dict state: method uses current state variables
but does not modify them
:ivar bool use_banded_solver: input flag whether to use
:func:`_solve_implicit_banded()` or
:py:func:`numpy.linalg.solve()` to do
the matrix inversion
:ivar array _diffTriDiag: the diffusion matrix which is given
with the current state variable to
the method solving the matrix problem
"""
#if self.update_diffusivity:
# Time-stepping the diffusion is just inverting this matrix problem:
newstate = {}
for varname, value in self.state.items():
if self.use_banded_solver:
newvar = _solve_implicit_banded(value, self._diffTriDiag)
else:
newvar = np.linalg.solve(self._diffTriDiag, value)
newstate[varname] = newvar
return newstate
|
[
"def",
"_implicit_solver",
"(",
"self",
")",
":",
"#if self.update_diffusivity:",
"# Time-stepping the diffusion is just inverting this matrix problem:",
"newstate",
"=",
"{",
"}",
"for",
"varname",
",",
"value",
"in",
"self",
".",
"state",
".",
"items",
"(",
")",
":",
"if",
"self",
".",
"use_banded_solver",
":",
"newvar",
"=",
"_solve_implicit_banded",
"(",
"value",
",",
"self",
".",
"_diffTriDiag",
")",
"else",
":",
"newvar",
"=",
"np",
".",
"linalg",
".",
"solve",
"(",
"self",
".",
"_diffTriDiag",
",",
"value",
")",
"newstate",
"[",
"varname",
"]",
"=",
"newvar",
"return",
"newstate"
] |
Inverts and solves the matrix problem for diffusion matrix
and temperature T.
The method is called by the
:func:`~climlab.process.implicit.ImplicitProcess._compute()` function
of the :class:`~climlab.process.implicit.ImplicitProcess` class and
solves the matrix problem
.. math::
A \\cdot T_{\\textrm{new}} = T_{\\textrm{old}}
for diffusion matrix A and corresponding temperatures.
:math:`T_{\\textrm{old}}` is in this case the current state variable
which already has been adjusted by the explicit processes.
:math:`T_{\\textrm{new}}` is the new state of the variable. To
derive the temperature tendency of the diffusion process the adjustment
has to be calculated and multiplied by the timestep, which is done by
the :func:`~climlab.process.implicit.ImplicitProcess._compute()`
function of the :class:`~climlab.process.implicit.ImplicitProcess`
class.
This method calculates the matrix inversion for every state variable
and calls either :func:`_solve_implicit_banded()` or
:py:func:`numpy.linalg.solve()`, depending on the flag
``self.use_banded_solver``.
:ivar dict state: method uses current state variables
but does not modify them
:ivar bool use_banded_solver: input flag whether to use
:func:`_solve_implicit_banded()` or
:py:func:`numpy.linalg.solve()` to do
the matrix inversion
:ivar array _diffTriDiag: the diffusion matrix which is given
with the current state variable to
the method solving the matrix problem
|
[
"Invertes",
"and",
"solves",
"the",
"matrix",
"problem",
"for",
"diffusion",
"matrix",
"and",
"temperature",
"T",
"."
] |
eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6
|
https://github.com/brian-rose/climlab/blob/eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6/climlab/dynamics/diffusion.py#L143-L192
|
train
|
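The backward-Euler step documented above amounts to one linear solve per state variable. A self-contained sketch of that step for 1-D diffusion follows; the grid size, timestep, and diffusivity are illustrative, and A plays the role of _diffTriDiag:

import numpy as np

n, dt, K = 50, 0.1, 1.0
# Tridiagonal 1-D Laplacian (illustrative boundary treatment).
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), k=1)
     + np.diag(np.ones(n - 1), k=-1))
# Implicit diffusion: solve (I - dt * K * L) T_new = T_old
A = np.eye(n) - dt * K * L

T_old = 273.0 + np.random.rand(n)
T_new = np.linalg.solve(A, T_old)   # the dense solver branch
tendency = (T_new - T_old) / dt     # what the implicit process reports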
brian-rose/climlab
|
climlab/surface/albedo.py
|
P2Albedo._compute_fixed
|
def _compute_fixed(self):
'''Recompute any fixed quantities after a change in parameters'''
try:
lon, lat = np.meshgrid(self.lon, self.lat)
except:
lat = self.lat
phi = np.deg2rad(lat)
try:
albedo = self.a0 + self.a2 * P2(np.sin(phi))
except:
albedo = np.zeros_like(phi)
# make sure that the diagnostic has the correct field dimensions.
#dom = self.domains['default']
# this is a more robust way to get the single value from dictionary:
dom = next(iter(self.domains.values()))
self.albedo = Field(albedo, domain=dom)
|
python
|
def _compute_fixed(self):
'''Recompute any fixed quantities after a change in parameters'''
try:
lon, lat = np.meshgrid(self.lon, self.lat)
except:
lat = self.lat
phi = np.deg2rad(lat)
try:
albedo = self.a0 + self.a2 * P2(np.sin(phi))
except:
albedo = np.zeros_like(phi)
# make sure that the diagnostic has the correct field dimensions.
#dom = self.domains['default']
# this is a more robust way to get the single value from dictionary:
dom = next(iter(self.domains.values()))
self.albedo = Field(albedo, domain=dom)
|
[
"def",
"_compute_fixed",
"(",
"self",
")",
":",
"try",
":",
"lon",
",",
"lat",
"=",
"np",
".",
"meshgrid",
"(",
"self",
".",
"lon",
",",
"self",
".",
"lat",
")",
"except",
":",
"lat",
"=",
"self",
".",
"lat",
"phi",
"=",
"np",
".",
"deg2rad",
"(",
"lat",
")",
"try",
":",
"albedo",
"=",
"self",
".",
"a0",
"+",
"self",
".",
"a2",
"*",
"P2",
"(",
"np",
".",
"sin",
"(",
"phi",
")",
")",
"except",
":",
"albedo",
"=",
"np",
".",
"zeros_like",
"(",
"phi",
")",
"# make sure that the diagnostic has the correct field dimensions.",
"#dom = self.domains['default']",
"# this is a more robust way to get the single value from dictionary:",
"dom",
"=",
"next",
"(",
"iter",
"(",
"self",
".",
"domains",
".",
"values",
"(",
")",
")",
")",
"self",
".",
"albedo",
"=",
"Field",
"(",
"albedo",
",",
"domain",
"=",
"dom",
")"
] |
Recompute any fixed quantities after a change in parameters
|
[
"Recompute",
"any",
"fixed",
"quantities",
"after",
"a",
"change",
"in",
"parameters"
] |
eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6
|
https://github.com/brian-rose/climlab/blob/eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6/climlab/surface/albedo.py#L179-L194
|
train
|
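The albedo formula in _compute_fixed uses the second Legendre polynomial, P2(x) = (3x^2 - 1)/2, evaluated at the sine of latitude. A small sketch of the resulting profile; the coefficients a0 and a2 below are illustrative, not climlab's documented defaults:

import numpy as np

def P2(x):
    # Second Legendre polynomial.
    return 0.5 * (3.0 * x**2 - 1.0)

lat = np.linspace(-90.0, 90.0, 19)   # degrees
phi = np.deg2rad(lat)
a0, a2 = 0.33, 0.25                  # illustrative coefficients
albedo = a0 + a2 * P2(np.sin(phi))
# Minimum at the equator (a0 - a2/2), maximum at the poles (a0 + a2).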
brian-rose/climlab
|
climlab/surface/albedo.py
|
Iceline.find_icelines
|
def find_icelines(self):
"""Finds iceline according to the surface temperature.
This method is called by the private function
:func:`~climlab.surface.albedo.Iceline._compute`
and updates following attributes according to the freezing temperature
``self.param['Tf']`` and the surface temperature ``self.param['Ts']``:
**Object attributes** \n
:ivar Field noice: a Field of booleans which are ``True`` where
:math:`T_s \\ge T_f`
:ivar Field ice: a Field of booleans which are ``True`` where
:math:`T_s < T_f`
:ivar array icelat: an array with two elements indicating the
ice-edge latitudes
:ivar float ice_area: fractional area covered by ice (0 - 1)
:ivar dict diagnostics: keys ``'icelat'`` and ``'ice_area'`` are updated
"""
Tf = self.param['Tf']
Ts = self.state['Ts']
lat_bounds = self.domains['Ts'].axes['lat'].bounds
self.noice = np.where(Ts >= Tf, True, False)
self.ice = np.where(Ts < Tf, True, False)
# Ice cover in fractional area
self.ice_area = global_mean(self.ice * np.ones_like(self.Ts))
# Express ice cover in terms of ice edge latitudes
if self.ice.all():
# 100% ice cover
self.icelat = np.array([-0., 0.])
elif self.noice.all():
# zero ice cover
self.icelat = np.array([-90., 90.])
else: # there is some ice edge
# Taking np.diff of a boolean array gives True at the boundaries between True and False
boundary_indices = np.where(np.diff(self.ice.squeeze()))[0]+1
# check for asymmetry case: [-90,x] or [x,90]
# -> boundary_indices holds only one value for icelat
if boundary_indices.size == 1:
if self.ice[0] == True: # case: [x,90]
# extend index array by missing value for north pole
boundary_indices = np.append(boundary_indices, self.ice.size)
elif self.ice[-1] == True: # case: [-90,x]
# extend index array by missing value for south pole
boundary_indices = np.insert(boundary_indices, 0, 0)
# check for asymmetry case: [-90,x] or [x,90]
# -> boundary_indices holds only one value for icelat
if boundary_indices.size == 1:
if self.ice[0] == True: # case: [x,90]
# extend index array by missing value for north pole
boundary_indices = np.append(boundary_indices, self.ice.size)
elif self.ice[-1] == True: # case: [-90,x]
# extend index array by missing value for south pole
boundary_indices = np.insert(boundary_indices, 0, 0)
self.icelat = lat_bounds[boundary_indices]
|
python
|
def find_icelines(self):
"""Finds iceline according to the surface temperature.
This method is called by the private function
:func:`~climlab.surface.albedo.Iceline._compute`
and updates following attributes according to the freezing temperature
``self.param['Tf']`` and the surface temperature ``self.param['Ts']``:
**Object attributes** \n
:ivar Field noice: a Field of booleans which are ``True`` where
:math:`T_s \\ge T_f`
:ivar Field ice: a Field of booleans which are ``True`` where
:math:`T_s < T_f`
:ivar array icelat: an array with two elements indicating the
ice-edge latitudes
:ivar float ice_area: fractional area covered by ice (0 - 1)
:ivar dict diagnostics: keys ``'icelat'`` and ``'ice_area'`` are updated
"""
Tf = self.param['Tf']
Ts = self.state['Ts']
lat_bounds = self.domains['Ts'].axes['lat'].bounds
self.noice = np.where(Ts >= Tf, True, False)
self.ice = np.where(Ts < Tf, True, False)
# Ice cover in fractional area
self.ice_area = global_mean(self.ice * np.ones_like(self.Ts))
# Express ice cover in terms of ice edge latitudes
if self.ice.all():
# 100% ice cover
self.icelat = np.array([-0., 0.])
elif self.noice.all():
# zero ice cover
self.icelat = np.array([-90., 90.])
else: # there is some ice edge
# Taking np.diff of a boolean array gives True at the boundaries between True and False
boundary_indices = np.where(np.diff(self.ice.squeeze()))[0]+1
# check for asymmetry case: [-90,x] or [x,90]
# -> boundary_indices holds only one value for icelat
if boundary_indices.size == 1:
if self.ice[0] == True: # case: [x,90]
# extend index array by missing value for north pole
boundary_indices = np.append(boundary_indices, self.ice.size)
elif self.ice[-1] == True: # case: [-90,x]
# extend index array by missing value for south pole
boundary_indices = np.insert(boundary_indices, 0, 0)
# check for asymmetry case: [-90,x] or [x,90]
# -> boundary_indices holds only one value for icelat
if boundary_indices.size == 1:
if self.ice[0] == True: # case: [x,90]
# extend index array by missing value for north pole
boundary_indices = np.append(boundary_indices, self.ice.size)
elif self.ice[-1] == True: # case: [-90,x]
# extend index array by missing value for south pole
boundary_indices = np.insert(boundary_indices, 0, 0)
self.icelat = lat_bounds[boundary_indices]
|
[
"def",
"find_icelines",
"(",
"self",
")",
":",
"Tf",
"=",
"self",
".",
"param",
"[",
"'Tf'",
"]",
"Ts",
"=",
"self",
".",
"state",
"[",
"'Ts'",
"]",
"lat_bounds",
"=",
"self",
".",
"domains",
"[",
"'Ts'",
"]",
".",
"axes",
"[",
"'lat'",
"]",
".",
"bounds",
"self",
".",
"noice",
"=",
"np",
".",
"where",
"(",
"Ts",
">=",
"Tf",
",",
"True",
",",
"False",
")",
"self",
".",
"ice",
"=",
"np",
".",
"where",
"(",
"Ts",
"<",
"Tf",
",",
"True",
",",
"False",
")",
"# Ice cover in fractional area",
"self",
".",
"ice_area",
"=",
"global_mean",
"(",
"self",
".",
"ice",
"*",
"np",
".",
"ones_like",
"(",
"self",
".",
"Ts",
")",
")",
"# Express ice cover in terms of ice edge latitudes",
"if",
"self",
".",
"ice",
".",
"all",
"(",
")",
":",
"# 100% ice cover",
"self",
".",
"icelat",
"=",
"np",
".",
"array",
"(",
"[",
"-",
"0.",
",",
"0.",
"]",
")",
"elif",
"self",
".",
"noice",
".",
"all",
"(",
")",
":",
"# zero ice cover",
"self",
".",
"icelat",
"=",
"np",
".",
"array",
"(",
"[",
"-",
"90.",
",",
"90.",
"]",
")",
"else",
":",
"# there is some ice edge",
"# Taking np.diff of a boolean array gives True at the boundaries between True and False",
"boundary_indices",
"=",
"np",
".",
"where",
"(",
"np",
".",
"diff",
"(",
"self",
".",
"ice",
".",
"squeeze",
"(",
")",
")",
")",
"[",
"0",
"]",
"+",
"1",
"# check for asymmetry case: [-90,x] or [x,90]",
"# -> boundary_indices hold only one value for icelat",
"if",
"boundary_indices",
".",
"size",
"==",
"1",
":",
"if",
"self",
".",
"ice",
"[",
"0",
"]",
"==",
"True",
":",
"# case: [x,90]",
"# extend indice array by missing value for northpole",
"boundary_indices",
"=",
"np",
".",
"append",
"(",
"boundary_indices",
",",
"self",
".",
"ice",
".",
"size",
")",
"elif",
"self",
".",
"ice",
"[",
"-",
"1",
"]",
"==",
"True",
":",
"# case: [-90,x]",
"# extend indice array by missing value for northpole",
"boundary_indices",
"=",
"np",
".",
"insert",
"(",
"boundary_indices",
",",
"0",
",",
"0",
")",
"# check for asymmetry case: [-90,x] or [x,90]",
"# -> boundary_indices hold only one value for icelat",
"if",
"boundary_indices",
".",
"size",
"==",
"1",
":",
"if",
"self",
".",
"ice",
"[",
"0",
"]",
"==",
"True",
":",
"# case: [x,90]",
"# extend indice array by missing value for northpole",
"boundary_indices",
"=",
"np",
".",
"append",
"(",
"boundary_indices",
",",
"self",
".",
"ice",
".",
"size",
")",
"elif",
"self",
".",
"ice",
"[",
"-",
"1",
"]",
"==",
"True",
":",
"# case: [-90,x]",
"# extend indice array by missing value for northpole",
"boundary_indices",
"=",
"np",
".",
"insert",
"(",
"boundary_indices",
",",
"0",
",",
"0",
")",
"self",
".",
"icelat",
"=",
"lat_bounds",
"[",
"boundary_indices",
"]"
] |
Finds the iceline according to the surface temperature.
This method is called by the private function
:func:`~climlab.surface.albedo.Iceline._compute`
and updates the following attributes according to the freezing temperature
``self.param['Tf']`` and the surface temperature ``self.state['Ts']``:
**Object attributes** \n
:ivar Field noice: a Field of booleans which are ``True`` where
:math:`T_s \\ge T_f`
:ivar Field ice: a Field of booleans which are ``True`` where
:math:`T_s < T_f`
:ivar array icelat: an array with two elements indicating the
ice-edge latitudes
:ivar float ice_area: fractional area covered by ice (0 - 1)
:ivar dict diagnostics: keys ``'icelat'`` and ``'ice_area'`` are updated
|
[
"Finds",
"iceline",
"according",
"to",
"the",
"surface",
"temperature",
"."
] |
eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6
|
https://github.com/brian-rose/climlab/blob/eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6/climlab/surface/albedo.py#L236-L291
|
train
|
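The core trick in find_icelines is that np.diff of a boolean mask is True exactly where the mask flips, so adding one converts those positions to cell-boundary indices. A standalone demonstration with an invented temperature profile (only the two polemost bands fall below Tf here):

import numpy as np

lat = np.linspace(-85.0, 85.0, 18)         # band centers, degrees
lat_bounds = np.linspace(-90.0, 90.0, 19)  # band edges
Tf = -10.0
Ts = 25.0 * np.cos(np.deg2rad(lat)) - 15.0

ice = Ts < Tf
# np.diff flags each True/False transition; +1 maps a transition
# between cells i and i+1 to the shared boundary index i+1.
boundary_indices = np.where(np.diff(ice))[0] + 1
print(lat_bounds[boundary_indices])        # [-80.  80.]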
brian-rose/climlab
|
climlab/surface/albedo.py
|
StepFunctionAlbedo._get_current_albedo
|
def _get_current_albedo(self):
'''Simple step-function albedo based on ice line at temperature Tf.'''
ice = self.subprocess['iceline'].ice
# noice = self.subprocess['iceline'].diagnostics['noice']
cold_albedo = self.subprocess['cold_albedo'].albedo
warm_albedo = self.subprocess['warm_albedo'].albedo
albedo = Field(np.where(ice, cold_albedo, warm_albedo), domain=self.domains['Ts'])
return albedo
|
python
|
def _get_current_albedo(self):
'''Simple step-function albedo based on ice line at temperature Tf.'''
ice = self.subprocess['iceline'].ice
# noice = self.subprocess['iceline'].diagnostics['noice']
cold_albedo = self.subprocess['cold_albedo'].albedo
warm_albedo = self.subprocess['warm_albedo'].albedo
albedo = Field(np.where(ice, cold_albedo, warm_albedo), domain=self.domains['Ts'])
return albedo
|
[
"def",
"_get_current_albedo",
"(",
"self",
")",
":",
"ice",
"=",
"self",
".",
"subprocess",
"[",
"'iceline'",
"]",
".",
"ice",
"# noice = self.subprocess['iceline'].diagnostics['noice']",
"cold_albedo",
"=",
"self",
".",
"subprocess",
"[",
"'cold_albedo'",
"]",
".",
"albedo",
"warm_albedo",
"=",
"self",
".",
"subprocess",
"[",
"'warm_albedo'",
"]",
".",
"albedo",
"albedo",
"=",
"Field",
"(",
"np",
".",
"where",
"(",
"ice",
",",
"cold_albedo",
",",
"warm_albedo",
")",
",",
"domain",
"=",
"self",
".",
"domains",
"[",
"'Ts'",
"]",
")",
"return",
"albedo"
] |
Simple step-function albedo based on ice line at temperature Tf.
|
[
"Simple",
"step",
"-",
"function",
"albedo",
"based",
"on",
"ice",
"line",
"at",
"temperature",
"Tf",
"."
] |
eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6
|
https://github.com/brian-rose/climlab/blob/eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6/climlab/surface/albedo.py#L374-L381
|
train
|
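The step-function merge above is a single np.where: the cold albedo wherever the ice mask is True, the warm albedo elsewhere. A tiny sketch with invented mask and albedo values:

import numpy as np

ice = np.array([True, True, False, False, False, True, True])
cold_albedo, warm_albedo = 0.62, 0.30   # illustrative values
albedo = np.where(ice, cold_albedo, warm_albedo)
print(albedo)   # [0.62 0.62 0.3  0.3  0.3  0.62 0.62]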
brian-rose/climlab
|
climlab/process/process.py
|
process_like
|
def process_like(proc):
"""Make an exact clone of a process, including state and all subprocesses.
The creation date is updated.
:param proc: process
:type proc: :class:`~climlab.process.process.Process`
:return: new process identical to the given process
:rtype: :class:`~climlab.process.process.Process`
:Example:
::
>>> import climlab
>>> from climlab.process.process import process_like
>>> model = climlab.EBM()
>>> model.subprocess.keys()
['diffusion', 'LW', 'albedo', 'insolation']
>>> albedo = model.subprocess['albedo']
>>> albedo_copy = process_like(albedo)
>>> albedo.creation_date
'Thu, 24 Mar 2016 01:32:25 +0000'
>>> albedo_copy.creation_date
'Thu, 24 Mar 2016 01:33:29 +0000'
"""
newproc = copy.deepcopy(proc)
newproc.creation_date = time.strftime("%a, %d %b %Y %H:%M:%S %z",
time.localtime())
return newproc
|
python
|
def process_like(proc):
"""Make an exact clone of a process, including state and all subprocesses.
The creation date is updated.
:param proc: process
:type proc: :class:`~climlab.process.process.Process`
:return: new process identical to the given process
:rtype: :class:`~climlab.process.process.Process`
:Example:
::
>>> import climlab
>>> from climlab.process.process import process_like
>>> model = climlab.EBM()
>>> model.subprocess.keys()
['diffusion', 'LW', 'albedo', 'insolation']
>>> albedo = model.subprocess['albedo']
>>> albedo_copy = process_like(albedo)
>>> albedo.creation_date
'Thu, 24 Mar 2016 01:32:25 +0000'
>>> albedo_copy.creation_date
'Thu, 24 Mar 2016 01:33:29 +0000'
"""
newproc = copy.deepcopy(proc)
newproc.creation_date = time.strftime("%a, %d %b %Y %H:%M:%S %z",
time.localtime())
return newproc
|
[
"def",
"process_like",
"(",
"proc",
")",
":",
"newproc",
"=",
"copy",
".",
"deepcopy",
"(",
"proc",
")",
"newproc",
".",
"creation_date",
"=",
"time",
".",
"strftime",
"(",
"\"%a, %d %b %Y %H:%M:%S %z\"",
",",
"time",
".",
"localtime",
"(",
")",
")",
"return",
"newproc"
] |
Make an exact clone of a process, including state and all subprocesses.
The creation date is updated.
:param proc: process
:type proc: :class:`~climlab.process.process.Process`
:return: new process identical to the given process
:rtype: :class:`~climlab.process.process.Process`
:Example:
::
>>> import climlab
>>> from climlab.process.process import process_like
>>> model = climlab.EBM()
>>> model.subprocess.keys()
['diffusion', 'LW', 'albedo', 'insolation']
>>> albedo = model.subprocess['albedo']
>>> albedo_copy = process_like(albedo)
>>> albedo.creation_date
'Thu, 24 Mar 2016 01:32:25 +0000'
>>> albedo_copy.creation_date
'Thu, 24 Mar 2016 01:33:29 +0000'
|
[
"Make",
"an",
"exact",
"clone",
"of",
"a",
"process",
"including",
"state",
"and",
"all",
"subprocesses",
"."
] |
eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6
|
https://github.com/brian-rose/climlab/blob/eae188a2ae9308229b8cbb8fe0b65f51b50ee1e6/climlab/process/process.py#L783-L817
|
train
|
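The clone-and-restamp pattern in process_like can be shown without climlab at all: copy.deepcopy makes the state (and any nested subprocesses) independent, and the copy gets a fresh creation date. A minimal sketch with a hypothetical stand-in class:

import copy
import time

class TinyProcess:
    # Hypothetical stand-in for a climlab Process.
    def __init__(self):
        self.state = {'Ts': [288.0]}
        self.creation_date = time.strftime('%a, %d %b %Y %H:%M:%S %z',
                                           time.localtime())

def clone(proc):
    new = copy.deepcopy(proc)   # state becomes fully independent
    new.creation_date = time.strftime('%a, %d %b %Y %H:%M:%S %z',
                                      time.localtime())
    return new

original = TinyProcess()
duplicate = clone(original)
duplicate.state['Ts'][0] = 300.0
print(original.state['Ts'])     # [288.0] -- unchanged by edits to the clone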